WorldWideScience

Sample records for scientometric benchmarking procedures

  1. Tobacco Control: Visualisation of Research Activity Using Density-Equalizing Mapping and Scientometric Benchmarking Procedures

    Directory of Open Access Journals (Sweden)

    Beatrix Groneberg-Kloft

    2009-06-01

    Full Text Available Background: Tobacco smoking continues to be a major preventable cause of death and disease, and tobacco control research is therefore extremely important. However, research in this area is often hampered by a lack of funding, and there is a need for scientometric techniques to display research efforts. Methods: The present study combines classical bibliometric tools with novel scientometric and visualizing techniques in order to analyse and categorise research in the field of tobacco control. Results: All studies related to tobacco control and listed in the ISI database since 1900 were identified by the use of defined search terms. Using bibliometric approaches, a continuous increase in qualitative markers such as collaboration numbers or citations was found for tobacco control research. The combination with density-equalizing mapping revealed a distinct global pattern of research productivity and citation activity. Radar chart techniques were used to visualize bi- and multilateral research cooperation and institutional cooperation. Conclusions: The present study supplies a first scientometric approach that visualises research activity in the field of tobacco control. It provides data that can be used for funding policy and the identification of research clusters.

  2. Drowning--a scientometric analysis and data acquisition of a constant global problem employing density equalizing mapping and scientometric benchmarking procedures.

    Science.gov (United States)

    Groneberg, David A; Schilling, Ute; Scutaru, Cristian; Uibel, Stefanie; Zitnik, Simona; Mueller, Daniel; Klingelhoefer, Doris; Kloft, Beatrix

    2011-10-14

    Drowning is a constant global problem which claims approximately half a million victims worldwide each year, while the number of near-drowning victims is considerably higher. Public health strategies to reduce the burden of death are still limited. Although research activities on the subject of drowning grow constantly, there is at present no scientometric evaluation of the existing literature. The current study uses classical bibliometric tools and visualizing techniques such as density-equalizing mapping to analyse and evaluate scientific research in the field of drowning. The results are also interpreted in the context of the WHO's data collection. All studies related to drowning and listed in the ISI Web of Science database since 1900 were identified using the search term "drowning". Applying bibliometric methods, a constant increase in quantitative markers such as the number of publications per state, publication language or collaborations, as well as in qualitative markers such as citations, was observed for research in the field of drowning. The combination with density-equalizing mapping exposed different global patterns for research productivity on the one hand and the total number of drowning deaths and drowning rates on the other. Chart techniques were used to illustrate bi- and multilateral research cooperation. The present study provides the first scientometric approach that visualizes research activity on the subject of drowning. It can be assumed that scientific work on this topic will grow further, given its continuing relevance.

  3. Drowning - a scientometric analysis and data acquisition of a constant global problem employing density equalizing mapping and scientometric benchmarking procedures

    Science.gov (United States)

    2011-01-01

    Background Drowning is a constant global problem which claims approximately half a million victims worldwide each year, while the number of near-drowning victims is considerably higher. Public health strategies to reduce the burden of death are still limited. Although research activities on the subject of drowning grow constantly, there is at present no scientometric evaluation of the existing literature. Methods The current study uses classical bibliometric tools and visualizing techniques such as density-equalizing mapping to analyse and evaluate scientific research in the field of drowning. The results are also interpreted in the context of the WHO's data collection. Results All studies related to drowning and listed in the ISI Web of Science database since 1900 were identified using the search term "drowning". Applying bibliometric methods, a constant increase in quantitative markers such as the number of publications per state, publication language or collaborations, as well as in qualitative markers such as citations, was observed for research in the field of drowning. The combination with density-equalizing mapping exposed different global patterns for research productivity on the one hand and the total number of drowning deaths and drowning rates on the other. Chart techniques were used to illustrate bi- and multilateral research cooperation. Conclusions The present study provides the first scientometric approach that visualizes research activity on the subject of drowning. It can be assumed that scientific work on this topic will grow further, given its continuing relevance. PMID:21999813

  4. Drowning - a scientometric analysis and data acquisition of a constant global problem employing density equalizing mapping and scientometric benchmarking procedures

    Directory of Open Access Journals (Sweden)

    Groneberg David A

    2011-10-01

    Full Text Available Abstract Background Drowning is a constant global problem which claims approximately half a million victims worldwide each year, while the number of near-drowning victims is considerably higher. Public health strategies to reduce the burden of death are still limited. Although research activities on the subject of drowning grow constantly, there is at present no scientometric evaluation of the existing literature. Methods The current study uses classical bibliometric tools and visualizing techniques such as density-equalizing mapping to analyse and evaluate scientific research in the field of drowning. The results are also interpreted in the context of the WHO's data collection. Results All studies related to drowning and listed in the ISI Web of Science database since 1900 were identified using the search term "drowning". Applying bibliometric methods, a constant increase in quantitative markers such as the number of publications per state, publication language or collaborations, as well as in qualitative markers such as citations, was observed for research in the field of drowning. The combination with density-equalizing mapping exposed different global patterns for research productivity on the one hand and the total number of drowning deaths and drowning rates on the other. Chart techniques were used to illustrate bi- and multilateral research cooperation. Conclusions The present study provides the first scientometric approach that visualizes research activity on the subject of drowning. It can be assumed that scientific work on this topic will grow further, given its continuing relevance.

  5. Scientometrics

    NARCIS (Netherlands)

    Leydesdorff, L.; Milojević, S.; Wright, J.D.

    2015-01-01

    This article provides an overview of the field of scientometrics, that is, the study of science, technology, and innovation from a quantitative perspective. We cover major historical milestones in the development of this specialism from the 1960s to today and discuss its relationship with the

  6. The evaluation of research by scientometric indicators

    CERN Document Server

    Vinkler, Peter

    2010-01-01

    Aimed at academics, academic managers and administrators, professionals in scientometrics, information scientists and science policy makers at all levels, this book reviews the principles, methods and indicators of the scientometric evaluation of information processes in science and the assessment of the publication activity of individuals, teams, institutes and countries. It provides scientists, science officers, librarians and students with basic and advanced knowledge of evaluative scientometrics. Particular stress is laid on methods applicable in practice and on clarifying the quantitative aspects of the impact of scientific publications as measured by citation indicators.

  7. USING THE INTERNATIONAL SCIENTOMETRIC DATABASES OF OPEN ACCESS IN SCIENTIFIC RESEARCH

    Directory of Open Access Journals (Sweden)

    O. Galchevska

    2015-05-01

    Full Text Available The article considers the use of international scientometric databases in research activity as web-oriented resources and services that serve as means of publishing and disseminating research results. Selection criteria for open-access scientometric platforms for conducting scientific research are outlined: coverage of Ukrainian scientific periodicals and publications, data accuracy, general characteristics of the international scientometric database, and its technical and functional characteristics and indexes. The most popular open-access scientometric databases are reviewed: Google Scholar, the Russian Science Citation Index (RSCI), Scholarometer, Index Copernicus (IC) and Microsoft Academic Search. The advantages of using the international scientometric database Google Scholar in scientific research are determined, and prospects for further research into the system's cloud-based information and analytical services are identified.

  8. The Role and Situation of the Scientometrics in Development

    Directory of Open Access Journals (Sweden)

    Abdolreza Noroozi Chakoli

    2012-07-01

    Full Text Available The measurement and evaluation of science, which scientometrics pursues, has long been of worldwide interest, since science is assumed to contribute to the health and welfare of the planet's inhabitants. The results of research can affect economic, social, political, scientific and cultural foundations. Scientometric research is therefore attractive to scientific and research communities that set distant horizons for themselves. This article concisely introduces the dimensions of scientometrics, discusses the effects of such research on economic, social, political, scientific and cultural development in various countries, and emphasizes its effects on the services of libraries and information centers. Based on a library method and an analytical approach, the paper presents the place of scientometrics in science policy processes and states its role in the process of societal development.

  9. The 13th international conference on scientometrics and informetrics

    DEFF Research Database (Denmark)

    Ocholla, Dennis; Ingwersen, Peter; Noyons, Ed

    2012-01-01

    The 13th International Conference on Scientometrics and Informetrics took place in Durban, South Africa, from 4 to 7 July 2011 (Ocholla and Ingwersen, 2011). The meeting was organised under the auspices of the International Society for Scientometrics and Informetrics (ISSI) and by the ISSI 2011...

  10. Scientometric and Webometric methods

    DEFF Research Database (Denmark)

    Ingwersen, Peter

    2010-01-01

    The paper presents two fundamental models of scientific communication and characterizes and exemplifies the concept of ‘Scientometrics’ and its sub-research areas: publication analysis, including so-called publication point evaluation; citation analysis; and crown indicators for research evaluation...

  11. Co-word maps of biotechnology: an example of cognitive scientometrics

    NARCIS (Netherlands)

    Rip, Arie; Courtial, J.-P.

    1984-01-01

    To analyse developments of scientific fields, scientometrics provides useful tools, provided one is prepared to take the content of scientific articles into account. Such cognitive scientometrics is illustrated by using as data a ten-year period of articles from a biotechnology core journal. After

  12. Scientometrics of Forest Health and Tree Diseases: An Overview

    Directory of Open Access Journals (Sweden)

    Marco Pautasso

    2016-01-01

    Full Text Available Maintaining forest health is a worldwide challenge due to emerging tree diseases, shifts in climate conditions and other global change stressors. Research on forest health is thus accumulating rapidly, but there has been little use of scientometric approaches in forest pathology and dendrology. Scientometrics is the quantitative study of trends in the scientific literature. As with all tools, scientometrics needs to be used carefully (e.g. by checking findings in multiple databases) and its results must be interpreted with caution. In this overview, we provide some examples of studies of patterns in the scientific literature related to forest health and tree pathogens. Whilst research on ash dieback has increased rapidly in recent years, papers mentioning the Waldsterben have become rare in the literature. As with human health and diseases, but in contrast to plant health and diseases, there are consistently more publications mentioning “tree health” than “tree disease”, possibly a consequence of the often holistic nature of forest pathology. Scientometric tools can help balance research attention towards understudied emerging risks to forest trees, as well as identify temporal trends in public interest in forests and their health.

  13. APPLIED SCIENTOMETRICS: ELIBRARY.RU VS GOOGLE

    Directory of Open Access Journals (Sweden)

    А В Юрков

    2015-12-01

    Full Text Available The article discusses practical issues in finding reliable information about the publication activity of Russian scientists. The examples in [1] show that solving this problem effectively requires using different scientometric services: both the domestic eLIBRARY.RU with its Russian Science Citation Index, and Google Scholar as an alternative. At the time that work was published, the comparison did not favor the domestic resource. However, thanks to the RSCI project, eLibrary's tools for building a database of scientific publications have grown significantly within a short period, and real opportunities to improve the quality of scientometric information have emerged. The article gives examples.

  14. Scientometrics in a changing research landscape

    NARCIS (Netherlands)

    Bornmann, L.; Leydesdorff, L.

    2014-01-01

    Bibliometrics has become an integral component of quality assessment for science and funding decisions. The next challenge for scientometrics is to develop similarly reliable indicators for the social impact of research.

  15. Mapping the Interdisciplinarity in Scientometric Studies

    Directory of Open Access Journals (Sweden)

    Mahmood Khosrowjerdi

    2013-03-01

    The data were extracted from Web of Science (WoS). The results showed that scientometric studies are part of interdisciplinary studies. Furthermore, library and information science and computer science made the major contributions to this field.

  16. Selected critical examples of scientometric publication analysis

    DEFF Research Database (Denmark)

    Ingwersen, Peter

    2014-01-01

    Objective: This paper selects and outlines factors of central importance in the calculation, presentation and interpretation of publication analysis results from a scientometric perspective. The paper focuses on growth, world share analyses and the logic behind the computation of average numbers of authors, institutions or countries per publication indexed by Web of Science. Methodology: The paper uses examples from earlier research evaluation studies and cases based on online data to describe issues, problematic details, pitfalls and how to overcome them in publication analysis with respect to analytic tool application, calculation, presentation and interpretation. Results: By means of different kinds of analysis and presentation, the paper provides insight into scientometrics in the context of informetric analysis, selected cases of research productivity, publication patterns and research...
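The world-share and authors-per-publication computations this record refers to are arithmetically simple but easy to get wrong under whole counting, where an internationally co-authored paper counts once for every participating country. A minimal stdlib sketch, using invented counts for illustration only:

```python
# Invented publication counts per country (whole counting: a co-authored
# paper counts once for each participating country, so country counts
# can sum to more than the number of distinct papers).
pubs = {"US": 420, "DK": 35, "DE": 120, "Other": 425}
world_total = 900  # invented number of distinct papers indexed

# World shares under whole counting; their sum may exceed 100%.
shares = {country: 100.0 * n / world_total for country, n in pubs.items()}

# Average number of authors per publication for an invented sample:
authors_per_paper = [1, 3, 2, 5, 4]
avg_authors = sum(authors_per_paper) / len(authors_per_paper)
```

The sum of `shares` here is about 111%, which illustrates why whole-counting shares must not be presented as a partition of world output.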

  17. Use of scientometrics to assess nuclear and other analytical methods

    International Nuclear Information System (INIS)

    Lyon, W.S.

    1986-01-01

    Scientometrics involves the use of quantitative methods to investigate science viewed as an information process. Scientometric studies can be useful in ascertaining which methods have been most employed for various analytical determinations, as well as in predicting which methods will continue to be used in the immediate future and which appear to be losing favor with the analytical community. Published papers in the technical literature are the primary source materials for scientometric studies; statistical methods and computer techniques are the tools. Recent studies have included growth and trends in prompt nuclear analysis, the impact of research published in a technical journal, and the institutional and national representation of speakers and topics at several IAEA conferences, at Modern Trends in Activation Analysis conferences, and at other non-nuclear-oriented conferences. Attempts have also been made to predict the future growth of various topics and techniques. 13 refs., 4 figs., 17 tabs

  18. Use of scientometrics to assess nuclear and other analytical methods

    Energy Technology Data Exchange (ETDEWEB)

    Lyon, W.S.

    1986-01-01

    Scientometrics involves the use of quantitative methods to investigate science viewed as an information process. Scientometric studies can be useful in ascertaining which methods have been most employed for various analytical determinations, as well as in predicting which methods will continue to be used in the immediate future and which appear to be losing favor with the analytical community. Published papers in the technical literature are the primary source materials for scientometric studies; statistical methods and computer techniques are the tools. Recent studies have included growth and trends in prompt nuclear analysis, the impact of research published in a technical journal, and the institutional and national representation of speakers and topics at several IAEA conferences, at Modern Trends in Activation Analysis conferences, and at other non-nuclear-oriented conferences. Attempts have also been made to predict the future growth of various topics and techniques. 13 refs., 4 figs., 17 tabs.

  19. Scholia, Scientometrics and Wikidata

    DEFF Research Database (Denmark)

    Nielsen, Finn; Mietchen, Daniel; Willighagen, Egon

    2017-01-01

    Scholia is a tool to handle scientific bibliographic information through Wikidata. The Scholia Web service creates on-the-fly scholarly profiles for researchers, organizations, journals, publishers, individual scholarly works, and for research topics. To collect the data, it queries the SPARQL-ba...... service is also able to format Wikidata bibliographic entries for use in LaTeX/BIBTeX. Apart from detailing Scholia, we describe how Wikidata has been used for bibliographic information and we also provide some scientometric statistics on this information....

  20. Curare--a curative poison: a scientometric analysis.

    Directory of Open Access Journals (Sweden)

    Jil Carl

    Full Text Available INTRODUCTION: Curare is one of the best-examined neurotoxins in the world and has been used empirically for centuries by Indigenous Americans. Scientific research on curare began much later, and a global scientometric analysis of research on curare or its derivatives does not yet exist. This bibliometric analysis is part of the global NewQis project and illuminates both toxicological and historical aspects of research on curare. METHODS: The ISI Web of Science was searched for data covering 1900 to 2013, using a term that included as many original articles on curare as possible. 3,867 articles were found and analyzed for common bibliometric items such as the number of citations, the language of the articles and the (modified) Hirsch index (h-index). Results are illustrated using modern density-equalizing map projections (DEMP) and beam diagrams. RESULTS: Most publications were located in North America and Europe. The USA has the highest number of publications as well as the highest h-index. The overall number of publications rose until the late 1990s and decreased thereafter. Furthermore, sudden increases in research activity are ascribable to historic events, such as the first use of curare as a muscle relaxant during surgery. DISCUSSION: This scientometric analysis of curare research reflects several tendencies previously seen in other bibliometric investigations, i.e. the scientific quality standard of North America and Europe. Research on curare decreased, however, owing to declining attention towards this muscle relaxant. This work also exemplifies how scientometric methods can be used to illuminate historic circumstances that immediately stimulate scientific research.
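The Hirsch index mentioned in this record has a simple definition: the largest h such that at least h of a unit's papers have at least h citations each. A minimal sketch in Python (the citation counts are invented for illustration; the "modified" variant used in the paper may differ in detail):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Invented citation counts for illustration only:
print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # -> 3
print(h_index([0, 0]))            # -> 0
```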

  1. A benchmarking procedure for PIGE related differential cross-sections

    Science.gov (United States)

    Axiotis, M.; Lagoyannis, A.; Fazinić, S.; Harissopulos, S.; Kokkoris, M.; Preketes-Sigalas, K.; Provatas, G.

    2018-05-01

    The application of standard-less PIGE requires a priori knowledge of the differential cross section of the reaction used for the quantification of each detected light element. Towards this end, many datasets have been published in the last few years by several laboratories around the world. The discrepancies often found between different measured cross sections can be resolved by applying a rigorous benchmarking procedure based on the measurement of thick-target yields. Such a procedure is proposed in the present paper and is applied in the case of the ¹⁹F(p,p′γ)¹⁹F reaction.
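The thick-target yield underlying such a benchmark is, schematically, the integral of the cross section over the beam's energy-loss path, Y(E₀) ∝ ∫ σ(E)/S(E) dE from ~0 to the beam energy E₀, where S(E) is the stopping power. A numerical sketch with invented stand-in functions for σ(E) and S(E) (a real benchmark would use evaluated cross-section data and tabulated stopping powers, with proper units and detector efficiency):

```python
import math

def thick_target_yield(sigma, stopping_power, e_beam, n_steps=1000):
    """Schematic thick-target yield: trapezoidal integral of sigma(E)/S(E)
    from just above 0 up to the beam energy. Units are illustrative only."""
    e_min = 1e-3  # avoid E = 0 where the invented S(E) diverges
    h = (e_beam - e_min) / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        e = e_min + i * h
        f = sigma(e) / stopping_power(e)
        weight = 0.5 if i in (0, n_steps) else 1.0  # trapezoid end weights
        total += weight * f
    return total * h

# Invented, smooth stand-ins for the real quantities:
sigma = lambda e: 5.0 * e * e / (1.0 + e * e)  # cross section: rises, saturates
stopping = lambda e: 10.0 / math.sqrt(e)       # stopping power: falls with E

y1 = thick_target_yield(sigma, stopping, e_beam=1.5)
y2 = thick_target_yield(sigma, stopping, e_beam=2.0)
# The yield grows monotonically with beam energy, since the integrand is positive.
```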

  2. Scientometrics and information retrieval: weak-links revitalized

    NARCIS (Netherlands)

    Mayr, Philipp; Scharnhorst, Andrea

    This special issue brings together eight papers from experts of communities which have often been perceived as different ones: bibliometrics, scientometrics and informetrics on the one side and information retrieval on the other. The idea for this special issue started at the workshop ‘‘Combining

  3. Qualitative conditions of scientometrics: the new challenges

    NARCIS (Netherlands)

    Rip, Arie

    1997-01-01

    While scientometrics is now an established field, there are challenges. A closer look at how scientometricians aggregate building blocks into artfully made products, and point-represent these (e.g. as the map of field X), allows one to overcome the dependence on judgements of scientists for

  4. Scientometric analysis and mapping of scientific articles on Behcet's disease.

    Science.gov (United States)

    Shahram, Farhad; Jamshidi, Ahmad-Reza; Hirbod-Mobarakeh, Armin; Habibi, Gholamreza; Mardani, Amir; Ghaemi, Marjan

    2013-04-01

    Behçet's disease (BD) is a systemic vasculitis with oral and genital aphthous ulceration, uveitis, skin manifestations, arthritis and neurological involvement. Many investigators have published articles on BD in the two decades since the introduction of diagnostic criteria by the International Study Group for Behçet's Disease in 1990. However, no scientometric analysis is available for this increasing body of literature. A scientometric analysis was used to obtain a view of the scientific articles on BD published between 1990 and 2010, with data retrieved from the ISI Web of Science. Features such as publication year, language of the article, geographical distribution, the main journals in the field, institutional affiliation and citation characteristics were retrieved and analyzed. International collaboration was analyzed using the Intcoll and Pajek software packages. There was a growing trend in the number of BD articles from 1990 to 2010. The number of citations to the BD literature also increased around 5.5-fold in this period. The countries with the highest output were Turkey, Japan, the USA and England; the top two universities were from Turkey. Most of the top 10 journals publishing BD articles were in the field of rheumatology, consistent with the subject areas of the articles. There was a correlation between citations per paper and the impact factor of the publishing journal. This is the first scientometric analysis of BD, showing the scientometric characteristics of ISI publications on BD.

  5. Strategic intelligence on emerging technologies: Scientometric overlay mapping

    NARCIS (Netherlands)

    Rotolo, D.; Rafols, I.; Hopkins, M.M.; Leydesdorff, L.

    This paper examines the use of scientometric overlay mapping as a tool of “strategic intelligence” to aid the governing of emerging technologies. We develop an integrative synthesis of different overlay mapping techniques and associated perspectives on technological emergence across geographical,

  6. The Vocational Guidance Research Database: A Scientometric Approach

    Science.gov (United States)

    Flores-Buils, Raquel; Gil-Beltran, Jose Manuel; Caballer-Miedes, Antonio; Martinez-Martinez, Miguel Angel

    2012-01-01

    The scientometric study of scientific output through publications in specialized journals cannot be undertaken exclusively with the databases available today. For this reason, the objective of this article is to introduce the "Base de Datos de Investigacion en Orientacion Vocacional" [Vocational Guidance Research Database], based on the…

  7. A review of theory and practice in scientometrics

    NARCIS (Netherlands)

    Mingers, J.; Leydesdorff, L.

    2015-01-01

    Scientometrics is the study of the quantitative aspects of the process of science as a communication system. It is centrally, but not only, concerned with the analysis of citations in the academic literature. In recent years it has come to play a major role in the measurement and evaluation of

  8. Internet of Things: A Scientometric Review

    Directory of Open Access Journals (Sweden)

    Juan Ruiz-Rosero

    2017-12-01

    Full Text Available The Internet of Things (IoT) is connecting billions of devices to the Internet. These IoT devices chain sensing, computation and communication techniques, which facilitates remote data collection and analysis. Wireless sensor networks (WSNs) connect sensing devices together on a local network, thereby eliminating wires; they generate large numbers of samples, creating a big-data challenge. The IoT paradigm has gained traction in recent years, yielding extensive research from an increasing variety of perspectives, including scientific reviews. These reviews cover surveys related to the IoT vision, enabling technologies, applications, key features, co-word and cluster analysis, and future directions. Nevertheless, an IoT scientometric review that uses scientific databases to perform a quantitative analysis has been lacking. This paper develops a scientometric review of IoT over a data set of 19,035 documents published over a period of 15 years (2002–2016) in two main scientific databases (Clarivate Web of Science and Scopus). A Python script called ScientoPy was developed to perform quantitative analysis of this data set. This provides insight into research trends by investigating lead authors' country affiliations, the most published authors, top research applications, communication protocols, software processing, hardware, operating systems and trending topics. Furthermore, we evaluate the top trending IoT topics and the popular hardware and software platforms that are used to research these trends.
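The quantitative analysis described here starts from simple aggregation of bibliographic records by year, country and topic. A stdlib-only sketch of that first step, over an invented record list (ScientoPy itself does much more, including data merging and trend ranking):

```python
from collections import Counter

# Invented bibliographic records: (year, lead-author country, topic keyword).
records = [
    (2014, "CN", "WSN"), (2015, "US", "MQTT"), (2015, "CN", "WSN"),
    (2016, "IN", "MQTT"), (2016, "CN", "RFID"), (2016, "US", "MQTT"),
]

papers_per_year = Counter(year for year, _, _ in records)
papers_per_country = Counter(country for _, country, _ in records)
topic_trend = Counter(topic for *_, topic in records)

print(papers_per_year.most_common())      # publications per year, descending
print(papers_per_country.most_common(1))  # top lead-author country
print(topic_trend.most_common(1))         # top topic in the sample
```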

  9. Scientometric analysis of the Ethiopian Journal of Agricultural ...

    African Journals Online (AJOL)

    A scientometric analysis of the Ethiopian Journal of Agricultural Sciences from volume 1 to 24, covering 279 papers, is reported. The journal covers all areas of agriculture; most papers have one, two or three authors, and in rare cases up to nine or twelve authors. The number of papers in agronomy, field ...

  10. A Framework for Text Mining in Scientometric Study: A Case Study in Biomedicine Publications

    Science.gov (United States)

    Silalahi, V. M. M.; Hardiyati, R.; Nadhiroh, I. M.; Handayani, T.; Rahmaida, R.; Amelia, M.

    2018-04-01

    Data on Indonesian research publications in the domain of biomedicine have been collected and text mined for the purpose of a scientometric study. The goal is to build a predictive model that classifies research publications by their potential for downstreaming. The model is based on drug-development processes adapted from the literature. An effort is described to build the conceptual model and to develop a corpus of research publications in the domain of Indonesian biomedicine. An investigation is then conducted into the problems associated with building a corpus and validating the model. Based on our experience, a framework is proposed for managing a text-mining-based scientometric study. Our method shows the effectiveness of conducting a scientometric study based on text mining in order to obtain a valid classification model. This valid model is mainly supported by iterative and close interactions with the domain experts, from identifying the issues and building a conceptual model through to labelling, validation and interpretation of the results.
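At its simplest, a publication classifier of the kind described can be sketched as keyword-overlap scoring against stage-specific term lists. The stage names and keyword lists below are invented for illustration; the actual model would be learned from the labelled biomedicine corpus the authors describe:

```python
# Minimal keyword-scoring sketch of a publication classifier.
# STAGE_KEYWORDS is a hypothetical vocabulary, not the authors' model.
STAGE_KEYWORDS = {
    "upstream":   {"compound", "screening", "mechanism", "assay"},
    "downstream": {"clinical", "trial", "formulation", "efficacy", "patients"},
}

def classify(abstract):
    """Return the stage whose keyword set overlaps the abstract's words most."""
    words = set(abstract.lower().split())
    scores = {stage: len(words & kws) for stage, kws in STAGE_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(classify("A randomized clinical trial of efficacy in patients"))
print(classify("High-throughput screening of a novel compound mechanism"))
```

A real pipeline would replace the fixed keyword sets with features learned from labelled abstracts (e.g. TF-IDF weights) and validate against held-out data, which is exactly the corpus-building and validation work the record describes.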

  11. Foreign articles as growth factor of falsification for scientific publications and reduction of scientometric indicators of organizations

    Directory of Open Access Journals (Sweden)

    Pototskaya O.Yu.

    2016-09-01

    Full Text Available Some orders of the Ministry of Education and Science of Ukraine encourage authors to publish their articles abroad but do not detail selection criteria for the recommended journals. As a result, authors choose the easiest way and publish their papers in low-quality foreign journals; moreover, some bogus journals have been created since the orders entered into force. In this article we emphasize the importance of journal registration in scientometric databases for the international and regional rating of universities. Attention is also paid to the need for authors to verify the quality and reliability of a journal before sending manuscripts for publication. To address these problems we propose indicating the URL of each article in the annual scientific reports of scientists, departments and organizations. This would help to verify the quality of the journal and focus authors' attention on checking for the existence of the journal's web site before sending their manuscripts. It is important to emphasize that journals without web sites do not influence the rating of scientific organizations in any scientometric database; if such a link is absent, the article should not be scored for the rating. An additional measure is to create an adequate system of scientific ratings. For example, articles indexed in major scientometric databases, such as Scopus and Web of Science, could be scored two (or more) times higher than other articles; papers indexed in other scientometric databases (such as RISC) could also be scored higher than other articles. Information about the principal scientometric databases should be clarified for scientists to help them choose the optimal journal for publishing their manuscripts.

  12. What kind of scientometrics and bibliometrics do we need in Poland? (in Polish

    Directory of Open Access Journals (Sweden)

    Michał KOKOWSKI

    2015-12-01

    Full Text Available The aim of this research study and review article is to examine the scientific basis of scientometrics and bibliometrics, i.e. to show their real “detection and measurement” capabilities. The analysis is conducted from the author’s perspective of the integrated science of science and the history and methodology of the science of science following this perspective. Particular emphasis is placed on the history and methodology of scientometrics and bibliometrics and the history and methodology of science. This perspective is a new approach to the subject matter and determines (a) how to select publications and their interpretations and (b) which hierarchy the analyzed issues should follow. The article describes the view, dominant both in the world and in Poland, on the basics of scientometrics and bibliometrics and their numerous serious scientific restrictions, such as: (a) the incompatibility of the so-called scientometric laws and the Garfield law of concentration with the empirical data; (b) the domain bias, the language bias and the geographical bias of indexation databases; (c) various practices of scientific communication; (d) the local (national or state-level) orientation of humanities, social sciences and citation indexes; (e) the disadvantages of the impact factor (IF), the manipulations with its values and the “impact factor game”; (f) the numerous problems with and abuses of citations, e.g. the Mendel syndrome, the “classic” publication bias, the palimpsestic syndrome, the effect of the disappearance of citations, the so-called Matthew effect, the theft of citations, the so-called secondary and tertiary citations, negative citations, “fashionable nonsenses”, forced citations, the pathologies of the so-called citation cartels or cooperative citations, the guest authorship and the honorable authorship; (g) the distinction between the “impact of publication” and the “importance of publication” or the

  13. Scientometric analysis in special education: importance and trends over the last 60 years

    Directory of Open Access Journals (Sweden)

    Anna Maria Canavarro Benite

    2011-10-01

    Full Text Available Special education in Brazil is defined in law as “the type of education offered preferentially in regular classes for students with special educational needs”. However, special education has not always been defined in this way, and a factor that contributed greatly to its consolidation as a specific field of study was research and the resulting publications. Thus, this study aimed to carry out a scientometric analysis of the field of special education in order to determine the main trends of research over the years, and to review the literature on the history of special education. It was found that special education has undergone major advances, with a consequent revaluation of individuals with special needs. The scientometric analysis suggests that there has been great evolution in the area, a fact evidenced by the large number of papers published over time and the diverse aspects covered by these publications. Reflections on scientometrics in special education that characterize the global production in the field and provide input for further research in this area are also presented. Thus, the construction and analysis of these indicators have provided input to assess the state of the art in special education.

  14. Scientometrics: Nature Index and Brazilian science.

    Science.gov (United States)

    Silva, Valter

    2016-09-01

    A recently published newspaper article commented on the (lack of) quality of Brazilian science and its (in)efficiency. The newspaper article was based on a special issue of Nature and on a new resource for scientometrics called the Nature Index. I show here arguments and sources of bias suggesting that, in light of the principle in dubio pro reo, it is questionable to dispute the quality and efficiency of Brazilian science on these grounds, as was done in the referred article. A brief overview of Brazilian science is provided for readers to make their own judgment.

  15. Scientometric analysis: A technical need for medical science researchers either as authors or as peer reviewers.

    Science.gov (United States)

    Masic, Izet

    2016-01-01

    The nature of performing scientific research is a process with several components: identifying the key research question(s), choosing the scientific approach for the study and data collection, analyzing the data, and finally reporting the results. Generally, peer review is a series of procedures in the evaluation of a creative work or performance by other people who work in the same or a related field, with the aim of maintaining and improving the quality of work or performance in that field. The assessment of the achievements of every scientist, and thus indirectly of his or her reputation in the scientific community, is made on the basis of publications, especially in journals, through the so-called impact factor. The impact factor estimates how many annual citations an article may receive after its publication. Evaluation of scientific productivity and assessment of the published articles of researchers and scientists can be made through the so-called H-index. The quality of published results of scientific work largely depends on the knowledge sources used in their preparation, which means that both the fitness for purpose and the relevance of the information used should be considered. Scientometrics as a field of science covers all the aforementioned issues, and scientometric analysis is obligatory for assessing the scientific validity of published articles and other types of publications.
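The impact factor and H-index mentioned in this abstract have simple counting definitions. A minimal Python sketch (illustrative only; it uses the standard textbook definitions of the two indicators, not anything taken from the abstract itself):

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that the author has
    at least h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h


def impact_factor(citations_this_year, items_prev_two_years):
    """Classic two-year journal impact factor: citations received this
    year to items from the previous two years, divided by the number of
    citable items published in those two years."""
    return citations_this_year / items_prev_two_years


# A researcher whose five papers have these citation counts has h-index 4:
assert h_index([10, 8, 5, 4, 3]) == 4
```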

  16. Laying the Foundations for Scientometric Research: A Data Science Approach

    Science.gov (United States)

    Perron, Brian E.; Victor, Bryan G.; Hodge, David R.; Salas-Wright, Christopher P.; Vaughn, Michael G.; Taylor, Robert Joseph

    2017-01-01

    Objective: Scientometric studies of social work have stagnated due to problems with the organization and structure of the disciplinary literature. This study utilized data science to produce a set of research tools to overcome these methodological challenges. Method: We constructed a comprehensive list of social work journals for a 25-year time…

  17. Scientific Production of Medical Universities in the West of Iran: a Scientometric Analysis.

    Science.gov (United States)

    Rasolabadi, Masoud; Khaledi, Shahnaz; Khayati, Fariba; Kalhor, Marya Maryam; Penjvini, Susan; Gharib, Alireza

    2015-08-01

    This study aimed to compare scientific production by providing a quantitative evaluation of the science output of five Western Iranian medical universities, namely Hamedan, Ilam, Kermanshah, Kurdistan and Lorestan Universities of Medical Sciences, using scientometric indicators based on data indexed in Scopus for the period 2010 to 2014. In this scientometric study, data were collected using the Scopus database; both the searching and the analyzing features of Scopus were used for data retrieval and analysis. We used scientometric indicators including the number of publications, number of citations, nationalization index (NI), internationalization index (INI), H-index, average number of citations per paper, and growth index. The five universities produced a total of 3011 articles from 2010 to 2014. These articles were cited 7158 times, with an average rate of 4.2 citations per article. The H-indices of the universities under study vary from 14 to 30. Ilam University of Medical Sciences had the highest international collaboration, with an INI of 0.33, compared with the Hamedan and Kermanshah universities with INIs of 0.20 and 0.16, respectively. The lowest international collaboration belonged to Lorestan University of Medical Sciences (0.07). The highest growth index belonged to Kurdistan University of Medical Sciences (69.7). Although the scientific production of the five universities was increasing, the trend was not stable. To achieve better performance, it is recommended that the five universities stabilize their budgeting and investment policies in research.

  18. Scientometric indicators for Brazilian research on High Energy Physics, 1983-2013

    Directory of Open Access Journals (Sweden)

    GONZALO R. ALVAREZ

    Full Text Available ABSTRACT This article presents an analysis of Brazilian research on High Energy Physics (HEP) indexed by the Web of Science (WoS) from 1983 to 2013. Scientometric indicators for output, collaboration and impact were used to characterize the field under study. The results show that Brazilian articles account for 3% of total HEP research worldwide and that the sharp rise in scientific activity between 2009 and 2013 may have resulted from the consolidation of graduate programs, the increase in funding and international collaboration, and the implementation of the Rede Nacional de Física de Altas Energias (RENAFAE) in 2008. Our results also indicate that the collaboration patterns in terms of authors, institutions and countries confirm the presence of Brazil in multinational Big Science experiments, which may also explain the prevalence of foreign citing documents (of all types), emphasizing the international prestige and visibility of the output of Brazilian scientists. We conclude that the scientometric indicators suggest scientific maturity in the Brazilian HEP community, owing to its long history of experimental research.

  19. Benchmarking a geostatistical procedure for the homogenisation of annual precipitation series

    Science.gov (United States)

    Caineta, Júlio; Ribeiro, Sara; Henriques, Roberto; Soares, Amílcar; Costa, Ana Cristina

    2014-05-01

    The European project COST Action ES0601, Advances in homogenisation methods of climate series: an integrated approach (HOME), has brought to attention the importance of establishing reliable homogenisation methods for climate data. In order to achieve that, a benchmark data set, containing monthly and daily temperature and precipitation data, was created to be used as a comparison basis for the effectiveness of those methods. Several contributions were submitted and evaluated by a number of performance metrics, validating the results against realistic inhomogeneous data. HOME also led to the development of new homogenisation software packages, which incorporated feedback and lessons learned during the project. Preliminary studies have suggested a geostatistical stochastic approach, which uses Direct Sequential Simulation (DSS), as a promising methodology for the homogenisation of precipitation data series. Based on the spatial and temporal correlation between the neighbouring stations, DSS calculates local probability density functions at a candidate station to detect inhomogeneities. The purpose of the current study is to test and compare this geostatistical approach with the methods previously presented in the HOME project, using surrogate precipitation series from the HOME benchmark data set. The benchmark data set contains monthly precipitation surrogate series, from which annual precipitation data series were derived. These annual precipitation series were subject to exploratory analysis and to a thorough variography study. The geostatistical approach was then applied to the data set, based on different scenarios for the spatial continuity. Implementing this procedure also promoted the development of a computer program that aims to assist in the homogenisation of climate data while minimising user interaction.
Finally, in order to compare the effectiveness of this methodology with the homogenisation methods submitted during the HOME project, the obtained results

  20. Visualization of research activity using density-equalizing mapping and scientometric benchmarking procedures

    OpenAIRE

    Zell, Hanna

    2011-01-01

    Ever since the first purposeful use of fire by humankind, anthropogenic air pollution has been an issue; by the beginning of industrialization at the latest, it had become a severe problem. Today, the main causes of anthropogenic air pollution are road traffic (a “mobile” source) and processes in energy production and industry (“stationary” sources). Efforts to improve air quality are mainly aimed at anthropogenic air pollution, as it is the only factor susceptible to intervention. Polluted air ...

  1. Spatial Scientometrics and Scholarly Impact : A Review of Recent Studies, Tools, and Methods

    NARCIS (Netherlands)

    Frenken, Koen; Hoekman, Jarno

    2014-01-01

    Previously, we proposed a research program to analyze spatial aspects of the science system which we called “spatial scientometrics” (Frenken, Hardeman, & Hoekman, 2009). The aim of this chapter is to systematically review recent (post-2008) contributions to spatial scientometrics on the basis of a

  2. The impact of Monte Carlo simulation: a scientometric analysis of scholarly literature

    CERN Document Server

    Pia, Maria Grazia; Bell, Zane W; Dressendorfer, Paul V

    2010-01-01

    A scientometric analysis of Monte Carlo simulation and Monte Carlo codes has been performed over a set of representative scholarly journals related to radiation physics. The results of this study are reported and discussed. They document and quantitatively appraise the role of Monte Carlo methods and codes in scientific research and engineering applications.

  3. Scientometric trend analyses of publications on the history of psychology: Is psychology becoming an unhistorical science?

    Science.gov (United States)

    Krampen, Günter

    Examines scientometrically the trends in and the recent situation of research on and the teaching of the history of psychology in the German-speaking countries and compares the findings with the situation in other countries (mainly the United States) by means of the psychology databases PSYNDEX and PsycINFO. Declines in publications on the history of psychology are described scientometrically for both research communities since the 1990s. Some impulses are suggested for the future of research on and the teaching of the history of psychology. These include (1) the necessity and significance of an intensified use of quantitative, unobtrusive scientometric methods in historiography in times of digital "big data", (2) the necessity and possibilities of integrating qualitative and quantitative methodologies in historical research and teaching, (3) the reasonableness of interdisciplinary cooperation of specialist historians, scientometricians, and psychologists, (4) the meaningfulness and necessity of exploring, investigating, and teaching more intensively the past and the problem history of psychology as well as the understanding of the subject matter of psychology in its historical development in cultural contexts. The outlook on the future of such a more up-to-date research on and teaching of the history of psychology is, with some caution, positive.

  4. [Scientometric and publication malpractices. The appearance of globalization in biomedical publishing].

    Science.gov (United States)

    Fazekas, T; Varró, V

    2001-09-16

    Attention is drawn to publication and scientometric malpractices by biomedical authors who do not adhere to accepted ethical norms. The difference between duplicate/redundant and bilingual publications is defined. In the course of discussing the manipulations that may be observed in the field of scientometry, it is pointed out that abstracts of congress lectures/posters cannot be taken into consideration for scientometric purposes, even if such abstracts are published in journals with impact factors. A further behavioral form is likewise regarded as unacceptable from the aspect of publication ethics: when a physician who has participated in a multicentre, randomized clinical trial receives recognition (in an appendix or in the acknowledgements of an article) as having contributed data, but presents this appreciation as co-authorship and thereby attempts to augment the value of his or her publication activity. The effects of globalization on biomedical publication activity are considered, and evidence is provided that the rapidly spreading electronic publication fora give rise to new types of ethical dilemmas. It is recommended that, in the current age of Anglo-American globalization, greater emphasis should be placed on the development of medical publication in the mother tongue (Hungarian).

  5. Scientometrics of drug discovery efforts: pain-related molecular targets

    Directory of Open Access Journals (Sweden)

    Kissin I

    2015-07-01

    Full Text Available Igor Kissin, Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA. Abstract: The aim of this study was to make a scientometric assessment of drug discovery efforts centered on pain-related molecular targets. The following scientometric indices were used: the popularity index, representing the share of articles (or patents) on a specific topic among all articles (or patents) on pain over the same 5-year period; the index of change, representing the change in the number of articles (or patents) on a topic from one 5-year period to the next; the index of expectations, representing the ratio of the number of all types of articles on a topic in the top 20 journals relative to the number of articles in all (>5,000) biomedical journals covered by PubMed over a 5-year period; the total number of articles representing Phase I–III trials of investigational drugs over a 5-year period; and the trial balance index, a ratio of Phase I–II publications to Phase III publications. Articles (PubMed database) and patents (US Patent and Trademark Office database) on 17 topics related to pain mechanisms were assessed during six 5-year periods from 1984 to 2013. During the most recent 5-year period (2009–2013), seven of the 17 topics demonstrated high research activity (purinergic receptors, serotonin, transient receptor potential channels, cytokines, gamma aminobutyric acid, glutamate, and protein kinases). However, even for these seven topics, the index of expectations decreased or did not change compared with the 2004–2008 period. In addition, publications representing Phase I–III trials of investigational drugs (2009–2013) did not indicate great enthusiasm on the part of the pharmaceutical industry regarding drugs specifically designed for treatment of pain. A promising development related to the new tool of molecular targeting, ie, monoclonal antibodies, for pain treatment has not
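The indices defined in this abstract reduce to simple ratios of publication counts. A minimal Python sketch (illustrative only; the function and parameter names are mine, not Kissin's, and the ratio form of the index of change is an assumption, since the abstract does not specify whether it is a ratio or a difference):

```python
def popularity_index(topic_articles, all_pain_articles):
    """Share of articles (or patents) on a specific topic among all
    articles (or patents) on pain over the same 5-year period."""
    return topic_articles / all_pain_articles


def index_of_change(current_period_articles, previous_period_articles):
    """Change in the number of articles (or patents) on a topic from one
    5-year period to the next, expressed here (an assumption) as a ratio."""
    return current_period_articles / previous_period_articles


def index_of_expectations(top20_journal_articles, all_journal_articles):
    """Ratio of articles on a topic in the top 20 journals to articles in
    all PubMed-covered biomedical journals over a 5-year period."""
    return top20_journal_articles / all_journal_articles


def trial_balance_index(phase_1_2_publications, phase_3_publications):
    """Ratio of Phase I-II trial publications to Phase III publications."""
    return phase_1_2_publications / phase_3_publications
```

For example, a topic with 50 of 1,000 pain articles in a period would have a popularity index of 0.05, and 120 articles following 100 in the previous period would give an index of change of 1.2.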

  6. A 5-year scientometric analysis of research centers affiliated to Tehran University of Medical Sciences

    Science.gov (United States)

    Yazdani, Kamran; Rahimi-Movaghar, Afarin; Nedjat, Saharnaz; Ghalichi, Leila; Khalili, Malahat

    2015-01-01

    Background: Since Tehran University of Medical Sciences (TUMS) has the oldest and the largest number of research centers among all Iranian medical universities, this study was conducted to evaluate the scientific output of research centers affiliated to TUMS using scientometric indices, and to identify the factors affecting it. Moreover, a number of scientometric indicators were introduced. Methods: This cross-sectional study evaluated the 5-year scientific performance of the research centers of TUMS. Data were collected through questionnaires, annual evaluation reports of the Ministry of Health, and the Scopus database. We used appropriate measures of central tendency and variation for descriptive analyses. Moreover, uni- and multi-variable linear regression were used to evaluate the effect of independent factors on the scientific output of the centers. Results: The medians of the numbers of papers and books during the 5-year period were 150.5 and 2.5, respectively. The median of "articles per researcher" was 19.1. Based on multiple linear regression, younger age of the center (p=0.001), having a separate budget line (p=0.016), and the number of research personnel (p<0.001) had a direct significant correlation with the number of articles, while real properties had an inverse significant correlation with it (p=0.004). Conclusion: The results can help policy makers and research managers allocate sufficient resources to improve the current situation of the centers. Newly adopted and effective scientometric indices are suggested for evaluating the scientific outputs and functions of these centers. PMID:26157724

  7. Scientometric methods for identifying emerging technologies

    Science.gov (United States)

    Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T

    2015-11-03

    Provided is a method of generating a scientometric model that tracks the emergence of an identified technology from initial discovery (via original scientific and conference literature), through critical discoveries (via original scientific and conference literature and patents), transitioning through Technology Readiness Levels (TRLs) and ultimately on to commercial application. During the period of innovation and technology transfer, the impact of scholarly works, patents and on-line web news sources is identified. As trends develop, currency of citations, collaboration indicators, and on-line news patterns are identified. The combination of four distinct and separate searchable on-line networked sources (i.e., scholarly publications and citations, worldwide patents, news archives, and on-line mapping networks) is assembled into one collective network (a dataset for analysis of relations). This established network becomes the basis from which to quickly analyze the temporal flow of activity (searchable events) for the example subject domain.

  8. Scientometric Indicators as a Way to Classify Brands for Customer’s Information

    Directory of Open Access Journals (Sweden)

    Mihaela Paun

    2015-10-01

    Full Text Available The paper proposes a novel approach for classification of different brands that commercialize similar products, for customer information. The approach is tested on electronic shopping records found on Amazon.com, by quantifying customer behavior and comparing the results with classifications of the same brands found online through search engines. The indicators proposed for the classification are currently used scientometric measures that can be easily applied to marketing classification.

  9. Paul Hagenmüller's contribution to solid state chemistry: A scientometric analysis

    Science.gov (United States)

    El Aichouchi, Adil; Gorry, Philippe

    2018-06-01

    Paul Hagenmüller (1921-2017) is an important figure of French solid-state chemistry, who enjoyed scientific and institutional recognition. He published 796 papers and has been cited more than 16,000 times. This paper explores Hagenmüller's work using scientometric analysis to reveal the impact of his work, his main research topics and his collaborations. Although Hagenmüller was a recognized scientist, a subset of his work, now highly cited, attracted little attention at the time of publication. To understand this phenomenon, we detect and study papers with delayed recognition, also called 'Sleeping Beauties' (SBs). In scientometrics, SBs are publications that go unnoticed, or 'sleep' for a long time before suddenly attracting a lot of attention in terms of citations. We identify 7 SBs published between 1965 and 1985, and awakened between 1993 and 2010. The first SB reports the discovery of the clathrate structure of silicon. The second reports the isolation of four new phases with the formula NaxCoO2 (x ≤ 1). The five other SBs investigate the electrochemical intercalation and deintercalation of sodium, and the structure and properties of layered oxides. Through interviews with his coworkers, we attempt to identify the reasons for the delayed recognition and the context of the renewed interest in those papers.

  10. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    textabstractBenchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  11. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy and recommendations for its potential uses in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately to plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is also used internally to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather these data have had limited success. We believe this information is potentially important, urge that efforts to gather it be continued, and offer suggestions to achieve full participation.

  12. The Emergence and Evolution of School Psychology Literature: A Scientometric Analysis from 1907 through 2014

    Science.gov (United States)

    Liu, Shuyan; Oakland, Thomas

    2016-01-01

    The objective of this current study is to identify the growth and development of scholarly literature that specifically references the term "school psychology" in the Science Citation Index from 1907 through 2014. Documents from Web of Science were accessed and analyzed through the use of scientometric analyses, including HistCite and…

  13. Global Research on Smoking and Pregnancy—A Scientometric and Gender Analysis

    Directory of Open Access Journals (Sweden)

    Mathias Mund

    2014-05-01

    Full Text Available The exposure to tobacco smoke during pregnancy is considered to be amongst the most harmful avoidable risk factors. In this scientometric and gender study, scientific data on smoking and pregnancy were analyzed using a variety of objective scientometric methods, such as the number of scientific contributions, the number of citations and the modified h-index, in combination with gender-specific investigations. Covering the time period from 1900 to 2012, the publishing activities of 27,955 authors, institutions and countries, the reception within the international scientific community and its reactions were analyzed and interpreted. Out of 10,043 publications, the highest number of scientific works were published in the USA (35.5%), followed by the UK (9.9%) and Canada (5.3%). These nations also achieve the highest modified h-indices of 128, 79 and 62 and the highest citation rates of 41.4%, 8.6% and 5.3%, respectively. Out of 12,596 scientists, 6,935 are female (55.1%); however, they account for no more than 49.7% of publications (12,470) and 42.8% of citations (172,733). The highest percentage of female experts on smoking and pregnancy is found in Australasia (60.7%), while the lowest is found in Asia (41.9%). The findings of the study indicate an increase in gender equality as well as in the quantity and quality of international scientific research about smoking and pregnancy in the future.

  14. Global Research on Smoking and Pregnancy—A Scientometric and Gender Analysis

    Science.gov (United States)

    Mund, Mathias; Kloft, Beatrix; Bundschuh, Matthias; Klingelhoefer, Doris; Groneberg, David A.; Gerber, Alexander

    2014-01-01

    The exposure to tobacco smoke during pregnancy is considered to be amongst the most harmful avoidable risk factors. In this scientometric and gender study scientific data on smoking and pregnancy was analyzed using a variety of objective scientometric methods like the number of scientific contributions, the number of citations and the modified h-index in combination with gender-specific investigations. Covering a time period from 1900 to 2012, publishing activities of 27,955 authors, institutions and countries, reception within the international scientific community and its reactions were analyzed and interpreted. Out of 10,043 publications the highest number of scientific works were published in the USA (35.5%), followed by the UK (9.9%) and Canada (5.3%). These nations also achieve the highest modified h-indices of 128, 79 and 62 and the highest citation rates of 41.4%, 8.6% and 5.3%, respectively. Out of 12,596 scientists 6,935 are female (55.1%), however they account for no more than 49.7% of publications (12,470) and 42.8% of citations (172,733). The highest percentage of female experts about smoking and pregnancy is found in Australasia (60.7%), while the lowest is found in Asia (41.9%). The findings of the study indicate an increase in gender equality as well as in quantity and quality of international scientific research about smoking and pregnancy in the future. PMID:24879489

  15. What Is Citizen Science? – A Scientometric Meta-Analysis

    Science.gov (United States)

    Kullenberg, Christopher; Kasperowski, Dick

    2016-01-01

    Context: The concept of citizen science (CS) is currently referred to by many actors inside and outside science and research. Several descriptions of this purportedly new approach to science are often heard in connection with large datasets and the possibility of mobilizing crowds outside science to assist with observations and classifications. However, other accounts refer to CS as a way of democratizing science, aiding concerned communities in creating data to influence policy, and as a way of promoting political decision processes involving the environment and health. Objective: In this study we analyse two datasets (N = 1935, N = 633) retrieved from the Web of Science (WoS) with the aim of giving a scientometric description of what the concept of CS entails. We account for its development over time and which strands of research have adopted CS, and give an assessment of the scientific output achieved in CS-related projects. To attain this, scientometric methods have been combined with qualitative approaches to render more precise search terms. Results: The results indicate that there are three main focal points of CS. The largest is composed of research on biology, conservation and ecology, and utilizes CS mainly as a methodology for collecting and classifying data. A second strand of research has emerged through geographic information research, where citizens participate in the collection of geographic data. Thirdly, there is a line of research relating to the social sciences and epidemiology, which studies and facilitates public participation in relation to environmental issues and health. In terms of scientific output, the largest body of articles is to be found in biology and conservation research. In absolute numbers, the number of publications generated by CS is low (N = 1935), but over the past decade a new and very productive line of CS based on digital platforms has emerged for the collection and classification of data. PMID:26766577

  16. Benchmarking Swiss electricity grids

    International Nuclear Information System (INIS)

    Walti, N.O.; Weber, Ch.

    2001-01-01

    This extensive article describes a pilot benchmarking project initiated by the Swiss Association of Electricity Enterprises that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis included data on technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed, and the results of the pilot study are presented; these are to form the basis of benchmarking procedures for the grid regulation authorities under Switzerland's planned electricity market law. Examples of the practical use of the benchmarking methods are given, and cost-efficiency questions still open in the area of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article.

  17. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  18. Bibliometrics, informetrics, scientometrics and other "metrics" in Brazil

    Directory of Open Access Journals (Sweden)

    Ruben Urbizagastegui

    2016-09-01

    http://dx.doi.org/10.5007/1518-2924.2016v21n47p51 This study analyzes the demographics of the published literature on "bibliometric studies" produced by Brazilian and foreign authors in Brazil from 1973 to 2012. The types of documents, journals and congresses most used to communicate the results of investigations are analyzed, and the most productive authors are identified. 2,300 papers published until December 2012 by 3,320 authors were found. Most common are papers published in academic journals (60%) and papers presented at congresses and conferences (36.5%). Documents are predominantly published in Portuguese (87%). The journals with the largest number of published papers are Ciência da Informação, Scientometrics, Encontros Bibli and Perspectivas em Ciência da Informação.

  19. Benchmarking gate-based quantum computers

    Science.gov (United States)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
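    As a toy illustration of why identity circuits make sensitive benchmarks, the sketch below simulates repeated X-X gate pairs (logically the identity) on a single-qubit state vector. The systematic over-rotation error model is an assumption chosen for demonstration, not the protocol or error model used in the paper:

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation about X; theta = pi gives an X gate (up to a global phase)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def identity_benchmark(n_pairs, over_rotation=0.0):
    """Apply n_pairs of X-X gate pairs to |0> and return the probability
    of still measuring |0>. A perfect device returns 1 for any depth."""
    state = np.array([1.0, 0.0], dtype=complex)
    gate = rx(np.pi + over_rotation)        # imperfect X gate
    for _ in range(2 * n_pairs):
        state = gate @ state
    return abs(state[0]) ** 2

print(identity_benchmark(50, over_rotation=0.0))   # ~1.0: identity preserved
print(identity_benchmark(50, over_rotation=0.01))  # noticeably below 1: errors accumulate
```

Because the ideal outcome is known exactly at any circuit depth, the benchmark scales trivially while remaining sensitive to small coherent gate errors, which accumulate over the 2 × n_pairs applications.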

  20. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  1. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  2. A scientometric examination of the water quality research in India.

    Science.gov (United States)

    Nishy, P; Saroja, Renuka

    2018-03-16

    Water quality has emerged as a fast-developing research area, and regular assessment of research activity is necessary for successful R&D promotion. Water quality research carried out in different countries has increased over the years; the USA ranks first in productivity, while India stands in seventh position in quantity and ninth position in quality of research output. India shows steady growth in water quality research. Four thousand six hundred and sixteen articles from India were assessed with respect to citations received, distributions of source countries, institutes, journals, impact factors, title words and author keywords. The qualitative and quantitative analysis identifies the contributions of the major institutions involved in research. Much of the country's water quality research is carried out by universities, public research institutions and science councils, whereas the contribution from the Ministry of Water Resources is not as significant. A considerable portion of Indian research is communicated through foreign journals, the most active being Environmental Monitoring and Assessment. Twenty-one percent of the work is reported in journals published from India, and around 7% in open-access journals. The study highlights that international collaborative research resulted in high-quality papers. The authors meticulously analyse the published research works to gain a deeper understanding of focus areas through word cluster analyses on title words and keywords. While many papers deal with 'contamination', 'assessment' and 'treatment', and sufficient studies address 'water quality index' and 'toxicity', considerable work is carried out on environmental, agricultural, industrial and health problems related to water quality. This detailed scientometric study of 109,766 research works from SCI-E during 1986-2015 plots the trends and identifies research hotspots for the benefit of scientists in the subject area.

  3. Contemporary state of the problem of radiation lesions of the lungs: scientometric analysis

    International Nuclear Information System (INIS)

    Artamonova, N.O.; Kulyinyich, G.V.; Gajsenyuk, L.O.; Masyich, O.V.; Pavlyichenko, Yu.V.

    2011-01-01

    A scientometric analysis of the contemporary state and development prospects of the problem of radiation lung lesions made it possible to demonstrate the urgency of the topic. Among the individual questions, the most important is solving the problem of lung radiation toxicity. The majority of publications on clinical trials suggest a constant search for new means of prevention and treatment. It was established that the efficacy of information searches on lung radiation lesions depended on adequate use of the term corpus of the PubMed database.

  4. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

    Benchmarking has different meanings to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry/functional benchmarking, process/generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore it is important to know what kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  5. Defining a methodology for benchmarking spectrum unfolding codes

    International Nuclear Information System (INIS)

    Meyer, W.; Kirmser, P.G.; Miller, W.H.; Hu, K.K.

    1976-01-01

    It has long been recognized that different neutron spectrum unfolding codes will produce significantly different results when unfolding the same measured data. In reviewing the results of such analyses it has been difficult to determine which result, if any, is the best representation of what was measured by the spectrometer detector. A proposal to develop a benchmarking procedure for spectrum unfolding codes is presented. The objective of the procedure will be to begin developing a methodology and a set of data with a well established and documented result that could be used to benchmark and standardize the various unfolding methods and codes. It is further recognized that the development of such a benchmark must involve a consensus of the technical community interested in neutron spectrum unfolding.

  6. Suggested benchmarks for shape optimization for minimum stress concentration

    DEFF Research Database (Denmark)

    Pedersen, Pauli

    2008-01-01

    Shape optimization for minimum stress concentration is vital, important, and difficult. New formulations and numerical procedures imply the need for good benchmarks. The available analytical shape solutions rely on assumptions that are seldom satisfied, so here, we suggest alternative benchmarks...

  7. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  8. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  9. EU and OECD benchmarking and peer review compared

    NARCIS (Netherlands)

    Groenendijk, Nico

    2009-01-01

    Benchmarking and peer review are essential elements of the so-called EU open method of coordination (OMC) which has been contested in the literature for lack of effectiveness. In this paper we compare benchmarking and peer review procedures as used by the EU with those used by the OECD. Different

  10. IT-benchmarking of clinical workflows: concept, implementation, and evaluation.

    Science.gov (United States)

    Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula

    2014-01-01

    Due to the emerging evidence of health IT as both an opportunity and a risk for clinical workflows, health IT must undergo continuous measurement of its efficacy and efficiency. IT benchmarks are a proven means of providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking; their hospitals were assigned to reference groups of similar size and ownership, drawn from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for a new IT project.

  11. PROCEDURES FOR THE DERIVATION OF EQUILIBRIUM PARTITIONING SEDIMENT BENCHMARKS (ESBS) FOR THE PROTECTION OF BENTHIC ORGANISMS: COMPENDIUM OF TIER 2 VALUES FOR NONIONIC ORGANICS

    Science.gov (United States)

    This equilibrium partitioning sediment benchmark (ESB) document describes procedures to derive concentrations for 32 nonionic organic chemicals in sediment which are protective of the presence of freshwater and marine benthic organisms. The equilibrium partitioning (EqP) approach...

  12. Mezhdunarodnoe nauchnoe sotrudnichestvo v Baltijskom regione: naukometricheskij analiz [International research cooperation in the Baltic region: a scientometric analysis]

    Directory of Open Access Journals (Sweden)

    Kuznetsova Tatyana

    2012-01-01

    This article examines the processes of international research cooperation in the Baltic Sea region, focusing on research works published in the leading periodicals in 1993-2012. The empirical material is collected from the world's largest abstract and citation database, SciVerse Scopus, which makes it possible to evaluate macro-indicators at the national and global levels as well as the contribution of scholars to global progress. The article also offers an assessment of the efficiency of research activities in the Baltic Sea region countries, based on a number of scientometric indicators that reflect the performance of universities in terms of research journal publications and the development of research cooperation in the field of Baltic studies. The authors consider the dynamics of research contribution and academic cooperation in the Baltic Sea countries in four leading fields, i.e. agricultural and biological sciences, Earth sciences, ecology and social sciences, as presented in the SciVerse Scopus scientometric database. The article provides a map of research cooperation in the Baltic Sea region.

  13. Different Traditions in the Study of Disciplinarity in Science--Science and Technology Studies, Library and Information Science and Scientometrics

    Science.gov (United States)

    Milojevic, Staša

    2013-01-01

    Introduction: Disciplinarity and other forms of differentiation in science have long been studied in the fields of science and technology studies, information science and scientometrics. However, it is not obvious whether these fields are building on each other's findings. Methods: An analysis is made of 609 articles on disciplinarity…

  14. Equilibrium Partitioning Sediment Benchmarks (ESBs) for the Protection of Benthic Organisms: Procedures for the Determination of the Freely Dissolved Interstitial Water Concentrations of Nonionic Organics

    Science.gov (United States)

    This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it account...

  15. Developing a Benchmarking Process in Perfusion: A Report of the Perfusion Downunder Collaboration

    Science.gov (United States)

    Baker, Robert A.; Newland, Richard F.; Fenton, Carmel; McDonald, Michael; Willcox, Timothy W.; Merry, Alan F.

    2012-01-01

    Improving and understanding clinical practice is an appropriate goal for the perfusion community. The Perfusion Downunder Collaboration has established a multi-center, perfusion-focused database aimed at achieving these goals through the development of quantitative quality indicators for clinical improvement through benchmarking. Data were collected using the Perfusion Downunder Collaboration database from procedures performed in eight Australian and New Zealand cardiac centers between March 2007 and February 2011. At the Perfusion Downunder Meeting in 2010, it was agreed by consensus to report quality indicators (QIs) for glucose level, arterial outlet temperature, and pCO2 management during cardiopulmonary bypass. The values chosen for each QI were: blood glucose ≥4 mmol/L and ≤10 mmol/L; arterial outlet temperature ≤37°C; and arterial blood gas pCO2 ≥35 and ≤45 mmHg. The QI data were used to derive benchmarks using the Achievable Benchmark of Care (ABC™) methodology to identify the incidence of QIs at the best-performing centers. Five thousand four hundred and sixty-five procedures were evaluated to derive QI and benchmark data. The incidence of the blood glucose QI ranged from 37-96% of procedures, with a benchmark value of 90%; the arterial outlet temperature QI occurred in 16-98% of procedures, with a benchmark of 94%; and the arterial pCO2 QI occurred in 21-91%, with a benchmark value of 80%. We have derived QIs and benchmark calculations for the management of several key aspects of cardiopulmonary bypass to provide a platform for improving the quality of perfusion practice. PMID:22730861
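    The ABC™ derivation used above can be sketched as follows. The ranking by an adjusted rate and the rule of pooling the best centres until they cover a minimum fraction of all procedures follow the published ABC method in simplified form; the centre data and the 10% threshold default are invented for illustration:

```python
def abc_benchmark(centres, min_fraction=0.1):
    """Achievable Benchmark of Care (simplified sketch).

    centres: list of (n_met, n_total) pairs, one per centre, where n_met is
    the number of procedures meeting the quality indicator. Centres are ranked
    by the adjusted rate (n_met + 1) / (n_total + 2), which damps the rates of
    very small centres; the best centres are then pooled until they cover at
    least min_fraction of all procedures, and the benchmark is the pooled rate.
    """
    total = sum(n_total for _, n_total in centres)
    ranked = sorted(centres, key=lambda c: (c[0] + 1) / (c[1] + 2), reverse=True)
    met = seen = 0
    for n_met, n_total in ranked:
        met += n_met
        seen += n_total
        if seen >= min_fraction * total:
            break
    return met / seen

# Invented QI data: (procedures meeting the indicator, total procedures).
data = [(90, 100), (450, 500), (300, 400), (150, 160), (700, 1000)]
print(round(abc_benchmark(data), 3))  # pooled rate of the best-performing centres
```

The adjusted-rate ranking is the reason a tiny centre with a perfect raw score cannot single-handedly set an unrealistically high benchmark.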

  16. A Study of Scientometric Methods to Identify Emerging Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Abercrombie, Robert K [ORNL; Udoeyop, Akaninyene W [ORNL

    2011-01-01

    This work examines a scientometric model that tracks the emergence of an identified technology from initial discovery (via original scientific and conference literature), through critical discoveries (via original scientific and conference literature and patents), transitioning through Technology Readiness Levels (TRLs) and ultimately on to commercial application. During the period of innovation and technology transfer, the impact of scholarly works, patents and on-line web news sources is identified. As trends develop, currency of citations, collaboration indicators, and on-line news patterns are identified. The combination of four distinct and separate searchable on-line networked sources (i.e., scholarly publications and citations, worldwide patents, news archives, and on-line mapping networks) is assembled into one collective network (a dataset for analysis of relations). This established network becomes the basis from which to quickly analyze the temporal flow of activity (searchable events) for the example subject domain we investigated.

  17. JENDL-4.0 benchmarking for fission reactor applications

    International Nuclear Information System (INIS)

    Chiba, Go; Okumura, Keisuke; Sugino, Kazuteru; Nagaya, Yasunobu; Yokoyama, Kenji; Kugo, Teruhiko; Ishikawa, Makoto; Okajima, Shigeaki

    2011-01-01

    Benchmark testing for the newly developed Japanese evaluated nuclear data library JENDL-4.0 is carried out by using a huge amount of integral data. Benchmark calculations are performed with a continuous-energy Monte Carlo code and with the deterministic procedure, which has been developed for fast reactor analyses in Japan. Through the present benchmark testing using a wide range of benchmark data, significant improvement in the performance of JENDL-4.0 for fission reactor applications is clearly demonstrated in comparison with the former library JENDL-3.3. Much more accurate and reliable prediction for neutronic parameters for both thermal and fast reactors becomes possible by using the library JENDL-4.0. (author)

  18. Assessment of Non-Financial Criteria in the Selection of Investment Projects for Seed Capital Funding: the Contribution of Scientometrics and Patentometrics

    Directory of Open Access Journals (Sweden)

    Gustavo da Silva Motta

    2012-09-01

    The aim of this article is to assess the potential of using scientometric and patentometric indicators as a way of instrumentalizing the selection process of projects for seed capital funding. There is an increasing interest in technology-based enterprises for their capacity to contribute to economic and social development, but there is also some difficulty in assessing non-financial criteria associated with technology for the purposes of financial funding. Thus, this research selected the case of the first enterprise invested in by the largest seed capital fund in Brazil, in order to create scientific and technological indicators and to assess the extent to which these indicators may contribute to understanding the market potential of the technology being assessed. It was concluded that scientometric and patentometric indicators favour the assessment process for non-financial criteria, in particular those criteria dealt with in this study: technology, market, divestment, and team.

  19. Hirschsprung Disease: Critical Evaluation of the Global Research Architecture Employing Scientometrics and Density-Equalizing Mapping.

    Science.gov (United States)

    Schöffel, Norman; Gfroerer, Stefan; Rolle, Udo; Bendels, Michael H K; Klingelhöfer, Doris; Groneberg-Kloft, Beatrix

    2017-04-01

    Introduction: Hirschsprung disease (HD) is a congenital bowel innervation disorder that involves several clinical specialties. There is an increasing interest in the topic, reflected by the number of annually published items. It is therefore difficult for a single scientist to survey all published items and to gauge their scientific importance or value. Thus, tremendous efforts have been made over the past decades to establish sustainable parameters for evaluating scientific work: the birth of scientometrics. Materials and Methods: To quantify the global research activity in this field, a scientometric analysis was conducted. We analyzed the research output of countries, individual institutions, authors, and their collaborative networks by using the Web of Science database. Density-equalizing maps and network diagrams were employed as state-of-the-art visualization techniques. Results: The United States is the leading country in terms of published items (n = 685), institutions (n = 347), and cooperation (n = 112). However, despite this dominance in quantity, the most intensive international networks between authors and institutions are not linked to the United States. By contrast, most of the European countries combine the highest impact of publications. Further analysis reveals the influence of international cooperation and associated phenomena on the research field of HD. Conclusion: We conclude that the field of HD is constantly progressing, and the importance of international cooperation in the scientific community is continuously growing.

  20. [Benchmarking in patient identification: An opportunity to learn].

    Science.gov (United States)

    Salazar-de-la-Guerra, R M; Santotomás-Pajarrón, A; González-Prieto, V; Menéndez-Fraga, M D; Rocha Hurtado, C

    To perform benchmarking on the safe identification of hospital patients involved in the "Club de las tres C" (Calidez, Calidad y Cuidados) in order to prepare a common procedure for this process. A descriptive study was conducted on the patient identification process in palliative care and stroke units in 5 medium-stay hospitals. The following steps were carried out: data collection from each hospital; organisation and data analysis; and preparation of a common procedure for this process. The data obtained for the safe identification of all stroke patients were: hospital 1 (93%), hospital 2 (93.1%), hospital 3 (100%), and hospital 5 (93.4%); and for the palliative care process: hospital 1 (93%), hospital 2 (92.3%), hospital 3 (92%), hospital 4 (98.3%), and hospital 5 (85.2%). The aim of the study was accomplished successfully. Benchmarking activities were developed and knowledge on the patient identification process was shared. All hospitals had good results, with hospital 3 performing best in the stroke identification process. Benchmarking identification practices is difficult, but a useful common procedure that collects the best practices was identified among the 5 hospitals.

  1. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Tourism development has an irreplaceable role in the regional policy of almost all countries, due to its undeniable benefits for the local population in the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and subsequently enter a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process: the search for suitable benchmarking partners. The partners are selected to meet general requirements that ensure the quality of strategies; following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain, thereby validating the selected criteria in an international environment. Hence, it makes it possible to find the strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  2. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance in simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure the performance of models against a set of defined standards. This paper proposes a benchmarking framework for the evaluation of land model performance and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate the exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on the development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
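    A minimal sketch of such a scoring system, combining an a priori threshold with a weighted overall score. The normalisation of the mismatch by the observed standard deviation, the threshold value, and the weights are illustrative assumptions, not the metrics proposed in the paper:

```python
import numpy as np

def skill_score(model, obs, threshold=0.5):
    """Score one benchmark variable as 1 minus the RMSE normalised by the
    observed variability, clipped at 0. The model 'passes' this benchmark
    if the score exceeds the a priori threshold."""
    rmse = np.sqrt(np.mean((np.asarray(model) - np.asarray(obs)) ** 2))
    score = max(0.0, 1.0 - rmse / np.std(obs))
    return score, score >= threshold

def overall_score(scores, weights):
    """Combine per-process scores (e.g. water, energy, carbon fluxes at
    several scales) into one weighted benchmark score."""
    return float(np.average(scores, weights=weights))

# Toy data: a seasonal cycle observed, and a model output with a constant bias.
obs = np.sin(np.linspace(0, 2 * np.pi, 100))
model = obs + 0.1
s, passed = skill_score(model, obs)
print(round(s, 2), passed)   # high score, passes the 0.5 threshold
print(overall_score([s, 0.9, 0.4], weights=[1, 1, 2]))
```

Weighting lets the framework emphasise processes judged more important (or better observed) without discarding information from the weaker benchmarks.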

  3. Steroid compounds of phytogenic origin: scientometric research data of scientific and practical literature

    Directory of Open Access Journals (Sweden)

    Sukhanov А.Е.

    2017-03-01

    Full Text Available Steroid compounds of phytogenic origin are important in clinical medicine, exerting anti-inflammatory, anti-proliferative and antithrombotic actions. However, steroid compounds of phytogenic origin, in particular steroid saponins, are insufficiently studied with respect to their identification in plant tissues and the methods of their physical and chemical analysis. The article provides a scientometric analysis of research data (abstract documents) comprising an analytical array of scientific publications on the isolation, selection, purification, identification and quantitative determination of steroid saponins in the tissues of higher vascular plants, drawn from the abstract bibliographic database SciVerse Scopus (Elsevier publishing house) using the criteria "key word" and "key phrase".

  4. The need for contextualized scientometric analysis: An opinion paper

    Energy Technology Data Exchange (ETDEWEB)

    Waltman, L.; Van Eck, N.J.

    2016-07-01

    Scientometric indicators, in particular indicators based on citations, nowadays play a prominent role in research evaluations. Given the importance of these indicators, scientometricians are putting a lot of effort into making technical improvements to the indicators in order to increase their accuracy. Especially indicators based on citations have received a lot of attention during recent years. This has resulted in the development of many advanced citation-based indicators (for a review of the literature, see Waltman, 2016). At the same time, scientometricians have been exploring all kinds of new indicators, many of which are referred to as altmetric indicators. Interest in these new indicators is largely driven by the availability of new data sources, but also seems to relate to changing viewpoints on research evaluation, in particular an increasing focus on evaluating the societal impact of research. Like in the case of traditional citation-based indicators, scientometricians are trying to obtain more and more accurate statistics by developing increasingly advanced indicators (e.g., Fairclough & Thelwall, 2015; Haunschild & Bornmann, 2016). (Author)

  5. Scientific production of Sports Science in Iran: A Scientometric Analysis.

    Science.gov (United States)

    Yaminfirooz, Mousa; Siamian, Hasan; Jahani, Mohammad Ali; Yaminifirouz, Masoud

    2014-06-01

    Physical education and sports science is one of the branches of the humanities. The purpose of this study is to determine the quantitative and qualitative progress of the scientific production of Iran's researchers in the Web of Science. The research method is a scientometric survey; the statistical population comprises 233 documents indexed in ISI from 1993 to 2012. Results showed that Iranian researchers published 233 documents in this database during this period, which have been cited 1,106 times (4.76 times on average). The h-index has been 17. Iran's largest scientific production in the sports science realm was indexed in 2010, with 57 documents, and the smallest in 2000. Considering the number of citations and the obtained h-index, it can be said that the quality of Iranian articles is rather acceptable, but in comparison to prestigious universities and the large number of professors and university students in this field, the quantity of output articles is very low.

  6. Visibility of researchers in the journal Scientometrics from the Brazilian perspective: a co-citation study

    Directory of Open Access Journals (Sweden)

    Ely Francina Tannuri de Oliveira

    2012-12-01

    Full Text Available The objective of this research is to identify the authors who have underpinned Brazilian research with international visibility in the area of bibliometrics and scientometrics, through citation and co-citation analysis of articles by Brazilians published in the journal Scientometrics. The Scopus database was used, with the term Scientometrics in the source title field and Brasil or Brazil in the affiliation country field. Fifty-three articles were found, with 741 references and 19 authors cited 3 or more times. In general, these researchers come from the health and biological sciences. The co-citation network was built and its indicators calculated with the Ucinet software, and the normalized co-citation index was computed. The density and the mean normalized degree centrality were 65.5%. In conclusion, there is a significant presence of Brazilians (32%) among the cited authors, and the dialogue occurs in balance between cited Brazilian and foreign authors, with Brazilians already in conversation with internationally recognized researchers in the area of bibliometrics and scientometrics.

  7. Scientometrics of anesthetic drugs and their techniques of administration, 1984–2013

    Directory of Open Access Journals (Sweden)

    Vlassakov KV

    2014-12-01

    Full Text Available Kamen V Vlassakov, Igor Kissin Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA Abstract: The aim of this study was to assess progress in the field of anesthetic drugs over the past 30 years using scientometric indices: popularity indices (general and specific, representing the proportion of articles on a drug relative to all articles in the field of anesthetics (general index or the subfield of a specific class of anesthetics (specific index; index of change, representing the degree of growth in publications on a topic from one period to the next; index of expectations, representing the ratio of the number of articles on a topic in the top 20 journals relative to the number of articles in all (>5,000 biomedical journals covered by PubMed; and index of ultimate success, representing a publication outcome when a new drug takes the place of a common drug previously used for the same purpose. Publications on 58 topics were assessed during six 5-year periods from 1984 to 2013. Our analysis showed that during 2009–2013, out of seven anesthetics with a high general popularity index (≥2.0, only two were introduced after 1980, ie, the inhaled anesthetic sevoflurane and the local anesthetic ropivacaine; however, only sevoflurane had a high index of expectations (12.1. Among anesthetic adjuncts, in 2009–2013, only one agent, sugammadex, had both an extremely high index of change (>100 and a high index of expectations (25.0, reflecting the novelty of its mechanism of action. The index of ultimate success was positive with three anesthetics, ie, lidocaine, isoflurane, and propofol, all of which were introduced much longer than 30 years ago. For the past 30 years, there were no new anesthetics that have produced changes in scientometric indices indicating real progress. Keywords: anesthetics, anesthetic adjuvants, mortality, safety margins, therapeutic indices
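
    The indices defined in this abstract reduce to simple ratios of article counts; a minimal sketch in Python, with hypothetical counts (the numbers below are illustrative, not the study's data):

```python
def popularity_index(articles_on_topic, articles_in_field):
    """Share of articles on a topic relative to all articles in the field, as a percentage."""
    return 100.0 * articles_on_topic / articles_in_field

def index_of_change(articles_this_period, articles_previous_period):
    """Degree of growth in publications on a topic from one period to the next."""
    return articles_this_period / articles_previous_period

def index_of_expectations(articles_top20, articles_all_journals):
    """Ratio of articles in the top 20 journals to articles in all journals, as a percentage."""
    return 100.0 * articles_top20 / articles_all_journals

# Hypothetical counts for one drug in one 5-year period
pi = popularity_index(240, 12000)    # 2.0, the paper's threshold for a "high" general index
ic = index_of_change(150, 50)        # 3.0: threefold growth versus the previous period
ie = index_of_expectations(30, 248)  # roughly 12.1
```

    The index of ultimate success is qualitative (did the new drug displace the old one?) and is not expressible as a single ratio, so it is omitted here.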

  8. No evidence of real progress in treatment of acute pain, 1993–2012: scientometric analysis

    Directory of Open Access Journals (Sweden)

    Correll DJ

    2014-04-01

    Full Text Available Darin J Correll, Kamen V Vlassakov, Igor Kissin Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA Abstract: Over the past 2 decades, many new techniques and drugs for the treatment of acute pain have achieved widespread use. The main aim of this study was to assess the progress in their implementation using scientometric analysis. The following scientometric indices were used: (1) popularity index, representing the share of articles on a specific technique (or a drug) relative to all articles in the field of acute pain; (2) index of change, representing the degree of growth in publications on a topic compared to the previous period; and (3) index of expectations, representing the ratio of the number of articles on a topic in the top 20 journals relative to the number of articles in all (>5,000) biomedical journals covered by PubMed. Publications on specific topics (ten techniques and 21 drugs) were assessed during four time periods (1993–1997, 1998–2002, 2003–2007, and 2008–2012). In addition, to determine whether the status of routine acute pain management has improved over the past 20 years, we analyzed surveys designed to be representative of the national population that reflected direct responses of patients reporting pain scores. By the 2008–2012 period, the popularity index had reached a substantial level (≥5%) only with techniques or drugs that were introduced 30–50 years ago or more (epidural analgesia, patient-controlled analgesia, nerve blocks, epidural analgesia for labor or delivery, bupivacaine, and acetaminophen). In 2008–2012, promising (although modest) changes in the index of change and index of expectations were found only with dexamethasone. Six national surveys conducted over the past 20 years demonstrated an unacceptably high percentage of patients experiencing moderate or severe pain, with not even a trend toward outcome improvement. Thus

  9. Equilibrium Partitioning Sediment Benchmarks (ESBs) for the ...

    Science.gov (United States)

    This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it accounts for the varying bioavailability of chemicals in different sediments and allows for the incorporation of the appropriate biological effects concentration. This provides for the derivation of benchmarks that are causally linked to the specific chemical, applicable across sediments, and appropriately protective of benthic organisms.  This equilibrium partitioning sediment benchmark (ESB) document was prepared by scientists from the Atlantic Ecology Division, Mid-Continent Ecology Division, and Western Ecology Division, the Office of Water, and private consultants. The document describes procedures to determine the interstitial water concentrations of nonionic organic chemicals in contaminated sediments. Based on these concentrations, guidance is provided on the derivation of toxic units to assess whether the sediments are likely to cause adverse effects to benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it is based on the concentrations of chemical(s) that are known to be harmful and bioavailable in the environment.  This document, and five others published over the last nine years, will be useful for the Program Offices, including Superfund, a

  10. Quality management benchmarking: FDA compliance in pharmaceutical industry.

    Science.gov (United States)

    Jochem, Roland; Landgraf, Katja

    2010-01-01

    By analyzing and comparing industry and business best practices, processes can be optimized and become more successful, mainly because efficiency and competitiveness increase. This paper aims to focus on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and a five-stage model. Despite large administrative structures, there is much potential for improvement in business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is interesting for companies that aim to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances of reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management, and especially benchmarking, is shown to support pharmaceutical industry improvements.

  11. Research Trends in the Field of E-Learning from 2003 to 2008: A Scientometric and Content Analysis for Selected Journals and Conferences Using Visualization

    Science.gov (United States)

    Maurer, Hermann; Khan, Muhammad Salman

    2010-01-01

    Purpose: The purpose of this paper is to provide a scientometric and content analysis of the studies in the field of e-learning that were published in five Social Science Citation Index (SSCI) journals ("Journal of Computer Assisted Learning, Computers & Education, British Journal of Educational Technology, Innovations in Education and Teaching…

  12. Benchmark of systematic human action reliability procedure

    International Nuclear Information System (INIS)

    Spurgin, A.J.; Hannaman, G.W.; Moieni, P.

    1986-01-01

    Probabilistic risk assessment (PRA) methodology has emerged as one of the most promising tools for assessing the impact of human interactions on plant safety and understanding the importance of the man/machine interface. Human interactions were considered to be one of the key elements in the quantification of accident sequences in a PRA. The approach to quantification of human interactions in past PRAs has not been very systematic. The Electric Power Research Institute sponsored the development of SHARP to aid analysts in developing a systematic approach for the evaluation and quantification of human interactions in a PRA. The SHARP process has been extensively peer reviewed and has been adopted by the Institute of Electrical and Electronics Engineers as the basis of a draft guide for the industry. By carrying out a benchmark process in which SHARP is an essential ingredient, however, it appears possible to assess the strengths and weaknesses of SHARP in aiding human reliability analysts carrying out human reliability analysis as part of a PRA.

  13. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  14. The IEA Annex 20 Two-Dimensional Benchmark Test for CFD Predictions

    DEFF Research Database (Denmark)

    Nielsen, Peter V.; Rong, Li; Cortes, Ines Olmedo

    2010-01-01

    predictions both for isothermal flow and for nonisothermal flow. The benchmark is defined on a web page, which also shows about 50 different benchmark tests with studies of e.g. grid dependence, numerical schemes, different source codes, different turbulence models, RANS or LES, different turbulence levels...... in a supply opening, study of local emission and study of airborne chemical reactions. Therefore the web page is also a collection of information which describes the importance of the different elements of a CFD procedure. The benchmark is originally developed for test of two-dimensional flow, but the paper...

  15. Monte Carlo benchmarking: Validation and progress

    International Nuclear Information System (INIS)

    Sala, P.

    2010-01-01

    Document available in abstract form only. Full text of publication follows: Calculational tools for radiation shielding at accelerators are faced with new challenges from the present and next generations of particle accelerators. All the details of particle production and transport play a role when dealing with huge power facilities, therapeutic ion beams, radioactive beams and so on. Besides the traditional calculations required for shielding, activation predictions have become an increasingly critical component. Comparison and benchmarking with experimental data is obviously mandatory in order to build up confidence in the computing tools, and to assess their reliability and limitations. Thin target particle production data are often the best tools for understanding the predictive power of individual interaction models and improving their performances. Complex benchmarks (e.g. thick target data, deep penetration, etc.) are invaluable in assessing the overall performances of calculational tools when all ingredients are put at work together. A review of the validation procedures of Monte Carlo tools will be presented with practical and real life examples. The interconnections among benchmarks, model development and impact on shielding calculations will be highlighted. (authors)

  16. Ten-year analysis of hepatitis-related papers in the Middle East: a web of science-based scientometric study.

    Science.gov (United States)

    Rezaee Zavareh, Mohammad Saeid; Alavian, Seyed Moayed

    2017-01-01

    In the Middle East (ME), the proper understanding of hepatitis, especially viral hepatitis, is considered to be extremely important. However, no published paper has investigated the status of hepatitis-related research in the ME. A scientometric analysis based on the Web of Science database was conducted on hepatitis-related papers in the ME to determine the current status of research on this topic. A scientometric analysis using the Web of Science database, specifically articles from the Expanded Science Citation Index and Social Sciences Citation Index, was conducted on work published between 2005 and 2014 using the keyword "hepatitis" in conjunction with the names of countries in the ME. Of 103,096 papers that used the word "hepatitis" in their title, abstract, or keywords, only 6,540 papers (6.34%) were associated with countries in the ME. Turkey, Iran, Egypt, Israel, and Saudi Arabia were the top five countries in which hepatitis-related papers were published. Most papers on hepatitis A, B, and D and autoimmune hepatitis were published in Turkey, and most papers on hepatitis C were published in Egypt. We believe that both the quantity and the quality of hepatitis-related papers in this region should be improved. Implementing multicenter and international research projects, holding conferences and congress meetings, conducting educational workshops, and establishing high-quality medical research journals in the region will help countries in the ME address this issue effectively.

  17. Developing a benchmark for emotional analysis of music.

    Science.gov (United States)

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has expanded rapidly in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results of the benchmark suggest that recurrent neural network based approaches combined with large feature sets work best for dynamic MER.

  18. Benchmarking local healthcare-associated infections: Available benchmarks and interpretation challenges

    Directory of Open Access Journals (Sweden)

    Aiman El-Saed

    2013-10-01

    Full Text Available Summary: Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable healthcare workers and researchers to obtain more accurate and realistic comparisons. Keywords: Benchmarking, Comparison, Surveillance, Healthcare-associated infections
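
    Indirect standardization, one of the risk-adjustment methods this abstract lists, compares observed infections with the number expected under a benchmark's stratum-specific rates. A minimal sketch, with hypothetical strata and rates (not taken from any real benchmark):

```python
def standardized_infection_ratio(observed, device_days_by_stratum, benchmark_rates):
    """Indirect standardization: observed HAIs divided by the number expected if the
    benchmark's stratum-specific rates (per 1,000 device-days) applied locally."""
    expected = sum(benchmark_rates[s] * d / 1000.0
                   for s, d in device_days_by_stratum.items())
    return observed / expected

# Hypothetical ICU strata with benchmark rates per 1,000 central-line days
rates = {"medical_icu": 1.8, "surgical_icu": 2.2}
days = {"medical_icu": 2000, "surgical_icu": 1500}
sir = standardized_infection_ratio(5, days, rates)  # expected = 3.6 + 3.3 = 6.9
```

    A ratio below 1 suggests fewer infections than the benchmark would predict for the local case mix, which is exactly the kind of adjusted comparison the crude metrics cannot support.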

  19. Impact of quantitative feedback and benchmark selection on radiation use by cardiologists performing cardiac angiography

    International Nuclear Information System (INIS)

    Smith, I. R.; Cameron, J.; Brighouse, R. D.; Ryan, C. M.; Foster, K. A.; Rivers, J. T.

    2013-01-01

    Audit of and feedback on both group and individual data, provided immediately after the point of care and compared with realistic benchmarks of excellence, have been demonstrated to drive change. This study sought to evaluate the impact of immediate benchmarked quantitative case-based performance feedback on the clinical practice of cardiologists practicing at a private hospital in Brisbane, Australia. The participating cardiologists were assigned to one of two groups: Group 1 received patient and procedural details for review, and Group 2 received Group 1 data plus detailed radiation data relating to the procedures and comparative benchmarks. In Group 2, Linear-by-Linear Association analysis suggests a link between change in radiation use and initial radiation dose category (p = 0.014), with only those initially 'challenged' by the benchmarks showing improvement. Those not 'challenged' by the benchmarks deteriorated in performance, with those starting well below the benchmarks showing the greatest increase in radiation use. Conversely, those blinded to their radiation use (Group 1) showed general improvement in radiation use throughout the study, with those performing initially close to the benchmarks showing the greatest improvement. This study shows that the use of non-challenging benchmarks in case-based radiation risk feedback does not promote a reduction in radiation use; indeed, it may contribute to increased doses. Paradoxically, cardiologists who are aware of performance monitoring but blinded to individual case data appear to maintain, if not reduce, their radiation use. (authors)

  20. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  1. A Proposal of Indicators and Policy Framework for Innovation Benchmark in Europe

    OpenAIRE

    García Manjón, Juan Vicente

    2010-01-01

    The implementation of innovation policies has been adopted at the European level from a common perspective. The European Council (2000) established open methods of coordination (OMC) in order to gain mutual understanding and achieve greater convergence of innovation policies, constituting a benchmarking procedure. However, the development of benchmarking analysis for innovation policies faces two major obstacles: the lack of accepted innovation policy frameworks and the existence of sui...

  2. Guideline for benchmarking thermal treatment systems for low-level mixed waste

    International Nuclear Information System (INIS)

    Hoffman, D.P.; Gibson, L.V. Jr.; Hermes, W.H.; Bastian, R.E.; Davis, W.T.

    1994-01-01

    A process for benchmarking low-level mixed waste (LLMW) treatment technologies has been developed. When used in conjunction with the identification and preparation of surrogate waste mixtures, and with defined quality assurance and quality control procedures, the benchmarking process will effectively streamline the selection of treatment technologies being considered by the US Department of Energy (DOE) for LLMW cleanup and management. Following the quantitative template provided in the benchmarking process will greatly increase the technical information available for the decision-making process. The additional technical information will remove a large part of the uncertainty in the selection of treatment technologies. It is anticipated that the use of the benchmarking process will minimize technology development costs and overall treatment costs. In addition, the benchmarking process will enhance development of the most promising LLMW treatment processes and aid in transferring the technology to the private sector. To instill inherent quality, the benchmarking process is based on defined criteria and a structured evaluation format, which are independent of any specific conventional treatment or emerging process technology. Five categories of benchmarking criteria have been developed for the evaluation: operation/design; personnel health and safety; economics; product quality; and environmental quality. This benchmarking document gives specific guidance on what information should be included and how it should be presented. A standard format for reporting is included in Appendix A and B of this document. Special considerations for LLMW are presented and included in each of the benchmarking categories

  3. National benchmarking against GLOBALGAP : Case studies of Good Agricultural Practices in Kenya, Malaysia, Mexico and Chile

    NARCIS (Netherlands)

    Valk, van der O.M.C.; Roest, van der J.G.

    2009-01-01

    This desk study examines the experiences and lessons learned from four case studies of countries aiming at the GLOBALGAP benchmarking procedure for national Good Agricultural Practices, namely Chile, Kenya, Malaysia, and Mexico. Aspects that determine the origin and character of the benchmarking

  4. A Study of Scientometric Methods to Identify Emerging Technologies via Modeling of Milestones

    Energy Technology Data Exchange (ETDEWEB)

    Abercrombie, Robert K [ORNL; Udoeyop, Akaninyene W [ORNL; Schlicher, Bob G [ORNL

    2012-01-01

    This work examines a scientometric model that tracks the emergence of an identified technology from initial discovery (via original scientific and conference literature), through critical discoveries (via original scientific and conference literature and patents), transitioning through Technology Readiness Levels (TRLs) and ultimately on to commercial application. During the period of innovation and technology transfer, the impact of scholarly works, patents and on-line web news sources is identified. As trends develop, currency of citations, collaboration indicators, and on-line news patterns are identified. Four distinct and separate searchable on-line networked sources (i.e., scholarly publications and citations, patents, news archives, and online mapping networks) are assembled into one collective network (a dataset for analysis of relations). This established network becomes the basis from which to quickly analyze the temporal flow of activity (searchable events) for the example subject domain we investigated.

  5. Scientometric characterization of Medwave's scientific production 2010-2014.

    Science.gov (United States)

    Gallardo Sánchez, Yurieth; Gallardo Arzuaga, Ruber Luis; Fonseca Arias, Madelin; Pérez Atencio, María Esther

    2016-09-15

    The use of bibliometric indicators for the evaluation of science allows the analysis of scientific production from both a quantitative and a qualitative point of view. The aim was to characterize the scientific production of Medwave during the period 2010 to 2014 in terms of visibility and productivity. A bibliometric study was carried out. The variables analyzed were provided by the "Publish or Perish" program working with the Google Scholar database. The number of articles published was related to the number of authors involved in each research work. The articles cited, number of citations, authors and year were reported. Indicators were obtained by entering the name of the journal and its International Standard Serial Number (ISSN) in the search box of Publish or Perish. There were 481 articles published, with 220 citations; a rate of more than 36 citations per year and 20 citations per author per year. An h-index of 5 and a g-index of 6 were achieved. There was an average of two authors per article. Only five articles had more citations than the total they provided. The scientometric indicators found place the journal in a favorable position relative to other medical journals of the region, in terms of visibility and productivity. There was a low rate of cooperation, since articles with individual authors prevailed. A low number of articles contributed to the productivity of the journal despite having a significant number of citations.
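
    The h- and g-indices reported in records like this one are computed from a ranked list of per-article citation counts. A minimal sketch, using illustrative counts (not Medwave's actual per-article data) that happen to yield h = 5 and g = 6:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
    return h

def g_index(citations):
    """Largest g such that the g most-cited papers together have at least g**2 citations."""
    total, g = 0, 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

# Illustrative citation counts for ten papers
counts = [12, 9, 7, 5, 5, 3, 2, 1, 0, 0]
print(h_index(counts), g_index(counts))  # -> 5 6
```

    The g-index rewards a few highly cited papers more than the h-index does, which is why the two can diverge for the same citation record.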

  6. Do Scientometric Indices Require Revision?

    Directory of Open Access Journals (Sweden)

    Mozafar Khazaei

    2014-09-01

    Full Text Available The scientific output of a researcher includes academic publications, the creditability of these publications and the number of citations. Universities and institutions evaluating research activities have always taken into account the academic status and ranking of researchers. The selection and application of an appropriate method to assess academic activities has also been a concern for scientometrics centers. In the past, criteria such as the number of publications, the total number of citations and the average number of citations were taken into consideration. In the past decade, a physicist named Hirsch (2005) introduced an index known as the Hirsch (h) index to evaluate scientific output (1). The h index expresses both the academic production of a researcher and the scientific impact of that production in a single number; the larger the number, the higher the scientific impact. The h index is used to compare researchers in the same subject area, aiming to differentiate highly cited researchers from least-cited scholars. Numerous advantages have been claimed for this index, including simple calculation, combined quantitative and qualitative evaluation of scientific output, disregard of most-cited and least-cited papers, and differentiation of prominent researchers from the others. However, the disadvantages of this index, some of which are also cited as advantages, include neglecting the total number of publications, neglecting the academic life of a researcher, dependence on the research area (inapplicability for comparing researchers in different subject areas), ignoring multi-authorship, and dependence on the duration of scientific activity (2). On the other hand, h index computation for young researchers is also not possible due to their short scientific activity. Moreover, despite the termination of the scientific life of a researcher and the absence of new publications, their previous publications may still be cited. 
In addition, it is believed
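    The h and g indices discussed in these records are simple to compute from a list of per-paper citation counts. A minimal sketch, assuming the standard Hirsch (2005) and Egghe (2006) definitions:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each
    (Hirsch, 2005)."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break  # papers are sorted, so no later paper can qualify
    return h

def g_index(citations):
    """Largest g such that the g most-cited papers together have at
    least g**2 citations (Egghe, 2006)."""
    g, running_total = 0, 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g
```

    For example, a journal whose five articles received 10, 8, 5, 4 and 3 citations has h = 4 and g = 5.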

  7. Developing Benchmarking Criteria for CO2 Emissions

    Energy Technology Data Exchange (ETDEWEB)

    Neelis, M.; Worrell, E.; Mueller, N.; Angelini, T. [Ecofys, Utrecht (Netherlands); Cremer, C.; Schleich, J.; Eichhammer, W. [The Fraunhofer Institute for Systems and Innovation research, Karlsruhe (Germany)

    2009-02-15

    A European Union (EU) wide greenhouse gas (GHG) allowance trading scheme (EU ETS) was implemented in the EU in 2005. In the first two trading periods of the scheme (running up to 2012), free allocation based on historical emissions was the main methodology for allocation of allowances to existing installations. For the third trading period (2013 - 2020), the European Commission proposed in January 2008 a greater role for auctioning of allowances rather than free allocation. (Transitional) free allocation of allowances to industrial sectors will be determined via harmonized allocation rules, where feasible based on benchmarking. In general terms, a benchmark-based method allocates allowances based on a certain amount of emissions per unit of productive output (i.e. the benchmark). This study aims to derive criteria for an allocation methodology for the EU Emission Trading Scheme based on benchmarking for the period 2013 - 2020. To test the feasibility of the criteria, we apply them to four example product groups: iron and steel, pulp and paper, lime and glass. The basis for this study is the Commission proposal for a revised ETS directive put forward on 23 January 2008; it does not take into account any changes to this proposal in the co-decision procedure that resulted in the adoption of the Energy and Climate change package in December 2008.
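    The benchmark-based allocation described above reduces to simple arithmetic: allowances equal a product benchmark (emissions per unit of output) times an installation's output. A minimal sketch; taking the best-performing 10% of installations as the benchmark level is an illustrative assumption, not a criterion taken from the study:

```python
def product_benchmark(intensities, top_share=0.10):
    """Illustrative benchmark: average emission intensity
    (t CO2 per t product) of the best-performing installations.
    The 10% share is an assumption for illustration."""
    best = sorted(intensities)                  # lowest intensity = best
    n = max(1, round(len(best) * top_share))
    return sum(best[:n]) / n

def free_allocation(benchmark, historical_output):
    """Allowances allocated = benchmark x units of productive output."""
    return benchmark * historical_output
```

    With ten installations whose intensities range from 0.8 to 2.0 t CO2 per tonne, the benchmark is 0.8, and an installation producing 1000 tonnes would receive 800 allowances.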

  8. Benchmarking of surgical complications in gynaecological oncology: prospective multicentre study.

    Science.gov (United States)

    Burnell, M; Iyer, R; Gentry-Maharaj, A; Nordin, A; Liston, R; Manchanda, R; Das, N; Gornall, R; Beardmore-Gray, A; Hillaby, K; Leeson, S; Linder, A; Lopes, A; Meechan, D; Mould, T; Nevin, J; Olaitan, A; Rufford, B; Shanbhag, S; Thackeray, A; Wood, N; Reynolds, K; Ryan, A; Menon, U

    2016-12-01

    To explore the impact of risk-adjustment on surgical complication rates (CRs) for benchmarking gynaecological oncology centres. Prospective cohort study. Ten UK accredited gynaecological oncology centres. Women undergoing major surgery on a gynaecological oncology operating list. Patient co-morbidity, surgical procedures and intra-operative (IntraOp) complications were recorded contemporaneously by surgeons for 2948 major surgical procedures. Postoperative (PostOp) complications were collected from hospitals and patients. Risk-prediction models for IntraOp and PostOp complications were created using penalised (lasso) logistic regression using over 30 potential patient/surgical risk factors. Observed and risk-adjusted IntraOp and PostOp CRs for individual hospitals were calculated. Benchmarking using colour-coded funnel plots and observed-to-expected ratios was undertaken. Overall, IntraOp CR was 4.7% (95% CI 4.0-5.6) and PostOp CR was 25.7% (95% CI 23.7-28.2). The observed CRs for all hospitals were under the upper 95% control limit for both IntraOp and PostOp funnel plots. Risk-adjustment and use of observed-to-expected ratio resulted in one hospital moving to the >95-98% CI (red) band for IntraOp CRs. Use of only hospital-reported data for PostOp CRs would have resulted in one hospital being unfairly allocated to the red band. There was little concordance between IntraOp and PostOp CRs. The funnel plots and overall IntraOp (≈5%) and PostOp (≈26%) CRs could be used for benchmarking gynaecological oncology centres. Hospital benchmarking using risk-adjusted CRs allows fairer institutional comparison. IntraOp and PostOp CRs are best assessed separately. As hospital under-reporting is common for postoperative complications, use of patient-reported outcomes is important. Risk-adjusted benchmarking of surgical complications for ten UK gynaecological oncology centres allows fairer comparison. © 2016 Royal College of Obstetricians and Gynaecologists.
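    The benchmarking quantities used in this study, observed-to-expected ratios and funnel-plot control limits, can be sketched as follows. This uses a normal approximation to the binomial for the limits; the study itself derived expected counts from penalised (lasso) logistic regression, which is not reproduced here:

```python
import math

def oe_ratio(observed, expected):
    """Observed-to-expected complication ratio for one centre."""
    return observed / expected

def funnel_limits(overall_rate, n_patients, z=1.96):
    """Approximate 95% funnel-plot control limits for a centre
    treating n_patients, via a normal approximation to the binomial.
    A centre whose observed rate exceeds the upper limit is flagged."""
    se = math.sqrt(overall_rate * (1 - overall_rate) / n_patients)
    lower = max(0.0, overall_rate - z * se)
    upper = min(1.0, overall_rate + z * se)
    return lower, upper
```

    For instance, against the overall PostOp rate of about 26%, a centre with 300 patients has 95% limits of roughly 21% to 31%; smaller centres get correspondingly wider limits, which is what gives the funnel its shape.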

  9. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution’s competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the experience of the author from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  10. Benchmark calculation for GT-MHR using HELIOS/MASTER code package and MCNP

    International Nuclear Information System (INIS)

    Lee, Kyung Hoon; Kim, Kang Seog; Noh, Jae Man; Song, Jae Seung; Zee, Sung Quun

    2005-01-01

    The latest research associated with the very high temperature gas-cooled reactor (VHTR) is focused on the verification of system performance and safety under operating conditions for VHTRs. As part of this effort, an international gas-cooled reactor program initiated by the IAEA is under way. The key objectives of this program are the validation of analytical computer codes and the evaluation of benchmark models for projected and actual VHTRs. A new reactor physics analysis procedure for the prismatic VHTR is under development, adopting the conventional two-step procedure. In this procedure, few-group constants are generated through transport lattice calculations using the HELIOS code, and the core physics analysis is performed with the 3-dimensional nodal diffusion code MASTER. We evaluated the performance of the HELIOS/MASTER code package through benchmark calculations related to the GT-MHR (Gas Turbine-Modular Helium Reactor) for the disposition of weapons plutonium. In parallel, MCNP was employed as a reference code to verify the results of the HELIOS/MASTER procedure.

  11. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), or in Indonesian terms, holistic quality management, because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  12. FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark

    International Nuclear Information System (INIS)

    Sawan, M.E.

    1994-12-01

    During the IAEA Advisory Group Meeting on ''Improved Evaluations and Integral Data Testing for FENDL'' held in Garching near Munich, Germany in the period 12-16 September 1994, the Working Group II on ''Experimental and Calculational Benchmarks on Fusion Neutronics for ITER'' recommended that a calculational benchmark representative of the ITER design should be developed. This report describes the neutronics and shielding calculational benchmark available for scientists interested in performing analysis for this benchmark. (author)

  13. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  14. Scientometric Analysis and Mapping of Scientific Articles on Diabetic Retinopathy.

    Science.gov (United States)

    Ramin, Shahrokh; Gharebaghi, Reza; Heidary, Fatemeh

    2015-01-01

    Diabetic retinopathy (DR) is the major cause of blindness among the working-age population globally. No systematic research has previously been performed to analyze the research published on DR, despite the need for it. This study aimed to analyze the scientific production on DR to draw an overall roadmap for future strategic research planning in this field. A bibliometric method was used to obtain a view of the scientific production on DR from data extracted from the Institute for Scientific Information (ISI). Articles about DR published in 1993-2013 were analyzed to obtain a view of the topic's structure and history, and to document relationships. The trends in the most influential publications and authors were analyzed. The most highly cited articles addressed epidemiologic and translational research topics in this field. During the past 3 years, there has been a trend toward biomarker discovery and more molecular translational research. Areas such as gene therapy and micro-RNAs are also among the recent hot topics. Through analyzing the characteristics of papers and the trends in scientific production, we present the first scientometric report on DR. The most influential articles have addressed epidemiology and translational research subjects in this field, which reflects that the earlier diagnosis and treatment of this devastating disease still has the highest global priority.

  15. The ACCENT-protocol: a framework for benchmarking and model evaluation

    Directory of Open Access Journals (Sweden)

    V. Grewe

    2012-05-01

    Full Text Available We summarise results from a workshop on "Model Benchmarking and Quality Assurance" of the EU-Network of Excellence ACCENT, including results from other activities (e.g. COST Action 732) and publications. A formalised evaluation protocol is presented, i.e. a generic formalism describing the procedure of how to perform a model evaluation. This includes eight steps, and examples from global model applications are given for illustration. The first and most important step concerns the purpose of the model application, i.e. the underlying scientific or political question being addressed. We give examples to demonstrate that there is no model evaluation per se, i.e. without a focused purpose. Model evaluation is testing whether a model is fit for its purpose. The following steps are deduced from the purpose and include model requirements, input data, key processes and quantities, benchmark data, quality indicators, sensitivities, as well as benchmarking and grading. We define "benchmarking" as the process of comparing the model output against either observational data or high fidelity model data, i.e. benchmark data. Special focus is given to the uncertainties, e.g. in observational data, which have the potential to lead to wrong conclusions in the model evaluation if not considered carefully.
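    The final protocol steps, quality indicators and grading, can be illustrated with a toy example. RMSE as the indicator and a pass/fail grade are assumptions for illustration only, since the protocol deliberately leaves both choices to be deduced from the evaluation's purpose:

```python
def quality_indicator(model_output, benchmark_data):
    """Root-mean-square error between model output and benchmark data,
    one possible quality indicator among many."""
    n = len(model_output)
    return (sum((m - b) ** 2
                for m, b in zip(model_output, benchmark_data)) / n) ** 0.5

def grade(indicator, threshold):
    """Simple pass/fail grading against a purpose-derived threshold."""
    return "fit for purpose" if indicator <= threshold else "not fit for purpose"
```

    In practice the threshold would itself reflect the benchmark-data uncertainty, since, as the authors stress, ignoring observational uncertainty can lead to wrong conclusions.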

  16. An automated protocol for performance benchmarking a widefield fluorescence microscope.

    Science.gov (United States)

    Halter, Michael; Bier, Elianna; DeRose, Paul C; Cooksey, Gregory A; Choquette, Steven J; Plant, Anne L; Elliott, John T

    2014-11-01

    Widefield fluorescence microscopy is a highly used tool for visually assessing biological samples and for quantifying cell responses. Despite its widespread use in high content analysis and other imaging applications, few published methods exist for evaluating and benchmarking the analytical performance of a microscope. Easy-to-use benchmarking methods would facilitate the use of fluorescence imaging as a quantitative analytical tool in research applications, and would aid the determination of instrumental method validation for commercial product development applications. We describe and evaluate an automated method to characterize a fluorescence imaging system's performance by benchmarking the detection threshold, saturation, and linear dynamic range to a reference material. The benchmarking procedure is demonstrated using two different materials as the reference material, uranyl-ion-doped glass and Schott 475 GG filter glass. Both are suitable candidate reference materials that are homogeneously fluorescent and highly photostable, and the Schott 475 GG filter glass is currently commercially available. In addition to benchmarking the analytical performance, we also demonstrate that the reference materials provide for accurate day-to-day intensity calibration. Published 2014 Wiley Periodicals Inc. This article is a US government work and, as such, is in the public domain in the United States of America.
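    One way to estimate a linear dynamic range from intensity measurements of a stable reference material is sketched below. The exposure-series approach and the 5% tolerance criterion are illustrative assumptions, not the published procedure:

```python
def linear_dynamic_range(exposures, signals, tol=0.05):
    """Return the (lowest, highest) exposure over which detector
    response stays linear: signal/exposure within `tol` (relative)
    of its value at the lowest exposure. Saturation shows up as the
    first point falling outside the tolerance."""
    responsivity0 = signals[0] / exposures[0]
    upper = exposures[0]
    for exposure, signal in zip(exposures, signals):
        if abs(signal / exposure - responsivity0) / responsivity0 <= tol:
            upper = exposure
        else:
            break  # response has rolled off; stop extending the range
    return exposures[0], upper
```

    For a detector that responds at 10 counts per unit exposure and rolls off above an exposure of 8, the sketch reports a linear range of (1, 8).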

  17. The ACCENT-protocol: a framework for benchmarking and model evaluation

    Science.gov (United States)

    Grewe, V.; Moussiopoulos, N.; Builtjes, P.; Borrego, C.; Isaksen, I. S. A.; Volz-Thomas, A.

    2012-05-01

    We summarise results from a workshop on "Model Benchmarking and Quality Assurance" of the EU-Network of Excellence ACCENT, including results from other activities (e.g. COST Action 732) and publications. A formalised evaluation protocol is presented, i.e. a generic formalism describing the procedure of how to perform a model evaluation. This includes eight steps, and examples from global model applications are given for illustration. The first and most important step concerns the purpose of the model application, i.e. the underlying scientific or political question being addressed. We give examples to demonstrate that there is no model evaluation per se, i.e. without a focused purpose. Model evaluation is testing whether a model is fit for its purpose. The following steps are deduced from the purpose and include model requirements, input data, key processes and quantities, benchmark data, quality indicators, sensitivities, as well as benchmarking and grading. We define "benchmarking" as the process of comparing the model output against either observational data or high fidelity model data, i.e. benchmark data. Special focus is given to the uncertainties, e.g. in observational data, which have the potential to lead to wrong conclusions in the model evaluation if not considered carefully.

  18. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

    In two articles an overview is given of the activities in the Dutch industry and energy sector with respect to benchmarking. In benchmarking, the operational processes of competing businesses are compared in order to improve one's own performance. Benchmark covenants for energy efficiency between the Dutch government and industrial sectors have contributed to growth in the number of benchmark surveys in the energy-intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies.

  19. Journal Benchmarking for Strategic Publication Management and for Improving Journal Positioning in the World Ranking Systems

    Science.gov (United States)

    Moskovkin, Vladimir M.; Bocharova, Emilia A.; Balashova, Oksana V.

    2014-01-01

    Purpose: The purpose of this paper is to introduce and develop the methodology of journal benchmarking. Design/Methodology/ Approach: The journal benchmarking method is understood to be an analytic procedure of continuous monitoring and comparing of the advance of specific journal(s) against that of competing journals in the same subject area,…

  20. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage. The final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, name conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for the analysis and plotting of results is described. Some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  1. Influenza: a scientometric and density-equalizing analysis.

    Science.gov (United States)

    Fricke, Ralph; Uibel, Stefanie; Klingelhoefer, Doris; Groneberg, David A

    2013-09-30

    Novel influenza in 2009 caused by H1N1, as well as the seasonal influenza, still are a challenge for the public health sectors worldwide. An increasing number of publications referring to this infectious disease make it difficult to distinguish relevant research output. The current study used scientometric indices for a detailed investigation of influenza-related research activity and the method of density equalizing mapping to make the differences in overall research worldwide apparent. The aim of the study was to compare scientific effort over time as well as geographical distribution, including cooperation at national and international level. Therefore, publication data was retrieved from the Web of Science (WoS) of Thomson Scientific. Subsequently the data was analysed in order to show geographical distributions and the development of research output over time. The query retrieved 51,418 publications that are listed in WoS for the time interval from 1900 to 2009. There is a continuous increase in research output and general citation activity, especially since 1990. In all, the 51,418 identified publications were published by researchers from 151 different countries. Scientists from the USA participate in more than 37 percent of all publications, followed by researchers from the UK and Germany with more than five percent. In addition, the USA is at the focus of international cooperation. In terms of number of publications on influenza, the Journal of Virology ranks first, followed by Vaccine and Virology. The highest impact factor (IF 2009) in this selection can be established for The Lancet (30.75). Robert Webster appears to be the most prolific author, contributing the most publications in the field of influenza. This study reveals an increasing and wide research interest in influenza. Nevertheless, citation-based declarations of scientific quality should be considered critically due to distortion by self-citation and co-authorship.

  2. The Accent-protocol: a framework for benchmarking and model evaluation

    NARCIS (Netherlands)

    Builtjes, P.J.H.; Grewe, V.; Moussiopoulos, N.; Borrego, C.; Isaksen, I.S.A.; Volz-Thomas, A.

    2011-01-01

    We summarise results from a workshop on “Model Benchmarking and Quality Assurance” of the EU-Network of Excellence ACCENT, including results from other activities (e.g. COST Action 732) and publications. A formalised evaluation protocol is presented, i.e. a generic formalism describing the procedure

  3. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  4. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  5. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...

  6. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  7. A Scientometric Analysis of Publications in the Journal of Business-to-Business Marketing 1993–2014

    DEFF Research Database (Denmark)

    Young, Louise; Wilkinson, Ian; Smith, Andrew

    2015-01-01

    ABSTRACT: Purpose: To conduct a scientometric analysis of the contents of the Journal of Business-to-Business Marketing from 1993 to 2014. Methodology/approach: The authors use the Leximancer computer-aided text analysis program, which reliably and reproducibly identifies the main concepts embedded in the text. Findings: The analysis reveals underlying conceptual themes: relationships, market, study, and business. But the focal mix of concepts has changed over time, from a narrower initial focus on distribution and power and conflict, to a greater focus on firm business marketing strategy and pedagogy, to a focus on networks, the Internet and more collaborative relations, to a focus, in the most recent period, on psycho-social network concepts, such as trust and commitment. Research implications: The results complement and provide a baseline for evaluating and comparing researcher-conducted literature reviews of business marketing and JBBM...

  8. Benchmarking Brain-Computer Interfaces Outside the Laboratory: The Cybathlon 2016

    Directory of Open Access Journals (Sweden)

    Domen Novak

    2018-01-01

    Full Text Available This paper presents a new approach to benchmarking brain-computer interfaces (BCIs) outside the lab. A computer game was created that mimics a real-world application of assistive BCIs, with the main outcome metric being the time needed to complete the game. This approach was used at the Cybathlon 2016, a competition for people with disabilities who use assistive technology to achieve tasks. The paper summarizes the technical challenges of BCIs, describes the design of the benchmarking game, then describes the rules for acceptable hardware, software and inclusion of human pilots in the BCI competition at the Cybathlon. The 11 participating teams, their approaches, and their results at the Cybathlon are presented. Though the benchmarking procedure has some limitations (for instance, we were unable to identify any factors that clearly contribute to BCI performance), it can be successfully used to analyze BCI performance in realistic, less structured conditions. In the future, the parameters of the benchmarking game could be modified to better mimic different applications (e.g., the need to use some commands more frequently than others). Furthermore, the Cybathlon has the potential to showcase such devices to the general public.

  9. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection...

  10. The scientometric biography of a leading scientist working on the field of bio-energy

    Energy Technology Data Exchange (ETDEWEB)

    Konur, Ozcan [Sirnak University Faculty of Engineering, Department of Mechanical Engineering (Turkey)], email: okonur@hotmail.com

    2011-07-01

    This paper presents a scientometric biography of a Turkish scientist, Prof. Dr. Ayhan Demirbas, who is a leading figure in the field of bio-energy. It describes the method and importance of doing such biographies and suggests that there are too few of them, this one being the first in this specific area. It provides insight into the individual, his work, his research and links in his field of studies and research. Prof. Dr. Demirbas has spent almost three decades in research, particularly in the field of bio-energy. He has researched and taught in the field of renewable energies including biodiesels, biofuels, biomass pyrolysis, liquefaction and gasification, biogas, bioalcohols, and biohydrogen. He has also studied a great variety of subjects, such as the development of pulp from plants, chemical and engineering thermodynamics, chemical and energy education, global climate change, drinking water and cereal analyses. He has published 454 articles as of 2011.

  11. Benchmarking school nursing practice: the North West Regional Benchmarking Group

    OpenAIRE

    Littler, Nadine; Mullen, Margaret; Beckett, Helen; Freshney, Alice; Pinder, Lynn

    2016-01-01

    It is essential that the quality of care is reviewed regularly through robust processes such as benchmarking to ensure all outcomes and resources are evidence-based so that children and young people’s needs are met effectively. This article provides an example of the use of benchmarking in school nursing practice. Benchmarking has been defined as a process for finding, adapting and applying best practices (Camp, 1994). This concept was first adopted in the 1970s ‘from industry where it was us...

  12. Reactor calculation benchmark PCA blind test results

    International Nuclear Information System (INIS)

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables

  13. Reactor calculation benchmark PCA blind test results

    Energy Technology Data Exchange (ETDEWEB)

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables.

  14. Mapping the evolution of entrepreneurship as a field of research (1990-2013): A scientometric analysis.

    Science.gov (United States)

    Chandra, Yanto

    2018-01-01

    This article applies scientometric techniques to study the evolution of the field of entrepreneurship between 1990 and 2013. Using a combination of topic mapping, author and journal co-citation analyses, and overlay visualization of new and hot topics in the field, this article makes an important contribution to entrepreneurship research by identifying 46 topics in the 24-year history of entrepreneurship research and demonstrating how they appear, disappear, reappear and stabilize over time. It also identifies five topics that are persistent across the 24-year study period--institutions and institutional entrepreneurship, innovation and technology management, policy and development, entrepreneurial process and opportunity, and new ventures--which I labeled The Pentagon of Entrepreneurship. Overall, the analyses revealed patterns of convergence and divergence and the diversity of topics, specialization, and interdisciplinary engagement in entrepreneurship research, thus offering the latest insights on the state of the art of the field.

  15. Mapping the evolution of entrepreneurship as a field of research (1990-2013): A scientometric analysis.

    Directory of Open Access Journals (Sweden)

    Yanto Chandra

    Full Text Available This article applies scientometric techniques to study the evolution of the field of entrepreneurship between 1990 and 2013. Using a combination of topic mapping, author and journal co-citation analyses, and overlay visualization of new and hot topics in the field, this article makes an important contribution to entrepreneurship research by identifying 46 topics in the 24-year history of entrepreneurship research and demonstrating how they appear, disappear, reappear and stabilize over time. It also identifies five topics that are persistent across the 24-year study period--institutions and institutional entrepreneurship, innovation and technology management, policy and development, entrepreneurial process and opportunity, and new ventures--which I labeled The Pentagon of Entrepreneurship. Overall, the analyses revealed patterns of convergence and divergence and the diversity of topics, specialization, and interdisciplinary engagement in entrepreneurship research, thus offering the latest insights on the state of the art of the field.

  16. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency....

  17. Scientometrical approach of the definition of a research domain using scientific journals

    International Nuclear Information System (INIS)

    Signogneau, A.

    1995-01-01

    The goal of this thesis is to analyse how the academic domain of a research entity can be defined by a panel of scientific journals. This work aims to contribute to the creation of information tools that aid research management. The first part gives an analysis of scientific journals as markers of scientific development: the production and diffusion of scientific journals and their ''scientometrical'' analysis (references, citation reports, citation indexes etc..). In the second part, a research unit is analyzed according to its related scientific journals and to its research domain. The SPAM (Photons, Atoms and Molecules Service) of the CEA was chosen for this task (main journals and co-publications network, specialization, main topics, collaborations and competition). The OST (Observatory of Sciences and Techniques) is in charge of producing scientific and technical indicators for research operators. The third part evaluates the methods used by the OST (analyses of reviews and journals) to provide a documentary corpus, taking the topic of the environment as an example. Finally the relevance of the information products obtained is evaluated. (J.S.)

  18. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  19. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  20. Measuring originality: common patterns of invention in research and technology organizations

    Energy Technology Data Exchange (ETDEWEB)

    Tang, D.L.; Wiseman, E.; Keating, T.; Archambeault, J.

    2016-07-01

    The National Research Council of Canada (NRC) co-chairs an international working group on performance benchmarking and impact assessment of Research and Technology Organizations (RTO). The Knowledge Management branch of the NRC conducted the patent analysis portion of the benchmarking study. In this paper, we present a Weighted Originality index that can more accurately measure the spread of technological combinations in terms of hierarchical patent classifications. Using this patent indicator, we revealed a common pattern of distribution of invention originality in RTOs. Our work contributes to the methodological advancement of patent measures for the scientometric community. (Author)
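
    An originality indicator of this kind can be illustrated with the classical diversity-based form: one minus the Herfindahl concentration of the patent classes a patent combines. The hierarchical weighting used in the NRC's Weighted Originality index is not reproduced here; this sketch, with invented class codes, shows only the unweighted baseline.

```python
from collections import Counter

def originality(cited_classes):
    """Unweighted originality: 1 - Herfindahl index of cited-class shares.

    A patent drawing on many different classes scores near 1; one drawing
    on a single class scores 0.
    """
    counts = Counter(cited_classes)
    total = sum(counts.values())
    return 1.0 - sum((n / total) ** 2 for n in counts.values())

# Invented examples: one narrow patent, one spanning three distinct classes.
narrow = originality(["A61K", "A61K", "A61K"])
broad = originality(["A61K", "G06F", "H04L"])
```

    A weighted variant would additionally scale each class pair by its distance in the hierarchical classification, which is where the spread of technological combinations mentioned above comes in.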

  1. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  2. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  3. A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images

    Directory of Open Access Journals (Sweden)

    David Vázquez

    2017-01-01

    Full Text Available Colorectal cancer (CRC) is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
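
    Segmentation benchmarks of this kind are typically scored with per-class overlap metrics; intersection over union (IoU) is the usual choice in semantic segmentation. The sketch below computes it on flattened toy label maps (the metric choice and class codes are illustrative, not taken from the record).

```python
def iou(pred, target, cls):
    """Per-class intersection over union for flattened label maps."""
    inter = sum(1 for p, t in zip(pred, target) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, target) if p == cls or t == cls)
    return inter / union if union else 0.0

# Toy 6-pixel maps: class 0 = background, 1 = polyp, 2 = lumen (invented).
pred = [0, 1, 1, 1, 0, 2]
target = [0, 1, 1, 0, 0, 2]
polyp_iou = iou(pred, target, 1)
```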

  4. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  5. Benchmarking in pathology: development of an activity-based costing model.

    Science.gov (United States)

    Burnett, Leslie; Wilson, Roger; Pfeffer, Sally; Lowry, John

    2012-12-01

    Benchmarking in Pathology (BiP) allows pathology laboratories to determine the unit cost of all laboratory tests and procedures, and also provides organisational productivity indices allowing comparisons of performance with other BiP participants. We describe 14 years of progressive enhancement to a BiP program, including the implementation of 'avoidable costs' as the accounting basis for allocation of costs rather than previous approaches using 'total costs'. A hierarchical tree-structured activity-based costing model distributes 'avoidable costs' attributable to the pathology activities component of a pathology laboratory operation. The hierarchical tree model permits costs to be allocated across multiple laboratory sites and organisational structures. This has enabled benchmarking on a number of levels, including test profiles and non-testing related workload activities. The development of methods for dealing with variable cost inputs, allocation of indirect costs using imputation techniques, panels of tests, and blood-bank record keeping, have been successfully integrated into the costing model. A variety of laboratory management reports are produced, including the 'cost per test' of each pathology 'test' output. Benchmarking comparisons may be undertaken at any and all of the 'cost per test' and 'cost per Benchmarking Complexity Unit' level, 'discipline/department' (sub-specialty) level, or overall laboratory/site and organisational levels. We have completed development of a national BiP program. An activity-based costing methodology based on avoidable costs overcomes many problems of previous benchmarking studies based on total costs. The use of benchmarking complexity adjustment permits correction for varying test-mix and diagnostic complexity between laboratories. Use of iterative communication strategies with program participants can overcome many obstacles and lead to innovations.
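
    The hierarchical tree-structured allocation described above can be sketched in miniature: a parent pool of avoidable costs is split down the tree in proportion to activity weights until it reaches individual tests. The cost centres, weights and volumes below are invented for illustration; the actual BiP model is far richer.

```python
def allocate(cost, weights):
    """Split a parent cost across children in proportion to activity weights."""
    total = sum(weights.values())
    return {child: cost * w / total for child, w in weights.items()}

# Hypothetical laboratory: avoidable overhead is pushed down to disciplines,
# then to tests, and finally divided by test volumes.
overhead = 90_000.0
by_discipline = allocate(overhead, {"chemistry": 2.0, "haematology": 1.0})
by_test = allocate(by_discipline["chemistry"], {"glucose": 3.0, "lipids": 1.0})

volumes = {"glucose": 30_000, "lipids": 5_000}  # tests performed per year
cost_per_test = {t: by_test[t] / volumes[t] for t in by_test}
```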

  6. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  7. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  8. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information. (1) We did see potential solutions to some of our "top 10" issues. (2) We have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration, including sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA.

  9. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  10. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  11. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking applications. The present study analyses voluntary benchmarking in a public setting that is oriented towards learning. The study contributes by showing how benchmarking can be mobilised for learning and offers evidence of the effects of such benchmarking for performance outcomes. It concludes that benchmarking can enable learning in public settings but that this requires actors to invest in ensuring that benchmark data are directed towards improvement.

  12. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  13. Benchmarking reference services: an introduction.

    Science.gov (United States)

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  14. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...
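
    The idea of returning benchmarks and improvement potentials can be shown in the simplest possible setting: with one input and one output, a unit's efficiency is its output/input ratio relative to the best observed ratio, and its improvement potential is the extra output the frontier ratio would allow. Real DEA solves a linear programme per unit and handles multiple inputs and outputs; the farm names and figures below are invented.

```python
# Toy single-input, single-output frontier benchmark (not full DEA).
farms = {"A": (10, 30), "B": (8, 32), "C": (12, 24)}  # name: (input, output)

ratios = {name: out / inp for name, (inp, out) in farms.items()}
best = max(ratios.values())  # the observed frontier ratio

# Efficiency relative to the frontier, and the extra output attainable
# at each unit's current input level.
efficiency = {name: r / best for name, r in ratios.items()}
potential = {name: best * inp - out for name, (inp, out) in farms.items()}
```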

  15. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
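
    The first-tier screening logic described above reduces to a simple comparison: a contaminant whose measured concentration exceeds its NOAEL-based benchmark is retained as a contaminant of potential concern (COPC); one below it is screened out. The chemical names and concentration values below are invented for illustration.

```python
# Invented NOAEL-based benchmarks and measured concentrations (mg/kg).
benchmarks = {"cadmium": 0.25, "zinc": 120.0, "mercury": 0.01}
measured = {"cadmium": 0.40, "zinc": 80.0, "mercury": 0.05}

# Retain as COPC any contaminant exceeding its benchmark.
copcs = sorted(c for c, conc in measured.items() if conc > benchmarks[c])
```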

  16. Practice benchmarking in the age of targeted auditing.

    Science.gov (United States)

    Langdale, Ryan P; Holland, Ben F

    2012-11-01

    The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists.

  17. Technical Report: Benchmarking for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    Energy Technology Data Exchange (ETDEWEB)

    McLoughlin, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-22

    The software application “MetaQuant” was developed by our group at Lawrence Livermore National Laboratory (LLNL). It is designed to profile microbial populations in a sample using data from whole-genome shotgun (WGS) metagenomic DNA sequencing. Several other metagenomic profiling applications have been described in the literature. We ran a series of benchmark tests to compare the performance of MetaQuant against that of a few existing profiling tools, using real and simulated sequence datasets. This report describes our benchmarking procedure and results.
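
    A profiler benchmark of this kind compares estimated taxon abundances, with their confidence intervals, against a known truth set. As a generic illustration, and not MetaQuant's actual method, the sketch below turns raw read counts into relative abundances with a normal-approximation confidence interval; taxon names and counts are invented.

```python
import math

def abundance_ci(count, total, z=1.96):
    """Relative abundance with a normal-approximation 95% CI, clipped to [0, 1]."""
    p = count / total
    se = math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Invented read counts assigned to each taxon.
reads = {"E_coli": 600, "B_subtilis": 300, "S_aureus": 100}
total = sum(reads.values())
profile = {taxon: abundance_ci(n, total) for taxon, n in reads.items()}
```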

  18. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Visualizing the Knowledge Domain of Nanoparticle Drug Delivery Technologies: A Scientometric Review

    Directory of Open Access Journals (Sweden)

    Yen-Chun Lee

    2016-01-01

    Full Text Available The scientific literature of nanoparticle drug delivery technologies (NDDT) between 2005 and 2014 was reviewed. The visualized co-citation network of its knowledge domain was characterized in terms of thematic concentrations of co-cited references and emerging trends of surging keywords and citations to references through a scientometric review. The combined dataset of 25,171 bibliographic records was constructed through topic search and citation expansion to ensure adequate coverage of the field. While gold nanoparticle and magnetic nanoparticle research remain the two most prominent knowledge domains in the NDDT field, research related to clinical and therapeutic applications has experienced considerable growth. In particular, clinical and therapeutic developments in NDDT have demonstrated profound connections with mesoporous silica nanoparticle research and microcrystal research. A rapid adaptation of mesoporous silica-based nanomaterials and rare earth fluoride nano-/microcrystals in NDDT is evident. Innovative strategies have been employed to exploit multicomponent design, chemical synthesis, surface modification, and controlled release, imparting functionalized targeting capabilities. This study not only facilitated the connection of authors and research themes in the NDDT community, but also demonstrated how research interests and trends evolve over time, which greatly contributes to our understanding of the NDDT knowledge domains.
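
    The core operation behind co-citation network mapping is simple counting: two references are linked whenever they appear together in a citing paper's reference list, and the link weight is the number of such co-occurrences. A minimal sketch with invented reference lists:

```python
from collections import Counter
from itertools import combinations

# Invented reference lists of three citing papers.
papers = [
    ["ref1", "ref2", "ref3"],
    ["ref1", "ref2"],
    ["ref2", "ref3"],
]

# Count each unordered pair of co-cited references.
cocitations = Counter()
for refs in papers:
    for a, b in combinations(sorted(set(refs)), 2):
        cocitations[(a, b)] += 1
```

    The resulting pair counts are the edge weights of the co-citation network that visualization tools then cluster and map.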

  20. Benchmark calculation programme concerning typical LMFBR structures

    International Nuclear Information System (INIS)

    Donea, J.; Ferrari, G.; Grossetie, J.C.; Terzaghi, A.

    1982-01-01

    This programme, which is part of a comprehensive activity aimed at resolving difficulties encountered in using design procedures based on ASME Code Case N-47, should build confidence in computer codes that are intended to provide a realistic prediction of LMFBR component behaviour. The calculations started with static analysis of typical structures made of nonlinear materials stressed by cyclic loads. Fluid-structure interaction analysis is also being considered. The reasons for and details of the different benchmark calculations are described, the results obtained are discussed, and future computational exercises are indicated.

  1. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. This paper also evaluates optimization methods of recent RISC/UNIX systems, such as IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When a particular compiler option and math library were included in the evaluation process, systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of some so-called mainframes of IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99MHz) was defined to be the unit of performance, the EGS4 Unit. The EGS4 Benchmark Suite was also run on various PCs such as Pentiums, i486 and DEC Alpha and so forth. The performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have also been evaluated for correlation with industry benchmark programs, namely SPECmark. (author)
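
    An "EGS4 Unit" as described here is just performance expressed relative to a reference machine: the reference system's run time divided by the system under test's run time, so the reference scores exactly 1.0 and faster machines score above it. The run times below are invented for illustration.

```python
# Invented suite run times in seconds; the reference machine defines the unit.
reference_time = 120.0  # reference system (scores 1.0 by definition)
times = {"reference": 120.0, "pc": 180.0, "workstation": 60.0}

# Relative performance: lower run time means a higher unit score.
units = {system: reference_time / t for system, t in times.items()}
```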

  2. Using an Individual Procedure Score Before and After the Advanced Surgical Skills Exposure for Trauma Course Training to Benchmark a Hemorrhage-Control Performance Metric.

    Science.gov (United States)

    Mackenzie, Colin F; Garofalo, Evan; Shackelford, Stacy; Shalin, Valerie; Pugh, Kristy; Chen, Hegang; Puche, Adam; Pasley, Jason; Sarani, Babak; Henry, Sharon; Bowyer, Mark

    2015-01-01

    Test with an individual procedure score (IPS) to assess whether an unpreserved cadaver trauma training course, including upper and lower limb vascular exposure, improves correct identification of surgical landmarks and underlying anatomy, and shortens time to vascular control. Prospective study of performance of 3 vascular exposure and control procedures (axillary, brachial, and femoral arteries) using IPS metrics by 2 colocated and trained evaluators before and after training with the Advanced Surgical Skills Exposure for Trauma (ASSET) course. IPS, including identification of anatomical landmarks, incisions, underlying structures, and time to completion of each procedure, was compared before and after training using repeated measurement models. Audio-video instrumented cadaver laboratory at University of Maryland School of Medicine. A total of 41 second- to sixth-year surgical residents from surgical programs throughout the Mid-Atlantic States who had not previously taken the ASSET course were enrolled; 40 completed the pre- and post-ASSET performance evaluations. After ASSET training, all components of IPS increased and time shortened for each of the 3 artery exposures. Procedure steps performed correctly increased 57%, anatomical knowledge increased 43%, and time from skin incision to passage of a vessel loop twice around the correct vessel decreased by a mean of 2.5 minutes. An overall vascular trauma readiness index, a comprehensive IPS score for 3 procedures, increased 28% with ASSET training. Improved knowledge of surface landmarks and underlying anatomy is associated with increased IPS, faster procedures, more accurate incision placement, and successful vascular control. Structural recognition during specific procedural steps and anatomical knowledge were key points learned during the ASSET course. Such training may accelerate acquisition of specific trauma surgery skills to compensate for shortened training hours, infrequent exposure to major vascular injuries, or when just

  3. Benchmarking ENDF/B-VII.0

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2006-01-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for ⁶Li, ⁷Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D₂O, H₂O, concrete, polyethylene and Teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0.
The average over 257
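    The validation described in this record reduces, per benchmark case, to a calculated-over-experimental (C/E) ratio of k_eff, averaged over a benchmark category. A minimal sketch of that bookkeeping, with invented numbers rather than the actual ENDF/B-VII.0 results:

    ```python
    # Toy sketch of a criticality-safety benchmark comparison: each case pairs a
    # calculated k_eff with the experimental value; the C/E ratio below 1.0 marks
    # underprediction. All values here are invented for illustration.
    cases = {
        # name: (calculated k_eff, experimental k_eff)
        "LEU-COMP-THERM-001": (0.9952, 1.0000),
        "LEU-COMP-THERM-002": (0.9978, 1.0000),
        "MIX-MET-FAST-001":   (1.0011, 1.0000),
    }

    def ce_ratio(calc, exp):
        return calc / exp

    ratios = {name: ce_ratio(c, e) for name, (c, e) in cases.items()}
    avg_ce = sum(ratios.values()) / len(ratios)

    for name, r in sorted(ratios.items()):
        print(f"{name}: C/E = {r:.4f} ({(r - 1) * 100:+.2f}%)")
    print(f"average C/E over {len(ratios)} cases: {avg_ce:.4f}")
    ```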

  4. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, none were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
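    The whole-building comparison this record describes can be sketched as a percentile ranking of one building's energy use intensity (EUI) against a peer group; the peer data and EUI values below are invented, not drawn from Cal-Arch or CBECS:

    ```python
    # Minimal sketch of whole-building energy benchmarking: rank one building's
    # energy use intensity (EUI, kWh per square foot per year) against a peer
    # group of similar buildings. All numbers are illustrative.
    def percentile_rank(value, peers):
        """Fraction of peer buildings that use at least as much energy."""
        worse_or_equal = sum(1 for p in peers if p >= value)
        return worse_or_equal / len(peers)

    peer_euis = [14.2, 15.8, 16.5, 17.1, 18.0, 19.4, 20.2, 22.7, 25.3, 28.1]
    my_eui = 16.0

    rank = percentile_rank(my_eui, peer_euis)
    print(f"EUI {my_eui} kWh/sqft-yr beats {rank:.0%} of peers")
    ```

    A building beating only a small fraction of its peers would be an obvious candidate for an individual audit.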

  5. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...

  6. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    Science.gov (United States)

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.
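    The improvement figures quoted in this abstract can be reproduced directly from the raw times:

    ```python
    # Reproducing the turnaround-time arithmetic from the abstract: baseline
    # 19.9 minutes, post-intervention 16.3 minutes, 1992 national benchmark 13.5.
    baseline = 19.9      # minutes between surgical cases before the project
    after = 16.3         # average after the team implemented solutions
    benchmark = 13.5     # 1992 national benchmark

    improvement = (baseline - after) / baseline
    gap_to_benchmark = after - benchmark

    print(f"improvement: {improvement:.0%}")          # matches the 18% in the text
    print(f"remaining gap to benchmark: {gap_to_benchmark:.1f} min")
    ```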

  7. Benchmarking i den offentlige sektor [Benchmarking in the public sector]

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article, we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, on the basis of four different applications of benchmarking. The regulation of utility companies is addressed, after which...

  8. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins , Robert

    2010-01-01

    Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...

  9. Benchmarking Using Basic DBMS Operations

    Science.gov (United States)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
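    A micro-benchmark in the spirit described here (timing scans, aggregations and joins over a known data set) can be sketched against an in-memory SQLite database. The schema and queries below are invented for illustration and are not taken from XMarq itself:

    ```python
    # Time three basic DBMS operations (scan, aggregation, join) against an
    # in-memory SQLite database with synthetic data.
    import sqlite3
    import time

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, cust INTEGER, total REAL)")
    cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT)")
    cur.executemany("INSERT INTO customers VALUES (?, ?)",
                    [(i, "EU" if i % 2 else "US") for i in range(1000)])
    cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                    [(i, i % 1000, float(i % 97)) for i in range(50000)])
    conn.commit()

    queries = {
        "scan":        "SELECT COUNT(*) FROM orders WHERE total > 50",
        "aggregation": "SELECT cust, SUM(total) FROM orders GROUP BY cust",
        "join":        "SELECT c.region, SUM(o.total) FROM orders o "
                       "JOIN customers c ON o.cust = c.id GROUP BY c.region",
    }

    for name, sql in queries.items():
        start = time.perf_counter()
        rows = cur.execute(sql).fetchall()
        elapsed = time.perf_counter() - start
        print(f"{name:11s} {elapsed * 1000:7.2f} ms ({len(rows)} rows)")
    ```

    Running the same query set on two systems, as the paper proposes, then reduces to comparing the per-operation timings.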

  10. High Energy Physics (HEP) benchmark program

    International Nuclear Information System (INIS)

    Yasu, Yoshiji; Ichii, Shingo; Yashiro, Shigeo; Hirayama, Hideo; Kokufuda, Akihiro; Suzuki, Eishin.

    1993-01-01

    High Energy Physics (HEP) benchmark programs are indispensable tools for selecting suitable computers for HEP application systems. Industry-standard benchmark programs cannot be used for this kind of particular selection. The CERN and SSC benchmark suites are well-known HEP benchmark programs for this purpose. The CERN suite includes event reconstruction and event generator programs, while the SSC suite includes event generators. In this paper, we found that the results from these two suites are not consistent with each other, and that the results from industry benchmarks agree with neither. We also describe a comparison of benchmark results obtained using the EGS4 Monte Carlo simulation program with those from the two HEP benchmark suites, and found that the EGS4 results are not consistent with either of them. The industry-standard SPECmark values on various computer systems are not consistent with the EGS4 results either. Because of these inconsistencies, we point out the necessity of standardizing HEP benchmark suites. Also, an EGS4 benchmark suite should be developed for users of applications in areas such as medical science, nuclear power plants, nuclear physics and high energy physics. (author)

  11. Benchmarking Tool Kit.

    Science.gov (United States)

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  12. Mapping the evolution of entrepreneurship as a field of research (1990–2013): A scientometric analysis

    Science.gov (United States)

    2018-01-01

    This article applies scientometric techniques to study the evolution of the field of entrepreneurship between 1990 and 2013. Using a combination of topic mapping, author and journal co-citation analyses, and overlay visualization of new and hot topics in the field, this article makes an important contribution to entrepreneurship research by identifying 46 topics in the 24-year history of entrepreneurship research and demonstrating how they appear, disappear, reappear and stabilize over time. It also identifies five topics that are persistent across the 24-year study period (institutions and institutional entrepreneurship, innovation and technology management, policy and development, entrepreneurial process and opportunity, and new ventures), which I labeled The Pentagon of Entrepreneurship. Overall, the analyses revealed patterns of convergence and divergence and the diversity of topics, specialization, and interdisciplinary engagement in entrepreneurship research, thus offering the latest insights on the state of the art of the field. PMID:29300735

  13. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Sitnikov, Catalina; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and the experience of others to improve the enterprise. Starting from the analysis of the performance and underlying the strengths and weaknesses of the enterprise it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision “from the whole towards the parts” (a fragmented image of the enterprise’s value chain) redu...

  14. Comparison of typical inelastic analysis predictions with benchmark problem experimental results

    International Nuclear Information System (INIS)

    Clinard, J.A.; Corum, J.M.; Sartory, W.K.

    1975-01-01

    The results of exemplary inelastic analyses are presented for a series of experimental benchmark problems. Consistent analytical procedures and constitutive relations were used in each of the analyses, and published material behavior data were used in all cases. Two finite-element inelastic computer programs were employed. These programs implement the analysis procedures and constitutive equations for Type 304 stainless steel that are currently used in many analyses of elevated-temperature nuclear reactor system components. The analysis procedures and constitutive relations are briefly discussed, and representative analytical results are presented and compared to the test data. The results that are presented demonstrate the feasibility of performing inelastic analyses, and they are indicative of the general level of agreement that the analyst might expect when using conventional inelastic analysis procedures. (U.S.)

  15. Analysis of the impact of correlated benchmark experiments on the validation of codes for criticality safety analysis

    International Nuclear Information System (INIS)

    Bock, M.; Stuke, M.; Behler, M.

    2013-01-01

    The validation of a code for criticality safety analysis requires the recalculation of benchmark experiments. The selected benchmark experiments are chosen such that they have properties similar to the application case that has to be assessed. A common source of benchmark experiments is the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) compiled by the 'International Criticality Safety Benchmark Evaluation Project' (ICSBEP). In order to take full advantage of the information provided by the individual benchmark descriptions for the application case, the recommended procedure is to perform an uncertainty analysis. The latter is based on the uncertainties of experimental results included in most of the benchmark descriptions, and can be performed by means of the Monte Carlo sampling technique. The consideration of uncertainties is also being introduced in the supplementary sheet of DIN 25478 'Application of computer codes in the assessment of criticality safety'. However, for a correct treatment of uncertainties, taking into account only the individual uncertainties of the benchmark experiments is insufficient. In addition, correlations between benchmark experiments have to be handled correctly. For example, such correlations can arise when different cases of a benchmark experiment share the same components, like fuel pins or fissile solutions. Thus, manufacturing tolerances of these components (e.g. the diameter of the fuel pellets) have to be considered in a consistent manner in all cases of the benchmark experiment. At the 2012 meeting of the Expert Group on 'Uncertainty Analysis for Criticality Safety Assessment' (UACSA) of the OECD/NEA, a benchmark proposal was outlined that aimed at determining the impact of benchmark correlations on the estimation of the computational bias of the neutron multiplication factor (k_eff). The analysis presented here is based on this proposal. (orig.)
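    Why shared components induce correlations between benchmark cases can be shown with a toy Monte Carlo sketch: within each sampled history, one pellet-diameter draw is applied to both cases. The linear k_eff "responses" and all numbers below are invented, purely to illustrate the sampling scheme, not any real benchmark:

    ```python
    # Toy illustration of correlated benchmark uncertainties: two benchmark
    # cases share the same fuel pellets, so a single sampled pellet diameter is
    # used for both cases in each Monte Carlo history.
    import random

    random.seed(42)
    NOMINAL_DIAMETER = 8.0   # mm, invented nominal value
    TOLERANCE_SD = 0.01      # mm, invented manufacturing tolerance (1 sigma)

    def keff_case_a(d):  # invented linear sensitivity to pellet diameter
        return 0.9980 + 0.020 * (d - NOMINAL_DIAMETER)

    def keff_case_b(d):
        return 0.9995 + 0.015 * (d - NOMINAL_DIAMETER)

    samples = []
    for _ in range(10000):
        d = random.gauss(NOMINAL_DIAMETER, TOLERANCE_SD)  # one draw, both cases
        samples.append((keff_case_a(d), keff_case_b(d)))

    mean_a = sum(a for a, _ in samples) / len(samples)
    mean_b = sum(b for _, b in samples) / len(samples)
    cov = sum((a - mean_a) * (b - mean_b) for a, b in samples) / len(samples)
    print(f"mean k_eff: case A {mean_a:.5f}, case B {mean_b:.5f}")
    print(f"covariance between cases: {cov:.3e}")  # non-zero: cases correlated
    ```

    Sampling the diameter independently per case would wrongly drive this covariance to zero, which is exactly the inconsistency the record warns against.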

  16. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  17. Benchmarking in Czech Higher Education

    Directory of Open Access Journals (Sweden)

    Plaček Michal

    2015-12-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as on a comparison with international experience, recommendations for public policy are made, which lie in the design of a model of collaborative benchmarking for Czech economics and management higher-education programs. Because the fully complex model cannot be implemented immediately (which is also confirmed by structured interviews with academics who have practical experience with benchmarking), the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.

  18. Human factors reliability benchmark exercise: a review

    International Nuclear Information System (INIS)

    Humphreys, P.

    1990-01-01

    The Human Factors Reliability Benchmark Exercise has addressed the issues of identification, analysis, representation and quantification of human error in order to identify the strengths and weaknesses of available techniques. Using a German PWR nuclear power plant as the basis for the studies, fifteen teams undertook evaluations of a routine functional test and maintenance procedure plus an analysis of human actions during an operational transient. The techniques employed by the teams are discussed and reviewed on a comparative basis. The qualitative assessments performed by each team compare well, but at the quantification stage there is much less agreement. (author)

  19. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms

    DEFF Research Database (Denmark)

    Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander

    2017-01-01

    This paper describes ANN-Benchmarks, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms. It provides a standard interface for measuring the performance and quality achieved by nearest neighbor algorithms on different standard data sets. It supports several...... visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters...... for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different...
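    The headline quality metric such tools report is recall against an exact search: the fraction of the true k nearest neighbors the approximate algorithm returns. A minimal sketch with invented data (this is not ANN-Benchmarks' own interface):

    ```python
    # Recall of an approximate k-NN result against exact brute-force search.
    def knn(query, points, k):
        """Indices of the k points nearest to query (squared Euclidean)."""
        return sorted(range(len(points)),
                      key=lambda i: sum((q - p) ** 2
                                        for q, p in zip(query, points[i])))[:k]

    def recall(approx_ids, exact_ids):
        return len(set(approx_ids) & set(exact_ids)) / len(exact_ids)

    points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0), (0.5, 0.5)]
    query = (0.1, 0.1)

    exact = knn(query, points, k=3)
    approx = [0, 4, 3]  # pretend output of an approximate index
    print("exact:", exact, "recall:", recall(approx, exact))
    ```

    A real benchmark run plots this recall against queries per second to expose the speed/quality trade-off of each algorithm and parameter setting.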

  20. Interfield dysbalances in research input and output benchmarking: Visualisation by density equalizing procedures

    Directory of Open Access Journals (Sweden)

    Fischer Axel

    2008-08-01

    Background: Historical, social and economic reasons can lead to major differences in the allocation of health system resources and research funding. These differences might endanger progress in diagnostic and therapeutic approaches to socio-economically important diseases. The present study aimed to assess different benchmarking approaches that might be used to analyse these disproportions. Research in two categories was analysed for various output parameters and compared to input parameters. Germany was used as a high-income model country. For the areas of cardiovascular and respiratory medicine, density equalizing mapping procedures visualized major geographical differences in both input and output markers. Results: An imbalance in state financial input was present, with 36 cardiovascular versus 8 respiratory medicine state-financed full clinical university departments at the C4/W3 salary level. The imbalance in financial input is paralleled by an imbalance in overall quantitative output figures: the 36 cardiology chairs published 2708 articles in comparison to 453 articles published by the 8 respiratory medicine chairs in the period between 2002 and 2006. This is a ratio of 75.2 articles per cardiology chair and 56.63 articles per respiratory medicine chair. A similar trend is also present in the qualitative measures. Here, the 2708 cardiology publications were cited 48337 times (7290 times for respiratory medicine), which is an average of 17.85 citations per publication vs. 16.09 for respiratory medicine. The average number of citations per cardiology chair was 1342.69, in contrast to 911.25 citations per respiratory medicine chair. Further comparison of the contribution of the 16 different German states revealed major geographical differences concerning numbers of chairs, published items, total number of citations and average citations.
Conclusion: Despite the similar significance of cardiovascular and respiratory diseases for the global
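    The per-chair and per-publication averages quoted in this record follow directly from the raw counts it gives:

    ```python
    # Reproducing the input/output benchmarking averages from the abstract.
    cardio = {"chairs": 36, "articles": 2708, "citations": 48337}
    resp = {"chairs": 8, "articles": 453, "citations": 7290}

    for name, d in (("cardiology", cardio), ("respiratory medicine", resp)):
        per_chair = d["articles"] / d["chairs"]
        per_article = d["citations"] / d["articles"]
        cites_per_chair = d["citations"] / d["chairs"]
        print(f"{name}: {per_chair:.2f} articles/chair, "
              f"{per_article:.2f} citations/article, "
              f"{cites_per_chair:.2f} citations/chair")
    ```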

  1. Achieving palliative care research efficiency through defining and benchmarking performance metrics.

    Science.gov (United States)

    Lodato, Jordan E; Aziz, Noreen; Bennett, Rachael E; Abernethy, Amy P; Kutner, Jean S

    2012-12-01

    Research efficiency is gaining increasing attention in the research enterprise, including palliative care research. The importance of generating meaningful findings and translating these scientific advances to improved patient care creates urgency in the field to address well documented system inefficiencies. The Palliative Care Research Cooperative Group (PCRC) provides useful examples for ensuring research efficiency in palliative care. Literature on maximizing research efficiency focuses on the importance of clearly delineated process maps, working instructions, and standard operating procedures in creating synchronicity in expectations across research sites. Examples from the PCRC support these objectives and suggest that early creation and employment of performance metrics aligned with these processes are essential to generate clear expectations and identify benchmarks. These benchmarks are critical in effective monitoring and ultimately the generation of high-quality findings that are translatable to clinical populations. Prioritization of measurable goals and tasks to ensure that activities align with programmatic aims is critical. Examples from the PCRC affirm and expand the existing literature on research efficiency, providing a palliative care focus. Operating procedures, performance metrics, prioritization, and monitoring for success should all be informed by and inform the process map to achieve maximum research efficiency.

  2. The Journal Impact Factor: Moving Toward an Alternative and Combined Scientometric Approach.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Nurmashev, Bekaidar; Yessirkepov, Marlen; Udovik, Elena E; Baryshnikov, Aleksandr A; Kitas, George D

    2017-02-01

    The Journal Impact Factor (JIF) is a single citation metric, which is widely employed for ranking journals and choosing target journals, but is also misused as a proxy for the quality of individual articles and the academic achievements of authors. This article analyzes Scopus-based publication activity on the JIF and overviews some of the numerous misuses of the JIF, global initiatives to overcome the 'obsession' with impact factors, and emerging strategies to revise the concept of scholarly impact. The growing number of articles on the JIF, most of which are in English, reflects the interest of experts in journal editing and scientometrics in its uses, misuses, and options to overcome related problems. Solely displaying values of the JIFs on journal websites is criticized by experts, as these average metrics do not reflect the skewness of the citation distribution of individual articles. Emerging strategies suggest complementing the JIFs with citation plots and alternative metrics, reflecting uses of individual articles in terms of downloads and distribution of related information through social media and networking platforms. It is also proposed to revise the original formula of the JIF calculation and embrace the concept of the impact and importance of individual articles. The latter is largely dependent on the ethical soundness of journal instructions, proper editing and structuring of articles, efforts to promote related information through social media, and endorsements of professional societies.
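    The "original formula" referred to here is the classical two-year impact factor: citations received in year Y to items the journal published in the two preceding years, divided by the number of citable items from those two years. A sketch with invented numbers:

    ```python
    # The classical two-year Journal Impact Factor. Both arguments refer to the
    # two years preceding the JIF year; the example figures are invented.
    def journal_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
        return citations_to_prev_two_years / citable_items_prev_two_years

    # e.g. for 2017: 580 citations in 2017 to articles published in 2015-2016,
    # which comprised 200 citable items.
    jif_2017 = journal_impact_factor(580, 200)
    print(f"JIF 2017 = {jif_2017:.2f}")
    ```

    Being a simple mean, the value says nothing about the skewed citation distribution behind it, which is precisely the criticism the abstract summarizes.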

  3. [PhD theses on gerontological topics in Russia 1995-2012: scientometric analysis].

    Science.gov (United States)

    Smol'kin, A A; Makarova, E A

    2014-01-01

    The paper presents a scientometric analysis of PhD theses on gerontological topics in the Russian humanities (excluding economics) for the period from 1995 to 2012. During this period, 253 PhD theses (238 'candidate' and 15 'doctoral' dissertations) were defended in Russia. Almost half of them were defended during the boom years (2005-2006; 2009-2010). The number of theses defended in the 2000s increased significantly compared to the second half of the 1990s. However, the share of gerontological theses among all theses defended in the Russian humanities hardly changed and remained small (less than 0.3%). The leading discipline in the study of aging (within the humanities) is sociology, accounting for more than a third of all defended theses. Though the theses were defended in 48 cities, more than half of them were defended in 3 cities: Moscow, St. Petersburg and Saratov. Thematic analysis showed that the leading position was occupied by two topics: 'the elderly and the state' (42%) and '(re)socialization/adaptation of the elderly' (25%). 14% of the works are devoted to intergenerational relations and the social status of the elderly. Other topics (the old person's personality, self-perceptions of aging, violence and crime against the elderly, loneliness, discrimination, etc.) are represented by very few studies.

  4. Benchmarking study of corporate research management and planning practices

    Science.gov (United States)

    McIrvine, Edward C.

    1992-05-01

    During 1983-84, Xerox Corporation was undergoing a change in corporate style through a process of training and altered behavior known as Leadership Through Quality. One tenet of Leadership Through Quality was benchmarking, a procedure whereby all units of the corporation were asked to compare their operation with the outside world. As a part of the first wave of benchmark studies, Xerox Corporate Research Group studied the processes of research management, technology transfer, and research planning in twelve American and Japanese companies. The approach taken was to separate `research yield' and `research productivity' (as defined by Richard Foster) and to seek information about how these companies sought to achieve high- quality results in these two parameters. The most significant findings include the influence of company culture, two different possible research missions (an innovation resource and an information resource), and the importance of systematic personal interaction between sources and targets of technology transfer.

  5. Benchmarking af kommunernes sagsbehandling [Benchmarking of municipal casework]

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, the National Social Appeals Board (Ankestyrelsen) is to carry out benchmarking of the quality of municipal casework. The purpose of the benchmarking is to develop the design of the practice reviews with a view to better follow-up, and to improve municipal casework. This working paper discusses methods for benchmarking...

  6. Comparison of typical inelastic analysis predictions with benchmark problem experimental results

    International Nuclear Information System (INIS)

    Clinard, J.A.; Corum, J.M.; Sartory, W.K.

    1975-01-01

    The results of exemplary inelastic analyses for experimental benchmark problems on reactor components are presented. Consistent analytical procedures and constitutive relations were used in each of the analyses, and the material behavior data presented in the Appendix were used in all cases. Two finite-element inelastic computer programs were employed. These programs implement the analysis procedures and constitutive equations for type 304 stainless steel that are currently used in many analyses of elevated-temperature nuclear reactor system components. The analysis procedures and constitutive relations are briefly discussed, and representative analytical results are presented and compared to the test data. The results that are presented demonstrate the feasibility of performing inelastic analyses for the types of problems discussed, and they are indicative of the general level of agreement that the analyst might expect when using conventional inelastic analysis procedures. (U.S.)

  7. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) develop and implement programs to simulate MFTF usage of the data base

  8. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  9. A CFD benchmarking exercise based on flow mixing in a T-junction

    Energy Technology Data Exchange (ETDEWEB)

    Smith, B.L., E-mail: brian.smith@psi.ch [Thermal Hydraulics Laboratory, Nuclear Energy and Safety Department, Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland); Mahaffy, J.H. [Wheelsmith Farm, Spring Mill, PA (United States); Angele, K. [Vattenfall R and D, Älvkarleby (Sweden)

    2013-11-15

    The paper describes an international benchmarking exercise, sponsored by the OECD Nuclear Energy Agency (NEA), aimed at testing the ability of state-of-the-art computational fluid dynamics (CFD) codes to predict the important fluid flow parameters affecting high-cycle thermal fatigue induced by turbulent mixing in T-junctions. The results from numerical simulations are compared to measured data from an experiment performed at 1:2 scale by Vattenfall Research and Development, Älvkarleby, Sweden. The test data were released only at the end of the exercise making this a truly blind CFD-validation benchmark. Details of the organizational procedures, the experimental set-up and instrumentation, the different modeling approaches adopted, synthesis of results, and overall conclusions and perspectives are presented.

  10. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...

  11. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to … contribution to the discussions within the EU-sponsored BEST Thematic Network (Benchmarking European Sustainable Transport), which ran from 2000 to 2003.

  12. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to an initiated governmental effort in bringing benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking as it is presently carried out in the Danish construction sector. Many different perceptions of benchmarking and of the nature of the construction sector lead to uncertainty in how to perceive and use benchmarking, hence generating uncertainty in understanding the effects of benchmarking. This paper addresses … Two perceptions of benchmarking will be presented: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to examine which effects, possibilities and challenges follow in the wake of using this kind …

  13. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation with a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  14. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...
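The core idea described here, evaluating one solution relative to another under identical reference conditions, can be sketched with a minimal timing harness. The two functions and all parameters below are illustrative stand-ins, not part of BONFIRE:

```python
import timeit

def baseline(n):
    # Reference solution: build a list by repeated concatenation (quadratic).
    out = []
    for i in range(n):
        out = out + [i]
    return out

def candidate(n):
    # Alternative solution under test (linear).
    return list(range(n))

def benchmark(fn, n=1000, repeat=5):
    # Best-of-repeat wall time, the usual convention for micro-benchmarks,
    # so both solutions are measured under the same reference conditions.
    return min(timeit.repeat(lambda: fn(n), number=10, repeat=repeat))

# Relative performance of the candidate versus the reference solution.
ratio = benchmark(baseline) / benchmark(candidate)
```

A ratio above 1 indicates the candidate outperforms the baseline under these conditions; real benchmarking suites add controlled environments and statistical treatment on top of this basic pattern.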

  15. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  16. Benchmarking the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William; Hui, Y.V.; Lam, Y. Miu

    2006-01-01

    Benchmarking energy-efficiency is an important tool to promote the efficient use of energy in commercial buildings. Benchmarking models are mostly constructed in a simple benchmark table (percentile table) of energy use, which is normalized with floor area and temperature. This paper describes a benchmarking process for energy efficiency by means of multiple regression analysis, where the relationship between energy-use intensities (EUIs) and the explanatory factors (e.g., operating hours) is developed. Using the resulting regression model, these EUIs are then normalized by removing the effect of deviance in the significant explanatory factors. The empirical cumulative distribution of the normalized EUI gives a benchmark table (or percentile table of EUI) for benchmarking an observed EUI. The advantage of this approach is that the benchmark table represents a normalized distribution of EUI, taking into account all the significant explanatory factors that affect energy consumption. An application to supermarkets is presented to illustrate the development and the use of the benchmarking method
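The regression-and-normalization procedure described in this abstract can be sketched in a few lines. The data below are synthetic and the single explanatory factor (operating hours) and all coefficients are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: EUI (kWh/m^2) driven partly by weekly operating hours.
hours = rng.uniform(40, 100, size=200)
eui = 150.0 + 2.0 * hours + rng.normal(0.0, 20.0, size=200)

# Step 1: regress EUI on the explanatory factor(s).
X = np.column_stack([np.ones_like(hours), hours])
beta, *_ = np.linalg.lstsq(X, eui, rcond=None)

# Step 2: normalise each EUI by removing the effect of deviance from the
# mean operating hours, so buildings are compared on equal terms.
eui_norm = eui - beta[1] * (hours - hours.mean())

# Step 3: the empirical cumulative distribution of the normalised EUI
# serves as the benchmark (percentile) table.
def percentile_rank(value, sample):
    return 100.0 * np.mean(sample <= value)

score = percentile_rank(eui_norm[0], eui_norm)
```

An observed EUI is then benchmarked by its percentile in the normalized distribution; a building at the 90th percentile uses more energy than 90% of its normalized peers.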

  17. Numisheet2005 Benchmark Analysis on Forming of an Automotive Underbody Cross Member: Benchmark 2

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    This report presents an international cooperation benchmark effort focusing on simulations of a sheet metal stamping process. A forming process of an automotive underbody cross member using steel and aluminum blanks is used as a benchmark. Simulation predictions from each submission are analyzed via comparison with the experimental results. A brief summary of various models submitted for this benchmark study is discussed. Prediction accuracy of each parameter of interest is discussed through the evaluation of cumulative errors from each submission

  18. SKaMPI: A Comprehensive Benchmark for Public Benchmarking of MPI

    Directory of Open Access Journals (Sweden)

    Ralf Reussner

    2002-01-01

    The main objective of the MPI communication library is to enable portable parallel programming with high performance within the message-passing paradigm. Since the MPI standard has no associated performance model, and makes no performance guarantees, comprehensive, detailed and accurate performance figures for different hardware platforms and MPI implementations are important for the application programmer, both for understanding and possibly improving the behavior of a given program on a given platform, as well as for assuring a degree of predictable behavior when switching to another hardware platform and/or MPI implementation. We term this latter goal performance portability, and address the problem of attaining performance portability by benchmarking. We describe the SKaMPI benchmark which covers a large fraction of MPI, and incorporates well-accepted mechanisms for ensuring accuracy and reliability. SKaMPI is distinguished among other MPI benchmarks by an effort to maintain a public performance database with performance data from different hardware platforms and MPI implementations.

  19. Quantum-teleportation benchmarks for independent and identically distributed spin states and displaced thermal states

    International Nuclear Information System (INIS)

    Guta, Madalin; Bowles, Peter; Adesso, Gerardo

    2010-01-01

    A successful state-transfer (or teleportation) experiment must perform better than the benchmark set by the 'best' measure-and-prepare procedure. We consider the benchmark problem for the following families of states: (i) displaced thermal equilibrium states of a given temperature; (ii) independent identically prepared qubits with a completely unknown state. For the first family we show that the optimal procedure is heterodyne measurement followed by the preparation of a coherent state. This procedure was known to be optimal for coherent states and for squeezed states with the 'overlap fidelity' as the figure of merit. Here, we prove its optimality with respect to the trace norm distance and supremum risk. For the second problem we consider n independent and identically distributed (i.i.d.) spin-1/2 systems in an arbitrary unknown state ρ and look for the measurement-preparation pair (M_n, P_n) for which the reconstructed state ω_n := P_n ∘ M_n(ρ^⊗n) is as close as possible to the input state (i.e., ‖ω_n − ρ^⊗n‖_1 is small). The figure of merit is based on the trace norm distance between the input and output states. We show that asymptotically with n this problem is equivalent to the first one. The proof and construction of (M_n, P_n) uses the theory of local asymptotic normality developed for state estimation, which shows that i.i.d. quantum models can be approximated in a strong sense by quantum Gaussian models. The measurement part is identical to 'optimal estimation', showing that 'benchmarking' and estimation are closely related problems in the asymptotic setup.
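The trace-norm figure of merit used in this abstract, ‖ρ − ω‖_1, is the sum of the singular values of the difference of the two density matrices. A small numerical illustration on toy single-qubit states (chosen here for illustration, not taken from the paper):

```python
import numpy as np

def trace_norm_distance(rho, omega):
    # ||rho - omega||_1 = sum of the singular values of the difference.
    return np.linalg.norm(np.linalg.svd(rho - omega, compute_uv=False), 1)

# Toy example: the pure state |0><0| versus the maximally mixed qubit state.
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
omega = 0.5 * np.eye(2)

# Difference is diag(0.5, -0.5), so the distance is 0.5 + 0.5 = 1.0.
d = trace_norm_distance(rho, omega)
```

A teleportation benchmark of this kind would compare such a distance, for the reconstructed versus input state, against the best value achievable by any measure-and-prepare strategy.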

  20. Experiences with installing and benchmarking SCALE 4.0 on workstations

    International Nuclear Information System (INIS)

    Montierth, L.M.; Briggs, J.B.

    1992-01-01

    The advent of economical, high-speed workstations has placed on the criticality engineer's desktop the means to perform computational analysis that was previously possible only on mainframe computers. With this capability comes the need to modify and maintain criticality codes for use on a variety of different workstations. Due to the use of nonstandard coding, compiler differences [in lieu of American National Standards Institute (ANSI) standards], and other machine idiosyncrasies, there is a definite need to systematically test and benchmark all codes ported to workstations. Once benchmarked, a user environment must be maintained to ensure that user code does not become corrupted. The goal in creating a workstation version of the criticality safety analysis sequence (CSAS) codes in SCALE 4.0 was to start with the Cray versions and change as little source code as possible yet produce as generic a code as possible. To date, this code has been ported to the IBM RISC 6000, Data General AViiON 400, Silicon Graphics 4D-35 (all using the same source code), and to the Hewlett Packard Series 700 workstations. The code is maintained under a configuration control procedure. In this paper, the authors address considerations that pertain to the installation and benchmarking of CSAS

  1. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of Denton (1971) method and the growth
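For readers unfamiliar with benchmarking in this time-series sense, the classic additive first-difference variant of the Denton method adjusts a preliminary high-frequency series so that it sums to low-frequency benchmarks while disturbing its period-to-period movements as little as possible. A minimal sketch under that reading, with synthetic quarterly and annual numbers (this illustrates the standard Denton idea, not the entropy-based methods this paper proposes):

```python
import numpy as np

def denton_additive(p, b, periods_per_year=4):
    """Additive first-difference Denton benchmarking (sketch).

    Finds x minimizing ||D(x - p)||^2 subject to A x = b, i.e. each year's
    quarters sum to the annual benchmark while the quarter-to-quarter
    movements of the preliminary series p are preserved as far as possible.
    """
    n, m = len(p), len(b)
    # First-difference operator D: (Dx)_t = x_{t+1} - x_t.
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    # Aggregation matrix A: each row sums one year's quarters.
    A = np.zeros((m, n))
    for y in range(m):
        A[y, y * periods_per_year:(y + 1) * periods_per_year] = 1.0
    # Solve the KKT system of the equality-constrained least-squares problem.
    H = 2.0 * D.T @ D
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([H @ p, b])
    return np.linalg.solve(K, rhs)[:n]

p = np.array([10., 11., 12., 13., 12., 11., 10., 9.])  # preliminary quarters
b = np.array([50., 40.])                                # annual benchmarks
x = denton_additive(p, b)
```

The benchmarked series `x` hits the annual totals exactly while spreading the discrepancies smoothly across quarters; the paper's point is that for series that change sign, plain movement preservation of this kind is not enough and sign preservation must be imposed as well.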

  2. Power reactor pressure vessel benchmarks

    International Nuclear Information System (INIS)

    Rahn, F.J.

    1978-01-01

    A review is given of the current status of experimental and calculational benchmarks for use in understanding the radiation embrittlement effects in the pressure vessels of operating light water power reactors. The requirements of such benchmarks for application to pressure vessel dosimetry are stated. Recent developments in active and passive neutron detectors sensitive in the ranges of importance to embrittlement studies are summarized and recommendations for improvements in the benchmark are made. (author)

  3. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  4. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and evaluation of energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after-school activities into account. Benchmarks can be used to make green accounts and work as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)

  5. Benchmark enclosure fire suppression experiments - phase 1 test report.

    Energy Technology Data Exchange (ETDEWEB)

    Figueroa, Victor G.; Nichols, Robert Thomas; Blanchat, Thomas K.

    2007-06-01

    A series of fire benchmark water suppression tests were performed that may provide guidance for dispersal systems for the protection of high value assets. The test results provide boundary and temporal data necessary for water spray suppression model development and validation. A review of fire suppression is presented for both gaseous suppression and water mist fire suppression. The experimental setup and procedure for gathering water suppression performance data are shown. Characteristics of the nozzles used in the testing are presented. Results of the experiments are discussed.

  6. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  7. HS06 benchmark for an ARM server

    International Nuclear Information System (INIS)

    Kluth, Stefan

    2014-01-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  8. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advantages. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards the conditions under which the market mechanism performs within organizations. This paper aims to contribute to research by providing more insight into the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external …

  9. Benchmarking in Czech Higher Education

    OpenAIRE

    Plaček Michal; Ochrana František; Půček Milan

    2015-01-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Base...

  10. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  11. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark, allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark, a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently, models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone.

  12. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  13. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted by comparing different rotor geometries and solutions, keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts: this shape is considered as a fixed parameter in the benchmarking …

  14. Vver-1000 Mox core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  15. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithm and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method and for evaluating the nuclear data used in the codes. (author)

  16. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer
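The pattern described here, reference results stored alongside the code and re-checked on every release, can be sketched as a small harness. Everything below (the toy model, case names, expected values and tolerances) is hypothetical and illustrative; it is not MAAP4's actual mechanism or data:

```python
def simulate_peak_temperature(power_mw):
    # Stand-in for the real plant model: a toy linear correlation.
    return 600.0 + 0.45 * power_mw

# Dynamic benchmarks embedded with the source: each entry records the
# input, the archived reference result, and an acceptance tolerance.
DYNAMIC_BENCHMARKS = {
    # case name: (input power [MW], expected peak temperature, rel. tolerance)
    "plant_transient_a": (3000.0, 1950.0, 0.02),
    "integral_experiment_b": (1200.0, 1140.0, 0.02),
}

def run_dynamic_benchmarks():
    """Re-run every archived benchmark case; return the list of failures."""
    failures = []
    for name, (power, expected, rtol) in DYNAMIC_BENCHMARKS.items():
        got = simulate_peak_temperature(power)
        if abs(got - expected) > rtol * abs(expected):
            failures.append((name, expected, got))
    return failures
```

Running `run_dynamic_benchmarks()` on every upgrade, and refusing to release when the failure list is non-empty, is one way to preserve the benchmarking capability as the code evolves.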

  17. Benchmarking of hospital information systems – a comparative analysis of German-speaking benchmarking clusters

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system’s and information management’s costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both general conditions and examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. Assessing costs and quality application systems, physical data processing systems, organizational structures of information management and IT services processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  18. Reevaluation of the Case, de Hoffmann, and Placzek one-group neutron transport benchmark solution in plane geometry

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1986-01-01

    In a course on neutron transport theory and also in the analytical neutron transport theory literature, the pioneering work of Case et al. (CdHP) is often referenced. This work was truly a monumental effort in that it treated the fundamental mathematical properties of the one-group neutron Boltzmann equation in detail as well as the numerical evaluation of most of the resulting solutions. Many mathematically and numerically oriented dissertations were based on this classic monograph. In light of the considerable advances made both in numerical methods and computer technology since 1953, when the historic CdHP monograph first appeared, it seems appropriate to reevaluate the numerical benchmark solutions found therein with present-day computational technology. In most transport theory courses, the subject of proper benchmarking of numerical algorithms and transport codes is seldom addressed at any great length. This may be the reason that the benchmarking procedure is so rarely practiced in the nuclear community and when practiced is improperly applied. In this presentation, the development of a new benchmark for the one-group neutron flux in an infinite medium will be detailed with emphasis placed on the educational aspects of the benchmarking activity

  19. Medical school benchmarking - from tools to programmes.

    Science.gov (United States)

    Wilkinson, Tim J; Hudson, Judith N; McColl, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  20. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, the development of radiation transport modeling codes, and the construction of accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling the more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper, benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous-energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).

  1. Systems reliability Benchmark exercise part 1-Description and results

    International Nuclear Information System (INIS)

    Amendola, A.

    1986-01-01

    The report describes the aims, rules and results of the Systems Reliability Benchmark Exercise, which was performed in order to assess methods and procedures for reliability analysis of complex systems and involved a large number of European organizations active in NPP safety evaluation. The exercise included both qualitative and quantitative methods and was structured in such a way that the effects of uncertainties in modelling and in data on the overall spread could be separated. Part 1 describes the way in which the RBE was performed, its main results and conclusions.

  2. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
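    The difference the metric choice makes can be sketched in a few lines. In this hypothetical example (the query times are illustrative, not TPC-D results), a single slow query dominates the arithmetic mean of five single-stream timings, while the geometric mean stays close to the typical query time:

```python
import math

def arithmetic_mean(xs):
    # Plain average: dominated by large outliers.
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # n-th root of the product, computed in log space for numerical stability.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical single-stream query times in seconds; one query is 16x slower.
times = [1.0, 1.0, 1.0, 1.0, 16.0]
print(arithmetic_mean(times))  # 4.0
print(geometric_mean(times))   # ~1.74
```

The geometric mean rewards balanced performance across all queries, which is exactly why its use in a composite decision-support metric was so contentious.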

  3. Benchmark thermal-hydraulic analysis with the Agathe Hex 37-rod bundle

    International Nuclear Information System (INIS)

    Barroyer, P.; Hudina, M.; Huggenberger, M.

    1981-09-01

    The prediction performance of different computer codes is compared on the basis of the AGATHE HEX 37-rod bundle experimental results. The compilation of all available calculation results allows a critical assessment of the codes. For the time being, conclusions are drawn as to which codes are best suited for gas-cooled fuel element design purposes. Based on the positive aspects of these cooperative benchmark exercises, an attempt is made to define a computer code verification procedure. (Auth.)

  4. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  5. A Bibliometric Profile of Disaster Medicine Research from 2008 to 2017: A Scientometric Analysis.

    Science.gov (United States)

    Zhou, Liang; Zhang, Ping; Zhang, Zhigang; Fan, Lidong; Tang, Shuo; Hu, Kunpeng; Xiao, Nan; Li, Shuguang

    2018-05-02

    This study analyzed and assessed publication trends in articles on "disaster medicine," using scientometric analysis. Data were obtained from the Web of Science Core Collection (WoSCC) of Thomson Reuters on March 27, 2017. A total of 564 publications on disaster medicine were identified. There was a mild increase in the number of articles on disaster medicine from 2008 (n=55) to 2016 (n=83). Disaster Medicine and Public Health Preparedness published the most articles, the majority of articles were published in the United States, and the leading institute was Tohoku University. F. Della Corte, M. D. Christian, and P. L. Ingrassia were the top authors on the topic, and the field of public health generated the most publications. Terms analysis indicated that emergency medicine, public health, disaster preparedness, natural disasters, medicine, and management were the research hotspots, whereas Hurricane Katrina, mechanical ventilation, occupational medicine, intensive care, and European journals represented the frontiers of disaster medicine research. Overall, our analysis revealed that disaster medicine studies are closely related to other medical fields and provides researchers and policy-makers in this area with new insight into the hotspots and dynamic directions. (Disaster Med Public Health Preparedness. 2018;page 1 of 8).

  6. A scientometric prediction of the discovery of the first potentially habitable planet with a mass similar to Earth.

    Science.gov (United States)

    Arbesman, Samuel; Laughlin, Gregory

    2010-10-04

    The search for a habitable extrasolar planet has long interested scientists, but only recently have the tools become available to search for such planets. In the past decades, the number of known extrasolar planets has ballooned into the hundreds, and with it, the expectation that the discovery of the first Earth-like extrasolar planet is not far off. Here, we develop a novel metric of habitability for discovered planets and use this to arrive at a prediction for when the first habitable planet will be discovered. Using a bootstrap analysis of currently discovered exoplanets, we predict the discovery of the first Earth-like planet to be announced in the first half of 2011, with the likeliest date being early May 2011. Our predictions, using only the properties of previously discovered exoplanets, accord well with external estimates for the discovery of the first potentially habitable extrasolar planet and highlight the usefulness of predictive scientometric techniques to understand the pace of scientific discovery in many fields.
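    The prediction rests on a standard bootstrap: resample the discovered planets, refit the trend in the habitability metric, and read off when the extrapolated trend reaches the maximum value of 1. A minimal sketch of that procedure, using invented yearly metric values rather than the paper's exoplanet data:

```python
import random
random.seed(42)

# Hypothetical best habitability-metric value (0 to 1) recorded each year.
years  = [2004, 2005, 2006, 2007, 2008, 2009, 2010]
metric = [0.20, 0.30, 0.35, 0.45, 0.60, 0.70, 0.80]

def fit_line(xs, ys):
    # Ordinary least-squares slope and intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def predicted_crossing(xs, ys, target=1.0):
    # Year at which the fitted linear trend reaches the target metric value.
    slope, intercept = fit_line(xs, ys)
    return (target - intercept) / slope

# Bootstrap: resample (year, metric) pairs with replacement and refit.
estimates = []
for _ in range(2000):
    idx = [random.randrange(len(years)) for _ in years]
    xs = [years[i] for i in idx]
    ys = [metric[i] for i in idx]
    if len(set(xs)) > 1:          # need at least two distinct years to fit
        estimates.append(predicted_crossing(xs, ys))

estimates.sort()
median_year = estimates[len(estimates) // 2]  # analogue of the "likeliest date"
```

The spread of the bootstrap estimates, not just the median, is what lets such a prediction be stated with a confidence window.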

  7. Sino-Canadian collaborations in stem cell research: a scientometric analysis.

    Directory of Open Access Journals (Sweden)

    Sarah E Ali-Khan

    Full Text Available International collaboration (IC) is essential for the advance of stem cell research, a field characterized by marked asymmetries in knowledge and capacity between nations. China is emerging as a global leader in the stem cell field. However, knowledge on the extent and characteristics of IC in stem cell science, particularly China's collaboration with developed economies, is lacking. We provide a scientometric analysis of the China-Canada collaboration in stem cell research, placing this in the context of other leading producers in the field. We analyze stem cell research published from 2006 to 2010 from the Scopus database, using co-authored papers as a proxy for collaboration. We examine IC levels, collaboration preferences, scientific impact, the collaborating institutions in China and Canada, areas of mutual interest, and funding sources. Our analysis shows rapid global expansion of the field with a 48% increase in papers from 2006 to 2010. China now ranks second globally after the United States. China has the lowest IC rate of the countries examined, while Canada has one of the highest. China-Canada collaboration is rising steadily, more than doubling during 2006-2010. China-Canada collaboration enhances impact compared to papers authored solely by China-based researchers. This difference remained significant even when comparing only papers published in English. While China is increasingly courted in IC by developed countries as a partner in stem cell research, it is clear that it has reached its status in the field largely through domestic publications. Nevertheless, IC enhances the impact of stem cell research in China, and in the field in general. This study establishes an objective baseline for comparison with future studies, setting the stage for in-depth exploration of the dynamics and genesis of IC in stem cell research.
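    With co-authorship as the collaboration proxy, a country's IC rate is simply the share of its papers whose author affiliations span more than one country. A toy sketch of that computation (the records below are invented, not Scopus data):

```python
# Each paper is represented by the set of countries in its author affiliations.
papers = [
    {"CN"}, {"CN"}, {"CN", "CA"}, {"CA", "US"}, {"CN"}, {"CA", "CN", "US"},
]

def ic_rate(papers, country):
    """Share of a country's papers that are internationally co-authored,
    using multi-country affiliation sets as the collaboration proxy."""
    mine = [p for p in papers if country in p]
    intl = [p for p in mine if len(p) > 1]
    return len(intl) / len(mine)

print(ic_rate(papers, "CN"))  # 0.4
print(ic_rate(papers, "CA"))  # 1.0
```

On this toy corpus "CN" has a low IC rate and "CA" a high one, mirroring the asymmetry the study reports at full scale.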

  8. Sino-Canadian collaborations in stem cell research: a scientometric analysis.

    Science.gov (United States)

    Ali-Khan, Sarah E; Ray, Monali; McMahon, Dominique S; Thorsteinsdóttir, Halla

    2013-01-01

    International collaboration (IC) is essential for the advance of stem cell research, a field characterized by marked asymmetries in knowledge and capacity between nations. China is emerging as a global leader in the stem cell field. However, knowledge on the extent and characteristics of IC in stem cell science, particularly China's collaboration with developed economies, is lacking. We provide a scientometric analysis of the China-Canada collaboration in stem cell research, placing this in the context of other leading producers in the field. We analyze stem cell research published from 2006 to 2010 from the Scopus database, using co-authored papers as a proxy for collaboration. We examine IC levels, collaboration preferences, scientific impact, the collaborating institutions in China and Canada, areas of mutual interest, and funding sources. Our analysis shows rapid global expansion of the field with a 48% increase in papers from 2006 to 2010. China now ranks second globally after the United States. China has the lowest IC rate of the countries examined, while Canada has one of the highest. China-Canada collaboration is rising steadily, more than doubling during 2006-2010. China-Canada collaboration enhances impact compared to papers authored solely by China-based researchers. This difference remained significant even when comparing only papers published in English. While China is increasingly courted in IC by developed countries as a partner in stem cell research, it is clear that it has reached its status in the field largely through domestic publications. Nevertheless, IC enhances the impact of stem cell research in China, and in the field in general. This study establishes an objective baseline for comparison with future studies, setting the stage for in-depth exploration of the dynamics and genesis of IC in stem cell research.

  9. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless ... not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry...

  10. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    Full Text Available This paper reviews the role of human resource management (HRM), which today plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  11. Integrating Best Practice and Performance Indicators To Benchmark the Performance of a School System. Benchmarking Paper 940317.

    Science.gov (United States)

    Cuttance, Peter

    This paper provides a synthesis of the literature on the role of benchmarking, with a focus on its use in the public sector. Benchmarking is discussed in the context of quality systems, of which it is an important component. The paper describes the basic types of benchmarking, pertinent research about its application in the public sector, the…

  12. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    Order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable ... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport ...’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  13. Process benchmarking for improvement of environmental restoration activities

    International Nuclear Information System (INIS)

    Celorie, J.A.; Selman, J.R.; Larson, N.B.

    1995-01-01

    A process benchmarking study was initiated by the Office of Environmental Management (EM) of the US Department of Energy (DOE) to analyze and improve the department's environmental assessment and environmental restoration (ER) processes. The purpose of this study was to identify specific differences in the processes and implementation procedures used at comparable remediation sites to determine best practices which had the greatest potential to minimize the cost and time required to conduct remedial investigation/feasibility study (RI/FS) activities. Technical criteria were identified and used to select four DOE, two Department of Defense (DOD), and two Environmental Protection Agency (EPA) restoration sites that exhibited comparable characteristics and regulatory environments. By comparing the process elements and activities executed at the different sites for similar endpoints, best practices were identified for streamlining process elements and minimizing non-value-added activities. Critical measures that influenced process performance were identified and characterized for the sites. This benchmarking study focused on two processes, the internal/external review of documents and the development of the initial evaluation and data collection plan (IEDCP), since these had a great potential for savings, a high impact on other processes, and a high probability for implementation.

  14. Procedure for Measuring and Reporting Commercial Building Energy Performance

    Energy Technology Data Exchange (ETDEWEB)

    Barley, D.; Deru, M.; Pless, S.; Torcellini, P.

    2005-10-01

    This procedure is intended to provide a standard method for measuring and characterizing the energy performance of commercial buildings. The procedure determines the energy consumption, electrical energy demand, and on-site energy production in existing commercial buildings of all types. The performance metrics determined here may be compared against benchmarks to evaluate performance and verify that performance targets have been achieved.
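    As a rough illustration of the kind of metric-versus-benchmark comparison such a procedure enables (the variable names and values below are invented, not quantities defined by the procedure): net annual energy use divided by floor area gives an energy use intensity that can be checked against a benchmark target.

```python
# Hypothetical annual totals for one commercial building (illustrative only).
annual_site_energy_kwh = 1_250_000   # metered energy consumption
on_site_generation_kwh = 150_000     # e.g. rooftop PV production
floor_area_m2 = 10_000
benchmark_eui_kwh_per_m2 = 130.0     # performance target from a benchmark data set

# Energy use intensity (EUI): net consumption normalized by floor area.
net_energy_kwh = annual_site_energy_kwh - on_site_generation_kwh
eui = net_energy_kwh / floor_area_m2          # kWh per m2 per year
meets_target = eui <= benchmark_eui_kwh_per_m2

print(eui)           # 110.0
print(meets_target)  # True
```

Normalizing by floor area is what makes buildings of different sizes comparable against a single benchmark value.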

  15. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: (1) pilot test benchmarking as an EM cost improvement tool; (2) identify areas for cost improvement and recommend actions to address these areas; and (3) provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  16. Benchmarking for controllere: metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels Erik; Dietrichson, Lars Grubbe

    2008-01-01

    Benchmarking enters into the management practice of both private and public organizations in many ways. In management accounting, benchmark-based indicators (or key ratios) are used, for example, when setting targets in performance contracts or to indicate the desired level of certain key ratios in a Balanced Scorecard or similar performance management models. The article explains the concept of benchmarking by presenting and discussing its different facets, and describes four different uses of benchmarking to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project. It then treats the difference between results benchmarking and process benchmarking, followed by the use of internal versus external benchmarking and the use of benchmarking in budgeting and budget follow-up.

  17. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  18. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors facilitate applying benchmark dose (BMD) methods to EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...
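    At its core, a benchmark dose is the dose at which a fitted dose-response model reaches a chosen benchmark response (BMR), often a 10% extra risk over background. The sketch below is not BMDS itself; it only illustrates the extra-risk definition, using linear interpolation between hypothetical fitted dose-response points:

```python
def extra_risk(p_d, p_0):
    # Extra risk over background: (P(d) - P(0)) / (1 - P(0)).
    return (p_d - p_0) / (1.0 - p_0)

def bmd_by_interpolation(doses, probs, bmr=0.10):
    """Dose at which extra risk first reaches the benchmark response (BMR),
    found by linear interpolation between fitted dose-response points."""
    p0 = probs[0]
    risks = [extra_risk(p, p0) for p in probs]
    pairs = list(zip(doses, risks))
    for (d1, r1), (d2, r2) in zip(pairs, pairs[1:]):
        if r1 <= bmr <= r2:
            return d1 + (bmr - r1) * (d2 - d1) / (r2 - r1)
    raise ValueError("BMR not reached within the modeled dose range")

# Hypothetical fitted dose-response curve (dose in mg/kg-day, response probability).
doses = [0.0, 10.0, 25.0, 50.0, 100.0]
probs = [0.05, 0.08, 0.14, 0.25, 0.45]

bmd10 = bmd_by_interpolation(doses, probs, bmr=0.10)  # dose at 10% extra risk
```

A real BMD analysis fits parametric models and also reports a lower confidence limit (BMDL), which this interpolation sketch deliberately omits.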

  19. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  20. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-08-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight into the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent to the different approaches.

  1. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? ... founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be close to irreversible...

  2. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and a good with unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  3. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  4. Benchmarking i eksternt regnskab og revision

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    ... continuously in a benchmarking process. This chapter broadly examines the extent to which the concept of benchmarking can justifiably be linked to external financial reporting and auditing. Section 7.1 deals with the external annual report, while section 7.2 addresses the field of auditing. The final section of the chapter summarises the considerations on benchmarking in connection with both areas.

  5. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduces the excess conservatism and brings the predictions closer to benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for 10, 20, 30, 40, 50, and 60 GWd/MTU and 3 cooling times 100 h, 5 years, and 15 years. These benchmark cases are analyzed with PARAGON and the SCALE package and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess that the 5% decrement approach is conservative for determining depletion uncertainty
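    The quantity being benchmarked here is the depletion reactivity decrement, i.e. the reactivity worth lost between fresh and depleted fuel. A minimal sketch of the bookkeeping, using the standard definition of reactivity (the k-eff values below are invented, and the 5% figure refers to the decrement approach described above):

```python
def reactivity(k_eff):
    # Reactivity in pcm: rho = (k - 1) / k, scaled by 1e5.
    return (k_eff - 1.0) / k_eff * 1e5

def depletion_reactivity_decrement(k_fresh, k_depleted):
    # Worth of the isotopic change from fresh to depleted fuel, in pcm.
    return reactivity(k_fresh) - reactivity(k_depleted)

# Hypothetical lattice k-eff values: fresh vs. depleted fuel after cooling.
k_fresh, k_depleted = 1.1000, 0.9500
decrement = depletion_reactivity_decrement(k_fresh, k_depleted)

# Under a flat 5% decrement approach, the depletion uncertainty applied in a
# criticality analysis would scale as 0.05 * decrement.
uncertainty_pcm = 0.05 * decrement
```

Comparing code-predicted decrements of this kind against the measurement-based EPRI benchmark values is what reveals whether the 5% allowance is conservative.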

  6. Benchmarks in Tacit Knowledge Skills Instruction

    DEFF Research Database (Denmark)

    Tackney, Charles T.; Strömgren, Ole; Sato, Toyoko

    2006-01-01

    While the knowledge management literature has addressed the explicit and tacit skills needed for successful performance in the modern enterprise, little attention has been paid to date in this particular literature as to how these wide-ranging skills may be suitably acquired during the course of an undergraduate business school education. This paper presents case analysis of the research-oriented participatory education curriculum developed at Copenhagen Business School because it appears uniquely suited, by a curious mix of Danish education tradition and deliberate innovation, to offer an educational experience more empowering of essential tacit knowledge skills than that found in educational institutions in other national settings. We specify the program forms and procedures for consensus-based governance and group work (as benchmarks) that demonstrably instruct undergraduates in the tacit skill...

  7. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-06-01

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test-induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise.

  8. Ad hoc committee on reactor physics benchmarks

    International Nuclear Information System (INIS)

    Diamond, D.J.; Mosteller, R.D.; Gehin, J.C.

    1996-01-01

    In the spring of 1994, an ad hoc committee on reactor physics benchmarks was formed under the leadership of two American Nuclear Society (ANS) organizations. The ANS-19 Standards Subcommittee of the Reactor Physics Division and the Computational Benchmark Problem Committee of the Mathematics and Computation Division had both seen a need for additional benchmarks to help validate computer codes used for light water reactor (LWR) neutronics calculations. Although individual organizations had employed various means to validate the reactor physics methods that they used for fuel management, operations, and safety, additional work in code development and refinement is under way, and to increase accuracy, there is a need for a corresponding increase in validation. Both organizations thought that there was a need to promulgate benchmarks based on measured data to supplement the LWR computational benchmarks that have been published in the past. By having an organized benchmark activity, the participants also gain by being able to discuss their problems and achievements with others traveling the same route

  9. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  10. MTCB: A Multi-Tenant Customizable database Benchmark

    NARCIS (Netherlands)

    van der Zijden, WIm; Hiemstra, Djoerd; van Keulen, Maurice

    2017-01-01

    We argue that there is a need for Multi-Tenant Customizable OLTP systems. Such systems need a Multi-Tenant Customizable Database (MTC-DB) as a backing. To stimulate the development of such databases, we propose the benchmark MTCB. Benchmarks for OLTP exist and multi-tenant benchmarks exist, but no

  11. Classical and modern control strategies for the deployment, reconfiguration, and station-keeping of the National Aeronautics and Space Administration (NASA) Benchmark Tetrahedron Constellation

    Science.gov (United States)

    Capo-Lugo, Pedro A.

    Formation flying consists of multiple spacecraft orbiting in a required configuration about a planet or through space. The National Aeronautics and Space Administration (NASA) Benchmark Tetrahedron Constellation is one of the proposed constellations to be launched in the year 2009 and provides the motivation for this investigation. The problem that will be researched here consists of three stages. The first stage contains the deployment of the satellites; the second stage is the reconfiguration process to transfer the satellites through different specific sizes of the NASA benchmark problem; and the third stage is the station-keeping procedure for the tetrahedron constellation. Every stage contains different control schemes and transfer procedures to obtain/maintain the proposed tetrahedron constellation. In the first stage, the deployment procedure will depend on a combination of two techniques in which impulsive maneuvers and a digital controller are used to deploy the satellites and to maintain the tetrahedron constellation at the following apogee point. The second stage, which corresponds to the reconfiguration procedure, uses a different control scheme in which intelligent control systems are implemented to perform this procedure. In this research work, intelligent systems will eliminate the use of complex mathematical models and will reduce the computational time to perform different maneuvers. Finally, the station-keeping process, which is the third stage of this research problem, will be implemented with a two-level hierarchical control scheme to maintain the separation distance constraints of the NASA Benchmark Tetrahedron Constellation. For this station-keeping procedure, the system of equations defining the dynamics of a pair of satellites is transformed to take into account the perturbation due to the oblateness of the Earth and the disturbances due to solar pressure. The control procedures used in this research will be transformed from a continuous

  12. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  13. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth.
Throughout

  14. Benchmarking the Netherlands. Benchmarking for growth

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity

  15. Hot steam header of a high temperature reactor as a benchmark problem

    International Nuclear Information System (INIS)

    Demierre, J.

    1990-01-01

    The International Atomic Energy Agency (IAEA) initiated a Coordinated Research Programme (CRP) on ''Design Codes for Gas-Cooled Reactor Components''. The specialists proposed to start with a benchmark design of a hot steam header in order to get a better understanding of the methods in the participating countries. The contribution of Switzerland was carried out by Sulzer. The following report summarizes the detailed calculations of the dimensioning procedure and analysis. (author). 5 refs, 2 figs, 2 tabs

  16. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  17. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  18. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    ...and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend the perception of benchmarking systems as secondary and derivative and instead study benchmarking as constitutive of social relations and as irredeemably social phenomena. I have attempted to do so in this paper by treating benchmarking using a calculative practice perspective, and describing how...

  19. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking of laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing. © 2013.

  20. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  1. Indicators System Creation For The Energy Efficiency Benchmarking Of Municipal Power System Facilities

    Directory of Open Access Journals (Sweden)

    Davydenko L.V.

    2015-04-01

    Full Text Available The paper considers the dataware for a comparative analysis procedure (benchmarking) that estimates the energy efficiency level of municipal power system facilities, taking into account the hierarchical structure of the heat supply system. The aim of the paper is to form a system of indicators characterizing the efficiency of energy use by objects at both the lowest and highest levels of power systems, proceeding from the features of their operation. The benchmarking methodology allows the energy efficiency level to be estimated on the basis of a plurality of parameters without generalizing them into one indicator, but it requires that the parameters be comparable. To enable implementation of the benchmarking procedure, a structuring of the objectives and tasks of the energy efficiency estimation problem has been proposed that uses available statistical information and requires no deep specification or additional inspection. This makes it possible to form subsets of indicators that specify the object of study in sufficient detail, taking into account the degree of abstraction for every hierarchical level or subproblem. For the comparative analysis of energy-use efficiency in municipal power systems at the highest levels of the hierarchy, a plurality of energy efficiency indicators has been formed. The indicators were determined with consideration of the structural elements of heat supply systems, while also reflecting the initial state of the objects, their operation, and the organization of energy resource accounting. Use of the proposed indicators supports monitoring of energy-use efficiency in the municipal power system and provides a complete overview of the problem.
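Comparability of heterogeneous indicators is the crux of such a benchmarking procedure. A minimal sketch of the min-max scaling commonly used to put indicators on a common footing; the heat-loss figures are invented for illustration, not data from the paper:

```python
def normalize(values):
    """Min-max scale a list of raw indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # All facilities identical on this indicator: no spread to rank by
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical heat-loss indicator (%) for four district-heating plants
heat_loss = [12.5, 9.1, 15.0, 10.3]
scores = normalize(heat_loss)
print([round(s, 2) for s in scores])  # [0.58, 0.0, 1.0, 0.2]
```

After scaling, indicators measured in different units (losses, specific fuel consumption, outage rates) can sit side by side in one benchmarking profile without being collapsed into a single composite score.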

  2. Benchmark for Evaluating Moving Object Indexes

    DEFF Research Database (Denmark)

    Chen, Su; Jensen, Christian Søndergaard; Lin, Dan

    2008-01-01

    Progress in science and engineering relies on the ability to measure, reliably and in detail, pertinent properties of artifacts under design. Progress in the area of database-index design thus relies on empirical studies based on prototype implementations of indexes. This paper proposes a benchmark that targets techniques for the indexing of the current and near-future positions of moving objects. This benchmark enables the comparison of existing and future indexing techniques. It covers important aspects of such indexes that have not previously been covered by any benchmark. Notable aspects covered include update efficiency, query efficiency, concurrency control, and storage requirements. Next, the paper applies the benchmark to half a dozen notable moving-object indexes, thus demonstrating the viability of the benchmark and offering new insight into the performance properties of the indexes.

  3. Is autoimmunology a discipline of its own? A big data-based bibliometric and scientometric analyses.

    Science.gov (United States)

    Watad, Abdulla; Bragazzi, Nicola Luigi; Adawi, Mohammad; Amital, Howard; Kivity, Shaye; Mahroum, Naim; Blank, Miri; Shoenfeld, Yehuda

    2017-06-01

    Autoimmunology is a super-specialty of immunology specifically dealing with autoimmune disorders. To assess the extant literature concerning autoimmune disorders, bibliometric and scientometric analyses (namely, research topics/keywords co-occurrence, journal co-citation, citations, and scientific output trends - both crude and normalized, authors network, leading authors, countries, and organizations analysis) were carried out using open-source software, namely, VOSviewer and SciCurve. A corpus of 169,519 articles containing the keyword "autoimmunity" was utilized, selecting PubMed/MEDLINE as bibliographic thesaurus. Journals specifically devoted to autoimmune disorders were six and covered approximately 4.15% of the entire scientific production. Compared with all the corpus (from 1946 on), these specialized journals have been established relatively few decades ago. Top countries were the United States, Japan, Germany, United Kingdom, Italy, China, France, Canada, Australia, and Israel. Trending topics are represented by the role of microRNAs (miRNAs) in the etiopathogenesis of autoimmune disorders, contributions of genetics and of epigenetic modifications, role of vitamins, management during pregnancy and the impact of gender. New subsets of immune cells have been extensively investigated, with a focus on interleukin production and release and on Th17 cells. Autoimmunology is emerging as a new discipline within immunology, with its own bibliometric properties, an identified scientific community and specifically devoted journals.
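The keyword co-occurrence analysis mentioned above counts how often two terms appear in the same record; tools such as VOSviewer then lay the resulting pair counts out as a map. A sketch of the counting step, with invented records and keywords rather than the study's corpus:

```python
from itertools import combinations
from collections import Counter

# Hypothetical keyword lists from three bibliographic records
records = [
    ["autoimmunity", "microRNA", "epigenetics"],
    ["autoimmunity", "Th17", "interleukin"],
    ["autoimmunity", "microRNA", "vitamins"],
]

# Count each unordered keyword pair appearing in the same record
cooccurrence = Counter()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

# The strongest link is the pair shared by the most records
print(cooccurrence.most_common(1))  # [(('autoimmunity', 'microRNA'), 2)]
```

Sorting the keywords before pairing makes `("autoimmunity", "microRNA")` and `("microRNA", "autoimmunity")` count as the same edge, which is what a co-occurrence map requires.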

  4. The emergence and evolution of school psychology literature: A scientometric analysis from 1907 through 2014.

    Science.gov (United States)

    Liu, Shuyan; Oakland, Thomas

    2016-03-01

    The objective of this current study is to identify the growth and development of scholarly literature that specifically references the term 'school psychology' in the Science Citation Index from 1907 through 2014. Documents from Web of Science were accessed and analyzed through the use of scientometric analyses, including HistCite and Pajek software, resulting in the identification of 4,806 scholars who contributed 3,260 articles in 311 journals. Whereas the database included journals from around the world, most articles were published by authors in the United States and in 20 journals, including the Journal of School Psychology, Psychology in the Schools, School Psychology Review, School Psychology International, and School Psychology Quarterly. Analyses of the database from the past century revealed that 20 of the most prolific scholars contributed 14% of all articles. Contributions from faculty and students at University of Minnesota-Twin Cities, University of Nebraska-Lincoln, University of South Carolina, University of Wisconsin-Madison, and University of Texas-Austin represented 10% of all articles including the term school psychology in the Science Citation Index. Relationships among some of the most highly cited articles are also described. Collectively, the series of analyses reported herein contribute to our understanding of scholarship in school psychology. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
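In the infrastructure described above, performance metrics are computed with SPARQL queries over RDF annotations. The arithmetic behind such queries reduces to set operations over gold and predicted annotations; a sketch with invented mutation mentions rather than the corpus's actual data:

```python
# Precision/recall/F1 of a hypothetical mutation text mining system
# against a manually curated gold standard (invented mutation IDs).

gold      = {"E545K", "H1047R", "G12D", "V600E"}   # curated mentions
predicted = {"E545K", "H1047R", "V600L"}           # system output

tp = len(gold & predicted)          # correctly extracted mutations
precision = tp / len(predicted)     # fraction of system output that is right
recall    = tp / len(gold)          # fraction of gold standard recovered
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 2), round(recall, 2), round(f1, 2))  # 0.67 0.5 0.57
```

The appeal of the SPARQL-based design is that exactly this computation can be expressed declaratively against the RDF store, so evaluating a new system needs a query rather than bespoke scoring code.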

  6. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  7. Benchmarking: A Process for Improvement.

    Science.gov (United States)

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, is partially solving that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  8. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator. (C) 2013 Elsevier Ltd. All rights reserved.

  9. Implementing a benchmarking and feedback concept decreases postoperative pain after total knee arthroplasty: A prospective study including 256 patients.

    Science.gov (United States)

    Benditz, A; Drescher, J; Greimel, F; Zeman, F; Grifka, J; Meißner, W; Völlner, F

    2016-12-05

    Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analysis and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in terms of activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and to 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and for increasing patient satisfaction after TKA.
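The feedback mechanism described above amounts to locating one's own indicator within the distribution of anonymized peer hospitals and tracking that rank over time. A toy sketch with invented pain scores, not the study's data:

```python
def rank_among_peers(own, peers):
    """1-based rank of own score, ascending (lower pain score = better)."""
    return 1 + sum(1 for p in peers if p < own)

# Hypothetical mean activity-related pain scores (0-10) of peer hospitals
peer_scores = [4.1, 3.6, 5.2, 4.8, 3.9]

print(rank_among_peers(3.2, peer_scores))  # 1 -> best of the cohort
print(rank_among_peers(4.5, peer_scores))  # 4
```

Reporting only the rank, not the identities behind the peer scores, is what allows such nationwide projects to give direct feedback while keeping participants anonymized.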

  10. Hospital benchmarking: are U.S. eye hospitals ready?

    Science.gov (United States)

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added

  11. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  12. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    In the paper, the specification of the WWER-1000 Burnup Credit Benchmark first phase (depletion calculations) is given. The second phase - criticality calculations for the WWER-1000 fuel pin cell - will be given after the evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field (Author)

  13. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    Burns, Phil; Jenkins, Cloda; Riechmann, Christoph

    2005-01-01

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick-based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated, and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  14. Benchmarking pediatric cranial CT protocols using a dose tracking software system: a multicenter study.

    Science.gov (United States)

    De Bondt, Timo; Mulkens, Tom; Zanca, Federica; Pyfferoen, Lotte; Casselman, Jan W; Parizel, Paul M

    2017-02-01

    To benchmark regional standard practice for paediatric cranial CT procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT data were retrospectively collected during a 1-year period in 3 different hospitals of the same country. A dose tracking system was used to automatically gather information. Dose (CTDI and DLP), scan length, number of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT procedures were collected. Although the median dose of each hospital was below national and international diagnostic reference levels (DRL) for all age categories, statistically significant differences in the delivered dose were observed between age groups and hospitals. Benchmarking showed that further dose optimization and standardization is possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. • Significant differences were observed in the delivered dose between age-groups and hospitals. • Using age-adapted scanning protocols gives a nearly linear dose increase. • Sharing dose-data can be a trigger for hospitals to reduce dose levels.
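    The stratification the study describes (dose by age band, medians compared against a diagnostic reference level) can be sketched as follows. The age bands, DRL values, and exam records below are hypothetical, not those of the study.

```python
from statistics import median

# Hypothetical exam records: (patient age in years, DLP in mGy*cm)
exams = [(0.5, 310), (0.5, 295), (3, 420), (4, 405),
         (8, 520), (9, 540), (14, 640), (15, 700)]

# Illustrative age bands, each with a made-up DRL: (low, high, drl)
bands = {"<1 y": (0, 1, 350), "1-5 y": (1, 5, 450),
         "5-10 y": (5, 10, 600), ">10 y": (10, 120, 750)}

def benchmark(exams, bands):
    """Per age band: median DLP and whether it sits below the DRL."""
    out = {}
    for name, (lo, hi, drl) in bands.items():
        vals = [dlp for age, dlp in exams if lo <= age < hi]
        if vals:
            m = median(vals)
            out[name] = (m, m < drl)
    return out

report = benchmark(exams, bands)
```

    A dose tracking system automates exactly this kind of aggregation across hospitals, which is what makes outliers (e.g. adult protocols applied to children) visible.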

  15. SP2Bench: A SPARQL Performance Benchmark

    Science.gov (United States)

    Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg

    A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is settled in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror vital key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.

  16. RB reactor benchmark cores

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

    A selected set of the RB reactor benchmark cores is presented in this paper. The first results of validation of the well-known Monte Carlo code MCNP™ and the adjoining neutron cross-section libraries are given. They confirm the idea behind the proposal of the new U-D2O criticality benchmark system and support the intention to include this system in the next edition of the recent OECD/NEA project, the International Handbook of Evaluated Criticality Safety Experiments, in the near future. (author)

  17. Evaluation of Scientific Outputs of Kashan University of Medical Sciences in Scopus Citation Database based on Scopus, ResearchGate, and Mendeley Scientometric Measures.

    Science.gov (United States)

    Batooli, Zahra; Ravandi, Somaye Nadi; Bidgoli, Mohammad Sabahi

    2016-02-01

    It is essential to evaluate the impact of scientific publications through citation analysis in citation indexes. In addition, scientometric measures drawn from social media should also be assessed. These measures include how many times publications were read, viewed, and downloaded. The present study aimed to assess the scientific output of scholars at Kashan University of Medical Sciences by the end of March 2014 based on scientometric measures from Scopus, ResearchGate, and Mendeley. A survey method was used to study the articles published in Scopus-indexed journals by scholars at Kashan University of Medical Sciences by the end of March 2014. The required data were collected from Scopus, ResearchGate, and Mendeley, and analyzed with descriptive statistics. The correlations between the number of views of articles in ResearchGate and their citation counts in Scopus, and between the reading frequency of articles in Mendeley and their citation counts in Scopus, were examined using the Spearman correlation in SPSS 16. Five hundred and thirty-three articles were indexed in the Scopus Citation Database by the end of March 2014. Collectively, those articles were cited 1,315 times. The articles were covered by ResearchGate (74%) more than by Mendeley (44%). In addition, 98% of the articles indexed in ResearchGate and 92% of the articles indexed in Mendeley were viewed at least once. The results showed a positive correlation between the number of views of the articles in ResearchGate and Mendeley and the number of citations of the articles in Scopus. Coverage and the number of visitors were higher in ResearchGate than in Mendeley. An increase in the number of views of articles in ResearchGate and Mendeley was also associated with an increase in the number of citations of the papers. Social networks such as ResearchGate and Mendeley can thus also be used as tools for the evaluation of academics and scholars based on the scientific research they have conducted.
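    The study computed Spearman correlations in SPSS 16; the same statistic can be sketched in plain Python (rank both variables, with ties sharing the mean rank, then take the Pearson correlation of the ranks). The view and citation counts below are hypothetical.

```python
def ranks(xs):
    """1-based average ranks; tied values share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of the 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-article counts: ResearchGate views vs Scopus citations
views = [120, 45, 300, 80, 15]
cites = [10, 2, 25, 5, 0]
rho = spearman(views, cites)
```

    Here the two variables rank the articles identically, so rho is 1.0; real view/citation data would give a weaker positive correlation of the kind the study reports.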

  18. A scientometric prediction of the discovery of the first potentially habitable planet with a mass similar to Earth.

    Directory of Open Access Journals (Sweden)

    Samuel Arbesman

    Full Text Available BACKGROUND: The search for a habitable extrasolar planet has long interested scientists, but only recently have the tools become available to search for such planets. In the past decades, the number of known extrasolar planets has ballooned into the hundreds, and with it, the expectation that the discovery of the first Earth-like extrasolar planet is not far off. METHODOLOGY/PRINCIPAL FINDINGS: Here, we develop a novel metric of habitability for discovered planets and use this to arrive at a prediction for when the first habitable planet will be discovered. Using a bootstrap analysis of currently discovered exoplanets, we predict the discovery of the first Earth-like planet to be announced in the first half of 2011, with the likeliest date being early May 2011. CONCLUSIONS/SIGNIFICANCE: Our predictions, using only the properties of previously discovered exoplanets, accord well with external estimates for the discovery of the first potentially habitable extrasolar planet and highlight the usefulness of predictive scientometric techniques for understanding the pace of scientific discovery in many fields.
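    A toy version of this style of scientometric prediction, assuming a scalar habitability score that trends toward 1.0 as discoveries become more Earth-like: fit a linear trend to (year, best score so far) points and bootstrap the year at which the trend crosses 1.0. The data points, score definition, and linear model are all illustrative, not the authors' metric.

```python
import random

# Hypothetical (year, best habitability score to date) pairs
data = [(1995, 0.30), (1998, 0.40), (2001, 0.48), (2004, 0.60),
        (2007, 0.72), (2010, 0.85)]

def fit_line(pts):
    """Ordinary least-squares slope and intercept."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    b = sum((x - mx) * (y - my) for x, y in pts) / \
        sum((x - mx) ** 2 for x, _ in pts)
    return b, my - b * mx

def crossing_year(pts, target=1.0):
    """Year at which the fitted trend reaches the target score."""
    b, a = fit_line(pts)
    return (target - a) / b

def bootstrap_median_year(pts, n_boot=2000, seed=7):
    """Bootstrap the crossing year by resampling points with replacement."""
    rng = random.Random(seed)
    years = []
    for _ in range(n_boot):
        sample = [rng.choice(pts) for _ in pts]
        if len({x for x, _ in sample}) > 1:   # need spread in x to fit a slope
            years.append(crossing_year(sample))
    years.sort()
    return years[len(years) // 2]             # median prediction

point = crossing_year(data)
med = bootstrap_median_year(data)
```

    The spread of the bootstrap distribution, not just its median, is what lets this approach attach a "likeliest date" to the prediction.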

  19. Benchmarking specialty hospitals, a scoping review on theory and practice.

    Science.gov (United States)

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was merely described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model used benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking for improving quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.

  20. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no-slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  1. Development of a California commercial building benchmarking database

    International Nuclear Information System (INIS)

    Kinney, Satkartar; Piette, Mary Ann

    2002-01-01

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database
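    Benchmarking a building against survey data of this kind often reduces to a percentile ranking of its energy use intensity (EUI) within a peer group; a minimal sketch, with hypothetical EUIs standing in for CEUS/CBECS records:

```python
def percentile_rank(value, population):
    """Percent of surveyed buildings whose EUI is below the
    candidate building's value (higher means more energy-intensive
    than more of its peers)."""
    below = sum(1 for v in population if v < value)
    return 100.0 * below / len(population)

# Hypothetical survey EUIs (kBtu/sqft/yr) for a peer group of offices
survey_euis = [45, 52, 60, 63, 70, 74, 81, 88, 95, 110]

rank = percentile_rank(72, survey_euis)   # candidate building at 72 kBtu/sqft/yr
```

    Regional data matter precisely because the peer population changes the ranking: the same EUI can look efficient against a national sample and inefficient against a mild-climate California sample.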

  2. How benchmarking can improve patient nutrition.

    Science.gov (United States)

    Ellis, Jane

    Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.

  3. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  4. Analytic Validation of Immunohistochemistry Assays: New Benchmark Data From a Survey of 1085 Laboratories.

    Science.gov (United States)

    Stuart, Lauren N; Volmar, Keith E; Nowak, Jan A; Fatheree, Lisa A; Souers, Rhona J; Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Astles, J Rex; Nakhleh, Raouf E

    2017-09-01

    - A cooperative agreement between the College of American Pathologists (CAP) and the United States Centers for Disease Control and Prevention was undertaken to measure laboratories' awareness and implementation of an evidence-based laboratory practice guideline (LPG) on immunohistochemical (IHC) validation practices published in 2014. - To establish new benchmark data on IHC laboratory practices. - A 2015 survey on IHC assay validation practices was sent to laboratories subscribed to specific CAP proficiency testing programs and to additional nonsubscribing laboratories that perform IHC testing. Specific questions were designed to capture laboratory practices not addressed in a 2010 survey. - The analysis was based on responses from 1085 laboratories that perform IHC staining. Ninety-six percent (809 of 844) always documented validation of IHC assays. Sixty percent (648 of 1078) had separate procedures for predictive and nonpredictive markers, 42.7% (220 of 515) had procedures for laboratory-developed tests, 50% (349 of 697) had procedures for testing cytologic specimens, and 46.2% (363 of 785) had procedures for testing decalcified specimens. Minimum case numbers were specified by 85.9% (720 of 838) of laboratories for nonpredictive markers and 76% (584 of 768) for predictive markers. Median concordance requirements were 95% for both types. For initial validation, 75.4% (538 of 714) of laboratories adopted the 20-case minimum for nonpredictive markers and 45.9% (266 of 579) adopted the 40-case minimum for predictive markers as outlined in the 2014 LPG. The most common method for validation was correlation with morphology and expected results. Laboratories also reported which assay changes necessitated revalidation and their minimum case requirements. - Benchmark data on current IHC validation practices and procedures may help laboratories understand the issues and influence further refinement of LPG recommendations.

  5. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA-1992 ''Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core'' problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor, as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated

  6. Benchmark Imagery FY11 Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-14

    This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.

  7. Review for session K - benchmarks

    International Nuclear Information System (INIS)

    McCracken, A.K.

    1980-01-01

    Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of Benchmarks and will be considered individually

  8. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These along with regulatory experience suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly
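    One widely used method under the frontier paradigm the abstract names is corrected ordinary least squares (COLS): fit an average cost function, shift it down to the best performer's residual, and score each firm as frontier cost over actual cost. A minimal sketch with hypothetical utility cost and output data (the study itself surveys many such methods; this is only one):

```python
def ols(xs, ys):
    """Ordinary least-squares intercept and slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def cols_efficiency(outputs, costs):
    """COLS frontier: shift the fitted cost line to the most negative
    residual, then score each firm as frontier cost / actual cost
    (1.0 = on the frontier, lower = less cost-efficient)."""
    a, b = ols(outputs, costs)
    residuals = [c - (a + b * q) for q, c in zip(outputs, costs)]
    shift = min(residuals)                 # best performer defines the frontier
    frontier = [a + shift + b * q for q in outputs]
    return [f / c for f, c in zip(frontier, costs)]

# Hypothetical utilities: delivered output vs total cost
outputs = [10, 20, 30, 40]
costs   = [120, 200, 310, 380]
scores = cols_efficiency(outputs, costs)
```

    Under the competitive market paradigm the same scores might instead be compared to an industry average rather than to the single best performer; that choice is exactly the kind of standard-selection issue the paper discusses.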

  9. Benchmarking in Thoracic Surgery. Third Edition.

    Science.gov (United States)

    Freixinet Gilart, Jorge; Varela Simó, Gonzalo; Rodríguez Suárez, Pedro; Embún Flor, Raúl; Rivas de Andrés, Juan José; de la Torre Bravos, Mercedes; Molins López-Rodó, Laureano; Pac Ferrer, Joaquín; Izquierdo Elena, José Miguel; Baschwitz, Benno; López de Castro, Pedro E; Fibla Alfara, Juan José; Hernando Trancho, Florentino; Carvajal Carrasco, Ángel; Canalís Arrayás, Emili; Salvatierra Velázquez, Ángel; Canela Cardona, Mercedes; Torres Lanzas, Juan; Moreno Mata, Nicolás

    2016-04-01

    Benchmarking entails continuous comparison of efficacy and quality among products and activities, with the primary objective of achieving excellence. To analyze the results of benchmarking performed in 2013 on clinical practices undertaken in 2012 in 17 Spanish thoracic surgery units. Study data were obtained from the basic minimum data set for hospitalization, registered in 2012. Data from hospital discharge reports were submitted by the participating groups, but staff from the corresponding departments did not intervene in data collection. Study cases all involved hospital discharges recorded in the participating sites. Episodes included were respiratory surgery (Major Diagnostic Category 04, Surgery), and those of the thoracic surgery unit. Cases were labelled using codes from the International Classification of Diseases, 9th revision, Clinical Modification. The refined diagnosis-related groups classification was used to evaluate differences in severity and complexity of cases. General parameters (number of cases, mean stay, complications, readmissions, mortality, and activity) varied widely among the participating groups. Specific interventions (lobectomy, pneumonectomy, atypical resections, and treatment of pneumothorax) also varied widely. As in previous editions, practices among participating groups varied considerably. Some areas for improvement emerge: admission processes need to be standardized to avoid urgent admissions and to improve pre-operative care; hospital discharges should be streamlined and discharge reports improved by including all procedures and complications. Some units have parameters which deviate excessively from the norm, and these sites need to review their processes in depth. Coding of diagnoses and comorbidities is another area where improvement is needed. Copyright © 2015 SEPAR. Published by Elsevier Espana. All rights reserved.

  10. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.

  11. 40 CFR 141.172 - Disinfection profiling and benchmarking.

    Science.gov (United States)

    2010-07-01

    ... benchmarking. 141.172 Section 141.172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Disinfection-Systems Serving 10,000 or More People § 141.172 Disinfection profiling and benchmarking. (a... sanitary surveys conducted by the State. (c) Disinfection benchmarking. (1) Any system required to develop...

  12. Raising Quality and Achievement. A College Guide to Benchmarking.

    Science.gov (United States)

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  13. Prismatic Core Coupled Transient Benchmark

    International Nuclear Information System (INIS)

    Ortensi, J.; Pope, M.A.; Strydom, G.; Sen, R.S.; DeHart, M.D.; Gougar, H.D.; Ellis, C.; Baxter, A.; Seker, V.; Downar, T.J.; Vierow, K.; Ivanov, K.

    2011-01-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  14. Space network scheduling benchmark: A proof-of-concept process for technology transfer

    Science.gov (United States)

    Moe, Karen; Happell, Nadine; Hayden, B. J.; Barclay, Cathy

    1993-01-01

    This paper describes a detailed proof-of-concept activity to evaluate flexible scheduling technology as implemented in the Request Oriented Scheduling Engine (ROSE) and applied to Space Network (SN) scheduling. The criteria developed for an operational evaluation of a reusable scheduling system is addressed including a methodology to prove that the proposed system performs at least as well as the current system in function and performance. The improvement of the new technology must be demonstrated and evaluated against the cost of making changes. Finally, there is a need to show significant improvement in SN operational procedures. Successful completion of a proof-of-concept would eventually lead to an operational concept and implementation transition plan, which is outside the scope of this paper. However, a high-fidelity benchmark using actual SN scheduling requests has been designed to test the ROSE scheduling tool. The benchmark evaluation methodology, scheduling data, and preliminary results are described.

  15. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality or work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for managers to continuously improve their firm's efficiency and effectiveness, and their need to know the success factors and competitiveness determinants, determine what performance measures are most critical in assessing their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent for operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe, and then proposes a method to forecast and benchmark, performance.

  16. Benchmarking of refinery emissions performance : Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs
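    The normalization step the study describes, in its simplest form, is an emission intensity: emissions divided by throughput, which makes refineries of different sizes comparable. The refinery names and figures below are hypothetical, far cruder than the study's 74 correlations:

```python
def intensity(emissions_t, throughput_kbbl):
    """Emission intensity in tonnes per thousand barrels processed."""
    return emissions_t / throughput_kbbl

# Hypothetical refineries: (tonnes SOx per year, throughput in kbbl per year)
refineries = {
    "A": (5200, 36500),
    "B": (2100, 21900),
    "C": (900, 14600),
}

intensities = {k: intensity(e, t) for k, (e, t) in refineries.items()}
benchmark = min(intensities.values())         # best performer sets the benchmark
gap = {k: v / benchmark for k, v in intensities.items()}  # multiple of the benchmark
```

    Real benchmarking must also correct for crude slate and refinery complexity, which is why the study develops correlations rather than a single intensity ratio.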

  17. Deflection-based method for seismic response analysis of concrete walls: Benchmarking of CAMUS experiment

    International Nuclear Information System (INIS)

    Basu, Prabir C.; Roshan, A.D.

    2007-01-01

    A number of shake table tests had been conducted on the scaled-down model of a concrete wall as part of the CAMUS experiment. The experiments were conducted between 1996 and 1998 in the CEA facilities in Saclay, France. Benchmarking of the CAMUS experiments was undertaken as part of the coordinated research program on 'Safety Significance of Near-Field Earthquakes' organised by the International Atomic Energy Agency (IAEA). The technique of the deflection-based method was adopted for the benchmarking exercise. The non-linear static procedure of the deflection-based method has two basic steps: pushover analysis, and determination of the target displacement or performance point. Pushover analysis is an analytical procedure to assess the capacity to withstand seismic loading that a structural system can offer, considering redundancies and inelastic deformation. The outcome of a pushover analysis is the force-displacement (base shear versus top/roof displacement) curve of the structure, obtained by step-by-step non-linear static analysis of the structure with increasing load. The second step is to determine the target displacement, also known as the performance point: the likely maximum displacement of the structure due to a specified seismic input motion. Established procedures, FEMA-273 and ATC-40, are available to determine this maximum deflection. The responses of the CAMUS test specimen were determined by the deflection-based method, and the analytically calculated values compare well with the test results
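    The target-displacement step can be sketched with the coefficient method of FEMA-273 in simplified form. All modification coefficients and the spectral inputs below are assumed for illustration, not taken from the CAMUS benchmark:

```python
import math

def target_displacement(te, sa_g, c0=1.3, c1=1.0, c2=1.0, c3=1.0, g=9.81):
    """FEMA-273-style coefficient method (simplified): target roof
    displacement in metres from the effective period Te (s) and the
    spectral acceleration Sa at Te (in units of g), scaled by the
    modification coefficients C0..C3."""
    return c0 * c1 * c2 * c3 * sa_g * g * (te / (2 * math.pi)) ** 2

# Assumed idealized pushover result for a wall specimen
te = 0.45    # effective period, s (assumed)
sa = 0.8     # spectral acceleration at Te, in g (assumed)
dt = target_displacement(te, sa)   # target roof displacement, m
```

    The target displacement is then read against the pushover curve: the base shear at that roof displacement is the demand checked against the wall's capacity.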

  18. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require that the performance of these mechanisms be measured. While TPC-E measures the recovery time from some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  19. ZZ ECN-BUBEBO, ECN-Petten Burnup Benchmark Book, Inventories, Afterheat

    International Nuclear Information System (INIS)

    Kloosterman, Jan Leen

    1999-01-01

    Description of program or function: Contains experimental benchmarks which can be used for the validation of burnup code systems and accompanying data libraries. Although the benchmarks presented here are thoroughly described in the literature, it is in many cases not straightforward to retrieve unambiguously the correct input data and corresponding results from the benchmark descriptions. Furthermore, results that can easily be measured are sometimes difficult to calculate because of the conversions involved. Therefore, emphasis has been put on clarifying the input of the benchmarks and on presenting the benchmark results in such a way that they can easily be calculated and compared. For more thorough descriptions of the benchmarks themselves, the literature referred to here should be consulted. This benchmark book is divided into 11 chapters/files containing the following in text and tabular form: chapter 1: Introduction; chapter 2: Burnup Credit Criticality Benchmark Phase 1-B; chapter 3: Yankee-Rowe Core V Fuel Inventory Study; chapter 4: H.B. Robinson Unit 2 Fuel Inventory Study; chapter 5: Turkey Point Unit 3 Fuel Inventory Study; chapter 6: Turkey Point Unit 3 Afterheat Power Study; chapter 7: Dickens Benchmark on Fission Product Energy Release of U-235; chapter 8: Dickens Benchmark on Fission Product Energy Release of Pu-239; chapter 9: Yarnell Benchmark on Decay Heat Measurements of U-233; chapter 10: Yarnell Benchmark on Decay Heat Measurements of U-235; chapter 11: Yarnell Benchmark on Decay Heat Measurements of Pu-239

  20. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performance through our results, especially in the thick-target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of the physical models used in our codes. Thereafter, a scheme of a radioactive waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  1. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  2. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
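    As a rough illustration of the solver HPCG exercises, the sketch below runs a conjugate gradient iteration preconditioned by one symmetric Gauss-Seidel sweep (forward then backward) on a small dense system. HPCG itself applies this to a large sparse problem with an additive Schwarz decomposition across processes, which this toy version omits entirely.

    ```python
    def symgs(A, r):
        """One forward + one backward Gauss-Seidel sweep, approximating M^-1 r
        for the symmetric Gauss-Seidel preconditioner M = (D+L) D^-1 (D+U)."""
        n = len(r)
        z = [0.0] * n
        for sweep in (range(n), range(n - 1, -1, -1)):  # forward, then backward
            for i in sweep:
                s = sum(A[i][j] * z[j] for j in range(n) if j != i)
                z[i] = (r[i] - s) / A[i][i]
        return z

    def pcg(A, b, tol=1e-10, max_iter=100):
        """Preconditioned conjugate gradient for a symmetric positive
        definite matrix A, starting from x = 0."""
        n = len(b)
        x = [0.0] * n
        r = b[:]                       # residual of the zero initial guess
        z = symgs(A, r)
        p = z[:]
        rz = sum(ri * zi for ri, zi in zip(r, z))
        for _ in range(max_iter):
            Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
            alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            r = [ri - alpha * api for ri, api in zip(r, Ap)]
            if sum(ri * ri for ri in r) ** 0.5 < tol:
                break
            z = symgs(A, r)
            rz_new = sum(ri * zi for ri, zi in zip(r, z))
            p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
            rz = rz_new
        return x
    ```
    
    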

  3. [Do you mean benchmarking?].

    Science.gov (United States)

    Bonnet, F; Solignac, S; Marty, J

    2008-03-01

    The purpose of benchmarking is to set up improvement processes by comparing activities against quality standards. The proposed methodology is illustrated by benchmark business cases performed in healthcare facilities, on items such as nosocomial infections or the organization of surgical facilities. Moreover, the authors have built a specific graphic tool, enhanced with balanced-score numbers and mappings, so that the comparison between different anesthesia-reanimation services willing to start an improvement program is easy and relevant. This ready-made application is all the more accurate where detailed activity tariffs are implemented.

  4. Benchmarking in digital circuit design automation

    NARCIS (Netherlands)

    Jozwiak, L.; Gawlowski, D.M.; Slusarczyk, A.S.

    2008-01-01

    This paper focuses on benchmarking, which is the main experimental approach to the design method and EDA-tool analysis, characterization and evaluation. We discuss the importance and difficulties of benchmarking, as well as the recent research effort related to it. To resolve several serious

  5. Benchmarking, Total Quality Management, and Libraries.

    Science.gov (United States)

    Shaughnessy, Thomas W.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) in higher education and academic libraries focuses on the identification, collection, and use of reliable data. Methods for measuring quality, including benchmarking, are described; performance measures are considered; and benchmarking techniques are examined. (11 references) (MES)

  6. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II.

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report

  7. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  8. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted, and the future of the two projects is discussed.

  9. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    International Nuclear Information System (INIS)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-01-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR-06 are highlighted, and the future of the two projects is discussed

  10. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings.

  11. Analysis of a molten salt reactor benchmark

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Bajpai, Anil; Degweker, S.B.

    2013-01-01

    This paper discusses results of our studies of an IAEA molten salt reactor (MSR) benchmark. The benchmark, proposed by Japan, involves burnup calculations of a single lattice cell of a MSR for burning plutonium and other minor actinides. We have analyzed this cell with in-house developed burnup codes BURNTRAN and McBURN. This paper also presents a comparison of the results of our codes and those obtained by the proposers of the benchmark. (author)

  12. Diabetes research in Middle East countries; a scientometrics study from 1990 to 2012

    Directory of Open Access Journals (Sweden)

    Niloofar Peykari

    2015-01-01

    Background: The burden of diabetes is a serious warning that urgent action plans are needed across the world. Knowledge production in this context can provide evidence for more efficient interventions. To that end, we quantified the trend of diabetes research outputs of Middle East countries, focusing on the numbers of scientific publications, citations, and international collaborations. Materials and Methods: This scientometrics study was performed as a systematic analysis of three international databases, ISI, PubMed, and Scopus, from 1990 to 2012. International collaboration of Middle East countries and citations were analyzed based on Scopus. Diabetes publications in Iran were specifically assessed, and frequently used terms were mapped with the VOSviewer software. Results: Over the 23-year period, the number of diabetes publications and related citations in Middle East countries showed an increasing trend. The numbers of articles on diabetes in ISI, PubMed, and Scopus were 13,994, 11,336, and 20,707, respectively. Turkey, Israel, Iran, Saudi Arabia, and Egypt occupied the top five positions. In addition, Israel, Turkey, and Iran were the leading countries in the citation analysis. The country collaborating most with Middle East countries was the USA; within the region, it was Saudi Arabia. Iran stood in third position in all databases and produced 12.7% of the diabetes publications within the region. The most frequently used terms in Iranian diabetes articles were "effect," "woman," and "metabolic syndrome." Conclusion: The ascending trend of diabetes research outputs in Middle East countries is appreciated, but strategic planning is needed to maintain this trend, and more collaboration between researchers is needed for regional health promotion.

  13. Scientometric analyses of studies on the role of innate variation in athletic performance.

    Science.gov (United States)

    Lombardo, Michael P; Emiah, Shadie

    2014-01-01

    Historical events have produced an ideologically charged atmosphere in the USA surrounding the potential influences of innate variation on athletic performance. We tested the hypothesis that scientific studies of the role of innate variation in athletic performance were less likely to have authors with USA addresses than addresses elsewhere because of this cultural milieu. Using scientometric data collected from 290 scientific papers published in peer-reviewed journals from 2000-2012, we compared the proportions of authors with USA addresses with those that listed addresses elsewhere that studied the relationships between athletic performance and (a) prenatal exposure to androgens, as indicated by the ratio between digits 2 and 4, and (b) the genotypes for angiotensin converting enzyme, α-actinin-3, and myostatin; traits often associated with athletic performance. Authors with USA addresses were disproportionately underrepresented on papers about the role of innate variation in athletic performance. We searched NIH and NSF databases for grant proposals solicited or funded from 2000-2012 to determine if the proportion of authors that listed USA addresses was associated with funding patterns. NIH did not solicit grant proposals designed to examine these factors in the context of athletic performance and neither NIH nor NSF funded grants designed to study these topics. We think the combined effects of a lack of government funding and the avoidance of studying controversial or non-fundable topics by USA based scientists are responsible for the observation that authors with USA addresses were underrepresented on scientific papers examining the relationships between athletic performance and innate variation.

  14. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work

  15. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark.

    Science.gov (United States)

    Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W

    2017-08-28

    The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how, or how many, implants should be statistically compared with a benchmark to assess whether that implant is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. Design: simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population, and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess if a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining if an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. When benchmarking implant performance, net failure estimated using 1-Kaplan-Meier is preferable to crude failure estimated by competing risk models.
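    The simplest of the three analyses, the one-sample z-test, can be sketched as below: the implant's observed failure proportion is tested against the external benchmark plus the non-inferiority margin. The numbers in the test, and the crude-proportion formulation itself, are illustrative assumptions; the 1-Kaplan-Meier variant the paper recommends would replace the crude proportion with the KM net-failure estimate and a Greenwood variance.

    ```python
    from math import sqrt
    from statistics import NormalDist

    def non_inferiority_z(failures, n, benchmark, margin, alpha=0.05):
        """One-sample non-inferiority z-test for a failure proportion.

        H0: p >= benchmark + margin (implant inferior)
        H1: p <  benchmark + margin (implant non-inferior)
        Rejecting H0 (small one-sided p-value) declares non-inferiority.
        """
        p_hat = failures / n                 # crude observed failure proportion
        p0 = benchmark + margin              # non-inferiority boundary
        se = sqrt(p0 * (1 - p0) / n)         # SE under the boundary hypothesis
        z = (p_hat - p0) / se
        p_value = NormalDist().cdf(z)        # one-sided: low failure is good
        return z, p_value, p_value < alpha
    ```

    For example, with a 5% benchmark failure rate and a 2% margin, 20 failures in 3200 procedures would declare non-inferiority, whereas 250 failures would not.
    
    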

  16. Benchmarking: contexts and details matter.

    Science.gov (United States)

    Zheng, Siyuan

    2017-07-05

    Benchmarking is an essential step in the development of computational tools. We take this opportunity to pitch in our opinions on tool benchmarking, in light of two correspondence articles published in Genome Biology. Please see related Li et al. and Newman et al. correspondence articles: www.dx.doi.org/10.1186/s13059-017-1256-5 and www.dx.doi.org/10.1186/s13059-017-1257-4.

  17. Benchmark analysis of MCNP™ ENDF/B-VI iron

    International Nuclear Information System (INIS)

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets

  18. Analysis of an OECD/NEA high-temperature reactor benchmark

    International Nuclear Information System (INIS)

    Hosking, J. G.; Newton, T. D.; Koeberl, O.; Morris, P.; Goluoglu, S.; Tombakoglu, T.; Colak, U.; Sartori, E.

    2006-01-01

    This paper describes analyses of the OECD/NEA HTR benchmark organized by the 'Working Party on the Scientific Issues of Reactor Systems (WPRS)', formerly the 'Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles'. The benchmark was specifically designed to provide inter-comparisons for plutonium and thorium fuels when used in HTR systems. Calculations considering uranium fuel have also been included in the benchmark, in order to identify any increased uncertainties when using plutonium or thorium fuels. The benchmark consists of five phases, which include cell and whole-core calculations. Analysis of the benchmark has been performed by a number of international participants, who have used a range of deterministic and Monte Carlo code schemes. For each of the benchmark phases, neutronics parameters have been evaluated. Comparisons are made between the results of the benchmark participants, as well as comparisons between the predictions of the deterministic calculations and those from detailed Monte Carlo calculations. (authors)

  19. Obesity Researches Over the Past 24 years: A Scientometrics Study in Middle East Countries.

    Science.gov (United States)

    Djalalinia, Shirin; Peykari, Niloofar; Qorbani, Mostafa; Moghaddam, Sahar Saeedi; Larijani, Bagher; Farzadfar, Farshad

    2015-01-01

    Researchers, practitioners, and policy-makers call for updated, valid evidence to monitor, prevent, and control the alarming trends of obesity. We quantified the trends of obesity/overweight research outputs of Middle East countries. We systematically searched the Scopus database, the only source for multidisciplinary citation reports with the widest coverage of the health and biomedicine disciplines, for all related obesity/overweight publications from 1990 to 2013. This scientometric analysis assessed the trends of scientific products, citations, and collaborative papers in Middle East countries. We also provide information on the top institutions, journals, and collaborative research centers in the field of obesity/overweight. Over the 24-year period, the number of obesity/overweight publications and related citations in Middle East countries showed an increasing trend. Globally, during 1990-2013, 415,126 papers were published, of which 3.56% were affiliated with Middle East countries. Within the region, Iran (26.27%) held third position after Turkey (47.94%) and Israel (35.25%). Israel, Turkey, and Iran were the leading countries in the citation analysis. The country collaborating most with Middle East countries was the USA; within the region, it was Saudi Arabia. Despite the ascending trends in research outputs, more effort is required to promote collaborative partnerships. The results could be useful for better health policy and more planned studies in this field, and could also be used for future complementary analyses.

  20. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  1. MoMaS reactive transport benchmark using PFLOTRAN

    Science.gov (United States)

    Park, H.

    2017-12-01

    The MoMaS benchmark was developed to enhance numerical simulation capability for reactive transport modeling in porous media. The benchmark was published in late September 2009; its tests are not taken from a real chemical system, but are realistic and numerically challenging. PFLOTRAN is a state-of-the-art, massively parallel subsurface flow and reactive transport code that is being used in multiple nuclear waste repository projects at Sandia National Laboratories, including the Waste Isolation Pilot Plant and Used Fuel Disposition. The MoMaS benchmark comprises three independent tests of easy, medium, and hard chemical complexity. This paper demonstrates how PFLOTRAN is applied to this benchmark exercise and shows results of the easy test case, which includes mixing of aqueous components and surface complexation. The surface complexations consist of monodentate and bidentate reactions, which introduces difficulty in defining the selectivity coefficient if the reaction applies to a bulk reference volume: for a bidentate reaction in heterogeneous porous media, the selectivity coefficient becomes porosity dependent. The benchmark was solved by PFLOTRAN with minimal modification to address this issue, and unit conversions were made to suit PFLOTRAN.

  2. Benchmarking pediatric cranial CT protocols using a dose tracking software system: a multicenter study

    Energy Technology Data Exchange (ETDEWEB)

    Bondt, Timo de; Parizel, Paul M. [Antwerp University Hospital and University of Antwerp, Department of Radiology, Antwerp (Belgium); Mulkens, Tom [H. Hart Hospital, Department of Radiology, Lier (Belgium); Zanca, Federica [GE Healthcare, DoseWatch, Buc (France); KU Leuven, Imaging and Pathology Department, Leuven (Belgium); Pyfferoen, Lotte; Casselman, Jan W. [AZ St. Jan Brugge-Oostende AV Hospital, Department of Radiology, Brugge (Belgium)

    2017-02-15

    To benchmark regional standard practice for paediatric cranial CT-procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT-data were retrospectively collected during a 1-year period, in 3 different hospitals of the same country. A dose tracking system was used to automatically gather information. Dose (CTDI and DLP), scan length, amount of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT-procedures were collected. Although the median dose of each hospital was below national and international diagnostic reference level (DRL) for all age categories, statistically significant (p-value < 0.001) dose differences among hospitals were observed. The hospital with lowest dose levels showed smallest dose variability and used age-stratified protocols for standardizing paediatric head exams. Erroneous selection of adult protocols for children still occurred, mostly in the oldest age-group. Even though all hospitals complied with national and international DRLs, dose tracking and benchmarking showed that further dose optimization and standardization is possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. (orig.)

  3. Benchmarking whole-building energy performance with multi-criteria technique for order preference by similarity to ideal solution using a selective objective-weighting approach

    International Nuclear Information System (INIS)

    Wang, Endong

    2015-01-01

    Highlights: • A TOPSIS based multi-criteria whole-building energy benchmarking is developed. • A selective objective-weighting procedure is used for a cost-accuracy tradeoff. • Results from a real case validated the benefits of the presented approach. - Abstract: This paper develops a robust multi-criteria Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) based building energy efficiency benchmarking approach. The approach is explicitly selective, to address the multicollinearity trap arising from subjectivity in selecting energy variables, and considers the cost-accuracy trade-off. It objectively weights the relative importance of the individual efficiency-measuring criteria using either multiple linear regression or principal component analysis, contingent on metadata quality. Through this approach, building energy performance is comprehensively evaluated and optimized, while the significant challenges associated with conventional single-criterion benchmarking models are avoided. Together with a clustering algorithm applied to a three-year panel dataset, a benchmarking case of 324 single-family dwellings demonstrated the improved robustness of the presented multi-criteria benchmarking approach over conventional single-criterion ones
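    The core TOPSIS ranking step can be sketched as follows. Buildings are rows and efficiency criteria are columns; the weights are taken as given inputs here, whereas the paper derives them objectively from multiple linear regression or principal component analysis.

    ```python
    from math import sqrt

    def topsis(matrix, weights, benefit):
        """Score alternatives by relative closeness to the ideal solution.

        matrix : list of rows (one per building), one value per criterion
        weights: criterion weights (assumed given, summing to 1)
        benefit: per criterion, True if larger is better, False if smaller is
        Returns one score per row; higher = closer to ideal = better.
        """
        ncrit = len(weights)
        # vector-normalize each column, then apply the weights
        norms = [sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncrit)]
        v = [[weights[j] * row[j] / norms[j] for j in range(ncrit)]
             for row in matrix]
        # ideal and anti-ideal points, per criterion direction
        ideal = [max(col) if benefit[j] else min(col)
                 for j, col in enumerate(zip(*v))]
        anti = [min(col) if benefit[j] else max(col)
                for j, col in enumerate(zip(*v))]
        scores = []
        for row in v:
            d_pos = sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
            d_neg = sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
            scores.append(d_neg / (d_pos + d_neg))
        return scores
    ```

    With, say, energy use per floor area as a cost criterion and a comfort index as a benefit criterion, a building that dominates on both receives the highest score.
    
    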

  4. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized, and the purposes of benchmark selection are investigated. The result of this analysis is the formulation of criteria for benchmark selection for flexible multibody formalisms. Based on these criteria, an initial set of suitable benchmarks is described. In addition, the evaluation measures are revised and extended.

  5. Criticality safety benchmarking of PASC-3 and ECNJEF1.1

    International Nuclear Information System (INIS)

    Li, J.

    1992-09-01

To validate the code system PASC-3 and the multigroup cross-section library ECNJEF1.1 for various applications, many benchmarks are required. This report presents the results of criticality safety benchmarking for five calculational and four experimental benchmarks. These benchmarks are related to the transport packaging of fissile materials such as spent fuel. The fissile nuclides in these benchmarks are 235U and 239Pu. The modules of PASC-3 used for the calculations are BONAMI, NITAWL and KENO.5A. The final results for the experimental benchmarks agree well with experimental data. For the calculational benchmarks, the results presented here are in reasonable agreement with the results from other investigations. (author). 8 refs.; 20 figs.; 5 tabs.

  6. Revaluering benchmarking - A topical theme for the construction industry

    OpenAIRE

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research on, perceptions of, and uses of benchmarking are valued so strongly and uniformly that what may seem valuable actually deters researchers and practitioners from studying and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in...

  7. A Benchmark of Lidar-Based Single Tree Detection Methods Using Heterogeneous Forest Data from the Alpine Space

    Directory of Open Access Journals (Sweden)

    Lothar Eysn

    2015-05-01

Full Text Available. In this study, eight airborne laser scanning (ALS)-based single tree detection methods are benchmarked and investigated. The methods were applied to a unique dataset originating from different regions of the Alpine Space, covering different study areas, forest types, and structures. This is the first benchmark ever performed for different forests within the Alps. The evaluation of the detection results was carried out in a reproducible way by automatically matching them to precise in situ forest inventory data using a restricted nearest neighbor detection approach. Quantitative statistical parameters such as percentages of correctly matched trees and omission and commission errors are presented. The proposed automated matching procedure shows an overall accuracy of 97%. Method-based analyses, investigations per forest type, and an overall benchmark performance are presented. The best matching rate was obtained for single-layered coniferous forests. Dominated trees were challenging for all methods. The overall performance shows a matching rate of 47%, which is comparable to the results of other benchmarks performed in the past. The study provides new insight regarding the potential and limits of tree detection with ALS and underlines some key aspects regarding the choice of method when performing single tree detection for the various forest types encountered in alpine regions.
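A restricted nearest-neighbour matching of the kind described above can be approximated with a simple greedy sketch; the tree positions, distance threshold, and the `match_trees` helper are hypothetical illustrations, not the authors' implementation:

```python
import math

def match_trees(detected, reference, max_dist=2.0):
    """Greedily pair detected tree positions with reference inventory
    trees, restricted to a maximum horizontal distance. Unmatched
    reference trees are omissions; unmatched detections, commissions."""
    # Enumerate all candidate pairs within the distance restriction.
    pairs = []
    for i, d in enumerate(detected):
        for j, r in enumerate(reference):
            dist = math.hypot(d[0] - r[0], d[1] - r[1])
            if dist <= max_dist:
                pairs.append((dist, i, j))
    # Accept pairs closest-first, each tree used at most once.
    pairs.sort()
    used_d, used_r, matches = set(), set(), []
    for dist, i, j in pairs:
        if i not in used_d and j not in used_r:
            matches.append((i, j))
            used_d.add(i)
            used_r.add(j)
    omissions = [j for j in range(len(reference)) if j not in used_r]
    commissions = [i for i in range(len(detected)) if i not in used_d]
    return matches, omissions, commissions

detected = [(0.5, 0.2), (5.0, 5.1), (9.0, 9.0)]
reference = [(0.0, 0.0), (5.0, 5.0), (20.0, 20.0)]
m, om, com = match_trees(detected, reference)
```

The paper's procedure additionally uses tree height in the restriction; the purely 2-D distance rule here is a simplification.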

  8. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered.

  9. Benchmarking and validation activities within JEFF project

    Directory of Open Access Journals (Sweden)

    Cabellos O.

    2017-01-01

Full Text Available. The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and to ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library and thus requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  10. Benchmarking and validation activities within JEFF project

    Science.gov (United States)

    Cabellos, O.; Alvarez-Velarde, F.; Angelone, M.; Diez, C. J.; Dyrda, J.; Fiorito, L.; Fischer, U.; Fleming, M.; Haeck, W.; Hill, I.; Ichou, R.; Kim, D. H.; Klix, A.; Kodeli, I.; Leconte, P.; Michel-Sendis, F.; Nunnenmann, E.; Pecchia, M.; Peneliau, Y.; Plompen, A.; Rochman, D.; Romojaro, P.; Stankovskiy, A.; Sublet, J. Ch.; Tamagno, P.; Marck, S. van der

    2017-09-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  11. Benchmarking

    OpenAIRE

    Beretta Sergio; Dossi Andrea; Grove Hugh

    2000-01-01

    Due to their particular nature, the benchmarking methodologies tend to exceed the boundaries of management techniques, and to enter the territories of managerial culture. A culture that is also destined to break into the accounting area not only strongly supporting the possibility of fixing targets, and measuring and comparing the performance (an aspect that is already innovative and that is worthy of attention), but also questioning one of the principles (or taboos) of the accounting or...

  12. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

Full Text Available. The article analyses the evolution and possibilities of applying benchmarking in the telecommunication sphere. It studies the essence of benchmarking by generalising the approaches of different scientists to defining this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine an operator's success in the modern market economy, as well as the mechanism of benchmarking and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies of change in the composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  13. Three-dimensional RAMA fluence methodology benchmarking

    International Nuclear Information System (INIS)

    Baker, S. P.; Carter, R. G.; Watkins, K. E.; Jones, D. B.

    2004-01-01

This paper describes the benchmarking of the RAMA Fluence Methodology software, which has been performed in accordance with U.S. Nuclear Regulatory Commission Regulatory Guide 1.190. The RAMA Fluence Methodology has been developed by TransWare Enterprises Inc. through funding provided by the Electric Power Research Inst., Inc. (EPRI) and the Boiling Water Reactor Vessel and Internals Project (BWRVIP). The purpose of the software is to provide an accurate method for calculating neutron fluence in BWR pressure vessels and internal components. The methodology incorporates a three-dimensional deterministic transport solution with flexible, arbitrary-geometry representation of reactor system components, previously available only with Monte Carlo solution techniques. Benchmarking was performed on measurements obtained from three standard benchmark problems, the Pool Criticality Assembly (PCA), VENUS-3, and H. B. Robinson Unit 2 benchmarks, and on flux wire measurements obtained from two BWR nuclear plants. The calculated-to-measured (C/M) ratios range from 0.93 to 1.04, demonstrating the accuracy of the RAMA Fluence Methodology in predicting neutron flux, fluence, and dosimetry activation. (authors)

  14. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input

  15. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC.

  16. H.B. Robinson-2 pressure vessel benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Remec, I.; Kam, F.B.K.

    1998-02-01

The H. B. Robinson Unit 2 Pressure Vessel Benchmark (HBR-2 benchmark) is described and analyzed in this report. Analysis of the HBR-2 benchmark can be used as partial fulfillment of the requirements for the qualification of the methodology for calculating neutron fluence in pressure vessels, as required by the U.S. Nuclear Regulatory Commission Regulatory Guide DG-1053, Calculational and Dosimetry Methods for Determining Pressure Vessel Neutron Fluence. Section 1 of this report describes the HBR-2 benchmark and provides all the dimensions, material compositions, and neutron source data necessary for the analysis. The measured quantities, to be compared with the calculated values, are the specific activities at the end of fuel cycle 9. The characteristic feature of the HBR-2 benchmark is that it provides measurements on both sides of the pressure vessel: in the surveillance capsule attached to the thermal shield and in the reactor cavity. In section 2, the analysis of the HBR-2 benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed with three multigroup libraries based on ENDF/B-VI: BUGLE-93, SAILOR-95 and BUGLE-96. The average ratio of the calculated-to-measured specific activities (C/M) for the six dosimeters in the surveillance capsule was 0.90 ± 0.04 for all three libraries. The average C/Ms for the cavity dosimeters (without neptunium dosimeter) were 0.89 ± 0.10, 0.91 ± 0.10, and 0.90 ± 0.09 for the BUGLE-93, SAILOR-95 and BUGLE-96 libraries, respectively. It is expected that the agreement of the calculations with the measurements, similar to the agreement obtained in this research, should typically be observed when the discrete-ordinates method and ENDF/B-VI libraries are used for the HBR-2 benchmark analysis.

  17. Supply network configuration—A benchmarking problem

    Science.gov (United States)

    Brandenburg, Marcus

    2018-03-01

    Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.

  18. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. An excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio obtained with the BUGLE-93 library for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used.
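The C/M summary statistic quoted in fluence benchmarks of this kind is simply the mean ± sample standard deviation of the per-dosimeter ratios; the values below are made-up illustrative numbers, not the report's measurements:

```python
from statistics import mean, stdev

# Hypothetical calculated-to-measured (C/M) ratios for six dosimeters.
cm = [0.91, 0.95, 0.93, 0.89, 0.96, 0.94]

avg = mean(cm)      # arithmetic average of the ratios
spread = stdev(cm)  # sample standard deviation across dosimeters
print(f"C/M = {avg:.2f} +/- {spread:.2f}")  # prints "C/M = 0.93 +/- 0.03"
```

A C/M near 1 with a small spread indicates the calculation reproduces the measurements without systematic bias.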

  19. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  20. A simplified 2D HTTR benchmark problem

    International Nuclear Information System (INIS)

    Zhang, Z.; Rahnema, F.; Pounders, J. M.; Zhang, D.; Ougouag, A.

    2009-01-01

To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of relevant whole-core configurations. In this paper we have created a numerical benchmark problem in a 2D configuration typical of a high temperature gas cooled prismatic core. This problem was derived from the HTTR start-up experiment. For code-to-code verification, complex details of the geometry and material specification of the physical experiments are not necessary. To this end, the benchmark problem presented here is derived by simplifications that remove the unnecessary details while retaining the heterogeneity and the major physics properties from the neutronics viewpoint. Also included here is a six-group material (macroscopic) cross section library for the benchmark problem. This library was generated using the lattice depletion code HELIOS. Using this library, benchmark-quality Monte Carlo solutions are provided for three different configurations (all-rods-in, partially-controlled and all-rods-out). The reference solutions include the core eigenvalue, block (assembly) averaged fuel pin fission density distributions, and absorption rate in absorbers (burnable poison and control rods). (authors)

  1. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specially, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...

  2. Second benchmark problem for WIPP structural computations

    International Nuclear Information System (INIS)

    Krieg, R.D.; Morgan, H.S.; Hunter, T.O.

    1980-12-01

This report describes the second benchmark problem for comparison of the structural codes used in the WIPP project. The first benchmark problem consisted of heated and unheated drifts at a depth of 790 m, whereas this problem considers a shallower level (650 m) more typical of the repository horizon. More importantly, the first problem considered a homogeneous salt configuration, whereas this problem considers a configuration with 27 distinct geologic layers, including 10 clay layers, 4 of which are to be modeled as possible slip planes. The inclusion of layering introduces complications in structural and thermal calculations that were not present in the first benchmark problem. These additional complications will be handled differently by the various codes used to compute drift closure rates. This second benchmark problem will assess the codes by evaluating their treatment of these complications.

  3. Benchmarks: The Development of a New Approach to Student Evaluation.

    Science.gov (United States)

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  4. Aerodynamic benchmarking of the DeepWind design

    DEFF Research Database (Denmark)

    Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts. The blade shape is considered as a fixed parameter...

  5. Integral benchmarks with reference to thorium fuel cycle

    International Nuclear Information System (INIS)

    Ganesan, S.

    2003-01-01

This is a PowerPoint presentation about the Indian participation in the CRP 'Evaluated Data for the Thorium-Uranium Fuel Cycle'. The plans and scope of the Indian participation are to provide selected integral experimental benchmarks for nuclear data validation, including Indian thorium burn-up benchmarks, post-irradiation examination studies, comparison of basic evaluated data files, and analysis of selected benchmarks for the Th-U fuel cycle.

  6. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    International Nuclear Information System (INIS)

    Bess, John D.; Montierth, Leland; Köberl, Oliver

    2014-01-01

Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the 235U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ), except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  7. The extent of benchmarking in the South African financial sector

    Directory of Open Access Journals (Sweden)

    W Vermeulen

    2014-06-01

Full Text Available. Benchmarking is the process of identifying, understanding and adapting outstanding practices from within the organisation or from other businesses in order to help improve performance. The importance of benchmarking as an enabler of business excellence has necessitated an in-depth investigation into the current state of benchmarking in South Africa. This research project highlights the fact that respondents realise the importance of benchmarking, but that various problems hinder its effective implementation. Based on the research findings, recommendations for achieving success are suggested.

  8. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  9. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    International Nuclear Information System (INIS)

    Orii, Shigeo

    1998-06-01

A benchmark specification for the performance evaluation of parallel computers for numerical analysis is proposed. The Level 1 benchmark, a conventional benchmark based on processing time, measures the performance of a computer running a code. The Level 2 benchmark, proposed in this report, explains the reasons behind that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification for a molecular dynamics code. As a result, the main causes suppressing parallel performance are the maximum bandwidth and the start-up time of communication between nodes. In particular, the start-up time is proportional not only to the number of processors but also to the number of particles. (author)
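A Level 1 benchmark in the sense above reduces to timing a code end-to-end; the minimal harness below is an invented illustration (the `level1_benchmark` helper and the toy workload are not from the report):

```python
import time

def level1_benchmark(fn, *args, repeats=5):
    """Level-1 style benchmark: measure only end-to-end processing
    time, taking the best of several runs to damp system noise."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return min(times)

# Toy workload standing in for a numerical kernel.
elapsed = level1_benchmark(lambda: sum(i * i for i in range(100_000)))
```

A Level 2 benchmark would additionally instrument the causes of the measured time, e.g. by timing communication start-up and transfer phases separately.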

  10. Benchmarking to improve the quality of cystic fibrosis care.

    Science.gov (United States)

    Schechter, Michael S

    2012-11-01

Benchmarking involves the ascertainment of healthcare programs with the most favorable outcomes as a means to identify and spread effective strategies for the delivery of care. The recent interest in the development of patient registries for patients with cystic fibrosis (CF) has been fueled in part by an interest in using them to facilitate benchmarking. This review summarizes reports of how benchmarking has been operationalized in attempts to improve CF care. Although certain goals of benchmarking can be accomplished with an exclusive focus on registry data analysis, benchmarking programs in Germany and the United States have supplemented these data analyses with exploratory interactions and discussions to better understand successful approaches to care and to encourage their spread throughout the care network. Benchmarking allows the discovery, and facilitates the spread, of effective approaches to care. It provides a pragmatic alternative to traditional research methods such as randomized controlled trials, providing insights into methods that optimize the delivery of care and allowing judgments about the relative effectiveness of different therapeutic approaches.

  11. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  12. Benchmarking multi-dimensional large strain consolidation analyses

    International Nuclear Information System (INIS)

    Priestley, D.; Fredlund, M.D.; Van Zyl, D.

    2010-01-01

Analyzing the consolidation of tailings slurries and dredged fills requires a more extensive formulation than is used for common (small strain) consolidation problems. Large strain consolidation theories have traditionally been limited to 1-D formulations. SoilVision Systems has developed the capacity to analyze large strain consolidation problems in 2-D and 3-D. Benchmarking such formulations is not a trivial task. This paper presents several examples of modeling large strain consolidation in the beta versions of the new software. The examples were taken from the literature and were used to benchmark the large strain formulation used by the new software. The benchmarks reported here are: a comparison with the consolidation software application CONDES0, Townsend's Scenario B, and a multi-dimensional analysis of long-term column tests performed on oil sands tailings. All three benchmarks were attained using the SVOffice suite. (author)

  13. Scientometric trends and knowledge maps of global health systems research.

    Science.gov (United States)

    Yao, Qiang; Chen, Kai; Yao, Lan; Lyu, Peng-hui; Yang, Tian-an; Luo, Fei; Chen, Shan-quan; He, Lu-yang; Liu, Zhi-yong

    2014-06-05

In the last few decades, health systems research (HSR) has garnered much attention, with a rapid increase in the related literature. This study aims to review and evaluate the global progress in HSR and assess the current quantitative trends. Based on data from the Web of Science database, scientometric methods and knowledge visualization techniques were applied to evaluate global scientific production and to map development trends in HSR from 1900 to 2012. HSR has increased rapidly over the past 20 years. Currently, there are 28,787 research articles published in 3,674 journals that are listed in 140 Web of Science subject categories. The research in this field has mainly focused on public, environmental and occupational health (6,178, 21.46%), health care sciences and services (5,840, 20.29%), and general and internal medicine (3,783, 13.14%). The top 10 journals had published 2,969 (10.31%) articles and received 5,229 local citations and 40,271 global citations. The top 20 authors together contributed 628 papers, which accounted for a 2.18% share of the cumulative worldwide publications. The most productive author was McKee, from the London School of Hygiene & Tropical Medicine, with 48 articles. In addition, the USA and American institutions ranked first in health system research productivity, with high citation counts, followed by the UK and Canada. HSR is an interdisciplinary area. Organization for Economic Co-operation and Development countries showed they are the leading nations in HSR. Meanwhile, American and Canadian institutions and the World Health Organization play a dominant role in the production, collaboration, and citation of high quality articles. Moreover, health policy and analysis research, health systems and sub-systems research, healthcare and services research, health, epidemiology and economics of communicable and non-communicable diseases, primary care research, health economics and health costs, and pharmacy of hospital have been identified as the

  14. Benchmark Tests to Develop Analytical Time-Temperature Limit for HANA-6 Cladding for Compliance with New LOCA Criteria

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sung Yong; Jang, Hun; Lim, Jea Young; Kim, Dae Il; Kim, Yoon Ho; Mok, Yong Kyoon [KEPCO Nuclear Fuel Co. Ltd., Daejeon (Korea, Republic of)

    2016-10-15

According to 10CFR50.46c, two analytical time and temperature limits, for breakaway oxidation and post-quench ductility (PQD), should be determined by an approved experimental procedure as described in NRC Regulatory Guide (RG) 1.222 and 1.223. According to RG 1.222 and 1.223, rigorous qualification requirements for the test system must be met, such as thermal and weight-gain benchmarks. In order to meet these requirements, KEPCO NF has developed a new special facility to evaluate the LOCA performance of zirconium alloy cladding. In this paper, qualification results for the test facility and the HT oxidation model for HANA-6 are summarized. The results of the thermal benchmark tests of the LOCA HT oxidation tester are summarized as follows. 1. The best-estimate HT oxidation model of HANA-6 was developed for the vendor proprietary HT oxidation model. 2. In accordance with RG 1.222 and 1.223, benchmark tests were performed using the LOCA HT oxidation tester. 3. The maximum axial and circumferential temperature differences are ±9 °C and ±2 °C at 1200 °C, respectively. At the other temperature conditions, the temperature differences are smaller than those at 1200 °C. The thermal benchmark test results meet the requirements of NRC RG 1.222 and 1.223.

  15. Benchmark problems for numerical implementations of phase field models

    International Nuclear Information System (INIS)

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-01-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
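To illustrate the kind of physics the first proposed problem covers, here is a minimal semi-implicit spectral solver for spinodal decomposition under the Cahn-Hilliard equation. The grid size, time step and parameters are illustrative choices, not the CHiMaD/NIST benchmark specification:

```python
import numpy as np

def cahn_hilliard_step(c, dt=0.01, kappa=1.0, M=1.0):
    """One semi-implicit spectral step of the Cahn-Hilliard equation
    dc/dt = M * lap(c**3 - c - kappa * lap(c)) on a periodic grid."""
    n = c.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)        # wavenumbers, unit spacing
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    mu_hat = np.fft.fft2(c**3 - c)             # local chemical potential
    c_hat = np.fft.fft2(c)
    # treat the stiff fourth-order term implicitly for stability
    c_hat = (c_hat - dt * M * k2 * mu_hat) / (1.0 + dt * M * kappa * k2**2)
    return np.real(np.fft.ifft2(c_hat))

rng = np.random.default_rng(0)
c = 0.05 * rng.standard_normal((64, 64))       # small noise around c = 0
mean0 = c.mean()
for _ in range(200):
    c = cahn_hilliard_step(c)
# the average composition is conserved: the k = 0 mode is never modified
print(abs(c.mean() - mean0) < 1e-10)  # True
```

A benchmark problem of this type would additionally fix the domain, free-energy parameters, initial condition and reported metrics, so that different codes and time-stepping schemes can be compared quantitatively.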

  16. Mathematics Research in Association of Southeast Asian Nations Countries: A Scientometric Analysis of Patterns and Impacts

    Directory of Open Access Journals (Sweden)

    Thao P. Ho-Le

    2018-02-01

    Full Text Available In this study, we aimed at mapping the trend and impact of mathematics research originating from Association of Southeast Asian Nations (ASEAN) countries by using a scientometric approach. We extracted the Web of Science's article-level data of all publications concerning mathematics research during the period 2006–2015 for ASEAN countries. The impact of research was assessed in terms of citations, and the pattern of international collaboration was mapped by the presence of coauthorship and international affiliations. During the coverage period, ASEAN countries had published 9,890 papers in mathematics, accounting for 3.8% of total ISI-indexed publications from the region. Almost 95% of the mathematics publications were from Singapore (4,107 papers), Vietnam (2,046), Malaysia (1,927), and Thailand (1,317). Approximately 54% of mathematics papers from ASEAN countries had international coauthorship, and these papers had a greater yearly citation rate than those without international collaboration. With the exception of Singapore, the citation rate for the other ASEAN countries was below the world average by 8–30%. The most important predictor of citations was the journal impact factor, which accounted for 5.2% of the total variation in citations between papers. These findings suggest that the contribution of ASEAN countries as a group to mathematics research worldwide is modest in terms of research output and impact.
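The headline figures in this record can be cross-checked directly from the reported per-country counts (a recomputation of the abstract's own numbers, nothing new):

```python
# per-country paper counts reported in the record above
papers = {"Singapore": 4107, "Vietnam": 2046, "Malaysia": 1927, "Thailand": 1317}
total_asean = 9890

top4 = sum(papers.values())
print(top4, round(100 * top4 / total_asean, 1))  # 9397 95.0
```

The four most productive countries indeed account for about 95% of the region's mathematics output, as stated.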

  17. A 3D stylized half-core CANDU benchmark problem

    International Nuclear Information System (INIS)

    Pounders, Justin M.; Rahnema, Farzad; Serghiuta, Dumitru; Tholammakkil, John

    2011-01-01

    A 3D stylized half-core Canadian deuterium uranium (CANDU) reactor benchmark problem is presented. The benchmark problem comprises a heterogeneous lattice of 37-element natural uranium fuel bundles, heavy-water moderated and heavy-water cooled, with adjuster rods included as reactivity control devices. Furthermore, a 2-group macroscopic cross section library has been developed for the problem to increase the utility of this benchmark for full-core deterministic transport methods development. Monte Carlo results are presented for the benchmark problem in cooled, checkerboard void, and full coolant void configurations.
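For a feel of how a 2-group macroscopic library of this kind is used, the textbook infinite-medium two-group multiplication factor (downscatter only) can be computed as below. The cross-section values are made-up illustrative numbers, not the benchmark's actual CANDU library:

```python
def k_inf_two_group(nu_sf1, nu_sf2, sa1, sa2, s12):
    """Infinite-medium two-group multiplication factor, downscatter only.
    The thermal-to-fast flux ratio is phi2/phi1 = s12/sa2, so
        k_inf = (nu_sf1 + nu_sf2 * s12/sa2) / (sa1 + s12)."""
    return (nu_sf1 + nu_sf2 * s12 / sa2) / (sa1 + s12)

# illustrative macroscopic cross sections (1/cm), not the benchmark data
print(round(k_inf_two_group(0.005, 0.10, 0.010, 0.08, 0.016), 3))  # 0.962
```

A full-core deterministic transport code would use such group constants region by region, with leakage and the adjuster-rod regions making the problem considerably richer than this zero-dimensional sketch.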

  18. International handbook of evaluated criticality safety benchmark experiments

    International Nuclear Information System (INIS)

    2010-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organisation for Economic Co-operation and Development - Nuclear Energy Agency (OECD-NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span over 55,000 pages and contain 516 evaluations with benchmark specifications for 4,405 critical, near-critical, or subcritical configurations, 24 criticality alarm placement/shielding configurations with multiple dose points for each, and 200 configurations that have been categorized as fundamental physics measurements relevant to criticality safety applications. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these evaluations; however, benchmark specifications are not derived for such experiments (in some cases models are provided in an appendix). Approximately 770 experimental configurations are categorized as unacceptable for use as criticality safety benchmark experiments. Additional evaluations are in progress and will be

  19. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  20. ZZ WPPR, Pu Recycling Benchmark Results

    International Nuclear Information System (INIS)

    Lutz, D.; Mattes, M.; Delpech, Marc; Juanola, Marc

    2002-01-01

    Description of program or function: The NEA NSC Working Party on Physics of Plutonium Recycling has commissioned a series of benchmarks covering: - Plutonium recycling in pressurized-water reactors; - Void reactivity effect in pressurized-water reactors; - Fast Plutonium-burner reactors: beginning of life; - Plutonium recycling in fast reactors; - Multiple recycling in advanced pressurized-water reactors. The results have been published (see references). ZZ-WPPR-1-A/B contains graphs and tables relative to the PWR Mox pin cell benchmark, representing typical fuel for plutonium recycling, one corresponding to a first cycle, the second for a fifth cycle. These computer readable files contain the complete set of results, while the printed report contains only a subset. ZZ-WPPR-2-CYC1 are the results from cycle 1 of the multiple recycling benchmarks

  1. Interior beam searchlight semi-analytical benchmark

    International Nuclear Information System (INIS)

    Ganapol, Barry D.; Kornreich, Drew E.

    2008-01-01

    Multidimensional semi-analytical benchmarks to provide highly accurate standards to assess routine numerical particle transport algorithms are few and far between. Because of the well-established 1D theory for the analytical solution of the transport equation, it is sometimes possible to 'bootstrap' a 1D solution to generate a more comprehensive solution representation. Here, we consider the searchlight problem (SLP) as a multidimensional benchmark. A variation of the usual SLP is the interior beam SLP (IBSLP) where a beam source lies beneath the surface of a half space and emits directly towards the free surface. We consider the establishment of a new semi-analytical benchmark based on a new FN formulation. This problem is important in radiative transfer experimental analysis to determine cloud absorption and scattering properties. (authors)

  2. Benchmark referencing of neutron dosimetry measurements

    International Nuclear Information System (INIS)

    Eisenhauer, C.M.; Grundl, J.A.; Gilliam, D.M.; McGarry, E.D.; Spiegel, V.

    1980-01-01

    The concept of benchmark referencing involves interpretation of dosimetry measurements in applied neutron fields in terms of similar measurements in benchmark fields whose neutron spectra and intensity are well known. The main advantage of benchmark referencing is that it minimizes or eliminates many types of experimental uncertainties such as those associated with absolute detection efficiencies and cross sections. In this paper we consider the cavity external to the pressure vessel of a power reactor as an example of an applied field. The pressure vessel cavity is an accessible location for exploratory dosimetry measurements aimed at understanding embrittlement of pressure vessel steel. Comparisons with calculated predictions of neutron fluence and spectra in the cavity provide a valuable check of the computational methods used to estimate pressure vessel safety margins for pressure vessel lifetimes

  3. Benchmarking of the simulation of the ATLAS hall background

    International Nuclear Information System (INIS)

    Vincke, H.

    2000-01-01

    The LHC, mainly to be used as a proton-proton collider providing collisions at energies of 14 TeV, will be operational in the year 2005. ATLAS, one of the LHC experiments, will provide high-accuracy measurements of these p-p collisions. In these collisions a high particle background is also produced. This background was calculated with the Monte Carlo simulation program FLUKA. Unfortunately, the prediction of this background rate is only understood to within a factor of five, mainly owing to limited knowledge of FLUKA's ability to simulate these kinds of scenarios. In order to reduce the uncertainty, benchmarking simulations of experiments similar to the ATLAS background situation were performed. The comparison of the simulations with the experiments shows to what extent FLUKA is able to provide reliable results for the ATLAS background situation. To perform this benchmark, an iron construction was irradiated by a hadron beam; the primary particles had ATLAS-equivalent energies. Behind the iron structure, the remnants of the shower processes were measured and simulated. The simulation procedure and its encouraging results, including the comparison with the measured numbers, are presented and discussed in this work. (author)

  4. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    Makai, M.

    1998-01-01

    The test problems utilized in the validation and verification process of computer programs in Atomic Energy Research are collected together. This is the first step towards issuing a volume in which tests for VVER are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments because they have been published, along with a number of comparisons, in the final reports of TIC. The present collection focuses on operational and mathematical benchmarks which cover almost the entire range of reactor calculations. (Author)

  6. The rise of health biotechnology research in Latin America: A scientometric analysis of health biotechnology production and impact in Argentina, Brazil, Chile, Colombia, Cuba and Mexico.

    Science.gov (United States)

    León-de la O, Dante Israel; Thorsteinsdóttir, Halla; Calderón-Salinas, José Víctor

    2018-01-01

    This paper analyzes the patterns of health biotechnology (HBT) publications in six Latin American countries from 2001 to 2015. The countries studied were Argentina, Brazil, Chile, Colombia, Cuba and Mexico. Before our study, there were no data available on HBT development in half of the Latin American countries we studied, i.e., Argentina, Colombia and Chile. Including these countries in a scientometric analysis of HBT provides fuller coverage of HBT development in Latin America. The scientometric study used the Web of Science database to identify health biotechnology publications. The total amount of health biotechnology production in the world during the period studied was about 400,000 papers. A total of 1.2% of these papers were authored by the six Latin American countries in this study. The results show a significant growth in health biotechnology publications in Latin America despite some of the countries having social and political instability, fluctuations in their gross domestic expenditure on research and development, or a trade embargo that limits opportunities for scientific development. The growth in the field of some of the Latin American countries studied was larger than the growth of most industrialized nations. Still, the visibility of the Latin American research (measured in the number of citations) did not reach the world average, with the exception of Colombia. The main producers of health biotechnology papers in Latin America were universities, except in Cuba, where governmental institutions were the most frequent producers. The countries studied were active in international research collaboration, with Colombia being the most active (64% of papers co-authored internationally), whereas Brazil was the least active (35% of papers). Still, domestic collaboration was even more prevalent, with Chile being the most active in such collaboration (85% of papers co-authored domestically) and Argentina the least active (49% of papers). We conclude that the

  7. The CMSSW benchmarking suite: Using HEP code to measure CPU performance

    International Nuclear Information System (INIS)

    Benelli, G

    2010-01-01

    The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the CPU performance estimates given by the reference benchmark for HEP computing (SPECint) and the actual performance of HEP code. Making use of the CPU performance tools from the CMSSW performance suite, comparative CPU performance studies have been carried out on several architectures. A benchmarking suite has been developed and integrated in the CMSSW framework, to allow computing centers and interested third parties to benchmark architectures directly with CMSSW. The CMSSW benchmarking suite can be used out of the box to test and compare several machines in terms of CPU performance and to report the different benchmarking scores (e.g. by processing step) and results at the desired level of detail. In this talk we describe briefly the CMSSW software performance suite, and in detail the CMSSW benchmarking suite client/server design, the performance data analysis and the available CMSSW benchmark scores. The experience in the use of HEP code for benchmarking is discussed and CMSSW benchmark results presented.
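The idea of reporting per-step benchmarking scores can be sketched with a tiny timing harness. This is a generic illustration in Python, not the actual CMSSW benchmarking suite API; the step names and workloads are invented:

```python
import time

def _timed(fn):
    """Wall-clock a single call to fn."""
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

def benchmark(steps, repeats=3):
    """Run each named processing step `repeats` times; keep the best time,
    which suppresses one-off scheduling noise."""
    return {name: min(_timed(fn) for _ in range(repeats))
            for name, fn in steps.items()}

# hypothetical stand-ins for per-step workloads such as generation and
# reconstruction; real HEP code would run the actual framework here
scores = benchmark({
    "gen":  lambda: sum(i * i for i in range(10000)),
    "reco": lambda: sorted(range(5000), key=lambda x: -x),
})
print(sorted(scores) == ["gen", "reco"])  # True
```

Benchmarking with the experiment's own code, as the abstract argues, avoids the mismatch between synthetic scores such as SPECint and real workload behaviour.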

  8. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    Science.gov (United States)

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.

  9. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  10. The extent of benchmarking in the South African financial sector

    OpenAIRE

    W Vermeulen

    2014-01-01

    Benchmarking is the process of identifying, understanding and adapting outstanding practices from within the organisation or from other businesses, to help improve performance. The importance of benchmarking as an enabler of business excellence has necessitated an in-depth investigation into the current state of benchmarking in South Africa. This research project highlights the fact that respondents realise the importance of benchmarking, but that various problems hinder the effective impleme...

  11. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impacts of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and of including "external" gardening demands are investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m² to 100 m²); garden size (25 m² to 100 m²); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn regarding the robustness of the proposed system.
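A band-rating scheme of the kind described can be sketched as a simple threshold lookup. The litres-per-person-per-day thresholds below are invented for illustration and are not the paper's actual bands:

```python
def water_band(litres_per_person_per_day):
    """Map daily per-capita water consumption to a band rating.
    Thresholds are illustrative assumptions, not the paper's bands."""
    bands = [(80, "A"), (100, "B"), (120, "C"), (140, "D"), (160, "E")]
    for limit, band in bands:
        if litres_per_person_per_day <= limit:
            return band
    return "F"  # anything above the highest threshold

print(water_band(95), water_band(150))  # B E
```

Any measure that reduces consumption, whether behavioural (shorter showers) or technological (RWH offsetting mains demand), moves a household up the scale, which is exactly the property the paper's system is designed to reward.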

  12. Numisheet2005 Benchmark Analysis on Forming of an Automotive Deck Lid Inner Panel: Benchmark 1

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    Numerical simulation of sheet metal forming processes has been a very challenging topic in industry. There are many computer codes and modeling techniques in existence today. However, there are many unknowns affecting prediction accuracy. Systematic benchmark tests are needed to accelerate future implementations and to serve as a reference. This report presents an international cooperative benchmark effort for an automotive deck lid inner panel. Predictions from simulations are analyzed and discussed against the corresponding experimental results. The correlations between the accuracy of each parameter of interest are discussed in this report.

  13. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  14. Bench-marking beam-beam simulations using coherent quadrupole effects

    International Nuclear Information System (INIS)

    Krishnagopal, S.; Chin, Y.H.

    1992-06-01

    Computer simulations are used extensively in the study of the beam-beam interaction. The proliferation of such codes raises the important question of their reliability, and motivates the development of a dependable set of bench-marks. We argue that rather than detailed quantitative comparisons, the ability of different codes to predict the same qualitative physics should be used as a criterion for such bench-marks. We use the striking phenomenon of coherent quadrupole oscillations as one such bench-mark, and demonstrate that our codes do indeed observe this behaviour. We also suggest some other tests that could be used as bench-marks

  15. Bench-marking beam-beam simulations using coherent quadrupole effects

    International Nuclear Information System (INIS)

    Krishnagopal, S.; Chin, Y.H.

    1992-01-01

    Computer simulations are used extensively in the study of the beam-beam interaction. The proliferation of such codes raises the important question of their reliability, and motivates the development of a dependable set of bench-marks. We argue that rather than detailed quantitative comparisons, the ability of different codes to predict the same qualitative physics should be used as a criterion for such bench-marks. We use the striking phenomenon of coherent quadrupole oscillations as one such bench-mark, and demonstrate that our codes do indeed observe this behavior. We also suggest some other tests that could be used as bench-marks

  16. [The impact factor--a reliable sciento-metric parameter?].

    Science.gov (United States)

    Meenen, N M

    1997-08-01

    With the shortage of research funds and increasing competition for medical posts, performance indicators and control instruments are being sought in order to allot research funds and make professorial appointments in relation to scientific performance. Incomprehensibly for many, the impact factor has become the decisive scientometric indicator at German universities despite substantial systematic limitations. The impact factor is derived from the Journal Citation Reports. Its basis of calculation entails the following problems: the editorial board of the private Institute for Scientific Information (ISI) decides whether a journal is to be classified as a source journal, and the citation index of all journals is calculated from citations in these source journals alone. Crucial means of influencing the impact factor result from self-citations and citation groups within the source journals. Languages other than English, and alphabets other than the Latin one, are appreciably disadvantaged by the citation index, which is why, for example, the rapid development of the osteosynthesis technique in German-speaking countries went unnoticed by British and American orthopedic surgeons and scientists despite its international significance. The articles on postgraduate training necessarily published by clinicians in the language of their own country are not cited, because the addressees of such publications do not engage in research. Clinical disciplines (especially highly specialized disciplines such as trauma and hand surgery) thus attain appreciably lower impact factors for their journals than basic disciplines and interdisciplinary clinical sectors, which lead the ranking of journals. The period covered in calculating the impact factor is only 2 years, so very modern and widely disseminated organs of publication with a short information half-life are favored. Of the 10 objectively most often cited and most important journals for the scientific community, only 2 are to be found amongst those
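The two-year impact factor criticized here is computed as citations received in a given year to items published in the two preceding years, divided by the number of citable items published in those two years. A toy recomputation (all numbers invented):

```python
def impact_factor(citations_by_year, items_by_year, year):
    """JCR-style 2-year impact factor: citations received in `year`
    to items published in the two preceding years, divided by the
    number of citable items published in those two years."""
    cites = sum(citations_by_year[year].get(y, 0) for y in (year - 1, year - 2))
    items = items_by_year[year - 1] + items_by_year[year - 2]
    return cites / items

# invented toy data: 240 counted citations in 1997 to 200 items from 1995-96
citations = {1997: {1996: 90, 1995: 150, 1994: 40}}  # 1994 cites don't count
items = {1996: 80, 1995: 120}
print(impact_factor(citations, items, 1997))  # 1.2
```

The hard two-year window is visible in the toy data: the 40 citations to 1994 items are simply dropped, which is precisely why journals with a short information half-life are favored.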

  17. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  18. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  19. Systems reliability Benchmark exercise part 2-Contributions by the participants

    International Nuclear Information System (INIS)

    Amendola, A.

    1986-01-01

    The report describes the aims, rules and results of the Systems Reliability Benchmark Exercise, which was performed in order to assess methods and procedures for reliability analysis of complex systems and involved a large number of European organizations active in NPP safety evaluation. The exercise included both qualitative and quantitative methods and was structured in such a way that separation of the effects of uncertainties in modelling and in data on the overall spread was made possible. This second part of the report is devoted to the documentation of the individual contributions by the participating teams (Swedish, GRS, ENEA, NIRA and ENEL, EWE, EdF, Risoe, KWU/IA, ECN, KEMA/KUL, and Framatome contributions)

  20. An Arbitrary Benchmark CAPM: One Additional Frontier Portfolio is Sufficient

    OpenAIRE

    Ekern, Steinar

    2008-01-01

    First draft: July 16, 2008 This version: October 7, 2008 The benchmark CAPM linearly relates the expected returns on an arbitrary asset, an arbitrary benchmark portfolio, and an arbitrary MV frontier portfolio. The benchmark is not required to be on the frontier and may be non-perfectly correlated with the frontier portfolio. The benchmark CAPM extends and generalizes previous CAPM formulations, including the zero beta, two correlated frontier portfolios, riskless augmented frontier, an...

  1. Benchmarking the implementation of E-Commerce A Case Study Approach

    OpenAIRE

    von Ettingshausen, C. R. D. Freiherr

    2009-01-01

    The purpose of this thesis was to develop a guideline to support the implementation of E-Commerce with E-Commerce benchmarking. Because of its importance as an interface with the customer, web-site benchmarking has been a widely researched topic. However, limited research has been conducted on benchmarking E-Commerce across other areas of the value chain. Consequently this thesis aims to extend benchmarking into E-Commerce related subjects. The literature review examined ...

  2. WWER in-core fuel management benchmark definition

    International Nuclear Information System (INIS)

    Apostolov, T.; Alekova, G.; Prodanova, R.; Petrova, T.; Ivanov, K.

    1994-01-01

    Two benchmark problems for WWER-440, including design parameters, operating conditions and measured quantities, are discussed in this paper. Some benchmark results for the effective multiplication factor K_eff, natural boron concentration C_B and relative power distribution K_q, obtained with the code package, are presented. (authors). 5 refs., 3 tabs

  3. CFD validation in OECD/NEA t-junction benchmark.

    Energy Technology Data Exchange (ETDEWEB)

    Obabko, A. V.; Fischer, P. F.; Tautges, T. J.; Karabasov, S.; Goloviznin, V. M.; Zaytsev, M. A.; Chudanov, V. V.; Pervichko, V. A.; Aksenova, A. E. (Mathematics and Computer Science); (Cambridge Univ.); (Moscow Institute of Nuclear Energy Safety)

    2011-08-23

    When streams of rapidly moving flow merge in a T-junction, the potential arises for large oscillations at the scale of the diameter, D, with a period scaling as O(D/U), where U is the characteristic flow velocity. If the streams are of different temperatures, the oscillations result in temperature fluctuations (thermal striping) at the pipe wall in the outlet branch that can accelerate thermal-mechanical fatigue and ultimately cause pipe failure. The importance of this phenomenon has prompted the nuclear energy modeling and simulation community to establish a benchmark to test the ability of computational fluid dynamics (CFD) codes to predict thermal striping. The benchmark is based on thermal and velocity data measured in an experiment designed specifically for this purpose. Thermal striping is intrinsically unsteady and hence not accessible to steady-state simulation approaches such as steady-state Reynolds-averaged Navier-Stokes (RANS) models. Consequently, one must consider either unsteady RANS or large eddy simulation (LES). This report compares the results for three LES codes: Nek5000, developed at Argonne National Laboratory (USA), and Cabaret and Conv3D, developed at the Moscow Institute of Nuclear Energy Safety (IBRAE) in Russia. Nek5000 is based on the spectral element method (SEM), which is a high-order weighted residual technique that combines the geometric flexibility of the finite element method (FEM) with the tensor-product efficiencies of spectral methods. Cabaret is a 'compact accurately boundary-adjusting high-resolution technique' for fluid dynamics simulation. The method is second-order accurate on nonuniform grids in space and time, and has a small dispersion error and a computational stencil defined within one space-time cell. The scheme is equipped with a conservative nonlinear correction procedure based on the maximum principle. Conv3D is based on the immersed boundary method and is validated on a wide set of the experimental

  4. Proposal of a benchmark for core burnup calculations for a VVER-1000 reactor core

    International Nuclear Information System (INIS)

    Loetsch, T.; Khalimonchuk, V.; Kuchin, A.

    2009-01-01

    In the framework of a project supported by the German BMU the code DYN3D should be further validated and verified. During the work a lack of a benchmark on core burnup calculations for VVER-1000 reactors was noticed. Such a benchmark is useful for validating and verifying the whole package of codes and data libraries for reactor physics calculations including fuel assembly modelling, fuel assembly data preparation, few group data parametrisation and reactor core modelling. The benchmark proposed specifies the core loading patterns of burnup cycles for a VVER-1000 reactor core as well as a set of operational data such as load follow, boron concentration in the coolant, cycle length, measured reactivity coefficients and power density distributions. The reactor core characteristics chosen for comparison and the first results obtained during the work with the reactor physics code DYN3D are presented. This work presents the continuation of efforts of the projects mentioned to estimate the accuracy of calculated characteristics of VVER-1000 reactor cores. In addition, the codes used for reactor physics calculations of safety related reactor core characteristics should be validated and verified for the cases in which they are to be used. This is significant for safety related evaluations and assessments carried out in the framework of licensing and supervision procedures in the field of reactor physics. (authors)

  5. Introducing a Generic Concept for an Online IT-Benchmarking System

    OpenAIRE

    Ziaie, Pujan; Ziller, Markus; Wollersheim, Jan; Krcmar, Helmut

    2014-01-01

    While IT benchmarking has grown considerably in the last few years, conventional benchmarking tools have not been able to adequately respond to the rapid changes in technology and paradigm shifts in IT-related domains. This paper aims to review benchmarking methods and leverage design science methodology to present design elements for a novel software solution in the field of IT benchmarking. The solution, which introduces a concept for generic (service-independent) indicators is based on and...

  6. Strategic behaviour under regulatory benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Jamasb, T. [Cambridge Univ. (United Kingdom). Dept. of Applied Economics; Nillesen, P. [NUON NV (Netherlands); Pollitt, M. [Cambridge Univ. (United Kingdom). Judge Inst. of Management

    2004-09-01

    In order to improve the efficiency of electricity distribution networks, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulatory benchmarking can influence the 'regulation game', the subject has received limited attention. This paper discusses how strategic behaviour can result in inefficient behaviour by firms. We then use the Data Envelopment Analysis (DEA) method with US utility data to examine implications of illustrative cases of strategic behaviour reported by regulators. The results show that gaming can have significant effects on the measured performance and profitability of firms. (author)
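The DEA approach used in the study above can be made concrete with a minimal sketch of the input-oriented CCR model, solved as one linear program per decision-making unit (DMU). The data below are invented for illustration and the function name is hypothetical; this is not the authors' implementation:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y):
    """Input-oriented CCR efficiency score for each DMU.

    X: (n, m) array of inputs; Y: (n, s) array of outputs.
    Returns an array of scores in (0, 1]; a score of 1 means the
    DMU lies on the efficient frontier.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # multiplier form: maximize u.y_o  s.t.  v.x_o = 1,
        # u.y_j - v.x_j <= 0 for all j, and u, v >= 0
        c = np.concatenate([-Y[o], np.zeros(m)])   # linprog minimizes
        A_ub = np.hstack([Y, -X])
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (s + m), method="highs")
        scores[o] = -res.fun
    return scores

# toy data: 4 utilities, 2 inputs (e.g. opex, network length),
# 1 identical output (energy delivered)
X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 4.0], [6.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print(dea_ccr_efficiency(X, Y).round(3))
```

Strategic behaviour of the kind the paper discusses (e.g. inflating reported inputs in a base year) would show up here as shifts in these relative scores rather than in any absolute cost measure.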

  7. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    Takeda, T.; Ikeda, H.

    1991-03-01

    A set of 3-D neutron transport benchmark problems proposed by Osaka University to NEACRP in 1988 has been calculated by many participants and the corresponding results are summarized in this report. The results of K_eff, control rod worth and region-averaged fluxes for the four proposed core models, calculated using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes

  8. Benchmarking road safety performance: Identifying a meaningful reference (best-in-class).

    Science.gov (United States)

    Chen, Faan; Wu, Jiaorong; Chen, Xiaohong; Wang, Jianjun; Wang, Di

    2016-01-01

    For road safety improvement, comparing and benchmarking performance are widely advocated as the emerging and preferred approaches. However, there is currently no universally agreed upon approach for the process of road safety benchmarking, and performing the practice successfully is by no means easy. This is especially true for the two core activities: (1) developing a set of road safety performance indicators (SPIs) and combining them into a composite index; and (2) identifying a meaningful reference (best-in-class), one that has already achieved outstanding road safety practices. To this end, a scientific technique that can combine the multi-dimensional safety performance indicators (SPIs) into an overall index, and subsequently can identify the 'best-in-class', is urgently required. In this paper, the Entropy-embedded RSR (Rank-sum ratio), an innovative, scientific and systematic methodology, is investigated with the aim of conducting the above two core tasks in an integrative and concise procedure, more specifically in a 'one-stop' way. Using a combination of results from other methods (e.g. the SUNflower approach) and other measures (e.g. Human Development Index) as a relevant reference, a given set of European countries are robustly ranked and grouped into several classes based on the composite Road Safety Index. Within each class the 'best-in-class' is then identified. By benchmarking road safety performance, the results serve to promote best practice, encourage the adoption of successful road safety strategies and measures and, more importantly, inspire the kind of political leadership needed to create a road transport system that maximizes safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
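The two-step logic described above, entropy weighting of the SPIs followed by a weighted rank-sum-ratio ranking, can be sketched as follows. The indicator values, function names, and normalisation details are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight for each column of a (units x indicators) matrix
    of benefit-type indicators (higher = better): low-entropy (more
    discriminating) indicators get larger weights."""
    P = X / X.sum(axis=0)
    n = X.shape[0]
    plogp = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)
    d = 1.0 - e
    return d / d.sum()

def weighted_rsr(X, w):
    """Weighted rank-sum ratio: rank each indicator across units
    (1 = worst, n = best), combine ranks with the weights, and
    scale by the number of units so the result lies in (0, 1]."""
    n = X.shape[0]
    ranks = X.argsort(axis=0).argsort(axis=0) + 1
    return (ranks * w).sum(axis=1) / n

# hypothetical SPIs for four countries (higher = safer)
X = np.array([[0.9, 0.9, 0.8],
              [0.6, 0.8, 0.7],
              [0.4, 0.5, 0.6],
              [0.7, 0.6, 0.5]])
w = entropy_weights(X)
rsr = weighted_rsr(X, w)
best_in_class = int(np.argmax(rsr))   # index of the reference country
```

Countries would then be grouped into classes by their RSR value, with the top unit in each class serving as the 'best-in-class' reference.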

  9. Introduction to 'International Handbook of Criticality Safety Benchmark Experiments'

    International Nuclear Information System (INIS)

    Komuro, Yuichi

    1998-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organization for Economic Cooperation and Development-Nuclear Energy Agency (OECD-NEA). 'International Handbook of Criticality Safety Benchmark Experiments' was prepared and is updated year by year by the working group of the project. This handbook contains criticality safety benchmark specifications that have been derived from experiments that were performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used. The author briefly introduces the informative handbook and would like to encourage Japanese engineers who are in charge of nuclear criticality safety to use the handbook. (author)

  10. Evolution of primary care databases in UK: a scientometric analysis of research output.

    Science.gov (United States)

    Vezyridis, Paraskevas; Timmons, Stephen

    2016-10-11

    To identify publication and citation trends, most productive institutions and countries, top journals, most cited articles and authorship networks from articles that used and analysed data from primary care databases (CPRD, THIN, QResearch) of pseudonymised electronic health records (EHRs) in UK. Descriptive statistics and scientometric tools were used to analyse a SCOPUS data set of 1891 articles. Open access software was used to extract networks from the data set (Table2Net), visualise and analyse coauthorship networks of scholars and countries (Gephi) and density maps (VOSviewer) of research topics co-occurrence and journal cocitation. Research output increased overall at a yearly rate of 18.65%. While medicine is the main field of research, studies in more specialised areas include biochemistry and pharmacology. Researchers from UK, USA and Spanish institutions have published the most papers. Most of the journals that publish this type of research and most cited papers come from UK and USA. Authorship varied between 3 and 6 authors. Keyword analyses show that smoking, diabetes, cardiovascular diseases and mental illnesses, as well as medication that can treat such medical conditions, such as non-steroid anti-inflammatory agents, insulin and antidepressants constitute the main topics of research. Coauthorship network analyses show that lead scientists, directors or founders of these databases are, to various degrees, at the centre of clusters in this scientific community. There is a considerable increase of publications in primary care research from EHRs. The UK has been well placed at the centre of an expanding global scientific community, facilitating international collaborations and bringing together international expertise in medicine, biochemical and pharmaceutical research. Published by the BMJ Publishing Group Limited. 
For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
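A coauthorship network of the kind analysed above can be extracted from a bibliographic export with a few lines of code; the records and author names below are hypothetical, and real pipelines (Table2Net, Gephi) add layout and clustering on top of this:

```python
from itertools import combinations
from collections import Counter

def coauthorship_edges(papers):
    """Build weighted coauthorship ties: each paper is a list of author
    names; every unordered author pair on a paper adds 1 to that
    edge's weight."""
    edges = Counter()
    for authors in papers:
        for a, b in combinations(sorted(set(authors)), 2):
            edges[(a, b)] += 1
    return edges

# hypothetical records in the style of a SCOPUS author-list export
papers = [
    ["Smith J", "Jones A", "Lee K"],
    ["Smith J", "Jones A"],
    ["Lee K", "Patel R"],
    ["Smith J", "Patel R"],
]
edges = coauthorship_edges(papers)

# weighted degree identifies the central scholar of a cluster,
# analogous to the database leads found at the centre of clusters
degree = Counter()
for (a, b), w in edges.items():
    degree[a] += w
    degree[b] += w
hub = degree.most_common(1)[0][0]
```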

  11. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias; Smith, Neil; Ghanem, Bernard

    2016-01-01

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as, a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as, generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.

  12. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as, a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as, generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.

  13. Benchmarking CRISPR on-target sgRNA design.

    Science.gov (United States)

    Yan, Jifang; Chuai, Guohui; Zhou, Chi; Zhu, Chenyu; Yang, Jing; Zhang, Chao; Gu, Feng; Xu, Han; Wei, Jia; Liu, Qi

    2017-02-15

    CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats)-based gene editing has been widely implemented in various cell types and organisms. A major challenge in the effective application of the CRISPR system is the need to design highly efficient single-guide RNA (sgRNA) with minimal off-target cleavage. Several tools are available for sgRNA design, but few have been compared systematically. In our opinion, benchmarking the performance of the available tools and indicating their applicable scenarios are important issues. Moreover, whether the reported sgRNA design rules are reproducible across different sgRNA libraries, cell types and organisms remains unclear. In our study, a systematic and unbiased benchmark of sgRNA predicting efficacy was performed on nine representative on-target design tools, based on six benchmark data sets covering five different cell types. The benchmark study presented here provides novel quantitative insights into the available CRISPR tools. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. Criticality Benchmark Results Using Various MCNP Data Libraries

    International Nuclear Information System (INIS)

    Frankle, Stephanie C.

    1999-01-01

    A suite of 86 criticality benchmarks has recently been implemented in MCNP(TM) as part of the nuclear data validation effort. These benchmarks have been run using two sets of MCNP continuous-energy neutron data: ENDF/B-VI based data through Release 2 (ENDF60) and ENDF/B-V based data. New ENDF/B-VI evaluations were completed for a number of the important nuclides such as the isotopes of H, Be, C, N, O, Fe, Ni, 235,238U, 237Np, and 239,240Pu. When examining the results of these calculations for the five major categories of 233U, intermediate-enriched 235U (IEU), highly enriched 235U (HEU), 239Pu, and mixed metal assemblies, we find the following: (1) the new evaluations for 9Be, 12C, and 14N show no net effect on k_eff; (2) there is a consistent decrease in k_eff for all of the solution assemblies for ENDF/B-VI due to 1H and 16O, moving k_eff further from the benchmark value for uranium solutions and closer to the benchmark value for plutonium solutions; (3) k_eff decreased for the ENDF/B-VI Fe isotopic data, moving the calculated k_eff further from the benchmark value; (4) k_eff decreased for the ENDF/B-VI Ni isotopic data, moving the calculated k_eff closer to the benchmark value; (5) the W data remained unchanged and tended to calculate slightly higher than the benchmark values; (6) for metal uranium systems, the ENDF/B-VI data for 235U tend to decrease k_eff while the 238U data tend to increase k_eff, the net result depending on the energy spectrum and material specifications for the particular assembly; (7) for more intermediate-energy systems, the changes in the 235,238U evaluations tend to increase k_eff; for the mixed graphite and normal uranium-reflected assembly, a large increase in k_eff due to changes in the 238U evaluation moved the calculated k_eff much closer to the benchmark value; 
    (8) there is little change in k_eff for the uranium solutions due to the new 235,238U evaluations; and (9) there is little change in k_eff

  15. Benchmarking a signpost to excellence in quality and productivity

    CERN Document Server

    Karlof, Bengt

    1993-01-01

    According to the authors, benchmarking exerts a powerful leverage effect on an organization, and they consider some of the factors which justify their claim. Describes how to implement benchmarking and exactly what to benchmark. Explains benchlearning, which integrates education, leadership development and organizational dynamics with the actual work being done, and how to make it work more efficiently in terms of quality and productivity.

  16. Towards benchmarking an in-stream water quality model

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available A method of model evaluation is presented which utilises a comparison with a benchmark model. The proposed benchmarking concept is one that can be applied to many hydrological models but, in this instance, is implemented in the context of an in-stream water quality model. The benchmark model is defined in such a way that it is easily implemented within the framework of the test model, i.e. the approach relies on two applications of the same model code rather than the application of two separate model codes. This is illustrated using two case studies from the UK, the Rivers Aire and Ouse, with the objective of simulating a water quality classification, general quality assessment (GQA), which is based on dissolved oxygen, biochemical oxygen demand and ammonium. Comparisons between the benchmark and test models are made based on GQA, as well as a step-wise assessment against the components required in its derivation. The benchmarking process yields a great deal of important information about the performance of the test model and raises issues about a priori definition of the assessment criteria.

  17. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    Science.gov (United States)

    Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki

    2017-09-01

    There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments seemed vaguely to validate the nuclear data below 14 MeV. However, no precise studies exist now. The author's group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to generally estimate performance of benchmark experiments. As a result of thought experiments with a point detector, the sensitivity for a discrepancy appearing in the benchmark analysis is "equally" due not only to contribution directly conveyed to the detector, but also to indirect contribution of neutrons (named (A)) making neutrons conveying the contribution, indirect contribution of neutrons (B) making the neutrons (A), and so on. From this concept, it would become clear from a sensitivity analysis in advance how well and which energy nuclear data could be benchmarked with a benchmark experiment.

  18. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    Directory of Open Access Journals (Sweden)

    Murata Isao

    2017-01-01

    Full Text Available There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments seemed vaguely to validate the nuclear data below 14 MeV. However, no precise studies exist now. The author's group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to generally estimate performance of benchmark experiments. As a result of thought experiments with a point detector, the sensitivity for a discrepancy appearing in the benchmark analysis is "equally" due not only to contribution directly conveyed to the detector, but also to indirect contribution of neutrons (named (A)) making neutrons conveying the contribution, indirect contribution of neutrons (B) making the neutrons (A), and so on. From this concept, it would become clear from a sensitivity analysis in advance how well and which energy nuclear data could be benchmarked with a benchmark experiment.

  19. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    Energy Technology Data Exchange (ETDEWEB)

    Nowell, Lisa H., E-mail: lhnowell@usgs.gov [U.S. Geological Survey, California Water Science Center, Placer Hall, 6000 J Street, Sacramento, CA 95819 (United States); Norman, Julia E., E-mail: jnorman@usgs.gov [U.S. Geological Survey, Oregon Water Science Center, 2130 SW 5th Avenue, Portland, OR 97201 (United States); Ingersoll, Christopher G., E-mail: cingersoll@usgs.gov [U.S. Geological Survey, Columbia Environmental Research Center, 4200 New Haven Road, Columbia, MO 65021 (United States); Moran, Patrick W., E-mail: pwmoran@usgs.gov [U.S. Geological Survey, Washington Water Science Center, 934 Broadway, Suite 300, Tacoma, WA 98402 (United States)

    2016-04-15

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. 
Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics
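The benchmark-quotient summation described above is a simple screening calculation: divide each detected concentration by its benchmark and sum. A minimal sketch, with illustrative benchmark values that are not the published TEBs:

```python
def benchmark_quotient_sum(concentrations, benchmarks):
    """Sum of concentration/benchmark quotients over detected pesticides
    with an available benchmark. A sum >= 1 flags potential toxicity of
    the mixture (screening indicator only, not a toxicity prediction)."""
    return sum(conc / benchmarks[name]
               for name, conc in concentrations.items()
               if name in benchmarks)

# hypothetical TEB-style values and measured concentrations (same units);
# the numbers are invented for illustration
teb = {"bifenthrin": 0.5, "chlorpyrifos": 2.0, "fipronil": 1.0}
sample = {"bifenthrin": 0.3, "chlorpyrifos": 1.0}
q = benchmark_quotient_sum(sample, teb)   # 0.3/0.5 + 1.0/2.0 = 1.1
```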

  20. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    International Nuclear Information System (INIS)

    Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.

    2016-01-01

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. 
Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics

  1. Parametric Sensitivity Tests- European PEM Fuel Cell Stack Test Procedures

    DEFF Research Database (Denmark)

    Araya, Samuel Simon; Andreasen, Søren Juhl; Kær, Søren Knudsen

    2014-01-01

    As fuel cells are increasingly commercialized for various applications, harmonized and industry-relevant test procedures are necessary to benchmark tests and to ensure comparability of stack performance results from different parties. This paper reports the results of parametric sensitivity tests performed based on test procedures proposed by a European project, Stack-Test. The sensitivity of a Nafion-based low temperature PEMFC stack's performance to parametric changes was the main objective of the tests. Four crucial parameters for fuel cell operation were chosen: relative humidity, temperature, pressure, and stoichiometry at varying current density. Furthermore, procedures for polarization curve recording were also tested, both in ascending and descending current directions.

  2. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stays confidential; the banks, as clients, learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping...

  3. The national hydrologic bench-mark network

    Science.gov (United States)

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  4. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic of increasing importance for water utilities in times of rising energy costs and pressure to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and optimisation potential of a WWTP is a difficult task, as plants vary greatly in size, process layout and other influencing factors. To overcome these limitations it is necessary to compare energy efficiency against a statistically relevant base in order to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.
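    The benchmarking logic described above reduces to comparing each plant's specific energy consumption against a reference value. The sketch below illustrates that comparison; the benchmark value and plant figures are invented placeholders, not the actual German benchmark numbers.

    ```python
    # Illustrative sketch of the energy-benchmark comparison described above.
    # The benchmark value (kWh per population equivalent per year) and the
    # plant data are invented placeholders, not the German benchmark figures.

    PLANTS = {
        "Plant A": {"energy_kwh_per_year": 2_400_000, "population_equivalents": 60_000},
        "Plant B": {"energy_kwh_per_year": 1_100_000, "population_equivalents": 40_000},
    }

    BENCHMARK_KWH_PER_PE_YEAR = 30.0  # hypothetical target for this size class


    def assess(plants, benchmark):
        """Return each plant's specific energy use and whether it exceeds the benchmark."""
        results = {}
        for name, d in plants.items():
            specific = d["energy_kwh_per_year"] / d["population_equivalents"]
            results[name] = {
                "kwh_per_pe_year": round(specific, 1),
                "above_benchmark": specific > benchmark,
            }
        return results


    print(assess(PLANTS, BENCHMARK_KWH_PER_PE_YEAR))
    ```

    Plants flagged as above the benchmark would be the ones prioritised for a detailed energy assessment.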

  5. Benchmarking in the globalised world and its impact on South ...

    African Journals Online (AJOL)

    In order to understand the potential impact of international benchmarking on South African institutions, it is important to explore the future role of benchmarking on the international level. In this regard, examples of transnational benchmarking activities will be considered. As a result of the involvement of South African ...

  6. INTEGRATION OF UKRAINIAN INDUSTRY SCIENTIFIC PERIODACLS INTO WORLD SCIENTIFIC INFORMATION SPACE: PROBLEMS AND SOLUTIONS

    Directory of Open Access Journals (Sweden)

    T. O. Kolesnykova

    2013-11-01

    Purpose. The lack of representation of Ukrainian scientists' publications, including those of transport scientists, in the international scientometric databases is an urgent problem for Ukrainian science. To solve it, one should study the structure and quality of the information flow of the scientific periodicals of Ukraine's railway universities and define an algorithm for integrating the scientific publications of Ukrainian scientists into the world scientific information space. Methodology. Applying the methods of scientific analysis, synthesis, analogy, comparison and prediction, the author investigated the problem of distributing scientific knowledge through formal communications. The readiness of Ukrainian railway periodicals for the registration procedure in the international scientometric systems was analyzed, and the level of representation of articles and authors from Ukrainian railway universities in the scientometric database Scopus was studied. Findings. Monitoring of the portals of Ukraine's railway universities and of the sites of their scientific periodicals, together with analysis of the obtained data, shows that most of these publications are insufficiently prepared for submission to a scientometric database. Ways of providing sufficient "visibility" for the periodicals of Ukrainian universities in the global scientific information space are proposed. Originality. The structure and quality of the documentary flow of scientific periodicals at the railway transport universities of Ukraine, and its reflection in the scientometric database Scopus, were investigated for the first time. The basic directions of university activities for integrating the results of transport scientists' research into the global scientific digital environment are outlined, and the leading role of university libraries in integrating the universities' scientific documentary resources into the global scientific and information communicative space is identified. Practical value. Implementation of the proposed…

  7. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    .... The design of this study included two parts: (1) eleven expert panelists involved in a Delphi technique to identify and rate importance of foodservice performance measures and rate the importance of benchmarking activities, and (2...

  8. A procedure for effective Dancoff factor calculation

    International Nuclear Information System (INIS)

    Milosevic, M.

    2001-01-01

    In this paper, a procedure for calculating Dancoff factors based on the equivalence principle, and its application in the SCALE-4.3 code system, is described. The procedure is founded on the principle of conservation of neutron absorption in the resolved resonance range between a heterogeneous medium and an equivalent medium consisting of an infinite array of two-region pin cells, where the presence of other fuel rods is taken into account through a Dancoff factor. The neutron absorption in both media is obtained using a fine-group elastic slowing-down calculation. The procedure is implemented in a design-oriented lattice physics code and is applicable to any geometry in which the collision probability method can be used to obtain a flux solution. The proposed procedure was benchmarked against a recent exercise that represents a system with a double heterogeneity of the fuel, i.e., fuel in solid form (pellets) surrounded by fissile material in solution, and against a 5x5 irregular pressurised water reactor assembly, which requires different Dancoff factors. (author)

  9. Controller tuning with evolutionary multiobjective optimization a holistic multiobjective optimization design procedure

    CERN Document Server

    Reynoso Meza, Gilberto; Sanchis Saez, Javier; Herrero Durá, Juan Manuel

    2017-01-01

    This book is devoted to Multiobjective Optimization Design (MOOD) procedures for controller tuning applications, by means of Evolutionary Multiobjective Optimization (EMO). It presents developments in tools, procedures and guidelines to facilitate this process, covering the three fundamental steps in the procedure: problem definition, optimization and decision-making. The book is divided into four parts. The first part, Fundamentals, focuses on the necessary theoretical background and provides specific tools for practitioners. The second part, Basics, examines a range of basic examples regarding the MOOD procedure for controller tuning, while the third part, Benchmarking, demonstrates how the MOOD procedure can be employed in several control engineering problems. The fourth part, Applications, is dedicated to implementing the MOOD procedure for controller tuning in real processes.

  10. [Does implementation of benchmarking in quality circles improve the quality of care of patients with asthma and reduce drug interaction?].

    Science.gov (United States)

    Kaufmann-Kolle, Petra; Szecsenyi, Joachim; Broge, Björn; Haefeli, Walter Emil; Schneider, Antonius

    2011-01-01

    The purpose of this cluster-randomised controlled trial was to evaluate the efficacy of quality circles (QCs) working either with general data-based feedback or with an open benchmark within the field of asthma care and drug-drug interactions. Twelve QCs, involving 96 general practitioners from 85 practices, were randomised. Six QCs worked with traditional anonymous feedback and six with an open benchmark. Two QC meetings supported with feedback reports were held covering the topics "drug-drug interactions" and "asthma"; in both cases discussions were guided by a trained moderator. Outcome measures included health-related quality of life and patient satisfaction with treatment, asthma severity and number of potentially inappropriate drug combinations as well as the general practitioners' satisfaction in relation to the performance of the QC. A significant improvement in the treatment of asthma was observed in both trial arms. However, there was only a slight improvement regarding inappropriate drug combinations. There were no relevant differences between the group with open benchmark (B-QC) and traditional quality circles (T-QC). The physicians' satisfaction with the QC performance was significantly higher in the T-QCs. General practitioners seem to take a critical perspective about open benchmarking in quality circles. Caution should be used when implementing benchmarking in a quality circle as it did not improve healthcare when compared to the traditional procedure with anonymised comparisons. Copyright © 2011. Published by Elsevier GmbH.

  11. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    Science.gov (United States)

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  12. THE IMPORTANCE OF BENCHMARKING IN MAKING MANAGEMENT DECISIONS

    Directory of Open Access Journals (Sweden)

    Adriana-Mihaela IONESCU

    2016-06-01

    Launching a new business or project leads managers to make decisions and choose strategies that they will then apply in their company. Most often they decide on instinct alone, but there are also companies that use benchmarking studies. Benchmarking is a highly effective management tool, useful in the new competitive environment that has emerged from organizations' need to constantly improve their performance in order to stay competitive. Through the benchmarking process, organizations try to find the best practices applied in a business, learn from recognized leaders and identify ways to increase their performance and competitiveness. Managers thus gather information about market trends and about competitors, especially the leaders in the field, and use this information to find ideas and set guidelines for development. Benchmarking studies are most often used in commerce, real estate, industry and high-tech software businesses.

  13. The International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    International Nuclear Information System (INIS)

    Briggs, J.B.

    2003-01-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organisation for Economic Cooperation and Development (OECD) - Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Israel, Spain, and Brazil are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments.' The 2003 Edition of the Handbook contains benchmark model specifications for 3070 critical or subcritical configurations that are intended for validating computer codes that calculate effective neutron multiplication and for testing basic nuclear data. (author)

  14. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.E.; Cheng, E.T.

    1985-01-01

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li/sub 17/Pb/sub 83/ and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the TBR to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with up to 20% discrepancies for thin natural Li/sub 17/Pb/sub 83/ blankets

  15. A scientometric evaluation of the Chagas disease implementation research programme of the PAHO and TDR.

    Directory of Open Access Journals (Sweden)

    Ana Laura Carbajal-de-la-Fuente

    2013-11-01

    The Special Programme for Research and Training in Tropical Diseases (TDR) is an independent global programme of scientific collaboration cosponsored by the United Nations Children's Fund, the United Nations Development Program, the World Bank, and the World Health Organization. TDR's strategy is based on stewardship for research on infectious diseases of poverty, empowerment of endemic countries, research on neglected priority needs, and the promotion of scientific collaboration influencing global efforts to combat major tropical diseases. In 2001, in view of the achievements obtained in the reduction of transmission of Chagas disease through the Southern Cone Initiative and the improvement in Chagas disease control activities in some countries of the Andean and the Central American Initiatives, TDR transferred the Chagas Disease Implementation Research Programme (CIRP) to the Communicable Diseases Unit of the Pan American Health Organization (CD/PAHO). This paper presents a scientometric evaluation of the 73 projects from 18 Latin American and European countries that were funded by CIRP/PAHO/TDR between 1997 and 2007. We analyzed all final reports of the funded projects and the scientific publications, technical reports, and human resource training activities derived from them. Results on the number of projects funded, the countries and institutions involved, gender analysis, the number of papers published in indexed scientific journals, the main topics funded, the patents registered, and the triatomine species studied are presented and discussed. The results indicate that the CIRP/PAHO/TDR initiative contributed significantly, over the 1997-2007 period, to Chagas disease knowledge as well as to individual and institutional capacity building.

  16. Effects of Print Publication Lag in Dual Format Journals on Scientometric Indicators

    Science.gov (United States)

    Heneberg, Petr

    2013-01-01

    Background Publication lag between manuscript submission and final publication is considered an important factor affecting the decision to submit, the timeliness of presented data, and the scientometric measures of the particular journal. Dual-format peer-reviewed journals (publishing both print and online editions of their content) have adopted a broadly accepted strategy to shorten the publication lag: publishing accepted manuscripts online ahead of their print editions, which may follow days, but also years, later. The effects of this widespread habit on the calculation of the immediacy index (the average number of times an article is cited in the year it is published) have never been analyzed. Methodology/Principal Findings The Scopus database (which contains nearly up-to-date documents in press, but does not reveal citations by these documents until they are finalized) was searched for the journals with the highest total counts of articles in press, or the highest counts of articles in press appearing online in 2010–2011. The number of citations received by articles in press available online was found to be nearly equal to the citations received within the year when the document was assigned to a journal issue. Thus, online publication of in-press articles severely affects the calculation of the immediacy index of their source titles, and disadvantages online-only and print-only journals when they are evaluated according to the immediacy index, and probably also according to the impact factor and similar measures. Conclusions/Significance Caution should be taken when evaluating dual-format journals with a long publication lag. Further research should answer the question of whether the immediacy index should be replaced by an indicator based on the date of first publication (online or in print, whichever comes first) to eliminate the problems analyzed in this report. The information value of the immediacy index is further questioned by the very high ratio of authors' self-citations among the
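    To make the distortion concrete: the immediacy index is the number of citations received in year Y by items published in year Y, divided by the number of items published in Y, so the result shifts depending on whether the ahead-of-print (online) date or the issue-assignment date counts as the publication date. The sketch below uses two invented articles and illustrates only the indicator's definition, not any database's actual counting rules.

    ```python
    # Minimal sketch of the immediacy-index calculation discussed above.
    # The two articles are invented; "online_year" vs "issue_year" shows how
    # ahead-of-print publication can shift the indicator.

    def immediacy_index(articles, year, date_field):
        """Citations received in `year` by articles dated `year`, per article."""
        published = [a for a in articles if a[date_field] == year]
        if not published:
            return 0.0
        return sum(a["citations"].get(year, 0) for a in published) / len(published)

    articles = [
        # Online in 2010, assigned to a 2011 issue: its 2011 citations include
        # citations accumulated while it was already available ahead of print.
        {"online_year": 2010, "issue_year": 2011, "citations": {2010: 2, 2011: 5}},
        {"online_year": 2011, "issue_year": 2011, "citations": {2011: 3}},
    ]

    print(immediacy_index(articles, 2011, "issue_year"))   # 4.0 — inflated by the head start
    print(immediacy_index(articles, 2011, "online_year"))  # 3.0 — first-online dating
    ```

    An indicator dated by first publication, as the abstract suggests, would use the "online_year" variant for the first article.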

  17. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes (''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Cask,'' R.E. Glass, Sandia National Laboratories, 1985; ''Sample Problem Manual for Benchmarking of Cask Analysis Codes,'' R.E. Glass, Sandia National Laboratories, 1988; ''Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages,'' R.E. Glass, et al., Sandia National Laboratories, 1988) used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in ''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks,'' R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  18. Benchmark matrix and guide: Part II.

    Science.gov (United States)

    1991-01-01

    In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.

  19. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for the development of new ideas and the comparison of methods for hybrid systems' modeling and control. The benchmark features switch dynamics and discrete-valued input, making it a hybrid system; furthermore, the outputs are subjected…

  20. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  1. International benchmark and best practices on national infrastructure plans. Application to Spanish strategic planning

    Energy Technology Data Exchange (ETDEWEB)

    Pino Hernandez, E.M.; Delgado Quiralte, C.

    2016-07-01

    The need for planning regarding investment in infrastructures is recognised and supported by most governments around the world. Planning helps in taking effective and correct decisions, provides a basis for monitoring impacts and also facilitates further developments. However, it requires a high level of organisation, coordination among stakeholders and anticipation of transport needs. There are several methodological approaches to strategic planning. This paper examines the importance of infrastructure planning and how it is undertaken in different countries in Europe and on other continents. It is based on a benchmarking exercise covering the planning procedures of 7 reference countries (UK, France, the Netherlands, Poland, Germany, Japan and USA), in addition to others whose strategic plans are being developed at the present moment, such as Croatia or Romania. This benchmarking aims to extract and compare best practices in this field and to define the optimal formulation of strategic planning. The benchmarking focuses on some key aspects: firstly, the plan structure and its main contents, since there are many differences in how each country defines future transport needs and establishes the objectives and strategies to be followed; secondly, the characterisation of the authorities responsible for the plan's development (level of dependence on the government, know-how…) along with the time frame and final validity of the plans; and finally, the level of detail of the proposed actions and the budgetary commitments provided by the strategic plans. The knowledge generated by this comparative analysis has allowed a series of specific recommendations on strategic planning to be set out, which can be applied as innovative solutions and best practices in future planning processes in Spain. (Author)

  2. Simplified two and three dimensional HTTR benchmark problems

    International Nuclear Information System (INIS)

    Zhang Zhan; Rahnema, Farzad; Zhang Dingkang; Pounders, Justin M.; Ougouag, Abderrafi M.

    2011-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole core configurations. In this paper we have created two- and three-dimensional numerical benchmark problems typical of high temperature gas cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are also included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding the geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and the pin fission density distribution for selected blocks. Also included are the solutions for the single-cell and single-block problems.

  3. A large-scale benchmark of gene prioritization methods.

    Science.gov (United States)

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for the identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually provide only internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized to construct robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
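    The retrospective benchmarking idea can be sketched as leave-one-out cross-validation over a GO term's gene set: one annotated gene is held out, the remaining genes serve as seeds, and the rank assigned to the held-out gene measures the tool's performance. In the sketch below, the scoring function (count of direct network neighbours among the seeds) is a deliberately trivial stand-in, not NetRank, Random Walk with Restart, or MaxLink, and the genes and network are invented.

    ```python
    # Hedged sketch of GO-based leave-one-out benchmarking of a gene
    # prioritization tool. The neighbour-counting scorer is a toy placeholder.

    def leave_one_out_ranks(go_term_genes, all_genes, prioritize):
        """For each gene annotated to a GO term, hold it out, prioritize all
        non-seed candidates, and record the held-out gene's rank (1 = best)."""
        ranks = []
        for held_out in sorted(go_term_genes):
            seeds = go_term_genes - {held_out}
            candidates = sorted(all_genes - seeds)
            scored = sorted(candidates, key=lambda g: prioritize(g, seeds), reverse=True)
            ranks.append(scored.index(held_out) + 1)
        return ranks

    # Toy undirected network as adjacency sets (hypothetical genes A-E).
    network = {
        "A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"},
        "D": {"C"}, "E": set(),
    }

    def neighbour_score(gene, seeds):
        return len(network[gene] & seeds)

    print(leave_one_out_ranks({"A", "B", "C"}, set(network), neighbour_score))  # [1, 1, 1]
    ```

    In a real benchmark the per-gene ranks would be aggregated across many GO terms into the kinds of performance measures the paper compares.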

  4. Decoys Selection in Benchmarking Datasets: Overview and Perspectives

    Science.gov (United States)

    Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Lagarde, Nathalie; Montes, Matthieu

    2018-01-01

    Virtual Screening (VS) is designed to prospectively help identify potential hits, i.e., compounds capable of interacting with a given target and potentially modulating its activity, out of large compound collections. Among the variety of methodologies, it is crucial to select the protocol that is the most adapted to the query/target system under study and that yields the most reliable output. To this aim, the performance of VS methods is commonly evaluated and compared by computing their ability to retrieve active compounds in benchmarking datasets. The benchmarking datasets contain a subset of known active compounds together with a subset of decoys, i.e., assumed non-active molecules. The composition of both the active and the decoy compound subsets is critical to limit the biases in the evaluation of the VS methods. In this review, we focus on the selection of decoy compounds, which has changed considerably over the years, from randomly selected compounds to highly customized or experimentally validated negative compounds. We first outline the evolution of decoy selection in benchmarking databases, as well as current benchmarking databases that tend to minimize the introduction of biases, and second, we propose recommendations for the selection and design of benchmarking datasets. PMID:29416509

  5. Core Benchmarks Descriptions

    International Nuclear Information System (INIS)

    Pavlovichev, A.M.

    2001-01-01

    Current regulations require that the design of new fuel cycles for nuclear power installations be justified by calculations performed with certified computer codes. This guarantees that the calculational results will lie within the limits of the declared uncertainties indicated in the certificate issued for the corresponding computer code by Gosatomnadzor of the Russian Federation (GAN). The formal justification of the declared uncertainties is the comparison of calculational results obtained by a commercial code with the results of experiments, or of calculational tests computed with a defined uncertainty by certified precision codes of the MCU type or similar. The present level of international cooperation enlarges the bank of experimental and calculational benchmarks acceptable for certifying the commercial codes used to design fuel loadings with MOX fuel. In particular, work on forming the list of calculational benchmarks for certifying the TVS-M code as applied to MOX fuel assembly calculations is practically finished. The results of these activities are presented

  6. A Benchmark Estimate for the Capital Stock. An Optimal Consistency Method

    OpenAIRE

    Jose Miguel Albala-Bertrand

    2001-01-01

    There are alternative methods to estimate a capital stock for a benchmark year. These methods, however, do not allow for an independent check, which could establish whether the estimated benchmark level is too high or too low. I propose here an optimal consistency method (OCM), which may allow estimating a capital stock level for a benchmark year and/or checking the consistency of alternative estimates of a benchmark capital stock.

  7. The institutionalization of benchmarking in the Danish construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard; Gottlieb, Stefan Christoffer

    …the chapter accounts for the data collection methods used to conduct the empirical data collection and the appertaining choices that are made, based on the account for analyzing institutionalization processes. The analysis unfolds over seven chapters, starting with an exposition of the political foundation… and disseminated to the construction industry. The fourth chapter demonstrates how benchmarking was concretized into a benchmarking system and articulated to address several political focus areas for the construction industry. BEC accordingly became a political arena where many local perspectives and strategic… emerged as actors expressed diverse political interests in the institutionalization of benchmarking. The political struggles accounted for in chapter five constituted a powerful political pressure and called for transformations of the institutionalization in order for benchmarking to attain institutional…

  8. Benchmarking af kommunernes førtidspensionspraksis

    DEFF Research Database (Denmark)

    Gregersen, Ole

    Every year the National Social Appeals Board (Den Sociale Ankestyrelse) publishes statistics on decisions in disability pension cases. Together with the annual statistics, results are published from a benchmarking model in which the number of awards in each municipality is compared with the expected number of awards had the municipality followed the same decision practice as the "average municipality", correcting for the social structure of the municipality. The benchmarking model used to date is documented in Ole Gregersen (1994): Kommunernes Pensionspraksis, Servicerapport, Socialforskningsinstituttet. This note documents a…

  9. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.L.; Cheng, E.T.

    1986-01-01

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li 17 Pb 83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease with up to 20% discrepancies for thin natural Li 17 Pb 83 blankets. (author)

  10. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools, no matter which numerical methods and parameters they choose. Second, we …

  11. Benchmark calculations for VENUS-2 MOX-fueled reactor dosimetry

    International Nuclear Information System (INIS)

    Kim, Jong Kung; Kim, Hong Chul; Shin, Chang Ho; Han, Chi Young; Na, Byung Chan

    2004-01-01

    As part of a Nuclear Energy Agency (NEA) project, the benchmark for dosimetry calculation of the VENUS-2 MOX-fuelled reactor was pursued. The goal of this benchmark is to test current state-of-the-art computational methods of calculating neutron flux to reactor components against the measured data of the VENUS-2 MOX-fuelled critical experiments. The measured data to be used for this benchmark are the equivalent fission fluxes, which are the reaction rates divided by the U-235 fission-spectrum-averaged cross-section of the corresponding dosimeter. The present benchmark is, therefore, defined to calculate reaction rates and corresponding equivalent fission fluxes measured on the core-mid plane at specific positions outside the core of the VENUS-2 MOX-fuelled reactor. This is a follow-up exercise to the previously completed UO2-fuelled VENUS-1 two-dimensional and VENUS-3 three-dimensional exercises. The use of MOX fuel in LWRs presents different neutron characteristics, and this is the main interest of the current benchmark compared to the previous ones.
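The equivalent fission flux compared in this benchmark is a simple ratio; the following sketch illustrates it with placeholder numbers, not measured VENUS-2 data:

```python
def equivalent_fission_flux(reaction_rate, sigma_avg_barns):
    """Equivalent fission flux: the measured reaction rate divided by the
    dosimeter cross-section averaged over the U-235 fission spectrum."""
    barn = 1.0e-24  # cm^2
    return reaction_rate / (sigma_avg_barns * barn)

# Illustrative values only (reactions per second per atom, barns):
flux = equivalent_fission_flux(reaction_rate=5.0e-15, sigma_avg_barns=0.5)
print(f"{flux:.3e}")  # equivalent fission flux in n/cm^2/s
```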

  12. Benchmarking the cost efficiency of community care in Australian child and adolescent mental health services: implications for future benchmarking.

    Science.gov (United States)

    Furber, Gareth; Brann, Peter; Skene, Clive; Allison, Stephen

    2011-06-01

    The purpose of this study was to benchmark the cost efficiency of community care across six child and adolescent mental health services (CAMHS) drawn from different Australian states. Organizational, contact and outcome data from the National Mental Health Benchmarking Project (NMHBP) data-sets were used to calculate cost per "treatment hour" and cost per episode for the six participating organizations. We also explored the relationship between intake severity as measured by the Health of the Nations Outcome Scales for Children and Adolescents (HoNOSCA) and cost per episode. The average cost per treatment hour was $223, with cost differences across the six services ranging from a mean of $156 to $273 per treatment hour. The average cost per episode was $3349 (median $1577) and there were significant differences in the CAMHS organizational medians ranging from $388 to $7076 per episode. HoNOSCA scores explained at best 6% of the cost variance per episode. These large cost differences indicate that community CAMHS have the potential to make substantial gains in cost efficiency through collaborative benchmarking. Benchmarking forums need considerable financial and business expertise for detailed comparison of business models for service provision.
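The two headline metrics, cost per treatment hour and cost per episode, can be reproduced with a short sketch; the episode figures below are invented for illustration, not NMHBP data:

```python
from statistics import mean, median

def cost_metrics(episodes):
    """episodes: list of (total_cost, treatment_hours) tuples, one per episode."""
    costs = [cost for cost, _ in episodes]
    total_hours = sum(hours for _, hours in episodes)
    return {
        "cost_per_treatment_hour": sum(costs) / total_hours,
        "mean_cost_per_episode": mean(costs),
        "median_cost_per_episode": median(costs),
    }

# Hypothetical service with three episodes of care:
print(cost_metrics([(400, 2), (1600, 8), (7000, 30)]))
```

A large gap between the mean and median cost per episode, as in the study's $3349 versus $1577, signals a skewed episode-cost distribution dominated by a few long, expensive episodes.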

  13. Calculation of the 5th AER dynamic benchmark with APROS

    International Nuclear Information System (INIS)

    Puska, E.K.; Kontio, H.

    1998-01-01

    The model used for the calculation of the 5th AER dynamic benchmark with the APROS code is presented. In the calculation of the 5th AER dynamic benchmark the three-dimensional neutronics model of APROS was used. The core was divided axially into 20 nodes according to the specifications of the benchmark, and each group of six identical fuel assemblies was represented by a single one-dimensional thermal-hydraulic channel. The five-equation thermal-hydraulic model was used in the benchmark. The plant process and automation were described with a generic VVER-440 plant model created by IVO PE. (author)

  14. Benchmarking criticality safety calculations with subcritical experiments

    International Nuclear Information System (INIS)

    Mihalczo, J.T.

    1984-06-01

    Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations, but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases, subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained which can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where this was not possible, examples from critical experiments have been used, but the measurement methods could also be applied to subcritical experiments.

  15. Are larger effect sizes in experimental studies good predictors of higher citation rates?

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg; Henriksen, Dorte

    2013-01-01

    the studies are published. Contrary to the previous findings, and in fact to most studies in scientometrics, we examine the hypothesis with a Bayesian model selection procedure. This is advantageous, as we are thereby able to quantify the statistical evidence for both hypotheses, H0 and H1. This is not possible...

  16. Presidential Address 1997--Benchmarks for the Next Millennium.

    Science.gov (United States)

    Baker, Pamela C.

    1997-01-01

    Reflects on the century's preeminent benchmarks, including the evolution in the lives of people with disabilities and the prevention of many causes of mental retardation. The ethical challenges of genetic engineering and diagnostic technology and the need for new benchmarks in policy, practice, and research are discussed. (CR)

  17. [Benchmarking and other functions of ROM: back to basics].

    Science.gov (United States)

    Barendregt, M

    2015-01-01

    Since 2011, outcome data in Dutch mental health care have been collected on a national scale. This has led to confusion about the position of benchmarking in the system known as routine outcome monitoring (rom). To provide insight into the various objectives and uses of aggregated outcome data, a qualitative review was performed and the findings were analysed. Benchmarking is a strategy for finding best practices and for improving efficacy, and it belongs to the domain of quality management. Benchmarking involves comparing outcome data by means of instrumentation and is relatively tolerant with regard to the validity of the data. Although benchmarking is a function of rom, it must be differentiated from other functions of rom. Clinical management, public accountability, research, payment for performance and information for patients are all functions of rom which require different ways of data feedback and which make different demands on the validity of the underlying data. Benchmarking is often wrongly regarded as simply a synonym for 'comparing institutions'. It is, however, a method which involves many more factors: it can be used to improve quality, takes a more flexible approach to the validity of outcome data, and is less concerned than other rom functions with funding and the amount of information given to patients. Benchmarking can make good use of currently available outcome data.

  18. The future of simulation technologies for complex cardiovascular procedures.

    Science.gov (United States)

    Cates, Christopher U; Gallagher, Anthony G

    2012-09-01

    Changing work practices and the evolution of more complex interventions in cardiovascular medicine are forcing a paradigm shift in the way doctors are trained. Implantable cardioverter defibrillator (ICD), transcatheter aortic valve implantation (TAVI), carotid artery stenting (CAS), and acute stroke intervention procedures are forcing these changes at a faster pace than in other disciplines. As a consequence, cardiovascular medicine has had to develop a sophisticated understanding of precisely what is meant by 'training' and 'skill'. An evolving conclusion is that procedure training on a virtual reality (VR) simulator presents a viable current solution. These simulations should capture the important performance characteristics of procedural skill, with metrics derived and defined from, and then benchmarked to, experienced operators (i.e. a level of proficiency). Simulation training is optimal with metric-based feedback, particularly formative trainee error assessments, proximate to their performance. In prospective, randomized studies, learners who trained to a benchmarked proficiency level on the simulator performed significantly better than learners who were traditionally trained. In addition, cardiovascular medicine now has available the most sophisticated virtual reality simulators in medicine, and these have been used for the roll-out of interventions such as CAS in the USA and globally with cardiovascular society and industry partnered training programmes. The Food and Drug Administration has advocated the use of VR simulation as part of the approval of new devices, and the American Board of Internal Medicine has adopted simulation as part of its maintenance of certification. Simulation is rapidly becoming a mainstay of cardiovascular education, training, certification, and the safe adoption of new technology. If cardiovascular medicine is to continue to lead in the adoption and integration of simulation, then it must take a proactive position in the

  19. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments are planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using materials properties for stainless steel and aluminum

  20. Results of the reliability benchmark exercise and the future CEC-JRC program

    International Nuclear Information System (INIS)

    Amendola, A.

    1985-01-01

    As a contribution towards identifying problem areas and for assessing probabilistic safety assessment (PSA) methods and procedures of analysis, JRC has organized a wide-range Benchmark Exercise on systems reliability. This has been executed by ten different teams involving seventeen organizations from nine European countries. The exercise has been based on a real case (Auxiliary Feedwater System of EDF Paluel PWR 1300 MWe Unit), starting from analysis of technical specifications, logical and topological layout and operational procedures. Terms of references included both qualitative and quantitative analyses. The subdivision of the exercise into different phases and the rules adopted allowed assessment of the different components of the spread of the overall results. It appeared that modelling uncertainties may overwhelm data uncertainties and major efforts must be spent in order to improve consistency and completeness of qualitative analysis. After successful completion of the first exercise, CEC-JRC program has planned separate exercises on analysis of dependent failures and human factors before approaching the evaluation of a complete accident sequence

  1. Reactor group constants and benchmark test

    Energy Technology Data Exchange (ETDEWEB)

    Takano, Hideki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

    The evaluated nuclear data files such as JENDL, ENDF/B-VI and JEF-2 are validated by analyzing critical mock-up experiments for various type reactors and assessing applicability for nuclear characteristics such as criticality, reaction rates, reactivities, etc. This is called Benchmark Testing. In the nuclear calculations, the diffusion and transport codes use the group constant library which is generated by processing the nuclear data files. In this paper, the calculation methods of the reactor group constants and benchmark test are described. Finally, a new group constants scheme is proposed. (author)

  2. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation are described and illustrated with examples: (a) benchmark value, (b) benchmark estimate, and (c) benchmark effect. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
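The mediated (indirect) effect evaluated in such studies is conventionally estimated as the product a*b of two regression coefficients. A minimal sketch, using the Frisch-Waugh residualization trick for the coefficient of the mediator; all data below are made up for illustration:

```python
def slope(xs, ys):
    # least-squares slope of ys on xs (with intercept)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def mediated_effect(x, m, y):
    a = slope(x, m)  # path a: X -> M
    # Path b is the coefficient of M in Y ~ X + M.  By the Frisch-Waugh
    # theorem it equals the slope of Y (residualized on X) on M
    # (residualized on X).
    m_resid = [mi - a * xi for xi, mi in zip(x, m)]
    y_on_x = slope(x, y)
    y_resid = [yi - y_on_x * xi for xi, yi in zip(x, y)]
    b = slope(m_resid, y_resid)
    return a * b

# Toy data in which X raises M and M fully carries the effect on Y:
x = [0, 1, 2, 3, 4]
m = [0, 2, 5, 6, 8]
y = [0, 6, 15, 18, 24]
print(mediated_effect(x, m, y))  # indirect effect a*b (about 6.0 here)
```

Benchmark effect validation would then check that this estimate points in the direction, and at roughly the magnitude, of the known substantive effect.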

  3. Benchmarking with the BLASST Sessional Staff Standards Framework

    Science.gov (United States)

    Luzia, Karina; Harvey, Marina; Parker, Nicola; McCormack, Coralie; Brown, Natalie R.

    2013-01-01

    Benchmarking as a type of knowledge-sharing around good practice within and between institutions is increasingly common in the higher education sector. More recently, benchmarking as a process that can contribute to quality enhancement has been deployed across numerous institutions with a view to systematising frameworks to assure and enhance the…

  4. International benchmarking and best practice management: in search of health care and hospital excellence.

    Science.gov (United States)

    von Eiff, Wilfried

    2015-01-01

    exceed this best practice in your institution. Focus on simple and effective ways to implement solutions. Comparing only figures, such as average length of stay, costs of procedures, infection rates, or out-of-stock rates, can easily lead to wrong conclusions and decision making with often-disastrous consequences. Just looking at figures and ratios is not the basis for detecting potential excellence. It is necessary to look beyond the numbers to understand how processes work and contribute to best-in-class results. Best practices from even quite different industries can enable hospitals to leapfrog results in patient orientation, clinical excellence, and cost-effectiveness. Despite common benchmarking approaches, it is pointed out that a comparison without "looking behind the figures" (that is, without being familiar with the process structure, process dynamics and drivers, process institutions/rules and process-related incentive components) will be extremely limited with respect to the reliability and quality of the findings. In order to demonstrate the transferability of benchmarking results between different industries, practical examples from health care, automotive, and hotel service have been selected.

  5. Analytical benchmarks for nuclear engineering applications. Case studies in neutron transport theory

    International Nuclear Information System (INIS)

    2008-01-01

    The developers of computer codes involving neutron transport theory for nuclear engineering applications seldom apply analytical benchmarking strategies to ensure the quality of their programs. A major reason for this is the lack of analytical benchmarks and their documentation in the literature. The few such benchmarks that do exist are difficult to locate, as they are scattered throughout the neutron transport and radiative transfer literature. The motivation for this benchmark compendium, therefore, is to gather several analytical benchmarks appropriate for nuclear engineering applications under one cover. We consider the following three subject areas: neutron slowing down and thermalization without spatial dependence, one-dimensional neutron transport in infinite and finite media, and multidimensional neutron transport in a half-space and an infinite medium. Each benchmark is briefly described, followed by a detailed derivation of the analytical solution representation. Finally, a demonstration of the evaluation of the solution representation includes qualified numerical benchmark results. All accompanying computer codes are suitable for the PC computational environment and can serve as educational tools for courses in nuclear engineering. While this benchmark compilation does not contain all possible benchmarks, by any means, it does include some of the most prominent ones and should serve as a valuable reference. (author)

  6. International Benchmark on Pressurised Water Reactor Sub-channel and Bundle Tests. Volume II: Benchmark Results of Phase I: Void Distribution

    International Nuclear Information System (INIS)

    Rubin, Adam; Avramova, Maria; Velazquez-Lozada, Alexander

    2016-03-01

    This report summarised the first phase of the Nuclear Energy Agency (NEA) and the US Nuclear Regulatory Commission benchmark based on NUPEC PWR Sub-channel and Bundle Tests (PSBT), which was intended to provide data for the verification of void distribution models in participants' codes. This phase was composed of four exercises: Exercise 1, a steady-state single sub-channel benchmark; Exercise 2, a steady-state rod bundle benchmark; Exercise 3, a transient rod bundle benchmark; and Exercise 4, a pressure drop benchmark. The experimental data provided to the participants of this benchmark are from a series of void measurement tests using full-size mock-up tests for both Boiling Water Reactors (BWRs) and Pressurised Water Reactors (PWRs). These tests were performed from 1987 to 1995 by the Nuclear Power Engineering Corporation (NUPEC) in Japan and made available by the Japan Nuclear Energy Safety Organisation (JNES) for the purposes of this benchmark, which was organised by Pennsylvania State University. Twenty-one institutions from nine countries participated in this benchmark. Seventeen different computer codes were used in Exercises 1, 2, 3 and 4, among them porous-media, sub-channel, system thermal-hydraulic, and Computational Fluid Dynamics (CFD) codes. It was observed that the codes tended to overpredict the thermal equilibrium quality at lower elevations and underpredict it at higher elevations. There was also a tendency to overpredict void fraction at lower elevations and underpredict it at high elevations for the bundle test cases. The overprediction of void fraction at low elevations is likely caused by the x-ray densitometer measurement method used. Under sub-cooled boiling conditions, the voids accumulate at heated surfaces (and are therefore not seen in the centre of the sub-channel, where the measurements are being taken), so the experimentally-determined void fractions will be lower than the actual void fraction. Some of the best
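The thermal equilibrium quality that codes over- or underpredict is a simple enthalpy ratio, and the crudest void model derives void fraction directly from it. A sketch with made-up property values, not PSBT test conditions:

```python
def equilibrium_quality(h, h_f, h_fg):
    """Thermal equilibrium quality from the local bulk enthalpy h,
    saturated-liquid enthalpy h_f and latent heat h_fg (same units)."""
    return (h - h_f) / h_fg

def homogeneous_void_fraction(x, rho_f, rho_g):
    """Void fraction from quality x under the homogeneous (no-slip) model,
    given liquid and vapour densities rho_f and rho_g."""
    if x <= 0.0:
        return 0.0  # sub-cooled liquid: zero void in this simple model
    return 1.0 / (1.0 + (1.0 - x) / x * (rho_g / rho_f))

# Illustrative numbers only (kJ/kg, kg/m^3):
x = equilibrium_quality(h=1300.0, h_f=1200.0, h_fg=1000.0)  # 0.1
print(homogeneous_void_fraction(x, rho_f=600.0, rho_g=60.0))
```

Note the mismatch the report highlights: under sub-cooled boiling the true void fraction is positive at the heated wall even though the equilibrium quality, and hence this model, gives zero.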

  7. Consistency check of iron and sodium cross sections with integral benchmark experiments using a large amount of experimental information

    International Nuclear Information System (INIS)

    Baechle, R.-D.; Hehn, G.; Pfister, G.; Perlini, G.; Matthes, W.

    1984-01-01

    Single material benchmark experiments are designed to check neutron and gamma cross-sections of importance for deep penetration problems. At various penetration depths a large number of activation detectors and spectrometers are placed to measure the radiation field as completely as possible. The large amount of measured data in benchmark experiments can best be evaluated by the global detector concept applied to nuclear data adjustment. A new iteration procedure is presented for the adjustment of a large number of multigroup cross sections, which has now been implemented in the modular adjustment code ADJUST-EUR. A theoretical test problem has been devised to check the total program system with high precision. The method and code will be applied to validate the new European Data Files (JEF and EFF) in progress. (Auth.)

  8. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  9. Implementation of benchmark management in quality assurance audit activities

    International Nuclear Information System (INIS)

    Liu Yongmei

    2008-01-01

    The concept of Benchmark Management is to take the practices of the best competitor as the benchmark, analyze the distance between that competitor and one's own institute, and take effective actions to catch up with and even surpass the competitor. This paper analyzes and rebuilds the whole quality assurance audit process using the concept of Benchmark Management, based on the practices of many years of quality assurance audits, in order to improve the level and effect of quality assurance audit activities. (author)

  10. Benchmarking strategies for measuring the quality of healthcare: problems and prospects.

    Science.gov (United States)

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed.
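One standard device for the case-mix control discussed here is indirect standardization: compare a hospital's observed events with the number expected if benchmark (reference) rates applied to its own patient mix. A minimal sketch with invented strata, not data from the paper:

```python
def standardized_ratio(strata, reference_rates):
    """strata: {stratum: (n_patients, n_events)} for one hospital;
    reference_rates: {stratum: event rate in the benchmark population}.
    Returns observed/expected; values above 1 mean more events than the
    benchmark rates would predict for this hospital's case mix."""
    observed = sum(events for _, events in strata.values())
    expected = sum(n * reference_rates[s] for s, (n, _) in strata.items())
    return observed / expected

# Hypothetical hospital with a low-risk and a high-risk stratum:
hospital = {"low": (100, 10), "high": (50, 10)}
rates = {"low": 0.08, "high": 0.20}
print(standardized_ratio(hospital, rates))
```

The case-mix fallacy the paper warns about arises exactly here: if the strata fail to capture real risk differences, or coding (underreporting) differs between hospitals, the ratio compares noncomparable populations.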

  11. How to Use Benchmarking in Small and Medium-Sized Businesses

    OpenAIRE

    Alexandrache (Hrimiuc) Olivia Bianca

    2011-01-01

    Nowadays benchmarking has become a powerful management tool that stimulates innovative improvement through the exchange of corporate information, performance measurement, and adoption of best practices. It has been used to improve productivity and quality in leading manufacturing organizations. In recent years, companies of different sizes and business sectors have been getting involved in benchmarking activities. Despite the differences of benchmarking practices between smaller and bigger organiz...

  12. Benchmarking: a method for continuous quality improvement in health.

    Science.gov (United States)

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  13. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes...... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  14. Implementation of a multi-lingual, Internet-supported benchmarking system for compressed-air installations; Umsetzung eines mehrsprachigen internetgestuetzten Benchmarking von Druckluftanlagen

    Energy Technology Data Exchange (ETDEWEB)

    Radgen, P.

    2005-07-01

    This final report for the Swiss Federal Office of Energy (SFOE) discusses how know-how can be improved and how optimisation activities can be stimulated in the area of compressed-air generation. The authors estimate that potential energy-savings of 20 to 40% are possible. The aim of the project - to introduce a benchmarking system already in use in Germany to the Swiss market - is discussed. This benchmarking is to help companies identify weak points in their compressed-air systems. An Internet-based information platform is introduced which was realised in 2004 and is being continually extended. The use of the benchmarking process is illustrated with a comprehensive flow-diagram and 'screen-shots' of the relevant Internet pages.

  15. The Medical Library Association Benchmarking Network: development and implementation.

    Science.gov (United States)

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C; Smith, Bernie Todd

    2006-04-01

    This article explores the development and implementation of the Medical Library Association (MLA) Benchmarking Network from the initial idea and test survey, to the implementation of a national survey in 2002, to the establishment of a continuing program in 2004. Started as a program for hospital libraries, it has expanded to include other nonacademic health sciences libraries. The activities and timelines of MLA's Benchmarking Network task forces and editorial board from 1998 to 2004 are described. The Benchmarking Network task forces successfully developed an extensive questionnaire with parameters of size and measures of library activity and published a report of the data collected by September 2002. The data were available to all MLA members in the form of aggregate tables. Utilization of Web-based technologies proved feasible for data intake and interactive display. A companion article analyzes and presents some of the data. MLA has continued to develop the Benchmarking Network with the completion of a second survey in 2004. The Benchmarking Network has provided many small libraries with comparative data to present to their administrators. It is a challenge for the future to convince all MLA members to participate in this valuable program.

  16. Upgrade of the Department of Energy's Savannah River Site's reactor operations and maintenance procedures

    International Nuclear Information System (INIS)

    Walsh, T.E.

    1991-01-01

    This paper describes the program in progress at the Savannah River Site (SRS) to upgrade the existing reactor operating and maintenance procedures to current commercial nuclear industry standards. In order to meet this goal, the following elements were established: administrative procedures to govern the upgrade process, a tracking system to provide status and accountability, and procedure writing guides. The goal is to establish a benchmark of excellence by which other Department of Energy (DOE) sites will measure themselves. The above three elements are addressed in detail in this paper.

  17. MCNP simulation of the TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Jeraj, R.; Glumac, B.; Maucec, M.

    1996-01-01

    The complete 3D MCNP model of the TRIGA Mark II reactor is presented. It enables precise calculations of some quantities of interest in a steady-state mode of operation. Calculational results are compared to the experimental results gathered during the reactor reconstruction in 1992. Since the operating conditions were well defined at that time, the experimental results can be used as a benchmark. It may be noted that this benchmark is one of the very few high-enrichment benchmarks available. In our simulations the experimental conditions were reproduced thoroughly: fuel elements and control rods were precisely modeled, as well as the entire core configuration and the vicinity of the core. ENDF/B-VI and ENDF/B-V libraries were used. Partial results of the benchmark calculations are presented. Excellent agreement of core criticality, excess reactivity and control rod worths can be observed. (author)

  18. Exploring the path to success: A review of the strategic IT benchmarking literature

    NARCIS (Netherlands)

    Ebner, Katharina; Urbach, Nils; Mueller, Benjamin

    IT organizations use strategic IT benchmarking (SITBM) to revise IT strategies or perform internal marketing. Despite benchmarking's long tradition, many strategic IT benchmarking initiatives do not reveal the desired outcomes. The vast body of knowledge on benchmarking and IT management does not

  19. Implementation and verification of global optimization benchmark problems

    Science.gov (United States)

    Posypkin, Mikhail; Usov, Alexander

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, and the interval estimates of a function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
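    The verification idea the abstract describes, checking that the analytic gradient of a benchmark is consistent with its function values, can be sketched as a finite-difference cross-check. This is a minimal illustration on one classic benchmark (Rosenbrock), not the paper's C++ library; all function names here are hypothetical.

```python
def rosenbrock(x):
    # Classic box-constrained benchmark: sum of 100*(x[i+1]-x[i]^2)^2 + (1-x[i])^2.
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def rosenbrock_grad(x):
    # Hand-derived gradient; a verification suite checks it against finite differences.
    g = [0.0] * len(x)
    for i in range(len(x) - 1):
        g[i] += -400.0 * x[i] * (x[i + 1] - x[i] ** 2) - 2.0 * (1.0 - x[i])
        g[i + 1] += 200.0 * (x[i + 1] - x[i] ** 2)
    return g

def fd_grad(f, x, h=1e-6):
    # Central finite differences as an independent reference.
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def verify(f, grad, x, tol=1e-4):
    # Relative agreement between analytic and finite-difference gradients.
    return all(abs(a - b) <= tol * max(1.0, abs(b))
               for a, b in zip(grad(x), fd_grad(f, x)))

print(verify(rosenbrock, rosenbrock_grad, [0.5, -1.2, 2.0]))  # True if consistent
```

    A suite of such checks, run over every benchmark in a collection, is one way mistakes in published benchmark descriptions can be caught automatically.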

  20. OWL2 benchmarking for the evaluation of knowledge based systems.

    Directory of Open Access Journals (Sweden)

    Sher Afgun Khan

    Full Text Available OWL2 semantics are becoming increasingly popular for real-world domain applications such as genetic engineering and health management information systems (MIS). The present work identifies a research gap: negligible attention has been paid to the performance evaluation of Knowledge Base Systems (KBS) using OWL2 semantics. To fill this gap, an OWL2 benchmark for the evaluation of KBS is proposed. The proposed benchmark addresses the foundational blocks of an ontology benchmark, i.e. data schema, workload and performance metrics. The proposed benchmark is tested on memory-based, file-based, relational-database and graph-based KBS for performance and scalability measures. The results show that the proposed benchmark is able to evaluate the behaviour of different state-of-the-art KBS on OWL2 semantics. On the basis of the results, end users (i.e. domain experts) would be able to select a suitable KBS appropriate for their domain.

  1. The VULCANO VE-U7 Corium spreading benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Journeau, Christophe; Haquet, Jean-Francois [CEA Cadarache, Severe Accident Mastering experimental Laboratory (DEN/DTN/STRI/LMA), 13108 St Paul lez Durance (France); Spindler, Bertrand [CEA Grenoble, Physicochemistry and Multiphasic Thermalhydraulics Laboratory (DEN/DTN/SE2T/LPTM), 17 rue des Martyrs, F-38054 Grenoble CEDEX 9 (France); Spengler, Claus [Gesellschaft fuer Reaktorsicherheit mbH, Department for Thermohydraulics/Process Engineering, Schwertnergasse 1, D-50667 Koeln (Germany); Foit, Jerzy [Forschungszentrum Karlsruhe GmbH, Institut fuer Kern- und Energietechnik (IKET), P.O. Box 3640, D-76021 Karlsruhe (Germany)

    2006-07-01

    In a hypothetical nuclear reactor severe accident, corium spreading is one possible mitigation measure that has been selected for the EPR design. A post-test benchmark exercise has been organized on the VULCANO VE-U7 corium spreading experiment. In this test, a prototypic corium mixture representative of what could be expected at the opening of EPR reactor-pit gate has been spread on siliceous concrete and on a reference channel in inert refractory ceramic. The spreading progression was not much affected by the presence of concrete and sparging gases. The procedure used to estimate the corium physical properties from its composition and temperature provided a satisfactory data set. The CORFLOW, LAVA and THEMA codes provide satisfactory calculations of the spreading front evolution and of its final length. LAVA and THEMA estimations of the substrate temperatures, which are the initial conditions for longer term Molten Core Concrete Interaction or Corium Ceramic Interaction computations, are also close to the measured data, within the experimental uncertainties. (authors)

  2. The VULCANO VE-U7 Corium spreading benchmark

    International Nuclear Information System (INIS)

    Journeau, Christophe; Haquet, Jean-Francois; Spindler, Bertrand; Spengler, Claus; Foit, Jerzy

    2006-01-01

    In a hypothetical nuclear reactor severe accident, corium spreading is one possible mitigation measure that has been selected for the EPR design. A post-test benchmark exercise has been organized on the VULCANO VE-U7 corium spreading experiment. In this test, a prototypic corium mixture representative of what could be expected at the opening of EPR reactor-pit gate has been spread on siliceous concrete and on a reference channel in inert refractory ceramic. The spreading progression was not much affected by the presence of concrete and sparging gases. The procedure used to estimate the corium physical properties from its composition and temperature provided a satisfactory data set. The CORFLOW, LAVA and THEMA codes provide satisfactory calculations of the spreading front evolution and of its final length. LAVA and THEMA estimations of the substrate temperatures, which are the initial conditions for longer term Molten Core Concrete Interaction or Corium Ceramic Interaction computations, are also close to the measured data, within the experimental uncertainties. (authors)

  3. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    Science.gov (United States)

    Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.

    2016-01-01

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical
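    The summed benchmark-quotient indicator described above (concentration divided by benchmark, summed over detected pesticides) can be sketched as follows. The benchmark values and sample concentrations here are illustrative placeholders, not the published LEBs or TEBs.

```python
# Hypothetical sediment screening with summed benchmark quotients.
# TEB values below are made-up placeholders, NOT the study's benchmarks.
TEB = {"bifenthrin": 0.5, "chlorpyrifos": 1.0, "fipronil": 0.2}  # assumed units

def summed_quotient(detections, benchmarks):
    """Sum of concentration/benchmark ratios over detected pesticides."""
    return sum(conc / benchmarks[name]
               for name, conc in detections.items()
               if name in benchmarks)

sample = {"bifenthrin": 0.3, "chlorpyrifos": 0.5}  # detected concentrations
q = summed_quotient(sample, TEB)
print(q)  # 0.3/0.5 + 0.5/1.0 = 1.1, i.e. the mixture exceeds the threshold
```

    A quotient above 1 for a mixture flags potential toxicity even when no single compound exceeds its own benchmark, which is the point of summing.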

  4. ICSBEP-2007, International Criticality Safety Benchmark Experiment Handbook

    International Nuclear Information System (INIS)

    Blair Briggs, J.

    2007-01-01

    1 - Description: The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organisation for Economic Co-operation and Development - Nuclear Energy Agency (OECD-NEA). This handbook contains criticality safety benchmark specifications that have been derived from experiments that were performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material. The example calculations presented do not constitute a validation of the codes or cross section data. The work of the ICSBEP is documented as an International Handbook of Evaluated Criticality Safety Benchmark Experiments. Currently, the handbook spans over 42,000 pages and contains 464 evaluations representing 4,092 critical, near-critical, or subcritical configurations, 21 criticality alarm placement/shielding configurations with multiple dose points for each, and 46 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. The handbook is intended for use by criticality safety analysts to perform necessary validations of their calculational techniques and is expected to be a valuable tool for decades to come. The ICSBEP Handbook is available on DVD. You may request a DVD by completing the DVD Request Form on the internet. Access to the Handbook on the Internet requires a password. You may request a password by completing the Password Request Form. The Web address is: http://icsbep.inel.gov/handbook.shtml
    2 - Method of solution: Experiments that are found

  5. Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects

    Science.gov (United States)

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. After introducing readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, the paper focuses on the methodological problems related to performing consistent benchmarking analyses. In particular, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140
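    One standard way to control for case-mix when comparing institutions, indirect standardization via an observed/expected ratio, can be sketched as below. This is a generic textbook technique offered for illustration, not the specific methods the paper develops, and the patient data are invented.

```python
# Illustrative case-mix adjustment: compare observed outcomes to those
# expected under a reference risk model, instead of comparing raw rates.

def oe_ratio(outcomes, risks):
    """Observed/expected ratio; risks are per-patient predicted probabilities
    from a reference case-mix model. A ratio near 1 means performance in line
    with expectation given the hospital's patient mix."""
    return sum(outcomes) / sum(risks)

# Hospital A treats sicker patients, so a raw event rate alone would mislead.
outcomes_a = [1, 0, 1, 0, 1]             # observed adverse events (invented)
risks_a    = [0.6, 0.3, 0.7, 0.2, 0.5]   # expected risks (invented)
print(round(oe_ratio(outcomes_a, risks_a), 2))  # 3 observed vs 2.3 expected
```

    Benchmarking on such risk-adjusted ratios, rather than raw rates, is one safeguard against the case-mix fallacy the abstract mentions, though the paper also discusses the risks of risk adjustment itself.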

  6. Ansys Benchmark of the Single Heater Test

    International Nuclear Information System (INIS)

    H.M. Wade; H. Marr; M.J. Anderson

    2006-01-01

    The Single Heater Test (SHT) is the first of three in-situ thermal tests included in the site characterization program for the potential nuclear waste monitored geologic repository at Yucca Mountain. The heating phase of the SHT started in August 1996 and was concluded in May 1997 after 9 months of heating. Cooling continued until January 1998, at which time post-test characterization of the test block commenced. Numerous thermal, hydrological, mechanical, and chemical sensors monitored the coupled processes in the unsaturated fractured rock mass around the heater (CRWMS M and O 1999). The objective of this calculation is to benchmark a numerical simulation of the rock mass thermal behavior against the extensive data set that is available from the thermal test. The scope is limited to three-dimensional (3-D) numerical simulations of the computational domain of the Single Heater Test and surrounding rock mass. This calculation supports the waste package thermal design methodology, and is developed by Waste Package Department (WPD) under Office of Civilian Radioactive Waste Management (OCRWM) procedure AP-3.12Q, Revision 0, ICN 3, BSCN 1, Calculations

  7. Performance evaluation of different types of particle representation procedures of Particle Swarm Optimization in Job-shop Scheduling Problems

    Science.gov (United States)

    Izah Anuar, Nurul; Saptari, Adi

    2016-02-01

    This paper addresses the types of particle representation (encoding) procedures used in a population-based stochastic optimization technique for solving scheduling problems known from the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) when solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP. This mapping is an important step, since it allows each particle in PSO to represent a schedule in JSP. Three procedures are used in this study: Operation and Particle Position Sequence (OPPS), random keys representation, and the random-key encoding scheme. These procedures have been tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective function is to minimize the makespan, using MATLAB software. Based on the experimental results, OPPS gives the best performance in solving both benchmark problems. The contribution of this paper is that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.

  8. MIPS bacterial genomes functional annotation benchmark dataset.

    Science.gov (United States)

    Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner

    2005-05-15

    Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality benchmark data as well as tedious preparatory work to generate the sequence parameters required as input data for the machine learning methods. Different program settings and incompatible protocols make a comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). These resources include precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition and other parameters that could be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab

  9. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Science.gov (United States)

    2010-10-01

    ...) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate... planning services and supplies and other appropriate preventive services, as designated by the Secretary... State for purposes of comparison in establishing the aggregate actuarial value of the benchmark...

  10. International Criticality Safety Benchmark Evaluation Project (ICSBEP) - ICSBEP 2015 Handbook

    International Nuclear Information System (INIS)

    Bess, John D.

    2015-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy (DOE). The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross-section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span approximately 69000 pages and contain 567 evaluations with benchmark specifications for 4874 critical, near-critical or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points for each, and 207 configurations that have been categorised as fundamental physics measurements that are relevant to criticality safety applications. New to the handbook are benchmark specifications for neutron activation foil and thermoluminescent dosimeter measurements performed at the SILENE critical assembly in Valduc, France as part of a joint venture in 2010 between the US DOE and the French Alternative Energies and Atomic Energy Commission (CEA). A photograph of this experiment is shown on the front cover. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these

  11. Parametric Sensitivity Tests—European Polymer Electrolyte Membrane Fuel Cell Stack Test Procedures

    DEFF Research Database (Denmark)

    Araya, Samuel Simon; Andreasen, Søren Juhl; Kær, Søren Knudsen

    2014-01-01

    As fuel cells are increasingly commercialized for various applications, harmonized and industry-relevant test procedures are necessary to benchmark tests and to ensure comparability of stack performance results from different parties. This paper reports the results of parametric sensitivity tests performed based on test procedures proposed by a European project, Stack-Test. The sensitivity of a Nafion-based low temperature PEMFC stack’s performance to parametric changes was the main objective of the tests. Four crucial parameters for fuel cell operation were chosen: relative humidity, temperature, pressure, and stoichiometry at varying current density. Furthermore, procedures for polarization curve recording were also tested in both ascending and descending current directions.

  12. Benchmarking in pathology: development of a benchmarking complexity unit and associated key performance indicators.

    Science.gov (United States)

    Neil, Amanda; Pfeffer, Sally; Burnett, Leslie

    2013-01-01

    This paper details the development of a new type of pathology laboratory productivity unit, the benchmarking complexity unit (BCU). The BCU provides a comparative index of laboratory efficiency, regardless of test mix. It also enables estimation of a measure of how much complex pathology a laboratory performs, and the identification of peer organisations for the purposes of comparison and benchmarking. The BCU is based on the theory that wage rates reflect productivity at the margin. A weighting factor for the ratio of medical to technical staff time was dynamically calculated based on actual participant site data. Given this weighting, a complexity value for each test, at each site, was calculated. The median complexity value (number of BCUs) for that test across all participating sites was taken as its complexity value for the Benchmarking in Pathology Program. The BCU allowed implementation of an unbiased comparison unit and test listing that was found to be a robust indicator of the relative complexity for each test. Employing the BCU data, a number of Key Performance Indicators (KPIs) were developed, including three that address comparative organisational complexity, analytical depth and performance efficiency, respectively. Peer groups were also established using the BCU combined with simple organisational and environmental metrics. The BCU has enabled productivity statistics to be compared between organisations. The BCU corrects for differences in test mix and workload complexity of different organisations and also allows for objective stratification into peer groups.
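    The cross-site step of the BCU construction, taking the median of per-site complexity values as a test's benchmark value, can be sketched as below. The weighting formula and all numbers are assumptions for illustration; the abstract does not give the actual functional form of the wage-based weighting.

```python
import statistics

# Hypothetical sketch of the BCU's cross-site aggregation. Each site reports
# a complexity value per test (here assumed to be weighted staff minutes);
# the median across participating sites becomes that test's BCU value.

def site_complexity(medical_min, technical_min, weight):
    # 'weight' stands in for the dynamically calculated medical:technical
    # wage-based weighting factor; its exact form is an assumption here.
    return weight * medical_min + technical_min

sites = {  # (medical, technical) staff minutes for one test, per site (invented)
    "site1": (2.0, 10.0),
    "site2": (3.0, 12.0),
    "site3": (1.5, 9.0),
}
weight = 3.0  # assumed weighting factor
values = [site_complexity(m, t, weight) for m, t in sites.values()]
bcu = statistics.median(values)
print(bcu)  # median of [16.0, 21.0, 13.5] is 16.0
```

    Using the median rather than the mean makes the per-test value robust to one site reporting an outlying workflow, which matters when the same unit must serve as an unbiased comparison index across laboratories.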

  13. WIPP Benchmark calculations with the large strain SPECTROM codes

    International Nuclear Information System (INIS)

    Callahan, G.D.; DeVries, K.L.

    1995-08-01

    This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers, including ten clay seams, of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. It does, however, provide a calculational check case where the small-strain formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large-strain codes compare favorably with results from other codes used to solve the problems.

  14. Test One to Test Many: A Unified Approach to Quantum Benchmarks

    Science.gov (United States)

    Bai, Ge; Chiribella, Giulio

    2018-04-01

    Quantum benchmarks are routinely used to validate the experimental demonstration of quantum information protocols. Many relevant protocols, however, involve an infinite set of input states, of which only a finite subset can be used to test the quality of the implementation. This is a problem, because the benchmark for the finitely many states used in the test can be higher than the original benchmark calculated for infinitely many states. This situation arises in the teleportation and storage of coherent states, for which the benchmark of 50% fidelity is commonly used in experiments, although finite sets of coherent states normally lead to higher benchmarks. Here, we show that the average fidelity over all coherent states can be indirectly probed with a single setup, requiring only two-mode squeezing, a 50-50 beam splitter, and homodyne detection. Our setup enables a rigorous experimental validation of quantum teleportation, storage, amplification, attenuation, and purification of noisy coherent states. More generally, we prove that every quantum benchmark can be tested by preparing a single entangled state and measuring a single observable.

  15. Benchmarking: A tool for conducting self-assessment

    International Nuclear Information System (INIS)

    Perkey, D.N.

    1992-01-01

    There is more information on nuclear plant performance available than can reasonably be assimilated and used effectively by plant management or personnel responsible for self-assessment. Also, it is becoming increasingly more important that an effective self-assessment program uses internal parameters not only to evaluate performance, but to incorporate lessons learned from other plants. Because of the quantity of information available, it is important to focus efforts and resources in areas where safety or performance is a concern and where the most improvement can be realized. One of the techniques that is being used to effectively accomplish this is benchmarking. Benchmarking involves the use of various sources of information to self-identify a plant's strengths and weaknesses, identify which plants are strong performers in specific areas, evaluate what makes a top performer, and incorporate the success factors into existing programs. The formality with which benchmarking is being implemented varies widely depending on the objective. It can be as simple as looking at a single indicator, such as systematic assessment of licensee performance (SALP) in engineering and technical support, then surveying the top performers with specific questions. However, a more comprehensive approach may include the performance of a detailed benchmarking study. Both operational and economic indicators may be used in this type of evaluation. Some of the indicators that may be considered and the limitations of each are discussed

  16. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    Science.gov (United States)

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to examine how the benchmark strategy for comparing fractions was taught to fifth-graders in Taiwan. Twenty-six fifth-graders from a public elementary school in southern Taiwan were selected to join this study. Results of this case study showed that students made much progress in the use of the benchmark strategy when comparing fractions…

  17. Does Your Terrestrial Model Capture Key Arctic-Boreal Relationships?: Functional Benchmarks in the ABoVE Model Benchmarking System

    Science.gov (United States)

    Stofferahn, E.; Fisher, J. B.; Hayes, D. J.; Schwalm, C. R.; Huntzinger, D. N.; Hantson, W.

    2017-12-01

    The Arctic-Boreal Region (ABR) is a major source of uncertainties for terrestrial biosphere model (TBM) simulations. These uncertainties are precipitated by a lack of observational data from the region, affecting the parameterizations of cold environment processes in the models. Addressing these uncertainties requires a coordinated effort of data collection and integration of the following key indicators of the ABR ecosystem: disturbance, vegetation / ecosystem structure and function, carbon pools and biogeochemistry, permafrost, and hydrology. We are continuing to develop the model-data integration framework for NASA's Arctic Boreal Vulnerability Experiment (ABoVE), wherein data collection is driven by matching observations and model outputs to the ABoVE indicators via the ABoVE Grid and Projection. The data are used as reference datasets for a benchmarking system which evaluates TBM performance with respect to ABR processes. The benchmarking system utilizes two types of performance metrics to identify model strengths and weaknesses: standard metrics, based on the International Land Model Benchmarking (ILaMB) system, which relate a single observed variable to a single model output variable, and functional benchmarks, wherein the relationship of one variable to one or more variables (e.g. the dependence of vegetation structure on snow cover, the dependence of active layer thickness (ALT) on air temperature and snow cover) is ascertained in both observations and model outputs. This in turn provides guidance to model development teams for reducing uncertainties in TBM simulations of the ABR.
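    The distinction between a standard metric and a functional benchmark can be made concrete with a small sketch: instead of scoring a single model variable against a single observation, a functional benchmark compares a fitted relationship, here the slope of active layer thickness (ALT) against air temperature, between observations and model output. The data below are synthetic and the comparison is an illustration of the idea, not the ABoVE system's actual implementation.

```python
# Sketch of a "functional benchmark": score whether the model reproduces the
# observed ALT-vs-air-temperature relationship, not just ALT values themselves.

def slope(xs, ys):
    # Ordinary least-squares slope of y on x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

temp = [-10.0, -5.0, 0.0, 5.0]        # air temperature (degC), synthetic
alt_obs = [0.40, 0.55, 0.70, 0.85]    # observed active layer thickness (m)
alt_mod = [0.35, 0.45, 0.55, 0.65]    # model output, synthetic

obs_slope = slope(temp, alt_obs)
mod_slope = slope(temp, alt_mod)
print(obs_slope, mod_slope)  # ~0.03 vs ~0.02: model underestimates sensitivity
```

    A model can match mean ALT reasonably well and still fail such a functional benchmark, which is exactly the kind of structural weakness the standard single-variable metrics miss.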

  18. Nomenclatural Benchmarking: The roles of digital typification and telemicroscopy

    Science.gov (United States)

    The process of nomenclatural benchmarking is the examination of type specimens of all available names to ascertain which currently accepted species the specimen bearing the name falls within. We propose a strategy for addressing four challenges for nomenclatural benchmarking. First, there is the mat...

  19. Implementation and verification of global optimization benchmark problems

    Directory of Open Access Journals (Sweden)

    Posypkin Mikhail

    2017-12-01

    Full Text Available The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, and the interval estimates of a function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.

  20. The Medical Library Association Benchmarking Network: development and implementation*

    Science.gov (United States)

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd

    2006-01-01

    Objective: This article explores the development and implementation of the Medical Library Association (MLA) Benchmarking Network from the initial idea and test survey, to the implementation of a national survey in 2002, to the establishment of a continuing program in 2004. Started as a program for hospital libraries, it has expanded to include other nonacademic health sciences libraries. Methods: The activities and timelines of MLA's Benchmarking Network task forces and editorial board from 1998 to 2004 are described. Results: The Benchmarking Network task forces successfully developed an extensive questionnaire with parameters of size and measures of library activity and published a report of the data collected by September 2002. The data were available to all MLA members in the form of aggregate tables. Utilization of Web-based technologies proved feasible for data intake and interactive display. A companion article analyzes and presents some of the data. MLA has continued to develop the Benchmarking Network with the completion of a second survey in 2004. Conclusions: The Benchmarking Network has provided many small libraries with comparative data to present to their administrators. It is a challenge for the future to convince all MLA members to participate in this valuable program. PMID:16636702

  1. The Development of a Benchmark Tool for NoSQL Databases

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2013-07-01

    Full Text Available The aim of this article is to describe a proposed benchmark methodology and software application targeted at measuring the performance of both SQL and NoSQL databases. These represent results obtained during PhD research (actually part of a larger application intended for NoSQL database management). A reason for aiming at this particular subject is the near-complete lack of benchmarking tools for NoSQL databases, except for YCSB [1] and a benchmark tool made specifically to compare Redis to RavenDB. While there are several well-known benchmarking systems for classical relational databases (starting with the canonical TPC-C, TPC-E and TPC-H), on the NoSQL side of the database world such tools are mostly missing and sorely needed.
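    The core loop of any such benchmarking tool, time a batch of operations and report throughput, can be sketched generically. This is an illustrative harness, not the article's application; an in-memory dict stands in for a real SQL/NoSQL client, whose driver calls would be plugged into the operation callbacks.

```python
import time

# Minimal sketch of a database micro-benchmark harness: run N operations,
# measure wall-clock time with a monotonic timer, report operations/second.

def run_benchmark(operation, n_ops):
    start = time.perf_counter()
    for i in range(n_ops):
        operation(i)
    elapsed = time.perf_counter() - start
    return n_ops / elapsed  # throughput in operations per second

store = {}  # stand-in for a database client (e.g. key-value inserts/lookups)
insert_rate = run_benchmark(lambda i: store.__setitem__(f"key{i}", i), 10_000)
read_rate = run_benchmark(lambda i: store[f"key{i}"], 10_000)
print(f"insert: {insert_rate:.0f} ops/s, read: {read_rate:.0f} ops/s")
```

    Real tools in the TPC/YCSB tradition add workload mixes, warm-up phases, concurrency and latency percentiles on top of this basic measure-and-divide pattern.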

  2. The General Concept of Benchmarking and Its Application in Higher Education in Europe

    Science.gov (United States)

    Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna

    2009-01-01

    The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…

  3. Calculation of the fifth atomic energy research dynamic benchmark with APROS

    International Nuclear Information System (INIS)

    Puska Eija Karita; Kontio Harii

    1998-01-01

    The hand-out presents the model used for the calculation of the fifth Atomic Energy Research dynamic benchmark with the APROS code. In the calculation of the fifth Atomic Energy Research dynamic benchmark, the three-dimensional neutronics model of APROS was used. The core was divided axially into 20 nodes according to the specifications of the benchmark, and every six identical fuel assemblies were grouped into one one-dimensional thermal-hydraulic channel. The five-equation thermal-hydraulic model was used in the benchmark. The plant process and automation were described with a generic WWER-440 plant model created by IVO Power Engineering Ltd., Finland. (Author)

  4. ORBDA: An openEHR benchmark dataset for performance assessment of electronic health record servers.

    Directory of Open Access Journals (Sweden)

    Douglas Teodoro

    Full Text Available The openEHR specifications are designed to support implementation of flexible and interoperable Electronic Health Record (EHR) systems. Despite the increasing number of solutions based on the openEHR specifications, it is difficult to find publicly available healthcare datasets in the openEHR format that can be used to test, compare and validate different data persistence mechanisms for openEHR. To foster research on openEHR servers, we present the openEHR Benchmark Dataset, ORBDA, a very large healthcare benchmark dataset encoded using the openEHR formalism. To construct ORBDA, we extracted and cleaned a de-identified dataset from the Brazilian National Healthcare System (SUS) containing hospitalisation and high complexity procedures information and formalised it using a set of openEHR archetypes and templates. Then, we implemented a tool to enrich the raw relational data and convert it into the openEHR model using the openEHR Java reference model library. The ORBDA dataset is available in composition, versioned composition and EHR openEHR representations in XML and JSON formats. In total, the dataset contains more than 150 million composition records. We describe the dataset and provide means to access it. Additionally, we demonstrate the usage of ORBDA for evaluating insert throughput and query latency performance of some NoSQL database management systems. We believe that ORBDA is a valuable asset for assessing storage models for openEHR-based information systems during the software engineering process. It may also be a suitable component in future standardised benchmarking of available openEHR storage platforms.
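
The relational-to-openEHR conversion described above can be sketched roughly as a row-to-document mapping; the archetype id, field names and sample values below are illustrative placeholders, not the actual ORBDA archetypes or templates.

```python
import json

def row_to_composition(row):
    """Map a flat hospitalisation record to a nested, composition-like dict.

    The archetype id and field names are placeholders for illustration only,
    not the real openEHR archetypes used to build ORBDA.
    """
    return {
        "archetype_node_id": "openEHR-EHR-COMPOSITION.encounter.v1",  # placeholder
        "name": {"value": "hospitalisation"},
        "content": [{
            "procedure_code": row["procedure_code"],
            "admission_date": row["admission_date"],
        }],
    }

# Hypothetical relational row (invented values, not SUS data).
row = {"procedure_code": "0301010010", "admission_date": "2015-03-01"}
doc = json.dumps(row_to_composition(row))
```

The real tool additionally validates each document against its template and emits XML as well as JSON, but the basic shape of the transformation is the same.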

  5. ORBDA: An openEHR benchmark dataset for performance assessment of electronic health record servers

    Science.gov (United States)

    Sundvall, Erik; João Junior, Mario; Ruch, Patrick; Miranda Freire, Sergio

    2018-01-01

    The openEHR specifications are designed to support implementation of flexible and interoperable Electronic Health Record (EHR) systems. Despite the increasing number of solutions based on the openEHR specifications, it is difficult to find publicly available healthcare datasets in the openEHR format that can be used to test, compare and validate different data persistence mechanisms for openEHR. To foster research on openEHR servers, we present the openEHR Benchmark Dataset, ORBDA, a very large healthcare benchmark dataset encoded using the openEHR formalism. To construct ORBDA, we extracted and cleaned a de-identified dataset from the Brazilian National Healthcare System (SUS) containing hospitalisation and high complexity procedures information and formalised it using a set of openEHR archetypes and templates. Then, we implemented a tool to enrich the raw relational data and convert it into the openEHR model using the openEHR Java reference model library. The ORBDA dataset is available in composition, versioned composition and EHR openEHR representations in XML and JSON formats. In total, the dataset contains more than 150 million composition records. We describe the dataset and provide means to access it. Additionally, we demonstrate the usage of ORBDA for evaluating insert throughput and query latency performance of some NoSQL database management systems. We believe that ORBDA is a valuable asset for assessing storage models for openEHR-based information systems during the software engineering process. It may also be a suitable component in future standardised benchmarking of available openEHR storage platforms. PMID:29293556

  6. ORBDA: An openEHR benchmark dataset for performance assessment of electronic health record servers.

    Science.gov (United States)

    Teodoro, Douglas; Sundvall, Erik; João Junior, Mario; Ruch, Patrick; Miranda Freire, Sergio

    2018-01-01

    The openEHR specifications are designed to support implementation of flexible and interoperable Electronic Health Record (EHR) systems. Despite the increasing number of solutions based on the openEHR specifications, it is difficult to find publicly available healthcare datasets in the openEHR format that can be used to test, compare and validate different data persistence mechanisms for openEHR. To foster research on openEHR servers, we present the openEHR Benchmark Dataset, ORBDA, a very large healthcare benchmark dataset encoded using the openEHR formalism. To construct ORBDA, we extracted and cleaned a de-identified dataset from the Brazilian National Healthcare System (SUS) containing hospitalisation and high complexity procedures information and formalised it using a set of openEHR archetypes and templates. Then, we implemented a tool to enrich the raw relational data and convert it into the openEHR model using the openEHR Java reference model library. The ORBDA dataset is available in composition, versioned composition and EHR openEHR representations in XML and JSON formats. In total, the dataset contains more than 150 million composition records. We describe the dataset and provide means to access it. Additionally, we demonstrate the usage of ORBDA for evaluating insert throughput and query latency performance of some NoSQL database management systems. We believe that ORBDA is a valuable asset for assessing storage models for openEHR-based information systems during the software engineering process. It may also be a suitable component in future standardised benchmarking of available openEHR storage platforms.

  7. The Benchmarking of Integrated Business Structures

    Directory of Open Access Journals (Sweden)

    Nifatova Olena M.

    2017-12-01

    Full Text Available The aim of the article is to study the role of benchmarking in the process of integration of business structures in the aspect of knowledge sharing. The results of studying the essential content of the concept “integrated business structure” and its semantic analysis made it possible to form our own understanding of this category, with an emphasis on the need to consider it in three projections: legal, economic and organizational. The economic projection of the essential content of integration associations of business units is supported by the organizational projection, which is expressed through such essential aspects as the existence of a single center that makes key decisions; understanding integration as knowledge sharing; and using benchmarking as an exchange of experience on key business processes. Understanding the process of integration of business units in the aspect of knowledge sharing involves obtaining certain information benefits. Using benchmarking as an exchange of experience on key business processes in integrated business structures will help improve the basic production processes and increase the efficiency of activity of both the individual business unit and the IBS as a whole.

  8. BIM quickscan: benchmark of BIM performance in the Netherlands

    NARCIS (Netherlands)

    Berlo, L.A.H.M. van; Dijkmans, T.J.A.; Hendriks, H.; Spekkink, D.; Pel, W.

    2012-01-01

    In 2009 a “BIM QuickScan” for benchmarking BIM performance was created in the Netherlands (Sebastian, Berlo 2010). This instrument aims to provide insight into the current BIM performance of a company. The benchmarking instrument combines quantitative and qualitative assessments of the ‘hard’ and

  9. MoleculeNet: a benchmark for molecular machine learning.

    Science.gov (United States)

    Wu, Zhenqin; Ramsundar, Bharath; Feinberg, Evan N; Gomes, Joseph; Geniesse, Caleb; Pappu, Aneesh S; Leswing, Karl; Pande, Vijay

    2018-01-14

    Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets, making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large-scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high-quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than the choice of a particular learning algorithm.
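
A benchmark such as MoleculeNet standardises evaluation metrics across datasets; as a minimal illustration, here is a pure-Python ROC-AUC (a metric commonly reported for imbalanced classification tasks), computed via the Mann-Whitney rank statistic. The toy labels and scores are invented for illustration, not MoleculeNet data.

```python
def roc_auc(y_true, scores):
    """ROC-AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive is scored above a randomly chosen negative
    (ties count as half a win)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy imbalanced task: 2 positives among 8 samples.
y = [1, 1, 0, 0, 0, 0, 0, 0]
s = [0.9, 0.4, 0.8, 0.3, 0.2, 0.1, 0.05, 0.6]
auc = roc_auc(y, s)   # 10 of 12 positive/negative pairs ranked correctly
```

Using a rank-based metric like this, rather than plain accuracy, is exactly why standardised benchmarks matter for highly imbalanced tasks: a classifier predicting "negative" everywhere scores 75% accuracy here but only 0.5 AUC.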

  10. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem.

  11. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium-fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2), and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalue and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code-to-code differences are analyzed and discussed.
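
The eigenvalue comparison summarised above (maximum difference within 2%, average absolute difference under 1%) can be sketched as a simple code-to-code check; the k-eff values below are invented placeholders, not the published MOCUP or CASMO-4 results.

```python
def compare_eigenvalues(k_ref, k_other):
    """Return (max, mean) absolute relative difference in percent between
    two codes' eigenvalue series over the same burnup points."""
    diffs = [abs(a - b) / a * 100.0 for a, b in zip(k_ref, k_other)]
    return max(diffs), sum(diffs) / len(diffs)

# Illustrative k-eff vs burnup from two hypothetical code runs.
code_a = [1.250, 1.180, 1.110, 1.050]
code_b = [1.248, 1.175, 1.105, 1.048]
max_diff, avg_diff = compare_eigenvalues(code_a, code_b)
```

In practice a benchmark report would tabulate these differences at each burnup step alongside isotope concentration ratios, but the acceptance check is this simple comparison.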

  12. Benchmark calculations of power distribution within assemblies

    International Nuclear Information System (INIS)

    Cavarec, C.; Perron, J.F.; Verwaerde, D.; West, J.P.

    1994-09-01

    The main objective of this Benchmark is to compare different techniques for fine flux prediction based upon coarse mesh diffusion or transport calculations. We proposed 5 ''core'' configurations including different assembly types (17 x 17 pins, ''uranium'', ''absorber'' or ''MOX'' assemblies), with different boundary conditions. The specification required results in terms of reactivity, pin-by-pin fluxes and production rate distributions. The proposal for these Benchmark calculations was made by J.C. LEFEBVRE, J. MONDOT, J.P. WEST and the specification (with nuclear data, assembly types, core configurations for 2D geometry and results presentation) was distributed to correspondents of the OECD Nuclear Energy Agency. 11 countries and 19 companies answered the exercise proposed by this Benchmark. Heterogeneous calculations and homogeneous calculations were made. Various methods were used to produce the results: diffusion (finite differences, nodal...), transport (Pij, Sn, Monte Carlo). This report presents an analysis and intercomparison of all the results received.

  13. Boiling water reactor turbine trip (TT) benchmark

    International Nuclear Information System (INIS)

    2001-06-01

    In the field of coupled neutronics/thermal-hydraulics computation there is a need to enhance scientific knowledge in order to develop advanced modelling techniques for new nuclear technologies and concepts, as well as for current nuclear applications. Recently developed 'best-estimate' computer code systems for modelling 3-D coupled neutronics/thermal-hydraulics transients in nuclear cores and for the coupling of core phenomena and system dynamics (PWR, BWR, VVER) need to be compared against each other and validated against results from experiments. International benchmark studies have been set up for the purpose. The present volume describes the specification of such a benchmark. The transient addressed is a turbine trip (TT) in a BWR involving pressurization events in which the coupling between core phenomena and system dynamics plays an important role. In addition, the data made available from experiments carried out at the plant make the present benchmark very valuable. The data used are from events at the Peach Bottom 2 reactor (a GE-designed BWR/4). (authors)

  14. Toxicological benchmarks for screening potential contaminants of concern for effects on terrestrial plants

    International Nuclear Information System (INIS)

    Suter, G.W. II; Will, M.E.; Evans, C.

    1993-09-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is the screening of contaminants to determine which of them are worthy of further consideration as ''contaminants of potential concern.'' This process is termed ''contaminant screening.'' It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 34 chemicals potentially associated with US Department of Energy (DOE) sites. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern. The purpose of this report is to present plant toxicity data and discuss their utility as benchmarks for determining the hazard to terrestrial plants caused by contaminants in soil. Benchmarks are provided for soils and solutions
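
The screening rule described above (a chemical is a contaminant of potential concern when its measured concentration exceeds both the phytotoxicity benchmark and the background concentration for the soil type) can be sketched as a simple filter; the chemicals and concentrations below are hypothetical, not values from the report.

```python
def screen(chemicals):
    """Return the names of chemicals whose measured soil concentration
    exceeds BOTH the phytotoxicity benchmark and the background level."""
    return [c["name"] for c in chemicals
            if c["measured"] > c["benchmark"] and c["measured"] > c["background"]]

# Hypothetical soil concentrations in mg/kg, for illustration only.
soil = [
    {"name": "zinc",    "measured": 120.0, "benchmark": 50.0, "background": 40.0},
    {"name": "arsenic", "measured": 8.0,   "benchmark": 10.0, "background": 5.0},
]
flagged = screen(soil)   # arsenic is below its benchmark, so only zinc is flagged
```

Requiring both exceedances prevents flagging chemicals that are merely at natural background levels, which is the point of pairing benchmarks with background concentrations.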

  15. Key performance indicators to benchmark hospital information systems - a Delphi study.

    Science.gov (United States)

    Hübner-Bloder, G; Ammenwerth, E

    2009-01-01

    To identify the key performance indicators for hospital information systems (HIS) that can be used for HIS benchmarking. A Delphi survey with one qualitative and two quantitative rounds. Forty-four HIS experts from health care IT practice and academia participated in all three rounds. Seventy-seven performance indicators were identified and organized into eight categories: technical quality, software quality, architecture and interface quality, IT vendor quality, IT support and IT department quality, workflow support quality, IT outcome quality, and IT costs. The highest ranked indicators are related to clinical workflow support and user satisfaction. Isolated technical indicators or cost indicators were not seen as useful. The experts favored an interdisciplinary group of all the stakeholders, led by hospital management, to conduct the HIS benchmarking. They proposed benchmarking activities both in regular (annual) intervals as well as at defined events (for example after IT introduction). Most of the experts stated that in their institutions no HIS benchmarking activities are being performed at the moment. In the context of IT governance, IT benchmarking is gaining importance in the healthcare area. The found indicators reflect the view of health care IT professionals and researchers. Research is needed to further validate and operationalize key performance indicators, to provide an IT benchmarking framework, and to provide open repositories for a comparison of the HIS benchmarks of different hospitals.

  16. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II.

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets

  17. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  18. Energy benchmarking for shopping centers in Gulf Coast region

    International Nuclear Information System (INIS)

    Juaidi, Adel; AlFaris, Fadi; Montoya, Francisco G.; Manzano-Agugliaro, Francisco

    2016-01-01

    The building sector consumes a significant amount of energy worldwide (up to 40% of total global energy); moreover, by the year 2030 the consumption is expected to increase by 50%. One of the reasons is that the performance of buildings and their components degrades over the years. In recent years, energy benchmarking for government office buildings, large-scale public buildings and large commercial buildings has been one of the key energy saving projects for promoting the development of building energy efficiency and sustainable energy savings in Gulf Cooperation Council (GCC) countries. Benchmarking would increase the purchase of energy efficient equipment, reducing energy bills, CO₂ emissions and conventional air pollution. This paper focuses on energy benchmarking for shopping centers in the Gulf Coast Region. In addition, this paper analyzes a sample of shopping center data from the Gulf Coast Region (Dubai, Ajman, Sharjah, Oman and Bahrain). It aims to develop a benchmark for these shopping centers by highlighting the status of energy consumption performance. This research supports the sustainability movement in the Gulf area by classifying the shopping centers into Poor, Usual and Best Practices in terms of energy efficiency. According to the benchmarking analysis in this paper, the best energy management practices among shopping centers in the Gulf Coast Region are found in buildings that consume less than 810 kWh/m²/yr, whereas poor practices are found in centers that consume more than 1439 kWh/m²/yr. The conclusions of this work can be used as a reference for benchmarking shopping centers in similar climates. - Highlights: •The energy consumption data of shopping centers in the Gulf Coast Region were gathered. •A benchmark of energy consumption for the public areas of shopping centers in the Gulf Coast Region was developed. •Shopping centers following the usual practice in the region consume between 810 kWh/m²/yr and 1439 kWh/m²/yr.
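
The Poor/Usual/Best classification reported in the abstract can be expressed directly as a small function; the two thresholds (810 and 1439 kWh/m²/yr) are taken from the abstract, while the function name and example inputs are illustrative.

```python
def classify_center(kwh_per_m2_year):
    """Classify a shopping center's energy intensity against the Gulf Coast
    thresholds reported in the study: Best practice below 810 kWh/m2/yr,
    Poor above 1439 kWh/m2/yr, Usual in between."""
    if kwh_per_m2_year < 810:
        return "Best"
    if kwh_per_m2_year > 1439:
        return "Poor"
    return "Usual"

label = classify_center(1000)
```

A building manager could run annual consumption figures through such a rule to see whether a center falls inside or outside the regional norm.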

  19. Testing and qualification of confidence in statistical procedures

    Energy Technology Data Exchange (ETDEWEB)

    Serghiuta, D.; Tholammakkil, J.; Hammouda, N. [Canadian Nuclear Safety Commission (Canada); O' Hagan, A. [Sheffield Univ. (United Kingdom)

    2014-07-01

    This paper discusses a framework for designing artificial test problems, evaluation criteria, and two of the benchmark tests developed under a research project initiated by the Canadian Nuclear Safety Commission to investigate the approaches for qualification of tolerance limit methods and algorithms proposed for application in optimization of CANDU regional/neutron overpower protection trip setpoints for aged conditions. A significant component of this investigation has been the development of a series of benchmark problems of gradually increased complexity, from simple 'theoretical' problems up to complex problems closer to the real application. The first benchmark problem discussed in this paper is a simplified scalar problem which does not involve extremal, maximum or minimum, operations, typically encountered in the real applications. The second benchmark is a high dimensional, but still simple, problem for statistical inference of maximum channel power during normal operation. Bayesian algorithms have been developed for each benchmark problem to provide an independent way of constructing tolerance limits from the same data and allow assessing how well different methods make use of those data and, depending on the type of application, evaluating what the level of 'conservatism' is. The Bayesian method is not, however, used as a reference method, or 'gold' standard, but simply as an independent review method. The approach and the tests developed can be used as a starting point for developing a generic suite (generic in the sense of potentially applying whatever the proposed statistical method) of empirical studies, with clear criteria for passing those tests. Some lessons learned, in particular concerning the need to assure the completeness of the description of the application and the role of completeness of input information, are also discussed. It is concluded that a formal process which includes extended and detailed benchmark
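
As one concrete example of the kind of tolerance-limit method such benchmark tests are designed to qualify, here is a sketch of Wilks' first-order, one-sided non-parametric sample-size formula (a standard approach in this area, though not necessarily one of the methods evaluated in the paper): the smallest n for which the sample maximum bounds the desired quantile with the stated confidence satisfies 1 − coverage^n ≥ confidence.

```python
import math

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n such that the sample MAXIMUM is a one-sided upper
    tolerance limit covering the `coverage` quantile with the given
    confidence: solve 1 - coverage**n >= confidence for integer n."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

n = wilks_sample_size()   # the classic 95%/95% first-order result: 59 runs
```

Benchmark problems like the ones described above exist precisely to check whether such formulas, and their more elaborate competitors, deliver the claimed coverage and an acceptable degree of conservatism on realistic data.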

  20. Qinshan CANDU NPP outage performance improvement through benchmarking

    International Nuclear Information System (INIS)

    Jiang Fuming

    2005-01-01

    With increasingly fierce competition in the deregulated energy market, the optimization of outage duration has become one of the focal points for nuclear power plant owners around the world, who are seeking various ways to shorten the outage duration of their plants. Great efforts have been made in the Light Water Reactor (LWR) family with the concepts of benchmarking and evaluation, which greatly reduced outage duration and improved outage performance; the average capacity factor of LWRs has improved substantially over the last three decades and is now close to 90%. CANDU (Pressurized Heavy Water Reactor) stations, with their unique features of on-power refueling and of nuclear fuel remaining in the reactor throughout the planned outage, give rise to more stringent safety requirements during a planned outage. In addition, these features lead to more variation in the critical path of the planned outage from station to station. In order to benchmark against the best practices in CANDU stations, the Third Qinshan Nuclear Power Company (TQNPC) has initiated a benchmarking program among CANDU stations aiming to standardize the outage maintenance windows and optimize the outage duration. The initial benchmarking has resulted in the optimization of outage duration at the Qinshan CANDU NPP and the formulation of its first long-term outage plan. This paper describes the benchmarking work that has proven useful for optimizing outage duration at the Qinshan CANDU NPP, and the vision of further optimizing the duration with joint effort from the CANDU community. (authors)