WorldWideScience

Sample records for benchmarks improvement trends

  1. Improving patient safety culture in Saudi Arabia (2012-2015): trending, improvement and benchmarking.

    Science.gov (United States)

    Alswat, Khalid; Abdalla, Rawia Ahmad Mustafa; Titi, Maher Abdelraheim; Bakash, Maram; Mehmood, Faiza; Zubairi, Beena; Jamal, Diana; El-Jardali, Fadi

    2017-08-02

    Measuring patient safety culture can provide insight into areas for improvement and help monitor changes over time. This study details the findings of a re-assessment of patient safety culture in a multi-site Medical City in Riyadh, Kingdom of Saudi Arabia (KSA). Results were compared to an earlier assessment conducted in 2012 and benchmarked against regional and international studies. Such assessments can provide hospital leadership with insight into how their hospital is performing on patient safety culture composites as a result of quality improvement plans. This paper also explored the association between patient safety culture predictors and patient safety grade, perception of patient safety, frequency of events reported and number of events reported. We utilized a customized version of the patient safety culture survey developed by the Agency for Healthcare Research and Quality. The Medical City is a tertiary care teaching facility composed of two sites (total capacity of 904 beds). Data were analyzed using SPSS 24 at a significance level of 0.05. A t-test was used to compare results from the 2012 survey to those of the survey conducted in 2015. Two generalized estimating equation models, in addition to two linear models, were used to assess the association between composites and patient safety culture outcomes. Results were also benchmarked against similar initiatives in Lebanon, Palestine and the USA. Areas of strength in 2015 included Teamwork Within Units and Organizational Learning-Continuous Improvement; areas requiring improvement included Non-Punitive Response to Error and Staffing. Comparing results to the 2012 survey revealed improvement in some areas, but Non-Punitive Response to Error and Staffing remained the lowest-scoring composites in 2015. Regression highlighted a significant association between managerial support, organizational learning and feedback, and improved survey outcomes. Comparison to international benchmarks revealed that the hospital is performing at or…
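    The wave-to-wave comparison described above (a t-test between 2012 and 2015 composite scores) can be sketched in a few lines. This is a minimal illustration, not the study's code: the scores below are hypothetical, and Welch's unequal-variance form of the test is assumed.

```python
import math
from statistics import mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and its approximate degrees of
    freedom (Welch-Satterthwaite), for comparing two survey waves."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2  # sample variances
    se = math.sqrt(va / na + vb / nb)                    # standard error of the difference
    t = (mean(sample_b) - mean(sample_a)) / se
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1)
    )
    return t, df

# Hypothetical percent-positive scores for one safety-culture composite
scores_2012 = [62.0, 58.5, 64.2, 60.1, 59.8]
scores_2015 = [68.3, 66.1, 70.4, 65.9, 67.2]
t, df = welch_t(scores_2012, scores_2015)
print(round(t, 2), round(df, 1))
```

    A t statistic this far from zero at these degrees of freedom would indicate a change in the composite beyond random variation; in practice one would look up (or compute) the corresponding p-value against the 0.05 threshold the study used.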

  2. Benchmarking 2010: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  3. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  4. Benchmarking: A Process for Improvement.

    Science.gov (United States)

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, can partially address that difficulty. Benchmarking allows for the establishment of a systematic process to indicate whether outputs…

  5. Benchmarking 2011: Trends in Education Philanthropy

    Science.gov (United States)

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  6. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  7. How benchmarking can improve patient nutrition.

    Science.gov (United States)

    Ellis, Jane

    Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.

  8. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

    Benchmarking means different things to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry/functional benchmarking, process/generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore it is important to know what kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  9. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to a governmental initiative to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking as it is presently carried out in the Danish construction sector. The many different perceptions of benchmarking, and the nature of the construction sector, lead to uncertainty in how to perceive and use benchmarking, in turn generating uncertainty in understanding the effects of benchmarking. This paper addresses two perceptions of benchmarking: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to examine which effects, possibilities and challenges follow in the wake of using this kind of benchmarking.

  10. Oncology practice trends from the national practice benchmark.

    Science.gov (United States)

    Barr, Thomas R; Towle, Elaine L

    2012-09-01

    In 2011, we made predictions on the basis of data from the National Practice Benchmark (NPB) reports from 2005 through 2010. With the new 2011 data in hand, we have revised last year's predictions and projected for the next 3 years. In addition, we make some new predictions that will be tracked in future benchmarking surveys. We also outline a conceptual framework for contemplating these data based on an ecological model of the oncology delivery system. The 2011 NPB data are consistent with last year's prediction of a decrease in the operating margins necessary to sustain a community oncology practice. With the new data in, we now predict these reductions to occur more slowly than previously forecast. We note an ease to the squeeze observed in last year's trend analysis, which will allow more time for practices to adapt their business models for survival and offer the best of these practices an opportunity to invest earnings into operations to prepare for the inevitable shift away from historic payment methodology for clinical service. This year, survey respondents reported changes in business structure, first measured in the 2010 data, indicating an increase in the percentage of respondents who believe that change is coming soon, but the majority still have confidence in the viability of their existing business structure. Although oncology practices are in for a bumpy ride, things are looking less dire this year for practices participating in our survey.

  11. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    Science.gov (United States)

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of this national best practices study in ambulatory surgery were used to give our quality improvement team the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.
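    The improvement figure quoted in the abstract follows directly from the reported times; a quick arithmetic check (the times and benchmark are taken from the abstract above):

```python
# Values reported in the abstract
baseline = 19.9   # minutes, initial turnaround time at St. Joseph's
after = 16.3      # minutes, average after the team's solutions
benchmark = 13.5  # minutes, 1992 national best-practices benchmark

# Relative improvement against the starting point
improvement = (baseline - after) / baseline * 100
print(f"{improvement:.0f}% improvement")

# Gap still remaining to the national benchmark, in minutes
remaining_gap = after - benchmark
print(f"{remaining_gap:.1f} min above benchmark")
```

    The 18% figure is thus measured against the site's own baseline, not against the benchmark: the improved time of 16.3 minutes still sits 2.8 minutes above the 13.5-minute national best practice.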

  12. Benchmark and Continuous Improvement of Performance

    Directory of Open Access Journals (Sweden)

    Alina Alecse Stanciu

    2017-12-01

    Full Text Available The present economic environment challenges us to perform and to think and re-think our personal strategies in accordance with our entities' strategies, whether we are simply employed or are entrepreneurs. It is an environment characterised by Volatility, Uncertainty, Complexity and Ambiguity, a VUCA world in which entities must fight for the positions they have gained in the market, disrupt new markets and new economies, and develop their client portfolios, with performance as one final goal, under the pressure of the driving forces known as the 2030 Megatrends: Globalization 2.0, the Environmental Crisis and the Scarcity of Resources, Individualism and Value Pluralism, and Demographic Change. This paper examines whether using benchmarks is an opportunity to increase the competitiveness of Romanian SMEs, and the results show that the benchmark is a powerful instrument, combining reduced negative impact on the environment with a positive impact on the economy and society.

  13. Public sector benchmarking and performance improvement : what is the link and can it be improved?

    NARCIS (Netherlands)

    Tillema, Sandra

    2010-01-01

    Benchmarking is often used in the public sector as a way of driving up performance. This article explains why benchmarking does not necessarily lead to better performance and why it can generate unwanted consequences. The article recommends ways of improving the link between benchmarking and

  14. Benchmarking to improve the quality of cystic fibrosis care.

    Science.gov (United States)

    Schechter, Michael S

    2012-11-01

    Benchmarking involves the ascertainment of healthcare programs with most favorable outcomes as a means to identify and spread effective strategies for delivery of care. The recent interest in the development of patient registries for patients with cystic fibrosis (CF) has been fueled in part by an interest in using them to facilitate benchmarking. This review summarizes reports of how benchmarking has been operationalized in attempts to improve CF care. Although certain goals of benchmarking can be accomplished with an exclusive focus on registry data analysis, benchmarking programs in Germany and the United States have supplemented these data analyses with exploratory interactions and discussions to better understand successful approaches to care and encourage their spread throughout the care network. Benchmarking allows the discovery and facilitates the spread of effective approaches to care. It provides a pragmatic alternative to traditional research methods such as randomized controlled trials, providing insights into methods that optimize delivery of care and allowing judgments about the relative effectiveness of different therapeutic approaches.

  15. Benchmarking: a method for continuous quality improvement in health.

    Science.gov (United States)

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  16. Benchmarking and performance improvement at Rocky Flats Technology Site

    International Nuclear Information System (INIS)

    Elliott, C.; Doyle, G.; Featherman, W.L.

    1997-03-01

    The Rocky Flats Environmental Technology Site has initiated a major work process improvement campaign using the tools of formalized benchmarking and streamlining. This paper provides insights into some of the process improvement activities performed at Rocky Flats from November 1995 through December 1996. It reviews the background, motivation, methodology, results, and lessons learned from this ongoing effort. The paper also presents important gains realized through process analysis and improvement including significant cost savings, productivity improvements, and an enhanced understanding of site work processes.

  17. Qinshan CANDU NPP outage performance improvement through benchmarking

    International Nuclear Information System (INIS)

    Jiang Fuming

    2005-01-01

    With increasingly fierce competition in the deregulated energy market, the optimization of outage duration has become one of the focal points for nuclear power plant owners around the world, and various ways are being sought to shorten NPP outage duration. Great efforts have been made in the Light Water Reactor (LWR) family with the concepts of benchmarking and evaluation, which have greatly reduced outage duration and improved outage performance; the average capacity factor of LWRs has improved markedly over the last three decades and is now close to 90%. CANDU (Pressurized Heavy Water Reactor) stations, with their unique feature of on-power refueling and of nuclear fuel remaining in the reactor throughout a planned outage, face more stringent safety requirements during planned outages. In addition, this feature produces more variation in the critical path of planned outages at different stations. In order to benchmark against the best practices in CANDU stations, the Third Qinshan Nuclear Power Company (TQNPC) has initiated a benchmarking program among CANDU stations aiming to standardize outage maintenance windows and optimize outage duration. The initial benchmarking has resulted in the optimization of outage duration at Qinshan CANDU NPP and the formulation of its first long-term outage plan. This paper describes the benchmarking work that has proven useful for optimizing outage duration at Qinshan CANDU NPP, and the vision of further optimizing the duration with joint effort from the CANDU community. (authors)

  18. Benchmarking

    OpenAIRE

    Beretta Sergio; Dossi Andrea; Grove Hugh

    2000-01-01

    Due to their particular nature, the benchmarking methodologies tend to exceed the boundaries of management techniques, and to enter the territories of managerial culture. A culture that is also destined to break into the accounting area not only strongly supporting the possibility of fixing targets, and measuring and comparing the performance (an aspect that is already innovative and that is worthy of attention), but also questioning one of the principles (or taboos) of the accounting or...

  19. Benchmarking of business excellence as a determinant of quality improvement

    Directory of Open Access Journals (Sweden)

    Srejović Milan

    2017-01-01

    Full Text Available In order for a process to operate successfully, it is necessary to constantly measure and improve its performance. One way to analyze the current state of a company's performance, and to improve it, is benchmarking. In a market-oriented environment, an enterprise must meet the expectations of different interest groups, or key stakeholders. However, in order to achieve business excellence, it is necessary to fulfill the requirements prescribed by the relevant standards; in this paper, the focus is on the requirements of the ISO 9004:2009 standard. The aim of the paper is to highlight the significance of the benchmarking technique in measuring the business performance of companies. By implementing it, a company can identify its strengths and weaknesses, and thus the process parameters that need to be improved in order to strengthen its competitive position.

  20. Healthcare Analytics: Creating a Prioritized Improvement System with Performance Benchmarking.

    Science.gov (United States)

    Kolker, Eugene; Kolker, Evelyne

    2014-03-01

    The importance of healthcare improvement is difficult to overstate. This article describes our collaborative work with experts at Seattle Children's to create a prioritized improvement system using performance benchmarking. We applied analytics and modeling approaches to compare and assess performance metrics derived from U.S. News and World Report benchmarking data. We then compared a wide range of departmental performance metrics, including patient outcomes, structural and process metrics, survival rates, clinical practices, and subspecialist quality. By applying empirically simulated transformations and imputation methods, we built a predictive model that achieves departments' average rank correlation of 0.98 and average score correlation of 0.99. The results are then translated into prioritized departmental and enterprise-wide improvements, following a data to knowledge to outcomes paradigm. These approaches, which translate data into sustainable outcomes, are essential to solving a wide array of healthcare issues, improving patient care, and reducing costs.
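    The model quality reported above is expressed as a rank correlation between published and predicted department rankings. A minimal pure-Python Spearman computation illustrates the metric; the department scores below are hypothetical, not data from the study.

```python
def ranks(values):
    """1-based average ranks, with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over any run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical department scores: published benchmark vs. model prediction
published = [78.2, 85.1, 66.4, 90.0, 72.5]
predicted = [77.0, 86.3, 65.1, 91.2, 74.0]
print(round(spearman(published, predicted), 2))  # identical orderings -> 1.0
```

    A rank correlation near 0.98, as the study reports, means the model reproduces the benchmark's ordering of departments almost exactly even where the raw scores differ.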

  1. Benchmarking and Performance Improvement at Rocky Flats Environmental Technology Site

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C. [Kaiser-Hill Co., LLC, Golden, CO (United States)], Doyle, D. [USDOE Rocky Flats Office, Golden, CO (United States)], Featherman, W.D. [Project Performance Corp., Sterling, VA (United States)

    1997-12-31

    The Rocky Flats Environmental Technology Site (RFETS) has initiated a major work process improvement campaign using the tools of formalized benchmarking and streamlining. This paper provides insights into some of the process improvement activities performed at Rocky Flats from November 1995 through December 1996. It reviews the background, motivation, methodology, results, and lessons learned from this ongoing effort. The paper also presents important gains realized through process analysis and improvement including significant cost savings, productivity improvements, and an enhanced understanding of site work processes.

  2. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    textabstractBenchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  3. Plant improvements through the use of benchmarking analysis

    International Nuclear Information System (INIS)

    Messmer, J.R.

    1993-01-01

    As utilities approach the turn of the century, customer and shareholder satisfaction is threatened by rising costs. Environmental compliance expenditures, coupled with low load growth and aging plant assets are forcing utilities to operate existing resources in a more efficient and productive manner. PSI Energy set out in the spring of 1992 on a benchmarking mission to compare four major coal fired plants against others of similar size and makeup, with the goal of finding the best operations in the country. Following extensive analysis of the 'Best in Class' operation, detailed goals and objectives were established for each plant in seven critical areas. Three critical processes requiring rework were identified and required an integrated effort from all plants. The Plant Improvement process has already resulted in higher operation productivity, increased emphasis on planning, and lower costs due to effective material management. While every company seeks improvement, goals are often set in an ambiguous manner. Benchmarking aids in setting realistic goals based on others' actual accomplishments. This paper describes how the utility's short term goals will move them toward being a lower cost producer

  4. Benchmarking and audit of breast units improves quality of care.

    Science.gov (United States)

    van Dam, P A; Verkinderen, L; Hauspy, J; Vermeulen, P; Dirix, L; Huizing, M; Altintas, S; Papadimitriou, K; Peeters, M; Tjalma, W

    2013-01-01

    Quality Indicators (QIs) are measures of health care quality that make use of readily available hospital inpatient administrative data. Assessment of quality of care can be performed at different levels: national, regional, hospital or individual. It can be a mandatory or voluntary system. In all cases the development of an adequate database for data extraction, and feedback of the findings, is of paramount importance. In the present paper we performed a Medline search on "QIs and breast cancer" and "benchmarking and breast cancer care", and we have added some data from personal experience. The current data clearly show that the use of QIs for breast cancer care, regular internal and external audit of the performance of breast units, and benchmarking are effective in improving quality of care. Adherence to guidelines improves markedly (particularly regarding adjuvant treatment) and emerging data show that this results in a better outcome. As quality assurance benefits patients, it will be a challenge for the medical and hospital community to develop affordable quality control systems that do not lead to an excessive workload.

  5. Quality Improvement Practices and Trends

    DEFF Research Database (Denmark)

    Dahlgaard, Jens J.; Hartz, Ove; Edgeman, Rick L.

    1998-01-01

    The following article, "Quality Improvement Practices and Trends in Denmark," is the first in a series of papers arranged for and co-authored by Dr. Rick L. Edgeman. Rick is a member of QE's Editorial Board and is on sabbatical from Colorado State University. During the year, Rick and his family … professor, as well as key individuals from various industries. In addition to the above activities, Rick will be working with the European Foundation for Quality Management on their "European Master's Programme in Total Quality Management." That program involves a consortium of European universities. Rick has begun the process of developing a comparable consortium of American universities for the same purpose, an activity cosponsored by the Education Division of the American Society for Quality (ASQ).

  6. Process benchmarking for improvement of environmental restoration activities

    International Nuclear Information System (INIS)

    Celorie, J.A.; Selman, J.R.; Larson, N.B.

    1995-01-01

    A process benchmarking study was initiated by the Office of Environmental Management (EM) of the US Department of Energy (DOE) to analyze and improve the department's environmental assessment and environmental restoration (ER) processes. The purpose of this study was to identify specific differences in the processes and implementation procedures used at comparable remediation sites to determine best practices which had the greatest potential to minimize the cost and time required to conduct remedial investigation/feasibility study (RI/FS) activities. Technical criteria were identified and used to select four DOE, two Department of Defense (DOD), and two Environmental Protection Agency (EPA) restoration sites that exhibited comparable characteristics and regulatory environments. By comparing the process elements and activities executed at the different sites for similar endpoints, best practices were identified for streamlining process elements and minimizing non-value-added activities. Critical measures that influenced process performance were identified and characterized for the sites. This benchmarking study focused on two processes--the internal/external review of documents and the development of the initial evaluation and data collection plan (IEDCP)--since these had a great potential for savings, a high impact on other processes, and a high probability for implementation.

  7. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    Science.gov (United States)

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

    This paper shows the results of performance improvement achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A large number of changes in operational practice, and also in achieved annual savings, can be shown, induced in particular by benchmarking at process level. Investigation of this question produces some general findings for the inclusion of performance improvement in a benchmarking project and for the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking it should be made clear that this outcome depends, on the one hand, on a well-conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  8. Benchmarking as a Global Strategy for Improving Instruction in Higher Education.

    Science.gov (United States)

    Clark, Karen L.

    This paper explores the concept of benchmarking in institutional research, a comparative analysis methodology designed to help colleges and universities increase their educational quality and delivery systems. The primary purpose of benchmarking is to compare an institution to its competitors in order to improve the product (in this case…

  9. Professional Learning: Trends in State Efforts. Benchmarking State Implementation of College- and Career-Readiness Standards

    Science.gov (United States)

    Anderson, Kimberly; Mire, Mary Elizabeth

    2016-01-01

    This report presents a multi-year study of how states are implementing their state college- and career-readiness standards. In this report, the Southern Regional Education Board's (SREB's) Benchmarking State Implementation of College- and Career-Readiness Standards project studied state efforts in 2014-15 and 2015-16 to foster effective…

  10. A New Performance Improvement Model: Adding Benchmarking to the Analysis of Performance Indicator Data.

    Science.gov (United States)

    Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu

    2016-01-01

    A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limits, that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need for a more predictable process prompted controlling variation through an action plan. The action plan was successful, as noted by the shift in the 2014 data compared to the historical average; in addition, the variation was reduced. The model is subject to limitations: for example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
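    The model's stability gate (Step 3) and benchmark comparison (Steps 4-5) can be sketched as follows. This is a simplified illustration only, using plain 3-sigma control limits around the mean; the monthly HAI rates are hypothetical, not the hospital's data.

```python
from statistics import mean, stdev

def stable(points, sigma=3):
    """Step 3 (sketch): flag special-cause variation by checking that
    every point lies within simple 3-sigma control limits."""
    centre, s = mean(points), stdev(points)
    ucl, lcl = centre + sigma * s, centre - sigma * s  # upper/lower control limits
    return all(lcl <= p <= ucl for p in points)

def compare_to_benchmark(points, benchmark):
    """Steps 4-5 (sketch): only a stable process may be benchmarked;
    an unstable one needs an action plan first."""
    if not stable(points):
        return "control variation first (action plan)"
    return "meets benchmark" if mean(points) <= benchmark else "above benchmark"

# Hypothetical monthly HAI rates per 1,000 patient-days
hai_2014 = [2.1, 1.9, 2.3, 2.0, 1.8, 2.2, 2.0, 1.9, 2.1, 2.0, 1.8, 2.2]
print(compare_to_benchmark(hai_2014, benchmark=2.5))
```

    A real implementation would use the appropriate control chart for the data type (e.g. a u-chart for infection rates) rather than raw 3-sigma limits, but the ordering of the steps, stability before benchmarking, is the point of the model.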

  11. BUSINESS PROCESS IMPROVEMENT BY APPLYING BENCHMARKING BASED MODEL

    Directory of Open Access Journals (Sweden)

    Aleksandar Vujovic

    2013-09-01

Full Text Available The choice of topic is motivated by the need to improve business processes in organizations, as well as by the continuous improvement of overall quality, which is under-represented in Montenegrin organizations. The state of Montenegro has recognized the growing importance of small and medium-sized organizations in the development of the national economy. Small and medium-sized organizations are the drivers of future economic growth and development of every country, and special attention must be paid to their competitiveness. One of the main sources of the competitiveness of small and medium-sized organizations is their pursuit of business excellence, which has become the most powerful means of achieving competitive advantage. The paper investigates certified organizations in Montenegro, their contemporary business practices and their commitment to business excellence. These organizations adapt their business to international standards and procedures that represent the future of economic growth and development of modern business. Research results for Montenegrin organizations were compared with those of small and medium-sized organizations from Serbia that won the "Quality Oscar" award for business excellence in the category of small and medium-sized organizations over the previous three years (2009, 2010, and 2011). The idea stems from the necessity for the Montenegrin economy to make a small contribution toward small and medium-sized organizations adjusting their businesses to the new business environment.

  12. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), in Indonesian termed holistic quality management, because benchmarking is a tool for finding ideas or learning from other libraries. Benchmarking is a systematic and continuous process of measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  13. Kaiser Permanente's performance improvement system, Part 1: From benchmarking to executing on strategic priorities.

    Science.gov (United States)

    Schilling, Lisa; Chase, Alide; Kehrli, Sommer; Liu, Amy Y; Stiefel, Matt; Brentari, Ruth

    2010-11-01

By 2004, senior leaders at Kaiser Permanente, the largest not-for-profit health plan in the United States, recognizing variations across service areas in quality, safety, service, and efficiency, began developing a performance improvement (PI) system to realize best-in-class quality performance across all 35 medical centers. MEASURING SYSTEMWIDE PERFORMANCE: In 2005, a Web-based data dashboard, "Big Q," which tracks the performance of each medical center and service area against external benchmarks and internal goals, was created. PLANNING FOR PI AND BENCHMARKING PERFORMANCE: In 2006, Kaiser Permanente national and regional leaders continued planning the PI system, and in 2007, quality, medical group, operations, and information technology leaders benchmarked five high-performing organizations to identify the capabilities required to achieve consistent best-in-class organizational performance. THE PI SYSTEM: The PI system addresses six capabilities: leadership priority setting, a systems approach to improvement, measurement capability, a learning organization, improvement capacity, and a culture of improvement. PI "deep experts" (mentors) consult with national, regional, and local leaders, and more than 500 improvement advisors are trained to manage portfolios of 90-120 day improvement initiatives at medical centers. Between the second quarter of 2008 and the first quarter of 2009, performance across all Kaiser Permanente medical centers improved on the Big Q metrics. The lessons learned in implementing and sustaining PI as it becomes fully integrated into all levels of Kaiser Permanente can be generalized to other health care systems, hospitals, and other health care organizations.

  14. Evidence for acid-precipitation-induced trends in stream chemistry at hydrologic bench-mark stations

    Science.gov (United States)

    Smith, Richard A.; Alexander, Richard B.

    1983-01-01

Ten- to 15-year water-quality records from a network of headwater sampling stations show small declines in stream sulfate concentrations at stations in the northeastern quarter of the Nation and small increases in sulfate at most southeastern and western sites. The regional pattern of stream sulfate trends is similar to that reported for trends in SO2 emissions to the atmosphere during the same period. Trends in the ratio of alkalinity to total major cation concentrations at the stations follow an inverse pattern of small increases in the Northeast and small, but widespread decreases elsewhere. The undeveloped nature of the sampled basins and the magnitude and direction of observed changes in relation to SO2 emissions support the hypothesis that the observed patterns in water quality trends reflect regional changes in the rates of acid deposition.

  15. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence technical efficiency.

  16. Benchmarking as an Instrument for Improvement of Quality Management in Higher Education

    Directory of Open Access Journals (Sweden)

    Narimantas Kazimieras Paliulis

    2015-06-01

    Full Text Available Conditioned by globalisation and constant change, higher education institutions (HEIs are forced to pursue new instruments for quality assurance in higher education. States seem to pursue this aim by attempting to create an efficiently operating system of higher education that satisfies needs of diverse societal groups. Quality dimension is the most important element of efficient and effective higher education. From the perspective of a state, assessment and monitoring of quality are instruments for the management of processes of higher education. The article substantiates these statements using the evolution of the dimension of quality in the European and Lithuanian higher education in the course of the Bologna Process. The article also presents a benchmarking method and discusses its development and application tendencies in business organisations. Also, it looks at possibilities to apply this method in higher education. The main aim of this article is to explore benchmarking as an effective instrument for the improvement of performance quality in HEIs and complement the already implemented quality management systems. Another aim is to suggest this method to national agencies for quality assurance in higher education for monitoring and analysis of qualitative changes on the systematic level. The object of the article is the improvement of performance quality in HEIs. Benchmarking is proposed for the use in higher education on the institutional level as an instrument that complements presently introduced quality management systems in Lithuanian HEIs. This way, it will contribute to the formation of the culture of quality in higher education.

  17. Student Retention Indicators Benchmark Report for Four-Year and Two-Year Institutions, 2013. Noel-Levitz Report on Undergraduate Trends in Enrollment Management. Higher Ed Benchmarks

    Science.gov (United States)

    Noel-Levitz, Inc., 2013

    2013-01-01

    This biennial report from Noel-Levitz assists colleges and universities with raising the bar on student retention and degree completion subgoals by benchmarking key predictive indicators such as term-to-term persistence and the ratio of credit hours completed vs. credit hours attempted. The report is based on a Web-based poll of campus officials…

  18. The hydrologic bench-mark program; a standard to evaluate time-series trends in selected water-quality constituents for streams in Georgia

    Science.gov (United States)

    Buell, G.R.; Grams, S.C.

    1985-01-01

Significant temporal trends in monthly pH, specific conductance, total alkalinity, hardness, total nitrite-plus-nitrate nitrogen, and total phosphorus measurements at five stream sites in Georgia were identified using a rank correlation technique, the seasonal Kendall test and slope estimator. These sites include a U.S. Geological Survey Hydrologic Bench-Mark site, Falling Creek near Juliette, and four periodic water-quality monitoring sites. Comparison of raw data trends with streamflow-residual trends and, where applicable, with chemical-discharge trends (instantaneous fluxes) shows that some of these trends are responses to factors other than changing streamflow. Percentages of forested, agricultural, and urban cover within each basin did not change much during the periods of water-quality record, and therefore these non-flow-related trends are not obviously related to changes in land cover or land use. Flow-residual water-quality trends at the Hydrologic Bench-Mark site and at the Chattooga River site probably indicate basin responses to changes in the chemical quality of atmospheric deposition. These two basins are predominantly forested and have received little recent human use. Observed trends at the other three sites probably indicate basin responses to various land uses and water uses associated with agricultural and urban land or to changes in specific uses. (USGS)
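The seasonal Kendall test used at these stations can be sketched briefly: compute the Mann-Kendall S statistic within each season (e.g., calendar month) and pool S and its variance across seasons. A simplified version that ignores ties and serial correlation, run on synthetic monthly data (not the Georgia records):

```python
from itertools import combinations
from math import sqrt

def sign(x):
    return (x > 0) - (x < 0)

def seasonal_kendall(series, n_seasons=12):
    """Seasonal Kendall test: pool Mann-Kendall S and variance over seasons.

    series: values in time order, one per season per cycle (None = missing).
    Returns (S, Z); Z < 0 suggests a downward trend. Simplified: no
    correction for ties or serial correlation.
    """
    S, var = 0, 0.0
    for season in range(n_seasons):
        vals = [v for v in series[season::n_seasons] if v is not None]
        n = len(vals)
        S += sum(sign(b - a) for a, b in combinations(vals, 2))
        var += n * (n - 1) * (2 * n + 5) / 18
    # Continuity correction, as in the ordinary Mann-Kendall test.
    if S > 0:
        z = (S - 1) / sqrt(var)
    elif S < 0:
        z = (S + 1) / sqrt(var)
    else:
        z = 0.0
    return S, z

# 5 years of monthly concentrations with a slight downward drift plus seasonality.
data = [10 - 0.1 * year + 0.5 * (month % 3) for year in range(5) for month in range(12)]
S, z = seasonal_kendall(data)
print(S, round(z, 2))  # → -120 -8.41 (downward trend)
```

The associated Sen slope estimator (median of pairwise slopes within seasons) gives the trend magnitude that the test only signs.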

  19. Reliable B cell epitope predictions: impacts of method development and improved benchmarking

    DEFF Research Database (Denmark)

    Kringelum, Jens Vindahl; Lundegaard, Claus; Lund, Ole

    2012-01-01

biomedical applications such as rational vaccine design, development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource intensive, making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping...... evaluation data set improved from 0.712 to 0.727. Our results thus demonstrate that, given proper benchmark definitions, B-cell epitope prediction methods achieve highly significant predictive performances, suggesting these tools to be a powerful asset in rational epitope discovery. The updated version......

  20. How to Use Benchmark and Cross-section Studies to Improve Data Libraries and Models

    Science.gov (United States)

    Wagner, V.; Suchopár, M.; Vrzalová, J.; Chudoba, P.; Svoboda, O.; Tichý, P.; Krása, A.; Majerle, M.; Kugler, A.; Adam, J.; Baldin, A.; Furman, W.; Kadykov, M.; Solnyshkin, A.; Tsoupko-Sitnikov, S.; Tyutyunikov, S.; Vladimirovna, N.; Závorka, L.

    2016-06-01

Improvements of the Monte Carlo transport codes and cross-section libraries are very important steps towards the use of accelerator-driven transmutation systems. We have conducted many benchmark experiments with different set-ups consisting of lead, natural uranium and moderator irradiated by relativistic protons and deuterons within the framework of the collaboration “Energy and Transmutation of Radioactive Waste”. Unfortunately, knowledge of the total or partial cross-sections of important reactions is insufficient. For this reason we have started extensive studies of different reaction cross-sections. We measure cross-sections of important neutron reactions by means of the quasi-monoenergetic neutron sources based on the cyclotrons at the Nuclear Physics Institute in Řež and at The Svedberg Laboratory in Uppsala. Measurement of partial cross-sections of relativistic deuteron reactions was the second direction of our studies. The new results obtained during recent years will be shown. Possible use of these data for improvement of libraries, models and benchmark studies will be discussed.

  1. An improved benchmark model for the Big Ten critical assembly - 021

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    2010-01-01

    A new benchmark specification is developed for the BIG TEN uranium critical assembly. The assembly has a fast spectrum, and its core contains approximately 10 wt.% enriched uranium. Detailed specifications for the benchmark are provided, and results from the MCNP5 Monte Carlo code using a variety of nuclear-data libraries are given for this benchmark and two others. (authors)

  2. Investigation on the improvement of genetic algorithm for PWR loading pattern search and its benchmark verification

    International Nuclear Information System (INIS)

    Li Qianqian; Jiang Xiaofeng; Zhang Shaohong

    2009-01-01

In this study, the age technique and the concepts of relativeness degree and worth function are exploited to improve the performance of the genetic algorithm (GA) for PWR loading pattern search. Among them, the age technique makes the algorithm capable of learning from previous search 'experience' and guides it to search better in the vicinity of a local optimum; the introduction of the relativeness degree checks the relativeness of two loading patterns before performing crossover between them, which can significantly reduce the possibility of premature convergence of the algorithm; and the application of the worth function makes the algorithm capable of generating new loading patterns based on statistics of the common features of evaluated good loading patterns. Numerical verification against a loading pattern search benchmark problem of a two-loop reactor demonstrates that the adoption of these techniques significantly enhances the efficiency of the genetic algorithm while improving the quality of the final solution as well. (authors)
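The relativeness-degree idea, checking how alike two loading patterns are before crossing them, can be illustrated as follows. The similarity metric, thresholds, and pattern encoding here are hypothetical, not those of the paper:

```python
import random

def relativeness(lp_a, lp_b):
    # Fraction of core positions holding the same fuel-assembly type
    # (an illustrative metric, not the paper's definition).
    return sum(a == b for a, b in zip(lp_a, lp_b)) / len(lp_a)

def gated_crossover(lp_a, lp_b, min_rel=0.3, max_rel=0.9, rng=random):
    """One-point crossover performed only when the parents are neither
    near-identical (little to gain) nor unrelated (offspring likely poor)."""
    r = relativeness(lp_a, lp_b)
    if not (min_rel <= r <= max_rel):
        return None  # signal the caller to pick a different mate
    cut = rng.randrange(1, len(lp_a))
    return lp_a[:cut] + lp_b[cut:]

parent1 = [0, 1, 2, 1, 0, 2, 1, 0]
parent2 = [0, 1, 2, 2, 1, 2, 0, 0]   # relativeness 0.625: eligible
child = gated_crossover(parent1, parent2, rng=random.Random(42))
print(child)
```

Rejecting near-identical parents is what suppresses premature convergence: crossover between clones only reproduces the clone.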

  3. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

Benchmarking is a structured continuous collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

In two articles an overview is given of the benchmarking activities in the Dutch industry and energy sector. In benchmarking, the operational processes of competing businesses are compared in order to improve one's own performance. Benchmark covenants for energy efficiency between the Dutch government and industrial sectors have contributed to growth in the number of benchmark surveys in the energy-intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies.

  5. Effect of benchmarking projects on outcomes of coronary artery bypass graft surgery: challenges and prospects regarding the quality improvement initiative.

    Science.gov (United States)

    Miyata, Hiroaki; Motomura, Noboru; Murakami, Arata; Takamoto, Shinichi

    2012-06-01

    The Japan Cardiovascular Surgery Database (JCVSD) was established in 2000 and initiated a benchmarking project to improve the quality of cardiovascular surgery. Although the importance of quality improvement initiatives has been emphasized, few studies have reported the effects on outcomes. To examine the time-trend effects in initial JCVSD participants (n = 44), we identified 8224 isolated coronary artery bypass graft (CABG) procedures performed between 2004 and 2007. The impact of surgery year was examined using a multiple logistic regression model that set previously identified clinical risk factors and surgery year as fixed effects. To examine the difference in outcomes between initial participants (n = 44) and halfway participants (n = 55), we identified 3882 isolated CABG procedures performed in 2007. The differences between the 2 hospital groups were examined using a multiple logistic regression model that set clinical risk factors, hospital procedure volume, and hospital groups as fixed effects. For operative mortality, the odds ratio of surgery year was 0.88 (P = .083). Observed/expected (OE) ratios for operative mortality were 0.71 in 2004, 0.73 in 2005, 0.63 in 2006, and 0.54 in 2007. As for composite mortality and major morbidities (reoperation, stroke, dialysis, infection, and prolonged ventilation), odds ratio of surgery year was 0.97 (P = .361). OE ratios for composite mortality and morbidities were 1.01 in 2004, 1.04 in 2005, 1.04 in 2006, and 0.94 in 2007. Compared with halfway participants, initial participants had a significantly lower rate of operative mortality (odds ratio = 0.527; P = .008) and composite mortality and major morbidities (odds ratio 0.820; P = .047). This study demonstrated that a quality improvement initiative for cardiovascular surgery has positive impacts on risk-adjusted outcomes. Although the primary target of benchmarking was 30-day mortality in Japan, major morbidities were less affected by those activities. Copyright

  6. [Does implementation of benchmarking in quality circles improve the quality of care of patients with asthma and reduce drug interaction?].

    Science.gov (United States)

    Kaufmann-Kolle, Petra; Szecsenyi, Joachim; Broge, Björn; Haefeli, Walter Emil; Schneider, Antonius

    2011-01-01

    The purpose of this cluster-randomised controlled trial was to evaluate the efficacy of quality circles (QCs) working either with general data-based feedback or with an open benchmark within the field of asthma care and drug-drug interactions. Twelve QCs, involving 96 general practitioners from 85 practices, were randomised. Six QCs worked with traditional anonymous feedback and six with an open benchmark. Two QC meetings supported with feedback reports were held covering the topics "drug-drug interactions" and "asthma"; in both cases discussions were guided by a trained moderator. Outcome measures included health-related quality of life and patient satisfaction with treatment, asthma severity and number of potentially inappropriate drug combinations as well as the general practitioners' satisfaction in relation to the performance of the QC. A significant improvement in the treatment of asthma was observed in both trial arms. However, there was only a slight improvement regarding inappropriate drug combinations. There were no relevant differences between the group with open benchmark (B-QC) and traditional quality circles (T-QC). The physicians' satisfaction with the QC performance was significantly higher in the T-QCs. General practitioners seem to take a critical perspective about open benchmarking in quality circles. Caution should be used when implementing benchmarking in a quality circle as it did not improve healthcare when compared to the traditional procedure with anonymised comparisons. Copyright © 2011. Published by Elsevier GmbH.

  7. [An approach to care indicators benchmarking. Learning to improve patient safety].

    Science.gov (United States)

    de Andrés Gimeno, B; Salazar de la Guerra, R M; Ferrer Arnedo, C; Revuelta Zamorano, M; Ayuso Murillo, D; González Soria, J

    2014-01-01

Improvements in clinical safety can be achieved by promoting a safety culture, professional training, and learning through benchmarking. The aim of this study was to identify areas for improvement after analysing the safety indicators in two public hospitals in the north-west Madrid region. Descriptive study performed during 2011 in Hospital Universitario Puerta de Hierro Majadahonda (HUPHM) and Hospital de Guadarrama (HG). The variables under study were 40 indicators of nursing care related to patient safety. Nineteen of them were defined in the SENECA project as care quality standards for improving patient safety in hospitals. The data sources were clinical histories, Madrid Health Service assessment reports, care procedures, and direct observation. Of the 40 indicators, 22 were structure indicators (procedures), on which HUPHM scored 86% and HG 95%; 14 were process indicators (training and protocol compliance), with similar results in both hospitals apart from care continuity reports and training in hand hygiene; and 4 were outcome indicators (pressure ulcers, falls and pain), which showed differing results. The analysis of the indicators allowed the following actions to be taken: to identify improvements to be made in each hospital, to develop joint safety recommendations in nursing care protocols for the prevention and treatment of chronic wounds, to establish systematic pain assessments, and to prepare continuity-of-care reports on all patients transferred from HUPHM to HG. Copyright © 2013 SECA. Published by Elsevier Espana. All rights reserved.

  8. USING BENCHMARKING TO IMPROVE THE FINANCIAL AND SOCIAL SUSTAINABILITY OF COMMERCIAL GOAT MEAT, CASHMERE AND MOHAIR FARMS IN AUSTRALIA

    Directory of Open Access Journals (Sweden)

    Bruce Allan McGregor

    2009-02-01

    Full Text Available Production and financial benchmarking was undertaken with commercially motivated mohair, cashmere and goat meat farmers in Australia. There were large differences in animal and fleece production and financial returns between the best and worst performing farms. Farmers and industry groups reported that the process and results were helpful and resulted in them changing management practices. Benchmarking demonstrated that there is substantial scope to increase productivity and profitability through improved genetic selection and improved management of pastures, breeding flocks and in kid survival and growth.

  9. Journal Benchmarking for Strategic Publication Management and for Improving Journal Positioning in the World Ranking Systems

    Science.gov (United States)

    Moskovkin, Vladimir M.; Bocharova, Emilia A.; Balashova, Oksana V.

    2014-01-01

Purpose: The purpose of this paper is to introduce and develop the methodology of journal benchmarking. Design/Methodology/Approach: The journal benchmarking method is understood to be an analytic procedure of continuous monitoring and comparing of the advance of specific journal(s) against that of competing journals in the same subject area,…

  10. Benchmark Report on Key Outage Attributes: An Analysis of Outage Improvement Opportunities and Priorities

    Energy Technology Data Exchange (ETDEWEB)

    Germain, Shawn St. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Farris, Ronald [Idaho National Laboratory (INL), Idaho Falls, ID (United States)

    2014-09-01

The Advanced Outage Control Center (AOCC) is a multi-year pilot project targeted at Nuclear Power Plant (NPP) outage improvement. The purpose of this pilot project is to improve the management of NPP outages through the development of an AOCC that is specifically designed to maximize the usefulness of communication and collaboration technologies for outage coordination and problem resolution activities. This report documents the results of a benchmarking effort to evaluate the transferability of technologies demonstrated at Idaho National Laboratory and the primary pilot project partner, Palo Verde Nuclear Generating Station. The initial assumption for this pilot project was that NPPs generally do not take advantage of advanced technology to support outage management activities. Several researchers involved in this pilot project have commercial NPP experience and believed that very little technology has been applied to outage communication and collaboration. To verify that the technology options researched and demonstrated through this pilot project would in fact have broad application for the US commercial nuclear fleet, and to look for additional outage management best practices, LWRS program researchers visited several additional nuclear facilities.

  11. Republic of India : Service Level Benchmarking, Citizen Voice and Performance Improvement Strategies in Urban Water Supply and Sanitation

    OpenAIRE

    World Bank Group

    2016-01-01

    This synthesis report details the process, outputs and intermediate outcomes of the Water and Sanitation Program - World Bank (WSP) Technical Assistance (TA) to Service Level Benchmarking, Citizen Voice and Performance Improvement Strategies in Urban Water Supply and Sanitation (UWSS) in India. This technical assistance (TA) sought to strengthen accountability for service outcomes in urban...

  12. Benchmarking Is Associated With Improved Quality of Care in Type 2 Diabetes

    Science.gov (United States)

    Hermans, Michel P.; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-01-01

    OBJECTIVE To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. RESEARCH DESIGN AND METHODS Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. RESULTS Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. Primary end point of HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% patients met the SBP target (P benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile. PMID:23846810

  13. Improving the accuracy of self-assessment of practical clinical skills using video feedback--the importance of including benchmarks.

    Science.gov (United States)

    Hawkins, S C; Osborne, A; Schofield, S J; Pournaras, D J; Chester, J F

    2012-01-01

    Isolated video recording has not been demonstrated to improve self-assessment accuracy. This study examines if the inclusion of a defined standard benchmark performance in association with video feedback of a student's own performance improves the accuracy of student self-assessment of clinical skills. Final year medical students were video recorded performing a standardised suturing task in a simulated environment. After the exercise, the students self-assessed their performance using global rating scales (GRSs). An identical self-assessment process was repeated following video review of their performance. Students were then shown a video-recorded 'benchmark performance', which was specifically developed for the study. This demonstrated the competency levels required to score full marks (30 points). A further self-assessment task was then completed. Students' scores were correlated against expert assessor scores. A total of 31 final year medical students participated. Student self-assessment scores before video feedback demonstrated moderate positive correlation with expert assessor scores (r = 0.48, p benchmark performance demonstration, self-assessment scores demonstrated a very strong positive correlation with expert scores (r = 0.83, p benchmark performance in combination with video feedback may significantly improve the accuracy of students' self-assessments.
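The agreement between self-assessment and expert scores reported above is a Pearson correlation. A minimal sketch of the computation, with made-up global rating scale (GRS) scores rather than the study's data:

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson product-moment correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical GRS scores (out of 30) for five students: self vs. expert.
self_scores = [22, 18, 25, 20, 27]
expert_scores = [20, 17, 26, 18, 28]
print(round(pearson(self_scores, expert_scores), 2))  # → 0.98
```

A rise in r after benchmark review, as in the study, means the students' self-ratings track the expert ratings more closely, not that the absolute scores agree.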

  14. Validation of neutron-transport calculations in benchmark facilities for improved damage-fluence predictions

    International Nuclear Information System (INIS)

    Williams, M.L.; Stallmann, F.W.; Maerker, R.E.; Kam, F.B.K.

    1983-01-01

    An accurate determination of damage fluence accumulated by reactor pressure vessels (RPV) as a function of time is essential in order to evaluate the vessel integrity for both pressurized thermal shock (PTS) transients and end-of-life considerations. The desired accuracy for neutron exposure parameters such as displacements per atom or fluence (E > 1 MeV) is of the order of 20 to 30%. However, these types of accuracies can only be obtained realistically by validation of nuclear data and calculational methods in benchmark facilities. The purposes of this paper are to review the needs and requirements for benchmark experiments, to discuss the status of current benchmark experiments, to summarize results and conclusions obtained so far, and to suggest areas where further benchmarking is needed

  15. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  16. A Multicenter Performance Improvement Program Uses Rural Trauma Filters for Benchmarking: An Evaluation of the Findings.

    Science.gov (United States)

    Coniglio, Ray; McGraw, Constance; Archuleta, Mike; Bentler, Heather; Keiter, Leigh; Ramstetter, Julie; Reis, Elizabeth; Romans, Cristi; Schell, Rachael; Ross, Kelli; Smith, Rachel; Townsend, Jodi; Orlando, Alessandro; Mains, Charles W

    Colorado requires Level III and IV trauma centers to conduct a formal performance improvement program (PI), but provides limited support for program development. Trauma program managers and coordinators in rural facilities rarely have experience in the development or management of a PI program. As a result, rural trauma centers often face challenges in evaluating trauma outcomes adequately. Through a multidisciplinary outreach program, our Trauma System worked with a group of rural trauma centers to identify and define seven specific PI filters based on key program elements of rural trauma centers. This retrospective observational project sought to develop and examine these PI filters so as to enhance the review and evaluation of patient care. The project included 924 trauma patients from eight Level IV and one Level III trauma centers. Seven PI filters were retrospectively collected and analyzed by quarter in 2016: prehospital managed airway for patients with a Glasgow Coma Scale (GCS) score of less than 9; adherence to trauma team activation criteria; evidence of physician team leader presence within 20 min of activation; patient with a GCS score less than 9 in the emergency department (ED): intubated in less than 20 min; ED length of stay (LOS) less than 4 hr from patient arrival to transfer; adherence to admission criteria; documentation of GCS on arrival, discharge, or with change of status. There was a significantly increasing compliance trend toward appropriate documentation of GCS (p trend used to develop compliance thresholds, to identify areas for improvement, and create corrective action plans as necessary.

  17. THE APPLICATION OF DATA ENVELOPMENT ANALYSIS METHODOLOGY TO IMPROVE THE BENCHMARKING PROCESS IN THE EFQM BUSINESS MODEL (CASE STUDY: AUTOMOTIVE INDUSTRY OF IRAN)

    Directory of Open Access Journals (Sweden)

    K. Shahroudi

    2009-10-01

    This paper reports survey and case study research outcomes on the application of Data Envelopment Analysis (DEA) to the ranking method of the European Foundation for Quality Management (EFQM) Business Excellence Model in Iran's automotive industry, and on improving the benchmarking process after assessment. Following the global trend, the Iranian industry leaders have introduced the EFQM practice to their supply chain over the last four years in order to improve the competitiveness of the supply base. A question that arises is whether the EFQM model can be combined with a mathematical model such as DEA in order to generate a new ranking method and develop or facilitate the benchmarking process. The model developed in this paper is simple; however, it provides some new and interesting insights. The paper assesses the usefulness and capability of the DEA technique to derive a new scoring system and compares it with the classical ranking method of the EFQM business model. We used this method to identify meaningful exemplar companies for each criterion of the EFQM model, and then designed a road map based on realistic targets in each criterion that have already been achieved by exemplar companies. The research indicates that the DEA approach is a reliable tool for analyzing the latent knowledge in scores generated by self-assessments. The Wilcoxon rank-sum test is used to compare the two sets of scores, and hypothesis testing reveals a meaningful relation between the EFQM and DEA ranking methods. Finally, we drew a road map based on the benchmarking concept using the research results.
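
    The Wilcoxon rank-sum comparison mentioned above can be sketched in plain Python; the company scores below are invented, and only the rank-sum statistic itself is computed (a full test would also derive a p-value, e.g. via scipy.stats.ranksums).

```python
def rank_sum(sample_a, sample_b):
    """Wilcoxon rank-sum statistic W for sample_a: pool both samples,
    rank them (average ranks for ties), and sum sample_a's ranks."""
    pooled = sorted(sample_a + sample_b)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        avg_rank = (i + 1 + j) / 2  # ranks are 1-based; ties share the average
        ranks.setdefault(pooled[i], avg_rank)
        i = j
    return sum(ranks[x] for x in sample_a)

# Invented EFQM-style scores vs. DEA-derived scores for four companies.
efqm_scores = [420, 455, 470, 510]
dea_scores = [430, 460, 480, 520]
print(rank_sum(efqm_scores, dea_scores))
```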

  18. Implications of the Trauma Quality Improvement Project inclusion of nonsurvivable injuries in performance benchmarking.

    Science.gov (United States)

    Heaney, Jiselle Bock; Schroll, Rebecca; Turney, Jennifer; Stuke, Lance; Marr, Alan B; Greiffenstein, Patrick; Robledo, Rosemarie; Theriot, Amanda; Duchesne, Juan; Hunt, John

    2017-10-01

    The Trauma Quality Improvement Project (TQIP) uses an injury prediction model for performance benchmarking. We hypothesized that at a high-volume Level I penetrating trauma center, performance outcomes would be biased by the inclusion of patients with nonsurvivable injuries. A retrospective chart review was conducted for all patients included in the institutional TQIP analysis from 2013 to 2014 with a length of stay (LOS) of less than 1 day to determine the survivability of their injuries. Observed (O)/expected (E) mortality ratios were calculated before and after exclusion of these patients. Completeness of the data reported to TQIP was also examined. Eight hundred twenty-six patients were reported to TQIP, including 119 deaths. Nonsurvivable injuries accounted for 90.9% of the deaths in patients with an LOS of 1 day or less. The O/E mortality ratio for all patients was 1.061; after excluding all patients with an LOS of less than 1 day who were found to have nonsurvivable injuries, the O/E ratio was 0.895. Data for key variables were missing in 63.3% of patients who died in the emergency department, 50% of those taken to the operating room, and 0% of those admitted to the intensive care unit. Charts for patients who died with an LOS of less than 1 day were significantly more likely than those of patients who lived to be missing crucial data. This study shows that TQIP inclusion of patients with nonsurvivable injuries biases outcomes at an urban trauma center. Missing data result in the imputation of values, increasing inaccuracy. Further investigation is needed to determine whether these findings exist at other institutions, and whether the current TQIP model needs revision to accurately identify and exclude patients with nonsurvivable injuries. Prognostic and epidemiological study, level III.
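
    The observed/expected mortality ratio at the center of this abstract is simple to compute; the sketch below uses invented expected-death totals chosen only to illustrate the arithmetic, not the study's actual TQIP model inputs.

```python
def oe_ratio(observed_deaths, expected_deaths):
    """Observed/expected mortality ratio: values above 1 suggest
    worse-than-predicted outcomes, values below 1 better-than-predicted."""
    return observed_deaths / expected_deaths

# Expected-death denominators are invented, for illustration only.
all_patients = oe_ratio(119, 112.2)     # all reported deaths included
after_exclusion = oe_ratio(11, 12.3)    # after excluding nonsurvivable injuries

print(round(all_patients, 3), round(after_exclusion, 3))
```

    The direction of the shift (a ratio above 1 dropping below 1 once nonsurvivable injuries are excluded) is the bias the authors describe.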

  19. Use of KPIs to Regulate Co-operation and to Improve Intercompany Benchmarking in the Construction Industry

    DEFF Research Database (Denmark)

    Bohnstedt, Kristian Ditlev

    2013-01-01

    With the aim of investigating whether it can fairly be assumed that the prequalification and selection of co-operators via Key Performance Indicators (KPIs) provide increased odds in favour of success, a study of contractors was carried out using benchmark results from the Benchmark Centre for the Danish Construction Sector and financial statements. Through a comparative analysis based on detected correlations between economy and KPIs, it is explained how these factors are linked. The research showed that the use of KPIs has motivated companies to improve their competitive ability, and thus their ability to become prequalified. It also showed that the use of KPIs (e.g. quality, timeliness, economy, customer satisfaction and accident rates) for selection and prequalification improves the probability of success in the building process, as highly rated KPIs have a positive linear relationship with success.

  20. Benchmarking and improving point cloud data management in MonetDB

    NARCIS (Netherlands)

    Martinez-Rubi, O.; Van Oosterom, P.J.M.; Goncalves, R.; Tijssen, T.P.M.; Ivanova, M.; Kersten, M.L.; Alvanaki, F.

    2015-01-01

    The popularity, availability and sizes of point cloud data sets are increasing, thus raising interesting data management and processing challenges. Various software solutions are available for the management of point cloud data. A benchmark for point cloud data management systems was defined and executed.

  2. A ChIP-Seq benchmark shows that sequence conservation mainly improves detection of strong transcription factor binding sites.

    Directory of Open Access Journals (Sweden)

    Tony Håndstad

    BACKGROUND: Transcription factors are important controllers of gene expression, and mapping transcription factor binding sites (TFBS) is key to inferring transcription factor regulatory networks. Several methods for predicting TFBS exist, but there are no standard genome-wide datasets on which to assess the performance of these prediction methods. Also, it is believed that information about sequence conservation across different genomes can generally improve the accuracy of motif-based predictors, but it is not clear under what circumstances the use of conservation is most beneficial. RESULTS: Here we use published ChIP-seq data and an improved peak detection method to create comprehensive benchmark datasets for prediction methods which use known descriptors or binding motifs to detect TFBS in genomic sequences. We use this benchmark to assess the performance of five different prediction methods and find that the methods that use information about sequence conservation generally perform better than simpler motif-scanning methods. The difference is greater on high-affinity peaks and when using short and information-poor motifs. However, if the motifs are specific and information-rich, we find that simple motif-scanning methods can perform better than conservation-based methods. CONCLUSIONS: Our benchmark provides a comprehensive test that can be used to rank the relative performance of transcription factor binding site prediction methods. Moreover, our results show that, contrary to previous reports, sequence conservation is better suited for predicting strong than weak transcription factor binding sites.

  3. Spot Markets Indices as Benchmarks of Formation of Future Price Trends in the Power Exchanges of Eastern Europe

    Directory of Open Access Journals (Sweden)

    Polikevych Nataliya I.

    2016-01-01

    The article presents a theoretical generalization of the use of indices for electric power at the European spot exchanges and elaborates proposals for establishing a similar spot index for the Ukrainian power exchange. Sixteen indices that are published daily by the power exchanges BSP Regional Energy Exchange, Power Exchange Central Europe, Polish Power Exchange and Opcom have been analyzed. It has been indicated that these indices are used for electricity price forecasting and for monitoring the situation in the power market. The article examines the way spot indices are calculated by power exchanges, based on the arithmetic average of «day-ahead» market prices, and substantiates the imperfection of this method of calculating price index values. The key characteristics of a future price index for the Ukrainian spot market, serving as a benchmark for the introduction of futures contracts for electricity, have been identified.
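
    The calculation criticized above, a plain arithmetic mean of day-ahead hourly prices, can be contrasted with a volume-weighted alternative that weights each hour by how much electricity actually traded; the hourly prices (EUR/MWh) and volumes (MWh) below are invented.

```python
# Invented day-ahead hourly clearing prices and traded volumes.
hourly_prices = [38.0, 36.0, 42.0, 56.0]    # EUR/MWh
hourly_volumes = [900.0, 1100.0, 1500.0, 600.0]  # MWh

# The simple arithmetic index the article finds imperfect.
arithmetic_index = sum(hourly_prices) / len(hourly_prices)

# A volume-weighted average price (VWAP), one common alternative.
vwap_index = (sum(p * v for p, v in zip(hourly_prices, hourly_volumes))
              / sum(hourly_volumes))

print(round(arithmetic_index, 2), round(vwap_index, 2))
```

    The two indices diverge whenever high-price hours carry little volume, which is one reason an unweighted mean can misrepresent the market.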

  4. Development of a new energy benchmark for improving the operational rating system of office buildings using various data-mining techniques

    International Nuclear Information System (INIS)

    Park, Hyo Seon; Lee, Minhyun; Kang, Hyuna; Hong, Taehoon; Jeong, Jaewook

    2016-01-01

    Highlights: • This study developed a new energy benchmark for office buildings. • Correlation analysis, decision tree, and analysis of variance were used. • The data from 1072 office buildings in South Korea were used. • As a result, six types of energy benchmarks for office buildings were developed. • The operational rating system can be improved by using the new energy benchmark. - Abstract: As improving energy efficiency in buildings has become a global issue today, many countries have adopted the operational rating system to evaluate the energy performance of a building based on the actual energy consumption. A rational and reasonable energy benchmark can be used in the operational rating system to evaluate the energy performance of a building accurately and effectively. This study aims to develop a new energy benchmark for improving the operational rating system of office buildings. Toward this end, this study used various data-mining techniques such as correlation analysis, decision tree (DT) analysis, and analysis of variance (ANOVA). Based on data from 1072 office buildings in South Korea, this study was conducted in three steps: (i) Step 1: establishment of the database; (ii) Step 2: development of the new energy benchmark; and (iii) Step 3: application of the new energy benchmark for improving the operational rating system. As a result, six types of energy benchmarks for office buildings were developed using DT analysis based on the gross floor area (GFA) and the building use ratio (BUR) of offices, and these new energy benchmarks were validated using ANOVA. To ensure the effectiveness of the new energy benchmark, it was applied to three operational rating systems for comparison: (i) the baseline system (the same energy benchmark is used for all office buildings); (ii) the conventional system (different energy benchmarks are used depending on the GFA, currently used in South Korea); and (iii) the proposed system (different energy benchmarks are used depending on the GFA and the BUR of offices).
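
    A hedged sketch of how such a decision-tree-derived benchmark table could be applied in practice: each office building is routed to a benchmark group by gross floor area (GFA) and building use ratio (BUR). The split points and benchmark values below are invented, not those derived in the study.

```python
def assign_benchmark(gfa_m2, bur):
    """Route a building to a benchmark group using invented DT-style splits
    on gross floor area (m2) and building use ratio (0-1)."""
    if gfa_m2 < 10_000:
        return "small-low" if bur < 0.5 else "small-high"
    return "large-low" if bur < 0.5 else "large-high"

# Invented benchmark energy-use intensities per group (kWh/m2/yr).
benchmark_kwh_per_m2 = {"small-low": 120, "small-high": 150,
                        "large-low": 110, "large-high": 140}

group = assign_benchmark(gfa_m2=8_500, bur=0.7)
print(group, benchmark_kwh_per_m2[group])
```

    An operational rating would then compare each building's measured intensity against its group's benchmark rather than against a single global value.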

  5. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data.
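
    The first performance metric listed above, the centered root-mean-square error, can be sketched as follows; both series are invented, and the series means are removed so that only errors in variability and breaks are scored, not a constant offset.

```python
def centered_rmse(estimate, truth):
    """Centered RMSE: subtract each series' mean, then take the RMS of the
    anomaly differences between the homogenized estimate and the truth."""
    mean_e = sum(estimate) / len(estimate)
    mean_t = sum(truth) / len(truth)
    sq_diffs = [((e - mean_e) - (t - mean_t)) ** 2
                for e, t in zip(estimate, truth)]
    return (sum(sq_diffs) / len(sq_diffs)) ** 0.5

# Invented monthly temperature anomalies: homogenized output vs. true series.
homogenized = [10.2, 10.8, 11.1, 10.5]
truth = [10.0, 11.0, 11.0, 10.6]
print(round(centered_rmse(homogenized, truth), 3))
```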

  6. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, scientific literature and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  7. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  9. BENCHMARKING AS A TOOL FOR IMPROVING THE MANAGEMENT OF ORGANIZATIONS OF SECONDARY VOCATIONAL EDUCATION IN THE CONTEXT OF EXECUTION OF THE STATE PROGRAM "DEVELOPMENT OF EDUCATION IN MOSCOW"

    OpenAIRE

    Sosnitskiy K.M.

    2015-01-01

    This article discusses some features of benchmarking and the possibility of its practical use in the system of professional education during the implementation of the state program "Stolichnoe Obrazovanie" ("Capital Education"), in order to improve the efficiency of educational organizations.

  10. Using benchmarking techniques and the 2011 maternity practices infant nutrition and care (mPINC) survey to improve performance among peer groups across the United States.

    Science.gov (United States)

    Edwards, Roger A; Dee, Deborah; Umer, Amna; Perrine, Cria G; Shealy, Katherine R; Grummer-Strawn, Laurence M

    2014-02-01

    A substantial proportion of US maternity care facilities engage in practices that are not evidence-based and that interfere with breastfeeding. The CDC Survey of Maternity Practices in Infant Nutrition and Care (mPINC) showed significant variation in maternity practices among US states. The purpose of this article is to use benchmarking techniques to identify states within relevant peer groups that were top performers on mPINC survey indicators related to breastfeeding support. We used 11 indicators of breastfeeding-related maternity care from the 2011 mPINC survey and benchmarking techniques to organize and compare hospital-based maternity practices across the 50 states and Washington, DC. We created peer categories for benchmarking first by region (grouping states by West, Midwest, South, and Northeast) and then by size (grouping states by the number of maternity facilities and dividing each region into approximately equal halves based on the number of facilities). Thirty-four states had scores high enough to serve as benchmarks, and 32 states had scores low enough to reflect the lowest score gap from the benchmark on at least 1 indicator. No state served as the benchmark on more than 5 indicators and no state was furthest from the benchmark on more than 7 indicators. The small peer group benchmarks in the South, West, and Midwest were better than the large peer group benchmarks on 91%, 82%, and 36% of the indicators, respectively. In the West large, the Midwest large, the Midwest small, and the South large peer groups, 4-6 benchmarks showed that less than 50% of hospitals have ideal practice in all states. The evaluation presents benchmarks for peer group state comparisons that provide potential and feasible targets for improvement.
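
    The peer-group benchmarking logic described above reduces to a simple computation per indicator: the top-scoring state in the peer group serves as the benchmark, and every other state's gap from it is reported. The state names and indicator scores below are invented.

```python
# Invented mPINC-style indicator scores (percent of hospitals with ideal
# practice) for one indicator within one peer group.
peer_group = {"StateA": 78.0, "StateB": 91.0, "StateC": 84.0}

# The benchmark is the best performer in the peer group.
benchmark_state = max(peer_group, key=peer_group.get)
benchmark_score = peer_group[benchmark_state]

# Each state's gap from the benchmark is its improvement target.
gaps = {state: round(benchmark_score - score, 1)
        for state, score in peer_group.items()}

print(benchmark_state, gaps)
```

    Repeating this over all 11 indicators and peer groups yields the kind of benchmark/gap tables the article reports.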

  11. Trends in Child Poverty Using an Improved Measure of Poverty.

    Science.gov (United States)

    Wimer, Christopher; Nam, JaeHyun; Waldfogel, Jane; Fox, Liana

    2016-04-01

    The official measure of poverty has been used to assess trends in children's poverty rates for many decades. But because of flaws in official poverty statistics, these basic trends have the potential to be misleading. We use an augmented Current Population Survey data set that calculates an improved measure of poverty to reexamine child poverty rates between 1967 and 2012. This measure, the Anchored Supplemental Poverty Measure, is based partially on the US Census Bureau and Bureau of Labor Statistics' new Supplemental Poverty Measure. We focus on 3 age groups of children, those aged 0 to 5, 6 to 11, and 12 to 17 years. Young children have the highest poverty rates, both historically and today. However, among all age groups, long-term poverty trends have been more favorable than official statistics would suggest. This is entirely due to the effect of counting resources from government policies and programs, which have reduced poverty rates substantially for children of all ages. However, despite this progress, considerable disparities in the risk of poverty continue to exist by education level and family structure. Copyright © 2016 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  12. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    Science.gov (United States)

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and of a newly proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
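
    Of the algorithms benchmarked above, the centroid method is the simplest: the peak is the intensity-weighted mean of the sampled wavelengths. The sketch below uses an invented, symmetric reflection spectrum, so the centroid recovers the nominal 1550.0 nm peak; on asymmetric FBG profiles it would show the systematic error the paper describes.

```python
def centroid_peak(wavelengths_nm, intensities):
    """Centroid (center-of-mass) peak estimate over a sampled spectrum."""
    total = sum(intensities)
    return sum(w * i for w, i in zip(wavelengths_nm, intensities)) / total

# Invented sampled FBG reflection spectrum around 1550 nm.
wavelengths = [1549.8, 1549.9, 1550.0, 1550.1, 1550.2]
reflected_power = [0.1, 0.5, 1.0, 0.5, 0.1]

print(round(centroid_peak(wavelengths, reflected_power), 3))
```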

  13. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet-based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and thereby to explore alternative improvement targets.

  14. Evaluating the scope for energy-efficiency improvements in the public sector: Benchmarking NHSScotland's smaller health buildings

    International Nuclear Information System (INIS)

    Murray, Joe; Pahl, O.; Burek, S.

    2008-01-01

    The National Health Service in Scotland (NHSScotland) has, in recent years, done much to reduce energy consumption in its major healthcare buildings (hospitals). On average, a reduction of 2% per year has been achieved since 2000, based on hospital buildings. However, little or no attention had been paid to smaller premises such as health centres, clinics, dentists, etc. Such smaller healthcare buildings constitute 29% of the total treated floor area of all NHSScotland buildings and, therefore, may contribute a similar percentage of carbon and other emissions to the environment. By concentrating on a sample of local health centres in Scotland, this paper outlines the creation of an energy benchmark target, which is part of a wider research project to investigate the environmental impacts of small healthcare buildings in Scotland and the scope for improvements. It was found that energy consumption varied widely between different centres, but this variation could not be linked to building style, floor area or volume. Overall, it was found that a benchmark of 0.2 GJ/m³ would be challenging, but realistic.

  15. Relevance of introducing the concept of benchmarking on the education market of Ukraine for the development of higher educational institutions

    Directory of Open Access Journals (Sweden)

    Kostiuk Mariia

    2016-06-01

    On the Ukrainian education market, indicators of demand and supply of educational services show a steady growth trend. Due to increasing competition between institutions across countries, it is impossible to do without innovative tools to enhance competitiveness, namely benchmarking. The article substantiates the need for introducing benchmarking on the Ukrainian education market. It considers the best examples of university benchmarking in the United States and studies the stages of benchmarking. Based on the results of the study, recommendations have been formulated for improving the competitiveness of Ukrainian higher education institutions and their access to new markets through benchmarking.

  16. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar measurements.

  17. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection.

  18. Benchmarking ENDF/B-VII.0

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2006-01-01

    For thermal spectrum benchmarks, the calculated results deviate on average only 0.017% from the measured benchmark values. Moreover, no clear trends (with, e.g., enrichment, lattice pitch, or spectrum) have been observed. Also for fast spectrum benchmarks, both for intermediately or highly enriched uranium and for plutonium, clear improvements are apparent from the calculations. The results for bare assemblies have improved, as well as those with a depleted or natural uranium reflector. On the other hand, the results for plutonium solutions (PU-SOL-THERM) are still high, on average (over 120 benchmarks) roughly 0.6%. Furthermore, there is still a bias for a range of benchmarks based on cores in the Zero Power Reactor (ANL) with sizable amounts of tungsten in them. The results for the fusion shielding benchmarks have not changed significantly for most materials, compared to ENDF/B-VI.8. The delayed neutron testing shows that the values for both thermal and fast spectrum cases are now well predicted, which is an improvement when compared with ENDF/B-VI.8.

  19. Improvement of crop yield in dry environments: benchmarks, levels of organisation and the role of nitrogen.

    Science.gov (United States)

    Sadras, V O; Richards, R A

    2014-05-01

    Crop yield in dry environments can be improved with complementary approaches, including selecting for yield in the target environments, selecting for yield potential, and using indirect, trait- or genomic-based methods. This paper (i) outlines the achievements of direct selection for yield in improving drought adaptation, (ii) discusses the limitations of indirect approaches in the context of levels of organization, and (iii) emphasizes trade-offs and synergies between nitrogen nutrition and drought adaptation. Selection for yield in the water- and nitrogen-scarce environments of Australia improved wheat yield per unit transpiration at a rate of 0.12 kg ha⁻¹ mm⁻¹ yr⁻¹; for indirect methods to be justified, they must return superior rates of improvement, achieve the same rate at lower cost, or provide other cost-effective benefits, such as expanding the genetic basis for selection. The slow improvement of crop adaptation to water stress using indirect methods is partially related to issues of scale. Traits are thus classified into three broad groups: those that generally scale up from low levels of organization to the crop level (e.g. herbicide resistance), those that do not (e.g. grain yield), and traits that might scale up provided they are considered in an integrated manner, with scientifically sound scaling assumptions, appropriate growing conditions, and screening techniques (e.g. stay-green). Predicting the scalability of traits may help to set priorities in the investment of research efforts. Primary productivity in arid and semi-arid environments is simultaneously limited by water and nitrogen, but few attempts are made to target adaptation to water and nitrogen stress simultaneously. Case studies in wheat and soybean highlight biological links between improved nitrogen nutrition and drought adaptation.

  20. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of the predictive capability of a computational model is built on the level of achievement and documentation of these V&V activities.

  1. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  2. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the
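The order-of-accuracy check that underpins manufactured-solution code verification can be sketched in a few lines. This is an illustrative sketch, not taken from the paper: the error values are hypothetical, and the formula is the standard observed-order estimate p = log(E_h / E_h/2) / log(r) for a grid refinement ratio r.

```python
import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed order of accuracy from discretization errors on two grids:
    p = log(E_coarse / E_fine) / log(r)."""
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# Hypothetical errors measured against a manufactured solution on grids
# with spacing h and h/2 (values are illustrative only).
e_h, e_h2 = 4.0e-3, 1.0e-3
p = observed_order(e_h, e_h2)
print(round(p, 2))  # a second-order scheme should give p close to 2
```

In a manufactured-solution benchmark, the code passes verification when the observed order p matches the formal order of the discretization as the grid is refined.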

  3. Improving energy productivity in paddy production through benchmarking-An application of data envelopment analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chauhan, Narvendra Singh [Department of Agronomy, Uttar Banga Krishi Viswavidyalaya, P.O. Pundibari, District Cooch Behar (West Bengal) 736 165 (India)]. E-mail: nsc_01@rediffmail.com; Mohapatra, Pratap K.J. [Department of Industrial Engineering and Management, Indian Institute of Technology, Kharagpur (West Bengal) 721 302 (India); Pandey, Keshaw Prasad [Department of Agricultural and Food Engineering, Indian Institute of Technology, Kharagpur (West Bengal) 721 302 (India)

    2006-06-15

    In this study, a data envelopment analysis approach has been used to determine the efficiencies of farmers with regard to energy use in rice production activities in the alluvial zone in the state of West Bengal in India. The study has helped to segregate efficient farmers from inefficient ones, identify wasteful uses of energy from different sources by inefficient farmers and to suggest reasonable savings in energy uses from different sources. The methods of cross efficiency matrix and distribution of virtual inputs are used to get insights into the performance of individual farmers, rank efficient farmers and identify the improved operating practices followed by a group of truly efficient farmers. The results reveal that, on an average, about 11.6% of the total input energy could be saved if the farmers follow the input package recommended by the study. The study also suggests that better use of power tillers and introduction of improved machinery would improve the efficiency of energy use and thereby improve the energy productivity of the rice production system in the zone.
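The input-oriented CCR model behind such a DEA efficiency analysis can be sketched as a small linear program. This is a minimal illustration, not the authors' implementation: the farm input/output numbers are hypothetical, and scipy's `linprog` stands in for whatever solver the study used.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of decision-making unit k.
    X: (n_units, n_inputs) e.g. energy inputs; Y: (n_units, n_outputs) e.g. yield."""
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [theta, lambda_1 .. lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                       # minimize theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[k]              # sum(lambda * x) - theta * x_k <= 0
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T              # sum(lambda * y) >= y_k
    b_ub[m:] = -Y[k]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Hypothetical farms: one energy input, one paddy-yield output.
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[1.0], [1.0], [1.5]])
effs = [round(dea_efficiency(X, Y, k), 3) for k in range(3)]
print(effs)  # farms on the frontier score 1.0
```

An efficiency below 1.0 indicates the proportion by which that farm could shrink all inputs while keeping its outputs, which is how the study quantifies potential energy savings.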

  4. Improving energy productivity in paddy production through benchmarking-An application of data envelopment analysis

    International Nuclear Information System (INIS)

    Chauhan, Narvendra Singh; Mohapatra, Pratap K.J.; Pandey, Keshaw Prasad

    2006-01-01

    In this study, a data envelopment analysis approach has been used to determine the efficiencies of farmers with regard to energy use in rice production activities in the alluvial zone in the state of West Bengal in India. The study has helped to segregate efficient farmers from inefficient ones, identify wasteful uses of energy from different sources by inefficient farmers and to suggest reasonable savings in energy uses from different sources. The methods of cross efficiency matrix and distribution of virtual inputs are used to get insights into the performance of individual farmers, rank efficient farmers and identify the improved operating practices followed by a group of truly efficient farmers. The results reveal that, on an average, about 11.6% of the total input energy could be saved if the farmers follow the input package recommended by the study. The study also suggests that better use of power tillers and introduction of improved machinery would improve the efficiency of energy use and thereby improve the energy productivity of the rice production system in the zone

  5. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking applications. The present study analyses voluntary benchmarking in a public setting that is oriented towards learning. The study contributes by showing how benchmarking can be mobilised for learning and offers evidence of the effects of such benchmarking for performance outcomes. It concludes that benchmarking can enable learning in public settings but that this requires actors to invest in ensuring that benchmark data are directed towards improvement.

  6. Benchmarking Using Basic DBMS Operations

    Science.gov (United States)

    Crolotte, Alain; Ghazal, Ahmad

The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. The benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
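A micro-benchmark of the kind XMarq performs, timing scans, aggregations, joins and index access, can be sketched against an in-memory SQLite database. This is an illustrative sketch only: XMarq itself targets the TPC-H data model, and the `orders` table, queries, and row counts here are invented for the example.

```python
import sqlite3
import time

def time_query(conn, sql):
    """Return wall-clock seconds to run one query and fetch its results."""
    t0 = time.perf_counter()
    conn.execute(sql).fetchall()
    return time.perf_counter() - t0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, cust INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, float(i)) for i in range(10000)])
conn.execute("CREATE INDEX idx_cust ON orders(cust)")

# One timing per basic operation class, in the spirit of XMarq's query groups.
timings = {
    "scan": time_query(conn, "SELECT * FROM orders"),
    "aggregate": time_query(conn, "SELECT cust, SUM(amount) FROM orders GROUP BY cust"),
    "join": time_query(conn, "SELECT COUNT(*) FROM orders a "
                             "JOIN orders b ON a.cust = b.cust AND a.id < b.id"),
    "index_access": time_query(conn, "SELECT * FROM orders WHERE cust = 42"),
}
for op, secs in timings.items():
    print(f"{op}: {secs:.4f}s")
```

Comparing two systems then reduces to running the same query set on both and comparing the per-operation timings, which is the single-system/two-system metric idea the paper proposes.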

  7. Benchmarking reference services: an introduction.

    Science.gov (United States)

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  8. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

The IAEA WIMS Library Update Project (WLUP) is in its final stage, and the final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for analysis and plotting of results is described, with some examples. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  9. An improved synthesis of pentacene: rapid access to a benchmark organic semiconductor.

    Science.gov (United States)

    Pramanik, Chandrani; Miller, Glen P

    2012-04-20

Pentacene is an organic semiconductor used in a variety of thin-film organic electronic devices. Although at least six separate syntheses of pentacene are known (two from dihydropentacenes, two from 6,13-pentacenedione and two from 6,13-dihydro-6,13-dihydroxypentacene), none is ideal and several utilize elevated temperatures that may facilitate the oxidation of pentacene as it is produced. Here, we present a fast (~2 min of reaction time), simple, high-yielding (≥90%), low-temperature synthesis of pentacene from readily available 6,13-dihydro-6,13-dihydroxypentacene. Further, we discuss the mechanism of this highly efficient reaction. With this improved synthesis, researchers gain rapid, affordable access to high purity pentacene in excellent yield and without the need for a time-consuming sublimation.

  10. An Improved Synthesis of Pentacene: Rapid Access to a Benchmark Organic Semiconductor

    Directory of Open Access Journals (Sweden)

    Glen P. Miller

    2012-04-01

Pentacene is an organic semiconductor used in a variety of thin-film organic electronic devices. Although at least six separate syntheses of pentacene are known (two from dihydropentacenes, two from 6,13-pentacenedione and two from 6,13-dihydro-6,13-dihydroxypentacene), none is ideal and several utilize elevated temperatures that may facilitate the oxidation of pentacene as it is produced. Here, we present a fast (~2 min of reaction time), simple, high-yielding (≥90%), low-temperature synthesis of pentacene from readily available 6,13-dihydro-6,13-dihydroxypentacene. Further, we discuss the mechanism of this highly efficient reaction. With this improved synthesis, researchers gain rapid, affordable access to high purity pentacene in excellent yield and without the need for a time-consuming sublimation.

  11. Benchmarking is associated with improved quality of care in type 2 diabetes: the OPTIMISE randomized, controlled trial.

    Science.gov (United States)

    Hermans, Michel P; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-11-01

To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. The primary end point of HbA1c target was achieved by 58.9% in the benchmarking group vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% of patients met the SBP target, favouring the benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%). Benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patients' cardiovascular residual risk profile.

  12. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  13. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    Science.gov (United States)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we

  14. THE IMPORTANCE OF BENCHMARKING IN MAKING MANAGEMENT DECISIONS

    Directory of Open Access Journals (Sweden)

    Adriana-Mihaela IONESCU

    2016-06-01

Launching a new business or project leads managers to make decisions and choose strategies that they will then apply in their company. Most often, they take decisions only on instinct, but there are also companies that use benchmarking studies. Benchmarking is a highly effective management tool and is useful in the new competitive environment that has emerged from the need of organizations to constantly improve their performance in order to be competitive. Using this benchmarking process, organizations try to find the best practices applied in a business, learn from famous leaders and identify ways to increase their performance and competitiveness. Thus, managers gather information about market trends and about competitors, especially about the leaders in the field, and use this information in finding ideas and setting guidelines for development. Benchmarking studies are often used in commerce, real estate, industry, and high-tech software businesses.

  15. A benchmarking program to reduce red blood cell outdating: implementation, evaluation, and a conceptual framework.

    Science.gov (United States)

    Barty, Rebecca L; Gagliardi, Kathleen; Owens, Wendy; Lauzon, Deborah; Scheuermann, Sheena; Liu, Yang; Wang, Grace; Pai, Menaka; Heddle, Nancy M

    2015-07-01

    Benchmarking is a quality improvement tool that compares an organization's performance to that of its peers for selected indicators, to improve practice. Processes to develop evidence-based benchmarks for red blood cell (RBC) outdating in Ontario hospitals, based on RBC hospital disposition data from Canadian Blood Services, have been previously reported. These benchmarks were implemented in 160 hospitals provincewide with a multifaceted approach, which included hospital education, inventory management tools and resources, summaries of best practice recommendations, recognition of high-performing sites, and audit tools on the Transfusion Ontario website (http://transfusionontario.org). In this study we describe the implementation process and the impact of the benchmarking program on RBC outdating. A conceptual framework for continuous quality improvement of a benchmarking program was also developed. The RBC outdating rate for all hospitals trended downward continuously from April 2006 to February 2012, irrespective of hospitals' transfusion rates or their distance from the blood supplier. The highest annual outdating rate was 2.82%, at the beginning of the observation period. Each year brought further reductions, with a nadir outdating rate of 1.02% achieved in 2011. The key elements of the successful benchmarking strategy included dynamic targets, a comprehensive and evidence-based implementation strategy, ongoing information sharing, and a robust data system to track information. The Ontario benchmarking program for RBC outdating resulted in continuous and sustained quality improvement. Our conceptual iterative framework for benchmarking provides a guide for institutions implementing a benchmarking program. © 2015 AABB.

  16. SafeCare: An Innovative Approach for Improving Quality Through Standards, Benchmarking, and Improvement in Low- and Middle- Income Countries.

    Science.gov (United States)

    Johnson, Michael C; Schellekens, Onno; Stewart, Jacqui; van Ostenberg, Paul; de Wit, Tobias Rinke; Spieker, Nicole

    2016-08-01

In low- and middle-income countries (LMICs), patients often have limited access to high-quality care because of a shortage of facilities and human resources, inefficiency of resource allocation, and limited health insurance. SafeCare was developed to provide innovative health care standards; surveyor training; a grading system for quality of care; a quality improvement process that is broken down into achievable, measurable steps to facilitate incremental improvement; and a private sector-supported health financing model. Three organizations (PharmAccess Foundation, Joint Commission International, and the Council for Health Service Accreditation of Southern Africa) launched SafeCare in 2011 as a formal partnership. Five SafeCare levels of improvement are allocated on the basis of an algorithm that incorporates both the overall score and weighted criteria, so that certain high-risk criteria need to be in place before a facility can move to the next SafeCare certification level. A customized quality improvement plan based on the SafeCare assessment results lists the specific, measurable activities that should be undertaken to address gaps in quality found during the initial assessment and to meet the next-level SafeCare certificate. The standards have been implemented in more than 800 primary and secondary facilities by qualified local surveyors, in partnership with various local public and private partner organizations, in six sub-Saharan African countries (Ghana, Kenya, Nigeria, Namibia, Tanzania, and Zambia). Expanding access to care and improving health care quality in LMICs will require a coordinated effort between institutions and other stakeholders. SafeCare's standards and assessment methodology can help build trust between stakeholders and lay the foundation for country-led quality monitoring systems.

  17. Early Surgical Site Infection Following Tissue Expander Breast Reconstruction with or without Acellular Dermal Matrix: National Benchmarking Using National Surgical Quality Improvement Program

    Directory of Open Access Journals (Sweden)

    Sebastian Winocour

    2015-03-01

Background: Surgical site infections (SSIs) result in significant patient morbidity following immediate tissue expander breast reconstruction (ITEBR). This study determined a single institution's 30-day SSI rate and benchmarked it against that among national institutions participating in the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP). Methods: Women who underwent ITEBR with/without acellular dermal matrix (ADM) were identified using the ACS-NSQIP database between 2005 and 2011. Patient characteristics associated with the 30-day SSI rate were determined, and differences in rates between our institution and the national database were assessed. Results: 12,163 patients underwent ITEBR, including 263 at our institution. SSIs occurred in 416 (3.4%) patients nationwide excluding our institution, with lower rates observed at our institution (1.9%). Nationwide, SSIs were significantly more common in ITEBR patients with ADM (4.5%) compared to non-ADM patients (3.2%, P=0.005), and this trend was observed at our institution (2.1% vs. 1.6%, P=1.00). A multivariable analysis of all institutions identified age ≥50 years (odds ratio [OR], 1.4; confidence interval [CI], 1.1-1.7), body mass index ≥30 kg/m2, and operative time >4.25 hours (OR, 1.9; CI, 1.5-2.4) as risk factors for SSIs. Our institutional SSI rate was lower than the nationwide rate (OR, 0.4; CI, 0.2-1.1), although this difference was not statistically significant (P=0.07). Conclusions: The 30-day SSI rate at our institution in patients who underwent ITEBR was lower than the national rate. SSIs occurred more frequently in procedures involving ADM both nationally and at our institution.

  18. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques.

  19. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques.

  20. Achievements and trends of using induced mutations in crop improvement

    International Nuclear Information System (INIS)

    Nichterlein, K.; Maluszynski, M.; ); Bohlmann, H.; Nielen, S.; )

    2000-01-01

    Mutation techniques have been employed for the genetic improvement of crops and ornamentals leading to the official release of more than 2200 improved varieties. Some of them have made a major impact on crop productivity and achieved great economic success. Induced mutations play an important role in plant genome research to understand the function of genes aiming to improve food security and diversity. (author)

  1. [Do you mean benchmarking?].

    Science.gov (United States)

    Bonnet, F; Solignac, S; Marty, J

    2008-03-01

The purpose of benchmarking is to establish improvement processes by comparing activities to quality standards. The proposed methodology is illustrated by benchmarking case studies performed inside healthcare facilities on items such as nosocomial infections or the organization of surgical facilities. Moreover, the authors have built a specific graphic tool, enhanced with balanced-score indicators and mappings, so that the comparison between different anesthesia and intensive-care services willing to start an improvement program is easy and relevant. This ready-made application is all the more accurate when detailed tariffs of activities are implemented.

  2. Power reactor pressure vessel benchmarks

    International Nuclear Information System (INIS)

    Rahn, F.J.

    1978-01-01

A review is given of the current status of experimental and calculational benchmarks for use in understanding the radiation embrittlement effects in the pressure vessels of operating light water power reactors. The requirements of such benchmarks for application to pressure vessel dosimetry are stated. Recent developments in active and passive neutron detectors sensitive in the ranges of importance to embrittlement studies are summarized, and recommendations for improvements in the benchmarks are made. (author)

  3. Trends in Mediation Analysis in Nursing Research: Improving Current Practice.

    Science.gov (United States)

    Hertzog, Melody

    2018-06-01

    The purpose of this study was to describe common approaches used by nursing researchers to test mediation models and evaluate them within the context of current methodological advances. MEDLINE was used to locate studies testing a mediation model and published from 2004 to 2015 in nursing journals. Design (experimental/correlation, cross-sectional/longitudinal, model complexity) and analysis (method, inclusion of test of mediated effect, violations/discussion of assumptions, sample size/power) characteristics were coded for 456 studies. General trends were identified using descriptive statistics. Consistent with findings of reviews in other disciplines, evidence was found that nursing researchers may not be aware of the strong assumptions and serious limitations of their analyses. Suggestions for strengthening the rigor of such studies and an overview of current methods for testing more complex models, including longitudinal mediation processes, are presented.
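One method current guidance favours over the older causal-steps test of mediation is a bootstrap confidence interval for the indirect effect a*b. The sketch below is illustrative only, not taken from the review: the simulated data and the function name are hypothetical, and the percentile bootstrap is just one of several accepted interval methods.

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=2000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b in the simple
    mediation model X -> M -> Y (a: X->M slope; b: M->Y slope given X)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample cases with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]           # slope of M ~ X
        design = np.column_stack([np.ones(n), xb, mb])
        b = np.linalg.lstsq(design, yb, rcond=None)[0][2]  # M slope in Y ~ X + M
        estimates.append(a * b)
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return lo, hi

# Simulated data with a true mediated path X -> M -> Y (indirect effect 0.2).
rng = np.random.default_rng(1)
x = rng.normal(size=300)
m = 0.5 * x + rng.normal(size=300)
y = 0.4 * m + rng.normal(size=300)
lo, hi = bootstrap_indirect(x, m, y)
print(lo, hi)  # a CI excluding zero supports mediation
```

Because the sampling distribution of a*b is skewed, the bootstrap interval avoids the normality assumption that makes the Sobel test and causal-steps approach underpowered, which is one of the limitations such reviews flag.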

  4. Improve processes on healthcare: current issues and future trends.

    Science.gov (United States)

    Chen, Jason C H; Dolan, Matt; Lin, Binshan

    2004-01-01

    Information Technology (IT) is a critical resource for improving today's business competitiveness. However, many healthcare providers do not proactively manage or improve the efficiency and effectiveness of their services with IT. Survival in a competitive business environment demands continuous improvements in quality and service, while rigorously maintaining core values. Electronic commerce continues its development, gaining ground as the preferred means of business transactions. Embracing e-healthcare and treating IT as a strategic tool to improve patient safety and the quality of care enables healthcare professionals to benefit from technology formerly used only for management purposes. Numerous improvement initiatives, introduced by both the federal government and the private sector, seek to better the status quo in IT. This paper examines the current IT climate using an enhanced "Built to Last" model, and comments on future IT strategies within the healthcare industry.

  5. TV Energy Consumption Trends and Energy-Efficiency Improvement Options

    Energy Technology Data Exchange (ETDEWEB)

    Park, Won Young; Phadke, Amol; Shah, Nihar; Letschert, Virginie

    2011-07-01

The SEAD initiative aims to transform the global market by increasing the penetration of highly efficient equipment and appliances. SEAD is a government initiative whose activities and projects engage the private sector to realize the large global energy savings potential from improved appliance and equipment efficiency. SEAD seeks to enable high-level global action by informing the Clean Energy Ministerial dialogue as one of the initiatives in the Global Energy Efficiency Challenge. In keeping with its goal of achieving global energy savings through efficiency, SEAD was approved as a task within the International Partnership for Energy Efficiency Cooperation (IPEEC) in January 2010. SEAD partners work together in voluntary activities to: (1) "raise the efficiency ceiling" by pulling super-efficient appliances and equipment into the market through cooperation on measures like incentives, procurement, awards, and research and development (R&D) investments; (2) "raise the efficiency floor" by working together to bolster national or regional policies like minimum efficiency standards; and (3) "strengthen the efficiency foundations" of programs by coordinating technical work to support these activities. Although not all SEAD partners may decide to participate in every SEAD activity, SEAD partners have agreed to engage actively in their particular areas of interest through commitment of financing, staff, consultant experts, and other resources. In addition, all SEAD partners are committed to share information, e.g., on implementation schedules for and the technical detail of minimum efficiency standards and other efficiency programs. Information collected and created through SEAD activities will be shared among all SEAD partners and, to the extent appropriate, with the global public. As of April 2011, the governments participating in SEAD are: Australia, Brazil, Canada, the European Commission, France, Germany, India, Japan, Korea, Mexico, Russia, South Africa, Sweden

  6. Recent trends on crop genetic improvement using mutation techniques

    International Nuclear Information System (INIS)

    Kang, Siyong

    2008-01-01

    Radiation breeding technology has contributed significantly to the creation of mutant genetic resources in plants for commercial cultivation and genomic study since the 1920s. According to the FAO-IAEA Mutant Variety Database, more than 2600 varieties have been released worldwide. X-rays and gamma rays have been the most frequently used sources for inducing mutations with radiation, but recently Japanese scientists have used heavy ion beams as a new radiation source, and China has achieved remarkable outcomes in mutant creation using space breeding technology since the 1990s. In Korea, more than 40 varieties have been developed by mutation breeding since the mid-1960s. Most of the mutant varieties released in Korea were food and oil seed crops, improved especially for agronomic traits such as yield, lodging tolerance, maturity, and functional compounds. Currently the mutation breeding program in Korea has assigned more resources to developing highly functional crops and ornamental plants, which are ideal systems for mutation breeding. A research program for the development of potential varieties of flowering and ornamental crops such as rose, chrysanthemum, lily, carnation, orchids, and wild flowers was started with financial support from the Biogreen 21 project of the Korean government. The expected outcomes of the program are new high value-added varieties that will provide greater income to Korean farmers, as well as many valuable mutants for the isolation of genes of interest, reverse genetics, and functional genomics. Scientific interest in mutation breeding has shifted markedly toward functional genomics since the completion of genome sequencing for some model plant species. 
A direct approach to discovering the function of a novel gene is to use a mutant which has altered

  7. Recent trends on crop genetic improvement using mutation techniques

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Siyong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-04-15

    Radiation breeding technology has contributed significantly to the creation of mutant genetic resources in plants for commercial cultivation and genomic study since the 1920s. According to the FAO-IAEA Mutant Variety Database, more than 2600 varieties have been released worldwide. X-rays and gamma rays have been the most frequently used sources for inducing mutations with radiation, but recently Japanese scientists have used heavy ion beams as a new radiation source, and China has achieved remarkable outcomes in mutant creation using space breeding technology since the 1990s. In Korea, more than 40 varieties have been developed by mutation breeding since the mid-1960s. Most of the mutant varieties released in Korea were food and oil seed crops, improved especially for agronomic traits such as yield, lodging tolerance, maturity, and functional compounds. Currently the mutation breeding program in Korea has assigned more resources to developing highly functional crops and ornamental plants, which are ideal systems for mutation breeding. A research program for the development of potential varieties of flowering and ornamental crops such as rose, chrysanthemum, lily, carnation, orchids, and wild flowers was started with financial support from the Biogreen 21 project of the Korean government. The expected outcomes of the program are new high value-added varieties that will provide greater income to Korean farmers, as well as many valuable mutants for the isolation of genes of interest, reverse genetics, and functional genomics. Scientific interest in mutation breeding has shifted markedly toward functional genomics since the completion of genome sequencing for some model plant species. 
A direct approach to discovering the function of a novel gene is to use a mutant which has altered

  8. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Sitnikov, Catalina; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and the experience of others to improve the enterprise. Starting from an analysis of performance that underlines the enterprise's strengths and weaknesses, it should be assessed what must be done to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision "from the whole towards the parts" (a fragmented image of the enterprise's value chain) redu...

  9. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  10. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance can be due to such things as product innovation, management quality, and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for managers to continuously improve their firm's efficiency and effectiveness, and their need to know the success factors and competitiveness determinants, determine what performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent for operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance, and then proposes a method to forecast and benchmark it.

  11. Enabling benchmarking and improving operational efficiency at nuclear power plants through adoption of a common process model: SNPM (standard nuclear performance model)

    International Nuclear Information System (INIS)

    Pete Karns

    2006-01-01

    To support the projected increase in base-load electricity demand, nuclear operating companies must maintain or improve upon current generation rates, all while their assets continue to age. Certainly new plants are and will be built, but the bulk of the world's nuclear generation comes from plants constructed in the 1970s and 1980s. The nuclear energy industry in the United States has dramatically increased its electricity production over the past decade, from a 75.1% capacity factor in 1994 to 91.9% by 2002 (source: NEI US Nuclear Industry Net Capacity Factors, 1980 to 2003). This increase, coupled with lowered production costs, from $2.43 in 1994 to $1.71 in 2002 (adjusted for inflation; source: NEI US Nuclear Industry Net Production Costs, 1980 to 2002), is due in large part to a focus on operational excellence that is driven by an industry effort to develop and share best practices for the purposes of benchmarking and improving overall performance. These best-practice processes, known as the standard nuclear performance model (SNPM), present an opportunity for European nuclear power generators who are looking to improve current production rates. In essence the SNPM is a model for safe, reliable, and economically competitive nuclear power generation. The SNPM has been a joint effort of several industry bodies: the Nuclear Energy Institute, the Electric Utility Cost Group, and the Institute of Nuclear Power Operations (INPO). The standard nuclear performance model (see figure 1) is comprised of eight primary processes, supported by forty-four sub-processes and a number of company-specific activities and tasks. The processes were originally envisioned by INPO in 1994 and evolved into the SNPM that was originally launched in 1998. 
Since that time communities of practice (CoPs) have emerged via workshops to further improve the processes and their inter-operability; CoP representatives include people from nuclear power operating companies, policy bodies, industry suppliers and consultants, and

  12. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking of laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing.

  13. Warehousing performance improvement using Frazelle Model and peer group benchmarking: A case study in retail warehouses in Yogyakarta and Central Java

    Directory of Open Access Journals (Sweden)

    Kusrini Elisa

    2018-01-01

    Full Text Available Warehouse performance management has an important role in improving logistics business activities. Good warehouse management can increase profit, delivery timeliness, quality and customer service. This study assesses the performance of retail warehouses in supermarkets located in Central Java and Yogyakarta. Performance improvement is proposed based on warehouse measurement using the Frazelle model (2002), which measures five indicators, namely Financial, Productivity, Utility, Quality and Cycle time, along five business processes in warehousing, i.e. receiving, put-away, storage, order picking and shipping. To obtain a more precise performance measure, the indicators are weighted using the Analytic Hierarchy Process (AHP) method. Warehouse performance is then measured and the final score determined using the SNORM method. From this study, the final score of each warehouse is found, together with opportunities to improve warehouse performance using peer group benchmarking.
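    The scoring pipeline described in this abstract, indicator weighting followed by min-max normalization, can be sketched as follows. The indicator values, worst/best bounds, and AHP weights below are invented for illustration; the Snorm step shown is the usual min-max rescaling onto a 0-100 scale.

```python
# Sketch: combine warehouse indicator values into one score using
# hypothetical AHP weights and Snorm (min-max) normalization to 0-100.

def snorm(value, worst, best):
    """Rescale a raw indicator value onto a 0-100 scale."""
    return (value - worst) / (best - worst) * 100

# Hypothetical indicators: (raw value, worst observed, best observed, AHP weight)
indicators = {
    "financial":    (0.82, 0.50, 1.00, 0.30),
    "productivity": (120,  80,   150,  0.25),
    "utility":      (0.70, 0.40, 0.95, 0.15),
    "quality":      (0.96, 0.90, 1.00, 0.20),
    "cycle_time":   (36,   48,   24,   0.10),  # lower is better: worst > best
}

# Weighted sum of normalized indicator scores gives the final score.
final_score = sum(w * snorm(v, worst, best)
                  for v, worst, best, w in indicators.values())
print(round(final_score, 1))
```

    Peer group benchmarking then amounts to comparing each warehouse's final score against the best-scoring member of its peer group.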

  14. Temporal aggregation of migration counts can improve accuracy and precision of trends

    Directory of Open Access Journals (Sweden)

    Tara L. Crewe

    2016-12-01

    Full Text Available Temporal replicate counts are often aggregated to improve model fit by reducing zero-inflation and count variability and, in the case of migration counts collected hourly throughout a migration season, to allow nonindependence to be ignored. However, aggregation can represent a loss of potentially useful information on the hourly or seasonal distribution of counts, which might impact our ability to estimate reliable trends. We simulated 20-year hourly raptor migration count datasets with a known rate of change to test the effect of aggregating hourly counts to daily or annual totals on our ability to recover the known trend. We simulated data for three types of species, to test whether results varied with species abundance or migration strategy: a commonly detected species, e.g., Northern Harrier, Circus cyaneus; a rarely detected species, e.g., Peregrine Falcon, Falco peregrinus; and a species typically counted in large aggregations with overdispersed counts, e.g., Broad-winged Hawk, Buteo platypterus. We compared the accuracy and precision of estimated trends across species and count types (hourly/daily/annual) using hierarchical models that assumed a Poisson, negative binomial (NB) or zero-inflated negative binomial (ZINB) count distribution. We found little benefit of modeling zero-inflation or of modeling the hourly distribution of migration counts. For the rare species, trends analyzed using daily totals and an NB or ZINB data distribution resulted in a higher probability of detecting an accurate and precise trend. In contrast, trends of the common and overdispersed species benefited from aggregation to annual totals, and for the overdispersed species in particular, trends estimated using annual totals were more precise and resulted in lower probabilities of estimating a trend (1) in the wrong direction, or (2) with credible intervals that excluded the true trend, as compared with hourly and daily counts.
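    The core of the simulation design, generating counts with a known rate of change, aggregating them, and checking whether the trend is recovered, can be sketched as below. This is a minimal illustration using a seeded Poisson simulation and a log-linear least-squares fit, not the hierarchical Bayesian models the authors used; the daily mean, season length, and trend value are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

years = np.arange(20)
true_trend = -0.02            # assumed 2% annual decline on the log scale
days_per_season = 60

# Simulate daily Poisson counts whose expectation declines 2% per year,
# then aggregate to annual totals (the "common species" scenario).
annual_totals = np.array([
    rng.poisson(50 * np.exp(true_trend * y), days_per_season).sum()
    for y in years
])

# A log-linear least-squares fit on annual totals recovers the trend.
slope, intercept = np.polyfit(years, np.log(annual_totals), 1)
print(f"estimated annual rate of change: {slope:.3f}")
```

    Repeating this with daily or hourly counts, or with overdispersed (negative binomial) noise, reproduces the kind of accuracy/precision comparison the study makes.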

  15. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    NARCIS (Netherlands)

    van Lent, W.A.M.; de Beer, Relinde; van Harten, Willem H.

    2010-01-01

    Background Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the

  16. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...

  17. Operating Room Efficiency before and after Entrance in a Benchmarking Program for Surgical Process Data.

    Science.gov (United States)

    Pedron, Sara; Winter, Vera; Oppel, Eva-Maria; Bialas, Enno

    2017-08-23

    Operating room (OR) efficiency continues to be a high priority for hospitals. In this context the concept of benchmarking has gained increasing importance as a means to improve OR performance. The aim of this study was to investigate whether and how participation in a benchmarking and reporting program for surgical process data was associated with a change in OR efficiency, measured through raw utilization, turnover times, and first-case tardiness. The main analysis is based on panel data from 202 surgical departments in German hospitals, derived from the largest database for surgical process data in Germany. Panel regression modelling was applied. Results revealed no clear and unequivocal effect of participation in a benchmarking and reporting program for surgical process data. The largest effect was observed for first-case tardiness. In contrast to expectations, turnover times showed a generally increasing trend during participation. For raw utilization no clear and statistically significant trend could be evidenced. Subgroup analyses revealed differences in effects across hospital types and department specialties. Participation in a benchmarking and reporting program, and thus the availability of reliable, timely and detailed analysis tools to support OR management, seemed to be correlated especially with an increase in the timeliness of staff members regarding first-case starts. The increasing trend in turnover time revealed the absence of effective strategies to improve this aspect of OR efficiency in German hospitals and could have meaningful consequences for medium- and long-run capacity planning in the OR.
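    The three efficiency measures used in this study (raw utilization, turnover times, first-case tardiness) can be computed from case-level timestamps roughly as follows; the case times, the 08:00 scheduled start, and the eight-hour staffed block are hypothetical.

```python
from datetime import datetime, timedelta

# Toy OR day: (case start, case end); scheduled first-case start is 08:00.
cases = [("08:12", "10:05"), ("10:35", "12:50"), ("13:20", "15:40")]
available = timedelta(hours=8)   # assumed 08:00-16:00 staffed block

def t(s):
    return datetime.strptime(s, "%H:%M")

# Raw utilization: total in-room time divided by staffed time.
in_room = sum((t(end) - t(start) for start, end in cases), timedelta())
raw_utilization = in_room / available

# Turnover time: gap between one case ending and the next starting.
turnover_times = [t(cases[i + 1][0]) - t(cases[i][1]) for i in range(len(cases) - 1)]

# First-case tardiness: actual first start minus scheduled start.
first_case_tardiness = t(cases[0][0]) - t("08:00")
```

    With these definitions, a benchmarking program can report each department's distribution of the three measures against its peers, which is the comparison the panel regressions in the study build on.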

  18. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    Science.gov (United States)

    2010-01-01

    Background Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC); three chemotherapy day units (CDU) were involved in the second study; and four radiotherapy departments were included in the final study. For each multiple case study, a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. Results We adapted and evaluated existing benchmarking processes through formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. The three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were found, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting. 
Conclusions The improved

  19. Improving the "Quality of Life" in School and Business Organizations: Historical and Contemporary Trends.

    Science.gov (United States)

    Karr-Kidwell, PJ

    Numerous attempts have been made to improve the effectiveness of decision-making in organizational settings. Some of the historical and contemporary organizational trends regarding these efforts, both in business and in school settings, are presented in this paper. The focus is on the related expectations and outcomes that are evident in diverse…

  20. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performance through our results, especially in the thick-target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of the physical models used in our codes. Thereafter, a scheme for a radioactive waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  1. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather these data have had limited success. We believe this information is potentially important, urge that efforts to gather it be continued, and offer suggestions to achieve full participation.

  2. Using benchmarking to identify inter-centre differences in persistent ductus arteriosus treatment: can we improve outcome?

    Science.gov (United States)

    Jansen, Esther J S; Dijkman, Koen P; van Lingen, Richard A; de Vries, Willem B; Vijlbrief, Daniel C; de Boode, Willem P; Andriessen, Peter

    2017-10-01

    The aim of this study was to identify inter-centre differences in persistent ductus arteriosus treatment and their related outcomes. Materials and methods We carried out a retrospective, multicentre study including infants between 24+0 and 27+6 weeks of gestation in the period between 2010 and 2011. In all centres, echocardiography was used as the standard procedure to diagnose a patent ductus arteriosus and to document ductal closure. In total, 367 preterm infants were included. All four participating neonatal ICUs had a comparable number of preterm infants; however, differences were observed in the incidence of treatment (33-63%), the choice and dosing of medication (ibuprofen or indomethacin), the number of pharmacological courses (1-4), and the need for surgical ligation after failure of pharmacological treatment (8-52%). Despite the differences in treatment, we found no difference in short-term morbidity between the centres. Adjusted mortality showed independent risk contributions of gestational age, birth weight, ductal ligation, and perinatal centre. Using benchmarking as a tool identified inter-centre differences. In these four perinatal centres, the factors that explain the differences in patent ductus arteriosus treatment are quite complex. Timing, choice of medication, and dosing are probably important determinants of successful patent ductus arteriosus closure.

  3. FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark

    International Nuclear Information System (INIS)

    Sawan, M.E.

    1994-12-01

    During the IAEA Advisory Group Meeting on "Improved Evaluations and Integral Data Testing for FENDL", held in Garching near Munich, Germany, 12-16 September 1994, Working Group II on "Experimental and Calculational Benchmarks on Fusion Neutronics for ITER" recommended that a calculational benchmark representative of the ITER design should be developed. This report describes the neutronics and shielding calculational benchmark available for scientists interested in performing analysis for this benchmark. (author)

  4. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  5. Qualification and improvement of iron ENDF/B-VI and JEF-2 evaluations by interpretation of the Aspis Benchmark

    International Nuclear Information System (INIS)

    Zheng, S.H.; Kodeli, I.; Raepsaet, C.; Diop, C.M.; Nimal, J.C.; Monnier, A.

    1992-01-01

    The aim of the present study is to contribute to the validation of new evaluated nuclear data files such as ENDF/B-VI and JEF-2.2. The new cross-section evaluations for iron isotopes are of particular interest to the nuclear community, since it is well known that the ENDF/B-IV data underestimate the neutron flux in deep-penetration problems. The performance of the new nuclear data libraries is compared with that of ENDF/B-IV. The ASPIS benchmark, in which neutrons are transported through more than one meter of iron plate, was chosen for this study. The cross-section libraries were produced by the THEMIS/NJOY (ref. 1) processing system and the transport calculations were carried out using the 3D Monte Carlo code TRIPOLI. The influence of different multigroup cross-section representations was investigated. Finally, sensitivity, uncertainty and data adjustment analyses were carried out to obtain additional information about the quality of the cross-section data in the ENDF/B-VI files. The analyses were performed using a code package made up of different modules, either developed at CEA or obtained from the NEA Data Bank. The adjustment indicated that some modifications have to be introduced to the neutron cross-sections of iron, and the whole set of calculations was repeated with the adjusted cross sections. Comparison of the results of the uncertainty and adjustment analyses applied to ENDF/B-IV and ENDF/B-VI iron data establishes the progress made and gives some indication of the state of the art of the cross-section data.

  6. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight into the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external...

  7. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  8. Using benchmarking to assist the improvement of service quality in home support services for older people-IN TOUCH (Integrated Networks Towards Optimising Understanding of Community Health).

    Science.gov (United States)

    Jacobs, Stephen P; Parsons, Matthew; Rouse, Paul; Parsons, John; Gunderson-Reid, Michelle

    2018-04-01

    Service providers and funders need ways to work together to improve services. Identifying critical performance variables provides a mechanism by which funders can understand what they are purchasing without getting caught up in restrictive service specifications that limit the ability of service providers to meet the needs of clients. An implementation pathway and benchmarking programme called IN TOUCH provided contracted providers of home support, and funders, with a consistent methodology to follow when developing and implementing new restorative approaches to service delivery. Data from performance measurement were used to triangulate the personal and social worlds of the stakeholders, enabling them to develop a shared understanding of what is working and what is not. The initial implementation of IN TOUCH involved five District Health Boards. The recursive dialogue encouraged by the IN TOUCH programme supports better and more sustainable service development because performance management is anchored to agreed data that has meaning for all stakeholders.

  9. Benchmarking homogenization algorithms for monthly data

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2012-01-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
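    As a concrete illustration of metric (i), the centered root mean square error removes each series' own mean before differencing, so a constant homogenization offset is not penalized. The following is a minimal sketch with invented toy data; the function name and series are assumptions, not the HOME benchmark's actual scoring code:

```python
import math

def centered_rmse(homogenized, truth):
    """Centered RMSE: root mean square error after removing each
    series' own mean, so a constant offset is not penalized."""
    mh = sum(homogenized) / len(homogenized)
    mt = sum(truth) / len(truth)
    sq = [((h - mh) - (t - mt)) ** 2 for h, t in zip(homogenized, truth)]
    return math.sqrt(sum(sq) / len(sq))

# Toy monthly anomalies: the homogenized series differs from the
# truth only by a constant bias, so the centered RMSE is ~zero.
truth = [0.1, 0.3, -0.2, 0.4, 0.0]
biased = [t + 0.5 for t in truth]
print(round(centered_rmse(biased, truth), 6))  # → 0.0
```

A plain RMSE would report 0.5 here; the centering is what lets the metric focus on errors in variability and trend rather than in absolute level.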

  10. Modeling Systematic Change in Stopover Duration Does Not Improve Bias in Trends Estimated from Migration Counts.

    Directory of Open Access Journals (Sweden)

    Tara L Crewe

    Full Text Available The use of counts of unmarked migrating animals to monitor long-term population trends assumes independence of daily counts and a constant rate of detection. However, migratory stopovers often last days or weeks, violating the assumption of count independence. Further, a systematic change in stopover duration will result in a change in the probability of detecting individuals once, but also in the probability of detecting individuals on more than one sampling occasion. We tested how variation in stopover duration influenced the accuracy and precision of population trends by simulating migration count data with a known constant rate of population change, and by allowing daily probability of survival (an index of stopover duration) to remain constant, or to vary randomly, cyclically, or increase linearly over time by various levels. Using simulated datasets with a systematic increase in stopover duration, we also tested whether any resulting bias in population trend could be reduced by modeling the underlying source of variation in detection, or by subsampling data to every three or five days to reduce the incidence of recounting. Mean bias in population trend did not differ significantly from zero when stopover duration remained constant or varied randomly over time, but bias and the detection of false trends increased significantly with a systematic increase in stopover duration. Importantly, an increase in stopover duration over time resulted in a compounding effect on counts due to the increased probability of detection and of recounting on subsequent sampling occasions. Under this scenario, bias in population trend could not be modeled using a covariate for stopover duration alone. Rather, to improve inference drawn about long-term population change using counts of unmarked migrants, analyses must include a covariate for stopover duration, as well as incorporate sampling modifications (e.g., subsampling to reduce the probability that individuals will
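    The compounding effect described above is easy to reproduce deterministically: if the expected count is roughly proportional to population size times stopover duration, a systematic increase in duration inflates the estimated log-linear trend. A sketch under that simplifying assumption (function names and parameter values are invented for illustration, not taken from the paper):

```python
import math

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

years = list(range(20))
true_trend = -0.02          # 2% annual population decline
n0, dur0 = 1000.0, 5.0      # initial population and stopover days

def est_trend(dur_growth):
    # Expected annual count ≈ population × stopover duration (each
    # bird can be recounted on every day of its stopover).
    counts = [n0 * math.exp(true_trend * y) * dur0 * (1 + dur_growth) ** y
              for y in years]
    return ols_slope(years, [math.log(c) for c in counts])

print(round(est_trend(0.0), 4))   # constant stopover → recovers -0.02
print(round(est_trend(0.01), 4))  # lengthening stopover → biased upward
```

With a 1% annual increase in stopover duration the estimated trend is roughly -0.02 + ln(1.01) ≈ 0, masking the real decline, which mirrors the false-trend problem the abstract reports.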

  11. Benchmarking Supplier Development: An Empirical Case Study of Validating a Framework to Improve Buyer-Supplier Relationship

    OpenAIRE

    Shahzad Khuram; Sillanpää Ilkka; Sillanpää Elina; Imeri Shpend

    2016-01-01

    In today’s dynamic business environment, firms are required to utilize efficiently and effectively all the useful resources to gain competitive advantage. Supplier development has evolved as an important strategic instrument to improve buyer supplier relationships. For that reason, this study focuses on providing the strategic significance of supplier development approaches to improve business relationships. By using qualitative research method, an integrated framework of supplier development...

  12. Improvement of Global and Regional Mean Sea Level Trends Derived from all Altimetry Missions.

    Science.gov (United States)

    Ablain, Michael; Benveniste, Jérôme; Faugere, Yannice; Larnicol, Gilles; Cazenave, Anny; Johannessen, Johnny A.; Stammer, Detlef; Timms, Gary

    2012-07-01

    The global mean sea level (GMSL) has been calculated on a continual basis since January 1993 using data from satellite altimetry missions. The global MSL deduced from TOPEX/Poseidon, Jason-1 and Jason-2 increased at a rate of 3.2 mm/year from 1993 to 2010 after applying the post-glacial rebound correction (MSL Aviso website http://www.jason.oceanobs.com/msl). In addition, regional sea level trends reveal an inhomogeneous distribution of ocean elevation, with local MSL slopes ranging within +/- 8 mm/year. A study published in 2009 [Ablain et al., 2009] estimated the global MSL trend uncertainty at +/-0.6 mm/year with a confidence interval of 90%. The main sources of error at global and regional scales are the orbit calculation and the wet troposphere correction, but other sea-level components also have a significant impact on the long-term stability of the MSL, for instance the stability of instrumental parameters and the atmospheric corrections. Thanks to recent studies performed in the Sea Level Essential Climate Variable project in the frame of the Climate Change Initiative, an ESA programme, in addition to activities performed within SALP/CNES, substantial improvements have been made to the estimation of global and regional MSL trends. In this paper we describe these improvements: they concern the orbit calculation, thanks to new gravity fields; the atmospheric corrections, thanks to ERA-Interim reanalyses; the wet troposphere corrections, thanks to improved stability; and empirical corrections that better link the regional time series together. These improvements are described at global and regional scale for all the altimetry missions.

  13. Improving benchmarking by using an explicit framework for the development of composite indicators: an example using pediatric quality of care

    OpenAIRE

    Profit, Jochen; Typpo, Katri V; Hysong, Sylvia J; Woodard, LeChauncy D; Kallen, Michael A; Petersen, Laura A

    2010-01-01

    Abstract Background The measurement of healthcare provider performance is becoming more widespread. Physicians have been guarded about performance measurement, in part because the methodology for comparative measurement of care quality is underdeveloped. Comprehensive quality improvement will require comprehensive measurement, implying the aggregation of multiple quality metrics into composite indicators. Objective To present a conceptual framework to develop comprehensive, robust, and transp...

  14. Improving trends in gender disparities in the Department of Veterans Affairs: 2008-2013.

    Science.gov (United States)

    Whitehead, Alison M; Czarnogorski, Maggie; Wright, Steve M; Hayes, Patricia M; Haskell, Sally G

    2014-09-01

    Increasing numbers of women veterans using Department of Veterans Affairs (VA) services have contributed to the need for equitable, high-quality care for women. The VA has evaluated performance measure data by gender since 2006. In 2008, the VA launched a 5-year women's health redesign, and, in 2011, gender disparity improvement was included on leadership performance plans. We examined data from VA Office of Analytics and Business Intelligence quarterly gender reports for trends in gender disparities in gender-neutral performance measures from 2008 to 2013. Through reporting of data by gender, leadership involvement, electronic reminders, and population management dashboards, VA has seen a decreasing trend in gender inequities on most Health Effectiveness Data and Information Set performance measures.

  15. Improving Trends in Gender Disparities in the Department of Veterans Affairs: 2008–2013

    Science.gov (United States)

    Czarnogorski, Maggie; Wright, Steve M.; Hayes, Patricia M.; Haskell, Sally G.

    2014-01-01

    Increasing numbers of women veterans using Department of Veterans Affairs (VA) services have contributed to the need for equitable, high-quality care for women. The VA has evaluated performance measure data by gender since 2006. In 2008, the VA launched a 5-year women’s health redesign, and, in 2011, gender disparity improvement was included on leadership performance plans. We examined data from VA Office of Analytics and Business Intelligence quarterly gender reports for trends in gender disparities in gender-neutral performance measures from 2008 to 2013. Through reporting of data by gender, leadership involvement, electronic reminders, and population management dashboards, VA has seen a decreasing trend in gender inequities on most Health Effectiveness Data and Information Set performance measures. PMID:25100416

  16. Efficiency-improving fossil fuel technologies for electricity generation: Data selection and trends

    Energy Technology Data Exchange (ETDEWEB)

    Lanzi, Elisa [Fondazione Eni Enrico Mattei (Italy); Verdolini, Elena, E-mail: elena.verdolini@feem.it [Fondazione Eni Enrico Mattei (Italy); Universita Cattolica, del Sacro Cuore di Milano (Italy); Hascic, Ivan [OECD Environment Directorate (France)

    2011-11-15

    This paper studies patenting dynamics in efficiency improving electricity generation technologies as an important indicator of innovation activity. We build a novel database of worldwide patent applications in efficiency-improving fossil fuel technologies for electricity generation and then analyse patenting trends over time and across countries. We find that patenting has mostly been stable over time, with a recent decreasing trend. OECD countries represent the top innovators and the top markets for technology. Some non-OECD countries, and particularly China, are also very active in terms of patenting activity in this sector. The majority of patents are first filed in OECD countries and only then in BRIC and other non-OECD countries. BRIC and other non-OECD countries apply for patents that are mostly marketed domestically, but BRIC countries represent important markets for patent duplication of OECD inventions. These results are indicative of significant technology transfer in the field of efficiency-improving technologies for electricity production. - Highlights: > We study innovation in efficiency-improving electricity generation technologies. > Relevant patents are identified and used as an indicator of innovation. > We show that there is significant technology transfer in this field. > Most patents are first filed in OECD countries and then in non-OECD countries. > Patents in non-OECD countries are mostly marketed domestically.

  17. Efficiency-improving fossil fuel technologies for electricity generation: Data selection and trends

    International Nuclear Information System (INIS)

    Lanzi, Elisa; Verdolini, Elena; Hascic, Ivan

    2011-01-01

    This paper studies patenting dynamics in efficiency improving electricity generation technologies as an important indicator of innovation activity. We build a novel database of worldwide patent applications in efficiency-improving fossil fuel technologies for electricity generation and then analyse patenting trends over time and across countries. We find that patenting has mostly been stable over time, with a recent decreasing trend. OECD countries represent the top innovators and the top markets for technology. Some non-OECD countries, and particularly China, are also very active in terms of patenting activity in this sector. The majority of patents are first filed in OECD countries and only then in BRIC and other non-OECD countries. BRIC and other non-OECD countries apply for patents that are mostly marketed domestically, but BRIC countries represent important markets for patent duplication of OECD inventions. These results are indicative of significant technology transfer in the field of efficiency-improving technologies for electricity production. - Highlights: → We study innovation in efficiency-improving electricity generation technologies. → Relevant patents are identified and used as an indicator of innovation. → We show that there is significant technology transfer in this field. → Most patents are first filed in OECD countries and then in non-OECD countries. → Patents in non-OECD countries are mostly marketed domestically.

  18. An improved method for Multipath Hemispherical Map (MHM) based on Trend Surface Analysis

    Science.gov (United States)

    Wang, Zhiren; Chen, Wen; Dong, Danan; Yu, Chao

    2017-04-01

    Among the various approaches developed for detecting the multipath effect in high-accuracy GNSS positioning, only MHM (Multipath Hemispherical Map) and SF (Sidereal Filtering) can be implemented in real-time GNSS data processing. SF is based on the time repeatability of the satellite geometry and is therefore suitable only for static environments, while the spatiotemporal-repeatability-based MHM is applicable not only to static environments but also to dynamic carriers within a static multipath environment, such as ships and airplanes, and uses a much smaller number of parameters than ASF (Advanced Sidereal Filtering). However, the MHM method also has certain defects. Since MHM takes the mean of the residuals in each grid cell as the filter value, it is best suited to medium- and low-frequency multipath regimes; existing research indicates that the ASF method performs better than MHM at reducing high-frequency multipath. To address this problem and improve MHM's performance on high-frequency multipath, we combined bivariate trend surface analysis with the original MHM model to capture the spatial distribution and variation trends of the multipath effect. We computed trend surfaces of the residuals within each grid cell by least-squares procedures and selected the best results through successive significance tests. The enhanced MHM grid is constructed from the coefficients of the fitted equations instead of mean values. Analysis of actual observations shows that the improved MHM model is effective against high-frequency multipath and significantly reduces the root mean square (RMS) of the carrier residuals. Keywords: Trend Surface Analysis; Multipath Hemispherical Map; high-frequency multipath effect
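    The core of the proposed enhancement, fitting a least-squares trend surface to the residuals within a grid cell, can be sketched as follows for a first-degree (planar) surface z = a + bx + cy; the grid data are invented, and the original work may select higher-degree surfaces:

```python
def fit_plane(points):
    """Least-squares fit of z = a + b*x + c*y via the 3x3 normal
    equations, solved by Gaussian elimination (pure stdlib)."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points); syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)
    A = [[n,  sx,  sy,  sz],
         [sx, sxx, sxy, sxz],
         [sy, sxy, syy, syz]]
    # Forward elimination with partial pivoting on the augmented matrix.
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    # Back substitution.
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (A[i][3] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef  # (a, b, c)

# Hypothetical grid-cell residuals (azimuth, elevation, residual) lying
# exactly on the plane z = 1 + 2x - 0.5y.
pts = [(x, y, 1 + 2 * x - 0.5 * y) for x in range(4) for y in range(4)]
a, b, c = fit_plane(pts)
print(round(a, 6), round(b, 6), round(c, 6))  # → 1.0 2.0 -0.5
```

Storing the three coefficients (a, b, c) per grid cell instead of a single mean is exactly what lets the filter follow a spatial gradient in the multipath residuals across the cell.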

  19. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  20. Improving benchmarking by using an explicit framework for the development of composite indicators: an example using pediatric quality of care

    Science.gov (United States)

    2010-01-01

    Background The measurement of healthcare provider performance is becoming more widespread. Physicians have been guarded about performance measurement, in part because the methodology for comparative measurement of care quality is underdeveloped. Comprehensive quality improvement will require comprehensive measurement, implying the aggregation of multiple quality metrics into composite indicators. Objective To present a conceptual framework to develop comprehensive, robust, and transparent composite indicators of pediatric care quality, and to highlight aspects specific to quality measurement in children. Methods We reviewed the scientific literature on composite indicator development, health systems, and quality measurement in the pediatric healthcare setting. Frameworks were selected for explicitness and applicability to a hospital-based measurement system. Results We synthesized various frameworks into a comprehensive model for the development of composite indicators of quality of care. Among its key premises, the model proposes identifying structural, process, and outcome metrics for each of the Institute of Medicine's six domains of quality (safety, effectiveness, efficiency, patient-centeredness, timeliness, and equity) and presents a step-by-step framework for embedding the quality of care measurement model into composite indicator development. Conclusions The framework presented offers researchers an explicit path to composite indicator development. Without a scientifically robust and comprehensive approach to measurement of the quality of healthcare, performance measurement will ultimately fail to achieve its quality improvement goals. PMID:20181129
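    The two-stage aggregation the framework implies (metrics averaged within each Institute of Medicine domain, domain scores then combined into one composite) can be sketched as follows; the metric values and the equal domain weights are assumptions for illustration only:

```python
# Hypothetical raw quality metrics per IOM domain, each already
# scored on a 0-100 scale (higher = better).
metrics = {
    "safety":               [92, 88],
    "effectiveness":        [75, 81, 78],
    "efficiency":           [64],
    "patient-centeredness": [85, 90],
    "timeliness":           [70, 72],
    "equity":               [80],
}

def composite(metrics, weights=None):
    """Two-stage aggregation: average metrics within each domain,
    then take a (weighted) mean across domains, so domains with
    many metrics do not dominate the composite."""
    domain_scores = {d: sum(v) / len(v) for d, v in metrics.items()}
    if weights is None:  # equal domain weights by default
        weights = {d: 1 / len(domain_scores) for d in domain_scores}
    score = sum(domain_scores[d] * weights[d] for d in domain_scores)
    return score, domain_scores

score, by_domain = composite(metrics)
print(round(score, 2))  # → 78.42
```

Making the weights an explicit parameter keeps the aggregation transparent: a hospital can report both the default equal-weight composite and a stakeholder-weighted variant from the same domain scores.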

  1. Benchmarking Supplier Development: An Empirical Case Study of Validating a Framework to Improve Buyer-Supplier Relationship

    Directory of Open Access Journals (Sweden)

    Shahzad Khuram

    2016-03-01

    Full Text Available In today’s dynamic business environment, firms are required to utilize all useful resources efficiently and effectively to gain competitive advantage. Supplier development has evolved as an important strategic instrument to improve buyer-supplier relationships. For that reason, this study focuses on the strategic significance of supplier development approaches for improving business relationships. Using a qualitative research method, an integrated framework of supplier development and buyer-supplier relationship development has been tested and validated in a Finnish case company to provide empirical evidence. In particular, it investigates how supplier development approaches can develop buyer-supplier relationships. The study presents a set of propositions that identify the supplier development approaches critical for the development of buyer-supplier relationships, and develops a theoretical framework specifying how these different approaches work together to strengthen those relationships. The results are produced from an in-depth case study implementing the proposed research framework. The findings reveal that supplier development strategies, i.e. supplier incentives and direct involvement, have a strong effect on developing buyer-supplier relationships. Further research may consider in-depth investigation of trust and communication factors, along with the propositions developed in this study, to establish their general applicability in a dynamic business environment. The proposed integrated framework, together with the propositions, offers a unique combination of solutions for tactical and strategic management decision making and is also valid for academic researchers developing supplier development theories.

  2. CURRENT TRENDS FOR DEVELOPMENT AND IMPROVEMENT OF THE ORGANIZATIONAL STRUCTURES IN MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Angel Doraliyski

    2017-03-01

    Full Text Available One of the key components of the management system is the organizational structure of management. This report presents the current trends of development and improvement of organizational structures of management. The semantics of the term "organizational structure" used in the report is clarified. Key aspects of structural policy of organizations are discussed, as well as the introduction of flexible organizational forms. The main directions for analysis of organizational policy of organizations are also mentioned. It is stressed that the analysis of current trends for the future development of organizational structures of management should be an integral component of the management policy of every organization. The trends are flexibility and adaptability of the organizational structure, rational proportion between centralization and decentralization, an appropriate balance between rights, duties and responsibilities, and reduction of regulations, rules and other normative documents. A recommendation is made that organizational governance structures must be adapted constantly in accordance with changes in the overall management system, as they are one of its main components.

  3. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  4. Strategic behaviour under regulatory benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Jamasb, T. [Cambridge Univ. (United Kingdom). Dept. of Applied Economics; Nillesen, P. [NUON NV (Netherlands); Pollitt, M. [Cambridge Univ. (United Kingdom). Judge Inst. of Management

    2004-09-01

    In order to improve the efficiency of electricity distribution networks, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulatory benchmarking can influence the "regulation game," the subject has received limited attention. This paper discusses how strategic behaviour can result in inefficient behaviour by firms. We then use the Data Envelopment Analysis (DEA) method with US utility data to examine the implications of illustrative cases of strategic behaviour reported by regulators. The results show that gaming can have significant effects on the measured performance and profitability of firms. (author)

  5. Improving competitiveness of small medium Batik printing industry on quality & productivity using value chain benchmarking (Case study SME X, SME Y & SME Z)

    Science.gov (United States)

    Fauzi, Rizky Hanif; Liquiddanu, Eko; Suletra, I. Wayan

    2018-02-01

    Batik printing is made by printing with wax ('malam') and then passing through the dyeing process, as in batik making in general. One of the areas supporting the batik industry in Karesidenan Surakarta is Kliwonan Village, Masaran District, Sragen. Masaran District is known as one of the batik centres; it originated with batik workers from Masaran employed in the Laweyan area of Solo, who considered it more economical to produce batik in their home village of Masaran, Sragen, because producing from upstream to downstream in Solo was not feasible. SME X is a batik SME in Kliwonan Village, Masaran, Sragen that has been able to produce batik printing with national sales coverage; one key to selling its products has been participation in various national and international exhibitions, which has raised its profile. SME Y and SME Z, also located in Kliwonan Village, Masaran, Sragen, likewise produce batik printing. Observations revealed several problems that must be fixed in SME Y and SME Z: production running behind schedule, poor maintenance of worn equipment, inconsistent batik working procedures, weak supervision of operators, and low market recognition of SME Y and SME Z products. The purpose of this research is to improve the primary activities in the SME Y and SME Z value chains for batik printing products by benchmarking against SME X, which has better competence.

  6. Vver-1000 Mox core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  7. Land degradation and improvement trends over Iberia in the last three decades

    Science.gov (United States)

    Gouveia, Célia M.; Páscoa, Patrícia; Russo, Ana; Trigo, Ricardo

    2017-04-01

    Land degradation and desertification are recognised as an important environmental and social problem in arid and semiarid regions, particularly within a climate change context. In the last three decades the entire Mediterranean basin has been affected by more frequent droughts, covering large sectors and often lasting several months. Simultaneously, the stress imposed by land management practices, such as land abandonment and intensification, highlights the need for continuous monitoring and early detection of degradation. The Normalized Difference Vegetation Index (NDVI) from the GIMMS dataset was used as an indicator of land degradation or improvement over Iberia between 1982 and 2012. The influence of precipitation on NDVI was first removed, and negative/positive trends in the resulting residuals were taken to indicate land degradation/improvement. Overall, the Iberian Peninsula is dominated by widespread land improvement, with only a few hot spots of land degradation located in the central and southern sectors and on the eastern Mediterranean and Atlantic coasts. Less than 20% of the area showing land degradation lies within regions where land cover changes were observed, the new land cover types being transitional woodland-shrub, permanent and annual crops, and permanently irrigated areas. Although they represent a very small fraction, the land degradation pixels are mainly located in semi-arid regions. The monotonic changes and trend shifts present in the NDVI dataset were also assessed. The major shifts in vegetation trends, and the corresponding years of occurrence, were associated with the main disturbances observed in Iberia, namely the major wildfire seasons in Portugal, land abandonment, and new agricultural practices that followed the construction of new dams. The results provide a new outlook on the real nature of degradation or improvement of vegetation in Iberia over the last three decades.
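    The residual-trend approach described above (regress NDVI on precipitation to remove the climate signal, then test the residuals for a trend over time) can be sketched as follows; the pixel data are synthetic and the function names are invented for illustration:

```python
def ols(xs, ys):
    """Least-squares slope and intercept of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return b, my - b * mx

def residual_trend(years, ndvi, precip):
    """Remove the precipitation signal from NDVI by linear regression,
    then return the slope of the residuals over time: a negative slope
    suggests degradation, a positive one improvement."""
    b, a = ols(precip, ndvi)
    resid = [v - (a + b * p) for v, p in zip(ndvi, precip)]
    slope, _ = ols(years, resid)
    return slope

# Hypothetical pixel: NDVI tracks rainfall plus a slow decline.
years = list(range(1982, 2013))
precip = [500 + 100 * ((y % 5) - 2) for y in years]          # cyclic rainfall (mm)
ndvi = [0.4 + 0.0002 * (p - 500) - 0.002 * (y - 1982)        # rain signal + decline
        for y, p in zip(years, precip)]
print(residual_trend(years, ndvi, precip) < 0)  # degradation signal → True
```

Because the rainfall-driven variability is removed first, the residual slope isolates the non-climatic component of vegetation change, which is the quantity the study maps as degradation or improvement.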

  8. Time trends, improvements and national auditing of rectal cancer management over an 18-year period.

    Science.gov (United States)

    Kodeda, K; Johansson, R; Zar, N; Birgisson, H; Dahlberg, M; Skullman, S; Lindmark, G; Glimelius, B; Påhlman, L; Martling, A

    2015-09-01

    The main aims were to explore time trends in the management and outcome of patients with rectal cancer in a national cohort and to evaluate the possible impact of national auditing on overall outcomes. A secondary aim was to provide population-based data for appraisal of external validity in selected patient series. Data from the Swedish ColoRectal Cancer Registry with virtually complete national coverage were utilized in this cohort study on 29 925 patients with rectal cancer diagnosed between 1995 and 2012. Of eligible patients, nine were excluded. During the study period, overall, relative and disease-free survival increased. Postoperative mortality after 30 and 90 days decreased to 1.7% and 2.9%. The 5-year local recurrence rate dropped to 5.0%. Resection margins improved, as did peri-operative blood loss despite more multivisceral resections being performed. Fewer patients underwent palliative resection and the proportion of non-operated patients increased. The proportions of temporary and permanent stoma formation increased. Preoperative radiotherapy and chemoradiotherapy became more common as did multidisciplinary team conferences. Variability in rectal cancer management between healthcare regions diminished over time when new aspects of patient care were audited. There have been substantial changes over time in the management of patients with rectal cancer, reflected in improved outcome. Much indirect evidence indicates that auditing matters, but without a control group it is not possible to draw firm conclusions regarding the possible impact of a quality control registry on faster shifts in time trends, decreased variability and improvements. Registry data were made available for reference. Colorectal Disease © 2015 The Association of Coloproctology of Great Britain and Ireland.

  9. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO 2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work

  10. Benchmarking specialty hospitals, a scoping review on theory and practice.

    Science.gov (United States)

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category; or those dealing with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or if quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design, and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed including a follow up to check whether the benchmark study has led to improvements.

  11. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  12. Improving Energy Efficiency Through Technology. Trends, Investment Behaviour and Policy Design

    Energy Technology Data Exchange (ETDEWEB)

    Florax, R.J.G.M. [Purdue University, West Lafayette, IN (United States); De Groot, H.L.F. [VU University, Amsterdam (Netherlands); Mulder, P. [Tinbergen Institute, Amsterdam (Netherlands)] (eds.)

    2011-10-15

    This innovative book explores the adoption of energy-saving technologies and their impact on energy efficiency improvements. It contains a mix of theoretical and empirical contributions, and combines and compares economic and physical indicators to monitor and analyse trends in energy efficiency. The authors pay considerable attention to empirical research on the determinants of energy-saving investment including uncertainty, energy-price volatility and subsidies. They also discuss the role of energy modelling in policy design and the potential effect of energy policies on technology diffusion in energy-extensive sectors. Written from a multi-disciplinary perspective, this book will appeal to academics and graduates in the areas of energy-saving technologies, energy economics and natural resource economics, as well as policy makers - particularly those in energy policy.

  13. Near-surface modifications for improved crack tolerant behavior of high strength alloys: trends and prospects

    International Nuclear Information System (INIS)

    Hettche, L.R.; Rath, B.B.

    1982-01-01

    The purpose of this chapter is to examine the potential of surface modifications in improving the crack tolerant behavior of high strength alloys. Provides a critique of two of the most promising and versatile techniques: ion implantation and laser beam surface processing. Discusses crack tolerant properties; engineering characterization; publication trends and Department of Defense interests; and emergent surface modification techniques. Finds that the efficiency with which high strength alloys can be incorporated into a structure or component is dependent on the following crack tolerant properties: fracture toughness, fatigue resistance, sustained loading cracking resistance, fretting fatigue resistance, and hydrogen embrittlement resistance. Concludes that ion implantation and laser surface processing coupled with other advanced metallurgical procedures and fracture mechanic analyses provide the means to optimize both the bulk and surface controlled crack tolerant properties

  14. Benchmarking Academic Anatomic Pathologists

    Directory of Open Access Journals (Sweden)

    Barbara S. Ducatman MD

    2016-10-01

    value unit productivity approximated MGMA and FPSC benchmark data, we conclude that more rigorous standardization of academic faculty effort assignment will be needed to improve the value of work relative value unit measurements of faculty productivity.

  15. Longitudinal trends with improvement in quality of life after TVT, TVT O and Burch colposuspension procedures.

    Science.gov (United States)

    Drahoradova, Petra; Martan, Alois; Svabik, Kamil; Zvara, Karel; Otava, Martin; Masata, Jaromir

    2011-02-01

    Comparison of the quality of life (QoL) trends after TVT, TVT O and Burch colposuspension (BCS) procedures and comparison of long-term subjective and objective outcomes. The study included 215 women who underwent a TVT, TVT O or BCS procedure. We monitored QoL after each procedure and the effect of complications on the QoL as assessed by the IQOL questionnaire over a 3-year period. The study was completed by 74.5% of women after TVT, 74.5% after TVT O, and 65.2% after the BCS procedure. In the long term, the QoL improved from 46.9 to 88.7 and remained stable after BCS; after TVT and TVT O, it declined, but only after TVT O was the decline statistically significant compared to BCS. The IQOL for women with post-operative complications has a clear descending tendency. The effect of the complications is highly significant with TVT O, but not with TVT or BCS. Anti-incontinence operations significantly improve quality of life for women with MI, but compared to the SI group, the quality of life is worse when measured at a longer time interval after the operation. Anti-incontinence operations significantly improve quality of life, and the difference in preoperative status in the long-term follow-up is demonstrable.

  16. The extent of benchmarking in the South African financial sector

    OpenAIRE

    W Vermeulen

    2014-01-01

    Benchmarking is the process of identifying, understanding and adapting outstanding practices from within the organisation or from other businesses, to help improve performance. The importance of benchmarking as an enabler of business excellence has necessitated an in-depth investigation into the current state of benchmarking in South Africa. This research project highlights the fact that respondents realise the importance of benchmarking, but that various problems hinder the effective implementation of benchmarking. Based on the research findings, recommendations for achieving success are suggested.

  17. Medical school benchmarking - from tools to programmes.

    Science.gov (United States)

    Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  18. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007 onward, the Danish National Social Appeals Board (Ankestyrelsen) is required to benchmark the quality of the municipalities' casework. The purpose of the benchmarking is to develop the design of the practice reviews with a view to better follow-up, and to improve the municipalities' casework. This working paper discusses methods for benchmarking...

  19. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    textabstractData Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  20. Benchmarking Tool Kit.

    Science.gov (United States)

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  1. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  2. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. This paper also evaluates optimization methods of recent RISC/UNIX systems, such as IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When particular compiler options and math libraries were included in the evaluation process, systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of some so-called mainframes from IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99MHz) was defined to be one EGS4 Unit. The EGS4 Benchmark Suite was also run on various PCs such as Pentiums, i486 and DEC Alpha and so forth. The performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have been evaluated with correlation to industry benchmark programs, namely SPECmark. (author)
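The "EGS4 Unit" defined above is simply performance normalized to the HP9000/735 reference run. A hedged sketch of that normalization (the runtimes below are made-up numbers, not measurements from the paper):

```python
def egs4_units(runtime_s, reference_runtime_s):
    """Performance in EGS4 Units: the HP9000/735 (99 MHz) reference
    run defines 1.0, and halving the runtime doubles the score."""
    return reference_runtime_s / runtime_s

# Hypothetical runtimes for the same benchmark problem:
assert egs4_units(100.0, 100.0) == 1.0  # the reference machine itself
assert egs4_units(50.0, 100.0) == 2.0   # twice as fast -> 2 EGS4 Units
```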

  3. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    Full Text Available The article analyses the evolution and possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking on the basis of a generalisation of the approaches of different scientists to the definition of this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine the success of the operator in the modern market economy, as well as the mechanism of benchmarking and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and the tendencies of change in the composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  4. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University, with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. (PMID: 11477112)

  5. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel’s Xeon Phi coprocessor, NVIDIA’s Kepler GPU, and IBM’s BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm, while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the requirement of byte/flop to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state of the art FMM code “exaFMM” on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning about certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware
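The byte/flop argument above is essentially a roofline-model statement: a kernel is compute bound only if its arithmetic intensity exceeds the machine balance. A small sketch, with peak and bandwidth figures chosen hypothetically so that the machine balance is the 0.2 byte/flop cited in the abstract:

```python
def attainable_gflops(ai_flop_per_byte, peak_gflops, bw_gbytes):
    """Roofline model: attainable performance is the lower of peak
    compute and memory bandwidth times arithmetic intensity."""
    return min(peak_gflops, bw_gbytes * ai_flop_per_byte)

# Hypothetical machine with a 0.2 byte/flop balance:
peak, bw = 1000.0, 200.0  # Gflop/s, GB/s
# A kernel at the machine balance (AI = 5 flop/byte) just reaches peak:
assert attainable_gflops(5.0, peak, bw) == peak
# An FMM-like kernel needing ~0.01 byte/flop (AI = 100) is compute bound:
assert attainable_gflops(100.0, peak, bw) == peak
# A stream-like kernel (AI = 0.25 flop/byte) is bandwidth bound:
assert attainable_gflops(0.25, peak, bw) == 50.0
```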

  6. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  7. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
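The first-tier screening rule described above (exceed the NOAEL-based benchmark, retain as a COPC; otherwise exclude) can be sketched as follows. The chemicals, concentrations and benchmark values are hypothetical examples, not values from the report:

```python
def screen_copcs(measured, benchmarks):
    """First-tier ecological screen: retain a contaminant as a COPC
    when its measured media concentration exceeds its NOAEL-based
    toxicological benchmark; otherwise exclude it from further study."""
    return sorted(
        chem for chem, conc in measured.items()
        if chem in benchmarks and conc > benchmarks[chem]
    )

# Hypothetical water concentrations (mg/L) vs. hypothetical benchmarks:
measured = {"Cd": 0.004, "Pb": 0.08, "Zn": 0.02}
benchmarks = {"Cd": 0.001, "Pb": 0.10, "Zn": 0.05}
print(screen_copcs(measured, benchmarks))  # -> ['Cd']
```

Only Cd exceeds its benchmark here, so only Cd would be carried into the baseline (weight-of-evidence) assessment.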

  8. Using a multifaceted quality improvement initiative to reverse the rising trend of cesarean births.

    Science.gov (United States)

    Ogunyemi, Dotun; McGlynn, Sara; Ronk, Anne; Knudsen, Patricia; Andrews-Johnson, Tonyie; Raczkiewicz, Angeline; Jovanovski, Andrew; Kaur, Sangeeta; Dykowski, Mark; Redman, Mark; Bahado-Singh, Ray

    2018-03-01

    National efforts exist to safely reduce the rate of cesarean delivery, a major source of increased morbidity and healthcare costs. This is a report of a quality improvement study targeting reduction of primary cesarean deliveries. From March 2014 to March 2016, interventions included a nested case-control review of local risk factors, provider and patient education, multidisciplinary reviews based on published guidelines with feedback, provider report cards, commitment to labor duration guidelines, and a focus on natural labor. Primary outcomes were the total primary singleton vertex and the nulliparous term singleton vertex (NTSV) cesarean delivery rates. Secondary outcome measures were postpartum hemorrhage, chorioamnionitis, perineal laceration, operative delivery, neonatal intensive care unit (NICU) admission, stillbirth, and neonatal mortality. Statistical process control charts identified significant temporal trends. Control chart analysis demonstrated that the institutional cesarean delivery rate was due to culture and not "outlier" obstetricians. The primary singleton vertex cesarean rate decreased from 23.4% to 14.1% and the NTSV rate decreased from 34.5% to 19.2% (both statistically significant), and the reduction in cesarean deliveries was achieved without increasing maternal or perinatal morbidity.
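The statistical process control analysis mentioned above typically uses a p-chart for monthly cesarean proportions: a point outside the three-sigma limits signals a special-cause shift rather than common-cause ("culture") variation. A minimal sketch; the monthly volume of 120 births is a made-up figure:

```python
import math

def p_chart_limits(p_bar, n):
    """Three-sigma control limits for a proportion (p-chart) with
    subgroup size n, clamped to the valid [0, 1] range."""
    sigma = math.sqrt(p_bar * (1.0 - p_bar) / n)
    return max(0.0, p_bar - 3.0 * sigma), min(1.0, p_bar + 3.0 * sigma)

# Hypothetical: baseline NTSV rate 34.5% with 120 eligible births/month.
lcl, ucl = p_chart_limits(0.345, 120)
# A month at 19.2% falls below the lower limit: a real shift, not noise.
print(0.192 < lcl)  # -> True
```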

  9. The Remuneration Policy in the Budgetary Sphere of Ukraine: Main Trends, Shortcomings, Suggestions for Improvement

    Directory of Open Access Journals (Sweden)

    Tsymbaliuk Svitlana O.

    2017-09-01

    Full Text Available The aim of the work is to identify the main trends and shortcomings of the remuneration policy in the budgetary sphere of Ukraine and to develop proposals for its improvement. The main problems of the remuneration policy in the budgetary sphere are determined, including the low level and unsatisfactory differentiation of wages, intersectoral imbalances in remuneration, the rigid framework of the unified tariff scale, and the lack of an objective methodology for assessing the complexity of duties and work of employees and for forming qualification groups for labor remuneration. It was determined that the reform of the minimum wage institution led to increased leveling of the remuneration of employees of various categories and professional groups, which practically led to the destruction of the tariff remuneration system in the budgetary sphere. The necessity of reforming the policy of employee remuneration in the budgetary sphere is substantiated. Directions for improving tariff-based labor remuneration are formulated: construction of a unified tariff scale based on flexible principles, formation of qualification groups for labor remuneration to develop a unified remuneration scale, development of a methodology for evaluating positions and jobs, and ensuring an objective pay gap between two related qualification groups. With the aim of renewing the ratios for various categories and professional groups, it is important to reduce the gap between the subsistence minimum and the minimum wage. Prospects for further research should be the development of a methodology for evaluating positions and jobs for an objective comparison of the complexity of tasks and responsibilities of budgetary sector employees, substantiating qualification ratios in wages, and forming indicators to determine the basic wages of employees within the developed ranges.

  10. Benchmarking of the FENDL-3 Neutron Cross-Section Data Library for Fusion Applications

    International Nuclear Information System (INIS)

    Fischer, U.; Kondo, K.; Angelone, M.; Batistoni, P.; Villari, R.; Bohm, T.; Sawan, M.; Walker, B.; Konno, C.

    2014-03-01

    This report summarizes the benchmark analyses performed in a joint effort of ENEA (Italy), JAEA (Japan), KIT (Germany), and the University of Wisconsin (USA) with the objective to test and qualify the neutron induced general purpose FENDL-3.0 data library for fusion applications. The benchmark approach consisted of two major steps including the analysis of a simple ITER-like computational benchmark, and a series of analyses of benchmark experiments conducted previously at the 14 MeV neutron generator facilities at ENEA Frascati, Italy (FNG) and JAEA, Tokai-mura, Japan (FNS). The computational benchmark revealed a modest increase of the neutron flux levels in the deep penetration regions and a substantial increase of the gas production in steel components. The comparison to experimental results showed good agreement with no substantial differences between FENDL-3.0 and FENDL-2.1 for most of the responses analysed. There is a slight trend, however, for an increase of the fast neutron flux in the shielding experiment and a decrease in the breeder mock-up experiments. The photon flux spectra measured in the bulk shield and the tungsten experiments are significantly better reproduced with FENDL-3.0 data. In general, FENDL-3, as compared to FENDL-2.1, shows an improved performance for fusion neutronics applications. It is thus recommended to ITER to replace FENDL-2.1 as reference data library for neutronics calculation by FENDL-3.0. (author)

  11. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
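Forward selection, one of the multivariate methods benchmarked above, can be sketched generically. The scoring function here is a toy stand-in for cross-validated model performance; in the study it would be the predictive performance of a random forest on the candidate descriptor subset:

```python
def forward_selection(features, score, max_keep=None):
    """Greedy forward selection: repeatedly add the feature that most
    improves the score of the current subset (higher is better), and
    stop when no remaining candidate improves it."""
    selected, best = [], score([])
    pool = list(features)
    while pool and (max_keep is None or len(selected) < max_keep):
        gains = [(score(selected + [f]), f) for f in pool]
        top, f = max(gains)
        if top <= best:
            break  # no candidate improves the current subset
        selected.append(f)
        pool.remove(f)
        best = top
    return selected

# Toy score: two informative descriptors; the rest add a small penalty.
informative = {"x1", "x2"}
score = lambda s: sum(1.0 if f in informative else -0.1 for f in s)
print(sorted(forward_selection(["x1", "x2", "x3", "x4"], score)))
# -> ['x1', 'x2']
```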

  12. Trends in the design, construction and operation of green roofs to improve the rainwater quality. State of the art

    Directory of Open Access Journals (Sweden)

    Jair Andrés Morales Mojica

    2017-07-01

    Full Text Available Green roofs have emerged as a technology for improving water quality. This article identifies trends in the design, construction and operating conditions of green roofs whose aim is to improve the quality of rainwater. A literature review was carried out in order to collect 45 original research papers from databases such as Scopus, Science Direct, and Redalyc. From the information collected, trends were determined in the increases and reductions in the concentrations of the main water quality parameters, the seasons of the year with the best results, the types of green roofs, the types of substrate and most common components, construction trends (dimensions, inclination, materials and layers) and the vegetation used in these systems. The results show that green roofs have the ability to neutralize acid rain. Extensive-type roofs are the most commonly used, due to their characteristics of construction, functionality and low maintenance requirements.

  13. The extent of benchmarking in the South African financial sector

    Directory of Open Access Journals (Sweden)

    W Vermeulen

    2014-06-01

    Full Text Available Benchmarking is the process of identifying, understanding and adapting outstanding practices from within the organisation or from other businesses, to help improve performance. The importance of benchmarking as an enabler of business excellence has necessitated an in-depth investigation into the current state of benchmarking in South Africa. This research project highlights the fact that respondents realise the importance of benchmarking, but that various problems hinder the effective implementation of benchmarking. Based on the research findings, recommendations for achieving success are suggested.

  14. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  15. Benchmarking Swiss electricity grids

    International Nuclear Information System (INIS)

    Walti, N.O.; Weber, Ch.

    2001-01-01

    This extensive article describes a pilot benchmarking project initiated by the Swiss Association of Electricity Enterprises that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis included data on technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed, and the results of the pilot study are presented that are to form the basis of benchmarking procedures for the grid regulation authorities under Switzerland's planned electricity market law. Examples of the practical use of the benchmarking methods are given and cost-efficiency questions still open in the area of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article

  16. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  17. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings....

  18. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    .... The design of this study included two parts: (1) eleven expert panelists involved in a Delphi technique to identify and rate importance of foodservice performance measures and rate the importance of benchmarking activities, and (2...

  19. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) develop and implement programs to simulate MFTF usage of the data base

  20. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  1. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithm and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method and for evaluating the nuclear data used in the codes. (author)

  2. Benchmarking homogenization algorithms for monthly data

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.

    2013-09-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates and iii) traditional contingency skill scores. The metrics were computed both for the individual station series and for the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can now perform as well as manual ones.
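Two of the performance metrics named in this record, the centered root mean square error and the error in linear trend estimates, can be sketched in a few lines of NumPy. This is an illustrative sketch on synthetic data; the series length, the 0.5-unit break, and the function names are assumptions, not HOME's actual benchmark code.

```python
import numpy as np

def centered_rmse(homogenized, truth):
    # Centered RMSE: remove each series' mean before comparing,
    # so a constant offset is not penalized.
    h = homogenized - np.mean(homogenized)
    t = truth - np.mean(truth)
    return float(np.sqrt(np.mean((h - t) ** 2)))

def trend_error(homogenized, truth):
    # Error in the linear trend estimate (slope per time step).
    x = np.arange(len(truth))
    slope_h = np.polyfit(x, homogenized, 1)[0]
    slope_t = np.polyfit(x, truth, 1)[0]
    return float(slope_h - slope_t)

# Toy example: a true monthly series vs. one with a spurious break.
rng = np.random.default_rng(0)
truth = 0.01 * np.arange(120) + rng.normal(0, 0.1, 120)
broken = truth.copy()
broken[60:] += 0.5  # inhomogeneity: a 0.5-unit jump halfway through

rmse_broken = centered_rmse(broken, truth)
trend_broken = trend_error(broken, truth)
```

An unremoved break of this kind inflates both metrics: the centered RMSE picks up the jump itself, while the trend error shows how the break masquerades as a long-term trend.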

  3. Using trend templates in a neonatal seizure algorithm improves detection of short seizures in a foetal ovine model.

    Science.gov (United States)

    Zwanenburg, Alex; Andriessen, Peter; Jellema, Reint K; Niemarkt, Hendrik J; Wolfs, Tim G A M; Kramer, Boris W; Delhaas, Tammo

    2015-03-01

    Seizures below one minute in duration are difficult to assess correctly using seizure detection algorithms. We aimed to improve neonatal detection algorithm performance for short seizures through the use of trend templates for seizure onset and end. Bipolar EEG was recorded in a transiently asphyxiated ovine model at 0.7 gestational age, a common experimental model for studying brain development in humans of 30-34 weeks of gestation. Transient asphyxia led to electrographic seizures within 6-8 h. A total of 3159 seizures, 2386 shorter than one minute, were annotated in 1976 hour-long EEG recordings from 17 foetal lambs. To capture EEG characteristics, five features, sensitive to seizures, were calculated and used to derive trend information. Feature values and trend information were used as input for support vector machine classification and subsequently post-processed. Performance metrics, calculated after post-processing, were compared between analyses with and without employing trend information. Detector performance was assessed after five-fold cross-validation conducted ten times with random splits. The use of trend templates for seizure onset and end in a neonatal seizure detection algorithm significantly improves the correct detection of short seizures using two-channel EEG recordings from 54.3% (52.6-56.1) to 59.5% (58.5-59.9) at FDR 2.0 (median (range); p seizures by EEG monitoring at the NICU.
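The idea of deriving trend information from a seizure-sensitive feature can be sketched as follows. This is a toy illustration only: line length stands in for the paper's five features, a short least-squares slope stands in for its onset/end trend templates, and a simple argmax replaces the actual support vector machine classifier; the signal parameters are invented.

```python
import numpy as np

def line_length(window):
    # Line length: sum of absolute sample-to-sample differences, a
    # feature that rises sharply during rhythmic seizure activity.
    return float(np.sum(np.abs(np.diff(window))))

def feature_trend(values, k=3):
    # Slope of a least-squares line over the last k feature values --
    # a crude stand-in for the paper's trend templates.
    if len(values) < k:
        return 0.0
    y = np.asarray(values[-k:], dtype=float)
    return float(np.polyfit(np.arange(k), y, 1)[0])

# Toy EEG: background noise with a brief high-amplitude 3 Hz burst.
rng = np.random.default_rng(1)
fs = 256
t = np.arange(10 * fs) / fs
eeg = rng.normal(0, 1, t.size)
eeg[4 * fs:6 * fs] += 50 * np.sin(2 * np.pi * 3 * t[4 * fs:6 * fs])

# Per-second feature values and their short-term trend.
feats = [line_length(eeg[i * fs:(i + 1) * fs]) for i in range(10)]
trends = [feature_trend(feats[:i + 1]) for i in range(len(feats))]

onset_sec = int(np.argmax(trends))  # steepest rise marks the onset
```

For a seizure this short, the trend captures the rapid rise of the feature at onset even though the feature itself is elevated for only two windows, which is the intuition behind using trend templates for brief events.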

  4. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
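A scoring system that combines data–model mismatches across variables could look something like the sketch below. Everything here is a hypothetical illustration: the variable names, the exp(-NRMSE) mapping from mismatch to score, and the weights are assumptions, not the framework's actual metrics.

```python
import numpy as np

def skill_score(model, benchmark, weights):
    # Combine per-variable model-benchmark mismatches into one score
    # in (0, 1], 1 being a perfect match (illustrative scheme only).
    scores = {}
    for var in weights:
        m = np.asarray(model[var], dtype=float)
        obs = np.asarray(benchmark[var], dtype=float)
        # Normalized RMSE: mismatch relative to observed variability.
        nrmse = np.sqrt(np.mean((m - obs) ** 2)) / (np.std(obs) + 1e-12)
        scores[var] = float(np.exp(-nrmse))  # map mismatch to (0, 1]
    total = sum(w * scores[v] for v, w in weights.items())
    return total / sum(weights.values()), scores

# Hypothetical benchmark data: gross primary production and latent heat.
benchmark = {"gpp": [2.0, 3.0, 4.0], "lh": [60.0, 80.0, 100.0]}
model = {"gpp": [2.1, 3.2, 3.9], "lh": [55.0, 85.0, 120.0]}
total, per_var = skill_score(model, benchmark, {"gpp": 0.6, "lh": 0.4})
```

Such a scheme makes the two challenges named in the abstract concrete: the thresholds live in the mismatch-to-score mapping, and the weights encode which processes matter most for the evaluation.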

  5. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is need to develop an occupational safety and health information and data system in DOE, which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  6. Improving Google Flu Trends estimates for the United States through transformation.

    Directory of Open Access Journals (Sweden)

    Leah J Martin

    Google Flu Trends (GFT) uses Internet search queries in an effort to provide early warning of increases in influenza-like illness (ILI). In the United States, GFT estimates the percentage of physician visits related to ILI (%ILINet) reported by the Centers for Disease Control and Prevention (CDC). However, during the 2012-13 influenza season, GFT overestimated %ILINet by an appreciable amount and estimated the peak in incidence three weeks late. Using data from 2010-14, we investigated the relationship between GFT estimates (%GFT) and %ILINet. Based on the relationship between the relative change in %GFT and the relative change in %ILINet, we transformed %GFT estimates to better correspond with %ILINet values. In 2010-13, our transformed %GFT estimates were within ± 10% of %ILINet values for 17 of the 29 weeks that %ILINet was above the seasonal baseline value determined by the CDC; in contrast, the original %GFT estimates were within ± 10% of %ILINet values for only two of these 29 weeks. Relative to the %ILINet peak in 2012-13, the peak in our transformed %GFT estimates was 2% lower and one week later, whereas the peak in the original %GFT estimates was 74% higher and three weeks later. The same transformation improved %GFT estimates using the recalibrated 2013 GFT model in early 2013-14. Our transformed %GFT estimates can be calculated approximately one week before %ILINet values are reported by the CDC and the transformation equation was stable over the time period investigated (2010-13). We anticipate our results will facilitate future use of GFT.
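The transformation rests on linking the week-over-week relative change in %GFT to that of %ILINet. A minimal sketch of that idea follows; the function name, the proportionality constant beta, and the forward-chaining scheme are illustrative assumptions, since the paper fits the actual relationship from the 2010-14 data rather than assuming a fixed constant.

```python
def transform_gft(gft, ili_last, beta=0.5):
    """Chain %GFT relative changes into an %ILINet-scale estimate.

    gft      : %GFT values, most recent last; gft[0] aligns with the
               week of the last reported %ILINet value.
    ili_last : last reported %ILINet value.
    beta     : assumed proportionality between the two relative
               changes (illustrative; fitted from data in the paper).
    """
    estimates = [ili_last]
    for prev, cur in zip(gft, gft[1:]):
        rel_change = (cur - prev) / prev
        estimates.append(estimates[-1] * (1 + beta * rel_change))
    return estimates

# If %GFT doubles week-over-week and beta = 0.5, the transformed
# estimate rises by only 50%, damping GFT's tendency to overshoot.
estimates = transform_gft([2.0, 4.0], ili_last=1.6, beta=0.5)
```

Because the chain is anchored to the last reported %ILINet value, estimates can be produced about a week ahead of the CDC report, matching the lead time described in the abstract.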

  7. Current trends for improving the design of membrane devices for photoautotrophic biosynthesis in light-dependent microorganisms

    Directory of Open Access Journals (Sweden)

    A. A. Shevtsov

    2016-01-01

    Modern trends in improving the design of membrane devices for the photoautotrophic biosynthesis of light-dependent microorganisms aim at a significant increase in the yield of valuable products from microalgal biomass and at obtaining individual useful substances from it (preparations used in various industries and in medicine). In film-type devices, heat and mass exchange proceed efficiently as the gas contacts the culture liquid flowing as a film over a transparent, light-transmitting film-forming surface; autotrophic biosynthesis occurs only in the presence of a mixture of air with carbon dioxide. Metabolic products therefore do not accumulate, because they are continuously removed from the liquid film by the process gas, which is not typical for devices of other types. Small membrane bioreactors can increase the degree of saturation of the liquid with carbon dioxide, allow the gas concentration in the culture liquid to be varied, and ensure the cultivation of microorganisms with a specified biomass yield. To date, a significant number of ways to bring the gas into contact with the liquid have been developed (bubbling, gas-lift, mechanical stirring, jet, membrane, etc.), on the basis of which industrial bioreactors with various "stress" effects have been built. Bioreactors with mechanical stirring of the liquid are considered the most suitable for cultivation, as they allow the greatest biomass productivity. However, the mechanical mixing devices applied create chaotic, disorganized mixing in the working cavity of the bioreactor, which is insufficient for sustaining cell cultures and microorganisms. Analysis of gas-liquid interactions in film devices showed the need to create a new generation of bioreactors with intensive mass transfer without the possibility of limiting the productivity of

  8. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  9. First 5 tower WIMP-search results from the Cryogenic Dark Matter Search with improved understanding of neutron backgrounds and benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Hennings-Yeomans, Raul [Case Western Reserve Univ., Cleveland, OH (United States)

    2009-02-01

    Non-baryonic dark matter makes up one quarter of the energy density of the Universe and is concentrated in the halos of galaxies, including the Milky Way. The Weakly Interacting Massive Particle (WIMP) is a dark matter candidate with a scattering cross section with an atomic nucleus of the order of the weak interaction and a mass comparable to that of an atomic nucleus. The Cryogenic Dark Matter Search (CDMS-II) experiment, using Ge and Si cryogenic particle detectors at the Soudan Underground Laboratory, aims to directly detect nuclear recoils from WIMP interactions. This thesis presents the first 5 tower WIMP-search results from CDMS-II, an estimate of the cosmogenic neutron backgrounds expected at the Soudan Underground Laboratory, and a proposal for a new measurement of high-energy neutrons underground to benchmark the Monte Carlo simulations. Based on the non-observation of WIMPs and using standard assumptions about the galactic halo [68], the 90% C.L. upper limit of the spin-independent WIMP-nucleon cross section for the first 5 tower run is 6.6 × 10^-44 cm^2 for a 60 GeV/c^2 WIMP mass. A combined limit using all the data taken at Soudan results in an upper limit of 4.6 × 10^-44 cm^2 at 90% C.L. for a 60 GeV/c^2 WIMP mass. This new limit corresponds to a factor of ~3 improvement over any previous CDMS-II limit and, above 60 GeV/c^2, a factor of ~2 better than any other WIMP search to date. This thesis presents an estimation, based on Monte Carlo simulations, of the nuclear recoils produced by cosmic-ray muons and their secondaries (at the Soudan site) for a 5 tower Ge and Si configuration as well as for a 7 supertower array. The results of the Monte Carlo are that CDMS-II should expect 0.06 ± 0.02 +0.18/-0.02 /kg-year unvetoed single nuclear recoils in Ge for the 5 tower configuration, and 0.05 ± 0.01 +0.15/-0.02 /kg-year for the 7 supertower configuration. The systematic error is based on the available

  10. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered
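The first verification consideration named above, conservation of particles, can be illustrated with a toy one-dimensional purely absorbing slab, where the analytical answer is a simple exponential. The cross section, thickness, and cell count below are arbitrary choices; a real discrete-ordinates or Monte Carlo code tracks scattering, angle, and energy in addition to this balance.

```python
import numpy as np

# Mono-energetic pencil beam through a purely absorbing slab:
# attenuation is analytic, so both checks are easy to state.
sigma_t = 1.5   # total (here: absorption) cross section, 1/cm
width = 2.0     # slab thickness, cm
n_cells = 200

dx = width / n_cells
flux_in = 1.0
flux = flux_in
absorbed = 0.0
for _ in range(n_cells):
    # Exponential attenuation across one cell; the loss is absorption.
    out = flux * np.exp(-sigma_t * dx)
    absorbed += flux - out
    flux = out

transmitted = flux
# Particle balance: what goes in must be transmitted or absorbed.
balance = flux_in - (transmitted + absorbed)
# Analytical benchmark: uncollided transmission through the slab.
analytic = flux_in * np.exp(-sigma_t * width)
```

The same two checks, a closed particle balance and agreement with an analytical benchmark, are exactly the verification steps the record lists, just applied here to the simplest possible transport problem.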

  11. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth.

  12. Benchmarking the Netherlands. Benchmarking for growth

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity

  13. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, for radiation transport code development, and for building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper the benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC)

  14. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  15. Operating Room Efficiency before and after Entrance in a Benchmarking Program for Surgical Process Data

    DEFF Research Database (Denmark)

    Pedron, Sara; Winter, Vera; Oppel, Eva-Maria

    2017-01-01

    Operating room (OR) efficiency continues to be a high priority for hospitals. In this context the concept of benchmarking has gained increasing importance as a means to improve OR performance. The aim of this study was to investigate whether and how participation in a benchmarking and reporting...... program for surgical process data was associated with a change in OR efficiency, measured through raw utilization, turnover times, and first-case tardiness. The main analysis is based on panel data from 202 surgical departments in German hospitals, which were derived from the largest database for surgical...... the availability of reliable, timely and detailed analysis tools to support the OR management seemed to be correlated especially with an increase in the timeliness of staff members regarding first-case starts. The increasing trend in turnover time revealed the absence of effective strategies to improve this aspect...

  16. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
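The solver HPCG ranks systems on, a symmetric Gauss-Seidel preconditioned conjugate gradient, can be sketched in dense NumPy. HPCG itself is a distributed C++/MPI code operating on a 3-D sparse problem; the 1-D Poisson matrix, sizes, and tolerances below are illustrative assumptions chosen so the sketch stays self-contained.

```python
import numpy as np

def sym_gauss_seidel(A, r):
    # One symmetric Gauss-Seidel application, M^{-1} r with
    # M = (D + L) D^{-1} (D + U): forward solve, then backward solve.
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    y = np.linalg.solve(D + L, r)          # forward sweep
    return np.linalg.solve(D + U, D @ y)   # backward sweep

def pcg(A, b, tol=1e-8, maxiter=100):
    # Preconditioned conjugate gradient with the SGS preconditioner.
    x = np.zeros_like(b)
    r = b - A @ x
    z = sym_gauss_seidel(A, r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = sym_gauss_seidel(A, r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# A 1-D Poisson (tridiagonal, SPD) system stands in for HPCG's 3-D problem.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
residual = float(np.linalg.norm(b - A @ x))
```

Unlike HPL's dense factorization, this sparse iteration is dominated by memory traffic and irregular access, which is why HPCG is held to better represent how today's applications perform.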

  17. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. .It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  18. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...... not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry...

  19. RB reactor benchmark cores

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

    A selected set of the RB reactor benchmark cores is presented in this paper. The first results of validation of the well-known Monte Carlo MCNP(TM) code and adjoining neutron cross section libraries are given. They confirm the idea for the proposal of the new U-D2O criticality benchmark system and support the intention to include this system in the next edition of the recent OECD/NEA project, the International Handbook of Evaluated Criticality Safety Benchmark Experiments, in the near future. (author)

  20. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  1. Trends in patient satisfaction in Dutch university medical centers: room for improvement for all

    NARCIS (Netherlands)

    Kleefstra, Sophia M.; Zandbelt, Linda C.; de Haes, Hanneke J. C. J. M.; Kool, Rudolf B.

    2015-01-01

    Results of patient satisfaction research provide hospitals with areas for quality improvement. Although it may take several years to achieve such improvement, not all hospitals structurally analyze changes in patient satisfaction over time. Consequently, they lack information from patients' perspective

  2. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation with a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB parallel NAND Flash disk array, the Fusion-io. The Fusion system specs are as follows
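A per-process I/O profile of the kind iotrace captures can be approximated, very coarsely, by timing block reads from user space. This is a sketch only: the real tool instruments Linux I/O at much finer granularity, and the file size, block size, and function name here are arbitrary assumptions.

```python
import os
import tempfile
import time

def profile_read(path, block_size=1 << 20):
    # Read a file in fixed-size blocks and report bytes moved and
    # wall time -- a crude, user-space stand-in for an I/O profile.
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    return total, time.perf_counter() - start

# Write a 4 MiB scratch file and profile reading it back.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\0" * (4 << 20))
nbytes, seconds = profile_read(path)
os.remove(path)
```

Repeating such a measurement against the three storage scenarios in the report (NFS, local disk, Flash array) is the basic experiment shape, even though the real benchmarks exercise far more complex access patterns.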

  3. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In the article we will briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We will then explain in more detail what benchmarking is, taking four different applications of benchmarking as a starting point. The regulation of utility companies will be discussed, after which...

  4. Cloud benchmarking for performance

    OpenAIRE

    Varghese, Blesson; Akgun, Ozgur; Miguel, Ian; Thai, Long; Barker, Adam

    2014-01-01

    Date of Acceptance: 20/09/2014 How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The above question is addressed by proposing a six-step benchmarking methodology in which a user provides a set of four weights that indicate how important each of the following groups is: memory, processor, computa...

  5. Benchmark results in radiative transfer

    International Nuclear Information System (INIS)

    Garcia, R.D.M.; Siewert, C.E.

    1986-02-01

    Several aspects of the F_N method are reported, and the method is used to solve accurately some benchmark problems in radiative transfer in the field of atmospheric physics. The method was modified to solve cases of pure scattering, and an improved process was developed for computing the radiation intensity. An algorithm for computing several quantities used in the F_N method was developed. An improved scheme to evaluate certain integrals relevant to the method is given, along with a two-term recursion relation that has proved useful for the numerical evaluation of matrix elements basic to the method. The methods used to solve the encountered linear algebraic equations are discussed, and the numerical results are evaluated. (M.C.K.) [pt

  6. Benchmarking van verkeersveiligheid : een inventarisatie en aanbevelingen voor de opzet van verkeersveiligheidsbenchmarks in Nederland.

    NARCIS (Netherlands)

    Aarts, L.T. & Bax, C.A.

    2014-01-01

    Benchmarking road safety; Stocktaking and recommendations for the development of road safety benchmarks in The Netherlands. Road safety policy has been decentralized in the Netherlands, giving regional and local governments a greater responsibility for road safety improvement in their jurisdiction.

  7. Advocacy for Benchmarking in the Nigerian Institute of Advanced ...

    African Journals Online (AJOL)

    The paper gave a general overview of benchmarking and its novel application to library practice with a view to achieving organizational change and improved performance. Based on the literature, the paper took an analytic, descriptive and qualitative overview of benchmarking practices vis-à-vis services in law libraries generally ...

  8. Benchmarking in health care: using the Internet to identify resources.

    Science.gov (United States)

    Lingle, V A

    1996-01-01

    Benchmarking is a quality improvement tool that is increasingly being applied to the health care field and to the libraries within that field. Using mostly resources accessible at no charge through the Internet, a collection of information was compiled on benchmarking and its applications. Sources could be identified in several formats, including books, journals and articles, multimedia materials, and organizations.

  9. Choice Complexity, Benchmarks and Costly Information

    NARCIS (Netherlands)

    Harms, Job; Rosenkranz, S.; Sanders, M.W.J.L.

    In this study we investigate how two types of information interventions, providing a benchmark and providing costly information on option ranking, can improve decision-making in complex choices. In our experiment subjects made a series of incentivized choices between four hypothetical financial

  10. Pynamic: the Python Dynamic Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.
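
    The record above describes how import-heavy Python applications stress a system's dynamic loader. As a rough, hypothetical illustration of the pattern Pynamic emulates (this sketch is not Pynamic itself, and the module count is arbitrary), the following generates many trivial modules and times their import, the analogue of an application touching hundreds of DLLs at startup:

```python
# Hypothetical sketch: emulate the import-heavy startup pattern that
# DLL-stressing benchmarks such as Pynamic target, using plain Python modules.
import importlib
import os
import sys
import tempfile
import time

def time_many_imports(n_modules=200):
    """Generate n_modules trivial modules and measure total import time."""
    tmpdir = tempfile.mkdtemp()
    for i in range(n_modules):
        with open(os.path.join(tmpdir, f"mod_{i}.py"), "w") as f:
            f.write(f"VALUE = {i}\n")
    sys.path.insert(0, tmpdir)
    start = time.perf_counter()
    mods = [importlib.import_module(f"mod_{i}") for i in range(n_modules)]
    elapsed = time.perf_counter() - start
    sys.path.remove(tmpdir)
    # Checksum proves every module was actually loaded, not skipped.
    return elapsed, sum(m.VALUE for m in mods)

elapsed, checksum = time_many_imports()
print(f"imported 200 modules in {elapsed:.3f}s (checksum={checksum})")
```

    Real DLL stress involves native shared libraries and the OS loader rather than Python's import machinery, but the measurement structure (generate many units, load them all, time the aggregate) is the same.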

  11. Clinical profile and improving mortality trend of scrub typhus in South India.

    Science.gov (United States)

    Varghese, George M; Trowbridge, Paul; Janardhanan, Jeshina; Thomas, Kurien; Peter, John V; Mathews, Prasad; Abraham, Ooriapadickal C; Kavitha, M L

    2014-06-01

    Scrub typhus, a bacterial zoonosis caused by Orientia tsutsugamushi, may cause multiorgan dysfunction syndrome (MODS) and is associated with significant mortality. This study was undertaken to document the clinical and laboratory manifestations and complications and to study time trends and factors associated with mortality in patients with scrub typhus infection. This retrospective study, done at a university teaching hospital, included 623 patients admitted between 2005 and 2010 with scrub typhus. The diagnosis was established by a positive IgM ELISA and/or pathognomonic eschar with PCR confirmation where feasible. The clinical and laboratory profile, course in hospital, and outcome were documented. Factors associated with mortality were analyzed using multivariate logistic regression analysis. The most common presenting symptoms were fever (100%), nausea/vomiting (54%), shortness of breath (49%), headache (46%), cough (38%), and altered sensorium (26%). An eschar was present in 43.5% of patients. Common laboratory findings included elevated transaminases (87%), thrombocytopenia (79%), and leukocytosis (46%). MODS was seen in 34% of patients. The overall case-fatality rate was 9.0%. Features of acute lung injury were observed in 33.7%, and 29.5% required ventilatory support. On multivariate analysis, shock requiring vasoactive agents (relative risk (RR) 10.5, 95% confidence interval (CI) 4.2-25.7, p<0.001), central nervous system (CNS) dysfunction (RR 5.1, 95% CI 2.4-10.7, p<0.001), and renal failure (RR 3.6, 95% CI 1.7-7.5, p=0.001) were independent predictors of mortality. Over 4 years, a decreasing trend was observed in the mortality rate. Scrub typhus can manifest with potentially life-threatening complications such as lung injury, shock, and meningoencephalitis. MODS occurred in a third of our patients. The overall case-fatality rate was 9%, with shock, renal failure, and CNS dysfunction associated with higher mortality. Copyright © 2014 The Authors. Published by

  12. Hospital benchmarking: are U.S. eye hospitals ready?

    Science.gov (United States)

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added

  13. Cleanroom energy benchmarking in high-tech and biotech industries

    International Nuclear Information System (INIS)

    Tschudi, William; Benschine, Kathleen; Fok, Stephen; Rumsey, Peter

    2001-01-01

    Cleanrooms, critical to a wide range of industries, universities, and government facilities, are extremely energy intensive. Consequently, energy represents a significant operating cost for these facilities. Improving energy efficiency in cleanrooms will yield dramatic productivity improvement. But more importantly to the industries which rely on cleanrooms, base load reduction will also improve reliability. The number of cleanrooms in the US is growing, and cleanroom environmental systems' energy use is increasing due to increases in total square footage and trends toward more energy-intensive, higher-cleanliness applications. In California, many industries important to the State's economy rely on cleanrooms; these industries operate over 150 cleanrooms with a total of 4.2 million sq. ft. (McIlvaine). Energy-intensive high-tech buildings offer an attractive opportunity for large base-load energy reduction. Opportunities for energy efficiency improvement exist in virtually all operating cleanrooms as well as in new designs. To understand the opportunities and their potential impact, Pacific Gas and Electric Company sponsored a project to benchmark energy use in cleanrooms in the electronics (high-tech) and biotechnology industries. Both of these industries are heavily dependent on energy-intensive cleanroom environments for research and manufacturing. In California these two industries account for approximately 3.6 million sq. ft. of cleanroom (McIlvaine, 1996) and 4349 GWh/yr (Sartor et al. 1999). Little comparative energy information on cleanroom environmental systems was previously available. Benchmarking energy use allows direct comparisons, leading to identification of best practices and efficiency innovations, and highlighting previously masked design or operational problems

  14. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation, (a) benchmark value, (b) benchmark estimate, and (c) benchmark effect, are described and illustrated with examples. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with the established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
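
    The single-mediator analysis that the abstract evaluates can be sketched in a few lines. The following Python example uses simulated data with invented effect sizes, purely for illustration: it estimates path a (mediator on predictor), path b (outcome on mediator, controlling for the predictor), and the indirect effect a*b.

```python
# Illustrative single-mediator model: X -> M (path a), M -> Y given X (path b).
# All variable names and true coefficients below are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                       # predictor (e.g. imagery instruction)
m = 0.5 * x + rng.normal(size=n)             # mediator, true a = 0.5
y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # outcome, true b = 0.4

def ols(regressors, outcome):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(outcome)), *regressors])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta

a = ols([x], m)[1]        # slope of M on X
b = ols([m, x], y)[1]     # slope of Y on M, controlling for X
indirect = a * b          # mediated (indirect) effect, ~0.2 in this simulation
print(f"a={a:.2f}, b={b:.2f}, indirect={indirect:.2f}")
```

    In a benchmark-effect validation, one would check that the sign and rough magnitude of `indirect` agree with the established substantive effect (here, that imagery increases recall via the mediator).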

  15. Benchmarking HIV health care

    DEFF Research Database (Denmark)

    Podlekareva, Daria; Reekie, Joanne; Mocroft, Amanda

    2012-01-01

    ABSTRACT: BACKGROUND: State-of-the-art care involving the utilisation of multiple health care interventions is the basis for an optimal long-term clinical prognosis for HIV-patients. We evaluated health care for HIV-patients based on four key indicators. METHODS: Four indicators of health care we...... document pronounced regional differences in adherence to guidelines and can help to identify gaps and direct target interventions. It may serve as a tool for assessment and benchmarking the clinical management of HIV-patients in any setting worldwide....

  16. Benchmarking Cloud Storage Systems

    OpenAIRE

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  17. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    An infrastructure is emerging that enables the positioning of populations of on-line, mobile service users. In step with this, research in the management of moving objects has attracted substantial attention. In particular, quite a few proposals now exist for the indexing of moving objects...... takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As proof of concepts of the benchmark, the paper covers the application...

  18. ENVIRONMENTAL BENCHMARKING FOR LOCAL AUTHORITIES

    Directory of Open Access Journals (Sweden)

    Marinela GHEREŞ

    2010-01-01

    Full Text Available This paper is an attempt to clarify and present the many definitions of benchmarking. It also attempts to explain the basic steps of benchmarking, to show how this tool can be applied by local authorities, as well as to discuss its potential benefits and limitations. It is our strong belief that if cities use indicators and progressively introduce targets to improve management and related urban life quality, and to measure progress towards more sustainable development, we will also create a new type of competition among cities and foster innovation. This is seen to be important because local authorities' actions play a vital role in responding to the challenges of enhancing the state of the environment, not only in policy-making but also in the provision of services and in the planning process. Local communities therefore need to be aware of their own sustainability performance levels and should be able to engage in exchange of best practices to respond effectively to the eco-economic challenges of the century.

  19. Improving Control System Security through the Evaluation of Current Trends in Computer Security Research

    Energy Technology Data Exchange (ETDEWEB)

    Rolston

    2005-03-01

    At present, control system security efforts are primarily technical and reactive in nature. What has been overlooked is the need for proactive efforts, focused on the IT security research community from which new threats might emerge. Evaluating cutting edge IT security research and how it is evolving can provide defenders with valuable information regarding what new threats and tools they can anticipate in the future. Only known attack methodologies can be blocked, and there is a gap between what is known to the general security community and what is being done by cutting edge researchers --both those trying to protect systems and those trying to compromise them. The best security researchers communicate with others in their field; they know what cutting edge research is being done; what software can be penetrated via this research; and what new attack techniques and methodologies are being circulated in the black hat community. Standardization of control system applications, operating systems, and networking protocols is occurring at a rapid rate, following a path similar to the standardization of modern IT networks. Many attack methodologies used on IT systems can be ported over to the control system environment with little difficulty. It is extremely important to take advantage of the lag time between new research, its use on traditional IT networks, and the time it takes to port the research over for use on a control system network. Analyzing nascent trends in IT security and determining their applicability to control system networks provides significant information regarding defense mechanisms needed to secure critical infrastructure more effectively. This work provides the critical infrastructure community with a better understanding of how new attacks might be launched, what layers of defense will be needed to deter them, how the attacks could be detected, and how their impact could be limited.

  20. Benchmarking multimedia performance

    Science.gov (United States)

    Zandi, Ahmad; Sudharsanan, Subramania I.

    1998-03-01

    With the introduction of faster processors and special instruction sets tailored to multimedia, a number of exciting applications are now feasible on the desktop. Among these is DVD playback, consisting, among other things, of MPEG-2 video and Dolby Digital audio or MPEG-2 audio. Other multimedia applications such as video conferencing and speech recognition are also becoming popular on computer systems. In view of this tremendous interest in multimedia, a group of major computer companies has formed the Multimedia Benchmarks Committee as part of the Standard Performance Evaluation Corp. to address the performance issues of multimedia applications. The approach is multi-tiered, with three tiers of fidelity from minimal to fully compliant. In each case the fidelity of the bitstream reconstruction as well as the quality of the video or audio output are measured, and the system is classified accordingly. At the next step the performance of the system is measured. Many multimedia applications, such as DVD playback, need to run at a specific rate; in this case the measurement of the excess processing power makes all the difference. All these factors make a system-level, application-based multimedia benchmark very challenging. Several ideas and methodologies for each aspect of the problem will be presented and analyzed.

  1. Core Benchmarks Descriptions

    International Nuclear Information System (INIS)

    Pavlovichev, A.M.

    2001-01-01

    Current regulations require that the design of new fuel cycles for nuclear power installations be accompanied by a calculational justification performed with certified computer codes. This guarantees that the calculational results will remain within the limits of the declared uncertainties indicated in the certificate issued for the corresponding computer code by Gosatomnadzor of the Russian Federation (GAN). A formal justification of declared uncertainties is the comparison of calculational results obtained by a commercial code with the results of experiments, or with calculational tests computed, to a defined uncertainty, by certified precision codes of the MCU type or others. The current level of international cooperation allows enlarging the bank of experimental and calculational benchmarks acceptable for certification of commercial codes used to design fuel loadings with MOX fuel. In particular, work is practically finished on forming a list of calculational benchmarks for certification of the TVS-M code as applied to MOX fuel assembly calculations. The results of these activities are presented

  2. A benchmarking study

    Directory of Open Access Journals (Sweden)

    H. Groessing

    2015-02-01

    Full Text Available A benchmark study for permeability measurement is presented. Past studies by other research groups, which focused on the reproducibility of 1D permeability measurements, showed high standard deviations of the obtained permeability values (25%), even though a defined test rig with required specifications was used. Within this study, the reproducibility of capacitive in-plane permeability testing system measurements was benchmarked by comparing results of two research sites using this technology. The reproducibility was compared using a glass fibre woven textile and a carbon fibre non-crimp fabric (NCF). These two material types were taken into consideration due to the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. In order to determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents, with five repetitions each. It was found that the stability and reproducibility of the presented in-plane permeability measurement system are very good in the case of the glass fibre woven textiles. This is true for the comparison of the repetition measurements as well as for the comparison between the two different permeameters. These positive results were confirmed by a comparison to permeability values of the same textile obtained with an older-generation permeameter applying the same measurement technology. It was also shown that a correct determination of the grammage and the material density is crucial for correct correlation of measured permeability values and fibre volume contents.

  3. Benchmarking Organisational Capability using The 20 Keys

    Directory of Open Access Journals (Sweden)

    Dino Petrarolo

    2012-01-01

    Full Text Available Organisations have over the years implemented many improvement initiatives, many of which were applied individually with no real, lasting improvement. Approaches such as quality control, team activities, setup reduction and many more seldom changed the fundamental constitution or capability of an organisation. Leading companies in the world have come to realise that an integrated approach is required which focuses on improving more than one factor at the same time - by recognising the importance of synergy between different improvement efforts and the need for commitment at all levels of the company to achieve total system-wide improvement.

    The 20 Keys approach offers a way to look at the strength of organisations and to systematically improve it, one step at a time, by focusing on 20 different but interrelated aspects. One feature of the approach is the benchmarking system, which forms the main focus of this paper. The benchmarking system is introduced as an important part of the 20 Keys philosophy in measuring organisational strength. Benchmarking results from selected South African companies are provided, as well as one company's results achieved through the adoption of the 20 Keys philosophy.

  4. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to...... contribution to the discussions within the EU-sponsored BEST Thematic Network (Benchmarking European Sustainable Transport) which ran from 2000 to 2003....

  5. Benchmarking in Czech Higher Education

    OpenAIRE

    Plaček Michal; Ochrana František; Půček Milan

    2015-01-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Base...

  6. MOx Depletion Calculation Benchmark

    International Nuclear Information System (INIS)

    San Felice, Laurence; Eschbach, Romain; Dewi Syarifah, Ratna; Maryam, Seif-Eddine; Hesketh, Kevin

    2016-01-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study the reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling of these phenomena in present and future nuclear power systems. The WPRS has different expert groups to cover a wide range of scientific issues in these fields. The Expert Group on Reactor Physics and Advanced Nuclear Systems (EGRPANS) was created in 2011 to perform specific tasks associated with reactor physics aspects of present and future nuclear power systems. EGRPANS provides expert advice to the WPRS and the nuclear community on the development needs (data and methods, validation experiments, scenario studies) for different reactor systems and also provides specific technical information regarding: core reactivity characteristics, including fuel depletion effects; core power/flux distributions; core dynamics and reactivity control. In 2013 EGRPANS published a report that investigated fuel depletion effects in a Pressurised Water Reactor (PWR). This was entitled 'International Comparison of a Depletion Calculation Benchmark on Fuel Cycle Issues', NEA/NSC/DOC(2013), and documented a benchmark exercise for UO2 fuel rods. This report documents a complementary benchmark exercise that focused on PuO2/UO2 Mixed Oxide (MOX) fuel rods. The results are especially relevant to the back-end of the fuel cycle, including irradiated fuel transport, reprocessing, interim storage and waste repository. Saint-Laurent B1 (SLB1) was the first French reactor to use MOX assemblies. SLB1 is a 900 MWe PWR with 30% MOX fuel loading. The standard MOX assemblies used in the Saint-Laurent B1 reactor include three zones with different plutonium enrichments: high Pu content (5.64%) in the centre zone, medium Pu content (4.42%) in the intermediate zone and low Pu content (2.91%) in the peripheral zone

  7. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in cleanrooms. This guide is primarily intended for personnel who have responsibility for managing energy use in existing cleanroom facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, cleanroom planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
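
    The whole-building metrics these self-benchmarking guides describe reduce, at their simplest, to normalized intensities compared against peer benchmarks. A minimal sketch of that computation follows; the facility numbers and the peer-median threshold are made up for illustration and are not drawn from the Labs21 database:

```python
# Hypothetical example of a whole-building benchmarking metric:
# annual energy use intensity (EUI), compared against a peer benchmark.

def energy_use_intensity(annual_kwh: float, floor_area_sqft: float) -> float:
    """Site EUI in kWh per square foot per year."""
    return annual_kwh / floor_area_sqft

# Invented facility data: 2.4 GWh/yr over 60,000 sq. ft.
facility_eui = energy_use_intensity(annual_kwh=2_400_000, floor_area_sqft=60_000)
peer_benchmark = 35.0  # hypothetical peer-group median, kWh/sqft-yr

print(f"EUI = {facility_eui:.1f} kWh/sqft-yr; "
      f"{'above' if facility_eui > peer_benchmark else 'at or below'} peer median")
```

    The guides extend this pattern to system-level metrics (e.g. fan power per unit airflow), each paired with benchmark values and the corrective actions a high reading suggests.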

  8. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  9. Shielding benchmark test

    International Nuclear Information System (INIS)

    Kawai, Masayoshi

    1984-01-01

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments on neutron transmission through an iron block, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made with RADHEAT-V4, a shielding analysis code system developed at JAERI. The calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The D-T neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily, using the revised JENDL data for fusion neutronics calculations. (author)

  10. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  11. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1987-01-01

This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it was concluded that the pore water can significantly influence the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths, beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical of those encountered at nuclear plant sites. These data were generated by using a modified version of the SLAM code, which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described.

  12. A Decade of Never-smokers Among Lung Cancer Patients-Increasing Trend and Improved Survival.

    Science.gov (United States)

    Toh, Chee-Keong; Ong, Whee-Sze; Lim, Wan-Teck; Tan, Daniel Shao-Weng; Ng, Quan-Sing; Kanesvaran, Ravindran; Seow, Wei-Jie; Ang, Mei-Kim; Tan, Eng-Huat

    2018-03-17

It is not known whether clinicopathologic characteristics, treatment, and survival of never-smokers among lung cancer incident cases have changed over time. We assessed the trend and overall survival (OS) of these patients within our institution during a 10-year period. We reviewed 2 cohorts of non-small-cell lung cancer patients with a diagnosis from 1999 to 2002 and from 2008 to 2011. The patient characteristics and OS were compared by smoking status within each cohort and between the 2 cohorts over time. Of the 992 patients in the 1999-2002 cohort and the 1318 patients in the 2008-2011 cohort, 902 and 1272 had a known smoking status, respectively. The proportion of never-smokers increased from 31% in 1999-2002 to 48% in 2008-2011, while the characteristics of never-, former-, and current-smokers have remained largely constant over time. A greater proportion of never-smokers had Eastern Cooperative Oncology Group performance status 0 to 1 and adenocarcinoma. The median OS increased from 15.5 months in 1999-2002 to 24.9 months in 2008-2011 (P = .001) for never-smokers, 12.3 to 15.9 months (P = .150) for former-smokers, and 10.5 to 13.9 months (P = .011) for current-smokers. The larger survival improvement among never-smokers was likely accounted for by the larger increase in never-smokers who were treated with tyrosine kinase inhibitors and pemetrexed over time. We found an increasing trend of never-smokers among incident lung cancer cases and improved survival for these patients during a 10-year period. The documentation of smoking status in any national cancer registry is vital to estimate the true incidence of lung cancer among never-smokers over time.

  13. A concept paper: using the outcomes of common surgical conditions as quality metrics to benchmark district surgical services in South Africa as part of a systematic quality improvement programme.

    Science.gov (United States)

    Clarke, Damian L; Kong, Victor Y; Handley, Jonathan; Aldous, Colleen

    2013-07-31

The fourth, fifth and sixth Millennium Development Goals relate directly to improving global healthcare and health outcomes. The focus is to improve global health outcomes by reducing maternal and childhood mortality and the burden of infectious diseases such as HIV/AIDS, tuberculosis and malaria. Specific targets and time frames have been set for these diseases. There is, however, no specific mention of surgically treated diseases in these goals, reflecting a bias that is slowly changing with emerging consensus that surgical care is an integral part of primary healthcare systems in the developing world. The disparities between the developed and developing world in terms of wealth and social indicators are reflected in disparities in access to surgical care. Health administrators must develop plans and strategies to reduce these disparities. However, any strategic plan that addresses deficits in healthcare must have a system of metrics, which benchmark the current quality of care so that specific improvement targets may be set. This concept paper outlines the role of surgical services in a primary healthcare system, highlights the ongoing disparities in access to surgical care and outcomes of surgical care, discusses the importance of a systems-based approach to healthcare and quality improvement, and reviews the current state of surgical care at district hospitals in South Africa. Finally, it proposes that the results from a recently published study on acute appendicitis, as well as data from a number of other common surgical conditions, can provide measurable outcomes across a healthcare system and so act as an indicator for judging improvements in surgical care. This would provide a framework for the introduction of collection of these outcomes as a routine epidemiological health policy tool.

  14. Review for session K - benchmarks

    International Nuclear Information System (INIS)

    McCracken, A.K.

    1980-01-01

    Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of Benchmarks and will be considered individually

  15. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  16. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a bench-marked series should reproduce the movement and signs in the original series. We show that the widely used variants of Denton (1971) method and the growth
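The truncated abstract refers to the widely used Denton (1971) benchmarking method, which adjusts a high-frequency series to match low-frequency totals while preserving its movement. As context, here is a minimal sketch of the additive first-difference Denton variant; it is an illustrative implementation of the classical method, not the entropy-based approach the paper proposes, and all numbers are invented:

```python
import numpy as np

def denton_additive(p, a, freq=4):
    """Additive first-difference Denton benchmarking: adjust a preliminary
    high-frequency series p so that each block of `freq` values sums to the
    corresponding low-frequency benchmark in a, while keeping the
    period-to-period movement of p as intact as possible."""
    p, a = np.asarray(p, float), np.asarray(a, float)
    T, n = len(p), len(a)
    assert T == freq * n
    C = np.kron(np.eye(n), np.ones(freq))   # aggregation constraints C x = a
    D = np.diff(np.eye(T), axis=0)          # first-difference operator
    # Minimize ||D (x - p)||^2 subject to C x = a via the KKT linear system
    K = np.block([[D.T @ D, C.T], [C, np.zeros((n, n))]])
    rhs = np.concatenate([D.T @ D @ p, a])
    return np.linalg.solve(K, rhs)[:T]

quarters = np.full(8, 10.0)        # flat preliminary quarterly series
annual = np.array([44.0, 40.0])    # annual totals the series must hit
bench = denton_additive(quarters, annual)
```

The benchmarked series distributes each year's discrepancy smoothly across quarters instead of dumping it into a single period, which is exactly the "movement preservation" principle the abstract invokes.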

  17. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  18. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods in EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...

  19. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  20. Policies for agricultural nitrogen management—trends, challenges and prospects for improved efficiency in Denmark

    International Nuclear Information System (INIS)

    Dalgaard, Tommy; Hutchings, Nicholas J; Olesen, Jørgen E; Sillebak Kristensen, Ib; Graversgaard, Morten; Hansen, Birgitte; Hasler, Berit; Hertel, Ole; Termansen, Mette; Jacobsen, Brian H; Stoumann Jensen, Lars; Schjørring, Jan K; Kronvang, Brian; Vejre, Henrik

    2014-01-01

With more than 60% of the land farmed, with vulnerable freshwater and marine environments, and with one of the most intensive, export-oriented livestock sectors in the world, the nitrogen (N) pollution pressure from Danish agriculture is severe. Consequently, a series of policy action plans have been implemented since the mid-1980s with significant effects on the surplus, efficiency and environmental loadings of N. This paper reviews the policies and actions taken and their ability to mitigate effects of reactive N (Nr) while maintaining agricultural production. In summary, the average N-surplus has been reduced from approximately 170 kg N ha⁻¹ yr⁻¹ to below 100 kg N ha⁻¹ yr⁻¹ during the past 30 yrs, while the overall N-efficiency for the agricultural sector (crop + livestock farming) has increased from around 20-30% to 40-45%, the N-leaching from the field root zone has been halved, and N losses to the aquatic and atmospheric environment have been significantly reduced. This has been achieved through a combination of approaches and measures (ranging from command-and-control legislation, through market-based regulation and governmental expenditure, to information and voluntary action), with specific measures addressing the whole N cascade, in order to improve the quality of ground- and surface waters, and to reduce the deposition to terrestrial natural ecosystems. However, there is still a major challenge in complying with the EU Water Framework and Habitats Directives, calling for new approaches, measures and technologies to mitigate agricultural N losses and control N flows. (paper)
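The surplus and efficiency figures quoted above are simple farm-gate nitrogen-balance quantities. A back-of-envelope sketch, with illustrative numbers chosen to fall in the ranges the abstract reports (they are not the Danish national statistics):

```python
# Illustrative national-average N flows, in kg N per ha per year.
n_inputs = {"mineral_fertilizer": 80, "imported_feed": 60,
            "biological_fixation": 15, "atmospheric_deposition": 15}
n_outputs = {"crop_products": 45, "animal_products": 28}

# N-surplus: everything brought onto the land that does not leave in products.
surplus = sum(n_inputs.values()) - sum(n_outputs.values())     # 97 kg N/ha/yr

# N-efficiency: fraction of N inputs recovered in agricultural products.
efficiency = sum(n_outputs.values()) / sum(n_inputs.values())  # about 0.43
```

With these inputs the surplus lands just below 100 kg N ha⁻¹ yr⁻¹ and the efficiency at roughly 43%, matching the end-of-period values the review cites.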

  1. Reactor calculation benchmark PCA blind test results

    International Nuclear Information System (INIS)

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables

  2. Reactor calculation benchmark PCA blind test results

    Energy Technology Data Exchange (ETDEWEB)

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables.

  3. Benchmarking school nursing practice: the North West Regional Benchmarking Group

    OpenAIRE

    Littler, Nadine; Mullen, Margaret; Beckett, Helen; Freshney, Alice; Pinder, Lynn

    2016-01-01

    It is essential that the quality of care is reviewed regularly through robust processes such as benchmarking to ensure all outcomes and resources are evidence-based so that children and young people’s needs are met effectively. This article provides an example of the use of benchmarking in school nursing practice. Benchmarking has been defined as a process for finding, adapting and applying best practices (Camp, 1994). This concept was first adopted in the 1970s ‘from industry where it was us...

  4. Citation trend and suggestions for improvement of impact factor of Journal of Korean Therapeutic Radiology and Oncology

    International Nuclear Information System (INIS)

    Kim, Seong Hwan; Hwang, Seong Su; Ahn, Myeong Im; Jeong, So Na

    2006-01-01

To analyze the recent citation trend and to find a way to improve the impact factor (IF) of the Journal of Korean Therapeutic Radiology and Oncology (JKSTRO) by analyzing Korean Medical Citation Index (KoMCI) citation data of JKSTRO and comparing it with the mean citation data of all journals enlisted in KoMCI (KoMCI journals) during 2000-2005. All citation data of the entire set of journals enlisted in KoMCI and of JKSTRO from 2000 to 2005 were obtained from KoMCI. The trends in the total and annual number of published articles and reference citations, total citations and self-citations per paper, IF and impact factor excluding self-citations (ZIF) were described and compared for both KoMCI journals and JKSTRO. The annual number of published articles decreased over the 6 years for both KoMCI journals and JKSTRO (32% and 38% reduction rates). The number of Korean journal references per article was 1.6 papers for JKSTRO compared to 2.0 papers for KoMCI journals. The percentage of Korean references/total references increased from 5.0% in 2000 to 7.7% in 2005 for JKSTRO and from 8.5% in 2000 to 10.1% for KoMCI journals. The number of total citations received per paper for JKSTRO (average 1.333) was smaller than that of KoMCI journals (average 1.694); there was an increase of 67% in 2005 compared to 2000. The percentage of self-citations/total citations for JKSTRO (average 72%) was slightly higher than that of KoMCI journals (average 61%). The IF of JKSTRO gradually improved: 0.144, 0.125, 0.088, 0.107, 0.187 and 0.203 in 2000-2005, respectively. However, the ZIF of JKSTRO steadily decreased, from 0.038 in 2000 to 0.013 in 2005, except for 0.044 in 2004. The IF of JKSTRO was slightly improved but had the innate problem of a smaller number of citations received.
To make JKSTRO a highly cited journal, the awareness of the academic status of JKSTRO and the active participation of every member of JKSTRO, including encouraging self-citations of papers published in the recent 2 years and submission of English-written papers, and

  5. Benchmarking Nuclear Power Plants

    International Nuclear Information System (INIS)

    Jakic, I.

    2016-01-01

One of the main tasks an owner has is to keep its business competitive on the market while delivering its product. Being the owner of a nuclear power plant bears the same (or even more complex and stern) responsibility due to safety risks and costs. In the past, nuclear power plant managements could (partly) ignore profit, or it was simply expected and to some degree assured through the various regulatory processes governing electricity rate design. It is obvious now that, with deregulation, utility privatization and a competitive electricity market, the key measures of success used at nuclear power plants must include traditional metrics of a successful business (return on investment, earnings and revenue generation) as well as those of plant performance, safety and reliability. In order to analyze the business performance of a (specific) nuclear power plant, benchmarking, as a well-established concept and usual method, was used. The domain was conservatively designed, with a well-adjusted framework, but the results still have limited application due to many differences, gaps and uncertainties. (author).

  6. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.

  7. AER benchmark specification sheet

    International Nuclear Information System (INIS)

    Aszodi, A.; Toth, S.

    2009-01-01

In the VVER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples, which are installed above 210 fuel assemblies. These measured temperatures are used in the determination of the fuel assembly powers and play an important role in the reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. In order to interpret the signals correctly, knowledge of the coolant mixing in the assembly heads is necessary. Computational Fluid Dynamics (CFD) codes and experiments can help to better understand these mixing processes, and they can provide information that supports a more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D CFD modeling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the 23rd cycle of the Paks NPP's Unit 3 are investigated. One of them has a symmetrical pin power profile and the other possesses an inclined profile. (authors)

  8. AER Benchmark Specification Sheet

    International Nuclear Information System (INIS)

    Aszodi, A.; Toth, S.

    2009-01-01

In the WWER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples, which are installed above 210 fuel assemblies. These measured temperatures are used in the determination of the fuel assembly powers and play an important role in the reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. In order to interpret the signals correctly, knowledge of the coolant mixing in the assembly heads is necessary. Computational fluid dynamics codes and experiments can help to better understand these mixing processes, and they can provide information that supports a more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D computational fluid dynamics modeling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the twenty-third cycle of the Paks NPP's Unit 3 are investigated. One of them has a symmetrical pin power profile and the other possesses an inclined profile. (Authors)

  9. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Dutch] Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, driving on electricity, on hydrogen and on petrol or diesel were also included. Research and growing insight increasingly show that biomass-based transport fuels sometimes cause just as many or even more greenhouse gas emissions than fossil fuels such as petrol and diesel. For Greenpeace Netherlands, CE Delft has summarized the current insights into the sustainability of fossil fuels, biofuels and electric driving, looking at the effects of the fuels on three sustainability criteria, with greenhouse gas emissions weighing heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient use.

  10. Implementation of benchmark management in quality assurance audit activities

    International Nuclear Information System (INIS)

    Liu Yongmei

    2008-01-01

The concept of Benchmark Management is that the practices of the best competitor are taken as the benchmark, in order to analyze and study the distance between that competitor and one's own institute, and to take effective actions to catch up with and even surpass the competitor. Based on the practices of many years of quality assurance audits, this paper analyzes and rebuilds the entire quality assurance audit process using the concept of Benchmark Management, in order to improve the level and effectiveness of quality assurance audit activities. (author)

  11. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
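The geometric-versus-arithmetic issue the abstract debates is easy to see numerically. A small illustration with invented per-query times (not TPC-D results):

```python
import math

# Invented per-query times (seconds) for two hypothetical systems:
# system A is fast on everything except one pathological query,
# system B is uniformly mediocre.
times_a = [1.0, 1.0, 1.0, 1.0, 100.0]
times_b = [5.0, 5.0, 5.0, 5.0, 5.0]

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # n-th root of the product, computed in log space for stability
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# The arithmetic mean ranks B ahead of A (5.0 vs 20.8), while the
# geometric mean ranks A ahead of B (about 2.5 vs 5.0): one outlier
# query dominates the former metric but not the latter.
```

This is the crux of the paper's argument: the choice of mean silently decides whether a system can buy a good score by being very fast on easy queries while ignoring hard ones.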

  12. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
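At its core, whole-building benchmarking of the kind described reduces to ranking a building's energy use intensity (EUI) within a peer distribution. A minimal sketch, with invented peer values rather than Cal-Arch or CBECS data:

```python
# Invented peer-group EUIs in kBtu/ft2-yr; a real tool would draw these
# from a survey database filtered by building type and climate region.
peer_eui = [45, 52, 60, 63, 70, 75, 81, 90, 102, 120]

def percentile_rank(value, peers):
    """Percent of peer buildings using less energy than `value`
    (lower rank means the building is relatively efficient)."""
    below = sum(1 for p in peers if p < value)
    return 100.0 * below / len(peers)

rank = percentile_rank(65, peer_eui)  # 4 of 10 peers are below -> 40.0
```

A building scoring far above the peer median is a natural candidate for the tune-up and retrofit screening the abstract mentions.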

  13. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels of coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy)

  14. Career performance trajectories of Olympic swimmers: benchmarks for talent development.

    Science.gov (United States)

    Allen, Sian V; Vandenbogaerde, Tom J; Hopkins, William G

    2014-01-01

    The age-related progression of elite athletes to their career-best performances can provide benchmarks for talent development. The purpose of this study was to model career performance trajectories of Olympic swimmers to develop these benchmarks. We searched the Web for annual best times of swimmers who were top 16 in pool events at the 2008 or 2012 Olympics, from each swimmer's earliest available competitive performance through to 2012. There were 6959 times in the 13 events for each sex, for 683 swimmers, with 10 ± 3 performances per swimmer (mean ± s). Progression to peak performance was tracked with individual quadratic trajectories derived using a mixed linear model that included adjustments for better performance in Olympic years and for the use of full-body polyurethane swimsuits in 2009. Analysis of residuals revealed appropriate fit of quadratic trends to the data. The trajectories provided estimates of age of peak performance and the duration of the age window of trivial improvement and decline around the peak. Men achieved peak performance later than women (24.2 ± 2.1 vs. 22.5 ± 2.4 years), while peak performance occurred at later ages for the shorter distances for both sexes (∼1.5-2.0 years between sprint and distance-event groups). Men and women had a similar duration in the peak-performance window (2.6 ± 1.5 years) and similar progressions to peak performance over four years (2.4 ± 1.2%) and eight years (9.5 ± 4.8%). These data provide performance targets for swimmers aiming to achieve elite-level performance.
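The quadratic-trajectory idea can be sketched for a single athlete: fit age versus performance with a second-degree polynomial and read the estimated age of peak performance off the parabola's vertex. The data below are synthetic, constructed to peak at age 23, and this is a one-swimmer simplification of the mixed-model analysis the study describes:

```python
import numpy as np

# Synthetic career data: annual performance (% above first-season
# baseline) for one swimmer, peaking at age 23.
ages = np.array([16, 17, 18, 19, 20, 21, 22, 23, 24, 25], dtype=float)
perf = -0.05 * (ages - 23.0) ** 2 + 9.5

c2, c1, c0 = np.polyfit(ages, perf, 2)  # quadratic coefficients, highest first
peak_age = -c1 / (2.0 * c2)             # vertex of the fitted parabola
```

In the actual study the quadratic is fitted per swimmer within a mixed linear model (with Olympic-year and swimsuit adjustments), and the curvature additionally yields the width of the near-peak window around the vertex.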

  15. Parton Shower Uncertainties with Herwig 7: Benchmarks at Leading Order

    CERN Document Server

    Bellm, Johannes; Plätzer, Simon; Schichtel, Peter; Siódmok, Andrzej

    2016-01-01

    We perform a detailed study of the sources of perturbative uncertainty in parton shower predictions within the Herwig 7 event generator. We benchmark two rather different parton shower algorithms, based on angular-ordered and dipole-type evolution, against each other. We deliberately choose leading order plus parton shower as the benchmark setting to identify a controllable set of uncertainties. This will enable us to reliably assess improvements by higher-order contributions in a follow-up work.

  16. Benchmark matrix and guide: Part II.

    Science.gov (United States)

    1991-01-01

    In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.

  17. BENCHMARKING – BETWEEN TRADITIONAL & MODERN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Mihaela Ungureanu

    2011-09-01

The concept of benchmarking requires a continuous process of performance improvement in different organizations in order to obtain superiority over those perceived as market leaders and competitors. This superiority can always be questioned; its relativity originates in the rapid evolution of the economic environment. The approach supports innovation relative to traditional methods and is driven by managers who want to push limits and seek excellence. The end of the twentieth century was the period of broad adoption of benchmarking in various areas and of its transformation from a simple quantitative analysis tool into a source of information on the performance and quality of goods and services.

  18. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

In order to learn from the best: in 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all, it is not always straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, ‘sustainable transport’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly, policies are not directly comparable across space and context. For these reasons, attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which

  19. Benchmarking: contexts and details matter.

    Science.gov (United States)

    Zheng, Siyuan

    2017-07-05

    Benchmarking is an essential step in the development of computational tools. We take this opportunity to pitch in our opinions on tool benchmarking, in light of two correspondence articles published in Genome Biology. Please see the related Li et al. and Newman et al. correspondence articles: www.dx.doi.org/10.1186/s13059-017-1256-5 and www.dx.doi.org/10.1186/s13059-017-1257-4.

  20. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exist. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input.

  1. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  2. Benchmarking in Czech Higher Education

    Directory of Open Access Journals (Sweden)

    Plaček Michal

    2015-12-01

    Full Text Available The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used at this level today, but most actors show some interest in its introduction. The expressed need for benchmarking, and its importance as a performance-management tool in less developed countries, are the impetus for the second part of our article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as on a comparison with international experience, recommendations for public policy are made; they lie in the design of a model of collaborative benchmarking for Czech higher-education programs in economics and management. Because the fully complex model cannot be implemented immediately – a point also confirmed by structured interviews with academics who have practical experience with benchmarking – the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.

  3. Declining Trend of Hepatitis A Seroepidemiology in Association with Improved Public Health and Economic Status of Thailand.

    Science.gov (United States)

    Sa-nguanmoo, Pattaratida; Posuwan, Nawarat; Vichaiwattana, Preeyaporn; Vuthitanachot, Viboonsak; Saelao, Siriporn; Foonoi, Monthana; Fakthongyoo, Apinya; Makaroon, Jamorn; Srisingh, Klaita; Asawarachun, Duangporn; Owatanapanich, Somchai; Wutthiratkowit, Norra; Tohtubtiang, Kraisorn; Vongpunsawad, Sompong; Yoocharoen, Pornsak; Poovorawan, Yong

    2016-01-01

    Hepatitis A virus (HAV) is transmitted via the fecal-oral route from contaminated food or water. As part of the most recent survey of viral hepatitis burden in Thailand, we analyzed the current seroprevalence of HAV in the country and compared it with data dating back to 1971. From March to October 2014, a total of 4,260 individuals between one month and 71 years of age from different geographical regions (North = 961; Central = 1,125; Northeast = 1,109; South = 1,065) were screened for anti-HAV IgG antibody using an automated chemiluminescent microparticle immunoassay. Overall, 34.53% (1,471/4,260) possessed anti-HAV IgG antibody, and the age-standardized seroprevalence was 48.6%. Seroprevalence rates were 27.3% (North), 30.8% (Central), 33.8% (Northeast) and 45.8% (South) and were markedly lower than in past studies, especially among younger age groups. The overall trend showed an increase in the age by which 50% of the population was anti-HAV IgG seropositive: 4.48 years (1971-1972), 6 (1976), 12.49 (1990), 36.02 (2004) and 42.03 (2014). This suggests that Thailand is transitioning from low to very low HAV endemicity. Lower prevalence of HAV correlated with an improved healthcare system, as measured by a decreased infant mortality rate, and with an improved national economy, as reflected in increased GDP per capita. The aging HAV immuno-naïve population may be rendered susceptible to potential HAV outbreaks similar to those in industrialized countries and may benefit from targeted vaccination of high-risk groups.

  4. Declining Trend of Hepatitis A Seroepidemiology in Association with Improved Public Health and Economic Status of Thailand.

    Directory of Open Access Journals (Sweden)

    Pattaratida Sa-nguanmoo

    Full Text Available Hepatitis A virus (HAV) is transmitted via the fecal-oral route from contaminated food or water. As part of the most recent survey of viral hepatitis burden in Thailand, we analyzed the current seroprevalence of HAV in the country and compared it with data dating back to 1971. From March to October 2014, a total of 4,260 individuals between one month and 71 years of age from different geographical regions (North = 961; Central = 1,125; Northeast = 1,109; South = 1,065) were screened for anti-HAV IgG antibody using an automated chemiluminescent microparticle immunoassay. Overall, 34.53% (1,471/4,260) possessed anti-HAV IgG antibody, and the age-standardized seroprevalence was 48.6%. Seroprevalence rates were 27.3% (North), 30.8% (Central), 33.8% (Northeast) and 45.8% (South) and were markedly lower than in past studies, especially among younger age groups. The overall trend showed an increase in the age by which 50% of the population was anti-HAV IgG seropositive: 4.48 years (1971-1972), 6 (1976), 12.49 (1990), 36.02 (2004) and 42.03 (2014). This suggests that Thailand is transitioning from low to very low HAV endemicity. Lower prevalence of HAV correlated with an improved healthcare system, as measured by a decreased infant mortality rate, and with an improved national economy, as reflected in increased GDP per capita. The aging HAV immuno-naïve population may be rendered susceptible to potential HAV outbreaks similar to those in industrialized countries and may benefit from targeted vaccination of high-risk groups.

  5. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been learned from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked against some of these, with the necessary level of agreement depending on the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result their capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks against important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks is included in the source code, and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence, i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer…
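The 'dynamic benchmarking' scheme described above can be sketched as a small regression harness that re-runs stored benchmark cases on every release and flags deviations from archived reference results. This is a minimal illustration, not MAAP4 code: the case names, reference values, tolerances, and the canned `run_case` stand-in are all hypothetical.

```python
# Sketch of a dynamic-benchmark harness: reference results are stored
# alongside the code and every release is re-checked against them.
# All case names, values, and the toy "simulation" below are hypothetical.

REFERENCE = {
    # benchmark case -> (expected peak value, allowed tolerance)
    "plant_transient_A": (1250.0, 25.0),
    "integral_experiment_B": (980.0, 15.0),
    "separate_effects_C": (610.0, 5.0),
}

def run_case(name: str) -> float:
    """Stand-in for exercising the full simulation code on one case."""
    canned = {"plant_transient_A": 1242.0,
              "integral_experiment_B": 991.0,
              "separate_effects_C": 612.5}
    return canned[name]

def check_benchmarks() -> list[str]:
    """Re-run every embedded benchmark; report any case outside tolerance."""
    failures = []
    for case, (expected, tol) in REFERENCE.items():
        result = run_case(case)
        if abs(result - expected) > tol:
            failures.append(f"{case}: {result} vs {expected} +/- {tol}")
    return failures

failures = check_benchmarks()
print("OK" if not failures else failures)
```

Embedding the check in the code base, as the abstract describes, means any upgrade that breaks agreement with an archived transient is caught at distribution time rather than by a user.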

  6. Benchmarking comprehensive cancer care

    NARCIS (Netherlands)

    Wind, Anke

    2017-01-01

    The number of cancer patients and survivors is steadily increasing and, despite or perhaps because of rapid improvements in diagnostics and therapeutics, important inequalities in cancer survival exist within and between different countries in Europe. Improving the quality of care is part of the…

  7. How to Use Benchmarking in Small and Medium-Sized Businesses

    OpenAIRE

    Alexandrache (Hrimiuc) Olivia Bianca

    2011-01-01

    Nowadays, benchmarking has become a powerful management tool that stimulates innovative improvement through the exchange of corporate information, performance measurement, and the adoption of best practices. It has been used to improve productivity and quality in leading manufacturing organizations. In recent years, companies of different sizes and business sectors have become involved in benchmarking activities. Despite the differences in benchmarking practices between smaller and bigger organiz…

  8. JENDL-4.0 benchmarking for fission reactor applications

    International Nuclear Information System (INIS)

    Chiba, Go; Okumura, Keisuke; Sugino, Kazuteru; Nagaya, Yasunobu; Yokoyama, Kenji; Kugo, Teruhiko; Ishikawa, Makoto; Okajima, Shigeaki

    2011-01-01

    Benchmark testing of the newly developed Japanese evaluated nuclear data library JENDL-4.0 is carried out using a large body of integral data. Benchmark calculations are performed with a continuous-energy Monte Carlo code and with the deterministic procedure developed for fast reactor analyses in Japan. Through the present benchmark testing, using a wide range of benchmark data, significant improvement in the performance of JENDL-4.0 for fission reactor applications is clearly demonstrated in comparison with the former library JENDL-3.3. Much more accurate and reliable prediction of neutronic parameters for both thermal and fast reactors becomes possible by using the library JENDL-4.0. (author)

  9. Empirical Methods for Detecting Regional Trends and Other Spatial Expressions in Antrim Shale Gas Productivity, with Implications for Improving Resource Projections Using Local Nonparametric Estimation Techniques

    Science.gov (United States)

    Coburn, T.C.; Freeman, P.A.; Attanasi, E.D.

    2012-01-01

    The primary objectives of this research were to (1) investigate empirical methods for establishing regional trends in unconventional gas resources as exhibited by historical production data and (2) determine whether or not incorporating additional knowledge of a regional trend in a suite of previously established local nonparametric resource prediction algorithms influences assessment results. Three different trend detection methods were applied to publicly available production data (well EUR aggregated to 80-acre cells) from the Devonian Antrim Shale gas play in the Michigan Basin. This effort led to the identification of a southeast-northwest trend in cell EUR values across the play that, in a very general sense, conforms to the primary fracture and structural orientations of the province. However, including this trend in the resource prediction algorithms did not lead to improved results. Further analysis indicated the existence of clustering among cell EUR values that likely dampens the contribution of the regional trend. The reason for the clustering, a somewhat unexpected result, is not completely understood, although the geological literature provides some possible explanations. With appropriate data, a better understanding of this clustering phenomenon may lead to important information about the factors and their interactions that control Antrim Shale gas production, which may, in turn, help establish a more general protocol for better estimating resources in this and other shale gas plays. © 2011 International Association for Mathematical Geology (outside the USA).

  10. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes, including ATHENA and the PENCIL code. MUSIC is able both to reproduce the behaviour of established and widely used codes and to produce the results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
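The Jacobian-free Newton-Krylov approach mentioned in this abstract rests on approximating the Jacobian-vector product J(u)·v by a finite difference of residuals, so the Jacobian is never assembled. A minimal sketch on a toy two-equation system (not the MUSIC equations); a real JFNK code would hand `jv` to a Krylov solver such as GMRES instead of building J column-by-column as this tiny demo does:

```python
# Finite-difference directional derivative at the heart of JFNK:
# J(u)·v ~ (F(u + eps*v) - F(u)) / eps, one extra residual evaluation.
# Toy 2-equation nonlinear system, purely illustrative.

def F(u):
    x, y = u
    return [x * x + y - 3.0, x + y * y - 5.0]

def jv(F, u, v, eps=1e-7):
    """Approximate the Jacobian-vector product J(u)·v without forming J."""
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(Fp, Fu)]

def newton(F, u, iters=20):
    """Newton iteration for this tiny system; J is assembled from jv
    columns here only because n = 2. A real JFNK code passes jv to a
    Krylov solver and never assembles J."""
    n = len(u)
    for _ in range(iters):
        cols = [jv(F, u, [1.0 if k == j else 0.0 for k in range(n)])
                for j in range(n)]
        J = [[cols[j][i] for j in range(n)] for i in range(n)]
        Fu = F(u)
        # Solve the 2x2 system J d = -F(u) by Cramer's rule.
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        d0 = (-Fu[0] * J[1][1] + Fu[1] * J[0][1]) / det
        d1 = (-Fu[1] * J[0][0] + Fu[0] * J[1][0]) / det
        u = [u[0] + d0, u[1] + d1]
    return u

# Directional derivative along v = (1, 0) at u = (1, 2) is analytically
# (dF1/dx, dF2/dx) = (2x, 1) = (2, 1).
print(jv(F, [1.0, 2.0], [1.0, 0.0]))
print(newton(F, [2.0, 1.0]))  # converges to the root (1, 2)
```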

  11. Model based energy benchmarking for glass furnace

    International Nuclear Information System (INIS)

    Sardeshpande, Vishal; Gaitonde, U.N.; Banerjee, Rangan

    2007-01-01

    Energy benchmarking of processes is important for setting energy efficiency targets and planning energy management strategies. Most approaches used for energy benchmarking are statistical, comparing a plant with a sample of existing plants. This paper presents a model-based approach for benchmarking energy-intensive industrial processes and illustrates this approach for industrial glass furnaces. A simulation model for a glass furnace is developed using mass and energy balances, heat loss equations for the different zones, and empirical equations based on operating practices. The model is checked against field data from end-fired industrial glass furnaces in India. The simulation model enables calculation of the energy performance of a given furnace design. The model results show the potential for improvement and the impact of different operating and design preferences on specific energy consumption. A case study for a 100 TPD end-fired furnace is presented. An achievable minimum energy consumption of about 3830 kJ/kg is estimated for this furnace. The useful heat carried by the glass is about 53% of the heat supplied by the fuel. Actual furnaces operating at these production scales have a potential for reduction in energy consumption of about 20-25%.
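The benchmarking logic described here reduces to comparing a furnace's actual specific energy consumption (SEC) with the model-derived achievable minimum. A sketch using the 3830 kJ/kg minimum quoted in the abstract; the fuel input and pull rate of the "actual" furnace below are hypothetical illustration values, not data from the study:

```python
# Model-based energy benchmarking sketch for a glass furnace: compare
# actual specific energy consumption (SEC) against an achievable minimum.
# The 3830 kJ/kg minimum is from the abstract; the furnace figures
# passed in below are hypothetical.

ACHIEVABLE_MIN_SEC = 3830.0  # kJ per kg of glass (model-based benchmark)

def specific_energy(fuel_input_gj_per_day: float, pull_tpd: float) -> float:
    """SEC in kJ/kg = daily fuel energy / daily glass pull.
    1 GJ = 1e6 kJ; 1 tonne = 1000 kg."""
    return fuel_input_gj_per_day * 1e6 / (pull_tpd * 1000.0)

def saving_potential(sec_actual: float) -> float:
    """Fraction of current energy use the benchmark says is avoidable."""
    return 1.0 - ACHIEVABLE_MIN_SEC / sec_actual

sec = specific_energy(fuel_input_gj_per_day=500.0, pull_tpd=100.0)
print(f"SEC = {sec:.0f} kJ/kg, saving potential = {saving_potential(sec):.0%}")
```

With these illustrative inputs the SEC works out to 5000 kJ/kg and the saving potential to roughly 23%, consistent with the 20-25% range the abstract reports for furnaces at this scale.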

  12. The Benchmarking of Integrated Business Structures

    Directory of Open Access Journals (Sweden)

    Nifatova Olena M.

    2017-12-01

    Full Text Available The aim of the article is to study the role of benchmarking in the process of integration of business structures in the aspect of knowledge sharing. The results of studying the essential content of the concept “integrated business structure” and its semantic analysis made it possible to form our own understanding of this category, with an emphasis on the need to consider it in the plane of three projections — legal, economic and organizational. The economic projection of the essential content of integration associations of business units is supported by the organizational projection, which is expressed through such essential aspects as the existence of a single center that makes key decisions; understanding integration as knowledge sharing; and using benchmarking as an exchange of experience on key business processes. Understanding the process of integration of business units in the aspect of knowledge sharing involves obtaining certain informational benefits. Using benchmarking as an exchange of experience on key business processes in integrated business structures will help improve the basic production processes and increase the efficiency of activity of both the individual business unit and the IBS as a whole.

  13. Status on benchmark testing of CENDL-3

    CERN Document Server

    Liu Ping

    2002-01-01

    CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has been finished and recently distributed for benchmark analysis. The processing was carried out using the NJOY nuclear data processing code system. The calculations and analysis of benchmarks were done with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A, and the calculated results were compared with the experimental results and with those based on ENDF/B6. In most thermal and fast uranium criticality benchmarks, the calculated keff values with CENDL-3 were in good agreement with experimental results. In the plutonium fast cores, the keff values were improved significantly with CENDL-3. This is due to reevaluation of the fission spectrum and elastic angular distributions of 239Pu and 240Pu. CENDL-3 underestimated the keff values compared with other evaluated data libraries for most spherical or cylindrical assemblies of plutonium or uranium with beryllium.

  14. Monte Carlo benchmarking: Validation and progress

    International Nuclear Information System (INIS)

    Sala, P.

    2010-01-01

    Document available in abstract form only. Full text of publication follows: Calculational tools for radiation shielding at accelerators are faced with new challenges from the present and next generations of particle accelerators. All the details of particle production and transport play a role when dealing with huge power facilities, therapeutic ion beams, radioactive beams and so on. Besides the traditional calculations required for shielding, activation predictions have become an increasingly critical component. Comparison and benchmarking with experimental data are obviously mandatory in order to build up confidence in the computing tools, and to assess their reliability and limitations. Thin target particle production data are often the best tools for understanding the predictive power of individual interaction models and improving their performance. Complex benchmarks (e.g. thick target data, deep penetration, etc.) are invaluable in assessing the overall performance of calculational tools when all ingredients are put to work together. A review of the validation procedures of Monte Carlo tools will be presented with practical and real life examples. The interconnections among benchmarks, model development and impact on shielding calculations will be highlighted. (authors)

  15. [Benchmarking and other functions of ROM: back to basics].

    Science.gov (United States)

    Barendregt, M

    2015-01-01

    Since 2011, outcome data in Dutch mental health care have been collected on a national scale. This has led to confusion about the position of benchmarking in the system known as routine outcome monitoring (ROM). To provide insight into the various objectives and uses of aggregated outcome data. A qualitative review was performed and the findings were analysed. Benchmarking is a strategy for finding best practices and for improving efficacy, and it belongs to the domain of quality management. Benchmarking involves comparing outcome data by means of instrumentation and is relatively tolerant with regard to the validity of the data. Although benchmarking is a function of ROM, it must be differentiated from other functions of ROM. Clinical management, public accountability, research, payment for performance and information for patients are all functions of ROM which require different ways of data feedback and which make different demands on the validity of the underlying data. Benchmarking is often wrongly regarded as simply a synonym for 'comparing institutions'. It is, however, a method which includes many more factors; it can be used to improve quality, takes a more flexible approach to the validity of outcome data, and is less concerned than other ROM functions with funding and the amount of information given to patients. Benchmarking can make good use of currently available outcome data.

  16. The Medical Library Association Benchmarking Network: results.

    Science.gov (United States)

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C; Smith, Bernie Todd

    2006-04-01

    This article presents some limited results from the Medical Library Association (MLA) Benchmarking Network survey conducted in 2002. Other uses of the data are also presented. After several years of development and testing, a Web-based survey opened for data input in December 2001. Three hundred eighty-five MLA members entered data on the size of their institutions and the activities of their libraries. The data from 344 hospital libraries were edited and selected for reporting in aggregate tables and on an interactive site in the Members-Only area of MLANET. The data represent a 16% to 23% return rate and have a 95% confidence level. Specific questions can be answered using the reports. The data can be used to review internal processes, perform outcomes benchmarking, retest a hypothesis, refute previous survey findings, or develop library standards. The data can also be compared with current surveys or used to look for trends through comparison with past surveys. The impact of this project on MLA will reach into areas of research and advocacy. The data will be useful in the everyday working of small health sciences libraries as well as providing concrete data on the current practices of health sciences libraries.
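For context on the quoted 95% confidence level, the margin of error implied by a sample of 344 respondents can be estimated with the usual normal approximation for a proportion. This is illustrative arithmetic only, not a calculation taken from the survey itself:

```python
# Margin of error at 95% confidence for a survey proportion, e.g. the
# fraction of the n = 344 hospital libraries reporting some practice.
# Purely illustrative; the survey's own methodology is not reproduced.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the normal-approximation confidence interval."""
    return z * math.sqrt(p * (1.0 - p) / n)

# Worst case p = 0.5 with n = 344 respondents:
print(f"+/- {margin_of_error(0.5, 344):.1%}")
```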

  17. The Medical Library Association Benchmarking Network: results*

    Science.gov (United States)

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd

    2006-01-01

    Objective: This article presents some limited results from the Medical Library Association (MLA) Benchmarking Network survey conducted in 2002. Other uses of the data are also presented. Methods: After several years of development and testing, a Web-based survey opened for data input in December 2001. Three hundred eighty-five MLA members entered data on the size of their institutions and the activities of their libraries. The data from 344 hospital libraries were edited and selected for reporting in aggregate tables and on an interactive site in the Members-Only area of MLANET. The data represent a 16% to 23% return rate and have a 95% confidence level. Results: Specific questions can be answered using the reports. The data can be used to review internal processes, perform outcomes benchmarking, retest a hypothesis, refute previous survey findings, or develop library standards. The data can also be compared with current surveys or used to look for trends through comparison with past surveys. Conclusions: The impact of this project on MLA will reach into areas of research and advocacy. The data will be useful in the everyday working of small health sciences libraries as well as providing concrete data on the current practices of health sciences libraries. PMID:16636703

  18. Benchmarking and energy management schemes in SMEs

    Energy Technology Data Exchange (ETDEWEB)

    Huenges Wajer, Boudewijn [SenterNovem (Netherlands); Helgerud, Hans Even [New Energy Performance AS (Norway); Lackner, Petra [Austrian Energy Agency (Austria)

    2007-07-01

    Many companies are reluctant to focus on energy management or to invest in energy efficiency measures. Nevertheless, there are many good examples proving that the right approach to implementing energy efficiency can very well be combined with the business priorities of most companies. SMEs in particular can benefit from a facilitated European approach because they normally lack the resources and time to invest in energy efficiency. In the EU-supported pilot project BESS, 60 SMEs from 11 European countries in the food and drink industries successfully tested a package of interactive instruments which offers such a facilitated approach. A number of pilot companies show profit increases of 3% up to 10%. The package includes a user-friendly, web-based e-learning scheme for implementing energy management as well as a benchmarking module for company-specific comparison of energy performance indicators. Moreover, it has several practical and tested tools to support the cycle of continuous improvement of energy efficiency in the company, such as checklists, sector-specific measure lists, and templates for audits and energy conservation plans. An important feature, and also a key trigger for companies, is the possibility for SMEs to benchmark their energy situation anonymously against others in the same sector. SMEs can participate in a unique web-based benchmarking system to benchmark interactively in a way which fully guarantees the confidentiality and safety of company data. Furthermore, the available data can contribute to a bottom-up approach supporting the objectives of (national) monitoring and targeting, thereby also contributing to the EU Energy Efficiency and Energy Services Directive. A follow-up project to expand the number of participating SMEs from various sectors is currently being developed.

  19. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins , Robert

    2010-01-01

    Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...

  20. Measurement, Standards, and Peer Benchmarking: One Hospital's Journey.

    Science.gov (United States)

    Martin, Brian S; Arbore, Mark

    2016-04-01

    Peer-to-peer benchmarking is an important component of rapid-cycle performance improvement in patient safety and quality-improvement efforts. Institutions should carefully examine critical success factors before engaging in peer-to-peer benchmarking in order to maximize growth and change opportunities. Solutions for Patient Safety has proven to be a high-yield engagement for Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center, with measurable improvement in both organizational process and culture. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Adventure Tourism Benchmark – Analyzing the Case of Suesca, Cundinamarca

    Directory of Open Access Journals (Sweden)

    Juan Felipe Tsao Borrero

    2012-11-01

    Full Text Available Adventure tourism is a growing sector within the tourism industry, and understanding its dynamics is fundamental for adventure tourism destinations and their local authorities. Destination benchmarking is a strong tool for identifying the performance of tourism services offered at a destination in order to design appropriate policies to improve its competitiveness. The benchmarking study of Suesca, an adventure tourism destination in Colombia, helps identify the gaps relative to successful adventure tourism destinations around the world and provides valuable information to local policy-makers on the features to be improved. The lack of information available to tourists, together with limited financial facilities, hinders Suesca's capability to improve its competitiveness.

  2. U.S. NO2 trends (2005-2013): EPA Air Quality System (AQS) data versus improved observations from the Ozone Monitoring Instrument (OMI)

    Science.gov (United States)

    Lamsal, Lok N.; Duncan, Bryan N.; Yoshida, Yasuko; Krotkov, Nickolay A.; Pickering, Kenneth E.; Streets, David G.; Lu, Zifeng

    2015-06-01

    Emissions of nitrogen oxides (NOx) and, subsequently, atmospheric levels of nitrogen dioxide (NO2) have decreased over the U.S. due to a combination of environmental policies and technological change. Consequently, NO2 levels have decreased by 30-40% in the last decade. We quantify NO2 trends (2005-2013) over the U.S. using surface measurements from the U.S. Environmental Protection Agency (EPA) Air Quality System (AQS) and an improved tropospheric NO2 vertical column density (VCD) data product from the Ozone Monitoring Instrument (OMI) on the Aura satellite. We demonstrate that the current OMI NO2 algorithm is of sufficient maturity to allow a favorable correspondence of trends and variations in OMI and AQS data. Our trend model accounts for the non-linear dependence of NO2 concentration on emissions associated with the seasonal variation of the chemical lifetime, including the change in the amplitude of the seasonal cycle associated with the significant change in NOx emissions that occurred over the last decade. The direct relationship between observations and emissions becomes more robust when one accounts for these non-linear dependencies. We improve the OMI NO2 standard retrieval algorithm and, subsequently, the data product by using monthly vertical concentration profiles, a required algorithm input, from a high-resolution chemistry and transport model (CTM) simulation with varying emissions (2005-2013). The impact of neglecting the time-dependence of the profiles leads to errors in trend estimation, particularly in regions where emissions have changed substantially. For example, trends calculated from retrievals based on time-dependent profiles offer 18% more instances of significant trends and up to 15% larger total NO2 reduction versus the results based on profiles for 2005. Using a CTM, we explore the theoretical relation of the trends estimated from NO2 VCDs to those estimated from ground-level concentrations. 
The model-simulated trends in VCDs strongly
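The record does not specify the exact form of the trend model, but the idea of a linear trend combined with a seasonal cycle whose amplitude itself drifts in time can be sketched with ordinary least squares. All numbers below are synthetic, not OMI/AQS data, and the function name is hypothetical:

```python
import numpy as np

def fit_seasonal_trend(t, y):
    """Least-squares fit of a linear trend plus an annual cycle whose
    amplitude drifts linearly in time (t in fractional years)."""
    X = np.column_stack([
        np.ones_like(t),            # offset
        t,                          # linear trend
        np.cos(2 * np.pi * t),      # annual cycle
        np.sin(2 * np.pi * t),
        t * np.cos(2 * np.pi * t),  # drifting seasonal amplitude
        t * np.sin(2 * np.pi * t),
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X @ coef

# Synthetic monthly series over nine years with a declining mean level
# and a seasonal amplitude that shrinks as emissions fall
t = np.arange(0, 9, 1 / 12.0)
y_true = (10.0 - 0.4 * t) + (3.0 - 0.2 * t) * np.cos(2 * np.pi * t)
noise = 0.01 * np.random.default_rng(0).standard_normal(t.size)
coef, fitted = fit_seasonal_trend(t, y_true + noise)
```

Here `coef[1]` is the long-term trend and `coef[4]`/`coef[5]` capture the change in seasonal amplitude, which is the non-linear dependence the abstract says must be accounted for before relating observations directly to emissions.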

  3. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  4. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  5. Benchmark of neutron production cross sections with Monte Carlo codes

    Science.gov (United States)

    Tsai, Pi-En; Lai, Bo-Lun; Heilbronn, Lawrence H.; Sheu, Rong-Jiun

    2018-02-01

    Aiming to provide critical information in the fields of heavy ion therapy, radiation shielding in space, and facility design for heavy-ion research accelerators, the physics models in three Monte Carlo simulation codes - PHITS, FLUKA, and MCNP6, were systematically benchmarked with comparisons to fifteen sets of experimental data for neutron production cross sections, which include various combinations of 12C, 20Ne, 40Ar, 84Kr and 132Xe projectiles and natLi, natC, natAl, natCu, and natPb target nuclides at incident energies between 135 MeV/nucleon and 600 MeV/nucleon. For neutron energies above 60% of the specific projectile energy per nucleon, the LAQGSM03.03 in MCNP6, the JQMD/JQMD-2.0 in PHITS, and the RQMD-2.4 in FLUKA all show a better agreement with data in heavy-projectile systems than with light-projectile systems, suggesting that the collective properties of projectile nuclei and nucleon interactions in the nucleus should be considered for light projectiles. For intermediate-energy neutrons whose energies are below the 60% projectile energy per nucleon and above 20 MeV, FLUKA is likely to overestimate the secondary neutron production, while MCNP6 tends towards underestimation. PHITS with JQMD shows a mild tendency for underestimation, but the JQMD-2.0 model with a modified physics description for central collisions generally improves the agreement between data and calculations. For low-energy neutrons (below 20 MeV), which are dominated by the evaporation mechanism, PHITS (which uses GEM linked with JQMD and JQMD-2.0) and FLUKA both tend to overestimate the production cross section, whereas MCNP6 tends to underestimate more systems than to overestimate. For total neutron production cross sections, the trends of the benchmark results over the entire energy range are similar to the trends seen in the dominant energy region. 
Also, the comparison of GEM coupled with either JQMD or JQMD-2.0 in the PHITS code indicates that the model used to describe the first

  6. Benchmarking on the management of radioactive waste; Benchmarking sobre la gestion de los residuos radiactivos

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Gomez, M. a.; Gonzalez Gandal, R.; Gomez Castano, N.

    2013-09-01

    In this project, an evaluation of the waste management practices carried out at the Spanish nuclear power plants has been done following the Benchmarking methodology. This process has allowed the identification of aspects to improve waste treatment processes, to reduce the volume of waste, to reduce management costs, and to establish management routes for waste streams that currently lack one. (Author)

  7. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  8. Benchmarking urban energy efficiency in the UK

    International Nuclear Information System (INIS)

    Keirstead, James

    2013-01-01

    This study asks what is the ‘best’ way to measure urban energy efficiency. There has been recent interest in identifying efficient cities so that best practices can be shared, a process known as benchmarking. Previous studies have used relatively simple metrics that provide limited insight on the complexity of urban energy efficiency and arguably fail to provide a ‘fair’ measure of urban performance. Using a data set of 198 urban UK local administrative units, three methods are compared: ratio measures, regression residuals, and data envelopment analysis. The results show that each method has its own strengths and weaknesses regarding the ease of interpretation, ability to identify outliers and provide consistent rankings. Efficient areas are diverse but are notably found in low income areas of large conurbations such as London, whereas industrial areas are consistently ranked as inefficient. The results highlight the shortcomings of the underlying production-based energy accounts. Ideally urban energy efficiency benchmarks would be built on consumption-based accounts, but interim recommendations are made regarding the use of efficiency measures that improve upon current practice and facilitate wider conversations about what it means for a specific city to be energy-efficient within an interconnected economy. - Highlights: • Benchmarking is a potentially valuable method for improving urban energy performance. • Three different measures of urban energy efficiency are presented for UK cities. • Most efficient areas are diverse but include low-income areas of large conurbations. • Least efficient areas perform industrial activities of national importance. • Improve current practice with grouped per capita metrics or regression residuals
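Two of the three methods compared in this study, ratio measures and regression residuals, can be illustrated in a few lines. The figures below are hypothetical, not the author's data set of 198 UK local administrative units:

```python
import numpy as np

# Hypothetical data: energy use (GWh) and two drivers for five areas
energy = np.array([520.0, 300.0, 410.0, 150.0, 700.0])
population = np.array([100.0, 80.0, 90.0, 40.0, 120.0])   # thousands
income = np.array([22.0, 30.0, 25.0, 28.0, 20.0])         # k GBP/capita

# 1) Simple ratio measure: energy per capita (lower = "more efficient")
per_capita = energy / population

# 2) Regression residuals: regress log energy on log drivers; a negative
#    residual flags an area using less energy than its structure predicts
X = np.column_stack([np.ones_like(energy), np.log(population), np.log(income)])
coef, *_ = np.linalg.lstsq(X, np.log(energy), rcond=None)
residuals = np.log(energy) - X @ coef
```

The ratio measure is easy to interpret but ignores structural differences between areas; the residual approach controls for them at the cost of depending on the chosen regressors, which is exactly the trade-off the abstract describes.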

  9. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    Takeda, T.; Ikeda, H.

    1991-03-01

    A set of 3-D neutron transport benchmark problems proposed by the Osaka University to NEACRP in 1988 has been calculated by many participants and the corresponding results are summarized in this report. The results of k_eff, control rod worth and region-averaged fluxes for the four proposed core models, calculated by using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes.

  10. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    Makai, M.

    1998-01-01

    The test problems utilized in the validation and verification process of computer programs in Atomic Energy Research are collected here into a single set. This is the first step towards issuing a volume in which tests for VVER are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments because they have been published, along with a number of comparisons, in the final reports of TIC. The present collection focuses on operational and mathematical benchmarks which cover almost the entire range of reactor calculations. (Author)

  11. 2009 South American benchmarking study: natural gas transportation companies

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Nathalie [Gas TransBoliviano S.A. (Bolivia); Walter, Juliana S. [TRANSPETRO, Rio de Janeiro, RJ (Brazil)

    2009-07-01

    In the current business environment large corporations are constantly seeking to adapt their strategies. Benchmarking is an important tool for continuous improvement and decision-making. Benchmarking is a methodology that determines which aspects are the most important to be improved upon, and it proposes establishing a competitive parameter in an analysis of the best practices and processes, applying continuous improvement driven by the best organizations in their class. At the beginning of 2008, GTB (Gas TransBoliviano S.A.) contacted several South American gas transportation companies to carry out a regional benchmarking study in 2009. In this study, the key performance indicators of the South American companies, whose reality is similar, for example, in terms of prices, availability of labor, and community relations, will be compared. Within this context, a comparative study of the results, the comparative evaluation among natural gas transportation companies, is becoming an essential management instrument to help with decision-making. (author)

  12. Benchmarking survey for recycling.

    Energy Technology Data Exchange (ETDEWEB)

    Marley, Margie Charlotte; Mizner, Jack Harry

    2005-06-01

    This report describes the methodology, analysis and conclusions of a comparison survey of recycling programs at ten Department of Energy sites including Sandia National Laboratories/New Mexico (SNL/NM). The goal of the survey was to compare SNL/NM's recycling performance with that of other federal facilities, and to identify activities and programs that could be implemented at SNL/NM to improve recycling performance.

  13. Trends in depression and antidepressant prescribing in children and adolescents: a cohort study in The Health Improvement Network (THIN).

    Directory of Open Access Journals (Sweden)

    Linda P M M Wijlaars

    Full Text Available In 2003, the Committee on Safety of Medicines (CSM) advised against treatment with selective serotonin reuptake inhibitors (SSRIs) other than fluoxetine in children, due to a possible increased risk of suicidal behaviour. This study examined the effects of this safety warning on general practitioners' depression diagnosing and prescription behaviour in children. We identified a cohort of 1,502,753 children (6 m in The Health Improvement Network (THIN) UK primary care database. Trends in incidence of depression diagnoses, symptoms and antidepressant prescribing were examined 1995-2009, accounting for deprivation, age and gender. We used segmented regression analysis to assess changes in prescription rates. Overall, 45,723 (3%) children had ≥ 1 depression-related entry in their clinical records. SSRIs were prescribed to 16,925 (1%) of children. SSRI prescription rates decreased from 3.2 (95%CI: 3.0,3.3) per 1,000 person-years at risk (PYAR) in 2002 to 1.7 (95%CI: 1.7,1.8) per 1,000 PYAR in 2005, but have since risen to 2.7 (95%CI: 2.6,2.8) per 1,000 PYAR in 2009. Prescription rates for CSM-contraindicated SSRIs citalopram, sertraline and especially paroxetine dropped dramatically after 2002, while rates for fluoxetine and amitriptyline remained stable. After 2005 rates for all antidepressants, except paroxetine and imipramine, started to rise again. Rates for depression diagnoses dropped from 3.0 (95%CI: 2.8,3.1) per 1,000 PYAR in 2002 to 2.0 (95%CI: 1.9,2.1) per 1,000 PYAR in 2005 and have been stable since. Recording of symptoms saw a steady increase from 1.0 (95%CI: 0.8,1.2) per 1,000 PYAR in 1995 to 4.7 (95%CI: 4.5,4.8) per 1,000 PYAR in 2009. The rates of depression diagnoses and SSRI prescriptions showed a significant drop around the time of the CSM advice, which was not present in the recording of symptoms. This could indicate caution on the part of GPs in making depression diagnoses and prescribing antidepressants following the CSM advice.
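Segmented (interrupted time-series) regression of the kind used in this study can be sketched as a piecewise-linear least-squares fit with a level and slope change at a breakpoint. The rates and breakpoint below are illustrative, not the THIN estimates:

```python
import numpy as np

def segmented_fit(year, rate, breakpoint):
    """Piecewise-linear least squares with a change in level and slope
    at `breakpoint`, the usual model for detecting a shift such as the
    2003 CSM safety advice."""
    t = year - year.min()
    post = (year >= breakpoint).astype(float)
    X = np.column_stack([
        np.ones_like(t),             # baseline level
        t,                           # pre-existing slope
        post,                        # level change at the breakpoint
        post * (year - breakpoint),  # slope change after the breakpoint
    ])
    coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
    return coef

# Illustrative rates per 1,000 PYAR: rising before 2003, then a sharp
# drop in level and a shallower slope afterwards
year = np.arange(1995.0, 2010.0)
rate = np.where(year < 2003,
                1.0 + 0.27 * (year - 1995),
                1.7 + 0.20 * (year - 2003))
coef = segmented_fit(year, rate, 2003.0)
```

`coef[2]` (level change) and `coef[3]` (slope change) are the quantities a segmented analysis tests for significance around the warning date.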

  14. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    Science.gov (United States)

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie; Reed, Sasha; Reich, Peter B.; Ryan, Michael G.; Wood, Tana E.; Yang, Xiaojuan

    2017-10-01

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  15. Quality management benchmarking: FDA compliance in pharmaceutical industry.

    Science.gov (United States)

    Jochem, Roland; Landgraf, Katja

    2010-01-01

    By analyzing and comparing industry and business best practice, processes can be optimized and become more successful mainly because efficiency and competitiveness increase. This paper aims to focus on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and five-stage model. Despite large administrations, there is much potential regarding business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is interesting for companies that like to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances for reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management and especially benchmarking is shown to support pharmaceutical industry improvements.

  16. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.; Tyhurst, Janis

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criterion and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  17. Prismatic Core Coupled Transient Benchmark

    International Nuclear Information System (INIS)

    Ortensi, J.; Pope, M.A.; Strydom, G.; Sen, R.S.; DeHart, M.D.; Gougar, H.D.; Ellis, C.; Baxter, A.; Seker, V.; Downar, T.J.; Vierow, K.; Ivanov, K.

    2011-01-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  18. Scope of Internal Supply Chain Management Benchmarking in Indian Manufacturing Industries

    OpenAIRE

    Kailash; Rajeev Kumar Saha; Sanjeev Goyal

    2017-01-01

    Internal supply chain management benchmarking practice is necessary to overcome the performance gap in manufacturing industries. The main purpose of this research work is to combine benchmarking and internal supply chain practices to improve the performance of Indian manufacturing industries. This paper first discusses the components of the internal supply chain between suppliers and customers, and then explains the scope of ISCM benchmarking in manufacturing industries.

  19. Longitudinal Household Trends in Access to Improved Water Sources and Sanitation in Chi Linh Town, Hai Duong Province, Viet Nam and Associated Factors.

    Science.gov (United States)

    Tuyet-Hanh, Tran Thi; Long, Tran Khanh; Van Minh, Hoang; Huong, Le Thi Thanh

    2016-01-01

    This study aims to characterize household trends in access to improved water sources and sanitation in Chi Linh Town, Hai Duong Province, Vietnam, and to identify factors affecting those trends. Data were extracted from the Chi Linh Health and Demographic Surveillance System (CHILILAB HDSS) database from 2004-2014, which included household access to improved water sources, household access to improved sanitation, and household demographic data. Descriptive statistical analysis and multinomial logistic regression were used. The results showed that over a 10-year period (2004-2014), the proportion of households with access to improved water and improved sanitation increased by 3.7% and 28.3%, respectively. As such, the 2015 Millennium Development Goal targets for safe drinking water and basic sanitation were met. However, 13.5% of households still had unimproved water and sanitation. People who are retired, work in trade or services, or other occupations were 1.49, 1.97, and 1.34 times more likely to have access to improved water and sanitation facilities than farming households, respectively (p < 0.001). Households living in urban areas were 1.84 times more likely than those living in rural areas to have access to improved water sources and improved sanitation facilities (OR = 1.84; 95% CI = 1.73-1.96). Non-poor households were 2.12 times more likely to have access to improved water sources and improved sanitation facilities compared to the poor group (OR = 2.12; 95% CI = 2.00-2.25). More efforts are required to increase household access to both improved water and sanitation in Chi Linh Town, focusing on the 13.5% of households currently without access. Similar to situations observed elsewhere in Vietnam and other low- and middle-income countries, there is a need to address socio-economic factors that are associated with inadequate access to improved water sources and sanitation facilities.
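The odds ratios with 95% confidence intervals reported above come from multinomial logistic regression; for a single binary exposure the same quantity can be illustrated directly from a 2x2 table with a Wald interval. The counts below are hypothetical, not CHILILAB data:

```python
import math

def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """Odds ratio and 95% Wald confidence interval from a 2x2 table,
    the form in which results like OR = 2.12 (95% CI 2.00-2.25) are
    reported."""
    or_ = (exposed_yes * unexposed_no) / (exposed_no * unexposed_yes)
    # Standard error of log(OR) from the four cell counts
    se = math.sqrt(1 / exposed_yes + 1 / exposed_no
                   + 1 / unexposed_yes + 1 / unexposed_no)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: with/without access among non-poor vs poor households
or_, (lo, hi) = odds_ratio(8000, 2000, 3000, 1500)
```

A regression model generalizes this by adjusting the odds ratio for the other covariates (occupation, urban/rural residence, wealth) simultaneously.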

  20. Longitudinal Household Trends in Access to Improved Water Sources and Sanitation in Chi Linh Town, Hai Duong Province, Viet Nam and Associated Factors

    Directory of Open Access Journals (Sweden)

    Tran Thi Tuyet-Hanh

    2016-10-01

    Full Text Available Objective: This study aims to characterize household trends in access to improved water sources and sanitation in Chi Linh Town, Hai Duong Province, Vietnam, and to identify factors affecting those trends. Method: Data were extracted from the Chi Linh Health and Demographic Surveillance System (CHILILAB HDSS) database from 2004–2014, which included household access to improved water sources, household access to improved sanitation, and household demographic data. Descriptive statistical analysis and multinomial logistic regression were used. The results showed that over a 10-year period (2004–2014), the proportion of households with access to improved water and improved sanitation increased by 3.7% and 28.3%, respectively. As such, the 2015 Millennium Development Goal targets for safe drinking water and basic sanitation were met. However, 13.5% of households still had unimproved water and sanitation. People who are retired, work in trade or services, or other occupations were 1.49, 1.97, and 1.34 times more likely to have access to improved water and sanitation facilities than farming households, respectively (p < 0.001). Households living in urban areas were 1.84 times more likely than those living in rural areas to have access to improved water sources and improved sanitation facilities (OR = 1.84; 95% CI = 1.73–1.96). Non-poor households were 2.12 times more likely to have access to improved water sources and improved sanitation facilities compared to the poor group (OR = 2.12; 95% CI = 2.00–2.25). More efforts are required to increase household access to both improved water and sanitation in Chi Linh Town, focusing on the 13.5% of households currently without access. Similar to situations observed elsewhere in Vietnam and other low- and middle-income countries, there is a need to address socio-economic factors that are associated with inadequate access to improved water sources and sanitation facilities.

  1. ABM news and benchmarks

    International Nuclear Information System (INIS)

    Alekhin, Sergey; Bluemlein, Johannes; Moch, Sven-Olaf

    2013-08-01

    We report on progress in the determination of the unpolarised nucleon PDFs within the ABM global fit framework. The data used in the ABM analysis are updated including the charm-production and the high-Q² neutral-current samples obtained at the HERA collider, as well as the LHC data on the differential Drell-Yan cross-sections. An updated set of the PDFs with improved experimental and theoretical accuracy at small x is presented. We find minimal impact of the t-quark production cross section measured at the Tevatron and the LHC on the gluon distribution and the value of the strong coupling constant α_s determined from the ABM fit in the case of the t-quark running-mass definition. In particular, the value of α_s(M_Z) = 0.1133 ± 0.0008 is obtained from the variant of the ABM12 fit with the Tevatron and CMS t-quark production cross-section data included and the MS-bar value m_t(m_t) = 162 GeV.

  2. Integral benchmark test of JENDL-4.0 for U-233 systems with ICSBEP handbook

    International Nuclear Information System (INIS)

    Kuwagaki, Kazuki; Nagaya, Yasunobu

    2017-03-01

    The integral benchmark test of JENDL-4.0 for U-233 systems using the continuous-energy Monte Carlo code MVP was conducted. The previous benchmark test was performed only for U-233 thermal-solution and fast metallic systems in the ICSBEP handbook. In this study, MVP input files were prepared for uninvestigated benchmark problems in the handbook, including compound thermal systems (mainly lattice systems), and an integral benchmark test was performed. The prediction accuracy of JENDL-4.0 was evaluated for effective multiplication factors (k_eff's) of the U-233 systems. As a result, a trend of underestimation was observed for all the categories of U-233 systems. In the benchmark test of ENDF/B-VII.1 for U-233 systems with the ICSBEP handbook, a decreasing trend of calculated k_eff values in association with the parameter ATFF (Above-Thermal Fission Fraction) is reported. The ATFF values were also calculated in this benchmark test of JENDL-4.0 and the same trend as ENDF/B-VII.1 was observed. A CD-ROM is attached as an appendix. (J.P.N.)
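The decreasing trend of calculated k_eff with ATFF amounts to fitting a slope of calculated-to-experimental (C/E) values against ATFF across benchmark cases. A minimal sketch with made-up numbers (not the JENDL-4.0 or ENDF/B-VII.1 results):

```python
import numpy as np

# Illustrative calculated/experimental k_eff ratios (C/E) paired with
# the above-thermal fission fraction (ATFF) of each benchmark case
atff = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.75])
c_over_e = np.array([0.999, 0.997, 0.994, 0.991, 0.988, 0.985])

# Least-squares slope of C/E against ATFF; a negative slope reproduces
# the reported decrease of calculated k_eff as ATFF grows
slope, intercept = np.polyfit(atff, c_over_e, 1)
```

A statistically significant negative slope would point to an energy-dependent bias in the U-233 evaluation, since ATFF indexes how much of the fission occurs above thermal energies.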

  3. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    Full Text Available For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a ‘learning by doing ethos,’ which permeates the entire curricula. This paper documents benchmarking of education for managing innovation. Using business simulation with Bachelor of Business, Year 3 learners in a business strategy class, learners explored through a simulated environment the following functional areas: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners, against which subsequent learners participating in online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  4. Benchmarking the internal combustion engine and hydrogen

    International Nuclear Information System (INIS)

    Wallace, J.S.

    2006-01-01

    The internal combustion engine is a cost-effective and highly reliable energy conversion technology. Exhaust emission regulations introduced in the 1970s triggered extensive research and development that has significantly improved in-use fuel efficiency and dramatically reduced exhaust emissions. The current level of gasoline vehicle engine development is highlighted and representative emissions and efficiency data are presented as benchmarks. The use of hydrogen fueling for IC engines has been investigated over many decades and the benefits and challenges arising are well-known. The current state of hydrogen-fueled engine development will be reviewed and evaluated against gasoline-fueled benchmarks. The prospects for further improvements to hydrogen-fueled IC engines will be examined. While fuel cells are projected to offer greater energy efficiency than IC engines and zero emissions, the availability of fuel cells in quantity at reasonable cost is a barrier to their widespread adoption for the near future. In their current state of development, hydrogen-fueled IC engines are an effective technology to create demand for hydrogen fueling infrastructure until fuel cells become available in commercial quantities. During this transition period, hydrogen-fueled IC engines can achieve PZEV/ULSLEV emissions. (author)

  5. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  6. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC.

  7. International benchmarking of electricity transmission by regulators: A contrast between theory and practice?

    International Nuclear Information System (INIS)

    Haney, Aoife Brophy; Pollitt, Michael G.

    2013-01-01

    Benchmarking of electricity networks has a key role in sharing the benefits of efficiency improvements with consumers and ensuring regulated companies earn a fair return on their investments. This paper analyses and contrasts the theory and practice of international benchmarking of electricity transmission by regulators. We examine the literature relevant to electricity transmission benchmarking and discuss the results of a survey of 25 national electricity regulators. While new panel data techniques aimed at dealing with unobserved heterogeneity and the validity of the comparator group look intellectually promising, our survey suggests that they are in their infancy for regulatory purposes. In electricity transmission, relative to electricity distribution, choosing variables is particularly difficult, because of the large number of potential variables to choose from. Failure to apply benchmarking appropriately may negatively affect investors’ willingness to invest in the future. While few of our surveyed regulators acknowledge that regulatory risk is currently an issue in transmission benchmarking, many more concede it might be. In the meantime new regulatory approaches – such as those based on tendering, negotiated settlements, a wider range of outputs or longer term grid planning – are emerging and will necessarily involve a reduced role for benchmarking. -- Highlights: •We discuss how to benchmark electricity transmission. •We report survey results from 25 national energy regulators. •Electricity transmission benchmarking is more challenging than benchmarking distribution. •Many regulators concede benchmarking may raise capital costs. •Many regulators are considering new regulatory approaches

  8. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Full Text Available Tourism development plays an irreplaceable role in the regional policy of almost all countries, owing to its undeniable benefits for the local population in the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and consequently find themselves in a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination, and the quality of strategic planning and the resulting strategies is a key factor of competitiveness. Even though tourism is not a typical field for benchmarking methods, such approaches can be applied successfully. The paper focuses on a key phase of the benchmarking process: the search for suitable benchmarking partners. The partners are selected to meet general requirements that ensure the quality of strategies; following from this, specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies from regions in the Czech Republic, Slovakia and Great Britain, thereby validating the selected criteria in an international setting. In this way it makes it possible to identify the strengths and weaknesses of the selected strategies and, at the same time, facilitates the discovery of suitable benchmarking partners.

  9. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  10. Thermal reactor benchmark tests on JENDL-2

    International Nuclear Information System (INIS)

    Takano, Hideki; Tsuchihashi, Keichiro; Yamane, Tsuyoshi; Akino, Fujiyoshi; Ishiguro, Yukio; Ido, Masaru.

    1983-11-01

    A group constant library for the thermal reactor standard nuclear design code system SRAC was produced using the evaluated nuclear data library JENDL-2. In addition, group constants for 235U were calculated from ENDF/B-V. Thermal reactor benchmark calculations were performed with the produced group constant library. The selected benchmark cores are two water-moderated lattices (TRX-1 and 2), two heavy-water-moderated cores (DCA and ETA-1), two graphite-moderated cores (SHE-8 and 13), and eight critical experiments for criticality safety. The effective multiplication factors and lattice cell parameters were calculated and compared with the experimental values. The results are summarized as follows. (1) Effective multiplication factors: the results with JENDL-2 are considerably improved in comparison with those from ENDF/B-IV; the best agreement is obtained using JENDL-2 together with ENDF/B-V data (for 235U only). (2) Lattice cell parameters: for rho28 (the ratio of epithermal to thermal 238U captures) and C* (the ratio of 238U captures to 235U fissions), the values calculated with JENDL-2 are in good agreement with the experimental values. The delta28 (the ratio of 238U to 235U fissions) is overestimated, as was also found for the fast reactor benchmarks. The rho02 (the ratio of epithermal to thermal 232Th captures) calculated with JENDL-2 or ENDF/B-IV is considerably underestimated. The functions of the SRAC system have continued to be extended according to the needs of its users; a brief description of the extended parts of the SRAC system, together with the input specification, is given in Appendix B. (author)

  11. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  12. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    In this paper the specification of the first phase (depletion calculations) of the WWER-1000 Burnup Credit Benchmark is given. The second phase, criticality calculations for the WWER-1000 fuel pin cell, will be specified after evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field (Author)

  13. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...

  14. Benchmarking Data Sets for the Evaluation of Virtual Ligand Screening Methods: Review and Perspectives.

    Science.gov (United States)

    Lagarde, Nathalie; Zagury, Jean-François; Montes, Matthieu

    2015-07-27

    Virtual screening methods are commonly used nowadays in drug discovery processes. However, to ensure their reliability, they have to be carefully evaluated. The evaluation of these methods is often carried out retrospectively, notably by studying the enrichment of benchmarking data sets. To this end, numerous benchmarking data sets have been developed over the years, and the resulting improvements have led to the availability of high-quality benchmarking data sets. However, some points still have to be considered in the selection of the active compounds, decoys, and protein structures to obtain optimal benchmarking data sets.

  15. Benchmarking of the FENDL-3 Neutron Cross-section Data Starter Library for Fusion Applications

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, U., E-mail: ulrich.fischer@kit.edu [Association KIT-Euratom, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Angelone, M. [Associazione ENEA-Euratom, ENEA Fusion Division, Via E. Fermi 27, I-00044 Frascati (Italy); Bohm, T. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States); Kondo, K. [Association KIT-Euratom, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Konno, C. [Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195 (Japan); Sawan, M. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States); Villari, R. [Associazione ENEA-Euratom, ENEA Fusion Division, Via E. Fermi 27, I-00044 Frascati (Italy); Walker, B. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States)

    2014-06-15

    This paper summarizes the benchmark analyses performed in a joint effort of ENEA (Italy), JAEA (Japan), KIT (Germany), and the University of Wisconsin (USA) on a computational ITER benchmark and a series of 14 MeV neutron benchmark experiments. The computational benchmark revealed a modest increase of the neutron flux levels in the deep penetration regions and a substantial increase of the gas production in steel components. The comparison to experimental results showed good agreement with no substantial differences between FENDL-3.0 and FENDL-2.1 for most of the responses. In general, FENDL-3 shows an improved performance for fusion neutronics applications.

  16. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump (GHP) programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry; however, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that three factors are critical to the success of utility GHP marketing programs: (1) top-management commitment to marketing; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  17. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments are planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using materials properties for stainless steel and aluminum

  18. Concrete benchmark experiment: ex-vessel LWR surveillance dosimetry

    International Nuclear Information System (INIS)

    Ait Abderrahim, H.; D'Hondt, P.; Oeyen, J.; Risch, P.; Bioux, P.

    1993-09-01

    The analysis of DOEL-1 in-vessel and ex-vessel neutron dosimetry, using the DOT 3.5 Sn code coupled with the VITAMIN-C cross-section library, showed the same C/E values for different detectors at the surveillance capsule and ex-vessel cavity positions. These results seem to contradict those obtained in several benchmark experiments (PCA, PSF, VENUS...) using the same computational tools: there, a strongly decreasing radial trend in C/E was observed, partly explained by the overestimation of iron inelastic scattering. The flat trend seen in DOEL-1 could be explained by compensating errors in the calculation, such as backscattering from the concrete walls outside the cavity. The 'Concrete Benchmark' experiment has been designed to judge the ability of these calculational methods to treat backscattering. This paper describes the 'Concrete Benchmark' experiment, the measured and computed neutron dosimetry results, and their comparison. This preliminary analysis seems to indicate an overestimation of the backscattering effect in the calculations. (authors). 5 figs., 1 tab., 7 refs

  19. Closed-loop neuromorphic benchmarks

    CSIR Research Space (South Africa)

    Stewart, TC

    2015-11-01

    Full Text Available Closed-loop Neuromorphic Benchmarks. Terrence C. Stewart, Travis DeWolf and Chris Eliasmith (Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada); Ashley Kleinhans (Mobile Intelligent Autonomous Systems group, Council for Scientific and Industrial Research, Pretoria, South Africa). Submitted to Frontiers in Neuroscience. Correspondence: Terrence C. Stewart, Centre...

  20. Investible benchmarks & hedge fund liquidity

    OpenAIRE

    Freed, Marc S; McMillan, Ben

    2011-01-01

    A lack of commonly accepted benchmarks for hedge fund performance has permitted hedge fund managers to attribute to skill returns that may actually accrue from market risk factors and illiquidity. Recent innovations in hedge fund replication permit us to estimate the extent of this misattribution. Using an option-based model, we find evidence that the value of the liquidity options that investors implicitly grant managers when they invest may account for part or even all of hedge fund returns. C...

  1. SKaMPI: A Comprehensive Benchmark for Public Benchmarking of MPI

    Directory of Open Access Journals (Sweden)

    Ralf Reussner

    2002-01-01

    Full Text Available The main objective of the MPI communication library is to enable portable parallel programming with high performance within the message-passing paradigm. Since the MPI standard has no associated performance model, and makes no performance guarantees, comprehensive, detailed and accurate performance figures for different hardware platforms and MPI implementations are important for the application programmer, both for understanding and possibly improving the behavior of a given program on a given platform, as well as for assuring a degree of predictable behavior when switching to another hardware platform and/or MPI implementation. We term this latter goal performance portability, and address the problem of attaining performance portability by benchmarking. We describe the SKaMPI benchmark which covers a large fraction of MPI, and incorporates well-accepted mechanisms for ensuring accuracy and reliability. SKaMPI is distinguished among other MPI benchmarks by an effort to maintain a public performance database with performance data from different hardware platforms and MPI implementations.
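
    The SKaMPI record above emphasizes mechanisms for ensuring measurement accuracy and reliability. One widely used mechanism of that kind is adaptive repetition: repeating a timed operation until the standard error of the mean falls below a target fraction of the mean. The sketch below is a generic, hypothetical illustration of that idea in Python (the function name `benchmark` and all parameters are invented here); it is not SKaMPI's actual algorithm, which measures MPI operations.

```python
import time
import statistics

def benchmark(fn, rel_err=0.05, min_runs=5, max_runs=100):
    """Time fn() repeatedly until the standard error of the mean is
    below rel_err * mean, or until max_runs is reached.

    Generic sketch of an adaptive-repetition accuracy mechanism; not
    SKaMPI's implementation. Returns (mean_time_seconds, num_runs).
    """
    times = []
    for _ in range(max_runs):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
        if len(times) >= min_runs:
            mean = statistics.mean(times)
            sem = statistics.stdev(times) / len(times) ** 0.5
            if sem <= rel_err * mean:
                break  # measurement is stable enough to stop early
    return statistics.mean(times), len(times)

mean_t, n_runs = benchmark(lambda: sum(range(10000)))
```

    The early-stop criterion is what keeps fast, stable operations cheap to measure while noisy ones automatically receive more repetitions.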

  2. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology, and atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  3. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology, and atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  4. Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves

    Science.gov (United States)

    Aitchison, Lindsay; McFarland, Shane

    2012-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but has largely deferred development of extravehicular activity (EVA) glove designs, accepting the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, leading to the establishment of standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Designs, ILC Dover, and David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test

  5. Benchmarking road safety performance by grouping local territories : a study in The Netherlands.

    NARCIS (Netherlands)

    Aarts, L.T. & Houwing, S.

    2015-01-01

    The method of benchmarking provides an opportunity to learn from better performing territories to improve the effectiveness and efficiency of activities in a particular field of interest. Such a field of interest could be road safety. Road safety benchmarking can include several indicators, ranging

  6. Benchmarking in Thoracic Surgery. Third Edition.

    Science.gov (United States)

    Freixinet Gilart, Jorge; Varela Simó, Gonzalo; Rodríguez Suárez, Pedro; Embún Flor, Raúl; Rivas de Andrés, Juan José; de la Torre Bravos, Mercedes; Molins López-Rodó, Laureano; Pac Ferrer, Joaquín; Izquierdo Elena, José Miguel; Baschwitz, Benno; López de Castro, Pedro E; Fibla Alfara, Juan José; Hernando Trancho, Florentino; Carvajal Carrasco, Ángel; Canalís Arrayás, Emili; Salvatierra Velázquez, Ángel; Canela Cardona, Mercedes; Torres Lanzas, Juan; Moreno Mata, Nicolás

    2016-04-01

    Benchmarking entails continuous comparison of efficacy and quality among products and activities, with the primary objective of achieving excellence. The objective of this study was to analyze the results of benchmarking performed in 2013 on clinical practices undertaken in 2012 in 17 Spanish thoracic surgery units. Study data were obtained from the basic minimum data set for hospitalization registered in 2012. Data from hospital discharge reports were submitted by the participating groups, but staff from the corresponding departments did not intervene in data collection. Study cases all involved hospital discharges recorded at the participating sites. Episodes included were respiratory surgery (Major Diagnostic Category 04, Surgery) and those of the thoracic surgery unit. Cases were labelled using codes from the International Classification of Diseases, 9th revision, Clinical Modification. The refined diagnosis-related groups classification was used to evaluate differences in severity and complexity of cases. General parameters (number of cases, mean stay, complications, readmissions, mortality, and activity) varied widely among the participating groups, as did specific interventions (lobectomy, pneumonectomy, atypical resections, and treatment of pneumothorax). As in previous editions, practices among participating groups varied considerably. Several areas for improvement emerge: admission processes need to be standardized to avoid urgent admissions and to improve pre-operative care, and hospital discharges should be streamlined and discharge reports improved by including all procedures and complications. Some units have parameters that deviate excessively from the norm, and these sites need to review their processes in depth. Coding of diagnoses and comorbidities is another area where improvement is needed. Copyright © 2015 SEPAR. Published by Elsevier España. All rights reserved.

  7. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  8. HS06 benchmark for an ARM server

    International Nuclear Information System (INIS)

    Kluth, Stefan

    2014-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  9. Policy Analysis of the English Graduation Benchmark in Taiwan

    Science.gov (United States)

    Shih, Chih-Min

    2012-01-01

    To nudge students to study English and to improve their English proficiency, many universities in Taiwan have imposed an English graduation benchmark on their students. This article reviews this policy, using the theoretic framework for education policy analysis proposed by Haddad and Demsky (1995). The author presents relevant research findings,…

  10. Evaluation of the effectiveness of the National Benchmarking ...

    African Journals Online (AJOL)

    Water shortages, public demonstrations and lack of service delivery have plagued many South African water services authorities (WSAs) for a number of years. From 2004–2007 the National Benchmarking Initiative (NBI) was implemented to improve the performance, efficiency and sustainability of WSAs. The current study ...

  11. Policy analysis of the English graduation benchmark in Taiwan ...

    African Journals Online (AJOL)

    To nudge students to study English and to improve their English proficiency, many universities in Taiwan have imposed an English graduation benchmark on their students. This article reviews this policy, using the theoretic framework for education policy analysis proposed by Haddad and Demsky (1995). The author ...

  12. Evaluation of the effectiveness of the National Benchmarking ...

    African Journals Online (AJOL)

    ABSTRACT. Water shortages, public demonstrations and lack of service delivery have plagued many South African water services authorities (WSAs) for a number of years. From 2004–2007 the National Benchmarking Initiative (NBI) was implemented to improve the performance, efficiency and sustainability of WSAs.

  13. REVISED STREAM CODE AND WASP5 BENCHMARK

    International Nuclear Information System (INIS)

    Chen, K

    2005-01-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one-dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long-duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code, a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked against the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by WASP5 agreed with the measurements within ±20.0%, and the transport times of the concentration peak agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls
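
    The record above concerns numerical solutions of the one-dimensional advective transport equation, dC/dt + u dC/dx = 0, and the spurious oscillations that low-quality discretizations can introduce. The sketch below is a generic illustration of a monotone (oscillation-free) first-order upwind scheme for that equation; it reproduces neither the STREAM nor the WASP5 implementation, and the function name `advect_upwind` and all parameter values are invented for this example.

```python
import numpy as np

def advect_upwind(c, u, dx, dt, steps):
    """First-order upwind scheme for dC/dt + u*dC/dx = 0 with u > 0.

    Monotone for the CFL condition u*dt/dx <= 1: it creates no new
    extrema, so no spurious oscillations appear in the profile.
    Illustrative only; not the STREAM or WASP5 algorithm.
    """
    nu = u * dt / dx  # Courant number, must satisfy nu <= 1
    c = c.copy()
    for _ in range(steps):
        c[1:] = c[1:] - nu * (c[1:] - c[:-1])  # upwind difference
        c[0] = 0.0  # clean upstream boundary after the release ends
    return c

# A rectangular pollutant pulse advected 20 length units downstream.
x = np.linspace(0.0, 100.0, 201)                 # grid spacing dx = 0.5
c0 = np.where((x > 10) & (x < 30), 1.0, 0.0)     # initial slug release
c = advect_upwind(c0, u=1.0, dx=0.5, dt=0.25, steps=80)  # CFL = 0.5
```

    A centered or purely algebraic approximation would show over- and undershoots around the sharp pulse edges; the upwind scheme instead smears them (numerical diffusion) while conserving mass and moving the pulse's center of mass by exactly u*t.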

  14. Developing a benchmark for emotional analysis of music.

    Science.gov (United States)

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The field of music emotion recognition (MER) has expanded rapidly in the last decade, and many new methods and audio features have been developed to improve the performance of MER algorithms. However, comparing the performance of new methods is very difficult because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent-neural-network-based approaches combined with large feature sets work best for dynamic MER.
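
    Dynamic MER tasks of the kind described above score predictions against time-continuous valence/arousal annotations, commonly by computing an error per song and averaging across songs. The sketch below shows one such per-song RMSE metric; the helper `dynamic_mer_score` is a hypothetical illustration, not the official DEAM or MediaEval evaluation code.

```python
import numpy as np

def dynamic_mer_score(preds, truths):
    """Average per-song RMSE between predicted and annotated dynamic
    emotion curves (e.g. valence sampled at 2 Hz over a song).

    Hypothetical metric sketch; not the official DEAM evaluation.
    preds, truths: lists of equal-length 1-D arrays, one per song.
    """
    rmses = [np.sqrt(np.mean((p - t) ** 2)) for p, t in zip(preds, truths)]
    return float(np.mean(rmses))

# Two toy "songs" with four annotation frames each.
truths = [np.array([0.1, 0.2, 0.3, 0.4]), np.array([0.5, 0.5, 0.6, 0.6])]
perfect = dynamic_mer_score(truths, truths)          # 0.0 for exact match
biased = dynamic_mer_score([t + 0.5 for t in truths], truths)
```

    Averaging per song, rather than pooling all frames, keeps short excerpts from being drowned out by long songs; correlation-based scores per song are a common complement to RMSE in this setting.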

  15. International benchmark tests of the FENDL-1 Nuclear Data Library

    International Nuclear Information System (INIS)

    Fischer, U.

    1997-01-01

    An international benchmark validation task has been conducted to validate the fusion evaluated nuclear data library FENDL-1 through data tests against integral 14 MeV neutron experiments. The main objective of this task was to qualify the FENDL-1 working libraries for fusion applications and to elaborate recommendations for further data improvements. Several laboratories and institutions from the European Union, Japan, the Russian Federation and the US have contributed to the benchmark task. A large variety of existing integral 14 MeV benchmark experiments was analysed with the FENDL-1 working libraries for continuous-energy Monte Carlo and multigroup discrete ordinates calculations. Results of the benchmark analyses have been collected, discussed and evaluated. The major findings, conclusions and recommendations are presented in this paper. With regard to data quality, it is concluded that fusion nuclear data have reached a high confidence level with the available FENDL-1 data library; with few exceptions, this holds for the materials of highest importance for fusion reactor applications. As a result of the benchmark analyses, some existing deficiencies and discrepancies have been identified and are recommended for removal in the forthcoming FENDL-2 data file. (orig.)

  16. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. It is the second supplement to the original benchmark book, which was first published in February 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, published in December 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing those objectives are outlined in the original edition of the benchmark book (ANL-7416, February 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  17. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map (DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  18. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    Science.gov (United States)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  19. Local implementation of the Essence of Care benchmarks.

    Science.gov (United States)

    Jones, Sue

    To understand clinical practice benchmarking from the perspective of nurses working in a large acute NHS trust and to determine whether the nurses perceived that their commitment to Essence of Care led to improvements in care, the factors that influenced their role in the process and the organisational factors that influenced benchmarking. An ethnographic case study approach was adopted. Six themes emerged from the data. Two organisational issues emerged: leadership and the values and/or culture of the organisation. The findings suggested that the leadership ability of the Essence of Care link nurses and the value placed on this work by the organisation were key to the success of benchmarking. A model for successful implementation of the Essence of Care is proposed based on the findings of this study, which lends itself to testing by other organisations.

  20. Trends in Between-Country Health Equity in Sub-Saharan Africa from 1990 to 2011: Improvement, Convergence and Reversal

    Directory of Open Access Journals (Sweden)

    Jiajie Jin

    2016-06-01

    It is not clear whether between-country health inequity in Sub-Saharan Africa has been reduced over time due to economic development and increased foreign investments. We used the World Health Organization’s data about 46 nations in Sub-Saharan Africa to test if under-5 mortality rate (U5MR) and life expectancy (LE) converged or diverged from 1990 to 2011. We explored whether the standard deviation of selected health indicators decreased over time (i.e., sigma convergence), and whether the less developed countries moved toward the average level in the group (i.e., beta convergence). The variation of U5MR between countries became smaller from 1990 to 2001. Yet this sigma convergence trend did not continue after 2002. Life expectancy in Africa from 1990–2011 demonstrated a consistent convergence trend, even after controlling for initial differences of country-level factors. The lack of consistent convergence in U5MR partially resulted from the fact that countries with higher U5MR in 1990 eventually performed better than those countries with lower U5MRs in 1990, constituting a reversal in between-country health inequity. Thus, international aid agencies might consider reassessing their funding priorities regarding which countries to invest in, especially in the field of early childhood health.

  1. Trends in Between-Country Health Equity in Sub-Saharan Africa from 1990 to 2011: Improvement, Convergence and Reversal.

    Science.gov (United States)

    Jin, Jiajie; Liang, Di; Shi, Lu; Huang, Jiayan

    2016-06-22

    It is not clear whether between-country health inequity in Sub-Saharan Africa has been reduced over time due to economic development and increased foreign investments. We used the World Health Organization's data about 46 nations in Sub-Saharan Africa to test if under-5 mortality rate (U5MR) and life expectancy (LE) converged or diverged from 1990 to 2011. We explored whether the standard deviation of selected health indicators decreased over time (i.e., sigma convergence), and whether the less developed countries moved toward the average level in the group (i.e., beta convergence). The variation of U5MR between countries became smaller from 1990 to 2001. Yet this sigma convergence trend did not continue after 2002. Life expectancy in Africa from 1990-2011 demonstrated a consistent convergence trend, even after controlling for initial differences of country-level factors. The lack of consistent convergence in U5MR partially resulted from the fact that countries with higher U5MR in 1990 eventually performed better than those countries with lower U5MRs in 1990, constituting a reversal in between-country health inequity. Thus, international aid agencies might consider reassessing their funding priorities regarding which countries to invest in, especially in the field of early childhood health.
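    The sigma and beta convergence tests described in this abstract can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: the country values below are synthetic placeholders. Sigma convergence is checked as a shrinking cross-country standard deviation, and beta convergence as a negative slope when regressing the change in the indicator on its initial level.

```python
# Illustrative sketch of sigma and beta convergence tests on a panel of
# country-level health indicators. Values are synthetic placeholders.
import statistics

# u5mr[year] -> under-5 mortality rates across the same countries
u5mr = {
    1990: [180.0, 150.0, 120.0, 90.0, 60.0],
    2011: [110.0, 100.0, 85.0, 70.0, 55.0],
}

# Sigma convergence: cross-country dispersion shrinks over time.
sd_1990 = statistics.stdev(u5mr[1990])
sd_2011 = statistics.stdev(u5mr[2011])
sigma_convergence = sd_2011 < sd_1990

# Beta convergence: countries starting worse improve faster, i.e. the
# slope of (change vs. initial level) is negative.
initial = u5mr[1990]
change = [b - a for a, b in zip(u5mr[1990], u5mr[2011])]
n = len(initial)
mean_x = sum(initial) / n
mean_y = sum(change) / n
beta = sum((x - mean_x) * (y - mean_y) for x, y in zip(initial, change)) \
    / sum((x - mean_x) ** 2 for x in initial)
beta_convergence = beta < 0

print(f"sigma convergence: {sigma_convergence}, beta = {beta:.3f}")
```

    In the paper itself, the beta regression additionally controls for initial differences in country-level factors; the sketch above shows only the unconditional form.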

  2. Recent trends in the dispensing of 90-day-supply prescriptions at retail pharmacies: implications for improved convenience and access.

    Science.gov (United States)

    Liberman, Joshua N; Girdish, Charmaine

    2011-03-01

    Mail-service pharmacies offer consumers the convenience of prescriptions filled with a 90-day supply of medication. Unlike mail-service pharmacies, retail pharmacies traditionally dispensed maintenance medication prescriptions with a 30-day supply. However, the retail landscape changed in May 2008 with Walmart's announcement of an extension of its $4 Prescription Program to include 90-day-supply prescriptions. To evaluate recent changes in access to and use of 90-day-supply maintenance medications dispensed via retail pharmacy. As of the first quarter of 2007, the proportion of retail-dispensed maintenance medications with a 90-day supply (compared with all maintenance prescriptions dispensed) among Medicare Part D plans, self-insured employers, and private health plans was 5.1%, 5.1%, and 5.0%, respectively. As of December 2009, this ratio had risen to 8.0% for Medicare plans and 8.1% for commercial health plans; the ratio among employers had risen more modestly to 6.1%. Of particular interest and importance, the proportion increased similarly for brand and for generic medications. There has been substantial growth in 90-day prescriptions dispensed via retail pharmacy, a trend that is likely to continue as more insurance providers adopt compatible benefit designs. It is important to continue monitoring these trends and to identify opportunities to rigorously evaluate their impact on medication adherence and healthcare costs.

  3. Trends in Between-Country Health Equity in Sub-Saharan Africa from 1990 to 2011: Improvement, Convergence and Reversal

    Science.gov (United States)

    Jin, Jiajie; Liang, Di; Shi, Lu; Huang, Jiayan

    2016-01-01

    It is not clear whether between-country health inequity in Sub-Saharan Africa has been reduced over time due to economic development and increased foreign investments. We used the World Health Organization’s data about 46 nations in Sub-Saharan Africa to test if under-5 mortality rate (U5MR) and life expectancy (LE) converged or diverged from 1990 to 2011. We explored whether the standard deviation of selected health indicators decreased over time (i.e., sigma convergence), and whether the less developed countries moved toward the average level in the group (i.e., beta convergence). The variation of U5MR between countries became smaller from 1990 to 2001. Yet this sigma convergence trend did not continue after 2002. Life expectancy in Africa from 1990–2011 demonstrated a consistent convergence trend, even after controlling for initial differences of country-level factors. The lack of consistent convergence in U5MR partially resulted from the fact that countries with higher U5MR in 1990 eventually performed better than those countries with lower U5MRs in 1990, constituting a reversal in between-country health inequity. Thus, international aid agencies might consider reassessing their funding priorities regarding which countries to invest in, especially in the field of early childhood health. PMID:27338435

  4. Generalizable open source urban water portfolio simulation framework demonstrated using a multi-objective risk-based planning benchmark problem.

    Science.gov (United States)

    Trindade, B. C.; Reed, P. M.

    2017-12-01

    The growing access and reduced cost for computing power in recent years has promoted rapid development and application of multi-objective water supply portfolio planning. As this trend continues, there is a pressing need for flexible risk-based simulation frameworks and improved algorithm benchmarking for emerging classes of water supply planning and management problems. This work contributes the Water Utilities Management and Planning (WUMP) model: a generalizable and open source simulation framework designed to capture how water utilities can minimize operational and financial risks by regionally coordinating planning and management choices, i.e. making more efficient and coordinated use of restrictions, water transfers and financial hedging combined with possible construction of new infrastructure. We introduce the WUMP simulation framework as part of a new multi-objective benchmark problem for planning and management of regionally integrated water utility companies. In this problem, a group of fictitious water utilities seek to balance the use of the aforementioned reliability-driven actions (e.g., restrictions, water transfers and infrastructure pathways) and their inherent financial risks. Several traits of this problem make it ideal for a benchmark problem, namely the presence of (1) strong non-linearities and discontinuities in the Pareto front caused by the step-wise nature of the decision making formulation and by the abrupt addition of storage through infrastructure construction, (2) noise due to the stochastic nature of the streamflows and water demands, and (3) non-separability resulting from the cooperative formulation of the problem, in which decisions made by one stakeholder may substantially impact others. Both the open source WUMP simulation framework and its demonstration in a challenging benchmarking example hold value for promoting broader advances in urban water supply portfolio planning for regions confronting change.

  5. Trends in the Quality of Treatment for Patients With Intact Cervical Cancer in the United States, 1999 Through 2011

    International Nuclear Information System (INIS)

    Smith, Grace L.; Jiang, Jing; Giordano, Sharon H.; Meyer, Larissa A.; Eifel, Patricia J.

    2015-01-01

    Purpose: High-quality treatment for intact cervical cancer requires external radiation therapy, brachytherapy, and chemotherapy, carefully sequenced and completed without delays. We sought to determine how frequently current treatment meets quality benchmarks and whether new technologies have influenced patterns of care. Methods and Materials: By searching diagnosis and procedure claims in MarketScan, an employment-based health care claims database, we identified 1508 patients with nonmetastatic, intact cervical cancer treated from 1999 to 2011, who were <65 years of age and received >10 fractions of radiation. Treatments received were identified using procedure codes and compared with 3 quality benchmarks: receipt of brachytherapy, receipt of chemotherapy, and radiation treatment duration not exceeding 63 days. The Cochran-Armitage test was used to evaluate temporal trends. Results: Seventy-eight percent of patients (n=1182) received brachytherapy, with brachytherapy receipt stable over time (Cochran-Armitage P-trend = .15). Among patients who received brachytherapy, 66% had high-dose-rate and 34% had low-dose-rate treatment, although use of high-dose-rate brachytherapy steadily increased to 75% by 2011 (P-trend < .001). Eighteen percent of patients (n=278) received intensity modulated radiation therapy (IMRT), and IMRT receipt increased to 37% by 2011 (P-trend < .001). Only 2.5% of patients (n=38) received IMRT in the setting of brachytherapy omission. Overall, 79% of patients (n=1185) received chemotherapy, and chemotherapy receipt increased to 84% by 2011 (P-trend < .001). Median radiation treatment duration was 56 days (interquartile range, 47-65 days); however, duration exceeded 63 days in 36% of patients (n=543). Although 98% of patients received at least 1 benchmark treatment, only 44% received treatment that met all 3 benchmarks. With more stringent indicators (brachytherapy, ≥4 chemotherapy cycles, and duration not exceeding 56 days), only 25

  6. Recent trends in robot-assisted therapy environments to improve real-life functional performance after stroke

    OpenAIRE

    Johnson, Michelle J

    2006-01-01

    Abstract Upper and lower limb robotic tools for neuro-rehabilitation are effective in reducing motor impairment but they are limited in their ability to improve real world function. There is a need to improve functional outcomes after robot-assisted therapy. Improvements in the effectiveness of these environments may be achieved by incorporating into their design and control strategies important elements key to inducing motor learning and cerebral plasticity such as mass-practice, feedback, t...

  7. Impact of quantitative feedback and benchmark selection on radiation use by cardiologists performing cardiac angiography

    International Nuclear Information System (INIS)

    Smith, I. R.; Cameron, J.; Brighouse, R. D.; Ryan, C. M.; Foster, K. A.; Rivers, J. T.

    2013-01-01

    Audit of and feedback on both group and individual data provided immediately after the point of care and compared with realistic benchmarks of excellence have been demonstrated to drive change. This study sought to evaluate the impact of immediate benchmarked quantitative case-based performance feedback on the clinical practice of cardiologists practicing at a private hospital in Brisbane, Australia. The participating cardiologists were assigned to one of two groups: Group 1 received patient and procedural details for review and Group 2 received Group 1 data plus detailed radiation data relating to the procedures and comparative benchmarks. In Group 2, Linear-by-Linear Association analysis suggests a link between change in radiation use and initial radiation dose category (p = 0.014), with only those initially 'challenged' by the benchmarks showing improvement. Those not 'challenged' by the benchmarks deteriorated in performance, with those starting well below the benchmarks showing the greatest increase in radiation use. Conversely, those blinded to their radiation use (Group 1) showed a general improvement in radiation use throughout the study, with those performing initially close to the benchmarks showing the greatest improvement. This study shows that the use of non-challenging benchmarks in case-based radiation risk feedback does not promote a reduction in radiation use; indeed, it may contribute to increased doses. Paradoxically, cardiologists who are aware of performance monitoring but blinded to individual case data appear to maintain, if not reduce, their radiation use. (authors)

  8. International trends in patient selection for elective endovascular aneurysm repair: sicker patients with safer anatomy leading to improved 1-year survival.

    Science.gov (United States)

    Fitridge, Robert A; Boult, Margaret; Mackillop, Clare; De Loryn, Tania; Barnes, Mary; Cowled, Prue; Thompson, Matthew M; Holt, Peter J; Karthikesalingam, Alan; Sayers, Robert D; Choke, Edward; Boyle, Jonathan R; Forbes, Thomas L; Novick, Teresa V

    2015-02-01

    To review the trends in patient selection and early death rate for patients undergoing elective endovascular repair of infrarenal abdominal aortic aneurysms (EVAR) in 3 countries. For this study, audit data from 4,163 patients who had undergone elective infrarenal EVAR were amalgamated. The data originated from Australia, Canada (Ontario), and England (London, Cambridge, and Leicester). Statistical analyses were undertaken to determine whether patient characteristics and early death rate varied between and within study groups and over time. The study design was retrospective analysis of data collected prospectively between 1999 and 2012. One-year survival improved over time (P = 0.0013). Canadian patients were sicker than those in Australia or England (P international comparison, several trends were noted including improved 1-year survival despite declining patient health (as measured by increasing ASA status). This may reflect greater knowledge regarding EVAR that centers from different countries have gained over the last decade and improved medical management of patients with aneurysmal disease.

  9. The Global Trends in the Alternative Energetics and Improvement of the State Policy in the Sphere of Fiscal Security: in Search for Equilibrium and Markets

    Directory of Open Access Journals (Sweden)

    Hnedina Kateryna V.

    2017-12-01

    Alternative energetics is an important component of the competitiveness and security of the national economy. Its rapid development over the past 10 years is caused by both the attempts of individual countries to maintain and strengthen their competitive advantage in the world markets and the efforts of international organizations (UN, IRENA, IEA) to consolidate different stakeholders to achieve energy and fiscal security, protection of the environment and improvement of climate conditions. The article is aimed at generalizing global trends in alternative energetics in the context of development of the State policy in the sphere of fiscal security. A brief overview of the latest trends in alternative energetics development, most of which focus on identifying the basic sectoral trends, has been provided. However, the issues of fiscal security in the energy sector remain poorly researched, especially in terms of formation of a State policy that consolidates the interests of different groups of stakeholders. It has been determined that in the developed countries a significant growth of alternative energetics is caused by a consistent State policy on creation of conditions for the formation of effective branch markets and the solving of the so-called energy trilemma.

  10. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we saw potential solutions to some of our "top 10" issues, and (2) we gained an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes, and (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We also received feedback from some of our contractors/partners, who (1) expressed a desire to participate in our training and to provide feedback on procedures, and (2) welcomed the opportunity to provide feedback on working with NASA.

  11. NEACRP thermal fission product benchmark

    International Nuclear Information System (INIS)

    Halsall, M.J.; Taubman, C.J.

    1989-09-01

    The objective of the thermal fission product benchmark was to compare the range of fission product data in use at the present time. A simple homogeneous problem was set with 200 atoms H/1 atom U235, to be burnt up to 1000 days and then decay for 1000 days. The problem was repeated with 200 atoms H/1 atom Pu239, 20 atoms H/1 atom U235 and 20 atoms H/1 atom Pu239. There were ten participants and the submissions received are detailed in this report. (author)

  12. Benchmark neutron porosity log calculations

    International Nuclear Information System (INIS)

    Little, R.C.; Michael, M.; Verghese, K.; Gardner, R.P.

    1989-01-01

    Calculations have been made for a benchmark neutron porosity log problem with the general purpose Monte Carlo code MCNP and the specific purpose Monte Carlo code McDNL. For accuracy and timing comparison purposes the CRAY XMP and MicroVax II computers have been used with these codes. The CRAY has been used for an analog version of the MCNP code while the MicroVax II has been used for the optimized variance reduction versions of both codes. Results indicate that the two codes give the same results within calculated standard deviations. Comparisons are given and discussed for accuracy (precision) and computation times for the two codes
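    The statement that the two codes "give the same results within calculated standard deviations" corresponds to a simple consistency test on paired Monte Carlo tallies: the difference between two estimates should be small relative to their combined statistical uncertainty. A minimal sketch of that test follows; the tally values are invented placeholders, not actual MCNP or McDNL output.

```python
# Consistency check for two Monte Carlo estimates of the same quantity:
# they "agree within standard deviations" if their difference does not
# exceed n_sigma times the combined 1-sigma statistical uncertainty.
import math

def agree_within_sigma(val_a, sd_a, val_b, sd_b, n_sigma=2.0):
    combined = math.sqrt(sd_a ** 2 + sd_b ** 2)
    return abs(val_a - val_b) <= n_sigma * combined

# Hypothetical detector responses from two codes with 1-sigma uncertainties.
print(agree_within_sigma(0.0451, 0.0009, 0.0458, 0.0011))  # True
```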

  13. Benchmarking organic mixed conductors for transistors

    KAUST Repository

    Inal, Sahika; Malliaras, George G.; Rivnay, Jonathan

    2017-01-01

    Organic mixed conductors have garnered significant attention in applications from bioelectronics to energy storage/generation. Their implementation in organic transistors has led to enhanced biosensing, neuromorphic function, and specialized circuits. While a narrow class of conducting polymers continues to excel in these new applications, materials design efforts have accelerated as researchers target new functionality, processability, and improved performance/stability. Materials for organic electrochemical transistors (OECTs) require both efficient electronic transport and facile ion injection in order to sustain high capacity. In this work, we show that the product of the electronic mobility and volumetric charge storage capacity (µC*) is the materials/system figure of merit; we use this framework to benchmark and compare the steady-state OECT performance of ten previously reported materials. This product can be independently verified and decoupled to guide materials design and processing. OECTs can therefore be used as a tool for understanding and designing new organic mixed conductors.

  14. Benchmarking organic mixed conductors for transistors

    KAUST Repository

    Inal, Sahika

    2017-11-20

    Organic mixed conductors have garnered significant attention in applications from bioelectronics to energy storage/generation. Their implementation in organic transistors has led to enhanced biosensing, neuromorphic function, and specialized circuits. While a narrow class of conducting polymers continues to excel in these new applications, materials design efforts have accelerated as researchers target new functionality, processability, and improved performance/stability. Materials for organic electrochemical transistors (OECTs) require both efficient electronic transport and facile ion injection in order to sustain high capacity. In this work, we show that the product of the electronic mobility and volumetric charge storage capacity (µC*) is the materials/system figure of merit; we use this framework to benchmark and compare the steady-state OECT performance of ten previously reported materials. This product can be independently verified and decoupled to guide materials design and processing. OECTs can therefore be used as a tool for understanding and designing new organic mixed conductors.
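    As a worked illustration of the µC* figure of merit described above, candidate mixed conductors can be ranked by the product of electronic mobility and volumetric capacitance. The material names and property values below are invented placeholders, not data from the paper.

```python
# Ranking hypothetical OECT materials by the µC* figure of merit:
# electronic mobility (cm^2 V^-1 s^-1) times volumetric capacitance
# (F cm^-3). A higher product implies larger achievable steady-state
# transconductance for a given channel geometry and bias.
materials = {
    "polymer-A": {"mobility_cm2_Vs": 1.2, "cstar_F_cm3": 40.0},
    "polymer-B": {"mobility_cm2_Vs": 0.3, "cstar_F_cm3": 220.0},
    "polymer-C": {"mobility_cm2_Vs": 2.0, "cstar_F_cm3": 25.0},
}

def figure_of_merit(props):
    return props["mobility_cm2_Vs"] * props["cstar_F_cm3"]

ranked = sorted(materials, key=lambda m: figure_of_merit(materials[m]),
                reverse=True)
print(ranked)
```

    Note how the ranking rewards a balance of the two properties: the hypothetical polymer-B leads despite its modest mobility, because its large volumetric capacitance dominates the product.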

  15. Benchmarking routine psychological services: a discussion of challenges and methods.

    Science.gov (United States)

    Delgadillo, Jaime; McMillan, Dean; Leach, Chris; Lucock, Mike; Gilbody, Simon; Wood, Nick

    2014-01-01

    Policy developments in recent years have led to important changes in the level of access to evidence-based psychological treatments. Several methods have been used to investigate the effectiveness of these treatments in routine care, with different approaches to outcome definition and data analysis. To present a review of challenges and methods for the evaluation of evidence-based treatments delivered in routine mental healthcare. This is followed by a case example of a benchmarking method applied in primary care. High, average and poor performance benchmarks were calculated through a meta-analysis of published data from services working under the Improving Access to Psychological Therapies (IAPT) Programme in England. Pre-post treatment effect sizes (ES) and confidence intervals were estimated to illustrate a benchmarking method enabling services to evaluate routine clinical outcomes. High, average and poor performance ES for routine IAPT services were estimated to be 0.91, 0.73 and 0.46 for depression (using PHQ-9) and 1.02, 0.78 and 0.52 for anxiety (using GAD-7). Data from one specific IAPT service exemplify how to evaluate and contextualize routine clinical performance against these benchmarks. The main contribution of this report is to summarize key recommendations for the selection of an adequate set of psychometric measures, the operational definition of outcomes, and the statistical evaluation of clinical performance. A benchmarking method is also presented, which may enable a robust evaluation of clinical performance against national benchmarks. Some limitations concerned significant heterogeneity among data sources, and wide variations in ES and data completeness.
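    To make the benchmarking method concrete, here is a minimal sketch in Python, not taken from the paper: it computes an uncontrolled pre-post effect size for a hypothetical service using one common definition (mean change divided by the standard deviation of pre-treatment scores; the authors' exact formula may differ) and places it against the published depression benchmarks of 0.46, 0.73 and 0.91. The PHQ-9 scores are invented placeholders.

```python
# Benchmarking a service's routine outcomes against published effect-size
# (ES) benchmarks for depression (poor 0.46, average 0.73, high 0.91).
# Patient scores are synthetic; the ES definition is one common choice.
import statistics

pre = [14, 9, 18, 12, 21, 7, 16, 11]   # PHQ-9 at intake (placeholder)
post = [10, 6, 13, 9, 15, 6, 11, 8]    # PHQ-9 at discharge (placeholder)

# Uncontrolled pre-post effect size: mean change / SD of pre scores.
mean_change = statistics.mean(pre) - statistics.mean(post)
es = mean_change / statistics.stdev(pre)

POOR, AVERAGE, HIGH = 0.46, 0.73, 0.91
if es >= HIGH:
    band = "high performance"
elif es >= AVERAGE:
    band = "average or better"
elif es >= POOR:
    band = "below average"
else:
    band = "poor performance"

print(f"ES = {es:.2f} -> {band}")
```

    A fuller implementation would also report a confidence interval around the ES, since the paper emphasizes that benchmarking judgments should account for estimation uncertainty and data completeness.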

  16. Reevaluation of the Jezebel Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-10

    Every nuclear engineering student is familiar with Jezebel, the homogeneous bare sphere of plutonium first assembled at Los Alamos in 1954-1955. The actual Jezebel assembly was neither homogeneous, nor bare, nor spherical; nor was it singular – there were hundreds of Jezebel configurations assembled. The Jezebel benchmark has been reevaluated for the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Logbooks, original drawings, mass accountability statements, internal reports, and published reports have been used to model four actual three-dimensional Jezebel assemblies with high fidelity. Because the documentation available today is often inconsistent, three major assumptions were made regarding plutonium part masses and dimensions. The first was that the assembly masses given in Los Alamos report LA-4208 (1969) were correct, and the second was that the original drawing dimension for the polar height of a certain major part was correct. The third assumption was that a change notice indicated on the original drawing was not actually implemented. This talk will describe these assumptions, the alternatives, and the implications. Since the publication of the 2013 ICSBEP Handbook, the actual masses of the major components have turned up. Our assumption regarding the assembly masses was proven correct, but we had the mass distribution incorrect. Work to incorporate the new information is ongoing, and this talk will describe the latest assessment.

  17. SCWEB, Scientific Workstation Evaluation Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Raffenetti, R C [Computing Services-Support Services Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439 (United States)

    1988-06-16

    1 - Description of program or function: The SCWEB (Scientific Workstation Evaluation Benchmark) software includes 16 programs which are executed in a well-defined scenario to measure the following performance capabilities of a scientific workstation: implementation of FORTRAN77, processor speed, memory management, disk I/O, monitor (or display) output, scheduling of processing (multiprocessing), and scheduling of print tasks (spooling). 2 - Method of solution: The benchmark programs are: DK1, DK2, and DK3, which do Fourier series fitting based on spline techniques; JC1, which checks the FORTRAN function routines which produce numerical results; JD1 and JD2, which solve dense systems of linear equations in double- and single-precision, respectively; JD3 and JD4, which perform matrix multiplication in single- and double-precision, respectively; RB1, RB2, and RB3, which perform substantial amounts of I/O processing on files other than the input and output files; RR1, which does intense single-precision floating-point multiplication in a tight loop; RR2, which initializes a 512x512 integer matrix in a manner which skips around in the address space rather than initializing each consecutive memory cell in turn; RR3, which writes alternating text buffers to the output file; RR4, which evaluates the timer routines and demonstrates that they conform to the specification; and RR5, which determines whether the workstation is capable of executing a 4-megabyte program.

  18. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  19. Benchmarking facilities providing care: An international overview of initiatives

    Science.gov (United States)

    Thonon, Frédérique; Watson, Jonathan; Saghatchian, Mahasti

    2015-01-01

    We performed a literature review of existing benchmarking projects of health facilities to explore (1) the rationales for those projects, (2) the motivation for health facilities to participate, (3) the indicators used and (4) the success and threat factors linked to those projects. We studied both peer-reviewed and grey literature. We examined 23 benchmarking projects of different medical specialities. The majority of projects used a mix of structure, process and outcome indicators. For some projects, participants had a direct or indirect financial incentive to participate (such as reimbursement by Medicaid/Medicare or litigation costs related to quality of care). A positive impact was reported for most projects, mainly in terms of improvement of practice and adoption of guidelines and, to a lesser extent, improvement in communication. Only 1 project reported positive impact in terms of clinical outcomes. Success factors and threats are linked to both the benchmarking process (such as organisation of meetings, link with existing projects) and indicators used (such as adjustment for diagnostic-related groups). The results of this review will help coordinators of a benchmarking project to set it up successfully. PMID:26770800

  20. Household trends in access to improved water sources and sanitation facilities in Vietnam and associated factors: findings from the Multiple Indicator Cluster Surveys, 2000–2011

    Science.gov (United States)

    Tuyet-Hanh, Tran Thi; Lee, Jong-Koo; Oh, Juhwan; Van Minh, Hoang; Ou Lee, Chul; Hoan, Le Thi; Nam, You-Seon; Long, Tran Khanh

    2016-01-01

    Background: Despite progress made by the Millennium Development Goal (MDG) number 7.C, Vietnam still faces challenges with regard to the provision of access to safe drinking water and basic sanitation. Objective: This paper describes household trends in access to improved water sources and sanitation facilities separately, and analyses factors associated with access to improved water sources and sanitation facilities in combination. Design: Secondary data from the Vietnam Multiple Indicator Cluster Survey in 2000, 2006, and 2011 were analyzed. Descriptive statistics and tests of significance describe trends over time in access to water and sanitation by location, demographic and socio-economic factors. Binary logistic regressions (2000, 2006, and 2011) describe associations between access to water and sanitation, and geographic, demographic, and socio-economic factors. Results: There have been some outstanding developments in access to improved water sources and sanitation facilities from 2000 to 2011. In 2011, the proportion of households with access to improved water sources and sanitation facilities reached 90% and 77%, respectively, meeting the 2015 MDG targets for safe drinking water and basic sanitation set at 88% and 75%, respectively. However, despite these achievements, in 2011, only 74% of households overall had access to combined improved drinking water and sanitation facilities. There were also stark differences between regions. In 2011, only 47% of households had access to both improved water and sanitation facilities in the Mekong River Delta compared with 94% in the Red River Delta. In 2011, households in urban compared to rural areas were more than twice as likely (odds ratio [OR]: 2.2; 95% confidence interval [CI]: 1.9–2.5) to have access to improved water and sanitation facilities in combination, and households in the highest compared with the lowest wealth quintile were over 40 times more likely (OR: 42.3; 95% CI: 29.8–60.0). Conclusions: More efforts are required to increase household access to combined improved water sources and sanitation facilities.
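
The odds ratios quoted above (with Wald-style confidence intervals) can be illustrated with a small worked example. The sketch below computes an OR and its 95% CI from a 2x2 contingency table; the counts are hypothetical and are not the MICS survey data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: urban vs rural households with/without combined access
or_, lo, hi = odds_ratio_ci(880, 120, 770, 230)
print(f"OR = {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")  # → OR = 2.19, 95% CI: 1.72-2.79
```

A multivariable logistic regression (as used in the paper) adjusts these ratios for the other covariates, but the single-predictor case reduces to exactly this table calculation.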

  3. Analysis of a molten salt reactor benchmark

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Bajpai, Anil; Degweker, S.B.

    2013-01-01

    This paper discusses results of our studies of an IAEA molten salt reactor (MSR) benchmark. The benchmark, proposed by Japan, involves burnup calculations of a single lattice cell of a MSR for burning plutonium and other minor actinides. We have analyzed this cell with in-house developed burnup codes BURNTRAN and McBURN. This paper also presents a comparison of the results of our codes and those obtained by the proposers of the benchmark. (author)

  4. Benchmarking i eksternt regnskab og revision

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    continuously in a benchmarking process. This chapter broadly examines the extent to which the benchmarking concept can reasonably be linked to external financial reporting and auditing. Section 7.1 deals with the external annual report, while Section 7.2 addresses the auditing area. The final section of the chapter summarizes...... the considerations on benchmarking in connection with both areas....

  5. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  6. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... the blade loading and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts: this shape is considered as a fixed parameter in the benchmarking...

  7. HPC Benchmark Suite NMx, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  8. High Energy Physics (HEP) benchmark program

    International Nuclear Information System (INIS)

    Yasu, Yoshiji; Ichii, Shingo; Yashiro, Shigeo; Hirayama, Hideo; Kokufuda, Akihiro; Suzuki, Eishin.

    1993-01-01

    High Energy Physics (HEP) benchmark programs are indispensable tools for selecting a suitable computer for an HEP application system. Industry-standard benchmark programs cannot be used for this kind of particular selection. The CERN and SSC benchmark suites are well-known HEP benchmark programs for this purpose. The CERN suite includes event reconstruction and event generator programs, while the SSC suite includes event generators. In this paper, we found that the results from these two suites are not consistent, and that the result from the industry benchmark does not agree with either of them. In addition, we describe a comparison of benchmark results using the EGS4 Monte Carlo simulation program with those from the two HEP benchmark suites; we found that the EGS4 result is not consistent with the other two. The industry-standard SPECmark values on various computer systems are not consistent with the EGS4 results either. Because of these inconsistencies, we point out the necessity of standardizing HEP benchmark suites. Also, an EGS4 benchmark suite should be developed for users of applications such as medical science, nuclear power plants, nuclear physics and high energy physics. (author)

  9. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic...... controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based...... on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  10. Clearance Prediction Methodology Needs Fundamental Improvement: Trends Common to Rat and Human Hepatocytes/Microsomes and Implications for Experimental Methodology.

    Science.gov (United States)

    Wood, F L; Houston, J B; Hallifax, D

    2017-11-01

    Although prediction of clearance using hepatocytes and liver microsomes has long played a decisive role in drug discovery, it is widely acknowledged that reliably accurate prediction is not yet achievable despite the predominance of hepatically cleared drugs. Physiologically mechanistic methodology tends to underpredict clearance by several fold, and empirical correction of this bias is confounded by imprecision across drugs. Understanding the causes of prediction uncertainty has been slow, possibly reflecting poor resolution of variables associated with donor source and experimental methods, particularly for the human situation. It has been reported that among published human hepatocyte predictions there was a tendency for underprediction to increase with increasing in vivo intrinsic clearance, suggesting an inherent limitation using this particular system. This implied an artifactual rate limitation in vitro, although preparative effects on cell stability and performance were not yet resolved from assay design limitations. Here, to resolve these issues further, we present an up-to-date and comprehensive examination of predictions from published rat as well as human studies (where n = 128 and 101 hepatocytes and n = 71 and 83 microsomes, respectively) to assess system performance more independently. We report a clear trend of increasing underprediction with increasing in vivo intrinsic clearance, which is similar both between species and between in vitro systems. Hence, prior concerns arising specifically from human in vitro systems may be unfounded and the focus of investigation in the future should be to minimize the potential in vitro assay limitations common to whole cells and subcellular fractions. Copyright © 2017 by The American Society for Pharmacology and Experimental Therapeutics.
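
The fold-bias discussed above is conventionally summarized as the average fold error (AFE), the geometric mean of the predicted/observed ratios; an AFE below 1 indicates systematic underprediction. A minimal sketch with hypothetical intrinsic clearance values (not data from this study):

```python
import math

def average_fold_error(predicted, observed):
    """Geometric-mean bias of predictions: AFE < 1 means underprediction."""
    logs = [math.log10(p / o) for p, o in zip(predicted, observed)]
    return 10 ** (sum(logs) / len(logs))

# Hypothetical intrinsic clearance values (mL/min/kg); note the underprediction
# worsening as observed in vivo clearance increases, as the trend above describes.
predicted = [10, 40, 90, 150]
observed  = [20, 120, 400, 900]
afe = average_fold_error(predicted, observed)
print(f"AFE = {afe:.2f} (i.e., ~{1/afe:.1f}-fold underprediction on average)")
```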

  11. Benchmarking of nuclear economics tools

    International Nuclear Information System (INIS)

    Moore, Megan; Korinny, Andriy; Shropshire, David; Sadhankar, Ramesh

    2017-01-01

    Highlights: • INPRO and GIF economic tools exhibited good alignment in total capital cost estimation. • Subtle discrepancies in the cost result from differences in financing and the fuel cycle assumptions. • A common set of assumptions was found to reduce the discrepancies to 1% or less. • Opportunities for harmonisation of economic tools exist. - Abstract: Benchmarking of the economics methodologies developed by the Generation IV International Forum (GIF) and the International Atomic Energy Agency’s International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO) was performed for three Generation IV nuclear energy systems. The Economic Modeling Working Group of GIF developed an Excel-based spreadsheet package, G4ECONS (Generation 4 Excel-based Calculation Of Nuclear Systems), to calculate the total capital investment cost (TCIC) and the levelised unit energy cost (LUEC). G4ECONS is sufficiently generic in the sense that it can accept the types of projected input, performance and cost data that are expected to become available for Generation IV systems through various development phases and that it can model both open and closed fuel cycles. The Nuclear Energy System Assessment (NESA) Economic Support Tool (NEST) was developed to enable an economic analysis using the INPRO methodology to easily calculate outputs including the TCIC, LUEC and other financial figures of merit including internal rate of return, return on investment and net present value. NEST is also Excel based and can be used to evaluate nuclear reactor systems using the open fuel cycle, MOX (mixed oxide) fuel recycling and closed cycles. A Super Critical Water-cooled Reactor system with an open fuel cycle and two Fast Reactor systems, one with a break-even fuel cycle and another with a burner fuel cycle, were selected for the benchmarking exercise. Published data on capital and operating costs were used for economics analyses using G4ECONS and NEST tools. Both G4ECONS and
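
Both tools compute the LUEC according to the standard levelised-cost definition: discounted lifetime costs divided by discounted lifetime generation. The sketch below shows that generic formula only; it is not the G4ECONS or NEST implementation, and all plant figures are hypothetical:

```python
def levelised_cost(capital, annual_costs, annual_energy, rate):
    """LUEC = (capital + discounted O&M/fuel costs) / discounted energy.
    annual_costs and annual_energy are per-year lists; rate is the discount rate."""
    disc_costs = capital + sum(c / (1 + rate) ** (t + 1)
                               for t, c in enumerate(annual_costs))
    disc_energy = sum(e / (1 + rate) ** (t + 1)
                      for t, e in enumerate(annual_energy))
    return disc_costs / disc_energy

# Hypothetical plant: $4000M overnight cost, $120M/yr O&M+fuel,
# 8 TWh/yr (in kWh) over a 40-year life, 5% discount rate.
luec = levelised_cost(4000e6, [120e6] * 40, [8e9] * 40, 0.05)
print(f"LUEC = {luec * 1000:.1f} $/MWh")  # → LUEC = 44.1 $/MWh
```

Differences in financing and fuel-cycle assumptions enter through `rate` and the cost streams, which is where the benchmark found the tools' subtle discrepancies arose.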

  12. BENCHMARKING WORKSHOPS AS A TOOL TO RAISE BUSINESS EXCELLENCE

    Directory of Open Access Journals (Sweden)

    Milos Jelic

    2011-03-01

    Full Text Available The annual competition for a national business excellence award is a good opportunity for participating organizations to demonstrate their practices, particularly those that enable them to excel. The national quality award competition in Serbia (and Montenegro), "OSKAR KVALITETA", started in 1995 but was limited to the competition cycle only. However, upon establishing the Fund for Quality Culture and Excellence (FQCE) in 2002, which took over the OSKAR KVALITETA model, several changes took place. OSKAR KVALITETA remained an annual competition in business excellence, but at the same time FQCE started to offer a much wider portfolio of services, including levels-of-excellence programs, assessment and self-assessment training courses, and benchmarking workshops. These benchmarking events have been hosted by award winners or other laureates of the OSKAR KVALITETA competition who demonstrated excellence with regard to some particular criterion and were thus in a position to share their practice with other organizations. In six years of organizing benchmarking workshops, FQCE has held 31 workshops covering the major part of the model's issues. The increasing level of participation in the workshops and the distinctly positive trend in participants' expressed satisfaction may serve as reliable indicators that the workshops have been effective in motivating people to think and move in the direction of business excellence.

  13. Recent trends in robot-assisted therapy environments to improve real-life functional performance after stroke

    Directory of Open Access Journals (Sweden)

    Johnson Michelle J

    2006-12-01

    Full Text Available Abstract Upper and lower limb robotic tools for neuro-rehabilitation are effective in reducing motor impairment, but they are limited in their ability to improve real-world function. There is a need to improve functional outcomes after robot-assisted therapy. Improvements in the effectiveness of these environments may be achieved by incorporating into their design and control strategies important elements key to inducing motor learning and cerebral plasticity, such as mass practice, feedback, task engagement, and complex problem solving. This special issue presents nine articles. Novel strategies covered in this issue encourage more natural movements through the use of virtual reality and real objects, and faster motor learning through the use of error feedback to guide acquisition of natural movements that are salient to real activities. In addition, several articles describe novel systems and techniques that use custom and commercial games combined with new low-cost robot systems and a humanoid robot to embody the "supervisory presence" of the therapy as possible solutions to exercise compliance in under-supervised environments such as the home.

  15. Common Nearest Neighbor Clustering—A Benchmark

    Directory of Open Access Journals (Sweden)

    Oliver Lemke

    2018-02-01

    Full Text Available Cluster analyses are often conducted with the goal to characterize an underlying probability density, for which the data-point density serves as an estimate for this probability density. We here test and benchmark the common nearest neighbor (CNN cluster algorithm. This algorithm assigns a spherical neighborhood R to each data point and estimates the data-point density between two data points as the number of data points N in the overlapping region of their neighborhoods (step 1. The main principle in the CNN cluster algorithm is cluster growing. This grows the clusters by sequentially adding data points and thereby effectively positions the border of the clusters along an iso-surface of the underlying probability density. This yields a strict partitioning with outliers, for which the cluster represents peaks in the underlying probability density—termed core sets (step 2. The removal of the outliers on the basis of a threshold criterion is optional (step 3. The benchmark datasets address a series of typical challenges, including datasets with a very high dimensional state space and datasets in which the cluster centroids are aligned along an underlying structure (Birch sets. The performance of the CNN algorithm is evaluated with respect to these challenges. The results indicate that the CNN cluster algorithm can be useful in a wide range of settings. Cluster algorithms are particularly important for the analysis of molecular dynamics (MD simulations. We demonstrate how the CNN cluster results can be used as a discretization of the molecular state space for the construction of a core-set model of the MD improving the accuracy compared to conventional full-partitioning models. The software for the CNN clustering is available on GitHub.
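
The three steps described above can be sketched compactly. The following is a simplified, illustrative implementation of common-nearest-neighbor clustering (the parameter names R and N follow the abstract; this is not the authors' published GitHub code):

```python
import math

def cnn_cluster(points, R, N):
    """Simplified common-nearest-neighbor (CNN) clustering sketch.
    Step 1: each point gets a spherical neighborhood of radius R; the density
    between two points is the number of neighbors they share within R.
    Step 2: clusters are grown by adding points density-connected to a member
    (within R of it and sharing >= N common neighbors).
    Step 3: points never assigned to a cluster remain outliers (label -1)."""
    n = len(points)
    nbrs = [{j for j in range(n)
             if j != i and math.dist(points[i], points[j]) <= R}
            for i in range(n)]
    labels = [-1] * n
    cluster = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack, members = [seed], []
        while stack:                       # cluster growing
            i = stack.pop()
            if labels[i] != -1:
                continue
            labels[i] = cluster
            members.append(i)
            for j in nbrs[i]:
                if labels[j] == -1 and len(nbrs[i] & nbrs[j]) >= N:
                    stack.append(j)
        if len(members) < 2:               # lone seed: keep as outlier
            for i in members:
                labels[i] = -1
        else:
            cluster += 1
    return labels

# Two dense blobs plus one remote outlier
pts = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1),
       (5, 5), (5.1, 5), (5, 5.1), (5.1, 5.1),
       (20, 20)]
print(cnn_cluster(pts, R=0.5, N=1))  # → [0, 0, 0, 0, 1, 1, 1, 1, -1]
```

Because membership requires shared neighbors rather than mere proximity, the cluster borders track an iso-surface of the data-point density, which is what yields the strict partitioning with outliers described in the abstract.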

  16. Benchmark calculation of subchannel analysis codes

    International Nuclear Information System (INIS)

    1996-02-01

    In order to evaluate the analysis capabilities of various subchannel codes used in the thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and major findings obtained by the calculations were as follows: (1) As for single-phase flow mixing experiments between two channels, the calculated results of water temperature distribution along the flow direction agreed with experimental results when turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) As for two-phase flow mixing experiments between two channels, in high water flow rate cases, the calculated distributions of air and water flows in each channel agreed well with the experimental results. In low water flow cases, on the other hand, the air mixing rates were underestimated. (3) As for two-phase flow mixing experiments among multi-channels, the calculated mass velocities at channel exit under steady-state conditions agreed with experimental values within about 10%. However, the predictive errors of exit qualities were as high as 30%. (4) As for critical heat flux (CHF) experiments, two different results were obtained. One code indicated that the calculated CHFs using the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that the CHFs were well predicted by using the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) As for droplet entrainment and deposition experiments, it was indicated that the predictive capability was significantly increased by improving correlations. On the other hand, a remarkable discrepancy between codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high quality cases, while another code overestimated the droplet flow rate and underestimated the liquid film flow rate in low quality cases. (J.P.N.)

  17. Evaluating multiple indices of agricultural water use efficiency and productivity to improve comparisons between sites and trends

    Science.gov (United States)

    Levy, M. C.

    2012-12-01

    in efficiency and productivity measures in different agricultural regions. Individual indices consistently over- or under- estimate trends in efficiency and productivity by their construction, and may provide inaccurate results in years with extreme climatic events, such as droughts. By treating multiple indices as an "ensemble" of measures, analogous to the treatment of multiple climate model predictions, this study quantifies likely "true" states of efficiency and productivity in the selected agricultural regions, and error in individual indices. While different individual indices are preferable at different scales, and relative to the quality of available input data, ensemble indices can be more reliably used in comparative study across different agricultural regions, and for prediction.

  18. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-06-01

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  19. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    Gilliam, D.M.; Briesmeister, J.F.

    1994-01-01

    A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235U, 239Pu, 238U, and 237Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a "light water" S(α,β) scattering kernel

  20. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    International Nuclear Information System (INIS)

    Bess, John D.; Fujimoto, Nozomu

    2014-01-01

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9 % and 2.7 % greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulation of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments

  1. Trends in Utility Green Pricing Programs (2005)

    Energy Technology Data Exchange (ETDEWEB)

    Bird, L.; Brown, E.

    2006-10-01

    This report presents year-end 2005 data on utility green pricing programs, and examines trends in consumer response and program implementation over time. The data in this report, which were obtained via a questionnaire distributed to utility green pricing program managers, can be used by utilities to benchmark the success of their green power programs.

  2. Study of the Continuous Improvement Trend for Health, Safety and Environmental Indicators, after Establishment of Integrated Management System (IMS) in a Pharmaceutical Industry in Iran.

    Science.gov (United States)

    Mariouryad, Pegah; Golbabaei, Farideh; Nasiri, Parvin; Mohammadfam, Iraj; Marioryad, Hossein

    2015-10-01

    Nowadays, organizations try to improve their services and consequently adopt management systems and standards, which have become key elements in various industries. One management system that has attracted attention in recent years is the Integrated Management System (IMS), which combines the quality, health, safety, and environment management systems. This study was conducted with the aim of evaluating the trend of improvement in health, safety, and environmental indicators after establishment of an integrated management system in a pharmaceutical industry in Iran. First, during several inspections of different parts of the plant, the relevant indicators were listed and then organized into the 3 domains of health, safety, and environment in the form of a questionnaire using Likert scaling. The weight of each index was obtained by averaging the viewpoints of 30 managers and related experts in the field. Moreover, by checking the documents and evidence for each of the 5 years covered by this study, the score of each indicator was determined; the weight and score of each index were then multiplied and the results analysed. Over the 5 years, the scores of the health-domain indicators increased from 161.99 to 202.23. In the safety domain, the score was 172.37 in the first year after establishment of the integrated management system and increased to 197.57 in the final year. Environmental-domain scores increased from 49.24 at the beginning of the program to 64.27 in the last year. Integrated management systems help organizations improve their programs to achieve their objectives. Although all trends in the health, safety, and environmental indicators in this study were positive, improvement was slow. It can therefore be suggested that the results of an annual evaluation be applied in planning activities for the years ahead.
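
The scoring scheme described (expert-derived weights multiplied by Likert scores and summed per domain) reduces to a weighted sum. A sketch with hypothetical weights and scores, not the study's actual indicators:

```python
def domain_score(indicators):
    """Sum of weight * Likert score over a domain's indicators."""
    return sum(weight * score for weight, score in indicators)

# (expert-averaged weight, Likert score 1-5) pairs for a hypothetical domain
health = [(8.5, 4), (6.0, 3), (10.0, 5), (4.5, 2)]
print(domain_score(health))  # → 111.0
```

Recomputing this sum for each year's documented scores gives the per-domain trend lines (e.g. 161.99 to 202.23 for health) that the study tracks.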

  3. Benchmarking passive transfer of immunity and growth in dairy calves.

    Science.gov (United States)

    Atkinson, D J; von Keyserlingk, M A G; Weary, D M

    2017-05-01

    Poor health and growth in young dairy calves can have lasting effects on their development and future production. This study benchmarked calf-rearing outcomes in a cohort of Canadian dairy farms, reported these findings back to producers and their veterinarians, and documented the results. A total of 18 Holstein dairy farms were recruited, all in British Columbia. Blood samples were collected from calves aged 1 to 7 d. We estimated serum total protein levels using digital refractometry, and failure of passive transfer (FPT) was defined as values below 5.2 g/dL. We estimated average daily gain (ADG) for preweaned heifers (1 to 70 d old) using heart-girth tape measurements, and analyzed early (≤35 d) and late (>35 d) growth separately. At first assessment, the average farm FPT rate was 16%. Overall, ADG was 0.68 kg/d, with early and late growth rates of 0.51 and 0.90 kg/d, respectively. Following delivery of the benchmark reports, all participants volunteered to undergo a second assessment. The majority (83%) made at least 1 change in their colostrum-management or milk-feeding practices, including increased colostrum at first feeding, reduced time to first colostrum, and increased initial and maximum daily milk allowances. The farms that made these changes experienced improved outcomes. On the 11 farms that made changes to improve colostrum feeding, the rate of FPT declined from 21 ± 10% before benchmarking to 11 ± 10% after making the changes. On the 10 farms that made changes to improve calf growth, ADG improved from 0.66 ± 0.09 kg/d before benchmarking to 0.72 ± 0.08 kg/d after making the management changes. Increases in ADG were greatest in the early milk-feeding period, averaging 0.13 kg/d higher than pre-benchmarking values for calves ≤35 d of age. Benchmarking specific outcomes associated with calf rearing can motivate producer engagement in calf care, leading to improved outcomes for calves on farms that apply relevant management changes.

  4. MoleculeNet: a benchmark for molecular machine learning.

    Science.gov (United States)

    Wu, Zhenqin; Ramsundar, Bharath; Feinberg, Evan N; Gomes, Joseph; Geniesse, Caleb; Pappu, Aneesh S; Leswing, Karl; Pande, Vijay

    2018-01-14

    Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets, making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large-scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high-quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than the choice of a particular learning algorithm.

  5. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    Science.gov (United States)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.

  6. Benchmarking the financial performance of local councils in Ireland

    Directory of Open Access Journals (Sweden)

    Robbins Geraldine

    2016-05-01

    It was over a quarter of a century ago that information from the financial statements was used to benchmark the efficiency and effectiveness of local government in the US. With the global adoption of New Public Management ideas, benchmarking practice spread to the public sector and has been employed to drive reforms aimed at improving performance and, ultimately, service delivery and local outcomes. The manner in which local authorities in OECD countries compare and benchmark their performance varies widely. The methodology developed in this paper to rate the relative financial performance of Irish city and county councils is adapted from an earlier assessment tool used to measure the financial condition of small cities in the US. Using our financial performance framework and the financial data in the audited annual financial statements of Irish local councils, we calculate composite scores for each of the thirty-four local authorities for the years 2007–13. This paper contributes composite scores that measure the relative financial performance of local councils in Ireland, as well as a full set of yearly results for a seven-year period in which local governments witnessed significant changes in their financial health. The benchmarking exercise is useful in highlighting those councils that, in relative financial performance terms, are the best/worst performers.

  7. A biosegmentation benchmark for evaluation of bioimage analysis methods

    Directory of Open Access Journals (Sweden)

    Kvilekval Kristian

    2009-11-01

    Background We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from the wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images, and each method has its own strengths and limitations. We hope that our benchmark resource will be of considerable help both to bioimaging researchers looking for novel image processing methods and to image processing researchers exploring the application of their methods to biology. Results Our benchmark consists of different classes of images and ground truth data, ranging in scale from the subcellular and cellular to the tissue level, each of which poses its own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion This online benchmark will facilitate integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.

  8. Benchmarking: A tool for conducting self-assessment

    International Nuclear Information System (INIS)

    Perkey, D.N.

    1992-01-01

    There is more information on nuclear plant performance available than can reasonably be assimilated and used effectively by plant management or personnel responsible for self-assessment. Also, it is becoming increasingly important that an effective self-assessment program use internal parameters not only to evaluate performance, but also to incorporate lessons learned from other plants. Because of the quantity of information available, it is important to focus efforts and resources on areas where safety or performance is a concern and where the most improvement can be realized. One of the techniques being used to accomplish this effectively is benchmarking. Benchmarking involves the use of various sources of information to self-identify a plant's strengths and weaknesses, identify which plants are strong performers in specific areas, evaluate what makes a top performer, and incorporate the success factors into existing programs. The formality with which benchmarking is implemented varies widely depending on the objective. It can be as simple as looking at a single indicator, such as systematic assessment of licensee performance (SALP) in engineering and technical support, and then surveying the top performers with specific questions. However, a more comprehensive approach may include the performance of a detailed benchmarking study. Both operational and economic indicators may be used in this type of evaluation. Some of the indicators that may be considered, and the limitations of each, are discussed.

  9. Benchmarking of radiological departments. Starting point for successful process optimization

    International Nuclear Information System (INIS)

    Busch, Hans-Peter

    2010-01-01

    Continuous optimization of the process of organization and medical treatment is part of the successful management of radiological departments. The focus of this optimization can be cost units such as CT and MRI or the radiological parts of total patient treatment. Key performance indicators for process optimization are cost-effectiveness, service quality and quality of medical treatment. The potential for improvements can be seen by comparison (benchmark) with other hospitals and radiological departments. Clear definitions of key data and criteria are absolutely necessary for comparability. There is currently little information in the literature regarding the methodology and application of benchmarks, especially from the perspective of radiological departments and case-based lump sums, even though benchmarking has frequently been applied to radiological departments by hospital management. The aim of this article is to describe and discuss systematic benchmarking as an effective starting point for successful process optimization. This includes the description of the methodology, recommendation of key parameters and discussion of the potential for cost-effectiveness analysis. The main focus of this article is cost-effectiveness (efficiency and effectiveness) with respect to cost units and treatment processes. (orig.)

  10. Sieve of Eratosthenes benchmarks for the Z8 FORTH microcontroller

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, R.

    1989-02-01

    This report presents benchmarks for the Z8 FORTH microcontroller system that ORNL uses extensively in proving concepts and developing prototype test equipment for the Smart House Project. The results are based on the sieve of Eratosthenes algorithm, a calculation used extensively to rate computer systems and programming languages. Three benchmark refinements are presented, each showing how the execution speed of a FORTH program can be improved by use of a particular optimization technique. The last version of the FORTH benchmark shows that optimization is worth the effort: it executes 20 times faster than the Gilbreaths' widely-published FORTH benchmark program. The National Association of Home Builders Smart House Project is a cooperative research and development effort being undertaken by American home builders and a number of major corporations serving the home building industry. This information is provided to help project participants use the Z8 FORTH prototyping microcontroller in developing Smart House concepts and equipment. The discussion is technical in nature and assumes some experience with microcontroller devices and the techniques used to develop software for them. 7 refs., 5 tabs.
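
    The sieve calculation referred to above is the classic Gilbreath byte-sieve benchmark, which repeatedly counts the 1899 odd primes represented by an 8190-element flag array. As a rough illustration of the kernel being timed (a sketch in Python rather than Z8 FORTH, since the original FORTH listings are not reproduced here):

    ```python
    import time

    def byte_sieve(size=8190):
        """One pass of the classic byte-sieve benchmark. flags[i] stands for
        the odd number 2*i + 3; striking out multiples leaves one flag set
        per odd prime. The canonical count for size=8190 is 1899."""
        flags = [True] * (size + 1)
        count = 0
        for i in range(size + 1):
            if flags[i]:
                prime = i + i + 3              # odd number represented by flags[i]
                for k in range(i + prime, size + 1, prime):
                    flags[k] = False           # strike out multiples of prime
                count += 1
        return count

    if __name__ == "__main__":
        start = time.perf_counter()
        for _ in range(10):                    # the benchmark repeats the pass
            primes = byte_sieve()
        elapsed = time.perf_counter() - start
        print(f"{primes} primes, 10 iterations in {elapsed:.3f} s")
    ```

    The optimization techniques discussed in the report (e.g. moving the inner strike-out loop into faster primitives) change only how fast this loop runs, not its result, which is what makes the prime count a convenient correctness check across benchmark variants.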

  11. QFD Based Benchmarking Logic Using TOPSIS and Suitability Index

    Directory of Open Access Journals (Sweden)

    Jaeho Cho

    2015-01-01

    Users' satisfaction with quality is key to the successful completion of a project and to decision-making on building design solutions. This study proposes QFD (quality function deployment)-based benchmarking logic of market products for building envelope solutions. The benchmarking logic is composed of QFD-TOPSIS and QFD-SI. The QFD-TOPSIS assessment model is able to evaluate users' preferences on building envelope solutions that are distributed in the market and allows knowledge to be gained quickly. TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) provides performance improvement criteria that help define users' target performance criteria. SI (Suitability Index) allows analysis of the suitability of a building envelope solution based on users' required performance criteria. In Stage 1 of the case study, QFD-TOPSIS was used to benchmark the performance criteria of market envelope products. In Stage 2, a QFD-SI assessment was performed after setting user performance targets. The results of this study contribute to confirming the feasibility of QFD-based benchmarking in the field of Building Envelope Performance Assessment (BEPA).
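
    The TOPSIS step of such a benchmarking logic can be sketched generically. The code below is a minimal illustration, not the paper's implementation: it ranks candidate products on weighted, vector-normalized criteria by relative closeness to the ideal solution, assuming for simplicity that every criterion is benefit-type (higher is better); the example matrix is hypothetical.

    ```python
    import math

    def topsis(matrix, weights):
        """Score alternatives (rows) on benefit criteria (columns) by the
        TOPSIS relative closeness to the ideal solution (higher = better)."""
        n_alt, n_crit = len(matrix), len(matrix[0])
        # Vector-normalize each column, then apply the criterion weights.
        norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
        v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
        # Ideal best / worst per criterion (benefit criteria: max is best).
        best = [max(v[i][j] for i in range(n_alt)) for j in range(n_crit)]
        worst = [min(v[i][j] for i in range(n_alt)) for j in range(n_crit)]
        scores = []
        for row in v:
            d_best = math.dist(row, best)      # distance to positive ideal
            d_worst = math.dist(row, worst)    # distance to negative ideal
            scores.append(d_worst / (d_best + d_worst))
        return scores

    # Hypothetical example: 3 envelope products scored on 2 criteria.
    scores = topsis([[7, 9], [8, 7], [9, 6]], weights=[0.5, 0.5])
    ```

    Cost-type criteria would be handled by swapping the best/worst definitions for those columns; the suitability-index stage then compares the ranked products against the user's required performance levels.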

  12. Trends and Inequalities in Use of Maternal Health Care Services in Nepal: Strategy in the Search for Improvements

    Directory of Open Access Journals (Sweden)

    Suresh Mehata

    2017-01-01

    Background. Nepal has made significant progress against the Millennium Development Goals for maternal and child health over the past two decades. However, disparities in use of maternal health services persist along geographic, economic, and sociocultural lines. Methods. Trends and inequalities in the use of maternal health services in Nepal between 1994 and 2011 were examined using four Nepal Demographic and Health Surveys (NDHS), nationally representative cross-sectional surveys conducted by interviewing women who gave birth 3–5 years prior to the survey. Sociodemographic disparities in maternal health service utilization were measured. Rate differences, rate ratios, and concentration indices were calculated to measure income inequalities. Findings. The percentage of mothers that received four antenatal care (ANC) consultations increased from 9% to 54%, the institutional delivery rate increased from 6% to 47%, and the cesarean section (C-section) rate increased from 1% in 1994 to 6% in 2011. The ratios between the richest and the poorest quintiles of mothers for use of four ANC visits, institutional delivery, and C-section delivery were 5.08 (95% CI: 3.82–6.76), 9.00 (95% CI: 6.55–12.37), and 9.37 (95% CI: 4.22–20.83), respectively. However, inequality is reducing over time; for the use of four ANC services, the concentration index fell from 0.60 (95% CI: 0.56–0.64) in 1994–1996 to 0.31 (95% CI: 0.29–0.33) in 2009–2011. For institutional delivery, the concentration index fell from 0.65 (95% CI: 0.62–0.70) to 0.40 (95% CI: 0.38–0.40) between 1994–1996 and 2009–2011. For C-section deliveries, an increase in the concentration index was observed: 0.64 (95% CI: 0.51–0.77), 0.76 (95% CI: 0.64–0.88), 0.77 (95% CI: 0.71–0.84), and 0.66 (95% CI: 0.60–0.72) in the periods 1994–1996, 1999–2001, 2004–2006, and 2009–2011, respectively. All sociodemographic variables were significant predictors of use of maternal health services, out of which maternal
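
    The concentration index reported above is a standard summary of income-related inequality: twice the covariance between the health variable and each household's fractional income rank, divided by the variable's mean. A minimal sketch with made-up quintile data (illustrative, not the authors' code):

    ```python
    def concentration_index(values):
        """Concentration index C = 2*cov(h, r)/mean(h), where values are
        ordered from poorest to richest and r is the fractional income
        rank. C > 0 indicates a pro-rich distribution; C = 0 equality."""
        n = len(values)
        mean_h = sum(values) / n
        ranks = [(i + 0.5) / n for i in range(n)]       # fractional ranks
        mean_r = sum(ranks) / n                         # always 0.5
        cov = sum(h * r for h, r in zip(values, ranks)) / n - mean_h * mean_r
        return 2 * cov / mean_h

    # Service use rising with income quintile -> positive (pro-rich) index;
    # identical use in every quintile -> zero.
    print(concentration_index([1, 2, 3, 4, 5]))   # ~0.267, pro-rich
    print(concentration_index([3, 3, 3, 3, 3]))   # 0.0, equal
    ```

    A fall in this index over successive survey rounds, as observed for ANC and institutional delivery, means use of the service became less concentrated among richer households.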

  13. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    Burns, Phil; Jenkins, Cloda; Riechmann, Christoph

    2005-01-01

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  14. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  15. Benchmarking in digital circuit design automation

    NARCIS (Netherlands)

    Jozwiak, L.; Gawlowski, D.M.; Slusarczyk, A.S.

    2008-01-01

    This paper focuses on benchmarking, which is the main experimental approach to the design method and EDA-tool analysis, characterization and evaluation. We discuss the importance and difficulties of benchmarking, as well as the recent research effort related to it. To resolve several serious

  16. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  17. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  18. Benchmarking the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William; Hui, Y.V.; Lam, Y. Miu

    2006-01-01

    Benchmarking energy-efficiency is an important tool to promote the efficient use of energy in commercial buildings. Benchmarking models are mostly constructed in a simple benchmark table (percentile table) of energy use, which is normalized with floor area and temperature. This paper describes a benchmarking process for energy efficiency by means of multiple regression analysis, where the relationship between energy-use intensities (EUIs) and the explanatory factors (e.g., operating hours) is developed. Using the resulting regression model, these EUIs are then normalized by removing the effect of deviance in the significant explanatory factors. The empirical cumulative distribution of the normalized EUI gives a benchmark table (or percentile table of EUI) for benchmarking an observed EUI. The advantage of this approach is that the benchmark table represents a normalized distribution of EUI, taking into account all the significant explanatory factors that affect energy consumption. An application to supermarkets is presented to illustrate the development and the use of the benchmarking method
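
    The normalization step described above can be illustrated with a toy single-factor model (hypothetical numbers; the paper's model regresses EUI on several explanatory factors for supermarket data): fit EUI against operating hours, adjust each building's EUI to the sample-mean operating hours, then read an observed building's standing off the empirical percentile table.

    ```python
    def fit_line(x, y):
        """Ordinary least-squares slope and intercept for one factor."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
                 / sum((a - mx) ** 2 for a in x))
        return slope, my - slope * mx

    def normalize_eui(eui, hours):
        """Remove the operating-hours effect by adjusting every EUI to
        the sample-mean operating hours."""
        slope, _ = fit_line(hours, eui)
        mh = sum(hours) / len(hours)
        return [e - slope * (h - mh) for e, h in zip(eui, hours)]

    def percentile(benchmark, value):
        """Empirical percentile of a value within the benchmark table."""
        return 100 * sum(b <= value for b in benchmark) / len(benchmark)

    # Hypothetical supermarkets whose EUI is driven entirely by hours:
    # after normalization all buildings look identical, as they should.
    hours = [10, 12, 14, 16]
    eui = [120, 124, 128, 132]
    table = normalize_eui(eui, hours)
    ```

    The point of normalizing first is visible in the example: ranking the raw EUIs would penalize long-opening stores, while the adjusted table compares buildings as if they all operated the same hours.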

  19. Benchmarking, Total Quality Management, and Libraries.

    Science.gov (United States)

    Shaughnessy, Thomas W.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) in higher education and academic libraries focuses on the identification, collection, and use of reliable data. Methods for measuring quality, including benchmarking, are described; performance measures are considered; and benchmarking techniques are examined. (11 references) (MES)

  20. Improved Survival of Patients With Extensive Burns: Trends in Patient Characteristics and Mortality Among Burn Patients in a Tertiary Care Burn Facility, 2004-2013.

    Science.gov (United States)

    Strassle, Paula D; Williams, Felicia N; Napravnik, Sonia; van Duin, David; Weber, David J; Charles, Anthony; Cairns, Bruce A; Jones, Samuel W

    Classic determinants of burn mortality are age, burn size, and the presence of inhalation injury. Our objective was to describe temporal trends in patient and burn characteristics, inpatient mortality, and the relationship between these characteristics and inpatient mortality over time. All patients aged 18 years or older and admitted with burn injury, including inhalation injury only, between 2004 and 2013 were included. Adjusted Cox proportional hazards regression models were used to estimate the relationship between admit year and inpatient mortality. A total of 5540 patients were admitted between 2004 and 2013. Significant differences in sex, race/ethnicity, burn mechanisms, TBSA, inhalation injury, and inpatient mortality were observed across calendar years. Patients admitted between 2011 and 2013 were more likely to be women, non-Hispanic Caucasian, with smaller burn size, and less likely to have an inhalation injury, in comparison with patients admitted from 2004 to 2010. After controlling for patient demographics, burn mechanisms, and differential lengths of stay, no calendar year trends in inpatient mortality were detected. However, a significant decrease in inpatient mortality was observed among patients with extensive burns (≥75% TBSA) in more recent calendar years. This large, tertiary care referral burn center has maintained low inpatient mortality rates among burn patients over the past 10 years. While observed decreases in mortality during this time are largely due to changes in patient and burn characteristics, survival among patients with extensive burns has improved.

  1. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. 
More long-lived seafloor geodetic measurements are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone.

  2. SP2Bench: A SPARQL Performance Benchmark

    Science.gov (United States)

    Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg

    A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is settled in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror vital key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.

  3. Benchmarking of refinery emissions performance : Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs

  4. Streamflow characteristics at hydrologic bench-mark stations

    Science.gov (United States)

    Lawrence, C.L.

    1987-01-01

    The Hydrologic Bench-Mark Network was established in the 1960's. Its objectives were to document the hydrologic characteristics of representative undeveloped watersheds nationwide and to provide a comparative base for studying the effects of man on the hydrologic environment. The network, which consists of 57 streamflow gaging stations and one lake-stage station in 39 States, is planned for permanent operation. This interim report describes streamflow characteristics at each bench-mark site and identifies time trends in annual streamflow that have occurred during the data-collection period. The streamflow characteristics presented for each streamflow station are (1) flood and low-flow frequencies, (2) flow duration, (3) annual mean flow, and (4) the serial correlation coefficient for annual mean discharge. In addition, Kendall's tau is computed as an indicator of time trend in annual discharges. The period of record for most stations was 13 to 17 years, although several stations had longer periods of record. The longest period was 65 years for Merced River near Yosemite, Calif. Records of flow at 6 of 57 streamflow sites in the network showed a statistically significant change in annual mean discharge over the period of record, based on computations of Kendall's tau. The values of Kendall's tau ranged from -0.533 to 0.648. An examination of climatological records showed that changes in precipitation were most likely the cause for the change in annual mean discharge.
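
    Kendall's tau, used above as the trend indicator, is nonparametric: every year is paired with every later year and only the sign of the discharge change matters. A minimal sketch of the statistic (illustrative; hypothetical discharge values):

    ```python
    def kendall_tau(series):
        """Kendall's tau of a time series against time: (concordant -
        discordant) pairs over the total number of pairs. +1 is a strictly
        rising trend, -1 strictly falling, near 0 no trend."""
        n = len(series)
        concordant = discordant = 0
        for i in range(n):
            for j in range(i + 1, n):
                if series[j] > series[i]:
                    concordant += 1
                elif series[j] < series[i]:
                    discordant += 1          # ties count in neither
        return (concordant - discordant) / (n * (n - 1) / 2)

    # Hypothetical annual mean discharges: a strictly rising record
    # scores +1, a strictly falling one -1.
    print(kendall_tau([10, 12, 15, 18]))   # 1.0
    print(kendall_tau([18, 15, 12, 10]))   # -1.0
    ```

    The reported station values between -0.533 and 0.648 fall well inside this range; a significance test on tau (not shown here) is what separates the 6 stations with a statistically significant change from the rest.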

  5. Recent trends in application of multivariate curve resolution approaches for improving gas chromatography-mass spectrometry analysis of essential oils.

    Science.gov (United States)

    Jalali-Heravi, Mehdi; Parastar, Hadi

    2011-08-15

    Essential oils (EOs) are valuable natural products that are popular nowadays in the world due to their effects on the health conditions of human beings and their role in preventing and curing diseases. In addition, EOs have a broad range of applications in foods, perfumes, cosmetics and human nutrition. Among different techniques for analysis of EOs, gas chromatography-mass spectrometry (GC-MS) is the most important one in recent years. However, there are some fundamental problems in GC-MS analysis including baseline drift, spectral background, noise, low S/N (signal to noise) ratio, changes in the peak shapes and co-elution. Multivariate curve resolution (MCR) approaches cope with ongoing challenges and are able to handle these problems. This review focuses on the application of MCR techniques for improving GC-MS analysis of EOs published between January 2000 and December 2010. In the first part, the importance of EOs in human life and their relevance in analytical chemistry is discussed. In the second part, an insight into some basics needed to understand prospects and limitations of the MCR techniques are given. In the third part, the significance of the combination of the MCR approaches with GC-MS analysis of EOs is highlighted. Furthermore, the commonly used algorithms for preprocessing, chemical rank determination, local rank analysis and multivariate resolution in the field of EOs analysis are reviewed. Copyright © 2011 Elsevier B.V. All rights reserved.
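
    The core of most MCR approaches discussed above is alternating least squares on the bilinear model D ≈ C S, where rows of D are spectra measured over elution time, C holds the concentration profiles and S the pure-component spectra. The following is a minimal sketch on synthetic, noise-free, well-separated data, with non-negativity imposed by clipping; it is an illustration of the idea, not any of the reviewed algorithms:

    ```python
    import numpy as np

    # Synthetic two-component data: concentration profiles (C_true) times
    # nearly non-overlapping "spectra" (S_true) give the data matrix D.
    C_true = np.array([[1.0, 0.1], [0.8, 0.3], [0.5, 0.5], [0.3, 0.8], [0.1, 1.0]])
    S_true = np.array([[1, 3, 6, 3, 1, 0, 0, 0, 0, 0],
                       [0, 0, 0, 0, 0, 1, 4, 7, 4, 1]], float)
    D = C_true @ S_true                      # shape (5 times, 10 channels)

    # MCR-ALS: alternate least-squares solves for C and S, clipping
    # negative entries to zero (non-negativity constraint).
    rng = np.random.default_rng(0)
    C = rng.random((5, 2)) + 0.1             # positive random start
    for _ in range(100):
        S = np.linalg.lstsq(C, D, rcond=None)[0].clip(min=0)        # (2, 10)
        C = np.linalg.lstsq(S.T, D.T, rcond=None)[0].T.clip(min=0)  # (5, 2)

    rel_err = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
    ```

    Real GC-MS work adds the steps the review surveys around this loop: baseline and background correction, chemical rank estimation to choose the number of components, and local rank constraints to tame rotational ambiguity in co-eluting regions.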

  6. Best Practice Benchmarking in Australian Agriculture: Issues and Challenges

    OpenAIRE

    Ronan, Glenn; Cleary, Gordon

    2000-01-01

    The quest to shape Australian agriculture for improved and sustainable profitability is leading Research and Development Corporations, agri-service consultants and government to devote substantial effort to the development of new farm business analysis and benchmarking programs. ‘Biz Check’, ‘Pork Biz’, ‘Wool Enterprise Benchmarking’, ‘Dairy Business Focus’ and ‘Business Skills and Best Practice’ for beef and sheep meat producers are examples of current farm management and training programs whe...

  7. Benchmark problem suite for reactor physics study of LWR next generation fuels

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Ikehara, Tadashi; Ito, Takuya; Saji, Etsuro

    2002-01-01

    This paper proposes a benchmark problem suite for studying the physics of next-generation fuels of light water reactors. The target discharge burnup of the next-generation fuel was set to 70 GWd/t considering the increasing trend in discharge burnup of light water reactor fuels. UO2 and MOX fuels are included in the benchmark specifications. The benchmark problem consists of three different geometries: fuel pin cell, PWR fuel assembly and BWR fuel assembly. In the pin cell problem, detailed nuclear characteristics such as the burnup dependence of nuclide-wise reactivity were included in the required calculation results to facilitate the study of reactor physics. In the assembly benchmark problems, important parameters for in-core fuel management such as local peaking factors and reactivity coefficients were included in the required results. The benchmark problems provide comprehensive test problems for next-generation light water reactor fuels with extended high burnup. Furthermore, since the pin cell, PWR assembly and BWR assembly problems are independent, analysis of the entire benchmark suite is not necessary: e.g., the set of pin cell and PWR fuel assembly problems will be suitable for those in charge of PWR in-core fuel management, and the set of pin cell and BWR fuel assembly problems for those in charge of BWR in-core fuel management. (author)

  8. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui [National Renewable Energy Laboratory (NREL), Golden, CO (United States). Transportation and Hydrogen Systems Center

    2017-10-19

    In FY16, the thermal performance of the 2014 Honda Accord Hybrid power electronics thermal management system was benchmarked. Both experiments and numerical simulation were utilized to thoroughly study the thermal resistances and temperature distribution in the power module. Experimental results obtained from the water-ethylene glycol tests provided the junction-to-liquid thermal resistance. The finite element analysis (FEA) and computational fluid dynamics (CFD) models were found to yield a good match with experimental results. Both experimental and modeling results demonstrate that the passive stack is the dominant thermal resistance for both the motor and power electronics systems. The 2014 Accord power electronics system yields steady-state thermal resistance values of around 42-50 mm²·K/W, depending on the flow rate. At a typical flow rate of 10 liters per minute, the thermal resistance of the Accord system was found to be about 44 percent lower than that of the 2012 Nissan LEAF system benchmarked in FY15. The main reason for the difference is that the Accord power module used a metalized-ceramic substrate and eliminated the thermal interface material layers. FEA models were developed to study the transient performance of the 2012 Nissan LEAF, 2014 Accord, and two other systems that feature conventional power module designs. The simulation results indicate that the 2012 LEAF power module has the lowest thermal impedance at time scales below one second. This is probably due to moving low-thermal-conductivity materials farther from the heat source and enhancing the heat-spreading effect of the copper-molybdenum plate close to the insulated gate bipolar transistors. When approaching steady state, the Honda system shows lower thermal impedance. Measurement results for the thermal resistance of the 2015 BMW i3 power electronic system indicate that the i3 insulated gate bipolar transistor module has significantly lower junction
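The area-specific junction-to-liquid resistance quoted above (mm²·K/W) follows directly from a measured temperature rise, dissipated heat, and die area. A minimal sketch with hypothetical numbers, chosen only to land in the cited 42-50 range, not NREL measurements:

```python
def specific_thermal_resistance(t_junction_c, t_coolant_c, heat_w, area_mm2):
    """Area-specific junction-to-liquid thermal resistance in mm^2*K/W."""
    r_th = (t_junction_c - t_coolant_c) / heat_w  # absolute resistance, K/W
    return r_th * area_mm2                        # normalize by active die area

# Hypothetical module: 40 K rise across 150 W over a 180 mm^2 die
print(specific_thermal_resistance(105.0, 65.0, 150.0, 180.0))
```

Normalizing by area is what makes modules with different die sizes comparable in a benchmark like this.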

  9. Recent trends in working with the private sector to improve basic healthcare: a review of evidence and interventions.

    Science.gov (United States)

    Montagu, Dominic; Goodman, Catherine; Berman, Peter; Penn, Amy; Visconti, Adam

    2016-10-01

    The private sector provides the majority of health care in Africa and Asia. A number of interventions have, for many years, applied different models of subsidy, support and engagement to address social and efficiency failures in private health care markets. We have conducted a review of these models, and the evidence in support of them, to better understand what interventions are currently common, and to what extent practice is based on evidence. Using established typologies, we examined five models of intervention with private markets for care: commodity social marketing, social franchising, contracting, accreditation and vouchers. We conducted a systematic review of both published and grey literature, identifying programmes large enough to be cited in publications, and studies of the listed intervention types. 343 studies were included in the review, including both published and grey literature. Three hundred and eighty programmes were identified, the earliest having begun operation in 1955. Commodity social marketing programmes were the most common intervention type, with 110 documented programmes operating for condoms alone at the highest period. Existing evidence shows that these models can improve access and utilization, and possibly quality, but for all programme types, the overall evidence base remains weak, with practice in private sector engagement consistently moving in advance of evidence. Future research should address key questions concerning the impact of interventions on the market as a whole, the distribution of benefits by socio-economic status, the potential for scale up and sustainability, cost-effectiveness compared to relevant alternatives and the risk of unintended consequences. Alongside better data, a stronger conceptual basis linking programme design and outcomes to context is also required. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Greenhouse gas emissions from solid waste in Beijing: The rising trend and the mitigation effects by management improvements.

    Science.gov (United States)

    Yu, Yongqiang; Zhang, Wen

    2016-04-01

    Disposal of solid waste poses great challenges to city management. Changes in solid waste composition and disposal methods, along with urbanisation, can certainly affect greenhouse gas emissions from municipal solid waste. In this study, we analysed the changes in the generation, composition and management of municipal solid waste in Beijing, and thereafter calculated the resulting changes in greenhouse gas emissions from municipal solid waste management. The impacts of municipal solid waste management improvements on greenhouse gas emissions and the mitigation effects of greenhouse gas treatment techniques were also analysed. Municipal solid waste generation in Beijing has increased, and food waste has constituted the most substantial component of municipal solid waste over the past decade. Greenhouse gas emissions have increased from 6 CO2-eq Gg y(-1) in the first half of the 1950s to approximately 200 CO2-eq Gg y(-1) in the early 1990s and 2145 CO2-eq Gg y(-1) in 2013. Landfill gas flaring, landfill gas utilisation and energy recovery in incineration are three after-emission treatment techniques in municipal solid waste management. The scenario analysis showed that these three techniques might reduce greenhouse gas emissions by 22.7%, 4.5% and 9.8%, respectively. In the future, if waste disposal can achieve a ratio of 4:3:3 by landfill, composting and incineration with the proposed after-emission treatments, as stipulated by the Beijing Municipal Waste Management Act, greenhouse gas emissions from municipal solid waste will decrease by 41%. © The Author(s) 2016.

  11. Development of computer code SIMPSEX for simulation of FBR fuel reprocessing flowsheets: II. additional benchmarking results

    International Nuclear Information System (INIS)

    Shekhar Kumar; Koganti, S.B.

    2003-07-01

    Benchmarking and application of the computer code SIMPSEX for high plutonium FBR flowsheets was reported in an earlier report (IGC-234). Improvements and recompilation of the code (Version 4.01, March 2003) required re-validation with the existing benchmarks as well as additional benchmark flowsheets. Improvements in the high-Pu region (Pu Aq > 30 g/L) resulted in better results for the 75% Pu flowsheet benchmark. Below 30 g/L Pu Aq concentration, results were identical to those from the earlier version (SIMPSEX Version 3, compiled in 1999). In addition, 13 published flowsheets were taken as additional benchmarks. Eleven of these flowsheets have a wide range of feed concentrations, and a few of them are β-γ active runs with FBR fuels having a wide distribution of burnup and Pu ratios. A published total partitioning flowsheet using externally generated U(IV) was also simulated using SIMPSEX. SIMPSEX predictions were compared with listed predictions from conventional SEPHIS, PUMA, PUNE and PUBG; SIMPSEX results were found to be comparable to or better than the results from the above codes. In addition, recently reported UREX demo results along with AMUSE simulations are also compared with SIMPSEX predictions. Results of benchmarking SIMPSEX with these 14 flowsheets are discussed in this report. (author)

  12. Benchmarking and Self-Assessment in the Wine Industry

    Energy Technology Data Exchange (ETDEWEB)

    Galitsky, Christina; Radspieler, Anthony; Worrell, Ernst; Healy,Patrick; Zechiel, Susanne

    2005-12-01

    Not all industrial facilities have the staff or the opportunity to perform a detailed audit of their operations. The lack of knowledge of energy efficiency opportunities is an important barrier to improving efficiency. Benchmarking programs in the U.S. and abroad have been shown to improve knowledge of the energy performance of industrial facilities and buildings and to fuel energy management practices. Benchmarking provides a fair way to compare the energy intensity of plants, while accounting for structural differences (e.g., the mix of products produced, climate conditions) between different facilities. In California, the winemaking industry is not only one of the economic pillars of the economy; it is also a large energy consumer, with a considerable potential for energy-efficiency improvement. Lawrence Berkeley National Laboratory and Fetzer Vineyards developed the first benchmarking tool for the California wine industry, called "BEST (Benchmarking and Energy and water Savings Tool) Winery". BEST Winery enables a winery to compare its energy efficiency to a best-practice reference winery. Besides overall performance, the tool enables the user to evaluate the impact of implementing efficiency measures. The tool facilitates strategic planning of efficiency measures, based on the estimated impact of the measures, their costs and savings. The tool will raise awareness of current energy intensities and offer an efficient way to evaluate the impact of future efficiency measures.
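The core benchmarking arithmetic, comparing a plant's energy intensity against a best-practice reference, is simple. The sketch below uses invented numbers and a single aggregate intensity, whereas the actual BEST Winery tool models product mix and process steps in far more detail:

```python
def efficiency_index(actual_kwh, production_units, best_practice_kwh_per_unit):
    """Ratio of actual to best-practice energy intensity; 1.0 means best practice."""
    actual_intensity = actual_kwh / production_units  # kWh per unit produced
    return actual_intensity / best_practice_kwh_per_unit

# Hypothetical winery: 520,000 kWh for 400,000 cases vs a 1.0 kWh/case reference
print(round(efficiency_index(520_000, 400_000, 1.0), 2))
```

An index above 1.0 flags improvement potential; structural corrections (climate, product mix) would adjust the reference intensity before the comparison.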

  13. What Randomized Benchmarking Actually Measures

    International Nuclear Information System (INIS)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; Sarovar, Mohan; Blume-Kohout, Robin

    2017-01-01

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
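The exponential RB decay and the extraction of r can be illustrated with synthetic, noise-free data. The parameters A, B, p below are invented; a real experiment fits the same model in the presence of finite-sampling noise:

```python
import math

# Idealized RB decay: survival probability P(m) = A*p**m + B versus length m.
A, B, p_true = 0.5, 0.5, 0.98
lengths = list(range(1, 101))
survival = [A * p_true ** m + B for m in lengths]

# A log-linear least-squares fit of P(m) - B against m recovers the decay p.
ys = [math.log(s - B) for s in survival]
n = len(lengths)
slope = (n * sum(x * y for x, y in zip(lengths, ys)) - sum(lengths) * sum(ys)) / \
        (n * sum(x * x for x in lengths) - sum(lengths) ** 2)
p_fit = math.exp(slope)

d = 2                             # single-qubit Hilbert-space dimension
r = (d - 1) / d * (1 - p_fit)     # the RB number for this decay
print(round(r, 4))
```

The point of the abstract is precisely that this r, while well defined for the decay curve, need not equal the average gate infidelity of any particular representation of the gates.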

  14. Benchmarking Commercial Conformer Ensemble Generators.

    Science.gov (United States)

    Friedrich, Nils-Ole; de Bruyn Kops, Christina; Flachsenberg, Florian; Sommer, Kai; Rarey, Matthias; Kirchmair, Johannes

    2017-11-27

    We assess and compare the performance of eight commercial conformer ensemble generators (ConfGen, ConfGenX, cxcalc, iCon, MOE LowModeMD, MOE Stochastic, MOE Conformation Import, and OMEGA) and one leading free algorithm, the distance geometry algorithm implemented in RDKit. The comparative study is based on a new version of the Platinum Diverse Dataset, a high-quality benchmarking dataset of 2859 protein-bound ligand conformations extracted from the PDB. Differences in the performance of commercial algorithms are much smaller than those observed for free algorithms in our previous study (J. Chem. Inf. 2017, 57, 529-539). For commercial algorithms, the median minimum root-mean-square deviations measured between protein-bound ligand conformations and ensembles of a maximum of 250 conformers are between 0.46 and 0.61 Å. Commercial conformer ensemble generators are characterized by their high robustness, with at least 99% of all input molecules successfully processed and few or even no substantial geometrical errors detectable in their output conformations. The RDKit distance geometry algorithm (with minimization enabled) appears to be a good free alternative since its performance is comparable to that of the midranked commercial algorithms. Based on a statistical analysis, we elaborate on which algorithms to use and how to parametrize them for best performance in different application scenarios.
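The headline metric, the minimum RMSD between the protein-bound conformation and a generated ensemble, can be sketched as follows (toy three-atom coordinates; real benchmarking operates on heavy atoms after optimal superposition):

```python
import math

def rmsd(a, b):
    """Root-mean-square deviation between matched atom coordinate lists."""
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                         for (ax, ay, az), (bx, by, bz) in zip(a, b)) / len(a))

reference = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (1.5, 1.5, 0.0)]  # bound pose
ensemble = [
    [(0.1, 0.0, 0.0), (1.4, 0.1, 0.0), (1.6, 1.4, 0.1)],  # near-native conformer
    [(1.0, 1.0, 0.0), (2.5, 1.0, 0.0), (2.5, 2.5, 0.0)],  # translated conformer
]
min_rmsd = min(rmsd(reference, conf) for conf in ensemble)
print(round(min_rmsd, 3))
```

A study like this one then reports the median of such minimum RMSDs over thousands of ligands, capped at a fixed ensemble size (250 conformers above).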

  15. Benchmark tests of JENDL-1

    International Nuclear Information System (INIS)

    Kikuchi, Yasuyuki; Hasegawa, Akira; Takano, Hideki; Kamei, Takanobu; Hojuyama, Takeshi; Sasaki, Makoto; Seki, Yuji; Zukeran, Atsushi; Otake, Iwao.

    1982-02-01

    Various benchmark tests were made on JENDL-1. At the first stage, various core center characteristics were tested for many critical assemblies with a one-dimensional model. At the second stage, the applicability of JENDL-1 was further tested on more sophisticated problems for the MOZART and ZPPR-3 assemblies with a two-dimensional model. It was proved that JENDL-1 predicted various quantities of fast reactors satisfactorily as a whole. However, the following problems were pointed out: 1) There exists a discrepancy of 0.9% in the k-eff values between the Pu- and U-cores. 2) The fission rate ratio of 239Pu to 235U is underestimated by 3%. 3) The Doppler reactivity coefficients are overestimated by about 10%. 4) The control rod worths are underestimated by 4%. 5) The fission rates of 235U and 239Pu are underestimated considerably in the outer core and radial blanket regions. 6) The negative sodium void reactivities are overestimated when the sodium is removed from the outer core. As a whole, most of the problems of JENDL-1 seem to be related to the neutron leakage and the neutron spectrum. It was found through further study that most of these problems came from too-small diffusion coefficients and too-large elastic removal cross sections above 100 keV, probably caused by overestimation of the total and elastic scattering cross sections for structural materials in the unresolved resonance region up to several MeV. (author)

  16. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-08-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (TPM) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight in the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches and to get an understanding of the current state of the art in the field identifying the limitations that are still inherent to the different approaches

  17. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend...... the perception of benchmarking systems as secondary and derivative and instead studying benchmarking as constitutive of social relations and as irredeemably social phenomena. I have attempted to do so in this paper by treating benchmarking using a calculative practice perspective, and describing how...

  18. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 "Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core" problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated

  19. Benchmarking gate-based quantum computers

    Science.gov (United States)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
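A minimal version of the identity-circuit idea: a sequence of single-qubit gates that should compose to the identity, simulated with a hypothetical systematic over-rotation (real hardware additionally suffers decoherence and readout error):

```python
import math

def rx(theta):
    """Single-qubit rotation about X as a 2x2 complex matrix."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[complex(c, 0), complex(0, -s)],
            [complex(0, -s), complex(c, 0)]]

def apply(gate, state):
    """Apply a 2x2 matrix to a 2-component state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

epsilon = 0.02                          # hypothetical over-rotation per gate (rad)
state = [complex(1, 0), complex(0, 0)]  # start in |0>
for _ in range(100):                    # 100 Rx(pi) gates compose to the identity
    state = apply(rx(math.pi + epsilon), state)
survival = abs(state[0]) ** 2           # probability of reading |0> back
print(round(survival, 3))
```

With perfect gates the survival probability stays at 1; the coherent accumulation of the small error makes the deviation grow with circuit length, which is what makes identity circuits sensitive benchmarks.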

  20. Benchmark Imagery FY11 Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-14

    This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.

  1. Benchmarking of venous thromboembolism prophylaxis practice with ENT.UK guidelines.

    Science.gov (United States)

    Al-Qahtani, Ali S

    2017-05-01

    The aim of this study was to benchmark our guidelines for prevention of venous thromboembolism (VTE) in the ENT surgical population against ENT.UK guidelines, and also to encourage healthcare providers to utilize benchmarking as an effective method of improving performance. The study design is a prospective descriptive analysis. The setting of this study is a tertiary referral centre (Assir Central Hospital, Abha, Saudi Arabia). In this study, we benchmark our practice guidelines for the prevention of VTE in the ENT surgical population against the ENT.UK guidelines to mitigate any gaps. ENT guidelines 2010 were downloaded from the ENT.UK website. Our guidelines were compared against the ENT.UK guidelines to determine whether our performance meets or falls short of them. Immediate corrective actions will take place if there is a quality chasm between the two guidelines. ENT.UK guidelines are evidence-based and updated, and may serve as a role model for adoption and benchmarking. Our guidelines were accordingly amended to contain all factors required for providing a quality service to ENT surgical patients. Though not always given appropriate attention, benchmarking is a useful tool for improving the quality of health care. It allows learning from others' practices and experiences, and works towards closing any quality gaps. In addition, benchmarking clinical outcomes is critical for quality improvement and for informing decisions concerning service provision. It is recommended that benchmarking be included in the list of quality improvement methods for healthcare services.

  2. Thermal and fast reactor benchmark testing of ENDF/B-6.4

    International Nuclear Information System (INIS)

    Liu Guisheng

    1999-01-01

    The benchmark testing for B-6.4 was done with the same benchmark experiments and calculation method as for B-6.2. The effective multiplication factors k-eff, central reaction rate ratios of fast assemblies and lattice cell reaction rate ratios of thermal lattice cell assemblies were calculated and compared with the testing results for B-6.2 and CENDL-2. It is obvious that the 238U data files are most important for the calculations of large fast reactors and lattice thermal reactors. However, the 238U data in the new version of ENDF/B-6 have not been renewed; only data for 235U, 27Al, 14N and 2D have been renewed in ENDF/B-6.4. Therefore, it will be shown that the thermal reactor benchmark testing results are remarkably improved while the fast reactor benchmark testing results are not improved

  3. Trend analysis

    International Nuclear Information System (INIS)

    Smith, M.; Jones, D.R.

    1991-01-01

    The goal of exploration is to find reserves that will earn an adequate rate of return on the capital invested. Neither exploration nor economics is an exact science. Explorers must therefore focus on those trends (plays) that have the highest probability of achieving this goal. Trend analysis is a technique for organizing the available data to make these strategic exploration decisions objectively, in conformance with corporate goals and risk attitudes. Trend analysis differs from resource estimation in its purpose: it seeks to determine the probability of economic success for an exploration program, not the ultimate results of the total industry effort. Thus the recent past is assumed to be the best estimate of the exploration probabilities for the near future. This information is combined with economic forecasts. The computer software tools necessary for trend analysis are (1) Information database - requirements and sources. (2) Data conditioning program - assignment to trends, correction of errors, and conversion into usable form. (3) Statistical processing program - calculation of probability of success and discovery size probability distribution. (4) Analytical processing - Monte Carlo simulation to develop the probability distribution of the economic return/investment ratio for a trend. Limited capital (short-run) effects are analyzed using the Gambler's Ruin concept in the Monte Carlo simulation and by a short-cut method. Multiple trend analysis is concerned with comparing and ranking trends, allocating funds among acceptable trends, and characterizing program risk by using risk profiles. In summary, trend analysis is a reality check for long-range exploration planning
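Step (4), the Monte Carlo simulation of the return/investment ratio for a trend, can be sketched generically. All parameters below (success probability, well cost, discovery-value distribution) are invented for illustration, not from the abstract:

```python
import random

random.seed(7)                 # reproducible illustration
P_SUCCESS = 0.25               # hypothetical wildcat success rate for the trend
WELL_COST = 2.0                # $MM per exploration well
MEAN_VALUE = 12.0              # $MM mean present value of a discovery

def program_ratio(n_wells=20):
    """Return/investment ratio for one simulated exploration program."""
    value = 0.0
    for _ in range(n_wells):
        if random.random() < P_SUCCESS:
            # Lognormal discovery sizes, scaled so the mean equals MEAN_VALUE
            # (1.6487 is approximately exp(0.5), the mean of lognormvariate(0, 1))
            value += random.lognormvariate(0.0, 1.0) / 1.6487 * MEAN_VALUE
    return value / (n_wells * WELL_COST)

ratios = sorted(program_ratio() for _ in range(5000))
median = ratios[len(ratios) // 2]
p_loss = sum(r < 1.0 for r in ratios) / len(ratios)  # chance program loses money
print(round(median, 2), round(p_loss, 2))
```

The resulting distribution, rather than a single expected value, is what supports the risk-profile comparisons across trends described above; a Gambler's Ruin variant would additionally track the running capital balance within each program.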

  4. Parton-shower uncertainties with Herwig 7: benchmarks at leading order

    Energy Technology Data Exchange (ETDEWEB)

    Bellm, Johannes; Schichtel, Peter [Durham University, Department of Physics, IPPP, Durham (United Kingdom); Nail, Graeme [University of Manchester, Particle Physics Group, School of Physics and Astronomy, Manchester (United Kingdom); Karlsruhe Institute of Technology, Institute for Theoretical Physics, Karlsruhe (Germany); Plaetzer, Simon [Durham University, Department of Physics, IPPP, Durham (United Kingdom); University of Manchester, Particle Physics Group, School of Physics and Astronomy, Manchester (United Kingdom); Siodmok, Andrzej [CERN, TH Department, Geneva (Switzerland); Polish Academy of Sciences, The Henryk Niewodniczanski Institute of Nuclear Physics in Cracow, Krakow (Poland)

    2016-12-15

    We perform a detailed study of the sources of perturbative uncertainty in parton-shower predictions within the Herwig 7 event generator. We benchmark two rather different parton-shower algorithms, based on angular-ordered and dipole-type evolution, against each other. We deliberately choose leading order plus parton shower as the benchmark setting to identify a controllable set of uncertainties. This will enable us to reliably assess improvements by higher-order contributions in a follow-up work. (orig.)

  5. Beyond-CMOS Device Benchmarking for Boolean and Non-Boolean Logic Applications

    OpenAIRE

    Pan, Chenyun; Naeemi, Azad

    2017-01-01

    The latest results of benchmarking research are presented for a variety of beyond-CMOS charge- and spin-based devices. In addition to improving the device-level models, several new device proposals and a few substantially modified devices are investigated. Deep pipelining circuits are employed to boost the throughput of low-power devices. Furthermore, the benchmarking methodology is extended to interconnect-centric analyses and non-Boolean logic applications. In contrast to Boolean circuits, non-Bo...

  6. Financial benchmarking the example of confectionery industry companies

    Directory of Open Access Journals (Sweden)

    Vasilić Marina

    2014-01-01

    Being a managerial tool of proven efficiency for managing companies in crisis periods, the benchmarking concept is still insufficiently known and applied in the Republic of Serbia. The idea of this paper was to reveal its possibilities through the lens of financial benchmarking, showing its simplicity and benefits even from the point of view of an external analyst. This was achieved through the analysis of the two biggest competitors on the confectionery products market of the Republic of Serbia, using secondary data analysis. Through a multidimensional set of performance measures based on profit as the ultimate goal, but also including value for shareholders, liquidity and capitalization, we have confirmed the leader's market position and identified its sources, which are the key learning points for the follower to adopt in order to improve its performance.

  7. BENCHMARKING AND CONFIGURATION OF OPENSOURCE MANUFACTURING EXECUTION SYSTEM (MES) APPLICATION

    Directory of Open Access Journals (Sweden)

    Ganesha Nur Laksmana

    2013-05-01

    Information is now an important element for every growing industry in the world. In order to keep up with competitors, endless improvements in optimizing overall efficiency are needed. Barriers still exist that separate departments in PT. XYZ and limit information sharing in the system. An open-source Manufacturing Execution System (MES) is an IT-based application that offers a wide variety of customization to eliminate stovepipes by sharing information between departments. Benchmarking is used to choose the best open-source MES application, and the Dynamic System Development Method (DSDM) is adopted as the work guideline. As a result, recommendations for the chosen open-source MES application are presented. Keywords: Manufacturing Execution System (MES); Open Source; Dynamic System Development Method (DSDM); Benchmarking; Configuration

  8. LHC benchmarks from flavored gauge mediation

    Energy Technology Data Exchange (ETDEWEB)

    Ierushalmi, N.; Iwamoto, S.; Lee, G.; Nepomnyashy, V.; Shadmi, Y. [Physics Department, Technion - Israel Institute of Technology,Haifa 32000 (Israel)

    2016-07-12

    We present benchmark points for LHC searches from flavored gauge mediation models, in which messenger-matter couplings give flavor-dependent squark masses. Our examples include spectra in which a single squark — stop, scharm, or sup — is much lighter than all other colored superpartners, motivating improved quark flavor tagging at the LHC. Many examples feature flavor mixing; in particular, large stop-scharm mixing is possible. The correct Higgs mass is obtained in some examples by virtue of the large stop A-term. We also revisit the general flavor and CP structure of the models. Even though the A-terms can be substantial, their contributions to EDMs are very suppressed, because of the particular dependence of the A-terms on the messenger coupling. This holds regardless of the messenger-coupling texture. More generally, the special structure of the soft terms often leads to stronger suppression of flavor- and CP-violating processes, compared to naive estimates.

  9. Benchmarking and validation activities within JEFF project

    Directory of Open Access Journals (Sweden)

    Cabellos O.

    2017-01-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and to ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  10. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    Energy Technology Data Exchange (ETDEWEB)

    Ericson, Sean J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Alvarez, Paul [The Wired Group

    2018-04-13

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  11. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    R. Angles Rojas (Renzo); M.-D. Pham (Minh-Duc); P.A. Boncz (Peter)

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics

  12. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    When algorithms solve dynamic multi-objective optimisation problems (DMOOPs), benchmark functions should be used to determine whether the algorithm can overcome specific difficulties that can occur in real-world problems. However, for dynamic multi...

  13. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based...

  14. Second benchmark problem for WIPP structural computations

    International Nuclear Information System (INIS)

    Krieg, R.D.; Morgan, H.S.; Hunter, T.O.

    1980-12-01

    This report describes the second benchmark problem for comparison of the structural codes used in the WIPP project. The first benchmark problem consisted of heated and unheated drifts at a depth of 790 m, whereas this problem considers a shallower level (650 m) more typical of the repository horizon. But more important, the first problem considered a homogeneous salt configuration, whereas this problem considers a configuration with 27 distinct geologic layers, including 10 clay layers - 4 of which are to be modeled as possible slip planes. The inclusion of layering introduces complications in structural and thermal calculations that were not present in the first benchmark problem. These additional complications will be handled differently by the various codes used to compute drift closure rates. This second benchmark problem will assess these codes by evaluating the treatment of these complications

  15. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  16. Benchmarking and validation activities within JEFF project

    Science.gov (United States)

    Cabellos, O.; Alvarez-Velarde, F.; Angelone, M.; Diez, C. J.; Dyrda, J.; Fiorito, L.; Fischer, U.; Fleming, M.; Haeck, W.; Hill, I.; Ichou, R.; Kim, D. H.; Klix, A.; Kodeli, I.; Leconte, P.; Michel-Sendis, F.; Nunnenmann, E.; Pecchia, M.; Peneliau, Y.; Plompen, A.; Rochman, D.; Romojaro, P.; Stankovskiy, A.; Sublet, J. Ch.; Tamagno, P.; Marck, S. van der

    2017-09-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  17. Quality benchmarking methodology: Case study of finance and culture industries in Latvia

    Directory of Open Access Journals (Sweden)

    Ieva Zemīte

    2011-01-01

    Political, socio-economic and cultural changes that have taken place in the world during recent years have influenced all spheres. Constant improvement is necessary to survive in competitive and shrinking markets, which sets high quality standards for the service industries. It is therefore important to compare quality criteria to ascertain which practices achieve superior performance levels. At present, companies in Latvia do not carry out mutual benchmarking; as a result, they do not know how they rank against their peers in terms of quality, and they do not see the benefits of sharing information and of benchmarking. The purpose of this paper is to determine the criteria of qualitative benchmarking and to investigate the use of benchmarking quality in service industries, particularly the finance and culture sectors in Latvia, in order to determine the key driving factors of quality, to explore internal and foreign benchmarks, and to reveal the full potential for input reduction and efficiency growth in the aforementioned industries. Case studies and other tools are used to define the readiness of a company for benchmarking, and certain key factors are examined for their impact on quality criteria. The results are based on research conducted in professional associations in the defined fields (insurance and theatre). Originality/value: this is the first study to adopt benchmarking models for measuring quality criteria and readiness for mutual comparison in the insurance and theatre industries in Latvia.

  18. Benchmarking strategies for measuring the quality of healthcare: problems and prospects.

    Science.gov (United States)

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed.
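
    The indirect standardization that this abstract lists among the risk-adjustment methods can be sketched as a small observed-over-expected calculation; the strata, rates, and hospital counts below are invented for illustration, not taken from the paper.

```python
# Indirect standardization sketch (illustrative data, not from the paper):
# expected events are what a hospital would see if the benchmark rate
# applied within each case-mix stratum; the observed/expected ratio is
# then comparable across hospitals with different case mixes.

def expected_events(case_mix, reference_rates):
    """Expected event count under the benchmark rates, given the
    hospital's case mix (stratum -> number of cases)."""
    return sum(n * reference_rates[stratum] for stratum, n in case_mix.items())

def standardized_ratio(observed, case_mix, reference_rates):
    """Observed/expected ratio; values above 1 indicate performance
    worse than the benchmark after case-mix adjustment."""
    return observed / expected_events(case_mix, reference_rates)

# Benchmark (reference) event rates per severity stratum.
reference = {"low": 0.02, "medium": 0.10, "high": 0.30}

# A hospital treating many high-severity patients, with 105 observed events.
hospital_a = {"low": 100, "medium": 200, "high": 300}
ratio_a = standardized_ratio(105.0, hospital_a, reference)
```

    A crude comparison of raw event rates would penalize this hospital for its severe case mix (the case-mix fallacy the abstract warns about); the standardized ratio below 1 shows it in fact performs better than the benchmark expectation.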

  19. Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects

    Science.gov (United States)

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140

  20. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes...... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  1. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduces the excess conservatism and brings the predictions closer to benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for 10, 20, 30, 40, 50, and 60 GWd/MTU and 3 cooling times (100 h, 5 years, and 15 years). These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to verify that the 5% decrement approach is conservative for determining depletion uncertainty.

  2. Energy efficiency benchmarking of energy-intensive industries in Taiwan

    International Nuclear Information System (INIS)

    Chan, David Yih-Liang; Huang, Chi-Feng; Lin, Wei-Chun; Hong, Gui-Bing

    2014-01-01

    Highlights: • Analytical tool was applied to estimate the energy efficiency indicator of energy intensive industries in Taiwan. • The carbon dioxide emission intensity in selected energy-intensive industries is also evaluated in this study. • The obtained energy efficiency indicator can serve as a base case for comparison to the other regions in the world. • This analysis results can serve as a benchmark for selected energy-intensive industries. - Abstract: Taiwan imports approximately 97.9% of its primary energy as rapid economic development has significantly increased energy and electricity demands. Increased energy efficiency is necessary for industry to comply with energy-efficiency indicators and benchmarking. Benchmarking is applied in this work as an analytical tool to estimate the energy-efficiency indicators of major energy-intensive industries in Taiwan and then compare them to other regions of the world. In addition, the carbon dioxide emission intensity in the iron and steel, chemical, cement, textile and pulp and paper industries are evaluated in this study. In the iron and steel industry, the energy improvement potential of blast furnace–basic oxygen furnace (BF–BOF) based on BPT (best practice technology) is about 28%. Between 2007 and 2011, the average specific energy consumption (SEC) of styrene monomer (SM), purified terephthalic acid (PTA) and low-density polyethylene (LDPE) was 9.6 GJ/ton, 5.3 GJ/ton and 9.1 GJ/ton, respectively. The energy efficiency of pulping would be improved by 33% if BAT (best available technology) were applied. The analysis results can serve as a benchmark for these industries and as a base case for stimulating changes aimed at more efficient energy utilization
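
    The improvement-potential figures quoted here (such as the 28% for BF–BOF) follow from comparing actual specific energy consumption (SEC) with a best-practice reference; a minimal sketch, using illustrative numbers rather than the study's data:

```python
# Fractional energy saving available by moving from the actual specific
# energy consumption (SEC) to best practice technology (BPT).
# The SEC values below are illustrative, not taken from the Taiwan study.

def improvement_potential(actual_sec, bpt_sec):
    """Fraction by which SEC would fall if BPT were adopted."""
    return 1.0 - bpt_sec / actual_sec

# e.g. a process running at 25 GJ/ton where BPT achieves 18 GJ/ton:
potential = improvement_potential(25.0, 18.0)
print(f"improvement potential: {potential:.0%}")
```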

  3. International benchmark on the natural convection test in Phenix reactor

    International Nuclear Information System (INIS)

    Tenchine, D.; Pialla, D.; Fanning, T.H.; Thomas, J.W.; Chellapandi, P.; Shvetsov, Y.; Maas, L.; Jeong, H.-Y.; Mikityuk, K.; Chenu, A.; Mochizuki, H.; Monti, S.

    2013-01-01

    Highlights: ► Phenix main characteristics, instrumentation and natural convection test are described. ► “Blind” calculations and post-test calculations from all the participants to the benchmark are compared to reactor data. ► Lessons learned from the natural convection test and the associated calculations are discussed. -- Abstract: The French Phenix sodium cooled fast reactor (SFR) started operation in 1973 and was stopped in 2009. Before the reactor was definitively shutdown, several final tests were planned and performed, including a natural convection test in the primary circuit. During this natural convection test, the heat rejection provided by the steam generators was disabled, followed several minutes later by reactor scram and coast-down of the primary pumps. The International Atomic Energy Agency (IAEA) launched a Coordinated Research Project (CRP) named “control rod withdrawal and sodium natural circulation tests performed during the Phenix end-of-life experiments”. The overall purpose of the CRP was to improve the Member States’ analytical capabilities in the field of SFR safety. An international benchmark on the natural convection test was organized with “blind” calculations in a first step, then “post-test” calculations and sensitivity studies compared with reactor measurements. Eight organizations from seven Member States took part in the benchmark: ANL (USA), CEA (France), IGCAR (India), IPPE (Russian Federation), IRSN (France), KAERI (Korea), PSI (Switzerland) and University of Fukui (Japan). Each organization performed computations and contributed to the analysis and global recommendations. This paper summarizes the findings of the CRP benchmark exercise associated with the Phenix natural convection test, including blind calculations, post-test calculations and comparisons with measured data. General comments and recommendations are pointed out to improve future simulations of natural convection in SFRs

  4. Numisheet2005 Benchmark Analysis on Forming of an Automotive Underbody Cross Member: Benchmark 2

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    This report presents an international cooperation benchmark effort focusing on simulations of a sheet metal stamping process. A forming process of an automotive underbody cross member using steel and aluminum blanks is used as a benchmark. Simulation predictions from each submission are analyzed via comparison with the experimental results. A brief summary of various models submitted for this benchmark study is discussed. Prediction accuracy of each parameter of interest is discussed through the evaluation of cumulative errors from each submission

  5. A review on the benchmarking concept in Malaysian construction safety performance

    Science.gov (United States)

    Ishak, Nurfadzillah; Azizan, Muhammad Azizi

    2018-02-01

    The construction industry is one of the major industries propelling Malaysia's economy and contributes substantially to the nation's GDP growth, yet high fatality rates on construction sites have caused concern among safety practitioners and stakeholders. Hence, there is a need for benchmarking the performance of Malaysia's construction industry, especially in terms of safety. This concept can create a fertile ground for ideas, but only in a receptive environment; organizations that share good practices and compare their safety performance against others benefit most in establishing an improved safety culture. This research was conducted to study the importance of awareness, evaluate current practice and improvement, and identify the constraints on implementing benchmarking of safety performance in the industry. Additionally, interviews with construction professionals yielded different views on this concept. A comparison was made to show the differing understandings of the benchmarking approach and of how safety performance can be benchmarked; these views nevertheless share one mission: to evaluate objectives identified through benchmarking that will improve an organization's safety performance. Finally, the expected result of this research is to help Malaysia's construction industry implement best practice in safety performance management through the concept of benchmarking.

  6. Improving the capability of an integrated CA-Markov model to simulate spatio-temporal urban growth trends using an Analytical Hierarchy Process and Frequency Ratio

    Science.gov (United States)

    Aburas, Maher Milad; Ho, Yuek Ming; Ramli, Mohammad Firuz; Ash'aari, Zulfa Hanan

    2017-07-01

    The creation of an accurate simulation of future urban growth is considered one of the most important challenges in urban studies that involve spatial modeling. The purpose of this study is to improve the simulation capability of an integrated CA-Markov Chain (CA-MC) model using CA-MC based on the Analytical Hierarchy Process (AHP) and CA-MC based on Frequency Ratio (FR), both applied in Seremban, Malaysia, as well as to compare the performance and accuracy between the traditional and hybrid models. Various physical, socio-economic, utilities, and environmental criteria were used as predictors, including elevation, slope, soil texture, population density, distance to commercial area, distance to educational area, distance to residential area, distance to industrial area, distance to roads, distance to highway, distance to railway, distance to power line, distance to stream, and land cover. For calibration, three models were applied to simulate urban growth trends in 2010; the actual data of 2010 were used for model validation utilizing the Relative Operating Characteristic (ROC) and Kappa coefficient methods. Consequently, future urban growth maps of 2020 and 2030 were created. The validation findings confirm that the integration of the CA-MC model with the FR model and employing the significant driving forces of urban growth in the simulation process have resulted in the improved simulation capability of the CA-MC model. This study has provided a novel approach for improving the CA-MC model based on FR, which will provide powerful support to planners and decision-makers in the development of future sustainable urban planning.
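
    The Kappa validation step mentioned above reduces to a confusion-matrix calculation between the simulated and actual maps; a minimal sketch with invented cell counts (not the Seremban data):

```python
# Cohen's kappa from a square confusion matrix
# (rows = actual classes, columns = simulated classes).
# Kappa corrects the observed cell-by-cell agreement for the
# agreement expected by chance from the marginal totals.

def cohens_kappa(confusion):
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / total ** 2
    return (observed - expected) / (1 - expected)

# 2x2 example: urban vs. non-urban cells.
matrix = [[90, 10],
          [20, 80]]
kappa = cohens_kappa(matrix)  # observed agreement 0.85, chance agreement 0.5
```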

  7. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
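
    The performance metrics that the infrastructure computes via SPARQL reduce to standard precision/recall arithmetic over annotation sets; a sketch in plain Python, with invented (document, mutation) annotation pairs:

```python
# Precision, recall, and F1 between a system's extracted mutation
# annotations and a gold-standard set; the annotations are illustrative.

def precision_recall_f1(system, gold):
    tp = len(system & gold)  # true positives: annotations in both sets
    precision = tp / len(system) if system else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {("doc1", "E545K"), ("doc1", "V600E"), ("doc2", "G12D")}
system = {("doc1", "E545K"), ("doc2", "G12D"), ("doc2", "T790M")}
p, r, f = precision_recall_f1(system, gold)
```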

  8. Ad hoc committee on reactor physics benchmarks

    International Nuclear Information System (INIS)

    Diamond, D.J.; Mosteller, R.D.; Gehin, J.C.

    1996-01-01

    In the spring of 1994, an ad hoc committee on reactor physics benchmarks was formed under the leadership of two American Nuclear Society (ANS) organizations. The ANS-19 Standards Subcommittee of the Reactor Physics Division and the Computational Benchmark Problem Committee of the Mathematics and Computation Division had both seen a need for additional benchmarks to help validate computer codes used for light water reactor (LWR) neutronics calculations. Although individual organizations had employed various means to validate the reactor physics methods that they used for fuel management, operations, and safety, additional work in code development and refinement is under way, and to increase accuracy, there is a need for a corresponding increase in validation. Both organizations thought that there was a need to promulgate benchmarks based on measured data to supplement the LWR computational benchmarks that have been published in the past. By having an organized benchmark activity, the participants also gain by being able to discuss their problems and achievements with others traveling the same route

  9. Benchmarking for controllere: metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels Erik; Dietrichson, Lars Grubbe

    2008-01-01

    Benchmarking figures in many ways in the management practice of both private and public organizations. In management accounting, benchmark-based indicators (or key figures) are used, for example, when setting targets in performance contracts or when specifying the desired level of certain key figures in a Balanced Scorecard or similar performance management models. The article explains the concept of benchmarking by presenting and discussing its different facets, and describes four different applications of benchmarking to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project. It then treats the difference between results benchmarking and process benchmarking, followed by the use of internal versus external benchmarking and the use of benchmarking in budgeting and budget follow-up.

  10. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. An excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio, obtained with the BUGLE-93 library, for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used
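
    The C/M summary statistics reported above are plain ratio arithmetic; a minimal sketch with invented flux values, not the PCA dosimeter measurements:

```python
# Mean and sample standard deviation of calculated-to-measured (C/M)
# equivalent fission fluxes; the values below are illustrative only.
import statistics

def cm_summary(calculated, measured):
    ratios = [c / m for c, m in zip(calculated, measured)]
    return statistics.mean(ratios), statistics.stdev(ratios)

calc = [0.92, 0.95, 0.90, 0.96, 0.93]  # calculated fluxes (arbitrary units)
meas = [1.00, 1.00, 1.00, 1.00, 1.00]  # measured fluxes (same units)
mean_cm, sd_cm = cm_summary(calc, meas)
```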

  11. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background: Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results: We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion: We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  12. Benchmarking local healthcare-associated infections: Available benchmarks and interpretation challenges

    Directory of Open Access Journals (Sweden)

    Aiman El-Saed

    2013-10-01

    Summary: Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when choosing such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. Keywords: Benchmarking, Comparison, Surveillance, Healthcare-associated infections

  13. Assessment of S(α, β) libraries for criticality safety evaluations of wet storage pools by refined trend analyses

    International Nuclear Information System (INIS)

    Kolbe, E.; Vasiliev, A.; Ferroukhi, H.

    2009-01-01

    In a recent criticality safety evaluation (CSE) of a commercial wet storage pool applying MCNPX-2.5.0 in combination with the ENDF/B-VII.0 and JEFF-3.1 continuous energy cross section libraries, the maximum permissible initial fuel-enrichment limit for water reflected configurations was found to be dependent upon the applied neutron cross section library. More detailed investigations indicated that the difference is mainly caused by different sub-libraries for thermal neutron scattering based on parameterizations of the S(α, β) scattering matrix. Hence an analysis of trends was done with respect to the low energy neutron flux in order to assess the S(α, β) data sets. First, when performing the trend analysis based on the full set of 149 benchmarks that were employed for the validation, significant trends could not be found. But by analyzing a selected subset of benchmarks, clear trends with respect to the low energy neutron flux could be detected. The results presented in this paper demonstrate the sensitivity of specific configurations to the parameterizations of the S(α, β) scattering matrix and thus may help to improve CSE of wet storage pools. Finally, in addition to the low energy neutron flux, we also refined the trend analyses with respect to other key (spectrum-related) parameters by performing them with various selected subsets of the full suite of 149 benchmarks. The corresponding outcome using MCNPX 2.5.0 in combination with the ENDF/B-VII.0, ENDF/B-VI.8, JEFF-3.1, JEF-2.2, and JENDL-3.3 neutron cross section libraries are presented and discussed. (authors)

  14. Developing a Benchmarking Process in Perfusion: A Report of the Perfusion Downunder Collaboration

    Science.gov (United States)

    Baker, Robert A.; Newland, Richard F.; Fenton, Carmel; McDonald, Michael; Willcox, Timothy W.; Merry, Alan F.

    2012-01-01

    Abstract: Improving and understanding clinical practice is an appropriate goal for the perfusion community. The Perfusion Downunder Collaboration has established a multi-center, perfusion-focused database aimed at achieving these goals through the development of quantitative quality indicators for clinical improvement through benchmarking. Data were collected using the Perfusion Downunder Collaboration database from procedures performed in eight Australian and New Zealand cardiac centers between March 2007 and February 2011. At the Perfusion Downunder Meeting in 2010, it was agreed by consensus to report quality indicators (QIs) for glucose level, arterial outlet temperature, and pCO2 management during cardiopulmonary bypass. The values chosen for each QI were: blood glucose ≥4 mmol/L and ≤10 mmol/L; arterial outlet temperature ≤37°C; and arterial blood gas pCO2 ≥35 and ≤45 mmHg. The QI data were used to derive benchmarks using the Achievable Benchmark of Care (ABC™) methodology to identify the incidence of QIs at the best-performing centers. Five thousand four hundred and sixty-five procedures were evaluated to derive QI and benchmark data. The incidence of the blood glucose QI ranged from 37–96% of procedures, with a benchmark value of 90%. The arterial outlet temperature QI occurred in 16–98% of procedures, with a benchmark of 94%; the arterial pCO2 QI occurred in 21–91%, with a benchmark value of 80%. We have derived QIs and benchmark calculations for the management of several key aspects of cardiopulmonary bypass to provide a platform for improving the quality of perfusion practice. PMID:22730861
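
    In simplified form (ignoring the statistical adjustment the full method applies to small centers), the ABC™ methodology is a "pared mean": rank centers by QI attainment, then pool the best performers until they cover at least 10% of all cases and take the pooled rate as the benchmark. A hedged sketch with invented center data:

```python
# Illustrative, simplified Achievable Benchmark of Care (ABC) calculation.
# Center data are invented; the real method also adjusts small denominators.

def abc_benchmark(centers, min_fraction=0.10):
    """centers: list of (cases_meeting_QI, total_cases) per center."""
    total = sum(n for _, n in centers)
    best_first = sorted(centers, key=lambda c: c[0] / c[1], reverse=True)
    hits = cases = 0
    for met, n in best_first:
        hits += met
        cases += n
        if cases >= min_fraction * total:   # top performers cover >= 10%
            break
    return hits / cases

data = [(90, 100), (160, 200), (300, 500), (210, 700)]  # hypothetical centers
print(round(abc_benchmark(data), 2))  # pooled rate of the best performers
```

    With these numbers the two best centers (90% and 80% attainment) are pooled to reach the 10% case threshold, giving a benchmark of 250/300 ≈ 0.83.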

  15. Benchmarking - a validation of UTDefect

    International Nuclear Information System (INIS)

    Niklasson, Jonas; Bostroem, Anders; Wirdelius, Haakan

    2006-06-01

    New and stronger demands on the reliability of NDE/NDT procedures and methods have stimulated the development of simulation tools for NDT. Modelling of ultrasonic non-destructive testing is useful for a number of reasons, e.g. physical understanding, parametric studies and the qualification of procedures and personnel. The traditional way of qualifying a procedure is to generate a technical justification by employing experimental verification of the chosen technique. The manufacturing of test pieces is often very expensive and time consuming, and it tends to introduce a number of possible misalignments between the actual NDT situation and the proposed experimental simulation. The UTDefect computer code (SUNDT/simSUNDT), developed over a decade together with the Dept. of Mechanics at Chalmers Univ. of Technology, simulates the entire ultrasonic testing situation. A thoroughly validated model can be an alternative and a complement to experimental work, reducing the extensive cost. The validation can be accomplished by comparisons with other models, but ultimately by comparisons with experiments. This project addresses the latter alternative, while providing an opportunity to compare with other software at a later stage, when all data are made public and available. The comparison has been made with experimental data from an international benchmark study initiated by the World Federation of NDE Centers. The experiments were conducted with planar and spherically focused immersion transducers. The defects considered are side-drilled holes, flat-bottomed holes, and a spherical cavity. The data from the experiments are a reference signal used for calibration (the signal from the front surface of the test block at normal incidence) and the raw output from the scattering experiment. In all, more than forty cases have been compared. 
The agreement between UTDefect and the experiments was in general good (deviation less than 2dB) when the
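
    The 2 dB agreement criterion quoted above can be made concrete: after calibrating both signals against the common reference, the deviation between a simulated and a measured echo amplitude is expressed in decibels. A small sketch with invented amplitudes:

```python
# Sketch of the agreement metric: amplitude deviation in decibels between a
# simulated and a measured (calibrated) echo. Amplitudes are invented.
import math

def deviation_db(simulated, measured):
    """Positive when the simulation overpredicts the echo amplitude."""
    return 20.0 * math.log10(simulated / measured)

# e.g. a side-drilled-hole echo where the simulation is 5% high:
print(round(deviation_db(1.05, 1.00), 2))  # -> 0.42 dB, well under 2 dB
```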

  16. Lesson learned from the SARNET wall condensation benchmarks

    International Nuclear Information System (INIS)

    Ambrosini, W.; Forgione, N.; Merli, F.; Oriolo, F.; Paci, S.; Kljenak, I.; Kostka, P.; Vyskocil, L.; Travis, J.R.; Lehmkuhl, J.; Kelm, S.; Chin, Y.-S.; Bucci, M.

    2014-01-01

    The four benchmarking steps attracted the interest of a number of participants (on the order of ten in each phase), who applied their models to the proposed blind exercises and received comparisons with the reference data from the University of Pisa as the hosting organization. Since the same geometry and relatively similar conditions were addressed in all four steps, though with different operating conditions, a gradual improvement in the quality of results was observed with respect to the first applications. The activity proved fruitful in providing the needed awareness of the capabilities of condensation models, at least in the simple configuration involved in the benchmark exercises

  17. Review of California and National Methods for Energy Performance Benchmarking of Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Matson, Nance E.; Piette, Mary Ann

    2005-09-05

    This benchmarking review has been developed to support benchmarking planning and tool development under discussion by the California Energy Commission (CEC), Lawrence Berkeley National Laboratory (LBNL) and others in response to the Governor's Executive Order S-20-04 (2004). The Executive Order sets a goal of benchmarking and improving the energy efficiency of California's existing commercial building stock. The Executive Order requires the CEC to propose ''a simple building efficiency benchmarking system for all commercial buildings in the state''. This report summarizes and compares two currently available commercial building energy-benchmarking tools. One tool is the U.S. Environmental Protection Agency's Energy Star National Energy Performance Rating System, which is a national regression-based benchmarking model (referred to in this report as Energy Star). The second is Lawrence Berkeley National Laboratory's Cal-Arch, which is a California-based distributional model (referred to as Cal-Arch). Prior to the time Cal-Arch was developed in 2002, there were several other benchmarking tools available to California consumers but none that were based solely on California data. The Energy Star and Cal-Arch benchmarking tools both provide California with unique and useful methods to benchmark the energy performance of California's buildings. Rather than determine which model is ''better'', the purpose of this report is to understand and compare the underlying data, information systems, assumptions, and outcomes of each model.
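
    The two modelling styles can be contrasted in a few lines: a distributional (Cal-Arch-style) score ranks a building's energy use intensity (EUI) within a peer distribution, while a regression-based (Energy Star-style) score compares actual EUI with a value predicted from building characteristics. The peer data and regression coefficients below are invented; neither function reproduces the real tools' models.

```python
# Minimal sketch contrasting distributional and regression-based building
# energy benchmarking. All peer EUIs and coefficients are invented.
import numpy as np

peer_eui = np.array([45, 52, 60, 63, 70, 75, 82, 90, 95, 110])  # kBtu/ft2/yr

def distributional_percentile(eui, peers):
    """Cal-Arch-style: share of peer buildings at or below this EUI."""
    return 100.0 * np.mean(peers <= eui)

def regression_ratio(eui, floor_area, hours, coef=(20.0, 0.0005, 0.4)):
    """Energy-Star-style caricature: actual over regression-predicted EUI."""
    b0, b_area, b_hours = coef          # hypothetical fitted coefficients
    predicted = b0 + b_area * floor_area + b_hours * hours
    return eui / predicted              # > 1 means worse than predicted

print(distributional_percentile(70, peer_eui))     # 50.0 (median building)
print(round(regression_ratio(70, 50_000, 60), 2))
```

    The contrast matters because a distributional score only says where a building sits among peers, while a regression score attempts to normalize for operating characteristics before comparing.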

  18. Benchmarking of hospital information systems - a comparative analysis of benchmarking clusters in German-speaking countries

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme considers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their forms of cooperation. The benchmarking clusters also deal with different benchmarking subjects. The costs and quality of application systems, physical data-processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  19. Raising Quality and Achievement. A College Guide to Benchmarking.

    Science.gov (United States)

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  20. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  1. Benchmarks: The Development of a New Approach to Student Evaluation.

    Science.gov (United States)

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  2. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating, and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation, with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no-slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  3. Practice benchmarking in the age of targeted auditing.

    Science.gov (United States)

    Langdale, Ryan P; Holland, Ben F

    2012-11-01

    The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists.
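
    A hypothetical sketch of this CPT-level benchmarking, using invented claim records and invented peer denial rates (not actual Medicare Part B data): summarize charges, payments, and denials per Current Procedural Terminology code, then flag codes whose denial rate exceeds the peer benchmark.

```python
# Hypothetical claims-level benchmarking sketch. Claim records, CPT codes'
# peer denial rates, and dollar amounts are all invented for illustration.
from collections import defaultdict

claims = [  # (cpt_code, charged, paid, denied)
    ("96413", 350.0, 140.0, False),
    ("96413", 350.0,   0.0, True),
    ("96413", 350.0, 140.0, False),
    ("99214", 120.0,  80.0, False),
    ("99214", 120.0,   0.0, True),
]
peer_denial_rate = {"96413": 0.10, "99214": 0.55}  # hypothetical benchmarks

stats = defaultdict(lambda: [0.0, 0.0, 0, 0])      # charged, paid, denials, n
for cpt, charged, paid, denied in claims:
    s = stats[cpt]
    s[0] += charged; s[1] += paid; s[2] += denied; s[3] += 1

for cpt, (charged, paid, denials, n) in sorted(stats.items()):
    rate = denials / n
    flag = "AUDIT-RISK" if rate > peer_denial_rate[cpt] else "ok"
    print(f"{cpt}: denial rate {rate:.0%} ({flag})")
```

    In this toy data set the chemotherapy-administration code is flagged because its 33% denial rate exceeds the 10% peer benchmark, while the office-visit code stays within its peer range.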

  4. Benchmarking MARS (accident management software) with the Browns Ferry fire

    International Nuclear Information System (INIS)

    Dawson, S.M.; Liu, L.Y.; Raines, J.C.

    1992-01-01

    The MAAP Accident Response System (MARS) is user-friendly computer software developed to provide management and engineering staff with the most needed insights, during actual or simulated accidents, into the current and future conditions of the plant based on current plant data and its trends. To demonstrate the reliability of the MARS code in simulating a plant transient, MARS is being benchmarked with the available reactor pressure vessel (RPV) pressure and level data from the Browns Ferry fire. The MARS software uses the Modular Accident Analysis Program (MAAP) code as its basis to calculate plant response under accident conditions. MARS uses a limited set of plant data to initialize and track the accident progression. To perform this benchmark, a simulated set of plant data was constructed based on actual report data containing the information necessary to initialize MARS and keep track of plant system status throughout the accident progression. The initial Browns Ferry fire data were produced by performing a MAAP run to simulate the accident. The remaining accident simulation used actual plant data

  5. Mixed-oxide (MOX) fuel performance benchmark. Summary of the results for the PRIMO MOX rod BD8

    International Nuclear Information System (INIS)

    Ott, L.J.; Sartori, E.; Costa, A.; ); Sobolev, V.; Lee, B-H.; Alekseev, P.N.; Shestopalov, A.A.; Mikityuk, K.O.; Fomichenko, P.A.; Shatrova, L.P.; Medvedev, A.V.; Bogatyr, S.M.; Khvostov, G.A.; Kuznetsov, V.I.; Stoenescu, R.; Chatwin, C.P.

    2009-01-01

    The OECD/NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, nuclear fuel performance, and fuel cycle issues related to the disposition of weapons-grade plutonium as MOX fuel. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close cooperation with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A major part of these activities includes benchmark studies. This report describes the results of the PRIMO rod BD8 benchmark exercise, the second benchmark on MOX fuel behaviour by the Task Force on Reactor-based Plutonium Disposition (TFRPD). The corresponding PRIMO experimental data have been released, compiled and reviewed for the International Fuel Performance Experiments (IFPE) database. The observed ranges (as noted in the text) in the predicted thermal and fission gas release (FGR) responses are reasonable given the variety and combination of thermal conductivity and FGR models employed by the benchmark participants with their respective fuel performance codes

  6. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  7. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized and the purposes of benchmark selection are investigated. The result of this analysis is a formulation of criteria for selecting benchmarks for flexible multibody formalisms. Based on these criteria, an initial set of suitable benchmarks is described. In addition, the evaluation measures are revised and extended.

  8. 2010 energy benchmarking report performance of the Canadian office sector

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-04-15

    In 2009, REALpac set a target of reducing energy consumption in office buildings to 20 equivalent kilowatt-hours per square foot by 2015. Following this, REALpac launched a national energy benchmarking survey to create a baseline for building energy performance across Canada; this paper provides the results of that survey. The survey was carried out using a tool that measures energy use in a meaningful way from building characteristics data and energy use data from utility bills. The survey covered 2009 data; 261 office buildings submitted data, which were then analyzed to identify trends and establish a baseline. Results showed a wide range of performance, with an annual mean building energy use intensity of 28.7 ekWh/ft2. The survey demonstrated that many office building owners and managers are taking steps to monitor and minimize energy use in their buildings.
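
    The survey's core metric, annual building energy use intensity in equivalent kilowatt-hours per square foot, can be sketched as follows; the fuel figures and the GJ-to-ekWh conversion step are illustrative assumptions, not REALpac's exact methodology.

```python
# Sketch of an energy use intensity (EUI) calculation in ekWh/ft2/yr:
# convert all fuels to equivalent kWh, sum, divide by floor area.
# All building figures are invented; the conversion factor is 1 GJ = 277.778 kWh.
KWH_PER_GJ = 277.778

def energy_use_intensity(electricity_kwh, gas_gj, floor_area_ft2):
    total_ekwh = electricity_kwh + gas_gj * KWH_PER_GJ
    return total_ekwh / floor_area_ft2

eui = energy_use_intensity(4_200_000, 3_000, 200_000)
print(round(eui, 1), "ekWh/ft2 - 20 ekWh/ft2 target met:", eui <= 20.0)
```

    This hypothetical building lands near the survey's 28.7 ekWh/ft2 mean rather than the 20 ekWh/ft2 target, which is the kind of gap the benchmarking baseline is meant to expose.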

  9. Benchmarking MILC code with OpenMP and MPI

    International Nuclear Information System (INIS)

    Gottlieb, Steven; Tamhankar, Sonali

    2001-01-01

    A trend in high performance computers that is becoming increasingly popular is the use of symmetric multi-processing (SMP) rather than the older paradigm of massively parallel processing (MPP). MPI codes that ran and scaled well on MPP machines can often be run on an SMP machine using the vendor's version of MPI. However, this approach may not make optimal use of the (expensive) SMP hardware. More significantly, there are machines like Blue Horizon, an IBM SP with 8-way SMP nodes at the San Diego Supercomputer Center, that can only support 4 MPI processes per node (with the current switch). On such a machine it is imperative to be able to use OpenMP parallelism on the node, and MPI between nodes. We describe the challenges of converting the MILC MPI code to use a second level of OpenMP parallelism, and benchmarks on IBM and Sun computers

  10. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II.

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report
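
    The first-tier screening step described above can be sketched as a hazard-quotient comparison: divide each estimated dose by its toxicological benchmark and carry forward chemicals with HQ ≥ 1 into the baseline assessment. All dose and benchmark values below are invented, not taken from the report.

```python
# Sketch of tier-1 ecological screening via hazard quotients (HQ).
# Doses and benchmarks (mg per kg body weight per day) are invented.

def screen_contaminants(doses, benchmarks):
    """Return {chemical: HQ} for chemicals whose HQ = dose/benchmark >= 1,
    i.e. those retained for the baseline ecological risk assessment."""
    return {chem: doses[chem] / benchmarks[chem]
            for chem in doses if doses[chem] / benchmarks[chem] >= 1.0}

est_dose = {"cadmium": 0.8, "zinc": 12.0, "mercury": 0.02}    # hypothetical
benchmark = {"cadmium": 1.0, "zinc": 9.6, "mercury": 0.064}   # hypothetical
print(screen_contaminants(est_dose, benchmark))  # only zinc exceeds
```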

  11. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  12. A simplified 2D HTTR benchmark problem

    International Nuclear Information System (INIS)

    Zhang, Z.; Rahnema, F.; Pounders, J. M.; Zhang, D.; Ougouag, A.

    2009-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of relevant whole-core configurations. In this paper we have created a numerical benchmark problem in a 2D configuration typical of a high temperature gas-cooled prismatic core. This problem was derived from the HTTR start-up experiment. For code-to-code verification, complex details of the geometry and material specification of the physical experiments are not necessary. To this end, the benchmark problem presented here is derived by simplifications that remove the unnecessary details while retaining the heterogeneity and the major physics properties from the neutronics viewpoint. Also included here is a six-group material (macroscopic) cross section library for the benchmark problem. This library was generated using the lattice depletion code HELIOS. Using this library, benchmark-quality Monte Carlo solutions are provided for three different configurations (all-rods-in, partially-controlled and all-rods-out). The reference solutions include the core eigenvalue, block (assembly) averaged fuel pin fission density distributions, and absorption rates in absorbers (burnable poison and control rods). (authors)
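
    As a much-reduced illustration of what a multi-group macroscopic cross-section library feeds into, the sketch below computes the infinite-medium multiplication factor for a two-group problem with downscatter only; the group constants are invented and are not the HTTR benchmark's six-group data.

```python
# Two-group infinite-medium eigenvalue sketch (downscatter only, all fission
# neutrons born in group 1). Macroscopic cross sections are invented.

def k_infinity(nu_sf, sigma_a, s12):
    """nu_sf: (nu*Sigma_f1, nu*Sigma_f2); sigma_a: (Sigma_a1, Sigma_a2);
    s12: downscatter cross section from group 1 to group 2 (all in 1/cm)."""
    nu_sf1, nu_sf2 = nu_sf
    sa1, sa2 = sigma_a
    removal = sa1 + s12                       # group-1 removal
    # fissions from group 1 plus fissions from neutrons slowing into group 2
    return nu_sf1 / removal + (s12 / removal) * (nu_sf2 / sa2)

print(round(k_infinity((0.005, 0.10), (0.010, 0.080), 0.020), 3))
```

    Real core benchmarks like the one above add spatial leakage, heterogeneity, and six groups, which is precisely why Monte Carlo reference solutions are needed.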

  13. Effects of benchmarking on the quality of type 2 diabetes care: results of the OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study in Greece

    Science.gov (United States)

    Tsimihodimos, Vasilis; Kostapanos, Michael S.; Moulis, Alexandros; Nikas, Nikos; Elisaf, Moses S.

    2015-01-01

    Objectives: To investigate the effect of benchmarking on the quality of type 2 diabetes (T2DM) care in Greece. Methods: The OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study [ClinicalTrials.gov identifier: NCT00681850] was an international multicenter, prospective cohort study. It included physicians randomized 3:1 either to receive benchmarking for glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C) treatment targets (benchmarking group) or not (control group). The proportions of patients achieving the targets for the above-mentioned parameters were compared between groups after 12 months of treatment. Also, the proportions of patients achieving those targets at 12 months were compared with baseline in the benchmarking group. Results: In the Greek region, the OPTIMISE study included 797 adults with T2DM (570 in the benchmarking group). At month 12 the proportion of patients within the predefined targets for SBP and LDL-C was greater in the benchmarking than in the control group (50.6 versus 35.8%, and 45.3 versus 36.1%, respectively); however, these differences were not statistically significant. No difference between groups was noted in the percentage of patients achieving the predefined target for HbA1c. At month 12 the increase in the percentage of patients achieving all three targets was greater in the benchmarking (5.9–15.0%) than in the control group (2.7–8.1%). In the benchmarking group more patients were on target regarding SBP (50.6% versus 29.8%), LDL-C (45.3% versus 31.3%) and HbA1c (63.8% versus 51.2%) at 12 months compared with baseline. Benchmarking may comprise a promising tool for improving the quality of T2DM care. Nevertheless, target achievement rates for each, and for all three, quality indicators were suboptimal, indicating that there are still unmet needs in the management of T2DM. PMID:26445642
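
    Between-group comparisons of attainment proportions like those reported above are commonly tested with a two-proportion z-test; a sketch with invented counts (not the OPTIMISE data):

```python
# Hedged sketch: two-proportion z-test on target-attainment rates.
# The counts below are invented, not taken from the OPTIMISE study.
import math

def two_proportion_z(hits1, n1, hits2, n2):
    """z-statistic for H0: the two underlying proportions are equal."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical example: 120/240 on target versus 70/200.
z = two_proportion_z(120, 240, 70, 200)
print(round(z, 2))  # beyond the 1.96 threshold at the 5% level
```

    With small groups or small attainment counts the observed percentage gaps can fail this test, which is how the SBP and LDL-C differences above could be sizeable yet not statistically significant.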

  14. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, A.C.; Herman, M.; Kahler,A.C.; MacFarlane,R.E.; Mosteller,R.D.; Kiedrowski,B.C.; Frankle,S.C.; Chadwick,M.B.; McKnight,R.D.; Lell,R.M.; Palmiotti,G.; Hiruta,H.; Herman,M.; Arcilla,R.; Mughabghab,S.F.; Sublet,J.C.; Trkov,A.; Trumbull,T.H.; Dunn,M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (previously 393 neutron files, now 423, including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., 'ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data,' Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten, are greatly reduced. Improvements are also

  15. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, A. [Los Alamos National Laboratory (LANL); Macfarlane, R E [Los Alamos National Laboratory (LANL); Mosteller, R D [Los Alamos National Laboratory (LANL); Kiedrowski, B C [Los Alamos National Laboratory (LANL); Frankle, S C [Los Alamos National Laboratory (LANL); Chadwick, M. B. [Los Alamos National Laboratory (LANL); Mcknight, R D [Argonne National Laboratory (ANL); Lell, R M [Argonne National Laboratory (ANL); Palmiotti, G [Idaho National Laboratory (INL); Hiruta, h [Idaho National Laboratory (INL); Herman, Micheal W [Brookhaven National Laboratory (BNL); Arcilla, r [Brookhaven National Laboratory (BNL); Mughabghab, S F [Brookhaven National Laboratory (BNL); Sublet, J C [Culham Science Center, Abington, UK; Trkov, A. [Jozef Stefan Institute, Slovenia; Trumbull, T H [Knolls Atomic Power Laboratory; Dunn, Michael E [ORNL

    2011-01-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (previously 393 neutron files, now 423, including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium reflected (235)U and (239)Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten, are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as (236)U, (238,242)Pu and (241,243)Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical

  16. Benchmark calculations of power distribution within assemblies

    International Nuclear Information System (INIS)

    Cavarec, C.; Perron, J.F.; Verwaerde, D.; West, J.P.

    1994-09-01

    The main objective of this Benchmark is to compare different techniques for fine flux prediction based upon coarse mesh diffusion or transport calculations. We proposed 5 "core" configurations including different assembly types (17 x 17 pins, "uranium", "absorber" or "MOX" assemblies), with different boundary conditions. The specification required results in terms of reactivity, pin by pin fluxes and production rate distributions. The proposal for these Benchmark calculations was made by J.C. LEFEBVRE, J. MONDOT, J.P. WEST and the specification (with nuclear data, assembly types, core configurations for 2D geometry and results presentation) was distributed to correspondents of the OECD Nuclear Energy Agency. 11 countries and 19 companies answered the exercise proposed by this Benchmark. Heterogeneous calculations and homogeneous calculations were made. Various methods were used to produce the results: diffusion (finite differences, nodal...), transport (P_ij, S_n, Monte Carlo). This report presents an analysis and intercomparisons of all the results received

  17. ZZ WPPR, Pu Recycling Benchmark Results

    International Nuclear Information System (INIS)

    Lutz, D.; Mattes, M.; Delpech, Marc; Juanola, Marc

    2002-01-01

    Description of program or function: The NEA NSC Working Party on Physics of Plutonium Recycling has commissioned a series of benchmarks covering: - Plutonium recycling in pressurized-water reactors; - Void reactivity effect in pressurized-water reactors; - Fast Plutonium-burner reactors: beginning of life; - Plutonium recycling in fast reactors; - Multiple recycling in advanced pressurized-water reactors. The results have been published (see references). ZZ-WPPR-1-A/B contains graphs and tables relative to the PWR MOX pin cell benchmark, representing typical fuel for plutonium recycling: one case corresponds to a first cycle, the second to a fifth cycle. These computer-readable files contain the complete set of results, while the printed report contains only a subset. ZZ-WPPR-2-CYC1 contains the results from cycle 1 of the multiple recycling benchmarks

  18. Interior beam searchlight semi-analytical benchmark

    International Nuclear Information System (INIS)

    Ganapol, Barry D.; Kornreich, Drew E.

    2008-01-01

    Multidimensional semi-analytical benchmarks to provide highly accurate standards to assess routine numerical particle transport algorithms are few and far between. Because of the well-established 1D theory for the analytical solution of the transport equation, it is sometimes possible to 'bootstrap' a 1D solution to generate a more comprehensive solution representation. Here, we consider the searchlight problem (SLP) as a multidimensional benchmark. A variation of the usual SLP is the interior beam SLP (IBSLP) where a beam source lies beneath the surface of a half space and emits directly towards the free surface. We consider the establishment of a new semi-analytical benchmark based on a new FN formulation. This problem is important in radiative transfer experimental analysis to determine cloud absorption and scattering properties. (authors)

  19. The national hydrologic bench-mark network

    Science.gov (United States)

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  20. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping...
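    The secret-sharing idea underlying such a system can be illustrated with a toy additive-sharing scheme. This is a didactic sketch only, not the SPDZ protocol or the paper's prototype; the party count, modulus and input values are invented:

    ```python
    import random

    PRIME = 2**61 - 1  # large prime modulus; shares live in the field Z_p

    def share(secret, n=3):
        """Split an integer secret into n additive shares that sum to it mod PRIME."""
        shares = [random.randrange(PRIME) for _ in range(n - 1)]
        shares.append((secret - sum(shares)) % PRIME)
        return shares

    def reconstruct(shares):
        """Recombine shares; any proper subset of them reveals nothing about the secret."""
        return sum(shares) % PRIME

    # Hypothetical performance figures from three banks. Each bank shares its
    # value, so no single party ever sees another bank's raw input.
    inputs = [720, 655, 810]
    shared = [share(v) for v in inputs]

    # Party i holds the i-th share of every input and sums them locally;
    # addition of shares commutes with addition of secrets.
    local_sums = [sum(s[i] for s in shared) % PRIME for i in range(3)]

    print(reconstruct(local_sums))  # equals sum(inputs) = 2185
    ```

    Linear operations like this come almost for free in such schemes; the linear-programming benchmarking model the paper describes additionally requires multiplications and comparisons, which is where a full protocol such as SPDZ is needed.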

  1. Benchmark referencing of neutron dosimetry measurements

    International Nuclear Information System (INIS)

    Eisenhauer, C.M.; Grundl, J.A.; Gilliam, D.M.; McGarry, E.D.; Spiegel, V.

    1980-01-01

    The concept of benchmark referencing involves interpretation of dosimetry measurements in applied neutron fields in terms of similar measurements in benchmark fields whose neutron spectra and intensity are well known. The main advantage of benchmark referencing is that it minimizes or eliminates many types of experimental uncertainties such as those associated with absolute detection efficiencies and cross sections. In this paper we consider the cavity external to the pressure vessel of a power reactor as an example of an applied field. The pressure vessel cavity is an accessible location for exploratory dosimetry measurements aimed at understanding embrittlement of pressure vessel steel. Comparisons with calculated predictions of neutron fluence and spectra in the cavity provide a valuable check of the computational methods used to estimate pressure vessel safety margins for pressure vessel lifetimes

  2. MIPS bacterial genomes functional annotation benchmark dataset.

    Science.gov (United States)

    Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner

    2005-05-15

    Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality data (as benchmark) as well as tedious preparatory work to generate sequence parameters required as input data for the machine learning methods. Different program settings and incompatible protocols make a comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). These resources include precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition and other parameters that could be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab

  3. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.

  4. Benchmarking criticality safety calculations with subcritical experiments

    International Nuclear Information System (INIS)

    Mihalczo, J.T.

    1984-06-01

    Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained which can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where not, examples from critical experiments have been used but the measurement methods could also be used for subcritical experiments

  5. Concrete benchmark experiment: ex-vessel LWR surveillance dosimetry; Experience "Benchmark beton" pour la dosimetrie hors cuve dans les reacteurs a eau legere

    Energy Technology Data Exchange (ETDEWEB)

    Ait Abderrahim, H.; D`Hondt, P.; Oeyen, J.; Risch, P.; Bioux, P.

    1993-09-01

    The analysis of DOEL-1 in-vessel and ex-vessel neutron dosimetry, using the DOT 3.5 Sn code coupled with the VITAMIN-C cross-section library, showed the same C/E values for different detectors at the surveillance capsule and the ex-vessel cavity positions. These results seem to be in contradiction with those obtained in several benchmark experiments (PCA, PSF, VENUS...) when using the same computational tools. Indeed a strong decreasing radial trend of the C/E was observed, partly explained by the overestimation of the iron inelastic scattering. The flat trend seen in DOEL-1 could be explained by compensating errors in the calculation, such as the backscattering due to the concrete walls outside the cavity. The "Concrete Benchmark" experiment has been designed to judge the ability of these calculation methods to treat the backscattering. This paper describes the "Concrete Benchmark" experiment, the measured and computed neutron dosimetry results and their comparison. This preliminary analysis seems to indicate an overestimation of the backscattering effect in the calculations. (authors). 5 figs., 1 tab., 7 refs.

  6. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.
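    The core of a band-rating scheme of this kind fits in a few lines. The thresholds below are hypothetical, chosen only to illustrate the mechanism; the paper's actual bands and cut-offs are not reproduced here:

    ```python
    # Hypothetical band thresholds in litres per person per day (L/p/d).
    BANDS = [(80, "A"), (100, "B"), (120, "C"), (140, "D"), (160, "E")]

    def water_band(consumption_lpd):
        """Map a household's per-person daily water use onto a letter band."""
        for limit, band in BANDS:
            if consumption_lpd <= limit:
                return band
        return "F"  # anything above the last threshold

    # Any change that cuts consumption, behavioural or technological (e.g.
    # rainwater harvesting offsetting garden demand), moves the user up a band:
    print(water_band(130))  # D
    print(water_band(95))   # B
    ```

    The point the paper makes is that the band depends only on the resulting consumption, so efficiency measures and behaviour change are rewarded on an equal footing.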

  7. Health Improvements Have Been More Rapid and Widespread in China than in India: A Comparative Analysis of Health and Socioeconomic Trends from 1960 to 2011

    Directory of Open Access Journals (Sweden)

    Gopal K. Singh, PhD

    2012-11-01

    Full Text Available Objectives: We examined differences between China and India in key health and socioeconomic indicators, including life expectancy, infant and child mortality, non-communicable disease mortality from cancer, cardiovascular diseases (CVD) and diabetes, the Human Development Index, the Gender Inequality Index, material living conditions, and health expenditure. Methods: Data on health and social indicators came from various World Health Organization and United Nations databases on global health and development statistics, including the GLOBOCAN cancer database. Mortality trends were modeled by log-linear regression, and differences in rates and relative risks were tested for statistical significance. Results: Although both countries have made marked improvements, India lags behind China on several key health indicators. Differential rates of mortality decline during 1960-2009 have led to a widening health gap between China and India. In 2009 the infant mortality rate in India was 50 deaths per 1,000 live births, 3 times greater than the rate for China. Sixty-six out of 1,000 Indian children died before reaching their 5th birthday, compared with 19 children in China. China's life expectancy is 9 years longer than India's. Life expectancy at birth in India increased from 42 years in 1960 to 65 years in 2009, while life expectancy in China increased from 47 years in 1960 to 74 years in 2009. Major health concerns for China include high rates of stomach, liver, and lung cancer, CVD, and smoking prevalence. Globally, India ranked 90th and China 102nd in life satisfaction. Conclusions and Public Health Implications: India's less favorable health profile compared to China is largely attributable to its higher rates of mortality from communicable diseases and maternal and perinatal conditions. Further health gains can be achieved by reducing social inequality, greater investments in human development and health services, and by prevention and control of chronic

  8. Trends in the quality of care for elderly people with type 2 diabetes: the need for improvements in safety and quality (the 2001 and 2007 ENTRED Surveys).

    Science.gov (United States)

    Pornet, Carole; Bourdel-Marchasson, Isabelle; Lecomte, Pierre; Eschwège, Eveline; Romon, Isabelle; Fosse, Sandrine; Assogba, Frank; Roudier, Candice; Fagot-Campagna, Anne

    2011-04-01

    This study aimed to characterize the sociodemographic data, health status, quality of care and 6-year trends in elderly people with type 2 diabetes. This study used two French cross-sectional representative surveys of adults of all ages with all types of diabetes (Entred 2001 and 2007), which combined medical claims, and patient and medical provider questionnaires. The 2007 data in patients with type 2 diabetes aged 65 years or over (n=1766) were described and compared with the 2001 data (n=1801). Since 2001, obesity has increased (35% in 2007; +7 points since 2001) while written nutritional advice was less often provided (59%; -6 points). Mean HbA(1c) (7.1%; -0.2%), blood pressure (135/76 mmHg; -4/-3 mmHg) and LDL cholesterol (1.04 g/L; -0.21 g/L) declined, while the use of medication increased: at least two OHAs, 34% (+4 points); OHA(s) and insulin combined, 10% (+4 points); antihypertensive treatment, 83% (+4 points); and statins 48% (+26 points). Severe hypoglycaemia remained frequent (10% had an event at least once a year). The overall prevalence of complications increased. Renal complications were not monitored carefully enough (missing value for albuminuria: 42%; -4.5 points), and 46% of those with a glomerular filtration rate less than 60 mL/min/1.73 m² were taking metformin. Elderly people with type 2 diabetes are receiving better quality of care and have better control of cardiovascular risk factors than before. However, improvement is still required, in particular by performing better screening for complications. In this patient population, it is important to carefully monitor the risks for hypoglycaemia, hypotension, malnutrition and contraindications related to renal function. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  9. Empirical mode decomposition and k-nearest embedding vectors for timely analyses of antibiotic resistance trends.

    Science.gov (United States)

    Teodoro, Douglas; Lovis, Christian

    2013-01-01

    Antibiotic resistance is a major worldwide public health concern. In clinical settings, timely antibiotic resistance information is key for care providers as it allows appropriate targeted treatment or improved empirical treatment when the specific results of the patient are not yet available. To improve antibiotic resistance trend analysis algorithms by building a novel, fully data-driven forecasting method from the combination of trend extraction and machine learning models for enhanced biosurveillance systems. We investigate a robust model for extraction and forecasting of antibiotic resistance trends using a decade of microbiology data. Our method consists of breaking down the resistance time series into independent oscillatory components via the empirical mode decomposition technique. The resulting waveforms describing intrinsic resistance trends serve as the input for the forecasting algorithm. The algorithm applies the delay coordinate embedding theorem together with the k-nearest neighbor framework to project mappings from past events into the future dimension and estimate the resistance levels. The algorithms that decompose the resistance time series and filter out high frequency components showed statistically significant performance improvements in comparison with a benchmark random walk model. We present further qualitative use-cases of antibiotic resistance trend extraction, where empirical mode decomposition was applied to highlight the specificities of the resistance trends. The decomposition of the raw signal was found not only to yield valuable insight into the resistance evolution, but also to produce novel models of resistance forecasters with boosted prediction performance, which could be utilized as a complementary method in the analysis of antibiotic resistance trends.
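    The forecasting stage described above (delay-coordinate embedding plus k-nearest-neighbour averaging) can be sketched as follows. This is a minimal illustration under assumed parameter names, and it omits the empirical mode decomposition step the paper applies first:

    ```python
    import numpy as np

    def knn_forecast(series, dim=3, lag=1, k=3):
        """One-step forecast: embed the series in delay coordinates, find the k
        past states nearest the current state, and average their successors."""
        s = np.asarray(series, dtype=float)
        start = (dim - 1) * lag
        # Delay-coordinate vectors x_t = (s_{t-(dim-1)lag}, ..., s_{t-lag}, s_t)
        emb = np.array([s[t - start : t + 1 : lag] for t in range(start, len(s))])
        query, train = emb[-1], emb[:-1]
        targets = s[start + 1 :]  # the value that followed each training state
        # k states closest to the current one, by Euclidean distance ...
        nearest = np.argsort(np.linalg.norm(train - query, axis=1))[:k]
        # ... and the average of what happened next.
        return float(targets[nearest].mean())

    # On a periodic signal, past cycles supply close neighbours, so the
    # forecast tracks the next (unseen) sample well.
    series = np.sin(0.2 * np.arange(200))
    prediction = knn_forecast(series)
    ```

    In the paper's setting the input would be one of the low-frequency resistance trend components extracted by EMD rather than the raw time series.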

  10. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II.

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets

  11. Benchmarking af kommunernes førtidspensionspraksis

    DEFF Research Database (Denmark)

    Gregersen, Ole

    Each year, the Danish National Social Appeals Board (Den Sociale Ankestyrelse) publishes statistics on decisions in disability pension cases. Together with the annual statistics, results are published from a benchmarking model in which the number of awards in each municipality is compared with the expected number of awards had the municipality followed the same decision practice as the "average municipality", after correcting for the social structure of the municipality. The benchmarking model used so far is documented in Ole Gregersen (1994): Kommunernes Pensionspraksis, Servicerapport, Socialforskningsinstituttet. This note documents a...

  12. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.E.; Cheng, E.T.

    1985-01-01

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the TBR to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with up to 20% discrepancies for thin natural Li17Pb83 blankets

  13. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.L.; Cheng, E.T.

    1986-01-01

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease with up to 20% discrepancies for thin natural Li17Pb83 blankets. (author)

  14. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  15. Reactor group constants and benchmark test

    Energy Technology Data Exchange (ETDEWEB)

    Takano, Hideki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

    The evaluated nuclear data files such as JENDL, ENDF/B-VI and JEF-2 are validated by analyzing critical mock-up experiments for various type reactors and assessing applicability for nuclear characteristics such as criticality, reaction rates, reactivities, etc. This is called Benchmark Testing. In the nuclear calculations, the diffusion and transport codes use the group constant library which is generated by processing the nuclear data files. In this paper, the calculation methods of the reactor group constants and benchmark test are described. Finally, a new group constants scheme is proposed. (author)

  16. Skiing trends

    Science.gov (United States)

    Charles R. Goeldner; Stacy Standley

    1980-01-01

    A brief historical overview of skiing is presented, followed by a review of factors such as energy, population trends, income, sex, occupation and attitudes which affect the future of skiing. A. C. Nielsen's Sports Participation Surveys show that skiing is the second fastest growing sport in the country. Skiing Magazine's study indicates there are...

  17. Billing Trends

    Indian Academy of Sciences (India)

    Billing Trends. Internet access: Bandwidth becoming analogous to electric power. Only maximum capacity (load) is fixed; Charges based on usage (units). Leased line bandwidth: Billing analogous to phone calls. But bandwidth is variable.

  18. Food Trends.

    Science.gov (United States)

    Schwenk, Nancy E.

    1991-01-01

    An overall perspective on trends in food consumption is presented. Nutrition awareness is at an all-time high; consumption is influenced by changes in disposable income, availability of convenience foods, smaller household size, and an increasing proportion of ethnic minorities in the population. (18 references) (LB)

  19. Trends and Perspective in Industrial Maintenance Management

    DEFF Research Database (Denmark)

    Luxhoj, James T.; Thorsteinsson, Uffe; Riis, Jens Ove

    1997-01-01

    With increased global competition in manufacturing, many companies are seeking ways to gain competitive advantages with respect to cost, service, quality and on-time deliveries. The role that effective maintenance management plays in contributing to overall organizational productivity has received increased attention. Trends and perspectives in industrial maintenance are presented. The results of benchmarking studies from Scandinavia and the United States are also presented and compared. Case studies that examine maintenance methods, knowledge, organization and information systems in three Danish...

  20. Benchmarks for effective primary care-based nursing services for adults with depression: a Delphi study.

    Science.gov (United States)

    McIlrath, Carole; Keeney, Sinead; McKenna, Hugh; McLaughlin, Derek

    2010-02-01

    This paper is a report of a study conducted to identify and gain consensus on appropriate benchmarks for effective primary care-based nursing services for adults with depression. Worldwide evidence suggests that between 5% and 16% of the population have a diagnosis of depression. Most of their care and treatment takes place in primary care. In recent years, primary care nurses, including community mental health nurses, have become more involved in the identification and management of patients with depression; however, there are no appropriate benchmarks to guide, develop and support their practice. In 2006, a three-round electronic Delphi survey was completed by a United Kingdom multi-professional expert panel (n = 67). Round 1 generated 1216 statements relating to structures (such as training and protocols), processes (such as access and screening) and outcomes (such as patient satisfaction and treatments). Content analysis was used to collapse statements into 140 benchmarks. Seventy-three benchmarks achieved consensus during subsequent rounds. Of these, 45 (61%) were related to structures, 18 (25%) to processes and 10 (14%) to outcomes. Multi-professional primary care staff have similar views about the appropriate benchmarks for care of adults with depression. These benchmarks could serve as a foundation for depression improvement initiatives in primary care and ongoing research into depression management by nurses.