WorldWideScience

Sample records for benchmark definition updated

  1. Building America Research Benchmark Definition: Updated December 20, 2007

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2008-01-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.
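
    A minimal sketch of the savings metric implied above: percent whole-house source energy savings measured against the fixed Benchmark. The energy figures below are hypothetical placeholders, not values from the report.

      def percent_savings(benchmark_mbtu, prototype_mbtu):
          """Savings of a prototype house relative to the Benchmark (%)."""
          return 100.0 * (benchmark_mbtu - prototype_mbtu) / benchmark_mbtu

      # Hypothetical annual source energy use (MBtu/yr)
      benchmark = 180.0
      prototype = 99.0
      print(f"{percent_savings(benchmark, prototype):.1f}% savings")  # 45.0% savings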

  2. Building America Research Benchmark Definition, Updated December 29, 2004

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2005-02-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, the U.S. Department of Energy (DOE) Residential Buildings Program and the National Renewable Energy Laboratory (NREL) developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. A series of user profiles, intended to represent the behavior of a "standard" set of occupants, was created for use in conjunction with the Benchmark.

  3. Building America Research Benchmark Definition, Updated December 15, 2006

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2007-01-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a "moving target".

  4. Building America Research Benchmark Definition: Updated August 15, 2007

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2007-09-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.

  5. Building America Research Benchmark Definition: Updated December 2009

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.; Engebrecht, C.

    2010-01-01

    The Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without chasing a 'moving target.'

  6. Building America Research Benchmark Definition, Updated December 19, 2008

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States)]

    2008-12-19

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams.

  7. Building America Research Benchmark Definition: Updated December 19, 2008

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2008-12-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams.

  8. Building America Research Benchmark Definition, Updated December 2009

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Engebrecht, Cheryn [National Renewable Energy Lab. (NREL), Golden, CO (United States)]

    2010-01-01

    To track progress toward aggressive multi-year, whole-house energy savings goals of 40%–70% and on-site power production of up to 30%, the U.S. Department of Energy (DOE) Residential Buildings Program and the National Renewable Energy Laboratory (NREL) developed the Building America (BA) Research Benchmark in consultation with the Building America industry teams.

  9. Updates to the Integrated Protein-Protein Interaction Benchmarks: Docking Benchmark Version 5 and Affinity Benchmark Version 2

    NARCIS (Netherlands)

    Vreven, Thom; Moal, Iain H.; Vangone, Anna; Pierce, Brian G.; Kastritis, Panagiotis L.; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M J J; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high-quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were added to the docking benchmark…

  10. Advanced fuel cycles options for LWRs and IMF benchmark definition

    International Nuclear Information System (INIS)

    Breza, J.; Darilek, P.; Necas, V.

    2008-01-01

    In this paper, different advanced nuclear fuel cycles, including thorium-based fuel and inert-matrix fuel, are examined under light water reactor conditions, especially VVER-440, and compared. The two investigated thorium-based fuels are a solely plutonium-thorium fuel and a plutonium-thorium fuel with initial uranium content. Both are used to carry and burn or transmute plutonium created in the classical UOX cycle. The inert-matrix fuel consists of plutonium and minor actinides separated from spent UOX fuel, fixed in an yttria-stabilised zirconia matrix. The article presents the analysed fuel cycles with short descriptions. The conclusion concentrates on the rate of Pu transmutation, the accumulation of Pu and minor actinides in the spent advanced thorium fuel, and the comparison to the open UOX fuel cycle. A definition of the IMF benchmark based on the presented scenario is given. (authors)

  11. The SWAP Upper Atmosphere Expansion Benchmark: Updates and Challenges

    Science.gov (United States)

    Fuller-Rowell, T. J.

    2017-12-01

    … improve the Benchmark estimates, including improved estimates of the solar drivers, extreme event analysis, neutral density data analysis for past storms, and further geospace model simulations. Part of the uncertainty in the response arises because it depends on how the magnetosphere reacts and channels the energy into the upper atmosphere.

  12. Finite element model updating of the UCF grid benchmark using measured frequency response functions

    Science.gov (United States)

    Sipple, Jesse D.; Sanayei, Masoud

    2014-05-01

    A frequency response function based finite element model updating method is presented and used to perform parameter estimation of the University of Central Florida Grid Benchmark Structure. The proposed method is used to calibrate the initial finite element model using measured frequency response functions from the undamaged, intact structure. Stiffness properties, mass properties, and boundary conditions of the initial model were estimated and updated. Model updating was then performed using measured frequency response functions from the damaged structure to detect physical structural change. Grouping and ungrouping were utilized to determine the exact location and magnitude of the damage. The fixity in rotation of two boundary condition nodes was accurately and successfully estimated. The usefulness of the proposed method for finite element model updating is shown by being able to detect, locate, and quantify change in structural properties.
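
    A minimal sketch of the core idea, reduced to a single-degree-of-freedom system: treat the measured frequency response function (FRF) as data and estimate stiffness and damping by least squares on the FRF residual. The UCF grid model is far larger; all numbers here are illustrative assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      m = 2.0                              # known mass (kg)
      w = np.linspace(1.0, 50.0, 400)      # frequency grid (rad/s)

      def frf(k, c):
          """Receptance FRF H(w) = 1 / (k - m*w^2 + i*c*w)."""
          return 1.0 / (k - m * w**2 + 1j * c * w)

      H_meas = frf(1800.0, 6.0)            # stands in for measured FRF data

      def residual(p):
          d = frf(*p) - H_meas
          return np.concatenate([d.real, d.imag])   # stack real/imag parts

      fit = least_squares(residual, x0=[1500.0, 3.0])   # initial FE estimates
      print(fit.x)                         # -> approx. [1800.0, 6.0]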

  13. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    Science.gov (United States)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the nonlinear element in the problem with a priori knowledge about its position.

  14. Preeclampsia: Updates in Pathogenesis, Definitions, and Guidelines.

    Science.gov (United States)

    Phipps, Elizabeth; Prasanna, Devika; Brima, Wunnie; Jim, Belinda

    2016-06-06

    Preeclampsia is becoming an increasingly common diagnosis in the developed world and remains a high cause of maternal and fetal morbidity and mortality in the developing world. Delay in childbearing in the developed world feeds into the risk factors associated with preeclampsia, which include older maternal age, obesity, and/or vascular diseases. Inadequate prenatal care partially explains the persistent high prevalence in the developing world. In this review, we begin by presenting the most recent concepts in the pathogenesis of preeclampsia. Upstream triggers of the well described angiogenic pathways, such as the heme oxygenase and hydrogen sulfide pathways, as well as the roles of autoantibodies, misfolded proteins, nitric oxide, and oxidative stress will be described. We also detail updated definitions, classification schema, and treatment targets of hypertensive disorders of pregnancy put forth by obstetric and hypertensive societies throughout the world. The shift has been made to view preeclampsia as a systemic disease with widespread endothelial damage and the potential to affect future cardiovascular diseases rather than a self-limited occurrence. At the very least, we now know that preeclampsia does not end with delivery of the placenta. We conclude by summarizing the latest strategies for prevention and treatment of preeclampsia. A better understanding of this entity will help in the care of at-risk women before delivery and for decades after. Copyright © 2016 by the American Society of Nephrology.

  15. Preeclampsia: Updates in Pathogenesis, Definitions, and Guidelines

    Science.gov (United States)

    Phipps, Elizabeth; Prasanna, Devika; Brima, Wunnie

    2016-01-01

    Preeclampsia is becoming an increasingly common diagnosis in the developed world and remains a high cause of maternal and fetal morbidity and mortality in the developing world. Delay in childbearing in the developed world feeds into the risk factors associated with preeclampsia, which include older maternal age, obesity, and/or vascular diseases. Inadequate prenatal care partially explains the persistent high prevalence in the developing world. In this review, we begin by presenting the most recent concepts in the pathogenesis of preeclampsia. Upstream triggers of the well described angiogenic pathways, such as the heme oxygenase and hydrogen sulfide pathways, as well as the roles of autoantibodies, misfolded proteins, nitric oxide, and oxidative stress will be described. We also detail updated definitions, classification schema, and treatment targets of hypertensive disorders of pregnancy put forth by obstetric and hypertensive societies throughout the world. The shift has been made to view preeclampsia as a systemic disease with widespread endothelial damage and the potential to affect future cardiovascular diseases rather than a self-limited occurrence. At the very least, we now know that preeclampsia does not end with delivery of the placenta. We conclude by summarizing the latest strategies for prevention and treatment of preeclampsia. A better understanding of this entity will help in the care of at-risk women before delivery and for decades after. PMID:27094609

  16. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  17. Universal MI definition update for cardiovascular disease.

    Science.gov (United States)

    White, Harvey; Thygesen, Kristian; Alpert, Joseph S; Jaffe, Allan

    2014-01-01

    The new third universal definition of myocardial infarction (MI) is based on troponin elevation together with ischemic symptoms, ischemic ECG changes, and imaging evidence. MIs are classified into five types according to whether they are spontaneous, secondary to imbalance between coronary artery blood supply and demand, related to sudden death, or related to revascularization procedures. The definition is based on a rise and/or fall in troponin levels occurring in a clinical setting. Modifications over previous definitions include adding intracoronary thrombus as a criterion, adding a new MI type (4c), and raising the cutpoint for the diagnosis of MI related to percutaneous coronary intervention to five times the 99th percentile upper reference limit, with required evidence of ischemia or angiographic complications. In clinical practice, trials, and registries, different definitions are used. There is a need for consistency with regard to the definition of MI, and the universal definition should be implemented.

  18. Definition and classification of hypertension: an update.

    Science.gov (United States)

    Giles, Thomas D; Materson, Barry J; Cohn, Jay N; Kostis, John B

    2009-11-01

    Since the publication of a paper by the American Society of Hypertension, Inc. Writing Group in 2003, some refinements have occurred in the definition of hypertension. Blood pressure is now recognized as a biomarker for hypertension, and a distinction is made between the various stages of hypertension and global cardiovascular risk. This paper discusses the logic underlying the refinements in the definition of hypertension. 2009 Wiley Periodicals, Inc.

  19. Definition labour migrant (updated) : Second, revised version

    NARCIS (Netherlands)

    Cremers, Jan

    2017-01-01

    Different definitions of who counts as a migrant are in use. The same variation appears in the regulatory frame that applies to migrants, in statistics on the stocks and flows of migrant workers, in analyses of labour mobility and cross-border recruitment, and in data sources and research…

  20. Implementing Data Definition Consistency for Emergency Department Operations Benchmarking and Research.

    Science.gov (United States)

    Yiadom, Maame Yaa A B; Scheulen, James; McWade, Conor M; Augustine, James J

    2016-07-01

    The objective was to obtain a commitment to adopt a common set of definitions for emergency department (ED) demographic, clinical process, and performance metrics among the ED Benchmarking Alliance (EDBA), ED Operations Study Group (EDOSG), and Academy of Academic Administrators of Emergency Medicine (AAAEM) by 2017. A retrospective cross-sectional analysis of available data from three ED operations benchmarking organizations supported a negotiation to use a set of common metrics with identical definitions. During a 1.5-day meeting, structured according to social change theories of information exchange, self-interest, and interdependence, common definitions were identified and negotiated using the EDBA's published definitions as a start for discussion. Methods of process analysis theory were used in the 8 weeks following the meeting to achieve official consensus on definitions. These two lists were submitted to the organizations' leadership for implementation approval. A total of 374 unique measures were identified, of which 57 (15%) were shared by at least two organizations. Fourteen (4%) were common to all three organizations. In addition to agreement on definitions for the 14 measures used by all three organizations, agreement was reached on universal definitions for 17 of the 57 measures shared by at least two organizations. The negotiation outcome was a list of 31 measures with universal definitions to be adopted by each organization by 2017. The use of negotiation, social change, and process analysis theories achieved the adoption of universal definitions among the EDBA, EDOSG, and AAAEM. This will impact performance benchmarking for nearly half of US EDs. It initiates a formal commitment to utilize standardized metrics, and it transitions consistency in reporting ED operations metrics from consensus to implementation. This work advances our ability to more accurately characterize variation in ED care delivery models, resource utilization, and performance…

  21. Updating the ACT College Readiness Benchmarks. ACT Research Report Series 2013 (6)

    Science.gov (United States)

    Allen, Jeff

    2013-01-01

    The ACT College Readiness Benchmarks are the ACT® College Readiness Assessment scores associated with a 50% chance of earning a B or higher grade in typical first-year credit-bearing college courses. The Benchmarks also correspond to an approximate 75% chance of earning a C or higher grade in these courses. There are four Benchmarks, corresponding…
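
    A sketch of how such a benchmark falls out of a fitted success model: if P(B or higher) follows a logistic curve in the test score, the Benchmark is the score where the probability crosses 50%. The coefficients below are hypothetical, not ACT's.

      import math

      a, b = -6.6, 0.3                           # hypothetical logistic fit

      def p_b_or_higher(score):
          return 1.0 / (1.0 + math.exp(-(a + b * score)))

      benchmark = -a / b                         # score where P = 0.5 exactly
      print(benchmark)                           # -> 22.0
      print(round(p_b_or_higher(benchmark), 2))  # -> 0.5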

  22. IAEA CRP on HTGR Uncertainty Analysis: Benchmark Definition and Test Cases

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom; Frederik Reitsma; Hans Gougar; Bismark Tyobeka; Kostadin Ivanov

    2012-11-01

    Uncertainty and sensitivity studies are essential elements of the reactor simulation code verification and validation process. Although several international uncertainty quantification activities have been launched in recent years in the LWR, BWR and VVER domains (e.g. the OECD/NEA BEMUSE program [1], from which the current OECD/NEA LWR Uncertainty Analysis in Modelling (UAM) benchmark [2] effort was derived), the systematic propagation of uncertainties in cross-section, manufacturing and model parameters for High Temperature Reactor (HTGR) designs has not been attempted yet. This paper summarises the scope, objectives and exercise definitions of the IAEA Coordinated Research Project (CRP) on HTGR UAM [3]. Note that no results will be included here, as the HTGR UAM benchmark was only launched formally in April 2012, and the specification is currently still under development.
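
    A minimal sketch of the kind of uncertainty propagation the CRP formalises: sample perturbed cross sections, push each sample through a model, and summarise the spread of the output. The "model" here is just an infinite-medium k_inf = nu*Sigma_f / Sigma_a; all values and uncertainties are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000
      nu_sig_f = rng.normal(0.0070, 0.0070 * 0.020, n)  # nu*Sigma_f, 2.0% rel. std
      sig_a    = rng.normal(0.0065, 0.0065 * 0.015, n)  # Sigma_a,    1.5% rel. std

      k_inf = nu_sig_f / sig_a                          # propagate each sample
      print(f"k_inf = {k_inf.mean():.4f} +/- {k_inf.std():.4f}")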

  23. Proposal for an Update of the Definition and Scope of Behavioral Medicine

    OpenAIRE

    Dekker, Joost; Stauder, Adrienne; Penedo, Frank J.

    2016-01-01

    Purpose: We aim to provide an update of the definition and scope of behavioral medicine in the Charter of ISBM, as the present version was developed more than 25 years ago. Methods: We identify issues which need clarification or updating. This leads us to propose an update of the definition and scope of behavioral medicine. Results: Issues in need of clarification or updating include the scope of behavioral medicine (biobehavioral mechanisms, clinical diagnosis and intervention, and prevention…

  24. Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model

    International Nuclear Information System (INIS)

    Taylor, G. A.; Hiergesell, R. A.

    2013-01-01

    The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produce nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously, thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow…
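
    One element of such a code-to-code check, in miniature: with identical inputs, an analytic radioactive-decay solution and a simple numerical integrator should agree closely. The nuclide and step sizes below are illustrative.

      import math

      half_life = 29.1                  # years (Sr-90 order of magnitude)
      lam = math.log(2.0) / half_life
      t_end, steps = 100.0, 100_000
      dt = t_end / steps

      n_num = 1.0
      for _ in range(steps):            # explicit Euler decay step
          n_num -= lam * n_num * dt

      n_exact = math.exp(-lam * t_end)  # analytic solution
      print(n_num, n_exact)             # nearly identical results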

  25. Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, G. A.; Hiergesell, R. A.

    2013-11-12

    The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produce nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously, thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow…

  26. Improved precision and accuracy for microarrays using updated probe set definitions

    Directory of Open Access Journals (Sweden)

    Larsson Ola

    2007-02-01

    Background: Microarrays enable high-throughput detection of transcript expression levels. Different investigators have recently introduced updated probe set definitions to more accurately map probes to our current knowledge of genes and transcripts. Results: We demonstrate that updated probe set definitions provide both better precision and accuracy in probe set estimates compared to the original Affymetrix definitions. We show that the improved precision mainly depends on the increased number of probes that are integrated into each probe set, but we also demonstrate an improvement when the same number of probes is used. Conclusion: Updated probe set definitions not only offer expression levels that are more accurately associated with genes and transcripts but also improvements in the estimated transcript expression levels. These results support the use of updated probe set definitions for analysis and meta-analysis of microarray data.
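
    A small simulation of the precision point: the probe set summary is an average over probes, so its spread shrinks roughly as 1/sqrt(n_probes) when probes are added. The noise level here is illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      true_expr, probe_sd, reps = 8.0, 0.6, 5000

      for n_probes in (4, 11, 16):
          probes = rng.normal(true_expr, probe_sd, size=(reps, n_probes))
          summaries = probes.mean(axis=1)             # probe set estimate
          print(n_probes, round(summaries.std(), 3))  # sd falls ~ 1/sqrt(n)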

  27. National Performance Benchmarks for Modern Screening Digital Mammography: Update from the Breast Cancer Surveillance Consortium.

    Science.gov (United States)

    Lehman, Constance D; Arao, Robert F; Sprague, Brian L; Lee, Janie M; Buist, Diana S M; Kerlikowske, Karla; Henderson, Louise M; Onega, Tracy; Tosteson, Anna N A; Rauscher, Garth H; Miglioretti, Diana L

    2017-04-01

    Purpose: To establish performance benchmarks for modern screening digital mammography and assess performance trends over time in U.S. community practice. Materials and Methods: This HIPAA-compliant, institutional review board-approved study measured the performance of digital screening mammography interpreted by 359 radiologists across 95 facilities in six Breast Cancer Surveillance Consortium (BCSC) registries. The study included 1 682 504 digital screening mammograms performed between 2007 and 2013 in 792 808 women. Performance measures were calculated according to the American College of Radiology Breast Imaging Reporting and Data System, 5th edition, and were compared with published benchmarks by the BCSC, the National Mammography Database, and performance recommendations by expert opinion. Benchmarks were derived from the distribution of performance metrics across radiologists and were presented as 50th (median), 10th, 25th, 75th, and 90th percentiles, with graphic presentations using smoothed curves. Results: Mean screening performance measures were as follows: abnormal interpretation rate (AIR), 11.6 (95% confidence interval [CI]: 11.5, 11.6); cancers detected per 1000 screens, or cancer detection rate (CDR), 5.1 (95% CI: 5.0, 5.2); sensitivity, 86.9% (95% CI: 86.3%, 87.6%); specificity, 88.9% (95% CI: 88.8%, 88.9%); false-negative rate per 1000 screens, 0.8 (95% CI: 0.7, 0.8); positive predictive value (PPV) 1, 4.4% (95% CI: 4.3%, 4.5%); PPV2, 25.6% (95% CI: 25.1%, 26.1%); PPV3, 28.6% (95% CI: 28.0%, 29.3%); cancers stage 0 or 1, 76.9%; minimal cancers, 57.7%; and node-negative invasive cancers, 79.4%. Recommended CDRs were achieved by 92.1% of radiologists in community practice, and 97.1% achieved recommended ranges for sensitivity. Only 59.0% of radiologists achieved recommended AIRs, and only 63.0% achieved recommended levels of specificity. Conclusion: The majority of radiologists in the BCSC surpass cancer detection recommendations for screening mammography…
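
    A sketch of how the headline metrics derive from basic screening counts. The counts below are made up, chosen only so the outputs land near the means reported above; the formulas follow standard usage.

      def screening_metrics(tp, fp, tn, fn):
          screens = tp + fp + tn + fn
          return {
              "AIR (%)":         100.0 * (tp + fp) / screens,  # abnormal interpretation rate
              "CDR per 1000":    1000.0 * tp / screens,        # cancer detection rate
              "sensitivity (%)": 100.0 * tp / (tp + fn),
              "specificity (%)": 100.0 * tn / (tn + fp),
              "PPV1 (%)":        100.0 * tp / (tp + fp),       # PPV of abnormal interpretation
          }

      print(screening_metrics(tp=510, fp=11_090, tn=88_320, fn=80))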

  28. Updates on Definition, Consequences, and Management of Obstructive Sleep Apnea

    OpenAIRE

    Park, John G.; Ramar, Kannan; Olson, Eric J.

    2011-01-01

    Obstructive sleep apnea (OSA) is a breathing disorder during sleep that has implications beyond disrupted sleep. It is increasingly recognized as an independent risk factor for cardiac, neurologic, and perioperative morbidities. Yet this disorder remains undiagnosed in a substantial portion of our population. It is imperative for all physicians to remain vigilant in identifying patients with signs and symptoms consistent with OSA. This review focuses on updates in the areas of terminology and testing, complications of untreated OSA, perioperative considerations, treatment options, and new developments in this field.

  29. Updates on definition, consequences, and management of obstructive sleep apnea.

    Science.gov (United States)

    Park, John G; Ramar, Kannan; Olson, Eric J

    2011-06-01

    Obstructive sleep apnea (OSA) is a breathing disorder during sleep that has implications beyond disrupted sleep. It is increasingly recognized as an independent risk factor for cardiac, neurologic, and perioperative morbidities. Yet this disorder remains undiagnosed in a substantial portion of our population. It is imperative for all physicians to remain vigilant in identifying patients with signs and symptoms consistent with OSA. This review focuses on updates in the areas of terminology and testing, complications of untreated OSA, perioperative considerations, treatment options, and new developments in this field.

  30. Diabetic neuropathies: update on definitions, diagnostic criteria, estimation of severity, and treatments

    DEFF Research Database (Denmark)

    Tesfaye, Solomon; Boulton, Andrew J M; Dyck, Peter J

    2010-01-01

    Preceding the joint meeting of the 19th annual Diabetic Neuropathy Study Group of the European Association for the Study of Diabetes (NEURODIAB) and the 8th International Symposium on Diabetic Neuropathy in Toronto, Canada, 13-18 October 2009, expert panels were convened to provide updates on classification, definitions, diagnostic criteria, and treatments of diabetic peripheral neuropathies (DPNs), autonomic neuropathy, painful DPNs, and structural alterations in DPNs.

  31. Planetary Protection and Mars Special Regions - A Suggestion for Updating the Definition.

    Science.gov (United States)

    Rettberg, Petra; Anesio, Alexandre M; Baker, Victor R; Baross, John A; Cady, Sherry L; Detsis, Emmanouil; Foreman, Christine M; Hauber, Ernst; Ori, Gian Gabriele; Pearce, David A; Renno, Nilton O; Ruvkun, Gary; Sattler, Birgit; Saunders, Mark P; Smith, David H; Wagner, Dirk; Westall, Frances

    2016-02-01

    We highlight the role of COSPAR and the scientific community in defining and updating the framework of planetary protection. Specifically, we focus on Mars "Special Regions," areas where strict planetary protection measures have to be applied before a spacecraft can explore them, given the existence of environmental conditions that may be conducive to terrestrial microbial growth. We outline the history of the concept of Special Regions and inform on recent developments regarding the COSPAR policy, namely, the MEPAG SR-SAG2 review and the Academies and ESF joint committee report on Mars Special Regions. We present some new issues that necessitate the update of the current policy and provide suggestions for new definitions of Special Regions. We conclude with the current major scientific questions that remain unanswered regarding Mars Special Regions.

  32. Clinical Case Definitions for Classification of Intrathoracic Tuberculosis in Children: An Update.

    Science.gov (United States)

    Graham, Stephen M; Cuevas, Luis E; Jean-Philippe, Patrick; Browning, Renee; Casenghi, Martina; Detjen, Anne K; Gnanashanmugam, Devasena; Hesseling, Anneke C; Kampmann, Beate; Mandalakas, Anna; Marais, Ben J; Schito, Marco; Spiegel, Hans M L; Starke, Jeffrey R; Worrell, Carol; Zar, Heather J

    2015-10-15

    Consensus case definitions for childhood tuberculosis have been proposed by an international expert panel, aiming to standardize the reporting of cases in research focusing on the diagnosis of intrathoracic tuberculosis in children. These definitions are intended for tuberculosis diagnostic evaluation studies of symptomatic children with clinical suspicion of intrathoracic tuberculosis, and were not intended to predefine inclusion criteria into such studies. Feedback from researchers suggested that further clarification was required and that these case definitions could be further improved. Particular concerns were the perceived complexity and overlap of some case definitions, as well as the potential exclusion of children with acute onset of symptoms or less severe disease. The updated case definitions proposed here incorporate a number of key changes that aim to reduce complexity and improve research performance, while maintaining the original focus on symptomatic children suspected of having intrathoracic tuberculosis. The changes proposed should enhance harmonized classification for intrathoracic tuberculosis disease in children across studies, resulting in greater comparability and the much-needed ability to pool study results. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  33. An updated diagnostic approach to subtype definition of vascular parkinsonism - Recommendations from an expert working group.

    Science.gov (United States)

    Rektor, Ivan; Bohnen, Nicolaas I; Korczyn, Amos D; Gryb, Viktoria; Kumar, Hrishikesh; Kramberger, Milica G; de Leeuw, Frank-Erik; Pirtošek, Zvezdan; Rektorová, Irena; Schlesinger, Ilana; Slawek, Jaroslaw; Valkovič, Peter; Veselý, Branislav

    2018-04-01

    This expert working group report proposes an updated approach to subtype definition of vascular parkinsonism (VaP) based on a review of the existing literature. The persistent lack of consensus on clear terminology and inconsistent conceptual definition of VaP formed the impetus for the current expert recommendation report. The updated diagnostic approach intends to provide a comprehensive tool for clinical practice. The preamble for this initiative is that VaP can be diagnosed in individual patients with possible prognostic and therapeutic consequences and therefore should be recognized as a clinical entity. The diagnosis of VaP is based on the presence of clinical parkinsonism, with variable motor and non-motor signs that are corroborated by clinical, anatomic or imaging findings of cerebrovascular disease. Three VaP subtypes are presented: (1) The acute or subacute post-stroke VaP subtype presents with acute or subacute onset of parkinsonism, which is typically asymmetric and responds to dopaminergic drugs; (2) The more frequent insidious onset VaP subtype presents with progressive parkinsonism with prominent postural instability, gait impairment, corticospinal, cerebellar, pseudobulbar, cognitive and urinary symptoms and poor responsiveness to dopaminergic drugs. A higher-level gait disorder occurs frequently as a dominant manifestation in the clinical spectrum of insidious onset VaP, and (3) With the emergence of molecular imaging biomarkers in clinical practice, our diagnostic approach also allows for the recognition of mixed or overlapping syndromes of VaP with Parkinson's disease or other neurodegenerative parkinsonisms. Directions for future research are also discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  34. The Oral HIV/AIDS Research Alliance: updated case definitions of oral disease endpoints.

    Science.gov (United States)

    Shiboski, C H; Patton, L L; Webster-Cyriaque, J Y; Greenspan, D; Traboulsi, R S; Ghannoum, M; Jurevic, R; Phelan, J A; Reznik, D; Greenspan, J S

    2009-07-01

    The Oral HIV/AIDS Research Alliance (OHARA) is part of the AIDS Clinical Trials Group (ACTG), the largest HIV clinical trials organization in the world. Its main objective is to investigate oral complications associated with HIV/AIDS as the epidemic is evolving, in particular, the effects of antiretrovirals on oral mucosal lesion development and associated fungal and viral pathogens. The OHARA infrastructure comprises: the Epidemiologic Research Unit (at the University of California San Francisco), the Medical Mycology Unit (at Case Western Reserve University) and the Virology/Specimen Banking Unit (at the University of North Carolina). The team includes dentists, physicians, virologists, mycologists, immunologists, epidemiologists and statisticians. Observational studies and clinical trials are being implemented at ACTG-affiliated sites in the US and resource-poor countries. Many studies have shared end-points, which include oral diseases known to be associated with HIV/AIDS measured by trained and calibrated ACTG study nurses. In preparation for future protocols, we have updated existing diagnostic criteria of the oral manifestations of HIV published in 1992 and 1993. The proposed case definitions are designed to be used in large-scale epidemiologic studies and clinical trials, in both US and resource-poor settings, where diagnoses may be made by non-dental healthcare providers. The objective of this article is to present updated case definitions for HIV-related oral diseases that will be used to measure standardized clinical end-points in OHARA studies, and that can be used by any investigator outside of OHARA/ACTG conducting clinical research that pertains to these end-points.

  35. Numerical performance and throughput benchmark for electronic structure calculations in PC-Linux systems with new architectures, updated compilers, and libraries.

    Science.gov (United States)

    Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui

    2004-01-01

    A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, Intel Math Kernel Library (MKL), GOTO numerical library, and AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as Intel Fortran compiler (ifc/efc) 7.1 and PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when utilizing the original unmodified optimization options enclosed in the software. Nevertheless, if extensive compiler tuning options are used, the speed can be further accelerated by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction sets (SSE2) are also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler performs better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and the efficiency in the CL2 mode is a further 2.6% higher than that of the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resulting performance impact suggests that the IA64 and AMD64 architectures are able to deliver significantly higher throughput than the IA32, which is consistent with the SpecFPrate2000 benchmarks.
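
    In the same spirit, a minimal floating-point throughput probe: time a large double-precision matrix multiply, which executes on whatever BLAS (MKL, ATLAS, GOTO, ACML, ...) the NumPy build is linked against. The matrix size is arbitrary.

      import time
      import numpy as np

      n = 2000
      a = np.random.rand(n, n)
      b = np.random.rand(n, n)

      t0 = time.perf_counter()
      c = a @ b                         # runs in the linked BLAS dgemm
      dt = time.perf_counter() - t0

      gflops = 2.0 * n**3 / dt / 1e9    # dgemm costs ~2*n^3 flops
      print(f"{dt:.3f} s, {gflops:.1f} GFLOP/s")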

  36. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    The term benchmarking is encountered in the implementation of total quality management (TQM, in Indonesian termed holistic quality management), because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a systematic and continuous process of measuring and comparing an organization's business processes to obtain information that can help the organization improve its performance.

  37. Benchmarking Complications Associated with Esophagectomy

    NARCIS (Netherlands)

    Low, Donald E.; Kuppusamy, Madhan Kumar; Alderson, Derek; Cecconello, Ivan; Chang, Andrew C.; Darling, Gail; Davies, Andrew; D'journo, Xavier Benoit; Gisbertz, Suzanne S.; Griffin, S. Michael; Hardwick, Richard; Hoelscher, Arnulf; Hofstetter, Wayne; Jobe, Blair; Kitagawa, Yuko; Law, Simon; Mariette, Christophe; Maynard, Nick; Morse, Christopher R.; Nafteux, Philippe; Pera, Manuel; Pramesh, C. S.; Puig, Sonia; Reynolds, John V.; Schroeder, Wolfgang; Smithers, Mark; Wijnhoven, B. P. L.

    2017-01-01

    The aim is to utilize a standardized dataset with specific definitions to prospectively collect international data and provide a benchmark for complications and outcomes associated with esophagectomy. Outcome reporting in oncologic surgery has suffered from the lack of a standardized system for reporting…

  38. Intra-abdominal hypertension and the abdominal compartment syndrome: updated consensus definitions and clinical practice guidelines from the World Society of the Abdominal Compartment Syndrome

    OpenAIRE

    Kirkpatrick, Andrew W; Roberts, Derek J; De Waele, Jan; Jaeschke, Roman; Malbrain, Manu LNG; De Keulenaer, Bart; Duchesne, Juan; Bjorck, Martin; Leppaniemi, Ari; Ejike, Janeth C; Sugrue, Michael; Cheatham, Michael; Ivatury, Rao; Ball, Chad G; Reintam Blaser, Annika

    2013-01-01

    Purpose: To update the World Society of the Abdominal Compartment Syndrome (WSACS) consensus definitions and management statements relating to intra-abdominal hypertension (IAH) and the abdominal compartment syndrome (ACS). Methods: We conducted systematic or structured reviews to identify relevant studies relating to IAH or ACS. Updated consensus definitions and management statements were then derived using a modified Delphi method and the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) approach…

  39. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection.

  40. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area…

  41. Incidence and prevalence of elite male cricket injuries using updated consensus definitions

    Directory of Open Access Journals (Sweden)

    Orchard JW

    2016-12-01

    Background: T20 (Twenty20, or 20-over) cricket has emerged in the last decade as the most popular form of cricket in terms of spectator attendances. International consensus cricket injury definitions, first published in 2005, were updated in 2016 to better reflect the rise to prominence of T20 cricket. Methods: Injury incidence and prevalence rates were calculated using the new international methods and units for elite senior male Australian cricketers over the past decade (season 2006-2007 to season 2015-2016 inclusive). Results: Over the past 10 seasons, average match injury incidence, for match time-loss injuries, was 155 injuries/1,000 days of play, with the highest daily rates in 50-over cricket, followed by 20-over cricket and First-Class matches. Annual injury incidence was 64 injuries/100 players per season, and average annual injury prevalence was 12.5% (although fast bowlers averaged 20.6%, much higher than other positions). The most common injury was the hamstring strain (seasonal incidence 8.7 injuries/100 players per season). The most prevalent injury was lumbar stress fractures (1.9% of players unavailable at all times owing to these injuries, which represents 15% of all missed playing time). Discussion: The hamstring strain has emerged from being one of the many common injuries in elite cricket a decade ago to being clearly the most common injury in the sport at the elite level, presumably in association with increased T20 cricket. Lumbar stress fractures in fast bowlers are still the most prevalent injury in the sport of cricket at the elite level, although these injuries are more associated with high workloads arising from the longer forms of the game. Domestic and international matches have very similar match injury incidence rates across the formats, but injury prevalence is higher in international players as…
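
    A sketch of the updated consensus units quoted above, as executable arithmetic; the inputs are illustrative, not the actual Cricket Australia data.

      def match_injury_incidence(injuries, days_of_play):
          """Match time-loss injuries per 1,000 days of play."""
          return 1000.0 * injuries / days_of_play

      def annual_injury_incidence(injuries, players):
          """Injuries per 100 players per season."""
          return 100.0 * injuries / players

      def injury_prevalence(days_missed, days_available):
          """Share of playing time lost to injury (%)."""
          return 100.0 * days_missed / days_available

      print(match_injury_incidence(31, 200))    # -> 155.0
      print(annual_injury_incidence(64, 100))   # -> 64.0
      print(injury_prevalence(250, 2000))       # -> 12.5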

  42. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators…

  43. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional…

  44. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part II: Benchmark comparisons of PUMA core parameters with MCNP5 and improvements due to a simple cell heterogeneity correction

    International Nuclear Information System (INIS)

    Grant, C.; Mollerach, R.; Leszczynski, F.; Serra, O.; Marconi, J.; Fink, J.

    2006-01-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the previous ten years. Atucha-II is a 745 MWe nuclear station of German (Siemens) design located in Argentina, moderated and cooled with heavy water. It has a pressure vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. For the reactor physics area, a revision and update of reactor physics calculation methods and models was recently carried out, covering cell, supercell (control rod) and core calculations. This paper presents benchmark comparisons of core parameters of a slightly idealized model of the Atucha-I core obtained with the PUMA reactor code against MCNP5. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II, and has some experimental data available. To validate the new models, benchmark comparisons of k-effective, channel power and axial power distributions obtained with PUMA and MCNP5 have been performed. In addition, a simple cell heterogeneity correction recently introduced in PUMA is presented, which significantly improves the agreement of calculated channel powers with MCNP5. To complete the validation, the calculation of some of the critical configurations of the Atucha-I reactor measured during the experiments performed at first criticality is also presented. (authors)
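
    A sketch of the comparison metric itself: per-channel relative power differences between two codes and their RMS. The channel powers below are made-up stand-ins for PUMA and MCNP5 output.

      import numpy as np

      puma  = np.array([1.02, 0.98, 1.10, 0.95])  # normalised channel powers
      mcnp5 = np.array([1.01, 0.99, 1.08, 0.96])  # (hypothetical values)

      rel = (puma - mcnp5) / mcnp5
      print(100 * rel)                            # per-channel difference (%)
      print(100 * np.sqrt((rel**2).mean()))       # RMS difference (%)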

  45. Update of the case definitions for population-based surveillance of periodontitis.

    Science.gov (United States)

    Eke, Paul I; Page, Roy C; Wei, Liang; Thornton-Evans, Gina; Genco, Robert J

    2012-12-01

    This report adds a new definition for mild periodontitis that allows for better descriptions of the overall prevalence of periodontitis in populations. In 2007, the Centers for Disease Control and Prevention in partnership with the American Academy of Periodontology developed and reported standard case definitions for surveillance of moderate and severe periodontitis based on measurements of probing depth (PD) and clinical attachment loss (AL) at interproximal sites. However, combined cases of moderate and severe periodontitis are insufficient to determine the total prevalence of periodontitis in populations. The authors proposed a definition for mild periodontitis as ≥ 2 interproximal sites with AL ≥ 3 mm and ≥ 2 interproximal sites with PD ≥ 4 mm (not on the same tooth) or one site with PD ≥ 5 mm. The effect of the proposed definition on the total burden of periodontitis was assessed in a convenience sample of 456 adults ≥ 35 years old and compared with other previously reported definitions for similar categories of periodontitis. Addition of mild periodontitis increases the total prevalence of periodontitis by ≈31% in this sample when compared with the prevalence of severe and moderate disease. Total periodontitis using the case definitions in this study should be based on the sum of mild, moderate, and severe periodontitis.
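
    The proposed mild-periodontitis rule, rendered as executable logic. The reading of the compound condition (AL criterion required, PD criterion satisfied either by two sites on different teeth or by one deep site) and the data layout are interpretive assumptions.

      def is_mild_periodontitis(sites):
          """sites: iterable of (tooth_id, AL_mm, PD_mm), interproximal sites only."""
          al_teeth = {t for t, al, _ in sites if al >= 3}   # teeth with AL >= 3 mm
          pd_teeth = {t for t, _, pd in sites if pd >= 4}   # teeth with PD >= 4 mm
          deep_site = any(pd >= 5 for _, _, pd in sites)    # one site with PD >= 5 mm
          return len(al_teeth) >= 2 and (len(pd_teeth) >= 2 or deep_site)

      sites = [("16m", 3, 4), ("24d", 4, 4)]   # two qualifying teeth (hypothetical)
      print(is_mild_periodontitis(sites))      # -> True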

  46. Positive Behavior Support: A Proposal for Updating and Refining the Definition

    Science.gov (United States)

    Kincaid, Don; Dunlap, Glen; Kern, Lee; Lane, Kathleen Lynne; Bambara, Linda M.; Brown, Fredda; Fox, Lise; Knoster, Timothy P.

    2016-01-01

    Positive behavior support (PBS) has been a dynamic and growing enterprise for more than 25 years. During this period, PBS has expanded applications across a wide range of populations and multiple levels of implementation. As a result, there have been understandable inconsistencies and confusion regarding the definition of PBS. In this essay, we…

  47. Childhood leukodystrophies: A literature review of updates on new definitions, classification, diagnostic approach and management.

    Science.gov (United States)

    Ashrafi, Mahmoud Reza; Tavasoli, Ali Reza

    2017-05-01

    Childhood leukodystrophies are a growing category of neurological disorders in pediatric neurology practice. With the help of advanced genetic studies such as whole exome sequencing (WES) and whole genome sequencing (WGS), the list of childhood heritable white matter disorders has grown to more than one hundred disorders. During the last three decades, the basic concepts and definitions, classification, diagnostic approach and medical management of these disorders have changed considerably. Pattern recognition based on brain magnetic resonance imaging (MRI) has played an important role in this process. We reviewed the latest Global Leukodystrophy Initiative (GLIA) expert opinions on definition, new classification, diagnostic approach and medical management, including emerging treatments, for pediatric leukodystrophies. Copyright © 2017 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  48. Remission Time after Rituximab Treatment for Autoimmune Bullous Disease: A Proposed Update Definition.

    Science.gov (United States)

    Iranzo, Pilar; Pigem, Ramon; Giavedoni, Priscila; Alsina-Gibert, Mercè

    2015-01-01

    A therapeutic endpoint is a very important tool to evaluate response in clinical trials. In 2005, a consensus statement identified two late endpoints of disease activity in pemphigus: complete remission off therapy and complete remission on therapy, both definitions applying to patients without lesions for at least 2 months. The same period of time was considered for partial remission off/on therapy. These definitions were later applied to bullous pemphigoid and are considered in most studies on autoimmune bullous disease. These endpoints were established for different adjuvant agents, but at that moment, rituximab was not considered. Rituximab is known for the long duration of its effect, and in most studies relapses have been reported later than 6 months after treatment. In our opinion, time to remission after rituximab treatment should be redefined. © 2015 S. Karger AG, Basel.

  49. An updated definition of stroke for the 21st century: a statement for healthcare professionals from the American Heart Association/American Stroke Association.

    Science.gov (United States)

    Sacco, Ralph L; Kasner, Scott E; Broderick, Joseph P; Caplan, Louis R; Connors, J J Buddy; Culebras, Antonio; Elkind, Mitchell S V; George, Mary G; Hamdan, Allen D; Higashida, Randall T; Hoh, Brian L; Janis, L Scott; Kase, Carlos S; Kleindorfer, Dawn O; Lee, Jin-Moo; Moseley, Michael E; Peterson, Eric D; Turan, Tanya N; Valderrama, Amy L; Vinters, Harry V

    2013-07-01

    Despite the global impact and advances in understanding the pathophysiology of cerebrovascular diseases, the term "stroke" is not consistently defined in clinical practice, in clinical research, or in assessments of the public health. The classic definition is mainly clinical and does not account for advances in science and technology. The Stroke Council of the American Heart Association/American Stroke Association convened a writing group to develop an expert consensus document for an updated definition of stroke for the 21st century. Central nervous system infarction is defined as brain, spinal cord, or retinal cell death attributable to ischemia, based on neuropathological, neuroimaging, and/or clinical evidence of permanent injury. Central nervous system infarction occurs over a clinical spectrum: Ischemic stroke specifically refers to central nervous system infarction accompanied by overt symptoms, while silent infarction by definition causes no known symptoms. Stroke also broadly includes intracerebral hemorrhage and subarachnoid hemorrhage. The updated definition of stroke incorporates clinical and tissue criteria and can be incorporated into practice, research, and assessments of the public health.

  50. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1996 revision

    Energy Technology Data Exchange (ETDEWEB)

    Suter, G.W. II [Oak Ridge National Lab., TN (United States); Tsao, C.L. [Duke Univ., Durham, NC (United States). School of the Environment

    1996-06-01

    This report presents potential screening benchmarks for the protection of aquatic life from contaminants in water. Because there is no single guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation, along with the data used to calculate the benchmarks and the sources of those data. It compares the benchmarks and discusses their relative conservatism and utility. In this revision, benchmark values are updated where appropriate, new benchmark values are added, secondary sources are replaced by primary sources, and more complete documentation of the sources and derivation of all values is provided.

  11. NEA Benchmarks

    International Nuclear Information System (INIS)

    D'Auria, F.

    2008-01-01

    simulations and consequently to improve the understanding of safety issues and the design/operating conditions of nuclear reactors, definitely putting the basis for advancing the nuclear technology.

  12. The definition, diagnostic testing, and management of chronic inducible urticarias - The EAACI/GA(2) LEN/EDF/UNEV consensus recommendations 2016 update and revision.

    Science.gov (United States)

    Magerl, M; Altrichter, S; Borzova, E; Giménez-Arnau, A; Grattan, C E H; Lawlor, F; Mathelier-Fusade, P; Meshkova, R Y; Zuberbier, T; Metz, M; Maurer, M

    2016-06-01

    These recommendations for the definition, diagnosis and management of chronic inducible urticaria (CIndU) extend, revise and update our previous consensus report on physical urticarias and cholinergic urticaria (Allergy, 2009). The aim of these recommendations is to improve the diagnosis and management of patients with CIndU. Our recommendations acknowledge the latest changes in our understanding of CIndU, and the available therapeutic options, as well as the development of novel diagnostic tools. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    Full Text Available The article analyses the evolution and possible applications of benchmarking in the telecommunication sphere. It examines the essence of benchmarking by generalising different scientists' approaches to defining the notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology, the main factors that determine an operator's success in the modern market economy, and the mechanism and component stages of benchmarking carried out by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies in the changing composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by level of conduct (branch, inter-branch and international benchmarking); by relation to participation (competitive and joint); and with respect to the enterprise environment (internal and external).

  14. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  15. Coding update of the SMFM definition of low risk for cesarean delivery from ICD-9-CM to ICD-10-CM.

    Science.gov (United States)

    Armstrong, Joanne; McDermott, Patricia; Saade, George R; Srinivas, Sindhu K

    2017-07-01

    In 2015, the Society for Maternal-Fetal Medicine developed a low risk for cesarean delivery definition based on administrative claims-based diagnosis codes described by the International Classification of Diseases, Ninth Revision, Clinical Modification. The Society for Maternal-Fetal Medicine definition is a clinical enrichment of 2 available measures from the Joint Commission and the Agency for Healthcare Research and Quality. The Society for Maternal-Fetal Medicine measure excludes diagnosis codes that represent clinically relevant risk factors that are absolute or relative contraindications to vaginal birth while retaining diagnosis codes such as labor disorders that are discretionary risk factors for cesarean delivery. The introduction of the International Statistical Classification of Diseases, 10th Revision, Clinical Modification in October 2015 expanded the number of available diagnosis codes and enabled a greater depth and breadth of clinical description. These coding improvements further enhance the clinical validity of the Society for Maternal-Fetal Medicine definition and its potential utility in tracking progress toward the goal of safely lowering the US cesarean delivery rate. This report updates the Society for Maternal-Fetal Medicine definition of low risk for cesarean delivery using International Statistical Classification of Diseases, 10th Revision, Clinical Modification coding. Copyright © 2017. Published by Elsevier Inc.

  16. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments is planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using materials properties for stainless steel and aluminum.

  17. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  18. Defining core elements and outstanding practice in Nutritional Science through collaborative benchmarking.

    Science.gov (United States)

    Samman, Samir; McCarthur, Jennifer O; Peat, Mary

    2006-01-01

    Benchmarking has been adopted by educational institutions as a potentially sensitive tool for improving learning and teaching. To date there has been limited application of benchmarking methodology in the Discipline of Nutritional Science. The aim of this survey was to define core elements and outstanding practice in Nutritional Science through collaborative benchmarking. Questionnaires that aimed to establish proposed core elements for Nutritional Science, and inquired about definitions of "good" and "outstanding" practice, were posted to named representatives at eight Australian universities. Seven respondents identified core elements that included knowledge of nutrient metabolism and requirements, food production and processing, modern biomedical techniques applicable to understanding nutrition, and social and environmental issues as related to Nutritional Science. Four of the eight institutions that agreed to participate in the present survey identified the integration of teaching with research as an indicator of outstanding practice. Nutritional Science is a rapidly evolving discipline. Further and more comprehensive surveys are required to consolidate and update the definition of the discipline, and to identify the optimal way of teaching it. Global ideas and specific regional requirements also need to be considered.

  19. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution’s competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in a higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the author’s experience of active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  20. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  1. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as impo...

  2. Post-contrast acute kidney injury - Part 1: Definition, clinical features, incidence, role of contrast medium and risk factors : Recommendations for updated ESUR Contrast Medium Safety Committee guidelines.

    Science.gov (United States)

    van der Molen, Aart J; Reimer, Peter; Dekkers, Ilona A; Bongartz, Georg; Bellin, Marie-France; Bertolotto, Michele; Clement, Olivier; Heinz-Peer, Gertraud; Stacul, Fulvio; Webb, Judith A W; Thomsen, Henrik S

    2018-02-09

    The Contrast Media Safety Committee (CMSC) of the European Society of Urogenital Radiology (ESUR) has updated its 2011 guidelines on the prevention of post-contrast acute kidney injury (PC-AKI). The results of the literature review and the recommendations based on it, which were used to prepare the new guidelines, are presented in two papers. AREAS COVERED IN PART 1: Topics reviewed include the terminology used, the best way to measure eGFR, the definition of PC-AKI, and the risk factors for PC-AKI, including whether the risk with intravenous and intra-arterial contrast medium differs. • PC-AKI is the preferred term for renal function deterioration after contrast medium. • PC-AKI has many possible causes. • The risk of AKI caused by intravascular contrast medium has been overstated. • Important patient risk factors for PC-AKI are CKD and dehydration.

  3. Updated standardized endpoint definitions for transcatheter aortic valve implantation: The Valve Academic Research Consortium-2 consensus document

    NARCIS (Netherlands)

    A.P. Kappetein (Arie Pieter); S.J. Head (Stuart); P. Généreux (Philippe); N. Piazza (Nicolo); N.M. van Mieghem (Nicolas); E.H. Blackstone (Eugene); T.G. Brott (Thomas); D.J. Cohen (David J.); D.E. Cutlip (Donald); G.A. van Es (Gerrit Anne); R.T. Hahn (Rebecca); A.J. Kirtane (Ajay); M. Krucoff (Mitchell); S. Kodali (Susheel); M.J. Mack (Michael); R. Mehran (Roxana); J. Rodés-Cabau (Josep); P. Vranckx (Pascal); J.G. Webb (John); S.W. Windecker (Stephan); P.W.J.C. Serruys (Patrick); M.B. Leon (Martin)

    2012-01-01

    Objectives: The aim of the current Valvular Academic Research Consortium (VARC)-2 initiative was to revisit the selection and definitions of transcatheter aortic valve implantation (TAVI) clinical endpoints to make them more suitable to the present and future needs of clinical trials.

  4. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  5. Benchmark risk analysis models

    NARCIS (Netherlands)

    Ale BJM; Golbach GAM; Goos D; Ham K; Janssen LAM; Shield SR; LSO

    2002-01-01

    A so-called benchmark exercise was initiated in which the results of five sets of tools available in the Netherlands would be compared. In the benchmark exercise a quantified risk analysis was performed on a hypothetical, non-existent hazardous establishment located at a randomly chosen location in

  6. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  7. Internet Based Benchmarking

    OpenAIRE

    Bogetoft, Peter; Nielsen, Kurt

    2002-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as non-parametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore alternative improvement strategies. Implementations of both a parametric and a non-parametric model are presented.
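    As an illustration of the non-parametric (DEA) side of such a benchmarking service, the sketch below computes an input-oriented CCR efficiency score with scipy's linear-programming routine. The unit data, the two-input/one-output setup and the function name are invented for the example; this is not code from the paper.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def dea_efficiency(X, Y, k):
        """Input-oriented CCR efficiency of unit k.
        X: (units, inputs), Y: (units, outputs); theta = 1 means 'on the frontier'."""
        n, m = X.shape
        s = Y.shape[1]
        # Decision variables: z = [theta, lambda_1, ..., lambda_n]
        c = np.zeros(1 + n)
        c[0] = 1.0                                    # minimise theta
        # inputs:  sum_j lambda_j x_ij - theta x_ik <= 0
        A_in = np.hstack([-X[k].reshape(m, 1), X.T])
        # outputs: -sum_j lambda_j y_rj <= -y_rk
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([np.zeros(m), -Y[k]]),
                      bounds=[(0, None)] * (1 + n))
        return res.fun

    # Five hypothetical units, two inputs (staff, cost), one output (service volume).
    X = np.array([[20.0, 300], [30, 200], [40, 100], [20, 200], [10, 400]])
    Y = np.array([[100.0], [100], [100], [100], [100]])
    for k in range(len(X)):
        print(f"unit {k}: efficiency = {dea_efficiency(X, Y, k):.3f}")
    ```

    An interactive service of the kind the abstract describes would re-solve this program against alternative frontiers (e.g. subsets of peer units) to show the user different improvement strategies.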

  8. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  9. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  10. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms

    DEFF Research Database (Denmark)

    Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander

    2017-01-01

    visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different...
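    The core measurement such a suite performs can be sketched as follows: exact brute-force search provides the ground truth, and each algorithm under test is scored by recall@k and query throughput. This is a minimal stand-in, not the actual ANN-Benchmarks harness; data sizes and function names are illustrative.

    ```python
    # Minimal k-NN benchmark loop: recall@k against brute-force ground truth,
    # plus queries-per-second for whatever search function is plugged in.
    import time
    import numpy as np

    def brute_force_knn(data, q, k):
        d = np.linalg.norm(data - q, axis=1)
        return np.argsort(d)[:k]

    def evaluate(search_fn, data, queries, k=10):
        truth = [set(brute_force_knn(data, q, k)) for q in queries]
        t0 = time.perf_counter()
        results = [search_fn(data, q, k) for q in queries]
        elapsed = time.perf_counter() - t0
        recall = np.mean([len(truth[i] & set(r)) / k
                          for i, r in enumerate(results)])
        return recall, len(queries) / elapsed  # (recall@k, QPS)

    rng = np.random.default_rng(0)
    data = rng.normal(size=(10_000, 32)).astype(np.float32)
    queries = rng.normal(size=(100, 32)).astype(np.float32)
    # The exact searcher trivially achieves recall 1.0; an approximate index
    # (e.g. from annoy, faiss or nmslib) would trade recall for higher QPS.
    recall, qps = evaluate(brute_force_knn, data, queries)
    print(f"recall@10 = {recall:.3f}, throughput = {qps:.0f} queries/s")
    ```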

  11. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by the Shielding Laboratory in the Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are newly presented, in addition to the twenty-one problems already proposed, for evaluating the calculational algorithms and accuracy of computer codes based on the discrete ordinates and Monte Carlo methods, and for evaluating the nuclear data used in the codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  12. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
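    The first-tier screen described above amounts to a simple comparison of measured concentrations against NOAEL-based benchmarks; the sketch below illustrates the logic with invented benchmark and concentration values (not taken from the report).

    ```python
    # Tier-1 screen: any analyte whose measured concentration exceeds its
    # NOAEL-based benchmark is retained as a contaminant of potential
    # concern (COPC) for the baseline risk assessment.
    water_benchmarks_mg_per_L = {   # hypothetical NOAEL-based benchmarks
        "cadmium": 0.005,
        "mercury": 0.00013,
        "zinc": 0.8,
    }
    measured_mg_per_L = {"cadmium": 0.002, "mercury": 0.0004, "zinc": 0.3}

    copcs = [chem for chem, conc in measured_mg_per_L.items()
             if conc > water_benchmarks_mg_per_L[chem]]
    print("Retain for baseline risk assessment:", copcs)  # -> ['mercury']
    ```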

  13. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings....

  14. Draft Mercury Aquatic Wildlife Benchmarks for Great Salt Lake ...

    Science.gov (United States)

    This document describes EPA Region 8's rationale for selecting aquatic wildlife dietary and tissue mercury benchmarks for use in interpreting available data collected from the Great Salt Lake and surrounding wetlands. EPA Region 8 has conducted a literature review to update and refine the aquatic wildlife dietary and tissue benchmarks for mercury that may be used for data assessment until water quality criteria can be derived. The document describes how aquatic wildlife dietary and tissue benchmarks for mercury have been compiled from existing literature sources and the approach for how they will be used to evaluate whether the Great Salt Lake and surrounding wetlands meet their designated use for aquatic wildlife.

  15. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  16. [Definition, classification, clinical diagnosis and prognosis of fibromyalgia syndrome : Updated guidelines 2017 and overview of systematic review articles].

    Science.gov (United States)

    Eich, W; Bär, K-J; Bernateck, M; Burgmer, M; Dexl, C; Petzke, F; Sommer, C; Winkelmann, A; Häuser, W

    2017-06-01

    The regular update of the guidelines on fibromyalgia syndrome, AWMF number 145/004, was scheduled for April 2017. The guidelines were developed by 13 scientific societies and 2 patient self-help organizations coordinated by the German Pain Society. Working groups (n = 8) with a total of 42 members were formed, balanced with respect to gender, medical expertise, position in the medical or scientific hierarchy and potential conflicts of interest. A systematic search of the literature from December 2010 to May 2016 was performed in the Cochrane library, MEDLINE, PsycINFO and Scopus databases. Levels of evidence were assigned according to the classification system of the Oxford Centre for Evidence-Based Medicine version 2009. The strength of recommendations was achieved by multiple step formalized procedures to reach a consensus. The guidelines were reviewed and approved by the board of directors of the societies engaged in the development of the guidelines. The clinical diagnosis of fibromyalgia syndrome can be established by the American College of Rheumatology (ACR) 1990 classification criteria (with examination of tender points) or without the examination of tender points by the modified preliminary diagnostic ACR 2010 or 2011 criteria.

  17. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs; that is, prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout
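    The two-factor decomposition used here is the standard growth-accounting identity: writing output as employment times output per worker, growth rates approximately add (a sketch of the arithmetic, not a formula from the report itself):

    \[ Y = E \cdot \frac{Y}{E} \quad\Longrightarrow\quad g_Y \approx g_E + g_{Y/E} \]

    With ageing limiting the growth of potential employment g_E, sustaining potential output growth g_Y therefore requires a higher structural rate of labour productivity growth g_{Y/E}.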

  18. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  19. Reliable B cell epitope predictions: impacts of method development and improved benchmarking

    DEFF Research Database (Denmark)

    Kringelum, Jens Vindahl; Lundegaard, Claus; Lund, Ole

    2012-01-01

    biomedical applications such as rational vaccine design, development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource intensive making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping...... of B-cell epitopes has been moderate. Several issues regarding the evaluation data sets may however have led to the performance values being underestimated: Rarely, all potential epitopes have been mapped on an antigen, and antibodies are generally raised against the antigen in a given biological...... evaluation data set improved from 0.712 to 0.727. Our results thus demonstrate that given proper benchmark definitions, B-cell epitope prediction methods achieve highly significant predictive performances suggesting these tools to be a powerful asset in rational epitope discovery. The updated version...

  20. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  1. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions, founders human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...

  2. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
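    For orientation, the solver HPCG times can be sketched in a few lines: a conjugate gradient iteration preconditioned with symmetric Gauss-Seidel, M = (D+L)D^-1(D+U). The toy version below runs on a small 1-D Laplacian; it is illustrative only and is not the official HPCG code, which works on a 3-D grid with an additive Schwarz decomposition across processes.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve_triangular

    n = 100
    A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    D = sp.diags(A.diagonal())
    DL = sp.tril(A, format="csr")   # D + L (lower triangle incl. diagonal)
    DU = sp.triu(A, format="csr")   # D + U (upper triangle incl. diagonal)

    def sgs(r):
        """Apply M^-1 r for M = (D+L) D^-1 (D+U): forward then backward sweep."""
        w = spsolve_triangular(DL, r, lower=True)
        return spsolve_triangular(DU, D @ w, lower=False)

    # Preconditioned conjugate gradient iteration.
    x = np.zeros(n)
    r = b - A @ x
    z = sgs(r)
    p = z.copy()
    for it in range(500):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < 1e-8:
            break
        z_new = sgs(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    print(f"converged in {it + 1} iterations, "
          f"residual = {np.linalg.norm(b - A @ x):.2e}")
    ```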

  3. An updated bleeding model to predict the risk of post-procedure bleeding among patients undergoing percutaneous coronary intervention: a report using an expanded bleeding definition from the National Cardiovascular Data Registry CathPCI Registry.

    Science.gov (United States)

    Rao, Sunil V; McCoy, Lisa A; Spertus, John A; Krone, Ronald J; Singh, Mandeep; Fitzgerald, Susan; Peterson, Eric D

    2013-09-01

    This study sought to develop a model that predicts bleeding complications using an expanded bleeding definition among patients undergoing percutaneous coronary intervention (PCI) in contemporary clinical practice. New knowledge about the importance of periprocedural bleeding combined with techniques to mitigate its occurrence and the inclusion of new data in the updated CathPCI Registry data collection forms encouraged us to develop a new bleeding definition and risk model to improve the monitoring and safety of PCI. Detailed clinical data from 1,043,759 PCI procedures at 1,142 centers from February 2008 through April 2011 participating in the CathPCI Registry were used to identify factors associated with major bleeding complications occurring within 72 h post-PCI. Risk models (full and simplified risk scores) were developed in 80% of the cohort and validated in the remaining 20%. Model discrimination and calibration were assessed in the overall population and among the following pre-specified patient subgroups: females, those older than 70 years of age, those with diabetes mellitus, those with ST-segment elevation myocardial infarction, and those who did not undergo in-hospital coronary artery bypass grafting. Using the updated definition, the rate of bleeding was 5.8%. The full model included 31 variables, and the risk score had 10. The full model had similar discriminatory value across pre-specified subgroups and was well calibrated across the PCI risk spectrum. The updated bleeding definition identifies important post-PCI bleeding events. Risk models that use this expanded definition provide accurate estimates of post-PCI bleeding risk, thereby better informing clinical decision making and facilitating risk-adjusted provider feedback to support quality improvement. Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
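    To make the modeling approach concrete, the sketch below shows how a logistic risk model of this general kind converts patient characteristics into a predicted bleeding probability. The variables and coefficients are invented for illustration; they are not the published CathPCI model.

    ```python
    # Schematic logistic risk model: linear predictor on the log-odds scale,
    # mapped to a probability with the inverse logit. All numbers hypothetical.
    import math

    coef = {
        "intercept": -4.2,
        "age_per_10yr_over_50": 0.35,
        "female": 0.45,
        "stemi": 0.80,
        "egfr_lt_60": 0.40,
    }

    def bleeding_risk(age, female, stemi, egfr_lt_60):
        lp = (coef["intercept"]
              + coef["age_per_10yr_over_50"] * max(age - 50, 0) / 10
              + coef["female"] * female
              + coef["stemi"] * stemi
              + coef["egfr_lt_60"] * egfr_lt_60)
        return 1 / (1 + math.exp(-lp))   # inverse logit

    print(f"{bleeding_risk(78, female=1, stemi=1, egfr_lt_60=1):.1%}")
    ```

    A bedside risk score, like the 10-variable one in the study, typically rounds such coefficients to small integer points and maps the point total to a probability via a lookup table.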

  4. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As proof of concept of the benchmark, the paper covers the application

  5. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic lies in the meaning of the concept of firm efficiency: firm efficiency means revealed performance (how well the firm performs in the actual market environment), given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality or work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for managers to continuously improve their firm's efficiency and effectiveness, and their need to know the success factors and competitiveness determinants, determine what performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent due to operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark performance.

  6. Benchmarking of workplace performance

    NARCIS (Netherlands)

    van der Voordt, Theo; Jensen, Per Anker

    2017-01-01

    This paper aims to present a process model of value adding corporate real estate and facilities management and to discuss which indicators can be used to measure and benchmark workplace performance.

    In order to add value to the organisation, the work environment has to provide value for

  7. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...

  8. Algebraic Multigrid Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    2017-08-01

    AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the BoomerAMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL and is very similar to the AMG2013 benchmark with additional optimizations. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem with a 27-point stencil, which can be scaled up and is designed to solve a very large problem. A second problem simulates a time dependent problem, in which successively various smaller systems are solved.
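    A problem setup of this kind can be sketched with the open-source pyamg library standing in for the benchmark's hypre/BoomerAMG solver: a Laplace-type operator with a 27-point stencil on a structured 3-D grid, solved by classical algebraic multigrid. The stencil values (26 at the centre, -1 elsewhere) follow a common 27-point Laplacian convention and, like the tiny grid size, are assumptions made for illustration.

    ```python
    import numpy as np
    import pyamg

    # 27-point Laplacian stencil: 26 at the centre, -1 at all 26 neighbours.
    S = -np.ones((3, 3, 3))
    S[1, 1, 1] = 26.0
    A = pyamg.gallery.stencil_grid(S, (20, 20, 20), format="csr")

    ml = pyamg.ruge_stuben_solver(A)   # classical (Ruge-Stueben) AMG hierarchy
    b = np.ones(A.shape[0])
    residuals = []
    x = ml.solve(b, tol=1e-8, residuals=residuals)
    print(ml)                          # hierarchy / complexity summary
    print(f"{len(residuals) - 1} cycles, "
          f"final residual = {np.linalg.norm(b - A @ x):.2e}")
    ```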

  9. Mask Waves Benchmark

    Science.gov (United States)

    2007-10-01

    See Figure 22 for a comparison of measured waves, linear waves, and non-linear Stokes waves. Looking at the selected 16 runs from the trough-to-peak... See Figure 23 for the benchmark data set: the relation of obtained frequency versus desired frequency is almost completely linear. The slight variation at...

  10. Benchmarking Cloud Resources for HEP

    Science.gov (United States)

    Alef, M.; Cordeiro, C.; De Salvo, A.; Di Girolamo, A.; Field, L.; Giordano, D.; Guerri, M.; Schiavi, F. C.; Wiebalck, A.

    2017-10-01

    In a commercial cloud environment, exhaustive resource profiling is beneficial to cope with the intrinsic variability of the virtualised environment, allowing to promptly identify performance degradation. In the context of its commercial cloud initiatives, CERN has acquired extensive experience in benchmarking commercial cloud resources. Ultimately, this activity provides information on the actual delivered performance of invoiced resources. In this report we discuss the experience acquired and the results collected using several fast benchmark applications adopted by the HEP community. These benchmarks span from open-source benchmarks to specific user applications and synthetic benchmarks. The workflow put in place to collect and analyse performance metrics is also described.

  11. The International Association for the Study of Pain definition of pain: as valid in 2018 as in 1979, but in need of regularly updated footnotes

    Directory of Open Access Journals (Sweden)

    Rolf-Detlef Treede

    2018-04-01

    Full Text Available Milton Cohen, John Quintner, and Simon van Rysewyk proposed a revision of the 1979 IASP definition of pain. This commentary summarizes why this proposal is useful for guiding the assessment of pain, but not its definition.

  12. Benchmarking Cloud Storage Systems

    OpenAIRE

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  13. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    This report is based on the survey "Industrial Companies in Denmark - Today and Tomorrow", section IV: Supply Chain Management - Practices and Performance, question number 4.9 on performance assessment. To our knowledge, this survey is unique, as we have not been able to find results from any compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...

  14. WIMS Library updating

    International Nuclear Information System (INIS)

    Ravnik, M.; Trkov, A.; Holubar, A.

    1992-01-01

    At the end of 1990 the WIMS Library Update Project (WLUP) was initiated at the International Atomic Energy Agency. The project was organized as an international research project, coordinated at the J. Stefan Institute. Up to now, 22 laboratories from 19 countries have joined the project. Phase 1 of the project, which included WIMS input optimization for five experimental benchmark lattices, has been completed. The work presented in this paper also describes the results of Phase 2 of the project, in which the cross sections based on the ENDF/B-IV evaluated nuclear data library have been processed. (author)

  15. WIPP Benchmark calculations with the large strain SPECTROM codes

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, G.D.; DeVries, K.L. [RE/SPEC, Inc., Rapid City, SD (United States)

    1995-08-01

    This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) Problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers including ten clay seams of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. The simplified heated problem does, however, provide a calculational check case where the small strain-formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large strain codes compare favorably with results from other codes used to solve the problems.

  16. A benchmarking study

    Directory of Open Access Journals (Sweden)

    H. Groessing

    2015-02-01

    Full Text Available A benchmark study for permeability measurement is presented. In the past, studies by other research groups which focused on the reproducibility of 1D-permeability measurements showed high standard deviations of the obtained permeability values (25%), even though a defined test rig with the required specifications was used. Within this study, the reproducibility of capacitive in-plane permeability testing system measurements was benchmarked by comparing results from two research sites using this technology. The reproducibility was compared using a glass fibre woven textile and a carbon fibre non-crimped fabric (NCF). These two material types were taken into consideration due to the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. In order to determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents with five repetitions each. It was found that the stability and reproducibility of the presented in-plane permeability measurement system is very good in the case of the glass fibre woven textiles. This is true for the comparison of the repetition measurements as well as for the comparison between the two different permeameters. These positive results were confirmed by a comparison to permeability values of the same textile obtained with an older-generation permeameter applying the same measurement technology. It was also shown that a correct determination of the grammage and the material density is crucial for a correct correlation of measured permeability values and fibre volume contents.

  17. Computational shielding benchmarks

    International Nuclear Information System (INIS)

    The American Nuclear Society Standards Committee 6.2.1 is engaged in the documentation of radiation transport problems and their solutions. The primary objective of this effort is to test computational methods used within the international shielding community. Dissemination of benchmarks will, it is hoped, accomplish several goals: (1) Focus attention on problems whose solutions represent state-of-the-art methodology for representative transport problems of generic interest; (2) Specification of standard problems makes comparisons of alternate computational methods, including use of approximate vs. ''exact'' computer codes, more meaningful; (3) Comparison with experimental data may suggest improvements in computer codes and/or associated data sets; (4) Test reliability of new methods as they are introduced for the solution of specific problems; (5) Verify user ability to apply a given computational method; and (6) Verify status of a computer program being converted for use on a different computer (e.g., CDC vs IBM) or facility

  18. Shielding benchmark test

    International Nuclear Information System (INIS)

    Kawai, Masayoshi

    1984-01-01

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments on neutron transmission through an iron block, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made with the shielding analysis code system RADHEAT-V4 developed at JAERI. The calculated results are compared with the measured data. As for the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The d-t neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily using the revised JENDL data for fusion neutronics calculations. (author)

  19. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  20. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1987-01-01

    This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described

  1. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  2. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors facilitate applying benchmark dose (BMD) methods to EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...
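    To illustrate the benchmark dose idea itself (this is not BMDS): fit a dose-response model to quantal data and invert it for the dose at which the extra risk over background equals the benchmark response (BMR), here 10%. The data and the logistic model choice are invented for the sketch.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit, brentq

    dose = np.array([0.0, 10, 50, 150, 400])               # hypothetical doses
    incidence = np.array([0.02, 0.05, 0.15, 0.40, 0.80])   # observed fractions

    def model(d, g, a, b):
        """Background g plus a log-logistic increase with dose."""
        return g + (1 - g) / (1 + np.exp(-(a + b * np.log(d + 1e-9))))

    (g, a, b), _ = curve_fit(model, dose, incidence, p0=[0.02, -5.0, 1.0],
                             bounds=([0, -np.inf, 0], [0.5, np.inf, np.inf]))

    def extra_risk(d):
        return (model(d, g, a, b) - g) / (1 - g)

    bmr = 0.10                                 # 10% benchmark response
    bmd = brentq(lambda d: extra_risk(d) - bmr, 1e-6, dose.max())
    print(f"BMD at {bmr:.0%} extra risk ~ {bmd:.1f} dose units")
    ```

    BMDS additionally fits a suite of candidate models, compares them by fit statistics, and reports the BMDL (a lower confidence bound on the BMD) as the point of departure.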

  3. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    Order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport...... is generally not advised. Several other ways in which benchmarking and policy can support one another are identified in the analysis. This leads to a range of recommended initiatives to exploit the benefits of benchmarking in transport while avoiding some of the lurking pitfalls and dead ends...

  4. Benchmarking: a method for continuous quality improvement in health.

    Science.gov (United States)

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  5. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.

    2013-01-01

    A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor” Task 2.2 “Lightweight structural design”. The present document describes the results of the comparison simulation runs that were performed by the partners involved within Task 2.2 of the InnWind.Eu project. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade...

  6. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption

  7. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever-tightening reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
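
    The metrics listed here (local memory and disk bandwidth, network bandwidth, integer and floating point performance) can be approximated with very small probes run identically on bare metal and inside the guest. The following sketch is only a minimal illustration of two of those probes, not the authors' benchmark suite; the buffer size and loop count are arbitrary choices.

```python
import time

def mem_bandwidth_mb_s(n_bytes=64 * 1024 * 1024):
    """Rough memory-copy bandwidth: time one bulk copy of a large buffer."""
    src = bytearray(n_bytes)
    t0 = time.perf_counter()
    dst = bytes(src)                      # forces a full pass through memory
    dt = time.perf_counter() - t0
    return len(dst) / dt / 1e6

def flops_mflop_s(n=2_000_000):
    """Rough floating-point rate: repeated multiply-adds in a tight loop."""
    x, acc = 1.0000001, 0.0
    t0 = time.perf_counter()
    for _ in range(n):
        acc = acc * x + 1.0               # one multiply and one add per pass
    dt = time.perf_counter() - t0
    return 2 * n / dt / 1e6

if __name__ == "__main__":
    # Run the same probes on the physical host and in the VM, then compare.
    print(f"memory copy   : {mem_bandwidth_mb_s():9.1f} MB/s")
    print(f"float mul-add : {flops_mflop_s():9.1f} MFLOP/s")
```

    Comparing the two sets of numbers gives a first-order estimate of the virtualization overhead per metric.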

  8. Benchmarking of energy time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
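
    In the sense used by this report, benchmarking means adjusting a higher-frequency series so it agrees with more accurate lower-frequency benchmarks. As a minimal, hypothetical illustration, the sketch below forces monthly values to match an annual benchmark total by pro-rata scaling; this is the simplest possible method, not the EIA procedure or the first-differencing approach the report recommends.

```python
def prorate(monthly, annual_benchmarks):
    """Scale each year's 12 monthly values so they sum to that year's benchmark.

    monthly           -- monthly observations, length 12 * n_years
    annual_benchmarks -- n_years totals from the more accurate second source
    """
    adjusted = []
    for year, bench in enumerate(annual_benchmarks):
        block = monthly[12 * year: 12 * (year + 1)]
        factor = bench / sum(block)   # ratio of benchmark to indicator total
        adjusted.extend(v * factor for v in block)
    return adjusted

# Hypothetical example: monthly deliveries summing to 126 against a benchmark of 130.
months = [10, 9, 11, 10, 10, 11, 12, 11, 10, 10, 11, 11]
print(prorate(months, [130.0]))
```

    Pro-rata scaling can introduce artificial steps at year boundaries, which is one motivation for the smoother difference-based techniques surveyed in the report.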

  9. Net Zero Water Update

    Science.gov (United States)

    2011-05-12

    ... opened to collect supplemental data from candidate installations (15 Mar 11); supplemental data received from Army Commands (16-31 Mar 11); DOE ... hierarchy (reduction, re-purpose, recycling & composting, energy recovery, and disposal) • Complied with Net Zero definitions • Demonstrated ...

  10. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with ''typical'' and ''best-practice'' benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
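
    At its core, a tool of this kind ranks a building's energy use intensity (EUI) within a peer distribution. A minimal sketch of that ranking follows; the peer values and percentile convention are hypothetical, and this is not the Cal-Arch implementation.

```python
def eui_percentile(building_eui, peer_euis):
    """Percentile rank of a building's EUI within a peer group.

    building_eui -- annual energy use intensity of the building of interest
    peer_euis    -- EUIs of comparable buildings (same type, climate region)
    """
    below = sum(1 for e in peer_euis if e < building_eui)
    return 100.0 * below / len(peer_euis)

# Hypothetical peer group of similar buildings, in kBtu/sqft-yr.
peers = [38, 45, 51, 55, 60, 62, 68, 73, 80, 95]
print(f"Building ranks at the {eui_percentile(63, peers):.0f}th percentile")
```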

  11. Benchmarking & European sustainable transport policies

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2003-01-01

    The paper provides an independent ... to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One ... way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency.

  12. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides results of high benchmark quality (4 to 5 places of accuracy)

  13. Editor's Choice : acute cardiovascular care association position paper on intensive cardiovascular care units : an update on their definition, structure, organisation and function

    OpenAIRE

    Bonnefoy-Cudraz, Eric; Bueno, Hector; Casella, Gianni; De Maria, Elia; Fitzsimons, Donna; Halvorsen, Sigrun; Hassager, Christian; Iakobishvili, Zaza; Magdy, Ahmed; Marandi, Toomas; Mimoso, Jorge; Parkhomenko, Alexander; Price, Susana; Rokyta, Richard; Roubille, Francois

    2018-01-01

    Abstract: Acute cardiovascular care has progressed considerably since the last position paper was published 10 years ago. It is now a well-defined, complex field with demanding multidisciplinary teamworking. The Acute Cardiovascular Care Association has provided this update of the 2005 position paper on acute cardiovascular care organisation, using a multinational working group. The patient population has changed, and intensive cardiovascular care units now manage a large range of conditions ...

  14. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  15. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work...

  16. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input

  17. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performance through our results, especially in the thick target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of the physical models used in our codes. Thereafter, a scheme for a radioactive waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  18. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  19. Editor's Choice - Acute Cardiovascular Care Association Position Paper on Intensive Cardiovascular Care Units: An update on their definition, structure, organisation and function.

    Science.gov (United States)

    Bonnefoy-Cudraz, Eric; Bueno, Hector; Casella, Gianni; De Maria, Elia; Fitzsimons, Donna; Halvorsen, Sigrun; Hassager, Christian; Iakobishvili, Zaza; Magdy, Ahmed; Marandi, Toomas; Mimoso, Jorge; Parkhomenko, Alexander; Price, Susana; Rokyta, Richard; Roubille, Francois; Serpytis, Pranas; Shimony, Avi; Stepinska, Janina; Tint, Diana; Trendafilova, Elina; Tubaro, Marco; Vrints, Christiaan; Walker, David; Zahger, Doron; Zima, Endre; Zukermann, Robert; Lettino, Maddalena

    2018-02-01

    Acute cardiovascular care has progressed considerably since the last position paper was published 10 years ago. It is now a well-defined, complex field with demanding multidisciplinary teamworking. The Acute Cardiovascular Care Association has provided this update of the 2005 position paper on acute cardiovascular care organisation, using a multinational working group. The patient population has changed, and intensive cardiovascular care units now manage a large range of conditions, from those simply requiring specialised monitoring to critical cardiovascular diseases with associated multi-organ failure. To better describe the intensive cardiovascular care unit case mix, acuity of care has been divided into three levels, which then define the functional organisation of intensive cardiovascular care units. For each level of intensive cardiovascular care unit, this document presents the aims of the units, the recommended management structure, the optimal number of staff, the need for specially trained cardiologists and cardiovascular nurses, the desired equipment and architecture, and the interaction with other departments in the hospital and other intensive cardiovascular care units in the region/area. This update emphasises cardiologist training, referring to the recently updated Acute Cardiovascular Care Association core curriculum on acute cardiovascular care. The training of nurses in acute cardiovascular care is additionally addressed. Intensive cardiovascular care unit expertise is not limited to within the unit's geographical boundaries, extending to different specialties and subspecialties of cardiology and other specialties in order to optimally manage the wide scope of acute cardiovascular conditions in frequently highly complex patients. This position paper therefore addresses the need for the inclusion of acute cardiac care and intensive cardiovascular care units within a hospital network, linking university medical centres, large community hospitals, and smaller

  20. Swiss electricity grid - Benchmarking pilot project

    International Nuclear Information System (INIS)

    2001-01-01

    This article is a short version of the ENET number 210369. This report for the Swiss Federal Office of Energy (SFOE) describes a benchmarking pilot project carried out as a second phase in the development of a formula for the regulation of an open electricity market in Switzerland. It follows on from an initial phase involving the definition of a 'blue print' and a basic concept. The aims of the pilot project - to check out the practicability of the concept - are discussed. The collection of anonymised data for the benchmarking model from over 30 electricity utilities operating on all 7 Swiss grid levels and their integration in the three areas 'Technology', 'Grid Costs' and 'Capital Invested' are discussed in detail. In particular, confidentiality and data protection aspects are looked at. The methods used in the analysis of the data are described and the results of an efficiency analysis of various utilities are presented. The report is concluded with a listing of questions concerning data collection and analysis as well as operational and capital costs that are still to be answered

  1. Towards benchmarking an in-stream water quality model

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available A method of model evaluation is presented which utilises a comparison with a benchmark model. The proposed benchmarking concept is one that can be applied to many hydrological models but, in this instance, is implemented in the context of an in-stream water quality model. The benchmark model is defined in such a way that it is easily implemented within the framework of the test model, i.e. the approach relies on two applications of the same model code rather than the application of two separate model codes. This is illustrated using two case studies from the UK, the Rivers Aire and Ouse, with the objective of simulating a water quality classification, general quality assessment (GQA, which is based on dissolved oxygen, biochemical oxygen demand and ammonium. Comparisons between the benchmark and test models are made based on GQA, as well as a step-wise assessment against the components required in its derivation. The benchmarking process yields a great deal of important information about the performance of the test model and raises issues about a priori definition of the assessment criteria.
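
    The central idea, two runs of the same model code in which the benchmark run uses a deliberately naive configuration, can be sketched as follows. The toy first-order decay model, the parameter values and the skill score are all hypothetical stand-ins for the actual in-stream water quality model.

```python
import statistics

def simulate(upstream, decay_rate):
    """Toy in-stream model: first-order loss between two monitoring points.

    The same code serves as both benchmark and test model; only the
    parameterisation differs.
    """
    return [c * (1.0 - decay_rate) for c in upstream]

upstream = [8.1, 7.9, 8.4, 7.6, 8.0]   # e.g. BOD entering the reach (mg/l)
observed = [6.3, 6.1, 6.6, 5.8, 6.2]   # measured at the downstream end

benchmark = simulate(upstream, decay_rate=0.0)   # naive: no in-stream decay
test = simulate(upstream, decay_rate=0.22)       # calibrated test model

def rmse(pred, obs):
    return statistics.mean((p - o) ** 2 for p, o in zip(pred, obs)) ** 0.5

# The test model only adds value if it clearly beats the benchmark run.
print(f"benchmark RMSE: {rmse(benchmark, observed):.3f}")
print(f"test RMSE:      {rmse(test, observed):.3f}")
```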

  2. Benchmarking and Its Relevance to the Library and Information Sector. Interim Findings of "Best Practice Benchmarking in the Library and Information Sector," a British Library Research and Development Department Project.

    Science.gov (United States)

    Kinnell, Margaret; Garrod, Penny

    This British Library Research and Development Department study assesses current activities and attitudes toward quality management in library and information services (LIS) in the academic sector as well as the commercial/industrial sector. Definitions and types of benchmarking are described, and the relevance of benchmarking to LIS is evaluated.…

  3. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  4. FLOWTRAN-TF code benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.P. (ed.)

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss Of Coolant Accident (LOCA). A description of the code is given by Flach et al. (1990). This report provides benchmarking results for the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit (Smith et al., 1990a; 1990b). Individual constitutive relations are benchmarked in Sections 2 through 5 while in Sections 6 and 7 integral code benchmarking results are presented. An overall assessment of FLOWTRAN-TF for its intended use in computing the ECS power limit completes the document.

  5. Circular Updates

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Circular Updates are periodic sequentially numbered instructions to debriefing staff and observers informing them of changes or additions to scientific and specimen...

  6. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  7. Wind turbine reliability database update.

    Energy Technology Data Exchange (ETDEWEB)

    Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.

    2009-03-01

    This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.

  8. Benchmark of a Cubieboard cluster

    Science.gov (United States)

    Schnepf, M. J.; Gudu, D.; Rische, B.; Fischer, M.; Jung, C.; Hardt, M.

    2015-12-01

    We built a cluster of ARM-based Cubieboard2 boards, each of which has a SATA interface to connect a hard drive. This cluster was set up as a storage system using Ceph and as a compute cluster for high energy physics analyses. To study the performance in these applications, we ran two benchmarks on this cluster. We also checked the energy efficiency of the cluster using the same benchmarks. Performance and energy efficiency of our cluster were compared with a network-attached storage (NAS), and with a desktop PC.

  9. Proceedings from the 1998 Occupational Health Conference: Benchmarking for Excellence

    Science.gov (United States)

    Hoffler, G. Wyckliffe (Editor); O'Donnell, Michele D. (Editor)

    1999-01-01

    The theme of the 1998 NASA Occupational Health Conference was "Benchmarking for Excellence." Conference participants included NASA and contractor Occupational Health professionals, as well as speakers from NASA, other Federal agencies and private companies. Addressing the Conference theme, speakers described new concepts and techniques for corporate benchmarking. They also identified practices used by NASA, other Federal agencies, and by award winning programs in private industry. A two-part Professional Development Course on workplace toxicology and indoor air quality was conducted a day before the Conference. A program manager with the International Space Station Office provided an update on station activities and an expert delivered practical advice on both oral and written communications. A keynote address on the medical aspects of space walking by a retired NASA astronaut highlighted the Conference. Discipline breakout sessions, poster presentations, and a KSC tour complemented the Conference agenda.

  10. The EAACI/GA²LEN/EDF/WAO Guideline for the Definition, Classification, Diagnosis and Management of Urticaria. The 2017 Revision and Update

    DEFF Research Database (Denmark)

    Zuberbier, T; Aberer, W; Asero, R

    2018-01-01

    ... of the European Academy of Allergology and Clinical Immunology (EAACI), the EU-founded network of excellence, the Global Allergy and Asthma European Network (GA²LEN), the European Dermatology Forum (EDF), and the World Allergy Organization (WAO) with the participation of 48 delegates of 42 national ... urticaria and other chronic forms of urticaria are disabling, impair quality of life, and affect performance at work and school. This guideline covers the definition and classification of urticaria, taking into account the recent progress in identifying its causes, eliciting factors and pathomechanisms...

  11. Parameter Curation for Benchmark Queries

    NARCIS (Netherlands)

    Gubichev, Andrey; Boncz, Peter

    2014-01-01

    In this paper we consider the problem of generating parameters for benchmark queries so these have stable behavior despite being executed on datasets (real-world or synthetic) with skewed data distributions and value correlations. We show that uniform random sampling of the substitution parameters

  12. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criteria and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  13. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  14. Benchmarking the performance of daily temperature homogenisation algorithms

    Science.gov (United States)

    Killick, Rachel; Bailey, Trevor; Jolliffe, Ian; Willett, Kate

    2016-04-01

    This work focuses on the results of a recent daily benchmarking study carried out to compare different temperature homogenisation algorithms; it also gives an overview of the creation of the realistic synthetic data used in the study. Four different regions in the United States were chosen and up to four different inhomogeneity scenarios were explored for each region. These benchmark datasets are beneficial as, unlike in the real world, the underlying truth is known a priori, thus allowing definite statements to be made about the performance of the algorithms run on them. Performance can be assessed both in terms of the ability of algorithms to detect changepoints and to correctly remove the inhomogeneities the changepoints create. The focus is on daily data, thus presenting new challenges in comparison to monthly data and pushing the boundaries of previous studies. The aims of this work are to evaluate and compare the performance of various homogenisation algorithms, aiding their improvement and enabling a quantification of the uncertainty remaining in the data even after they have been homogenised. An important outcome is also to evaluate how realistic the created benchmarks are. It is essential that any weaknesses in the benchmarks are taken into account when judging algorithm performance against them. This information will in turn help to improve future versions of benchmarks. Here I present a summary of this work, including an overview of the benchmark creation, the algorithms run, and details of the results of this study. This work formed a 3-year PhD and feeds into the larger project of the International Surface Temperature Initiative, which is working on a wider scale and with monthly instead of daily data.
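
    A benchmark of this kind rests on inserting known inhomogeneities into otherwise clean synthetic data, so that detection and correction can be scored against the truth. The sketch below generates a daily temperature-like series with one known step change and runs a deliberately naive detector; every setting is illustrative and none of them are the study's actual benchmark parameters.

```python
import math
import random

def synthetic_daily_series(n_days=3650, break_day=1800, shift=-0.6, seed=1):
    """Daily temperature-like series with one known step inhomogeneity."""
    rng = random.Random(seed)
    out = []
    for day in range(n_days):
        seasonal = 10.0 * math.sin(2.0 * math.pi * day / 365.25)  # annual cycle
        noise = rng.gauss(0.0, 2.0)                               # weather noise
        jump = shift if day >= break_day else 0.0                 # e.g. station move
        out.append(15.0 + seasonal + noise + jump)
    return out

series = synthetic_daily_series()

# Naive changepoint detector: largest mean shift between adjacent 1-year windows
# (full-year windows so the seasonal cycle cancels out of the comparison).
best = max(range(365, len(series) - 365),
           key=lambda d: abs(sum(series[d:d + 365]) / 365.0
                             - sum(series[d - 365:d]) / 365.0))
print("true break at day 1800, detected near day", best)
```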

  15. Updated Outcome and Analysis of Tumor Response in Mobile Spine and Sacral Chordoma Treated With Definitive High-Dose Photon/Proton Radiation Therapy

    International Nuclear Information System (INIS)

    Kabolizadeh, Peyman; Chen, Yen-Lin; Liebsch, Norbert; Hornicek, Francis J.; Schwab, Joseph H.; Choy, Edwin; Rosenthal, Daniel I.; Niemierko, Andrzej; DeLaney, Thomas F.

    2017-01-01

    Purpose: Treatment of spine and sacral chordoma generally involves surgical resection, usually in conjunction with radiation therapy. In certain circumstances where resection may result in significant neurologic or organ dysfunction, patients can be treated definitively with radiation therapy alone. Herein, we report the outcome and the assessment of tumor response to definitive radiation therapy. Methods and Materials: A retrospective analysis was performed on 40 patients with unresected chordoma treated with photon/proton radiation therapy. Nineteen patients had complete sets of imaging scans. The soft tissue and bone compartments of the tumor were defined separately. Tumor response was evaluated by the modified Response Evaluation Criteria in Solid Tumors (RECIST) and volumetric analysis. Results: With a median follow-up time of 50.3 months, the rates of 5-year local control, overall survival, disease-specific survival, and distant failure were 85.4%, 81.9%, 89.4%, and 20.2%, respectively. Eighty-four computed tomographic and magnetic resonance imaging scans were reviewed. Among the 19 patients, only 4 local failures occurred, and the median tumor dose was 77.4 GyRBE. Analysis at a median follow-up time of 18 months showed significant volumetric reduction of the total target volume (TTV) and the soft tissue target volume (STTV) within the first 24 months after treatment initiation, followed by further gradual reduction throughout the rest of the follow-up period. The median maximum percentage volumetric regressions of TTV and STTV were 43.2% and 70.4%, respectively. There was only a small reduction in bone target volume over time. In comparison with the modified RECIST, volumetric analysis was more reliable, more reproducible, and could help in measuring minimal changes in the tumor volume. Conclusion: These results continue to support the use of high-dose definitive radiation therapy for selected patients with unresected spine and sacral chordomas

  16. Updated Outcome and Analysis of Tumor Response in Mobile Spine and Sacral Chordoma Treated With Definitive High-Dose Photon/Proton Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Kabolizadeh, Peyman, E-mail: peyman.kabolizadeh@beaumont.org [Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts (United States); Chen, Yen-Lin; Liebsch, Norbert [Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts (United States); Hornicek, Francis J.; Schwab, Joseph H. [Department of Orthopedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts (United States); Choy, Edwin [Department of Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts (United States); Rosenthal, Daniel I. [Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts (United States); Niemierko, Andrzej; DeLaney, Thomas F. [Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts (United States)

    2017-02-01

    Purpose: Treatment of spine and sacral chordoma generally involves surgical resection, usually in conjunction with radiation therapy. In certain circumstances where resection may result in significant neurologic or organ dysfunction, patients can be treated definitively with radiation therapy alone. Herein, we report the outcome and the assessment of tumor response to definitive radiation therapy. Methods and Materials: A retrospective analysis was performed on 40 patients with unresected chordoma treated with photon/proton radiation therapy. Nineteen patients had complete sets of imaging scans. The soft tissue and bone compartments of the tumor were defined separately. Tumor response was evaluated by the modified Response Evaluation Criteria in Solid Tumors (RECIST) and volumetric analysis. Results: With a median follow-up time of 50.3 months, the rates of 5-year local control, overall survival, disease-specific survival, and distant failure were 85.4%, 81.9%, 89.4%, and 20.2%, respectively. Eighty-four computed tomographic and magnetic resonance imaging scans were reviewed. Among the 19 patients, only 4 local failures occurred, and the median tumor dose was 77.4 GyRBE. Analysis at a median follow-up time of 18 months showed significant volumetric reduction of the total target volume (TTV) and the soft tissue target volume (STTV) within the first 24 months after treatment initiation, followed by further gradual reduction throughout the rest of the follow-up period. The median maximum percentage volumetric regressions of TTV and STTV were 43.2% and 70.4%, respectively. There was only a small reduction in bone target volume over time. In comparison with the modified RECIST, volumetric analysis was more reliable, more reproducible, and could help in measuring minimal changes in the tumor volume. Conclusion: These results continue to support the use of high-dose definitive radiation therapy for selected patients with unresected spine and sacral chordomas

  17. Hypokalemia: a clinical update

    Directory of Open Access Journals (Sweden)

    Efstratios Kardalas

    2018-04-01

    Full Text Available Hypokalemia is a common electrolyte disturbance, especially in hospitalized patients. It can have various causes, including endocrine ones. Sometimes, hypokalemia requires urgent medical attention. The aim of this review is to present updated information regarding: (1) the definition and prevalence of hypokalemia, (2) the physiology of potassium homeostasis, (3) the various causes leading to hypokalemia, (4) the diagnostic steps for the assessment of hypokalemia and (5) the appropriate treatment of hypokalemia depending on the cause. Practical algorithms for the optimal diagnostic, treatment and follow-up strategy are presented, while an individualized approach is emphasized.

  18. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  19. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC

  20. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Full Text Available Tourism development has an irreplaceable role in regional policy of almost all countries. This is due to its undeniable benefits for the local population with regards to the economic, social and environmental sphere. Tourist destinations compete for visitors at tourism market and subsequently get into a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and final strategies is a key factor of competitiveness. Even though the tourism sector is not the typical field where the benchmarking methods are widely used, such approaches could be successfully applied. The paper focuses on key phases of the benchmarking process, which lie in the search for suitable referencing partners. The partners are consequently selected to meet general requirements to ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria in the frame of the international environment. Hence, it makes it possible to find strengths and weaknesses of selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  1. Benchmarking of radiological departments. Starting point for successful process optimization

    International Nuclear Information System (INIS)

    Busch, Hans-Peter

    2010-01-01

    Continuous optimization of the process of organization and medical treatment is part of the successful management of radiological departments. The focus of this optimization can be cost units such as CT and MRI or the radiological parts of total patient treatment. Key performance indicators for process optimization are cost- effectiveness, service quality and quality of medical treatment. The potential for improvements can be seen by comparison (benchmark) with other hospitals and radiological departments. Clear definitions of key data and criteria are absolutely necessary for comparability. There is currently little information in the literature regarding the methodology and application of benchmarks especially from the perspective of radiological departments and case-based lump sums, even though benchmarking has frequently been applied to radiological departments by hospital management. The aim of this article is to describe and discuss systematic benchmarking as an effective starting point for successful process optimization. This includes the description of the methodology, recommendation of key parameters and discussion of the potential for cost-effectiveness analysis. The main focus of this article is cost-effectiveness (efficiency and effectiveness) with respect to cost units and treatment processes. (orig.)

  2. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  3. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    In the paper the specification of the WWER-1000 Burnup Credit Benchmark first phase (depletion calculations) is given. The second phase - criticality calculations for the WWER-1000 fuel pin cell - will be given after the evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field (Author)

  4. A Framework for Urban Transport Benchmarking

    OpenAIRE

    Theuns Henning; Mohammed Dalil Essakali; Jung Eun Oh

    2011-01-01

    This report summarizes the findings of a study aimed at exploring key elements of a benchmarking framework for urban transport. Unlike many industries where benchmarking has proven to be successful and straightforward, the multitude of the actors and interactions involved in urban transport systems may make benchmarking a complex endeavor. It was therefore important to analyze what has bee...

  5. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  6. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  7. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and dose estimates were also compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  8. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and dose estimates were also compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  9. Closed-Loop Neuromorphic Benchmarks

    Directory of Open Access Journals (Sweden)

    Terrence C Stewart

    2015-12-01

    Full Text Available Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware’s future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of minimal simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled.
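
    A minimal closed-loop example of an error-driven rule of the kind evaluated here: a point mass must hold a target position under an unknown constant external force, and a learned bias term adapts to cancel that force. The dynamics, gains and learning rate below are hypothetical; this is a stand-in for the paper's benchmarks, not their implementation.

```python
def run_trial(learn_rate, steps=2000, dt=0.01, unknown_force=1.5):
    """Point mass tracking the origin; returns accumulated absolute error."""
    x, v, w, total_err = 1.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        err = 0.0 - x                  # target is the origin
        u = 4.0 * err - 2.0 * v + w    # PD control plus learned bias
        w += learn_rate * err * dt     # error-driven weight update
        a = u + unknown_force          # dynamics include the unknown force
        v += a * dt
        x += v * dt
        total_err += abs(err) * dt
    return total_err

# The learned bias converges toward cancelling the unknown force,
# so accumulated tracking error should drop when learning is enabled.
print("no learning:  ", round(run_trial(0.0), 3))
print("with learning:", round(run_trial(2.0), 3))
```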

  10. The percentage of core involved by cancer is the best predictor of insignificant prostate cancer, according to an updated definition (tumor volume up to 2.5 cm3): analysis of a cohort of 210 consecutive patients with low-risk disease.

    Science.gov (United States)

    Antonelli, Alessandro; Vismara Fugini, Andrea; Tardanico, Regina; Giovanessi, Luca; Zambolin, Tiziano; Simeone, Claudio

    2014-01-01

    To find out which factors could predict the diagnosis of insignificant prostate cancer (ins-PCa) according to a recently updated definition (overall tumor volume up to 2.5 cm3; final Gleason score ≤6; organ-confined disease) on a prostatic biopsy specimen. This was a retrospective analysis of 210 patients undergoing radical prostatectomy for a cT1c prostate neoplasm with a biopsy specimen Gleason score of ≤6. A logistic regression model was used to assess the differences in the distribution of some possibly predictive factors between the ins-PCa patients, according to the updated definition, and the remaining patients. By applying an updated definition of ins-PCa, the prevalence of this condition increased from 13.3% to 49.5% (104 of 210 patients). The univariate analysis showed a statistically different distribution of the following factors: prostate-specific antigen density, prostate volume, number of cancer-involved cores, and maximum percentage of core involvement by cancer. At the multivariable analysis, the maximum percentage of involvement of the core retained its relevance (27.0% in ins-PCa patients and 43.8% in the remaining patients; hazard ratio, 0.972; P = .046), and a 20% cutoff was detected. In a cohort of patients with PCa cT1c and a biopsy specimen Gleason score of ≤6, the ins-PCa rate, according to the updated definition, is close to 50%, and the percentage of cancer involvement of the core is the single factor that best predicts this diagnosis. Copyright © 2014 Elsevier Inc. All rights reserved.
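
    For illustration only, the reported univariate effect can be written as a logistic curve in the single best predictor. The sketch below uses the published ratio of 0.972 per percentage point of core involvement as the slope; the intercept is a made-up placeholder chosen to reproduce the direction of the effect, so the printed probabilities are not those of the study.

```python
import math

def p_insignificant(max_core_involvement_pct):
    """Toy logistic curve for the probability of insignificant prostate cancer."""
    slope = math.log(0.972)            # from the reported ratio per percentage point
    z = 1.0 + slope * max_core_involvement_pct   # intercept 1.0 is hypothetical
    return 1.0 / (1.0 + math.exp(-z))

# The analysis above suggested ~20% core involvement as a practical cutoff.
for pct in (5, 10, 20, 40, 60):
    side = "<= 20% cutoff" if pct <= 20 else "> 20% cutoff"
    print(f"{pct:2d}% involvement: p(ins-PCa) = {p_insignificant(pct):.2f} ({side})")
```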

  11. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Thesis (dr.ing.) - Høgskolen i Telemark / Norges teknisk-naturvitenskapelige universitet Since the first publication on benchmarking in 1989 by Robert C. Camp of “Benchmarking: The search for Industry Best Practices that Lead to Superior Performance”, the improvement technique benchmarking has been established as an important tool in the process focused manufacturing or production environment. The use of benchmarking has expanded to other types of industry. Benchmarking has past t...

  12. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  13. NEACRP thermal fission product benchmark

    International Nuclear Information System (INIS)

    Halsall, M.J.; Taubman, C.J.

    1989-09-01

    The objective of the thermal fission product benchmark was to compare the range of fission product data in use at the present time. A simple homogeneous problem was set with 200 atoms H/1 atom U235, to be burnt up to 1000 days and then decay for 1000 days. The problem was repeated with 200 atoms H/1 atom Pu239, 20 atoms H/1 atom U235 and 20 atoms H/1 atom Pu239. There were ten participants and the submissions received are detailed in this report. (author)
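
    Restated as data (an editor's illustration in Python; the numbers come from the abstract, the structure is ours), the benchmark comprises four cases run through the same burnup/decay schedule:

        # The four NEACRP thermal fission product benchmark cases.
        cases = [
            {"fissile": "U235",  "H_atoms_per_fissile_atom": 200},
            {"fissile": "Pu239", "H_atoms_per_fissile_atom": 200},
            {"fissile": "U235",  "H_atoms_per_fissile_atom": 20},
            {"fissile": "Pu239", "H_atoms_per_fissile_atom": 20},
        ]
        schedule = {"burnup_days": 1000, "decay_days": 1000}
        for c in cases:
            print(f'{c["fissile"]}: {c["H_atoms_per_fissile_atom"]} H atoms per '
                  f'fissile atom, {schedule["burnup_days"]} d burnup then '
                  f'{schedule["decay_days"]} d decay')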

  14. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues; (2) we now have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) they expressed a desire to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA.

  15. Lesson learned from the SARNET wall condensation benchmarks

    International Nuclear Information System (INIS)

    Ambrosini, W.; Forgione, N.; Merli, F.; Oriolo, F.; Paci, S.; Kljenak, I.; Kostka, P.; Vyskocil, L.; Travis, J.R.; Lehmkuhl, J.; Kelm, S.; Chin, Y.-S.; Bucci, M.

    2014-01-01

    Highlights: • The results of the benchmarking activity on wall condensation are reported. • The work was performed in the frame of SARNET. • General modelling techniques for condensation are discussed. • Results of the University of Pisa and of other benchmark participants are discussed. • The lessons learned are drawn. - Abstract: The prediction of condensation in the presence of noncondensable gases has received continuing attention in the frame of the Severe Accident Research Network of Excellence, both in the first (2004–2008) and in the second (2009–2013) EC integrated projects. Among the different reasons this basic phenomenon, addressed by classical treatments dating from the first decades of the last century, is considered so relevant is the interest in developing updated CFD models for reactor containment analysis, which requires validating the available modelling techniques at a different level. In the frame of SARNET, benchmarking activities were undertaken taking advantage of the work performed at different institutions in setting up and developing models for steam condensation in conditions of interest for nuclear reactor containment. Four steps were performed in the activity, involving: (1) an idealized problem freely inspired by the actual conditions occurring in an experimental facility, CONAN, installed at the University of Pisa; (2) a first comparison with experimental data purposely collected by the CONAN facility; (3) a second comparison with data available from experimental campaigns performed in the same apparatus before the inclusion of the activities in SARNET; (4) a third exercise involving data obtained at lower mixture velocity than in previous campaigns, aimed at providing conditions closer to those addressed in reactor containment analyses. The last step of the benchmarking activity required changing the configuration of the experimental apparatus to achieve the lower flow rates involved in the new test specifications. The

  16. Benchmarking of venous thromboembolism prophylaxis practice with ENT.UK guidelines.

    Science.gov (United States)

    Al-Qahtani, Ali S

    2017-05-01

    The aim of this study was to benchmark our guidelines for the prevention of venous thromboembolism (VTE) in the ENT surgical population against ENT.UK guidelines, and also to encourage healthcare providers to utilize benchmarking as an effective method of improving performance. The study design is a prospective descriptive analysis. The setting of this study is a tertiary referral centre (Assir Central Hospital, Abha, Saudi Arabia). In this study, we benchmark our practice guidelines for the prevention of VTE in the ENT surgical population against ENT.UK guidelines to identify and mitigate any gaps. The ENT.UK 2010 guidelines were downloaded from the ENT.UK website. Our guidelines were compared against them to determine whether our performance meets or falls short of ENT.UK guidelines. Immediate corrective actions will take place if there is a quality chasm between the two guidelines. ENT.UK guidelines are evidence-based and regularly updated, and may serve as a role model for adoption and benchmarking. Our guidelines were accordingly amended to contain all factors required to provide a quality service to ENT surgical patients. Although often not given appropriate attention, benchmarking is a useful tool in improving the quality of health care. It allows learning from others' practices and experiences, and works towards closing any quality gaps. In addition, benchmarking clinical outcomes is critical for quality improvement and for informing decisions concerning service provision. It is recommended that benchmarking be included among the quality improvement methods of healthcare services.

  17. Reevaluation of the Jezebel Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-10

    Every nuclear engineering student is familiar with Jezebel, the homogeneous bare sphere of plutonium first assembled at Los Alamos in 1954-1955. The actual Jezebel assembly was neither homogeneous, nor bare, nor spherical; nor was it singular – there were hundreds of Jezebel configurations assembled. The Jezebel benchmark has been reevaluated for the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Logbooks, original drawings, mass accountability statements, internal reports, and published reports have been used to model four actual three-dimensional Jezebel assemblies with high fidelity. Because the documentation available today is often inconsistent, three major assumptions were made regarding plutonium part masses and dimensions. The first was that the assembly masses given in Los Alamos report LA-4208 (1969) were correct, and the second was that the original drawing dimension for the polar height of a certain major part was correct. The third assumption was that a change notice indicated on the original drawing was not actually implemented. This talk will describe these assumptions, the alternatives, and the implications. Since the publication of the 2013 ICSBEP Handbook, the actual masses of the major components have turned up. Our assumption regarding the assembly masses was proven correct, but we had the mass distribution incorrect. Work to incorporate the new information is ongoing, and this talk will describe the latest assessment.

  18. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  19. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  20. A proposal for benchmarking learning objects

    OpenAIRE

    Rita Falcão; Alfredo Soeiro

    2007-01-01

    This article proposes a methodology for benchmarking learning objects. It aims to deal with two problems related to e-learning: the validation of learning using this method and the return on investment of the process of development and use: effectiveness and efficiency. This paper describes a proposal for evaluating learning objects (LOs) through benchmarking, based on the Learning Object Metadata Standard and on an adaptation of the main tools of the BENVIC project. The Benchmarking of Learning O...

  1. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  2. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  3. HPC Benchmark Suite NMx, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  4. Analysis of VENUS-3 benchmark experiment

    International Nuclear Information System (INIS)

    Kodeli, I.; Sartori, E.

    1998-01-01

    The paper presents the revision and the analysis of the VENUS-3 benchmark experiment performed at CEN/SCK, Mol (Belgium). This benchmark was found to be particularly suitable for validation of current calculation tools like 3-D neutron transport codes, and in particular of the 3-D sensitivity and uncertainty analysis code developed within the EFF project. The compilation of the integral experiment was integrated into the SINBAD electronic database for storing and retrieving information about shielding experiments for nuclear systems. SINBAD now includes 33 reviewed benchmark descriptions and several compilations awaiting review, among them many benchmarks relevant for pressure vessel dosimetry system validation. (author)

  5. Benchmarking of nuclear economics tools

    International Nuclear Information System (INIS)

    Moore, Megan; Korinny, Andriy; Shropshire, David; Sadhankar, Ramesh

    2017-01-01

    Highlights: • INPRO and GIF economic tools exhibited good alignment in total capital cost estimation. • Subtle discrepancies in the cost result from differences in financing and the fuel cycle assumptions. • A common set of assumptions was found to reduce the discrepancies to 1% or less. • Opportunities for harmonisation of economic tools exist. - Abstract: Benchmarking of the economics methodologies developed by the Generation IV International Forum (GIF) and the International Atomic Energy Agency’s International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO) was performed for three Generation IV nuclear energy systems. The Economic Modeling Working Group of GIF developed an Excel-based spreadsheet package, G4ECONS (Generation 4 Excel-based Calculation Of Nuclear Systems), to calculate the total capital investment cost (TCIC) and the levelised unit energy cost (LUEC). G4ECONS is sufficiently generic in the sense that it can accept the types of projected input, performance and cost data that are expected to become available for Generation IV systems through various development phases and that it can model both open and closed fuel cycles. The Nuclear Energy System Assessment (NESA) Economic Support Tool (NEST) was developed to enable an economic analysis using the INPRO methodology to easily calculate outputs including the TCIC, LUEC and other financial figures of merit including internal rate of return, return on investment and net present value. NEST is also Excel-based and can be used to evaluate nuclear reactor systems using the open fuel cycle, MOX (mixed oxide) fuel recycling and closed cycles. A Super Critical Water-cooled Reactor system with an open fuel cycle and two Fast Reactor systems, one with a break-even fuel cycle and another with a burner fuel cycle, were selected for the benchmarking exercise. Published data on capital and operating costs were used for economics analyses using the G4ECONS and NEST tools. Both G4ECONS and
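
    As orientation for the two figures of merit named in this record, a minimal levelised-cost sketch follows. It is the generic textbook formula with invented numbers, not the G4ECONS or NEST implementation:

        # Generic levelised unit energy cost (LUEC) via a capital recovery
        # factor; a sketch only, with all inputs invented for illustration.
        def luec(tcic, fixed_om_per_y, fuel_per_y, discount_rate, lifetime_y, mwh_per_y):
            crf = (discount_rate * (1 + discount_rate) ** lifetime_y
                   / ((1 + discount_rate) ** lifetime_y - 1))
            annualized_capital = tcic * crf
            return (annualized_capital + fixed_om_per_y + fuel_per_y) / mwh_per_y

        # e.g. a $4B plant, 5% discount rate, 60-year life, ~7.9 TWh/year:
        print(f"{luec(4.0e9, 8.0e7, 5.0e7, 0.05, 60, 7.9e6):.1f} $/MWh")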

  6. Benchmarking the UAF Tsunami Code

    Science.gov (United States)

    Nicolsky, D.; Suleimani, E.; West, D.; Hansen, R.

    2008-12-01

    We have developed a robust numerical model to simulate propagation and run-up of tsunami waves in the framework of non-linear shallow water theory. The temporal position of the shoreline is calculated using the free-surface moving boundary condition. The numerical code adopts a staggered leapfrog finite-difference scheme to solve the shallow water equations formulated for depth-averaged water fluxes in spherical coordinates. To increase spatial resolution, we construct a series of telescoping embedded grids that focus on areas of interest. For large-scale problems, a parallel version of the algorithm is developed by employing a domain decomposition technique. The developed numerical model is benchmarked in an exhaustive series of tests suggested by NOAA. We conducted analytical and laboratory benchmarking for the cases of solitary wave runup on simple and composite beaches, run-up of a solitary wave on a conical island, and the extreme runup in the Monai Valley, Okushiri Island, Japan, during the 1993 Hokkaido-Nansei-Oki tsunami. Additionally, we field-tested the developed model to simulate the November 15, 2006 Kuril Islands tsunami, and compared the simulated water height to observations at several DART buoys. In all conducted tests we calculated a numerical solution with an accuracy recommended by NOAA standards. In this work we summarize results of numerical benchmarking of the code, its strengths and limits with regard to reproduction of fundamental features of coastal inundation, and also illustrate some possible improvements. We applied the developed model to simulate potential inundation of the city of Seward located in Resurrection Bay, Alaska. To calculate the areal extent of potential inundation, we take into account available near-shore bathymetry and inland topography on a grid of 15 meter resolution. By choosing several scenarios of potential earthquakes, we calculated the maximal areal extent of Seward inundation. As a test to validate our model, we
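
    A minimal sketch of the numerical core this abstract names, a staggered-grid leapfrog-style update for shallow-water flow, is given below. It is a linearized 1-D toy with a flat bottom and invented parameters, not the UAF code:

        import numpy as np

        # Linear 1-D shallow water on a staggered grid: elevations at cell
        # centers, fluxes at faces, updated alternately (leapfrog pattern).
        g, depth = 9.81, 100.0               # gravity, still-water depth (m)
        nx, dx = 200, 1000.0                 # grid cells, spacing (m)
        dt = 0.5 * dx / np.sqrt(g * depth)   # CFL-limited time step

        eta = np.exp(-((np.arange(nx) - nx // 2) * dx / 1.0e4) ** 2)  # hump
        u = np.zeros(nx + 1)                 # face velocities; walls stay 0

        for _ in range(500):
            u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])   # momentum
            eta -= depth * dt / dx * (u[1:] - u[:-1])       # continuity

        print("max surface elevation after 500 steps:", eta.max())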

  7. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-06-01

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test-induced failures, the probability of failures remaining unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  8. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    Gilliam, D.M.; Briesmeister, J.F.

    1994-01-01

    A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235U, 239Pu, 238U, and 237Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a "light water" S(α,β) scattering kernel

  9. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: improve the management and technical development of software intensive systems; have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R&D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  10. International benchmark on the natural convection test in Phenix reactor

    International Nuclear Information System (INIS)

    Tenchine, D.; Pialla, D.; Fanning, T.H.; Thomas, J.W.; Chellapandi, P.; Shvetsov, Y.; Maas, L.; Jeong, H.-Y.; Mikityuk, K.; Chenu, A.; Mochizuki, H.; Monti, S.

    2013-01-01

    Highlights: ► Phenix main characteristics, instrumentation and natural convection test are described. ► “Blind” calculations and post-test calculations from all the participants to the benchmark are compared to reactor data. ► Lessons learned from the natural convection test and the associated calculations are discussed. -- Abstract: The French Phenix sodium cooled fast reactor (SFR) started operation in 1973 and was stopped in 2009. Before the reactor was definitively shut down, several final tests were planned and performed, including a natural convection test in the primary circuit. During this natural convection test, the heat rejection provided by the steam generators was disabled, followed several minutes later by reactor scram and coast-down of the primary pumps. The International Atomic Energy Agency (IAEA) launched a Coordinated Research Project (CRP) named “control rod withdrawal and sodium natural circulation tests performed during the Phenix end-of-life experiments”. The overall purpose of the CRP was to improve the Member States’ analytical capabilities in the field of SFR safety. An international benchmark on the natural convection test was organized with “blind” calculations in a first step, then “post-test” calculations and sensitivity studies compared with reactor measurements. Eight organizations from seven Member States took part in the benchmark: ANL (USA), CEA (France), IGCAR (India), IPPE (Russian Federation), IRSN (France), KAERI (Korea), PSI (Switzerland) and University of Fukui (Japan). Each organization performed computations and contributed to the analysis and global recommendations. This paper summarizes the findings of the CRP benchmark exercise associated with the Phenix natural convection test, including blind calculations, post-test calculations and comparisons with measured data. General comments and recommendations are pointed out to improve future simulations of natural convection in SFRs

  11. Benchmarking Ortec ISOTOPIC measurements and calculations

    International Nuclear Information System (INIS)

    This paper presents a description of eight compiled benchmark tests conducted to probe and demonstrate the extensive utility of the Ortec ISOTOPIC gamma-ray analysis software program. The paper describes tests of the program's capability to compute finite-geometry correction factors and sample-matrix-container photon absorption correction factors. Favorable results are obtained in all benchmark tests. (author)
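
    As an example of the sample-matrix photon absorption correction this record mentions, the standard self-attenuation factor for a homogeneous slab sample is sketched below. This is the textbook relation, not necessarily what ISOTOPIC computes internally:

        import numpy as np

        # Ratio of unattenuated to attenuated emission for a uniform slab:
        # C = mu*t / (1 - exp(-mu*t)), with mu the linear attenuation
        # coefficient at the gamma energy and t the sample thickness.
        def slab_self_attenuation_correction(mu_per_cm, thickness_cm):
            x = mu_per_cm * thickness_cm
            return x / -np.expm1(-x)  # expm1 keeps the thin-sample limit exact

        print(slab_self_attenuation_correction(0.15, 2.0))  # about 1.16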

  12. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  13. Benchmarking nutrient use efficiency of dairy farms

    NARCIS (Netherlands)

    Mu, W.; Groen, E.A.; Middelaar, van C.E.; Bokkers, E.A.M.; Hennart, S.; Stilmant, D.; Boer, de I.J.M.

    2017-01-01

    The nutrient use efficiency (NUE) of a system, generally computed as the amount of nutrients in valuable outputs over the amount of nutrients in all inputs, is commonly used to benchmark the environmental performance of dairy farms. Benchmarking the NUE of farms, however, may lead to biased

  14. Benchmark analysis of railway networks and undertakings

    NARCIS (Netherlands)

    Hansen, I.A.; Wiggenraad, P.B.L.; Wolff, J.W.

    2013-01-01

    Benchmark analysis of railway networks and companies has been stimulated by the European policy of deregulation of transport markets, the opening of national railway networks and markets to new entrants and separation of infrastructure and train operation. Recent international railway benchmarking

  15. The Linked Data Benchmark Council Project

    NARCIS (Netherlands)

    P.A. Boncz (Peter); I. Fundulaki; A. Gubichev (Andrey); J. Larriba-Pey (Josep); T. Neumann (Thomas)

    2013-01-01

    Despite its fast growth and increasing popularity, the broad field of RDF and graph database systems lacks an independent authority for developing benchmarks, and for neutrally assessing benchmark results through industry-strength auditing, which would allow one to quantify and compare the

  16. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted by comparing different rotor geometries and solutions while keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...

  17. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and a good with unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.
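
    For orientation, the two well-known benchmark cases the abstract starts from can be written down directly (our notation, standard consumer theory rather than the paper's new derivations):

        % Quasilinear utility: the interior demand for x solves v'(x) = p_x/p_y
        % independently of income m, so x has zero income elasticity.
        U(x, y) = v(x) + y

        % Cobb-Douglas utility: x^* = \alpha m / p_x, so each good has unit
        % income elasticity, unit own-price elasticity, and zero cross-price
        % elasticity.
        U(x, y) = x^{\alpha}\, y^{1-\alpha}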

  18. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  19. Benchmarking of refinery emissions performance : Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs

  20. Vver-1000 Mox core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities include benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  1. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui [National Renewable Energy Laboratory (NREL), Golden, CO (United States). Transportation and Hydrogen Systems Center

    2017-10-19

    In FY16, the thermal performance of the 2014 Honda Accord Hybrid power electronics thermal management system was benchmarked. Both experiments and numerical simulation were utilized to thoroughly study the thermal resistances and temperature distribution in the power module. Experimental results obtained from the water-ethylene glycol tests provided the junction-to-liquid thermal resistance. The finite element analysis (FEA) and computational fluid dynamics (CFD) models were found to yield a good match with experimental results. Both experimental and modeling results demonstrate that the passive stack is the dominant thermal resistance for both the motor and power electronics systems. The 2014 Accord power electronics system yields steady-state thermal resistance values around 42-50 mm²·K/W, depending on the flow rate. At a typical flow rate of 10 liters per minute, the thermal resistance of the Accord system was found to be about 44 percent lower than that of the 2012 Nissan LEAF system that was benchmarked in FY15. The main reason for the difference is that the Accord power module used a metalized-ceramic substrate and eliminated the thermal interface material layers. FEA models were developed to study the transient performance of the 2012 Nissan LEAF, 2014 Accord, and two other systems that feature conventional power module designs. The simulation results indicate that the 2012 LEAF power module has the lowest thermal impedance at time scales less than one second. This is probably due to moving low-thermal-conductivity materials further away from the heat source and enhancing the heat spreading effect from the copper-molybdenum plate close to the insulated gate bipolar transistors. When approaching steady state, the Honda system shows lower thermal impedance. Measurement results of the thermal resistance of the 2015 BMW i3 power electronic system indicate that the i3 insulated gate bipolar transistor module has significantly lower junction
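
    The unit quoted in this record, mm²·K/W, is an area-specific thermal resistance: junction-to-coolant temperature rise per unit heat flux. A quick check with invented numbers (none of these values are from the report):

        # Area-specific thermal resistance in mm^2*K/W. All inputs are
        # illustrative assumptions, not measurements from the NREL report.
        def specific_thermal_resistance(t_junction_c, t_coolant_c, heat_w, area_mm2):
            return (t_junction_c - t_coolant_c) / heat_w * area_mm2

        # 85 K rise at 200 W over a 100 mm^2 die -> 42.5 mm^2*K/W,
        # i.e. within the 42-50 mm^2*K/W band the record quotes.
        print(specific_thermal_resistance(150.0, 65.0, 200.0, 100.0))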

  2. Pescara benchmark: overview of modelling, testing and identification

    Energy Technology Data Exchange (ETDEWEB)

    Bellino, A; Garibaldi, L; Marchesiello, S [Dynamics/Identification Research Group, Department of Mechanics, Politecnico of Torino, C.so Duca degli Abruzzi 24, 10129 Torino (Italy); Brancaleoni, F; Gabriele, S; Spina, D [Department of Structures, University 'Roma Tre' of Rome, Via C. Segre 4/6, 00146 Rome (Italy); Bregant, L [Department of Mechanical and Marine Engineering, University of Trieste, Via Valerio 8, 34127 Trieste (Italy); Carminelli, A; Catania, G; Sorrentino, S [Diem Department of Mechanical Engineering, University of Bologna, Viale Risorgimento 2, 40136 Bologna (Italy); Di Evangelista, A; Valente, C; Zuccarino, L, E-mail: c.valente@unich.it [Department of Engineering, University 'G. d'Annunzio' of Chieti-Pescara, Viale Pindaro 42, 65127 Pescara (Italy)]

    2011-07-19

    The 'Pescara benchmark' is part of the national research project 'BriViDi' (BRIdge VIbrations and DIagnosis) supported by the Italian Ministero dell'Universita e Ricerca. The project is aimed at developing an integrated methodology for the structural health evaluation of railway r/c and p/c bridges. The methodology should provide for applicability in operating conditions, easy data acquisition through common industrial instrumentation, and robustness and reliability against structural and environmental uncertainties. The Pescara benchmark consisted of lab tests, to obtain a consistent and large experimental database, and subsequent data processing. Special tests were devised to simulate the train transit effects in actual field conditions. Prestressed concrete beams of current industrial production, both sound and damaged at various corrosion severity levels, were tested. The results were collected both in a deterministic setting and in a form suitable for dealing with experimental uncertainties. Damage identification was split into two approaches: with or without a reference model. In the first case, finite element models were used in conjunction with non-conventional updating techniques. In the second case, specialized output-only identification techniques capable of dealing with time-variant and possibly nonlinear systems were developed. The lab tests allowed validating the above approaches and the performances of classical modal-based damage indicators.

  3. JNC results of BN-600 benchmark calculation (phase 4)

    International Nuclear Information System (INIS)

    Ishikawa, Makoto

    2003-01-01

    This paper presents the results of JNC, Japan, for Phase 4 of the BN-600 core benchmark problem (Hex-Z fully MOX-fuelled core model) organized by the IAEA. The benchmark specification is based on the RCM report of the IAEA CRP on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of LMFR Reactivity Effects, Action 3.12' (calculations for the BN-600 fully fuelled MOX core for subsequent transient analyses). The JENDL-3.2 nuclear data library was used for calculating 70-group ABBN-type group constants. Two cell models were applied for fuel assembly and control rod calculations: a homogeneous and a heterogeneous (cylindrical supercell) model. The basic diffusion calculation was a three-dimensional Hex-Z model with 18 groups (Citation code). Transport calculations were 18-group, three-dimensional (NSHEC code), based on the Sn-transport nodal method developed at JNC. The generated thermal power per fission was based on Sher's data corrected on the basis of the ENDF/B-IV data library. Calculation results are presented in tables for intercomparison

  4. Pescara benchmark: overview of modelling, testing and identification

    Science.gov (United States)

    Bellino, A.; Brancaleoni, F.; Bregant, L.; Carminelli, A.; Catania, G.; Di Evangelista, A.; Gabriele, S.; Garibaldi, L.; Marchesiello, S.; Sorrentino, S.; Spina, D.; Valente, C.; Zuccarino, L.

    2011-07-01

    The 'Pescara benchmark' is part of the national research project 'BriViDi' (BRIdge VIbrations and DIagnosis) supported by the Italian Ministero dell'Università e Ricerca. The project is aimed at developing an integrated methodology for the structural health evaluation of railway r/c and p/c bridges. The methodology should provide for applicability in operating conditions, easy data acquisition through common industrial instrumentation, and robustness and reliability against structural and environmental uncertainties. The Pescara benchmark consisted of lab tests, to obtain a consistent and large experimental database, and subsequent data processing. Special tests were devised to simulate the train transit effects in actual field conditions. Prestressed concrete beams of current industrial production, both sound and damaged at various corrosion severity levels, were tested. The results were collected both in a deterministic setting and in a form suitable for dealing with experimental uncertainties. Damage identification was split into two approaches: with or without a reference model. In the first case, finite element models were used in conjunction with non-conventional updating techniques. In the second case, specialized output-only identification techniques capable of dealing with time-variant and possibly nonlinear systems were developed. The lab tests allowed validating the above approaches and the performances of classical modal-based damage indicators.

  5. Benchmarking Commercial Conformer Ensemble Generators.

    Science.gov (United States)

    Friedrich, Nils-Ole; de Bruyn Kops, Christina; Flachsenberg, Florian; Sommer, Kai; Rarey, Matthias; Kirchmair, Johannes

    2017-11-27

    We assess and compare the performance of eight commercial conformer ensemble generators (ConfGen, ConfGenX, cxcalc, iCon, MOE LowModeMD, MOE Stochastic, MOE Conformation Import, and OMEGA) and one leading free algorithm, the distance geometry algorithm implemented in RDKit. The comparative study is based on a new version of the Platinum Diverse Dataset, a high-quality benchmarking dataset of 2859 protein-bound ligand conformations extracted from the PDB. Differences in the performance of commercial algorithms are much smaller than those observed for free algorithms in our previous study (J. Chem. Inf. 2017, 57, 529-539). For commercial algorithms, the median minimum root-mean-square deviations measured between protein-bound ligand conformations and ensembles of a maximum of 250 conformers are between 0.46 and 0.61 Å. Commercial conformer ensemble generators are characterized by their high robustness, with at least 99% of all input molecules successfully processed and few or even no substantial geometrical errors detectable in their output conformations. The RDKit distance geometry algorithm (with minimization enabled) appears to be a good free alternative since its performance is comparable to that of the midranked commercial algorithms. Based on a statistical analysis, we elaborate on which algorithms to use and how to parametrize them for best performance in different application scenarios.
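
    A sketch of the free pipeline ranked in this abstract (RDKit distance geometry with minimization enabled, then best-RMS comparison against a reference conformer), assuming a reasonably recent RDKit build with ETKDGv3; the molecule and ensemble size are arbitrary choices, not the paper's protocol:

        from rdkit import Chem
        from rdkit.Chem import AllChem, rdMolAlign

        mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
        params = AllChem.ETKDGv3()
        params.randomSeed = 42
        cids = AllChem.EmbedMultipleConfs(mol, numConfs=250, params=params)
        AllChem.MMFFOptimizeMoleculeConfs(mol)  # "minimization enabled" variant

        # In the study the reference is a protein-bound PDB conformation and
        # the minimum RMSD over the ensemble is reported; here we simply use
        # the first generated conformer as a stand-in reference.
        ref_id = int(cids[0])
        rmsds = [rdMolAlign.GetBestRMS(mol, mol, prbId=int(c), refId=ref_id)
                 for c in cids[1:]]
        print("min RMSD to the stand-in reference: %.2f A" % min(rmsds))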

  6. What Randomized Benchmarking Actually Measures

    Science.gov (United States)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; Sarovar, Mohan; Blume-Kohout, Robin

    2017-09-01

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r . For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r ), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
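
    For reference, the standard single-exponential RB analysis the abstract builds on can be stated compactly (a summary in the usual notation, not the paper's new theory). The mean survival probability over random circuits of length m is fit to

        \bar{P}(m) = A\,p^{m} + B,

    where A and B absorb state preparation and measurement errors, and the reported error metric for n qubits (Hilbert-space dimension d = 2^n) is extracted from the decay constant p as

        r = \frac{(d-1)\,(1-p)}{d}.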

  7. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-08-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test-induced failures, the probability of failures remaining unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight into the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches, and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent to the different approaches

  8. Benchmark tests of JENDL-1

    International Nuclear Information System (INIS)

    Kikuchi, Yasuyuki; Hasegawa, Akira; Takano, Hideki; Kamei, Takanobu; Hojuyama, Takeshi; Sasaki, Makoto; Seki, Yuji; Zukeran, Atsushi; Otake, Iwao.

    1982-02-01

    Various benchmark tests were made on JENDL-1. At the first stage, various core center characteristics were tested for many critical assemblies with a one-dimensional model. At the second stage, the applicability of JENDL-1 was further tested on more sophisticated problems for the MOZART and ZPPR-3 assemblies with a two-dimensional model. It was proved that JENDL-1 predicted various quantities of fast reactors satisfactorily as a whole. However, the following problems were pointed out: 1) There exists a discrepancy of 0.9% in the k_eff values between the Pu- and U-cores. 2) The fission rate ratio of 239Pu to 235U is underestimated by 3%. 3) The Doppler reactivity coefficients are overestimated by about 10%. 4) The control rod worths are underestimated by 4%. 5) The fission rates of 235U and 239Pu are underestimated considerably in the outer core and radial blanket regions. 6) The negative sodium void reactivities are overestimated when the sodium is removed from the outer core. As a whole, most of the problems of JENDL-1 seem to be related to the neutron leakage and the neutron spectrum. It was found through further study that most of these problems came from too-small diffusion coefficients and too-large elastic removal cross sections above 100 keV, which is probably caused by overestimation of the total and elastic scattering cross sections for structural materials in the unresolved resonance region up to several MeV. (author)

  9. Using benchmarking for the primary allocation of EU allowances. An application to the German power sector

    Energy Technology Data Exchange (ETDEWEB)

    Schleich, J.; Cremer, C.

    2007-07-01

    Basing allocation of allowances for existing installations under the EU Emissions Trading Scheme on specific emission values (benchmarks) rather than on historic emissions may have several advantages. Benchmarking may recognize early action, provide higher incentives for replacing old installations and result in fewer distortions in case of updating, facilitate EU-wide harmonization of allocation rules or allow for simplified and more efficient closure rules. Applying an optimization model for the German power sector, we analyze the distributional effects of various allocation regimes across and within different generation technologies. Results illustrate that regimes with a single uniform benchmark for all fuels or with a single benchmark for coal- and lignite-fired plants imply substantial distributional effects. In particular, lignite- and old coal-fired plants would be made worse off. Under a regime with fuel-specific benchmarks for gas, coal, and lignite, 50% of the gas-fired plants and 4% of the lignite- and coal-fired plants would face an allowance deficit of at least 10%, while primarily modern lignite-fired plants would benefit. Capping the surplus and shortage of allowances would further moderate the distributional effects, but may tarnish incentives for efficiency improvements and recognition of early action. (orig.)

  10. Workshop: Monte Carlo computational performance benchmark - Contributions

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.; Petrovic, B.; Martin, W.R.; Sutton, T.; Leppaenen, J.; Forget, B.; Romano, P.; Siegel, A.; Hoogenboom, E.; Wang, K.; Li, Z.; She, D.; Liang, J.; Xu, Q.; Qiu, Y.; Yu, J.; Sun, J.; Fan, X.; Yu, G.; Bernard, F.; Cochet, B.; Jinaphanh, A.; Jacquet, O.; Van der Marck, S.; Tramm, J.; Felker, K.; Smith, K.; Horelik, N.; Capellan, N.; Herman, B.

    2013-01-01

    This series of slides is divided into 3 parts. The first part is dedicated to the presentation of the Monte-Carlo computational performance benchmark (aims, specifications and results). This benchmark aims at performing a full-size Monte Carlo simulation of a PWR core with axial and pin-power distribution. Many different Monte Carlo codes have been used and their results have been compared in terms of computed values and processing speeds. It appears that local power values mostly agree quite well. The first part also includes the presentations of about 10 participants in which they detail their calculations. In the second part, an extension of the benchmark is proposed in order to simulate a more realistic reactor core (for instance non-uniform temperature) and to assess feedback coefficients due to change of some parameters. The third part deals with another benchmark, the BEAVRS benchmark (Benchmark for Evaluation And Validation of Reactor Simulations). BEAVRS is also a full-core PWR benchmark for Monte Carlo simulations

  11. ICSBEP Benchmarks For Nuclear Data Applications

    Science.gov (United States)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  12. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  13. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 "Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core" problem is evaluated. The proposed design is a large, axially heterogeneous, oxide-fueled fast reactor as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated
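
    For reference, the sodium void reactivity effect compared in such benchmarks is conventionally defined from the effective multiplication factors of the reference and voided configurations (the standard definition, not anything specific to this paper):

        \Delta\rho_{\mathrm{void}} \;=\; \left(1 - \frac{1}{k_{\mathrm{void}}}\right) - \left(1 - \frac{1}{k_{\mathrm{ref}}}\right) \;=\; \frac{k_{\mathrm{void}} - k_{\mathrm{ref}}}{k_{\mathrm{ref}}\, k_{\mathrm{void}}},

    so a positive value indicates that voiding the sodium adds reactivity.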

  14. Benchmark Imagery FY11 Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-14

    This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.

  15. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work
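
    As an illustration of the predictor-corrector feature mentioned in the highlights, here is a generic sketch of one depletion step. The matrix-exponential "transmute" operator and the toy flux feedback stand in for the real transport/activation solve (MCNP6 coupled to CINDER2008 in TINDER); nothing here is TINDER's actual code:

        import numpy as np
        from scipy.linalg import expm

        def transmute(n, flux, A_of_flux, dt):
            # One burnup substep: n(t+dt) = exp(A(flux) * dt) @ n(t)
            return expm(A_of_flux(flux) * dt) @ n

        def predictor_corrector_step(n, flux_of_n, A_of_flux, dt):
            phi0 = flux_of_n(n)                         # beginning-of-step flux
            n_pred = transmute(n, phi0, A_of_flux, dt)  # predictor
            phi1 = flux_of_n(n_pred)                    # end-of-step flux
            n_corr = transmute(n, phi1, A_of_flux, dt)  # corrector
            return 0.5 * (n_pred + n_corr)              # average the two

        # Toy two-nuclide chain with an invented flux feedback, for demo only.
        A_of_flux = lambda phi: np.array([[-1.0e-4 * phi, 0.0],
                                          [ 1.0e-4 * phi, -1.0e-5]])
        flux_of_n = lambda n: 1.0 + 0.1 * n[0]
        print(predictor_corrector_step(np.array([1.0, 0.0]),
                                       flux_of_n, A_of_flux, 10.0))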

  16. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and the...

  17. How benchmarking can improve patient nutrition.

    Science.gov (United States)

    Ellis, Jane

    Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.

  18. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    Energy Technology Data Exchange (ETDEWEB)

    Ericson, Sean J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Alvarez, Paul [The Wired Group

    2018-04-13

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  19. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    R. Angles Rojas (Renzo); M.-D. Pham (Minh-Duc); P.A. Boncz (Peter)

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics

  20. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  1. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic...

  2. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    and professional performance but only if prior professional performance was low. Supplemental analyses support the robustness of our results. Findings indicate conditions under which bureaucratic benchmarking information may affect professional performance and advance research on professional control and social...

  3. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
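
    The scoring idea in (2) can be made concrete. Below is a minimal, hypothetical sketch of such a scoring system: each process's data-model mismatch is expressed as a normalized RMSE and mapped to a [0, 1] skill score, and the scores are combined with user-chosen weights. The function names, the weighting scheme and the exponential skill mapping are illustrative assumptions, not the framework actually proposed in the paper.

        import numpy as np

        def nrmse(model, obs):
            # Normalized root-mean-square error between model output and benchmark data.
            model, obs = np.asarray(model, float), np.asarray(obs, float)
            return np.sqrt(np.mean((model - obs) ** 2)) / (np.std(obs) + 1e-12)

        def benchmark_score(mismatches, weights=None):
            # `mismatches` is a dict like {"gpp": 0.4, "latent_heat": 0.7}, where each
            # value is an NRMSE against the corresponding benchmark data set.
            keys = list(mismatches)
            w = np.ones(len(keys)) if weights is None else np.array([weights[k] for k in keys], float)
            skills = np.exp(-np.array([mismatches[k] for k in keys]))  # 1 = perfect, -> 0 as error grows
            return float(np.sum(w * skills) / np.sum(w))

        print(benchmark_score({"gpp": 0.4, "latent_heat": 0.7}, weights={"gpp": 2.0, "latent_heat": 1.0}))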

  4. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes...... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  5. OR-Benchmark: An Open and Reconfigurable Digital Watermarking Benchmarking Framework

    OpenAIRE

    Wang, Hui; Ho, Anthony TS; Li, Shujun

    2015-01-01

    Benchmarking digital watermarking algorithms is not an easy task because different applications of digital watermarking often have very different sets of requirements and trade-offs between conflicting requirements. While there have been some general-purpose digital watermarking benchmarking systems available, they normally do not support complicated benchmarking tasks and cannot be easily reconfigured to work with different watermarking algorithms and testing conditions. In this paper, we pr...

  6. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.
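
    As a rough illustration of the end-use comparison that action-oriented benchmarking adds on top of whole-building metrics, the sketch below flags end uses whose energy intensity exceeds a peer median. The end-use names, numbers and simple ranking rule are hypothetical; EnergyIQ's actual data model and methods are not reproduced here.

        # Compare a building's end-use energy intensities (kWh/m2/yr) against peer
        # medians and rank the excesses as candidate efficiency improvements.
        building = {"lighting": 45.0, "cooling": 60.0, "fans": 18.0, "plug_loads": 30.0}
        peer_median = {"lighting": 30.0, "cooling": 55.0, "fans": 10.0, "plug_loads": 28.0}

        opportunities = sorted(
            ((use, building[use] - peer_median[use]) for use in building),
            key=lambda t: t[1], reverse=True)
        for use, excess in opportunities:
            if excess > 0:
                print(f"{use}: {excess:.0f} kWh/m2/yr above peer median -> screen for retrofit")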

  7. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
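
    To illustrate the kind of programming-free metric computation the authors describe, the sketch below uses Python's rdflib to run SPARQL counting queries over a merged graph of gold and predicted annotations and derives precision, recall and F1. The file name and the ex: predicates are placeholders, not the project's actual ontology.

        from rdflib import Graph

        g = Graph()
        g.parse("annotations.ttl", format="turtle")  # hypothetical merged gold + system annotations

        PREFIX = "PREFIX ex: <http://example.org/mutation#>\n"

        def count(query):
            # Run a SPARQL COUNT query and unwrap the single numeric binding.
            return int(next(iter(g.query(PREFIX + query)))[0])

        tp = count("SELECT (COUNT(*) AS ?n) WHERE { ?d ex:goldMutation ?m . ?d ex:predictedMutation ?m . }")
        gold = count("SELECT (COUNT(*) AS ?n) WHERE { ?d ex:goldMutation ?m . }")
        pred = count("SELECT (COUNT(*) AS ?n) WHERE { ?d ex:predictedMutation ?m . }")

        precision, recall = tp / pred, tp / gold
        print(f"P={precision:.3f} R={recall:.3f} F1={2 * precision * recall / (precision + recall):.3f}")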

  8. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. An excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio, obtained with the BUGLE-93 library, for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used
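
    The headline statistic is simple arithmetic over the per-dosimeter C/M ratios. A toy reproduction with made-up numbers (the report's individual dosimeter values are not listed here):

        import numpy as np

        # Illustrative calculated-to-measured (C/M) equivalent fission flux ratios
        # for a handful of dosimeters; the report averages 31 of them.
        c_over_m = np.array([0.91, 0.95, 0.93, 0.89, 0.96])

        print(f"average C/M = {c_over_m.mean():.2f} +/- {c_over_m.std(ddof=1):.2f}")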

  9. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  10. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  11. Toxicological benchmarks for screening contaminants of potential concern for effects on sediment-associated biota: 1994 Revision. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Hull, R.N. [JAYCOR, Vienna, VA (United States)]|[Oak Ridge National Lab., TN (United States); Suter, G.W. II [Oak Ridge National Lab., TN (United States)

    1994-06-01

    Because a hazardous waste site may contain hundreds of chemicals, it is important to screen contaminants of potential concern for the ecological risk assessment. Often this screening is done as part of a Screening Assessment, the purpose of which is to evaluate the available data, identify data gaps, and screen contaminants of potential concern. Screening may be accomplished by using a set of toxicological benchmarks. These benchmarks are helpful in determining whether contaminants warrant further assessment or are at a level that requires no further attention. If a chemical concentration or the reported detection limit exceeds a proposed lower benchmark, more analysis is needed to determine the hazards posed by that chemical. If, however, the chemical concentration falls below the lower benchmark value, the chemical may be eliminated from further study. This report briefly describes three categories of approaches to the development of sediment quality benchmarks. These approaches are based on analytical chemistry, toxicity test and field survey data. A fourth integrative approach incorporates all three types of data. The equilibrium partitioning approach is recommended for screening nonpolar organic contaminants of concern in sediments. For inorganics, the National Oceanic and Atmospheric Administration has developed benchmarks that may be used for screening. There are supplemental benchmarks from the province of Ontario, the state of Wisconsin, and US Environmental Protection Agency Region V. Pore water analysis is recommended for polar organic compounds; comparisons are then made against water quality benchmarks. This report is an update of a prior report. It contains revised ER-L and ER-M values, the five EPA proposed sediment quality criteria, and benchmarks calculated for several nonionic organic chemicals using equilibrium partitioning.
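
    The equilibrium partitioning approach recommended above derives a sediment benchmark from a water-quality benchmark via the organic-carbon partition coefficient, C_sed = K_oc * f_oc * C_water. A minimal sketch with purely illustrative inputs, not values taken from the report:

        def eqp_sediment_benchmark(water_benchmark_ug_per_l, log_koc, f_oc):
            # Equilibrium-partitioning sediment benchmark (ug/kg dry weight):
            # K_oc in L/kg organic carbon, f_oc = fraction organic carbon in sediment.
            koc = 10.0 ** log_koc
            return koc * f_oc * water_benchmark_ug_per_l

        # Illustrative only: a nonpolar organic with log Koc = 4.5 in sediment with 1% organic carbon
        print(eqp_sediment_benchmark(water_benchmark_ug_per_l=0.1, log_koc=4.5, f_oc=0.01))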

  12. A benchmark for comparison of cell tracking algorithms.

    Science.gov (United States)

    Maška, Martin; Ulman, Vladimír; Svoboda, David; Matula, Pavel; Matula, Petr; Ederra, Cristina; Urbiola, Ainhoa; España, Tomás; Venkatesan, Subramanian; Balak, Deepak M W; Karas, Pavel; Bolcková, Tereza; Streitová, Markéta; Carthel, Craig; Coraluppi, Stefano; Harder, Nathalie; Rohr, Karl; Magnusson, Klas E G; Jaldén, Joakim; Blau, Helen M; Dzyubachyk, Oleh; Křížek, Pavel; Hagen, Guy M; Pastor-Escuredo, David; Jimenez-Carretero, Daniel; Ledesma-Carbayo, Maria J; Muñoz-Barrutia, Arrate; Meijering, Erik; Kozubek, Michal; Ortiz-de-Solorzano, Carlos

    2014-06-01

    Automatic tracking of cells in multidimensional time-lapse fluorescence microscopy is an important task in many biomedical applications. A novel framework for objective evaluation of cell tracking algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2013 Cell Tracking Challenge. In this article, we present the logistics, datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of a comprehensive video dataset repository and the definition of objective measures for comparison and ranking of the algorithms. With this benchmark, six algorithms covering a variety of segmentation and tracking paradigms have been compared and ranked based on their performance on both synthetic and real datasets. Given the diversity of the datasets, we do not declare a single winner of the challenge. Instead, we present and discuss the results for each individual dataset separately. The challenge Web site (http://www.codesolorzano.com/celltrackingchallenge) provides access to the training and competition datasets, along with the ground truth of the training videos. It also provides access to Windows and Linux executable files of the evaluation software and most of the algorithms that competed in the challenge.

  13. Integration of oncology and palliative care: setting a benchmark.

    Science.gov (United States)

    Vayne-Bossert, P; Richard, E; Good, P; Sullivan, K; Hardy, J R

    2017-10-01

    Integration of oncology and palliative care (PC) should be the standard model of care for patients with advanced cancer. An expert panel developed criteria that constitute integration. This study determined whether the PC service within this Health Service, which is considered to be fully "integrated", could be benchmarked against these criteria. A survey was undertaken to determine the perceived level of integration of oncology and palliative care by all health care professionals (HCPs) within our cancer centre. An objective determination of integration was obtained from chart reviews of deceased patients. Integration was defined as >70% of all respondents answered "agree" or "strongly agree" to each indicator and >70% of patient charts supported each criteria. Thirty-four HCPs participated in the survey (response rate 69%). Over 90% were aware of the outpatient PC clinic, interdisciplinary and consultation team, PC senior leadership, and the acceptance of concurrent anticancer therapy. None of the other criteria met the 70% agreement mark but many respondents lacked the necessary knowledge to respond. The chart review included 67 patients, 92% of whom were seen by the PC team prior to death. The median time from referral to death was 103 days (range 0-1347). The level of agreement across all criteria was below our predefined definition of integration. The integration criteria relating to service delivery are medically focused and do not lend themselves to interdisciplinary review. The objective criteria can be audited and serve both as a benchmark and a basis for improvement activities.
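
    The study's two-armed 70% decision rule is easy to state in code. A minimal sketch with illustrative agreement fractions (the function and the numbers are not from the paper):

        def integrated(survey_agreement, chart_support, threshold=0.70):
            # The paper's rule: a criterion counts as 'integrated' only if more than
            # 70% of respondents agree AND more than 70% of patient charts support it.
            return survey_agreement > threshold and chart_support > threshold

        print(integrated(0.92, 0.92))  # True  -> criterion met on both arms
        print(integrated(0.65, 0.92))  # False -> fails the survey arm of the rule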

  14. Benchmarking Big Data Systems and the BigData Top100 List.

    Science.gov (United States)

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  15. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows
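
    Level-set expansion, the kernel of the out-of-core graph benchmark, is breadth-first traversal that materializes one frontier (level) at a time. A minimal in-memory sketch is shown below; the real benchmark streams adjacency data from disk, which this illustration does not attempt.

        from collections import deque

        def level_sets(adj, source):
            # Return vertices grouped by hop distance from `source`.
            # `adj` is a (hypothetical) dict mapping a vertex to its neighbours.
            seen, frontier, levels = {source}, deque([source]), [[source]]
            while frontier:
                nxt = []
                for _ in range(len(frontier)):
                    v = frontier.popleft()
                    for w in adj.get(v, ()):
                        if w not in seen:
                            seen.add(w)
                            nxt.append(w)
                if nxt:
                    levels.append(nxt)
                    frontier.extend(nxt)
            return levels

        print(level_sets({0: [1, 2], 1: [3], 2: [3], 3: []}, source=0))  # [[0], [1, 2], [3]]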

  16. Benchmarking as a strategy policy tool for energy management

    NARCIS (Netherlands)

    Rienstra, S.A.; Nijkamp, P.

    2002-01-01

    In this paper we analyse to what extent benchmarking is a valuable tool in strategic energy policy analysis. First, the theory on benchmarking is concisely presented, e.g., by discussing the benchmark wheel and the benchmark path. Next, some results of surveys among business firms are presented. To

  17. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both general conditions and examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. The costs and quality of application systems, physical data-processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  18. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  19. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results.

  20. A review of benchmarking, rating and labelling concepts within the framework of building energy certification schemes

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Lombard, Luis; Gonzalez, Rocio [Grupo de Termotecnia, Escuela Superior de Ingenieros, Universidad de Sevilla, Camino de los Descubrimientos, 41092 Sevilla (Spain); Ortiz, Jose [BRE (Building Research Establishment), Garston, Watford WD25 9XX (United Kingdom); Maestre, Ismael R. [Dpto. de Maquinas y Motores Termicos, Escuela Politecnica Superior de Algeciras, Universidad de Cadiz, Av. Ramon Puyol s/n, Algeciras 11202 Cadiz (Spain)

    2009-03-15

    Energy certification schemes for buildings emerged in the early 1990s as an essential method for improving energy efficiency, minimising energy consumption and enabling greater transparency with regards to the use of energy in buildings. However, from the beginning their definition and implementation process were diffuse and, occasionally, have confused building sector stakeholders. A multiplicity of terms and concepts such as energy performance, energy efficiency, energy ratings, benchmarking, labelling, etc., have emerged with sometimes overlapping meanings. This has frequently led to misleading interpretations by regulatory bodies, energy agencies and final consumers. This paper analyses the origin and the historic development of energy certification schemes in buildings along with the definition and scope of a building energy certificate and critical aspects of its implementation. Concepts such as benchmarking tools, energy ratings and energy labelling are clarified within the wider topic of certification schemes. Finally, a seven-step process is proposed as a guide for implementing building energy certification.

  1. The institutionalization of benchmarking in the Danish construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard; Gottlieb, Stefan Christoffer

    and disseminated to the construction industry. The fourth chapter demonstrates how benchmarking was concretized into a benchmarking system and articulated to address several political focus areas for the construction industry. BEC accordingly became a political arena where many local perspectives and strategic...... interests had to be managed. The fifth chapter is about the operationalization of benchmarking and demonstrates how the concretizing and implementation of benchmarking gave rise to reactions from different actors with different and diverse interests in the benchmarking initiative. Political struggles...

  2. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II.

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report

  3. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  4. A benchmark server using high resolution protein structure data, and benchmark results for membrane helix predictions.

    Science.gov (United States)

    Rath, Emma M; Tessier, Dominique; Campbell, Alexander A; Lee, Hong Ching; Werner, Tim; Salam, Noeris K; Lee, Lawrence K; Church, W Bret

    2013-03-27

    Helical membrane proteins are vital for the interaction of cells with their environment. Predicting the location of membrane helices in protein amino acid sequences provides substantial understanding of their structure and function and identifies membrane proteins in sequenced genomes. Currently there is no comprehensive benchmark tool for evaluating prediction methods, and there is no publication comparing all available prediction tools. Current benchmark literature is outdated, as recently determined membrane protein structures are not included. Current literature is also limited to global assessments, as specialised benchmarks for predicting specific classes of membrane proteins were not previously carried out. We present a benchmark server at http://sydney.edu.au/pharmacy/sbio/software/TMH_benchmark.shtml that uses recent high resolution protein structural data to provide a comprehensive assessment of the accuracy of existing membrane helix prediction methods. The server further allows a user to compare uploaded predictions generated by novel methods, permitting the comparison of these novel methods against all existing methods compared by the server. Benchmark metrics include sensitivity and specificity of predictions for membrane helix location and orientation, and many others. The server allows for customised evaluations such as assessing prediction method performances for specific helical membrane protein subtypes. We report results for custom benchmarks which illustrate how the server may be used for specialised benchmarks. Which prediction method is the best performing method depends on which measure is being benchmarked. The OCTOPUS membrane helix prediction method is consistently one of the highest performing methods across all measures in the benchmarks that we performed. The benchmark server allows general and specialised assessment of existing and novel membrane helix prediction methods. Users can employ this benchmark server to determine the most
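
    For per-residue membrane-helix predictions, sensitivity and specificity reduce to counting true and false positives and negatives over aligned label strings. A minimal sketch with a hypothetical two-letter encoding (not the server's actual input format):

        def per_residue_metrics(predicted, observed):
            # `predicted` and `observed` are equal-length strings of 'M' (membrane
            # helix) and '-' (non-membrane); an illustrative encoding.
            tp = sum(p == o == "M" for p, o in zip(predicted, observed))
            tn = sum(p == o == "-" for p, o in zip(predicted, observed))
            fp = sum(p == "M" and o == "-" for p, o in zip(predicted, observed))
            fn = sum(p == "-" and o == "M" for p, o in zip(predicted, observed))
            return tp / (tp + fn), tn / (tn + fp)

        sens, spec = per_residue_metrics("--MMMM--", "--MMM---")
        print(f"sensitivity={sens:.2f} specificity={spec:.2f}")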

  5. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate the camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used with the quality metrics even though the camera speed has become a more and more important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from the standards and papers and, also, novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution to combine different image quality and speed metrics to a single benchmarking score. A proposal of the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates of the latest mobile phone versions.
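
    A combined benchmarking score of this kind can be sketched as a weighted aggregation of normalized metrics. The weighted geometric mean below is one plausible choice (it keeps a very poor result on any single metric from being averaged away); the metric names, weights and values are illustrative assumptions, not the paper's actual definition.

        import numpy as np

        def camera_score(metrics, weights):
            # Each metric is assumed pre-normalized to [0, 1] with 1 = best.
            m = np.array([metrics[k] for k in weights], float)
            w = np.array(list(weights.values()), float)
            w /= w.sum()
            return float(np.prod(m ** w))  # weighted geometric mean

        score = camera_score(
            {"sharpness": 0.8, "visual_noise": 0.7, "shot_to_shot": 0.6, "autofocus_time": 0.9},
            {"sharpness": 2, "visual_noise": 2, "shot_to_shot": 1, "autofocus_time": 1},
        )
        print(f"combined score = {score:.3f}")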

  6. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point...... solvers in IPOPT and FMINCON, and the sequential quadratic programming method in SNOPT, are benchmarked on the library using performance profiles. Whenever possible the methods are applied to both the nested and the Simultaneous Analysis and Design (SAND) formulations of the problem. The performance...... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of exact Hessians in SAND formulations generally produces designs with better objective function values. However, with the benchmarked implementations solving
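
    Performance profiles, the comparison device used in this study, follow Dolan and Moré: for each solver, plot the fraction of problems it solves within a factor tau of the best solver on each problem. A minimal sketch with illustrative timings, not data from the paper:

        import numpy as np

        def performance_profile(times, taus):
            # times[s, p] is solver s's cost on problem p (np.inf for a failure);
            # rho_s(tau) = fraction of problems solved within a factor tau of the best.
            best = np.min(times, axis=0)
            ratios = times / best
            return np.array([[np.mean(ratios[s] <= tau) for tau in taus]
                             for s in range(times.shape[0])])

        # Illustrative run times for 3 solvers on 4 problems (np.inf = failed)
        times = np.array([[1.0, 2.0, 5.0, np.inf],
                          [1.5, 1.0, 4.0, 8.0],
                          [3.0, 2.5, 1.0, 2.0]])
        print(performance_profile(times, taus=[1.0, 2.0, 4.0]))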

  7. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL; Grove, Robert E [ORNL; Kodeli, I. [International Atomic Energy Agency (IAEA); Sartori, Enrico [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  8. 'Wasteaware' benchmark indicators for integrated sustainable waste management in cities.

    Science.gov (United States)

    Wilson, David C; Rodic, Ljiljana; Cowing, Michael J; Velis, Costas A; Whiteman, Andrew D; Scheinberg, Anne; Vilches, Recaredo; Masterson, Darragh; Stretz, Joachim; Oelz, Barbara

    2015-01-01

    This paper addresses a major problem in international solid waste management, which is twofold: a lack of data, and a lack of consistent data to allow comparison between cities. The paper presents an indicator set for integrated sustainable waste management (ISWM) in cities both North and South, to allow benchmarking of a city's performance, comparing cities and monitoring developments over time. It builds on pioneering work for UN-Habitat's solid waste management in the World's cities. The comprehensive analytical framework of a city's solid waste management system is divided into two overlapping 'triangles' - one comprising the three physical components, i.e. collection, recycling, and disposal, and the other comprising three governance aspects, i.e. inclusivity; financial sustainability; and sound institutions and proactive policies. The indicator set includes essential quantitative indicators as well as qualitative composite indicators. This updated and revised 'Wasteaware' set of ISWM benchmark indicators is the cumulative result of testing various prototypes in more than 50 cities around the world. This experience confirms the utility of indicators in allowing comprehensive performance measurement and comparison of both 'hard' physical components and 'soft' governance aspects; and in prioritising 'next steps' in developing a city's solid waste management system, by identifying both local strengths that can be built on and weak points to be addressed. The Wasteaware ISWM indicators are applicable to a broad range of cities with very different levels of income and solid waste management practices. Their wide application as a standard methodology will help to fill the historical data gap.

  9. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population, and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource secure supplies becomes critical. When making changes to "internal" demands the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2) and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.
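
    A band-rating benchmark of this kind can be sketched as a lookup from per-capita consumption to a band letter. The thresholds below are placeholders; the paper aligns its bands with the Code for Sustainable Homes, and those values are not reproduced here.

        # Illustrative band thresholds in litres per person per day (placeholders only).
        BANDS = [(80, "A"), (100, "B"), (120, "C"), (140, "D"), (160, "E")]

        def water_band(litres_per_person_per_day):
            for limit, band in BANDS:
                if litres_per_person_per_day <= limit:
                    return band
            return "F"

        print(water_band(95))  # "B": reachable via efficient fittings and/or RWH/GW reuse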

  10. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II.

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets

  11. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  12. Final Report of the NASA Office of Safety and Mission Assurance Agile Benchmarking Team

    Science.gov (United States)

    Wetherholt, Martha

    2016-01-01

    To ensure that the NASA Safety and Mission Assurance (SMA) community remains in a position to perform reliable Software Assurance (SA) on NASA's critical software (SW) systems with the software industry rapidly transitioning from waterfall to Agile processes, Terry Wilcutt, Chief, Safety and Mission Assurance, Office of Safety and Mission Assurance (OSMA), established the Agile Benchmarking Team (ABT). The Team's tasks were: 1. Research background literature on current Agile processes, 2. Perform benchmark activities with other organizations that are involved in software Agile processes to determine best practices, 3. Collect information on Agile-developed systems to enable improvements to the current NASA standards and processes to enhance their ability to perform reliable software assurance on NASA Agile-developed systems, 4. Suggest additional guidance and recommendations for updates to those standards and processes, as needed. The ABT's findings and recommendations for software management, engineering and software assurance are addressed herein.

  13. LHC benchmark scenarios for the real Higgs singlet extension of the standard model

    International Nuclear Information System (INIS)

    Robens, Tania; Stefaniak, Tim

    2016-01-01

    We present benchmark scenarios for searches for an additional Higgs state in the real Higgs singlet extension of the Standard Model in Run 2 of the LHC. The scenarios are selected such that they fulfill all relevant current theoretical and experimental constraints, but can potentially be discovered at the current LHC run. We take into account the results presented in earlier work and update the experimental constraints from relevant LHC Higgs searches and signal rate measurements. The benchmark scenarios are given separately for the low-mass and high-mass region, i.e. the mass range where the additional Higgs state is lighter or heavier than the discovered Higgs state at around 125 GeV. They have also been presented in the framework of the LHC Higgs Cross Section Working Group.

  14. Benchmarking European Gas Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter; Trinkner, Urs

    This is the final report for the pan-European efficiency benchmarking of gas transmission system operations commissioned by the Netherlands Authority for Consumers and Markets (ACM), Den Haag, on behalf of the Council of European Energy Regulators (CEER) under the supervision of the authors....

  15. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    of dynamic multi-objective optimisation algorithms (DMOAs) are highlighted. In addition, new DMOO benchmark functions with complicated Pareto-optimal sets (POSs) and approaches to develop DMOOPs with either an isolated or deceptive Pareto-optimal front (POF...

  16. Benchmarking 2009: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica; Kilgore, Gin

    2009-01-01

    "Benchmarking 2009: Trends in Education Philanthropy" is Grantmakers for Education's (GFE) second annual study of grantmaking trends and priorities among members. As a national network dedicated to improving education outcomes through philanthropy, GFE members are mindful of their role in fostering greater knowledge in the field. They believe it's…

  17. Resolution for the Loviisa benchmark problem

    International Nuclear Information System (INIS)

    Garcia, C.R.; Quintero, R.; Milian, D.

    1992-01-01

    In the present paper, the Loviisa benchmark problem for cycles 11 and 8, and reactor blocks 1 and 2 from Loviisa NPP, is calculated. This problem uses low-leakage reload patterns and was posed at the second thematic group of the TIC meeting held in Rheinsberg, GDR, in March 1989. The SPPS-1 coarse mesh code has been used for the calculations

  18. Parton Distribution Benchmarking with LHC Data

    NARCIS (Netherlands)

    Ball, Richard D.; Carrazza, Stefano; Debbio, Luigi Del; Forte, Stefano; Gao, Jun; Hartland, Nathan; Huston, Joey; Nadolsky, Pavel; Rojo, Juan; Stump, Daniel; Thorne, Robert S.; Yuan, C. -P.

    2012-01-01

    We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross

  19. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV "in the field", as well as to generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx).

  20. Why and How to Benchmark XML Databases

    NARCIS (Netherlands)

    A.R. Schmidt; F. Waas; M.L. Kersten (Martin); D. Florescu; M.J. Carey; I. Manolescu; R. Busse

    2001-01-01

    Benchmarks belong to the very standard repertory of tools deployed in database development. Assessing the capabilities of a system, analyzing actual and potential bottlenecks, and, naturally, comparing the pros and cons of different systems architectures have become indispensable tasks

  1. Prague texture segmentation data generator and benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal

    2006-01-01

    Vol. 2006, No. 64 (2006), pp. 67-68 ISSN 0926-4981 R&D Projects: GA MŠk(CZ) 1M0572; GA AV ČR(CZ) 1ET400750407; GA AV ČR IAA2075302 Institutional research plan: CEZ:AV0Z10750506 Keywords: image segmentation * texture * benchmark * web Subject RIV: BD - Theory of Information

  2. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators, and evaluates upon aggregation of performance. The model is tested upon seven cases from Japan and Denmark. Japanese...

  3. Benchmarks in Tacit Knowledge Skills Instruction

    DEFF Research Database (Denmark)

    Tackney, Charles T.; Strömgren, Ole; Sato, Toyoko

    2006-01-01

    experience more empowering of essential tacit knowledge skills than that found in educational institutions in other national settings. We specify the program forms and procedures for consensus-based governance and group work (as benchmarks) that demonstrably instruct undergraduates in the tacit skill...... dimensions of knowledge thought to be essential for success following graduation....

  4. Determination of Benchmarks Stability within Ahmadu Bello ...

    African Journals Online (AJOL)

    Heights of six geodetic benchmarks over a total distance of 8.6km at the Ahmadu Bello University (ABU), Zaria, Nigeria were recomputed and analysed using least squares adjustment technique. The network computations were tied to two fixed primary reference pillars situated outside the campus. The two-tail Chi-square ...

  5. Benchmarking and performance management in health care

    OpenAIRE

    Buttigieg, Sandra; EHMA Annual Conference: Public Health Care: Who Pays, Who Provides?

    2012-01-01

    Current economic conditions challenge health care providers globally. Healthcare organizations need to deliver optimal financial, operational, and clinical performance to sustain quality of service delivery. Benchmarking is one of the most potent and under-utilized management tools available and an analytic tool to understand organizational performance. Additionally, it is required for financial survival and organizational excellence.

  6. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These along with regulatory experience suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly
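
    Of the benchmarking methods used in rate regulation, corrected ordinary least squares (COLS) is among the simplest frontier techniques: fit an average cost function, shift it to the best performer, and read off efficiency scores. A self-contained sketch on synthetic data, not an example drawn from the paper:

        import numpy as np

        rng = np.random.default_rng(0)
        log_output = rng.uniform(1, 5, 20)
        inefficiency = np.abs(rng.normal(0.0, 0.2, 20))        # half-normal inefficiency term
        log_cost = 1.0 + 0.8 * log_output + inefficiency       # illustrative cost frontier + slack

        X = np.column_stack([np.ones_like(log_output), log_output])
        beta, *_ = np.linalg.lstsq(X, log_cost, rcond=None)    # average-practice OLS fit
        residuals = log_cost - X @ beta
        frontier_residuals = residuals - residuals.min()       # shift so the best firm defines the frontier
        efficiency = np.exp(-frontier_residuals)               # 1.0 = benchmark-efficient
        print(efficiency.round(2))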

  7. Benchmarking health IT among OECD countries: better data for better policy

    Science.gov (United States)

    Adler-Milstein, Julia; Ronchi, Elettra; Cohen, Genna R; Winn, Laura A Pannella; Jha, Ashish K

    2014-01-01

    Objective To develop benchmark measures of health information and communication technology (ICT) use to facilitate cross-country comparisons and learning. Materials and methods The effort is led by the Organisation for Economic Co-operation and Development (OECD). Approaches to definition and measurement within four ICT domains were compared across seven OECD countries in order to identify functionalities in each domain. These informed a set of functionality-based benchmark measures, which were refined in collaboration with representatives from more than 20 OECD and non-OECD countries. We report on progress to date and remaining work to enable countries to begin to collect benchmark data. Results The four benchmarking domains include provider-centric electronic record, patient-centric electronic record, health information exchange, and tele-health. There was broad agreement on functionalities in the provider-centric electronic record domain (eg, entry of core patient data, decision support), and less agreement in the other three domains in which country representatives worked to select benchmark functionalities. Discussion Many countries are working to implement ICTs to improve healthcare system performance. Although many countries are looking to others as potential models, the lack of consistent terminology and approach has made cross-national comparisons and learning difficult. Conclusions As countries develop and implement strategies to increase the use of ICTs to promote health goals, there is a historic opportunity to enable cross-country learning. To facilitate this learning and reduce the chances that individual countries flounder, a common understanding of health ICT adoption and use is needed. The OECD-led benchmarking process is a crucial step towards achieving this. PMID:23721983

  8. Pediatric Preformed Metal Crowns - An Update

    Directory of Open Access Journals (Sweden)

    Sangameshwar Sajjanshetty

    2013-01-01

    Full Text Available Stainless steel crowns (SSCs) were introduced in 1947 by the Rocky Mountain Company and popularized by Humphrey in 1950. Prefabricated SSCs can be adapted to individual primary teeth and cemented in place to provide a definitive restoration. The SSC is extremely durable, relatively inexpensive, subject to minimal technique sensitivity during placement, and offers the advantage of full coronal coverage. SSCs are often used to restore primary and permanent teeth in children and adolescents where intracoronal restorations would otherwise fail. This article provides an update on this definitive restoration.

  9. Toxicological Benchmarks for Screening of Potential Contaminants of Concern for Effects on Aquatic Biota on the Oak Ridge Reservation, Oak Ridge, Tennessee

    Energy Technology Data Exchange (ETDEWEB)

    Suter, G.W., II

    1993-01-01

    It is recommended that measured concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.

  10. Benchmarking transaction and analytical processing systems the creation of a mixed workload benchmark and its application

    CERN Document Server

    Bog, Anja

    2014-01-01

    This book introduces a new benchmark for hybrid database systems, gauging the effect of adding OLAP to an OLTP workload and analyzing the impact of commonly used optimizations in historically separate OLTP and OLAP domains in mixed-workload scenarios.

  11. Methods report on the development of the 2013 revision and update of the EAACI/GA²LEN/EDF/WAO guideline for the definition, classification, diagnosis, and management of urticaria

    DEFF Research Database (Denmark)

    Zuberbier, T; Aberer, W; Asero, R

    2014-01-01

    ) with the participation of delegates of 21 national and international societies. This guideline covers the definition and classification of urticaria, taking into account the recent progress in identifying its causes, eliciting factors and pathomechanisms. In addition, it outlines evidence-based diagnostic...

  12. RADSAT Benchmarks for Prompt Gamma Neutron Activation Analysis Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Burns, Kimberly A.; Gesh, Christopher J.

    2011-07-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples. High-resolution gamma-ray spectrometers are used in these applications to measure the spectrum of the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used simulation tool for this type of problem, but computational times can be prohibitively long. This work explores the use of multi-group deterministic methods for the simulation of coupled neutron-photon problems. The main purpose of this work is to benchmark several problems modeled with RADSAT and MCNP against experimental data. Additionally, the cross section libraries for RADSAT are updated to include ENDF/B-VII cross sections. Preliminary findings show promising results when compared to MCNP and experimental data, but also areas where additional inquiry and testing are needed. The potential benefits and shortcomings of the multi-group-based approach are discussed in terms of accuracy and computational efficiency.

  13. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually keeping researchers and practitioners from studying...... this perspective develops more thorough knowledge about benchmarking and challenges the currently dominating rationales. Hereby, it is argued that benchmarking is not a neutral practice. On the contrary, it is highly influenced by organizational ambitions and strategies, with the potential to transform...

  14. Benchmarking in the globalised world and its impact on South ...

    African Journals Online (AJOL)

    In order to understand the potential impact of international benchmarking on South African institutions, it is important to explore the future role of benchmarking on the international level. In this regard, examples of transnational benchmarking activities will be considered. As a result of the involvement of South African ...

  15. Benchmarking a signpost to excellence in quality and productivity

    CERN Document Server

    Karlof, Bengt

    1993-01-01

    According to the authors, benchmarking exerts a powerful leverage effect on an organization, and they consider some of the factors which justify their claim. The book describes how to implement benchmarking and exactly what to benchmark, and explains benchlearning, which integrates education, leadership development and organizational dynamics with the actual work being done, and how to make it work more efficiently in terms of quality and productivity.

  16. MTCB: A Multi-Tenant Customizable database Benchmark

    NARCIS (Netherlands)

    van der Zijden, Wim; Hiemstra, Djoerd; van Keulen, Maurice

    2017-01-01

    We argue that there is a need for Multi-Tenant Customizable OLTP systems. Such systems need a Multi-Tenant Customizable Database (MTC-DB) as a backing. To stimulate the development of such databases, we propose the benchmark MTCB. Benchmarks for OLTP exist and multi-tenant benchmarks exist, but no benchmark exists for MTC-DBs.

  17. GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise Paul [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps involve the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read

  18. An Update on Memory Reconsolidation Updating.

    Science.gov (United States)

    Lee, Jonathan L C; Nader, Karim; Schiller, Daniela

    2017-07-01

    The reactivation of a stored memory in the brain can make the memory transiently labile. During the time it takes for the memory to restabilize (reconsolidate) the memory can either be reduced by an amnesic agent or enhanced by memory enhancers. The change in memory expression is related to changes in the brain correlates of long-term memory. Many have suggested that such retrieval-induced plasticity is ideally placed to enable memories to be updated with new information. This hypothesis has been tested experimentally, with a translational perspective, by attempts to update maladaptive memories to reduce their problematic impact. We review here progress on reconsolidation updating studies, highlighting their translational exploitation and addressing recent challenges to the reconsolidation field. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. BENCHMARKING – BETWEEN TRADITIONAL & MODERN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Mihaela Ungureanu

    2011-09-01

    Full Text Available The concept of benchmarking requires a continuous process of performance improvement in organizations, aimed at achieving superiority over competitors perceived as market leaders. This superiority can always be questioned; its relativity stems from the rapidly evolving economic environment. The approach supports innovation relative to traditional methods and is driven by managers who want to test limits and seek excellence. The end of the twentieth century was the period of broad expression of benchmarking in various areas and of its transformation from a simple quantitative analysis tool into a source of information on the performance and quality of goods and services.

  20. Benchmark and Continuous Improvement of Performance

    Directory of Open Access Journals (Sweden)

    Alina Alecse Stanciu

    2017-12-01

    Full Text Available The present economic environment challenges us to perform, to think and rethink our personal strategies in accordance with our entities' strategies, whether we are employees or entrepreneurs. It is an environment characterised by Volatility, Uncertainty, Complexity and Ambiguity - a VUCA world in which entities must fight for the position they have gained in the market, disrupt new markets and new economies, and develop their client portfolio, with performance as the final goal, under the pressure of the driving forces known as the 2030 Megatrends: Globalization 2.0, environmental crisis and the scarcity of resources, individualism and value pluralism, and demographic change. This paper examines whether benchmarking is an opportunity to increase the competitiveness of Romanian SMEs, and the results show that benchmarking is a powerful instrument, combining reduced negative impact on the environment with a positive impact on the economy and society.

  1. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  2. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.
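
    For context, STREAM's figure of merit is sustained memory bandwidth: bytes moved by simple vector kernels divided by wall time. A minimal sketch of the "triad" kernel in Python/NumPy (the real benchmark is a compiled C/Fortran kernel; NumPy's temporaries add extra traffic, so this only illustrates the metric):

        import numpy as np, time

        n = 20_000_000                  # ~160 MB per float64 array, well past cache
        a = np.zeros(n)
        b = np.random.rand(n)
        c = np.random.rand(n)
        q = 3.0

        t0 = time.perf_counter()
        a[:] = b + q * c                # triad: a(i) = b(i) + q*c(i)
        t1 = time.perf_counter()

        # STREAM counts 3 arrays x 8 bytes per element for triad.
        print(f"triad bandwidth: {3 * n * 8 / (t1 - t0) / 1e9:.2f} GB/s")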

  3. IOP Physics benchmarks of the VELO upgrade

    CERN Document Server

    AUTHOR|(CDS)2068636

    2017-01-01

    The LHCb Experiment at the LHC is successfully performing precision measurements primarily in the area of flavour physics. The collaboration is preparing an upgrade that will start taking data in 2021 with a trigger-less readout at five times the current luminosity. The vertex locator has been crucial in the success of the experiment and will continue to be so for the upgrade. It will be replaced by a hybrid pixel detector and this paper discusses the performance benchmarks of the upgraded detector. Despite the challenging experimental environment, the vertex locator will maintain or improve upon its benchmark figures compared to the current detector. Finally the long term plans for LHCb, beyond those of the upgrade currently in preparation, are discussed.

  4. Assessing and benchmarking multiphoton microscopes for biologists.

    Science.gov (United States)

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F

    2014-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to its superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs. © 2014 Elsevier Inc. All rights reserved.

  5. Assessing and benchmarking multiphoton microscopes for biologists

    Science.gov (United States)

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F.

    2017-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to its superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs. PMID:24974026

  6. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  7. Benchmark On Sensitivity Calculation (Phase III)

    Energy Technology Data Exchange (ETDEWEB)

    Ivanova, Tatiana [IRSN; Laville, Cedric [IRSN; Dyrda, James [Atomic Weapons Establishment; Mennerdahl, Dennis [E. Mennerdahl Systems; Golovko, Yury [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Raskach, Kirill [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Tsiboulia, Anatoly [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Lee, Gil Soo [Korea Institute of Nuclear Safety (KINS); Woo, Sweng-Woong [Korea Institute of Nuclear Safety (KINS); Bidaud, Adrien [Laboratoire de Physique Subatomique et de Cosmologie (LPSC); Patel, Amrit [NRC; Bledsoe, Keith C [ORNL; Rearden, Bradley T [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
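
    For reference, the sensitivity coefficient being computed in such benchmarks is conventionally defined as the relative change in keff per relative change in a cross section (a standard definition, not quoted from the benchmark specification):

        \[ S_{k,\sigma} \;=\; \frac{\sigma}{k_{\mathrm{eff}}}\,\frac{\partial k_{\mathrm{eff}}}{\partial \sigma} \]

    so that, for example, S = 0.1 means a 1% increase in the cross section raises keff by about 0.1%.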

  8. Benchmark problems for repository siting models

    International Nuclear Information System (INIS)

    Ross, B.; Mercer, J.W.; Thomas, S.D.; Lester, B.H.

    1982-12-01

    This report describes benchmark problems to test computer codes used in siting nuclear waste repositories. Analytical solutions, field problems, and hypothetical problems are included. Problems are included for the following types of codes: ground-water flow in saturated porous media, heat transport in saturated media, ground-water flow in saturated fractured media, heat and solute transport in saturated porous media, solute transport in saturated porous media, solute transport in saturated fractured media, and solute transport in unsaturated porous media

  9. SINBAD: Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    Hunter, H.T.; Ingersoll, D.T.; Roussin, R.W.

    1996-01-01

    SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity

  10. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    Science.gov (United States)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  11. Benchmarking of Remote Sensing Segmentation Methods

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.; Gaetano, R.

    2015-01-01

    Vol. 8, No. 5 (2015), pp. 2240-2248, ISSN 1939-1404. R&D Projects: GA ČR(CZ) GA14-10911S. Institutional support: RVO:67985556. Keywords: benchmark * remote sensing segmentation * unsupervised segmentation * supervised segmentation. Subject RIV: BD - Theory of Information. Impact factor: 2.145, year: 2015. http://library.utia.cas.cz/separaty/2015/RO/haindl-0445995.pdf

  12. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
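
    The Jacobian-free Newton-Krylov idea at the heart of MUSIC's implicit integration is available off the shelf in SciPy: the Krylov solver needs only residual evaluations, never an assembled Jacobian. A minimal sketch on a toy backward-Euler nonlinear diffusion step (the toy problem and all parameters are assumptions for illustration, not MUSIC's equations):

        import numpy as np
        from scipy.optimize import newton_krylov

        n = 50
        dx = 1.0 / (n - 1)
        dt = 1e-3
        u_old = np.sin(np.pi * np.linspace(0.0, 1.0, n))  # previous time level

        def residual(u):
            # F(u) = u - u_old - dt * d/dx( D(u) du/dx ), with D(u) = 1 + u^2
            # and no-flux boundaries; roots of F give the new time level.
            d = 1.0 + u**2
            flux = np.zeros(n + 1)
            flux[1:-1] = 0.5 * (d[1:] + d[:-1]) * (u[1:] - u[:-1]) / dx
            return u - u_old - dt * (flux[1:] - flux[:-1]) / dx

        # Newton outer iteration, Krylov (LGMRES) inner solves, no Jacobian.
        u_new = newton_krylov(residual, u_old, method='lgmres', f_tol=1e-10)
        print("max change over the step:", np.abs(u_new - u_old).max())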

  13. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks'...... debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping state during the computation. We ran the system with two servers doing the secure computation using a database with information on about 2500 users. Answers arrived in about 25 seconds.
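
    The abstract does not spell out the linear program; a common LP formulation for this kind of benchmarking is input-oriented data envelopment analysis (DEA), which scores each unit against the best linear combination of its peers. A minimal cleartext sketch with scipy.optimize.linprog (the secure SPDZ layer is omitted and the data are illustrative, so this is an assumption about the model's general shape, not the paper's exact model):

        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0]])  # inputs, one row per unit
        Y = np.array([[1.0], [1.0], [1.0]])                 # outputs, one row per unit
        n, m = X.shape
        s = Y.shape[1]

        for o in range(n):
            c = np.r_[1.0, np.zeros(n)]        # variables: theta, lambda_1..n
            A_in = np.c_[-X[o], X.T]           # sum_j lam_j x_j <= theta * x_o
            A_out = np.c_[np.zeros(s), -Y.T]   # sum_j lam_j y_j >= y_o
            res = linprog(c, A_ub=np.r_[A_in, A_out],
                          b_ub=np.r_[np.zeros(m), -Y[o]],
                          bounds=[(None, None)] + [(0, None)] * n)
            print(f"unit {o}: efficiency score = {res.x[0]:.3f}")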

  14. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium-fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2) and CASMO-4, for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalues and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than those of a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code-to-code differences are analyzed and discussed.

  15. Equilibrium Partitioning Sediment Benchmarks (ESBs) for the ...

    Science.gov (United States)

    This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it accounts for the varying bioavailability of chemicals in different sediments and allows for the incorporation of the appropriate biological effects concentration. This provides for the derivation of benchmarks that are causally linked to the specific chemical, applicable across sediments, and appropriately protective of benthic organisms.  This equilibrium partitioning sediment benchmark (ESB) document was prepared by scientists from the Atlantic Ecology Division, Mid-Continent Ecology Division, and Western Ecology Division, the Office of Water, and private consultants. The document describes procedures to determine the interstitial water concentrations of nonionic organic chemicals in contaminated sediments. Based on these concentrations, guidance is provided on the derivation of toxic units to assess whether the sediments are likely to cause adverse effects to benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it is based on the concentrations of chemical(s) that are known to be harmful and bioavailable in the environment.  This document, and five others published over the last nine years, will be useful for the Program Offices, including Superfund, a
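
    The core EqP relation behind these benchmarks, in standard notation (the record itself stops short of the formula), estimates the interstitial-water concentration from the measured sediment concentration via organic-carbon partitioning:

        \[ C_{iw} \;=\; \frac{C_{sed}}{f_{oc}\,K_{oc}} \]

    where C_sed is the chemical concentration in dry sediment, f_oc the mass fraction of organic carbon, and K_oc the organic carbon-water partition coefficient; C_iw is then compared with a water-only effects concentration to form toxic units.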

  16. Status on benchmark testing of CENDL-3

    CERN Document Server

    Liu Ping

    2002-01-01

    CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has been finished and distributed for benchmark analyses recently. The processing was carried out using the NJOY nuclear data processing code system. The calculations and analysis of benchmarks were done with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A. The calculated results were compared with the experimental results based on ENDF/B6. In most thermal and fast uranium criticality benchmarks, the keff values calculated with CENDL-3 were in good agreement with experimental results. In the plutonium fast cores, the keff values were improved significantly with CENDL-3. This is due to reevaluation of the fission spectrum and elastic angular distributions of 239Pu and 240Pu. CENDL-3 underestimated the keff values compared with other evaluated data libraries for most spherical or cylindrical assemblies of plutonium or uranium with beryllium.

  17. The Benchmarking of Integrated Business Structures

    Directory of Open Access Journals (Sweden)

    Nifatova Olena M.

    2017-12-01

    Full Text Available The aim of the article is to study the role of benchmarking in the process of integration of business structures in the aspect of knowledge sharing. The results of studying the essential content of the concept “integrated business structure” and its semantic analysis made it possible to form our own understanding of this category, with an emphasis on the need to consider it in the plane of three projections — legal, economic and organizational. The economic projection of the essential content of integration associations of business units is supported by the organizational projection, which is expressed through such essential aspects as the existence of a single center that makes key decisions; understanding integration as knowledge sharing; and using benchmarking as an exchange of experience on key business processes. Understanding the process of integration of business units in the aspect of knowledge sharing involves obtaining certain information benefits. Using benchmarking as an exchange of experience on key business processes in integrated business structures will help improve basic production processes and increase the efficiency of activity of both the individual business unit and the IBS as a whole.

  18. The Alpha consensus meeting on cryopreservation key performance indicators and benchmarks: proceedings of an expert meeting.

    Science.gov (United States)

    2012-08-01

    This proceedings report presents the outcomes from an international workshop designed to establish consensus on: definitions for key performance indicators (KPIs) for oocyte and embryo cryopreservation, using either slow freezing or vitrification; minimum performance level values for each KPI, representing basic competency; and aspirational benchmark values for each KPI, representing best practice goals. This report includes general presentations about current practice and factors for consideration in the development of KPIs. A total of 14 KPIs were recommended and benchmarks for each are presented. No recommendations were made regarding specific cryopreservation techniques or devices, or whether vitrification is 'better' than slow freezing, or vice versa, for any particular stage or application, as this was considered to be outside the scope of this workshop. Copyright © 2012 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  19. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high-fidelity anisotropic modelling was performed using a state-of-the-art anisotropic anelastic modelling code, the coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events, with three-component seismograms recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  20. Bumper 3 Update for IADC Protection Manual

    Science.gov (United States)

    Christiansen, Eric L.; Nagy, Kornel; Hyde, Jim

    2016-01-01

    The Bumper code has been the standard in use by NASA and contractors to perform meteoroid/debris risk assessments since 1990. It has undergone extensive revisions and updates [NASA JSC HITF website; Christiansen et al., 1992, 1997]. NASA Johnson Space Center (JSC) has applied BUMPER to risk assessments for Space Station, Shuttle, Mir, Extravehicular Mobility Units (EMU) space suits, and other spacecraft (e.g., LDEF, Iridium, TDRS, and Hubble Space Telescope). Bumper continues to be updated with changes in the ballistic limit equations describing failure thresholds of various spacecraft components, as well as changes in the meteoroid and debris environment models. Significant efforts are expended to validate Bumper and benchmark it against other meteoroid/debris risk assessment codes. Bumper 3 is a refactored version of Bumper II. The structure of the code was extensively modified to improve maintenance, performance and flexibility. The architecture was changed to separate the frequently updated ballistic limit equations from the relatively stable common core functions of the program. These updates allow NASA to produce specific editions of Bumper 3 that are tailored for specific customer requirements. The core consists of common code necessary to process the Micrometeoroid and Orbital Debris (MMOD) environment models, assess shadowing and calculate MMOD risk. The library of target response subroutines includes a broad range of different types of MMOD shield ballistic limit equations as well as equations describing damage to various spacecraft subsystems or hardware (thermal protection materials, windows, radiators, solar arrays, cables, etc.). The core and library of ballistic response subroutines are maintained under configuration control. A change in the core will affect all editions of the code, whereas a change in one or more of the response subroutines will affect all editions of the code that contain the particular response subroutines which are modified. Note

  1. Regional restoration benchmarks for Acropora cervicornis

    Science.gov (United States)

    Schopmeyer, Stephanie A.; Lirman, Diego; Bartels, Erich; Gilliam, David S.; Goergen, Elizabeth A.; Griffin, Sean P.; Johnson, Meaghan E.; Lustic, Caitlin; Maxwell, Kerry; Walter, Cory S.

    2017-12-01

    Coral gardening plays an important role in the recovery of depleted populations of threatened Acropora cervicornis in the Caribbean. Over the past decade, high survival coupled with fast growth of in situ nursery corals have allowed practitioners to create healthy and genotypically diverse nursery stocks. Currently, thousands of corals are propagated and outplanted onto degraded reefs on a yearly basis, representing a substantial increase in the abundance, biomass, and overall footprint of A. cervicornis. Here, we compiled an extensive dataset collected by restoration practitioners to document early (1-2 yr) restoration success metrics in Florida and Puerto Rico, USA. By reporting region-specific data on the impacts of fragment collection on donor colonies, survivorship and productivity of nursery corals, and survivorship and productivity of outplanted corals during normal conditions, we provide the basis for a stop-light indicator framework for new or existing restoration programs to evaluate their performance. We show that current restoration methods are very effective, that no excess damage is caused to donor colonies, and that once outplanted, corals behave just like wild colonies. We also provide science-based benchmarks that can be used by programs to evaluate successes and challenges of their efforts, and to make modifications where needed. We propose that up to 10% of the biomass can be collected from healthy, large A. cervicornis donor colonies for nursery propagation. We also propose the following benchmarks for the first year of activities for A. cervicornis restoration: (1) >75% live tissue cover on donor colonies; (2) >80% survivorship of nursery corals; and (3) >70% survivorship of outplanted corals. Finally, we report productivity means of 4.4 cm yr⁻¹ for nursery corals and 4.8 cm yr⁻¹ for outplants as a frame of reference for ranking performance within programs. Such benchmarks, and potential subsequent adaptive actions, are needed to fully assess the

  2. A Uranium Bioremediation Reactive Transport Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10 day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50 day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
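
    The Monod-type rate laws mentioned here have the generic double-limitation form (a generic statement of the rate-law family; the benchmark's actual expressions add inhibition terms and specific parameter values):

        \[ r \;=\; -k_{max}\,B\;\frac{[D]}{K_D + [D]}\;\frac{[A]}{K_A + [A]} \]

    where B is the biomass of the mediating microbial group, [D] the electron donor concentration (acetate), [A] the terminal electron acceptor (e.g. Fe(III), U(VI) or sulfate), and K_D, K_A the half-saturation constants.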

  3. Benchmarking urban energy efficiency in the UK

    International Nuclear Information System (INIS)

    Keirstead, James

    2013-01-01

    This study asks what is the ‘best’ way to measure urban energy efficiency. There has been recent interest in identifying efficient cities so that best practices can be shared, a process known as benchmarking. Previous studies have used relatively simple metrics that provide limited insight on the complexity of urban energy efficiency and arguably fail to provide a ‘fair’ measure of urban performance. Using a data set of 198 urban UK local administrative units, three methods are compared: ratio measures, regression residuals, and data envelopment analysis. The results show that each method has its own strengths and weaknesses regarding the ease of interpretation, ability to identify outliers and provide consistent rankings. Efficient areas are diverse but are notably found in low income areas of large conurbations such as London, whereas industrial areas are consistently ranked as inefficient. The results highlight the shortcomings of the underlying production-based energy accounts. Ideally urban energy efficiency benchmarks would be built on consumption-based accounts, but interim recommendations are made regarding the use of efficiency measures that improve upon current practice and facilitate wider conversations about what it means for a specific city to be energy-efficient within an interconnected economy. - Highlights: • Benchmarking is a potentially valuable method for improving urban energy performance. • Three different measures of urban energy efficiency are presented for UK cities. • Most efficient areas are diverse but include low-income areas of large conurbations. • Least efficient areas perform industrial activities of national importance. • Improve current practice with grouped per capita metrics or regression residuals
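
    Of the three methods compared, the regression-residual measure is the easiest to sketch: regress energy use on structural drivers and rank areas by how far they fall below the prediction. A minimal sketch with illustrative synthetic data (the drivers and coefficients are assumptions, not the paper's specification):

        import numpy as np

        rng = np.random.default_rng(0)
        pop = rng.uniform(5e4, 5e6, 198)            # population per area
        ind = rng.uniform(0.05, 0.40, 198)          # industrial share of activity
        energy = 20 * pop**0.9 * np.exp(2 * ind + rng.normal(0, 0.2, 198))

        # log-linear OLS: log(energy) ~ log(population) + industrial share
        X = np.column_stack([np.ones(198), np.log(pop), ind])
        beta, *_ = np.linalg.lstsq(X, np.log(energy), rcond=None)
        resid = np.log(energy) - X @ beta           # negative = less than predicted

        print("five most efficient areas:", np.argsort(resid)[:5])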

  4. International Benchmarking of Electricity Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2014-01-01

    Electricity transmission system operators (TSO) in Europe are increasingly subject to high-powered performance-based regulation, such as revenue-cap regimes. The determination of the parameters in such regimes is challenging for national regulatory authorities (NRA), since there is normally a single...... TSO operating in each jurisdiction. The solution for European regulators has been found in international regulatory benchmarking, organized in collaboration with the Council of European Energy Regulators (CEER) in 2008 and 2012 for 22 and 23 TSOs, respectively. The frontier study provides static cost...

  5. An OpenMP Compiler Benchmark

    Directory of Open Access Journals (Sweden)

    Matthias S. Müller

    2003-01-01

    Full Text Available The purpose of this benchmark is to propose several optimization techniques and to test their existence in current OpenMP compilers. Examples are the removal of redundant synchronization constructs, effective constructs for alternative code and orphaned directives. The effectiveness of the compiler-generated code is measured by comparing different OpenMP constructs and compilers. If possible, we also compare with the hand-coded "equivalent" solution. Six out of seven proposed optimization techniques are already implemented in different compilers. However, most compilers implement only one or two of them.

  6. Benchmarks of Global Clean Energy Manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-01-01

    The Clean Energy Manufacturing Analysis Center (CEMAC), sponsored by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), provides objective analysis and up-to-date data on global supply chains and manufacturing of clean energy technologies. Benchmarks of Global Clean Energy Manufacturing sheds light on several fundamental questions about the global clean technology manufacturing enterprise: How does clean energy technology manufacturing impact national economies? What are the economic opportunities across the manufacturing supply chain? What are the global dynamics of clean energy technology manufacturing?

  7. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  8. Benchmark simulations of ICRF antenna coupling

    International Nuclear Information System (INIS)

    Louche, F.; Lamalle, P. U.; Messiaen, A. M.; Compernolle, B. van; Milanesio, D.; Maggiora, R.

    2007-01-01

    The paper reports on ongoing benchmark numerical simulations of antenna input impedance parameters in the ion cyclotron range of frequencies with different coupling codes: CST Microwave Studio, TOPICA and ANTITER 2. In particular we study the validity of the approximation of a magnetized plasma slab by a dielectric medium of suitably chosen permittivity. Different antenna models are considered: a single-strap antenna, a 4-strap antenna and the 24-strap ITER antenna array. Whilst the diagonal impedances are mostly in good agreement, some differences between the mutual terms predicted by Microwave Studio and TOPICA have yet to be resolved

  9. COVE 2A Benchmarking calculations using NORIA

    International Nuclear Information System (INIS)

    Carrigan, C.R.; Bixler, N.E.; Hopkins, P.L.; Eaton, R.R.

    1991-10-01

    Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs

  10. Red Hill Updates

    Science.gov (United States)

    This and other periodic updates are intended to keep the public informed on major progress being made to protect public health and the environment at the Red Hill Underground Fuel Storage Facility in Hawaii.

  11. Benchmarking the internal combustion engine and hydrogen

    International Nuclear Information System (INIS)

    Wallace, J.S.

    2006-01-01

    The internal combustion engine is a cost-effective and highly reliable energy conversion technology. Exhaust emission regulations introduced in the 1970s triggered extensive research and development that has significantly improved in-use fuel efficiency and dramatically reduced exhaust emissions. The current level of gasoline vehicle engine development is highlighted and representative emissions and efficiency data are presented as benchmarks. The use of hydrogen fueling for IC engines has been investigated over many decades and the benefits and challenges arising are well known. The current state of hydrogen-fueled engine development will be reviewed and evaluated against gasoline-fueled benchmarks. The prospects for further improvements to hydrogen-fueled IC engines will be examined. While fuel cells are projected to offer greater energy efficiency than IC engines and zero emissions, the availability of fuel cells in quantity at reasonable cost is a barrier to their widespread adoption for the near future. In their current state of development, hydrogen-fueled IC engines are an effective technology to create demand for hydrogen fueling infrastructure until fuel cells become available in commercial quantities. During this transition period, hydrogen-fueled IC engines can achieve PZEV/ULSLEV emissions. (author)

  12. Simple mathematical law benchmarks human confrontations

    Science.gov (United States)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
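
    The abstract does not restate the law itself; in this group's related work on escalating conflicts it is a power-law "progress curve" for the timing of successive events, which under that assumption reads

        \[ \tau_n \;=\; \tau_1\, n^{-b} \]

    where τ_n is the interval between events n and n+1, τ_1 the first interval, and b an escalation exponent estimated separately for each perpetrator-target pair.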

  13. REVISED STREAM CODE AND WASP5 BENCHMARK

    International Nuclear Information System (INIS)

    Chen, K

    2005-01-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by the WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by the WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls
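
    For reference, the equation whose solution STREAM approximates algebraically is the one-dimensional advective transport equation, shown here in its simplest form, with the dispersion and decay terms a full river model would carry omitted:

        \[ \frac{\partial C}{\partial t} \;+\; u\,\frac{\partial C}{\partial x} \;=\; 0 \]

    where C(x,t) is the pollutant concentration and u the stream velocity; the spurious oscillations mentioned above are a well-known artifact of non-monotone discretizations of this advection term.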

  14. UAV CAMERAS: OVERVIEW AND GEOMETRIC CALIBRATION BENCHMARK

    Directory of Open Access Journals (Sweden)

    M. Cramer

    2017-08-01

    Full Text Available Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes) modified cameras known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Especially for such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.

  15. Uav Cameras: Overview and Geometric Calibration Benchmark

    Science.gov (United States)

    Cramer, M.; Przybilla, H.-J.; Zurhorst, A.

    2017-08-01

    Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes) modified cameras known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Especially for such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.
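
    The lab (pre-)calibration discussed in both records is, at its core, the estimation of interior orientation parameters from images of a known target. The following is a minimal sketch of that generic procedure using OpenCV and a checkerboard target; the image folder is hypothetical and this is not the authors' pipeline:

    ```python
    # Minimal geometric camera calibration sketch (illustrative only).
    # Estimates the camera matrix and lens distortion from checkerboard images.
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)  # inner-corner grid of the checkerboard
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points, image_size = [], [], None
    for path in glob.glob("calib_images/*.jpg"):  # hypothetical image folder
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
            image_size = gray.shape[::-1]

    # rms = reprojection error (px), K = camera matrix, dist = distortion coeffs
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print(f"RMS reprojection error: {rms:.3f} px")
    ```

    Comparing K and dist across repeated runs (lab versus in-situ) is exactly the kind of stability check the benchmark above formalizes.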

  16. Multisensor benchmark data for riot control

    Science.gov (United States)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalating situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data of well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, compliance with specifications must be demonstrated. This paper describes a multisensor benchmark which exactly serves this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup, and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.

  17. Benchmarking database performance for genomic data.

    Science.gov (United States)

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, no comprehensive database built-in algorithm currently exists to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm pair-wise, overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported, and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1). © 2015 Wiley Periodicals, Inc.
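
    The details of RegMap are not reproduced in this record, but the core of any region-mapping algorithm is an interval-overlap join. A sketch of the standard SQL formulation follows (sqlite3 is used only to keep the example self-contained; the study benchmarked PostgreSQL and MySQL):

    ```python
    # Standard genomic interval-overlap join (a sketch, not the RegMap source).
    # Two regions overlap iff they share a chromosome and
    # a.start < b.stop AND b.start < a.stop (half-open coordinates assumed).
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE peaks_a (chrom TEXT, start INT, stop INT);
        CREATE TABLE peaks_b (chrom TEXT, start INT, stop INT);
        INSERT INTO peaks_a VALUES ('chr1', 100, 200), ('chr1', 500, 600);
        INSERT INTO peaks_b VALUES ('chr1', 150, 250), ('chr1', 700, 800);
    """)
    rows = con.execute("""
        SELECT a.chrom, a.start, a.stop, b.start, b.stop
        FROM peaks_a AS a
        JOIN peaks_b AS b
          ON a.chrom = b.chrom AND a.start < b.stop AND b.start < a.stop
    """).fetchall()
    print(rows)  # [('chr1', 100, 200, 150, 250)]
    ```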

  18. Benchmarking of SIMULATE-3 on engineering workstations

    International Nuclear Information System (INIS)

    Karlson, C.F.; Reed, M.L.; Webb, J.R.; Elzea, J.D.

    1990-01-01

    The nuclear fuel management department of Arizona Public Service Company (APS) has evaluated various computer platforms for a departmental engineering and business workstation local area network (LAN). Historically, centralized mainframe computer systems have been utilized for engineering calculations. Increasing usage and the resulting longer response times on the company mainframe system, together with the relative cost differential between a mainframe upgrade and workstation technology, justified the examination of current workstations. A primary concern was the time necessary to turn around routine reactor physics reload and analysis calculations. Computers ranging from a Definicon 68020 processing board in an AT-compatible personal computer up to an IBM 3090 mainframe were benchmarked. The SIMULATE-3 advanced nodal code was selected for benchmarking based on its extensive use in nuclear fuel management. SIMULATE-3 is used at APS for reload scoping, design verification, core follow, and providing predictions of reactor behavior under nominal conditions and planned reactor maneuvering, such as axial shape control during start-up and shutdown

  19. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses, for humans working inside different EVA suits performing functional tasks in the appropriate simulated reduced-gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as in shirtsleeves, using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness-for-duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  20. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    Full Text Available For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a ‘learning by doing ethos,’ which permeates the entire curricula. This paper documents the benchmarking of education for managing innovation. Using a business simulation for Bachelor of Business Year 3 learners in a business strategy class, learners explored through a simulated environment the following functional areas: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners, against which subsequent learners participating in online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  1. Shoulder dystocia: An update and review of new techniques | Cluver ...

    African Journals Online (AJOL)

    The definition of shoulder dystocia and the incidence vary. Worldwide, shoulder dystocia may be increasing. In this update we look at the complications for both mother and fetus, and review the risk factors and strategies for possible prevention. Management options include the McRoberts position, techniques to deliver the ...

  2. The LUVOIR Mission Concept: Update and Technology Overview

    Science.gov (United States)

    Bolcar, Matthew R.

    2016-01-01

    We present an overview of the Large Ultraviolet/Optical/Infrared (LUVOIR) decadal mission concept study. We provide updates from recent activities of the Science and Technology Definition Team (STDT) and the Technology Working Group (TWG). We review the technology prioritization and discuss specific technology needs to enable the LUVOIR mission.

  3. A History and Interpretation of Aircraft Icing Intensity Definitions and FAA Rules for Operating in Icing Conditions

    National Research Council Canada - National Science Library

    Jeck, Richard

    2001-01-01

    ... other. There have been several changes in both the definitions and the regulations over time, and part of the problem is that the definitions have not been updated or clarified to account for current regulations...

  4. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.
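
    The essence of survey-based benchmarking is placing a candidate building's energy use intensity (EUI) within the distribution of comparable surveyed buildings. A hedged sketch with hypothetical numbers (not the Cal-Arch implementation):

    ```python
    # Survey-based EUI benchmarking sketch (hypothetical data, not Cal-Arch code).
    import numpy as np

    # Annual site EUI (kWh/m2/yr) of surveyed peer buildings, e.g. offices
    # in the same climate zone drawn from a survey such as CEUS.
    survey_euis = np.array([140.0, 95.0, 210.0, 120.0, 180.0, 160.0, 130.0])

    def eui_percentile(candidate_eui: float, peers: np.ndarray) -> float:
        """Percentage of peer buildings that use less energy than the candidate."""
        return float(np.mean(peers < candidate_eui)) * 100.0

    print(f"Candidate ranks at the {eui_percentile(150.0, survey_euis):.0f}th percentile")
    ```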

  5. Robot Visual Tracking via Incremental Self-Updating of Appearance Model

    Directory of Open Access Journals (Sweden)

    Danpei Zhao

    2013-09-01

    Full Text Available This paper proposes a target tracking method called Incremental Self-Updating Visual Tracking for robot platforms. Our tracker treats the tracking problem as a binary classification: the target and the background. The greyscale, HOG and LBP features are used in this work to represent the target and are integrated into a particle filter framework. To track the target over long time sequences, the tracker has to update its model to follow the most recent target. To address the wasted computation and the lack of a model-updating strategy in traditional methods, an intelligent and effective online self-updating strategy is devised to choose the optimal update opportunity. The decision to update the appearance model is based on the change in discriminative capability between the current frame and the previously updated frame. By adjusting the update step adaptively, severe waste of computation on needless updates can be avoided while keeping the model stable. Moreover, the appearance model can be kept away from serious drift problems when the target undergoes temporary occlusion. The experimental results show that the proposed tracker can achieve robust and efficient performance in several benchmark-challenging video sequences with various complex environment changes in posture, scale, illumination and occlusion.
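
    A minimal sketch of the update-opportunity idea described above, assuming a single scalar 'discriminative capability' score per frame (all names and thresholds are hypothetical; the paper's actual criterion combines greyscale, HOG and LBP features in a particle filter):

    ```python
    # Adaptive appearance-model updating driven by discriminative-score drift
    # (a hypothetical simplification of the strategy described in the abstract).
    class IncrementalUpdater:
        def __init__(self, drift_threshold: float = 0.15, occlusion_floor: float = 0.3):
            self.drift_threshold = drift_threshold  # update when the score drifts this much
            self.occlusion_floor = occlusion_floor  # freeze updates on likely occlusion
            self.last_update_score = None

        def should_update(self, score: float) -> bool:
            """score: classifier confidence separating target from background."""
            if score < self.occlusion_floor:
                return False  # target probably occluded: keep the old model
            if self.last_update_score is None:
                self.last_update_score = score
                return True   # bootstrap on the first reliable frame
            if abs(score - self.last_update_score) > self.drift_threshold:
                self.last_update_score = score
                return True   # appearance changed enough to justify re-learning
            return False      # stable appearance: skip the update and save compute
    ```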

  6. Energy Economic Data Base (EEDB) Program: Phase VI update (1983) report

    International Nuclear Information System (INIS)

    1984-09-01

    This update of the Energy Economic Data Base is the latest in a series of technical and cost studies prepared by United Engineers and Constructors Inc., during the last 18 years. The data base was developed during 1978 and has been updated annually since then. The purpose of the updates has been to reflect the impact of changing regulations and technology on the costs of electric power generating stations. This Phase VI (Sixth) Update report documents the results of the 1983 EEDB Program update effort. The latest effort was a comprehensive update of the technical and capital cost information for the pressurized water reactor, boiling water reactor, and liquid metal fast breeder reactor nuclear power plant data models and for the 800 MWe and 500 MWe high sulfur coal-fired power plant data models. The update provided representative costs for these nuclear and coal-fired power plants for the 1980's. In addition, the updated nuclear power plant data models for the 1980's were modified to provide anticipated costs for nuclear power plants for the 1990's. Consequently, the Phase VI Update has continued to provide important benchmark information through which technical and capital cost trends may be identified that have occurred since January 1, 1978

  7. Energy Economic Data Base (EEDB) Program: Phase VI update (1983) report

    Energy Technology Data Exchange (ETDEWEB)

    1984-09-01

    This update of the Energy Economic Data Base is the latest in a series of technical and cost studies prepared by United Engineers and Constructors Inc., during the last 18 years. The data base was developed during 1978 and has been updated annually since then. The purpose of the updates has been to reflect the impact of changing regulations and technology on the costs of electric power generating stations. This Phase VI (Sixth) Update report documents the results of the 1983 EEDB Program update effort. The latest effort was a comprehensive update of the technical and capital cost information for the pressurized water reactor, boiling water reactor, and liquid metal fast breeder reactor nuclear power plant data models and for the 800 MWe and 500 MWe high sulfur coal-fired power plant data models. The update provided representative costs for these nuclear and coal-fired power plants for the 1980's. In addition, the updated nuclear power plant data models for the 1980's were modified to provide anticipated costs for nuclear power plants for the 1990's. Consequently, the Phase VI Update has continued to provide important benchmark information through which technical and capital cost trends may be identified that have occurred since January 1, 1978.

  8. A Dwarf-based Scalable Big Data Benchmarking Methodology

    OpenAIRE

    Gao, Wanling; Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zheng, Daoyi; Jia, Zhen; Xie, Biwei; Zheng, Chen; Yang, Qiang; Wang, Haibin

    2017-01-01

    Different from the traditional benchmarking methodology that creates a new benchmark or proxy for every possible workload, this paper presents a scalable big data benchmarking methodology. Among a wide variety of big data analytics workloads, we identify eight big data dwarfs, each of which captures the common requirements of each class of unit of computation while being reasonably divorced from individual implementations. We implement the eight dwarfs on different software stacks, e.g., Open...

  9. Evaluating the Resilience of the Bottom-up Method used to Detect and Benchmark the Smartness of University Campuses

    DEFF Research Database (Denmark)

    Giovannella, Carlo; Andone, Diana; Dascalu, Mihai

    2016-01-01

    A new method to perform a bottom-up extraction and benchmark of the perceived multilevel smartness of complex ecosystems has been recently described and applied to territories and learning ecosystems like university campuses and schools. In this paper we study the resilience of our method...... by comparing and integrating the data collected in several European Campuses during two different academic years, 2014-15 and 2015-16. The overall results are: a) a more adequate and robust definition of the orthogonal multidimensional space of representation of the smartness, and b) the definition...

  10. Benchmarking Best Practices in Transformation for Sea Enterprise

    National Research Council Canada - National Science Library

    Brook, Douglas A; Hudgens, Bryan; Nguyen, Nam; Walsh, Katherine

    2006-01-01

    ... applied to reinvestment and recapitalization. Sea Enterprise contracted the Center for Defense Management Reform to research transformation and benchmarking best practices in the private sector...

  11. Experiences with installing and benchmarking SCALE 4.0 on workstations

    International Nuclear Information System (INIS)

    Montierth, L.M.; Briggs, J.B.

    1992-01-01

    The advent of economical, high-speed workstations has placed on the criticality engineer's desktop the means to perform computational analysis that was previously possible only on mainframe computers. With this capability comes the need to modify and maintain criticality codes for use on a variety of different workstations. Due to the use of nonstandard coding, compiler differences [in lieu of American National Standards Institute (ANSI) standards], and other machine idiosyncrasies, there is a definite need to systematically test and benchmark all codes ported to workstations. Once benchmarked, a user environment must be maintained to ensure that user code does not become corrupted. The goal in creating a workstation version of the criticality safety analysis sequence (CSAS) codes in SCALE 4.0 was to start with the Cray versions and change as little source code as possible yet produce as generic a code as possible. To date, this code has been ported to the IBM RISC 6000, Data General AViiON 400, Silicon Graphics 4D-35 (all using the same source code), and to the Hewlett Packard Series 700 workstations. The code is maintained under a configuration control procedure. In this paper, the authors address considerations that pertain to the installation and benchmarking of CSAS

  12. Development of an MPI benchmark program library

    Energy Technology Data Exchange (ETDEWEB)

    Uehara, Hitoshi

    2001-03-01

    Distributed parallel simulation software with message passing interfaces has been developed to realize large-scale, high performance numerical simulations. The most popular API for message communication is MPI, which will be provided on the Earth Simulator. The performance of message communication using MPI libraries is known to have a significant influence on the overall performance of simulation programs. We developed an MPI benchmark program library named MBL in order to measure the performance of message communication precisely. MBL measures the performance of major MPI functions, such as point-to-point and collective communications, and of major communication patterns that often appear in application programs. In this report, a description of MBL and a performance analysis of MPI/SX measured on the SX-4 are presented. (author)
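
    MBL itself is not reproduced here, but the canonical point-to-point measurement it performs is a ping-pong timing loop. A sketch using mpi4py (an assumption for illustration; MBL targets compiled MPI codes):

    ```python
    # Ping-pong latency/bandwidth sketch (illustrative; run with two ranks:
    # mpiexec -n 2 python pingpong.py).
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    nbytes, reps = 1 << 20, 100          # 1 MiB messages, 100 round trips
    buf = np.zeros(nbytes, dtype=np.uint8)

    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    elapsed = MPI.Wtime() - t0

    if rank == 0:
        bw = 2 * reps * nbytes / elapsed / 1e6  # MB/s over both directions
        print(f"avg round trip: {elapsed / reps * 1e6:.1f} us, {bw:.0f} MB/s")
    ```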

  13. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator. (C) 2013 Elsevier Ltd. All rights reserved.

  14. NASA Indexing Benchmarks: Evaluating Text Search Engines

    Science.gov (United States)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and to search and retrieve them in a convenient manner. This study develops criteria for analytically comparing indexing and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the search engines under study. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium-sized data collections, but show weaknesses when used for collections of 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.

  15. Benchmarking organic mixed conductors for transistors

    KAUST Repository

    Inal, Sahika

    2017-11-20

    Organic mixed conductors have garnered significant attention in applications from bioelectronics to energy storage/generation. Their implementation in organic transistors has led to enhanced biosensing, neuromorphic function, and specialized circuits. While a narrow class of conducting polymers continues to excel in these new applications, materials design efforts have accelerated as researchers target new functionality, processability, and improved performance/stability. Materials for organic electrochemical transistors (OECTs) require both efficient electronic transport and facile ion injection in order to sustain high capacity. In this work, we show that the product of the electronic mobility and volumetric charge storage capacity (µC*) is the materials/system figure of merit; we use this framework to benchmark and compare the steady-state OECT performance of ten previously reported materials. This product can be independently verified and decoupled to guide materials design and processing. OECTs can therefore be used as a tool for understanding and designing new organic mixed conductors.
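
    The reason µC* works as a figure of merit follows from the standard OECT transconductance model, in which device geometry separates cleanly from the materials product (a textbook-style sketch; symbols follow the common OECT literature rather than this abstract):

    ```latex
    % Saturation-regime OECT transconductance:
    % W, L, d  = channel width, length and thickness,
    % \mu      = electronic mobility, C^* = volumetric capacitance,
    % V_{th}   = threshold voltage, V_g = gate voltage.
    g_m = \frac{W d}{L}\, \mu C^{*} \,(V_{th} - V_g)
    ```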

  16. Benchmark Results for Few-Body Hypernuclei

    Science.gov (United States)

    Ferrari Ruffino, F.; Lonardoni, D.; Barnea, N.; Deflorian, S.; Leidemann, W.; Orlandini, G.; Pederiva, F.

    2017-05-01

    The Non-Symmetrized Hyperspherical Harmonics method (NSHH) is introduced in the hypernuclear sector and benchmarked with three different ab-initio methods, namely the Auxiliary Field Diffusion Monte Carlo method, the Faddeev-Yakubovsky approach and the Gaussian Expansion Method. Binding energies and hyperon separation energies of three- to five-body hypernuclei are calculated by employing the two-body Λ N component of the phenomenological Bodmer-Usmani potential (Bodmer and Usmani in Nucl Phys A 477:621, 1988; Usmani and Khanna in J Phys G 35:025105, 2008), and a hyperon-nucleon interaction (Hiyama et al. in Phys Rev C 65:011301, 2001) simulating the scattering phase shifts given by NSC97f (Rijken et al. in Phys Rev C 59:21, 1999). The range of applicability of the NSHH method is briefly discussed.

  17. Supply network configuration—A benchmarking problem

    Science.gov (United States)

    Brandenburg, Marcus

    2018-03-01

    Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.

  18. Development of solutions to benchmark piping problems

    Energy Technology Data Exchange (ETDEWEB)

    Reich, M; Chang, T Y; Prachuktam, S; Hartzman, M

    1977-12-01

    Benchmark problems and their solutions are presented. The problems consist of calculating the static and dynamic response of selected piping structures subjected to a variety of loading conditions. The structures range from simple pipe geometries to a representative full-scale primary nuclear piping system, which includes the various components and their supports. These structures are assumed to behave in a linear elastic fashion only, i.e., they experience small deformations and small displacements with no existing gaps, and remain elastic through their entire response. The solutions were obtained by using the program EPIPE, which is a modification of the widely available program SAP IV. A brief outline of the theoretical background of this program and its verification is also included.

  19. Shielding integral benchmark archive and database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L.; Grove, R.E. [Radiation Safety Information Computational Center RSICC, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831-6171 (United States); Kodeli, I. [Josef Stefan Inst., Jamova 39, 1000 Ljubljana (Slovenia); Gulliford, J.; Sartori, E. [OECD NEA Data Bank, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2011-07-01

    The shielding integral benchmark archive and database (SINBAD) collection of experiment descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories - fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. The nuclear cross sections also play an important role as they are necessary in performing computational analysis. (authors)

  20. Comparison and validation of HEU and LEU modeling results to HEU experimental benchmark data for the Massachusetts Institute of Technology MITR reactor.

    Energy Technology Data Exchange (ETDEWEB)

    Newton, T. H.; Wilson, E. H; Bergeron, A.; Horelik, N.; Stevens, J. (Nuclear Engineering Division); (MIT Nuclear Reactor Lab.)

    2011-03-02

    The Massachusetts Institute of Technology Reactor (MITR-II) is a research reactor in Cambridge, Massachusetts designed primarily for experiments using neutron beam and in-core irradiation facilities. It delivers a neutron flux comparable to current LWR power reactors in a compact 6 MW core using Highly Enriched Uranium (HEU) fuel. In the framework of its non-proliferation policies, the international community presently aims to minimize the amount of nuclear material available that could be used for nuclear weapons. In this geopolitical context, most research and test reactors both domestic and international have started a program of conversion to the use of Low Enriched Uranium (LEU) fuel. A new type of LEU fuel based on an alloy of uranium and molybdenum (UMo) is expected to allow the conversion of U.S. domestic high performance reactors like the MITR-II reactor. Towards this goal, comparisons of MCNP5 Monte Carlo neutronic modeling results for HEU and LEU cores have been performed. Validation of the model has been based upon comparison to HEU experimental benchmark data for the MITR-II. The objective of this work was to demonstrate a model which could represent the experimental HEU data, and therefore could provide a basis to demonstrate LEU core performance. This report presents an overview of MITR-II model geometry and material definitions which have been verified, and updated as required during the course of validation to represent the specifications of the MITR-II reactor. Results of calculations are presented for comparisons to historical HEU start-up data from 1975-1976, and to other experimental benchmark data available for the MITR-II Reactor through 2009. This report also presents results of steady state neutronic analysis of an all-fresh LEU fueled core. Where possible, HEU and LEU calculations were performed for conditions equivalent to HEU experiments, which serves as a starting point for safety analyses for conversion of MITR-II from the use of HEU

  1. Benchmark calculation of subchannel analysis codes

    International Nuclear Information System (INIS)

    1996-02-01

    In order to evaluate the analysis capabilities of various subchannel codes used in the thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and the major findings obtained from the calculations were as follows: (1) For single-phase flow mixing experiments between two channels, the calculated water temperature distributions along the flow direction agreed with the experimental results once the turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) For two-phase flow mixing experiments between two channels, in high water flow rate cases the calculated distributions of air and water flows in each channel agreed well with the experimental results. In low water flow cases, on the other hand, the air mixing rates were underestimated. (3) For two-phase flow mixing experiments among multiple channels, the calculated mass velocities at the channel exit under steady-state conditions agreed with experimental values within about 10%. However, the predictive errors of exit qualities were as high as 30%. (4) For critical heat flux (CHF) experiments, two different results were obtained. One code indicated that the calculated CHFs using the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that the CHFs were well predicted by using the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) For droplet entrainment and deposition experiments, the predictive capability was significantly increased by improving the correlations. On the other hand, a remarkable discrepancy between codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high quality cases, while another code overestimated the droplet flow rate and underestimated the liquid film flow rate in low quality cases. (J.P.N.)
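
    For orientation, the turbulent mixing coefficient tuned in finding (1) enters the standard subchannel mixing model, sketched here in its common form (notation is generic, not taken from this record):

    ```latex
    % Turbulent cross-flow per unit axial length between subchannels i and j:
    % \beta    = turbulent mixing coefficient (the tuned parameter),
    % s_{ij}   = gap width between the subchannels,
    % \bar{G}  = average axial mass flux of the two subchannels.
    w'_{ij} = \beta \, s_{ij} \, \bar{G}
    ```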

  2. Benchmarking management practices in Australian public healthcare.

    Science.gov (United States)

    Agarwal, Renu; Green, Roy; Agarwal, Neeru; Randhawa, Krithika

    2016-01-01

    The purpose of this paper is to investigate the quality of management practices of public hospitals in the Australian healthcare system, specifically those in the state-managed health systems of Queensland and New South Wales (NSW). Further, the authors assess the management practices of Queensland and NSW public hospitals jointly and globally benchmark against those in the health systems of seven other countries, namely, USA, UK, Sweden, France, Germany, Italy and Canada. In this study, the authors adapt the unique and globally deployed Bloom et al. (2009) survey instrument that uses a "double blind, double scored" methodology and an interview-based scoring grid to measure and internationally benchmark the management practices in Queensland and NSW public hospitals based on 21 management dimensions across four broad areas of management - operations, performance monitoring, targets and people management. The findings reveal the areas of strength and potential areas of improvement in the Queensland and NSW Health hospital management practices when compared with public hospitals in seven countries, namely, USA, UK, Sweden, France, Germany, Italy and Canada. Together, Queensland and NSW Health hospitals perform best in operations management followed by performance monitoring. While target management presents scope for improvement, people management is the sphere where these Australian hospitals lag the most. This paper is of interest to both hospital administrators and health care policy-makers aiming to lift management quality at the hospital level as well as at the institutional level, as a vehicle to consistently deliver sustainable high-quality health services. This study provides the first internationally comparable robust measure of management capability in Australian public hospitals, where hospitals are run independently by the state-run healthcare systems. Additionally, this research study contributes to the empirical evidence base on the quality of

  3. Summary of the First Workshop on OECD/NRC boiling water reactor turbine trip benchmark

    International Nuclear Information System (INIS)

    2000-11-01

    the Benchmark Specifications and the definition of three exercises. The information needed for the exercises was clearly presented to the participants in order to be able to proceed with the calculations in an efficient and timely manner. Three workshops are scheduled during the course of the benchmark activities. This document is the summary of the First Workshop

  4. LD Definition.

    Science.gov (United States)

    Learning Disability Quarterly, 1987

    1987-01-01

    The position paper (1981) of the National Joint Committee on Learning Disabilities presents a revised definition of learning disabilities and identifies issues and concerns (such as the limitation to children and the exclusion clause) associated with the definition included in P.L. 94-142, the Education for All Handicapped Children Act. (DB)

  5. Multimodal Definition:

    African Journals Online (AJOL)

    user

    booming as a common tool of language learning and use. In the field of e-lexicography, such ... defining, as indicated under the entry of definition in Oxford Advanced Learner's Dictionary of Current English (the ... In the above-mentioned context, this article argues for the establishment of the notion of multimodal definition ...

  6. Benchmarking on the management of radioactive waste; Benchmarking sobre la gestion de los residuos radiactivos

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Gomez, M. a.; Gonzalez Gandal, R.; Gomez Castano, N.

    2013-09-01

    In this project, an evaluation of the practices carried out in the waste management field at the Spanish nuclear power plants has been performed following the Benchmarking methodology. This process has allowed the identification of aspects for improving waste treatment processes, reducing the volume of waste, reducing management costs, and establishing management routes for waste streams that do not yet have one. (Author)

  7. Benchmarking in pathology: development of a benchmarking complexity unit and associated key performance indicators.

    Science.gov (United States)

    Neil, Amanda; Pfeffer, Sally; Burnett, Leslie

    2013-01-01

    This paper details the development of a new type of pathology laboratory productivity unit, the benchmarking complexity unit (BCU). The BCU provides a comparative index of laboratory efficiency, regardless of test mix. It also enables estimation of a measure of how much complex pathology a laboratory performs, and the identification of peer organisations for the purposes of comparison and benchmarking. The BCU is based on the theory that wage rates reflect productivity at the margin. A weighting factor for the ratio of medical to technical staff time was dynamically calculated based on actual participant site data. Given this weighting, a complexity value for each test, at each site, was calculated. The median complexity value (number of BCUs) for that test across all participating sites was taken as its complexity value for the Benchmarking in Pathology Program. The BCU allowed implementation of an unbiased comparison unit and test listing that was found to be a robust indicator of the relative complexity for each test. Employing the BCU data, a number of Key Performance Indicators (KPIs) were developed, including three that address comparative organisational complexity, analytical depth and performance efficiency, respectively. Peer groups were also established using the BCU combined with simple organisational and environmental metrics. The BCU has enabled productivity statistics to be compared between organisations. The BCU corrects for differences in test mix and workload complexity of different organisations and also allows for objective stratification into peer groups.
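
    Read as a recipe, the BCU construction can be sketched in a few lines (hypothetical numbers and field names; the program derives the medical-to-technical weighting dynamically from participant site data):

    ```python
    # Benchmarking complexity unit (BCU) sketch for a single test
    # (hypothetical data; the real weighting comes from participant sites).
    from statistics import median

    # Staff minutes spent on one test at each participating site.
    site_times = {
        "site_A": {"medical_min": 4.0, "technical_min": 12.0},
        "site_B": {"medical_min": 2.0, "technical_min": 15.0},
        "site_C": {"medical_min": 5.0, "technical_min": 10.0},
    }
    MEDICAL_WEIGHT = 3.5  # assumed wage-rate ratio of medical to technical staff

    def site_complexity(t: dict) -> float:
        """Wage-weighted staff time for the test at one site."""
        return MEDICAL_WEIGHT * t["medical_min"] + t["technical_min"]

    # The test's BCU value is the median complexity across all sites.
    bcu = median(site_complexity(t) for t in site_times.values())
    print(f"Test complexity: {bcu:.1f} BCU")
    ```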

  8. Update of European bioethics

    DEFF Research Database (Denmark)

    Rendtorff, Jacob Dahl

    2015-01-01

    This paper presents an update of the research on European bioethics undertaken by the author together with Professor Peter Kemp since the 1990s, on Basic ethical principles in European bioethics and biolaw. In this European approach to basic ethical principles in bioethics and biolaw......, the principles of autonomy, dignity, integrity and vulnerability are proposed as the most important ethical principles for respect for the human person in biomedical and biotechnological development. This approach to bioethics and biolaw is presented here in a short updated version that integrates the earlier...... research in a presentation of the present understanding of the basic ethical principles in bioethics and biolaw....

  9. Updating action domain descriptions.

    Science.gov (United States)

    Eiter, Thomas; Erdem, Esra; Fink, Michael; Senko, Ján

    2010-10-01

    Incorporating new information into a knowledge base is an important problem which has been widely investigated. In this paper, we study this problem in a formal framework for reasoning about actions and change. In this framework, action domains are described in an action language whose semantics is based on the notion of causality. Unlike the formalisms considered in the related work, this language allows straightforward representation of non-deterministic effects and indirect effects of (possibly concurrent) actions, as well as state constraints; therefore, the updates can be more general than elementary statements. The expressivity of this formalism allows us to study the update of an action domain description with a more general approach compared to related work. First of all, we consider the update of an action description with respect to further criteria, for instance, by ensuring that the updated description entails some observations, assertions, or general domain properties that constitute further constraints that are not expressible in an action description in general. Moreover, our framework allows us to discriminate amongst alternative updates of action domain descriptions and to single out a most preferable one, based on a given preference relation possibly dependent on the specified criteria. We study semantic and computational aspects of the update problem, and establish basic properties of updates as well as a decomposition theorem that gives rise to a divide and conquer approach to updating action descriptions under certain conditions. Furthermore, we study the computational complexity of decision problems around computing solutions, both for the generic setting and for two particular preference relations, viz. set-inclusion and weight-based preference. While deciding the existence of solutions and recognizing solutions are PSPACE-complete problems in general, the problems fall back into the polynomial hierarchy under restrictions on the additional

  10. BI-RADS update.

    Science.gov (United States)

    Mercado, Cecilia L

    2014-05-01

    The updated American College of Radiology (ACR) Breast Imaging Reporting and Data System (BI-RADS) has been newly released. This article summarizes the changes and updates that have been made to BI-RADS. The goal of the revised edition continues to be the same: to improve clarification in image interpretation, maintain reporting standardization, and simplify the monitoring of outcomes. The new BI-RADS also introduces new terminology to provide a more universal lexicon across all 3 imaging modalities. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Benchmark problems for numerical implementations of phase field models

    International Nuclear Information System (INIS)

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-01-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
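
    For orientation, the spinodal decomposition benchmark mentioned above is built on the Cahn-Hilliard equation, commonly written as follows (a standard form; the benchmark's exact free energy and parameter values are specified in the paper itself):

    ```latex
    % Cahn-Hilliard evolution of the composition field c(x, t):
    % M      = mobility,
    % f(c)   = bulk free energy density,
    % \kappa = gradient energy coefficient.
    \frac{\partial c}{\partial t} =
      \nabla \cdot \left[ M \nabla \left( \frac{\partial f}{\partial c}
        - \kappa \nabla^{2} c \right) \right]
    ```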

  12. The Learning Organisation: Results of a Benchmarking Study.

    Science.gov (United States)

    Zairi, Mohamed

    1999-01-01

    Learning in corporations was assessed using these benchmarks: core qualities of creative organizations, characteristic of organizational creativity, attributes of flexible organizations, use of diversity and conflict, creative human resource management systems, and effective and successful teams. These benchmarks are key elements of the learning…

  13. Present status of International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    International Nuclear Information System (INIS)

    Miyoshi, Yoshinori

    2000-01-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was designed to identify and evaluate a comprehensive set of critical-experiment benchmark data. The data are compiled into a standardized format by reviewing original and subsequently revised documentation and by calculating each experiment with standard criticality safety codes. Five handbooks of evaluated criticality safety benchmark experiments have been published since 1995. (author)

  14. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Science.gov (United States)

    2010-01-01

    Section 1709.5 directs that the home energy cost benchmarks and the high energy cost benchmarks be determined from the most recent data available and be periodically revised, covering energy sources including ... other petroleum products, wood and other biomass fuels, coal, wind and solar energy. [7 CFR 1709.5, 2010-01-01]

  15. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for the development of new ideas and a comparison of methods for hybrid systems' modeling and control. The benchmark features switched dynamics and discrete-valued inputs, making it a hybrid system; furthermore, the outputs are subjected to a number of constraints. The objective is to develop an efficient and optimal control strategy.

  16. Case mix classification and a benchmark set for surgery scheduling

    NARCIS (Netherlands)

    Leeftink, Gréanne; Hans, Erwin W.

    Numerous benchmark sets exist for combinatorial optimization problems. However, in healthcare scheduling, only a few benchmark sets are known, mainly focused on nurse rostering. One of the most studied topics in the healthcare scheduling literature is surgery scheduling, for which there is no widely

  17. Quality indicators for international benchmarking of mental health care

    DEFF Research Database (Denmark)

    Hermann, Richard C; Mattke, Soeren; Somekh, David

    2006-01-01

    To identify quality measures for international benchmarking of mental health care that assess important processes and outcomes of care, are scientifically sound, and are feasible to construct from preexisting data.

  18. Benchmark – based review as a strategy for microfinance delivery ...

    African Journals Online (AJOL)

    A.O Ejiogu, C.O Utazi

    Microfinance is one of the development tools for poverty reduction. The traditional supply-led, subsidized credit delivery has led to an increase in credit disbursements. However, there is a shortage of model benchmarks and ...

  19. Advocacy for Benchmarking in the Nigerian Institute of Advanced ...

    African Journals Online (AJOL)

    The paper gave a general overview of benchmarking and its novel application to library practice with a view to achieving organizational change and improved performance. Based on the literature, the paper took an analytic, descriptive and qualitative overview of benchmarking practices vis-à-vis services in law libraries ...

  20. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    Science.gov (United States)

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to examine how we taught the use of the benchmark strategy for comparing fractions to fifth-graders in Taiwan. Twenty-six fifth-graders from a public elementary school in southern Taiwan were selected to join this study. The results of this case study showed that students made substantial progress in the use of the benchmark strategy when comparing fractions…

  1. Procedural restraint in children's nursing: using clinical benchmarks.

    Science.gov (United States)

    Bland, Michael; Bridge, Caroline; Cooper, Melanie; Dixon, Deborah; Hay, Lyndsey; Zerbato, Anna

    2002-08-01

    This paper will explore the use of child restraint during common clinical procedures such as venepuncture, cannulation and lumbar puncture. A lack of research, guidelines and protocols on restraining children led a group of student nurses to develop a clinical practice benchmark on procedural restraint for the North West Clinical Practice Benchmarking Group.

  2. Evaluation of an international benchmarking initiative in nine eye hospitals.

    Science.gov (United States)

    de Korne, Dirk F; Sol, Kees J C A; van Wijngaarden, Jeroen D H; van Vliet, Ellen J; Custers, Thomas; Cubbon, Mark; Spileers, Werner; Ygge, Jan; Ang, Chong-Lye; Klazinga, Niek S

    2010-01-01

    Benchmarking has become very popular among managers to improve quality in the private and public sector, but little is known about its applicability in international hospital settings. The purpose of this study was to evaluate the applicability of an international benchmarking initiative in eye hospitals. To assess the applicability, an evaluation frame was constructed on the basis of a systematic literature review. The frame was applied longitudinally to a case study of nine eye hospitals that used a set of performance indicators for benchmarking. Document analysis, nine questionnaires, and 26 semistructured interviews with stakeholders in each hospital were used for qualitative analysis. The evaluation frame consisted of four areas with key conditions for benchmarking: purposes of benchmarking, performance indicators, participating organizations, and performance management systems. This study showed that the international benchmarking between eye hospitals scarcely met these conditions. The used indicators were not incorporated in a performance management system in any of the hospitals. Despite the apparent homogeneity of the participants and the absence of competition, differences in ownership, governance structure, reimbursement, and market orientation made comparisons difficult. Benchmarking, however, stimulated learning and exchange of knowledge. It encouraged interaction and thereby learning on the tactical and operational levels, which is also an incentive to attract and motivate staff. Although international hospital benchmarking seems to be a rational process of sharing performance data, this case study showed that it is highly dependent on social processes and a learning environment. It can be useful for diagnostics, helping local hospitals to catalyze performance improvements.

  3. Evaluation of an international benchmarking initiative in nine eye hospitals

    NARCIS (Netherlands)

    de Korne, Dirk F.; Sol, Kees J. C. A.; van Wijngaarden, Jeroen D. H.; van Vliet, Ellen J.; Custers, Thomas; Cubbon, Mark; Spileers, Werner; Ygge, Jan; Ang, Chong-Lye; Klazinga, Niek S.

    2010-01-01

    BACKGROUND: Benchmarking has become very popular among managers to improve quality in the private and public sector, but little is known about its applicability in international hospital settings. PURPOSE: The purpose of this study was to evaluate the applicability of an international benchmarking

  4. Aerodynamic benchmarking of the DeepWind design

    DEFF Research Database (Denmark)

    Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    The aerodynamic benchmarking of the DeepWind rotor is conducted by comparing different rotor geometries and solutions while keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...

  5. Advocacy for Benchmarking in the Nigerian Institute of Advanced ...

    African Journals Online (AJOL)

    The paper gave a general overview of benchmarking and its novel application to library practice with a view to achieving organizational change and improved performance. Based on the literature, the paper took an analytic, descriptive and qualitative overview of benchmarking practices vis-à-vis services in law libraries generally ...

  6. A Benchmark for Online Non-Blocking Schema Transformations

    NARCIS (Netherlands)

    Wevers, L.; Hofstra, Matthijs; Tammens, Menno; Huisman, Marieke; van Keulen, Maurice

    2015-01-01

    This paper presents a benchmark for measuring the blocking behavior of schema transformations in relational database systems. As a basis for our benchmark, we have developed criteria for the functionality and performance of schema transformation mechanisms based on the characteristics of state of

  7. Presidential Address 1997--Benchmarks for the Next Millennium.

    Science.gov (United States)

    Baker, Pamela C.

    1997-01-01

    Reflects on the century's preeminent benchmarks, including the evolution in the lives of people with disabilities and the prevention of many causes of mental retardation. The ethical challenges of genetic engineering and diagnostic technology and the need for new benchmarks in policy, practice, and research are discussed. (CR)

  8. Computational benchmark problem for deep penetration in iron

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Carter, L.L.

    1980-01-01

    A calculational benchmark problem which is simple to model and easy to interpret is described. The benchmark consists of monoenergetic 2-, 4-, or 40-MeV neutrons normally incident upon a 3-m-thick pure iron slab. Currents, fluxes, and radiation doses are tabulated throughout the slab
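
    A zeroth-order sanity check for such a slab problem is the uncollided narrow-beam attenuation, which bounds the unscattered component only; the full transport answer is far larger because of scattering build-up. A sketch with an assumed cross section:

    ```python
    # Narrow-beam attenuation check for a thick slab (illustrative only;
    # the macroscopic cross section below is assumed, not from the benchmark).
    import math

    sigma_t = 0.25        # assumed total macroscopic cross section of iron, 1/cm
    thickness_cm = 300.0  # the 3-m slab

    uncollided = math.exp(-sigma_t * thickness_cm)
    print(f"Uncollided flux fraction: {uncollided:.3e}")
    # Tabulated deep-penetration doses greatly exceed this because of
    # the scattered (build-up) flux that transport codes must capture.
    ```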

  9. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we...

  10. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for development of new ideas and a comparison of methods for hybrid systems' modeling and control. The benchmark features switch dynamics and discrete valued input making it a hybrid system, furthermore the outputs are subjected...

  11. Benchmarking with the BLASST Sessional Staff Standards Framework

    Science.gov (United States)

    Luzia, Karina; Harvey, Marina; Parker, Nicola; McCormack, Coralie; Brown, Natalie R.

    2013-01-01

    Benchmarking as a type of knowledge-sharing around good practice within and between institutions is increasingly common in the higher education sector. More recently, benchmarking as a process that can contribute to quality enhancement has been deployed across numerous institutions with a view to systematising frameworks to assure and enhance the…

  12. BIM quickscan: benchmark of BIM performance in the Netherlands

    NARCIS (Netherlands)

    Berlo, L.A.H.M. van; Dijkmans, T.J.A.; Hendriks, H.; Spekkink, D.; Pel, W.

    2012-01-01

    In 2009 a “BIM QuickScan” for benchmarking BIM performance was created in the Netherlands (Sebastian, Berlo 2010). This instrument aims to provide insight into the current BIM performance of a company. The benchmarking instrument combines quantitative and qualitative assessments of the ‘hard’ and

  13. Benchmarks for Psychotherapy Efficacy in Adult Major Depression

    Science.gov (United States)

    Minami, Takuya; Wampold, Bruce E.; Serlin, Ronald C.; Kircher, John C.; Brown, George S.

    2007-01-01

    This study estimates pretreatment-posttreatment effect size benchmarks for the treatment of major depression in adults that may be useful in evaluating psychotherapy effectiveness in clinical practice. Treatment efficacy benchmarks for major depression were derived for 3 different types of outcome measures: the Hamilton Rating Scale for Depression…
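
    One common convention for the pretreatment-posttreatment effect sizes this abstract refers to is to standardise the mean symptom change by the pretreatment standard deviation. The sketch below illustrates that convention with invented numbers; it is not necessarily the exact estimator used by the authors.

        # One common pre/post effect-size convention (an assumption here, not
        # necessarily the authors' exact estimator) for a symptom scale where
        # lower scores are better: standardise the change by the pre-test SD.
        def prepost_effect_size(mean_pre, mean_post, sd_pre):
            """Cohen's-d-style pre/post effect size; positive = improvement."""
            return (mean_pre - mean_post) / sd_pre

        # toy numbers for a depression rating scale
        print(prepost_effect_size(mean_pre=24.0, mean_post=12.0, sd_pre=6.0))  # 2.0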

  14. Benchmarking rehabilitation practice in the intensive care unit.

    Science.gov (United States)

    Knott, Anna; Stevenson, Matt; Harlow, Stephanie Km

    2015-02-01

    Early rehabilitation in critically ill patients has been demonstrated to be safe and is associated with many positive outcomes. Despite this, there are inconsistencies in the early active rehabilitation that patients receive on intensive care units. The aims of this study were to quantify the amount of active rehabilitation provided for patients in a District General Hospital intensive care unit and to identify specific barriers encountered. Data were collected over a six-week period during March and April 2013. All patients admitted to the intensive care unit at St Peter's Hospital for more than 48 h were included. For every treatment session, the treating physiotherapist recorded what type of treatment took place. Treatments were classified as either non-active or active rehabilitation. Non-active rehabilitation included chest physiotherapy, passive range of movement exercises and hoisting to a chair. Active rehabilitation was defined as any treatment including active/active-assisted exercises, sitting on the edge of the bed, sitting to standing, standing transfers, marching on the spot or ambulation. Classification of rehabilitation was based upon internationally agreed intensive care unit activity codes and definitions. All barriers to active rehabilitation were also recorded. The study included 35 patients with a total of 194 physiotherapy treatment sessions. Active rehabilitation was included in 51% of all treatment sessions. The median time to commencing active rehabilitation from intensive care unit admission was 3 days (range 3-42 [IQR 3-7]). The most frequent barriers to active rehabilitation were sedation and endotracheal tubes, which together accounted for 50% of the total barriers. The study provides useful benchmarking of current rehabilitation activity in a District General Hospital intensive care unit and highlights the most common barriers encountered to active rehabilitation. Longer duration studies incorporating larger sample sizes are

  15. Characterization of the benchmark binary NLTT 33370

    Energy Technology Data Exchange (ETDEWEB)

    Schlieder, Joshua E.; Bonnefoy, Mickaël; Herbst, T. M.; Henning, Thomas; Biller, Beth; Bergfors, Carolina; Brandner, Wolfgang [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Lépine, Sébastien; Rice, Emily [Department of Astrophysics, Division of Physical Sciences, American Museum of Natural History, Central Park West at 79th Street, New York, NY 10024 (United States); Berger, Edo [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Skemer, Andrew; Hinz, Philip; Defrère, Denis; Leisenring, Jarron [Steward Observatory, Department of Astronomy, University of Arizona, 933 N. Cherry Ave, Tucson, AZ 85721 (United States); Chauvin, Gaël; Lagrange, Anne-Marie [UJF-Grenoble 1/CNRS-INSU, Institut de Planétologie et d'Astrophysique de Grenoble (IPAG) UMR 5274, Grenoble F-38041 (France); Girard, Julien H. V. [European Southern Observatory, Casilla 19001, Santiago 19 (Chile); Lacour, Sylvestre [LESIA, Observatoire de Paris, CNRS, University Pierre et Marie Curie Paris 6 and University Denis Diderot Paris 7, 5 place Jules Janssen, F-92195 Meudon (France); Skrutskie, Michael, E-mail: schlieder@mpia-hd.mpg.de [Department of Astronomy, University of Virginia, Charlottesville, VA 22904 (United States)

    2014-03-01

    We confirm the binary nature of the nearby, very low mass (VLM) system NLTT 33370 with adaptive optics imaging and present resolved near-infrared photometry and integrated light optical and near-infrared spectroscopy to characterize the system. VLT-NaCo and LBTI-LMIRCam images show significant orbital motion between 2013 February and 2013 April. Optical spectra reveal weak, gravity-sensitive alkali lines and strong lithium 6708 Å absorption that indicate the system is younger than field age. VLT-SINFONI near-IR spectra also show weak, gravity-sensitive features and spectral morphology that is consistent with other young VLM dwarfs. We combine the constraints from all age diagnostics to estimate a system age of ∼30-200 Myr. The 1.2-4.7 μm spectral energy distribution of the components points toward T {sub eff} = 3200 ± 500 K and T {sub eff} = 3100 ± 500 K for NLTT 33370 A and B, respectively. The observed spectra, derived temperatures, and estimated age combine to constrain the component spectral types to the range M6-M8. Evolutionary models predict masses of 97{sub −48}{sup +41} M{sub Jup} and 91{sub −44}{sup +41} M{sub Jup} from the estimated luminosities of the components. KPNO-Phoenix spectra allow us to estimate the systemic radial velocity of the binary. The Galactic kinematics of NLTT 33370AB are broadly consistent with other young stars in the solar neighborhood. However, definitive membership in a young, kinematic group cannot be assigned at this time and further follow-up observations are necessary to fully constrain the system's kinematics. The proximity, age, and late spectral type of this binary make it very novel and an ideal target for rapid, complete orbit determination. The system is one of only a few model calibration benchmarks at young ages and very low masses.

  16. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  17. OWL2 benchmarking for the evaluation of knowledge based systems.

    Directory of Open Access Journals (Sweden)

    Sher Afgun Khan

    Full Text Available OWL2 semantics are becoming increasingly popular for real-domain applications like gene engineering and health MIS. The present work identifies the research gap that negligible attention has been paid to the performance evaluation of Knowledge Base Systems (KBS) using OWL2 semantics. To fill this gap, an OWL2 benchmark for the evaluation of KBS is proposed. The proposed benchmark addresses the foundational blocks of an ontology benchmark, i.e. data schema, workload and performance metrics. The proposed benchmark is tested on memory-based, file-based, relational-database and graph-based KBS for performance and scalability measures. The results show that the proposed benchmark is able to evaluate the behaviour of different state-of-the-art KBS on OWL2 semantics. On the basis of the results, end users (i.e. domain experts) would be able to select a suitable KBS appropriate for their domain.
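
    As a rough illustration of the three foundational blocks named in the abstract (data schema, workload, performance metrics), the sketch below times a toy workload against an in-memory stand-in for a knowledge base. It is a generic harness under invented data, not the proposed benchmark itself or any real OWL2 store.

        # Generic sketch of the three benchmark blocks named in the abstract:
        # a data schema (here: triples), a workload (queries), and a metric
        # (mean latency). The in-memory "store" is a stand-in, not a real KBS.
        import time

        store = {("gene1", "regulates", "gene2"), ("gene2", "partOf", "pathwayA")}

        def query(subject):  # toy workload operation
            return [t for t in store if t[0] == subject]

        def measure(workload, repeats=1000):
            t0 = time.perf_counter()
            for _ in range(repeats):
                for q in workload:
                    query(q)
            return (time.perf_counter() - t0) / (repeats * len(workload))

        print(f"mean latency: {measure(['gene1', 'gene2']) * 1e6:.2f} us/query")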

  18. Quality assurance and benchmarking: an approach for European dental schools.

    Science.gov (United States)

    Jones, M L; Hobson, R S; Plasschaert, A J M; Gundersen, S; Dummer, P; Roger-Leroi, V; Sidlauskas, A; Hamlin, J

    2007-08-01

    This document was written by Task Force 3 of DentEd III, which is a European Union funded Thematic Network working under the auspices of the Association for Dental Education in Europe (ADEE). It provides a guide to assist in the harmonisation of Dental Education Quality Assurance (QA) systems across the European Higher Education Area (EHEA). There is reference to the work, thus far, of DentEd, DentEd Evolves, DentEd III and the ADEE as they strive to assist the convergence of standards in dental education; obviously QA and benchmarking have an important part to play in the European HE response to the Bologna Process. Definitions of Quality, Quality Assurance, Quality Management and Quality Improvement are given and put into the context of dental education. The possible process and framework for Quality Assurance are outlined and some basic guidelines/recommendations suggested. It is recognised that Quality Assurance in Dental Schools has to co-exist as part of established Quality Assurance systems within faculties and universities, and that Schools also may have to comply with existing local or national systems. Perhaps of greatest importance are the 14 'requirements' for the Quality Assurance of Dental Education in Europe. These, together with the document and its appendices, were unanimously supported by the ADEE at its General Assembly in 2006. As there must be more than one road to achieve a convergence or harmonisation standard, a number of appendices are made available on the ADEE website. These provide a series of 'toolkits' from which schools can 'pick and choose' to assist them in developing QA systems appropriate to their own environment. Validated contributions and examples continue to be most welcome from all members of the European dental community for inclusion at this website. It is realised that not all schools will be able to achieve all of these requirements immediately; by definition, successful harmonisation is a process that will take time. At

  19. BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Dewberry, R.; Sigg, R.; Casella, V.; Bhatt, N.

    2008-09-29

    This report represents a description of compiled benchmark tests conducted to probe and to demonstrate the extensive utility of the Ortec ISOTOPIC {gamma}-ray analysis computer program. The ISOTOPIC program performs analyses of {gamma}-ray spectra applied to specific acquisition configurations in order to apply finite-geometry correction factors and sample-matrix-container photon absorption correction factors. The analysis program provides an extensive set of preset acquisition configurations to which the user can add relevant parameters in order to build the geometry and absorption correction factors that the program determines from calculus and from nuclear {gamma}-ray absorption and scatter data. The Analytical Development Section field nuclear measurement group of the Savannah River National Laboratory uses the Ortec ISOTOPIC analysis program extensively for analyses of solid waste and process holdup applied to passive {gamma}-ray acquisitions. Frequently the results of these {gamma}-ray acquisitions and analyses are to determine compliance with facility criticality safety guidelines. Another use of results is to designate 55-gallon drum solid waste as qualified TRU waste or as low-level waste. Other examples of the application of the ISOTOPIC analysis technique to passive {gamma}-ray acquisitions include analyses of standard waste box items and unique solid waste configurations. In many passive {gamma}-ray acquisition circumstances the container and sample have sufficient density that the calculated energy-dependent transmission correction factors have intrinsic uncertainties in the range 15%-100%. This is frequently the case when assaying 55-gallon drums of solid waste with masses of up to 400 kg and when assaying solid waste in extensive unique containers. Often an accurate assay of the transuranic content of these containers is not required, but rather a good defensible designation as >100 nCi/g (TRU waste) or <100 nCi/g (low level solid waste) is required. In

  20. Goal Definition

    DEFF Research Database (Denmark)

    Bjørn, Anders; Laurent, Alexis; Owsianiak, Mikołaj

    2017-01-01

    The goal definition is the first phase of an LCA and determines the purpose of a study in detail. This chapter teaches how to perform the six aspects of a goal definition: (1) Intended applications of the results, (2) Limitations due to methodological choices, (3) Decision context and reasons for carrying out the study, (4) Target audience, (5) Comparative studies to be disclosed to the public and (6) Commissioner of the study and other influential actors. The instructions address both the conduct and reporting of a goal definition and are largely based on the ILCD guidance document (EC...

  1. Updating of the bovine neosporosis

    Directory of Open Access Journals (Sweden)

    Alexander Martínez Contreras

    2012-06-01

    Full Text Available In the fields of medicine and bovine production, there is a wide variety of diseases affecting reproduction, in relation to the number of live births, the interval between births and open days, among others. Some of these diseases produce abortions and embryonic death, which explains the alteration of reproductive parameters. Many of these diseases have an infectious origin, such as parasites, bacteria, viruses and fungi, which are transmitted among animals; some of them also have zoonotic features that create problems for human health. Among these agents, the protozoan Neospora caninum stands out. Its life cycle is completed in several species of animals, such as the dog and the coyote, which act as its definitive hosts, with cattle as its intermediate host. Neospora caninum causes reproductive disorders, clinical manifestations and decreased production in infected animals, which affects the productivity of small, medium and large producers. Because of this, diagnostic techniques that allow understanding the epidemiological behavior of this disease have been developed. However, in spite of being a major agent in bovine reproductive health, few studies have been undertaken to determine the prevalence of this agent around the world. Therefore, the objective of this review was to collect updated information on the behavior of this parasite, targeting its epidemiology, its symptoms, its impact on production and the methods of its control and prevention.

  2. Supreme Court Update

    Science.gov (United States)

    Taylor, Kelley R.

    2009-01-01

    "Chief Justice Flubs Oath." "Justice Ginsburg Has Cancer Surgery." At the start of this year, those were the news headlines about the U.S. Supreme Court. But January 2009 also brought news about key education cases--one resolved and two others on the docket--of which school administrators should take particular note. The Supreme Court updates on…

  3. Update: Biological Nitrogen Fixation.

    Science.gov (United States)

    Wiseman, Alan; And Others

    1985-01-01

    Updates knowledge on nitrogen fixation, indicating that investigation of free-living nitrogen-fixing organisms is proving useful in understanding bacterial partners and is expected to lead to development of more effective symbioses. Specific areas considered include biochemistry/genetics, synthesis control, proteins and enzymes, symbiotic systems,…

  4. Liver transplantation: an update

    NARCIS (Netherlands)

    Verdonk, R. C.; Van den Berg, A. P.; Slooff, M. J. H.; Porte, R. J.; Haagsma, E. B.

    2007-01-01

    Liver transplantation has been an accepted treatment for end-stage liver disease since the 1980s. Currently it is a highly successful treatment for this indication. The aim of this review is to give a general update on recent developments in the field of liver transplantation. In the last decades

  5. Phaeochromocytoma – an update

    African Journals Online (AJOL)

    but has been advocated as the test of choice in many studies. The management of a phaeochromocytoma is mainly surgical and requires careful patient preparation to avoid catecholamine-induced complications during surgery. This review provides an update on phaeochromocytomas.

  6. Updating protocols prodigy.

    Science.gov (United States)

    Ambrose, Kate

    2005-04-01

    If you are updating protocols, why not try the Prodigy website at www.prodigy.nhs.uk? It is a source of clinical knowledge on a range of topics that is based on best evidence and organised to support clinical decision making.

  7. Update of telephone exchange

    CERN Multimedia

    2006-01-01

    As part of the upgrade of telephone services, the CERN switching centre will be updated on Monday 3 July between 8.00 p.m. and 3.00 a.m. Telephone services may be disrupted and possibly even interrupted during this operation. We apologise in advance for any inconvenience this may cause. CERN TELECOM Service

  8. Update of telephone exchange

    CERN Multimedia

    2006-01-01

    As part of the upgrade of telephone services, the CERN switching centre will be updated between Monday 23 October, 8.00 p.m. and Tuesday 24 October, 2.00 a.m. Telephone services may be disrupted and possibly even interrupted during this operation. We apologise in advance for any inconvenience this may cause. CERN TELECOM Service

  9. Update of telephone exchange

    CERN Multimedia

    2006-01-01

    As part of the upgrade of telephone services, the CERN switching centre will be updated on Monday 3 July between 8.00 p.m. and 3.00 a.m. Telephone services may be disrupted and possibly even interrupted during this operation. We apologise in advance for any inconvenience this may cause. CERN TELECOM Service

  10. Update of telephone exchange

    CERN Multimedia

    2006-01-01

    As part of the upgrade of telephone services, the CERN switching centre will be updated on Wednesday 14 June between 8.00 p.m. and midnight. Telephone services may be disrupted and possibly even interrupted during this operation. We apologise in advance for any inconvenience this may cause. CERN TELECOM Service

  11. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel’s Xeon Phi coprocessor, NVIDIA’s Kepler GPU, and IBM’s BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the byte/flop requirement to around 0.01, which means that it will remain compute-bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code “exaFMM” on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning for certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware
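
    The compute-bound claim above follows from a simple roofline-style comparison between the machine's byte/flop balance and the algorithm's own byte/flop requirement. The sketch below walks through that arithmetic; the 0.2 and 0.01 figures come from the abstract, while the FFT-like entry is an invented contrast case.

        # Roofline-style arithmetic: an algorithm stays compute-bound as long
        # as its byte/flop requirement does not exceed the machine's byte/flop
        # balance (bytes of bandwidth available per flop of peak compute).
        def compute_bound(machine_byte_per_flop, algorithm_byte_per_flop):
            """True if the algorithm can keep the FPUs busy on this machine."""
            return algorithm_byte_per_flop <= machine_byte_per_flop

        machine = 0.2  # Xeon Phi / Kepler / BlueGene/Q class, per the abstract
        for name, need in [("FFT-like (hypothetical)", 1.0), ("FMM", 0.01)]:
            state = "compute-bound" if compute_bound(machine, need) else "memory-bound"
            print(f"{name}: needs {need} B/flop on a {machine} B/flop machine -> {state}")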

  12. Developments in the Control Loops Benchmarking

    OpenAIRE

    Bialic, Grzegorz; Błachuta, Marian

    2008-01-01

    In the chapter some developments in control performance assessment are provided. A solution based on quadratic performance criteria that takes control effort into account is proposed as an alternative to the popular MV measure. This leads further to the definition of a trade-off curve using the standard deviations of both the control and error signals. The standard deviation parameter is preferred because it characterizes the signal better than the variance.

  13. Validation of the WIMSD4M cross-section generation code with benchmark results

    Energy Technology Data Exchange (ETDEWEB)

    Deen, J.R.; Woodruff, W.L. [Argonne National Lab., IL (United States); Leal, L.E. [Oak Ridge National Lab., TN (United States)

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D{sub 2}O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  14. Validation of the WIMSD4M cross-section generation code with benchmark results

    International Nuclear Information System (INIS)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented

  15. Neuroretinitis -- definition

    Science.gov (United States)

    MedlinePlus medical encyclopedia entry (medlineplus.gov/ency/article/007624.htm) giving the definition of neuroretinitis.

  16. Scope Definition

    DEFF Research Database (Denmark)

    Bjørn, Anders; Owsianiak, Mikołaj; Laurent, Alexis

    2017-01-01

    The scope definition is the second phase of an LCA. It determines what product systems are to be assessed and how this assessment should take place. This chapter teaches how to perform a scope definition. First, important terminology and key concepts of LCA are introduced. Then, the nine items making up a scope definition are elaborately explained: (1) Deliverables, (2) Object of assessment, (3) LCI modelling framework and handling of multifunctional processes, (4) System boundaries and completeness requirements, (5) Representativeness of LCI data, (6) Preparing the basis for the impact assessment, (7) Special requirements for system comparisons, (8) Critical review needs and (9) Planning reporting of results. The instructions relate both to the performance and reporting of a scope definition and are largely based on ILCD.

  17. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    Science.gov (United States)

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie; Reed, Sasha; Reich, Peter B.; Ryan, Michael G.; Wood, Tana E.; Yang, Xiaojuan

    2017-10-01

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  18. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    Science.gov (United States)

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie A.; Reed, Sasha C.; Reich, Peter B.; Ryan, Michael G.; Wood, Tana E.; Yang, Xiaojuan

    2017-01-01

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  19. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    Directory of Open Access Journals (Sweden)

    D. A. Clark

    2017-10-01

    Full Text Available For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  20. Rules for scoring respiratory events in sleep: update of the 2007 AASM Manual for the Scoring of Sleep and Associated Events. Deliberations of the Sleep Apnea Definitions Task Force of the American Academy of Sleep Medicine.

    Science.gov (United States)

    Berry, Richard B; Budhiraja, Rohit; Gottlieb, Daniel J; Gozal, David; Iber, Conrad; Kapur, Vishesh K; Marcus, Carole L; Mehra, Reena; Parthasarathy, Sairam; Quan, Stuart F; Redline, Susan; Strohl, Kingman P; Davidson Ward, Sally L; Tangredi, Michelle M

    2012-10-15

    The American Academy of Sleep Medicine (AASM) Sleep Apnea Definitions Task Force reviewed the current rules for scoring respiratory events in the 2007 AASM Manual for the Scoring of Sleep and Associated Events to determine if revision was indicated. The goals of the task force were (1) to clarify and simplify the current scoring rules, (2) to review evidence for new monitoring technologies relevant to the scoring rules, and (3) to strive for greater concordance between adult and pediatric rules. The task force reviewed the evidence cited by the AASM systematic review of the reliability and validity of scoring respiratory events published in 2007 and relevant studies that have appeared in the literature since that publication. Given the limitations of the published evidence, a consensus process was used to formulate the majority of the task force recommendations concerning revisions. The task force made recommendations concerning recommended and alternative sensors for the detection of apnea and hypopnea to be used during diagnostic and positive airway pressure (PAP) titration polysomnography. An alternative sensor is used if the recommended sensor fails or the signal is inaccurate. The PAP device flow signal is the recommended sensor for the detection of apnea, hypopnea, and respiratory effort-related arousals (RERAs) during PAP titration studies. Appropriate filter settings for recording (display) of the nasal pressure signal to facilitate visualization of inspiratory flattening are also specified. The respiratory inductance plethysmography (RIP) signals to be used as alternative sensors for apnea and hypopnea detection are specified. The task force reached consensus on use of the same sensors for adult and pediatric patients except for the following: (1) the end-tidal PCO(2) signal can be used as an alternative sensor for apnea detection in children only, and (2) polyvinylidene fluoride (PVDF) belts can be used to monitor respiratory effort (thoracoabdominal

  1. The Inertia Weight Updating Strategies in Particle Swarm Optimisation Based on the Beta Distribution

    Directory of Open Access Journals (Sweden)

    Petr Maca

    2015-01-01

    Full Text Available The presented paper deals with the comparison of selected random updating strategies for the inertia weight in particle swarm optimisation. Six versions of particle swarm optimisation were analysed on 28 benchmark functions prepared for the Special Session on Real-Parameter Single Objective Optimisation at CEC2013. The random components of the tested inertia weights were generated from the Beta distribution with different values of the shape parameters. The best-performing PSO version analysed is the multiswarm PSO, which combines two strategies for updating the inertia weight. The first is driven by temporally varying shape parameters, while the second is based on random control of the shape parameters of the Beta distribution.
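
    As an illustration of the strategy the abstract describes, the sketch below shows a PSO velocity update whose inertia weight is redrawn from a Beta distribution at every iteration. It is not the authors' code; the shape parameters, acceleration constants and the sphere test function are all illustrative assumptions.

        # Minimal sketch (not the authors' code) of a PSO velocity update whose
        # inertia weight w is redrawn from a Beta distribution every iteration.
        # Shape parameters a, b and all other constants are illustrative.
        import numpy as np

        rng = np.random.default_rng(42)

        def pso_step(x, v, pbest, gbest, a=2.0, b=2.0, c1=1.5, c2=1.5):
            """One PSO iteration with a Beta-distributed random inertia weight."""
            w = rng.beta(a, b)  # random inertia weight in (0, 1)
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            return x + v, v

        # Toy run on the sphere function, a standard benchmark test function.
        f = lambda x: np.sum(x**2, axis=1)
        x = rng.uniform(-5, 5, (20, 2))
        v = np.zeros_like(x)
        pbest = x.copy()
        gbest = x[f(x).argmin()].copy()
        for _ in range(100):
            x, v = pso_step(x, v, pbest, gbest)
            better = f(x) < f(pbest)
            pbest[better] = x[better]
            gbest = pbest[f(pbest).argmin()].copy()
        print("best value found:", f(gbest[None])[0])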

  2. Benchmarking and accounting for the (private) cloud

    CERN Document Server

    Belleman, J

    2015-01-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, was converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible; the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to ...
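
    A classification scheme of this kind might look roughly like the sketch below: bucket each node, physical or virtual, by a measured per-core benchmark score. This is an invented illustration, not CERN's actual scheme; the scores, class width and HS06-like units are assumptions.

        # Illustrative sketch (not CERN's actual scheme): derive a discrete
        # performance class from a measured per-core benchmark score, so the
        # same rule covers physical and virtual nodes alike.
        def performance_class(score, width=2.0):
            """Bucket a per-core benchmark score into a class label."""
            return f"class-{int(score // width)}"

        nodes = {"physical-001": 11.3, "vm-042": 9.8, "vm-043": 7.1}  # e.g. HS06-like per core
        for name, score in nodes.items():
            print(name, "->", performance_class(score))
        # accounting can then charge per class, and the batch scheduler can
        # match jobs to classes instead of to purchase-order numbers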

  3. Productivity benchmarks for operative service units.

    Science.gov (United States)

    Helkiö, P; Aantaa, R; Virolainen, P; Tuominen, R

    2016-04-01

    Easily accessible, reliable information is crucial for strategic and tactical decision-making on operative processes. We report the development of an analysis tool and resulting metrics for benchmarking purposes at a Finnish university hospital. The analysis tool is based on data collected in a resource management system and an in-house cost-reporting database. The exercise reports key metrics for four operative service units and six surgical units for 2014 and the change from 2013. Productivity, measured as total costs per total hours, ranged from 658 to 957 €/h, and utilization of the total available resource hours at the service-unit level ranged from 66% to 74%. The lowest costs were in a unit running only regular working-hour shifts, whereas the highest costs were in a unit operating on a 24/7 basis. The tool includes additional metrics on operating room (OR) scheduling and monthly data to support more detailed analysis. This report provides the hospital management with an improved and detailed overview of its operative service units and the surgical process and related costs. The operating costs are associated with on-call duties, the size of the operative service units, and the requirements of the surgeries. This information aids in making mid- to long-range decisions on managing OR capacity. © 2016 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
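
    The two headline metrics are simple ratios. The back-of-envelope sketch below reproduces them with invented figures for a single service unit; the study's published ranges are quoted only in the comments for orientation.

        # Back-of-envelope reproduction of the two headline metrics, with
        # invented figures for one operative service unit.
        total_costs_eur = 1_500_000   # hypothetical annual costs
        total_hours = 1_900.0         # hypothetical operating hours delivered
        available_hours = 2_700.0     # hypothetical available resource hours

        productivity = total_costs_eur / total_hours  # cf. 658-957 EUR/h in the study
        utilization = total_hours / available_hours   # cf. 66-74% in the study
        print(f"productivity: {productivity:.0f} EUR/h, utilization: {utilization:.0%}")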

  4. Hydrologic information server for benchmark precipitation dataset

    Science.gov (United States)

    McEnery, John A.; McKee, Paul W.; Shelton, Gregory P.; Ramsey, Ryan W.

    2013-01-01

    This paper will present the methodology and overall system development by which a benchmark dataset of precipitation information has been made available. Rainfall is the primary driver of the hydrologic cycle. High-quality precipitation data is vital for hydrologic models, hydrometeorologic studies and climate analysis, and hydrologic time series observations are important to many water resources applications. Over the past two decades, with the advent of NEXRAD radar, science to measure and record rainfall has improved dramatically. However, much existing data has not been readily available for public access or transferable among the agricultural, engineering and scientific communities. This project takes advantage of the existing CUAHSI Hydrologic Information System ODM model and tools to bridge the gap between data storage and data access, providing an accepted standard interface for internet access to the largest time-series dataset of NEXRAD precipitation data ever assembled. This research effort has produced an operational data system to ingest, transform, load and then serve one of the most important hydrologic variable sets.

  5. Benchmark of systematic human action reliability procedure

    International Nuclear Information System (INIS)

    Spurgin, A.J.; Hannaman, G.W.; Moieni, P.

    1986-01-01

    Probabilistic risk assessment (PRA) methodology has emerged as one of the most promising tools for assessing the impact of human interactions on plant safety and understanding the importance of the man/machine interface. Human interactions were considered to be one of the key elements in the quantification of accident sequences in a PRA. The approach to quantification of human interactions in past PRAs has not been very systematic. The Electric Power Research Institute sponsored the development of SHARP to aid analysts in developing a systematic approach for the evaluation and quantification of human interactions in a PRA. The SHARP process has been extensively peer reviewed and has been adopted by the Institute of Electrical and Electronics Engineers as the basis of a draft guide for the industry. Carrying out a benchmark process in which SHARP is an essential ingredient, however, makes it possible to assess the strengths and weaknesses of SHARP as an aid to analysts performing human reliability analysis as part of a PRA

  6. Geant4 Computing Performance Benchmarking and Monitoring

    Science.gov (United States)

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-01

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. The scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  7. RIA Fuel Codes Benchmark - Volume 1

    International Nuclear Information System (INIS)

    Marchand, Olivier; Georgenthum, Vincent; Petit, Marc; Udagawa, Yutaka; Nagase, Fumihisa; Sugiyama, Tomoyuki; Arffman, Asko; Cherubini, Marco; Dostal, Martin; Klouzal, Jan; Geelhood, Kenneth; Gorzel, Andreas; Holt, Lars; Jernkvist, Lars Olof; Khvostov, Grigori; Maertens, Dietmar; Spykman, Gerold; Nakajima, Tetsuo; Nechaeva, Olga; Panka, Istvan; Rey Gayo, Jose M.; Sagrado Garcia, Inmaculada C.; Shin, An-Dong; Sonnenburg, Heinz Guenther; Umidova, Zeynab; Zhang, Jinzhao; Voglewede, John

    2013-01-01

    Reactivity-initiated accident (RIA) fuel rod codes have been developed for a significant period of time and they all have shown their ability to reproduce some experimental results with a certain degree of adequacy. However, they sometimes rely on different specific modelling assumptions the influence of which on the final results of the calculations is difficult to evaluate. The NEA Working Group on Fuel Safety (WGFS) is tasked with advancing the understanding of fuel safety issues by assessing the technical basis for current safety criteria and their applicability to high burnup and to new fuel designs and materials. The group aims at facilitating international convergence in this area, including the review of experimental approaches as well as the interpretation and use of experimental data relevant for safety. As a contribution to this task, WGFS conducted a RIA code benchmark based on RIA tests performed in the Nuclear Safety Research Reactor in Tokai, Japan and tests performed or planned in CABRI reactor in Cadarache, France. Emphasis was on assessment of different modelling options for RIA fuel rod codes in terms of reproducing experimental results as well as extrapolating to typical reactor conditions. This report provides a summary of the results of this task. (authors)

  8. LHC benchmarks from flavored gauge mediation

    Energy Technology Data Exchange (ETDEWEB)

    Ierushalmi, N.; Iwamoto, S.; Lee, G.; Nepomnyashy, V.; Shadmi, Y. [Physics Department, Technion - Israel Institute of Technology,Haifa 32000 (Israel)

    2016-07-12

    We present benchmark points for LHC searches from flavored gauge mediation models, in which messenger-matter couplings give flavor-dependent squark masses. Our examples include spectra in which a single squark — stop, scharm, or sup — is much lighter than all other colored superpartners, motivating improved quark flavor tagging at the LHC. Many examples feature flavor mixing; in particular, large stop-scharm mixing is possible. The correct Higgs mass is obtained in some examples by virtue of the large stop A-term. We also revisit the general flavor and CP structure of the models. Even though the A-terms can be substantial, their contributions to EDMs are very suppressed, because of the particular dependence of the A-terms on the messenger coupling. This holds regardless of the messenger-coupling texture. More generally, the special structure of the soft terms often leads to stronger suppression of flavor- and CP-violating processes, compared to naive estimates.

  9. On the feasibility of using emergy analysis as a source of benchmarking criteria through data envelopment analysis: A case study for wind energy

    International Nuclear Information System (INIS)

    Iribarren, Diego; Vázquez-Rowe, Ian; Rugani, Benedetto; Benetto, Enrico

    2014-01-01

    The definition of criteria for the benchmarking of similar entities is often a critical issue in analytical studies because of the multiplicity of criteria susceptible to be taken into account. This issue can be aggravated by the need to handle multiple data for multiple facilities. This article presents a methodological framework, named the Em + DEA method, which combines emergy analysis with Data Envelopment Analysis (DEA) for the ecocentric benchmarking of multiple resembling entities (i.e., multiple decision making units or DMUs). Provided that the life-cycle inventories of these DMUs are available, an emergy analysis is performed through the computation of seven different indicators, which refer to the use of fossil, metal, mineral, nuclear, renewable energy, water and land resources. These independent emergy values are then implemented as inputs for DEA computation, thus providing operational emergy-based efficiency scores and, for the inefficient DMUs, target emergy flows (i.e., feasible emergy benchmarks that would turn inefficient DMUs into efficient). The use of the Em + DEA method is exemplified through a case study of wind energy farms. The potential use of CED (cumulative energy demand) and CExD (cumulative exergy demand) indicators as alternative benchmarking criteria to emergy is discussed. The combined use of emergy analysis with DEA is proven to be a valid methodological approach to provide benchmarks oriented towards the optimisation of the life-cycle performance of a set of multiple similar facilities, not being limited to the operational traits of the assessed units. - Highlights: • Combined emergy and DEA method to benchmark multiple resembling entities. • Life-cycle inventory, emergy analysis and DEA as key steps of the Em + DEA method. • Valid ecocentric benchmarking approach proven through a case study of wind farms. • Comparison with life-cycle energy-based benchmarking criteria (CED/CExD + DEA). • Analysts and decision and policy
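
    To make the DEA step concrete, the sketch below solves an input-oriented CCR efficiency model with SciPy's linear-programming routine, using invented emergy-style inputs and a single output. It is a generic CCR formulation under assumed data, not the authors' implementation, and it reduces the paper's seven emergy indicators to three for brevity.

        # Minimal sketch of the DEA step: an input-oriented CCR model scoring
        # each decision making unit (DMU) against the others. Data invented.
        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[4.0, 3.0, 5.0],     # fossil emergy per DMU
                      [2.0, 1.5, 3.0],     # metal emergy per DMU
                      [7.0, 5.0, 9.0]])    # land emergy per DMU
        Y = np.array([[10.0, 9.0, 11.0]])  # output: energy delivered per DMU

        def ccr_efficiency(o):
            """Efficiency theta of DMU o; theta == 1 means CCR-efficient."""
            m, n = X.shape
            s = Y.shape[0]
            c = np.zeros(1 + n)
            c[0] = 1.0                                 # minimise theta
            A_in = np.hstack([-X[:, [o]], X])          # sum lam_j x_ij <= theta * x_io
            A_out = np.hstack([np.zeros((s, 1)), -Y])  # sum lam_j y_rj >= y_ro
            A = np.vstack([A_in, A_out])
            b = np.concatenate([np.zeros(m), -Y[:, o]])
            res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (1 + n))
            return res.x[0]

        for o in range(X.shape[1]):
            print(f"DMU {o}: theta = {ccr_efficiency(o):.3f}")

    Inefficient DMUs (theta < 1) would then receive target flows by scaling their inputs by theta, which mirrors the "target emergy flows" the abstract describes.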

  10. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performances. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  11. Fault detection of a benchmark wind turbine using interval analysis

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Odgaard, Peter Fogh; Bak, Thomas

    2012-01-01

    This paper investigates a state-estimation set-membership approach for fault detection of a benchmark wind turbine. The main challenges in the benchmark are high noise on the wind speed measurement and the nonlinearities in the aerodynamic torque, such that the overall model of the turbine is nonlinear. The method checks the consistency of the measurement with a closed set that is computed based on the past measurements and a model of the system. If the measurement is not consistent with this set, a fault is detected. The result demonstrates the effectiveness of the method for fault detection of the benchmark wind turbine.
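
    A minimal sketch of the consistency test described above, assuming a scalar linear model with bounded disturbance and noise (all values invented); the real benchmark involves a nonlinear multivariable turbine model.

        # Sketch of the set-membership idea (not the authors' code): propagate
        # an interval state estimate one step through a simple model and flag a
        # fault when the measurement falls outside the predicted set, inflated
        # by the known noise bound.
        def predict_interval(x_lo, x_hi, a=0.9, u=1.0, w=0.05):
            """One-step prediction of x' = a*x + u with bounded disturbance |w|."""
            return a * x_lo + u - w, a * x_hi + u + w

        def consistent(y, x_lo, x_hi, noise=0.3):
            """Is the noise-bounded measurement y consistent with the set?"""
            return x_lo - noise <= y <= x_hi + noise

        x_lo, x_hi = 0.0, 0.2
        for k, y in enumerate([1.1, 2.0, 2.7, 5.5]):  # last sample simulates a fault
            x_lo, x_hi = predict_interval(x_lo, x_hi)
            print(f"k={k}: predicted [{x_lo:.2f}, {x_hi:.2f}], y={y}",
                  "OK" if consistent(y, x_lo, x_hi) else "FAULT")
            # a full filter would also contract the set using the measurement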

  12. Implementation of benchmark management in quality assurance audit activities

    International Nuclear Information System (INIS)

    Liu Yongmei

    2008-01-01

    The concept of Benchmark Management is that the practices of the best competitor are taken as the benchmark, the distance between that competitor and one's own institute is analyzed and studied, and effective actions are taken to catch up with and even surpass the competitor. Based on the practice of many years of quality assurance audits, this paper analyzes and rebuilds the whole process of quality assurance audit with the concept of Benchmark Management, in order to improve the level and effect of quality assurance audit activities. (author)

  13. Construction of a Benchmark for the User Experience Questionnaire (UEQ

    Directory of Open Access Journals (Sweden)

    Martin Schrepp

    2017-08-01

    Full Text Available Questionnaires are a cheap and highly efficient tool for achieving a quantitative measure of a product’s user experience (UX). However, it is not always easy to decide if a questionnaire result really shows whether a product satisfies this quality aspect. So a benchmark is useful: it allows comparing the results for one product to a large set of other products. In this paper we describe a benchmark for the User Experience Questionnaire (UEQ), a widely used evaluation tool for interactive products. We also describe how the benchmark can be applied to the quality assurance process for concrete projects.
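
    The mechanics of such a benchmark can be shown in a few lines: a measured scale mean is placed into a category by comparison with cut-offs derived from the reference product set. The cut-off values below are invented for illustration; the actual UEQ benchmark defines empirically derived cut-offs per scale.

        # Toy illustration of mapping a UEQ scale mean (range -3..+3) to a
        # benchmark category. HYPOTHETICAL_CUTOFFS are invented, not the real
        # per-scale values from the published UEQ benchmark.
        HYPOTHETICAL_CUTOFFS = [   # (lower bound of category, label)
            (1.8, "excellent"),
            (1.5, "good"),
            (1.0, "above average"),
            (0.7, "below average"),
        ]

        def benchmark_category(scale_mean):
            """Classify a scale mean against the ordered cut-offs."""
            for lower, label in HYPOTHETICAL_CUTOFFS:
                if scale_mean >= lower:
                    return label
            return "bad"

        print(benchmark_category(1.62))  # -> good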

  14. Benchmarks for targeted alpha therapy for cancer

    International Nuclear Information System (INIS)

    Allen, J.B.

    2011-01-01

    Full text: Targeted alpha therapy (TAT) needs to achieve certain benchmarks if it is to find its way into the clinic. This paper reviews the status of benchmarks for dose normalisation, microdosimetry, response of micrometastases to therapy, maximum tolerance doses and adequate supplies of alpha-emitting radioisotopes. In comparing dose effect for different alpha immunoconjugates (IC), patients and diseases, it is appropriate to normalise dose according to specific factors that affect the efficacy of the treatment. Body weight and body surface area are two commonly used criteria. However, more advanced criteria are required, such as the volume of distribution. Alpha dosimetry presents a special challenge in clinical trials. Monte Carlo calculations can be used to determine specific energies, but these need validation. This problem could be resolved with micronuclei biological dosimetry and mutagenesis studies of radiation damage. While macroscopic disease can be monitored, the impact of therapy on subclinical microscopic disease is a real problem. Magnetic cell separation of cancer cells in the blood with magnetic microspheres coated with the targeting monoclonal antibody could provide the response data. Alpha therapy needs first to establish maximum tolerance doses for practical acceptance. This has been determined with 213Bi-IC for acute myelogenous leukaemia at ∼1 mCi/kg. The maximum tolerance dose has not yet been established for metastatic melanoma, but the efficacious dose for some melanomas is less than 0.3 mCi/kg, and for intra-cavity therapy of GBM it is ∼0.14 mCi/kg for 211At-IC. In the case of Ra-223 for bone cancer, the emission of four alphas with a total energy of 27 MeV results in very high cytotoxicity and an effective dose of only ∼5 μCi/kg. The limited supplies of Ac-225 available after separation from Th-229 are adequate for clinical trials. However, should TAT become a clinical procedure, then new supplies must be found. Accelerator

  15. Sequence History Update Tool

    Science.gov (United States)

    Khanampompan, Teerapat; Gladden, Roy; Fisher, Forest; DelGuercio, Chris

    2008-01-01

    The Sequence History Update Tool performs Web-based sequence statistics archiving for Mars Reconnaissance Orbiter (MRO). Using a single UNIX command, the software takes advantage of sequencing conventions to automatically extract the needed statistics from multiple files. This information is then used to populate a PHP database, which is then seamlessly formatted into a dynamic Web page. This tool replaces a previous tedious and error-prone process of manually editing HTML code to construct a Web-based table. Because the tool manages all of the statistics gathering and file delivery to and from multiple data sources spread across multiple servers, there is also a considerable saving of time and effort. With the Sequence History Update Tool, what previously took minutes is now done in less than 30 seconds, and the tool provides a more accurate archival record of the sequence commanding for MRO.
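
    The pattern the abstract describes — scan files that follow a naming and formatting convention, extract statistics, and load them into a database a web front end can render — might look roughly like the sketch below. The file format, regular expression and schema are hypothetical, not MRO's actual conventions.

        # Rough sketch of the pipeline pattern (hypothetical conventions): scan
        # stats files, pull out per-sequence figures, load them into a table.
        import re, sqlite3, pathlib

        STAT = re.compile(r"^(?P<seq>\w+)\s+commands=(?P<n>\d+)$")  # assumed format

        def harvest(directory):
            for path in pathlib.Path(directory).glob("*.stats"):
                for line in path.read_text().splitlines():
                    m = STAT.match(line)
                    if m:
                        yield m["seq"], int(m["n"])

        db = sqlite3.connect("sequence_history.db")
        db.execute("CREATE TABLE IF NOT EXISTS history (seq TEXT, commands INT)")
        db.executemany("INSERT INTO history VALUES (?, ?)", harvest("stats_dir"))
        db.commit()  # a PHP/Web layer would then SELECT from this table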

  16. Shielding benchmark tests of JENDL-3

    International Nuclear Information System (INIS)

    Kawai, Masayoshi; Hasegawa, Akira; Ueki, Kohtaro; Yamano, Naoki; Sasaki, Kenji; Matsumoto, Yoshihiro; Takemura, Morio; Ohtani, Nobuo; Sakurai, Kiyoshi.

    1994-03-01

    The integral test of neutron cross sections for major shielding materials in JENDL-3 has been performed by analyzing various shielding benchmark experiments. For the fission-like neutron source problem, the following experiments are analyzed: (1) ORNL Broomstick experiments for oxygen, iron and sodium, (2) ASPIS deep penetration experiments for iron, (3) ORNL neutron transmission experiments for iron, stainless steel, sodium and graphite, (4) KfK leakage spectrum measurements from iron spheres, (5) RPI angular neutron spectrum measurements in a graphite block. For the D-T neutron source problem, the following two experiments are analyzed: (6) LLNL leakage spectrum measurements from spheres of iron and graphite, and (7) JAERI-FNS angular neutron spectrum measurements on beryllium and graphite slabs. Analyses have been performed using the radiation transport codes: ANISN(1D Sn), DIAC(1D Sn), DOT3.5(2D Sn) and MCNP(3D point Monte Carlo). The group cross sections for Sn transport calculations are generated with the code systems PROF-GROUCH-G/B and RADHEAT-V4. The point-wise cross sections for MCNP are produced with NJOY. For comparison, the analyses with JENDL-2 and ENDF/B-IV have also been carried out. The calculations using JENDL-3 show overall agreement with the experimental data as well as those with ENDF/B-IV. Particularly, JENDL-3 gives better results than JENDL-2 and ENDF/B-IV for sodium. It has been concluded that JENDL-3 is very applicable for fission and fusion reactor shielding analyses. (author)

  17. Isprs Benchmark for Multi-Platform Photogrammetry

    Science.gov (United States)

    Nex, F.; Gerke, M.; Remondino, F.; Przybilla, H.-J.; Bäumker, M.; Zurhorst, A.

    2015-03-01

    Airborne high-resolution oblique imagery systems and RPAS/UAVs are very promising technologies that will keep influencing the development of geomatics in the coming years, closing the gap between terrestrial and classical aerial acquisitions. These two platforms are also a promising solution for National Mapping and Cartographic Agencies (NMCA) as they allow deriving complementary mapping information. Although the interest in the registration and integration of aerial and terrestrial data is constantly increasing, only limited work has truly been performed on this topic. Several investigations still need to be undertaken concerning the ability of algorithms to perform automatic co-registration, accurate point cloud generation and feature extraction from multi-platform image data. One of the biggest obstacles is the non-availability of reliable and free datasets to test and compare new algorithms and procedures. The Scientific Initiative "ISPRS benchmark for multi-platform photogrammetry", run in collaboration with EuroSDR, aims at collecting and sharing state-of-the-art multi-sensor data (oblique airborne, UAV-based and terrestrial images) over an urban area. These datasets are used to assess different algorithms and methodologies for image orientation and dense matching. As ground truth, Terrestrial Laser Scanning (TLS), Aerial Laser Scanning (ALS) as well as topographic networks and GNSS points were acquired to compare 3D coordinates on check points (CPs) and evaluate cross sections and residuals on generated point cloud surfaces. In this paper, the acquired data, the pre-processing steps, the evaluation procedures as well as some preliminary results achieved with commercial software are presented.
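
    The check-point evaluation described above reduces to coordinate residuals and their RMSE; a minimal sketch (the coordinates below are placeholders, not benchmark data):

```python
# Minimal check-point (CP) accuracy assessment as described above:
# residuals between estimated and surveyed 3D coordinates, with per-axis
# and 3D RMSE. The coordinates are placeholders, not benchmark data.
import numpy as np

estimated = np.array([[100.02, 200.01, 50.00],
                      [110.00, 205.03, 51.02]])
ground_truth = np.array([[100.00, 200.00, 50.01],
                         [110.01, 205.00, 51.00]])

residuals = estimated - ground_truth
rmse_xyz = np.sqrt((residuals ** 2).mean(axis=0))        # per-axis RMSE
rmse_3d = np.sqrt((residuals ** 2).sum(axis=1).mean())   # 3D RMSE

print("RMSE x/y/z:", rmse_xyz, "RMSE 3D:", rmse_3d)
```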

  18. Benchmarks and statistics of entanglement dynamics

    International Nuclear Information System (INIS)

    Tiersch, Markus

    2009-01-01

    In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a "reference trajectory", similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)
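
    For a two-qubit state with one subsystem passing through a channel $\Lambda$, the product form described above is commonly written as the entanglement evolution equation below (stated in terms of the concurrence $C$ from the general literature on this result, not quoted from the thesis):

```latex
% Entanglement evolution under a one-sided channel \Lambda acting on one
% qubit of an initially pure two-qubit state |\chi>: the concurrence C
% factorizes into the initial entanglement times the entanglement of the
% evolved maximally entangled state |\phi^{+}>.
\[
  C\bigl[(\mathbb{1}\otimes\Lambda)\,|\chi\rangle\langle\chi|\bigr]
  \;=\;
  C\bigl[\,|\chi\rangle\langle\chi|\,\bigr]\;
  C\bigl[(\mathbb{1}\otimes\Lambda)\,|\phi^{+}\rangle\langle\phi^{+}|\bigr]
\]
```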

  19. Benchmarks and statistics of entanglement dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tiersch, Markus

    2009-09-04

    In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a "reference trajectory", similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)

  20. Ontario Hydro's DSP update

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Ontario Hydro's Demand/Supply Plan (DSP), the 25-year plan which was submitted in December 1989, is currently being reviewed by the Environmental Assessment Board (EAB). Since 1989 there have been several changes which have led Ontario Hydro to update the original Demand/Supply Plan. This information sheet gives a quick overview of what has changed and how Ontario Hydro is adapting to that change.

  1. Comparing, optimizing, and benchmarking quantum-control algorithms in a unifying programming framework

    International Nuclear Information System (INIS)

    Machnes, S.; Sander, U.; Glaser, S. J.; Schulte-Herbrueggen, T.; Fouquieres, P. de; Gruslys, A.; Schirmer, S.

    2011-01-01

    For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. It is a typical task amenable to numerical optimal control to turn the time course of pulses, i.e., piecewise constant control amplitudes, iteratively into an optimized shape. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods, which update all controls concurrently, and Krotov-type methods, which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient MATLAB-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods as well as subspace choices. Open-source code including examples is made available at http://qlib.info.
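
    As an illustration of the concurrent-update strategy attributed to GRAPE above, here is a minimal sketch that optimizes a piecewise-constant pulse by gradient descent; it uses finite-difference gradients rather than GRAPE's analytic ones, and the Hamiltonians, target gate, and step sizes are assumptions for illustration — this is not DYNAMO code:

```python
# Minimal sketch of a GRAPE-style CONCURRENT update of all piecewise-
# constant control amplitudes, in contrast with sequential Krotov-type
# updates. Finite-difference gradients are used for brevity; the drift
# and control Hamiltonians, target gate, and step sizes are illustrative.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H0, Hc = 0.5 * sz, sx                  # drift and control Hamiltonians
U_target = expm(-1j * np.pi / 2 * sx)  # an x-rotation target gate
n_steps, dt, lr = 20, 0.1, 0.5

def total_propagator(u):
    U = np.eye(2, dtype=complex)
    for uk in u:                       # piecewise-constant time slices
        U = expm(-1j * dt * (H0 + uk * Hc)) @ U
    return U

def infidelity(u):
    overlap = np.trace(U_target.conj().T @ total_propagator(u)) / 2
    return 1 - abs(overlap) ** 2

u = 0.1 * np.ones(n_steps)             # initial pulse guess
for _ in range(200):
    grad = np.zeros(n_steps)
    for k in range(n_steps):           # finite-difference gradient
        du = np.zeros(n_steps)
        du[k] = 1e-6
        grad[k] = (infidelity(u + du) - infidelity(u - du)) / 2e-6
    u -= lr * grad                     # concurrent update of ALL slices
print("final infidelity:", infidelity(u))
```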

  2. THE IMPORTANCE OF BENCHMARKING IN MAKING MANAGEMENT DECISIONS

    Directory of Open Access Journals (Sweden)

    Adriana-Mihaela IONESCU

    2016-06-01

    Full Text Available Launching a new business or project leads managers to make decisions and choose strategies that they will then apply in their company. Most often, they make decisions only on instinct, but there are also companies that use benchmarking studies. Benchmarking is a highly effective management tool and is useful in the new competitive environment that has emerged from the need of organizations to constantly improve their performance in order to be competitive. Using this benchmarking process, organizations try to find the best practices applied in a business, learn from famous leaders and identify ways to increase their performance and competitiveness. Thus, managers gather information about market trends and about competitors, especially about the leaders in the field, and use this information to find ideas and set guidelines for development. Benchmarking studies are often used in commerce, real estate, industry and high-tech software businesses.

  3. Issues in benchmarking human reliability analysis methods: A literature review

    International Nuclear Information System (INIS)

    Boring, Ronald L.; Hendrickson, Stacey M.L.; Forester, John A.; Tran, Tuan Q.; Lois, Erasmia

    2010-01-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  4. Issues in benchmarking human reliability analysis methods : a literature review.

    Energy Technology Data Exchange (ETDEWEB)

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  5. Calculation of WWER-440 nuclide benchmark (CB2)

    International Nuclear Information System (INIS)

    Prodanova, R

    2005-01-01

    The present paper shows the results obtained at INRNE, Sofia, Bulgaria, for the benchmark task announced by L. Markova at the sixth Symposium of AER, Kirkkonummi, Finland, 1996. (Authors)

  6. Benchmarking HRA methods against different NPP simulator data

    International Nuclear Information System (INIS)

    Petkov, Gueorgui; Filipov, Kalin; Velev, Vladimir; Grigorov, Alexander; Popov, Dimiter; Lazarov, Lazar; Stoichev, Kosta

    2008-01-01

    The paper presents both international and Bulgarian experience in assessing HRA methods and their underlying models, and approaches for their validation and verification by benchmarking HRA methods against different NPP simulator data. The organization, status, methodology and outlook of the studies are described.

  7. Danish calculations of the NEACRP pin-power benchmark

    International Nuclear Information System (INIS)

    Hoejerup, C.F.

    1994-01-01

    This report describes calculations performed for the NEACRP pin-power benchmark. The calculations are made with the code NEM2D, a diffusion theory code based on the nodal expansion method. (au) (15 tabs., 15 ills., 5 refs.)

  8. The International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    International Nuclear Information System (INIS)

    Briggs, J.B.

    2003-01-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organisation for Economic Cooperation and Development (OECD) - Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Israel, Spain, and Brazil are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments.' The 2003 Edition of the Handbook contains benchmark model specifications for 3070 critical or subcritical configurations that are intended for validating computer codes that calculate effective neutron multiplication and for testing basic nuclear data. (author)

  9. Airport charges and government levies: benchmark of six European airports: final report

    NARCIS (Netherlands)

    Pelger, M.; Veldhuis, J.

    2006-01-01

    The Ministry of Transport, Public Works and Water Management, Directorate-General for Transport and Aviation, asked SEO Economic Research, Amsterdam Aviation Economics (AAE) cluster, to update the quantitative benchmark of airport charges and government levies that we carried out in 2004. The

  10. EBR-II Reactor Physics Benchmark Evaluation Report

    Energy Technology Data Exchange (ETDEWEB)

    Pope, Chad L. [Idaho State Univ., Pocatello, ID (United States); Lum, Edward S [Idaho State Univ., Pocatello, ID (United States); Stewart, Ryan [Idaho State Univ., Pocatello, ID (United States); Byambadorj, Bilguun [Idaho State Univ., Pocatello, ID (United States); Beaulieu, Quinton [Idaho State Univ., Pocatello, ID (United States)

    2017-12-28

    This report provides a reactor physics benchmark evaluation with associated uncertainty quantification for the critical configuration of the April 1986 Experimental Breeder Reactor II Run 138B core configuration.

  11. A Framework for Systematic Benchmarking of Monitoring and Diagnostic Systems

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper, we present an architecture and a formal framework to be used for systematic benchmarking of monitoring and diagnostic systems and for producing...

  12. Facility Energy Performance Benchmarking in a Data-Scarce Environment

    Science.gov (United States)

    2017-08-01

    ERDC/CERL TR-17-24 (August 2017), Military Facilities Engineering Technology: Facility Energy Performance Benchmarking in a Data-Scarce Environment. Current federal, Department of Defense (DoD), and Army energy-efficiency goals

  13. Parton Shower Uncertainties with Herwig 7: Benchmarks at Leading Order

    CERN Document Server

    Bellm, Johannes; Plätzer, Simon; Schichtel, Peter; Siódmok, Andrzej

    2016-01-01

    We perform a detailed study of the sources of perturbative uncertainty in parton shower predictions within the Herwig 7 event generator. We benchmark two rather different parton shower algorithms, based on angular-ordered and dipole-type evolution, against each other. We deliberately choose leading order plus parton shower as the benchmark setting to identify a controllable set of uncertainties. This will enable us to reliably assess improvements by higher-order contributions in a follow-up work.

  14. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    Science.gov (United States)

    This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for dissolved salts as measured by conductivity in Central Appalachian streams using data from West Virginia and Kentucky. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.

  15. Benchmark of Deep Learning Models on Large Healthcare MIMIC Datasets

    OpenAIRE

    Purushotham, Sanjay; Meng, Chuizheng; Che, Zhengping; Liu, Yan

    2017-01-01

    Deep learning models (aka deep neural networks) have revolutionized many fields, including computer vision, natural language processing and speech recognition, and are being increasingly used in clinical healthcare applications. However, few works exist which have benchmarked the performance of the deep learning models with respect to the state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present the benchmarking res...

  16. Taking the Battle Upstream: Towards a Benchmarking Role for NATO

    Science.gov (United States)

    2012-09-01

    performance management. The business benchmarking methodology was pioneered in the late 1980s by Robert C. Camp at Xerox. Up to that point... DMG). TNO Report on Defense Benchmarking: A Double Recommendation. The TNO report was delivered in late 2006. It contained the analysis of the State... rounder graphs of Australia and the United Kingdom. This distinction between the two Anglo-Saxon countries and the others is interesting because there

  17. Benchmarking GJ436b for JWST

    Science.gov (United States)

    Parmentier, Vivien; Stevenson, Kevin; Crossfield, Ian; Morley, Caroline; Fortney, Jonathan; Showman, Adam; Lewis, Nikole; Line, Mike

    2017-10-01

    GJ436b is a slightly eccentric, Neptune-size planet with an equilibrium temperature of approximately 770 K. It is the only Neptune-size planet with a thermal emission measurement. With the coming JWST GTO observations of its emission spectrum, GJ436b will become a benchmark object of the population of Neptune-size planets that will be discovered by TESS and characterized by JWST in the coming years. The current set of 19 secondary eclipses observed by Spitzer points toward a metal-rich, well-mixed, tidally heated atmosphere in disequilibrium chemistry. However, no self-consistent forward models are currently able to fit the dayside spectrum of the planet, whereas retrieval models lead to solutions that are inconsistent with the observed planet density. Clearly, some piece of the puzzle is missing to understand the atmospheric properties of this planet. Although the coming JWST observations will likely improve our understanding of this planet, they won't be able to break the degeneracies between metallicity, internal flux and energy redistribution. We propose to observe a full phase curve of GJ436b at 3.6 microns. We will obtain a measurement of the nightside flux of GJ436b at 3.6 microns. Combined with the already observed 8 micron phase curve, we will obtain the first low-resolution spectrum of the nightside of a Neptune-size exoplanet. By comparing the nightside flux at 3.6 and 8 microns, we will be able to place constraints on the tidal heating and the metallicity of GJ436b that will be complementary to the dayside spectrum that will be obtained with JWST. As seen with the example of hot Jupiters, for which much more data are available, measurements of the nightside spectrum are fundamental to understand the planet's atmosphere as a whole and correctly interpret the dayside emission. As a consequence, the proposed observation is crucial for the interpretation of the coming JWST observations. As a secondary goal, our observation should be able to confirm the

  18. Sex definitions and gender practices. An update from Australia.

    Science.gov (United States)

    Cregan, Kate

    2014-07-01

    In recent years the Australian parliament has been considering the rights to protection from discrimination of intersex and gender identity disorder (GID) people. In 2013 such protections were made law in the amendment to the Sex Discrimination Act 1984, which in turn has influenced Senate inquiries into the medical treatment of intersex people. This year's Australian report describes the purview and the potential ramifications of the inquiry of the Senate Standing Committees on Community Affairs, published in October 2013, into the involuntary or coerced sterilization of intersex people in Australia.

  19. Leprosy. An update: definition, pathogenesis, classification, diagnosis, and treatment.

    Science.gov (United States)

    Eichelmann, K; González González, S E; Salas-Alanis, J C; Ocampo-Candiani, J

    2013-09-01

    Leprosy is a chronic granulomatous disease caused by the bacillus Mycobacterium leprae. It primarily affects the skin and peripheral nerves and is still endemic in various regions of the world. Clinical presentation depends on the patient's immune status at the time of infection and during the course of the disease. Leprosy is associated with disability and marginalization. Diagnosis is clinical and is made when the patient has at least 1 of the following cardinal signs specified by the World Health Organization: hypopigmented or erythematous macules with sensory loss; thickened peripheral nerves; or positive acid-fast skin smear or skin biopsy with loss of adnexa at affected sites. Leprosy is treated with a multidrug combination of rifampicin, clofazimine, and dapsone. Two main regimens are used depending on whether the patient has paucibacillary or multibacillary disease. Copyright © 2011 Elsevier España, S.L. and AEDV. All rights reserved.

  20. Update on Leukodystrophies: A Historical Perspective and Adapted Definition

    NARCIS (Netherlands)

    Kevelam, Sietske H.; Steenweg, Marjan E.; Srivastava, Siddharth; Helman, Guy; Naidu, Sakkubai; Schiffmann, Raphael; Blaser, Susan; Vanderver, Adeline; Wolf, Nicole I.; van der Knaap, Marjo S.

    2016-01-01

    Leukodystrophies were defined in the 1980s as progressive genetic disorders primarily affecting myelin of the central nervous system. At that time, a limited number of such disorders and no associated gene defects were known. The majority of the leukodystrophy patients remained without a specific

  1. International Criticality Safety Benchmark Evaluation Project (ICSBEP) - ICSBEP 2015 Handbook

    International Nuclear Information System (INIS)

    Bess, John D.

    2015-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy (DOE). The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross-section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span approximately 69000 pages and contain 567 evaluations with benchmark specifications for 4874 critical, near-critical or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points for each, and 207 configurations that have been categorised as fundamental physics measurements that are relevant to criticality safety applications. New to the handbook are benchmark specifications for neutron activation foil and thermoluminescent dosimeter measurements performed at the SILENE critical assembly in Valduc, France as part of a joint venture in 2010 between the US DOE and the French Alternative Energies and Atomic Energy Commission (CEA). A photograph of this experiment is shown on the front cover. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these

  2. Web Browser Security Update Effectiveness

    Science.gov (United States)

    Duebendorfer, Thomas; Frei, Stefan

    We analyze the effectiveness of different Web browser update mechanisms on various operating systems, from Google Chrome's silent update mechanism to Opera's update requiring a full re-installation. We use anonymized logs from Google's worldwide distributed Web servers. An analysis of the logged HTTP user-agent strings that Web browsers report when requesting any Web page is used to measure the daily browser version shares in active use. To the best of our knowledge, this is the first global-scale measurement of Web browser update effectiveness comparing four different Web browser update strategies including Google Chrome. Our measurements prove that silent updates and little dependency on the underlying operating system are most effective at getting users of Web browsers to surf the Web with the latest browser version.
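
    The core of the measurement described above is binning user-agent strings by day and browser version and computing each version's share of active use; a minimal sketch (the log lines and the Chrome version regex are hypothetical stand-ins for real server logs):

```python
# Minimal sketch of the measurement described above: count browser
# versions per day from HTTP user-agent strings and report each
# version's share of active use. The log lines and the Chrome-version
# regex are hypothetical stand-ins for real server logs.
import re
from collections import Counter, defaultdict

UA_RE = re.compile(r"Chrome/(\d+\.\d+)")

log_lines = [
    ("2009-06-01", "Mozilla/5.0 ... Chrome/2.0 Safari/530"),
    ("2009-06-01", "Mozilla/5.0 ... Chrome/1.0 Safari/525"),
    ("2009-06-01", "Mozilla/5.0 ... Chrome/2.0 Safari/530"),
]

shares = defaultdict(Counter)
for day, ua in log_lines:
    m = UA_RE.search(ua)
    if m:
        shares[day][m.group(1)] += 1

for day, counts in shares.items():
    total = sum(counts.values())
    for version, n in counts.most_common():
        print(f"{day} Chrome {version}: {n / total:.0%} of active users")
```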

  3. Resection of complex pancreatic injuries: Benchmarking postoperative complications using the Accordion classification.

    Science.gov (United States)

    Krige, Jake E; Jonas, Eduard; Thomson, Sandie R; Kotze, Urda K; Setshedi, Mashiko; Navsaria, Pradeep H; Nicol, Andrew J

    2017-03-27

    To benchmark severity of complications using the Accordion Severity Grading System (ASGS) in patients undergoing operation for severe pancreatic injuries. A prospective institutional database of 461 patients with pancreatic injuries treated from 1990 to 2015 was reviewed. One hundred and thirty patients with AAST grade 3, 4 or 5 pancreatic injuries underwent resection (pancreatoduodenectomy, n = 20, distal pancreatectomy, n = 110), including 30 who had an initial damage control laparotomy (DCL) and later definitive surgery. AAST injury grades, type of pancreatic resection, need for DCL and incidence and ASGS severity of complications were assessed. Uni- and multivariate logistic regression analysis was applied. Overall 238 complications occurred in 95 (73%) patients, of which 73% were ASGS grades 3-6. Nineteen patients (14.6%) died. Patients more likely to have complications after pancreatic resection were older, had a revised trauma score (RTS) < 7.8, were shocked on admission, had grade 5 injuries of the head and neck of the pancreas with associated vascular and duodenal injuries, required a DCL, received a larger blood transfusion, had a pancreatoduodenectomy (PD) and repeat laparotomies. Applying univariate logistic regression analysis, mechanism of injury, RTS < 7.8, shock on admission, DCL, increasing AAST grade and type of pancreatic resection were significant variables for complications. Multivariate logistic regression analysis, however, showed that only age and type of pancreatic resection (PD) were significant. This ASGS-based study benchmarked postoperative morbidity after pancreatic resection for trauma. The detailed outcome analysis provided may serve as a reference for future institutional comparisons.

  4. Astrophysics Update 2

    CERN Document Server

    Mason, John W

    2006-01-01

    "Astrophysics Updates" is intended to serve the information needs of professional astronomers and postgraduate students about areas of astronomy, astrophysics and cosmology that are rich and active research spheres. Observational methods and the latest results of astronomical research are presented as well as their theoretical foundations and interrelations. The contributed commissioned articles are written by leading exponents in a format that will appeal to professional astronomers and astrophysicists who are interested in topics outside their own specific areas of research. This collection of timely reviews may also attract the interest of advanced amateur astronomers seeking scientifically rigorous coverage.

  5. Radiotherapy: An Update

    Directory of Open Access Journals (Sweden)

    Vikrant Kasat

    2010-01-01

    Full Text Available Radiotherapy is the art of using ionizing radiation to destroy malignant cells while minimizing damage to normal tissue. Radiotherapy has become a standard treatment option for a wide range of malignancies. Several new imaging techniques, both anatomical and functional, are currently being evaluated as well as practiced for the treatment planning of cancer. These recent developments have allowed radiation oncologists to escalate the dose of radiation delivered to tumors while minimizing the dose delivered to surrounding normal tissue. In this update, we outline the important aspects of radiotherapy.

  6. Context updates are hierarchical

    Directory of Open Access Journals (Sweden)

    Anton Karl Ingason

    2016-10-01

    Full Text Available This squib studies the order in which elements are added to the shared context of interlocutors in a conversation. It focuses on context updates within one hierarchical structure and argues that structurally higher elements are entered into the context before lower elements, even if the structurally higher elements are pronounced after the lower elements. The crucial data are drawn from a comparison of relative clauses in two head-initial languages, English and Icelandic, and two head-final languages, Korean and Japanese. The findings have consequences for any theory of a dynamic semantics.

  7. Exploring the path to success : A review of the Strategic IT benchmarking literature

    NARCIS (Netherlands)

    Ebner, Katharina; Urbach, Nils; Mueller, Benjamin

    IT organizations use strategic IT benchmarking (SITBM) to revise IT strategies or perform internal marketing. Despite benchmarking's long tradition, many strategic IT benchmarking initiatives do not reveal the desired outcomes. The vast body of knowledge on benchmarking and IT management does not

  8. Development of the Croatian HR Benchmarks List and its Comparison with the World-Approved Ones

    OpenAIRE

    Pološki Vokić, Nina; Vidović, Maja

    2004-01-01

    Human resource benchmarking has become increasingly important as organizations strive for better performance. Observing, adapting and reapplying best HR practices from others became the essential management tool. The article defines HR benchmarks appropriate and significant for the Croatian business environment, which were predominantly compensation indicators. In particular, the research revealed that Croatian HR benchmarks are different from HR benchmarks used in developed countries. Namely...

  9. Resection of complex pancreatic injuries: Benchmarking postoperative complications using the Accordion classification

    Science.gov (United States)

    Krige, Jake E; Jonas, Eduard; Thomson, Sandie R; Kotze, Urda K; Setshedi, Mashiko; Navsaria, Pradeep H; Nicol, Andrew J

    2017-01-01

    AIM To benchmark severity of complications using the Accordion Severity Grading System (ASGS) in patients undergoing operation for severe pancreatic injuries. METHODS A prospective institutional database of 461 patients with pancreatic injuries treated from 1990 to 2015 was reviewed. One hundred and thirty patients with AAST grade 3, 4 or 5 pancreatic injuries underwent resection (pancreatoduodenectomy, n = 20, distal pancreatectomy, n = 110), including 30 who had an initial damage control laparotomy (DCL) and later definitive surgery. AAST injury grades, type of pancreatic resection, need for DCL and incidence and ASGS severity of complications were assessed. Uni- and multivariate logistic regression analysis was applied. RESULTS Overall 238 complications occurred in 95 (73%) patients of which 73% were ASGS grades 3-6. Nineteen patients (14.6%) died. Patients more likely to have complications after pancreatic resection were older, had a revised trauma score (RTS) < 7.8, were shocked on admission, had grade 5 injuries of the head and neck of the pancreas with associated vascular and duodenal injuries, required a DCL, received a larger blood transfusion, had a pancreatoduodenectomy (PD) and repeat laparotomies. CONCLUSION This ASGS-based study benchmarked postoperative morbidity after pancreatic resection for trauma. The detailed outcome analysis provided may serve as a reference for future institutional comparisons. PMID:28396721

  10. Self-shielding models of MICROX-2 code: Review and updates

    International Nuclear Information System (INIS)

    Hou, J.; Choi, H.; Ivanov, K.N.

    2014-01-01

    Highlights: • The MICROX-2 code has been improved to expand its application to advanced reactors. • New fine-group cross section libraries based on ENDF/B-VII have been generated. • Resonance self-shielding and spatial self-shielding models have been improved. • The improvements were assessed by a series of benchmark calculations against MCNPX. - Abstract: MICROX-2 is a transport theory code that solves the neutron slowing-down and thermalization equations of a two-region lattice cell. The MICROX-2 code has been updated to expand its application to advanced reactor concepts and fuel cycle simulations, including generation of new fine-group cross section libraries based on ENDF/B-VII. In continuation of previous work, the MICROX-2 methods are reviewed and updated in this study, focusing on its resonance self-shielding and spatial self-shielding models for neutron spectrum calculations. The improvement of the self-shielding method was assessed by a series of benchmark calculations against the Monte Carlo code, using homogeneous and heterogeneous pin cell models. The results show that the implementation of the updated self-shielding models is correct and that the accuracy of the physics calculation is improved. Compared to the existing models, the updates reduced the prediction error of the infinite multiplication factor by ∼0.1% and ∼0.2% for the homogeneous and heterogeneous pin cell models, respectively.
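
    For scale, a ∼0.1% relative error in the infinite multiplication factor corresponds to roughly 100 pcm of reactivity; a back-of-envelope conversion (the k values below are illustrative, not from the paper):

```python
# Back-of-envelope conversion of a relative k-infinity error to pcm
# (per cent mille, 1e-5). The k values are illustrative, not from the paper.
def k_error_pcm(k_calc: float, k_ref: float) -> float:
    """Reactivity difference between two multiplication factors, in pcm."""
    return (1.0 / k_ref - 1.0 / k_calc) * 1e5

# A 0.1% overprediction of k = 1.000 corresponds to roughly 100 pcm:
print(f"{k_error_pcm(1.001, 1.000):.0f} pcm")
```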

  11. Energy benchmarking for shopping centers in Gulf Coast region

    International Nuclear Information System (INIS)

    Juaidi, Adel; AlFaris, Fadi; Montoya, Francisco G.; Manzano-Agugliaro, Francisco

    2016-01-01

    The building sector consumes a significant amount of energy worldwide (up to 40% of the total global energy); moreover, by the year 2030 the consumption is expected to increase by 50%. One of the reasons is that the performance of buildings and their components degrades over the years. In recent years, energy benchmarking for government office buildings, large-scale public buildings and large commercial buildings has been one of the key energy-saving projects for promoting the development of building energy efficiency and sustainable energy savings in Gulf Cooperation Council (GCC) countries. Benchmarking would increase the purchase of energy-efficient equipment, reducing energy bills, CO2 emissions and conventional air pollution. This paper focuses on energy benchmarking for shopping centers in the Gulf Coast Region. In addition, this paper analyzes a sample of shopping center data in the Gulf Coast Region (Dubai, Ajman, Sharjah, Oman and Bahrain). It aims to develop a benchmark for these shopping centers by highlighting the status of energy consumption performance. This research will support the sustainability movement in the Gulf area by classifying the shopping centers into Poor, Usual and Best Practices in terms of energy efficiency. According to the benchmarking analysis in this paper, the best-practice shopping centers in the Gulf Coast Region are the buildings that consume less than 810 kWh/m²/yr, whereas the poor-practice buildings are the centers that consume more than 1439 kWh/m²/yr. The conclusions of this work can be used as a reference for benchmarking shopping centres with a similar climate. - Highlights: •The energy consumption data of shopping centers in the Gulf Coast Region were gathered. •A benchmark of energy consumption for the public areas of the shopping centers in the Gulf Coast Region was developed. •The shopping centers with usual practice in the region lie between 810 kWh/m²/yr and 1439 kWh/m²/yr.
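
    The three-band classification quoted above reduces to a simple threshold rule; a minimal sketch using the 810 and 1439 kWh/m²/yr cut-offs from the abstract:

```python
# Three-band energy benchmark for shopping centers using the annual
# energy-use-intensity cut-offs quoted in the abstract (kWh/m2/yr).
def classify_center(eui_kwh_per_m2_yr: float) -> str:
    if eui_kwh_per_m2_yr < 810:
        return "Best Practice"
    if eui_kwh_per_m2_yr <= 1439:
        return "Usual Practice"
    return "Poor Practice"

for eui in (700, 1100, 1600):
    print(eui, "->", classify_center(eui))
```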

  12. H.B. Robinson-2 pressure vessel benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Remec, I.; Kam, F.B.K.

    1998-02-01

    The H. B. Robinson Unit 2 Pressure Vessel Benchmark (HBR-2 benchmark) is described and analyzed in this report. Analysis of the HBR-2 benchmark can be used as partial fulfillment of the requirements for the qualification of the methodology for calculating neutron fluence in pressure vessels, as required by the U.S. Nuclear Regulatory Commission Regulatory Guide DG-1053, Calculational and Dosimetry Methods for Determining Pressure Vessel Neutron Fluence. Section 1 of this report describes the HBR-2 benchmark and provides all the dimensions, material compositions, and neutron source data necessary for the analysis. The measured quantities, to be compared with the calculated values, are the specific activities at the end of fuel cycle 9. The characteristic feature of the HBR-2 benchmark is that it provides measurements on both sides of the pressure vessel: in the surveillance capsule attached to the thermal shield and in the reactor cavity. In section 2, the analysis of the HBR-2 benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed with three multigroup libraries based on ENDF/B-VI: BUGLE-93, SAILOR-95 and BUGLE-96. The average ratio of the calculated-to-measured specific activities (C/M) for the six dosimeters in the surveillance capsule was 0.90 ± 0.04 for all three libraries. The average C/Ms for the cavity dosimeters (without the neptunium dosimeter) were 0.89 ± 0.10, 0.91 ± 0.10, and 0.90 ± 0.09 for the BUGLE-93, SAILOR-95 and BUGLE-96 libraries, respectively. It is expected that agreement of the calculations with the measurements similar to that obtained in this research should typically be observed when the discrete-ordinates method and ENDF/B-VI libraries are used for the HBR-2 benchmark analysis.
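
    The quoted C/M figures are means and standard deviations over per-dosimeter ratios; a minimal sketch of that bookkeeping (the ratio values below are placeholders, not the HBR-2 data):

```python
# Mean and standard deviation of calculated-to-measured (C/M) specific
# activity ratios, as quoted for the HBR-2 dosimeters. The ratios below
# are placeholders, not the actual benchmark data.
import statistics

cm_ratios = [0.88, 0.92, 0.86, 0.93, 0.91, 0.90]  # one ratio per dosimeter

mean = statistics.mean(cm_ratios)
stdev = statistics.stdev(cm_ratios)
print(f"C/M = {mean:.2f} +/- {stdev:.2f}")
```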

  13. Benchmarking: More Aspects of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Ravindrudu, Rahul [Iowa State Univ., Ames, IA (United States)

    2004-01-01

    The original HPL algorithm assumes that all data fit entirely in main memory. This assumption obviously gives good performance because of the absence of disk I/O. However, not all applications can fit their entire data in memory. Applications that require a fair amount of I/O to move data between main memory and secondary storage are more indicative of the usage of a Massively Parallel Processor (MPP) system. In this scenario, a well-designed I/O architecture plays a significant part in the performance of the MPP system on regular jobs, and this is not represented in the current benchmark. The modified HPL algorithm is intended as a step toward filling this void. The most important factor in the performance of out-of-core algorithms is the actual I/O operations performed and their efficiency in transferring data between main memory and disk. Various methods for performing I/O operations were introduced in the report. The I/O method to use depends on the design of the out-of-core algorithm; conversely, the performance of the out-of-core algorithm is affected by the choice of I/O operations. This implies that good performance is achieved when I/O efficiency is closely tied to the out-of-core algorithm, so out-of-core algorithms must be designed with I/O in mind from the start. It is easily observed in the timings for the various plots that I/O plays a significant part in the overall execution time. This leads to an important conclusion: retrofitting an existing code may not be the best choice. The right-looking algorithm selected for the LU factorization is a recursive algorithm and performs well when the entire dataset is in memory. At each stage of the loop the entire trailing submatrix is read into memory panel by panel, which gives a polynomial number of I/O reads and writes. If the left-looking algorithm had been selected for the main loop, the number of I/O operations involved would be linear in the number of columns. This is due to the data access
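
    To make the right-looking panel structure concrete, here is a minimal in-core blocked LU sketch (no pivoting, so the test matrix is made diagonally dominant). It illustrates only the loop structure whose trailing-submatrix update the report's out-of-core variant drives from disk, panel by panel; it is not the modified HPL code:

```python
# Minimal right-looking blocked LU factorization (no pivoting). After
# each panel is factored, the ENTIRE trailing submatrix is updated --
# exactly the block an out-of-core variant must re-read from disk on
# every outer iteration. Illustrative sketch only.
import numpy as np

def panel_lu(A, k, kb):
    """Unblocked LU of the panel A[k:, k:kb], in place."""
    for j in range(k, kb):
        A[j + 1:, j] /= A[j, j]
        A[j + 1:, j + 1:kb] -= np.outer(A[j + 1:, j], A[j, j + 1:kb])

def blocked_lu(A, b):
    n = A.shape[0]
    for k in range(0, n, b):
        kb = min(k + b, n)
        panel_lu(A, k, kb)                       # 1. factor current panel
        if kb < n:
            # 2. triangular solve for the row block: U12 = L11^{-1} A12
            L11 = np.tril(A[k:kb, k:kb], -1) + np.eye(kb - k)
            A[k:kb, kb:] = np.linalg.solve(L11, A[k:kb, kb:])
            # 3. right-looking update of the whole trailing submatrix
            A[kb:, kb:] -= A[kb:, k:kb] @ A[k:kb, kb:]

rng = np.random.default_rng(0)
M = rng.random((8, 8)) + 8 * np.eye(8)  # diagonally dominant: safe unpivoted
A = M.copy()
blocked_lu(A, b=4)
L = np.tril(A, -1) + np.eye(8)
U = np.triu(A)
print("max |LU - M| =", abs(L @ U - M).max())
```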

  14. Quantum Chemical Benchmarking, Validation, and Prediction of Acidity Constants for Substituted Pyridinium Ions and Pyridinyl Radicals.

    Science.gov (United States)

    Keith, John A; Carter, Emily A

    2012-09-11

    Sensibly modeling (photo)electrocatalytic reactions involving proton and electron transfer with computational quantum chemistry requires accurate descriptions of protonated, deprotonated, and radical species in solution. Procedures to do this are generally nontrivial, especially in cases that involve radical anions that are unstable in the gas phase. Recently, pyridinium and the corresponding reduced neutral radical have been postulated as key catalysts in the reduction of CO2 to methanol. To assess practical methodologies to describe the acid/base chemistry of these species, we employed density functional theory (DFT) in tandem with implicit solvation models to calculate acidity constants for 22 substituted pyridinium cations and their corresponding pyridinyl radicals in water solvent. We first benchmarked our calculations against experimental pyridinium deprotonation energies in both gas and aqueous phases. DFT with hybrid exchange-correlation functionals provides chemical accuracy for gas-phase data and allows absolute prediction of experimental pKas with unsigned errors under 1 pKa unit. The accuracy of this economical pKa calculation approach was further verified by benchmarking against highly accurate (but very expensive) CCSD(T)-F12 calculations. We compare the relative importance and sensitivity of these energies to the selection of solvation model, solvation energy definitions, implicit solvation cavity definition, basis sets, electron densities, model geometries, and mixed implicit/explicit models. After determining the most accurate model to reproduce experimentally known pKas from first principles, we apply the same approach to predict pKas for radical pyridinyl species that have been proposed relevant under electrochemical conditions. This work provides considerable insight into the pitfalls of using continuum solvation models, particularly when used for radical species.
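
    The reported pKa values follow from the computed aqueous deprotonation free energies through the standard thermodynamic relation (textbook chemistry, stated here for orientation rather than quoted from the paper):

```latex
% pKa from the aqueous-phase deprotonation free energy of HA -> A- + H+
% at temperature T (R is the gas constant); ln 10 converts the natural
% log of the equilibrium constant to the base-10 scale of pKa.
\[
  \mathrm{p}K_a \;=\; \frac{\Delta G^{\circ}_{\mathrm{aq,\,deprot}}}{RT \ln 10}
\]
```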

  15. Benchmarks and performance indicators: two tools for evaluating organizational results and continuous quality improvement efforts.

    Science.gov (United States)

    McKeon, T

    1996-04-01

    Benchmarks are tools that can be compared across companies and industries to measure process output. The key to benchmarking is understanding the composition of the benchmark and whether the benchmarks consist of homogeneous groupings. Performance measures expand the concept of benchmarking and cross organizational boundaries to include factors that are strategically important to organizational success. Incorporating performance measures into a balanced scorecard will provide a comprehensive tool to evaluate organizational results.

  16. Benchmarking of Electricity Distribution Licensees Operating in Sri Lanka

    Directory of Open Access Journals (Sweden)

    K. T. M. U. Hemapala

    2016-01-01

    Full Text Available Electricity sector regulators practice benchmarking of distribution companies to regulate the allowed revenue. Mainly this is carried out based on the relative efficiency scores produced by frontier benchmarking techniques. Some of these techniques, for example the Corrected Ordinary Least Squares method and Stochastic Frontier Analysis, use an econometric approach to estimate efficiency scores, while a method like Data Envelopment Analysis uses linear programming. Those relative efficiency scores are later used to calculate the efficiency factor (X-factor), which is a component of the revenue control formula. In the electricity distribution industry in Sri Lanka, the allowed revenue for a particular distribution licensee is calculated according to the allowed revenue control formula as specified in the tariff methodology of the Public Utilities Commission of Sri Lanka. This control formula contains the X-factor as well, but its effect has not yet been considered; it has simply been kept at zero, since no relative benchmarking studies have been carried out by the utility regulator to decide the actual value of the X-factor. This paper focuses on producing a suitable benchmarking methodology by studying prominent benchmarking techniques used in international regulatory regimes and by analyzing their applicability to the Sri Lankan context, where only five Distribution Licensees are operating at present.
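
    As a concrete illustration of the linear-programming flavour mentioned above, here is a minimal input-oriented CCR Data Envelopment Analysis sketch; the single input/output choice and the data are made up for illustration, not drawn from the Sri Lankan licensees:

```python
# Minimal input-oriented CCR DEA sketch: for each unit o, minimize theta
# subject to an efficient reference combination of peers dominating it.
# Rows of X/Y are inputs/outputs; columns are distribution licensees.
# The data and the single input/output choice are made up.
import numpy as np
from scipy.optimize import linprog

X = np.array([[100.0, 80.0, 120.0, 90.0, 110.0]])    # e.g. opex (one input)
Y = np.array([[500.0, 450.0, 520.0, 480.0, 400.0]])  # e.g. energy delivered

def ccr_efficiency(o: int) -> float:
    n = X.shape[1]
    c = np.concatenate(([1.0], np.zeros(n)))          # minimize theta
    # inputs:  sum_j lambda_j x_ij - theta x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    # outputs: sum_j lambda_j y_rj >= y_ro  ->  -sum_j ... <= -y_ro
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(X.shape[0]), -Y[:, o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun  # theta: relative efficiency score in (0, 1]

for o in range(X.shape[1]):
    print(f"licensee {o}: efficiency = {ccr_efficiency(o):.3f}")
```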

  17. Simplified two and three dimensional HTTR benchmark problems

    International Nuclear Information System (INIS)

    Zhang Zhan; Rahnema, Farzad; Zhang Dingkang; Pounders, Justin M.; Ougouag, Abderrafi M.

    2011-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole-core configurations. In this paper we have created two and three dimensional numerical benchmark problems typical of high temperature gas cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are also included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and the pin fission density distribution for selected blocks. Also included are the solutions for the single-cell and single-block problems.

  18. Analysis of a multigroup stylized CANDU half-core benchmark

    International Nuclear Information System (INIS)

    Pounders, Justin M.; Rahnema, Farzad; Serghiuta, Dumitru

    2011-01-01

    Highlights: • This paper provides a benchmark that is a stylized model problem in more than two energy groups that is realistic with respect to the underlying physics. • An 8-group cross section library is provided to augment a previously published 2-group 3D stylized half-core CANDU benchmark problem. • Reference eigenvalues and selected pin and bundle fission rates are included. • 2-, 4- and 47-group Monte Carlo solutions are compared to analyze homogenization-free transport approximations that result from energy condensation. - Abstract: An 8-group cross section library is provided to augment a previously published 2-group 3D stylized half-core Canadian deuterium uranium (CANDU) reactor benchmark problem. Reference eigenvalues and selected pin and bundle fission rates are also included. This benchmark is intended to provide computational reactor physicists and methods developers with a stylized model problem in more than two energy groups that is realistic with respect to the underlying physics. In addition to transport theory code verification, the 8-group energy structure provides reactor physicists with an ideal problem for examining cross section homogenization and collapsing effects in a full-core environment. To this end, additional 2-, 4- and 47-group full-core Monte Carlo benchmark solutions are compared to analyze homogenization-free transport approximations incurred as a result of energy group condensation.
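
    The energy condensation studied here is, at its core, flux-weighted collapsing of fine-group cross sections into broad groups (the standard definition, stated for orientation rather than quoted from the paper):

```latex
% Flux-weighted condensation of fine-group cross sections \sigma_g into a
% broad group G: the broad-group value preserves reaction rates under the
% weighting spectrum \phi_g.
\[
  \sigma_G \;=\; \frac{\sum_{g \in G} \sigma_g \,\phi_g}{\sum_{g \in G} \phi_g}
\]
```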

  19. Developing a benchmark for emotional analysis of music.

    Science.gov (United States)

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature sets work best for dynamic MER.

  20. Developing a benchmark for emotional analysis of music.

    Directory of Open Access Journals (Sweden)

    Anna Aljanaki

    Full Text Available The music emotion recognition (MER) field rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature sets work best for dynamic MER.

  1. Quality management benchmarking: FDA compliance in pharmaceutical industry.

    Science.gov (United States)

    Jochem, Roland; Landgraf, Katja

    2010-01-01

    By analyzing and comparing industry and business best practice, processes can be optimized and become more successful mainly because efficiency and competitiveness increase. This paper aims to focus on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and five-stage model. Despite large administrations, there is much potential regarding business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is interesting for companies that like to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances for reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management and especially benchmarking is shown to support pharmaceutical industry improvements.

  2. Coxarthrosis - an update; Koxarthrose - ein Update

    Energy Technology Data Exchange (ETDEWEB)

    Imhof, H.; Noebauer-Huhmann, I.; Trattnig, S. [Medizinische Universitaet Wien, Klinik fuer Radiodiagnostik, Wien (Austria)

    2009-05-15

    Degenerative osteoarthritis of the hip joint (coxarthrosis) is the most common disease of the hip joint in adults. The diagnosis is based on a combination of radiographic findings and characteristic clinical symptoms. The lack of a radiographic consensus definition has seemingly resulted in a variation of the published incidences and prevalence of degenerative osteoarthritis of the hip joint. The chronological sequence of degeneration includes the following basic symptoms on conventional radiographs and CT: joint space narrowing, development of osteophytes, subchondral demineralisation/sclerosis and cyst formation, as well as loose bodies, joint malalignment and deformity. MR imaging allows additional visualization of early symptoms and/or activity signs such as cartilage edema, cartilage tears and defects, subchondral bone marrow edema, synovial edema and thickening, joint effusion and muscle atrophy. The scientific dispute concerns the significance of (minimal) joint malalignment (e.g. impingement, dysplasia etc.) and forms of malpositioning which, as possible prearthrosis, have a high probability of leading to degenerative osteoarthritis. Moreover, without any question, the preservation of joint containment and gender differences are important additional basic diagnostic principles, which have gained great interest in recent years. In research, different MR procedures such as Na and H spectroscopy, T2*-mapping etc. with ultrahigh-field MR allow cartilage metabolism and its changes in early degenerative osteoarthritis ("biochemical imaging") to be studied. There is no doubt that new profound knowledge is to be expected in this field even within a few years. (orig.) [German original, translated: Degenerative osteoarthritis of the hip joint is the most common disease of the hip joint in adulthood. The lack of a consensus definition of this disease leads to an apparently wide variance in incidence and prevalence. The diagnosis is made on the basis of the radiological findings and the]

  3. Memory Updating and Mental Arithmetic.

    Science.gov (United States)

    Han, Cheng-Ching; Yang, Tsung-Han; Lin, Chia-Yuan; Yen, Nai-Shing

    2016-01-01

    Is domain-general memory updating ability predictive of calculation skills or are such skills better predicted by the capacity for updating specifically numerical information? Here, we used multidigit mental multiplication (MMM) as a measure for calculating skill as this operation requires the accurate maintenance and updating of information in addition to skills needed for arithmetic more generally. In Experiment 1, we found that only individual differences with regard to a task updating numerical information following addition (MUcalc) could predict the performance of MMM, perhaps owing to common elements between the task and MMM. In Experiment 2, new updating tasks were designed to clarify this: a spatial updating task with no numbers, a numerical task with no calculation, and a word task. The results showed that both MUcalc and the spatial task were able to predict the performance of MMM but only with the more difficult problems, while other updating tasks did not predict performance. It is concluded that relevant processes involved in updating the contents of working memory support mental arithmetic in adults.

  4. Memory updating and mental arithmetic

    Directory of Open Access Journals (Sweden)

    Cheng-Ching eHan

    2016-02-01

    Is domain-general memory updating ability predictive of calculation skills, or are such skills better predicted by the capacity for updating specifically numerical information? Here, we used multidigit mental multiplication (MMM) as a measure of calculation skill, as this operation requires the accurate maintenance and updating of information in addition to skills needed for arithmetic more generally. In Experiment 1, we found that only individual differences with regard to a task updating numerical information following addition (MUcalc) could predict the performance of MMM, perhaps owing to common elements between the task and MMM. In Experiment 2, new updating tasks were designed to clarify this: a spatial updating task with no numbers, a numerical task with no calculation, and a word task. The results showed that both MUcalc and the spatial task were able to predict the performance of MMM, but only with the more difficult problems, while the other updating tasks did not predict performance. It is concluded that relevant processes involved in updating the contents of working memory support mental arithmetic in adults.

  5. 2009 South American benchmarking study: natural gas transportation companies

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Nathalie [Gas TransBoliviano S.A. (Bolivia); Walter, Juliana S. [TRANSPETRO, Rio de Janeiro, RJ (Brazil)

    2009-07-01

    In the current business environment, large corporations are constantly seeking to adapt their strategies. Benchmarking is an important tool for continuous improvement and decision-making. Benchmarking is a methodology that determines which aspects are the most important to improve, and it proposes establishing a competitive parameter through an analysis of best practices and processes, applying continuous improvement driven by the best organizations in their class. At the beginning of 2008, GTB (Gas TransBoliviano S.A.) contacted several South American gas transportation companies to carry out a regional benchmarking study in 2009. In this study, the key performance indicators of South American companies, whose realities are similar, for example, in terms of prices, availability of labor, and community relations, will be compared. Within this context, a comparative evaluation among natural gas transportation companies is becoming an essential management instrument to help with decision-making. (author)

  6. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high-throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  7. Implementation and verification of global optimization benchmark problems

    Science.gov (United States)

    Posypkin, Mikhail; Usov, Alexander

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the generation of the value of a function and its gradient at a given point, and of interval estimates of a function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that the literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
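    The library described in the abstract is C++; purely as an illustration of the "single description" idea (one expression definition drives both point evaluation and interval enclosure), here is a minimal Python sketch with hypothetical names, not the authors' code:

```python
# Minimal sketch: one function description serves both point evaluation
# and interval-bound evaluation, via operator overloading.

class Interval:
    """Closed interval [lo, hi] supporting the + and * used below."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def _coerce(self, other):
        return other if isinstance(other, Interval) else Interval(other)

    def __add__(self, other):
        other = self._coerce(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._coerce(other)
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    __rmul__ = __mul__

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"


def f(x):
    # Single description: valid for floats and for Intervals alike.
    return x * x + 3 * x + 1


print(f(2.0))                   # point value: 11.0
print(f(Interval(-1.0, 1.0)))   # enclosure on the box [-1, 1]: [-3.0, 5.0]
```

    Note that naive interval arithmetic overestimates ranges (here x*x on [-1, 1] yields [-1, 1] rather than [0, 1]); a production library of the kind described would typically also propagate gradients and use tighter interval forms.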

  8. Implementation and verification of global optimization benchmark problems

    Directory of Open Access Journals (Sweden)

    Posypkin Mikhail

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the generation of the value of a function and its gradient at a given point, and of interval estimates of a function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that the literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.

  9. Effects of exposure imprecision on estimation of the benchmark dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2004-01-01

    In regression analysis, failure to adjust for imprecision in the exposure variable is likely to lead to underestimation of the exposure effect. However, the consequences of exposure error for the determination of safe doses of toxic substances have so far not received much attention. The benchmark approach is one of the most widely used methods for the development of exposure limits. An important advantage of this approach is that it can be applied to observational data. However, in this type of data, exposure markers are seldom measured without error. It is shown that, if the exposure error is ignored, then the benchmark approach produces results that are biased toward higher and less protective levels. It is therefore important to take exposure measurement error into account when calculating benchmark doses. Methods that allow this adjustment are described and illustrated in data from an epidemiological study.
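    As an illustration of why ignoring exposure error biases the benchmark dose upward (a sketch using the standard classical measurement-error model, not necessarily the authors' exact setting): with true exposure $X$ observed as $W = X + U$, the naive regression slope is attenuated,
    \[ \hat\beta_W \xrightarrow{p} \lambda\beta, \qquad \lambda = \frac{\sigma_X^2}{\sigma_X^2 + \sigma_U^2} < 1, \]
    and in a linear dose-response model where the benchmark dose solves $\beta\,\mathrm{BMD} = \delta$ for a fixed benchmark response $\delta$, the attenuated slope yields $\widehat{\mathrm{BMD}} \approx \delta/(\lambda\beta) > \delta/\beta$, i.e. a higher and less protective exposure limit.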

  10. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  11. Updates in ophthalmic pathology.

    Science.gov (United States)

    Mendoza, Pia R; Grossniklaus, Hans E

    2017-05-01

    Ophthalmic pathology has a long history and rich heritage in the field of ophthalmology. This review article highlights updates in ophthalmic pathology that have developed significantly through the years because of the efforts of committed individuals and the confluence of technology such as molecular biology and digital pathology. This is an exciting period in the history of ocular pathology, with cutting-edge techniques paving the way for new developments in diagnostics, therapeutics, and research. Collaborations between ocular oncologists and pathologists allow for improved and comprehensive patient care. Ophthalmic pathology continues to be a relevant specialty that is important in the understanding and clinical management of ocular disease, education of eye care providers, and overall advancement of the field.

  12. Updates in ophthalmic pathology

    Directory of Open Access Journals (Sweden)

    Pia R Mendoza

    2017-01-01

    Ophthalmic pathology has a long history and rich heritage in the field of ophthalmology. This review article highlights updates in ophthalmic pathology that have developed significantly through the years because of the efforts of committed individuals and the confluence of technology such as molecular biology and digital pathology. This is an exciting period in the history of ocular pathology, with cutting-edge techniques paving the way for new developments in diagnostics, therapeutics, and research. Collaborations between ocular oncologists and pathologists allow for improved and comprehensive patient care. Ophthalmic pathology continues to be a relevant specialty that is important in the understanding and clinical management of ocular disease, education of eye care providers, and overall advancement of the field.

  13. Geothermal Greenhouse Development Update

    Energy Technology Data Exchange (ETDEWEB)

    Lienau, Paul J.

    1997-01-01

    Greenhouse heating is one of the popular applications of low- to moderate-temperature geothermal resources. Using geothermal energy is both an economical and efficient way to heat greenhouses. Greenhouse heating systems can be designed to utilize low-temperature (>50°C or 122°F) resources, which makes the greenhouse an attractive application. These resources are widespread throughout the western states, providing significant potential for expansion of the geothermal greenhouse industry. This article summarizes the development of geothermally heated greenhouses, which mainly began in the mid-1970s. Based on a survey (Lienau, 1988) conducted in 1988 and updated in 1997, there are 37 operators of commercial greenhouses. Table 1 is a listing of known commercial geothermal greenhouses; we estimate that there may be an additional 25% for which data are not available.

  14. [Update Chagas disease].

    Science.gov (United States)

    Molina, Israel; Salvador, Fernando; Sánchez-Montalvá, Adrián

    2016-02-01

    The constant migration flows have favored the presence of people with Chagas disease in regions traditionally regarded as non-endemic, such as North America, Europe, Asia and Oceania. This has forced both health authorities and professionals to stay up to date in order to respond to such a demand for assistance. Recent years have brought significant progress in the diagnosis and treatment of Chagas disease, one of the most neglected tropical diseases. Recent clinical trials are providing new evidence that makes the management of these patients a constant challenge for the professionals involved. Innovative diagnostic tools and therapeutic regimens allow us to face the future of Chagas disease with optimism. Copyright © 2016 Elsevier España, S.L.U. y Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  15. Y2K UPDATE

    CERN Multimedia

    Sverre JARP

    1999-01-01

    Concerning Y2K preparation, please note the following: Everybody who has a NICE installation on his/her PC needs to log in to NICE at least once before Xmas to get the Y2K update installed. This applies especially to dual-boot systems. The test schedule on Y2Kplus.cern.ch will be prolonged. The last restart took place on 10 November, and two more will take place on 24 November and 8 December, respectively. The Oracle users responsible for the maintenance of Oracle Forms applications which include PL/SQL blocks where date fields are handled with the default format are requested to contact oracle.support@cern.ch at their earliest convenience. Sverre Jarp (CERN Y2K co-ordinator, phone: 74944)

  16. Amblyopia update: new treatments.

    Science.gov (United States)

    Vagge, Aldo; Nelson, Leonard B

    2016-09-01

    This review article is an update on the current treatments for amblyopia. In particular, the authors focus on the concepts of brain plasticity and their implications for novel treatment strategies for both children and adults affected by amblyopia. A variety of strategies has been developed to treat amblyopia in children and adults. New evidence on the pathogenesis of amblyopia has been obtained both in animal models and in clinical trials. Mainly, these studies have challenged the classical concept that amblyopia becomes untreatable after the 'end' of the sensitive or critical period of visual development, because of a lack of sufficient plasticity in the adult brain. New treatments for amblyopia in children and adults are desirable and should be encouraged. However, further studies should be completed before such therapies are widely accepted into clinical practice.

  17. Update in urethral stents.

    Science.gov (United States)

    Bahouth, Z; Meyer, G; Yildiz, G; Nativ, O; Moskovitz, B

    2016-10-01

    Urethral stents were first introduced in 1988, and since then they have undergone significant improvements. However, they have not gained wide popularity, and their use is limited to a small number of centers around the world. Urethral stents can be used in the entire urethra and for various and diverse indications. In the anterior urethra, they can be used to treat urethral strictures. In the prostatic urethra, they can be used for the treatment of prostatic obstruction, including benign, malignant and iatrogenic prostatic obstruction. Moreover, although not widely used, they can also be applied for the treatment of posterior urethral stricture and bladder neck contracture, usually resulting in urinary incontinence and the need for subsequent procedures. Our main experience is with Allium urethral stents, and as such we provide the latest updates in urethral stents with special emphasis on the various types of Allium urethral stents: bulbar, prostatic and bladder neck stents.

  18. Update on equine allergies.

    Science.gov (United States)

    Fadok, Valerie A

    2013-12-01

    Horses develop many skin and respiratory disorders that have been attributed to allergy. These disorders include pruritic skin diseases, recurrent urticaria, allergic rhinoconjunctivitis, and reactive airway disease. Allergen-specific IgE has been detected in these horses, and allergen-specific immunotherapy is used to ameliorate clinical signs. The best understood atopic disease in horses is insect hypersensitivity, but the goal of effective treatment with allergen-specific immunotherapy remains elusive. This review discusses updates on the pathogenesis of allergic states, with brief mention of new data on what is known in humans and dogs and how it relates to equine allergic disorders. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. AOS clearwater project update

    Energy Technology Data Exchange (ETDEWEB)

    Palmgren, Claes [Alberta Oilsands Inc (Canada)

    2011-07-01

    The Athabasca oil sands, located in northeastern Alberta, are one of the largest deposits of bitumen in the world. In this presentation, Alberta Oilsands Inc. provides an update on the two-phase Clearwater project, which aims to use an SLP-SAGD system to recover Athabasca bitumen. The Clearwater project site is located near the Fort McMurray regional airport. The idea behind the project is to utilize a low-pressure SAGD system in combination with a co-injected solvent in order to extract the bitumen. The injected solvent mixes with the bitumen, decreasing its viscosity and improving its ability to flow. This type of system requires only 1000 kPa of operating pressure, which leads to a lower steam-to-oil ratio. The Clearwater project aims to bring in 4,500 barrels of oil per day (BOPD) by the end of phase one and 15,000 to 25,000 BOPD by the end of phase two.

  20. Cohort Profile Update

    DEFF Research Database (Denmark)

    Omland, Lars Haukali; Ahlström, Magnus Glindvad; Obel, Niels

    2014-01-01

    The DHCS is a cohort of all HIV-infected individuals seen in one of the eight Danish HIV centres after 31 December 1994. Here we update the 2009 cohort profile, emphasizing the development of the cohort. Every 12-24 months, DHCS is linked with the Danish Civil Registration System (CRS) in order to extract an age- and sex-matched comparison cohort from the general population, as well as cohorts of family members of the HIV-infected patients and of the comparison cohort. The combined cohort is linked with CRS, the Danish Cancer Registry, the Danish National Hospital Registry, the Danish Registry of Causes of Death, the Danish National Prescription Registry, the Attainment Register and the Integrated Database for Labour Market Research to get information on vital status, migration, cancer, hospital contacts, causes of death, dispensed prescriptions, education and employment. Using this design, rates ...

  1. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted, and the future of the two projects is discussed.

  2. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    International Nuclear Information System (INIS)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-01-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR'06 are highlighted, and the future of the two projects is discussed.

  3. ADAS Update and Maintainability

    Science.gov (United States)

    Watson, Leela R.

    2010-01-01

    Since 2000, both the National Weather Service Melbourne (NWS MLB) and the Spaceflight Meteorology Group (SMG) have used a local data integration system (LDIS) as part of their forecast and warning operations. The original LDIS was developed by the Applied Meteorology Unit (AMU) in 1998 (Manobianco and Case 1998) and has undergone subsequent improvements. Each group has benefited from three-dimensional (3-D) analyses that are delivered to forecasters every 15 minutes across the peninsula of Florida. The intent is to generate products that enhance short-range weather forecasts issued in support of NWS MLB and SMG operational requirements within East Central Florida. The current LDIS uses the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) package as its core, which integrates a wide variety of national, regional, and local observational data sets. It assimilates all available real-time data within its domain and is run at a finer spatial and temporal resolution than current national or regional-scale analysis packages. As such, it provides local forecasters with a more comprehensive understanding of evolving fine-scale weather features. Over the years, the LDIS has become problematic to maintain since it depends on AMU-developed shell scripts that were written for an earlier version of the ADAS software. The goals of this task were to update the NWS MLB/SMG LDIS with the latest version of ADAS, incorporate new sources of observational data, and upgrade and modify the AMU-developed shell scripts written to govern the system. In addition, the previously developed ADAS graphical user interface (GUI) was updated. Operationally, these upgrades will result in more accurate depictions of the current local environment to help with short-range weather forecasting applications, while also offering an improved initialization for local versions of the Weather Research and Forecasting (WRF) model used by both groups.

  4. Oil sands development update

    International Nuclear Information System (INIS)

    1999-01-01

    A detailed review and update of oil sands development in Alberta are provided, covering every aspect of the production and economics of the industry. It is pointed out that at present oil sands account for 28 per cent of Canadian crude oil production, expected to reach 50 per cent by 2005. Based on recent announcements, a total of 26 billion dollars worth of projects are in progress or planned; 20 billion dollars worth of this development is in the Athabasca area, the remainder in Cold Lake and other areas. The current update envisages up to 1,800,000 barrels per day by 2008, creating 47,000 new jobs and total government revenues through direct and indirect taxes of 118 billion dollars. Provinces other than Alberta also benefit from these developments, since 60 per cent of all employment and income created by oil sands production is in other parts of Canada. Up to 60 per cent of the expansion is for goods and services, and of this, 50 to 55 per cent will be purchased from Canadian sources. The remaining 40 per cent of the new investment is for engineering and construction, of which 95 per cent is Canadian content. The Aboriginal workforce, by common consent of existing operators, matches regional representation (about 13 per cent), and new developers are expected to match these standards. Planned or ongoing improvements in environmental protection through better technologies and optimization, energy efficiency and improved tailings management, and active support of flexibility mechanisms such as emission credits trading, joint implementation and carbon sinks are very high on the industry's agenda. The importance of offsets is discussed extensively, along with key considerations for international negotiations, as well as further research into other options such as sequestration, environmentally benign disposal of waste, and enhanced voluntary action.

  5. Supersymmetric dark matter detection at post-LEP Benchmark points

    CERN Document Server

    Ellis, John; Feng, Jonathan L.; Ferstl, Andrew; Matchev, Konstantin T.; Olive, Keith A.

    2001-01-01

    We review the prospects for discovering supersymmetric dark matter in a recently proposed set of post-LEP supersymmetric benchmark scenarios. We consider direct detection through spin-independent nuclear scattering, as well as indirect detection through relic annihilations to neutrinos, photons, and positrons. We find that several of the benchmark scenarios offer good prospects for direct detection through spin-independent nuclear scattering, as well as indirect detection through muons produced by neutrinos from relic annihilations in the Sun, and photons from annihilations in the galactic center.

  6. Estimating the Need for Palliative Radiation Therapy: A Benchmarking Approach

    Energy Technology Data Exchange (ETDEWEB)

    Mackillop, William J., E-mail: william.mackillop@krcc.on.ca [Cancer Care and Epidemiology, Queen's Cancer Research Institute, Queen's University, Kingston, Ontario (Canada); Department of Public Health Sciences, Queen's University, Kingston, Ontario (Canada); Department of Oncology, Queen's University, Kingston, Ontario (Canada); Kong, Weidong [Cancer Care and Epidemiology, Queen's Cancer Research Institute, Queen's University, Kingston, Ontario (Canada)

    2016-01-01

    Purpose: Palliative radiation therapy (PRT) benefits many patients with incurable cancer, but the overall need for PRT is unknown. Our primary objective was to estimate the appropriate rate of use of PRT in Ontario. Methods and Materials: The Ontario Cancer Registry identified patients who died of cancer in Ontario between 2006 and 2010. Comprehensive RT records were linked to the registry. Multivariate analysis identified social and health system-related factors affecting the use of PRT, enabling us to define a benchmark population of patients with unimpeded access to PRT. The proportion of cases treated at any time (PRT_lifetime), the proportion of cases treated in the last 2 years of life (PRT_2y), and the number of courses of PRT per thousand cancer deaths were measured in the benchmark population. These benchmarks were standardized to the characteristics of the overall population, and province-wide PRT rates were then compared to benchmarks. Results: Cases diagnosed at hospitals with no RT on-site, residents of poorer communities, and those who lived farther from an RT center were significantly less likely than others to receive PRT. However, availability of RT at the diagnosing hospital was the dominant factor. Neither socioeconomic status nor distance from home to the nearest RT center had a significant effect on the use of PRT in patients diagnosed at a hospital with RT facilities. The benchmark population therefore consisted of patients diagnosed at a hospital with RT facilities. The standardized benchmark for PRT_lifetime was 33.9%, and the corresponding province-wide rate was 28.5%. The standardized benchmark for PRT_2y was 32.4%, and the corresponding province-wide rate was 27.0%. The standardized benchmark for the number of courses of PRT per thousand cancer deaths was 652, and the corresponding province-wide rate was 542. Conclusions: Approximately one-third of patients who die of cancer in Ontario need PRT, but many of them are never treated.
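    A quick arithmetic check on the reported figures shows a consistent relative shortfall across all three measures:
    \[ 1 - \tfrac{28.5}{33.9} \approx 0.16, \qquad 1 - \tfrac{27.0}{32.4} \approx 0.17, \qquad 1 - \tfrac{542}{652} \approx 0.17, \]
    i.e. province-wide use of PRT ran roughly 16-17% below the benchmark however it was measured.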

  7. Assessment of Usability Benchmarks: Combining Standardized Scales with Specific Questions

    Directory of Open Access Journals (Sweden)

    Stephanie Bettina Linek

    2011-12-01

    The usability of Web sites and online services is of rising importance. When creating a completely new Web site, qualitative data are adequate for identifying the most important usability problems. However, changes to an existing Web site should be evaluated by a quantitative benchmarking process. The paper describes the creation of a questionnaire that allows quantitative usability benchmarking, i.e. a direct comparison of different versions of a Web site and an orientation toward general usability standards. The questionnaire is also open to qualitative data. The methodology is explained using the digital library services of the ZBW.

  8. Burn-up TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Persic, A.; Ravnik, M.; Zagar, T.

    1998-01-01

    Different reactor codes are used for calculations of reactor parameters. The accuracy of these programs is tested through comparison of the calculated values with experimental results. Well-defined and accurately measured benchmarks are required for this purpose. The experimental results of reactivity measurements, fuel element reactivity worth distributions and burn-up measurements are presented in this paper. The experiments were performed with a partly burnt reactor core. The experimental conditions were well defined, so that the results can be used as a burn-up benchmark test case for TRIGA Mark II reactor calculations. (author)

  9. Adventure Tourism Benchmark – Analyzing the Case of Suesca, Cundinamarca

    Directory of Open Access Journals (Sweden)

    Juan Felipe Tsao Borrero

    2012-11-01

    Adventure tourism is a growing sector within the tourism industry, and understanding its dynamics is fundamental for adventure tourism destinations and their local authorities. Destination benchmarking is a strong tool for identifying the performance of tourism services offered at a destination in order to design appropriate policies to improve its competitiveness. The benchmarking study of Suesca, an adventure tourism destination in Colombia, helps to identify the gaps compared with successful adventure tourism destinations around the world, and provides valuable information to local policy-makers on the features to be improved. The lack of information available to tourists, together with scarce financial facilities, hinders Suesca's capability to improve its competitiveness.

  10. Shear Strength Measurement Benchmarking Tests for K Basin Sludge Simulants

    Energy Technology Data Exchange (ETDEWEB)

    Burns, Carolyn A.; Daniel, Richard C.; Enderlin, Carl W.; Luna, Maria; Schmidt, Andrew J.

    2009-06-10

    Equipment development and demonstration testing for sludge retrieval is being conducted by the K Basin Sludge Treatment Project (STP) at the MASF (Maintenance and Storage Facility) using sludge simulants. In testing performed at the Pacific Northwest National Laboratory (under contract with the CH2M Hill Plateau Remediation Company), the performance of the Geovane instrument was successfully benchmarked against the M5 Haake rheometer using a series of simulants with shear strengths (τ) ranging from about 700 to 22,000 Pa (shaft corrected). Operating steps for obtaining consistent shear strength measurements with the Geovane instrument during the benchmark testing were refined and documented.

  11. In-core fuel management benchmarks for PHWRs

    International Nuclear Information System (INIS)

    1996-06-01

    Under its in-core fuel management activities, the IAEA set up two co-ordinated research programmes (CRPs) on complete in-core fuel management code packages. At a consultants meeting in November 1988, the outline of the CRP on in-core fuel management benchmarks for PHWRs was prepared, three benchmarks were specified and the corresponding parameters were defined. At the first research co-ordination meeting in December 1990, seven more benchmarks were specified. The objective of this TECDOC is to provide reference cases for the verification of code packages used for reactor physics and fuel management of PHWRs. 91 refs, figs, tabs

  12. Piping benchmark problems for the Westinghouse AP600 Standardized Plant

    International Nuclear Information System (INIS)

    Bezler, P.; DeGrassi, G.; Braverman, J.; Wang, Y.K.

    1997-01-01

    To satisfy the need for verification of the computer programs and modeling techniques that will be used to perform the final piping analyses for the Westinghouse AP600 Standardized Plant, three benchmark problems were developed. The problems are representative piping systems subjected to representative dynamic loads with solutions developed using the methods being proposed for analysis for the AP600 standard design. It will be required that the combined license licensees demonstrate that their solutions to these problems are in agreement with the benchmark problem set

  13. Validation of NESTLE against static reactor benchmark problems

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1996-01-01

    The NESTLE advanced nodal code was developed at North Carolina State University with support from Los Alamos National Laboratory and Idaho National Engineering Laboratory. It recently has been benchmarked successfully against measured data from pressurized water reactors (PWRs). However, NESTLE's geometric capabilities are very flexible, and it can be applied to a variety of other types of reactors. This study presents comparisons of NESTLE results with those from other codes for static benchmark problems for PWRs, boiling water reactors (BWRs), high-temperature gas-cooled reactors (HTGRs) and CANDU heavy-water reactors (HWRs)

  14. Validation of NESTLE against static reactor benchmark problems

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1996-01-01

    The NESTLE advanced nodal code was developed at North Carolina State University with support from Los Alamos National Laboratory and Idaho National Engineering Laboratory. It recently has been benchmarked successfully against measured data from pressurized water reactors (PWRs). However, NESTLE's geometric capabilities are very flexible, and it can be applied to a variety of other types of reactors. This study presents comparisons of NESTLE results with those from other codes for static benchmark problems for PWRs, boiling water reactors (BWRs), high-temperature gas-cooled reactors (HTGRs), and Canada deuterium uranium (CANDU) heavy-water reactors (HWRs)

  15. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation are described and illustrated with examples: (a) benchmark value, (b) benchmark estimate, and (c) benchmark effect. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
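    For readers unfamiliar with the model being validated, the standard single-mediator equations (textbook notation, not specific to this article) are
    \[ M = i_1 + aX + e_1, \qquad Y = i_2 + c'X + bM + e_2, \]
    with the mediated (indirect) effect estimated as $\hat a\hat b$; benchmark effect validation then asks whether $\hat a\hat b$ reproduces the established imagery-to-recall effect.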

  16. Update of CERN exchange network

    CERN Multimedia

    2003-01-01

    An update of the CERN exchange network will be done next April. Disturbances or even interruptions of telephony services may occur from 4th to 24th April during evenings from 18:30 to 00:00, but will not exceed 4 consecutive hours (see tentative planning below). CERN divisions are invited to avoid any change requests (set-ups, moves or removals) of telephones and fax machines from 4th to 25th April. Everything will be done to minimize potential inconveniences which may occur during this update. There will be no loss of telephone functionalities. CERN GSM portable phones won't be affected by this change. Should you need more details, please send us your questions by email to Standard.Telephone@cern.ch. Tentative planning (date: change type, affected areas): April 11: update of switch in LHC 4, LHC 4 Point; April 14: update of switch in LHC 5, LHC 5 Point; April 15: update of switches in LHC 3 and LHC 2 Points, LHC 3 and LHC 2; April 22: update of switch N4, Meyrin Ouest; April 23: update of switch N6, Prévessin Site; Ap...

  17. Multiplicative updates for the LASSO

    DEFF Research Database (Denmark)

    Mørup, Morten; Clemmensen, Line Katrine Harder

    2007-01-01

    Multiplicative updates have proven useful for non-negativity constrained optimization. Presently, we demonstrate how multiplicative updates can also be used for unconstrained optimization. This is for instance useful when estimating the least absolute shrinkage and selection operator (LASSO), i.e. least squares minimization with $L_1$-norm regularization, since the multiplicative updates (MU) can efficiently exploit the structure of the problem traditionally solved using quadratic programming (QP). We derive an algorithm based on MU for the LASSO and compare its performance to Matlab's standard QP solver.
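    A common route to multiplicative updates for the LASSO (a sketch of the general reformulation, not necessarily the authors' exact derivation) is to split the coefficients into non-negative parts,
    \[ \min_{\beta}\ \tfrac12\|y - X\beta\|_2^2 + \lambda\|\beta\|_1, \qquad \beta = \beta^{+} - \beta^{-},\ \ \beta^{\pm} \ge 0, \]
    so that $\|\beta\|_1 = \mathbf{1}^{\top}(\beta^{+} + \beta^{-})$ at the optimum and the problem becomes a smooth objective under non-negativity constraints, which is exactly the setting where NMF-style multiplicative updates apply.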

  18. Updating optical pseudoinverse associative memories.

    Science.gov (United States)

    Telfer, B; Casasent, D

    1989-07-01

    Selected algorithms for adding to and deleting from optical pseudoinverse associative memories are presented and compared. New realizations of pseudoinverse updating methods using vector inner product matrix bordering and reduced-dimensionality Karhunen-Loeve approximations (which have been used for updating optical filters) are described in the context of associative memories. Greville's theorem is reviewed and compared with the Widrow-Hoff algorithm. Kohonen's gradient projection method is expressed in a different form suitable for optical implementation. The data matrix memory is also discussed for comparison purposes. Memory size, speed and ease of updating, and key vector requirements are the comparison criteria used.
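    The Greville recursion referred to above updates the pseudoinverse one column at a time. In its textbook form (independent of the optical implementation): for $A_k = [A_{k-1}\ \ a_k]$, let $d = A_{k-1}^{+}a_k$ and $c = a_k - A_{k-1}d$; then
    \[ b^{\top} = \begin{cases} c^{\top}/(c^{\top}c), & c \ne 0,\\ (1 + d^{\top}d)^{-1}\, d^{\top}A_{k-1}^{+}, & c = 0, \end{cases} \qquad A_k^{+} = \begin{bmatrix} A_{k-1}^{+} - d\,b^{\top} \\ b^{\top} \end{bmatrix}, \]
    which avoids recomputing the full pseudoinverse when a new key vector is stored.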

  19. The Inverted Pendulum Benchmark in Nonlinear Control Theory: A Survey

    Directory of Open Access Journals (Sweden)

    Olfa Boubaker

    2013-05-01

    For at least fifty years, the inverted pendulum has been the most popular benchmark, among others, in nonlinear control theory. The fundamental focus of this work is to enhance the wealth of this robotic benchmark and provide an overall picture of historical and current trend developments in nonlinear control theory, based on its simple structure and its rich nonlinear model. In this review, we will try to explain the high popularity of such a robotic benchmark, which is frequently used to realize experimental models, validate the efficiency of emerging control techniques and verify their implementation. We also attempt to provide details on how many standard techniques in control theory fail when tested on such a benchmark. More than 100 references in the open literature, dating back to 1960, are compiled to provide a survey of emerging ideas and challenging problems in nonlinear control theory accomplished and verified using this robotic system. Possible future trends that we can envision based on the review of this area are also presented.

  20. Benchmark en Beleidstoets voor de Drinkwatersector. Indicatoren Waterkwaliteit en Milieu

    NARCIS (Netherlands)

    Versteegh JFM; Tangena BH; Mulschlegel JHC; IMD

    2004-01-01

    Since both society and government are increasingly pressing for more transparency and efficiency in the drinking-water industry, benchmarking, as an instrument to test this efficiency, will form an element of the completely revised Drinking Water Act to come into force in 2006. This report describes

  1. Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems

    Science.gov (United States)

    Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald

    A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no way to measure the quality of such optimizations objectively. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today's benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction, access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction by large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. Its main focus is to measure the adaptability of a database management system under shifting workloads. We will give details on our design approach, which uses sophisticated pattern analysis and data mining techniques.

  2. The rotating movement of three immiscible fluids - A benchmark problem

    NARCIS (Netherlands)

    Bakker, Mark; Oude Essink, Gualbert; Langevin, Christian D.

    2004-01-01

    A benchmark problem involving the rotating movement of three immiscible fluids is proposed for verifying the density-dependent flow component of groundwater flow codes. The problem consists of a two-dimensional strip in the vertical plane filled with three fluids of different densities separated by

  3. Results of the event sequence reliability benchmark exercise

    International Nuclear Information System (INIS)

    Silvestri, E.

    1990-01-01

    The Event Sequence Reliability Benchmark Exercise is the fourth in a series of benchmark exercises on reliability and risk assessment, with specific reference to nuclear power plant applications, and is the logical continuation of the previous benchmark exercises on System Analysis, Common Cause Failure and Human Factors. The reference plant is the nuclear power plant at Grohnde, Federal Republic of Germany, a 1300 MW PWR of KWU design. The specific objective of the exercise is to model, quantify and analyze event sequences initiated by the occurrence of a loss of offsite power that involve the steam generator feed. The general aim is to develop a segment of a risk assessment that includes all the specific aspects and models of quantification, such as common cause failure, human factors and system analysis, developed in the previous reliability benchmark exercises, with the addition of the specific topics of dependencies between homologous components belonging to different systems featuring in a given event sequence and of uncertainty quantification, ending with an overall assessment of the state of the art in risk assessment and of the relative influence of quantification problems in a general risk assessment framework. The exercise has been carried out in two phases, both requiring modelling and quantification, with the second phase adopting more restrictive rules and fixing certain common data, as emerged necessary from the first phase. Fourteen teams have participated in the exercise, mostly from EEC countries, with one from Sweden and one from the USA. (author)

  4. Present Status and Extensions of the Monte Carlo Performance Benchmark

    Science.gov (United States)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark started in 2011, aiming to monitor over the years the ability to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common-type compute nodes. However, on true supercomputers the speedup of parallel calculations keeps increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and for studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition are discussed.
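    The quoted figure is consistent with simple counting statistics (an order-of-magnitude sketch assuming each history contributes usefully to roughly one tally zone): a 1% relative standard deviation requires on the order of
    \[ \sigma_{\mathrm{rel}} \sim 1/\sqrt{n} \;\Rightarrow\; n \approx 10^{4} \]
    scores per zone, and with on the order of $10^{6}$-$10^{7}$ pin-and-axial-zone tallies in a full core, the total number of histories lands around $10^{10}$-$10^{11}$.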

  5. Benchmarking the financial performance of local councils in Ireland

    Directory of Open Access Journals (Sweden)

    Robbins Geraldine

    2016-05-01

    It was over a quarter of a century ago that information from the financial statements was used to benchmark the efficiency and effectiveness of local government in the US. With the global adoption of New Public Management ideas, benchmarking practice spread to the public sector and has been employed to drive reforms aimed at improving performance and, ultimately, service delivery and local outcomes. The manner in which local authorities in OECD countries compare and benchmark their performance varies widely. The methodology developed in this paper to rate the relative financial performance of Irish city and county councils is adapted from an earlier assessment tool used to measure the financial condition of small cities in the US. Using our financial performance framework and the financial data in the audited annual financial statements of Irish local councils, we calculate composite scores for each of the thirty-four local authorities for the years 2007-13. This paper contributes composite scores that measure the relative financial performance of local councils in Ireland, as well as a full set of yearly results for a seven-year period in which local governments witnessed significant changes in their financial health. The benchmarking exercise is useful in highlighting those councils that, in relative financial performance terms, are the best/worst performers.

  6. Reactor critical benchmark calculations for burnup credit applications

    International Nuclear Information System (INIS)

    Renier, J.P.; Parks, C.V.

    1990-01-01

    In the criticality safety analyses for the development and certification of spent fuel casks, the current approach requires the assumption of "fresh fuel" isotopics. It has been shown that removal of the "fresh fuel" assumption and the use of spent fuel isotopics ("burnup credit") greatly increases the payload of spent fuel casks by reducing the reactivity of the fuel. Regulatory approval of burnup credit and the requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. Criticality analyses for low-enriched lattices of fuel pins using the "fresh fuel isotopics" assumption have been widely benchmarked against applicable critical experiments. However, the same computational methods have not been benchmarked against criticals containing spent fuel because of the non-existence of spent fuel critical experiments. Commercial reactors offer an excellent and inexhaustible source of critical configurations against which criticality analyses can be benchmarked for spent fuel configurations. This document provides brief descriptions of the benchmarks and the computational methods for the criticality analyses. 8 refs., 1 fig., 1 tab

  7. How Are You Doing? Key Performance Indicators and Benchmarking

    Science.gov (United States)

    Fahey, John P.

    2011-01-01

    School business officials need to "know and show" that their operations are well managed. To do so, they ask themselves questions such as "How are we doing? How do we compare with others? Are we making progress fast enough? Are we using the best practices?" Using key performance indicators (KPIs) and benchmarking as regular parts of their…

  8. Benchmarking Neuromorphic Vision: Lessons Learnt from Computer Vision

    Directory of Open Access Journals (Sweden)

    Cheston eTan

    2015-10-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, and algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision.

  9. A biosegmentation benchmark for evaluation of bioimage analysis methods

    Directory of Open Access Journals (Sweden)

    Kvilekval Kristian

    2009-11-01

    Background: We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from the wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images, and each method has its own strengths and limitations. We hope that our benchmark resource will be of considerable help both to bioimaging researchers looking for novel image processing methods and to image processing researchers exploring the application of their methods to biology. Results: Our benchmark consists of different classes of images and ground truth data, ranging in scale from subcellular and cellular to tissue level, each of which poses its own set of challenges for image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion: This online benchmark will facilitate the integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.

  10. Academic library benchmarking in The Netherlands: a comparative study

    NARCIS (Netherlands)

    Voorbij, H.

    2009-01-01

    Purpose - This paper aims to describe some of the unique features of the Dutch academic library benchmarking system. Design/methodology/approach - The Dutch system is compared with similar projects in the USA, the UK and Germany. Findings - One of the most distinguishing features of the Dutch system

  11. MoleculeNet: a benchmark for molecular machine learning.

    Science.gov (United States)

    Wu, Zhenqin; Ramsundar, Bharath; Feinberg, Evan N; Gomes, Joseph; Geniesse, Caleb; Pappu, Aneesh S; Leswing, Karl; Pande, Vijay

    2018-01-14

    Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited by the lack of a standard benchmark for comparing the efficacy of proposed methods; most new algorithms are benchmarked on different datasets, making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large-scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high-quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than the choice of particular learning algorithm.
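    As a usage illustration (a sketch assuming the DeepChem "molnet" loader API, whose exact signatures vary between releases), a MoleculeNet dataset can be pulled and paired with its recommended metric roughly as follows:

```python
# Sketch of loading a MoleculeNet benchmark through DeepChem.
# The molnet loader name and keyword argument below are assumptions based
# on common DeepChem usage and may differ in your installed version.
import numpy as np
import deepchem as dc

# Load Tox21 with circular (ECFP) fingerprint features; the loader returns
# the task names, a (train, valid, test) split, and the fitted transformers.
tasks, datasets, transformers = dc.molnet.load_tox21(featurizer="ECFP")
train, valid, test = datasets
print(len(tasks), "tasks;", train.X.shape, "training feature matrix")

# MoleculeNet recommends ROC-AUC (averaged over tasks) for Tox21.
metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)
```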

  12. Three anisotropic benchmark problems for adaptive finite element methods

    Czech Academy of Sciences Publication Activity Database

    Šolín, Pavel; Čertík, O.; Korous, L.

    2013-01-01

    Roč. 219, č. 13 (2013), s. 7286-7295 ISSN 0096-3003 R&D Projects: GA AV ČR IAA100760702 Institutional support: RVO:61388998 Keywords : benchmark problem * anisotropic solution * boundary layer Subject RIV: BA - General Mathematics Impact factor: 1.600, year: 2013

  13. Benchmarking of OEM Hybrid Electric Vehicles at NREL: Milestone Report

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, K. J.; Rajagopalan, A.

    2001-10-26

    A milestone report describing NREL's progress and activities related to the DOE FY2001 Annual Operating Plan milestone entitled "Benchmark 2 new production or pre-production hybrids with ADVISOR."

  14. Benchmark Simulation Model No 2 in Matlab-Simulink

    DEFF Research Database (Denmark)

    Vrecko, Darko; Gernaey, Krist; Rosen, Christian

    2006-01-01

    In this paper, implementation of the Benchmark Simulation Model No 2 (BSM2) within Matlab-Simulink is presented. The BSM2 is developed for plant-wide WWTP control strategy evaluation on a long-term basis. It consists of a pre-treatment process, an activated sludge process and sludge treatment...

  15. Benchmarking: A strategic overview of a key management tool

    Science.gov (United States)

    Chris Leclair

    1999-01-01

    Benchmarking is a continuous, systematic process for evaluating the products, services, and work processes of organizations in an effort to identify best practices for possible adoption, in support of the objectives of enhanced service delivery and organizational effectiveness.

  16. An XML format for benchmarks in High School Timetabling

    NARCIS (Netherlands)

    Post, Gerhard F.; Ahmadi, Samad; Daskalaki, Sophia; Kingston, Jeffrey H.; Kyngas, Jari; Nurmi, Cimmo; Ranson, David

    2012-01-01

    The High School Timetabling Problem is amongst the most widely used timetabling problems. This problem has varying structures in different high schools even within the same country or educational system. Due to lack of standard benchmarks and data formats this problem has been studied less than

  17. An XML format for benchmarks in high school timetabling II

    NARCIS (Netherlands)

    Post, Gerhard F.; Kingston, Jeffrey H.; Ahmadi, Samad; Daskalaki, Sophia; Gogos, Christos; Kyngas, Jari; Nurmi, Cimmo; Santos, Haroldo; Rorije, Ben; Schaerf, Andrea

    2010-01-01

    We present the progress on the benchmarking project for high school timetabling that was introduced at PATAT 2008. In particular, we announce the High School Timetabling Archive HSTT2010 with 15 instances from 7 countries and an evaluator capable of checking the syntax of instances and evaluating

  18. The numerical benchmark CB2-S, final evaluation

    International Nuclear Information System (INIS)

    Chrapciak, V.

    2002-01-01

    In this paper, the final results of the numerical benchmark CB2-S are compared (activity, gamma and neutron sources, concentrations of important nuclides, and decay heat). The participants are: Vladimir Chrapciak (SCALE), Ludmila Markova (SCALE), Svetlana Zabrodskaja (SCALE), Pavel Mikolas (WIMS), Eva Tinkova (HELIOS) and Maria Manolova (SCALE). (Authors)

  19. Internal Quality Assurance Benchmarking. ENQA Workshop Report 20

    Science.gov (United States)

    Blackstock, Douglas; Burquel, Nadine; Comet, Nuria; Kajaste, Matti; dos Santos, Sergio Machado; Marcos, Sandra; Moser, Marion; Ponds, Henri; Scheuthle, Harald; Sixto, Luis Carlos Velon

    2012-01-01

    The Internal Quality Assurance group of ENQA (IQA Group) has been organising a yearly seminar for its members since 2007. The main objective is to share experiences concerning the internal quality assurance of work processes in the participating agencies. The overarching theme of the 2011 seminar was how to use benchmarking as a tool for…

  20. Benchmark Calculations of Noncovalent Interactions of Halogenated Molecules

    Czech Academy of Sciences Publication Activity Database

    Řezáč, Jan; Riley, Kevin Eugene; Hobza, Pavel

    2012-01-01

    Roč. 8, č. 11 (2012), s. 4285-4292 ISSN 1549-9618 R&D Projects: GA ČR GBP208/12/G016 Institutional support: RVO:61388963 Keywords : halogenated molecules * noncovalent interactions * benchmark calculations Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 5.389, year: 2012