WorldWideScience

Sample records for survey ngs benchmarks

  1. NGS Survey Control Map

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NGS Survey Control Map provides a map of the US which allows you to find and display geodetic survey control points stored in the database of the National...

  2. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  3. Benchmarking survey for recycling.

    Energy Technology Data Exchange (ETDEWEB)

    Marley, Margie Charlotte; Mizner, Jack Harry

    2005-06-01

    This report describes the methodology, analysis and conclusions of a comparison survey of recycling programs at ten Department of Energy sites including Sandia National Laboratories/New Mexico (SNL/NM). The goal of the survey was to compare SNL/NM's recycling performance with that of other federal facilities, and to identify activities and programs that could be implemented at SNL/NM to improve recycling performance.

  4. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  5. Airborne Gravity Data Enhances NGS Experimental Gravimetric Geoid in Alaska

    Science.gov (United States)

    Holmes, S. A.; Childers, V. A.; Li, X.; Roman, D. R.

    2014-12-01

    The U.S. National Geodetic Survey [NGS], through its Gravity for the Redefinition of the American Vertical Datum [GRAV-D] program, continues to update its gravimetry holdings by flying new airborne gravity surveys over a large fraction of the USA and its territories. By 2022, NGS intends that all orthometric heights in the USA will be determined in the field by using a reliable national gravimetric geoid model to transform from geodetic heights obtained from GPS. Several airborne campaigns have already been flown over Alaska and its coastline. Some of this Alaskan coastal data has been incorporated into a new NGS experimental geoid model - xGEOID14. The xGEOID14 model is the first in a series of annual experimental geoid models that will incorporate NGS GRAV-D airborne data. This series provides a useful benchmark for assessing and improving current techniques by which the airborne and land-survey data are filtered and cleaned, and then combined with satellite gravity models, elevation data and other sources, with the ultimate aim of computing a geoid model that can support a national physical height system by 2022. Here we will examine the NGS GRAV-D airborne data in Alaska, and assess its contribution to xGEOID14. Future prospects for xGEOID15 will also be considered.
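
    The reduction this abstract turns on is the standard relation between geodetic (ellipsoidal) and orthometric heights, H = h - N, where N is the geoid undulation supplied by a model such as xGEOID14. Below is a minimal Python sketch of that step; the point and the undulation value are invented for illustration and are not taken from any NGS model.

        def orthometric_height(h_geodetic_m: float, geoid_undulation_m: float) -> float:
            """Reduce a GNSS geodetic (ellipsoidal) height h to an
            orthometric height H using a geoid undulation N: H = h - N."""
            return h_geodetic_m - geoid_undulation_m

        # Hypothetical point: geodetic height 150.00 m, model undulation 12.34 m.
        H = orthometric_height(150.00, 12.34)
        print(f"H = {H:.2f} m")   # H = 137.66 m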

  6. Geodetic Control Points - National Geodetic Survey Benchmarks

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This data contains a set of geodetic control stations maintained by the National Geodetic Survey. Each geodetic control station in this dataset has either a precise...

  7. 75 FR 35289 - International Services Surveys: BE-180, Benchmark Survey of Financial Services Transactions...

    Science.gov (United States)

    2010-06-22

    ...; other financial investment activities (including miscellaneous intermediation, portfolio management, investment advice, and all other financial investment activities); insurance carriers; insurance agencies... 15 CFR Part 801 RIN 0691-AA73 International Services Surveys: BE-180, Benchmark Survey of Financial...

  8. NOS/NGS activities to support development of radio interferometric surveying techniques

    Science.gov (United States)

    Carter, W. E.; Dracup, J. F.; Hothem, L. D.; Robertson, D. S.; Strange, W. E.

    1980-01-01

    National Geodetic Survey activities towards the development of operational geodetic survey systems based on radio interferometry are reviewed. Information about the field procedures, data reduction and analysis, and the results obtained to date is presented.

  9. National Geodetic Survey (NGS) Geodetic Control Stations, (Horizontal and/or Vertical Control), March 2009

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — This data contains a set of geodetic control stations maintained by the National Geodetic Survey. Each geodetic control station in this dataset has either a precise...

  10. 76 FR 50158 - International Services Surveys: Amendments to the BE-120, Benchmark Survey of Transactions in...

    Science.gov (United States)

    2011-08-12

    ..., architectural, and surveying services; (17) financial services (purchases only); (18) industrial engineering... Bureau of Economic Analysis 15 CFR Part 801 RIN 0691-AA76 International Services Surveys: Amendments to the BE-120, Benchmark Survey of Transactions in Selected Services and Intangible Assets With...

  11. NgsRelate

    DEFF Research Database (Denmark)

    Korneliussen, Thorfinn Sand; Moltke, Ida

    2015-01-01

    ...Using both simulated and real data, we show that NgsRelate provides markedly better estimates for low-depth NGS data than two state-of-the-art genotype-based methods. AVAILABILITY: NgsRelate is implemented in C++ and is available under the GNU license at www.popgen.dk/software. CONTACT: ida...

  12. NgsRelate

    DEFF Research Database (Denmark)

    Korneliussen, Thorfinn Sand; Moltke, Ida

    2015-01-01

    ...be called with high certainty. RESULTS: We present a software tool, NgsRelate, for estimating pairwise relatedness from NGS data. It provides maximum likelihood estimates that are based on genotype likelihoods instead of genotypes and thereby takes the inherent uncertainty of the genotypes into account. ... Using both simulated and real data, we show that NgsRelate provides markedly better estimates for low-depth NGS data than two state-of-the-art genotype-based methods. AVAILABILITY: NgsRelate is implemented in C++ and is available under the GNU license at www.popgen.dk/software. CONTACT: ida...
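
    The core idea, maximizing a likelihood written in terms of genotype likelihoods rather than called genotypes, can be sketched compactly. The toy version below rests on assumptions that are ours, not necessarily NgsRelate's: biallelic sites with known alt-allele frequencies q, relatedness parameterized by IBD coefficients k = (k0, k1, k2), and a brute-force grid in place of a real optimizer; all numbers are invented.

        import math

        def gt_prior(g, q):
            """Hardy-Weinberg prior for alt-allele count g in {0, 1, 2}."""
            return ((1 - q) ** 2, 2 * q * (1 - q), q * q)[g]

        def pair_prior(g1, g2, q, k):
            """P(g1, g2 | q, k) for IBD coefficients k = (k0, k1, k2)."""
            k0, k1, k2 = k
            p0 = gt_prior(g1, q) * gt_prior(g2, q)             # no alleles IBD
            p1 = 0.0                                           # one allele IBD
            for a, pa in ((0, 1 - q), (1, q)):                 # shared allele a
                def other(g):                                  # P(g | one allele is a)
                    b = g - a
                    return (1 - q) if b == 0 else q if b == 1 else 0.0
                p1 += pa * other(g1) * other(g2)
            p2 = gt_prior(g1, q) * (1.0 if g1 == g2 else 0.0)  # both alleles IBD
            return k0 * p0 + k1 * p1 + k2 * p2

        def loglik(k, sites):
            """sites: (q, GL1, GL2) tuples, GLi[g] = P(reads of individual i | g)."""
            return sum(math.log(sum(pair_prior(g1, g2, q, k) * gl1[g1] * gl2[g2]
                                    for g1 in range(3) for g2 in range(3)))
                       for q, gl1, gl2 in sites)

        # Invented genotype likelihoods for one pair at two sites:
        sites = [(0.3, (0.05, 0.90, 0.05), (0.10, 0.80, 0.10)),
                 (0.5, (0.70, 0.20, 0.10), (0.60, 0.30, 0.10))]
        grid = [(k0 / 10, k1 / 10, 1 - k0 / 10 - k1 / 10)
                for k0 in range(11) for k1 in range(11) if k0 + k1 <= 10]
        print(max(grid, key=lambda k: loglik(k, sites)))  # best (k0, k1, k2) on grid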

  13. Surveying and benchmarking techniques to analyse DNA gel fingerprint images.

    Science.gov (United States)

    Heras, Jónathan; Domínguez, César; Mata, Eloy; Pascual, Vico

    2016-11-01

    DNA fingerprinting is a genetic typing technique that allows the analysis of the genomic relatedness between samples, and the comparison of DNA patterns. The analysis of DNA gel fingerprint images usually consists of five consecutive steps: image pre-processing, lane segmentation, band detection, normalization and fingerprint comparison. In this article, we firstly survey the main methods that have been applied in the literature in each of these stages. Secondly, we focus on lane-segmentation and band-detection algorithms (as they are the steps that usually require user intervention) and detect the seven core algorithms used for both tasks. Subsequently, we present a benchmark that includes a data set of images, the gold standards associated with those images and the tools to measure the performance of lane-segmentation and band-detection algorithms. Finally, we implement the core algorithms used both for lane segmentation and band detection, and evaluate their performance using our benchmark. As a conclusion of that study, we find that the average profile algorithm is the best starting point for lane segmentation and band detection.
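
    The average profile algorithm named in the conclusion is straightforward to sketch: with lanes running vertically, average the image intensity down each column, smooth the resulting one-dimensional profile, and take its local maxima as lane centers. A minimal illustration under those assumptions, on a synthetic array rather than a real gel scan:

        import numpy as np

        def lane_centers(image, min_height=0.1):
            """Average-profile lane segmentation for a gel image whose lanes
            run vertically; image is 2-D, with higher values = darker bands."""
            profile = image.mean(axis=0)                            # column average
            profile = np.convolve(profile, np.ones(5) / 5, "same")  # light smoothing
            lo, hi = profile.min(), profile.max()
            norm = (profile - lo) / (hi - lo + 1e-12)
            return [x for x in range(1, len(norm) - 1)              # local maxima
                    if norm[x - 1] <= norm[x] > norm[x + 1] and norm[x] > min_height]

        # Synthetic image: three bright vertical lanes on a dark background.
        img = np.zeros((100, 90))
        for c in (15, 45, 75):
            img[:, c - 3:c + 4] = 1.0
        print(lane_centers(img))   # one center per lane, near columns 15, 45, 75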

  14. 76 FR 76029 - International Services Surveys: Amendments to the BE-120, Benchmark Survey of Transactions in...

    Science.gov (United States)

    2011-12-06

    ... information about the reporting entity. Each of the six schedules covers one or more types of transactions and... reporting requirements for the BE-120, Benchmark Survey of Transactions in Selected Services and...: This rule amends 15 CFR 801.10 to update certain reporting requirements for the BE-120,...

  15. Pharmacy Survey on Patient Safety Culture: Benchmarking Results.

    Science.gov (United States)

    Herner, Sheryl J; Rawlings, Julia E; Swartzendruber, Kelly; Delate, Thomas

    2017-03-01

    This study's objective was to assess the patient safety culture in a large, integrated health delivery system's pharmacy department to allow for benchmarking with other health systems. This was a cross-sectional survey conducted in a pharmacy department consisting of staff members who provide dispensing, clinical, and support services within an integrated health delivery system. The U.S. Agency for Healthcare Research and Quality's 11-composite, validated Pharmacy Survey on Patient Safety Culture questionnaire was transcribed into an online format. All departmental staff members were invited to participate in this anonymous survey. Cronbach α and overall results and contrasts between dispensing and clinical services staff and dispensing pharmacists and technicians/clerks as percentage positive scores (PPSs) are presented. Differences in contrasts were assessed with χ² tests of association. Completed questionnaires were received from 598 (69.9%) of 855 employees. Cronbach α ranged from 0.55 to 0.90. Overall, the highest and lowest composite PPSs were for patient counseling (94.5%) and staffing and work pressure (44.7%), respectively. Compared with dispensing service, the clinical service participants had statistically higher PPSs for all composites except patient counseling, communication about mistakes, and staffing and work pressure (all P > 0.05). The technicians/clerks had a statistically higher PPS compared with the pharmacists for communication about mistakes (P = 0.007). All other composites were equivalent between groups. Patient counseling consistently had the highest PPS among composites measured, but opportunities existed for improvement in all aspects measured. Future research should identify and assess interventions targeted to improving the patient safety culture in pharmacy.
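
    For readers unfamiliar with the metrics: a percentage positive score (PPS) is simply the share of positive responses on a composite, and the group contrasts are chi-square tests of association on positive versus non-positive counts. A short sketch with invented counts (not the study's data), using scipy:

        from scipy.stats import chi2_contingency

        def pps(n_positive, n_responses):
            """Percentage positive score for one survey composite."""
            return 100.0 * n_positive / n_responses

        # Hypothetical (positive, non-positive) counts for one composite:
        dispensing = (160, 90)    # 160 positive of 250 responses
        clinical = (210, 70)      # 210 positive of 280 responses
        print(f"dispensing PPS = {pps(dispensing[0], sum(dispensing)):.1f}%")
        print(f"clinical PPS   = {pps(clinical[0], sum(clinical)):.1f}%")

        # Chi-square test of association on the 2x2 contingency table.
        chi2, p, dof, expected = chi2_contingency([dispensing, clinical])
        print(f"chi2 = {chi2:.2f}, p = {p:.4f}")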

  16. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  17. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  18. [Benchmarking projects examining patient care in Germany: methods of analysis, survey results, and best practice].

    Science.gov (United States)

    Blumenstock, Gunnar; Fischer, Imma; de Cruppé, Werner; Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    A survey among 232 German health care organisations addressed benchmarking projects in patient care. 53 projects were reported and analysed using a benchmarking development scheme and a list of criteria. None of the projects satisfied all the criteria. Rather, examples of best practice for single aspects have been identified.

  19. NGS Absolute Gravity Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NGS Absolute Gravity data (78 stations) was received in July 1993. Principal gravity parameters include Gravity Value, Uncertainty, and Vertical Gradient. The...

  20. 2013 NOAA National Geodetic Survey (NGS) LIDAR of New Jersey: Barnegat Light Integrated Ocean and Coastal Mapping Product

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data were collected by the National Oceanic and Atmospheric Administration National Geodetic Survey Remote Sensing Division using a Riegl VQ820G system. The data...

  1. Gaia benchmark stars and their twins in the Gaia-ESO Survey

    Science.gov (United States)

    Jofré, P.

    2016-09-01

    The Gaia benchmark stars are stars with very precise stellar parameters that cover a wide range in the HR diagram at various metallicities. They are meant to be good representatives of typical FGK stars in the Milky Way. Currently, they are used by several spectroscopic surveys to validate and calibrate the methods that analyse the data. I review our recent activities on these stars. Additionally, by applying our new method to find stellar twins in the Gaia-ESO Survey, I discuss how well the benchmark stars represent Milky Way stars and how they are distributed in space.

  2. Gaia Benchmark stars and their twins in the Gaia-ESO Survey

    CERN Document Server

    Jofre, Paula

    2015-01-01

    The Gaia benchmark stars are stars with very precise stellar parameters that cover a wide range in the HR diagram at various metallicities. They are meant to be good representatives of typical FGK stars in the Milky Way. Currently, they are used by several spectroscopic surveys to validate and calibrate the methods that analyse the data. I review our recent activities on these stars. Additionally, by applying our new method to find stellar twins in the Gaia-ESO Survey, I discuss how well the benchmark stars represent Milky Way stars and how they are distributed in space.

  3. The Use of GOCE/GRACE Information in the Latest NGS xGeoid15 Model for the USA

    Science.gov (United States)

    Holmes, S. A.; Li, X.; Youngman, M.

    2015-12-01

    The U.S. National Geodetic Survey [NGS], through its Gravity for the Redefinition of the American Vertical Datum [GRAV-D] program, is flying airborne gravity surveys over the USA and its territories. By 2022, NGS intends that all orthometric heights in the USA will be determined in the field using a reliable national gravimetric geoid model to transform from geodetic heights obtained from GPS. Towards this end, all available airborne data has been incorporated into a new NGS experimental geoid model - xGEOID15. The xGEOID15 model is the second in a series of annual experimental geoid models that incorporates NGS GRAV-D airborne data. This series provides a useful benchmark for assessing and improving current techniques, to ultimately compute a geoid model that can support a national physical height system by 2022. Here, we focus on the combination of the latest GOCE/GRACE models with the terrestrial gravimetry (land/airborne) that was applied for xGeoid15. Comparisons against existing combination gravitational solutions, such as EGM2008 and EIGEN6C4, as well as recent geoid models, such as xGeoid14 and CGG2013, are interesting for what they reveal about the respective use of the GOCE/GRACE satgrav information.

  4. Benchmarking the Importance and Use of Labor Market Surveys by Certified Rehabilitation Counselors

    Science.gov (United States)

    Barros-Bailey, Mary; Saunders, Jodi L.

    2013-01-01

    The purpose of this research was to benchmark the importance and use of labor market survey (LMS) among U.S. certified rehabilitation counselors (CRCs). A secondary post hoc analysis of data collected via the "Rehabilitation Skills Inventory--Revised" for the 2011 Commission on Rehabilitation Counselor Certification job analysis resulted in…

  5. Benchmarking the Importance and Use of Labor Market Surveys by Certified Rehabilitation Counselors

    Science.gov (United States)

    Barros-Bailey, Mary; Saunders, Jodi L.

    2013-01-01

    The purpose of this research was to benchmark the importance and use of labor market survey (LMS) among U.S. certified rehabilitation counselors (CRCs). A secondary post hoc analysis of data collected via the "Rehabilitation Skills Inventory--Revised" for the 2011 Commission on Rehabilitation Counselor Certification job analysis resulted in…

  6. 15 CFR 806.16 - Rules and regulations for BE-10, Benchmark Survey of U.S. Direct Investment Abroad-2004.

    Science.gov (United States)

    2010-01-01

    ..., Benchmark Survey of U.S. Direct Investment Abroad-2004. 806.16 Section 806.16 Commerce and Foreign Trade... COMMERCE DIRECT INVESTMENT SURVEYS § 806.16 Rules and regulations for BE-10, Benchmark Survey of U.S. Direct Investment Abroad—2004. A BE-10, Benchmark Survey of U.S. Direct Investment Abroad will...

  7. 15 CFR 806.17 - Rules and regulations for BE-12, 2007 Benchmark Survey of Foreign Direct Investment in the United...

    Science.gov (United States)

    2010-01-01

    ... Benchmark Survey of Foreign Direct Investment in the United States. 806.17 Section 806.17 Commerce and... Survey of Foreign Direct Investment in the United States. A BE-12, Benchmark Survey of Foreign Direct... of the BE-12, 2007 Benchmark Survey of Foreign Direct Investment in the United States, contained...

  8. Genetic counselors' (GC) knowledge, awareness, understanding of clinical next-generation sequencing (NGS) genomic testing.

    Science.gov (United States)

    Boland, P M; Ruth, K; Matro, J M; Rainey, K L; Fang, C Y; Wong, Y N; Daly, M B; Hall, M J

    2015-12-01

    Genomic tests are increasingly complex, less expensive, and more widely available with the advent of next-generation sequencing (NGS). We assessed knowledge and perceptions among genetic counselors pertaining to NGS genomic testing via an online survey. Associations between selected characteristics and perceptions were examined. Recent education on NGS testing was common, but practical experience limited. Perceived understanding of clinical NGS was modest, specifically concerning tumor testing. Greater perceived understanding of clinical NGS testing correlated with more time spent in cancer-related counseling, exposure to NGS testing, and NGS-focused education. Substantial disagreement about the role of counseling for tumor-based testing was seen. Finally, a majority of counselors agreed with the need for more education about clinical NGS testing, supporting this approach to optimizing implementation.

  9. 15 CFR 801.12 - Rules and regulations for the BE-140, Benchmark Survey of Insurance Transactions by U.S...

    Science.gov (United States)

    2010-01-01

    ..., Benchmark Survey of Insurance Transactions by U.S. Insurance Companies with Foreign Persons. 801.12 Section.... AND FOREIGN PERSONS § 801.12 Rules and regulations for the BE-140, Benchmark Survey of Insurance Transactions by U.S. Insurance Companies with Foreign Persons. (a) The BE-140, Benchmark Survey of...

  10. A method of transferring G.T.S. benchmark value to survey area using electronic total station

    Digital Repository Service at National Institute of Oceanography (India)

    Ganesan, P.

    ...is impossible. In some places, GTS benchmarks are available within a kilometer distance and can be easily transferred to the survey area by fly leveling using an automatic level instrument and a graduated leveling staff. But in most of the cases, GTS benchmarks...

  11. Waiting for Treatment for Chronic Pain – a Survey of Existing Benchmarks: Toward Establishing Evidence-Based Benchmarks for Medically Acceptable Waiting Times

    Directory of Open Access Journals (Sweden)

    Mary E Lynch

    2007-01-01

    Full Text Available As medical costs escalate, health care resources must be prioritized. In this context, there is an increasing need for benchmarks and best practices in wait time management. In December 2005, the Canadian Pain Society struck a Task Force to identify benchmarks for acceptable wait times for treatment of chronic pain. The task force mandate included a systematic review and survey to identify national or international wait time benchmarks for chronic pain, proposed or in use, along with a review of the evidence upon which they are based. An extensive systematic review of the literature and a survey of International Association for the Study of Pain Chapter Presidents and key informants have identified that there are no established benchmarks or guidelines for acceptable wait times for the treatment of chronic pain in use in the world. In countries with generic guidelines or wait time standards that apply to all outpatient clinics, there have been significant challenges faced by pain clinics in meeting the established targets. Important next steps are to ensure appropriate additional research and the establishment of international benchmarks or guidelines for acceptable wait times for the treatment of chronic pain. This will facilitate advocacy for improved access to appropriate care for people suffering from chronic pain around the world.

  12. 15 CFR 801.11 - Rules and regulations for the BE-80, Benchmark Survey of Financial Services Transactions Between...

    Science.gov (United States)

    2010-01-01

    ... and commodity exchanges; other financial investment activities (including miscellaneous intermediation, portfolio management, investment advice, and all other financial investment activities); insurance carriers..., Benchmark Survey of Financial Services Transactions Between U.S. Financial Services Providers and...

  13. Radiochemical analyses of surface water from U.S. Geological Survey hydrologic bench-mark stations

    Science.gov (United States)

    Janzer, V.J.; Saindon, L.G.

    1972-01-01

    The U.S. Geological Survey's program for collecting and analyzing surface-water samples for radiochemical constituents at hydrologic bench-mark stations is described. Analytical methods used during the study are described briefly, and data obtained from 55 of the network stations in the United States during the period from 1967 to 1971 are given in tabular form. Concentration values are reported for dissolved uranium, radium, gross alpha and gross beta radioactivity. Values are also given for suspended gross alpha radioactivity in terms of natural uranium. Suspended gross beta radioactivity is expressed both as the equilibrium mixture of strontium-90/yttrium-90 and as cesium-137. Other physical parameters reported which describe the samples include the concentrations of dissolved and suspended solids, the water temperature and stream discharge at the time of the sample collection.

  14. Analytic Validation of Immunohistochemistry Assays: New Benchmark Data From a Survey of 1085 Laboratories.

    Science.gov (United States)

    Stuart, Lauren N; Volmar, Keith E; Nowak, Jan A; Fatheree, Lisa A; Souers, Rhona J; Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Astles, J Rex; Nakhleh, Raouf E

    2017-09-01

    Context: A cooperative agreement between the College of American Pathologists (CAP) and the United States Centers for Disease Control and Prevention was undertaken to measure laboratories' awareness and implementation of an evidence-based laboratory practice guideline (LPG) on immunohistochemical (IHC) validation practices published in 2014. Objective: To establish new benchmark data on IHC laboratory practices. Design: A 2015 survey on IHC assay validation practices was sent to laboratories subscribed to specific CAP proficiency testing programs and to additional nonsubscribing laboratories that perform IHC testing. Specific questions were designed to capture laboratory practices not addressed in a 2010 survey. Results: The analysis was based on responses from 1085 laboratories that perform IHC staining. Ninety-six percent (809 of 844) always documented validation of IHC assays. Sixty percent (648 of 1078) had separate procedures for predictive and nonpredictive markers, 42.7% (220 of 515) had procedures for laboratory-developed tests, 50% (349 of 697) had procedures for testing cytologic specimens, and 46.2% (363 of 785) had procedures for testing decalcified specimens. Minimum case numbers were specified by 85.9% (720 of 838) of laboratories for nonpredictive markers and 76% (584 of 768) for predictive markers. Median concordance requirements were 95% for both types. For initial validation, 75.4% (538 of 714) of laboratories adopted the 20-case minimum for nonpredictive markers and 45.9% (266 of 579) adopted the 40-case minimum for predictive markers as outlined in the 2014 LPG. The most common method for validation was correlation with morphology and expected results. Laboratories also reported which assay changes necessitated revalidation and their minimum case requirements. Conclusions: Benchmark data on current IHC validation practices and procedures may help laboratories understand the issues and influence further refinement of LPG recommendations.

  15. Using Institutional Survey Data to Jump-Start Your Benchmarking Process

    Science.gov (United States)

    Chow, Timothy K. C.

    2012-01-01

    Guided by the missions and visions, higher education institutions utilize benchmarking processes to identify better and more efficient ways to carry out their operations. Aside from the initial planning and organization steps involved in benchmarking, a matching or selection step is crucial for identifying other institutions that have good…

  16. NGS-based deep bisulfite sequencing.

    Science.gov (United States)

    Lee, Suman; Kim, Joomyeong

    2016-01-01

    We have developed an NGS-based deep bisulfite sequencing protocol for the DNA methylation analysis of genomes. This approach allows the rapid and efficient construction of NGS-ready libraries with a large number of PCR products that have been individually amplified from bisulfite-converted DNA. This approach also employs a bioinformatics strategy to sort the raw sequence reads generated from NGS platforms and subsequently to derive DNA methylation levels for individual loci. The results demonstrated that this NGS-based deep bisulfite sequencing approach provides not only DNA methylation levels but also informative DNA methylation patterns that have not been seen through other existing methods.
    • This protocol provides an efficient method for generating NGS-ready libraries from individually amplified PCR products.
    • This protocol provides a bioinformatics strategy for sorting NGS-derived raw sequence reads.
    • This protocol provides deep bisulfite sequencing results that can measure DNA methylation levels and patterns of individual loci.
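
    The per-locus quantity the protocol reports is easy to state: bisulfite converts unmethylated cytosines to uracil (read as T), while methylated cytosines remain C, so the methylation level at a CpG is C / (C + T) over the reads covering it. A minimal sketch with an invented pileup:

        from collections import Counter

        def methylation_level(bases_at_cpg):
            """Fraction methylated at one CpG cytosine after bisulfite
            conversion: unmethylated C reads as T, so level = C / (C + T)."""
            counts = Counter(b.upper() for b in bases_at_cpg)
            c, t = counts["C"], counts["T"]
            return c / (c + t) if (c + t) else float("nan")

        # Hypothetical bases observed at one CpG across ten sorted reads:
        pileup = list("CCCTCTCCCC")
        print(f"{methylation_level(pileup):.0%} methylated")   # 80% methylated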

  17. Magic Mirror on the Wall, Who’s the Fastest Database of Them All? A survey of Database Benchmarks

    Science.gov (United States)

    1993-06-21

    Wars" and " Benchmarketing "[51. lowing criteria for a good domain-specific benchmark: The best defense is to have knowledge about the DBMS benchmarks...will be examined: by vendors. Gray sites two major benchmark abuses: "Benchmark Wars", and " Benchmarketing "!51. The o TPC Benchmark A (TPC-A) "Benchmark...the standard benchmarks defined by the the benchmark run faster. " Benchmarketing " is where a TPC. The Wisconsin benchmark is a benchmark for com

  18. Toward Better Oversight of NGS Tests.

    Science.gov (United States)

    2016-09-01

    The FDA has published two draft guidance documents aimed at streamlining its oversight of tests based on next-generation sequencing (NGS). One contains preliminary recommendations addressing the analytic validity of NGS-based tests for hereditary diseases; the other explains how test developers can obtain official recognition of their genetic variant databases, potentially speeding marketing clearance or approval.

  19. Benchmarking Work Practices and Outcomes in Australian Universities Using an Employee Survey

    Science.gov (United States)

    Langford, Peter H.

    2010-01-01

    The purpose of the current study was to benchmark a broad range of work practices and outcomes in Australian universities against other industries. Past research suggests occupational stress experienced by academic staff is worse than experienced by employees in other industries. However, no other practices or outcomes can be compared confidently.…

  20. Benchmarking Alumni Relations in Community Colleges: Findings from a 2015 CASE Survey

    Science.gov (United States)

    Paradise, Andrew

    2016-01-01

    The Benchmarking Alumni Relations in Community Colleges white paper features key data on alumni relations programs at community colleges across the United States. The paper compares results from 2015 and 2012 across such areas as the structure, operations and budget for alumni relations, alumni data collection and management, alumni communications…

  1. 76 FR 77208 - Affirmation of Vertical Datum for Surveying and Mapping Activities for the Islands of St. Croix...

    Science.gov (United States)

    2011-12-12

    ...: National Geodetic Survey (NGS), National Ocean Service (NOS), National Oceanic and Atmospheric... (NOS), National Geodetic Survey (NGS), has completed the definition and implementation of VIVD09... monuments is available in digital form, from the NGS Web site:...

  2. National Geodetic Survey's Airport Aerial Photography

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The National Geodetic Survey (NGS), formerly part of the U.S. Coast and Geodetic Survey, has been performing Aeronautical surveys since the 1920's. NGS, in...

  3. Report of results of benchmarking survey of central heating operations at NASA centers and various corporations

    Science.gov (United States)

    Hoffman, Thomas R.

    1995-01-01

    In recent years, Total Quality Management has swept across the country. Many companies and the Government have started looking at every aspect of how business is done and how money is spent. The idea or goal is to provide a service that is better, faster and cheaper. The first step in this process is to document or measure the process or operation as it stands now. For Lewis Research Center, this report is the first step in the analysis of heating plant operations. This report establishes the original benchmark that can be referred to in the future. The report also provides a comparison to other organizations' heating plants to help in the brainstorming of new ideas. The next step is to propose and implement changes that would meet the goals as mentioned above. After the changes have been implemented, the measuring process starts over again. This provides for a continuous improvement process.

  4. Benchmarking Alumni Relations in Community Colleges: Findings from a 2015 CASE Survey. CASE White Paper

    Science.gov (United States)

    Paradise, Andrew

    2016-01-01

    Building on the inaugural survey conducted three years prior, the 2015 CASE Community College Alumni Relations survey collected additional insightful data on staffing, structure, communications, engagement, and fundraising. This white paper features key data on alumni relations programs at community colleges across the United States. The paper…

  5. Estimating inbreeding coefficients from NGS data

    DEFF Research Database (Denmark)

    Vieira, Filipe Garrett; Fumagalli, Matteo; Albrechtsen, Anders;

    2013-01-01

    Most methods for Next-Generation Sequencing (NGS) data analyses incorporate information regarding allele frequencies using the assumption of Hardy-Weinberg Equilibrium (HWE) as a prior. However, many organisms, including those that are domesticated, partially selfing or have asexual life cycles, show strong deviations from HWE. For such species, and especially for low coverage data, it is necessary to obtain estimates of inbreeding coefficients (F) for each individual before calling genotypes. Here, we present two methods for estimating inbreeding coefficients from NGS data based on an Expectation...
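
    One standard construction for estimating a per-individual F from genotype likelihoods (a sketch of the genre, not necessarily the two methods in this record) treats each site as a mixture: with probability F the individual's two alleles are identical by descent, so heterozygotes are impossible, and with probability 1 - F the genotype follows HWE. An EM iteration then alternates between posterior membership probabilities and an updated F. The allele frequencies q and genotype likelihoods below are invented.

        def estimate_F(sites, iters=50):
            """EM sketch for one individual's inbreeding coefficient F.
            sites: (q, (GL0, GL1, GL2)) with q the alt-allele frequency
            and GLg proportional to P(reads | genotype g)."""
            F = 0.1                                    # starting value
            for _ in range(iters):
                zs = []
                for q, (gl0, gl1, gl2) in sites:
                    ibd = F * ((1 - q) * gl0 + q * gl2)       # no hets if IBD
                    hwe = (1 - F) * ((1 - q) ** 2 * gl0
                                     + 2 * q * (1 - q) * gl1
                                     + q ** 2 * gl2)
                    zs.append(ibd / (ibd + hwe))       # E-step posterior
                F = sum(zs) / len(zs)                  # M-step update
            return F

        # Invented low-coverage genotype likelihoods at four sites:
        sites = [(0.4, (0.90, 0.05, 0.05)), (0.5, (0.10, 0.10, 0.80)),
                 (0.2, (0.70, 0.20, 0.10)), (0.6, (0.05, 0.05, 0.90))]
        print(round(estimate_F(sites), 2))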

  6. 78 FR 47671 - Proposed Information Collection; Comment Request; Benchmark Survey of Insurance Transactions by U...

    Science.gov (United States)

    2013-08-06

    ... Mark Xu, Chief, Special Surveys Branch, Balance of Payments Division, (BE-50), Bureau of Economic... insurance companies resident abroad; (7) receipts for auxiliary insurance services; and (8) payments for... their insurance transactions for each category. The data are needed to monitor U.S. international trade...

  7. Benchmarking Alumni Relations in Community Colleges: Findings from a 2012 CASE Survey. CASE White Paper

    Science.gov (United States)

    Paradise, Andrew; Heaton, Paul

    2013-01-01

    In 2011, CASE founded the Center for Community College Advancement to provide training and resources to help community colleges build and sustain effective fundraising, alumni relations and communications and marketing programs. This white paper summarizes the results of a groundbreaking survey on alumni relations programs at community colleges…

  8. 76 FR 79054 - Direct Investment Surveys: BE-12, Benchmark Survey of Foreign Direct Investment in the United States

    Science.gov (United States)

    2011-12-21

    ... survey forms. The changes are intended to align the data collection program for multinational companies with available resources and align the statistics on multinational companies with recent changes in... data collection program for multinational companies with available resources. Under the revised...

  9. Statistical modelling of spatio-temporal dependencies in NGS data

    NARCIS (Netherlands)

    Ranciati, Saverio

    2016-01-01

    Next-generation sequencing (NGS) has rapidly established itself as the current standard in genetic analysis. This shift from microarray to NGS requires new statistical strategies to address the research questions. First, NGS data consist of discrete observations, usually characterized...

  10. The potential of whole genome NGS for infectious disease diagnosis.

    Science.gov (United States)

    Lecuit, Marc; Eloit, Marc

    2015-01-01

    Non-targeted identification of microbes is now possible directly in biological samples, based on whole-genome NGS (WG-NGS) techniques that allow deep sequencing of nucleic acids, data mining, and sorting out of sequences of pathogens without any a priori hypothesis. WG-NGS was initially used only as a research tool due to its cost, complexity and lack of standardization. Recent improvements in sample preparation and bioinformatics pipelines, and decreases in cost, now allow actionable diagnostics in patients. The potency and limits of WG-NGS and possible future indications are discussed here. WG-NGS will likely soon become a standard procedure in microbiological diagnosis.

  11. The CERN Neutrino beam to Gran Sasso (NGS)

    CERN Document Server

    Bailey, R; Ball, A E; Bonnal, P; Buhler-Broglin, Manfred; Détraz, C; Elsener, Konrad; Ereditato, A; Faugeras, Paul E; Ferrari, A; Fortuna, G; Grant, A L; Guglielmi, A M; Hilaire, A; Hübner, Kurt; Jonker, M; Kissler, Karl Heinz; López-Hernandez, L A; Maugain, J M; Migliozzi, P; Palladino, Vittorio; Pietropaolo, F; Revol, Jean Pierre Charles; Sala, P R; Sanelli, C; Stevenson, Graham Roger; Vassilopoulos, N; Vincke, H H; Weisse, E; Wilhemsson, M

    1999-01-01

    The conceptual technical design of the NGS (CERN neutrino beam to Gran Sasso) facility has been presented in the report CERN 98-02 / INFN-AE/98-05. Additional information, in particular an update on various neutrino beam options for the NGS facility, has been provided in a memorandum to the CERN-SPSC Committee (CERN-SPSC/98-35). In the present report, further improvements on the NGS design and performance, in particular new scenarios for SPS proton cycles for NGS operation and a new version of the NGS "high energy" neutrino beam for ντ appearance experiments, are described. This new NGS reference beam is estimated to provide three times more ντ events per year than the beam presented in the 1998 report. The radiological aspects of the NGS facility have been re-examined with the new beam design. An updated version of the construction schedule is also presented.

  12. An Assessment of Airborne Gravimetry Collected under the NGS GRAV-D Project

    Science.gov (United States)

    Holmes, S. A.

    2009-12-01

    In the United States, the National Geodetic Survey [NGS] holds the official charter to maintain the vertical datum for the USA and its territories. This includes the responsibility to maintain a current geoid model for transforming between orthometric (H) and geodetic (h) heights. The latest (2009) NGS geoid model incorporates the latest GRACE-based satellite-only geopotential solutions, very-high-resolution digital elevation models [DEMs] derived from the Shuttle Radar Topography Mission [SRTM], and other additional data and processing improvements. This recent geoid model also benefited greatly from the prior release of the National Geospatial-Intelligence Agency's [NGA] Earth Gravitational Model 2008 [EGM2008], which served as a computational reference field for the new geoid, but was also particularly useful for identifying and removing corrupted data from the NGS gravimetry database. Looking forwards, NGS intends to increase its efforts to refine and improve its future national geoid models. To this end, their ambitious "Gravity for the Redefinition of the American Vertical Datum" [GRAV-D] project aims to update the NGS gravimetry holdings by flying new airborne gravity surveys over a large fraction of the USA and its territories. Concurrent efforts will focus on developing new processing techniques for optimally incorporating improved gravimetry into the final geoid solution. To this end, the GRAV-D team has already flown several surveys in the Gulf of Mexico, Puerto Rico, US Virgin Islands, and Alaska. Testing and analysis aimed at calibrating and validating the preliminary survey data are already underway. The latest assessment of these recent efforts, including the extent to which this new data can be expected to contribute to an improved gravimetric geoid model, is presented here.

  13. The ICR142 NGS validation series: a resource for orthogonal assessment of NGS analysis.

    Science.gov (United States)

    Ruark, Elise; Renwick, Anthony; Clarke, Matthew; Snape, Katie; Ramsay, Emma; Elliott, Anna; Hanks, Sandra; Strydom, Ann; Seal, Sheila; Rahman, Nazneen

    2016-01-01

    To provide a useful community resource for orthogonal assessment of NGS analysis software, we present the ICR142 NGS validation series. The dataset includes high-quality exome sequence data from 142 samples together with Sanger sequence data at 730 sites; 409 sites with variants and 321 sites at which variants were called by an NGS analysis tool, but no variant is present in the corresponding Sanger sequence. The dataset includes 286 indel variants and 275 negative indel sites, and thus the ICR142 validation dataset is of particular utility in evaluating indel calling performance. The FASTQ files and Sanger sequence results can be accessed in the European Genome-phenome Archive under the accession number EGAS00001001332.

  14. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    ...compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...

  15. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), termed in Indonesian holistic quality management, because benchmarking is a tool for looking for ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes to obtain information that can help the organization improve its performance.

  16. Financial Benchmarking

    OpenAIRE

    2012-01-01

    This bachelor's thesis is focused on financial benchmarking of TULIPA PRAHA s.r.o. The aim of this work is to evaluate the financial situation of the company, identify its strengths and weaknesses, and find out how efficient the performance of this company is in comparison with top companies within the same field, by using the INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristics of financial analysis, which financial benchmarking is based on a...

  17. Geodetic Control Points, Benchmarks; Vertical elevation bench marks for monumented geodetic survey control points for which mean sea level elevations have been determined., Published in 1995, 1:24000 (1in=2000ft) scale, Rhode Island and Providence Plantations.

    Data.gov (United States)

    NSGIC State | GIS Inventory — Geodetic Control Points dataset current as of 1995. Benchmarks; Vertical elevation bench marks for monumented geodetic survey control points for which mean sea level...

  18. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...

  19. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red...
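
    The first-tier screening logic described above reduces to a comparison of measured concentrations against the benchmarks. A minimal sketch with entirely hypothetical values (the report's actual benchmarks are chemical- and species-specific):

        def screen_contaminants(measured, benchmarks):
            """Tier-1 screening: retain as a contaminant of potential concern
            (COPC) any chemical whose concentration exceeds its benchmark."""
            return [chem for chem, conc in measured.items()
                    if conc > benchmarks.get(chem, float("inf"))]

        # Hypothetical concentrations and NOAEL-based benchmarks (mg/L):
        measured = {"cadmium": 0.012, "zinc": 0.04, "selenium": 0.001}
        benchmarks = {"cadmium": 0.005, "zinc": 0.10, "selenium": 0.002}
        print(screen_contaminants(measured, benchmarks))   # ['cadmium']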

  20. Geodetic Control Points, Approx 3 mile survey grade GPS grid incorporating NGS and Davenport & Bettendorf existing control (re-occupied), Published in 2005, 1:600 (1in=50ft) scale, Scott County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Geodetic Control Points dataset, published at 1:600 (1in=50ft) scale, was produced all or in part from Field Survey/GPS information as of 2005. It is described...

  1. Benchmarking Non-Hardware Balance-of-System (Soft) Costs for U.S. Photovoltaic Systems Using a Bottom-Up Approach and Installer Survey

    Energy Technology Data Exchange (ETDEWEB)

    Ardani, Kristen [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States); Feldman, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ong, Sean [National Renewable Energy Lab. (NREL), Golden, CO (United States); Barbose, Galen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wiser, Ryan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-11-01

    This report presents results from the first U.S. Department of Energy (DOE) sponsored, bottom-up data-collection and analysis of non-hardware balance-of-system costs—often referred to as “business process” or “soft” costs—for residential and commercial photovoltaic (PV) systems. Annual expenditure and labor-hour-productivity data are analyzed to benchmark 2010 soft costs related to the DOE priority areas of (1) customer acquisition; (2) permitting, inspection, and interconnection; (3) installation labor; and (4) installer labor for arranging third-party financing. Annual expenditure and labor-hour data were collected from 87 PV installers. After eliminating outliers, the survey sample consists of 75 installers, representing approximately 13% of all residential PV installations and 4% of all commercial installations added in 2010. Including assumed permitting fees, in 2010 the average soft costs benchmarked in this analysis total $1.50/W for residential systems (ranging from $0.66/W to $1.66/W between the 20th and 80th percentiles). For commercial systems, the median 2010 benchmarked soft costs (including assumed permitting fees) are $0.99/W for systems smaller than 250 kW (ranging from $0.51/W to $1.45/W between the 20th and 80th percentiles) and $0.25/W for systems larger than 250 kW (ranging from $0.17/W to $0.78/W between the 20th and 80th percentiles). Additional soft costs not benchmarked in the present analysis (e.g., installer profit, overhead, financing, and contracting) are significant and would add to these figures. The survey results provide a benchmark for measuring—and helping to accelerate—progress over the next decade toward achieving the DOE SunShot Initiative’s soft-cost-reduction targets. We conclude that the selected non-hardware business processes add considerable cost to U.S. PV systems, constituting 23% of residential PV system price, 17% of small commercial system price, and 5% of large commercial system price (in 2010

  2. NGS-eval: NGS Error analysis and novel sequence VAriant detection tooL.

    Science.gov (United States)

    May, Ali; Abeln, Sanne; Buijs, Mark J; Heringa, Jaap; Crielaard, Wim; Brandt, Bernd W

    2015-07-01

    Massively parallel sequencing of microbial genetic markers (MGMs) is used to uncover the species composition in a multitude of ecological niches. These sequencing runs often contain a sample with known composition that can be used to evaluate the sequencing quality or to detect novel sequence variants. With NGS-eval, the reads from such (mock) samples can be used to (i) explore the differences between the reads and their references and to (ii) estimate the sequencing error rate. This tool maps these reads to references and calculates as well as visualizes the different types of sequencing errors. Clearly, sequencing errors can only be accurately calculated if the reference sequences are correct. However, even with known strains, it is not straightforward to select the correct references from databases. We previously analysed a pyrosequencing dataset from a mock sample to estimate sequencing error rates and detected sequence variants in our mock community, allowing us to obtain an accurate error estimation. Here, we demonstrate the variant detection and error analysis capability of NGS-eval with Illumina MiSeq reads from the same mock community. While tailored towards the field of metagenomics, this server can be used for any type of MGM-based reads. NGS-eval is available at http://www.ibi.vu.nl/programs/ngsevalwww/.
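
    The error bookkeeping described here amounts to aligning mock-community reads against their known references and tallying mismatches, insertions and deletions per aligned base. The sketch below assumes the alignment has already been done; a toy gapped pair stands in for a real mapper's output.

        def error_rates(alignments):
            """alignments: (read, ref) string pairs already aligned with '-'
            gap characters. Returns the per-type fraction of aligned columns."""
            counts = {"match": 0, "mismatch": 0, "insertion": 0, "deletion": 0}
            for read, ref in alignments:
                for r, t in zip(read, ref):
                    if r == "-":
                        counts["deletion"] += 1      # base missing from the read
                    elif t == "-":
                        counts["insertion"] += 1     # extra base in the read
                    elif r == t:
                        counts["match"] += 1
                    else:
                        counts["mismatch"] += 1
            total = sum(counts.values())
            return {k: v / total for k, v in counts.items()}

        # Toy aligned pair containing one mismatch and one deletion:
        print(error_rates([("ACGT-A", "ACGAAA")]))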

  3. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional distance functions. The frontier is given by an explicit quantile, e.g. "the best 90 %". Using the explanatory model of the inefficiency, the user can adjust the frontiers by submitting state variables that influence the inefficiency. An efficiency study of Danish dairy farms is implemented in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.
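
    As a toy illustration of an explicit-quantile frontier (standing in for the stochastic frontier and directional distance machinery of the paper), one can score units on a single output-per-input ratio against the quantile that defines "the best 90 %"; the farm data below are invented.

        import numpy as np

        def quantile_frontier_scores(inputs, outputs, q=0.9):
            """Score each unit's productivity against the q-quantile frontier;
            a score of 1.0 or more means the unit is on or above the frontier."""
            ratios = np.asarray(outputs, float) / np.asarray(inputs, float)
            return ratios / np.quantile(ratios, q)   # e.g. "the best 90 %"

        # Invented single-input, single-output data for five farms:
        print(quantile_frontier_scores([10, 12, 9, 15, 11],
                                       [25, 33, 18, 45, 30]).round(2))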

  4. Orthogonal NGS for High Throughput Clinical Diagnostics.

    Science.gov (United States)

    Chennagiri, Niru; White, Eric J; Frieden, Alexander; Lopez, Edgardo; Lieber, Daniel S; Nikiforov, Anastasia; Ross, Tristen; Batorsky, Rebecca; Hansen, Sherry; Lip, Va; Luquette, Lovelace J; Mauceli, Evan; Margulies, David; Milos, Patrice M; Napolitano, Nichole; Nizzari, Marcia M; Yu, Timothy; Thompson, John F

    2016-04-19

    Next generation sequencing is a transformative technology for discovering and diagnosing genetic disorders. However, high-throughput sequencing remains error-prone, necessitating variant confirmation in order to meet the exacting demands of clinical diagnostic sequencing. To address this, we devised an orthogonal, dual platform approach employing complementary target capture and sequencing chemistries to improve speed and accuracy of variant calls at a genomic scale. We combined DNA selection by bait-based hybridization followed by Illumina NextSeq reversible terminator sequencing with DNA selection by amplification followed by Ion Proton semiconductor sequencing. This approach yields genomic scale orthogonal confirmation of ~95% of exome variants. Overall variant sensitivity improves as each method covers thousands of coding exons missed by the other. We conclude that orthogonal NGS offers improvements in variant calling sensitivity when two platforms are used, better specificity for variants identified on both platforms, and greatly reduces the time and expense of Sanger follow-up, thus enabling physicians to act on genomic results more quickly.
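
    The dual-platform confirmation logic reduces to a set intersection over normalized variant keys: calls seen by both chemistries count as orthogonally confirmed, while platform-unique calls are routed for follow-up. A minimal sketch with hypothetical variants:

        def reconcile(platform_a, platform_b):
            """Cross-check variant calls from two orthogonal NGS platforms;
            returns (confirmed, needs_followup) as sets of variant keys."""
            confirmed = platform_a & platform_b       # called by both chemistries
            needs_followup = platform_a ^ platform_b  # unique to one platform
            return confirmed, needs_followup

        # Hypothetical variant keys: (chromosome, position, ref, alt).
        illumina = {("1", 1001, "A", "G"), ("2", 5040, "C", "T")}
        proton = {("1", 1001, "A", "G"), ("7", 880, "G", "A")}
        confirmed, followup = reconcile(illumina, proton)
        print(sorted(confirmed), sorted(followup))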

  5. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  6. CRCDA--Comprehensive resources for cancer NGS data analysis.

    Science.gov (United States)

    Thangam, Manonanthini; Gopal, Ramesh Kumar

    2015-01-01

    Next generation sequencing (NGS) innovations set a compelling landmark in life science and changed the direction of research in clinical oncology with their power to diagnose and treat cancer. The aim of our portal, comprehensive resources for cancer NGS data analysis (CRCDA), is to provide a collection of different NGS tools and pipelines under diverse classes, together with cancer pathways and databases and, furthermore, literature information from PubMed. The literature data were constrained to the 18 most common cancer types, such as breast cancer and colon cancer, that are prevalent in the worldwide population. For convenience, NGS cancer tools have been categorized into cancer genomics, cancer transcriptomics, cancer epigenomics, quality control and visualization. Pipelines for variant detection, quality control and data analysis were listed to provide an out-of-the-box solution for NGS data analysis, which may help researchers to overcome challenges in selecting and configuring individual tools for analysing exome, whole genome and transcriptome data. An extensive search page was developed that can be queried by using (i) type of data [literature, gene data and sequence read archive (SRA) data] and (ii) type of cancer (selected based on global incidence and accessibility of data). For each category of analysis, a variety of tools are available, and the biggest challenge is in searching for and using the right tool for the right application. The objective of the work is to collect the tools in each category available at various places and arrange the tools and other data in a simple and user-friendly manner for biologists and oncologists to find information more easily. To the best of our knowledge, we have collected and presented a comprehensive package of most of the resources available for NGS data analysis in cancer. Given these factors, we believe that this website will be a useful resource for the NGS research community working on cancer. Database URL: http://bioinfo.au-kbc.org.in/ngs/ngshome.html.

  7. Next generation sequencing (NGS)technologies and applications

    Energy Technology Data Exchange (ETDEWEB)

    Vuyisich, Momchilo [Los Alamos National Laboratory

    2012-09-11

    NGS technology overview: (1) NGS library preparation - nucleic acids extraction, sample quality control, RNA conversion to cDNA, addition of sequencing adapters, quality control of library; (2) Sequencing - clonal amplification of library fragments (except PacBio), sequencing by synthesis, data output (reads and quality); (3) Data analysis - read mapping, genome assembly, gene expression, operon structure, sRNA discovery, and epigenetic analyses.

  8. 15 CFR 801.10 - Rules and regulations for the BE-120, Benchmark Survey of Transactions in Selected Services and...

    Science.gov (United States)

    2010-01-01

    ...; educational and training services; engineering, architectural, and surveying services; financial services... news; disbursements to maintain government tourism and business promotion offices; disbursements for...

  9. The Minnesota Report Card on Environmental Literacy: A Benchmark Survey of Adult Environmental Knowledge, Attitudes and Behavior.

    Science.gov (United States)

    Murphy, Tony P.

    This report documents the results of the first statewide survey concerning the environmental literacy of adults in Minnesota. During July-September 2001, a random sample of 1000 adults was surveyed for their knowledge about, attitudes toward, and behaviors related to the environment. This report describes the environmental literacy of Minnesotans…

  10. The TRENDS High-Contrast Imaging Survey. VI. Discovery of a Mass, Age, and Metallicity Benchmark Brown Dwarf

    CERN Document Server

    Crepp, Justin R; Bechter, Eric B; Montet, Benjamin T; Johnson, John Asher; Piskorz, Danielle; Howard, Andrew W; Isaacson, Howard

    2016-01-01

    The mass and age of substellar objects are degenerate parameters leaving the evolutionary state of brown dwarfs ambiguous without additional information. Theoretical models are normally used to help distinguish between old, massive brown dwarfs and young, low mass brown dwarfs but these models have yet to be properly calibrated. We have carried out an infrared high-contrast imaging program with the goal of detecting substellar objects as companions to nearby stars to help break degeneracies in inferred physical properties such as mass, age, and composition. Rather than using imaging observations alone, our targets are pre-selected based on the existence of dynamical accelerations informed from years of stellar radial velocity (RV) measurements. In this paper, we present the discovery of a rare benchmark brown dwarf orbiting the nearby ($d=18.69\\pm0.19$ pc), solar-type (G9V) star HD 4747 ([Fe/H]=$-0.22\\pm0.04$) with a projected separation of only $\\rho=11.3\\pm0.2$ AU ($\\theta \\approx$ 0.6''). Precise Doppler m...

  11. Benchmarking of energy time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
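
    As a concrete illustration of the simplest benchmarking adjustment described above, the sketch below scales a monthly series so that it agrees with an annual benchmark total (pro-rata adjustment). This is illustrative only; the staged first-difference approach recommended for the coal series would instead spread the discrepancy smoothly across periods. The numbers are hypothetical.

```python
# Pro-rata benchmarking: scale sub-annual values so their total matches
# the annual benchmark. Hypothetical monthly data.
from typing import List

def pro_rata_benchmark(monthly: List[float], annual_benchmark: float) -> List[float]:
    total = sum(monthly)
    if total == 0:
        raise ValueError("cannot scale an all-zero series")
    factor = annual_benchmark / total
    return [m * factor for m in monthly]

# Twelve monthly values summing to 118, benchmarked to an annual total of 120.
adjusted = pro_rata_benchmark([10, 9, 11, 10, 10, 9, 10, 10, 10, 9, 10, 10], 120.0)
assert abs(sum(adjusted) - 120.0) < 1e-9
```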

  12. 75 FR 57263 - New Policy Announcing That Traditional Horizontal Survey Projects Performed With Terrestrial...

    Science.gov (United States)

    2010-09-20

    ... Into NGS Databases AGENCY: National Geodetic Survey (NGS), National Ocean Service (NOS), National..., 2011 the National Geodetic Survey (NGS) will cease accepting data, all orders and classes, from... September 1984 ``Standards and Specifications for Geodetic Control Networks'' for inclusion into the...

  13. Precise GPS ephemerides from DMA and NGS tested by time transfer

    Science.gov (United States)

    Lewandowski, Wlodzimierz W.; Petit, Gerard; Thomas, Claudine

    1992-01-01

    It was shown that the use of the Defense Mapping Agency's (DMA) precise ephemerides brings a significant improvement to the accuracy of GPS time transfer. At present a new set of precise ephemerides produced by the National Geodetic Survey (NGS) has been made available to the timing community. This study demonstrates that both types of precise ephemerides improve long-distance GPS time transfer and remove the effects of Selective Availability (SA) degradation of broadcast ephemerides. The issue of overcoming SA is also discussed in terms of the routine availability of precise ephemerides.

  14. 2014 NOAA NGS Topobathy Lidar: Connecticut

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data were collected by the National Oceanic Atmospheric Administration National Geodetic Survey Remote Sensing Division using a Riegl VQ820G system. The data...

  15. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  16. NGS catalog: A database of next generation sequencing studies in humans.

    Science.gov (United States)

    Xia, Junfeng; Wang, Qingguo; Jia, Peilin; Wang, Bing; Pao, William; Zhao, Zhongming

    2012-06-01

    Next generation sequencing (NGS) technologies have been rapidly applied in biomedical and biological research since their advent only a few years ago, and they are expected to advance at an unprecedented pace in the following years. To provide the research community with a comprehensive NGS resource, we have developed the database Next Generation Sequencing Catalog (NGS Catalog, http://bioinfo.mc.vanderbilt.edu/NGS/index.html), a continually updated database that collects, curates and manages available human NGS data obtained from published literature. NGS Catalog deposits publication information of NGS studies and their mutation characteristics (SNVs, small insertions/deletions, copy number variations, and structural variants), as well as mutated genes and gene fusions detected by NGS. Other functions include user data upload, NGS general analysis pipelines, and NGS software. NGS Catalog is particularly useful for investigators who are new to NGS but would like to take advantage of these powerful technologies for their own research. Finally, based on the data deposited in NGS Catalog, we summarized features and findings from whole exome sequencing, whole genome sequencing, and transcriptome sequencing studies for human diseases or traits.

  17. Photometric calibration of NGS/POSS and ESO/SRC plates using the NOAO PDS measuring engine. II - Surface photometry

    Science.gov (United States)

    Cutri, Roc M.; Low, Frank J.; Guhathakurta, Puragra

    1993-01-01

    In this paper we present a method to calibrate surface photometry of faint sources measured from direct photographic plates, such as those of the NGS/POSS and ESO/SRC Sky Survey. This calibration procedure does not require scanning sensitometer spots on the plates, but instead uses measurements of the brightness profiles of many faint stars of known brightness to fit a linearized approximation to the characteristic curve. The approximation is valid for only low- to medium-density emulsions, so this technique is appropriate only for relatively faint emission. Comparison between measurements of representative extended sources on the NGS/POSS and CCD images indicates that surface photometry can be obtained from the Sky Survey plates accurate to 0.1-0.3 mag in the range mu(B) between 23 and 27 and mu(R) between 22 and 26 mag/sq arcsec.
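
    The core of the method is an ordinary least-squares fit of measured plate density against known stellar magnitude on the low-density tail of the characteristic curve, which is then inverted to calibrate unknown measurements. A minimal sketch with hypothetical numbers:

```python
# Fit a linear approximation to the characteristic curve (density vs.
# magnitude) from faint stars of known brightness, then invert it to turn
# measured densities into magnitudes. Linearity holds only at low density,
# which is why the method suits faint emission. Data are hypothetical.
import numpy as np

known_mag = np.array([20.5, 21.0, 21.5, 22.0, 22.5])  # catalog magnitudes
density   = np.array([0.42, 0.35, 0.29, 0.22, 0.16])  # measured plate densities

a, b = np.polyfit(known_mag, density, 1)  # density ~= a * mag + b

def density_to_mag(d: float) -> float:
    """Invert the fitted characteristic curve."""
    return (d - b) / a

print(density_to_mag(0.25))  # estimated magnitude for a measured density
```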

  18. Estimating IBD tracts from low coverage NGS data

    DEFF Research Database (Denmark)

    Garrett Vieira, Filipe Jorge; Albrechtsen, Anders; Nielsen, Rasmus

    2016-01-01

    that the new method provides a marked increase in accuracy even at low coverage. AVAILABILITY AND IMPLEMENTATION: The methods presented in this work were implemented in C/C ++ and are freely available for non-commercial use from https://github.com/fgvieira/ngsF-HMM CONTACT: fgvieira@snm.ku.dk SUPPLEMENTARY...

  19. Geodetic Survey Water Level Observations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Over one million images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) forms captured from microfiche. Tabular forms and charts...

  20. Developing a weighting strategy to include mobile phone numbers into an ongoing population health survey using an overlapping dual-frame design with limited benchmark information.

    Science.gov (United States)

    Barr, Margo L; Ferguson, Raymond A; Hughes, Phil J; Steel, David G

    2014-09-04

    In 2012 mobile phone numbers were included into the ongoing New South Wales Population Health Survey (NSWPHS) using an overlapping dual-frame design. Previously in the NSWPHS the sample was selected using random digit dialing (RDD) of landline phone numbers. The survey was undertaken using computer assisted telephone interviewing (CATI). The weighting strategy needed to be significantly expanded to manage the differing probabilities of selection by frame, including that of children of mobile-only phone users, and to adjust for the increased chance of selection of dual-phone users. This paper describes the development of the final weighting strategy to properly combine the data from two overlapping sample frames accounting for the fact that population benchmarks for the different sampling frames were not available at the state or regional level. Estimates of the number of phone numbers for the landline and mobile phone frames used to calculate the differing probabilities of selection by frame, for New South Wales (NSW) and by stratum, were obtained by apportioning Australian estimates as none were available for NSW. The weighting strategy was then developed by calculating person selection probabilities, selection weights, applying a constant composite factor to the dual-phone users sample weights, and benchmarking to the latest NSW population by age group, sex and stratum. Data from the NSWPHS for the first quarter of 2012 was used to test the weighting strategy. This consisted of data on 3395 respondents with 2171 (64%) from the landline frame and 1224 (36%) from the mobile frame. However, in order to calculate the weights, data needed to be available for all core weighting variables and so 3378 respondents, 2933 adults and 445 children, had sufficient data to be included. Average person weights were 3.3 times higher for the mobile-only respondents, 1.3 times higher for the landline-only respondents and 1.7 times higher for dual-phone users in the mobile frame
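
    A minimal sketch of the core weighting step described above, with illustrative selection probabilities and the constant composite factor applied to dual-phone users; the actual NSWPHS weights additionally involve within-household selection and the post-stratification benchmarks.

```python
# Overlapping dual-frame design: inverse-probability weights per frame,
# with dual-phone users down-weighted by a constant composite factor so
# the two frames combine without double counting. Values are illustrative.
def base_weight(p_frame: float, dual_user: bool, composite: float = 0.5) -> float:
    w = 1.0 / p_frame                      # inverse of selection probability
    return w * composite if dual_user else w

w_landline_only = base_weight(p_frame=1 / 2500, dual_user=False)
w_mobile_only   = base_weight(p_frame=1 / 6200, dual_user=False)
w_dual_mobile   = base_weight(p_frame=1 / 6200, dual_user=True)
# These base weights would then be benchmarked (post-stratified) to the
# NSW population by age group, sex and stratum.
```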

  1. Quantitative benchmark - production companies

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of the quantitative benchmark of the production companies in the VIPS project.

  2. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  3. NGS Catalog: A Database of Next Generation Sequencing Studies in Humans

    OpenAIRE

    Xia, Junfeng; Wang, Qingguo; Jia, Peilin; Wang, Bing; Pao, William; Zhao, Zhongming

    2012-01-01

    Next generation sequencing (NGS) technologies have been rapidly applied in biomedical and biological research since its advent only a few years ago, and they are expected to advance at an unprecedented pace in the following years. To provide the research community with a comprehensive NGS resource, we have developed the database Next Generation Sequencing Catalog (NGS Catalog, http://bioinfo.mc.vanderbilt.edu/NGS/index.html), a continually updated database that collects, curates and manages a...

  4. Benchmarking v ICT

    OpenAIRE

    Blecher, Jan

    2009-01-01

    The aim of this paper is to describe the benefits of benchmarking IT in a wider context and the overall scope of benchmarking. I specify benchmarking as a process and mention basic rules and guidelines. Further, I define IT benchmarking domains and describe possibilities for their use. The best-known type of IT benchmark is the cost benchmark, which represents only a subset of benchmarking opportunities. In this paper, the cost benchmark is treated rather as a notional first step toward benchmarking's contribution to the company. IT benchmark...

  5. Next generation sequencing (NGS): a golden tool in forensic toolkit.

    Science.gov (United States)

    Aly, S M; Sabri, D M

    2015-01-01

    DNA analysis is a cornerstone of contemporary forensic science. DNA sequencing technologies are powerful tools that enriched the molecular sciences in the past through Sanger sequencing and continue to advance them through next generation sequencing (NGS). NGS has excellent potential to broaden molecular applications in forensic science by overcoming the pitfalls of the conventional sequencing method. Its main advantages over the conventional method are that it interrogates a large number of genetic markers simultaneously and yields genetic data at high resolution. These advantages will help in solving several challenges, such as mixture analysis and dealing with minute, degraded samples. Based on these new technologies, many markers can be examined to obtain important biological data such as age, geographical origin, tissue type, externally visible traits and identification of monozygotic twins. Data related to microbes, insects, plants and soil, which are of great medico-legal importance, can also be obtained. Despite the dozens of forensic studies involving NGS, several requirements must be met before this technology can be used routinely in forensic cases. Thus, there is a great need for further studies that address the robustness of these techniques. Therefore, this work highlights the applications of forensic science in the era of massively parallel sequencing.

  6. DSP Platform Benchmarking : DSP Platform Benchmarking

    OpenAIRE

    Xinyuan, Luo

    2009-01-01

    Benchmarking of DSP kernel algorithms was conducted in this thesis on a DSP processor used for teaching in the course TESA26 in the Department of Electrical Engineering. It includes benchmarking of cycle count and memory usage. The goal of the thesis is to evaluate the quality of a single-MAC DSP instruction set and provide suggestions for further improvement of the instruction set architecture accordingly. The scope of the thesis is limited to benchmarking the processor based only on assembly coding. The...

  7. National Geodetic Control Stations, Geographic NAD83, NGS (2004) [geodetic_ctrl_point_la_NGS_2004

    Data.gov (United States)

    Louisiana Geographic Information Center — This data contains a set of geodetic control stations maintained by the National Geodetic Survey. Each geodetic control station in this dataset has either a precise...

  8. Towards global benchmarking of food environments and policies to reduce obesity and diet-related non-communicable diseases: design and methods for nation-wide surveys.

    Science.gov (United States)

    Vandevijvere, Stefanie; Swinburn, Boyd

    2014-05-15

    Unhealthy diets are heavily driven by unhealthy food environments. The International Network for Food and Obesity/non-communicable diseases (NCDs) Research, Monitoring and Action Support (INFORMAS) has been established to reduce obesity, NCDs and their related inequalities globally. This paper describes the design and methods of the first-ever, comprehensive national survey on the healthiness of food environments and the public and private sector policies influencing them, as a first step towards global monitoring of food environments and policies. A package of 11 substudies has been identified: (1) food composition, labelling and promotion on food packages; (2) food prices, shelf space and placement of foods in different outlets (mainly supermarkets); (3) food provision in schools/early childhood education (ECE) services and outdoor food promotion around schools/ECE services; (4) density of and proximity to food outlets in communities; food promotion to children via (5) television, (6) magazines, (7) sport club sponsorships, and (8) internet and social media; (9) analysis of the impact of trade and investment agreements on food environments; (10) government policies and actions; and (11) private sector actions and practices. For the substudies on food prices, provision, promotion and retail, 'environmental equity' indicators have been developed to check progress towards reducing diet-related health inequalities. Indicators for these modules will be assessed by tertiles of area deprivation index or school deciles. International 'best practice benchmarks' will be identified, against which to compare progress of countries on improving the healthiness of their food environments and policies. This research is highly original due to the very 'upstream' approach being taken and its direct policy relevance. The detailed protocols will be offered to and adapted for countries of varying size and income in order to establish INFORMAS globally as a new monitoring initiative

  9. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  10. The National Geochemical Survey - database and documentation

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The USGS, in collaboration with other federal and state government agencies, industry, and academia, is conducting the National Geochemical Survey (NGS) to produce...

  11. The National Geochemical Survey - database and documentation

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The USGS, in collaboration with other federal and state government agencies, industry, and academia, is conducting the National Geochemical Survey (NGS) to produce a...

  12. Benchmarking a DSP processor

    OpenAIRE

    Lennartsson, Per; Nordlander, Lars

    2002-01-01

    This Master thesis describes the benchmarking of a DSP processor. Benchmarking means measuring the performance in some way. In this report, we have focused on the number of instruction cycles needed to execute certain algorithms. The algorithms we have used in the benchmark are all very common in signal processing today. The results we have reached in this thesis have been compared to benchmarks for other processors, performed by Berkeley Design Technology, Inc. The algorithms were programm...

  13. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    , and more are underway. As a result, there is an increasing need for an independent benchmark for spatio-temporal indexes. This paper characterizes the spatio-temporal indexing problem and proposes a benchmark for the performance evaluation and comparison of spatio-temporal indexes. Notably, the benchmark...

  14. An NGS Workflow Blueprint for DNA Sequencing Data and Its Application in Individualized Molecular Oncology

    OpenAIRE

    Jian Li; Aarif Mohamed Nazeer Batcha; Björn Grüning; Mansmann, Ulrich R.

    2016-01-01

    Next-generation sequencing (NGS) technologies that have advanced rapidly in the past few years possess the potential to classify diseases, decipher the molecular code of related cell processes, identify targets for decision-making on targeted therapy or prevention strategies, and predict clinical treatment response. Thus, NGS is on its way to revolutionize oncology. With the help of NGS, we can draw a finer map for the genetic basis of diseases and can improve our understanding of diagnostic ...

  15. A Benchmark for Management Effectiveness

    OpenAIRE

    Zimmermann, Bill; Chanaron, Jean-Jacques; Klieb, Leslie

    2007-01-01

    This study presents a tool to gauge managerial effectiveness in the form of a questionnaire that is easy to administer and score. The instrument covers eight distinct areas of the organisational climate and culture of management inside a company or department. Benchmark scores were determined by administering sample surveys to a wide cross-section of individuals from numerous firms in Southeast Louisiana, USA. Scores remained relatively constant over a seven-year timeframe...

  16. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  17. An NGS Workflow Blueprint for DNA Sequencing Data and Its Application in Individualized Molecular Oncology.

    Science.gov (United States)

    Li, Jian; Batcha, Aarif Mohamed Nazeer; Grüning, Björn; Mansmann, Ulrich R

    2015-01-01

    Next-generation sequencing (NGS) technologies that have advanced rapidly in the past few years possess the potential to classify diseases, decipher the molecular code of related cell processes, identify targets for decision-making on targeted therapy or prevention strategies, and predict clinical treatment response. Thus, NGS is on its way to revolutionize oncology. With the help of NGS, we can draw a finer map for the genetic basis of diseases and can improve our understanding of diagnostic and prognostic applications and therapeutic methods. Despite these advantages and its potential, NGS is facing several critical challenges, including reduction of sequencing cost, enhancement of sequencing quality, improvement of technical simplicity and reliability, and development of a semiautomated and integrated analysis workflow. In order to address these challenges, we conducted a literature review and summarized a four-stage NGS workflow, providing a systematic review of NGS-based analysis, explaining the strengths and weaknesses of diverse NGS-based software tools, and elucidating the workflow's potential connection to individualized medicine. By presenting this four-stage NGS workflow, we try to provide the minimal structural layout required for NGS data storage and reproducibility.
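
    The four stages are not enumerated in the abstract; assuming a common decomposition (preprocessing/QC, alignment, variant calling, annotation), the sketch below illustrates the kind of minimal structural layout the authors argue is needed for NGS data storage and reproducibility: each stage records its tool and parameters.

```python
# A minimal, hypothetical record of a four-stage NGS workflow; stage names
# and tools are assumptions for illustration, not the paper's exact stages.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Stage:
    name: str
    tool: str
    params: Dict[str, str] = field(default_factory=dict)

workflow: List[Stage] = [
    Stage("preprocessing", "QC/trimming tool", {"min_quality": "20"}),
    Stage("alignment", "bwa mem", {"reference": "GRCh38"}),
    Stage("variant_calling", "caller of choice", {"min_depth": "10"}),
    Stage("annotation", "annotator of choice", {"database": "clinical"}),
]

for stage in workflow:
    print(f"{stage.name}: {stage.tool} {stage.params}")  # provenance log
```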

  18. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. The thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  19. NGS WFSs module for MAORY at E-ELT

    Science.gov (United States)

    Esposito, S.; Agapito, G.; Antichi, J.; Bonanno, A.; Carbonaro, L.; Giordano, C.; Spanò, P.

    We report on the natural guide star (NGS) wavefront sensor (WFS) module for MAORY, the multi-conjugate adaptive optics (MCAO) system for the ESO E-ELT. Three low-order, near-infrared (H-band), Shack-Hartmann sensors provide fast acquisition of the first 5 modes (tip, tilt, focus, astigmatism) on 3 natural guide stars over a 160 arcsec field of view. Three moderate-order (20x20), visible (600-800 nm), pyramid WFSs provide the slow Truth sensing to correct LGS wavefront estimates of low-order modes. These sensors are mounted onto three R-theta stages to patrol the field of view. The module is also equipped with a retractable, on-axis, high-order (80x80), visible, pyramid WFS for the single-conjugate AO (SCAO) mode of MAORY and MICADO. The visible WFSs share the same 80x80 pyramid WFS design. This choice also enables an MCAO NGS capability. Simulations show that Strehl ratios (SR) over 40% are reached with MCAO and three 2x2-sub-aperture NIR low-order WFSs working with H-mag=20 reference stars. In SCAO mode, 90% SR for an 8-mag star with a contrast down to 10^-5, and 45% SR for a 16-mag star, are achieved.

  20. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  1. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.
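
    The core benchmarking operation such a database supports is simple: place one building's energy use intensity (EUI) within the distribution of comparable surveyed buildings. A minimal sketch with hypothetical survey values:

```python
# Rank a building's EUI against a survey-derived peer distribution.
# The survey values below are hypothetical, not actual CEUS data.
def percentile_rank(value: float, population: list) -> float:
    """Percent of surveyed buildings whose EUI falls below the given value."""
    below = sum(1 for v in population if v < value)
    return 100.0 * below / len(population)

survey_euis = [38, 45, 52, 57, 63, 70, 78, 85, 96, 110]  # kWh/m2/yr
print(percentile_rank(60.0, survey_euis))  # -> 40.0
```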

  2. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hot-start capability through sequences of changes.

  3. IPv6 and Next-Generation Services (NGS)

    Institute of Scientific and Technical Information of China (English)

    雷振洲

    2005-01-01

    As we all know, the PSTN telecom network is moving toward NGN, the next-generation Internet toward NGI, and mobile networks toward 3G/B3G. Why must they all move toward NGS? In fact, mobile communications since the 1990s in particular have shown that today's networks cannot meet future requirements. Why? The great expansion of the Internet shows that people's demand for information, for data, and for future higher-value service applications keeps growing. The great expansion of mobile shows that the demand for mobility, for personalization, and for ubiquity has become ever stronger.

  4. Clinical validation of NGS technology for HLA: An early adopter's perspective.

    Science.gov (United States)

    Weimer, Eric T

    2016-10-01

    Clinical validation of NGS for HLA typing has been a topic of interest with many laboratories investigating the merits. NGS has proven effective at reducing ambiguities and costs while providing more detailed information on HLA genes not previously sequenced. The ability of NGS to multiplex many patients within a single run presents unique challenges and sequencing new regions of HLA genes requires application of our knowledge of genetics to accurately determine HLA typing. This review represents my laboratory's experience in validation of NGS for HLA typing. It describes the obstacles faced with validation of NGS and is broken down into pre-analytic, analytic, and post-analytic challenges. Each section includes solutions to address them.

  5. Normal and compound poisson approximations for pattern occurrences in NGS reads.

    Science.gov (United States)

    Zhai, Zhiyuan; Reinert, Gesine; Song, Kai; Waterman, Michael S; Luan, Yihui; Sun, Fengzhu

    2012-06-01

    Next generation sequencing (NGS) technologies are now widely used in many biological studies. In NGS, sequence reads are randomly sampled from the genome sequence of interest. Most computational approaches for NGS data first map the reads to the genome and then analyze the data based on the mapped reads. Since many organisms have unknown genome sequences and many reads cannot be uniquely mapped to the genomes even if the genome sequences are known, alternative analytical methods are needed for the study of NGS data. Here we suggest using word patterns to analyze NGS data. Word pattern counting (the study of the probabilistic distribution of the number of occurrences of word patterns in one or multiple long sequences) has played an important role in molecular sequence analysis. However, no studies are available on the distribution of the number of occurrences of word patterns in NGS reads. In this article, we build probabilistic models for the background sequence and the sampling process of the sequence reads from the genome. Based on the models, we provide normal and compound Poisson approximations for the number of occurrences of word patterns from the sequence reads, with bounds on the approximation error. The main challenge is to consider the randomness in generating the long background sequence, as well as in the sampling of the reads using NGS. We show the accuracy of these approximations under a variety of conditions for different patterns with various characteristics. Under realistic assumptions, the compound Poisson approximation seems to outperform the normal approximation in most situations. These approximate distributions can be used to evaluate the statistical significance of the occurrence of patterns from NGS data. The theory and the computational algorithm for calculating the approximate distributions are then used to analyze ChIP-Seq data using transcription factor GABP. Software is available online (www-rcf.usc.edu/~fsun/Programs/NGS_motif_power/NGS
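
    For a word of length k under an i.i.d. background model, the expected number of occurrences across N reads of length L is E = N * (L - k + 1) * prod(p[c] for c in word), and the Poisson tail at this mean gives a first-pass significance estimate. A minimal sketch follows; the paper's compound Poisson refinement, which handles clumping of self-overlapping words, is omitted.

```python
# Poisson approximation for word-pattern counts in NGS reads.
import math

def expected_count(word: str, n_reads: int, read_len: int, p: dict) -> float:
    prob = math.prod(p[c] for c in word)          # i.i.d. occurrence probability
    return n_reads * (read_len - len(word) + 1) * prob

def poisson_tail(lam: float, observed: int) -> float:
    """P(X >= observed) for X ~ Poisson(lam)."""
    cdf = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(observed))
    return 1.0 - cdf

bg = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}     # background letter frequencies
lam = expected_count("GATTACA", n_reads=10_000, read_len=100, p=bg)
print(lam, poisson_tail(lam, observed=120))        # significance of 120 hits
```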

  6. FGK Benchmark Stars A new metallicity scale

    CERN Document Server

    Jofre, Paula; Soubiran, C; Blanco-Cuaresma, S; Pancino, E; Bergemann, M; Cantat-Gaudin, T; Hernandez, J I Gonzalez; Hill, V; Lardo, C; de Laverny, P; Lind, K; Magrini, L; Masseron, T; Montes, D; Mucciarelli, A; Nordlander, T; Recio-Blanco, A; Sobeck, J; Sordo, R; Sousa, S G; Tabernero, H; Vallenari, A; Van Eck, S; Worley, C C

    2013-01-01

    In the era of large spectroscopic surveys of stars of the Milky Way, atmospheric parameter pipelines require reference stars to evaluate and homogenize their values. We provide a new metallicity scale for the FGK benchmark stars based on their corresponding fundamental effective temperature and surface gravity. This was done by analyzing homogeneously with up to seven different methods a spectral library of benchmark stars. Although our direct aim was to provide a reference metallicity to be used by the Gaia-ESO Survey, the fundamental effective temperatures and surface gravities of benchmark stars of Heiter et al. 2013 (in prep) and their metallicities obtained in this work can also be used as reference parameters for other ongoing surveys, such as Gaia, HERMES, RAVE, APOGEE and LAMOST.

  7. Brainstorming as a Tool for the Benchmarking For Achieving Results in the Service-Oriented-Businesses (A Online Survey: Study Approach

    Directory of Open Access Journals (Sweden)

    R. Surya Kiran, D. Shiva Sai Kumar, D. Sateesh Kumar, V. Dilip Kumar, Vikas Kumar Singh

    2013-08-01

    How to benchmark is the problem, and this paper sets out an outline of a typical research methodology using the brainstorming technique in order to arrive at effective conclusions. With the commencement of the STEP (socio-cultural, technical, economic and political) reforms in previous years, business environments are in a state of dynamic change, and the change process is still continuing. There has been a tremendous acceleration from the traditional, inward-looking regime to a progressive, outward-looking regime of the policy framework. With liberalization, privatization and globalization (LPG) in almost all the sectors of science, technology, engineering and medicine (STEM), the roles of the different sectors are undergoing fundamental and conceptual changes, opening up new settings for analyzing the strengths, weaknesses, opportunities and threats (SWOT) for the business sectors. The main aim of the Six Sigma concept is to make the results right the first time, every time. So benchmarking is to be done for the profitability and revenue growth of organizations. Brainstorming results can be well interpreted with the superposition matrix, considering the ABC and VED analyses, as the same has been tested in the design of inventory control.

  8. Benchmarking Non-Hardware Balance-of-System (Soft) Costs for U.S. Photovoltaic Systems, Using a Bottom-Up Approach and Installer Survey - Second Edition

    Energy Technology Data Exchange (ETDEWEB)

    Friedman, B.; Ardani, K.; Feldman, D.; Citron, R.; Margolis, R.; Zuboy, J.

    2013-10-01

    This report presents results from the second U.S. Department of Energy (DOE) sponsored, bottom-up data-collection and analysis of non-hardware balance-of-system costs -- often referred to as 'business process' or 'soft' costs -- for U.S. residential and commercial photovoltaic (PV) systems. In service to DOE's SunShot Initiative, annual expenditure and labor-hour-productivity data are analyzed to benchmark 2012 soft costs related to (1) customer acquisition and system design and (2) permitting, inspection, and interconnection (PII). We also include an in-depth analysis of costs related to financing, overhead, and profit. Soft costs are both a major challenge and a major opportunity for reducing PV system prices and stimulating SunShot-level PV deployment in the United States. The data and analysis in this series of benchmarking reports are a step toward the more detailed understanding of PV soft costs required to track and accelerate these price reductions.

  9. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  10. Thermal Performance Benchmarking (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  11. Guide to the secondary education (VO) benchmark

    NARCIS (Netherlands)

    Blank, j.l.t.

    2008-01-01

    Guide to the secondary education (VO) benchmark, 25 November 2008, by IPSE Studies. By J.L.T. Blank. A guide to reading the i...

  12. Benchmarking of the vocational education programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of computational models. Benchmarking the vocational schools is conceptually complicated: the schools offer a wide range of different programmes, which makes it difficult...

  13. Benchmarking of municipal casework

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, the National Social Appeals Board (Ankestyrelsen) is to carry out benchmarking of the quality of the municipalities' casework. The purpose of the benchmarking is to develop the design of the practice reviews with a view to better follow-up and to improving municipal casework. This working paper discusses methods for benchmarking...

  14. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  15. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  16. HCV genotyping from NGS short reads and its application in genotype detection from HCV mixed infected plasma.

    Science.gov (United States)

    Qiu, Ping; Stevens, Richard; Wei, Bo; Lahser, Fred; Howe, Anita Y M; Klappenbach, Joel A; Marton, Matthew J

    2015-01-01

    Genotyping of hepatitis C virus (HCV) plays an important role in the treatment of HCV. As new genotype-specific treatment options become available, it has become increasingly important to have accurate HCV genotype and subtype information to ensure that the most appropriate treatment regimen is selected. Most current genotyping methods are unable to detect mixed genotypes from two or more HCV infections. Next generation sequencing (NGS) allows for rapid and low cost mass sequencing of viral genomes and provides an opportunity to probe the viral population from a single host. In this paper, the possibility of using short NGS reads for direct HCV genotyping without genome assembly was evaluated. We surveyed the publicly-available genetic content of three HCV drug target regions (NS3, NS5A, NS5B) in terms of whether these genes contained genotype-specific regions that could predict genotype. Six genotypes and 38 subtypes were included in this study. An automated phylogenetic analysis based HCV genotyping method was implemented and used to assess different HCV target gene regions. Candidate regions of 250-bp each were found for all three genes that have enough genetic information to predict HCV genotypes/subtypes. Validation using public datasets shows 100% genotyping accuracy. To test whether these 250-bp regions were sufficient to identify mixed genotypes, we developed a random primer-based method to sequence HCV plasma samples containing mixtures of two HCV genotypes in different ratios. We were able to determine the genotypes without ambiguity and to quantify the ratio of the abundances of the mixed genotypes in the samples. These data provide a proof-of-concept that this random primed, NGS-based short-read genotyping approach does not need prior information about the viral population and is capable of detecting mixed viral infection.
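
    The essence of assembly-free genotyping from short reads can be sketched as k-mer matching of each read against the genotype-specific 250-bp reference regions; a mixed infection then appears as sustained support for two genotypes across many reads. The reference sequences below are placeholders, not real HCV regions.

```python
# Assign each read to the genotype whose reference region it shares the
# most k-mers with; tally calls across reads to reveal mixtures.
from collections import Counter

def kmers(seq: str, k: int = 11) -> set:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

REFS = {  # hypothetical genotype-specific NS5B regions (placeholders)
    "1a": "ACGT" * 60,
    "2b": "TTGCA" * 48,
}

def classify(read: str) -> str:
    scores = {gt: len(kmers(read) & kmers(ref)) for gt, ref in REFS.items()}
    return max(scores, key=scores.get)

def mixture_profile(reads: list) -> Counter:
    """Per-read genotype calls; a two-genotype infection shows two peaks."""
    return Counter(classify(r) for r in reads)
```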

  17. HCV genotyping from NGS short reads and its application in genotype detection from HCV mixed infected plasma.

    Directory of Open Access Journals (Sweden)

    Ping Qiu

    Genotyping of hepatitis C virus (HCV) plays an important role in the treatment of HCV. As new genotype-specific treatment options become available, it has become increasingly important to have accurate HCV genotype and subtype information to ensure that the most appropriate treatment regimen is selected. Most current genotyping methods are unable to detect mixed genotypes from two or more HCV infections. Next generation sequencing (NGS) allows for rapid and low cost mass sequencing of viral genomes and provides an opportunity to probe the viral population from a single host. In this paper, the possibility of using short NGS reads for direct HCV genotyping without genome assembly was evaluated. We surveyed the publicly-available genetic content of three HCV drug target regions (NS3, NS5A, NS5B) in terms of whether these genes contained genotype-specific regions that could predict genotype. Six genotypes and 38 subtypes were included in this study. An automated phylogenetic analysis based HCV genotyping method was implemented and used to assess different HCV target gene regions. Candidate regions of 250-bp each were found for all three genes that have enough genetic information to predict HCV genotypes/subtypes. Validation using public datasets shows 100% genotyping accuracy. To test whether these 250-bp regions were sufficient to identify mixed genotypes, we developed a random primer-based method to sequence HCV plasma samples containing mixtures of two HCV genotypes in different ratios. We were able to determine the genotypes without ambiguity and to quantify the ratio of the abundances of the mixed genotypes in the samples. These data provide a proof-of-concept that this random primed, NGS-based short-read genotyping approach does not need prior information about the viral population and is capable of detecting mixed viral infection.

  18. Cas-analyzer: an online tool for assessing genome editing results using NGS data.

    Science.gov (United States)

    Park, Jeongbin; Lim, Kayeong; Kim, Jin-Soo; Bae, Sangsu

    2017-01-15

    Genome editing with programmable nucleases has been widely adopted in research and medicine. Next generation sequencing (NGS) platforms are now widely used for measuring the frequencies of mutations induced by CRISPR-Cas9 and other programmable nucleases. Here, we present an online tool, Cas-Analyzer, a JavaScript-based implementation for NGS data analysis. Because Cas-Analyzer runs entirely in the client-side web browser on the fly, there is no need to upload very large NGS datasets to a server, a time-consuming step in genome editing analysis. Currently, Cas-Analyzer supports various programmable nucleases, including single nucleases and paired nucleases.
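
    The central computation in a tool of this kind reduces to counting reads whose sequence around the expected cut site differs from the reference. A minimal sketch follows; real tools also align reads and filter out sequencing errors, and the sequences here are hypothetical.

```python
# Fraction of reads carrying a mutation across a nuclease target window.
def edit_frequency(reads: list, ref_window: str) -> float:
    """Share of reads that do not contain the reference window exactly."""
    edited = sum(1 for r in reads if ref_window not in r)
    return edited / len(reads) if reads else 0.0

ref = "GACCTGGAGGCTAAGC"  # hypothetical sequence flanking the cut site
reads = [
    "ttGACCTGGAGGCTAAGCaa",  # unedited
    "ttGACCTGGAGCTAAGCaa",   # 1-bp deletion near the cut site
    "ttGACCTGGAGGCTAAGCaa",  # unedited
]
print(edit_frequency(reads, ref))  # -> 0.333..., one of three reads edited
```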

  19. Benchmarking expert system tools

    Science.gov (United States)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  20. Using REDItools to Detect RNA Editing Events in NGS Datasets.

    Science.gov (United States)

    Picardi, Ernesto; D'Erchia, Anna Maria; Montalvo, Antonio; Pesole, Graziano

    2015-03-09

    RNA editing is a post-transcriptional/co-transcriptional molecular phenomenon whereby a genetic message is modified from the corresponding DNA template by means of substitutions, insertions, and/or deletions. It occurs in a variety of organisms and different cellular locations through evolutionarily and biochemically unrelated proteins. RNA editing has a plethora of biological effects including the modulation of alternative splicing and fine-tuning of gene expression. RNA editing events by base substitutions can be detected on a genomic scale by NGS technologies through the REDItools package, an ad hoc suite of Python scripts to study RNA editing using RNA-Seq and DNA-Seq data or RNA-Seq data alone. REDItools implement effective filters to minimize biases due to sequencing errors, mapping errors, and SNPs. The package is freely available at the Google Code repository (http://code.google.com/p/reditools/) and released under the MIT license. In the present unit we show three basic protocols corresponding to the three main REDItools scripts.
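
    At each genomic position, the comparison REDItools performs can be reduced to the following test: the genome (DNA-Seq) supports a homozygous base while the transcriptome (RNA-Seq) shows a substantial fraction of a different base. A minimal sketch for the common A-to-I case (read as G); pileup counts are hypothetical, and REDItools itself applies many additional quality, mapping and SNP filters.

```python
# Candidate A-to-I (A>G) editing site: homozygous A in DNA, >= min_freq G in RNA.
def is_editing_candidate(dna_counts: dict, rna_counts: dict,
                         min_cov: int = 10, min_freq: float = 0.1) -> bool:
    dna_total = sum(dna_counts.values())
    rna_total = sum(rna_counts.values())
    if dna_total < min_cov or rna_total < min_cov:
        return False                                  # insufficient coverage
    dna_is_a = dna_counts.get("A", 0) == dna_total    # homozygous A in genome
    g_freq = rna_counts.get("G", 0) / rna_total       # G fraction in transcripts
    return dna_is_a and g_freq >= min_freq

print(is_editing_candidate({"A": 42}, {"A": 30, "G": 12}))  # True
```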

  1. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measure the City's debt ratio and bond ratings....

  2. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  3. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  4. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  5. Benchmarking in Foodservice Operations.

    Science.gov (United States)

    2007-11-02

    Benchmarking studies lasted from nine to twelve months, and could extend beyond that time for numerous reasons (49). Benchmarking was not industrial tourism, not simply data comparison, a fad, a means for reducing resources, or a quick-fix program; it was a complete process.

  6. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  7. Benchmarking 2011: Trends in Education Philanthropy

    Science.gov (United States)

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  8. The NGS WikiBook: a dynamic collaborative online training effort with long-term sustainability.

    Science.gov (United States)

    Li, Jing-Woei; Bolser, Dan; Manske, Magnus; Giorgi, Federico Manuel; Vyahhi, Nikolay; Usadel, Björn; Clavijo, Bernardo J; Chan, Ting-Fung; Wong, Nathalie; Zerbino, Daniel; Schneider, Maria Victoria

    2013-09-01

    Next-generation sequencing (NGS) is increasingly being adopted as the backbone of biomedical research. With the commercialization of various affordable desktop sequencers, NGS will be reached by increasing numbers of cellular and molecular biologists, necessitating community consensus on bioinformatics protocols to tackle the exponential increase in quantity of sequence data. The current resources for NGS informatics are extremely fragmented. Finding a centralized synthesis is difficult. A multitude of tools exist for NGS data analysis; however, none of these satisfies all possible uses and needs. This gap in functionality could be filled by integrating different methods in customized pipelines, an approach helped by the open-source nature of many NGS programmes. Drawing from community spirit and with the use of the Wikipedia framework, we have initiated a collaborative NGS resource: The NGS WikiBook. We have collected a sufficient amount of text to incentivize a broader community to contribute to it. Users can search, browse, edit and create new content, so as to facilitate self-learning and feedback to the community. The overall structure and style for this dynamic material is designed for the bench biologists and non-bioinformaticians. The flexibility of online material allows the readers to ignore details in a first read, yet have immediate access to the information they need. Each chapter comes with practical exercises so readers may familiarize themselves with each step. The NGS WikiBook aims to create a collective laboratory book and protocol that explains the key concepts and describes best practices in this fast-evolving field.

  9. Benchmarking File System Benchmarking: It *IS* Rocket Science

    OpenAIRE

    Seltzer, Margo I.; Tarasov, Vasily; Bhanage, Saumitra; Zadok, Erez

    2011-01-01

    The quality of file system benchmarking has not improved in over a decade of intense research spanning hundreds of publications. Researchers repeatedly use a wide range of poorly designed benchmarks, and in most cases, develop their own ad-hoc benchmarks. Our community lacks a definition of what we want to benchmark in a file system. We propose several dimensions of file system benchmarking and review the wide range of tools and techniques in widespread use. We experimentally show that even t...

  10. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  11. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  12. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  13. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  14. Geodetic Control Points - MO 2014 Springfield Benchmarks (SHP)

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — Points that show set benchmark or survey control locations in the City of Springfield. Many of these points are PLS section corners and quarter corners. These points...

  15. Geodetic Control Points - MO 2014 Springfield Benchmarks (SHP)

    Data.gov (United States)

    NSGIC State | GIS Inventory — Points that show set benchmark or survey control locations in the City of Springfield. Many of these points are PLS section corners and quarter corners. These points...

  16. Benchmarking Pthreads performance

    Energy Technology Data Exchange (ETDEWEB)

    May, J M; de Supinski, B R

    1999-04-27

    The importance of the performance of threads libraries is growing as clusters of shared memory machines become more popular. POSIX threads, or Pthreads, is an industry threads library standard. We have implemented the first Pthreads benchmark suite. In addition to measuring basic thread functions, such as thread creation, we apply the LogP model to standard Pthreads communication mechanisms. We present the results of our tests for several hardware platforms. These results demonstrate that the performance of existing Pthreads implementations varies widely; parts of nearly all of these implementations could be further optimized. Since hardware differences do not fully explain these performance variations, optimizations could improve the implementations. 2. Incorporating Threads Benchmarks into SKaMPI. SKaMPI is an MPI benchmark suite that provides a general framework for performance analysis [7]. SKaMPI does not exhaustively test the MPI standard. Instead, it…
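
    As a loose illustration of the kind of microbenchmark described above, the sketch below times thread creation and join. It is written in Python rather than C, so it exercises Python's threading layer on top of the native threads library and is only an analogue of a true Pthreads benchmark; all names and the iteration count are illustrative.

      # Analogue of a thread-creation microbenchmark (illustrative only;
      # the suite described above measures POSIX threads from C).
      import threading
      import time

      def noop():
          pass

      def time_thread_creation(n=10000):
          """Mean seconds to create, start and join one no-op thread."""
          start = time.perf_counter()
          for _ in range(n):
              t = threading.Thread(target=noop)
              t.start()
              t.join()
          return (time.perf_counter() - start) / n

      if __name__ == "__main__":
          print(f"mean create+join latency: {time_thread_creation() * 1e6:.1f} us")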

  17. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    … survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions… Deviations from the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be close to irreversible…

  18. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. .It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  19. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Professionals are often expected to be reluctant with regard to bureaucratic controls because of assumed conflicting values and goals of the organization vis-à-vis the profession. We suggest, however, that the provision of bureaucratic benchmarking information is positively associated with professional performance. … for 191 orthopaedics departments of German hospitals matched with survey data on bureaucratic benchmarking information provision to the chief physician of the respective department. Professional performance is publicly disclosed due to regulatory requirements. At the same time, chief physicians typically…

  20. The associations between work-life balance behaviours, teamwork climate and safety climate: cross-sectional survey introducing the work-life climate scale, psychometric properties, benchmarking data and future directions.

    Science.gov (United States)

    Sexton, J Bryan; Schwartz, Stephanie P; Chadwick, Whitney A; Rehder, Kyle J; Bae, Jonathan; Bokovoy, Joanna; Doram, Keith; Sotile, Wayne; Adair, Kathryn C; Profit, Jochen

    2017-08-01

    Improving the resiliency of healthcare workers is a national imperative, driven in part by healthcare workers having minimal exposure to the skills and culture needed to achieve work-life balance (WLB). Regardless of current policies, healthcare workers feel compelled to work more and take less time to recover from work. Satisfaction with WLB has been measured, as has work-life conflict, but how frequently healthcare workers engage in specific WLB behaviours is rarely assessed. Measurement of behaviours may have advantages over measurement of perceptions; behaviours more accurately reflect WLB and can be targeted by leaders for improvement. The study aims were: (1) to describe a novel survey scale for evaluating work-life climate based on specific behavioural frequencies in healthcare workers; (2) to evaluate the scale's psychometric properties and provide benchmarking data from a large healthcare system; and (3) to investigate associations between work-life climate, teamwork climate and safety climate. The design was a cross-sectional survey study of US healthcare workers within a large healthcare system; 7923 of 9199 eligible healthcare workers across 325 work settings within 16 hospitals completed the survey in 2009 (86% response rate). The overall work-life climate scale internal consistency was Cronbach α=0.790. t-Tests of top versus bottom quartile work settings revealed that positive work-life climate was associated with better teamwork climate, safety climate and increased participation in safety leadership WalkRounds with feedback (p…). … The scale reflects workplace norms and aligns well with other culture constructs that have been found to correlate with clinical outcomes.

  1. HPCS HPCchallenge Benchmark Suite

    Science.gov (United States)

    2007-11-02

    Measured HPCchallenge Benchmark performance on various HPC architectures — from Cray X1s to Beowulf clusters — in the presentation and paper, using the updated results at http://icl.cs.utk.edu/hpcc/hpcc_results.cgi. Even a small percentage of random…

  2. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  3. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance (how well the firm performs in its actual market environment) given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance may stem from product innovation, management quality, work organization, or other factors not directly observed by the researcher. Managers' critical need to continuously improve their company's efficiency and effectiveness, and to know the success factors and the determinants of competitiveness, determines which performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical, interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons, so managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures, and uses econometric models to describe and then propose a method to forecast and benchmark performance.

  4. QuickNGS elevates Next-Generation Sequencing data analysis to a new level of automation.

    Science.gov (United States)

    Wagle, Prerana; Nikolić, Miloš; Frommolt, Peter

    2015-07-01

    Next-Generation Sequencing (NGS) has emerged as a widely used tool in molecular biology. While time and cost for the sequencing itself are decreasing, the analysis of the massive amounts of data remains challenging. Since multiple algorithmic approaches for the basic data analysis have been developed, there is now an increasing need to efficiently use these tools to obtain results in reasonable time. We have developed QuickNGS, a new workflow system for laboratories with the need to analyze data from multiple NGS projects at a time. QuickNGS takes advantage of parallel computing resources, a comprehensive back-end database, and a careful selection of previously published algorithmic approaches to build fully automated data analysis workflows. We demonstrate the efficiency of our new software by a comprehensive analysis of 10 RNA-Seq samples which we can finish in only a few minutes of hands-on time. The approach we have taken is suitable to process even much larger numbers of samples and multiple projects at a time. Our approach considerably reduces the barriers that still limit the usability of the powerful NGS technology and finally decreases the time to be spent before proceeding to further downstream analysis and interpretation of the data.
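
    To make the batch-processing idea concrete, here is a minimal sketch of running many samples through a per-sample pipeline in parallel. It is not QuickNGS code: the sample names are invented and the two subprocess calls are placeholders for real alignment and quantification tools.

      # Batch-parallel processing of NGS samples (sketch; the echo
      # commands stand in for real alignment/quantification tools).
      import subprocess
      from multiprocessing import Pool

      SAMPLES = ["sample_01", "sample_02", "sample_03"]  # invented IDs

      def process_sample(sample):
          subprocess.run(["echo", f"aligning {sample}"], check=True)
          subprocess.run(["echo", f"quantifying {sample}"], check=True)
          return sample

      if __name__ == "__main__":
          with Pool(processes=4) as pool:   # use parallel compute resources
              finished = pool.map(process_sample, SAMPLES)
          print("finished:", finished)

    Scaling to more samples or projects then only changes the sample list and the pool size.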

  5. Atmospheric fluidized bed combustion (AFBC) plants: A performance benchmarking study

    Energy Technology Data Exchange (ETDEWEB)

    Fuller, J. A.; Beavers, H.; Bonk, D. [West Virginia University, College of Business and Economics, Division of Business Administration, Morgantown, WV (United States)

    2004-03-31

    Data from a fluidized bed boiler survey distributed during the spring of 2000 to gather data for developing atmospheric fluidized bed combustion (AFBC) performance benchmarks are analyzed. The survey was sent to members of the Council of Industrial Boiler Owners; 35 surveys were usable for analysis. A total of 18 benchmarks were considered. While the results were not such as to permit a definitive set of conclusions, the survey was successful in providing practical information to assist plant owners, operators and developers in understanding their operations and in assessing potential solutions or establishing preventative maintenance programs. 36 refs., 2 tabs.

  6. Targeted NGS meets expert clinical characterization: Efficient diagnosis of spastic paraplegia type 11.

    Science.gov (United States)

    Castro-Fernández, Cristina; Arias, Manuel; Blanco-Arias, Patricia; Santomé-Collazo, Luis; Amigo, Jorge; Carracedo, Ángel; Sobrido, Maria-Jesús

    2015-06-01

    Next generation sequencing (NGS) is transforming the diagnostic approach for neurological disorders, since it allows simultaneous analysis of hundreds of genes, even based on just a broad, syndromic patient categorization. However, such an approach bears a high risk of incidental and uncertain genetic findings. We report a patient with spastic paraplegia whose comprehensive neurological and imaging examination raised a high clinical suspicion of SPG11. Thus, although our NGS pipeline for this group of disorders includes gene panel and exome sequencing, in this sample only the spatacsin gene region was captured and subsequently searched for mutations. Two probably pathogenic variants were quickly and clearly identified, confirming the diagnosis of SPG11. This case illustrates how combination of expert clinical characterization with highly oriented NGS protocols leads to a fast, cost-efficient diagnosis, minimizing the risk of findings with unclear significance.

  7. NGS-Trex: an automatic analysis workflow for RNA-Seq data.

    Science.gov (United States)

    Boria, Ilenia; Boatti, Lara; Saggese, Igor; Mignone, Flavio

    2015-01-01

    RNA-Seq technology allows the rapid analysis of whole transcriptomes, taking advantage of next-generation sequencing platforms. Moreover, with the constant decrease in the cost of NGS analysis, RNA-Seq is becoming very popular and widespread. Unfortunately, data analysis is quite demanding in terms of the bioinformatic skills and infrastructure required, thus limiting the potential users of this method. Here we describe the complete analysis of sample data from raw sequences to data mining of results using the NGS-Trex platform, a low-user-interaction, fully automatic analysis workflow. Used through a web interface, NGS-Trex processes data and profiles the transcriptome of the samples, identifying expressed genes, transcripts, and new and known splice variants. It also detects differentially expressed genes and transcripts across different experiments.
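
    The differential-expression step can be illustrated with a toy fold-change filter. This is not NGS-Trex's actual statistical model; the count values and the two-fold threshold are invented.

      # Toy flagging of differentially expressed genes from two
      # normalized count tables (illustrative only).
      import math

      control = {"geneA": 100.0, "geneB": 40.0, "geneC": 5.0}
      treated = {"geneA": 420.0, "geneB": 40.0, "geneC": 1.0}

      def log2_fold_change(a, b, pseudo=1.0):
          # Pseudo-count avoids division by zero for unexpressed genes.
          return math.log2((b + pseudo) / (a + pseudo))

      for gene in control:
          lfc = log2_fold_change(control[gene], treated[gene])
          if abs(lfc) >= 1.0:   # at least a two-fold change
              print(f"{gene}: log2FC = {lfc:+.2f} (candidate DE gene)")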

  8. Targeted NGS meets expert clinical characterization: Efficient diagnosis of spastic paraplegia type 11

    Directory of Open Access Journals (Sweden)

    Cristina Castro-Fernández

    2015-06-01

    Next generation sequencing (NGS) is transforming the diagnostic approach for neurological disorders, since it allows simultaneous analysis of hundreds of genes, even based on just a broad, syndromic patient categorization. However, such an approach bears a high risk of incidental and uncertain genetic findings. We report a patient with spastic paraplegia whose comprehensive neurological and imaging examination raised a high clinical suspicion of SPG11. Thus, although our NGS pipeline for this group of disorders includes gene panel and exome sequencing, in this sample only the spatacsin gene region was captured and subsequently searched for mutations. Two probably pathogenic variants were quickly and clearly identified, confirming the diagnosis of SPG11. This case illustrates how combination of expert clinical characterization with highly oriented NGS protocols leads to a fast, cost-efficient diagnosis, minimizing the risk of findings with unclear significance.

  9. KNIME4NGS: a comprehensive toolbox for Next Generation Sequencing analysis.

    Science.gov (United States)

    Hastreiter, Maximilian; Jeske, Tim; Hoser, Jonathan; Kluge, Michael; Ahomaa, Kaarin; Friedl, Marie-Sophie; Kopetzky, Sebastian J; Quell, Jan-Dominik; Mewes, Hans-Werner; Küffner, Robert

    2017-01-09

    Analysis of Next Generation Sequencing (NGS) data requires the processing of large datasets by chaining various tools with complex input and output formats. In order to automate data analysis, we propose to standardize NGS tasks into modular workflows. This simplifies reliable handling and processing of NGS data, and corresponding solutions become substantially more reproducible and easier to maintain. Here, we present a documented, Linux-based toolbox of 42 processing modules that can be combined to construct workflows facilitating a variety of tasks such as DNAseq and RNAseq analysis. We also describe important technical extensions. The high-throughput executor (HTE) helps to increase reliability and to reduce manual interventions when processing complex datasets. We also provide a dedicated binary manager that assists users in obtaining the modules' executables and keeping them up to date. As the basis for this actively developed toolbox, we use the workflow management software KNIME.
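
    The module idea (each processing step maps an input file to an output file, and a workflow is an ordered chain of such steps) can be sketched as follows. The module bodies are stubs, not KNIME4NGS code.

      # Modular workflow sketch: each module maps one file to the next.
      from pathlib import Path

      def trim_reads(inp: Path, out: Path) -> Path:
          out.write_text(inp.read_text())   # stub: would trim adapters
          return out

      def align_reads(inp: Path, out: Path) -> Path:
          out.write_text(inp.read_text())   # stub: would run an aligner
          return out

      def run_workflow(start: Path, modules):
          current = start
          for i, module in enumerate(modules):
              current = module(current, start.with_suffix(f".step{i}"))
          return current

      if __name__ == "__main__":
          raw = Path("reads.fastq")
          raw.write_text("@r1\nACGT\n+\nIIII\n")
          print("workflow output:", run_workflow(raw, [trim_reads, align_reads]))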

  10. 2014 NOAA NGS Topobathy Lidar: Post Sandy, Rhode Island

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data were collected by the National Oceanic Atmospheric Administration National Geodetic Survey Remote Sensing Division using a Riegl VQ820G system. The data...

  11. Airborne Gravity: NGS' Gravity Data for EN08 (2013)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for New York, Vermont, New Hampshire, Massachusettes, Maine, and Canada collected in 2013 over 1 survey. This data set is part of the Gravity...

  12. Airborne Gravity: NGS' Gravity Data for EN01 (2011)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for New York, Canada, and Lake Ontario collected in 2011 over 1 survey. This data set is part of the Gravity for the Re-definition of the...

  13. Airborne Gravity: NGS' Gravity Data for CS01 (2014)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alabama and Florida collected in 2008 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical...

  14. Airborne Gravity: NGS' Gravity Data for CN02 (2013 & 2014)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Nebraska collected in 2013 & 2014 over 3 surveys. This data set is part of the Gravity for the Re-definition of the American Vertical...

  15. 2014 NOAA NGS Lidar: Intracoastal Waterway, Florida Keys (FL)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data were collected by the National Oceanic Atmospheric Administration National Geodetic Survey Remote Sensing Division using a Riegl VQ820G system. The data...

  16. Airborne Gravity: NGS' Gravity Data for CS03 (2009)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Texas and Louisiana collected in 2009 over 2 surveys. This data set is part of the Gravity for the Re-definition of the American Vertical...

  17. 2013 NOAA NGS Topobathy Lidar: Great Egg (NJ)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data were collected by the National Oceanic Atmospheric Administration National Geodetic Survey Remote Sensing Division using a Riegl VQ820G system. The data...

  18. Airborne Gravity: NGS' Gravity Data for EN09 (2016)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Massachusetts, Connecticut, Rhode Island, New Hampshire, New York, and the Atlantic Ocean collected in 2012 over 1 survey. This data set is...

  19. Airborne Gravity: NGS' Gravity Data for TS01 (2014)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Puerto Rico and the Virgin Islands collected in 2009 over 1 survey. This data set is part of the Gravity for the Re-definition of the...

  20. Airborne Gravity: NGS' Gravity Data for EN04 (2013)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Michigan and Lake Huron collected in 2012 over 1 survey. This data set is part of the Gravity for the Re-definition of the American...

  1. Airborne Gravity: NGS' Gravity Data for AN03 (2010)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2010 and 2012 over 2 surveys. This data set is part of the Gravity for the Re-definition of the American Vertical Datum...

  2. Airborne Gravity: NGS' Gravity Data for EN06 (2016)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Maine, Canada, and the Atlantic Ocean collected in 2012 over 2 surveys. This data set is part of the Gravity for the Re-definition of the...

  3. Airborne Gravity: NGS' Gravity Data for EN05 (2012)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Minnesota, Wisconsin, and Michigan collected in 2012 over 1 survey. This data set is part of the Gravity for the Re-definition of the...

  4. Airborne Gravity: NGS' Gravity Data for EN10 (2013)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for New York, Pennsylvania, New Jersey, Connecticut and the Atlantic Ocean collected in 2013 over 1 survey. This data set is part of the...

  5. Airborne Gravity: NGS' Gravity Data for PN01 (2014)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for California and Oregon collected in 2011 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical...

  6. Airborne Gravity: NGS' Gravity Data for ES01 (2013)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Florida, the Bahamas, and the Atlantic Ocean collected in 2013 over 1 survey. This data set is part of the Gravity for the Re-definition of...

  7. Airborne Gravity: NGS' Gravity Data for CN03 (2014)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Nebraska collected in 2014 over one survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum...

  8. 2014 NOAA NGS Topobathy Lidar: Post Sandy, Rhode Island

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data were collected by the National Oceanic Atmospheric Administration National Geodetic Survey Remote Sensing Division using a Riegl VQ820G system. The data...

  9. 2015 NOAA NGS Topobathy Lidar: Buzzards Bay (MA)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data were collected by the National Oceanic Atmospheric Administration National Geodetic Survey Remote Sensing Division using a Riegl VQ880G system. The data...

  10. 2013 NOAA NGS Topobathy Lidar: Little Egg (NJ)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data were collected by the National Oceanic Atmospheric Administration National Geodetic Survey Remote Sensing Division using a Riegl VQ820G system. The data...

  11. Airborne Gravity: NGS' Gravity Data for ES03 (2013)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Maryland, Pennsylvania, New Jersey, West Virginia, Virginia, Delaware, and the Atlantic Ocean collected in 2013 over 1 survey. This data...

  12. Benchmarking i den offentlige sektor (Benchmarking in the public sector)

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, based on four different applications of benchmarking. The regulation of utility companies is then treated, after which…

  13. Towards global benchmarking of food environments and policies to reduce obesity and diet-related non-communicable diseases: design and methods for nation-wide surveys

    Science.gov (United States)

    Vandevijvere, Stefanie; Swinburn, Boyd

    2014-01-01

    Introduction: Unhealthy diets are heavily driven by unhealthy food environments. The International Network for Food and Obesity/non-communicable diseases (NCDs) Research, Monitoring and Action Support (INFORMAS) has been established to reduce obesity, NCDs and their related inequalities globally. This paper describes the design and methods of the first-ever comprehensive national survey on the healthiness of food environments and the public and private sector policies influencing them, as a first step towards global monitoring of food environments and policies. Methods and analysis: A package of 11 substudies has been identified: (1) food composition, labelling and promotion on food packages; (2) food prices, shelf space and placement of foods in different outlets (mainly supermarkets); (3) food provision in schools/early childhood education (ECE) services and outdoor food promotion around schools/ECE services; (4) density of and proximity to food outlets in communities; food promotion to children via (5) television, (6) magazines, (7) sport club sponsorships, and (8) internet and social media; (9) analysis of the impact of trade and investment agreements on food environments; (10) government policies and actions; and (11) private sector actions and practices. For the substudies on food prices, provision, promotion and retail, 'environmental equity' indicators have been developed to check progress towards reducing diet-related health inequalities. Indicators for these modules will be assessed by tertiles of area deprivation index or school deciles. International 'best practice benchmarks' will be identified, against which to compare the progress of countries on improving the healthiness of their food environments and policies. Dissemination: This research is highly original due to the very 'upstream' approach being taken and its direct policy relevance. The detailed protocols will be offered to and adapted for countries of varying size and income in order to…

  14. ngs_backbone: a pipeline for read cleaning, mapping and SNP calling using Next Generation Sequence

    Directory of Open Access Journals (Sweden)

    Cañizares Joaquin

    2011-06-01

    Background: The possibilities offered by next generation sequencing (NGS) platforms are revolutionizing biotechnological laboratories. Moreover, the combination of NGS sequencing and affordable high-throughput genotyping technologies is facilitating the rapid discovery and use of SNPs in non-model species. However, this abundance of sequences and polymorphisms creates new software needs. To fulfill these needs, we have developed a powerful, yet easy-to-use application. Results: The ngs_backbone software is a parallel pipeline capable of analyzing Sanger, 454, Illumina and SOLiD (Sequencing by Oligonucleotide Ligation and Detection) sequence reads. Its main supported analyses are: read cleaning, transcriptome assembly and annotation, read mapping, and single nucleotide polymorphism (SNP) calling and selection. In order to build a truly useful tool, the software development was paired with a laboratory experiment. All public tomato Sanger EST reads plus 14.2 million Illumina reads were employed to test the tool and predict polymorphism in tomato. The cleaned reads were mapped to the SGN tomato transcriptome, obtaining a coverage of 4.2 for Sanger and 8.5 for Illumina. 23,360 single nucleotide variations (SNVs) were predicted. A total of 76 SNVs were experimentally validated, and 85% were found to be real. Conclusions: ngs_backbone is a new software package capable of analyzing sequences produced by NGS technologies and predicting SNVs with great accuracy. In our tomato example, we created a highly polymorphic collection of SNVs that will be a useful resource for tomato researchers and breeders. The software developed along with its documentation is freely available under the AGPL license and can be downloaded from http://bioinf.comav.upv.es/ngs_backbone/ or http://github.com/JoseBlanca/franklin.
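
    As a cartoon of the SNV-calling and selection step, the filter below keeps positions with enough depth and a sufficiently frequent alternative allele. The pileup structure and thresholds are invented for illustration and are not ngs_backbone's.

      # Toy SNV filter over per-position allele counts (illustrative).
      pileup = {
          ("chr1", 101): {"A": 18, "G": 7},   # candidate variant
          ("chr1", 102): {"C": 25},           # monomorphic site
          ("chr1", 103): {"T": 2, "C": 1},    # insufficient coverage
      }

      MIN_DEPTH, MIN_ALT_FRAC = 10, 0.2       # assumed cutoffs

      for (chrom, pos), counts in pileup.items():
          depth = sum(counts.values())
          if depth < MIN_DEPTH or len(counts) < 2:
              continue
          ref, alt = sorted(counts, key=counts.get, reverse=True)[:2]
          if counts[alt] / depth >= MIN_ALT_FRAC:
              print(f"{chrom}:{pos} {ref}->{alt} depth={depth}")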

  15. Radiography benchmark 2014

    Energy Technology Data Exchange (ETDEWEB)

    Jaenisch, G.-R., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Deresch, A., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Bellon, C., E-mail: Gerd-Ruediger.Jaenisch@bam.de [Federal Institute for Materials Research and Testing, Unter den Eichen 87, 12205 Berlin (Germany); Schumm, A.; Lucet-Sanchez, F.; Guerin, P. [EDF R and D, 1 avenue du Général de Gaulle, 92141 Clamart (France)

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogenous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.
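
    For orientation, the unscattered (primary) part of the beam behind a homogeneous plate follows the Beer-Lambert law, I = I0·exp(-μt). The sketch below evaluates it for iron with an assumed attenuation coefficient; the benchmark itself compares full models that also predict the scattered contribution, which this does not attempt.

      # Primary-beam transmission behind a homogeneous plate
      # (Beer-Lambert law; mu is an assumed value for iron at roughly
      # 500 keV, for illustration only).
      import math

      def transmitted_fraction(mu_per_cm, thickness_cm):
          return math.exp(-mu_per_cm * thickness_cm)

      mu_iron = 0.66   # 1/cm, assumed
      for t in (0.5, 1.0, 2.0):
          print(f"{t:.1f} cm iron: primary transmission = "
                f"{transmitted_fraction(mu_iron, t):.3f}")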

  16. Genetic Architecture of Milk, Fat, Protein, Mastitis and Fertility Studied using NGS Data in Holstein Cattle

    DEFF Research Database (Denmark)

    Sahana, Goutam; Janss, Luc; Guldbrandtsen, Bernt

    … cattle using NGS variants. The analysis was done using a linear mixed model (LMM) and a Bayesian mixture model (BMM). The top 10 QTL identified by LMM analyses explained 22.61, 23.86, 10.88, 18.58 and 14.83% of the total genetic variance for these traits, respectively. Trait-specific sets of 4,964 SNPs… from NGS variants (most 'associated' SNP for each 0.5 Mbp bin) explained 81.0, 81.6, 85.0, 60.4 and 70.9% of total genetic variance for milk, fat, protein, mastitis and fertility indices when analyzed simultaneously by BMM…

  17. Benchmarking of LSTM Networks

    OpenAIRE

    Breuel, Thomas M.

    2015-01-01

    LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum has no significant effect on performance, (3) softmax training outperf...

  18. A Web-Hosted R Workflow to Simplify and Automate the Analysis of 16S NGS Data

    Science.gov (United States)

    Next-Generation Sequencing (NGS) produces large data sets that include tens-of-thousands of sequence reads per sample. For analysis of bacterial diversity, 16S NGS sequences are typically analyzed in a workflow containing best-of-breed bioinformatics packages that may levera…
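
    One standard summary such a 16S workflow produces is an alpha-diversity index; the sketch below computes the Shannon index from per-taxon read counts. The counts are invented, and this is a generic calculation rather than the workflow's own code (which the abstract describes as R-based).

      # Shannon diversity from per-taxon 16S read counts (illustrative).
      import math

      counts = {"Bacteroides": 500, "Prevotella": 300, "Escherichia": 200}

      def shannon(taxon_counts):
          total = sum(taxon_counts.values())
          return -sum((c / total) * math.log(c / total)
                      for c in taxon_counts.values() if c > 0)

      print(f"Shannon index H' = {shannon(counts):.3f}")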

  19. Airborne Gravity: NGS' Gravity Data for ES02 (2013)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Florida and the Gulf of Mexico collected in 2013 over 1 survey. This data set is part of the Gravity for the Re-definition of the American...

  20. Airborne Gravity: NGS' Gravity Data for AN05 (2011)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2011 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...

  1. Airborne Gravity: NGS' Gravity Data for CS04 (2009)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Texas collected in 2009 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...

  2. Airborne Gravity: NGS' Gravity Data for AS02 (2010)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2010 over 2 surveys. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...

  3. Airborne Gravity: NGS' Gravity Data for AN02 (2010)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2010 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...

  4. Airborne Gravity: NGS' Gravity Data for AN06 (2011)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2011 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...

  5. Airborne Gravity: NGS' Gravity Data for AN04 (2010)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2010 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...

  6. Airborne Gravity: NGS' Gravity Data for CS05 (2014)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Texas collected in 2014 over 2 surveys. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...

  7. Airborne Gravity: NGS' Gravity Data for AS01 (2008)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2008 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...

  8. Airborne Gravity: NGS' Gravity Data for CS08 (2015)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for CS08 collected in 2006 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...

  9. RepARK--de novo creation of repeat libraries from whole-genome NGS reads.

    Science.gov (United States)

    Koch, Philipp; Platzer, Matthias; Downie, Bryan R

    2014-05-01

    Generation of repeat libraries is a critical step for analysis of complex genomes. In the era of next-generation sequencing (NGS), such libraries are usually produced using a whole-genome shotgun (WGS) derived reference sequence whose completeness greatly influences the quality of derived repeat libraries. We describe here a de novo repeat assembly method--RepARK (Repetitive motif detection by Assembly of Repetitive K-mers)--which avoids potential biases by using abundant k-mers of NGS WGS reads without requiring a reference genome. For validation, repeat consensuses derived from simulated and real Drosophila melanogaster NGS WGS reads were compared to repeat libraries generated by four established methods. RepARK is orders of magnitude faster than the other methods and generates libraries that are: (i) composed almost entirely of repetitive motifs, (ii) more comprehensive and (iii) almost completely annotated by TEclass. Additionally, we show that the RepARK method is applicable to complex genomes like human and can even serve as a diagnostic tool to identify repetitive sequences contaminating NGS datasets.
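
    The core idea, that repeats are over-represented among a genome's k-mers, can be shown in miniature: count all k-mers in the reads and keep those above an abundance threshold. RepARK additionally assembles the abundant k-mers into repeat consensuses; the reads, k and threshold below are invented.

      # Abundant k-mer detection in miniature (illustrative data).
      from collections import Counter

      reads = ["ACGTACGTACGT", "TTACGTACGTAA", "GGGACGTACGTT"]
      K, MIN_ABUNDANCE = 5, 3

      kmer_counts = Counter(
          read[i:i + K] for read in reads for i in range(len(read) - K + 1)
      )
      repetitive = {k: c for k, c in kmer_counts.items() if c >= MIN_ABUNDANCE}
      print(repetitive)   # k-mers from the repeated ACGT motif dominate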

  10. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal.

  11. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. … The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques. … In this paper, we review the modern foundations for frontier-based regulation and we discuss its actual use in several jurisdictions.

  12. 2001 benchmarking guide.

    Science.gov (United States)

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  13. Benchmarking Query Execution Robustness

    Science.gov (United States)

    Wiener, Janet L.; Kuno, Harumi; Graefe, Goetz

    Benchmarks that focus on running queries on a well-tuned database system ignore a long-standing problem: adverse runtime conditions can cause database system performance to vary widely and unexpectedly. When the query execution engine does not exhibit resilience to these adverse conditions, addressing the resultant performance problems can contribute significantly to the total cost of ownership for a database system in over-provisioning, lost efficiency, and increased human administrative costs. For example, focused human effort may be needed to manually invoke workload management actions or fine-tune the optimization of specific queries.

  14. Benchmarking Danish Industries

    OpenAIRE

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    This report is based on the survey "Industrial Companies in Denmark – Today and Tomorrow", section IV: Supply Chain Management - Practices and Performance, question number 4.9 on performance assessment. To our knowledge, this survey is unique, as we have not been able to find results from any compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. ...

  15. Benchmarking concentrating photovoltaic systems

    Science.gov (United States)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need for improved economic viability: photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts is being pursued. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, one way to estimate the cost-performance of a complete solar energy system is computer-aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB, whereas the Advanced Systems Analysis Program (ASAP) is used for ray-tracing calculations. This allows a flexible and extendable structuring of all important modules, namely advanced source modeling including time and location dependence, and advanced optical analysis of various optical designs, to obtain an evaluation of the figure of merit. One important figure of merit, the energy yield of a given photovoltaic system at a geographical position over a specific period, can then be calculated.

  16. Entropy-based benchmarking methods

    OpenAIRE

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the proposed entropy-based benchmarking methods. Our illustrati…
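
    For concreteness, the classic benchmarking problem these methods address is: adjust a sub-annual indicator so that its annual sums hit the benchmarks while disturbing period-to-period movement as little as possible. The sketch below solves a Denton-type proportional variant as constrained least squares; it is a numerical illustration, not the entropy-based estimator proposed in the paper, and the data are invented.

      # Denton-type proportional benchmarking as constrained least squares:
      # minimize sum_t (x_t/i_t - x_{t-1}/i_{t-1})^2  s.t. annual sums = b.
      import numpy as np

      i = np.array([98., 100., 102., 100., 104., 106., 108., 110.])  # quarterly indicator
      b = np.array([410., 440.])                                     # annual benchmarks

      n, m = len(i), len(b)
      A = np.kron(np.eye(m), np.ones(4))   # aggregation: 4 quarters per year
      D = np.diff(np.eye(n), axis=0)       # first-difference operator
      W = np.diag(1.0 / i)                 # proportional (movement) weighting
      Q = 2.0 * (D @ W).T @ (D @ W)

      # KKT system for the equality-constrained quadratic program.
      kkt = np.block([[Q, A.T], [A, np.zeros((m, m))]])
      rhs = np.concatenate([np.zeros(n), b])
      x = np.linalg.solve(kkt, rhs)[:n]
      print(np.round(x, 2), "annual sums:", A @ x)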

  17. Photometric calibration of NGS/POSS and ESO/SRC plates using the NOAO PDS measuring engine. I - Stellar photometry

    Science.gov (United States)

    Cutri, Roc M.; Low, Frank J.; Marvel, Kevin B.

    1992-01-01

    The PDS/Monet measuring engine at the National Optical Astronomy Observatory was used to obtain photometry of nearly 10,000 stars on the NGS/POSS and 2000 stars on the ESO/SRC Survey glass plates. These measurements have been used to show that global transformation functions exist that allow calibration of stellar photometry from any blue or red plate to equivalent Johnson B and Cousins R photoelectric magnitudes. The four transformation functions appropriate for the POSS O and E and ESO/SRC J and R plates were characterized, and it was found that, within the measurement uncertainties, they vary from plate to plate only by photometric zero-point offsets. A method is described to correct for the zero-point shifts and to obtain calibrated B and R photometry of stellar sources to an average accuracy of 0.3-0.4 mag within the ranges 8 ≤ R ≤ 19.5 for red plates in both surveys, 9 ≤ B ≤ 20.5 on POSS blue plates, and 10 ≤ B ≤ 20.5 on ESO/SRC blue plates. This calibration procedure makes it possible to obtain rapid photometry of very large numbers of stellar sources.
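
    The zero-point correction described above reduces, in miniature, to estimating one offset per plate from stars with known photoelectric magnitudes and subtracting it everywhere. The values below are invented, and a median is used as one robust choice of estimator.

      # Plate zero-point estimation and correction (illustrative values).
      import statistics

      # (catalog magnitude, raw plate magnitude) for calibration stars
      standards = [(12.10, 12.45), (13.02, 13.41), (14.55, 14.90)]
      zero_point = statistics.median(plate - cat for cat, plate in standards)

      plate_mags = [15.8, 16.2, 17.0]
      calibrated = [m - zero_point for m in plate_mags]
      print(f"zero point = {zero_point:+.2f} mag ->",
            [f"{m:.2f}" for m in calibrated])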

  18. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  19. Increased genetic diversity and prevalence of co-infection with Trypanosoma spp. in koalas (Phascolarctos cinereus) and their ticks identified using next-generation sequencing (NGS).

    Science.gov (United States)

    Barbosa, Amanda D; Gofton, Alexander W; Paparini, Andrea; Codello, Annachiara; Greay, Telleasha; Gillett, Amber; Warren, Kristin; Irwin, Peter; Ryan, Una

    2017-01-01

    Infections with Trypanosoma spp. have been associated with poor health and decreased survival of koalas (Phascolarctos cinereus), particularly in the presence of concurrent pathogens such as Chlamydia and koala retrovirus. The present study describes the application of a next-generation sequencing (NGS)-based assay to characterise the prevalence and genetic diversity of trypanosome communities in koalas and two native species of ticks (Ixodes holocyclus and I. tasmani) removed from koala hosts. Among 168 koalas tested, 32.2% (95% CI: 25.2-39.8%) were positive for at least one Trypanosoma sp. Previously described Trypanosoma spp. from koalas were identified, including T. irwini (32.1%, 95% CI: 25.2-39.8%), T. gilletti (25%, 95% CI: 18.7-32.3%), T. copemani (27.4%, 95% CI: 20.8-34.8%) and T. vegrandis (10.1%, 95% CI: 6.0-15.7%). Trypanosoma noyesi was detected for the first time in koalas, although at a low prevalence (0.6% 95% CI: 0-3.3%), and a novel species (Trypanosoma sp. AB-2017) was identified at a prevalence of 4.8% (95% CI: 2.1-9.2%). Mixed infections with up to five species were present in 27.4% (95% CI: 21-35%) of the koalas, which was significantly higher than the prevalence of single infections 4.8% (95% CI: 2-9%). Overall, a considerably higher proportion (79.7%) of the Trypanosoma sequences isolated from koala blood samples were identified as T. irwini, suggesting this is the dominant species. Co-infections involving T. gilletti, T. irwini, T. copemani, T. vegrandis and Trypanosoma sp. AB-2017 were also detected in ticks, with T. gilletti and T. copemani being the dominant species within the invertebrate hosts. Direct Sanger sequencing of Trypanosoma 18S rRNA gene amplicons was also performed and results revealed that this method was only able to identify the genotypes with greater amount of reads (according to NGS) within koala samples, which highlights the advantages of NGS in detecting mixed infections. The present study provides new insights on the
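
    The prevalence figures above are proportions with 95% confidence intervals. The sketch below computes a Wilson score interval, one standard choice (the paper does not state which interval it used); the positive count is an assumption chosen to be consistent with the reported 32.2%.

      # Prevalence with a 95% Wilson score confidence interval.
      import math

      def wilson_ci(k, n, z=1.96):
          p = k / n
          denom = 1 + z**2 / n
          centre = (p + z**2 / (2 * n)) / denom
          half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
          return centre - half, centre + half

      k, n = 54, 168   # assumed count of positive koalas out of 168 tested
      lo, hi = wilson_ci(k, n)
      print(f"prevalence = {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")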

  20. Introduction and application of the South NGS-200 GPS surveying system

    Institute of Scientific and Technical Information of China (English)

    郭胜利

    2002-01-01

    GPS is the abbreviation of Global Positioning System. The South NGS-200 GPS surveying system and its application in the reinforcement works on the main dike of the Yongding River show that, compared with traditional modes of operation, GPS surveying is a breakthrough: the application of this new technology frees surveyors from heavy field work and fundamentally improves working efficiency.

  1. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing-data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve…
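
    The break insertion described above (a Poisson number of breakpoints with normally distributed sizes) is easy to reproduce; the rates, sizes and series length in the sketch below are illustrative choices, not the benchmark's actual settings.

      # Inserting simulated break-type inhomogeneities into a series.
      import numpy as np

      rng = np.random.default_rng(0)
      n_months = 600                                 # 50 years, monthly
      series = rng.normal(0.0, 1.0, n_months)        # homogeneous "truth"

      n_breaks = rng.poisson(lam=3)                  # Poisson break count
      positions = rng.choice(n_months, size=n_breaks, replace=False)
      inhomog = series.copy()
      for pos in sorted(positions):
          inhomog[pos:] += rng.normal(0.0, 0.8)      # normal break size

      print(f"{n_breaks} breaks inserted at {sorted(positions)}")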

  2. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  3. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  4. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  5. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth pre…

  6. Developing benchmarks for prior learning assessment. Part 1: Research.

    Science.gov (United States)

    Day, M

    The aim of the study was to develop and promote national benchmarks for those engaged in accreditation of prior learning (APL), termed 'prior learning assessment and recognition' (PLAR) in Canada, in all sectors and communities. The study objectives were to gain practitioner consensus on the development of benchmarks for APL (PLAR) across Canada; produce a guide to support the implementation of national benchmarks; make recommendations for the promotion of the national benchmarks; and distribute the guide. The study also investigated the feasibility of developing a system to confirm the competence of APL (PLAR) practitioners, based on nationally agreed benchmarks for practice. A qualitative research strategy was developed, which used a benchmarking survey and focus groups as the primary research tools. These were applied to a purposive sample of APL practitioners (n = 91), identified through the use of an initial screening survey. Respondents indicated that in Canada, PLAR is used in a variety of ways to assist with individual and personal growth, human resource development, the preparation of professionals and the achievement of academic credit. The findings of the focus groups are summarised using a SWOT analysis. The study identified that the main functions of PLAR practitioners are to prepare individuals for assessment and to conduct assessments. Although practitioners should be made aware of the potential conflicts in undertaking combined roles, they should be encouraged to develop confidence in both functions.

  7. Applications of Integral Benchmark Data

    Energy Technology Data Exchange (ETDEWEB)

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. (Skip) Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  8. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent…

  9. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In order to learn from the best, in 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all, it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, ‘sustainable transport’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly, policies are not directly comparable across space and context. For these reasons, attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which is generally not advised. Several other ways in which benchmarking and policy can support one another are identified in the analysis. This leads to a range of recommended initiatives to exploit the benefits of benchmarking in transport while avoiding some of the lurking pitfalls and dead ends.

  10. Bathymetric survey of the Brandon Road Dam Spillway, Joliet, Illinois

    Science.gov (United States)

    Engel, Frank; Krahulik, Justin

    2016-01-01

    Bathymetric survey data of the Brandon Road Dam spillway was collected on May 27 and May 28, 2015 by the US Geological Survey (USGS) using Trimble Real-Time Kinematic Global Positioning System (RTK-GPS) equipment. The base station was set up over a temporarily installed survey pin on both days. This pin was surveyed into an existing NGS benchmark (PID: BBCN12) within the Brandon Road Lock property. In wadeable sections, a GPS rover with 2.0 meter range pole and flat-foot was deployed. In sections unable to be waded, a 2.0 meter range pole was fix-mounted to a jon boat, and a boat-mounted Acoustic Doppler Current Profiler (ADCP) was used to collect the depth data. ADCP depth data were reviewed in the WinRiver II software and exported for processing with the Velocity Mapping Toolbox (Parsons and others, 2013). The RTK-GPS survey points of the water surface elevations were used to convert ADCP-measured depths into bed elevations. An InSitu Level Troll collected 1-minute water level data throughout the two day survey. These data were used to verify that a flat-pool assumption was reasonable for the conversion of the ADCP data to bed elevations given the measurement precision of the ADCP. An OPUS solution was acquired for each survey day.

    Parsons, D. R., Jackson, P. R., Czuba, J. A., Engel, F. L., Rhoads, B. L., Oberg, K. A., Best, J. L., Mueller, D. S., Johnson, K. K. and Riley, J. D. (2013), Velocity Mapping Toolbox (VMT): a processing and visualization suite for moving-vessel ADCP measurements. Earth Surf. Process. Landforms, 38: 1244–1260. doi: 10.1002/esp.3367
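
    The depth-to-elevation conversion described above reduces to simple arithmetic once a flat pool is assumed. A minimal sketch of that step (values and variable names are illustrative, not survey data):

```python
# Convert ADCP-measured depths to bed elevations using an RTK-GPS
# water-surface elevation, under the flat-pool assumption verified
# by the water-level record. All numbers here are invented.

def bed_elevations(water_surface_elev_m, depths_m):
    """Bed elevation = water-surface elevation minus measured depth."""
    return [round(water_surface_elev_m - d, 2) for d in depths_m]

# Three hypothetical ADCP soundings (m) against a 155.20 m water surface.
print(bed_elevations(155.20, [2.4, 3.1, 2.8]))  # [152.8, 152.1, 152.4]
```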

  11. Functional genomics of tomato: Opportunities and challenges in post-genome NGS era

    Indian Academy of Sciences (India)

    Rahul Kumar; Ashima Khurana

    2014-12-01

    The Tomato Genome Sequencing Project represented a landmark venture in the history of sequencing projects where both Sanger’s and next-generation sequencing (NGS) technologies were employed, and a highly accurate and one of the best assembled plant genomes along with a draft of the wild relative, Solanum pimpinellifolium, were released in 2012. However, the functional potential of the major portion of this newly generated resource is still undefined. The very first challenge before scientists working on tomato functional biology is to exploit this high-quality reference sequence for tapping of the wealth of genetic variants for improving agronomic traits in cultivated tomatoes. The sequence data generated recently by the 150 Tomato Genome Consortium would further uncover the natural alleles present in different tomato genotypes. Therefore, we found it relevant to have a fresh outlook on tomato functional genomics in the context of application of NGS technologies in its post-genome sequencing phase. Herein, we provide an overview of how NGS technologies vis-à-vis the available reference sequence have assisted each other for their mutual improvement and how their combined use could further facilitate the development of other ‘omics’ tools, required to propel Solanaceae research. Additionally, we highlight the challenges associated with the application of these cutting-edge technologies.

  12. HLA-genotyping of clinical specimens using Ion Torrent-based NGS.

    Science.gov (United States)

    Barone, Jonathan C; Saito, Katsuyuki; Beutner, Karl; Campo, Maria; Dong, Wei; Goswami, Chirayu P; Johnson, Erica S; Wang, Zi-Xuan; Hsu, Susan

    2015-12-01

    We have evaluated and validated the NXType™ workflow (One Lambda, Inc.) and the accompanying TypeStream™ software on the Ion Torrent Next Generation Sequencing (NGS) platform using a comprehensive testing panel. The panel consisted of 285 genomic DNA (gDNA) samples derived from four major ethnic populations and contained 59 PT samples and 226 clinical specimens. The total number of alleles from the six loci interrogated by NGS was 3420. This validation panel provided a wide range of HLA sequence variations including many rare alleles, new variants and homozygous alleles. The NXType™ system (reagents and software) was able to correctly genotype the vast majority of these specimens. The concordance rate between SBT-derived genotypes and those generated by TypeStream™ auto-analysis ranged from 99.5% to 99.8% for the HLA-A, B, C, DRB1 and DQB1 loci, and was 98.9% for HLA-DPB1. A strategy for data review was developed that would allow correction of most of the few remaining typing errors. The entire NGS workflow from gDNA amplification to genotype assignment could be completed within 3 working days. Through this validation study, the limitations and shortcomings of the platform, specific assay system, and software algorithm were also revealed for further evaluation and improvement.

  13. The variation game: Cracking complex genetic disorders with NGS and omics data.

    Science.gov (United States)

    Cui, Hongzhu; Dhroso, Andi; Johnson, Nathan; Korkin, Dmitry

    2015-06-01

    Tremendous advances in Next Generation Sequencing (NGS) and high-throughput omics methods have brought us one step closer towards a mechanistic understanding of complex disease at the molecular level. In this review, we discuss four basic regulatory mechanisms implicated in complex genetic diseases, such as cancer, neurological disorders, heart disease, diabetes, and many others. The mechanisms, including genetic variations, copy-number variations, posttranscriptional variations, and epigenetic variations, can be detected using a variety of NGS methods. We propose that malfunctions detected in these mechanisms are not necessarily independent, since these malfunctions are often found associated with the same disease and targeting the same gene, group of genes, or functional pathway. As an example, we discuss possible rewiring effects of the cancer-associated genetic, structural, and posttranscriptional variations on the protein-protein interaction (PPI) network centered around the P53 protein. The review highlights the multi-layered complexity of common genetic disorders and suggests that integration of NGS and omics data is a critical step in developing new computational methods capable of deciphering this complexity.

  14. Development of a high-resolution NGS-based HLA-typing and analysis pipeline.

    Science.gov (United States)

    Wittig, Michael; Anmarkrud, Jarl A; Kässens, Jan C; Koch, Simon; Forster, Michael; Ellinghaus, Eva; Hov, Johannes R; Sauer, Sascha; Schimmler, Manfred; Ziemann, Malte; Görg, Siegfried; Jacob, Frank; Karlsen, Tom H; Franke, Andre

    2015-06-23

    The human leukocyte antigen (HLA) complex contains the most polymorphic genes in the human genome. The classical HLA class I and II genes define the specificity of adaptive immune responses. Genetic variation at the HLA genes is associated with susceptibility to autoimmune and infectious diseases and plays a major role in transplantation medicine and immunology. Currently, the HLA genes are characterized using Sanger- or next-generation sequencing (NGS) of a limited amplicon repertoire or labeled oligonucleotides for allele-specific sequences. High-quality NGS-based methods are in proprietary use and not publicly available. Here, we introduce the first highly automated open-kit/open-source HLA-typing method for NGS. The method employs in-solution targeted capturing of the classical class I (HLA-A, HLA-B, HLA-C) and class II HLA genes (HLA-DRB1, HLA-DQA1, HLA-DQB1, HLA-DPA1, HLA-DPB1). The calling algorithm allows for highly confident allele-calling to three-field resolution (cDNA nucleotide variants). The method was validated on 357 commercially available DNA samples with known HLA alleles obtained by classical typing. Our results showed on average an accurate allele call rate of 0.99 in a fully automated manner, identifying also errors in the reference data. Finally, our method provides the flexibility to add further enrichment target regions.

  15. Parallel WGA and WTA for Comparative Genome and Transcriptome NGS Analysis Using Tiny Cell Numbers.

    Science.gov (United States)

    Korfhage, Christian; Fricke, Evelyn; Meier, Andreas

    2015-07-01

    Genomic DNA determines how and when the transcriptome is changed by a trigger or environmental change and how cellular metabolism is influenced. Comparative genome and transcriptome analysis of the same cell sample links a defined genome with all changes in the bases, structure, or numbers of the transcriptome. However, comparative genome and transcriptome analysis using next-generation sequencing (NGS) or real-time PCR is often limited by the small amount of sample available. In mammals, the amount of DNA and RNA in a single cell is ∼10 picograms, but deep analysis of the genome and transcriptome currently requires several hundred nanograms of nucleic acids for library preparation for NGS sequencing. Consequently, accurate whole-genome amplification (WGA) and whole-transcriptome amplification (WTA) is required for such quantitative analysis. This unit describes how the genome and the transcriptome of a tiny number of cells can be amplified in a highly parallel and comparable process. Protocols for quality control of amplified DNA and application of amplified DNA for NGS are included.

  16. Functional genomics of tomato: opportunities and challenges in post-genome NGS era.

    Science.gov (United States)

    Kumar, Rahul; Khurana, Ashima

    2014-12-01

    The Tomato Genome Sequencing Project represented a landmark venture in the history of sequencing projects where both Sanger's and next-generation sequencing (NGS) technologies were employed, and a highly accurate and one of the best assembled plant genomes along with a draft of the wild relative, Solanum pimpinellifolium, were released in 2012. However, the functional potential of the major portion of this newly generated resource is still undefined. The very first challenge before scientists working on tomato functional biology is to exploit this high-quality reference sequence for tapping of the wealth of genetic variants for improving agronomic traits in cultivated tomatoes. The sequence data generated recently by the 150 Tomato Genome Consortium would further uncover the natural alleles present in different tomato genotypes. Therefore, we found it relevant to have a fresh outlook on tomato functional genomics in the context of application of NGS technologies in its post-genome sequencing phase. Herein, we provide an overview of how NGS technologies vis-à-vis the available reference sequence have assisted each other for their mutual improvement and how their combined use could further facilitate the development of other 'omics' tools, required to propel Solanaceae research. Additionally, we highlight the challenges associated with the application of these cutting-edge technologies.

  17. tropiTree: an NGS-based EST-SSR resource for 24 tropical tree species.

    Science.gov (United States)

    Russell, Joanne R; Hedley, Peter E; Cardle, Linda; Dancey, Siobhan; Morris, Jenny; Booth, Allan; Odee, David; Mwaura, Lucy; Omondi, William; Angaine, Peter; Machua, Joseph; Muchugi, Alice; Milne, Iain; Kindt, Roeland; Jamnadass, Ramni; Dawson, Ian K

    2014-01-01

    The development of genetic tools for non-model organisms has been hampered by cost, but advances in next-generation sequencing (NGS) have created new opportunities. In ecological research, this raises the prospect for developing molecular markers to simultaneously study important genetic processes such as gene flow in multiple non-model plant species within complex natural and anthropogenic landscapes. Here, we report the use of bar-coded multiplexed paired-end Illumina NGS for the de novo development of expressed sequence tag-derived simple sequence repeat (EST-SSR) markers at low cost for a range of 24 tree species. Each chosen tree species is important in complex tropical agroforestry systems where little is currently known about many genetic processes. An average of more than 5,000 EST-SSRs was identified for each of the 24 sequenced species, whereas prior to analysis 20 of the species had fewer than 100 nucleotide sequence citations. To make results available to potential users in a suitable format, we have developed an open-access, interactive online database, tropiTree (http://bioinf.hutton.ac.uk/tropiTree), which has a range of visualisation and search facilities, and which is a model for the efficient presentation and application of NGS data.
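
    At its core, EST-SSR discovery is a repeat-motif scan over assembled transcript sequences. A toy sketch of that scan (the motif length range and repeat threshold are assumptions; the tropiTree pipeline itself relied on dedicated SSR-mining tools):

```python
import re

# Find candidate SSRs: a 2-3 bp motif repeated at least five times in
# tandem. This simplification also catches homopolymer-like runs such
# as (AA)n; real pipelines filter those out.
SSR_PATTERN = re.compile(r"([ACGT]{2,3})\1{4,}")

def find_ssrs(seq):
    """Return (position, repeat) tuples for candidate SSR stretches."""
    return [(m.start(), m.group(0)) for m in SSR_PATTERN.finditer(seq)]

print(find_ssrs("TTGACACACACACACGGT"))  # [(3, 'ACACACACACAC')]
```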

  18. BAL31-NGS approach for identification of telomeres de novo in large genomes.

    Science.gov (United States)

    Peška, Vratislav; Sitová, Zdeňka; Fajkus, Petr; Fajkus, Jiří

    2017-02-01

    This article describes a novel method to identify as yet undiscovered telomere sequences, which combines next generation sequencing (NGS) with BAL31 digestion of high molecular weight DNA. The method was applied to two groups of plants: i) dicots, genus Cestrum, and ii) monocots, Allium species (e.g. A. ursinum and A. cepa). Both groups consist of species with large genomes (tens of Gb) and a low number of chromosomes (2n=14-16), full of repeat elements. Both genera lack typical telomeric repeats and multiple studies have attempted to characterize alternative telomeric sequences. However, despite interesting hypotheses and suggestions of alternative candidate telomeres (retrotransposons, rDNA, satellite repeats) these studies have not resolved the question. In a novel approach based on the two most general features of eukaryotic telomeres, their repetitive character and sensitivity to BAL31 nuclease digestion, we have taken advantage of the capacity and current affordability of NGS in combination with the robustness of classical BAL31 nuclease digestion of chromosomal termini. While representative samples of most repeat elements were ensured by low-coverage (less than 5%) genomic shot-gun NGS, candidate telomeres were identified as under-represented sequences in BAL31-treated samples.
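
    In essence, the screen looks for repeats whose normalized abundance drops after BAL31 digestion. A toy sketch of that comparison (counts, normalization and cutoff are invented for illustration):

```python
# Telomeric repeats sit at chromosome termini, so BAL31 digestion should
# deplete their reads relative to an untreated control library.

def depletion_ratio(n_treated, n_control, lib_treated, lib_control):
    """Normalized treated/control abundance; values well below 1
    flag a candidate terminal (telomere-like) repeat."""
    return (n_treated / lib_treated) / (n_control / lib_control)

repeats = {"candidate_A": (120, 980), "candidate_B": (850, 900)}
for name, (treated, control) in repeats.items():
    r = depletion_ratio(treated, control, 1e6, 1e6)
    print(name, round(r, 2), "depleted" if r < 0.5 else "not depleted")
```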

  19. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption. [Translated from the Dutch abstract] Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, driving on electricity, hydrogen, petrol or diesel is also included. Research and growing insight increasingly show that biomass-based transport fuels sometimes cause as many or even more greenhouse gas emissions than fossil fuels such as petrol and diesel. For Greenpeace Netherlands, CE Delft has summarised the current insights into the sustainability of fossil fuels, biofuels and electric driving, examining the effects of the fuels on three sustainability criteria, with greenhouse gas emissions weighing heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient use.
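
    The benchmark's weighted, criteria-based structure can be pictured with a toy scoring function (the weights and fuel scores below are invented; the report itself does not publish a single numeric formula):

```python
# Toy multi-criteria score: greenhouse-gas emissions weigh heaviest,
# as in the benchmark described above. Scores run 0 (worst) to 10 (best).
WEIGHTS = {"ghg": 0.5, "land_use": 0.3, "nutrients": 0.2}

def sustainability_score(scores):
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

fuels = {
    "biofuel_X": {"ghg": 4, "land_use": 3, "nutrients": 5},
    "electric":  {"ghg": 7, "land_use": 8, "nutrients": 8},
}
for name, s in fuels.items():
    print(f"{name}: {sustainability_score(s):.1f}")  # 3.9 and 7.5
```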

  20. Correlational effect size benchmarks.

    Science.gov (United States)

    Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A

    2015-03-01

    Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relation to others, and for identifying which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful, and provides information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions.
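
    The tertile-partition idea is easy to reproduce on any set of observed correlations. A minimal sketch (the values below are random stand-ins, not the 147,328 correlations from the study):

```python
import numpy as np

# Derive empirical effect-size benchmarks as the 33.3rd and 66.7th
# percentiles of observed |r|, instead of Cohen's fixed cutoffs.
rng = np.random.default_rng(0)
abs_r = np.abs(rng.normal(0.0, 0.2, size=10_000))  # placeholder data

small_medium, medium_large = np.percentile(abs_r, [33.3, 66.7])
print(f"small/medium boundary: {small_medium:.2f}")
print(f"medium/large boundary: {medium_large:.2f}")
```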

  1. Next-generation sequencing (NGS) for assessment of microbial water quality: current progress, challenges, and future opportunities.

    Science.gov (United States)

    Tan, BoonFei; Ng, Charmaine; Nshimyimana, Jean Pierre; Loh, Lay Leng; Gin, Karina Y-H; Thompson, Janelle R

    2015-01-01

    Water quality is an emergent property of a complex system comprised of interacting microbial populations and introduced microbial and chemical contaminants. Studies leveraging next-generation sequencing (NGS) technologies are providing new insights into the ecology of microbially mediated processes that influence fresh water quality such as algal blooms, contaminant biodegradation, and pathogen dissemination. In addition, sequencing methods targeting small subunit (SSU) rRNA hypervariable regions have allowed identification of signature microbial species that serve as bioindicators for sewage contamination in these environments. Beyond amplicon sequencing, metagenomic and metatranscriptomic analyses of microbial communities in fresh water environments reveal the genetic capabilities and interplay of waterborne microorganisms, shedding light on the mechanisms for production and biodegradation of toxins and other contaminants. This review discusses the challenges and benefits of applying NGS-based methods to water quality research and assessment. We will consider the suitability and biases inherent in the application of NGS as a screening tool for assessment of biological risks and discuss the potential and limitations for direct quantitative interpretation of NGS data. Secondly, we will examine case studies from recent literature where NGS based methods have been applied to topics in water quality assessment, including development of bioindicators for sewage pollution and microbial source tracking, characterizing the distribution of toxin and antibiotic resistance genes in water samples, and investigating mechanisms of biodegradation of harmful pollutants that threaten water quality. Finally, we provide a short review of emerging NGS platforms and their potential applications to the next generation of water quality assessment tools.

  2. Benchmarking in water project analysis

    Science.gov (United States)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  3. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  4. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In order to learn from the best, in 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all, it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, ‘sustainable transport’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly, policies are not directly comparable across space and context. For these reasons, attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which is generally not advised. Several other ways in which benchmarking and policy can support one another are identified in the analysis. This leads to a range of recommended initiatives to exploit the benefits of benchmarking in transport while avoiding some of the lurking pitfalls and dead ends.

  5. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece-rate systems and value-based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight into the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external benchmarking… By directing attention towards the conditions for the use of the external benchmarks, we provide more insights into some of the issues and challenges that are related to using this mechanism for performance management and advancing competitiveness in organizations.

  6. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research conceptualises external benchmarking as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards the conditions upon which the market mechanism is performing within organizations. By directing attention towards the conditions for the use of the external benchmarks, we provide more insights into some of the issues and challenges that are related to using this mechanism for performance management and advancing competitiveness in organizations.

  7. Barriers to the practice of benchmarking in South African restaurants

    Directory of Open Access Journals (Sweden)

    Carina Kleynhans

    2017-07-01

    The main purpose of this study is to identify the barriers to benchmarking use in independent full-service restaurants in South Africa. Entities in the global restaurant industry operate in a highly competitive environment, and restaurateurs should have a visible advantage over competitors. A competitive advantage can be achieved only if the quality standards in terms of food and beverage products, service quality, relevant technology and price are comparable to the industry leaders. This study deployed a descriptive, quantitative research design on the basis of a relatively large sample of restaurateurs. The data was collected through the SurveyMonkey website using a standardised questionnaire. The questionnaire was mailed to 2699 restaurateurs, and 109 respondents returned fully completed answer sheets. Descriptive and inferential statistics were used to analyze the data. The main findings were as follows: 43% of respondents had never done benchmarking; only 5.5% of respondents considered themselves highly knowledgeable about benchmarking; respondents thought that the most significant barriers to benchmarking were difficulties with obtaining exemplar (benchmarking partner) best-practice information and adapting the anomalous (own) practices to derive a benefit from best practices. The results of this study should be used to shape knowledge about benchmarking practices in order to develop suitable solutions for the problems in South African restaurants.

  8. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  9. Review of recent benchmark experiments on integral test for high energy nuclear data evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Nakashima, Hiroshi; Tanaka, Susumu; Konno, Chikara; Fukahori, Tokio; Hayashi, Katsumi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-11-01

    A survey of recent benchmark experiments on integral tests for high energy nuclear data evaluation was carried out as part of the work of the Task Force on JENDL High Energy File Integral Evaluation (JHEFIE). In this paper the results are compiled and the status of recent benchmark experiments is described. (author)

  10. A Reconceptualization of CCSSE's Benchmarks of Student Engagement

    Science.gov (United States)

    Nora, Amaury; Crisp, Gloria; Matthews, Cissy

    2011-01-01

    As a great deal of importance is now placed on student engagement, it is just as imperative to establish the soundness of constructs underlying those survey instruments and benchmarks used in providing indicators of such. This study investigates the dimensionalities of student engagement among community college students as measured by the…

  11. DUSTiNGS. III. Distribution of Intermediate-age and Old Stellar Populations in Disks and Outer Extremities of Dwarf Galaxies

    Science.gov (United States)

    McQuinn, Kristen B.; Boyer, Martha; DUSTiNGS Team

    2017-06-01

    As part of the DUST in Nearby Galaxies with Spitzer (DUSTiNGS) survey, we have traced the spatial distributions of intermediate-age and old stars in nine dwarf galaxies in the distant parts of the Local Group. We find intermediate-age stars are well mixed with the older populations and extend to large radii, indicating that chemical enrichment from these dust-producing stars may occur in the outer regions of galaxies with some frequency. Theories of structure formation in dwarf galaxies must account for the lack of radial gradients in intermediate-age populations and the presence of these stars in the outer extremities of dwarfs. We also identify the tip of the red giant branch (TRGB) in Spitzer IRAC 3.6 μm photometry. Unlike the constant TRGB in the I band, at 3.6 μm, the TRGB magnitude varies by ~0.7 mag and is not a metallicity-independent distance indicator.

  12. Next-generation sequencing (NGS) in the microbiological world: How to make the most of your money.

    Science.gov (United States)

    Vincent, Antony T; Derome, Nicolas; Boyle, Brian; Culley, Alexander I; Charette, Steve J

    2017-07-01

    The Sanger sequencing method produces relatively long DNA sequences of unmatched quality and has long been considered the gold standard for sequencing DNA. Many improvements of the Sanger method that culminated with fluorescent dyes coupled with automated capillary electrophoresis enabled the sequencing of the first genomes. Nevertheless, using this technology to sequence whole genomes was costly, laborious and time consuming even for genomes that are relatively small in size. A major technological advance was the introduction of next-generation sequencing (NGS) pioneered by 454 Life Sciences in the early part of the 21st century. NGS allowed scientists to sequence thousands to millions of DNA molecules in a single machine run. Since then, new NGS technologies have emerged and existing NGS platforms have been improved, enabling the production of genome sequences at an unprecedented rate as well as broadening the spectrum of NGS applications. The current affordability of generating genomic information, especially with microbial samples, has resulted in a false sense of simplicity that belies the fact that many researchers still consider these technologies a black box. In this review, our objective is to identify and discuss four steps that we consider crucial to the success of any NGS-related project. These steps are: (1) the definition of the research objectives beyond sequencing and appropriate experimental planning, (2) library preparation, (3) sequencing and (4) data analysis. The goal of this review is to give an overview of the process, from sample to analysis, and discuss how to optimize your resources to achieve the most from your NGS-based research. Regardless of the evolution and improvement of the sequencing technologies, these four steps will remain relevant.

  13. Critical review of NGS analyses for de novo genotyping multigene families.

    Science.gov (United States)

    Lighten, Jackie; van Oosterhout, Cock; Bentzen, Paul

    2014-08-01

    The genotyping of highly polymorphic multigene families across many individuals used to be a particularly challenging task because of methodological limitations associated with traditional approaches. Next-generation sequencing (NGS) can overcome most of these limitations, and it is increasingly being applied in population genetic studies of multigene families. Here, we critically review NGS bioinformatic approaches that have been used to genotype the major histocompatibility complex (MHC) immune genes, and we discuss how the significant advances made in this field are applicable to population genetic studies of gene families. Increasingly, approaches are introduced that apply thresholds of sequencing depth and sequence similarity to separate alleles from methodological artefacts. We explain why these approaches are particularly sensitive to methodological biases by violating fundamental genotyping assumptions. An alternative strategy that utilizes ultra-deep sequencing (hundreds to thousands of sequences per amplicon) to reconstruct genotypes and applies statistical methods on the sequencing depth to separate alleles from artefacts appears to be more robust. Importantly, the 'degree of change' (DOC) method avoids using arbitrary cut-off thresholds by looking for statistical boundaries between the sequencing depth for alleles and artefacts, and hence, it is entirely repeatable across studies. Although the advances made in generating NGS data are still far ahead of our ability to perform reliable processing, analysis and interpretation, the community is developing statistically rigorous protocols that will allow us to address novel questions in evolution, ecology and genetics of multigene families. Future developments in third-generation single molecule sequencing may potentially help overcome problems that still persist in de novo multigene amplicon genotyping when using current second-generation sequencing approaches.
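
    The depth-based boundary search at the heart of the 'degree of change' idea can be sketched in a few lines (this is a simplified reading with invented depths; the published method applies proper statistics to the depth distribution):

```python
# Sort per-amplicon variant depths in decreasing order and put the
# allele/artefact boundary at the largest relative drop between
# consecutive depths.

def doc_boundary(depths):
    d = sorted(depths, reverse=True)
    drops = [(d[i] - d[i + 1]) / d[i] for i in range(len(d) - 1)]
    k = max(range(len(drops)), key=drops.__getitem__)
    return d[:k + 1], d[k + 1:]  # putative alleles, putative artefacts

alleles, artefacts = doc_boundary([950, 900, 870, 60, 22, 9, 4])
print("alleles:", alleles)      # [950, 900, 870]
print("artefacts:", artefacts)  # [60, 22, 9, 4]
```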

  14. Sample Results From The Extraction, Scrub, And Strip Test For The Blended NGS Solvent

    Energy Technology Data Exchange (ETDEWEB)

    Washington, A. L. II; Peters, T. B.

    2014-03-03

    This report summarizes the results of the extraction, scrub, and strip testing for the September 2013 sampling of the Next Generation Solvent (NGS) Blended solvent from the Modular Caustic Side-Solvent Extraction Unit (MCU) Solvent Hold Tank. MCU is in the process of transitioning from the BOBCalixC6 solvent to the NGS Blend solvent. As part of that transition, MCU has intentionally created a blended solvent to be processed using the Salt Batch program. This sample represents the first sample received from that blended solvent. Two ESS tests were performed in which NGS blended solvent performance was assessed using either the Tank 21 material utilized in the Salt Batch 7 analyses or a simulant waste material used in the V-5/V-10 contactor testing. This report tabulates the temperature-corrected cesium distribution, or D(Cs), values, the step recovery percentages, and the actual temperatures recorded during the experiment. This report also identifies the sample receipt date, preparation method, and analysis performed in the accumulation of the listed values. The calculated extraction D(Cs) values using the Tank 21H material and simulant are 59.4 and 53.8, respectively. The D(Cs) values for the two scrub and three strip processes for the Tank 21 material are 4.58, 2.91, 0.00184, 0.0252, and 0.00575, respectively. The D(Cs) values for the two scrub and three strip processes for the simulant are 3.47, 2.18, 0.00468, 0.00057, and 0.00572, respectively. These values are similar to previous measurements of Salt Batch 7 feed with lab-prepared blended solvent, and are considered close enough to allow simulant testing to be completed in place of actual waste, given the limited availability of feed material.
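
    For readers outside the field, the D(Cs) value reported here is a distribution ratio between the two phases at equilibrium. A minimal sketch (the concentrations are placeholders chosen to reproduce two of the reported magnitudes, not MCU measurements):

```python
# D(Cs) = [Cs] in the organic (solvent) phase / [Cs] in the aqueous phase.
# High D on extraction and low D on strip is the desired behavior.

def distribution_ratio(cs_organic, cs_aqueous):
    return cs_organic / cs_aqueous

print(distribution_ratio(5.94e-4, 1.0e-5))  # extraction-like, D ~ 59.4
print(distribution_ratio(5.75e-8, 1.0e-5))  # strip-like, D ~ 0.00575
```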

  15. New genes and pathomechanisms in mitochondrial disorders unraveled by NGS technologies.

    Science.gov (United States)

    Legati, Andrea; Reyes, Aurelio; Nasca, Alessia; Invernizzi, Federica; Lamantea, Eleonora; Tiranti, Valeria; Garavaglia, Barbara; Lamperti, Costanza; Ardissone, Anna; Moroni, Isabella; Robinson, Alan; Ghezzi, Daniele; Zeviani, Massimo

    2016-08-01

    Next Generation Sequencing (NGS) technologies are revolutionizing the diagnostic screening for rare disease entities, including primary mitochondrial disorders, particularly those caused by nuclear gene defects. NGS approaches are able to identify the causative gene defects in small families and even single individuals, unsuitable for investigation by traditional linkage analysis. These technologies are contributing to fill the gap between mitochondrial disease cases defined on the basis of clinical, neuroimaging and biochemical readouts, which still outnumber by approximately 50% the cases for which a molecular-genetic diagnosis is attained. We have been using a combined, two-step strategy, based on a targeted gene panel as a first NGS screening, followed by whole exome sequencing (WES) in still unsolved cases, to analyze a large cohort of subjects that failed to show mutations in mtDNA and in ad hoc sets of specific nuclear genes sequenced by the Sanger method. Not only has this approach allowed us to reach a molecular diagnosis in a significant fraction (20%) of these difficult cases, but it has also revealed unexpected and conceptually new findings. These include the possibility of marked variable penetrance of recessive mutations, the identification of large-scale DNA rearrangements, which explain spuriously heterozygous cases, and the association of mutations in known genes with unusual, previously unreported clinical phenotypes. Importantly, WES on selected cases has unraveled the presence of pathogenic mutations in genes encoding non-mitochondrial proteins (e.g. the transcription factor E4F1), an observation that further expands the intricate genetics of mitochondrial disease and suggests a new area of investigation in mitochondrial medicine. This article is part of a Special Issue entitled 'EBEC 2016: 19th European Bioenergetics Conference, Riva del Garda, Italy, July 2-6, 2016', edited by Prof. Paolo Bernardi.

  16. 77 FR 43063 - Affirmation of Vertical Datum for Surveying and Mapping Activities for the Territory of Puerto Rico

    Science.gov (United States)

    2012-07-23

    ... Activities for the Territory of Puerto Rico AGENCY: National Geodetic Survey (NGS), National Ocean Service... vertical datum for the Territory of Puerto Rico, which includes the islands of Puerto Rico,...

  17. Fast extraction from SPS LSS4 for the LHC and NGS projects

    CERN Document Server

    Goddard, B

    1998-01-01

    The proton and lead ion beams for the anti-clockwise ring of the LHC will be extracted from the SPS in LSS4 and transferred to LHC point 8 via the transfer line TT40-TI8. A conventional single-turn 'fast' extraction will be used. The same extraction channel may be used to transfer protons to a neutrino target for the proposed long-baseline Neutrino to Gran Sasso (NGS) project. The planned layout for the extraction channel in LSS4 now has the extraction septum in SPS half period 418, and is outlined for both fast-pulsed and DC septum designs.

  18. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  19. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  20. A high-throughput optomechanical retrieval method for sequence-verified clonal DNA from the NGS platform.

    Science.gov (United States)

    Lee, Howon; Kim, Hyoki; Kim, Sungsik; Ryu, Taehoon; Kim, Hwangbeom; Bang, Duhee; Kwon, Sunghoon

    2015-02-02

    Writing DNA plays a significant role in the fields of synthetic biology, functional genomics and bioengineering. DNA clones on next-generation sequencing (NGS) platforms have the potential to be a rich and cost-effective source of sequence-verified DNAs as a precursor for DNA writing. However, it is still very challenging to retrieve target clonal DNA from high-density NGS platforms. Here we propose an enabling technology called 'Sniper Cloning' that enables the precise mapping of target clone features on NGS platforms and non-contact rapid retrieval of targets for the full utilization of DNA clones. By merging the three cutting-edge technologies of NGS, DNA microarray and our pulse laser retrieval system, Sniper Cloning is a week-long process that produces 5,188 error-free synthetic DNAs in a single run of NGS with a single microarray DNA pool. We believe that this technology has potential as a universal tool for DNA writing in biological sciences.

  1. Unipro UGENE NGS pipelines and components for variant calling, RNA-seq and ChIP-seq data analyses.

    Science.gov (United States)

    Golosova, Olga; Henderson, Ross; Vaskin, Yuriy; Gabrielian, Andrei; Grekhov, German; Nagarajan, Vijayaraj; Oler, Andrew J; Quiñones, Mariam; Hurt, Darrell; Fursov, Mikhail; Huyen, Yentram

    2014-01-01

    The advent of Next Generation Sequencing (NGS) technologies has opened new possibilities for researchers. However, the more biology becomes a data-intensive field, the more biologists have to learn how to process and analyze NGS data with complex computational tools. Even with the availability of common pipeline specifications, it is often a time-consuming and cumbersome task for a bench scientist to install and configure the pipeline tools. We believe that a unified, desktop and biologist-friendly front end to NGS data analysis tools will substantially improve productivity in this field. Here we present NGS pipelines "Variant Calling with SAMtools", "Tuxedo Pipeline for RNA-seq Data Analysis" and "Cistrome Pipeline for ChIP-seq Data Analysis" integrated into the Unipro UGENE desktop toolkit. We describe the available UGENE infrastructure that helps researchers run these pipelines on different datasets, store and investigate the results and re-run the pipelines with the same parameters. These pipeline tools are included in the UGENE NGS package. Individual blocks of these pipelines are also available for expert users to create their own advanced workflows.

  2. lociNGS: a lightweight alternative for assessing suitability of next-generation loci for evolutionary analysis.

    Directory of Open Access Journals (Sweden)

    Sarah M Hird

    Genomic enrichment methods and next-generation sequencing produce uneven coverage for the portions of the genome (the loci) they target; this information is essential for ascertaining the suitability of each locus for further analysis. lociNGS is a user-friendly accessory program that takes multi-FASTA formatted loci, next-generation sequence alignments and demographic data as input and collates, displays and outputs information about the data. Summary information includes such parameters as coverage per locus, coverage per individual and number of polymorphic sites, among others. The program can output the raw sequences used to call loci from next-generation sequencing data. lociNGS also reformats subsets of loci in three commonly used formats for multi-locus phylogeographic and population genetics analyses: NEXUS, IMa2 and Migrate. lociNGS is available at https://github.com/SHird/lociNGS and is dependent on installation of MongoDB (freely available at http://www.mongodb.org/downloads). lociNGS is written in Python and is supported on MacOSX and Unix; it is distributed under a GNU General Public License.

  3. Analysis and Visualization of ChIP-Seq and RNA-Seq Sequence Alignments Using ngs.plot.

    Science.gov (United States)

    Loh, Yong-Hwee Eddie; Shen, Li

    2016-01-01

    The continual maturation and increasing applications of next-generation sequencing technology in scientific research have yielded ever-increasing amounts of data that need to be effectively and efficiently analyzed and innovatively mined for new biological insights. We have developed ngs.plot, a quick and easy-to-use bioinformatics tool that performs visualizations of the spatial relationships between sequencing alignment enrichment and specific genomic features or regions. More importantly, ngs.plot is customizable beyond the use of standard genomic feature databases to allow the analysis and visualization of user-specified regions of interest generated by the user's own hypotheses. In this protocol, we demonstrate and explain the use of ngs.plot using command line executions, as well as a web-based workflow on the Galaxy framework. We replicate the underlying commands used in the analysis of a true biological dataset that we had reported and published earlier and demonstrate how ngs.plot can easily generate publication-ready figures. With ngs.plot, users would be able to efficiently and innovatively mine their own datasets without having to be involved in the technical aspects of sequence coverage calculations and genomic databases.
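
    A typical invocation follows the pattern below; the flag names reflect the ngs.plot documentation as commonly cited, and the file names are invented, so treat this as a sketch rather than a verbatim protocol step:

```python
import subprocess

# Plot average ChIP-seq coverage around transcription start sites,
# with 3 kb flanks, comparing a ChIP BAM against its input control.
subprocess.run([
    "ngs.plot.r",
    "-G", "hg19",                 # reference genome
    "-R", "tss",                  # genomic feature: TSS
    "-C", "chip.bam:input.bam",   # ChIP vs input control (invented names)
    "-O", "h3k4me3_tss",          # output prefix
    "-L", "3000",                 # flanking length in bp
], check=True)
```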

  4. Randomized benchmarking of multiqubit gates.

    Science.gov (United States)

    Gaebler, J P; Meier, A M; Tan, T R; Bowler, R; Lin, Y; Hanneke, D; Jost, J D; Home, J P; Knill, E; Leibfried, D; Wineland, D J

    2012-06-29

    We describe an extension of single-qubit gate randomized benchmarking that measures the error of multiqubit gates in a quantum information processor. This platform-independent protocol evaluates the performance of Clifford unitaries, which form a basis of fault-tolerant quantum computing. We implemented the benchmarking protocol with trapped ions and found an error per random two-qubit Clifford unitary of 0.162±0.008, thus setting the first benchmark for such unitaries. By implementing a second set of sequences with an extra two-qubit phase gate inserted after each step, we extracted an error per phase gate of 0.069±0.017. We conducted these experiments with transported, sympathetically cooled ions in a multizone Paul trap, a system that can in principle be scaled to larger numbers of ions.
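
    Extracting a per-gate error from the two decay curves follows the standard interleaved-benchmarking relation; the sketch below uses that textbook formula with invented decay parameters (it does not reproduce the paper's fits):

```python
# Given the depolarizing decay parameter of the reference sequences
# (p_ref) and of the sequences with the extra gate interleaved (p_gate),
# the error per interleaved gate in dimension d is approximately
#     r = (d - 1) / d * (1 - p_gate / p_ref)

def interleaved_gate_error(p_ref, p_gate, d=4):  # d = 4 for two qubits
    return (d - 1) / d * (1 - p_gate / p_ref)

print(round(interleaved_gate_error(p_ref=0.90, p_gate=0.82), 3))  # ~0.067
```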

  5. NGS-Based Assay for the Identification of Individuals Carrying Recessive Genetic Mutations in Reproductive Medicine.

    Science.gov (United States)

    Abulí, Anna; Boada, Montserrat; Rodríguez-Santiago, Benjamín; Coroleu, Buenaventura; Veiga, Anna; Armengol, Lluís; Barri, Pedro N; Pérez-Jurado, Luis A; Estivill, Xavier

    2016-06-01

    Next-generation sequencing (NGS) has the capacity to support carrier screening in gamete donation (GD) programs. We have developed and validated an NGS carrier-screening test (qCarrier test) that includes 200 genes associated with 368 disorders (277 autosomal recessive and 37 X-linked). Carrier screening is performed on oocyte donation candidates and the male partner of the oocyte recipient. Carriers of X-linked conditions are excluded from the GD program, whereas donors are chosen who do not carry mutations in the same gene/disease as the recipients. The validation phase showed high sensitivity (>99%), detecting all single-nucleotide variants, 13 indels, and 25 copy-number variants included in the validation set. A total of 1,301 individuals were analysed with the qCarrier test, including 483 candidate oocyte donors and 635 recipient couples, 105 females receiving sperm donation, and 39 couples seeking pregnancy. We identified 56% of individuals who are carriers for at least one genetic condition and 1.7% of female donors who were excluded from the program due to a carrier state for X-linked conditions. Globally, 3% of a priori assigned donations had a high reproductive risk that could be minimized after testing. Genetic counselling at different stages is essential for helping to facilitate a successful and healthy pregnancy.

  6. Integrated Systems for NGS Data Management and Analysis: Open Issues and Available Solutions.

    Science.gov (United States)

    Bianchi, Valerio; Ceol, Arnaud; Ogier, Alessandro G E; de Pretis, Stefano; Galeota, Eugenia; Kishore, Kamal; Bora, Pranami; Croci, Ottavio; Campaner, Stefano; Amati, Bruno; Morelli, Marco J; Pelizzola, Mattia

    2016-01-01

    Next-generation sequencing (NGS) technologies have deeply changed our understanding of cellular processes by delivering an astonishing amount of data at affordable prices; nowadays, many biology laboratories have already accumulated a large number of sequenced samples. However, managing and analyzing these data poses new challenges, which may easily be underestimated by research groups devoid of IT and quantitative skills. In this perspective, we identify five issues that should be carefully addressed by research groups approaching NGS technologies. In particular, the five key issues to be considered concern: (1) adopting a laboratory management system (LIMS) and safeguarding the resulting raw data structure in downstream analyses; (2) monitoring the flow of the data and standardizing input and output directories and file names, even when multiple analysis protocols are used on the same data; (3) ensuring complete traceability of the analysis performed; (4) enabling non-experienced users to run analyses through a graphical user interface (GUI) acting as a front-end for the pipelines; (5) relying on standard metadata to annotate the datasets, and when possible using controlled vocabularies, ideally derived from biomedical ontologies. Finally, we discuss the currently available tools in the light of these issues, and we introduce HTS-flow, a new workflow management system conceived to address the concerns we raised. HTS-flow is able to retrieve information from a LIMS database, manages data analyses through a simple GUI, outputs data in standard locations and allows the complete traceability of datasets, accompanying metadata and analysis scripts.

  7. Integrated systems for NGS data management and analysis: open issues and available solutions

    Directory of Open Access Journals (Sweden)

    Valerio Bianchi

    2016-05-01

    Next-generation sequencing (NGS) technologies have deeply changed our understanding of cellular processes by delivering an astonishing amount of data at affordable prices; nowadays, many biology laboratories have already accumulated a large number of sequenced samples. However, managing and analyzing these data poses new challenges, which may easily be underestimated by research groups devoid of IT and quantitative skills. In this perspective, we identify five issues that should be carefully addressed by research groups approaching NGS technologies. In particular, the five key issues to be considered concern: (1) adopting a laboratory management system (LIMS) and safeguarding the resulting raw data structure in downstream analyses; (2) monitoring the flow of the data and standardizing input and output directories and file names, even when multiple analysis protocols are used on the same data; (3) ensuring complete traceability of the analysis performed; (4) enabling non-experienced users to run analyses through a graphical user interface (GUI) acting as a front-end for the pipelines; (5) relying on standard metadata to annotate the datasets, and when possible using controlled vocabularies, ideally derived from biomedical ontologies. Finally, we discuss the currently available tools in the light of these issues, and we introduce HTS-flow, a new workflow management system (WMS) conceived to address the concerns we raised. HTS-flow is able to retrieve information from a LIMS database, manages data analyses through a simple GUI, outputs data in standard locations and allows the complete traceability of datasets, accompanying metadata and analysis scripts.

  8. CDH1 mutations in gastric cancer patients from northern Brazil identified by Next-Generation Sequencing (NGS).

    Science.gov (United States)

    El-Husny, Antonette; Raiol-Moraes, Milene; Amador, Marcos; Ribeiro-Dos-Santos, André M; Montagnini, André; Barbosa, Silvanira; Silva, Artur; Assumpção, Paulo; Ishak, Geraldo; Santos, Sidney; Pinto, Pablo; Cruz, Aline; Ribeiro-Dos-Santos, Ândrea

    2016-05-13

    Gastric cancer is considered the fifth most incident tumor worldwide and the third leading cause of cancer deaths. Developing regions report a higher number of sporadic cases, but there are only a few local studies on hereditary cases of gastric cancer in Brazil to confirm this. CDH1 germline mutations have been described in both familial and sporadic cases, but there is only one recent molecular description of individuals from Brazil. In this study we performed Next-Generation Sequencing (NGS) to assess CDH1 germline mutations in individuals who match the clinical criteria for Hereditary Diffuse Gastric Cancer (HDGC) or who exhibited a very early diagnosis of gastric cancer. Among five probands we detected CDH1 germline mutations in two cases (40%). The mutation c.1023T > G was found in an HDGC family, and the mutation c.1849G > A, which is nearly exclusive to African populations, was found in an early-onset case of gastric adenocarcinoma. The mutations described highlight the existence of gastric cancer cases caused by CDH1 germline mutations in northern Brazil, although such information is frequently overlooked because of the large number of local environmental risk factors. Our report represents the first description of CDH1 mutations in HDGC from Brazil obtained on an NGS platform.

  9. Assessment of the latest NGS enrichment capture methods in clinical context.

    Science.gov (United States)

    García-García, Gema; Baux, David; Faugère, Valérie; Moclyn, Mélody; Koenig, Michel; Claustres, Mireille; Roux, Anne-Françoise

    2016-02-11

    Enrichment capture methods for NGS are widely used; however, they evolve rapidly, and it is necessary to periodically measure their strengths and weaknesses before transfer to diagnostic services. We assessed two recently released custom DNA solution-capture enrichment methods for NGS, namely Illumina NRCCE and Agilent SureSelect(QXT), against the reference method NimbleGen SeqCap EZ Choice on a similar gene panel, sharing 678 kb and 110 genes. Two Illumina MiSeq runs of 12 samples each were performed for each of the three methods, using the same 24 patients (affected with sensorineural disorders). Technical outcomes were computed and compared, including depth and evenness of coverage, enrichment in targeted regions, performance in GC-rich regions, and ability to generate consistent variant datasets. While we show that all three methods yielded datasets suitable for standard DNA variant discovery, we describe significant differences between the results for the above parameters. NimbleGen offered the best depth of coverage and evenness, while NRCCE showed the highest on-target levels but high duplicate rates. SureSelect(QXT) showed an overall quality close to that of NimbleGen. The new methods exhibit reduced preparation time but behave differently. These findings will guide laboratories in their choice of library enrichment approach.
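
    Depth and evenness of coverage, the key metrics compared in this record, can be computed from per-base depths (e.g. samtools depth output). The following is a minimal sketch, assuming evenness is summarized as the fraction of target bases covered at or above 20% of the mean depth; the authors' exact definitions may differ.

        # Toy summary of depth, breadth and evenness of coverage from per-base depths.
        from statistics import mean

        def coverage_metrics(depths, min_depth=30):
            avg = mean(depths)
            breadth = sum(d >= min_depth for d in depths) / len(depths)
            # Evenness proxy (assumed): bases covered at >= 20% of the mean depth.
            evenness = sum(d >= 0.2 * avg for d in depths) / len(depths)
            return avg, breadth, evenness

        print(coverage_metrics([120, 95, 40, 0, 310, 88, 67, 150]))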

  10. Panel-based NGS Reveals Novel Pathogenic Mutations in Autosomal Recessive Retinitis Pigmentosa.

    Science.gov (United States)

    Perez-Carro, Raquel; Corton, Marta; Sánchez-Navarro, Iker; Zurita, Olga; Sanchez-Bolivar, Noelia; Sánchez-Alcudia, Rocío; Lelieveld, Stefan H; Aller, Elena; Lopez-Martinez, Miguel Angel; López-Molina, Ma Isabel; Fernandez-San Jose, Patricia; Blanco-Kelly, Fiona; Riveiro-Alvarez, Rosa; Gilissen, Christian; Millan, Jose M; Avila-Fernandez, Almudena; Ayuso, Carmen

    2016-01-25

    Retinitis pigmentosa (RP) is a group of inherited progressive retinal dystrophies (RD) characterized by photoreceptor degeneration. RP is highly heterogeneous both clinically and genetically, which complicates the identification of causative genes and mutations. Targeted next-generation sequencing (NGS) has been demonstrated to be an effective strategy for the detection of mutations in RP. In our study, an in-house gene panel comprising 75 known RP genes was used to analyze a cohort of 47 unrelated Spanish families pre-classified as autosomal recessive or isolated RP. Disease-causing mutations were found in 27 out of 47 cases, achieving a mutation detection rate of 57.4%. In total, 33 pathogenic mutations were identified, 20 of which were novel (60.6%). Furthermore, not only single-nucleotide variations but also copy-number variations, including three large deletions in the USH2A and EYS genes, were identified. Finally, seven out of 27 families, displaying mutations in the ABCA4, RP1, RP2 and USH2A genes, could be genetically or clinically reclassified. These results demonstrate the potential of our panel-based NGS strategy in RP diagnosis.

  11. NGS-based Molecular diagnosis of 105 eyeGENE(®) probands with Retinitis Pigmentosa.

    Science.gov (United States)

    Ge, Zhongqi; Bowles, Kristen; Goetz, Kerry; Scholl, Hendrik P N; Wang, Feng; Wang, Xinjing; Xu, Shan; Wang, Keqing; Wang, Hui; Chen, Rui

    2015-12-15

    The National Ophthalmic Disease Genotyping and Phenotyping Network (eyeGENE(®)) was established in an effort to facilitate basic and clinical research of human inherited eye disease. In order to provide high-quality genetic testing to eyeGENE(®)'s enrolled patients, which potentially aids clinical diagnosis and disease treatment, we carried out a pilot study and performed Next-generation sequencing (NGS) based molecular diagnosis for 105 Retinitis Pigmentosa (RP) patients randomly selected from the network. A custom capture panel was designed, which incorporated 195 known retinal disease genes, including 61 known RP genes. As a result, disease-causing mutations were identified in 52 out of 105 probands (a solving rate of 49.5%). A total of 82 mutations were identified, 48 of which were novel. Interestingly, for three probands the molecular diagnosis was inconsistent with the initial clinical diagnosis, while for five probands the molecular information suggested an inheritance model different from that assigned by the physician. In conclusion, our study demonstrated that NGS target sequencing is efficient and sufficiently precise for molecular diagnosis of a highly heterogeneous patient cohort from eyeGENE(®).

  12. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  13. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.
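
    One core measurement such a benchmark platform must support is robustness: the distance between hashes of an original and a processed copy. A minimal sketch, with made-up 64-bit hash values standing in for the output of a real perceptual hash function:

        # Hamming distance between two 64-bit perceptual hashes; a small distance
        # under benign distortions (e.g. recompression) indicates a robust hash.
        def hamming(h1: int, h2: int) -> int:
            return bin(h1 ^ h2).count("1")

        original  = 0xF0E1D2C3B4A59687  # hypothetical hash of the original image
        distorted = 0xF0E1D2C3B4A59697  # hypothetical hash after JPEG recompression
        print(hamming(original, distorted))  # 1 flipped bit => robust to this edit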

  14. Closed-loop neuromorphic benchmarks

    CSIR Research Space (South Africa)

    Stewart

    2015-11-01

    Full Text Available Closed-loop Neuromorphic Benchmarks. Terrence C. Stewart (University of Waterloo, Canada), Travis DeWolf (University of Waterloo, Canada), Ashley Kleinhans (Council for Scientific and Industrial Research, South Africa) and Chris Eliasmith (University of Waterloo, Canada). Submitted to: Frontiers in Neuroscience.

  15. The contextual benchmark method: benchmarking e-government services

    NARCIS (Netherlands)

    Jansen, Jurjen; Vries, de Sjoerd; Schaik, van Paul

    2010-01-01

    This paper offers a new method for benchmarking e-Government services. Government organizations no longer doubt the need to deliver their services on line. Instead, the question that is more relevant is how well the electronic services offered by a particular organization perform in comparison with

  16. Next generation sequencing (NGS) database for tandem repeats with multiple pattern 2°-shaft multicore string matching

    Directory of Open Access Journals (Sweden)

    Chinta Someswara Rao

    2016-03-01

    Full Text Available Next generation sequencing (NGS) technologies have been rapidly applied in biomedical and biological research in recent years. To provide a comprehensive NGS resource for such research, in this paper we consider 10 tandem repeat motifs: TAGA, TCAT, GAAT, AGAT, AGAA, GATA, TATC, CTTT, TCTG and TCTA. We developed the NGS Tandem Repeat Database (TandemRepeatDB) covering all chromosomes of the Homo sapiens, Callithrix jacchus, Chlorocebus sabaeus, Gorilla gorilla, Macaca fascicularis, Macaca mulatta, Nomascus leucogenys, Pan troglodytes, Papio anubis and Pongo abelii genome data sets for all of these loci. We find the successive occurrence frequency for all 10 of these SSRs (simple sequence repeats) in the above genome data sets on a chromosome-by-chromosome basis with multiple pattern 2°-shaft multicore string matching.

  17. Next generation sequencing (NGS) database for tandem repeats with multiple pattern 2°-shaft multicore string matching

    Science.gov (United States)

    Someswara Rao, Chinta; Raju, S. Viswanadha

    2016-01-01

    Next generation sequencing (NGS) technologies have been rapidly applied in biomedical and biological research in recent years. To provide a comprehensive NGS resource for such research, in this paper we consider 10 tandem repeat motifs: TAGA, TCAT, GAAT, AGAT, AGAA, GATA, TATC, CTTT, TCTG and TCTA. We developed the NGS Tandem Repeat Database (TandemRepeatDB) covering all chromosomes of the Homo sapiens, Callithrix jacchus, Chlorocebus sabaeus, Gorilla gorilla, Macaca fascicularis, Macaca mulatta, Nomascus leucogenys, Pan troglodytes, Papio anubis and Pongo abelii genome data sets for all of these loci. We find the successive occurrence frequency for all 10 of these SSRs (simple sequence repeats) in the above genome data sets on a chromosome-by-chromosome basis with multiple pattern 2°-shaft multicore string matching. PMID:26981434
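
    The core counting task behind such a database can be illustrated with a simple single-threaded stand-in for the authors' 2°-shaft multicore matcher; this sketch tallies (possibly overlapping) occurrences of each SSR motif along a toy sequence.

        # Count occurrences of each 4-base SSR motif along a (toy) chromosome sequence.
        SSR_MOTIFS = ["TAGA", "TCAT", "GAAT", "AGAT", "AGAA",
                      "GATA", "TATC", "CTTT", "TCTG", "TCTA"]

        def count_motifs(seq: str) -> dict:
            counts = {m: 0 for m in SSR_MOTIFS}
            for i in range(len(seq) - 3):
                window = seq[i:i + 4]
                if window in counts:
                    counts[window] += 1
            return counts

        print(count_motifs("TAGATAGAGATATCATCTTT"))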

  18. A case study for cloud based high throughput analysis of NGS data using the globus genomics system.

    Science.gov (United States)

    Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; Rodriguez, Alex; Madduri, Ravi; Dave, Utpal; Lacinski, Lukasz; Foster, Ian; Gusev, Yuriy; Madhavan, Subha

    2015-01-01

    Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high-quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte-scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the "Globus Genomics" system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel, and it also helps meet the scale-out analysis needs of modern translational genomics research.

  19. Next generation sequencing (NGS) database for tandem repeats with multiple pattern 2°-shaft multicore string matching.

    Science.gov (United States)

    Someswara Rao, Chinta; Raju, S Viswanadha

    2016-03-01

    Next generation sequencing (NGS) technologies have been rapidly applied in biomedical and biological research in recent years. To provide a comprehensive NGS resource for such research, in this paper we consider 10 tandem repeat motifs: TAGA, TCAT, GAAT, AGAT, AGAA, GATA, TATC, CTTT, TCTG and TCTA. We developed the NGS Tandem Repeat Database (TandemRepeatDB) covering all chromosomes of the Homo sapiens, Callithrix jacchus, Chlorocebus sabaeus, Gorilla gorilla, Macaca fascicularis, Macaca mulatta, Nomascus leucogenys, Pan troglodytes, Papio anubis and Pongo abelii genome data sets for all of these loci. We find the successive occurrence frequency for all 10 of these SSRs (simple sequence repeats) in the above genome data sets on a chromosome-by-chromosome basis with multiple pattern 2°-shaft multicore string matching.

  20. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    Directory of Open Access Journals (Sweden)

    Krithika Bhuvaneshwar

    2015-01-01

    Full Text Available Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high-quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte-scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel, and it also helps meet the scale-out analysis needs of modern translational genomics research.

  1. Climate Benchmark Missions: CLARREO

    Science.gov (United States)

    Wielicki, Bruce A.; Young, David F.

    2010-01-01

    CLARREO (Climate Absolute Radiance and Refractivity Observatory) is one of the four Tier 1 missions recommended by the recent NRC decadal survey report on Earth Science and Applications from Space (NRC, 2007). The CLARREO mission addresses the need to rigorously observe climate change on decade time scales and to use decadal change observations as the most critical method to determine the accuracy of climate change projections such as those used in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4). A rigorously known accuracy of both decadal change observations as well as climate projections is critical in order to enable sound policy decisions. The CLARREO mission accomplishes this critical objective through highly accurate and SI traceable decadal change observations sensitive to many of the key uncertainties in climate radiative forcings, responses, and feedbacks that in turn drive uncertainty in current climate model projections. The same uncertainties also lead to uncertainty in attribution of climate change to anthropogenic forcing. The CLARREO breakthrough in decadal climate change observations is to achieve the required levels of accuracy and traceability to SI standards for a set of observations sensitive to a wide range of key decadal change variables. These accuracy levels are determined both by the projected decadal changes as well as by the background natural variability that such signals must be detected against. The accuracy for decadal change traceability to SI standards includes uncertainties of calibration, sampling, and analysis methods. Unlike most other missions, all of the CLARREO requirements are judged not by instantaneous accuracy, but instead by accuracy in large time/space scale average decadal changes. Given the focus on decadal climate change, the NRC Decadal Survey concluded that the single most critical issue for decadal change observations was their lack of accuracy and low confidence in

  2. Next-generation sequencing (NGS) for assessment of microbial water quality: current progress, challenges, and future opportunities

    Science.gov (United States)

    Tan, BoonFei; Ng, Charmaine; Nshimyimana, Jean Pierre; Loh, Lay Leng; Gin, Karina Y.-H.; Thompson, Janelle R.

    2015-01-01

    Water quality is an emergent property of a complex system comprised of interacting microbial populations and introduced microbial and chemical contaminants. Studies leveraging next-generation sequencing (NGS) technologies are providing new insights into the ecology of microbially mediated processes that influence fresh water quality such as algal blooms, contaminant biodegradation, and pathogen dissemination. In addition, sequencing methods targeting small subunit (SSU) rRNA hypervariable regions have allowed identification of signature microbial species that serve as bioindicators for sewage contamination in these environments. Beyond amplicon sequencing, metagenomic and metatranscriptomic analyses of microbial communities in fresh water environments reveal the genetic capabilities and interplay of waterborne microorganisms, shedding light on the mechanisms for production and biodegradation of toxins and other contaminants. This review discusses the challenges and benefits of applying NGS-based methods to water quality research and assessment. We will consider the suitability and biases inherent in the application of NGS as a screening tool for assessment of biological risks and discuss the potential and limitations for direct quantitative interpretation of NGS data. Secondly, we will examine case studies from recent literature where NGS-based methods have been applied to topics in water quality assessment, including development of bioindicators for sewage pollution and microbial source tracking, characterizing the distribution of toxin and antibiotic resistance genes in water samples, and investigating mechanisms of biodegradation of harmful pollutants that threaten water quality. Finally, we provide a short review of emerging NGS platforms and their potential applications to the next generation of water quality assessment tools. PMID:26441948

  3. Next-generation sequencing (NGS) for assessment of microbial water quality: current progress, challenges, and future opportunities

    Directory of Open Access Journals (Sweden)

    BoonFei eTan

    2015-09-01

    Full Text Available Water quality is an emergent property of a complex system comprised of interacting microbial populations and introduced microbial and chemical contaminants. Studies leveraging next-generation sequencing (NGS) technologies are providing new insights into the ecology of microbially mediated processes that influence fresh water quality such as algal blooms, contaminant biodegradation, and pathogen dissemination. In addition, sequencing methods targeting small subunit (SSU) rRNA hypervariable regions have allowed identification of signature microbial species that serve as bioindicators for sewage contamination in these environments. Beyond amplicon sequencing, metagenomic and metatranscriptomic analyses of microbial communities in fresh water environments reveal the genetic capabilities and interplay of waterborne microorganisms, shedding light on the mechanisms for production and biodegradation of toxins and other contaminants. This review discusses the challenges and benefits of applying NGS-based methods to water quality research and assessment. We will consider the suitability and biases inherent in the application of NGS as a screening tool for assessment of biological risks and discuss the potential and limitations for direct quantitative interpretation of NGS data. Secondly, we will examine case studies from recent literature where NGS-based methods have been applied to topics in water quality assessment, including development of bioindicators for sewage pollution and microbial source tracking, characterizing the distribution of toxin and antibiotic resistance genes in water samples, and investigating mechanisms of biodegradation of harmful pollutants that threaten water quality. Finally, we provide a short review of emerging NGS platforms and their potential applications to the next generation of water quality assessment tools.

  4. Benchmarking of methods for identification of antimicrobial resistance genes in bacterial whole genome data

    DEFF Research Database (Denmark)

    Clausen, Philip T. L. C.; Zankari, Ea; Aarestrup, Frank Møller;

    2016-01-01

    Next generation sequencing (NGS) may be an alternative to phenotypic susceptibility testing for surveillance and clinical diagnosis. However, current bioinformatics methods may be associated with false positives and negatives. In this study, a novel mapping method was developed and benchmarked against two different methods in current use for identification of antibiotic resistance genes in bacterial WGS data. The novel method, KmerResistance, examines the co-occurrence of k-mers between the WGS data and a database of resistance genes. Its performance was compared with that of two previously described methods, ResFinder and SRST2, which use an assembly/BLAST method and BWA, respectively, using two datasets with a total of 339 isolates, covering five species, originating from the Oxford University Hospitals NHS Trust and Danish pig farms. The predicted resistance
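
    The k-mer co-occurrence idea behind KmerResistance can be illustrated in a few lines. This is only the core concept under simplifying assumptions (exact matching, a made-up toy database); the real tool uses hashed indexes plus coverage and depth models.

        # Fraction of each resistance gene's k-mers recovered from sequencing reads.
        def kmers(seq, k=11):
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def gene_hits(reads, gene_db, k=11):
            read_kmers = set()
            for r in reads:
                read_kmers |= kmers(r, k)
            return {name: len(kmers(g, k) & read_kmers) / len(kmers(g, k))
                    for name, g in gene_db.items()}

        db = {"blaTEM_toy": "ATGAGTATTCAACATTTCCGTGTCGCCCTT"}  # made-up fragment
        reads = ["GAGTATTCAACATTTCC", "CATTTCCGTGTCGCC"]
        print(gene_hits(reads, db))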

  5. SURVEY

    DEFF Research Database (Denmark)

    SURVEY is a widely used method within the social sciences, the humanities, psychology and health research, among other fields. Outside the research world, too, many organizations work with surveys, e.g. consulting firms and public institutions as well as marketing departments in private companies. This book covers all phases of survey work and gives a practical introduction to: • designing the study and selecting samples, • formulating questionnaires and collecting and coding data, • methods for analyzing the results...

  6. Benchmarking Internet of Things devices

    CSIR Research Space (South Africa)

    Kruger, CP

    2014-07-01

    Full Text Available Benchmarking Internet of Things devices. C.P. Kruger and G.P. Hancke, Advanced Sensor Networks Research Group, Council for Scientific and Industrial Research, South Africa. Presented at the International Conference on Industrial Informatics (INDIN), 27-30 July 2014.

  7. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criteria and presents a list of services that are most commonly deployed by the selected websites. In addition, the investigators proposed a list of services that could be provided via the KAUST library website.

  8. Engine Benchmarking - Final CRADA Report

    Energy Technology Data Exchange (ETDEWEB)

    Wallner, Thomas [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    Detailed benchmarking of the powertrains of three light-duty vehicles was performed. Results were presented and provided to CRADA partners. The vehicles included a MY2011 Audi A4, a MY2012 Mini Cooper and a MY2014 Nissan Versa.

  9. Benchmarking Universiteitsvastgoed: Managementinformatie bij vastgoedbeslissingen

    NARCIS (Netherlands)

    Den Heijer, A.C.; De Vries, J.C.

    2004-01-01

    This is the final report of the study "Benchmarking university real estate" ("Benchmarking universiteitsvastgoed"). The report merges two partial products: the theory report (published in December 2003) and the practice report (published in January 2004). Topics in the theory part include the analysis of other

  10. Benchmark Lisp And Ada Programs

    Science.gov (United States)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.

    1992-01-01

    Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing the efficiency of computer processing via Lisp vs. Ada; comparing the efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests the efficiency with which a computer executes routines in each language. Available for computers equipped with a validated Ada compiler and/or a Common Lisp system.

  11. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Science.gov (United States)

    2010-10-01

    Title 42 (Public Health), revised as of 2010-10-01: GENERAL PROVISIONS, Benchmark Benefit and Benchmark-Equivalent Coverage, § 440.385, Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  12. IgH-V(D)J NGS-MRD measurement pre- and early post-allotransplant defines very low- and very high-risk ALL patients.

    Science.gov (United States)

    Pulsipher, Michael A; Carlson, Chris; Langholz, Bryan; Wall, Donna A; Schultz, Kirk R; Bunin, Nancy; Kirsch, Ilan; Gastier-Foster, Julie M; Borowitz, Michael; Desmarais, Cindy; Williamson, David; Kalos, Michael; Grupp, Stephan A

    2015-05-28

    Positive detection of minimal residual disease (MRD) by multichannel flow cytometry (MFC) prior to hematopoietic cell transplantation (HCT) of patients with acute lymphoblastic leukemia (ALL) identifies patients at high risk for relapse, but many pre-HCT MFC-MRD negative patients also relapse, and the predictive power of MFC-MRD early post-HCT is poor. To test whether the increased sensitivity of next-generation sequencing (NGS)-MRD better identifies pre- and post-HCT relapse risk, we performed immunoglobulin heavy chain (IgH) variable, diversity, and joining (V[D]J) NGS-MRD on 56 patients with B-cell ALL enrolled in Children's Oncology Group trial ASCT0431. NGS-MRD predicted relapse and survival more accurately than MFC-MRD, and NGS-MRD detection was better at predicting relapse than MFC-MRD (NGS-MRD positive relapse rate, 67%; P = .004). Any post-HCT NGS positivity resulted in an increase in relapse risk by multivariate analysis (hazard ratio, 7.7; P = .05). Absence of detectable IgH-V(D)J NGS-MRD pre-HCT defines good-risk patients potentially eligible for less intense treatment approaches. Post-HCT NGS-MRD is highly predictive of relapse and survival, suggesting a role for this technique in defining patients early who would be eligible for post-HCT interventions. The trial was registered at www.clinicaltrials.gov as #NCT00382109.

  13. [Molecular-genetic analysis of DNA pol and TK of HSV-1 population using NGS technology].

    Science.gov (United States)

    Gus'kova, A A; Skoblov, M Iu; Lavrov, A V; Zubtsov, D A; Andronova, V L; Gol'dshteĭn, D V; Galegov, G A; Skoblov, Iu S

    2013-01-01

    The ratio of viral DNA to Vero cell DNA was determined by real-time polymerase chain reaction in lysates of Vero cells infected with the L2 strain of herpes simplex virus type 1 (HSV-1). The copy number of the virus reached a maximum after 24 hours of infection. Total DNA was isolated and sequenced using NGS technology on an Ion Torrent instrument. The nucleotide sequences of the thymidine kinase gene (UL23) and the DNA polymerase gene (UL30) were determined for a population of HSV-1 strain L2. The primary structures of these genes were compared with the corresponding nucleotide sequences of the known HSV-1 strains KOS and 17. The differences in the UL23 and UL30 genes between strain L2 and the reference strains KOS and 17 are not significant, because the changes lie in non-conserved regions.

  14. ascatNgs: Identifying Somatically Acquired Copy-Number Alterations from Whole-Genome Sequencing Data.

    Science.gov (United States)

    Raine, Keiran M; Van Loo, Peter; Wedge, David C; Jones, David; Menzies, Andrew; Butler, Adam P; Teague, Jon W; Tarpey, Patrick; Nik-Zainal, Serena; Campbell, Peter J

    2016-12-08

    We have developed ascatNgs to aid researchers in carrying out Allele-Specific Copy number Analysis of Tumours (ASCAT). ASCAT is capable of detecting DNA copy number changes affecting a tumor genome when compared with a matched normal sample. Additionally, the algorithm estimates the amount of tumor DNA in the sample, known as the Aberrant Cell Fraction (ACF). ASCAT itself is an R package which requires the generation of many file types. Here, we present a suite of tools to help handle this for the user. Our code is available on our GitHub site (https://github.com/cancerit). This unit describes both 'one-shot' execution and approaches more suitable for large-scale compute farms. © 2016 by John Wiley & Sons, Inc.

  15. Cancer modelling in the NGS era - Part I: Emerging technology and initial modelling.

    Science.gov (United States)

    Rovigatti, Ugo

    2015-11-01

    It is today indisputable that great progress has been made in our molecular understanding of cancer cells, but an effective translation of such knowledge into dramatic cancer cures is still overdue and desperately needed. This review gives a snapshot of where we stand today in the search for cancer understanding and definitive treatments, how far we have progressed, and what major obstacles we will have to overcome both technologically and in disease modelling. In the first part, promising 3rd/4th generation sequencing technologies are summarized (particularly the IonTorrent and OxfordNanopore technologies). Cancer modelling is then reviewed from its origin in XIX Century Germany to today's NGS applications for cancer understanding and therapeutic interventions. Developments after the Molecular Biology revolution (1953) are discussed as a succession of three phases. The first, PH1, labelled "Clonal Outgrowth" (from the 1960s to the mid 1980s), was characterized by discoveries in cytogenetics (Nowell, Rowley) and viral oncology (Dulbecco, Bishop, Varmus), which demonstrated clonality. Treatments were consequently dominated by a "cytotoxic eradication" strategy with chemotherapeutic agents. In PH2 (from the mid 1980s to our days), the description of cancer as "Gene Networks" led to targeted-gene-therapies (TGTs). TGTs are the focus of Section 3: in view of their apparent failing (Ephemeral Therapies), alternative strategies are discussed in review part II (particularly cancer immunotherapy, CIT). Additional pitfalls impinge on the concepts of tumour heterogeneity (inter/intra; ITH). The described pitfalls set the basis for a new phase, PH3, called the "NGS Era", which is also discussed along with ten emerging cancer models in the second part of the review.

  16. Epigenetic DNA Methylation Profiling with MSRE: A Quantitative NGS Approach Using a Parkinson's Disease Test Case

    Directory of Open Access Journals (Sweden)

    Adam G. Marsh

    2016-11-01

    Full Text Available Epigenetics is a rapidly developing field focused on deciphering chemical fingerprints that accumulate on human genomes over time. As the nascent idea of precision medicine expands to encompass epigenetic signatures of diagnostic and prognostic relevance, there is a need for methodologies that provide high-throughput DNA methylation profiling measurements. Here we report a novel quantification methodology for computationally reconstructing site-specific CpG methylation status from next generation sequencing (NGS) data using methyl-sensitive restriction endonucleases (MSRE). An integrated pipeline efficiently incorporates raw NGS metrics into a statistical discrimination platform to identify functional linkages between shifts in epigenetic DNA methylation and disease phenotypes in the samples being analyzed. In this pilot proof-of-concept study we quantify and compare DNA methylation in blood serum of individuals with Parkinson's Disease relative to matched healthy blood profiles. Even with a small study of only six samples, a high degree of statistical discrimination was achieved based on CpG methylation profiles between groups, with 1,008 statistically different CpG sites (p < 0.0025 after false discovery rate correction). A methylation load calculation was used to assess higher-order impacts of methylation shifts on genes and pathways, and most notably identified FGF3, FGF8, HTT, KMTA5, MIR8073, and YWHAG as differentially methylated genes with high relevance to Parkinson's Disease and neurodegeneration (based on PubMed literature citations). Of these, KMTA5 is a histone methyl-transferase gene and HTT is the Huntington Disease protein, Huntingtin, for which there are well-established neurodegenerative impacts. The future need for precision diagnostics now requires more tools for exploring epigenetic processes that may be linked to cellular dysfunction and subsequent disease progression.
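
    The "methylation load" calculation mentioned above can be read, under one plausible interpretation, as the fraction of a gene's assayed CpG sites called significantly differentially methylated; the sketch below makes that assumption explicit, and the gene names and p-values are illustrative only.

        # Assumed definition: per-gene fraction of CpG sites below the significance cutoff.
        def methylation_load(site_pvalues, alpha=0.0025):
            return sum(p < alpha for p in site_pvalues) / len(site_pvalues)

        gene_sites = {"FGF3": [0.001, 0.04, 0.0009], "YWHAG": [0.2, 0.01, 0.5]}
        for gene, pvals in gene_sites.items():
            print(gene, round(methylation_load(pvals), 2))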

  17. Determining performance characteristics of an NGS-based HLA typing method for clinical applications.

    Science.gov (United States)

    Duke, J L; Lind, C; Mackiewicz, K; Ferriola, D; Papazoglou, A; Gasiewski, A; Heron, S; Huynh, A; McLaughlin, L; Rogers, M; Slavich, L; Walker, R; Monos, D S

    2016-03-01

    This study presents performance specifications of an in-house developed human leukocyte antigen (HLA) typing assay using next-generation sequencing (NGS) on the Illumina MiSeq platform. A total of 253 samples, previously characterized for HLA-A, -B, -C, -DRB1 and -DQB1, were included in this study; they were typed at high resolution using a combination of Sanger sequencing, sequence-specific primer (SSP) and sequence-specific oligonucleotide probe (SSOP) technologies and recorded at the two-field level. Samples were selected with alleles that cover a high percentage of HLA specificities in each of five different race/ethnic groups: European, African-American, Asian Pacific Islander, Hispanic and Native American. Sequencing data were analyzed by two software programs, Omixon's Target and GenDx's NGSengine. A number of metrics, including allele balance, sensitivity, specificity, precision, accuracy and remaining ambiguity, were assessed. Data analyzed by the two software systems are shown independently. The majority of alleles were identical in the exonic sequences (third field) with both programs for HLA-A, -B, -C and -DQB1 in 97.7% of allele determinations. Among the remaining discrepant genotype calls, at least one of the analysis programs agreed with the reference typing. Upon additional manual analysis, 100% of the 2530 alleles were concordant with the reference HLA genotypes; the remaining ambiguities did not exceed 0.8%. The results demonstrate the feasibility and significant benefit of HLA typing by NGS, as this technology is highly accurate, eliminates virtually all ambiguities, provides complete sequencing information for the length of the HLA gene, and forms the basis for utilizing a single methodology for HLA typing in immunogenetics laboratories.
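
    The headline number in validations like this one is the concordance rate between NGS calls and reference typings. A minimal sketch with hypothetical allele calls:

        # Fraction of allele calls matching the reference typing.
        def concordance(calls, reference):
            assert len(calls) == len(reference)
            return sum(c == r for c, r in zip(calls, reference)) / len(calls)

        ngs_calls = ["A*02:01", "A*01:01", "B*07:02", "B*08:01"]  # hypothetical
        reference = ["A*02:01", "A*01:01", "B*07:02", "B*44:02"]  # hypothetical
        print(f"{concordance(ngs_calls, reference):.1%}")  # 75.0%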

  18. RAP: RNA-Seq Analysis Pipeline, a new cloud-based NGS web application.

    Science.gov (United States)

    D'Antonio, Mattia; D'Onorio De Meo, Paolo; Pallocca, Matteo; Picardi, Ernesto; D'Erchia, Anna Maria; Calogero, Raffaele A; Castrignanò, Tiziana; Pesole, Graziano

    2015-01-01

    The study of RNA has been dramatically improved by the introduction of Next Generation Sequencing platforms allowing massive and cheap sequencing of selected RNA fractions, also providing information on strand orientation (RNA-Seq). The complexity of transcriptomes and of their regulative pathways makes RNA-Seq one of the most complex fields of NGS applications, addressing several aspects of the expression process (e.g. identification and quantification of expressed genes and transcripts, alternative splicing and polyadenylation, fusion genes and trans-splicing, post-transcriptional events, etc.). In order to provide researchers with an effective and friendly resource for analyzing RNA-Seq data, we present here RAP (RNA-Seq Analysis Pipeline), a cloud computing web application implementing a complete but modular analysis workflow. This pipeline integrates both state-of-the-art bioinformatics tools for RNA-Seq analysis and in-house developed scripts to offer the user a comprehensive strategy for data analysis. RAP is able to perform quality checks (adopting FastQC and NGS QC Toolkit), identify and quantify expressed genes and transcripts (with Tophat, Cufflinks and HTSeq), detect alternative splicing events (using SpliceTrap) and chimeric transcripts (with ChimeraScan). This pipeline is also able to identify splicing junctions and constitutive or alternative polyadenylation sites (implementing custom analysis modules) and to call statistically significant differences in gene and transcript expression, splicing pattern and polyadenylation site usage (using Cuffdiff2 and DESeq). Through a user-friendly web interface, the RAP workflow can be suitably customized by the user, and it is automatically executed on our cloud computing environment. This strategy allows access to bioinformatics tools and computational resources without specific bioinformatics and IT skills. RAP provides a set of tabular and graphical results that can be helpful to browse, filter and export

  19. DEVELOPMENT OF ANALYTICAL METHODS FOR DETERMINING SUPPRESSOR CONCENTRATION IN THE MCU NEXT GENERATION SOLVENT (NGS)

    Energy Technology Data Exchange (ETDEWEB)

    Taylor-Pashow, K.; Fondeur, F.; White, T.; Diprete, D.; Milliken, C.

    2013-07-31

    Savannah River National Laboratory (SRNL) was tasked with identifying and developing at least one, but preferably two, methods for quantifying the suppressor in the Next Generation Solvent (NGS) system. The suppressor is a guanidine derivative, N,N',N"-tris(3,7-dimethyloctyl)guanidine (TiDG). A list of 10 possible methods was generated, and screening experiments were performed for 8 of the 10 methods. After completion of the screening experiments, the non-aqueous acid-base titration was determined to be the most promising and was selected for further development as the primary method. ¹H NMR also showed promising results in the screening experiments, and this method was selected for further development as the secondary method. Other methods, including ³⁶Cl radiocounting and ion chromatography, also showed promise; however, due to the similarity to the primary method (titration) and the inability to differentiate between TiDG and TOA (tri-n-octylamine) in the blended solvent, ¹H NMR was selected over these methods. Analysis of radioactive samples obtained from real waste ESS (extraction, scrub, strip) testing using the titration method showed good results. Based on these results, the titration method was selected as the method of choice for TiDG measurement. ¹H NMR has been selected as the secondary (back-up) method, and additional work is planned to further develop this method and to verify it using radioactive samples. Procedures for analyzing radioactive samples of both pure NGS and blended solvent were developed and issued for both methods.

  20. Benchmarking: Achieving the best in class

    Energy Technology Data Exchange (ETDEWEB)

    Kaemmerer, L

    1996-05-01

    People often find the process of organizational benchmarking an onerous task, or, because they do not fully understand the nature of the process, end up with results that are less than stellar. This paper presents the challenges of benchmarking and reasons why benchmarking can benefit an organization in today's economy.

  1. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks and benchmarking practices for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developing

  2. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  3. MutAid: Sanger and NGS Based Integrated Pipeline for Mutation Identification, Validation and Annotation in Human Molecular Genetics.

    Science.gov (United States)

    Pandey, Ram Vinay; Pabinger, Stephan; Kriegner, Albert; Weinhäusel, Andreas

    2016-01-01

    Traditional Sanger sequencing as well as Next-Generation Sequencing have been used for the identification of disease-causing mutations in human molecular research. The majority of currently available tools are developed for research and explorative purposes and often do not provide a complete, efficient, one-stop solution. As the focus of currently developed tools is mainly on NGS data analysis, no integrative solution for the analysis of Sanger data is provided, and consequently a one-stop solution to analyze reads from both sequencing platforms is not available. We have therefore developed a new pipeline called MutAid to analyze and interpret raw sequencing data produced by Sanger or several NGS sequencing platforms. It performs format conversion, base calling, quality trimming, filtering, read mapping, variant calling, variant annotation and analysis of Sanger and NGS data under a single platform. It is capable of analyzing reads from multiple patients in a single run to create a list of potential disease-causing base substitutions as well as insertions and deletions. MutAid has been developed for expert and non-expert users and supports four sequencing platforms including Sanger, Illumina, 454 and Ion Torrent. Furthermore, for NGS data analysis, five read mappers including BWA, TMAP, Bowtie, Bowtie2 and GSNAP, and four variant callers including GATK-HaplotypeCaller, SAMTOOLS, Freebayes and VarScan2, are supported. MutAid is freely available at https://sourceforge.net/projects/mutaid.

  4. NGS-QC Generator: A Quality Control System for ChIP-Seq and Related Deep Sequencing-Generated Datasets.

    Science.gov (United States)

    Mendoza-Parra, Marco Antonio; Saleem, Mohamed-Ashick M; Blum, Matthias; Cholley, Pierre-Etienne; Gronemeyer, Hinrich

    2016-01-01

    The combination of massive parallel sequencing with a variety of modern DNA/RNA enrichment technologies provides means for interrogating functional protein-genome interactions (ChIP-seq), genome-wide transcriptional activity (RNA-seq; GRO-seq), chromatin accessibility (DNase-seq, FAIRE-seq, MNase-seq), and more recently the three-dimensional organization of chromatin (Hi-C, ChIA-PET). In systems biology-based approaches, several of these readouts are generally combined with the aim of describing living systems through a reconstitution of genome-regulatory functions. However, an issue that is often underestimated is that conclusions drawn from such multidimensional analyses of NGS-derived datasets critically depend on the quality of the compared datasets. To address this problem, we have developed the NGS-QC Generator, a quality control system that infers quality descriptors for any kind of ChIP-sequencing and related datasets. In this chapter we provide a detailed protocol for (1) assessing quality descriptors with the NGS-QC Generator; (2) interpreting the generated reports; and (3) exploring the database of QC indicators (www.ngs-qc.org) for >21,000 publicly available datasets.

  5. ngs (notochord granular surface) gene encodes a novel type of intermediate filament family protein essential for notochord maintenance in zebrafish.

    Science.gov (United States)

    Tong, Xiangjun; Xia, Zhidan; Zu, Yao; Telfer, Helena; Hu, Jing; Yu, Jingyi; Liu, Huan; Zhang, Quan; Sodmergen; Lin, Shuo; Zhang, Bo

    2013-01-25

    The notochord is an important organ involved in embryonic patterning and locomotion. In zebrafish, the mature notochord consists of a single stack of fully differentiated, large vacuolated cells called chordocytes, surrounded by a single layer of less differentiated notochordal epithelial cells called chordoblasts. Through genetic analysis of zebrafish lines carrying pseudo-typed retroviral insertions, a mutant exhibiting a defective notochord with a granular appearance was isolated, and the corresponding gene was identified as ngs (notochord granular surface), which was specifically expressed in the notochord. In the mutants, the notochord started to degenerate from 32 hours post-fertilization, and the chordocytes were then gradually replaced by smaller cells derived from chordoblasts. The granular notochord phenotype was alleviated by anesthetizing the mutant embryos with tricaine to prevent muscle contraction and locomotion. Phylogenetic analysis showed that ngs encodes a new type of intermediate filament (IF) family protein, which we named chordostatin based on its function. Under transmission electron microscopy, bundles of 10-nm-thick IF-like filaments were enriched in the chordocytes of wild-type zebrafish embryos, whereas the chordocytes of ngs mutants lacked IF-like structures. Furthermore, a chordostatin-enhanced GFP (EGFP) fusion protein assembled into a filamentous network specifically in chordocytes. Taken together, our work demonstrates that ngs encodes a novel type of IF protein and functions to maintain notochord integrity for larval development and locomotion. Our work sheds light on the mechanisms of notochord structural maintenance, as well as the evolution and biological function of IF family proteins.

  6. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    Full Text Available The paper analyses the forwarding performance of an IPsec gateway over a range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway's performance peak and in the state of gateway overload. It explains the possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters: the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput may be the most universal parameter for benchmarking security gateways, as the others may depend on the duration of test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of equilibrium throughput.
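
    The binary-search half of the proposed hybrid algorithm can be sketched as follows; measure_forwarding_rate is a hypothetical probe of the device under test, and this sketch omits the initial stepped sweep the paper combines it with.

        # Narrow in on the largest offered load (bit/s) the gateway forwards without loss.
        def equilibrium_throughput(measure_forwarding_rate, low=0.0, high=10e9, tol=1e6):
            while high - low > tol:
                offered = (low + high) / 2
                if measure_forwarding_rate(offered) >= offered:
                    low = offered       # no packet loss: try a higher load
                else:
                    high = offered      # overload: back off
            return low

        toy_gateway = lambda load: min(load, 4e9)  # toy device saturating at 4 Gbit/s
        print(equilibrium_throughput(toy_gateway))  # converges to ~4e9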

  7. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  8. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection.
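
    As one concrete example of the kind of setup benchmarked here, a greedy forward-selection loop around a random forest can be written as below; the scoring, fold count and stopping rule are assumptions for illustration, not the paper's exact protocol.

        # Greedy forward selection: repeatedly add the variable that most improves
        # cross-validated random forest performance.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        def forward_select(X, y, max_vars=3):
            selected, remaining = [], list(range(X.shape[1]))
            while remaining and len(selected) < max_vars:
                scores = {j: cross_val_score(
                              RandomForestRegressor(n_estimators=50, random_state=0),
                              X[:, selected + [j]], y, cv=5).mean()
                          for j in remaining}
                best = max(scores, key=scores.get)
                selected.append(best)
                remaining.remove(best)
            return selected

        rng = np.random.RandomState(0)
        X = rng.rand(60, 8)                              # synthetic descriptors
        y = X[:, 2] + 0.1 * X[:, 5] + 0.01 * rng.randn(60)
        print(forward_select(X, y, max_vars=2))          # expected to pick column 2 first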

  9. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
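
    The method the report describes, turning a chain's own utility data into comparison metrics, reduces to normalizing each store's consumption and flagging outliers. A hedged sketch with made-up store data and an assumed kWh-per-square-foot metric:

        # Flag stores whose annual energy intensity exceeds the chain's typical value.
        def flag_outliers(stores, tolerance=1.25):
            intensity = {s: kwh / sqft for s, (kwh, sqft) in stores.items()}
            typical = sorted(intensity.values())[len(intensity) // 2]  # middle value
            return {s: round(i, 1) for s, i in intensity.items()
                    if i > tolerance * typical}

        stores = {"Store A": (410_000, 3_500),   # (annual kWh, square feet), made up
                  "Store B": (780_000, 3_600),
                  "Store C": (395_000, 3_400)}
        print(flag_outliers(stores))  # {'Store B': 216.7}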

  10. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal of this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; and help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results, combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL), may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  11. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and the NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those of the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  12. Genotype-phenotype correlation in a cohort of Portuguese patients comprising the entire spectrum of VWD types: impact of NGS.

    Science.gov (United States)

    Fidalgo, Teresa; Salvado, Ramon; Corrales, Irene; Pinto, Silva Catarina; Borràs, Nina; Oliveira, Ana; Martinho, Patricia; Ferreira, Gisela; Almeida, Helena; Oliveira, Cristina; Marques, Dalila; Gonçalves, Elsa; Diniz, MJoão; Antunes, Margarida; Tavares, Alice; Caetano, Gonçalo; Kjöllerström, Paula; Maia, Raquel; Sevivas, Teresa S; Vidal, Francisco; Ribeiro, Leticia

    2016-07-01

    The diagnosis of von Willebrand disease (VWD), the most common inherited bleeding disorder, is characterised by a variable bleeding tendency and a heterogeneous laboratory phenotype. Sequencing of the entire VWF coding region has not yet become routine practice in diagnostic laboratories owing to its high cost. Nevertheless, next-generation sequencing (NGS) has emerged as an alternative to overcome this limitation. We aimed to determine the correlation of genotype and phenotype in 92 Portuguese individuals from 60 unrelated families with VWD; therefore, we directly sequenced VWF. We compared the classical Sanger sequencing approach and NGS to assess the value-added effect on the analysis of the mutation distribution in different types of VWD. Sixty-two different VWF mutations were identified, 27 of which had not been previously described. NGS detected 26 additional mutations, contributing to a broad overview of the mutant alleles present in each VWD type. Twenty-nine probands (48.3%) had two or more mutations; in addition, mutations with pleiotropic effects were detected, and NGS allowed an appropriate classification for seven of them. Furthermore, the differential diagnosis between VWD 2B and platelet-type VWD (n = 1), Bernard-Soulier syndrome and VWD 2B (n = 1), and mild haemophilia A and VWD 2N (n = 2) was possible. NGS provided an efficient laboratory workflow for analysing VWF. These findings in our cohort of Portuguese patients support the proposal that improved VWD diagnosis strategies will enhance clinical and laboratory approaches, allowing the most appropriate treatment to be established for each patient.

  13. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  14. HS06 Benchmark for an ARM Server

    CERN Document Server

    Kluth, Stefan

    2013-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  15. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  16. Methods of practice and guidelines for using survey-grade global navigation satellite systems (GNSS) to establish vertical datum in the United States Geological Survey

    Science.gov (United States)

    Rydlund, Paul H.; Densmore, Brenda K.

    2012-01-01

    Geodetic surveys have evolved through the years to the use of survey-grade (centimeter level) global positioning to perpetuate and post-process vertical datum. The U.S. Geological Survey (USGS) uses Global Navigation Satellite Systems (GNSS) technology to monitor natural hazards, ensure geospatial control for climate and land use change, and gather data necessary for investigative studies related to water, the environment, energy, and ecosystems. Vertical datum is fundamental to a variety of these integrated earth sciences. Essentially, GNSS surveys provide a three-dimensional position x, y, and z as a function of the North American Datum of 1983 ellipsoid and the most current hybrid geoid model. A GNSS survey may be approached with post-processed positioning for static observations related to a single point or network, or involve real-time corrections to provide positioning "on-the-fly." Field equipment required to facilitate GNSS surveys ranges from a single receiver, with a power source for static positioning, to an additional receiver or network communicating by radio or cellular for real-time positioning. A real-time approach in its most common form may be described as a roving receiver augmented by a single-base station receiver, known as a single-base real-time (RT) survey. More efficient real-time methods involving a Real-Time Network (RTN) permit the use of only one roving receiver that is augmented by a network of fixed receivers commonly known as Continuously Operating Reference Stations (CORS). A post-processed approach in its most common form involves static data collection at a single point. Data are most commonly post-processed through a universally accepted utility maintained by the National Geodetic Survey (NGS), known as the Online Positioning User Service (OPUS). More complex post-processed methods involve static observations among a network of additional receivers collecting static data at known benchmarks. Both classifications provide users...
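    The height conversion at the heart of establishing vertical datum from GNSS is simple to state. As a minimal sketch (the function name and values below are ours for illustration, not from the report), the orthometric height H follows from the NAD 83 ellipsoid height h and the geoid undulation N of the hybrid geoid model:

```python
# Hypothetical illustration: converting a GNSS ellipsoid height to an
# orthometric height using a geoid model, H = h - N. Values are invented.

def orthometric_height(h_m: float, n_m: float) -> float:
    """Orthometric height H = ellipsoid height h minus geoid undulation N."""
    return h_m - n_m

h = 256.432   # ellipsoid height (m) from a static GNSS solution (NAD 83)
N = -29.847   # geoid undulation (m) from a hybrid geoid model at that point
print(f"H = {orthometric_height(h, N):.3f} m")  # H = 286.279 m
```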

  17. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map(DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  18. An Infrared Census of DUST in Nearby Galaxies with Spitzer (DUSTiNGS), II. Discovery of Metal-poor Dusty AGB Stars

    CERN Document Server

    Boyer, Martha L; Barmby, P; Bonanos, A Z; Gehrz, R D; Gordon, K D; Groenewegen, M A T; Lagadec, E; Lennon, D; Marengo, M; McDonald, I; Meixner, M; Skillman, E; Sloan, G C; Sonneborn, G; van Loon, J Th; Zijlstra, A

    2014-01-01

    The DUSTiNGS survey (DUST in Nearby Galaxies with Spitzer) is a 3.6 and 4.5 micron imaging survey of 50 nearby dwarf galaxies designed to identify dust-producing Asymptotic Giant Branch (AGB) stars and massive stars. Using two epochs, spaced approximately six months apart, we identify a total of 526 dusty variable AGB stars (sometimes called "extreme" or x-AGB stars; [3.6]-[4.5]>0.1 mag). Of these, 111 are in galaxies with [Fe/H] < -1.5 and 12 are in galaxies with [Fe/H] < -2.0, making them the most metal-poor dust-producing AGB stars known. We compare these identifications to those in the literature and find that most are newly discovered large-amplitude variables, with the exception of approximately 30 stars in NGC 185 and NGC 147, one star in IC 1613, and one star in Phoenix. The chemical abundances of the x-AGB variables are unknown, but the low metallicities suggest that they are more likely to be carbon-rich than oxygen-rich, and comparison with existing optical and near-IR photometry confirms tha...

  19. Benchmarking management practices in Australian public healthcare.

    Science.gov (United States)

    Agarwal, Renu; Green, Roy; Agarwal, Neeru; Randhawa, Krithika

    2016-01-01

    The purpose of this paper is to investigate the quality of management practices of public hospitals in the Australian healthcare system, specifically those in the state-managed health systems of Queensland and New South Wales (NSW). Further, the authors assess the management practices of Queensland and NSW public hospitals jointly and globally benchmark against those in the health systems of seven other countries, namely, USA, UK, Sweden, France, Germany, Italy and Canada. In this study, the authors adapt the unique and globally deployed Bloom et al. (2009) survey instrument that uses a "double blind, double scored" methodology and an interview-based scoring grid to measure and internationally benchmark the management practices in Queensland and NSW public hospitals based on 21 management dimensions across four broad areas of management - operations, performance monitoring, targets and people management. The findings reveal the areas of strength and potential areas of improvement in the Queensland and NSW Health hospital management practices when compared with public hospitals in seven countries, namely, USA, UK, Sweden, France, Germany, Italy and Canada. Together, Queensland and NSW Health hospitals perform best in operations management followed by performance monitoring. While target management presents scope for improvement, people management is the sphere where these Australian hospitals lag the most. This paper is of interest to both hospital administrators and health care policy-makers aiming to lift management quality at the hospital level as well as at the institutional level, as a vehicle to consistently deliver sustainable high-quality health services. This study provides the first internationally comparable robust measure of management capability in Australian public hospitals, where hospitals are run independently by the state-run healthcare systems. Additionally, this research study contributes to the empirical evidence base on the quality of

  20. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on existing prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictio...
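    Since the abstract notes that each kernel is well defined mathematically, the PageRank kernel itself is easy to sketch. Below is a minimal dense-matrix version (our illustration only; the benchmark prescribes no particular implementation), showing the linear-algebraic power iteration that makes PageRank a natural fit for GraphBLAS-style formulations:

```python
# Illustrative sketch (ours, not the benchmark's reference code): PageRank
# as a power iteration x <- alpha * P x + (1 - alpha)/n on a column-
# stochastic transition matrix P built from an adjacency matrix.
import numpy as np

def pagerank(adj: np.ndarray, alpha: float = 0.85, tol: float = 1e-10) -> np.ndarray:
    """Power iteration on a dense adjacency matrix (rows = source nodes)."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Column-stochastic transition matrix; dangling nodes spread uniformly.
    P = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg, 1)[:, None],
                 1.0 / n).T
    x = np.full(n, 1.0 / n)
    while True:
        x_new = alpha * P @ x + (1 - alpha) / n
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new

# Tiny 4-node example graph (invented).
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(pagerank(A).round(4))
```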

  1. Gaia FGK Benchmark Stars and their reference parameters

    CERN Document Server

    Jofre, Paula; Blanco-Cuaresma, Sergi; Soubiran, Caroline

    2013-01-01

    In this article we summarise on-going work on the so-called Gaia FGK Benchmark Stars. This work consists of the determination of their atmospheric parameters and of the construction of a high-resolution spectral library. The definition of such a set of reference stars has become crucial in the current era of large spectroscopic surveys. Only with homogeneous and well documented stellar parameters can one exploit these surveys consistently and understand the structure and history of the Milky Way, and therefore of other galaxies in the Universe.

  2. e-DNA meta-barcoding: from NGS raw data to taxonomic profiling.

    Science.gov (United States)

    Bruno, Fosso; Marinella, Marzano; Santamaria, Monica

    2015-01-01

    In recent years, thanks to the essential support provided by Next-Generation Sequencing (NGS) technologies, Metagenomics is enabling direct access to the taxonomic and functional composition of mixed microbial communities living in any environmental niche, without the prerequisite to isolate or culture the single organisms. This approach has already been successfully applied to the analysis of many habitats, such as natural water or soil environments, including those characterized by extreme physical and chemical conditions, food supply chains, and animal organisms, including humans. A shotgun sequencing approach can be used to investigate both organism and gene diversity. However, if the purpose is limited to exploring taxonomic complexity, an amplicon-based approach, based on PCR-targeted sequencing of selected genetic species markers, commonly named "meta-barcodes", is desirable. Among the genomic regions most widely used for the discrimination of bacterial organisms, in some cases up to the species level, some hypervariable domains of the gene coding for the 16S rRNA occupy a prominent place. The amplification of a given meta-barcode from a microbial community through the use of PCR primers able to work across the entire considered taxonomic group is the first task after the extraction of the total DNA. Generally, this step is followed by high-throughput sequencing of the resulting amplicon libraries by means of a selected NGS platform. Finally, the interpretation of the huge amount of produced data requires appropriate bioinformatics tools and know-how in addition to efficient computational resources. Here a computational methodology suitable for the taxonomic characterization of 454 meta-barcode sequences is described in detail. In particular, a dataset covering the V1-V3 region belonging to the bacterial 16S rRNA coding gene and produced in the Human Microbiome Project (HMP) from a palatine tonsil sample is analyzed. The proposed exercise includes the...
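    To make the amplicon workflow concrete, here is a toy sketch (ours, not from the described methodology; the file name and format are assumptions) of one early computational step: dereplicating identical 16S amplicon reads before taxonomic assignment, so that the downstream classifier handles each unique sequence only once:

```python
# Minimal, hypothetical sketch of amplicon dereplication: collapse
# identical 16S reads into (sequence, count) pairs. File name is invented.
from collections import Counter

def read_fasta(path):
    """Yield sequences from a FASTA file (headers discarded)."""
    seq = []
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):
                if seq:
                    yield "".join(seq)
                    seq = []
            else:
                seq.append(line.strip().upper())
    if seq:
        yield "".join(seq)

# Dereplicate and show the five most abundant amplicons.
counts = Counter(read_fasta("v1v3_amplicons.fasta"))
for seq, n in counts.most_common(5):
    print(f"{n:6d}  {seq[:60]}...")
```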

  3. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we saw potential solutions to some of our "top 10" issues, and (2) we gained an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes, and (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures, and (2) they welcomed the opportunity to provide feedback on working with NASA.

  4. Benchmarking of human resources management

    OpenAIRE

    David M. Akinnusi

    2008-01-01

    This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HR...

  5. NGS-200型GPS测量系统在建立三维控制网中的应用%Application of NGS-200 GPS in Establishment of Three-Dimensional Control Network

    Institute of Scientific and Technical Information of China (English)

    赵俊兰; 冯建秋

    2001-01-01

    This record introduces the composition and working principles of the NGS-200 GPS surveying system and describes how the system was used to establish the first-order horizontal control network for the survey area in Badachu Park, Beijing, with analyses of the network layout scheme, data processing, adjustment computation, and the reliability of the survey results. Practice showed that the NGS-200 GPS remained stable even under the complex mountainous terrain and dense tree cover of Badachu Park, and the survey results met the relevant national specifications. Replacing conventional surveying techniques with this system is economical, reasonable, and reliable.

  6. [Benchmarking in health care: conclusions and recommendations].

    Science.gov (United States)

    Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    The German Health Ministry funded 10 demonstration projects and accompanying research on benchmarking in health care. The accompanying research work aimed to infer generalisable findings and recommendations. We performed a meta-evaluation of the demonstration projects and analysed national and international approaches to benchmarking in health care. It was found that the typical benchmarking sequence is hardly ever realised. Most projects lack a detailed analysis of structures and processes of the best performers as a starting point for the process of learning from and adopting best practice. To tap the full potential of benchmarking in health care, participation in voluntary benchmarking projects should be promoted that have been demonstrated to follow all the typical steps of a benchmarking process.

  7. NGS-based identification of druggable alterations and signaling pathways – hepatocellular carcinoma case report

    Directory of Open Access Journals (Sweden)

    Kotelnikova E. A.

    2015-12-01

    Aim. To identify potential cancer-driving or clinically relevant molecular events for a patient with hepatocellular carcinoma. Methods. In order to achieve this goal, we performed RNA-seq and exome sequencing for the tumor tissue and its matched control. We annotated the alterations found using several publicly available databases and bioinformatics tools. Results. We identified several differentially expressed genes linked to the classical sorafenib treatment as well as additional pathways potentially druggable by therapies studied in clinical trials (Erlotinib, Lapatinib and Temsirolimus). Several germline mutations, found in XRCC1, TP53 and DPYD, according to the data from other clinical trials, could be related to increased sensitivity to platinum therapies and reduced sensitivity to 5-Fluorouracil. We also identified several potentially driving mutations that cannot currently be linked to therapies, such as a deletion in CIRBP and SNVs in BTG1, ERBB3, TCF7L2 and others. Conclusions. The presented study shows the potential usefulness of an integrated approach to NGS data analysis, including the analysis of germline mutations and the transcriptome in addition to cancer panel or exome sequencing data.

  8. Geodetic Control Points, Hutchinson, KS Benchmarks created by city surveyor at that time, Published in 1980, City of Hutchinson.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Geodetic Control Points dataset, was produced all or in part from Field Survey/GPS information as of 1980. It is described as 'Hutchinson, KS Benchmarks created...

  9. Gaia FGK Benchmark Stars: New Candidates At Low-Metallicities

    CERN Document Server

    Hawkins, Keith; Heiter, Ulrike; Soubiran, Caroline; Blanco-Cuaresma, Sergi; Casagrande, Luca; Gilmore, Gerry; Lind, Karin; Magrini, Laura; Masseron, Thomas; Pancino, Elena; Randich, Sofia; Worley, Clare C

    2016-01-01

    We have entered an era of large spectroscopic surveys in which we can measure, through automated pipelines, the atmospheric parameters and chemical abundances for large numbers of stars. Calibrating these survey pipelines using a set of "benchmark stars" in order to evaluate the accuracy and precision of the provided parameters and abundances is of utmost importance. The recently proposed set of Gaia FGK benchmark stars of Heiter et al. (2015) has no recommended stars within the critical metallicity range of $-2.0 <$ [Fe/H] $< -1.0$ dex. In this paper, we aim to add candidate Gaia benchmark stars inside of this metal-poor gap. We began with a sample of 21 metal-poor stars which was reduced to 10 stars by requiring accurate photometry and parallaxes, and high-resolution archival spectra. The procedure used to determine the stellar parameters was similar to Heiter et al. (2015) and Jofre et al. (2014) for consistency. The effective temperature ($T_{\mathrm{eff}}$) of all candidate stars was determined using...

  10. An Effective Approach for Benchmarking Implementation

    OpenAIRE

    B. M. Deros; Tan, J.; M.N.A. Rahman; N. A.Q.M. Daud

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers and advantages from implementation, and the study of benchmarking frameworks. Approach: Thirty res...

  11. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101: NIST Computational Chemistry Comparison and Benchmark Database (Web, free access). The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  12. Benchmarking in external financial reporting and auditing

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    ... on an ongoing basis in a benchmarking process. This chapter broadly examines where the concept of benchmarking can, with some justification, be linked to external financial reporting and auditing. Section 7.1 deals with the external annual accounts, while Section 7.2 takes up the field of auditing. The final section of the chapter summarises the considerations on benchmarking in connection with both areas.

  13. Processing NGS data to search for new mutations associated with hypertrophic cardiomyopathy

    OpenAIRE

    2012-01-01

    Master's thesis in Biostatistics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2012. Next-Generation Sequencing (NGS) is revolutionising biomedical research and contributing significantly to the advance of personalised medicine. NGS, which draws on Bayesian statistics and bioinformatics for the analysis and processing of the data the technique generates, is an excellent example of the new...

  14. Developing Benchmarks for Solar Radio Bursts

    Science.gov (United States)

    Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Domm, P.; Love, J. J.; Pierson, J.

    2016-12-01

    Solar radio bursts can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that are reliant on Position, Navigation, and Timing services on timescales ranging from minutes to hours across wide areas on the dayside of Earth. The White House's Space Weather Action Plan has asked for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years and also a theoretical maximum intensity benchmark. The solar radio benchmark team was also asked to define the wavelength/frequency bands of interest. The benchmark team developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and microwave (4000-20000 MHz) bands. The preliminary benchmarks were derived based on previously published work. Limitations in the published work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima requires additional work, where doing so is even possible, in order to meet the Action Plan objectives. In this presentation, we will present the phase 1 benchmarks and the basis used to derive them. We will also present the work that needs to be done in order to complete the final, or phase 2, benchmarks.

  15. Benchmarking for controllers: methods, techniques and opportunities

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels; Dietrichson, Lars

    2008-01-01

    The article focuses sharply on the concept of benchmarking by presenting and discussing its different facets. Four different applications of benchmarking are described in order to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project before starting. The difference between results benchmarking and process benchmarking is treated, after which the use of internal and external benchmarking, respectively, is discussed. Finally, the use of benchmarking in budgeting and budget follow-up is introduced.

  16. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking of laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing.

  17. Improvement of PCR-free NGS Library Preparation to Obtain Uniform Read Coverage of Genome with Extremely High AT Content

    OpenAIRE

    Williams, A.; Storton, D.; Buckles, J.; Llinas, M.; Wang, Wei

    2012-01-01

    PCR amplification is commonly used in generating libraries for Next-Generation Sequencing (NGS) to efficiently enrich and amplify sequenceable DNA fragments. However, it introduces bias in the representation of the original complex template DNA. Such an artifact has devastating effects in sequencing genomes with highly unbalanced base composition: regions of extremely high or low GC content, which are a substantial fraction of such genomes, are often covered with zero or near-zero read depth. PC...

  18. Benchmarking Implementations of Functional Languages with ``Pseudoknot'', a Float-Intensive Benchmark

    NARCIS (Netherlands)

    Hartel, P.H.; Feeley, M.; Alt, M.; Augustsson, L.

    1996-01-01

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important...

  19. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  1. Benchmarking Implementations of Functional Languages with "Pseudoknot", a float-intensive benchmark

    NARCIS (Netherlands)

    Hartel, Pieter H.; Feeley, M.; Alt, M.; Augustsson, L.

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important...

  2. Gaia FGK Benchmark Stars: Effective temperatures and surface gravities

    CERN Document Server

    Heiter, U; Gustafsson, B; Korn, A J; Soubiran, C; Thévenin, F

    2015-01-01

    Large Galactic stellar surveys and new generations of stellar atmosphere models and spectral line formation computations need to be subjected to careful calibration and validation and to benchmark tests. We focus on cool stars and aim at establishing a sample of 34 Gaia FGK Benchmark Stars with a range of different metallicities. The goal was to determine the effective temperature and the surface gravity independently from spectroscopy and atmospheric models as far as possible. Fundamental determinations of Teff and logg were obtained in a systematic way from a compilation of angular diameter measurements and bolometric fluxes, and from a homogeneous mass determination based on stellar evolution models. The derived parameters were compared to recent spectroscopic and photometric determinations and to gravity estimates based on seismic data. Most of the adopted diameter measurements have formal uncertainties around 1%, which translate into uncertainties in effective temperature of 0.5%. The measurements of bol...
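    The fundamental relations behind these determinations are standard and worth restating; in our notation (not quoted from the paper), the effective temperature follows from the limb-darkened angular diameter $\theta$ and the bolometric flux $f_{\mathrm{bol}}$, and the surface gravity from the mass $M$ and radius $R$:

```latex
% Standard fundamental relations (our notation, not the paper's):
% \sigma is the Stefan-Boltzmann constant, G the gravitational constant.
\begin{align}
  T_{\mathrm{eff}} &= \left(\frac{4 f_{\mathrm{bol}}}{\sigma\,\theta^{2}}\right)^{1/4},
  &
  g &= \frac{G M}{R^{2}} .
\end{align}
```

    Since $T_{\mathrm{eff}} \propto \theta^{-1/2}$, the quoted ~1% diameter uncertainties translate directly into the ~0.5% temperature uncertainties mentioned in the abstract.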

  3. An NGS-Independent Strategy for Proteome-Wide Identification of Single Amino Acid Polymorphisms by Mass Spectrometry.

    Science.gov (United States)

    Xiong, Yun; Guo, Yufeng; Xiao, Weidi; Cao, Qichen; Li, Shanshan; Qi, Xianni; Zhang, Zhidan; Wang, Qinhong; Shui, Wenqing

    2016-03-01

    Detection of proteins containing single amino acid polymorphisms (SAPs) encoded by nonsynonymous SNPs (nsSNPs) can aid researchers in studying the functional significance of protein variants. Most proteogenomic approaches for large-scale SAPs mapping require construction of a sample-specific database containing protein variants predicted from the next-generation sequencing (NGS) data. Searching shotgun proteomic data sets against these NGS-derived databases allowed for identification of SAP peptides, thus validating the proteome-level sequence variation. Contrary to the conventional approaches, our study presents a novel strategy for proteome-wide SAP detection without relying on sample-specific NGS data. By searching a deep-coverage proteomic data set from an industrial thermotolerant yeast strain using our strategy, we identified 337 putative SAPs compared to the reference genome. Among the SAP peptides identified with stringent criteria, 85.2% of SAP sites were validated using whole-genome sequencing data obtained for this organism, which indicates high accuracy of SAP identification with our strategy. More interestingly, for certain SAP peptides that cannot be predicted by genomic sequencing, we used synthetic peptide standards to verify expression of peptide variants in the proteome. Our study has provided a unique tool for proteogenomics to enable proteome-wide direct SAP identification and capture nongenetic protein variants not linked to nsSNPs.
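    For contrast with the conventional proteogenomic route the abstract describes (building a variant protein database from NGS data), here is a purely hypothetical sketch of how SAP peptide candidates can be enumerated from a reference protein. The digest rule is simplified and the sequence is invented:

```python
# Hypothetical sketch of the *conventional* variant-database step the
# abstract contrasts with: expand a reference protein into all
# single-substitution (SAP) peptide candidates. Sequence is invented.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def tryptic_peptides(protein: str):
    """Naive tryptic digest: cleave after K/R except before P."""
    pep, out = [], []
    for i, aa in enumerate(protein):
        pep.append(aa)
        nxt = protein[i + 1] if i + 1 < len(protein) else ""
        if aa in "KR" and nxt != "P":
            out.append("".join(pep))
            pep = []
    if pep:
        out.append("".join(pep))
    return out

def sap_variants(peptide: str):
    """All single amino acid substitutions of one peptide."""
    for i, ref in enumerate(peptide):
        for alt in AMINO_ACIDS:
            if alt != ref:
                yield peptide[:i] + alt + peptide[i + 1:]

for pep in tryptic_peptides("MKTAYIAKQR"):
    print(pep, "->", sum(1 for _ in sap_variants(pep)), "variants")
```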

  4. Intra-individual polymorphism in chloroplasts from NGS data: where does it come from and how to handle it?

    Science.gov (United States)

    Scarcelli, N; Mariac, C; Couvreur, T L P; Faye, A; Richard, D; Sabot, F; Berthouly-Salazar, C; Vigouroux, Y

    2016-03-01

    Next-generation sequencing allows access to a large quantity of genomic data. In plants, several studies have used whole chloroplast genome sequences for inferring phylogeography or phylogeny. Even though the chloroplast is a haploid organelle, NGS plastome data identify a nonnegligible number of intra-individual polymorphic SNPs. Such observations could have several causes, such as sequencing errors, the presence of heteroplasmy, or the transfer of chloroplast sequences into the nuclear and mitochondrial genomes. The occurrence of allelic diversity has important practical impacts on the identification of diversity and the analysis of chloroplast data and, beyond that, raises significant evolutionary questions. In this study, we show that the observed intra-individual polymorphism of chloroplast sequence data is probably the result of plastid DNA transferred into the mitochondrial and/or the nuclear genomes. We further assess the error rates of nine different bioinformatics pipelines for SNP and genotype calling, using SNPs identified by Sanger sequencing. Specific pipelines are adequate to deal with this issue, optimizing both specificity and sensitivity. Our results will allow proper use of whole-chloroplast NGS sequences and better handling of NGS chloroplast sequence diversity.
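    One simple way to flag such intra-individual polymorphism, shown here purely as an illustration (this is not one of the nine assessed pipelines; thresholds and counts are invented), is a minor-allele-fraction test on per-base allele counts:

```python
# Illustrative only: flag intra-individual polymorphic chloroplast sites
# from per-base allele counts using a minor-allele-fraction threshold.
def is_polymorphic(allele_counts: dict[str, int],
                   min_frac: float = 0.05, min_depth: int = 50) -> bool:
    """True if the second most frequent allele exceeds min_frac of reads."""
    depth = sum(allele_counts.values())
    if depth < min_depth:
        return False
    top_two = sorted(allele_counts.values(), reverse=True)[:2]
    return len(top_two) == 2 and top_two[1] / depth >= min_frac

# A site where 8% of reads carry an alternate base, e.g. from plastid DNA
# copies transferred to the nuclear or mitochondrial genome:
print(is_polymorphic({"A": 920, "G": 80}))  # True
```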

  5. COV2HTML: a visualization and analysis tool of bacterial next generation sequencing (NGS) data for postgenomics life scientists.

    Science.gov (United States)

    Monot, Marc; Orgeur, Mickael; Camiade, Emilie; Brehier, Clément; Dupuy, Bruno

    2014-03-01

    COV2HTML is an interactive web interface, addressed to biologists, that allows both coverage visualization and analysis of NGS alignments performed on prokaryotic organisms (bacteria and phages). It combines two processes: a tool that converts huge NGS mapping or coverage files into light, specific coverage files containing information on genetic elements; and a visualization interface allowing real-time analysis of data with optional integration of statistical results. To demonstrate the scope of COV2HTML, the program was tested with data from two published studies. The first data set was from an RNA-seq analysis of Campylobacter jejuni, based on a comparison of two conditions with two replicates. We were able to recover 26 out of 27 genes highlighted in the publication using COV2HTML. The second data set comprised stranded TSS and RNA-seq data on the archaeon Sulfolobus solfataricus. COV2HTML was able to highlight most of the TSSs from the article and allows biologists to visualize both TSS and RNA-seq data on the same screen. The strength of the COV2HTML interface is that it makes NGS data analysis possible without software installation, login, or a long training period. A web version is accessible at https://mmonot.eu/COV2HTML/ . This website is free and open to users without any login requirement.
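    The conversion step the abstract describes, collapsing a huge per-base coverage track into a light per-element summary, can be sketched in a few lines. This is a toy version of ours, with invented coordinates and a simplified input format, not COV2HTML's code:

```python
# Toy sketch: summarize a per-base depth vector into mean coverage per
# genetic element. Coordinates are 0-based half-open; data are invented.
def gene_coverage(depth: list[int], genes: dict[str, tuple[int, int]]):
    """Mean read depth per gene, from a per-base depth vector."""
    return {name: sum(depth[start:end]) / (end - start)
            for name, (start, end) in genes.items()}

depth = [12] * 500 + [240] * 300 + [15] * 200   # invented coverage track
genes = {"geneA": (0, 500), "geneB": (500, 800), "geneC": (800, 1000)}
print(gene_coverage(depth, genes))
# {'geneA': 12.0, 'geneB': 240.0, 'geneC': 15.0} -- geneB stands out
```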

  6. RNA-CODE: a noncoding RNA classification tool for short reads in NGS data lacking reference genomes.

    Directory of Open Access Journals (Sweden)

    Cheng Yuan

    The number of transcriptomic sequencing projects of various non-model organisms is still accumulating rapidly. As non-coding RNAs (ncRNAs) are highly abundant in living organisms and play important roles in many biological processes, identifying fragmentary members of ncRNAs in small RNA-seq data is an important step in post-NGS analysis. However, the state-of-the-art ncRNA search tools are not optimized for next-generation sequencing (NGS) data, especially for very short reads. In this work, we propose and implement a comprehensive ncRNA classification tool (RNA-CODE) for very short reads. RNA-CODE is specifically designed for ncRNA identification in NGS data that lack quality reference genomes. Given a set of short reads, our tool classifies the reads into different types of ncRNA families. The classification results can be used to quantify the expression levels of different types of ncRNAs in RNA-seq data and ncRNA composition profiles in metagenomic data, respectively. The experimental results of applying RNA-CODE to RNA-seq of Arabidopsis and a metagenomic data set sampled from human guts demonstrate that RNA-CODE competes favorably in both sensitivity and specificity with other tools. The source codes of RNA-CODE can be downloaded at http://www.cse.msu.edu/~chengy/RNA_CODE.

  7. A new method to prevent carry-over contaminations in two-step PCR NGS library preparations.

    Science.gov (United States)

    Seitz, Volkhard; Schaper, Sigrid; Dröge, Anja; Lenze, Dido; Hummel, Michael; Hennig, Steffen

    2015-11-16

    Two-step PCR procedures are an efficient and well-established way to generate amplicon libraries for NGS sequencing. However, there is a high risk of cross-contamination by carry-over of amplicons from first to second amplification rounds, potentially leading to severe misinterpretation of results. Here we describe a new method able to prevent and/or identify carry-over contaminations by introducing the K-box, a series of three synergistically acting short sequence elements. Our K-boxes are composed of (i) K1 sequences for suppression of contaminations, (ii) K2 sequences for detection of possible residual contaminations and (iii) S sequences acting as separators to avoid amplification bias. In order to demonstrate the effectiveness of our method we analyzed two-step PCR NGS libraries derived from a multiplex PCR system for detection of T-cell receptor beta gene rearrangements. We used this system since it is of high clinical relevance and may be affected by very low amounts of contaminations. Spike-in contaminations are effectively blocked by the K-box even at high rates, as demonstrated by ultra-deep sequencing of the amplicons. Thus, we recommend implementation of the K-box in two-step PCR-based NGS systems for research and diagnostic applications demanding high sensitivity and accuracy.

  8. DUSTiNGS III: Distribution of Intermediate-Age and Old Stellar Populations in Disks and Outer Extremities of Dwarf Galaxies

    CERN Document Server

    McQuinn, Kristen B W; Mitchell, Mallory B; Skillman, Evan D; Gehrz, R D; Groenewegen, Martin A T; McDonald, Iain; Sloan, G C; van Loon, Jacco Th; Whitelock, Patricia A; Zijlstra, Albert A

    2016-01-01

    We have traced the spatial distributions of intermediate-age and old stars in nine dwarf galaxies in the distant parts of the Local Group, using multi-epoch 3.6 and 4.5 micron data from the DUST in Nearby Galaxies with Spitzer (DUSTiNGS) survey. Using complementary optical imaging from the Hubble Space Telescope, we identify the tip of the red giant branch (TRGB) in the 3.6 micron photometry, separating thermally-pulsating asymptotic giant branch (TP-AGB) stars from the larger red giant branch (RGB) populations. Unlike the constant TRGB in the I-band, at 3.6 micron the TRGB magnitude varies by ~0.7 mag, making it unreliable as a distance indicator. The intermediate-age and old stars are well mixed in two-thirds of the sample with no evidence of a gradient in the ratio of the intermediate-age to old stellar populations outside the central ~1-2'. Variable AGB stars are detected in the outer extremities of the galaxies, indicating that chemical enrichment from these dust-producing stars may occur in the outer re...

  9. Benchmarking: A tool to enhance performance

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.F. [Oak Ridge National Lab., TN (United States); Kristal, J. [USDOE Assistant Secretary for Environmental Management, Washington, DC (United States); Thompson, G.; Johnson, T. [Los Alamos National Lab., NM (United States)

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors in developing the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff, the ones closest to the work, must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic level of understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  10. Benchmarking ICRF simulations for ITER

    Energy Technology Data Exchange (ETDEWEB)

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase with bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  11. Benchmarking Asteroid-Deflection Experiment

    Science.gov (United States)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  12. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: improve the management and technical development of software-intensive systems; have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R&D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths...

  13. COG validation: SINBAD Benchmark Problems

    Energy Technology Data Exchange (ETDEWEB)

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka Nickel and Aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few % of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different, MCNP uses ENDF/B-VI.1 while COG uses ENDF/B-VI R7, (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  14. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.
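    Two of the exact limits alluded to here have compact, widely cited closed forms, reproduced below as a reminder (our transcription from the repeaterless-bound literature, with $p$ the erasure probability and $\eta$ the transmissivity of the single-mode lossy channel):

```latex
% Two-way assisted secret-key capacities of two benchmark channels:
\begin{align}
  K_{\mathrm{erasure}} &= 1 - p ,             % qubit erasure channel
  &
  K_{\mathrm{loss}}    &= -\log_{2}(1-\eta) . % lossy bosonic channel
\end{align}
```

    A quantum repeater node is only worthwhile if it delivers a rate above these point-to-point limits, which is precisely what makes them natural benchmarks.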

  15. AG-NGS: a powerful and user-friendly computing application for the semi-automated preparation of next-generation sequencing libraries using open liquid handling platforms.

    Science.gov (United States)

    Callejas, Sergio; Álvarez, Rebeca; Benguria, Alberto; Dopazo, Ana

    2014-01-01

    Next-generation sequencing (NGS) is becoming one of the most widely used technologies in the field of genomics. Library preparation is one of the most critical, hands-on, and time-consuming steps in the NGS workflow. Each library must be prepared in an independent well, increasing the number of hours required for a sequencing run and the risk of human-introduced error. Automation of library preparation is the best option to avoid these problems. With this in mind, we have developed automatic genomics NGS (AG-NGS), a computing application that allows an open liquid handling platform to be transformed into a library preparation station without losing the potential of an open platform. Implementation of AG-NGS does not require programming experience, and the application has also been designed to minimize implementation costs. Automated library preparation with AG-NGS generated high-quality libraries from different samples, demonstrating its efficiency, and all quality control parameters fell within the range of optimal values.

  16. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  17. 42 CFR 440.330 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    42 Public Health 4 (2010-10-01). Benchmark health benefits coverage, Section 440.330: SERVICES (CONTINUED), MEDICAL ASSISTANCE PROGRAMS, SERVICES: GENERAL PROVISIONS, Benchmark Benefit and Benchmark-Equivalent Coverage. § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  18. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers and advantages from implementation, and the study of benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprised industrial practitioners, who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating the Deming's PDCA and Six Sigma DMAIC theory. It provided a step-by-step method to simplify the implementation and to optimize the benchmarking results. A computerized mini program was suggested to assist the users in adopting the technique as part of an improvement project. As a result of the assessment test, the respondents found that the implementation method provided an idea for a company to initiate benchmarking implementation and guides it to achieve the desired goal as set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied in implementing benchmarking in a more systematic way and in ensuring its success.

  19. Synergetic effect of benchmarking competitive advantages

    Directory of Open Access Journals (Sweden)

    N.P. Tkachova

    2011-12-01

    The essence of synergistic competitive benchmarking is analyzed. A classification of the types of synergies is developed. The sources of synergies in benchmarking of competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  20. Synergetic effect of benchmarking competitive advantages

    OpenAIRE

    N.P. Tkachova; P.G. Pererva

    2011-01-01

    The essence of synergistic competitive benchmarking is analyzed. A classification of the types of synergies is developed. The sources of synergies in benchmarking of competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  1. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  2. Machines are benchmarked by code, not algorithms

    NARCIS (Netherlands)

    Poss, R.

    2013-01-01

    This article highlights how small modifications to either the source code of a benchmark program or the compilation options may impact its behavior on a specific machine. It argues that, for evaluating machines, benchmark providers and users should be careful to ensure reproducibility of results based on the...

  3. Benchmark analysis of railway networks and undertakings

    NARCIS (Netherlands)

    Hansen, I.A.; Wiggenraad, P.B.L.; Wolff, J.W.

    2013-01-01

    Benchmark analysis of railway networks and companies has been stimulated by the European policy of deregulation of transport markets, the opening of national railway networks and markets to new entrants and separation of infrastructure and train operation. Recent international railway benchmarking s

  4. Benchmark Assessment for Improved Learning. AACC Report

    Science.gov (United States)

    Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald

    2010-01-01

    This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias and accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…

  5. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price

  6. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price elasticit

  7. Benchmarking Learning and Teaching: Developing a Method

    Science.gov (United States)

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  8. MicNeSs: genotyping microsatellite loci from a collection of (NGS) reads.

    Science.gov (United States)

    Suez, Marie; Behdenna, Abdelkader; Brouillet, Sophie; Graça, Paula; Higuet, Dominique; Achaz, Guillaume

    2016-03-01

    Microsatellites are widely used in population genetics to uncover recent evolutionary events. They are typically genotyped using a capillary sequencer, whose capacity is usually limited to 9, at most 12, loci per run and whose analysis is a tedious task performed by hand. With the rise of next-generation sequencing (NGS), a much larger number of loci and individuals are available from sequencing: for example, on a single run of a GS Junior, 28 loci from 96 individuals are sequenced with a 30X coverage. We have developed an algorithm to automatically and efficiently genotype microsatellites from a collection of reads sorted by individual (e.g. specific PCR amplifications of a locus or a collection of reads that encompass a locus of interest). As the sequencing and the PCR amplification introduce artefactual insertions or deletions, the set of reads from a single microsatellite allele shows several length variants. The algorithm infers, without alignment, the true unknown allele(s) of each individual from the observed distributions of microsatellite lengths of all individuals. MicNeSs, a python implementation of the algorithm, can be used to genotype any microsatellite locus from any organism and has been tested on 454 pyrosequencing data of several loci from fruit flies (a model species) and red deer (a nonmodel species). Without any parallelization, it automatically genotypes 22 loci from 441 individuals in 11 hours on a standard computer. The comparison of MicNeSs inferences to the standard method shows an excellent agreement, with some differences illustrating the pros and cons of both methods.
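    A deliberately simplified stand-in for the idea (the real algorithm models PCR and sequencing indel noise jointly across all individuals; our thresholds and data are invented) calls up to two alleles from the read-length distribution of a single individual:

```python
# Simplified stand-in for the MicNeSs idea: call up to two alleles per
# individual from the distribution of amplicon read lengths, treating
# off-by-one peaks as PCR stutter. Thresholds and reads are invented.
from collections import Counter

def call_alleles(read_lengths: list[int], het_frac: float = 0.3):
    """Return one or two modal amplicon lengths as the genotype."""
    counts = Counter(read_lengths)
    (len1, n1), *rest = counts.most_common(2)
    if rest and rest[0][1] >= het_frac * n1 and abs(rest[0][0] - len1) > 1:
        return sorted((len1, rest[0][0]))   # heterozygote
    return [len1]                            # homozygote (stutter ignored)

# Invented reads: a 180/188 heterozygote with stutter artefacts at 179/187.
reads = [180] * 40 + [179] * 8 + [188] * 35 + [187] * 6
print(call_alleles(reads))  # [180, 188]
```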

  9. Next Generation Solvent (NGS): Development for Caustic-Side Solvent Extraction of Cesium

    Energy Technology Data Exchange (ETDEWEB)

    Moyer, Bruce A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Birdwell, Jr, Joseph F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bonnesen, Peter V. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bruffey, Stephanie H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Delmau, Laetitia Helene [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Duncan, Nathan C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ensor, Dale [Tennessee Technological Univ., Cookeville, TN (United States); Hill, Talon G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lee, Denise L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rajbanshi, Arbin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Roach, Benjamin D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Szczygiel, Patricia L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sloop, Jr., Frederick V. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Stoner, Erica L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Neil J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-03-01

    This report summarizes the FY 2010 and 2011 accomplishments at Oak Ridge National Laboratory (ORNL) in developing the Next Generation Caustic-Side Solvent Extraction (NG-CSSX) process, referred to commonly as the Next Generation Solvent (NGS), under funding from the U.S. Department of Energy, Office of Environmental Management (DOE-EM), Office of Technology Innovation and Development. The primary product of this effort is a process solvent and preliminary flowsheet capable of meeting a target decontamination factor (DF) of 40,000 for worst-case Savannah River Site (SRS) waste with a concentration factor of 15 or higher in the 18-stage equipment configuration of the SRS Modular Caustic-Side Solvent Extraction Unit (MCU). In addition, the NG-CSSX process may be readily adapted for use in the SRS Salt Waste Processing Facility (SWPF) or in supplemental tank-waste treatment at Hanford upon appropriate solvent or flowsheet modifications. Efforts in FY 2010 focused on developing a solvent composition and process flowsheet for MCU implementation. In FY 2011, accomplishments at ORNL involved a wide array of chemical-development activities and testing up through single-stage hydraulic and mass-transfer tests in 5-cm centrifugal contactors. Under subcontract from ORNL, Argonne National Laboratory (ANL) designed a preliminary flowsheet using ORNL cesium distribution data, and Tennessee Technological University confirmed a chemical model for cesium distribution ratios (DCs) as a function of feed composition. Interlaboratory efforts were coordinated with complementary engineering tests carried out (and reported separately) by personnel at Savannah River National Laboratory (SRNL) and Savannah River Remediation (SRR), with helpful advice from Parsons Engineering and General Atomics on aspects of possible SWPF implementation.

  10. Optimizing information in Next-Generation-Sequencing (NGS) reads for improving de novo genome assembly.

    Science.gov (United States)

    Liu, Tsunglin; Tsai, Cheng-Hung; Lee, Wen-Bin; Chiang, Jung-Hsien

    2013-01-01

    Next-generation sequencing is advantageous because of its much higher data throughput and much lower cost compared with the traditional Sanger method. However, NGS reads are shorter than Sanger reads, making de novo genome assembly very challenging. Because genome assembly is essential for all downstream biological studies, great efforts have been made to enhance the completeness of genome assembly, which requires the presence of long reads or long-distance information. To improve de novo genome assembly, we developed a computational program, ARF-PE, to increase the length of Illumina reads. ARF-PE takes as input Illumina paired-end (PE) reads and recovers the original DNA fragments from whose two ends the paired reads were obtained. On the PE data of four bacteria, ARF-PE recovered >87% of the DNA fragments and achieved >98% perfect DNA fragment recovery. Using Velvet, SOAPdenovo, Newbler, and CABOG, we evaluated the benefits of the recovered DNA fragments to genome assembly. For all four bacteria, the recovered DNA fragments increased the assembly contiguity. For example, the N50 lengths of the P. brasiliensis contigs assembled by SOAPdenovo and Newbler increased from 80,524 bp to 166,573 bp and from 80,655 bp to 193,388 bp, respectively. ARF-PE also increased assembly accuracy in many cases. On the PE data of two fungi and a human chromosome, ARF-PE doubled or tripled the N50 length; however, the assembly accuracies dropped, though they remained >91%. In general, ARF-PE can increase both assembly contiguity and accuracy for bacterial genomes. For complex eukaryotic genomes, ARF-PE is promising because it raises assembly contiguity, but future error correction is needed for ARF-PE to also increase assembly accuracy. ARF-PE is freely available at http://140.116.235.124/~tliu/arf-pe/.
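
    The abstract does not detail how ARF-PE reconstructs fragments, so the following Python sketch covers only the simplest special case of fragment recovery, assuming the two reads of a pair overlap: reverse-complement the reverse read, search for the best suffix/prefix overlap, and join. Function names and thresholds are assumptions, and base qualities and non-overlapping pairs (which ARF-PE also handles) are ignored.

        def revcomp(seq):
            comp = {"A": "T", "C": "G", "G": "C", "T": "A", "N": "N"}
            return "".join(comp[b] for b in reversed(seq))

        def merge_pair(r1, r2, min_overlap=10, max_mismatch_rate=0.1):
            """Rebuild the sequenced fragment from one paired-end read.

            r2 is reverse-complemented so both reads lie on the same strand,
            then the longest acceptable suffix(r1)/prefix(r2) overlap is used
            to join them. Returns the merged fragment, or None if no overlap
            of at least `min_overlap` bases passes the mismatch threshold.
            """
            r2 = revcomp(r2)
            for ov in range(min(len(r1), len(r2)), min_overlap - 1, -1):
                tail, head = r1[-ov:], r2[:ov]
                mismatches = sum(a != b for a, b in zip(tail, head))
                if mismatches <= max_mismatch_rate * ov:
                    return r1 + r2[ov:]
            return None

        fragment = "ACGTACGGTTCAGGCATTACCGGAT"
        read1, read2 = fragment[:18], revcomp(fragment[7:])  # simulate a pair
        print(merge_pair(read1, read2) == fragment)  # True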

  11. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  12. Comparison of targeted next-generation sequencing (NGS) and real-time PCR in the detection of EGFR, KRAS, and BRAF mutations on formalin-fixed, paraffin-embedded tumor material of non-small cell lung carcinoma-superiority of NGS.

    Science.gov (United States)

    Tuononen, Katja; Mäki-Nevala, Satu; Sarhadi, Virinder Kaur; Wirtanen, Aino; Rönty, Mikko; Salmenkivi, Kaisa; Andrews, Jenny M; Telaranta-Keerie, Aino I; Hannula, Sari; Lagström, Sonja; Ellonen, Pekka; Knuuttila, Aija; Knuutila, Sakari

    2013-05-01

    The development of tyrosine kinase inhibitor treatments has made it important to test cancer patients for clinically significant gene mutations that influence the benefit of treatment. Targeted next-generation sequencing (NGS) provides a promising method for diagnostic purposes by enabling the simultaneous detection of multiple mutations in various genes in a single test. The aim of our study was to screen EGFR, KRAS, and BRAF mutations by targeted NGS and commonly used real-time polymerase chain reaction (PCR) methods to evaluate the feasibility of targeted NGS for the detection of the mutations. Furthermore, we aimed to identify potential novel mutations by targeted NGS. We analyzed formalin-fixed, paraffin-embedded (FFPE) tumor tissue specimens from 81 non-small cell lung carcinoma patients. We observed a significant concordance (from 96.3 to 100%) of the EGFR, KRAS, and BRAF mutation detection results between targeted NGS and real-time PCR. Moreover, targeted NGS revealed seven nonsynonymous single-nucleotide variations and one insertion-deletion variation in EGFR not detectable by the real-time PCR methods. The potential clinical significance of these variants requires elucidation in future studies. Our results support the use of targeted NGS in the screening of EGFR, KRAS, and BRAF mutations in FFPE tissue material.

  13. The Gaia FGK Benchmark Stars - High resolution spectral library

    CERN Document Server

    Blanco-Cuaresma, S; Jofré, P; Heiter, U

    2014-01-01

    Context. An increasing number of high resolution stellar spectra is available today thanks to many past and ongoing spectroscopic surveys. Consequently, numerous methods have been developed in order to perform an automatic spectral analysis on a massive amount of data. When reviewing published results, biases arise and they need to be addressed and minimized. Aims. We are providing a homogeneous library with a common set of calibration stars (known as the Gaia FGK Benchmark Stars) that will allow stellar analysis methods to be assessed and spectroscopic surveys to be calibrated. Methods. High-resolution, high signal-to-noise spectra were compiled from different instruments. We developed an automatic process in order to homogenize the observed data and assess the quality of the resulting library. Results. We built a high quality library that will facilitate the assessment of spectral analyses and the calibration of present and future spectroscopic surveys. The automation of the process minimizes the human subjectivity and e...

  14. Performance Evaluation and Benchmarking of Intelligent Systems

    Energy Technology Data Exchange (ETDEWEB)

    del Pobil, Angel [Jaume-I University]; Madhavan, Raj [ORNL]; Bonsignorio, Fabio [Heron Robots, Italy]

    2009-10-01

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  15. AN INFRARED CENSUS OF DUST IN NEARBY GALAXIES WITH SPITZER (DUSTiNGS). II. DISCOVERY OF METAL-POOR DUSTY AGB STARS

    Energy Technology Data Exchange (ETDEWEB)

    Boyer, Martha L.; Sonneborn, George [Observational Cosmology Lab, Code 665, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); McQuinn, Kristen B. W.; Gehrz, Robert D.; Skillman, Evan [Minnesota Institute for Astrophysics, School of Physics and Astronomy, 116 Church Street SE, University of Minnesota, Minneapolis, MN 55455 (United States); Barmby, Pauline [Department of Physics and Astronomy, University of Western Ontario, London, ON N6A 3K7 (Canada); Bonanos, Alceste Z. [IAASARS, National Observatory of Athens, GR-15236 Penteli (Greece); Gordon, Karl D.; Meixner, Margaret [STScI, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Groenewegen, M. A. T. [Royal Observatory of Belgium, Ringlaan 3, B-1180 Brussels (Belgium); Lagadec, Eric [Laboratoire Lagrange, UMR7293, Univ. Nice Sophia-Antipolis, CNRS, Observatoire de la Côte d'Azur, F-06300 Nice (France); Lennon, Daniel [ESA-European Space Astronomy Centre, Apdo. de Correo 78, E-28691 Villanueva de la Cañada, Madrid (Spain); Marengo, Massimo [Department of Physics and Astronomy, Iowa State University, Ames, IA 50011 (United States); McDonald, Iain; Zijlstra, Albert [Jodrell Bank Centre for Astrophysics, Alan Turing Building, University of Manchester, Manchester M13 9PL (United Kingdom); Sloan, G. C. [Astronomy Department, Cornell University, Ithaca, NY 14853-6801 (United States); Van Loon, Jacco Th., E-mail: martha.boyer@nasa.gov [Astrophysics Group, Lennard-Jones Laboratories, Keele University, Staffordshire ST5 5BG (United Kingdom)

    2015-02-10

    The DUSTiNGS survey (DUST in Nearby Galaxies with Spitzer) is a 3.6 and 4.5 μm imaging survey of 50 nearby dwarf galaxies designed to identify dust-producing asymptotic giant branch (AGB) stars and massive stars. Using two epochs, spaced approximately six months apart, we identify a total of 526 dusty variable AGB stars (sometimes called 'extreme' or x-AGB stars; [3.6]-[4.5] > 0.1 mag). Of these, 111 are in galaxies with [Fe/H] < –1.5 and 12 are in galaxies with [Fe/H] < –2.0, making them the most metal-poor dust-producing AGB stars known. We compare these identifications to those in the literature and find that most are newly discovered large-amplitude variables, with the exception of ≈30 stars in NGC 185 and NGC 147, 1 star in IC 1613, and 1 star in Phoenix. The chemical abundances of the x-AGB variables are unknown, but the low metallicities suggest that they are more likely to be carbon-rich than oxygen-rich, and comparisons with existing optical and near-IR photometry indicate that 70 of the x-AGB variables are confirmed or likely carbon stars. We see an increase in the pulsation amplitude with increased dust production, supporting previous studies suggesting that dust production and pulsation are linked. We find no strong evidence linking dust production with metallicity, indicating that dust can form in very metal-poor environments.
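
    The selection quoted above, a [3.6]-[4.5] > 0.1 mag colour cut on sources that vary between the two epochs, can be illustrated as a simple catalogue filter. In this Python sketch the column layout, the amplitude threshold, and the toy rows are assumptions; the survey's actual variability criterion is significance-based rather than a fixed amplitude cut.

        # Hypothetical rows: (star_id, [3.6] epoch 1, [3.6] epoch 2,
        #                     [4.5] epoch 1, [4.5] epoch 2), all in mag
        catalog = [
            ("star_001", 15.80, 15.20, 15.55, 14.95),  # red and variable
            ("star_002", 14.10, 14.08, 14.05, 14.04),  # blue and constant
        ]

        def is_xagb_candidate(m36_e1, m36_e2, m45_e1, m45_e2,
                              color_cut=0.1, min_amplitude=0.3):
            """Flag dusty variable AGB candidates from two-epoch photometry."""
            color = (m36_e1 + m36_e2) / 2 - (m45_e1 + m45_e2) / 2  # [3.6]-[4.5]
            amplitude = abs(m36_e1 - m36_e2)  # epoch-to-epoch variation
            return color > color_cut and amplitude >= min_amplitude

        for star_id, *mags in catalog:
            print(star_id, is_xagb_candidate(*mags))  # True, then False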

  16. ICSBEP Benchmarks For Nuclear Data Applications

    Science.gov (United States)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  17. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  18. Plans to update benchmarking tool.

    Science.gov (United States)

    Stokoe, Mark

    2013-02-01

    The use of the current AssetMark system by hospital health facilities managers and engineers (in Australia) has decreased to the point where no activity is occurring. A number of reasons have been cited, including cost, the time required, the slow process, and the level of information required. Based on current levels of activity, it would not be of any value to IHEA, or to its members, to continue with this form of AssetMark. For AssetMark to remain viable, it needs to be developed as a tool seen to be of value to healthcare facilities managers, and not just healthcare facility engineers. Benchmarking is still a very important requirement in the industry, and AssetMark can fulfil this need provided that it remains abreast of customer needs. The proposed future direction is to develop an online version of AssetMark with its current capabilities regarding capturing of data (12 Key Performance Indicators), reporting, and user interaction. The system would also provide end-users with access to live reporting features via a user-friendly web interface linked through the IHEA web page.

  19. Academic Benchmarks for Otolaryngology Leaders.

    Science.gov (United States)

    Eloy, Jean Anderson; Blake, Danielle M; D'Aguillo, Christine; Svider, Peter F; Folbe, Adam J; Baredes, Soly

    2015-08-01

    This study aimed to characterize current benchmarks for academic otolaryngologists serving in positions of leadership and identify factors potentially associated with promotion to these positions. Information regarding chairs (or division chiefs), vice chairs, and residency program directors was obtained from faculty listings and organized by degree(s) obtained, academic rank, fellowship training status, sex, and experience. Research productivity was characterized by (a) successful procurement of active grants from the National Institutes of Health and prior grants from the American Academy of Otolaryngology-Head and Neck Surgery Foundation Centralized Otolaryngology Research Efforts program and (b) scholarly impact, as measured by the h-index. Chairs had the greatest amount of experience (32.4 years) and were the least likely to have multiple degrees, with 75.8% having an MD degree only. Program directors were the most likely to be fellowship trained (84.8%). Women represented 16% of program directors, 3% of chairs, and no vice chairs. Chairs had the highest scholarly impact (as measured by the h-index) and the greatest external grant funding. This analysis characterizes the current picture of leadership in academic otolaryngology. Chairs, when compared to their vice chair and program director counterparts, had more experience and greater research impact. Women were poorly represented among all academic leadership positions. © The Author(s) 2015.
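
    The h-index used above as the measure of scholarly impact has a simple definition: the largest h such that h of a researcher's papers each have at least h citations. A minimal Python sketch of the computation (not taken from the study) follows.

        def h_index(citations):
            """Largest h such that h papers each have at least h citations."""
            h = 0
            for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
                if cites >= rank:
                    h = rank  # the rank-th most cited paper still has >= rank citations
                else:
                    break
            return h

        print(h_index([25, 8, 5, 3, 3, 1]))  # 3: three papers with >= 3 citations each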

  20. Benchmarking Measures of Network Influence

    Science.gov (United States)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635
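
    The knockout idea behind the TKO score can be sketched compactly. The Python below is a simplified stand-in, not the authors' method: it scores each node by the drop in mean simulated SIR outbreak size when that node is removed from a static contact network, whereas the paper works on a temporally-extruded network and removes agents at each time step. The toy graph, parameters, and function names are assumptions.

        import random

        def mean_outbreak_size(edges, nodes, removed=None, beta=0.3, trials=200, seed=1):
            """Mean final size of a discrete-time SIR epidemic, averaged over
            random patient-zero choices; `removed` is knocked out beforehand."""
            rng = random.Random(seed)
            nbrs = {n: set() for n in nodes if n != removed}
            for u, v in edges:
                if removed not in (u, v):
                    nbrs[u].add(v)
                    nbrs[v].add(u)
            total = 0
            for _ in range(trials):
                infected, recovered = {rng.choice(sorted(nbrs))}, set()
                while infected:  # infected nodes transmit once, then recover
                    new = {v for u in infected for v in nbrs[u]
                           if v not in infected and v not in recovered
                           and rng.random() < beta}
                    recovered |= infected
                    infected = new
                total += len(recovered)
            return total / trials

        def knockout_score(edges, nodes, node):
            """Drop in mean outbreak size when `node` is removed (larger = more influential)."""
            return mean_outbreak_size(edges, nodes) - mean_outbreak_size(edges, nodes, removed=node)

        edges = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)]
        print({n: round(knockout_score(edges, range(6), n), 2) for n in range(6)})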

  1. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  2. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    … these issues, and describes how effects are closely connected to the perception of benchmarking, the intended users of the system, and the application of the benchmarking results. The fundamental basis of this paper is taken from the development of benchmarking in the Danish construction sector. Two distinct perceptions of benchmarking will be presented: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to highlight the effects, possibilities and challenges that follow in the wake of using this kind of benchmarking. In conclusion it is argued that clients and the Danish government are the intended users of the benchmarking system. The benchmarking results are primarily used by the government for monitoring and regulation of the construction sector and by clients for contractor selection. The dominating use…

  3. Benchmarking ENDF/B-VII.0

    Science.gov (United States)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257…

  4. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available When algorithms solve dynamic multi-objective optimisation problems (DMOOPs), benchmark functions should be used to determine whether the algorithm can overcome specific difficulties that can occur in real-world problems. However, for dynamic multi...

  5. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and the...

  6. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  7. XWeB: the XML Warehouse Benchmark

    CERN Document Server

    Mahboubi, Hadj

    2011-01-01

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  8. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    … provision to the chief physician of the respective department. Professional performance is publicly disclosed due to regulatory requirements. At the same time, chief physicians typically receive bureaucratic benchmarking information from the administration. We find that more frequent bureaucratic…

  9. Benchmarking of PR Function in Serbian Companies

    National Research Council Canada - National Science Library

    Nikolić, Milan; Sajfert, Zvonko; Vukonjanski, Jelena

    2009-01-01

    The purpose of this paper is to present methodologies for carrying out benchmarking of the PR function in Serbian companies and to test the practical application of the research results and proposed...

  10. A framework of benchmarking land models

    Science.gov (United States)

    Luo, Y. Q.; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, P.; Dalmonech, D.; Fisher, J.; Fisher, R.; Friedlingstein, P.; Hibbard, K.; Hoffman, F.; Huntzinger, D.; Jones, C. D.; Koven, C.; Lawrence, D.; Li, D. J.; Mahecha, M.; Niu, S. L.; Norby, R.; Piao, S. L.; Qi, X.; Peylin, P.; Prentice, I. C.; Riley, W.; Reichstein, M.; Schwalm, C.; Wang, Y. P.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-02-01

    Land models, which have been developed by the modeling community in the past two decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure and evaluate performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land models. The framework includes (1) targeted aspects of model performance to be evaluated; (2) a set of benchmarks as defined references to test model performance; (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies; and (4) model improvement. Component 4 may or may not be involved in a benchmark analysis but is an ultimate goal of general modeling research. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and the land-surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics across timescales in response to both weather and climate change. Benchmarks that are used to evaluate models generally consist of direct observations, data-model products, and data-derived patterns and relationships. Metrics of measuring mismatches between models and benchmarks may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance for future improvement. Iterations between model evaluation and improvement via benchmarking shall demonstrate progress of land modeling and help establish confidence in land models for their predictions of future states of ecosystems and climate.
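
    The metrics component can be made concrete with a small worked example. The Python sketch below is not a metric prescribed by the paper; it is one plausible scheme under stated assumptions: per-variable mismatch measured as RMSE normalized by the observed mean, combined into a single skill score with illustrative weights.

        import math

        def nrmse(model, obs):
            """Root-mean-square error normalized by the observed mean."""
            mse = sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs)
            return math.sqrt(mse) / (sum(obs) / len(obs))

        def skill_score(mismatches, weights):
            """Combine per-process mismatches into one score in (0, 1];
            1 means perfect agreement. The weighting is illustrative only."""
            total = sum(weights.values())
            return sum(w * math.exp(-mismatches[k]) for k, w in weights.items()) / total

        # Hypothetical benchmark: three months of GPP and latent heat flux
        mismatches = {
            "gpp": nrmse([2.1, 2.9, 3.8], [2.0, 3.0, 4.0]),
            "latent_heat": nrmse([80, 95, 120], [85, 100, 110]),
        }
        print(skill_score(mismatches, weights={"gpp": 0.6, "latent_heat": 0.4}))  # ~0.94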

  11. A framework of benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-02-01

    Full Text Available Land models, which have been developed by the modeling community in the past two decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure and evaluate performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land models. The framework includes (1) targeted aspects of model performance to be evaluated; (2) a set of benchmarks as defined references to test model performance; (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies; and (4) model improvement. Component 4 may or may not be involved in a benchmark analysis but is an ultimate goal of general modeling research. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and the land-surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics across timescales in response to both weather and climate change. Benchmarks that are used to evaluate models generally consist of direct observations, data-model products, and data-derived patterns and relationships. Metrics of measuring mismatches between models and benchmarks may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance for future improvement. Iterations between model evaluation and improvement via benchmarking shall demonstrate progress of land modeling and help establish confidence in land models for their predictions of future states of ecosystems and climate.

  12. A framework for benchmarking land models

    Science.gov (United States)

    Luo, Y. Q.; Randerson, J. T.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, P.; Dalmonech, D.; Fisher, J. B.; Fisher, R.; Friedlingstein, P.; Hibbard, K.; Hoffman, F.; Huntzinger, D.; Jones, C. D.; Koven, C.; Lawrence, D.; Li, D. J.; Mahecha, M.; Niu, S. L.; Norby, R.; Piao, S. L.; Qi, X.; Peylin, P.; Prentice, I. C.; Riley, W.; Reichstein, M.; Schwalm, C.; Wang, Y. P.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models

  13. Benchmarking Attosecond Physics with Atomic Hydrogen

    Science.gov (United States)

    2015-05-25

    Final report, covering 12 Mar 2012 to 11 Mar 2015. Title: Benchmarking attosecond physics with atomic hydrogen. Contract number: FA2386-12-1-4025. Report dated May 25, 2015. PI: David Kielpinski, dave.kielpinski@gmail.com, Griffith University Centre…

  14. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted by comparing different rotor geometries and solutions while keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize… NACA airfoil family.

  15. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes… attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes.

  16. Implementation of NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  17. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties…

  18. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  19. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  20. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
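
    The division of labour described above (RDF for annotations, SPARQL for metrics) can be sketched with the Python rdflib package. The vocabulary, URIs, and toy annotations below are invented for the example and are not the project's actual OWL schema; the point is only the pattern, computing a metric as a declarative query over annotation triples rather than with imperative matching code.

        from rdflib import Graph, Literal, Namespace, URIRef

        EX = Namespace("http://example.org/anno#")  # hypothetical vocabulary

        g = Graph()
        annotations = [("gold", "doc1", "p.V600E"), ("gold", "doc1", "p.G12D"),
                       ("system", "doc1", "p.V600E"), ("system", "doc1", "p.Q61K")]
        for source, doc, mutation in annotations:
            ann = URIRef(f"http://example.org/{source}/{doc}/{mutation}")
            g.add((ann, EX.source, Literal(source)))
            g.add((ann, EX.document, Literal(doc)))
            g.add((ann, EX.mutation, Literal(mutation)))

        # True positives: system annotations that match a gold annotation
        # on the same document and mutation string.
        query = """
        SELECT (COUNT(*) AS ?tp) WHERE {
            ?sys  ex:source "system" ; ex:document ?d ; ex:mutation ?m .
            ?gold ex:source "gold"   ; ex:document ?d ; ex:mutation ?m .
        }"""
        tp = int(next(iter(g.query(query, initNs={"ex": EX})))[0])
        print(f"TP = {tp}, precision = {tp}/2, recall = {tp}/2")  # 1 of 2 each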

  1. Introduction of the Python script STRinNGS for analysis of STR regions in FASTQ or BAM files and expansion of the Danish STR sequence database to 11 STRs

    DEFF Research Database (Denmark)

    Friis, Susanne L; Buchard, Anders; Rockenbauer, Eszter

    2016-01-01

    This work introduces the in-house developed Python application STRinNGS for analysis of STR sequence elements in BAM or FASTQ files. STRinNGS identifies sequence reads with STR loci by their flanking sequences, analyses the STR sequence and the flanking regions, and generates a report with the…
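
    The flank-based identification step can be illustrated in Python, the language STRinNGS itself is written in. The sketch below is not the STRinNGS implementation: it assumes exact flank matches and a single known motif, and ignores the reverse strand and the flanking-region analysis that the application also performs.

        import re

        def call_str(read, flank5, flank3, motif):
            """Extract the repeat region between known flanks and count repeats.

            Returns (repeat_count, region) or None when a flank is missing.
            """
            i = read.find(flank5)
            j = read.find(flank3, i + len(flank5)) if i != -1 else -1
            if i == -1 or j == -1:
                return None
            region = read[i + len(flank5):j]
            return len(re.findall(motif, region)), region

        read = "TTGACCTG" + "TCTA" * 9 + "GGACTTAA"
        count, region = call_str(read, "TTGACCTG", "GGACTTAA", "TCTA")
        print(count)  # 9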

  2. Developing Evidence for Action on the Postgraduate Experience: An Effective Local Instrument to Move beyond Benchmarking

    Science.gov (United States)

    Sampson, K. A.; Johnston, L.; Comer, K.; Brogt, E.

    2016-01-01

    Summative and benchmarking surveys to measure the postgraduate student research experience are well reported in the literature. While useful, we argue that local instruments that provide formative resources with an academic development focus are also required. If higher education institutions are to move beyond the identification of issues and…

  3. Diversity Recruiting: Overview of Practices and Benchmarks. CERI Research Brief 4-2013

    Science.gov (United States)

    Gardner, Phil

    2013-01-01

    Little information exists on the basic elements of diversity recruiting on college campuses. A set of questions was developed for the Collegiate Employment Research Institute's (CERI's) annual college hiring survey that attempted to capture the current practices and benchmarks being employed by organizations in their diversity recruiting programs.…

  4. Variation In Accountable Care Organization Spending And Sensitivity To Risk Adjustment: Implications For Benchmarking.

    Science.gov (United States)

    Rose, Sherri; Zaslavsky, Alan M; McWilliams, J Michael

    2016-03-01

    Spending targets (or benchmarks) for accountable care organizations (ACOs) participating in the Medicare Shared Savings Program must be set carefully to encourage program participation while achieving fiscal goals and minimizing unintended consequences, such as penalizing ACOs for serving sicker patients. Recently proposed regulatory changes include measures to make benchmarks more similar for ACOs in the same area with different historical spending levels. We found that ACOs vary widely in how their spending levels compare with those of other local providers after standard case-mix adjustments. Additionally adjusting for survey measures of patient health meaningfully reduced the variation in differences between ACO spending and local average fee-for-service spending, but substantial variation remained, which suggests that differences in care efficiency between ACOs and local non-ACO providers vary widely. Accordingly, measures to equilibrate benchmarks between high- and low-spending ACOs--such as setting benchmarks to risk-adjusted average fee-for-service spending in an area--should be implemented gradually to maintain participation by ACOs with high spending. Use of survey information also could help mitigate perverse incentives for risk selection and upcoding and limit unintended consequences of new benchmarking methodologies for ACOs serving sicker patients.

  5. Using GC-FID to Quantify the Removal of 4-sec-Butylphenol from NGS Solvent by NaOH

    Energy Technology Data Exchange (ETDEWEB)

    Sloop, Jr., Frederick V. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Moyer, Bruce A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-12-01

    A caustic wash of the solvent used in the Next-Generation Caustic-Side Solvent Extraction (NG-CSSX) process was found to remove the modifier breakdown product 4-sec-butylphenol (SBP) with varying efficiency depending on the aqueous NaOH concentration. Recent efforts at ORNL have aimed at characterizing the flowsheet chemistry and reducing the technical uncertainties of the NG-CSSX process. One technical uncertainty has been the efficacy of caustic washing of the solvent for the removal of lipophilic anions, in particular, the efficient removal of SBP, an important degradation product of the solvent modifier, Cs-7SB. In order to make this determination, it was necessary to develop a sensitive and reliable analytical technique for the detection and quantitation of SBP. This report recounts the development of a GC-FID-based (Gas Chromatography Flame Ionization Detection) technique for analyzing SBP and the utilization of the technique to subsequently confirm the ability of the caustic wash to efficiently remove SBP from the Next Generation Solvent (NGS) used in NG-CSSX. In particular, the developed technique was used to monitor the amount of SBP removed from a simple solvent and the full NGS by contact with sodium hydroxide wash solutions over a range of concentrations. The results show that caustic washing removes SBP with effectively the same efficiency as it did in the original Caustic-Side Solvent Extraction (CSSX) process.

  6. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  7. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster [Benchmarking of hospital information systems – a comparative analysis of German-language benchmarking clusters]

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. In recent years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning the costs, performance, and efficiency of their information systems and information management against those of other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries in recent years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their forms of cooperation. The benchmarking clusters also deal with different benchmarking subjects. The costs and quality of application systems, physical data-processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance, and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  8. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the Fusion-io 40 GB parallel NAND Flash disk array. The Fusion system specs are as follows…

  9. Airborne Gravity: NGS' Airborne Gravity Data for AN01 (2009-2010)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2009-2010 over 2 surveys. This data set is part of the Gravity for the Re-definition of the American Vertical Datum...

  10. Airborne Gravity: NGS' Gravity Data for EN03 (2011-2013)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Michigan, Wisconsin, Illinois, Indiana, and Lake Michigan collected in 2011 to 2013 over 3 surveys. This data set is part of the Gravity...

  11. Airborne Gravity: NGS' Gravity Data for EN02 (2011-2012)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for New York, Pennsylvania, Ohio, Michigan, Canada and Lake Erie collected in 2011 and 2012 over 3 surveys. This data set is part of the...

  12. Airborne Gravity: NGS' Gravity Data for ES04 (2013-2014)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for North Carolina, South Carolina, Virginia, and the Atlantic Ocean collected in 2013 and 2014 over two surveys. This data set is part of the...

  13. 2008 NOAA/NGS Integrated Ocean and Coastal Mapping (IOCM) LIDAR: Kenai Peninsula Alaska

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data were collected by the National Oceanic Atmospheric Administration National Geodetic Survey Remote Sensing Division using an OPTECH ALTM system. The data...

  14. High Accuracy Reference Network (HARN), Points generated from coordinates supplied by NGS, Published in 1993, MARIS.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This High Accuracy Reference Network (HARN) dataset, was produced all or in part from Field Survey/GPS information as of 1993. It is described as 'Points generated...

  15. Geodetic Networks, geodetic control points within the National Spatial Reference System, Published in unknown, NGS.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Geodetic Networks dataset, was produced all or in part from Field Survey/GPS information as of unknown. It is described as 'geodetic control points within the...

  16. Airborne Gravity: NGS' Gravity Data for AS03 (2010-2012)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2010 and 2012 over 2 surveys. This data set is part of the Gravity for the Re-definition of the American Vertical Datum...

  17. Airborne Gravity: NGS' Gravity Data for CS02 (2008-2009)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Louisiana and Mississippi collected in 2008-2009 over 2 surveys. This data set is part of the Gravity for the Re-definition of the American...

  18. Airborne Gravity: NGS' Gravity Data for EN07 (2012-2013)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Maine and Canada collected in 2012 and 2013 over 2 surveys. This data set is part of the Gravity for the Re-definition of the American...

  19. 2014 NOAA NGS Topobathy Lidar: Key West National Wildlife Refuge (FL)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data were collected by the National Oceanic Atmospheric Administration National Geodetic Survey Remote Sensing Division using a Riegl VQ880G system. The data...

  1. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no-slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  2. NGS-QCbox and Raspberry for Parallel, Automated and Rapid Quality Control Analysis of Large-Scale Next Generation Sequencing (Illumina) Data

    Science.gov (United States)

    Katta, Mohan A. V. S. K.; Khan, Aamir W.; Doddamani, Dadakhalandar; Thudi, Mahendar; Varshney, Rajeev K.

    2015-01-01

    Rapid popularity and adoption of next-generation sequencing (NGS) approaches have generated huge volumes of data. High-throughput platforms like Illumina HiSeq produce terabytes of raw data that require quick processing. Quality control of the data is an important component prior to the downstream analyses. To address these issues, we have developed a quality control pipeline, NGS-QCbox, that scales up to process hundreds or thousands of samples. Raspberry is an in-house tool, developed in the C language utilizing HTSlib (v1.2.1) (http://htslib.org), for computing read/base-level statistics. It can be used as a stand-alone application and can process both compressed and uncompressed FASTQ format files. NGS-QCbox integrates Raspberry with other open-source tools for alignment (Bowtie2), SNP calling (SAMtools) and other utilities (bedtools) towards analyzing raw NGS data at higher efficiency and in a high-throughput manner. The pipeline implements batch processing of jobs using Bpipe (https://github.com/ssadedin/bpipe) in parallel and, internally, fine-grained task parallelization utilizing OpenMP. It reports read and base statistics along with genome coverage and variants in a user-friendly format. The pipeline presents a simple menu-driven interface and can be used in either quick or complete mode. In addition, the pipeline in quick mode outperforms other similar existing QC pipelines/tools in speed. The NGS-QCbox pipeline, Raspberry tool and associated scripts are made available at https://github.com/CEG-ICRISAT/NGS-QCbox and https://github.com/CEG-ICRISAT/Raspberry for rapid quality control analysis of large-scale next-generation sequencing (Illumina) data. PMID:26460497
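
    The record names the concrete tools NGS-QCbox wires together (Bowtie2, SAMtools, bedtools, Bpipe). As a rough illustration of that kind of wiring, a minimal Python QC-then-align step might look like the sketch below; the file names, quality threshold and helper functions are invented for the example and are not taken from NGS-QCbox itself. Only the bowtie2/samtools command-line flags are standard.

```python
"""Sketch of a QC-then-align step of the kind NGS-QCbox chains together.
NOT the NGS-QCbox code: file names, the quality cutoff, and both helper
functions are illustrative assumptions."""
import gzip
import subprocess

def mean_base_quality(fastq_gz: str, max_reads: int = 10_000) -> float:
    """Average Phred quality over the first reads of a gzipped FASTQ."""
    total = bases = 0
    with gzip.open(fastq_gz, "rt") as fh:
        for i, line in enumerate(fh):
            if i >= 4 * max_reads:
                break
            if i % 4 == 3:                       # 4th line of each record
                q = line.strip()
                total += sum(ord(c) - 33 for c in q)
                bases += len(q)
    return total / bases if bases else 0.0

def align(ref_index: str, fq1: str, fq2: str, out_prefix: str) -> None:
    """Align paired reads with Bowtie2, then sort and index with SAMtools."""
    subprocess.run(
        f"bowtie2 -x {ref_index} -1 {fq1} -2 {fq2} "
        f"| samtools sort -o {out_prefix}.bam -",
        shell=True, check=True)
    subprocess.run(f"samtools index {out_prefix}.bam", shell=True, check=True)

if __name__ == "__main__":
    q = mean_base_quality("sample_R1.fastq.gz")
    print(f"mean base quality: {q:.1f}")
    if q >= 30:                                  # illustrative cutoff
        align("ref_idx", "sample_R1.fastq.gz", "sample_R2.fastq.gz", "sample")
```

    A real pipeline would, as the abstract describes, dispatch many such jobs in parallel through Bpipe rather than serially in one process.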

  3. NGS-QCbox and Raspberry for Parallel, Automated and Rapid Quality Control Analysis of Large-Scale Next Generation Sequencing (Illumina) Data.

    Science.gov (United States)

    Katta, Mohan A V S K; Khan, Aamir W; Doddamani, Dadakhalandar; Thudi, Mahendar; Varshney, Rajeev K

    2015-01-01

    Rapid popularity and adoption of next-generation sequencing (NGS) approaches have generated huge volumes of data. High-throughput platforms like Illumina HiSeq produce terabytes of raw data that require quick processing. Quality control of the data is an important component prior to the downstream analyses. To address these issues, we have developed a quality control pipeline, NGS-QCbox, that scales up to process hundreds or thousands of samples. Raspberry is an in-house tool, developed in the C language utilizing HTSlib (v1.2.1) (http://htslib.org), for computing read/base-level statistics. It can be used as a stand-alone application and can process both compressed and uncompressed FASTQ format files. NGS-QCbox integrates Raspberry with other open-source tools for alignment (Bowtie2), SNP calling (SAMtools) and other utilities (bedtools) towards analyzing raw NGS data at higher efficiency and in a high-throughput manner. The pipeline implements batch processing of jobs using Bpipe (https://github.com/ssadedin/bpipe) in parallel and, internally, fine-grained task parallelization utilizing OpenMP. It reports read and base statistics along with genome coverage and variants in a user-friendly format. The pipeline presents a simple menu-driven interface and can be used in either quick or complete mode. In addition, the pipeline in quick mode outperforms other similar existing QC pipelines/tools in speed. The NGS-QCbox pipeline, Raspberry tool and associated scripts are made available at https://github.com/CEG-ICRISAT/NGS-QCbox and https://github.com/CEG-ICRISAT/Raspberry for rapid quality control analysis of large-scale next-generation sequencing (Illumina) data.

  4. NGS-QCbox and Raspberry for Parallel, Automated and Rapid Quality Control Analysis of Large-Scale Next Generation Sequencing (Illumina Data.

    Directory of Open Access Journals (Sweden)

    Mohan A V S K Katta

    Full Text Available Rapid popularity and adoption of next-generation sequencing (NGS) approaches have generated huge volumes of data. High-throughput platforms like Illumina HiSeq produce terabytes of raw data that require quick processing. Quality control of the data is an important component prior to the downstream analyses. To address these issues, we have developed a quality control pipeline, NGS-QCbox, that scales up to process hundreds or thousands of samples. Raspberry is an in-house tool, developed in the C language utilizing HTSlib (v1.2.1) (http://htslib.org), for computing read/base-level statistics. It can be used as a stand-alone application and can process both compressed and uncompressed FASTQ format files. NGS-QCbox integrates Raspberry with other open-source tools for alignment (Bowtie2), SNP calling (SAMtools) and other utilities (bedtools) towards analyzing raw NGS data at higher efficiency and in a high-throughput manner. The pipeline implements batch processing of jobs using Bpipe (https://github.com/ssadedin/bpipe) in parallel and, internally, fine-grained task parallelization utilizing OpenMP. It reports read and base statistics along with genome coverage and variants in a user-friendly format. The pipeline presents a simple menu-driven interface and can be used in either quick or complete mode. In addition, the pipeline in quick mode outperforms other similar existing QC pipelines/tools in speed. The NGS-QCbox pipeline, Raspberry tool and associated scripts are made available at https://github.com/CEG-ICRISAT/NGS-QCbox and https://github.com/CEG-ICRISAT/Raspberry for rapid quality control analysis of large-scale next-generation sequencing (Illumina) data.

  5. Survey and benchmark of block ciphers for wireless sensor networks

    NARCIS (Netherlands)

    Law, Yee Wei; Doumen, Jeroen; Hartel, Pieter

    2004-01-01

    Choosing the most storage- and energy-efficient block cipher specifically for wireless sensor networks (WSNs) is not as straightforward as it seems. To our knowledge so far, there is no systematic evaluation framework for the purpose. In this paper, we have identified the candidates of block ciphers

  6. Survey and Benchmark of Block Ciphers for Wireless Sensor Networks

    NARCIS (Netherlands)

    Law, Y.W.; Doumen, J.M.; Hartel, Pieter H.

    Choosing the most storage- and energy-efficient block cipher specifically for wireless sensor networks (WSNs) is not as straightforward as it seems. To our knowledge so far, there is no systematic evaluation framework for the purpose. In this paper, we have identified the candidates of block ciphers

  7. A New Benchmark Brown Dwarf

    CERN Document Server

    Tinney, C G; Forveille, T; Delfosse, Xavier

    1997-01-01

    We present optical spectroscopy of three brown dwarf candidates identified in the first 1% of the DENIS sky survey. Low-resolution spectra from 6430-9000 A show these objects to have spectra similar to that of the uncertain brown dwarf candidate GD 165B. High-resolution spectroscopy shows that one of the objects, DBD 1228-1547, has a strong (EW = 2.3 ± 0.05 A) Li I 6708 A absorption line, and is therefore a brown dwarf with mass below 0.065 Msol. DBD 1228-1547 can now be considered the prototype for objects just below the hydrogen-burning limit.

  8. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes of benchmark selection are investigated. The result of this analysis is the formulation of criteria for benchmark selection for flexible multibody formalisms. Based on these criteria, an initial set of suitable benchmarks is described. In addition, the evaluation measures are revised and extended.

  9. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    Science.gov (United States)

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand whether the nationally derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  10. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    Science.gov (United States)

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restriction. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when adopting such a benchmark. The GCC Center for Infection Control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable healthcare workers and researchers to obtain more accurate and realistic comparisons.

  11. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  12. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    Full Text Available The aim of the article. The aim of the article is to generalize the characteristics, objectives and advantages of internal benchmarking, and to form the sequence of stages of internal benchmarking technology, focused on continuous improvement of enterprise processes through the implementation of existing best practices. The results of the analysis. Business activity of domestic enterprises in a crisis business environment has to focus on the best success factors of their structural units, using standard research assessment of their performance and applying their innovative experience in practice. A modern method of satisfying those needs is internal benchmarking. According to Bain & Co, internal benchmarking is one of the three most common methods of business management. The features and benefits of internal benchmarking are defined in the article. The sequence and methodology of implementation of the individual stages of benchmarking technology projects are formulated. The authors define benchmarking as a strategic orientation toward the best achievement, attained by comparing performance and working methods with a standard. It covers the processes of research, the organization of production and distribution, and management and marketing methods applied to reference objects, in order to identify innovative practices and implement them in a particular business. Benchmarking development at domestic enterprises requires analysis of its theoretical bases and practical experience. Choosing the best experience helps to develop recommendations for its application in practice. It is also essential to classify its types, identify its characteristics, study appropriate areas of use and develop a methodology of implementation. The structure of internal benchmarking objectives includes: promoting research and establishment of minimum acceptable levels of efficiency of processes and activities which are available at the enterprise; identification of current problems and areas that need improvement without involvement of foreign experience

  13. Survey of SNMP performance analysis studies

    NARCIS (Netherlands)

    Andrey, Laurent; Festor, Olivier; Lahmadi, Abdelkader; Pras, Aiko; Schönwälder, Jürgen

    2009-01-01

    This paper provides a survey of Simple Network Management Protocol (SNMP)-related performance studies. Over the last 10 years, a variety of such studies have been published. Performance benchmarking of SNMP, like all benchmarking studies, is a non-trivial task that requires substantial effort to be

  14. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  15. Coral benchmarks in the center of biodiversity.

    Science.gov (United States)

    Licuanan, W Y; Robles, R; Dygico, M; Songco, A; van Woesik, R

    2017-01-30

    There is an urgent need to quantify coral reef benchmarks that assess changes and recovery rates through time and serve as goals for management. Yet few studies have identified benchmarks for hard coral cover and diversity in the center of marine diversity. In this study, we estimated coral cover and generic diversity benchmarks on the Tubbataha reefs, the largest and best-enforced no-take marine protected area in the Philippines. The shallow (2-6 m) reef slopes of Tubbataha were monitored annually, from 2012 to 2015, using hierarchical sampling. Mean coral cover was 34% (σ = 1.7) and generic diversity was 18 (σ = 0.9) genera per 75 m by 25 m station. The southeastern leeward slopes supported on average 56% coral cover, whereas the northeastern windward slopes supported 30%, and the western slopes supported 18% coral cover. Generic diversity was more spatially homogeneous than coral cover. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based ... and professional performance, but only if prior professional performance was low. Supplemental analyses support the robustness of our results. Findings indicate conditions under which bureaucratic benchmarking information may affect professional performance and advance research on professional control and social...

  17. The national hydrologic bench-mark network

    Science.gov (United States)

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  18. DWEB: A Data Warehouse Engineering Benchmark

    CERN Document Server

    Darmont, Jérôme; Boussaïd, Omar

    2005-01-01

    Data warehouse architectural choices and optimization techniques are critical to decision support query performance. To facilitate these choices, the performance of the designed data warehouse must be assessed. This is usually done with the help of benchmarks, which can either help system users compare the performance of different systems, or help system engineers test the effect of various design choices. While the TPC standard decision support benchmarks address the first point, they are not tuneable enough to address the second one and fail to model different data warehouse schemas. By contrast, our Data Warehouse Engineering Benchmark (DWEB) allows various ad-hoc synthetic data warehouses and workloads to be generated. DWEB is fully parameterized to fulfill data warehouse design needs. However, two levels of parameterization keep it relatively easy to tune. Finally, DWEB is implemented as free software in Java that can be interfaced with most existing relational database management systems. A sample usag...

  19. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The purpose of this article is to benchmark different optimization solvers when applied to various finite element based structural topology optimization problems. An extensive and representative library of minimum compliance, minimum volume, and mechanism design problem instances for different sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point ... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of the exact Hessians in SAND formulations generally produces designs with better objective function values. However, with the benchmarked implementations solving...

  20. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.

  1. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt;

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stays confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping...
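
    The benchmarking model itself is a linear program. The MPC layer and the paper's exact model are beyond this record, but for intuition, a plaintext input-oriented DEA-style efficiency score, which is one standard LP formulation for this kind of benchmarking, can be computed as in the sketch below; the farm data are made up and the formulation is our illustrative assumption, not necessarily the authors' model.

```python
"""Sketch of a DEA-style efficiency benchmark (input-oriented, constant
returns to scale). Plaintext illustration only; no MPC layer here."""
import numpy as np
from scipy.optimize import linprog

# rows = units (e.g., farms); columns = inputs (e.g., debt, costs) / outputs
inputs = np.array([[4.0, 2.0], [6.0, 3.0], [5.0, 5.0]])
outputs = np.array([[10.0], [12.0], [9.0]])
n = inputs.shape[0]

def efficiency(o: int) -> float:
    """min theta s.t. a nonnegative combination of all units uses at most
    theta * inputs of unit o and produces at least the outputs of unit o."""
    c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lambda_j]
    A_ub, b_ub = [], []
    for i in range(inputs.shape[1]):            # sum_j l_j x_ij <= theta x_io
        A_ub.append(np.r_[-inputs[o, i], inputs[:, i]])
        b_ub.append(0.0)
    for r in range(outputs.shape[1]):           # sum_j l_j y_rj >= y_ro
        A_ub.append(np.r_[0.0, -outputs[:, r]])
        b_ub.append(-outputs[o, r])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

for o in range(n):
    print(f"unit {o}: efficiency {efficiency(o):.3f}")
```

    A score of 1 marks a unit on the efficient frontier; lower scores indicate how far its inputs could proportionally shrink while still matching its outputs.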

  2. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL; Grove, Robert E [ORNL; Kodeli, I. [International Atomic Energy Agency (IAEA); Sartori, Enrico [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  3. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water-use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water-use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are made throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.
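
    At its core, a band-rating benchmark of this kind maps per-capita consumption onto bands; a minimal sketch follows, with invented thresholds rather than the paper's calibrated bands (which additionally account for behaviour, technology and local supplies such as RWH and GW):

```python
"""Toy band-rating water-use benchmark. The band thresholds
(litres per person per day) are illustrative assumptions only."""

BANDS = [(80, "A"), (100, "B"), (120, "C"), (140, "D"), (160, "E")]

def water_band(daily_litres: float, occupants: int) -> str:
    """Map per-capita daily consumption to a benchmark band (worst = 'F')."""
    per_capita = daily_litres / occupants
    for threshold, band in BANDS:
        if per_capita <= threshold:
            return band
    return "F"

# Example: a 3-person household using 330 L/day -> 110 L/person/day -> "C"
print(water_band(330, 3))
```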

  4. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes. This makes it difficult to compare the resources used, since some programmes by their nature require more classroom time and equipment than others. It is also far from straightforward to compare college effects with respect to grades, since the various programmes apply very different forms of assessment...

  5. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stays confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much...

  6. Benchmarking af kommunernes førtidspensionspraksis

    DEFF Research Database (Denmark)

    Gregersen, Ole

    Every year, Den Sociale Ankestyrelse (the National Social Appeals Board) publishes statistics on decisions in disability pension cases. In connection with the annual statistics, results from a benchmarking model are published, in which the number of awards in each municipality is compared with the number of awards that would be expected if the municipality had the same decision practice as the "average municipality", after correcting for the social structure of the municipality. The benchmarking model used so far is documented in Ole Gregersen (1994): Kommunernes Pensionspraksis, Servicerapport, Socialforskningsinstituttet. This note documents a...

  7. Benchmarking of Heavy Ion Transport Codes

    Energy Technology Data Exchange (ETDEWEB)

    Remec, Igor [ORNL; Ronningen, Reginald M. [Michigan State University, East Lansing; Heilbronn, Lawrence [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in designing and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  8. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  9. [The use of benchmarking to manage the healthcare supply chain: effects on purchasing cost and quality].

    Science.gov (United States)

    Naranjo-Gil, David; Ruiz-Muñoz, David

    2015-01-01

    Healthcare supply expenses consume a large part of the financial resources allocated to public health. The aim of this study was to analyze the use of a benchmarking process in the management of hospital purchases, as well as its effect on product cost reduction and quality improvement. Data were collected through a survey conducted in 29 primary healthcare districts from 2010 to 2011, and through a healthcare database on the prices, quality, delivery time and supplier characteristics of 5373 products. The use of benchmarking processes reduced or eliminated products with a low quality and high price. These processes increased the quality of products by 10.57% and reduced their purchase price by 28.97%. The use of benchmarking by healthcare centers can reduce expenditure and allow more efficient management of the healthcare supply chain. It also facilitated the acquisition of products at lower prices and higher quality. Copyright © 2014 SESPAS. Published by Elsevier España. All rights reserved.

  10. The Compiler Technology of NGS on the Grid

    Institute of Scientific and Technical Information of China (English)

    丛杨; 王雷; 朱凯佳; 刘又诚

    2003-01-01

    With the development of network technology, using shared computing and information resources across heterogeneous platforms will soon become possible. The Grid is an emerging platform that provides such resource sharing, and the next-generation software (NGS) built on top of it poses new challenges for compilers [1]. Future compilation systems on the Grid platform will be software models capable of dynamic compilation and optimization, continuously adjusting themselves according to the performance of the runtime system and the network, while also providing compilation support for self-adaptive applications.

  11. NGS metabarcoding proves successful for quantitative assessment of symbiont abundance: the case of feather mites on birds.

    Science.gov (United States)

    Diaz-Real, J; Serrano, D; Piriz, A; Jovani, R

    2015-10-01

    Understanding the ecological function of species and the structure of communities is crucial in the study of ecological interactions among species. For this purpose, not only the occurrence of particular species but also their abundance in ecological communities is required. However, quantifying species abundance through morphological characters is often difficult, or time- and money-consuming, when dealing with elusive or small taxa. Here we tested the use of next-generation sequencing (NGS) for abundance estimation of two species of feather mites (Proctophyllodes stylifer and Pteronyssoides parinus) at five proportions (16:1, 16:4, 16:16, 16:64, and 16:256 mites) against a mock community composed of Proctophyllodes clavatus and Proctophyllodes sylviae. In all mixtures, we retrieved sequence reads from all species. We found a strong linear relationship between 454 reads and the real proportion of individuals in the mixture for both focal species. The slope for Pr. stylifer was close to one (0.904), and the intercept close to zero (-0.007), thus showing an almost perfect correspondence between real and estimated proportions. The slope for Pt. parinus was 0.351 and the intercept 0.307, showing that while the estimated proportion increased linearly with the real proportion of individuals in the samples, proportions were overestimated at low real proportions and underestimated at larger ones. Additionally, pyrosequencing replicates from each DNA extraction were highly repeatable (R = 0.920 and 0.972, respectively), showing that the quantification method is highly consistent given a DNA extract. Our study suggests that NGS is a promising tool for abundance estimation of feather mite communities on birds.
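
    Since the record reports fitted lines of the form read proportion = slope × true proportion + intercept, a natural illustration is to invert them as calibration curves. The inversion below is our sketch, not a procedure from the paper; only the slopes and intercepts are the reported values.

```python
"""Invert the reported read-vs-abundance calibration lines to estimate
the true proportion of individuals from an observed read proportion."""

# (slope, intercept) as quoted in the record
CALIBRATION = {
    "Proctophyllodes stylifer": (0.904, -0.007),
    "Pteronyssoides parinus": (0.351, 0.307),
}

def estimate_true_proportion(species: str, read_proportion: float) -> float:
    """Solve read_prop = slope * true_prop + intercept for true_prop."""
    slope, intercept = CALIBRATION[species]
    estimate = (read_proportion - intercept) / slope
    return min(max(estimate, 0.0), 1.0)   # clamp to a valid proportion

# e.g. 40% of reads assigned to Pt. parinus -> roughly 26% of individuals
print(estimate_true_proportion("Pteronyssoides parinus", 0.40))
```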

  12. Detailed finite element analysis of Darlington NGS feeder pipes with locally thinned regions below pressure minimum thickness

    Energy Technology Data Exchange (ETDEWEB)

    Haq, I.; Stojakovic, M.; Li, M. [Ontario Power Generation Inc., Pickering, Ontario (Canada)

    2011-07-01

    Feeder pipes in CANDU nuclear stations are experiencing wall thinning due to flow-accelerated corrosion (FAC), resulting in locally thinned regions in addition to general thinning. In Darlington NGS these locally thinned regions can be below the pressure-based minimum thickness (t{sub min}) required as per ASME Code Section III NB-3600 Equation (1). A methodology is presented to qualify the locally thinned regions under NB-3200 (NB-3213 and NB-3221) for internal pressure loading only. Detailed finite element models are used for the internal pressure analysis using ANSYS v11.0. All other loadings, such as deadweight, thermal and seismic loadings, are qualified under NB-3600 using a general-purpose piping stress analysis software. The piping stress analysis is based on an average thickness equal to t{sub min} along with maximum values of ASME Code stress indices (Table NB-3681(a)-1). The requirement for the use of this methodology is that the average thickness of each cross-section with a locally thinned region shall be at least t{sub min}. The finite element analysis models are thinned to 0.75 t{sub min} (in increments of 0.05 t{sub min}) all around the circumference in the straight-section region, allowing for flexible inspection requirements. Two different thicknesses of 1.10 t{sub min} and 1.30 t{sub min} are assigned to the bends. Thickness versus allowable axial extent curves were developed for the different types of feeder pipes in service. Feeders differ in pipe size, straight-section length, bend angle and orientation. The stress analysis results show that all Darlington NGS outlet feeder pipes are fit for service with locally thinned regions down to 75% of the pressure-based minimum thickness. This paper demonstrates the effectiveness of finite element analysis in extending the useful life of degraded piping components. (author)

  13. ICO amplicon NGS data analysis: a Web tool for variant detection in common high-risk hereditary cancer genes analyzed by amplicon GS Junior next-generation sequencing.

    Science.gov (United States)

    Lopez-Doriga, Adriana; Feliubadaló, Lídia; Menéndez, Mireia; Lopez-Doriga, Sergio; Morón-Duran, Francisco D; del Valle, Jesús; Tornero, Eva; Montes, Eva; Cuesta, Raquel; Campos, Olga; Gómez, Carolina; Pineda, Marta; González, Sara; Moreno, Victor; Capellá, Gabriel; Lázaro, Conxi

    2014-03-01

    Next-generation sequencing (NGS) has revolutionized genomic research and is set to have a major impact on genetic diagnostics thanks to the advent of benchtop sequencers and flexible kits for targeted libraries. Among the main hurdles in NGS are the difficulty of performing bioinformatic analysis of the huge volume of data generated and the high number of false positive calls that could be obtained, depending on the NGS technology and the analysis pipeline. Here, we present the development of a free and user-friendly Web data analysis tool that detects and filters sequence variants, provides coverage information, and allows the user to customize some basic parameters. The tool has been developed to provide accurate genetic analysis of targeted sequencing of common high-risk hereditary cancer genes using amplicon libraries run in a GS Junior System. The Web resource is linked to our own mutation database, to assist in the clinical classification of identified variants. We believe that this tool will greatly facilitate the use of the NGS approach in routine laboratories.

  14. 77 FR 49721 - International Services Surveys and Direct Investment Surveys Reporting

    Science.gov (United States)

    2012-08-17

    ... comment rulemaking procedures. See, e.g., Direct Investment Surveys: BE-12, Benchmark Survey of Foreign... in services and direct investment surveys. The surveys are provided for by the International.... Galler, Chief, Direct Investment Division (BE-50), Bureau of Economic Analysis, U.S. Department...

  15. A framework for organizing cancer-related variations from existing databases, publications and NGS data using a High-performance Integrated Virtual Environment (HIVE).

    Science.gov (United States)

    Wu, Tsung-Jung; Shamsaddini, Amirhossein; Pan, Yang; Smith, Krista; Crichton, Daniel J; Simonyan, Vahan; Mazumder, Raja

    2014-01-01

    Years of sequence feature curation by UniProtKB/Swiss-Prot, PIR-PSD, NCBI-CDD, RefSeq and other database biocurators have led to a rich repository of information on functional sites of genes and proteins. This information, along with variation-related annotation, can be used to scan human short sequence reads from next-generation sequencing (NGS) pipelines for the presence of non-synonymous single-nucleotide variations (nsSNVs) that affect functional sites. This and similar workflows are becoming more important because thousands of NGS data sets are being made available through projects such as The Cancer Genome Atlas (TCGA), and researchers want to evaluate their biomarkers in genomic data. BioMuta, an integrated sequence feature database, provides a framework for automated and manual curation and integration of cancer-related sequence features so that they can be used in NGS analysis pipelines. Sequence feature information in BioMuta is collected from the Catalogue of Somatic Mutations in Cancer (COSMIC), ClinVar, UniProtKB and through biocuration of information available from publications. Additionally, nsSNVs identified through automated analysis of NGS data from TCGA are also included in the database. Because of the petabytes of data and information present in NGS primary repositories, a platform, HIVE (High-performance Integrated Virtual Environment), for storing, analyzing, computing and curating NGS data and associated metadata has been developed. Using HIVE, 31 979 nsSNVs were identified in TCGA-derived NGS data from breast cancer patients. All variations identified through this process are stored in a Curated Short Read archive, and the nsSNVs from the tumor samples are included in BioMuta. Currently, BioMuta has 26 cancer types with 13 896 small-scale and 308 986 large-scale study-derived variations. Integration of variation data allows identification of novel or common nsSNVs that can be prioritized in validation studies. Database URL: BioMuta: http

  16. Algorithm and Architecture Independent Benchmarking with SEAK

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.; Kang, Seung-Hwa; Kerbyson, Darren J.; Hoisie, Adolfy; Cross, Joseph

    2016-05-23

    Many applications of high-performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation of those solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future-proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  17. A human benchmark for language recognition

    NARCIS (Netherlands)

    Orr, R.; Leeuwen, D.A. van

    2009-01-01

    In this study, we explore a human benchmark in language recognition, for the purpose of comparing human performance to machine performance in the context of the NIST LRE 2007. Humans are categorised in terms of language proficiency, and performance is presented per proficiency. The main challenge in

  18. Benchmarking Year Five Students' Reading Abilities

    Science.gov (United States)

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark Year Five students' reading abilities in fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension tests and a set of indicators to inform…

  19. Benchmark Generation and Simulation at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Lagadapati, Mahesh [North Carolina State University (NCSU), Raleigh; Mueller, Frank [North Carolina State University (NCSU), Raleigh; Engelmann, Christian [ORNL

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  20. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low-altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV "in the field", as well as to generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx). © Springer International Publishing AG 2016.

  1. Thermodynamic benchmark study using Biacore technology

    NARCIS (Netherlands)

    Navratilova, I.; Papalia, G.A.; Rich, R.L.; Bedinger, D.; Brophy, S.; Condon, B.; Deng, T.; Emerick, A.W.; Guan, H.W.; Hayden, T.; Heutmekers, T.; Hoorelbeke, B.; McCroskey, M.C.; Murphy, M.M.; Nakagawa, T.; Parmeggiani, F.; Xiaochun, Q.; Rebe, S.; Nenad, T.; Tsang, T.; Waddell, M.B.; Zhang, F.F.; Leavitt, S.; Myszka, D.G.

    2007-01-01

    A total of 22 individuals participated in this benchmark study to characterize the thermodynamics of small-molecule inhibitor-enzyme interactions using Biacore instruments. Participants were provided with reagents (the enzyme carbonic anhydrase II, which was immobilized onto the sensor surface, and

  2. Benchmarking European Gas Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter; Trinkner, Urs

    This is the final report for the pan-European efficiency benchmarking of gas transmission system operations commissioned by the Netherlands Authority for Consumers and Markets (ACM), Den Haag, on behalf of the Council of European Energy Regulators (CEER) under the supervision of the authors....

  3. Alberta K-12 ESL Proficiency Benchmarks

    Science.gov (United States)

    Salmon, Kathy; Ettrich, Mike

    2012-01-01

    The Alberta K-12 ESL Proficiency Benchmarks are organized by division: kindergarten, grades 1-3, grades 4-6, grades 7-9, and grades 10-12. They are descriptors of language proficiency in listening, speaking, reading, and writing. The descriptors are arranged in a continuum of seven language competences across five proficiency levels. Several…

  4. Seven Benchmarks for Information Technology Investment.

    Science.gov (United States)

    Smallen, David; Leach, Karen

    2002-01-01

    Offers benchmarks to help campuses evaluate their efforts in supplying information technology (IT) services. The first three help understand the IT budget, the next three provide insight into staffing levels and emphases, and the seventh relates to the pervasiveness of institutional infrastructure. (EV)

  5. Benchmarking Peer Production Mechanisms, Processes & Practices

    Science.gov (United States)

    Fischer, Thomas; Kretschmer, Thomas

    2008-01-01

    This deliverable identifies key approaches for quality management in peer production by benchmarking peer production practices and processes in other areas. (Contains 29 footnotes, 13 figures and 2 tables.)[This report has been authored with contributions of: Kaisa Honkonen-Ratinen, Matti Auvinen, David Riley, Jose Pinzon, Thomas Fischer, Thomas…

  6. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited to the healthcare sector. The model incorporates posterior operational indicators and evaluates performance upon aggregation. The model is tested on seven cases from Japan and Denmark. Japanese...

  7. Simple benchmark for complex dose finding studies.

    Science.gov (United States)

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
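
    The benchmark of O'Quigley et al. rests on a complete-information device: a single uniform latent tolerance per patient determines that patient's toxicity outcome at every dose simultaneously, so the "optimal" design sees each patient's outcome at all doses. A hedged sketch of how such a benchmark is commonly simulated follows; the toxicity curve, target rate, sample size and simulation count are made-up illustration values, and the original paper should be consulted for the exact construction.

```python
"""Sketch of the nonparametric complete-information benchmark for dose
finding, in the spirit of O'Quigley et al. (2002). Illustrative values."""
import numpy as np

rng = np.random.default_rng(0)
true_tox = np.array([0.05, 0.12, 0.25, 0.40, 0.55])  # per-dose P(toxicity)
target = 0.25                                        # target toxicity rate
n_patients, n_sims = 30, 10_000
true_mtd = np.argmin(np.abs(true_tox - target))

hits = 0
for _ in range(n_sims):
    # each patient carries a latent tolerance u; they would experience
    # toxicity at every dose whose true rate exceeds u, so one draw yields
    # that patient's outcome at *all* doses (complete information)
    u = rng.uniform(size=n_patients)
    tox_profile = u[:, None] < true_tox[None, :]     # shape (patients, doses)
    est = tox_profile.mean(axis=0)                   # empirical rate per dose
    hits += np.argmin(np.abs(est - target)) == true_mtd

print(f"benchmark accuracy: {hits / n_sims:.3f}")
```

    Comparing a design's simulated selection accuracy against this upper bound gives the quick calibration check the abstract describes.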

  8. Benchmarking 2010: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  9. Benchmark Experiment for Beryllium Slab Samples

    Institute of Scientific and Technical Information of China (English)

    NIE; Yang-bo; BAO; Jie; HAN; Rui; RUAN; Xi-chao; REN; Jie; HUANG; Han-xiong; ZHOU; Zu-ying

    2015-01-01

In order to validate the evaluated nuclear data on beryllium, a benchmark experiment has been performed at the China Institute of Atomic Energy (CIAE). Neutron leakage spectra from pure beryllium slab samples (10 cm × 10 cm × 11 cm) were measured at 61° and 121° using time-of-

  10. Cleanroom Energy Efficiency: Metrics and Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
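As a concrete illustration of two of the system-level metrics named above, the short sketch below computes an air change rate and an air-handling W/cfm figure; the function names and example numbers are hypothetical, not values from the LBNL or Fabs21 datasets.

```python
def air_changes_per_hour(supply_cfm, room_volume_ft3):
    # Air change rate: cubic feet delivered per hour / room volume
    return supply_cfm * 60.0 / room_volume_ft3

def air_handling_w_per_cfm(fan_power_w, supply_cfm):
    # Lower W/cfm means less fan power spent per unit of airflow
    return fan_power_w / supply_cfm

# Hypothetical cleanroom: 40,000 cfm recirculated through a 20,000 ft^3 room
print(air_changes_per_hour(40_000, 20_000))    # 120.0 ACH
print(air_handling_w_per_cfm(30_000, 40_000))  # 0.75 W/cfm
```

Tracking these two numbers over time, as the article suggests, separates cleanliness-driven airflow requirements from avoidable fan-power waste.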

  11. Issues in Benchmarking and Assessing Institutional Engagement

    Science.gov (United States)

    Furco, Andrew; Miller, William

    2009-01-01

    The process of assessing and benchmarking community engagement can take many forms. To date, more than two dozen assessment tools for measuring community engagement institutionalization have been published. These tools vary substantially in purpose, level of complexity, scope, process, structure, and focus. While some instruments are designed to…

  12. Benchmarking European Gas Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter; Trinkner, Urs

    This is the final report for the pan-European efficiency benchmarking of gas transmission system operations commissioned by the Netherlands Authority for Consumers and Markets (ACM), Den Haag, on behalf of the Council of European Energy Regulators (CEER) under the supervision of the authors....

  13. Points to consider in the clinical use of NGS panels for mitochondrial disease: an analysis of gene inclusion and consent forms.

    Science.gov (United States)

    Platt, Julia; Cox, Rachel; Enns, Gregory M

    2014-08-01

    Mitochondrial next generation sequencing (NGS) panels offer single-step analysis of the numerous nuclear genes involved in the structure, function, and maintenance of mitochondria. However, the complexities of mitochondrial biology and genetics raise points for consideration in clinical use of these tests. To understand the current status of mitochondrial genetic testing, we assessed the gene offerings and consent forms of mitochondrial NGS panels available from seven US-based clinical laboratories. The NGS panels varied markedly in number of genes (101-1204 genes), and the proportion of genes causing "classic" mitochondrial diseases and their phenocopies ranged widely between labs (18 %-94 % of panel contents). All panels included genes not associated with classic mitochondrial diseases (6 %-28 % of panel contents), including genes causing adult-onset neurodegenerative disorders, cancer predisposition, and other genetic syndromes or inborn errors of metabolism. Five of the panels included genes that are not listed in OMIM to be associated with a disease phenotype (5 %-49 % of panel contents). None of the consent documents reviewed had options for patient preference regarding receipt of incidental findings. These findings raise points of discussion applicable to mitochondrial diagnostics, but also to the larger arenas of exome and genome sequencing, including the need to consider the boundaries between clinical and research testing, the necessity of appropriate informed consent, and the responsibilities of clinical laboratories and clinicians. Based on these findings, we recommend careful evaluation by laboratories of the genes offered on NGS panels, clear communication of the predicted phenotypes, and revised consent forms to allow patients to make choices about receiving incidental findings. We hope that our analysis and recommendations will help to maximize the considerable clinical utility of NGS panels for the diagnosis of mitochondrial disease.

  14. The Effect of Time and Distance on South NGS-200 GPS Surveying Precision; 时间距离因子对南方NGS-200GPS测量精度的影响

    Institute of Scientific and Technical Information of China (English)

    费鲜芸; 亓学翔; 刘文杰; 高祥伟

    2003-01-01

GPS is a rapidly emerging technology. Compared with traditional surveying techniques, it offers incomparable advantages. Through analysis of the observations, this paper concludes that both the duration of observation and the distribution of observation sessions have a marked effect on the measurement results.

  15. Benchmarking transaction and analytical processing systems the creation of a mixed workload benchmark and its application

    CERN Document Server

    Bog, Anja

    2014-01-01

    This book introduces a new benchmark for hybrid database systems, gauging the effect of adding OLAP to an OLTP workload and analyzing the impact of commonly used optimizations in historically separate OLTP and OLAP domains in mixed-workload scenarios.

  16. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and to evaluate energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after school activities into account. Benchmarks can be used to make green accounts and work as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)

  17. Benchmark values for forest soil carbon stocks in Europe

    DEFF Research Database (Denmark)

    De Vos, Bruno; Cools, Nathalie; Ilvesniemi, Hannu;

    2015-01-01

... to the UN/ECE ICP Forests 16 × 16 km Level I network. Plots were sampled and analysed according to harmonized methods during the 2nd European Forest Soil Condition Survey. Using continuous carbon density depth functions, we estimated SOC stocks to 30-cm and 1-m depth, and stratified these stocks according to 22 WRB Reference Soil Groups (RSGs) and 8 humus forms to provide European-scale benchmark values. Average SOC stocks amounted to 22.1 t C ha−1 in forest floors, 108 t C ha−1 in mineral soils and 578 t C ha−1 in peat soils, to 1 m depth. Relative to 1-m stocks, the vertical SOC distribution...

  18. The ACRV Picking Benchmark (APB): A Robotic Shelf Picking Benchmark to Foster Reproducible Research

    OpenAIRE

    Leitner, Jürgen; Tow, Adam W.; Dean, Jake E.; Suenderhauf, Niko; Durham, Joseph W.; Cooper, Matthew; Eich, Markus; Lehnert, Christopher; Mangels, Ruben; McCool, Christopher; Kujala, Peter; Nicholson, Lachlan; Van Pham, Trung; Sergeant, James; Wu, Liao

    2016-01-01

    Robotic challenges like the Amazon Picking Challenge (APC) or the DARPA Challenges are an established and important way to drive scientific progress. They make research comparable on a well-defined benchmark with equal test conditions for all participants. However, such challenge events occur only occasionally, are limited to a small number of contestants, and the test conditions are very difficult to replicate after the main event. We present a new physical benchmark challenge for robotic pi...

  19. Benchmark 1 - Failure Prediction after Cup Drawing, Reverse Redrawing and Expansion Part A: Benchmark Description

    Science.gov (United States)

    Watson, Martin; Dick, Robert; Huang, Y. Helen; Lockley, Andrew; Cardoso, Rui; Santos, Abel

    2016-08-01

    This Benchmark is designed to predict the fracture of a food can after drawing, reverse redrawing and expansion. The aim is to assess different sheet metal forming difficulties such as plastic anisotropic earing and failure models (strain and stress based Forming Limit Diagrams) under complex nonlinear strain paths. To study these effects, two distinct materials, TH330 steel (unstoved) and AA5352 aluminum alloy are considered in this Benchmark. Problem description, material properties, and simulation reports with experimental data are summarized.

  20. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

Over the past decade, benchmarking has increasingly gained foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually abstaining researchers and practitioners from studying and questioning the concept objectively. This paper addresses the underlying nature of benchmarking and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend this perspective; doing so develops more thorough knowledge about benchmarking and challenges the current dominating rationales. Hereby, it is argued that benchmarking is not a neutral practice. On the contrary, it is highly influenced by organizational ambitions and strategies, with the potential to transform organizational relations, behaviors and actions. In closing, it is briefly considered how to study the calculative practices of benchmarking.

  1. Effects of Exposure Imprecision on Estimation of the Benchmark Dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

Environmental epidemiology; exposure measurement error; effect of prenatal mercury exposure; exposure standards; benchmark dose

  2. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

The article analyses the evolution and possibilities of applying benchmarking in the telecommunication sphere. It examines the essence of benchmarking by generalising the approaches different scholars take to defining the notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology, the main factors that determine an operator's success in the modern market economy, the mechanism of benchmarking, and the component stages of carrying out benchmarking at a telecommunication operator. It analyses the telecommunication market, the dynamics of its development, and tendencies in the changing composition of telecommunication operators and providers. Generalising existing experience with benchmarking, the article identifies the main types of benchmarking of telecommunication operators by the following features: by level of conduct (branch, inter-branch and international); by participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  3. Benchmarks of support in internal medicine residency training programs.

    Science.gov (United States)

    Wolfsthal, Susan D; Beasley, Brent W; Kopelman, Richard; Stickley, William; Gabryel, Timothy; Kahn, Marc J

    2002-01-01

To identify benchmarks of financial and staff support in internal medicine residency training programs and their correlation with indicators of quality. A survey instrument to determine characteristics of support of residency training programs was mailed to each member program of the Association of Program Directors of Internal Medicine. Results were correlated with the three-year running average of the pass rates on the American Board of Internal Medicine certifying examination using bivariate and multivariate analyses. Of 394 surveys, 287 (73%) were completed: 74% of respondents were program directors and 20% were both chair and program director. The mean duration as program director was 7.5 years (median = 5), but it was significantly lower for women than for men (4.9 versus 8.1; p = .001). Respondents spent 62% of their time in educational and administrative duties, 30% in clinical activities, 5% in research, and 2% in other activities. Most chief residents were PGY4s, with 72% receiving compensation additional to base salary. On average, there was one associate program director for every 33 residents, one chief resident for every 27 residents, and one staff person for every 21 residents. Most programs provided trainees with incremental educational stipends, meals while on call, travel and meeting expenses, and parking. Support from pharmaceutical companies was used for meals, books, and meeting expenses. Almost all programs provided meals for applicants, with 15% providing travel allowances and 37% providing lodging. The programs' board pass rates significantly correlated with the numbers of faculty full-time equivalents (FTEs), the numbers of resident FTEs per office staff FTEs, and the numbers of categorical and preliminary applications received and ranked by the programs in 1998 and 1999. Regression analyses demonstrated three independent predictors of the programs' board pass rates: number of faculty (a positive predictor), percentage of clinical work

  4. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    OpenAIRE

    Zaharchenko Lolita A.; Kolesnyk Oksana A.

    2013-01-01

    The article analyses evolution of development and possibilities of application of benchmarking in the telecommunication sphere. It studies essence of benchmarking on the basis of generalisation of approaches of different scientists to definition of this notion. In order to improve activity of telecommunication operators, the article identifies the benchmarking technology and main factors, that determine success of the operator in the modern market economy, and the mechanism of benchmarking an...

  5. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    2005-01-01

The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...

  6. Benchmarking of corporate social responsibility: Methodological problems and robustness

    OpenAIRE

    2004-01-01

This paper investigates the possibilities and problems of benchmarking Corporate Social Responsibility (CSR). After a methodological analysis of the advantages and problems of benchmarking, we develop a benchmark method that includes economic, social and environmental aspects as well as national and international aspects of CSR. The overall benchmark is based on a weighted average of these aspects. The weights are based on the opinions of companies and NGOs. Using different me...

  7. Enterprise Surveys : Nicaragua Country Profile 2010

    OpenAIRE

    World Bank; International Finance Corporation

    2011-01-01

The enterprise surveys focus on the many factors that shape the business environment. The qualitative and quantitative data collected through the surveys connect a country's business environment characteristics with firm productivity and performance. The country profile for Nicaragua is based on data from the enterprise surveys conducted by the World Bank. The benchmarks include the averag...

  8. Benchmarking a signpost to excellence in quality and productivity

    CERN Document Server

    Karlof, Bengt

    1993-01-01

According to the authors, benchmarking exerts a powerful leverage effect on an organization, and they consider some of the factors which justify their claim. Describes how to implement benchmarking and exactly what to benchmark. Explains benchlearning, which integrates education, leadership development and organizational dynamics with the actual work being done, and how to make it work more efficiently in terms of quality and productivity.

  9. Taking Stock of Corporate Benchmarking Practices: Panacea or Pandora's Box?

    Science.gov (United States)

    Fleisher, Craig S.; Burton, Sara

    1995-01-01

    Discusses why corporate communications/public relations (cc/pr) should be benchmarked (an approach used by cc/pr managers to demonstrate the value of their activities to skeptical organizational executives). Discusses myths about cc/pr benchmarking; types, targets, and focus of cc/pr benchmarking; a process model; and critical decisions about…

  10. 47 CFR 69.108 - Transport rate benchmark.

    Science.gov (United States)

    2010-10-01

... with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone company... benchmark ratio of 9.6 to 1 or higher. (c) If a telephone company's initial transport rates are based on...

  11. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    Science.gov (United States)

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  12. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  13. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  14. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  15. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  16. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  17. 29 CFR 1952.203 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  18. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  19. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  20. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  1. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  2. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  3. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  4. Characterization of addressability by simultaneous randomized benchmarking

    CERN Document Server

    Gambetta, Jay M; Merkel, S T; Johnson, B R; Smolin, John A; Chow, Jerry M; Ryan, Colm A; Rigetti, Chad; Poletto, S; Ohki, Thomas A; Ketchen, Mark B; Steffen, M

    2012-01-01

    The control and handling of errors arising from cross-talk and unwanted interactions in multi-qubit systems is an important issue in quantum information processing architectures. We introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of randomized benchmarking each qubit individually and then simultaneously, and the amount of addressability is related to the difference of the average gate fidelities of those experiments. We present the results on two similar samples with different amounts of cross-talk and unwanted interactions, which agree with predictions based on simple models for the amount of residual coupling.
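The abstract ties addressability to the gap between the average gate fidelities obtained from individual and simultaneous randomized benchmarking. The sketch below shows that comparison using the standard RB relation between the fitted depolarizing parameter and average gate fidelity; the function names and sample numbers are illustrative, not taken from the paper.

```python
def average_gate_fidelity(p, d=2):
    # Standard RB relation: F_avg = 1 - (1 - p)(d - 1)/d, where p is the
    # depolarizing parameter fitted from the RB decay curve and d is the
    # Hilbert-space dimension (d = 2 for a single qubit).
    return 1.0 - (1.0 - p) * (d - 1) / d

def addressability_gap(p_individual, p_simultaneous, d=2):
    # A larger fidelity drop when benchmarking all qubits at once signals
    # cross-talk or unwanted qubit-qubit interactions.
    return (average_gate_fidelity(p_individual, d)
            - average_gate_fidelity(p_simultaneous, d))

# Hypothetical fitted decay parameters from the two experiments
print(addressability_gap(0.998, 0.994))  # 0.002 fidelity lost to cross-talk
```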

  5. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]

    2015-12-17

The CASL neutronics simulator MPACT is under development for neutronics and T-H coupled simulation of the pressurized water reactor. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because measured data are insufficient. One indirect way to validate it is a code-to-code comparison on benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline provided to obtain meaningful computational outcomes that can be used in the validation of the MPACT depletion capability.

  6. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master's thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  7. The PROOF benchmark suite measuring PROOF performance

    Science.gov (United States)

    Ryu, S.; Ganis, G.

    2012-06-01

    The PROOF benchmark suite is a new utility suite of PROOF to measure performance and scalability. The primary goal of the benchmark suite is to determine optimal configuration parameters for a set of machines to be used as PROOF cluster. The suite measures the performance of the cluster for a set of standard tasks as a function of the number of effective processes. Cluster administrators can use the suite to measure the performance of the cluster and find optimal configuration parameters. PROOF developers can also utilize the suite to help them measure, identify problems and improve their software. In this paper, the new tool is explained in detail and use cases are presented to illustrate the new tool.

  8. Measuring NUMA effects with the STREAM benchmark

    CERN Document Server

    Bergstrom, Lars

    2011-01-01

    Modern high-end machines feature multiple processor packages, each of which contains multiple independent cores and integrated memory controllers connected directly to dedicated physical RAM. These packages are connected via a shared bus, creating a system with a heterogeneous memory hierarchy. Since this shared bus has less bandwidth than the sum of the links to memory, aggregate memory bandwidth is higher when parallel threads all access memory local to their processor package than when they access memory attached to a remote package. But, the impact of this heterogeneous memory architecture is not easily understood from vendor benchmarks. Even where these measurements are available, they provide only best-case memory throughput. This work presents a series of modifications to the well-known STREAM benchmark to measure the effects of NUMA on both a 48-core AMD Opteron machine and a 32-core Intel Xeon machine.
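To make the bandwidth arithmetic concrete, here is a single-threaded Python approximation of the STREAM "triad" kernel. The real benchmark is C with OpenMP, and probing NUMA effects additionally requires pinning threads and memory to specific processor packages (e.g. with numactl), which this sketch does not attempt; it only illustrates the kernel and the GB/s bookkeeping.

```python
import time
import numpy as np

def stream_triad_gbps(n=50_000_000, scalar=3.0, repeats=5):
    """Approximate the STREAM triad a = b + scalar*c and report GB/s.

    STREAM's convention counts three arrays of 8-byte doubles per
    iteration (two loads, one store), i.e. 24*n bytes, and reports the
    best repeat. The two NumPy calls below make an extra pass over `a`,
    so this slightly understates what a fused C loop would report.
    """
    b, c = np.random.rand(n), np.random.rand(n)
    a = np.empty_like(b)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.multiply(c, scalar, out=a)  # a = scalar * c
        np.add(a, b, out=a)            # a = b + scalar * c
        best = min(best, time.perf_counter() - t0)
    return 3 * 8 * n / best / 1e9

print(f"triad bandwidth ~ {stream_triad_gbps():.1f} GB/s")
```

Running such a kernel with all threads on local memory versus remote memory is what exposes the bandwidth asymmetry the paper measures.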

  9. Non-judgemental Dynamic Fuel Cycle Benchmarking

    CERN Document Server

    Scopatz, Anthony Michael

    2015-01-01

    This paper presents a new fuel cycle benchmarking analysis methodology by coupling Gaussian process regression, a popular technique in Machine Learning, to dynamic time warping, a mechanism widely used in speech recognition. Together they generate figures-of-merit that are applicable to any time series metric that a benchmark may study. The figures-of-merit account for uncertainty in the metric itself, utilize information across the whole time domain, and do not require that the simulators use a common time grid. Here, a distance measure is defined that can be used to compare the performance of each simulator for a given metric. Additionally, a contribution measure is derived from the distance measure that can be used to rank order the importance of fuel cycle metrics. Lastly, this paper warns against using standard signal processing techniques for error reduction. This is because it is found that error reduction is better handled by the Gaussian process regression itself.
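Of the two ingredients named in the abstract, dynamic time warping is the easier to show compactly. Below is a minimal sketch; the paper couples this with Gaussian process regression to carry metric uncertainty, whereas this snippet shows only a plain DTW distance between two metric series on different time grids, with illustrative inputs.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic O(n*m) dynamic time warping distance between 1-D series.

    Because DTW aligns points elastically in time, two simulators need
    not share a common time grid for their metrics to be compared.
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# e.g. the same hypothetical fuel cycle metric from two simulators
# reported on different time grids
print(dtw_distance([1.0, 1.2, 1.5, 1.4], [1.0, 1.1, 1.3, 1.5, 1.4]))
```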

  10. Argonne Code Center: benchmark problem book

    Energy Technology Data Exchange (ETDEWEB)

    1977-06-01

This report is a supplement to the original report, published in 1968, as revised. The Benchmark Problem Book is intended to serve as a source book of solutions to mathematically well-defined problems for which either analytical or very accurate approximate solutions are known. This supplement contains problems in eight new areas: two-dimensional (R-z) reactor model; multidimensional (Hex-z) HTGR model; PWR thermal hydraulics--flow between two channels with different heat fluxes; multidimensional (x-y-z) LWR model; neutron transport in a cylindrical "black" rod; neutron transport in a BWR rod bundle; multidimensional (x-y-z) BWR model; and neutronic depletion benchmark problems. This supplement contains only the additional pages and those requiring modification. (RWR)

  11. Assessing and benchmarking multiphoton microscopes for biologists.

    Science.gov (United States)

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F

    2014-01-01

Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to its superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that few standard ways have been described in the literature to distinguish between microscopes or to benchmark existing microscopes for the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can be used either within a multiphoton facility or by a prospective purchaser to benchmark performance. This can assist both in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs.

  12. ASBench: benchmarking sets for allosteric discovery.

    Science.gov (United States)

    Huang, Wenkang; Wang, Guanqiao; Shen, Qiancheng; Liu, Xinyi; Lu, Shaoyong; Geng, Lv; Huang, Zhimin; Zhang, Jian

    2015-08-01

    Allostery allows for the fine-tuning of protein function. Targeting allosteric sites is gaining increasing recognition as a novel strategy in drug design. The key challenge in the discovery of allosteric sites has strongly motivated the development of computational methods and thus high-quality, publicly accessible standard data have become indispensable. Here, we report benchmarking data for experimentally determined allosteric sites through a complex process, including a 'Core set' with 235 unique allosteric sites and a 'Core-Diversity set' with 147 structurally diverse allosteric sites. These benchmarking sets can be exploited to develop efficient computational methods to predict unknown allosteric sites in proteins and reveal unique allosteric ligand-protein interactions to guide allosteric drug design.

  13. Active vibration control of nonlinear benchmark buildings

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xing-de; CHEN Dao-zheng

    2007-01-01

Existing nonlinear model reduction methods do not suit the nonlinear benchmark buildings, as their vibration equations form a non-affine system. Meanwhile, controllers designed directly by nonlinear control strategies are of high order and difficult to apply in practice. Therefore, a new active vibration control approach that suits nonlinear buildings is proposed. The idea of the proposed approach is based on model identification and structural model linearization, exerting the control force on the built model according to the force action principle. The proposed approach is more practicable because the built model can be reduced by the balanced reduction method based on the empirical Gramian matrix, as sketched below. A three-story benchmark structure is presented and the simulation results illustrate that the proposed method is viable for civil engineering structures.
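The reduction step named in the abstract can be sketched compactly. The snippet below performs classical balanced truncation using analytic Gramians from Lyapunov equations; the paper instead builds an empirical Gramian matrix from data, so treat this as an illustration of the same idea under the assumption of a stable, minimal identified linear model (A, B, C), with all names illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce a stable LTI system x' = Ax + Bu, y = Cx to order r.

    Assumes A is stable and (A, B, C) minimal, so both Gramians are
    positive definite and the Cholesky factorizations succeed.
    """
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                      # s: Hankel singular values
    T = Lc @ Vt.T @ np.diag(s ** -0.5)             # balancing transformation
    Tinv = np.diag(s ** -0.5) @ U.T @ Lo.T
    # In balanced coordinates both Gramians equal diag(s); truncating the
    # states with small s discards the weakly controllable/observable modes.
    Ar = (Tinv @ A @ T)[:r, :r]
    Br = (Tinv @ B)[:r, :]
    Cr = (C @ T)[:, :r]
    return Ar, Br, Cr, s
```

The Hankel singular values returned in s indicate how many states can be dropped before the reduced model's response diverges noticeably from the full one.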

  14. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in the last years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  15. Physics benchmarks of the VELO upgrade

    CERN Document Server

    Eklund, Lars

    2017-01-01

    The LHCb Experiment at the LHC is successfully performing precision measurements primarily in the area of flavour physics. The collaboration is preparing an upgrade that will start taking data in 2021 with a trigger-less readout at five times the current luminosity. The vertex locator has been crucial in the success of the experiment and will continue to be so for the upgrade. It will be replaced by a hybrid pixel detector and this paper discusses the performance benchmarks of the upgraded detector. Despite the challenging experimental environment, the vertex locator will maintain or improve upon its benchmark figures compared to the current detector. Finally the long term plans for LHCb, beyond those of the upgrade currently in preparation, are discussed.

  16. Experiences in Benchmarking of Autonomic Systems

    Science.gov (United States)

    Etchevers, Xavier; Coupaye, Thierry; Vachet, Guy

Autonomic computing promises improvements in systems' quality of service in terms of availability, reliability, performance, security, etc. However, little research and few experimental results have so far demonstrated this assertion or provided proof of the return on investment stemming from the efforts that introducing autonomic features requires. Existing works in the area of benchmarking of autonomic systems can be characterized by their qualitative and fragmented approaches. A crucial need remains to provide generic (i.e. independent from business, technology, architecture and implementation choices) autonomic computing benchmarking tools for evaluating and/or comparing autonomic systems from a technical and, ultimately, an economical point of view. This article introduces a methodology and a process for defining and evaluating factors, criteria and metrics in order to qualitatively and quantitatively assess autonomic features in computing systems. It also discusses associated experimental results on three different autonomic systems.

  17. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  18. Benchmarking Nature Tourism between Zhangjiajie and Repovesi

    OpenAIRE

    Wu, Zhou

    2014-01-01

Since nature tourism became a booming business in modern society, more and more tourists choose nature-based destinations for their holidays. Finding ways to promote Repovesi national park is therefore significant, in a bid to reinforce the park's competitiveness. The topic of this thesis is both to find good marketing strategies used by the Zhangjiajie national park, via benchmarking, and to provide some suggestions to Repovesi national park. The Method used in t...

  19. Benchmarking Performance of Web Service Operations

    OpenAIRE

    Zhang, Shuai

    2011-01-01

Web services are often used for retrieving data from servers providing information of different kinds. A data-providing web service operation returns collections of objects for a given set of arguments without any side effects. In this project a web service benchmark (WSBENCH) is developed to simulate the performance of web service calls. Web service operations are specified as SQL statements. The function generator of WSBENCH converts user-specified SQL queries into functions and automatical...

  20. Felix Stub Generator and Benchmarks Generator

    CERN Document Server

    Valenciano, Jose Jaime

    2014-01-01

    This report discusses two projects I have been working on during my summer studentship period in the context of the FELIX upgrade for ATLAS. The first project concerns the automated code generation needed to support and speed-up the FELIX firmware and software development cycle. The second project required the execution and analysis of benchmarks of the FELIX data-decoding software as a function of data sizes, number of threads and number of data blocks.