WorldWideScience

Sample records for assessments quality metrics

  1. Assessing Software Quality Through Visualised Cohesion Metrics

    Directory of Open Access Journals (Sweden)

    Timothy Shih

    2001-05-01

Full Text Available Cohesion is one of the most important factors for software quality, as well as for maintainability, reliability and reusability. Module cohesion is defined as a quality attribute that seeks to measure the singleness of purpose of a module. A module of poor quality can be a serious obstacle to system quality. In order to design software of good quality, software managers and engineers need to introduce cohesion metrics to measure and produce desirable software. Highly cohesive software is thought to be a desirable construct. In this paper, we propose a function-oriented cohesion metric based on the analysis of live variables, live span and the visualization of the processing-element dependency graph. We give six typical cohesion examples to be measured as our experiments and justification. As a result, a well-defined, well-normalized, well-visualized and well-experimented cohesion metric is proposed to indicate and thus enhance software cohesion strength. Furthermore, this cohesion metric can easily be incorporated into a software CASE tool to help software engineers improve software quality.

  2. Quality Assessment of Sharpened Images: Challenges, Methodology, and Objective Metrics.

    Science.gov (United States)

    Krasula, Lukas; Le Callet, Patrick; Fliegel, Karel; Klima, Milos

    2017-01-10

Most of the effort in image quality assessment (QA) has so far been dedicated to the degradation of the image. However, there are also many algorithms in the image processing chain that can enhance the quality of an input image. These include procedures for contrast enhancement, deblurring, sharpening, up-sampling, denoising, transfer function compensation, etc. In this work, possible strategies for the quality assessment of sharpened images are investigated. This task is not trivial because sharpening techniques can increase the perceived quality as well as introduce artifacts leading to a quality drop (over-sharpening). Here, a framework specifically adapted for the quality assessment of sharpened images and for comparing objective metrics in this context is introduced. However, the framework can be adopted in other quality assessment areas as well. The problem of selecting the correct procedure for subjective evaluation was addressed, and a subjective test on blurred, sharpened, and over-sharpened images was performed in order to demonstrate the use of the framework. The obtained ground-truth data were used for testing the suitability of state-of-the-art objective quality metrics for the assessment of sharpened images. The comparison was performed by a novel procedure using ROC analysis, which was found more appropriate for the task than standard methods. Furthermore, seven possible augmentations of the no-reference S3 metric adapted for sharpened images are proposed. The performance of the metric is significantly improved and is also superior to the rest of the tested quality criteria with respect to the subjective data.
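
    A minimal sketch of the general idea of comparing objective metrics against subjective ground truth with ROC analysis (not the authors' exact procedure; the scores and labels below are placeholders): a metric whose scores better separate subjectively improved from over-sharpened images obtains a higher area under the ROC curve.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# 1 = image judged improved after sharpening, 0 = judged degraded (over-sharpened)
subjective_labels = np.array([1, 1, 0, 1, 0, 0, 1, 0])
metric_a_scores = np.array([0.81, 0.74, 0.40, 0.66, 0.52, 0.31, 0.77, 0.45])
metric_b_scores = np.array([0.60, 0.71, 0.58, 0.64, 0.63, 0.41, 0.69, 0.55])

for name, scores in [("metric A", metric_a_scores), ("metric B", metric_b_scores)]:
    auc = roc_auc_score(subjective_labels, scores)  # area under the ROC curve
    print(f"{name}: AUC = {auc:.3f}")
```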

  3. Image quality assessment metric for frame accumulated image

    Science.gov (United States)

    Yu, Jianping; Li, Gang; Wang, Shaohui; Lin, Ling

    2018-01-01

The quality of a medical image determines the accuracy of diagnosis, and gray-scale resolution is an important parameter of image quality. However, current objective metrics are not well suited to assessing medical images obtained by frame accumulation technology: they pay little attention to gray-scale resolution, are based mainly on spatial resolution, and are limited to the 256-level gray scale of existing display devices. This paper therefore proposes a metric, the "mean signal-to-noise ratio" (MSNR), based on signal-to-noise, in order to evaluate frame-accumulated medical image quality more reasonably. We demonstrate its potential application through a series of images acquired under a constant illumination signal. Here, the mean image of a sufficiently large number of images was regarded as the reference image. Several groups of images with different numbers of accumulated frames were formed and their MSNR values were calculated. The results of the experiment show that, compared with other quality assessment methods, the metric is simpler, more effective, and more suitable for assessing frame-accumulated images that surpass the gray scale and precision of the original images.
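
    A minimal sketch of one plausible reading of the description above (the paper's exact MSNR formula may differ): the pixel-wise mean of many frames serves as the reference "signal", and the deviation of a frame-accumulated image from that reference as "noise".

```python
import numpy as np

def msnr(accumulated_image: np.ndarray, reference_mean: np.ndarray) -> float:
    """Mean signal-to-noise ratio of an image against a mean reference image (in dB)."""
    noise = accumulated_image.astype(float) - reference_mean.astype(float)
    noise_power = np.mean(noise ** 2)
    signal_power = np.mean(reference_mean.astype(float) ** 2)
    return 10.0 * np.log10(signal_power / noise_power)

# Synthetic example: 100 noisy frames under a constant illumination signal.
rng = np.random.default_rng(0)
frames = 128.0 + rng.normal(0.0, 5.0, size=(100, 64, 64))
reference = frames.mean(axis=0)        # mean image regarded as the reference
accum_10 = frames[:10].mean(axis=0)    # 10-frame accumulation
accum_50 = frames[:50].mean(axis=0)    # 50-frame accumulation
print(msnr(accum_10, reference), msnr(accum_50, reference))
```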

  4. A software quality model and metrics for risk assessment

    Science.gov (United States)

    Hyatt, L.; Rosenberg, L.

    1996-01-01

A software quality model and its associated attributes are defined and used as the basis for a discussion of risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.

  5. Unsupervised Quality Assessment of Mass Spectrometry Proteomics Experiments by Multivariate Quality Control Metrics.

    Science.gov (United States)

    Bittremieux, Wout; Meysman, Pieter; Martens, Lennart; Valkenborg, Dirk; Laukens, Kris

    2016-04-01

Despite many technological and computational advances, the results of a mass spectrometry proteomics experiment are still subject to large variability. To understand and evaluate how technical variability affects the results of an experiment, several computationally derived quality control metrics have been introduced. However, despite the availability of these metrics, a systematic approach to quality control is often still lacking because the metrics are not fully understood and are hard to interpret. Here, we present a toolkit of powerful techniques to analyze and interpret multivariate quality control metrics to assess the quality of mass spectrometry proteomics experiments. We show how unsupervised techniques applied to these quality control metrics can provide an initial discrimination between low-quality and high-quality experiments prior to manual investigation. Furthermore, we provide a technique to obtain detailed information on the quality control metrics that are related to decreased performance, which can be used as actionable information to improve the experimental setup. Our toolkit is released as open source and can be downloaded from https://bitbucket.org/proteinspector/qc_analysis/.
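
    A minimal sketch of the kind of unsupervised screening described above (not the toolkit's actual code; see the linked repository for that): standardize a matrix of quality control metrics, reduce it with PCA, and flag runs that sit far from the bulk of experiments for manual inspection first.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
qc_metrics = rng.normal(size=(40, 8))   # 40 runs x 8 QC metrics (placeholder data)
qc_metrics[3] += 6.0                    # one deliberately aberrant run

scaled = StandardScaler().fit_transform(qc_metrics)
scores = PCA(n_components=2).fit_transform(scaled)

# Distance from the median of the experiments in the reduced space as an outlier score.
dist = np.linalg.norm(scores - np.median(scores, axis=0), axis=1)
suspect = np.argsort(dist)[::-1][:3]
print("Runs to inspect first:", suspect)
```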

  6. Supporting analysis and assessments quality metrics: Utility market sector

    Energy Technology Data Exchange (ETDEWEB)

    Ohi, J. [National Renewable Energy Lab., Golden, CO (United States)

    1996-10-01

In FY96, NREL was asked to coordinate all analysis tasks so that in FY97 these tasks would be part of an integrated analysis agenda that would begin to define a 5-15 year R&D roadmap and portfolio for the DOE Hydrogen Program. The purpose of the Supporting Analysis and Assessments task at NREL is to provide this coordination and conduct specific analysis tasks. One of these tasks is to prepare the Quality Metrics (QM) for the Program as part of the overall QM effort at DOE/EERE. The Hydrogen Program is one of 39 program planning units conducting QM, a process begun in FY94 to assess the benefits and costs of DOE/EERE programs. The purpose of QM is to inform decision making during the budget formulation process by describing the expected outcomes of programs during the budget request process. QM is expected to establish a first step toward merit-based budget formulation and allow DOE/EERE to get the "most bang for its (R&D) buck." In FY96, NREL coordinated a QM team that prepared a preliminary QM for the utility market sector. In the electricity supply sector, the QM analysis shows hydrogen fuel cells capturing 5% (or 22 GW) of the total market of 390 GW of new capacity additions through 2020. Hydrogen consumption in the utility sector increases from 0.009 Quads in 2005 to 0.4 Quads in 2020. Hydrogen fuel cells are projected to displace over 0.6 Quads of primary energy in 2020. In future work, NREL will assess the market for decentralized, on-site generation; develop cost credits for distributed generation benefits (such as deferral of transmission and distribution investments and uninterruptible power service), for by-products such as heat and potable water, and for environmental benefits (reduction of criteria air pollutants and greenhouse gas emissions); compete different fuel cell technologies against each other for market share; and begin to address economic benefits, especially employment.

  7. Quality metrics in endoscopy.

    Science.gov (United States)

    Gurudu, Suryakanth R; Ramirez, Francisco C

    2013-04-01

    Endoscopy has evolved in the past 4 decades to become an important tool in the diagnosis and management of many digestive diseases. Greater focus on endoscopic quality has highlighted the need to ensure competency among endoscopists. A joint task force of the American College of Gastroenterology and the American Society for Gastrointestinal Endoscopy has proposed several quality metrics to establish competence and help define areas of continuous quality improvement. These metrics represent quality in endoscopy pertinent to pre-, intra-, and postprocedural periods. Quality in endoscopy is a dynamic and multidimensional process that requires continuous monitoring of several indicators and benchmarking with local and national standards. Institutions and practices should have a process in place for credentialing endoscopists and for the assessment of competence regarding individual endoscopic procedures.

  8. Software Quality Assurance Metrics

    Science.gov (United States)

    McRae, Kalindra A.

    2004-01-01

Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and achieving a degree of excellence and refinement in a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product. The process is measured to improve it, and the product is measured to increase quality throughout the life cycle of the software. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of software development can be measured. If software metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects but are not currently being used by the SA team, and report them to the Software Assurance team to see if any of the metrics can be implemented in their software assurance life cycle process.

  9. Colonoscopy quality: metrics and implementation.

    Science.gov (United States)

    Calderwood, Audrey H; Jacobson, Brian C

    2013-09-01

    Colonoscopy is an excellent area for quality improvement because it is high volume, has significant associated risk and expense, and there is evidence that variability in its performance affects outcomes. The best end point for validation of quality metrics in colonoscopy is colorectal cancer incidence and mortality, but a more readily accessible metric is the adenoma detection rate. Fourteen quality metrics were proposed in 2006, and these are described in this article. Implementation of quality improvement initiatives involves rapid assessments and changes on an iterative basis, and can be done at the individual, group, or facility level. Copyright © 2013 Elsevier Inc. All rights reserved.
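
    A minimal sketch with made-up records of the readily accessible metric the abstract mentions: the adenoma detection rate (ADR) is the fraction of screening colonoscopies in which at least one adenoma is found.

```python
# Hypothetical screening colonoscopy records (illustrative only).
screening_exams = [
    {"id": 1, "adenomas_found": 2},
    {"id": 2, "adenomas_found": 0},
    {"id": 3, "adenomas_found": 1},
    {"id": 4, "adenomas_found": 0},
]
adr = sum(e["adenomas_found"] > 0 for e in screening_exams) / len(screening_exams)
print(f"ADR = {adr:.0%}")
```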

  10. Quality Assessment of Adaptive Bitrate Videos using Image Metrics and Machine Learning

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Brunnström, Kjell

    2015-01-01

Adaptive bitrate (ABR) streaming is widely used for distribution of videos over the internet. In this work, we investigate how well we can predict the quality of such videos using well-known image metrics, information about the bitrate levels, and a relatively simple machine learning method. ... Quality assessment of ABR videos is a hard problem, but our initial results are promising. We obtain a Spearman rank order correlation of 0.88 using content-independent cross-validation. ...
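
    A minimal sketch under assumptions (the paper's exact features and learner are not reproduced here; the data below are synthetic): predict subjective quality from image-metric and bitrate features with a simple regressor, hold out whole source contents during cross-validation, and report the Spearman rank correlation between predictions and the subjective scores.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_predict

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 5))          # e.g. PSNR, SSIM, blockiness, bitrate level, ...
y = X @ np.array([0.5, 1.0, -0.7, 0.3, 0.2]) + rng.normal(0, 0.3, 120)  # MOS proxy
groups = np.repeat(np.arange(12), 10)  # 12 source contents -> content-independent CV

pred = cross_val_predict(RandomForestRegressor(random_state=0), X, y,
                         cv=GroupKFold(n_splits=6), groups=groups)
rho, _ = spearmanr(y, pred)
print(f"Spearman rank correlation: {rho:.2f}")
```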

  11. Universal blind image quality assessment metrics via natural scene statistics and multiple kernel learning.

    Science.gov (United States)

    Gao, Xinbo; Gao, Fei; Tao, Dacheng; Li, Xuelong

    2013-12-01

Universal blind image quality assessment (IQA) metrics that can work for various distortions are of great importance for image processing systems, because in practice ground truths are not available and the distortion types are not always known. Existing state-of-the-art universal blind IQA algorithms are developed based on natural scene statistics (NSS). Although NSS-based metrics have obtained promising performance, they have some limitations: 1) they use either the Gaussian scale mixture model or the generalized Gaussian density to predict the non-Gaussian marginal distribution of wavelet, Gabor, or discrete cosine transform coefficients. The prediction error makes the extracted features unable to reflect changes in non-Gaussianity (NG) accurately. Existing algorithms use joint statistical models and structural similarity to model the local dependency (LD). Although this LD essentially encodes the information redundancy in natural images, these models do not use information divergence to measure the LD. And although the exponential decay characteristic (EDC) represents the property of natural images that large/small wavelet coefficient magnitudes tend to persist across scales, which is highly correlated with image degradations, it has not been applied to universal blind IQA metrics; and 2) all the universal blind IQA metrics use the same similarity measure for different features when learning, even though these features have different properties. To address the aforementioned problems, we propose to construct new universal blind quality indicators using all three types of NSS, i.e., the NG, LD, and EDC, and incorporating the heterogeneous property of multiple kernel learning (MKL). By analyzing how different distortions affect these statistical properties, we present two universal blind quality assessment models, an NSS global scheme and an NSS two-step scheme. In the proposed metrics: 1) we exploit the NG of natural images

  12. Assessing colon polypectomy competency and its association with established quality metrics.

    Science.gov (United States)

    Duloy, Anna M; Kaltenbach, Tonya R; Keswani, Rajesh N

    2018-03-01

    Inadequate polypectomy leads to incomplete resection, interval colorectal cancer, and adverse events. However, polypectomy competency is rarely reported, and quality metrics are lacking. The primary aims of this study were to assess polypectomy competency among a cohort of gastroenterologists and to measure the correlation between polypectomy competency and established colonoscopy quality metrics (adenoma detection rate and withdrawal time). We conducted a prospective observational study to assess polypectomy competency among 13 high-volume screening colonoscopists at an academic medical center. Over 6 weeks, we made video recordings of ≥28 colonoscopies per colonoscopist and randomly selected 10 polypectomies per colonoscopist for evaluation. Two raters graded the polypectomies by using the Direct Observation of Polypectomy Skills, a polypectomy competency assessment tool, which assesses individual polypectomy skills and overall competency. We evaluated 130 polypectomies. A total of 83 polypectomies (64%) were rated as competent, which was more likely for diminutive (70%) than small and/or large polyps (50%, P = .03). Overall Direct Observation of Polypectomy Skills competency scores varied significantly among colonoscopists (P = .001), with overall polypectomy competency rates ranging between 30% and 90%. Individual skills scores, such as accurately directing the snare over the lesion (P = .02) and trapping an appropriate amount of tissue within the snare (P = .001) varied significantly between colonoscopists. Polypectomy competency rates did not significantly correlate with the adenoma detection rate (r = 0.4; P = .2) or withdrawal time (r = 0.2; P = .5). Polypectomy competency varies significantly among colonoscopists and does not sufficiently correlate with established quality metrics. Given the clinical implications of suboptimal polypectomy, efforts to educate colonoscopists in polypectomy techniques and develop a metric of polypectomy quality are

  13. Quality Metrics in Inpatient Neurology.

    Science.gov (United States)

    Dhand, Amar

    2015-12-01

Quality of care in the context of inpatient neurology is the standard of performance by neurologists and the hospital system as measured against ideal models of care. There are growing regulatory pressures to define health care value through concrete quantifiable metrics linked to reimbursement. Theoretical models of quality acknowledge its multimodal character with quantitative and qualitative dimensions. For example, the Donabedian model distils quality as a phenomenon of three interconnected domains, structure-process-outcome, with each domain mutually influential. The actual measurement of quality may be implicit, as in peer review in morbidity and mortality rounds, or explicit, in which criteria are prespecified and systemized before assessment. As a practical contribution, in this article a set of candidate quality indicators for inpatient neurology based on an updated review of treatment guidelines is proposed. These quality indicators may serve as an initial blueprint for explicit quality metrics long overdue for inpatient neurology.

  14. Quality metrics for detailed clinical models.

    Science.gov (United States)

    Ahn, SunJu; Huff, Stanley M; Kim, Yoon; Kalra, Dipak

    2013-05-01

To develop quality metrics for detailed clinical models (DCMs) and test their validity. Based on existing quality criteria, which did not include formal metrics, we developed quality metrics by applying the ISO/IEC 9126 software quality evaluation model. The face and content validity of the initial quality metrics were assessed by 9 international experts. Content validity was defined as agreement by over 70% of the panelists. To elicit opinions and achieve consensus among the panelists, a two-round Delphi survey was conducted. Valid quality metrics were considered reliable if agreement between two evaluators' assessments of two example DCMs exceeded 0.60 in terms of the kappa coefficient. After reliability and validity were tested, the final DCM quality metrics were selected. According to the results of the reliability test, the degree of agreement was high (a kappa coefficient of 0.73), and 8 quality evaluation domains and 29 quality metrics were finalized as the DCM quality metrics. The quality metrics were validated by a panel of international DCM experts. Therefore, we expect that the metrics, which constitute essential qualitative and quantitative quality requirements for DCMs, can be used to support rational decision-making by DCM developers and clinical users. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
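
    A minimal sketch with illustrative ratings of the agreement statistic used above: Cohen's kappa between two evaluators scoring the same detailed clinical models, where values above roughly 0.60 were treated as reliable.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical pass/fail judgments by two evaluators on the same six DCM items.
evaluator_1 = ["pass", "fail", "pass", "pass", "fail", "pass"]
evaluator_2 = ["pass", "fail", "pass", "fail", "fail", "pass"]
kappa = cohen_kappa_score(evaluator_1, evaluator_2)
print(f"kappa = {kappa:.2f}")
```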

  15. Radiation dose metrics in CT: assessing dose using the National Quality Forum CT patient safety measure.

    Science.gov (United States)

    Keegan, Jillian; Miglioretti, Diana L; Gould, Robert; Donnelly, Lane F; Wilson, Nicole D; Smith-Bindman, Rebecca

    2014-03-01

The National Quality Forum (NQF) is a nonprofit consensus organization that recently endorsed a measure focused on CT radiation doses. To comply, facilities must summarize the doses from consecutive scans within age and anatomic area strata and report the data in the medical record. Our purpose was to assess the time needed to assemble the data and to demonstrate how review of such data permits a facility to understand its doses. To assemble the data for analysis, we used the dose-monitoring software eXposure to automatically export dose metrics from consecutive scans in 2010 and 2012. For a subset of 50 examinations, we also collected dose metrics manually, copying data directly from the PACS into an Excel spreadsheet. Manual data collection for 50 scans required 2 hours and 15 minutes; eXposure compiled the data in under an hour. All dose metrics demonstrated a 30% to 50% reduction between 2010 and 2012. There was also a significant decline and a reduction in the variability of the doses over time. The NQF measure facilitates an institution's capacity to assess the doses it is using for CT as part of routine practice. The necessary data can be collected within a reasonable amount of time, either with automatic software or manually. The collection and review of these data will allow facilities to compare their radiation dose distributions with national distributions and allow assessment of temporal trends in the doses they are using. Copyright © 2014 American College of Radiology. Published by Elsevier Inc. All rights reserved.
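
    A minimal sketch of the kind of strata-level summary the NQF measure calls for, using hypothetical column names (not eXposure's actual export format): group consecutive scans by age group and anatomic area and summarize a dose metric such as CTDIvol.

```python
import pandas as pd

scans = pd.DataFrame({
    "age_group":    ["0-18", "19-64", "19-64", "65+", "19-64", "0-18"],
    "anatomy":      ["head", "abdomen", "abdomen", "chest", "head", "chest"],
    "ctdi_vol_mGy": [28.0, 14.5, 12.3, 9.8, 55.1, 3.2],
})
summary = (scans.groupby(["age_group", "anatomy"])["ctdi_vol_mGy"]
                .agg(["count", "mean", "median", "std"]))
print(summary)
```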

  16. Sigma metrics used to assess analytical quality of clinical chemistry assays: importance of the allowable total error (TEa) target.

    Science.gov (United States)

    Hens, Koen; Berth, Mario; Armbruster, Dave; Westgard, Sten

    2014-07-01

Six Sigma metrics were used to assess the analytical quality of automated clinical chemistry and immunoassay tests in a large Belgian clinical laboratory and to explore the importance of the source used for estimation of the allowable total error. Clinical laboratories are continually challenged to maintain analytical quality, but it is difficult to measure assay quality objectively and quantitatively. The Sigma metric is a single number that estimates quality based on the traditional parameters used in the clinical laboratory: allowable total error (TEa), precision, and bias. In this study, Sigma metrics were calculated for 41 clinical chemistry assays for serum and urine on five ARCHITECT c16000 chemistry analyzers. Controls at two analyte concentrations were tested and Sigma metrics were calculated using three different TEa targets (Ricos biological variability, CLIA, and RiliBÄK). Sigma metrics varied with analyte concentration, the TEa target, and between analyzers. Sigma values identified those assays that are analytically robust and require minimal quality control rules and those that exhibit more variability and require more complex rules. Analyzer-to-analyzer variability was assessed on the basis of Sigma metrics. Six Sigma is a more efficient way to control quality, but the lack of TEa targets for many analytes and the sometimes inconsistent TEa targets from different sources are important variables for the interpretation and application of Sigma metrics in a routine clinical laboratory. Sigma metrics are a valuable means of comparing the analytical quality of two or more analyzers to ensure the comparability of patient test results.
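
    A minimal sketch of the Sigma metric calculation the abstract refers to, in its commonly used form: sigma = (TEa - |bias|) / CV, with all terms expressed in percent, so that the choice of TEa target directly shifts the result.

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma = (allowable total error - |bias|) / imprecision, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Illustrative example: the same assay judged against two different TEa targets.
print(f"TEa 10%: Sigma = {sigma_metric(10.0, 1.5, 2.0):.2f}")  # 4.25 -> acceptable (>3)
print(f"TEa  6%: Sigma = {sigma_metric(6.0, 1.5, 2.0):.2f}")   # 2.25 -> below 3 sigma
```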

  17. Chromosome microarray proficiency testing and analysis of quality metric data trends through an external quality assessment program for Australasian laboratories.

    Science.gov (United States)

    Wright, D C; Adayapalam, N; Bain, N; Bain, S M; Brown, A; Buzzacott, N; Carey, L; Cross, J; Dun, K; Joy, C; McCarthy, C; Moore, S; Murch, A R; O'Malley, F; Parker, E; Watt, J; Wilkin, H; Fagan, K; Pertile, M D; Peters, G B

    2016-10-01

Chromosome microarrays are an essential tool for the investigation of copy number changes in children with congenital anomalies and intellectual deficit. Attempts to standardise microarray testing have focused on establishing technical and clinical quality criteria; however, external quality assessment programs are still needed. We report on a microarray proficiency testing program for Australasian laboratories. Quality metrics evaluated included analytical accuracy, result interpretation, report completeness, and laboratory performance data: sample numbers, success and abnormality rates, and reporting times. Between 2009 and 2014 nine samples were dispatched, with variable results for analytical accuracy (30-100%), correct interpretation (32-96%), and report completeness (30-92%). Laboratory performance data (2007-2014) showed an overall mean success rate of 99.2% and an abnormality rate of 23.6%. Reporting times decreased from >90 days over the course of the program, and trends of improvement were observed for the quality metrics; however, only 'report completeness' and reporting times reached statistical significance. Whether the overall improvement in laboratory performance was due to participation in this program, or to accumulated laboratory experience over time, is not clear. Either way, the outcome is likely to assist referring clinicians and improve patient care. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  18. Metric qualities of the cognitive behavioral assessment for outcome evaluation to estimate psychological treatment effects.

    Science.gov (United States)

    Bertolotti, Giorgio; Michielin, Paolo; Vidotto, Giulio; Sanavio, Ezio; Bottesi, Gioia; Bettinardi, Ornella; Zotti, Anna Maria

    2015-01-01

    Cognitive behavioral assessment for outcome evaluation was developed to evaluate psychological treatment interventions, especially for counseling and psychotherapy. It is made up of 80 items and five scales: anxiety, well-being, perception of positive change, depression, and psychological distress. The aim of the study was to present the metric qualities and to show validity and reliability of the five constructs of the questionnaire both in nonclinical and clinical subjects. Four steps were completed to assess reliability and factor structure: criterion-related and concurrent validity, responsiveness, and convergent-divergent validity. A nonclinical group of 269 subjects was enrolled, as was a clinical group comprising 168 adults undergoing psychotherapy and psychological counseling provided by the Italian public health service. Cronbach's alphas were between 0.80 and 0.91 for the clinical sample and between 0.74 and 0.91 in the nonclinical one. We observed an excellent structural validity for the five interrelated dimensions. The clinical group showed higher scores in the anxiety, depression, and psychological distress scales, as well as lower scores in well-being and perception of positive change scales than those observed in the nonclinical group. Responsiveness was large for the anxiety, well-being, and depression scales; the psychological distress and perception of positive change scales showed a moderate effect. The questionnaire showed excellent psychometric properties, thus demonstrating that the questionnaire is a good evaluative instrument, with which to assess pre- and post-treatment outcomes.

  19. Does the lentic-lotic character of rivers affect invertebrate metrics used in the assessment of ecological quality?

    Directory of Open Access Journals (Sweden)

    Stefania ERBA

    2009-02-01

Full Text Available The importance of local hydraulic conditions in structuring freshwater biotic communities is widely recognized by the scientific community. In spite of this, most current methods based upon invertebrates do not take this factor into account in their assessment of ecological quality. The aim of this paper is to investigate the influence of local hydraulic conditions on invertebrate community metrics and to estimate their potential weight in the evaluation of river water quality. The dataset used consisted of 130 stream sites located in four broad European geographical contexts: Alps, Central mountains, Mediterranean mountains and Lowland streams. Using River Habitat Survey data, river hydromorphology was evaluated by means of the Lentic-lotic River Descriptor and the Habitat Modification Score. To quantify the level of water pollution, a synoptic Organic Pollution Descriptor was calculated. For their established, wide applicability, the STAR Intercalibration Common Metrics and index were selected as biological quality indices. Significant relationships between the selected environmental variables and the biological metrics devoted to the evaluation of ecological quality were obtained by means of Partial Least Squares regression analysis. The lentic-lotic character was the most significant factor affecting invertebrate communities in the Mediterranean mountains, and it was also a relevant factor for most quality metrics in the Alpine and Central mountain rivers. This character should therefore be taken into account when assessing the ecological quality of rivers, because it can greatly affect the assignment of ecological status.

  20. Quantitative metrics for assessment of chemical image quality and spatial resolution.

    Science.gov (United States)

    Kertesz, Vilmos; Cahill, John F; Van Berkel, Gary J

    2016-04-15

Currently, objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of standardized metrics is required to objectively describe the chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Two image metrics, viz., "chemical image contrast" (ChemIC), based on signal-to-noise related statistical measures on chemical image pixels, and "corrected resolving power factor" (cRPF), constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image, were developed. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing and decreasing size of surface features. ChemIC and cRPF were thus developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system. Published in 2016. This article is a U.S. Government work and is in the public domain in the USA.

  1. Application of Sigma Metrics Analysis for the Assessment and Modification of Quality Control Program in the Clinical Chemistry Laboratory of a Tertiary Care Hospital.

    Science.gov (United States)

    Iqbal, Sahar; Mustansar, Tazeen

    2017-03-01

Sigma is a metric that quantifies the performance of a process as a rate of defects per million opportunities. In clinical laboratories, sigma metric analysis is used to assess the performance of the laboratory process system. The sigma metric is also used as a quality management strategy for a laboratory process, to improve quality by addressing errors after their identification. The aim of this study is to evaluate the errors in the quality control of the analytical phase of the laboratory system by the sigma metric. For this purpose, sigma metric analysis was done for analytes using internal and external quality control as quality indicators. Results of the sigma metric analysis were used to identify gaps and the need for modification in the strategy of the laboratory quality control procedure. The sigma metric was calculated for the quality control program of ten clinical chemistry analytes, including glucose, chloride, cholesterol, triglyceride, HDL, albumin, direct bilirubin, total bilirubin, protein and creatinine, at two control levels. To calculate the sigma metric, imprecision and bias were calculated from internal and external quality control data, respectively. The minimum acceptable performance was considered to be 3 sigma. Westgard sigma rules were applied to customize the quality control procedure. The sigma level was found acceptable (≥3) for glucose (L2), cholesterol, triglyceride, HDL, direct bilirubin and creatinine at both levels of control. For the rest of the analytes the sigma metric was found to be below 3, indicating the need to modify the quality control procedure. In this study, the application of sigma rules provided a practical solution for an improved and focused design of the QC procedure.
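
    A minimal, simplified sketch of how a Sigma value can drive QC rule selection in the spirit of the Westgard sigma rules mentioned above (the cut-offs and rule wordings below are illustrative of common guidance, not this laboratory's actual policy).

```python
def suggest_qc_strategy(sigma: float) -> str:
    # Illustrative thresholds only; real rule selection is laboratory specific.
    if sigma >= 6:
        return "single 1-3s rule with few control measurements"
    if sigma >= 4:
        return "add further rules such as 2-2s and R-4s"
    if sigma >= 3:
        return "full multirule procedure with more control measurements"
    return "below 3 sigma: review the method before relying on QC rules alone"

for analyte, sigma in [("cholesterol", 6.8), ("albumin", 3.4), ("chloride", 2.1)]:
    print(analyte, "->", suggest_qc_strategy(sigma))
```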

  2. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    Science.gov (United States)

    Post, J. V.

    1981-01-01

Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  3. Existing Model Metrics and Relations to Model Quality

    OpenAIRE

    Mohagheghi, Parastoo; Dehlen, Vegard

    2009-01-01

This paper presents quality goals for models and provides a state-of-the-art analysis regarding model metrics. While model-based software development often requires assessing the quality of models at different abstraction and precision levels and developed for multiple purposes, existing work on model metrics does not reflect this need. Model size metrics are descriptive and may be used for comparing models, but their relation to model quality is not well defined. Code metrics are proposed to be ...

  4. Quality metrics for sensor images

    Science.gov (United States)

    Ahumada, AL

    1993-01-01

Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftone optimization methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery

  5. A management-oriented framework for selecting metrics used to assess habitat- and path-specific quality in spatially structured populations

    Science.gov (United States)

    Nicol, Sam; Wiederholt, Ruscena; Diffendorfer, James E.; Mattsson, Brady; Thogmartin, Wayne E.; Semmens, Darius J.; Laura Lopez-Hoffman,; Norris, Ryan

    2016-01-01

    Mobile species with complex spatial dynamics can be difficult to manage because their population distributions vary across space and time, and because the consequences of managing particular habitats are uncertain when evaluated at the level of the entire population. Metrics to assess the importance of habitats and pathways connecting habitats in a network are necessary to guide a variety of management decisions. Given the many metrics developed for spatially structured models, it can be challenging to select the most appropriate one for a particular decision. To guide the management of spatially structured populations, we define three classes of metrics describing habitat and pathway quality based on their data requirements (graph-based, occupancy-based, and demographic-based metrics) and synopsize the ecological literature relating to these classes. Applying the first steps of a formal decision-making approach (problem framing, objectives, and management actions), we assess the utility of metrics for particular types of management decisions. Our framework can help managers with problem framing, choosing metrics of habitat and pathway quality, and to elucidate the data needs for a particular metric. Our goal is to help managers to narrow the range of suitable metrics for a management project, and aid in decision-making to make the best use of limited resources.
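
    A minimal sketch of one possible metric from the graph-based class described above (an illustration, not the authors' specific recommendation): ranking habitat patches in a migratory network by betweenness centrality, a measure of how often a patch lies on pathways connecting other patches. All node names are hypothetical.

```python
import networkx as nx

# Hypothetical habitat patches (nodes) and migration pathways (directed edges).
network = nx.DiGraph()
network.add_edges_from([
    ("breeding_A", "stopover_1"), ("breeding_A", "stopover_2"),
    ("breeding_B", "stopover_2"),
    ("stopover_1", "wintering"), ("stopover_2", "wintering"),
])
centrality = nx.betweenness_centrality(network)
# Patches with the highest centrality are candidates for protection priority.
print(sorted(centrality.items(), key=lambda kv: kv[1], reverse=True))
```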

  6. Metric qualities of the cognitive behavioral assessment for outcome evaluation to estimate psychological treatment effects

    Directory of Open Access Journals (Sweden)

    Bertolotti G

    2015-09-01

    Full Text Available Giorgio Bertolotti,1 Paolo Michielin,2 Giulio Vidotto,2 Ezio Sanavio,2 Gioia Bottesi,2 Ornella Bettinardi,3 Anna Maria Zotti4 1Psychology Unit, Salvatore Maugeri Foundation, IRCCS, Scientific Institute, Tradate, VA, 2Department of General Psychology, Padua University, Padova, 3Department of Mental Health and Addictive Behavior, AUSL Piacenza, Piacenza, 4Salvatore Maugeri Foundation, IRCCS, Scientific Institute, Veruno, NO, Italy Background: Cognitive behavioral assessment for outcome evaluation was developed to evaluate psychological treatment interventions, especially for counseling and psychotherapy. It is made up of 80 items and five scales: anxiety, well-being, perception of positive change, depression, and psychological distress. The aim of the study was to present the metric qualities and to show validity and reliability of the five constructs of the questionnaire both in nonclinical and clinical subjects. Methods: Four steps were completed to assess reliability and factor structure: criterion-related and concurrent validity, responsiveness, and convergent–divergent validity. A nonclinical group of 269 subjects was enrolled, as was a clinical group comprising 168 adults undergoing psychotherapy and psychological counseling provided by the Italian public health service. Results: Cronbach’s alphas were between 0.80 and 0.91 for the clinical sample and between 0.74 and 0.91 in the nonclinical one. We observed an excellent structural validity for the five interrelated dimensions. The clinical group showed higher scores in the anxiety, depression, and psychological distress scales, as well as lower scores in well-being and perception of positive change scales than those observed in the nonclinical group. Responsiveness was large for the anxiety, well-being, and depression scales; the psychological distress and perception of positive change scales showed a moderate effect. Conclusion: The questionnaire showed excellent psychometric

  7. Landscape pattern metrics and regional assessment

    Science.gov (United States)

    O'Neill, R. V.; Riitters, K.H.; Wickham, J.D.; Jones, K.B.

    1999-01-01

The combination of remote imagery data, geographic information systems software, and landscape ecology theory provides a unique basis for monitoring and assessing large-scale ecological systems. The unique feature of the work has been the need to develop and interpret quantitative measures of spatial pattern: the landscape indices. This article reviews what is known about the statistical properties of these pattern metrics and suggests some additional metrics based on island biogeography, percolation theory, hierarchy theory, and economic geography. Assessment applications of this approach have required interpreting the pattern metrics in terms of specific environmental endpoints, such as wildlife and water quality, and research into how to represent synergistic effects of many overlapping sources of stress.

  8. Static and Dynamic Software Quality Metric Tools

    OpenAIRE

    Mayo, Kevin A.; Wake, Steven A.; Henry, Sallie M.

    1990-01-01

    The ability to detect and predict poor software quality is of major importance to software engineers, managers, and quality assurance organizations. Poor software quality leads to increased development costs and expensive maintenance. With so much attention on exacerbated budgetary constraints, a viable alternative is necessary. Software quality metrics are designed for this purpose. Metrics measure aspects of code or PDL representations, and can be collected and used throughout the life ...

  9. What Metrics Accurately Reflect Surgical Quality?

    Science.gov (United States)

    Ibrahim, Andrew M; Dimick, Justin B

    2018-01-29

    Surgeons are increasingly under pressure to measure and improve their quality. While there is broad consensus that we ought to track surgical quality, there is far less agreement about which metrics matter most. This article reviews the important statistical concepts of case mix and chance as they apply to understanding the observed wide variation in surgical quality. We then discuss the benefits and drawbacks of current measurement strategies through the framework of structure, process, and outcomes approaches. Finally, we describe emerging new metrics, such as video evaluation and network optimization, that are likely to take on an increasingly important role in the future of measuring surgical quality.

  10. Image quality metrics for the evaluation of print quality

    Science.gov (United States)

    Pedersen, Marius; Bonnier, Nicolas; Hardeberg, Jon Y.; Albregtsen, Fritz

    2011-01-01

Image quality metrics have become more and more popular in the image processing community. However, so far, no one has been able to define an image quality metric well correlated with the percept for overall image quality. One of the causes is that image quality is multi-dimensional and complex. One approach to bridge the gap between perceived and calculated image quality is to reduce the complexity of image quality by breaking the overall quality into a set of quality attributes. In our research we have presented a set of quality attributes built on existing attributes from the literature. The six proposed quality attributes are: sharpness, color, lightness, artifacts, contrast, and physical. This set keeps the dimensionality to a minimum. An experiment validated the quality attributes as suitable for image quality evaluation. The process of applying image quality metrics to printed images is not straightforward, because image quality metrics require a digital input. A framework has been developed for this process, which includes scanning the print to get a digital copy, image registration, and the application of image quality metrics. With quality attributes for the evaluation of image quality and a framework for applying image quality metrics, a selection of suitable image quality metrics for the different quality attributes has been carried out. Each of the quality attributes has been investigated, and an experimental analysis carried out to find the most suitable image quality metrics for the given quality attributes. For the sharpness attribute the Structural SIMilarity index (SSIM) by Wang et al. (2004) is the most suitable, and for the other attributes further evaluation is required.
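
    A minimal sketch of computing SSIM (Wang et al., 2004), the metric found most suitable for the sharpness attribute, between a reference image and its reproduction; both images are synthetic here and assumed to be pre-registered grayscale, as the framework's scanning and registration steps would provide.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(3)
reference = rng.random((128, 128))
reproduction = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)

score = structural_similarity(reference, reproduction, data_range=1.0)
print(f"SSIM = {score:.3f}")
```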

  11. Survival As a Quality Metric of Cancer Care: Use of the National Cancer Data Base to Assess Hospital Performance.

    Science.gov (United States)

    Shulman, Lawrence N; Palis, Bryan E; McCabe, Ryan; Mallin, Kathy; Loomis, Ashley; Winchester, David; McKellar, Daniel

    2018-01-01

    Survival is considered an important indicator of the quality of cancer care, but the validity of different methodologies to measure comparative survival rates is less well understood. We explored whether the National Cancer Data Base (NCDB) could serve as a source of unadjusted and risk-adjusted cancer survival data and whether these data could be used as quality indicators for individual hospitals or in the aggregate by hospital type. The NCDB, an aggregate of > 1,500 hospital cancer registries, was queried to analyze unadjusted and risk-adjusted hazards of death for patients with stage III breast cancer (n = 116,787) and stage IIIB or IV non-small-cell lung cancer (n = 252,392). Data were analyzed at the individual hospital level and by hospital type. At the hospital level, after risk adjustment, few hospitals had comparative risk-adjusted survival rates that were statistically better or worse. By hospital type, National Cancer Institute-designated comprehensive cancer centers had risk-adjusted survival ratios that were statistically significantly better than those of academic cancer centers and community hospitals. Using the NCDB as the data source, survival rates for patients with stage III breast cancer and stage IIIB or IV non-small-cell lung cancer were statistically better at National Cancer Institute-designated comprehensive cancer centers when compared with other hospital types. Compared with academic hospitals, risk-adjusted survival was lower in community hospitals. At the individual hospital level, after risk adjustment, few hospitals were shown to have statistically better or worse survival, suggesting that, using NCDB data, survival may not be a good metric to determine relative quality of cancer care at this level.
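
    A minimal sketch of the kind of risk adjustment involved in such comparisons, under assumptions: hypothetical columns (not the NCDB variables) and a Cox proportional hazards model from the lifelines package, with a small ridge penalty to keep the fit stable on this tiny illustrative sample.

```python
import pandas as pd
from lifelines import CoxPHFitter

patients = pd.DataFrame({
    "months":     [12, 30, 8, 45, 22, 60, 5, 34, 18, 50, 9, 27],
    "died":       [1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0],
    "age":        [62, 55, 71, 48, 66, 59, 74, 50, 63, 58, 69, 52],
    "stage_iv":   [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0],
    "nci_center": [0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0],  # 1 = NCI-designated center
})
cph = CoxPHFitter(penalizer=0.1).fit(patients, duration_col="months", event_col="died")
# Hazard ratios for hospital type after adjusting for age and stage.
print(cph.summary[["coef", "exp(coef)", "p"]])
```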

  12. Assessing De Novo transcriptome assembly metrics for consistency and utility.

    Science.gov (United States)

    O'Neil, Shawn T; Emrich, Scott J

    2013-07-09

Transcriptome sequencing and assembly represent a great resource for the study of non-model species, and many metrics have been used to evaluate and compare these assemblies. Unfortunately, it is still unclear which of these metrics accurately reflect assembly quality. We simulated the sequencing of Drosophila melanogaster transcripts. By assembling these simulated reads using both a "perfect" and a modern transcriptome assembler while varying read length and sequencing depth, we evaluated quality metrics to determine whether they 1) revealed perfect assemblies to be of higher quality, and 2) revealed perfect assemblies to be more complete as data quantity increased. Several commonly used metrics were not consistent with these expectations, including average contig coverage and length, though they became consistent when singletons were included in the analysis. We found several annotation-based metrics to be consistent and informative, including contig reciprocal best hit count and contig unique annotation count. Finally, we evaluated a number of novel metrics such as reverse annotation count, contig collapse factor, and the ortholog hit ratio, discovering that each assesses assembly quality in unique ways. Although much attention has been given to transcriptome assembly, little research has focused on determining how best to evaluate assemblies, particularly in light of the variety of options available for read length and sequencing depth. Our results provide an important review of these metrics and give researchers tools to produce the highest quality transcriptome assemblies.
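
    A minimal sketch of the point made above about length-based metrics, with made-up numbers: the average contig length of the same assembly looks very different depending on whether singletons (unassembled reads) are included.

```python
contigs = [5000, 3200, 1800, 900, 450]   # assembled contig lengths (illustrative)
singletons = [150] * 40                  # reads left unassembled

mean_contigs_only = sum(contigs) / len(contigs)
mean_with_singletons = sum(contigs + singletons) / len(contigs + singletons)
print(f"mean contig length, contigs only:      {mean_contigs_only:.0f}")
print(f"mean contig length, incl. singletons:  {mean_with_singletons:.0f}")
```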

  13. Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling

    Science.gov (United States)

    Mog, Robert A.

    1997-01-01

Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.

  14. How to evaluate objective video quality metrics reliably

    DEFF Research Database (Denmark)

    Korhonen, Jari; Burini, Nino; You, Junyong

    2012-01-01

The typical procedure for evaluating the performance of different objective quality metrics and indices involves comparisons between subjective quality ratings and the quality indices obtained using the objective metrics in question on the known video sequences. Several correlation indicators can ... as processing of subjective data. We also suggest some general guidelines for researchers to make comparison studies of objective video quality metrics more reliable and useful for the practitioners in the field. ...

  15. A Single Conjunction Risk Assessment Metric: the F-Value

    Science.gov (United States)

    Frigm, Ryan Clayton; Newman, Lauri K.

    2009-01-01

    The Conjunction Assessment Team at NASA Goddard Space Flight Center provides conjunction risk assessment for many NASA robotic missions. These risk assessments are based on several figures of merit, such as miss distance, probability of collision, and orbit determination solution quality. However, these individual metrics do not singly capture the overall risk associated with a conjunction, making it difficult for someone without this complete understanding to take action, such as an avoidance maneuver. The goal of this analysis is to introduce a single risk index metric that can easily convey the level of risk without all of the technical details. The proposed index is called the conjunction "F-value." This paper presents the concept of the F-value and the tuning of the metric for use in routine Conjunction Assessment operations.

  16. Improving Endoscopic Adherence to Quality Metrics in Colonoscopy.

    Science.gov (United States)

    Lu, Jonathan J; Decker, Christopher H; Connolly, Sean E

    2015-01-01

    Appropriate documentation of quality metrics in the endoscopy reports provides evidence that a thorough and complete examination was performed. The aim of our study was to assess compliance with 3 current quality metrics for colonoscopy defined by the American Society for Gastrointestinal Endoscopy. We retrospectively examined colonoscopy reports from 6 gastroenterologists at Ochsner Medical Center for appropriate documentation of the quality of the bowel preparation and photodocumentation of the appendiceal orifice and the ileocecal valve. A performance review and educational session then took place with each physician. Subsequent colonoscopy reports were evaluated to monitor for improvement. Bowel preparation documentation was high before and after the educational sessions (97.5% and 97.2%). Preeducation, the mean photodocumentation rate of the appendiceal orifice was 55% (range, 23%-84%). For the ileocecal valve, the documentation rate was 32.5% (range, 3%-73%). Posteducation, the mean appendiceal orifice labeling increased to an average of 91%, with a median change of 28.5% (P=0.0313). Documentation of the ileocecal valve improved to an average of 73%, a median change of 37.5% (P=0.0625). Although reassessment of subsequent reports will be necessary to evaluate the permanence of this intervention, our evidence suggests that educational sessions can improve the quality and accuracy of documentation of quality metrics during colonoscopies.

  17. Quality metrics in neonatal and pediatric critical care transport: a consensus statement.

    Science.gov (United States)

    Bigham, Michael T; Schwartz, Hamilton P

    2013-06-01

The transport of neonatal and pediatric patients to tertiary care medical centers for specialized care demands monitoring the quality of care delivered during transport and its impact on patient outcomes. Accurate assessment of quality indicators and patient outcomes requires the use of a standard language permitting comparisons among transport programs. No consensus exists on a set of quality metrics for benchmarking transport teams. The aim of this project was to achieve consensus on appropriate neonatal and pediatric transport quality metrics. Candidate quality metrics were identified through literature review and from the metrics currently tracked by each program. Consensus was governed by nominal group technique. Metrics were categorized in two dimensions: Institute of Medicine quality domains and Donabedian's structure/process/outcome framework. The setting was a two-day Ohio statewide quality metrics conference, where nineteen transport leaders and staff representing six statewide neonatal/pediatric specialty programs convened to achieve consensus. Two hundred fifty-seven performance metrics relevant to neonatal/pediatric transport were identified. Eliminating duplicate and overlapping metrics resulted in 70 candidate metrics. Nominal group methodology yielded 23 final quality metrics, the largest portion representing Donabedian's outcome category (n = 12, 52%) and the Institute of Medicine quality domains of effectiveness (n = 7, 30%) and safety (n = 9, 39%). Sample final metrics include measurement of family presence, pain management, intubation success, neonatal temperature control, use of lights and sirens, and medication errors. Lastly, a definition for each metric was established and agreed upon for consistency among institutions. This project demonstrates that quality metrics can be achieved through consensus building and provides the foundation for benchmarking among neonatal and pediatric transport programs and quality improvement projects.

  18. Survey on Impact of Software Metrics on Software Quality

    OpenAIRE

    Mrinal Singh Rawat; Arpita Mittal; Sanjay Kumar Dubey

    2012-01-01

    Software metrics provide a quantitative basis for planning and predicting software development processes. Therefore the quality of software can be controlled and improved easily. Quality in fact aids higher productivity, which has brought software metrics to the forefront. This research paper focuses on different views on software quality. Moreover, many metrics and models have been developed; promoted and utilized resulting in remarkable successes. This paper examines the realm of software e...

  19. Towards Video Quality Metrics Based on Colour Fractal Geometry

    Directory of Open Access Journals (Sweden)

    Richard Noël

    2010-01-01

    Full Text Available Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology, and colour. Unfortunately, these elements have so far been taken into consideration independently in the development of image and video quality metrics, so we propose an approach that blends them together. Our approach allows for the analysis of the complexity of colour images in the RGB colour space, based on a probabilistic algorithm for calculating the fractal dimension and lacunarity. Given that the existing fractal approaches are defined only for gray-scale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the change in fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal and can serve as metrics for user-perceived video quality degradation. We validated them through experimental results obtained for an MPEG-4 video streaming application; finally, the results are compared against those given by widely accepted metrics and subjective tests.
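
    The record does not spell out the probabilistic fractal algorithm, so the following is only a rough sketch, assuming RGB frames supplied as NumPy arrays, of how a colour fractal dimension could be estimated by box counting with each pixel treated as a point (x, y, R, G, B); the function name, scaling, and box sizes are illustrative, not the authors' implementation.

        import numpy as np

        def colour_fractal_dimension(img, box_sizes=(2, 4, 8, 16, 32)):
            # Treat each pixel as a point (x, y, R, G, B) in a 5-D space, count the
            # occupied 5-D boxes at several box sizes, and fit the slope of
            # log(count) versus log(1/size). Illustrative only, not the cited
            # probabilistic algorithm.
            h, w, _ = img.shape
            ys, xs = np.mgrid[0:h, 0:w]
            pts = np.column_stack([
                xs.ravel() * 255.0 / max(w - 1, 1),   # spatial coordinates rescaled
                ys.ravel() * 255.0 / max(h - 1, 1),   # to the 0..255 colour range
                img.reshape(-1, 3).astype(float),
            ])
            counts = []
            for s in box_sizes:
                cells = np.floor(pts / s).astype(int)
                counts.append(len(np.unique(cells, axis=0)))
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
            return slope

        # Example: complexity of a synthetic frame standing in for a video frame.
        rng = np.random.default_rng(0)
        frame = rng.integers(0, 256, size=(64, 64, 3))
        print(colour_fractal_dimension(frame))

    Per the hypothesis stated in the abstract, the change in such a complexity measure between a reference frame and a degraded frame would then be mapped to a quality score.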

  20. Quality in Software Development: a pragmatic approach using metrics

    Directory of Open Access Journals (Sweden)

    Daniel Acton

    2014-06-01

    Full Text Available As long as software has been produced, there have been efforts to strive for quality in software products. In order to understand quality in software products, researchers have built models of software quality that rely on metrics in an attempt to provide a quantitative view of software quality. The aim of these models is to provide software producers with the capability to define and evaluate metrics related to quality and use these metrics to improve the quality of the software they produce over time. The main disadvantage of these models is that they require effort and resources to define and evaluate metrics from software projects. This article briefly describes some prominent models of software quality in the literature and continues to describe a new approach to gaining insight into quality in software development projects. A case study based on this new approach is described and results from the case study are discussed.

  1. Quality metric for spherical panoramic video

    Science.gov (United States)

    Zakharchenko, Vladyslav; Choi, Kwang Pyo; Park, Jeong Hoon

    2016-09-01

    Virtual reality (VR) and augmented reality (AR) applications allow users to view artificial content of a surrounding space, simulating a presence effect with the help of special applications or devices. Synthetic content production is a well-known process from the computer graphics domain, and its pipeline is already established in the industry. However, emerging multimedia formats for immersive entertainment applications, such as free-viewpoint television (FTV) or spherical panoramic video, require different approaches to content management and quality assessment. International standardization of FTV has been promoted by MPEG. This paper is dedicated to a discussion of the immersive media distribution format and the quality estimation process. The accuracy and reliability of the proposed objective quality estimation method were verified with spherical panoramic images, demonstrating good correlation with subjective quality estimation performed by a group of experts.
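
    The abstract does not disclose the proposed objective method, so purely as an illustration of why spherical content needs different treatment than ordinary frames, the sketch below computes a latitude-weighted PSNR over an equirectangular frame (in the spirit of weighted-sphere PSNR); the weighting scheme, names, and peak value are assumptions, not the paper's metric.

        import numpy as np

        def weighted_spherical_psnr(ref, test, peak=255.0):
            # The equirectangular projection over-samples the poles, so each row is
            # weighted by cos(latitude) before averaging the squared error.
            h = ref.shape[0]
            lat = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2        # -pi/2 .. pi/2
            weights = np.cos(lat)[:, None]
            if ref.ndim == 3:
                weights = weights[:, :, None]                          # broadcast over colour
            err = (ref.astype(float) - test.astype(float)) ** 2
            wmse = np.average(err, weights=np.broadcast_to(weights, err.shape))
            return 10.0 * np.log10(peak ** 2 / wmse)

        # Example with synthetic frames standing in for original/compressed panoramas.
        rng = np.random.default_rng(1)
        original = rng.integers(0, 256, size=(180, 360, 3))
        compressed = np.clip(original + rng.normal(0, 4, original.shape), 0, 255)
        print(weighted_spherical_psnr(original, compressed))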

  2. Indicators and metrics for the assessment of climate engineering

    Science.gov (United States)

    Oschlies, A.; Held, H.; Keller, D.; Keller, K.; Mengis, N.; Quaas, M.; Rickels, W.; Schmidt, H.

    2017-01-01

    Selecting appropriate indicators is essential to aggregate the information provided by climate model outputs into a manageable set of relevant metrics on which assessments of climate engineering (CE) can be based. From all the variables potentially available from climate models, indicators need to be selected that are able to inform scientists and society on the development of the Earth system under CE, as well as on possible impacts and side effects of various ways of deploying CE or not. However, the indicators used so far have been largely identical to those used in climate change assessments and do not visibly reflect the fact that indicators for assessing CE (and thus the metrics composed of these indicators) may be different from those used to assess global warming. Until now, there has been little dedicated effort to identifying specific indicators and metrics for assessing CE. We here propose that such an effort should be facilitated by a more decision-oriented approach and an iterative procedure in close interaction between academia, decision makers, and stakeholders. Specifically, synergies and trade-offs between social objectives reflected by individual indicators, as well as decision-relevant uncertainties should be considered in the development of metrics, so that society can take informed decisions about climate policy measures under the impression of the options available, their likely effects and side effects, and the quality of the underlying knowledge base.

  3. Evaluating which plan quality metrics are appropriate for use in lung SBRT.

    Science.gov (United States)

    Yaparpalvi, Ravindra; Garg, Madhur K; Shen, Jin; Bodner, William R; Mynampati, Dinesh K; Gafar, Aleiya; Kuo, Hsiang-Chi; Basavatia, Amar K; Ohri, Nitin; Hong, Linda X; Kalnicki, Shalom; Tome, Wolfgang A

    2018-02-01

    Several dose metrics in the categories of homogeneity, coverage, conformity, and gradient have been proposed in the literature for evaluating treatment plan quality. In this study, we applied these metrics to characterize and identify the plan quality metrics that would merit plan quality assessment in lung stereotactic body radiation therapy (SBRT) dose distributions. Treatment plans of 90 lung SBRT patients, comprising 91 targets, treated in our institution were retrospectively reviewed. Dose calculations were performed using the anisotropic analytical algorithm (AAA) with heterogeneity correction. A literature review on published plan quality metrics in the categories of coverage, homogeneity, conformity, and gradient was performed. For each patient, plan quality metric values were quantified and analysed using dose-volume histogram data. For the study, the radiation therapy oncology group (RTOG)-defined plan quality metrics were: coverage (0.90 ± 0.08); homogeneity (1.27 ± 0.07); conformity (1.03 ± 0.07); and gradient (4.40 ± 0.80). Geometric conformity strongly correlated with the conformity index. We identified the following metrics as appropriate surrogates for establishing SBRT lung plan quality guidelines: coverage % (ICRU 62), conformity (CN or Paddick CI), and gradient (R50%). Furthermore, we strongly recommend that RTOG lung SBRT protocols adopt either CN or the Paddick CI in place of the prescription isodose to target volume ratio for conformity index evaluation. Advances in knowledge: Our study metrics are valuable tools for establishing lung SBRT plan quality guidelines.
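
    For reference, the coverage, Paddick conformity, and R50% gradient metrics named above are commonly computed from a handful of plan volumes; the sketch below uses the usual published definitions with made-up example volumes (variable names are illustrative).

        def sbrt_plan_metrics(tv, piv, tv_piv, piv_half):
            # tv       : target volume (cc)
            # piv      : volume enclosed by the prescription isodose (cc)
            # tv_piv   : target volume covered by the prescription isodose (cc)
            # piv_half : volume enclosed by 50% of the prescription isodose (cc)
            coverage = tv_piv / tv                  # fraction of target receiving the prescription dose
            paddick_cn = tv_piv ** 2 / (tv * piv)   # Paddick conformity index (CN)
            r50 = piv_half / tv                     # intermediate-dose gradient metric
            return {"coverage": coverage, "paddick_cn": paddick_cn, "r50": r50}

        # Example with hypothetical volumes read off a dose-volume histogram:
        print(sbrt_plan_metrics(tv=25.0, piv=27.0, tv_piv=24.0, piv_half=110.0))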

  4. Development of quality metrics for ambulatory pediatric cardiology: Infection prevention.

    Science.gov (United States)

    Johnson, Jonathan N; Barrett, Cindy S; Franklin, Wayne H; Graham, Eric M; Halnon, Nancy J; Hattendorf, Brandy A; Krawczeski, Catherine D; McGovern, James J; O'Connor, Matthew J; Schultz, Amy H; Vinocur, Jeffrey M; Chowdhury, Devyani; Anderson, Jeffrey B

    2017-12-01

    In 2012, the American College of Cardiology's (ACC) Adult Congenital and Pediatric Cardiology Council established a program to develop quality metrics to guide ambulatory practices for pediatric cardiology. The council chose five areas on which to focus their efforts: chest pain, Kawasaki Disease, tetralogy of Fallot, transposition of the great arteries after arterial switch, and infection prevention. Here, we sought to describe the process, evaluation, and results of the Infection Prevention Committee's metric design process. The infection prevention metrics team consisted of 12 members from 11 institutions in North America. The group agreed to work on specific infection prevention topics including antibiotic prophylaxis for endocarditis, rheumatic fever, and asplenia/hyposplenism; influenza vaccination and respiratory syncytial virus prophylaxis (palivizumab); preoperative methods to reduce intraoperative infections; vaccinations after cardiopulmonary bypass; hand hygiene; and testing to identify splenic function in patients with heterotaxy. An extensive literature review was performed. When available, previously published guidelines were used fully in determining metrics. The committee chose eight metrics to submit to the ACC Quality Metric Expert Panel for review. Ultimately, metrics regarding hand hygiene and influenza vaccination recommendation for patients did not pass the RAND analysis. Both endocarditis prophylaxis metrics and the RSV/palivizumab metric passed the RAND analysis but fell out during the open comment period. Three metrics passed all analyses, including those for antibiotic prophylaxis in patients with heterotaxy/asplenia, for influenza vaccination compliance in healthcare personnel, and for adherence to recommended regimens of secondary prevention of rheumatic fever. The lack of convincing data to guide quality improvement initiatives in pediatric cardiology is widespread, particularly in infection prevention. Despite this, three metrics were successfully developed and approved.

  5. Experiences with Software Quality Metrics in the EMI Middleware

    OpenAIRE

    Alandes, Maria

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteris...

  6. Experiences with Software Quality Metrics in the EMI middleware

    OpenAIRE

    Alandes, M; Kenny, E M; Meneses, D; Pucciani, G

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristi...

  7. Metrics for rapid quality control in RNA structure probing experiments.

    Science.gov (United States)

    Choudhary, Krishna; Shih, Nathan P; Deng, Fei; Ledda, Mirko; Li, Bo; Aviran, Sharon

    2016-12-01

    The diverse functionalities of RNA can be attributed to its capacity to form complex and varied structures. The recent proliferation of new structure probing techniques coupled with high-throughput sequencing has helped RNA studies expand in both scope and depth. Despite differences in techniques, most experiments face similar challenges in reproducibility due to the stochastic nature of chemical probing and sequencing. As these protocols expand to transcriptome-wide studies, quality control becomes a more daunting task. General and efficient methodologies are needed to quantify variability and quality in the wide range of current and emerging structure probing experiments. We develop metrics to rapidly and quantitatively evaluate data quality from structure probing experiments, demonstrating their efficacy on both small synthetic libraries and transcriptome-wide datasets. We use a signal-to-noise ratio concept to evaluate replicate agreement, which has the capacity to identify high-quality data. We also consider and compare two methods to assess variability inherent in probing experiments, which we then utilize to evaluate the coverage adjustments needed to meet desired quality. The developed metrics and tools will be useful in summarizing large-scale datasets and will help standardize quality control in the field. The data and methods used in this article are freely available at: http://bme.ucdavis.edu/aviranlab/SPEQC_software. Contact: saviran@ucdavis.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
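
    As one minimal illustration of the signal-to-noise idea mentioned above (not the SPEQC implementation), replicate agreement for a transcript can be summarized by comparing the spread of the mean reactivity profile with the spread of the replicate-to-replicate difference; the function name and data are assumptions.

        import numpy as np

        def replicate_snr(rep1, rep2):
            # Shared signal approximated by the mean profile; noise by half the
            # replicate-to-replicate difference. Illustrative only.
            rep1, rep2 = np.asarray(rep1, float), np.asarray(rep2, float)
            signal = (rep1 + rep2) / 2.0
            noise = (rep1 - rep2) / 2.0
            return signal.std() / max(noise.std(), 1e-12)

        # Example with two hypothetical per-nucleotide reactivity profiles:
        print(replicate_snr([0.1, 0.9, 0.5, 0.0, 0.7], [0.2, 0.8, 0.6, 0.1, 0.6]))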

  8. Towards a Visual Quality Metric for Digital Video

    Science.gov (United States)

    Watson, Andrew B.

    1998-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.

  9. Objective Image Quality Metrics for Ultrasound Imaging

    OpenAIRE

    Simpson, Cecilie Øinæs

    2009-01-01

    Objective evaluation of the image quality of ultrasound images is a comprehensive task due to the relatively low image quality compared to other imaging techniques. It is desirable to objectively determine the quality of ultrasound images since quantification of the quality removes the subjective evaluation, which can lead to varying results. The scanner will also be more user-friendly if the user is given feedback on the quality of the current image. This thesis has investigated the obje...

  10. [Clinical trial data management and quality metrics system].

    Science.gov (United States)

    Chen, Zhao-hua; Huang, Qin; Deng, Ya-zhong; Zhang, Yue; Xu, Yu; Yu, Hao; Liu, Zong-fan

    2015-11-01

    A data quality management system is essential to ensure accurate, complete, consistent, and reliable data collection in clinical research. This paper is devoted to various choices of data quality metrics. They are categorized by study status, e.g. study start-up, conduct, and close-out. In each category, metrics for different purposes are listed according to ALCOA+ principles such as completeness, accuracy, timeliness, traceability, etc. Some frequently used general quality metrics are also introduced. This paper provides as much detailed information as possible for each metric, including its definition, purpose, evaluation, referenced benchmark, and recommended targets for real practice. It is important that sponsors and data management service providers establish a robust, integrated clinical trial data quality management system to ensure sustainably high quality of clinical trial deliverables. It will also support enterprise-level data evaluation and benchmarking of data quality across projects, sponsors, and data management service providers by using objective metrics from real clinical trials. We hope this will be a significant input to accelerate the improvement of clinical trial data quality in the industry.

  11. Experiences with Software Quality Metrics in the EMI middleware

    Science.gov (United States)

    Alandes, M.; Kenny, E. M.; Meneses, D.; Pucciani, G.

    2012-12-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering - Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project to extract “code metrics” on the status of the software products and “process metrics” related to the quality of the development and support process such as reaction time to critical bugs, requirements tracking and delays in product releases.

  12. A universal color image quality metric

    NARCIS (Netherlands)

    Toet, A.; Lucassen, M.P.

    2003-01-01

    We extend a recently introduced universal grayscale image quality index to a newly developed perceptually decorrelated color space. The resulting color image quality index quantifies the distortion of a processed color image relative to its original version. We evaluated the new color image quality index.

  13. Feasibility of and Rationale for the Collection of Orthopaedic Trauma Surgery Quality of Care Metrics.

    Science.gov (United States)

    Miller, Anna N; Kozar, Rosemary; Wolinsky, Philip

    2017-06-01

    Reproducible metrics are needed to evaluate the delivery of orthopaedic trauma care, national care norms, and outliers. The American College of Surgeons (ACS) is uniquely positioned to collect and evaluate the data needed to evaluate orthopaedic trauma care via the Committee on Trauma and the Trauma Quality Improvement Project. We evaluated the first quality metrics the ACS has collected for orthopaedic trauma surgery to determine whether these metrics can be appropriately collected with accuracy and completeness. The metrics include the time to administration of the first dose of antibiotics for open fractures, the time to surgical irrigation and débridement of open tibial fractures, and the percentage of patients who undergo stabilization of femoral fractures at trauma centers nationwide. These metrics were analyzed to evaluate for variances in the delivery of orthopaedic care across the country. The data showed wide variances for all metrics, and many centers had incomplete ability to collect the orthopaedic trauma care metrics. There was a large variability in the results of the metrics collected among different trauma center levels, as well as among centers of a particular level. The ACS has successfully begun tracking orthopaedic trauma care performance measures, which will help inform reevaluation of the goals and continued work on data collection and improvement of patient care. Future areas of research may link these performance measures with patient outcomes, such as long-term tracking, to assess nonunion and function. This information can provide insight into center performance and its effect on patient outcomes. The ACS was able to successfully collect and evaluate the data for three metrics used to assess the quality of orthopaedic trauma care. However, additional research is needed to determine whether these metrics are suitable for evaluating orthopaedic trauma care and to establish cutoff values for each metric.

  14. Analysis of Solar Cell Quality Using Voltage Metrics: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Toberer, E. S.; Tamboli, A. C.; Steiner, M.; Kurtz, S.

    2012-06-01

    The highest efficiency solar cells provide both excellent voltage and current. Of these, the open-circuit voltage (Voc) is more frequently viewed as an indicator of the material quality. However, since the Voc also depends on the band gap of the material, the difference between the band gap and the Voc is a better metric for comparing material quality of unlike materials. To take this one step further, since Voc also depends on the shape of the absorption edge, we propose to use the ultimate metric: the difference between the measured Voc and the Voc calculated from the external quantum efficiency using a detailed balance approach. This metric is less sensitive to changes in cell design and definition of band gap. The paper defines how to implement this metric and demonstrates how it can be useful in tracking improvements in Voc, especially as Voc approaches its theoretical maximum.
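
    The proposed metric requires a Voc computed from the external quantum efficiency by detailed balance; a minimal sketch of one standard way to do this is shown below, in which the radiative saturation current is obtained by weighting the 300 K blackbody photon flux with the measured EQE. The constants are physical, but the energy grid, the idealised step EQE, and the example Jsc are illustrative assumptions.

        import numpy as np

        Q = 1.602176634e-19   # elementary charge (C)
        K = 1.380649e-23      # Boltzmann constant (J/K)
        H = 6.62607015e-34    # Planck constant (J s)
        C = 2.99792458e8      # speed of light (m/s)

        def detailed_balance_voc(energy_ev, eqe, jsc, temperature=300.0):
            # Radiative-limit Voc implied by the measured EQE (illustrative sketch).
            # energy_ev: photon energies (eV), ascending; eqe: 0..1; jsc: A/m^2.
            e_j = np.asarray(energy_ev) * Q
            # Hemispherical blackbody photon flux per unit energy at the cell temperature.
            phi_bb = 2.0 * np.pi * e_j ** 2 / (H ** 3 * C ** 2) / (np.exp(e_j / (K * temperature)) - 1.0)
            j0 = Q * np.trapz(np.asarray(eqe) * phi_bb, e_j)   # radiative saturation current density
            return (K * temperature / Q) * np.log(jsc / j0 + 1.0)

        # Example: idealised step EQE with a 1.4 eV gap and a Jsc of 30 mA/cm^2 (300 A/m^2).
        e = np.linspace(1.0, 3.0, 400)
        eqe = 0.9 * (e > 1.4)
        voc_radiative = detailed_balance_voc(e, eqe, jsc=300.0)
        print(voc_radiative)   # the proposed metric compares this value with the measured Voc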

  15. Experiences with Software Quality Metrics in the EMI Middleware

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project t...

  16. Experiences with Software Quality Metrics in the EMI middleware

    CERN Document Server

    Alandes, M; Meneses, D; Pucciani, G

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project to ...

  17. Efficacy of algal metrics for assessing nutrient and organic enrichment in flowing waters

    Science.gov (United States)

    Porter, S.D.; Mueller, D.K.; Spahr, N.E.; Munn, M.D.; Dubrovsky, N.M.

    2008-01-01

    1. Algal-community metrics were calculated for periphyton samples collected from 976 streams and rivers by the U.S. Geological Survey’s National Water-Quality Assessment (NAWQA) Programme during 1993–2001 to evaluate national and regional relations with water chemistry and to compare whether algal-metric values differ significantly among undeveloped and developed land-use classifications.

  18. Development of soil quality metrics using mycorrhizal fungi

    Energy Technology Data Exchange (ETDEWEB)

    Baar, J.

    2010-07-01

    Based on the Treaty on Biological Diversity of Rio de Janeiro in 1992 for maintaining and increasing biodiversity, several countries have started programmes monitoring soil quality and above- and below-ground biodiversity. Within the European Union, policy makers are working on legislation for soil protection and management. Therefore, indicators are needed to monitor the status of soils, and these indicators, which reflect soil quality, can be integrated into working standards or soil quality metrics. Soil micro-organisms, particularly arbuscular mycorrhizal fungi (AMF), are indicative of soil changes. These soil fungi live in symbiosis with the great majority of plants and are sensitive to changes in the physico-chemical conditions of the soil. The aim of this study was to investigate whether AMF are reliable and sensitive indicators for disturbances in soils and can be used for the development of soil quality metrics. It was also studied whether soil quality metrics based on AMF meet requirements for applicability by users and policy makers. Ecological criteria were set for the development of soil quality metrics for different soils. Multiple root samples containing AMF from various locations in The Netherlands were analyzed. The results of the analyses were related to the defined criteria. This resulted in two soil quality metrics, one for sandy soils and a second one for clay soils, with six different categories ranging from very bad to very good. These soil quality metrics meet the majority of requirements for applicability and are potentially useful for the development of legislation for the protection of soil quality. (Author) 23 refs.

  19. Metrics for Measuring Data Quality - Foundations for an Economic Oriented Management of Data Quality

    OpenAIRE

    Heinrich, Bernd; Kaiser, Marcus; Klier, Mathias

    2007-01-01

    The article develops metrics for an economically oriented management of data quality. Two data quality dimensions are the focus: consistency and timeliness. For deriving adequate metrics, several requirements are stated (e.g., normalisation, cardinality, adaptivity, interpretability). The authors then discuss existing approaches for measuring data quality and illustrate their weaknesses. Based upon these considerations, new metrics are developed for the data quality dimensions consistency and timel...

  20. Quality metrics can help the expert during neurological clinical trials

    Science.gov (United States)

    Mahé, L.; Autrusseau, F.; Desal, H.; Guédon, J.; Der Sarkissian, H.; Le Teurnier, Y.; Davila, S.

    2016-03-01

    Carotid surgery is a frequent procedure, corresponding to 15,000 to 20,000 operations per year in France. Cerebral perfusion has to be tracked before and after carotid surgery. In this paper, a diagnostic support tool using quality metrics is proposed to detect vascular lesions on MR images. Our key aim is to provide a detection tool that mimics the behaviour of the human visual system during visual inspection. Relevant Human Visual System (HVS) properties should be integrated into our lesion detection method, which must be robust to common distortions in medical images. Our goal is twofold: to help the neuroradiologist perform the task better and faster, and also to provide a way to reduce the risk of bias in image analysis. Objective quality metrics (OQM) are methods whose goal is to predict perceived quality. In this work, we use objective quality metrics to detect perceivable differences between pairs of images.
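
    The abstract does not name a specific OQM, so purely as an illustration of the pair-comparison idea, the sketch below flags a registered pair of slices whose structural similarity (SSIM, via scikit-image) drops below an arbitrary threshold; the metric choice, threshold, and synthetic data are assumptions, not the authors' detector.

        import numpy as np
        from skimage.metrics import structural_similarity

        def flag_perceivable_difference(img_a, img_b, threshold=0.95):
            # Compare two registered, same-size slices with a generic full-reference
            # quality metric and flag pairs whose similarity falls below a threshold.
            score = structural_similarity(
                img_a, img_b, data_range=float(img_a.max() - img_a.min()))
            return score < threshold, score

        # Example with synthetic data standing in for baseline/follow-up slices.
        rng = np.random.default_rng(2)
        baseline = rng.random((128, 128))
        follow_up = baseline.copy()
        follow_up[40:60, 40:60] += 0.3          # simulated local signal change
        print(flag_perceivable_difference(baseline, follow_up))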

  1. First statistical analysis of Geant4 quality software metrics

    Science.gov (United States)

    Ronchieri, Elisabetta; Grazia Pia, Maria; Giacomini, Francesco

    2015-12-01

    Geant4 is a simulation system of particle transport through matter, widely used in several experimental areas from high energy physics and nuclear experiments to medical studies. Some of its applications may involve critical use cases; therefore they would benefit from an objective assessment of the software quality of Geant4. In this paper, we provide a first statistical evaluation of software metrics data related to a set of Geant4 physics packages. The analysis aims at identifying risks for Geant4 maintainability, which would benefit from being addressed at an early stage. The findings of this pilot study set the grounds for further extensions of the analysis to the whole of Geant4 and to other high energy physics software systems.

  2. Quality of life Metrics in Pediatric Uveitis

    OpenAIRE

    Angeles-Han, Sheila T.

    2015-01-01

    Uveitis can lead to vision loss and blindness in children. It can significantly impact a child's vision-related quality of life and daily function. Outcome studies in pediatric uveitis focus on the clinical ocular exam and general measures of quality of life, whereas in adults, measures of visual function are incorporated. Adequate vision can affect a child's daily activities and is crucial for daily function in the home and school. A comprehensive approach that incorporates all aspects of dis...

  3. Quality metrics currently used in academic radiology departments: results of the QUALMET survey.

    Science.gov (United States)

    Walker, Eric A; Petscavage-Thomas, Jonelle M; Fotos, Joseph S; Bruno, Michael A

    2017-03-01

    We present the results of the 2015 quality metrics (QUALMET) survey, which was designed to assess the commonalities and variability of selected quality and productivity metrics currently employed by a large sample of academic radiology departments representing all regions in the USA. The survey of key radiology metrics was distributed in March-April of 2015 via personal e-mail to 112 academic radiology departments. There was a 34.8% institutional response rate. We found that most academic departments of radiology commonly utilize metrics of hand hygiene, report turnaround time (RTAT), relative value unit (RVU) productivity, patient satisfaction and participation in peer review. RTAT targets were found to vary widely. The implementation of radiology peer review, the ways in which peer review results are used within academic radiology departments, the use of clinical decision support tools, and requirements for radiologist participation in Maintenance of Certification also varied. Policies for hand hygiene and critical results communication were very similar across all institutions reporting, and most departments utilized some form of missed case/difficult case conference as part of their quality and safety programme, as well as some form of periodic radiologist performance reviews. Results of the QUALMET survey suggest many similarities in tracking and utilization of the selected quality and productivity metrics included in our survey. Use of quality indicators is not a fully standardized process among academic radiology departments. Advances in knowledge: This article examines the current quality and productivity metrics in academic radiology.

  4. Predicting visual performance from optical quality metrics in keratoconus.

    Science.gov (United States)

    Schoneveld, Paul; Pesudovs, Konrad; Coster, Douglas J

    2009-05-01

    The aim was to identify optical quality metrics predictive of visual performance in eyes with keratoconus and penetrating keratoplasty (PK) for keratoconus. Fifty-four participants were recruited for this prospective, cross-sectional study. Data were collected from one eye of each participant: 26 keratoconus, 10 PK and 18 normal eyes; average age (mean +/- standard deviation) 45.2 +/- 10.6 years, 56 per cent female. Visual performance was tested by 10 methods including visual acuity (VA), both high and low contrast (HC- and LC-) and high and low luminance (LL-), and Pelli-Robson contrast sensitivity, all tested with and without glare. Corneal first surface wavefront aberrations were calculated from Orbscan corneal topographic data using VOLPro software v7.08 (Sarver and Associates) as a tenth-order Zernike expansion across 3.0 mm, 4.0 mm and 5.0 mm pupils and converted into 31 optical quality metrics. Pearson correlation coefficients and linear regression were used to relate wavefront aberration metrics to visual performance. Visual performance was highly predictable from optical quality, with an average correlation of the order of 0.5. Pupil fraction metrics (for example, PFWc) were responsible for all of the highest correlations at large pupils, for example with HCVA (r = 0.80), LCVA (r = 0.80) and LLLCVA (r = 0.75). Image plane metrics, derived from the optical transfer function (OTF), were responsible for most of the highest correlations at smaller pupils, for example volume under the OTF (VOTF) with HCVA (r = 0.76) and LCVA (r = 0.73). As in normal eyes, visual performance in keratoconus was predictable from optical quality, albeit by different metrics. Optical quality metrics predictive of visual performance in normal eyes, for example, visual Strehl, lack the dynamic range to represent visual performance in highly aberrated eyes with keratoconus. Optical quality outcomes for keratoconus could be reported using many different metrics, but pupil fraction metrics performed best.

  5. Developing a Quality Assurance Metric: A Panoptic View

    Science.gov (United States)

    Love, Steve; Scoble, Rosa

    2006-01-01

    There are a variety of techniques that lecturers can use to get feedback on their teaching--for example, module feedback and coursework results. However, a question arises about how reliable and valid the content that goes into these quality assurance metrics is. The aim of this article is to present a new approach for collecting and analysing…

  6. National Quality Forum Metrics for Thoracic Surgery.

    Science.gov (United States)

    Cipriano, Anthony; Burfeind, William R

    2017-08-01

    The National Quality Forum (NQF) is a multistakeholder, nonprofit, membership-based organization improving health care through preferential use of valid performance measures. NQF-endorsed measures are considered the gold standard for health care measurement in the United States. The Society of Thoracic Surgeons is the steward of the only six NQF-endorsed general thoracic surgery measures. These measures include one structure measure (participation in a national general thoracic surgery database), two process measures (recording of clinical stage and recording performance status before lung and esophageal resections), and three outcome measures (risk-adjusted morbidity and mortality after lung and esophageal resections and risk-adjusted length of stay greater than 14 days after lobectomy). Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Pragmatic quality metrics for evolutionary software development models

    Science.gov (United States)

    Royce, Walker

    1990-01-01

    Due to the large number of product, project, and people parameters which impact large custom software development efforts, measurement of software product quality is a complex undertaking. Furthermore, the absolute perspective from which quality is measured (customer satisfaction) is intangible. While we probably can't say what the absolute quality of a software product is, we can determine the relative quality, the adequacy of this quality with respect to pragmatic considerations, and identify good and bad trends during development. While no two software engineers will ever agree on an optimum definition of software quality, they will agree that the most important perspective of software quality is its ease of change. We can call this flexibility, adaptability, or some other vague term, but the critical characteristic of software is that it is soft. The easier the product is to modify, the easier it is to achieve any other software quality perspective. This paper presents objective quality metrics derived from consistent lifecycle perspectives of rework which, when used in concert with an evolutionary development approach, can provide useful insight to produce better quality per unit cost/schedule or to achieve adequate quality more efficiently. The usefulness of these metrics is evaluated by applying them to a large, real world, Ada project.

  8. Development of Quality Metrics in Ambulatory Pediatric Cardiology.

    Science.gov (United States)

    Chowdhury, Devyani; Gurvitz, Michelle; Marelli, Ariane; Anderson, Jeffrey; Baker-Smith, Carissa; Diab, Karim A; Edwards, Thomas C; Hougen, Tom; Jedeikin, Roy; Johnson, Jonathan N; Karpawich, Peter; Lai, Wyman; Lu, Jimmy C; Mitchell, Stephanie; Newburger, Jane W; Penny, Daniel J; Portman, Michael A; Satou, Gary; Teitel, David; Villafane, Juan; Williams, Roberta; Jenkins, Kathy

    2017-02-07

    The American College of Cardiology Adult Congenital and Pediatric Cardiology (ACPC) Section had attempted to create quality metrics (QM) for ambulatory pediatric practice, but limited evidence made the process difficult. The ACPC sought to develop QMs for ambulatory pediatric cardiology practice. Five areas of interest were identified, and QMs were developed in a 2-step review process. In the first step, an expert panel, using the modified RAND-UCLA methodology, rated each QM for feasibility and validity. The second step sought input from ACPC Section members; final approval was by a vote of the ACPC Council. Work groups proposed a total of 44 QMs. Thirty-one metrics passed the RAND process and, after the open comment period, the ACPC council approved 18 metrics. The project resulted in successful development of QMs in ambulatory pediatric cardiology for a range of ambulatory domains. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  9. Neurosurgical virtual reality simulation metrics to assess psychomotor skills during brain tumor resection.

    Science.gov (United States)

    Azarnoush, Hamed; Alzhrani, Gmaan; Winkler-Schwartz, Alexander; Alotaibi, Fahad; Gelinas-Phaneuf, Nicholas; Pazos, Valérie; Choudhury, Nusrat; Fares, Jawad; DiRaddo, Robert; Del Maestro, Rolando F

    2015-05-01

    Virtual reality simulator technology together with novel metrics could advance our understanding of expert neurosurgical performance and modify and improve resident training and assessment. This pilot study introduces innovative metrics that can be measured by the state-of-the-art simulator to assess performance. Such metrics cannot be measured in an operating room and have not been used previously to assess performance. Three sets of performance metrics were assessed utilizing the NeuroTouch platform in six scenarios with simulated brain tumors having different visual and tactile characteristics. Tier 1 metrics included percentage of brain tumor resected and volume of simulated "normal" brain tissue removed. Tier 2 metrics included instrument tip path length, time taken to resect the brain tumor, pedal activation frequency, and sum of applied forces. Tier 3 metrics included sum of forces applied to different tumor regions and the force bandwidth derived from the force histogram. The results outlined are from a novice resident in the second year of training and an expert neurosurgeon. The three tiers of metrics obtained from the NeuroTouch simulator do encompass the wide variability of technical performance observed during novice/expert resections of simulated brain tumors and can be employed to quantify the safety, quality, and efficiency of technical performance during simulated brain tumor resection. Tier 3 metrics derived from force pyramids and force histograms may be particularly useful in assessing simulated brain tumor resections. Our pilot study demonstrates that the safety, quality, and efficiency of novice and expert operators can be measured using metrics derived from the NeuroTouch platform, helping to understand how specific operator performance is dependent on both psychomotor ability and cognitive input during multiple virtual reality brain tumor resections.
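
    To make the Tier 2 idea concrete, here is a minimal sketch, assuming the simulator exports a time series of tip positions and applied-force magnitudes, of how path length, task time, and the sum of applied forces might be computed; the data layout and names are assumptions, not the NeuroTouch export format.

        import numpy as np

        def tier2_metrics(timestamps, tip_positions, forces):
            # timestamps    : (N,) seconds
            # tip_positions : (N, 3) instrument-tip coordinates in mm
            # forces        : (N,) applied force magnitudes in newtons
            steps = np.diff(np.asarray(tip_positions, float), axis=0)
            return {
                "path_length_mm": float(np.linalg.norm(steps, axis=1).sum()),
                "task_time_s": float(timestamps[-1] - timestamps[0]),
                "force_sum_N": float(np.sum(forces)),
            }

        # Example with a short synthetic log:
        t = np.linspace(0.0, 5.0, 6)
        pos = np.cumsum(np.ones((6, 3)) * 0.5, axis=0)
        f = np.array([0.1, 0.4, 0.6, 0.5, 0.3, 0.2])
        print(tier2_metrics(t, pos, f))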

  10. Performance of Consultative Palliative Care Model in Achieving Quality Metrics in the ICU.

    Science.gov (United States)

    Wysham, Nicholas G; Hochman, Michael J; Wolf, Steven P; Cox, Christopher E; Kamal, Arif H

    2016-12-01

    Quality metrics for intensive care unit (ICU)-based palliative care have been proposed, but it is unknown how consultative palliative care can contribute to performance on these measures. Assess adherence to proposed quality metrics of ICU-based palliative care by palliative care specialists. Surrogates for 9/14 patient-level quality metrics were assessed in all patients who received an initial palliative care specialist consult while in an ICU from 10/26/2012 to 1/16/2015 in the Global Palliative Care Quality Alliance, a nationwide palliative care quality registry. Two hundred fifty-four patients received an initial palliative care consultation in an ICU setting. Mean (SD) age was 67.5 (17.3) years, 52% were female. The most common reasons for consultation were symptom management (33%) and end-of-life transition (24%). Adherence to ICU quality metrics for palliative care was variable: clinicians documented presence or absence of advance directives in 36% of encounters, assessed pain in 52.0%, dyspnea in 50.8%, spiritual support in 62%, and reported an intervention for pain in 100% of patients with documented moderate to severe intensity pain. Palliative care consultations in an ICU setting are characterized by variable adherence to candidate ICU palliative care quality metrics. Although symptom management was the most common reason for palliative care consultation, consultants infrequently documented symptom assessments. Palliative care consultants performed better in offering spiritual support and managing documented symptoms. These results highlight specific competencies of consultative palliative care that should be complimented by ICU teams to ensure high-quality comprehensive care for the critically ill. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  11. Quantitative metrics to evaluate image quality for computed radiographic images

    Science.gov (United States)

    Pitcher, Christopher D.

    Traditional methods of evaluating a computed radiography (CR) imaging system's performance (e.g. the noise power spectrum (NPS), the modulation transfer function (MTF), the detective quantum efficiency (DQE) and contrast-detail analysis) were adapted in order to evaluate the feasibility of identifying a quantitative metric to evaluate image quality for digital radiographic images. The addition of simulated patient scattering media when acquiring the images to calculate these parameters altered their fundamental meaning. To avoid confusion with other research, they were renamed the clinical noise power spectrum (NPSC), the clinical modulation transfer function (MTFC), the clinical detective quantum efficiency (DQEC) and the clinical contrast detail score (CDSC). These metrics were then compared to the subjective evaluation of radiographic images of an anthropomorphic phantom representing a one-year-old pediatric patient. Computer algorithms were developed to implement the traditional mathematical procedures for calculating the system performance parameters. In order to easily compare these three metrics, the integral up to the system Nyquist frequency was used as the final image quality metric. These metrics are identified as the INPSC, the IMTFC, and the IDQEC, respectively. A computer algorithm was also developed, based on the results of the observer study, to determine the threshold contrast-to-noise ratio (CNRT) for objects of different sizes. This algorithm was then used to determine the CDSC by scoring images without the use of observers. The four image quality metrics identified in this study were evaluated to determine if they could distinguish between small changes in image acquisition parameters (e.g., current-time product and peak tube potential). All of the metrics were able to distinguish these small changes in at least one of the image acquisition parameters, but the ability to digitally manipulate the raw image data made the identification of a broad
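
    Summarizing each clinical curve by its integral up to the Nyquist frequency, as described above, reduces to a one-line numerical integration; the sketch below, with a made-up MTF and sampling pitch, shows the idea (names and values are illustrative).

        import numpy as np

        def integral_to_nyquist(freqs, values, nyquist):
            # Integrate a frequency-domain metric (e.g. the clinical MTF) from zero
            # up to the system Nyquist frequency to obtain a single figure of merit.
            freqs, values = np.asarray(freqs, float), np.asarray(values, float)
            mask = freqs <= nyquist
            return float(np.trapz(values[mask], freqs[mask]))

        # Example: a synthetic MTF for a plate sampled at 10 pixels/mm (Nyquist = 5 cycles/mm).
        f = np.linspace(0.0, 6.0, 121)
        mtf = np.exp(-0.6 * f)
        print(integral_to_nyquist(f, mtf, nyquist=5.0))   # analogous to the IMTFC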

  12. Recommendations for mass spectrometry data quality metrics for open access data (corollary to the Amsterdam Principles).

    Science.gov (United States)

    Kinsinger, Christopher R; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W; Deutsch, Eric W; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L; Omenn, Gilbert S; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L; Simpson, Richard J; Slotta, Douglas; Smith, Richard D; Stein, Stephen E; Tabb, David L; Tagle, Danilo; Yates, John R; Rodriguez, Henry

    2012-02-03

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the U.S. National Cancer Institute (NCI) convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: (1) an evolving list of comprehensive quality metrics and (2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals.

  13. Design For Six Sigma with Critical-To-Quality Metrics for Research Investments

    Energy Technology Data Exchange (ETDEWEB)

    Logan, R W

    2005-06-22

    Design for Six Sigma (DFSS) has evolved as a worthy predecessor to the application of Six-Sigma principles to production, process control, and quality. At Livermore National Laboratory (LLNL), we are exploring the interrelation of our current research, development, and design safety standards as they would relate to the principles of DFSS and Six-Sigma. We have had success in prioritization of research and design using a quantitative scalar metric for value, so we further explore the use of scalar metrics to represent the outcome of our use of the DFSS process. We use the design of an automotive component as an example of combining DFSS metrics into a scalar decision quantity. We then extend this concept to a high-priority, personnel safety example representing work that is toward the mature end of DFSS, and begins the transition into Six-Sigma for safety assessments in a production process. This latter example and objective involves the balance of research investment, quality control, and system operation and maintenance of high explosive handling at LLNL and related production facilities. Assuring a sufficiently low probability of failure (reaction of a high explosive given an accidental impact) is a Critical-To-Quality (CTQ) component of our weapons and stockpile stewardship operation and cost. Our use of DFSS principles, with quantification and merging of CTQ metrics, provides ways to quantify clear (preliminary) paths forward for both the automotive example and the explosive safety example. The presentation of simple, scalar metrics to quantify the path forward then provides a focal point for qualitative caveats and discussion for inclusion of other metrics besides a single, provocative scalar. In this way, carrying a scalar decision metric along with the DFSS process motivates further discussion and ideas for process improvement from the DFSS into the Six-Sigma phase of the product. We end with an example of how our DFSS-generated scalar metric could be

  14. SU-E-T-222: How to Define and Manage Quality Metrics in Radiation Oncology.

    Science.gov (United States)

    Harrison, A; Cooper, K; DeGregorio, N; Doyle, L; Yu, Y

    2012-06-01

    Since the 2001 IOM Report Crossing the Quality Chasm: A New Health System for the 21st Century, the need to provide quality metrics in health care has increased. Quality metrics have yet to be defined for the field of radiation oncology. This study represents one institute's initial efforts at defining and measuring quality metrics using our electronic medical record and verify system (EMR) as a primary data collection tool. This effort began by selecting meaningful quality metrics rooted in the IOM definition of quality (safe, timely, efficient, effective, equitable and patient-centered care) that were also measurable targets based on current data input and workflow. Elekta MOSAIQ 2.30.04D1 was used to generate reports on the number of Special Physics Consults (SPC) charged as a surrogate for treatment complexity, daily patient time in department (DTP) as a measure of efficiency and timeliness, and time from CT-simulation to first LINAC appointment (STL). The number of IMRT QAs delivered in the department was also analyzed to assess complexity. Although initial MOSAIQ reports were easily generated, the data needed to be assessed and adjusted for outliers. Patients with delays outside of radiation oncology, such as chemotherapy or surgery, were excluded from STL data. We found an average STL of six days for all CT-simulated patients and an average DTP of 52 minutes total time, with 23 minutes in the LINAC vault. Annually, 7.3% of all patients required additional physics support indicated by SPC. Utilizing our EMR, an entire year's worth of useful data characterizing our clinical experience was analyzed in less than one day. Having baseline quality metrics is necessary to improve patient care. Future plans include dissecting this data into more specific categories such as IMRT DTP, workflow timing following CT-simulation, beam-on hours, chart review outcomes, and dosimetric quality indicators. © 2012 American Association of Physicists in Medicine.

  15. Editorial: On the Quality of Quality Metrics: Rethinking What Defines a Good Colonoscopy.

    Science.gov (United States)

    Dominitz, Jason A; Spiegel, Brennan

    2016-05-01

    The colonoscopy quality assurance movement has focused on a variety of process metrics, including the adenoma detection rate (ADR). However, the ADR only ascertains whether or not at least one adenoma is identified. Supplemental measures that quantify all neoplasia have been proposed. In this issue of the American Journal of Gastroenterology, Aniwan and colleagues performed tandem screening colonoscopies to determine the adenoma miss rate among high-ADR endoscopists. This permitted validation of supplemental colonoscopy quality metrics. This study highlights potential limitations of ADR and the need for further refinement of colonoscopy quality metrics, although logistic challenges abound.

  16. On the Efficiency of Image Metrics for Evaluating the Visual Quality of 3D Models.

    Science.gov (United States)

    Lavoue, Guillaume; Larabi, Mohamed Chaker; Vasa, Libor

    2016-08-01

    3D meshes are deployed in a wide range of application processes (e.g., transmission, compression, simplification, watermarking and so on) which inevitably introduce geometric distortions that may alter the visual quality of the rendered data. Hence, efficient model-based perceptual metrics, operating on the geometry of the meshes being compared, have been recently introduced to control and predict these visual artifacts. However, since the 3D models are ultimately visualized on 2D screens, it seems legitimate to use images of the models (i.e., snapshots from different viewpoints) to evaluate their visual fidelity. In this work we investigate the use of image metrics to assess the visual quality of 3D models. For this goal, we conduct a wide-ranging study involving several 2D metrics, rendering algorithms, lighting conditions and pooling algorithms, as well as several mean opinion score databases. The collected data allow (1) to determine the best set of parameters to use for this image-based quality assessment approach and (2) to compare this approach to the best performing model-based metrics and determine for which use-case they are respectively adapted. We conclude by exploring several applications that illustrate the benefits of image-based quality assessment.

  17. Assessment of the Log-Euclidean Metric Performance in Diffusion Tensor Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mostafa Charmi

    2010-06-01

    Full Text Available Introduction: Appropriate definition of the distance measure between diffusion tensors has a deep impact on Diffusion Tensor Image (DTI) segmentation results. The geodesic metric is the best distance measure since it yields high-quality segmentation results. However, the important problem with the geodesic metric is the high computational cost of the algorithms based on it. The main goal of this paper is to assess the possible substitution of the geodesic metric with the Log-Euclidean one to reduce the computational cost of a statistical surface evolution algorithm. Materials and Methods: We incorporated the Log-Euclidean metric in the statistical surface evolution algorithm framework. To achieve this goal, the statistics and gradients of diffusion tensor images were defined using the Log-Euclidean metric. Numerical implementation of the segmentation algorithm was performed in MATLAB using finite difference techniques. Results: In the statistical surface evolution framework, the Log-Euclidean metric was able to discriminate the torus and helix patterns in synthetic datasets and rat spinal cords in biological phantom datasets from the background better than the Euclidean and J-divergence metrics. In addition, similar results were obtained with the geodesic metric. However, the main advantage of the Log-Euclidean metric over the geodesic metric was the dramatic reduction of the computational cost of the segmentation algorithm, by at least a factor of 70. Discussion and Conclusion: The qualitative and quantitative results have shown that the Log-Euclidean metric is a good substitute for the geodesic metric when using a statistical surface evolution algorithm in DTI segmentation.
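
    The computational advantage quoted above comes from the Log-Euclidean distance being just the Frobenius norm of the difference of matrix logarithms, which can be precomputed once per voxel; a minimal sketch for symmetric positive-definite diffusion tensors follows (tensor values are illustrative).

        import numpy as np

        def log_spd(t):
            # Matrix logarithm of a symmetric positive-definite tensor via eigendecomposition.
            vals, vecs = np.linalg.eigh(t)
            return vecs @ np.diag(np.log(vals)) @ vecs.T

        def log_euclidean_distance(t1, t2):
            # Log-Euclidean distance: Frobenius norm of the difference of matrix logs.
            return float(np.linalg.norm(log_spd(t1) - log_spd(t2), ord="fro"))

        # Example: isotropic versus anisotropic tensor (illustrative values, mm^2/s).
        iso = np.eye(3) * 1.0e-3
        aniso = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
        print(log_euclidean_distance(iso, aniso))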

  18. Getting started on metrics - Jet Propulsion Laboratory productivity and quality

    Science.gov (United States)

    Bush, M. W.

    1990-01-01

    A review is presented to describe the effort and difficulties of reconstructing fifteen years of JPL software history. In 1987 the collection and analysis of project data were started with the objective of creating laboratory-wide measures of quality and productivity for software development. As a result of this two-year Software Product Assurance metrics study, a rough measurement foundation for software productivity and software quality, and an order-of-magnitude quantitative baseline for software systems and subsystems are now available.

  19. A Validation of Object-Oriented Design Metrics as Quality Indicators

    Science.gov (United States)

    Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio

    1997-01-01

    This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method, and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development process.

  20. Visual Quality Metrics Resulting from Dynamic Corneal Tear Film Topography

    Science.gov (United States)

    Solem, Cameron Cole

    The visual quality effects from the dynamic behavior of the tear film have been determined through measurements acquired with a high resolution Twyman-Green interferometer. The base shape of the eye has been removed to isolate the aberrations induced by the tear film. The measured tear film was then combined with a typical human eye model to simulate visual performance. Fourier theory has been implemented to calculate the incoherent point spread function, the modulation transfer function, and the subjective quality factor for this system. Analysis software has been developed for ease of automation for large data sets, and output movies have been made that display these visual quality metrics alongside the tear film. Post-processing software was written to identify and eliminate bad frames. As a whole, this software creates the potential for increased intuition about the connection between blinks, tear film dynamics and visual quality.

  1. Energy-Based Metrics for Arthroscopic Skills Assessment.

    Science.gov (United States)

    Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa

    2017-08-05

    Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
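
    A minimal sketch of the classification step described above (not the authors' implementation): normalized energy-based metric vectors per trial are classified with an SVM under leave-one-subject-out cross-validation. Feature values, labels, and subject groupings below are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Placeholder data: 26 subjects x 2 trials, 6 normalized energy-based metrics each
rng = np.random.default_rng(0)
X = rng.normal(size=(52, 6))            # normalized energy metrics per trial
y = rng.integers(0, 2, size=52)         # 0 = novice, 1 = expert (hypothetical labels)
groups = np.repeat(np.arange(26), 2)    # subject id per trial

clf = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print("leave-one-subject-out accuracy:", scores.mean())
```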

  2. Beyond metrics? Utilizing 'soft intelligence' for healthcare quality and safety.

    Science.gov (United States)

    Martin, Graham P; McKee, Lorna; Dixon-Woods, Mary

    2015-10-01

    Formal metrics for monitoring the quality and safety of healthcare have a valuable role, but may not, by themselves, yield full insight into the range of fallibilities in organizations. 'Soft intelligence' is usefully understood as the processes and behaviours associated with seeking and interpreting soft data-of the kind that evade easy capture, straightforward classification and simple quantification-to produce forms of knowledge that can provide the basis for intervention. With the aim of examining current and potential practice in relation to soft intelligence, we conducted and analysed 107 in-depth qualitative interviews with senior leaders, including managers and clinicians, involved in healthcare quality and safety in the English National Health Service. We found that participants were in little doubt about the value of softer forms of data, especially for their role in revealing troubling issues that might be obscured by conventional metrics. Their struggles lay in how to access softer data and turn them into a useful form of knowing. Some of the dominant approaches they used risked replicating the limitations of hard, quantitative data. They relied on processes of aggregation and triangulation that prioritised reliability, or on instrumental use of soft data to animate the metrics. The unpredictable, untameable, spontaneous quality of soft data could be lost in efforts to systematize their collection and interpretation to render them more tractable. A more challenging but potentially rewarding approach involved processes and behaviours aimed at disrupting taken-for-granted assumptions about quality, safety, and organizational performance. This approach, which explicitly values the seeking out and the hearing of multiple voices, is consistent with conceptual frameworks of organizational sensemaking and dialogical understandings of knowledge. Using soft intelligence this way can be challenging and discomfiting, but may offer a critical defence against the

  3. Using Qualitative and Quantitative Methods to Choose a Habitat Quality Metric for Air Pollution Policy Evaluation.

    Science.gov (United States)

    Rowe, Edwin C; Ford, Adriana E S; Smart, Simon M; Henrys, Peter A; Ashmore, Mike R

    2016-01-01

    Atmospheric nitrogen (N) deposition has had detrimental effects on species composition in a range of sensitive habitats, although N deposition can also increase agricultural productivity and carbon storage, and favours a few species considered of importance for conservation. Conservation targets are multiple, and increasingly incorporate services derived from nature as well as concepts of intrinsic value. Priorities vary. How then should changes in a set of species caused by drivers such as N deposition be assessed? We used a novel combination of qualitative semi-structured interviews and quantitative ranking to elucidate the views of conservation professionals specialising in grasslands, heathlands and mires. Although conservation management goals are varied, terrestrial habitat quality is mainly assessed by these specialists on the basis of plant species, since these are readily observed. The presence and abundance of plant species that are scarce, or have important functional roles, emerged as important criteria for judging overall habitat quality. However, species defined as 'positive indicator-species' (not particularly scarce, but distinctive for the habitat) were considered particularly important. Scarce species are by definition not always found, and the presence of functionally important species is not a sufficient indicator of site quality. Habitat quality as assessed by the key informants was rank-correlated with the number of positive indicator-species present at a site for seven of the nine habitat classes assessed. Other metrics such as species-richness or a metric of scarcity were inconsistently or not correlated with the specialists' assessments. We recommend that metrics of habitat quality used to assess N pollution impacts are based on the occurrence of, or habitat-suitability for, distinctive species. Metrics of this type are likely to be widely applicable for assessing habitat change in response to different drivers. The novel combined

  4. Using Qualitative and Quantitative Methods to Choose a Habitat Quality Metric for Air Pollution Policy Evaluation.

    Directory of Open Access Journals (Sweden)

    Edwin C Rowe

    Full Text Available Atmospheric nitrogen (N deposition has had detrimental effects on species composition in a range of sensitive habitats, although N deposition can also increase agricultural productivity and carbon storage, and favours a few species considered of importance for conservation. Conservation targets are multiple, and increasingly incorporate services derived from nature as well as concepts of intrinsic value. Priorities vary. How then should changes in a set of species caused by drivers such as N deposition be assessed? We used a novel combination of qualitative semi-structured interviews and quantitative ranking to elucidate the views of conservation professionals specialising in grasslands, heathlands and mires. Although conservation management goals are varied, terrestrial habitat quality is mainly assessed by these specialists on the basis of plant species, since these are readily observed. The presence and abundance of plant species that are scarce, or have important functional roles, emerged as important criteria for judging overall habitat quality. However, species defined as 'positive indicator-species' (not particularly scarce, but distinctive for the habitat were considered particularly important. Scarce species are by definition not always found, and the presence of functionally important species is not a sufficient indicator of site quality. Habitat quality as assessed by the key informants was rank-correlated with the number of positive indicator-species present at a site for seven of the nine habitat classes assessed. Other metrics such as species-richness or a metric of scarcity were inconsistently or not correlated with the specialists' assessments. We recommend that metrics of habitat quality used to assess N pollution impacts are based on the occurrence of, or habitat-suitability for, distinctive species. Metrics of this type are likely to be widely applicable for assessing habitat change in response to different drivers. The novel

  5. Reliability and accuracy of the thoracic impedance signal for measuring cardiopulmonary resuscitation quality metrics.

    Science.gov (United States)

    Alonso, Erik; Ruiz, Jesús; Aramendi, Elisabete; González-Otero, Digna; Ruiz de Gauna, Sofía; Ayala, Unai; Russell, James K; Daya, Mohamud

    2015-03-01

    To determine the accuracy and reliability of the thoracic impedance (TI) signal for assessing cardiopulmonary resuscitation (CPR) quality metrics. A dataset of 63 out-of-hospital cardiac arrest episodes containing the compression depth (CD), capnography and TI signals was used. We developed a chest compression (CC) and ventilation detector based on the TI signal. The TI signal shows fluctuations due to CCs and ventilations; a decision algorithm classified the local maxima as CCs or ventilations. Seven CPR quality metrics were computed: mean CC-rate, fraction of minutes with inadequate CC-rate, chest compression fraction, mean ventilation rate, fraction of minutes with hyperventilation, instantaneous CC-rate and instantaneous ventilation rate. The CD and capnography signals were accepted as the gold standard for CC and ventilation detection, respectively. The accuracy of the detector was evaluated in terms of sensitivity and positive predictive value (PPV). Distributions for each metric computed from the TI and from the gold standard were calculated and tested for normality using a one-sample Kolmogorov-Smirnov test. For normal and non-normal distributions, a two-sample t-test and a Mann-Whitney U test were applied to test for equal means and medians, respectively. Bland-Altman plots were produced for each metric to analyze the level of agreement between values obtained from the TI and the gold standard. The CC/ventilation detector had a median sensitivity/PPV of 97.2%/97.7% for CCs and 92.2%/81.0% for ventilations, respectively. Distributions for all the metrics showed equal means or medians, and agreement >95% between metrics and gold standard was achieved for most of the episodes in the test set, except for the instantaneous ventilation rate. With our data, the TI signal can be reliably used to measure all the CPR quality metrics proposed in this study, except for the instantaneous ventilation rate. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
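
    For reference, the detector accuracy figures above reduce to simple ratios over true and false detections; a small illustration with hypothetical counts for one episode:

```python
def sensitivity_ppv(tp, fp, fn):
    """Sensitivity = TP / (TP + FN); positive predictive value = TP / (TP + FP)."""
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical chest-compression detections in a single episode
sens, ppv = sensitivity_ppv(tp=480, fp=11, fn=14)
print(f"sensitivity = {sens:.3f}, PPV = {ppv:.3f}")
```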

  6. Implementation of performance metrics to assess pharmacists' activities in ambulatory care clinics.

    Science.gov (United States)

    Schmidt, Lauren; Klink, Chris; Iglar, Arlene; Sharpe, Neha

    2017-01-01

    The development and implementation of performance metrics for assessing the impact of pharmacists' activities in ambulatory care clinics are described. Ambulatory care clinic pharmacists within an integrated health system were surveyed to ascertain baseline practices for documenting and tracking performance metrics. Through literature review and meetings with various stakeholders, priorities for metric development were identified; measures of care quality, financial impact, and patient experience were developed. To measure the quality of care, pharmacists' interventions at five ambulatory care clinics within the health system were assessed. Correlation of pharmacist interventions with estimated cost avoidance provided a measure of financial impact. Surveys were distributed at the end of clinic visits to measure satisfaction with the patient care experience. An electronic system for metric documentation and automated tabulation of data on quality and financial impact was built. In a 12-week pilot program conducted at three clinic sites, the metrics were used to assess pharmacists' activities. A total of 764 interventions were documented (a mean of 24 accepted recommendations per pharmacist full-time equivalent each week), resulting in estimated cost avoidance of more than $40,000; survey results indicated high patient satisfaction with the services provided by pharmacists. Biweekly report auditing and solicitation of feedback guided metric refinement and further training of pharmacists. Tools and procedures were established for future metric expansion. Development and implementation of performance metrics resulted in successful capture and characterization of pharmacists' activities and their impact on patient care in three ambulatory care clinics. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  7. Automatic red eye correction and its quality metric

    Science.gov (United States)

    Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho

    2008-01-01

    Red eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, and thereby making photos more pleasant for an observer, are important tasks. A novel, efficient technique for automatic correction of red eyes aimed at photo printers is proposed. The algorithm is independent of face orientation and is capable of detecting paired red eyes as well as single red eyes. The approach is based on the application of 3D tables with typicalness levels for red eyes and human skin tones, and directional edge detection filters for processing of the redness image. Machine learning is applied for feature selection. For classification of red eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening and blending with the initial image. Several implementation variants are possible, trading off detection and correction quality, processing time and memory footprint. A numeric quality criterion for automatic red eye correction is proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.
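
    The AHP step referred to above turns pairwise preferences between correction outcomes into priority weights. A minimal sketch of the standard principal-eigenvector formulation (the paper's exact aggregation of consumer opinions is not reproduced here; the comparison matrix is hypothetical):

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights of an AHP pairwise-comparison matrix:
    the normalized principal eigenvector."""
    w, V = np.linalg.eig(np.asarray(pairwise, dtype=float))
    v = np.abs(np.real(V[:, np.argmax(np.real(w))]))
    return v / v.sum()

# Hypothetical comparisons of three correction outcomes on the 1-9 Saaty scale
P = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(ahp_weights(P))
```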

  8. Modeling quality attributes and metrics for web service selection

    Science.gov (United States)

    Oskooei, Meysam Ahmadi; Daud, Salwani binti Mohd; Chua, Fang-Fang

    2014-06-01

    Since service-oriented architecture (SOA) is designed to build systems as distributed applications, service selection has become a vital aspect of service-oriented computing (SOC). Selecting the appropriate web service with respect to quality of service (QoS), by formulating the choice as a mathematical optimization problem, has become a common concern for service users. The number of web services that provide the same functionality keeps increasing, and selecting a service from a set of alternatives that differ in quality parameters can be difficult for service consumers. In this paper, a new model for QoS attributes and metrics is proposed to provide a suitable solution for optimizing web service selection and composition with low complexity.

  9. Proxy Graph: Visual Quality Metrics of Big Graph Sampling.

    Science.gov (United States)

    Nguyen, Quan Hoang; Hong, Seok-Hee; Eades, Peter; Meidiana, Amyra

    2017-06-01

    Data sampling has been extensively studied for large scale graph mining. Many analyses and tasks become more efficient when performed on graph samples of much smaller size. The use of proxy objects is common in software engineering for analysis and interaction with heavy objects or systems. In this paper, we coin the term 'proxy graph' and empirically investigate how well a proxy graph visualization can represent a big graph. Our investigation focuses on proxy graphs obtained by sampling; this is one of the most common proxy approaches. Despite the plethora of data sampling studies, this is the first evaluation of sampling in the context of graph visualization. For an objective evaluation, we propose a new family of quality metrics for visual quality of proxy graphs. Our experiments cover popular sampling techniques. Our experimental results lead to guidelines for using sampling-based proxy graphs in visualization.

  10. Applicability of Existing Objective Metrics of Perceptual Quality for Adaptive Video Streaming

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Krasula, Lukás; Shahid, Muhammad

    2016-01-01

    Objective video quality metrics are designed to estimate the quality of experience of the end user. However, these objective metrics are usually validated with video streams degraded under common distortion types. In the presented work, we analyze the performance of published and known full-reference and no-reference quality metrics in estimating the perceived quality of adaptive bit-rate video streams, knowingly out of their design scope. Experimental results indicate, not surprisingly, that state-of-the-art objective quality metrics overlook the perceived degradations in the adaptive video streams and perform poorly in estimating the subjective quality results.

  11. Environmental Quality and Aquatic Invertebrate Metrics Relationships at Patagonian Wetlands Subjected to Livestock Grazing Pressures.

    Science.gov (United States)

    Epele, Luis Beltrán; Miserendino, María Laura

    2015-01-01

    Livestock grazing can compromise the biotic integrity and health of wetlands, especially in remote areas like Patagonia, which provide habitat for several endemic terrestrial and aquatic species. Understanding the effects of these land use practices on invertebrate communities can help prevent the deterioration of wetlands and provide insights for restoration. In this contribution, we assessed the responses of 36 metrics based on the structural and functional attributes of invertebrates (130 taxa) at 30 Patagonian wetlands that were subject to different levels of livestock grazing intensity. These levels were categorized as low, medium and high based on eight features (livestock stock densities plus seven wetland measurements). Significant changes in environmental features were detected across the gradient of wetlands, mainly related to pH, conductivity, and nutrient values. Regardless of rainfall gradient, symptoms of eutrophication were remarkable at some highly disturbed sites. Seven invertebrate metrics consistently and accurately responded to livestock grazing on wetlands. All of them were negatively related to increased levels of grazing disturbance, with the number of insect families appearing as the most robust measure. A multivariate approach (RDA) revealed that invertebrate metrics were significantly affected by environmental variables related to water quality: in particular, pH, conductivity, dissolved oxygen, nutrient concentrations, and the richness and coverage of aquatic plants. Our results suggest that the seven aforementioned metrics could be used to assess ecological quality in the arid and semi-arid wetlands of Patagonia, helping to ensure the creation of protected areas and their associated ecological services.

  12. "Assessment of different bioequivalent metrics in Rifampin bioequivalence study "

    Directory of Open Access Journals (Sweden)

    "Rouini MR

    2002-08-01

    Full Text Available The use of secondary metrics has become of special interest in bioequivalency studies. The applicability of the partial area method, truncated AUC and Cmax/AUC has been argued by many authors. This study aims to evaluate the possible superiority of these metrics to the primary metrics (i.e., AUCinf, Cmax and Tmax). The suitability of truncated AUC for assessment of absorption extent, as well as Cmax/AUC and partial AUC for the evaluation of absorption rate in bioequivalency determination, was investigated following administration of the same product as test and reference to 7 healthy volunteers. Among the pharmacokinetic parameters obtained, Cmax/AUCinf was a better indicator of absorption rate, and AUCinf was more sensitive than truncated AUC in the evaluation of absorption extent.
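
    A sketch of how the primary and secondary metrics discussed above can be computed from a concentration-time profile (trapezoidal AUC without terminal-phase extrapolation to AUCinf; the concentration values below are hypothetical):

```python
import numpy as np

def be_metrics(t, c, t_trunc=None):
    """Cmax, Tmax, trapezoidal AUC, optional truncated AUC, and Cmax/AUC."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    trapz = lambda y, x: float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))
    cmax, tmax = c.max(), t[c.argmax()]
    auc = trapz(c, t)
    auc_trunc = trapz(c[t <= t_trunc], t[t <= t_trunc]) if t_trunc is not None else None
    return {"Cmax": cmax, "Tmax": tmax, "AUC": auc,
            "AUC_trunc": auc_trunc, "Cmax/AUC": cmax / auc}

# Hypothetical plasma concentrations (mg/L) sampled over 24 h
t = [0, 0.5, 1, 2, 4, 6, 8, 12, 24]
c = [0, 2.1, 6.8, 9.5, 7.2, 5.0, 3.4, 1.6, 0.2]
print(be_metrics(t, c, t_trunc=8))
```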

  13. Quality Metrics in Neonatal and Pediatric Critical Care Transport: A National Delphi Project.

    Science.gov (United States)

    Schwartz, Hamilton P; Bigham, Michael T; Schoettker, Pamela J; Meyer, Keith; Trautman, Michael S; Insoft, Robert M

    2015-10-01

    The transport of neonatal and pediatric patients to tertiary care facilities for specialized care demands monitoring the quality of care delivered during transport and its impact on patient outcomes. In 2011, pediatric transport teams in Ohio met to identify quality indicators permitting comparisons among programs. However, no set of national consensus quality metrics exists for benchmarking transport teams. The aim of this project was to achieve national consensus on appropriate neonatal and pediatric transport quality metrics. Modified Delphi technique. The first round of consensus determination was via electronic mail survey, followed by rounds of consensus determination in-person at the American Academy of Pediatrics Section on Transport Medicine's 2012 Quality Metrics Summit. All attendees of the American Academy of Pediatrics Section on Transport Medicine Quality Metrics Summit, conducted on October 21-23, 2012, in New Orleans, LA, were eligible to participate. Candidate quality metrics were identified through literature review and those metrics currently tracked by participating programs. Participants were asked in a series of rounds to identify "very important" quality metrics for transport. It was determined a priori that consensus on a metric's importance was achieved when at least 70% of respondents were in agreement. This is consistent with other Delphi studies. Eighty-two candidate metrics were considered initially. Ultimately, 12 metrics achieved consensus as "very important" to transport. These include metrics related to airway management, team mobilization time, patient and crew injuries, and adverse patient care events. Definitions were assigned to the 12 metrics to facilitate uniform data tracking among programs. The authors succeeded in achieving consensus among a diverse group of national transport experts on 12 core neonatal and pediatric transport quality metrics. We propose that transport teams across the country use these metrics to

  14. Sustainability metrics: life cycle assessment and green design in polymers.

    Science.gov (United States)

    Tabone, Michaelangelo D; Cregg, James J; Beckman, Eric J; Landis, Amy E

    2010-11-01

    This study evaluates the efficacy of green design principles such as the "12 Principles of Green Chemistry," and the "12 Principles of Green Engineering" with respect to environmental impacts found using life cycle assessment (LCA) methodology. A case study of 12 polymers is presented, seven derived from petroleum, four derived from biological sources, and one derived from both. The environmental impacts of each polymer's production are assessed using LCA methodology standardized by the International Organization for Standardization (ISO). Each polymer is also assessed for its adherence to green design principles using metrics generated specifically for this paper. Metrics include atom economy, mass from renewable sources, biodegradability, percent recycled, distance of furthest feedstock, price, life cycle health hazards and life cycle energy use. A decision matrix is used to generate single value metrics for each polymer evaluating either adherence to green design principles or life-cycle environmental impacts. Results from this study show a qualified positive correlation between adherence to green design principles and a reduction of the environmental impacts of production. The qualification results from a disparity between biopolymers and petroleum polymers. While biopolymers rank highly in terms of green design, they exhibit relatively large environmental impacts from production. Biopolymers rank 1, 2, 3, and 4 based on green design metrics; however they rank in the middle of the LCA rankings. Polyolefins rank 1, 2, and 3 in the LCA rankings, whereas complex polymers, such as PET, PVC, and PC place at the bottom of both ranking systems.

  15. A task-based quality control metric for digital mammography

    Science.gov (United States)

    Maki Bloomquist, A. K.; Mainprize, J. G.; Mawdsley, G. E.; Yaffe, M. J.

    2014-11-01

    A reader study was conducted to tune the parameters of an observer model used to predict the detectability index (dʹ) of test objects as a task-based quality control (QC) metric for digital mammography. A simple test phantom was imaged to measure the model parameters, namely, noise power spectrum, modulation transfer function and test-object contrast. These are then used in a non-prewhitening observer model, incorporating an eye-filter and internal noise, to predict dʹ. The model was tuned by measuring dʹ of discs in a four-alternative forced choice reader study. For each disc diameter, dʹ was used to estimate the threshold thicknesses for detectability. Data were obtained for six types of digital mammography systems using varying detector technologies and x-ray spectra. A strong correlation was found between measured and modeled values of dʹ, with a Pearson correlation coefficient of 0.96. Repeated measurements from separate images of the test phantom show an average coefficient of variation in dʹ for different systems between 0.07 and 0.10. Standard deviations in the threshold thickness ranged between 0.001 and 0.017 mm. The model is robust and the results are relatively system independent, suggesting that observer model dʹ shows promise as a cross-platform QC metric for digital mammography.
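
    For context, one commonly used non-prewhitening-with-eye-filter (NPWE) form of the detectability index is sketched below; the exact variant used in the paper, including how internal noise enters, may differ:

```latex
{d'}_{\mathrm{NPWE}}^{2} =
\frac{\left[ \iint \lvert \Delta S(u,v)\rvert^{2}\, \mathrm{MTF}^{2}(u,v)\, E^{2}(u,v)\, \mathrm{d}u\, \mathrm{d}v \right]^{2}}
     {\iint \lvert \Delta S(u,v)\rvert^{2}\, \mathrm{MTF}^{2}(u,v)\, E^{4}(u,v)\,
      \left[ \mathrm{NPS}(u,v) + \mathrm{NPS}_{\mathrm{int}}(u,v) \right] \mathrm{d}u\, \mathrm{d}v}
```

    Here ΔS is the frequency-domain task function of the test object (its contrast times the disc spectrum), E the eye filter, NPS the measured noise power spectrum, and NPS_int an internal-noise term.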

  16. Development of Quality Metrics to Evaluate Pediatric Hematologic Oncology Care in the Outpatient Setting.

    Science.gov (United States)

    Teichman, Jennifer; Punnett, Angela; Gupta, Sumit

    2017-03-01

    There are currently no clinic-level quality of care metrics for outpatient pediatric oncology. We sought to develop a list of quality of care metrics for a leukemia-lymphoma (LL) clinic using a consensus process that can be adapted to other clinic settings. Medline-Ovid was searched for quality indicators relevant to pediatric oncology. A provisional list of 27 metrics spanning 7 categories was generated and circulated to a Consensus Group (CG) of LL clinic medical and nursing staff. A Delphi process comprising 2 rounds of ranking generated consensus on a final list of metrics. Consensus was defined as ≥70% of CG members ranking a metric within 2 consecutive scores. In round 1, 19 of 27 (70%) metrics reached consensus. CG members' comments resulted in 4 new metrics and revision of 8 original metrics. All 31 metrics were included in round 2. Twenty-four of 31 (77%) metrics reached consensus after round 2. Thirteen were chosen for the final list based on the highest scores and the elimination of redundancy. These included: patient communication/education; pain management; delay in access to clinical psychology; documentation of chemotherapy, of diagnosis/extent of disease, of treatment plan and of follow-up scheme; referral to transplant; radiation exposure during follow-up; delay until chemotherapy; clinic cancellations; and school attendance. This study provides a model of quality metric development that other clinics may adapt for local use. The final metrics will be used for ongoing quality improvement in the LL clinic.
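
    The consensus rule used above (≥70% of members ranking a metric within 2 consecutive scores) can be checked mechanically; a small illustration with hypothetical panel rankings:

```python
from collections import Counter

def has_consensus(scores, threshold=0.70):
    """True if at least `threshold` of respondents fall within two consecutive scores."""
    counts = Counter(scores)
    return any((counts[s] + counts[s + 1]) / len(scores) >= threshold
               for s in range(min(scores), max(scores) + 1))

# Hypothetical 1-9 rankings from 20 consensus-group members for one candidate metric
print(has_consensus([7, 8, 8, 7, 8, 9, 7, 8, 8, 7, 8, 7, 8, 8, 6, 7, 8, 8, 7, 8]))
```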

  17. Metrics and the effective computational scientist: process, quality and communication.

    Science.gov (United States)

    Baldwin, Eric T

    2012-09-01

    Recent treatments of computational knowledge worker productivity have focused upon the value the discipline brings to drug discovery using positive anecdotes. While this big picture approach provides important validation of the contributions of these knowledge workers, the impact accounts do not provide the granular detail that can help individuals and teams perform better. I suggest balancing the impact-focus with quantitative measures that can inform the development of scientists. Measuring the quality of work, analyzing and improving processes, and the critical evaluation of communication can provide immediate performance feedback. The introduction of quantitative measures can complement the longer term reporting of impacts on drug discovery. These metric data can document effectiveness trends and can provide a stronger foundation for the impact dialogue. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Generating reliable quality of information (QoI) metrics for target tracking

    Science.gov (United States)

    Tan, Chung Huat J.; Gillies, Duncan F.

    2009-05-01

    Recently, considerable research has been undertaken into estimating the quality of information (QoI) delivered by military sensor networks. QoI essentially estimates the probability that the information available from the network is correct. Knowledge of the QoI would clearly be of great use to decision makers using a network. An important class of sensors that provide inputs to such networks in real life is concerned with target tracking. Assessing the tracking performance of these sensors is an essential component in estimating the QoI of the whole network. We have investigated three potential QoI metrics for estimating the dynamic target tracking performance of systems based on state estimation algorithms. We have tested them on different scenarios with varying degrees of tracking difficulty. We performed experiments on simulated data so that we have a ground truth against which to assess the performance of each metric. Our measure of ground truth is the Euclidean distance between the estimated position and the true position. Recently, researchers have suggested using the entropy of the covariance matrix as a metric of QoI [1][2]. Two of our metrics were based on this approach, the first being the entropy of the covariance matrix relative to an ideal distribution, and the second being the information gain at each update of the covariance matrix. The third metric was calculated by smoothing the residual likelihood value at each new measurement point, similar to the model update likelihood function in an IMM filter. Our experimental results show that reliable QoI metrics cannot be formulated by using the covariance matrices alone. In other words, it is possible for a covariance matrix to have high information content while the position estimate is wrong. On the other hand, the smoothed residual likelihood does correlate well with tracking performance, and can be measured without knowledge of the true target position.
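
    As an illustration of the covariance-entropy idea discussed above (a sketch, not the authors' code): the differential entropy of a Gaussian state estimate with covariance P is 0.5 ln((2*pi*e)^n det(P)). The covariance values below are hypothetical.

```python
import numpy as np

def gaussian_entropy(P):
    """Differential entropy (in nats) of a Gaussian estimate with covariance P."""
    n = P.shape[0]
    _, logdet = np.linalg.slogdet(P)
    return 0.5 * (n * np.log(2.0 * np.pi * np.e) + logdet)

# Hypothetical 2x2 position covariance from a tracking filter (m^2)
P = np.array([[4.0, 0.5],
              [0.5, 2.0]])
print(gaussian_entropy(P))
```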

  19. Macroinvertebrate and diatom metrics as indicators of water-quality conditions in connected depression wetlands in the Mississippi Alluvial Plain

    Science.gov (United States)

    Justus, Billy; Burge, David; Cobb, Jennifer; Marsico, Travis; Bouldin, Jennifer

    2016-01-01

    Methods for assessing wetland conditions must be established so wetlands can be monitored and ecological services can be protected. We evaluated biological indices compiled from macroinvertebrate and diatom metrics developed primarily for streams to assess their ability to indicate water quality in connected depression wetlands. We collected water-quality and biological samples at 24 connected depressions dominated by water tupelo (Nyssa aquatica) or bald cypress (Taxodium distichum) (water depths = 0.5–1.0 m). Water quality of the least-disturbed connected depressions was characteristic of swamps in the southeastern USA, which tend to have low specific conductance, nutrient concentrations, and pH. We compared 162 macroinvertebrate metrics and 123 diatom metrics with a water-quality disturbance gradient. For most metrics, we evaluated richness, % richness, abundance, and % relative abundance values. Three of the 4 macroinvertebrate metrics that were most beneficial for identifying disturbance in connected depressions decreased along the disturbance gradient even though they normally increase relative to stream disturbance. The negative relationship to disturbance of some taxa (e.g., dipterans, mollusks, and crustaceans) that are considered tolerant in streams suggests that the tolerance scale for some macroinvertebrates can differ markedly between streams and wetlands. Three of the 4 metrics chosen for the diatom index reflected published tolerances or fit the usual perception of metric response to disturbance. Both biological indices may be useful in connected depressions elsewhere in the Mississippi Alluvial Plain Ecoregion and could have application in other wetland types. Given the paradoxical relationship of some macroinvertebrate metrics to dissolved O2 (DO), we suggest that the diatom metrics may be easier to interpret and defend for wetlands with low DO concentrations in least-disturbed conditions.

  20. Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.

    Science.gov (United States)

    Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel

    2017-07-28

    New challenges have emerged along with 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), which allows flexible selection of viewing direction and viewpoint and has applications in remote surveillance, remote education, and related areas, is widely regarded as a development direction for next-generation video technologies and has drawn the attention of a wide range of researchers. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. However, existing assessment metrics do not render human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using an autoregression (AR)-based local image description. It was found that, after the AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method compared with prevailing full-, reduced- and no-reference models.
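
    A simplified sketch of the AR-based local description idea (not the authors' model): each pixel is predicted from its eight neighbours with coefficients fitted per block by least squares, and the prediction residual highlights geometric inconsistencies typical of DIBR synthesis. The block size and input image are arbitrary choices for illustration.

```python
import numpy as np

def ar_residual_map(img, block=8):
    """Per-pixel absolute prediction error of a blockwise 8-neighbour AR predictor."""
    img = img.astype(float)
    H, W = img.shape
    pad = np.pad(img, 1, mode="edge")
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    neigh = np.stack([pad[1 + dy:1 + dy + H, 1 + dx:1 + dx + W] for dy, dx in offsets], axis=-1)
    resid = np.zeros_like(img)
    for y in range(0, H, block):
        for x in range(0, W, block):
            blk = (slice(y, min(y + block, H)), slice(x, min(x + block, W)))
            X = neigh[blk].reshape(-1, 8)
            t = img[blk].reshape(-1)
            coef, *_ = np.linalg.lstsq(X, t, rcond=None)  # local AR coefficients
            resid[blk] = np.abs(t - X @ coef).reshape(img[blk].shape)
    return resid

# Hypothetical synthesized view (random placeholder image)
view = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print(ar_residual_map(view).mean())
```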

  1. Quality metrics in solid organ transplantation: protocol for a systematic scoping review.

    Science.gov (United States)

    Brett, Kendra E; Bennett, Alexandria; Fergusson, Nicholas; Knoll, Greg A

    2016-06-14

    Transplantation is often the best, if not the only, treatment for end-stage organ failure; however, the quality metrics for determining whether a transplant program is delivering safe, high-quality care remain unknown. The purpose of this study is to identify and describe quality indicators or metrics in patients who have received a solid organ transplant. We will conduct a systematic scoping review to evaluate and describe quality indicators or metrics in patients who have received a solid organ transplant. We will search MEDLINE, Embase, and the Cochrane Central Register of Controlled Trials. Two reviewers will conduct all screening and data extraction independently. The articles will be categorized according to the six domains of quality, and the metrics will be appraised using criteria for a good quality measure. The results of this review will guide the development, selection, and validation of appropriate quality metrics necessary to drive quality improvement in transplantation. PROSPERO CRD42016035353.

  2. No Reference Prediction of Quality Metrics for H.264 Compressed Infrared Image Sequences for UAV Applications

    DEFF Research Database (Denmark)

    Hossain, Kabir; Mantel, Claire; Forchhammer, Søren

    2018-01-01

    The framework for this research work is the acquisition of Infrared (IR) images from Unmanned Aerial Vehicles (UAV). In this paper we consider the No-Reference (NR) prediction of Full Reference Quality Metrics for Infrared (IR) video sequences which are compressed and thus distorted by an H.264 codec ... and temporal perceptual information. Those features are then mapped, using a machine learning (ML) algorithm, the Support Vector Regression (SVR), to the quality scores of Full Reference (FR) quality metrics. The novelty of this work is to design a NR framework for the prediction of quality metrics by applying ... with the true FR quality metric scores of four image metrics (PSNR, NQM, SSIM and UQI) and one video metric (VQM). Results show that our technique achieves a fairly reasonable performance. The improvement obtained in SROCC and LCC is up to 0.99, and the RMSE is reduced to as little as 0.01 between ...
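
    A minimal sketch of the NR-to-FR mapping step described above (not the paper's feature set): no-reference features per sequence are regressed onto full-reference scores such as PSNR with an SVR. All data below are placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Placeholder NR features (e.g., blockiness, temporal information) and FR targets (e.g., PSNR)
rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(200, 10)), rng.uniform(25, 45, size=200)
X_test = rng.normal(size=(20, 10))

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_train, y_train)
predicted_fr_scores = model.predict(X_test)
print(predicted_fr_scores[:5])
```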

  3. External Quality Metrics for Object-Oriented Software: A Systematic Literature Review

    Directory of Open Access Journals (Sweden)

    Danilo Santos

    2017-12-01

    Full Text Available Software quality metrics can be categorized into internal quality, external quality, and quality-in-use metrics. Although a close relationship exists between internal and external software quality, there is no explicit evidence in the literature that attributes and metrics of internal quality impact external quality. Knowing this is essential for choosing which metric to use according to the software characteristic one wants to improve. Hence, we carried out a systematic literature review to identify this relationship. After analyzing 664 papers, 12 papers were studied in depth. As a result, we found 65 metrics related to the maintainability, usability, and reliability quality characteristics, as well as the main attributes that impact external metrics (size, coupling, and cohesion). We then selected the metrics that have clear definitions, are appropriately related to the characteristic they purport to measure, and do not use subjective attributes in their computation. These metrics are therefore more robust and reliable for evaluating software characteristics, and are better suited for use in practice by professionals working in the software market.

  4. Eye metrics as an objective assessment of surgical skill.

    Science.gov (United States)

    Richstone, Lee; Schwartz, Michael J; Seideman, Casey; Cadeddu, Jeffrey; Marshall, Sandra; Kavoussi, Louis R

    2010-07-01

    Currently, surgical skills assessment relies almost exclusively on subjective measures, which are susceptible to multiple biases. We investigate the use of eye metrics as an objective tool for assessment of surgical skill. Eye tracking has helped elucidate relationships between eye movements, visual attention, and insight, all of which are employed during complex task performance (Kowler and Martins, Science. 1982;215:997-999; Tanenhaus et al, Science. 1995;268:1632-1634; Thomas and Lleras, Psychon Bull Rev. 2007;14:663-668; Thomas and Lleras, Cognition. 2009;111:168-174; Schriver et al, Hum Factors. 2008;50:864-878; Kahneman, Attention and Effort. 1973). Discovery of associations between characteristic eye movements and degree of cognitive effort have also enhanced our appreciation of the learning process. Using linear discriminant analysis (LDA) and nonlinear neural network analyses (NNA) to classify surgeons into expert and nonexpert cohorts, we examine the relationship between complex eye and pupillary movements, collectively referred to as eye metrics, and surgical skill level. Twenty-one surgeons participated in the simulated and live surgical environments. In the simulated surgical setting, LDA and NNA were able to correctly classify surgeons as expert or nonexpert with 91.9% and 92.9% accuracy, respectively. In the live operating room setting, LDA and NNA were able to correctly classify surgeons as expert or nonexpert with 81.0% and 90.7% accuracy, respectively. We demonstrate, in simulated and live-operating environments, that eye metrics can reliably distinguish nonexpert from expert surgeons. As current medical educators rely on subjective measures of surgical skill, eye metrics may serve as the basis for objective assessment in surgical education and credentialing in the future. Further development of this potential educational tool is warranted to assess its ability to both reliably classify larger groups of surgeons and follow progression of surgical
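
    A minimal sketch of the LDA classification step described above (not the authors' pipeline), with hypothetical eye-metric features and expert/nonexpert labels:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Placeholder data: 21 surgeons, 5 eye-metric features (fixations, pupil statistics, ...)
rng = np.random.default_rng(2)
X = rng.normal(size=(21, 5))
y = rng.integers(0, 2, size=21)   # 0 = nonexpert, 1 = expert (hypothetical labels)

accuracy = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print("leave-one-out accuracy:", accuracy.mean())
```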

  5. Attention modeling for video quality assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2010-01-01

    This paper proposes to evaluate video quality by balancing two quality components: global quality and local quality. The global quality results from subjects allocating their attention equally to all regions in a frame and all frames in a video. It is evaluated by image quality metrics (IQM) with averaged spatiotemporal pooling. The local quality is derived from visual attention modeling and quality variations over frames. Saliency, motion, and contrast information are taken into account in modeling visual attention, which is then integrated into IQMs to calculate the local quality of a video frame. The proposed quality modeling algorithm can improve the performance of image quality metrics on video quality assessment compared to the normal averaged spatiotemporal pooling scheme.

  6. Software metrics: The key to quality software on the NCC project

    Science.gov (United States)

    Burns, Patricia J.

    1993-01-01

    Network Control Center (NCC) Project metrics are captured during the implementation and testing phases of the NCCDS software development lifecycle. The metrics data collection and reporting function has interfaces with all elements of the NCC project. Close collaboration with all project elements has resulted in the development of a defined and repeatable set of metrics processes. The resulting data are used to plan and monitor release activities on a weekly basis. The use of graphical outputs facilitates the interpretation of progress and status. The successful application of metrics throughout the NCC project has been instrumental in the delivery of quality software. The use of metrics on the NCC Project supports the needs of the technical and managerial staff. This paper describes the project, the functions supported by metrics, the data that are collected and reported, how the data are used, and the improvements in the quality of deliverable software since the metrics processes and products have been in use.

  7. Modeling the interannual variability of microbial quality metrics of irrigation water in a Pennsylvania stream.

    Science.gov (United States)

    Hong, Eun-Mi; Shelton, Daniel; Pachepsky, Yakov A; Nam, Won-Ho; Coppock, Cary; Muirhead, Richard

    2017-02-01

    Knowledge of the microbial quality of irrigation waters is extremely limited. For this reason, the US FDA has promulgated the Produce Rule, mandating the testing of irrigation water sources for many farms. The rule requires the collection and analysis of at least 20 water samples over two to four years to adequately evaluate the quality of water intended for produce irrigation. The objective of this work was to evaluate the effect of interannual weather variability on surface water microbial quality. We used the Soil and Water Assessment Tool model to simulate E. coli concentrations in the Little Cove Creek; this is a perennial creek located in an agricultural watershed in south-eastern Pennsylvania. The model performance was evaluated using the US FDA regulatory microbial water quality metrics of geometric mean (GM) and the statistical threshold value (STV). Using the 90-year time series of weather observations, we simulated and randomly sampled the time series of E. coli concentrations. We found that weather conditions of a specific year may strongly affect the evaluation of microbial quality and that the long-term assessment of microbial water quality may be quite different from the evaluation based on short-term observations. The variations in microbial concentrations and water quality metrics were affected by location, wetness of the hydrological years, and seasonality, with 15.7-70.1% of samples exceeding the regulatory threshold. The results of this work demonstrate the value of using modeling to design and evaluate monitoring protocols to assess the microbial quality of water used for produce irrigation. Copyright © 2016 Elsevier Ltd. All rights reserved.
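
    The regulatory metrics named above can be sketched as follows: the geometric mean (GM) of the sample concentrations, and a statistical threshold value (STV) computed here as the modelled 90th percentile of a lognormal fit (one common formulation; the regulation's exact procedure should be checked before use). Sample values are hypothetical.

```python
import numpy as np

def gm_stv(cfu_per_100ml):
    """Geometric mean and lognormal 90th-percentile statistical threshold value."""
    x = np.log(np.asarray(cfu_per_100ml, dtype=float))
    gm = np.exp(x.mean())
    stv = np.exp(x.mean() + 1.282 * x.std(ddof=1))   # z(0.90) ~ 1.282
    return gm, stv

# Hypothetical 20-sample generic E. coli monitoring record (CFU / 100 mL)
samples = [12, 35, 8, 150, 60, 25, 410, 18, 75, 33,
           9, 52, 120, 14, 66, 230, 40, 21, 95, 30]
print(gm_stv(samples))
```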

  8. Development of quality metrics for ambulatory pediatric cardiology: Chest pain.

    Science.gov (United States)

    Lu, Jimmy C; Bansal, Manish; Behera, Sarina K; Boris, Jeffrey R; Cardis, Brian; Hokanson, John S; Kakavand, Bahram; Jedeikin, Roy

    2017-12-01

    As part of the American College of Cardiology Adult Congenital and Pediatric Cardiology Section effort to develop quality metrics (QMs) for ambulatory pediatric practice, the chest pain subcommittee aimed to develop QMs for evaluation of chest pain. A group of 8 pediatric cardiologists formulated candidate QMs in the areas of history, physical examination, and testing. Consensus candidate QMs were submitted to an expert panel for scoring by the RAND-UCLA modified Delphi process. Recommended QMs were then available for open comments from all members. These QMs are intended for use in patients 5-18 years old, referred for initial evaluation of chest pain in an ambulatory pediatric cardiology clinic, with no known history of pediatric or congenital heart disease. A total of 10 candidate QMs were submitted; 2 were rejected by the expert panel, and 5 were removed after the open comment period. The 3 approved QMs included: (1) documentation of family history of cardiomyopathy, early coronary artery disease or sudden death, (2) performance of electrocardiogram in all patients, and (3) performance of an echocardiogram to evaluate coronary arteries in patients with exertional chest pain. Despite practice variation and limited prospective data, 3 QMs were approved, with measurable data points which may be extracted from the medical record. However, further prospective studies are necessary to define practice guidelines and to develop appropriate use criteria in this population. © 2017 Wiley Periodicals, Inc.

  9. A Granular Hierarchical Multiview Metrics Suite for Statecharts Quality

    Directory of Open Access Journals (Sweden)

    Mokhtar Beldjehem

    2013-01-01

    Full Text Available This paper presents a bottom-up approach for a multiview measurement of statechart size, topological properties, and internal structural complexity for understandability prediction and assurance purposes. It tackles the problem at different conceptual depths or, equivalently, at several abstraction levels. The main idea is to study and evaluate a statechart at different levels of granulation corresponding to different conceptual depth levels or levels of detail. The highest level corresponds to a flat process view diagram (depth = 0); the adequate upper depth limit is determined by the modelers according to the inherent complexity of the problem under study and the level of detail required for the situation at hand (it corresponds to the all-states view). For purposes of measurement, we proceed using a bottom-up strategy: starting with the all-states view diagram, we identify and measure its deepest composite-state constituent parts and then gradually collapse them to obtain the next intermediate view (decrementing depth while aggregating measures incrementally), until reaching the flat process view diagram. To this end, we first identify, define, and derive a relevant metrics suite useful for predicting the level of understandability and other quality aspects of a statechart, and then we propose a fuzzy rule-based system prototype for understandability prediction, assurance, and validation purposes.

  10. Application of the ITIQUE Image Quality Modeling Metric to SSA Domain Imagery

    Science.gov (United States)

    Gerwe, D.; Luna, C.; Calef, B.

    2012-09-01

    This paper describes and assesses a metric for quantifying the level of image visual information content and quality in terms of the National Imagery Interpretability Rating Scale (NIIRS). The Information Theoretic Image Quality Equation (ITIQUE) metric is based on the Shannon mutual information (MI), at multiple spatial scales, between a pristine object and the image output from a detailed image formation chain simulation. Integrating the MI at each spatial scale and applying a calibration offset produces a prediction of NIIRS image quality indicating the level of interpretation tasks that could be supported. The model enables prediction of the NIIRS quality obtainable as a function of image collection conditions and imaging system design, including both hardware and processing algorithms. The focus of this paper is on ITIQUE's applicability to Space Situational Awareness (SSA) domain imagery, degradations, and non-linear processing techniques. ITIQUE results were compared with visual assessments from a panel of human observers for a set of 480 images that spanned a 16x resolution range, encompassed many degradation types, and included linear and non-linear image enhancement processing. ITIQUE model predictions are shown to agree with human scores to nearly within the human-to-human variability.

  11. Supporting visual quality assessment with machine learning

    NARCIS (Netherlands)

    Gastaldo, P.; Zunino, R.; Redi, J.

    2013-01-01

    Objective metrics for visual quality assessment often base their reliability on the explicit modeling of the highly non-linear behavior of human perception; as a result, they may be complex and computationally expensive. Conversely, machine learning (ML) paradigms allow one to tackle the quality

  12. Metrics of quality care in veterans: correlation between primary-care performance measures and inappropriate myocardial perfusion imaging.

    Science.gov (United States)

    Winchester, David E; Kitchen, Andrew; Brandt, John C; Dusaj, Raman S; Virani, Salim S; Bradley, Steven M; Shaw, Leslee J; Beyth, Rebecca J

    2015-04-01

    Approximately 10% to 20% of myocardial perfusion imaging (MPI) tests are inappropriate based on professional-society recommendations. The correlation between inappropriate MPI and quality care metrics is not known. We hypothesized that inappropriate MPI would be associated with low achievement of quality care metrics. We conducted a retrospective cross-sectional investigation at a single Veterans Affairs medical center. Myocardial perfusion imaging tests ordered by primary-care clinicians between December 2010 and July 2011 were assessed for appropriateness (by the 2009 criteria). Using documentation of the clinical encounter where MPI was ordered, we determined how often quality care metrics were achieved. Among 516 MPI patients, 52 (10.1%) were inappropriate and 464 (89.9%) were not inappropriate (either appropriate or uncertain). Hypertension (82.2%), diabetes mellitus (41.3%), and coronary artery disease (41.1%) were common. Glycated hemoglobin levels were lower in the inappropriate MPI cohort (6.6% vs 7.5%; P = 0.04). No difference was observed in the proportion with goal hemoglobin (62.5% vs 46.3% for appropriate/uncertain; P = 0.258). Systolic blood pressure was not different (132 mm Hg vs 135 mm Hg; P = 0.34). Achievement of several other categorical quality metrics was low in both cohorts, and no differences were observed. More than 90% of clinicians documented a plan to achieve most metrics. Inappropriate MPI is not associated with performance on metrics of quality care. If an association exists, it may be between inappropriate MPI and overly aggressive care. Most clinicians document a plan of care to address failure of quality metrics, suggesting awareness of the problem. © 2015 Wiley Periodicals, Inc.

  13. A guide to calculating habitat-quality metrics to inform conservation of highly mobile species

    Science.gov (United States)

    Bieri, Joanna A.; Sample, Christine; Thogmartin, Wayne E.; Diffendorfer, James E.; Earl, Julia E.; Erickson, Richard A.; Federico, Paula; Flockhart, D. T. Tyler; Nicol, Sam; Semmens, Darius J.; Skraber, T.; Wiederholt, Ruscena; Mattsson, Brady J.

    2018-01-01

    Many metrics exist for quantifying the relative value of habitats and pathways used by highly mobile species. Properly selecting and applying such metrics requires substantial background in mathematics and understanding the relevant management arena. To address this multidimensional challenge, we demonstrate and compare three measurements of habitat quality: graph-, occupancy-, and demographic-based metrics. Each metric provides insights into system dynamics, at the expense of increasing amounts and complexity of data and models. Our descriptions and comparisons of diverse habitat-quality metrics provide means for practitioners to overcome the modeling challenges associated with management or conservation of such highly mobile species. Whereas previous guidance for applying habitat-quality metrics has been scattered in diversified tracks of literature, we have brought this information together into an approachable format including accessible descriptions and a modeling case study for a typical example that conservation professionals can adapt for their own decision contexts and focal populations. Considerations for resource managers: Management objectives, proposed actions, data availability and quality, and model assumptions are all relevant considerations when applying and interpreting habitat-quality metrics. Graph-based metrics answer questions related to habitat centrality and connectivity, are suitable for populations with any movement pattern, quantify basic spatial and temporal patterns of occupancy and movement, and require the least data. Occupancy-based metrics answer questions about likelihood of persistence or colonization, are suitable for populations that undergo localized extinctions, quantify spatial and temporal patterns of occupancy and movement, and require a moderate amount of data. Demographic-based metrics answer questions about relative or absolute population size, are suitable for populations with any movement pattern, quantify demographic

  14. Advanced thermodynamics metrics for sustainability assessments of open engineering systems

    Directory of Open Access Journals (Sweden)

    Sekulić Dušan P.

    2006-01-01

    Full Text Available This paper offers a verification of the following hypotheses. Advanced thermodynamics metrics based on entropy generation assessments indicate the level of sustainability of transient open systems, such as those in manufacturing or process industries. The indicator of sustainability may be related to a particular property uniformity during materials processing. In such a case, the property uniformity would indicate the system's distance from equilibrium, i.e., from the sustainable energy utilization level. This idea is applied to a selected state-of-the-art manufacturing process. The system under consideration involves thermal processing of complex aluminum structures during controlled atmosphere brazing for near-net-shape mass production of compact heat exchangers.

  15. Economic Benefits: Metrics and Methods for Landscape Performance Assessment

    Directory of Open Access Journals (Sweden)

    Zhen Wang

    2016-04-01

    Full Text Available This paper introduces an expanding research frontier in the landscape architecture discipline, landscape performance research, which embraces the scientific dimension of landscape architecture through evidence-based designs that are anchored in quantitative performance assessment. Specifically, this paper summarizes metrics and methods for determining landscape-derived economic benefits that have been utilized in the Landscape Performance Series (LPS) initiated by the Landscape Architecture Foundation. This paper identifies 24 metrics and 32 associated methods for the assessment of economic benefits found in 82 published case studies. Common issues arising through research in quantifying economic benefits for the LPS are discussed and the various approaches taken by researchers are clarified. The paper also provides an analysis of three case studies from the LPS that are representative of common research methods used to quantify economic benefits. The paper suggests that high(er) levels of sustainability in the built environment require the integration of economic benefits into landscape performance assessment portfolios in order to forecast project success and reduce uncertainties. Therefore, evidence-based design approaches increase the scientific rigor of landscape architecture education and research, and elevate the status of the profession.

  16. Metrics-based assessments of research: incentives for 'institutional plagiarism'?

    Science.gov (United States)

    Berry, Colin

    2013-06-01

    The issue of plagiarism (claiming credit for work that is not one's own) rightly continues to cause concern in the academic community. An analysis is presented that shows the effects that may arise from metrics-based assessments of research, when credit for an author's outputs (chiefly publications) is given to an institution that did not support the research but which subsequently employs the author. The incentives for what is termed here "institutional plagiarism" are demonstrated with reference to the UK Research Assessment Exercise, in which submitting units of assessment are shown in some instances to derive around twice the credit for papers produced elsewhere by new recruits, compared to papers produced 'in-house'.

  17. ENVIRONMENTAL COMPARISON METRICS FOR LIFE CYCLE IMPACT ASSESSMENT AND PROCESS DESIGN

    Science.gov (United States)

    Metrics (potentials, potency factors, equivalency factors or characterization factors) are available to support the environmental comparison of alternatives in application domains like process design and product life-cycle assessment (LCA). These metrics typically provide relative...

  18. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale

    Science.gov (United States)

    Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Overview: Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms—Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. Cluster quality metrics: We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Network clustering algorithms: Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large
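
    To make the distinction between stand-alone and information-recovery metrics concrete, the sketch below scores a Louvain partition of a small planted-partition graph with modularity and conductance (stand-alone) and with NMI and adjusted Rand index (information recovery). It assumes networkx 2.8+ (for louvain_communities) and scikit-learn, and the graph itself is synthetic.

```python
# Sketch: compare stand-alone cluster quality metrics with information-recovery
# metrics on a small synthetic graph with a known planted partition.
import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

# Planted partition: two 50-node blocks, dense inside, sparse between.
G = nx.planted_partition_graph(l=2, k=50, p_in=0.3, p_out=0.02, seed=1)

# Ground-truth labels from the graph's stored partition.
true_labels = [0] * G.number_of_nodes()
for block_id, nodes in enumerate(G.graph["partition"]):
    for n in nodes:
        true_labels[n] = block_id

communities = louvain_communities(G, seed=1)
pred_labels = [0] * G.number_of_nodes()
for cid, nodes in enumerate(communities):
    for n in nodes:
        pred_labels[n] = cid

print("modularity :", round(modularity(G, communities), 3))          # stand-alone
print("conductance:", round(nx.conductance(G, communities[0]), 3))   # stand-alone, one cluster
print("NMI        :", round(normalized_mutual_info_score(true_labels, pred_labels), 3))
print("ARI        :", round(adjusted_rand_score(true_labels, pred_labels), 3))
```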

  19. Effective dose efficiency: an application-specific metric of quality and dose for digital radiography

    Energy Technology Data Exchange (ETDEWEB)

    Samei, Ehsan; Ranger, Nicole T; Dobbins, James T III; Ravin, Carl E, E-mail: samei@duke.edu [Carl E Ravin Advanced Imaging Laboratories, Department of Radiology (United States)

    2011-08-21

    The detective quantum efficiency (DQE) and the effective DQE (eDQE) are relevant metrics of image quality for digital radiography detectors and systems, respectively. The current study further extends the eDQE methodology to technique optimization using a new metric, the effective dose efficiency (eDE), reflecting both the image quality and the effective dose (ED) attributes of the imaging system. Using phantoms representing pediatric, adult and large adult body habitus, image quality measurements were made at 80, 100, 120 and 140 kVp using the standard eDQE protocol and exposures. ED was computed using Monte Carlo methods. The eDE was then computed as a ratio of image quality to ED for each of the phantom/spectral conditions. The eDQE and eDE results showed the same trends across tube potential, with 80 kVp yielding the highest values and 120 kVp yielding the lowest. The eDE results for the pediatric phantom were markedly lower than the results for the adult phantom at spatial frequencies lower than 1.2-1.7 mm^-1, primarily due to a correspondingly higher value of ED per entrance exposure. The relative performance for the adult and large adult phantoms was generally comparable but depended on kVp. The eDE results for the large adult configuration were lower than those for the adult phantom across all spatial frequencies (120 and 140 kVp) and at spatial frequencies greater than 1.0 mm^-1 (80 and 100 kVp). Demonstrated for chest radiography, the eDE shows promise as an application-specific metric of imaging performance, reflective of body habitus and radiographic technique, with utility for radiography protocol assessment and optimization.

  20. Evaluation of cassette-based digital radiography detectors using standardized image quality metrics: AAPM TG-150 Draft Image Detector Tests.

    Science.gov (United States)

    Li, Guang; Greene, Travis C; Nishino, Thomas K; Willis, Charles E

    2016-09-08

    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity than the QAP. Our results suggest that a fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG-150 tests can be
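
    A hedged sketch of the ROI-based analysis style described above: a flat-field image is tiled into ROIs, and per-ROI mean signal and SNR are summarized. The ROI size and the nonuniformity formula used here are assumptions for illustration, not TG-150's exact prescriptions.

```python
# Sketch of ROI-based uniformity analysis on a (simulated) flat-field image.
# The 128-pixel ROI and the "global nonuniformity" definition below
# ((max - min) / mean of ROI means) are illustrative assumptions.
import numpy as np

def roi_statistics(image: np.ndarray, roi: int = 128):
    """Return per-ROI mean signal and SNR for non-overlapping roi x roi blocks."""
    rows, cols = image.shape
    means, snrs = [], []
    for r in range(0, rows - roi + 1, roi):
        for c in range(0, cols - roi + 1, roi):
            block = image[r:r + roi, c:c + roi]
            means.append(block.mean())
            snrs.append(block.mean() / block.std())
    return np.array(means), np.array(snrs)

# Simulated flat-field image with mild shading and additive noise.
rng = np.random.default_rng(0)
flat = rng.normal(loc=1000, scale=30, size=(2048, 2048))
flat *= np.linspace(0.98, 1.02, 2048)  # gentle horizontal gain gradient

means, snrs = roi_statistics(flat)
print("global signal nonuniformity:", (means.max() - means.min()) / means.mean())
print("minimum ROI SNR           :", snrs.min())
```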

  1. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use ... and agreed upon two primary needs for the wide use of quality metrics: 1) an evolving list of comprehensive quality metrics and 2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics...

  2. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2012-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use ... and agreed upon two primary needs for the wide use of quality metrics: 1) an evolving list of comprehensive quality metrics and 2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics...

  3. Novel image fusion quality metrics based on sensor models and image statistics

    Science.gov (United States)

    Smith, Forrest A.; Chari, Srikant; Halford, Carl E.; Fanning, Jonathan; Reynolds, Joseph P.

    2009-05-01

    This paper presents progress in image fusion modeling. One fusion quality metric based on the Targeting Task Performance (TTP) metric and another based on entropy are presented. A human perception test was performed with fused imagery to determine the effectiveness of the metrics in predicting image fusion quality. Both fusion metrics first establish which of two source images is ideal in a particular spatial frequency pass band. The fused output of a given algorithm is then measured against this ideal in each pass band. The entropy-based fusion quality metric (E-FQM) uses statistical information (entropy) from the images, while the Targeting Task Performance fusion quality metric (TTP-FQM) utilizes the TTP metric value in each spatial frequency band. This TTP metric value is the measure of available excess contrast determined by the Contrast Threshold Function (CTF) of the source system and the target contrast. The paper also proposes an image fusion algorithm that chooses source image contributions using a quality measure similar to the TTP-FQM. To test the effectiveness of TTP-FQM and E-FQM in predicting human image quality preferences, SWIR and LWIR imagery of tanks was fused using four different algorithms. A paired comparison test was performed with both source and fused imagery as stimuli. Eleven observers were asked to select which image enabled them to better identify the target. Over the ensemble of test images, the experiment showed that both TTP-FQM and E-FQM were capable of identifying the fusion algorithms most and least preferred by human observers. Analysis also showed that the performance of the TTP-FQM and E-FQM in identifying human image preferences is better than that of existing fusion quality metrics such as the Weighted Fusion Quality Index and Mutual Information.
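
    The entropy ingredient of a metric like E-FQM can be sketched as the Shannon entropy of an image's grey-level histogram; the per-band comparison against an "ideal" source image that the full metric performs is not reproduced here, and the test images below are synthetic.

```python
# Sketch of the statistical-information ingredient of an entropy-based fusion
# quality measure: Shannon entropy (bits/pixel) of an 8-bit greyscale image.
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Shannon entropy of an 8-bit greyscale image from its histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 130, size=(256, 256))   # narrow histogram
high_contrast = rng.integers(0, 256, size=(256, 256))    # wide histogram
print("low-contrast entropy :", round(image_entropy(low_contrast), 2))
print("high-contrast entropy:", round(image_entropy(high_contrast), 2))
```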

  4. Large-scale seismic waveform quality metric calculation using Hadoop

    Science.gov (United States)

    Magana-Zook, S.; Gaylord, J. M.; Knapp, D. R.; Dodge, D. A.; Ruppert, S. D.

    2016-09-01

    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data of which 5.1 TB of data were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of 0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. These experiments were conducted multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely
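
    The map-style pattern described above can be sketched in a few lines of PySpark: distribute a file list, compute a per-file quality measure, and collect the results. The file paths, the reader, and the crude gap metric below are placeholder assumptions, not the authors' pipeline.

```python
# Minimal sketch of distributing a per-file waveform quality computation.
# Paths and the metric (fraction of zero samples as a dropout proxy) are
# hypothetical; real pipelines would read miniSEED or similar formats.
import numpy as np
from pyspark import SparkContext

def waveform_quality(path: str) -> tuple:
    """Load one waveform (assumed stored as a NumPy .npy file) and score it."""
    samples = np.load(path)
    gap_fraction = float(np.mean(samples == 0))       # crude dropout proxy
    rms = float(np.sqrt(np.mean(samples ** 2)))        # crude amplitude check
    return path, gap_fraction, rms

if __name__ == "__main__":
    sc = SparkContext(appName="waveform-quality-sketch")
    file_list = ["/data/waveforms/trace_%04d.npy" % i for i in range(1000)]  # hypothetical paths
    results = sc.parallelize(file_list, numSlices=64).map(waveform_quality).collect()
    for path, gap, rms in results[:5]:
        print(path, gap, rms)
    sc.stop()
```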

  5. Stroke quality metrics: systematic reviews of the relationships to patient-centered outcomes and impact of public reporting.

    Science.gov (United States)

    Parker, Carol; Schwamm, Lee H; Fonarow, Gregg C; Smith, Eric E; Reeves, Mathew J

    2012-01-01

    Stroke quality metrics play an increasingly important role in quality improvement and policies related to provider reimbursement, accreditation, and public reporting. We conducted 2 systematic reviews examining the relationships between compliance with stroke quality metrics and patient-centered outcomes, and public reporting of stroke metrics and quality improvement, quality of care, or outcomes. MEDLINE and EMBASE databases were searched to identify studies that evaluated the relationship between stroke quality metric compliance and patient-centered outcomes in acute hospital settings and public reporting of stroke quality metrics and quality improvement activities, quality of care, or patient outcomes. We specifically excluded studies that evaluated the effect of stroke units or hospital certification. Fourteen studies met eligibility criteria for the review of stroke quality metric compliance and patient-centered outcomes; 9 found mostly positive associations, whereas 5 found no or very limited associations. Only 2 eligible studies were found that directly addressed the public reporting of stroke quality metrics. Some studies have found positive associations between stroke metric compliance and improved patient-centered outcomes. However, high-quality studies are lacking and several methodological difficulties make the interpretation of the reported associations challenging. Information on the impact of public reporting of stroke quality metric data is extremely limited. Legitimate questions remain as to whether public reporting of stroke metrics is accurate, effective, or has the potential for unintended consequences. The generation of high-quality data examining quality metrics and stroke outcomes as well as the impact of public reporting should be given priority.

  6. Neurosurgical Assessment of Metrics Including Judgment and Dexterity Using the Virtual Reality Simulator NeuroTouch (NAJD Metrics).

    Science.gov (United States)

    Alotaibi, Fahad E; AlZhrani, Gmaan A; Sabbagh, Abulrahman J; Azarnoush, Hamed; Winkler-Schwartz, Alexander; Del Maestro, Rolando F

    2015-12-01

    Advances in computer-based technology have created a significant opportunity for implementing new training paradigms in neurosurgery focused on improving skill acquisition, enhancing procedural outcome, and surgical skills assessment. NeuroTouch is a computer-based virtual reality system that can generate output data, known as metrics, from operator performance during simulated brain tumor resection. These measures of quantitative assessment are used to track and compare psychomotor performance during simulated operative procedures. Data output from the NeuroTouch system is recorded in a comma-separated values file. Data mining from this file and subsequent metrics development requires the use of sophisticated software and engineering expertise. In this article, we introduce a system to extract a series of new metrics from the same data file using Excel software. Based on the data contained in the NeuroTouch comma-separated values file, 13 novel NeuroTouch metrics were developed and classified. Tier 1 metrics include blood loss, tumor percentage resected, and total simulated normal brain volume removed. Tier 2 metrics include total instrument tip path length, maximum force applied, sum of forces utilized, and average forces utilized by the simulated ultrasonic aspirator and suction instrument, along with pedal activation frequency of the ultrasonic aspirator. Advanced tier 2 metrics include instrument tips average separation distance, efficiency index, ultrasonic aspirator path length index, coordination index, and ultrasonic aspirator bimanual forces ratio. This system of data extraction provides researchers expedited access for analyzing the data files available from the NeuroTouch platform to assess the multiple psychomotor and cognitive neurosurgical skills involved in complex surgical procedures. © The Author(s) 2015.
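
    A hedged pandas sketch of the kind of extraction described above, computing a tier 1/tier 2-style total tip path length and force summaries from a CSV log. The file name and column names are assumptions about the log layout, not the actual NeuroTouch file format.

```python
# Sketch: derive total instrument tip path length and force summaries from a
# CSV log of tool positions and forces. Column names are hypothetical.
import numpy as np
import pandas as pd

def tip_path_length(df: pd.DataFrame) -> float:
    """Sum of Euclidean distances between consecutive tool-tip positions (mm)."""
    xyz = df[["tip_x", "tip_y", "tip_z"]].to_numpy()
    return float(np.linalg.norm(np.diff(xyz, axis=0), axis=1).sum())

log = pd.read_csv("neurotouch_session.csv")      # hypothetical export
print("total tip path length (mm):", round(tip_path_length(log), 1))
print("maximum force (N)         :", log["force_n"].max())
print("sum of forces (N)         :", log["force_n"].sum())
```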

  7. Metrical Segmentation in Dutch: Vowel Quality or Stress?

    Science.gov (United States)

    Quene, Hugo; Koster, Mariette L.

    1998-01-01

    Examines metrical segmentation strategy in Dutch. The first experiment shows that stress strongly affects Dutch listeners' ability and speed in spotting Dutch monosyllabic words in disyllabic nonwords. The second experiment finds the same stress effect when only the target words are presented without a subsequent syllable triggering segmentation.…

  8. The Nutrient Balance Concept: A New Quality Metric for Composite Meals and Diets.

    Directory of Open Access Journals (Sweden)

    Edward B Fern

    Full Text Available Combinations of foods that provide suitable levels of nutrients and energy are required for optimum health. Currently, however, it is difficult to define numerically what are 'suitable levels'. To develop new metrics based on energy considerations, the Nutrient Balance Concept (NBC), for assessing overall nutrition quality when combining foods and meals. The NBC was developed using the USDA Food Composition Database (Release 27) and illustrated with their MyPlate 7-day sample menus for a 2000 calorie food pattern. The NBC concept is centered on three specific metrics for a given food, meal or diet: a Qualifying Index (QI), a Disqualifying Index (DI) and a Nutrient Balance (NB). The QI and DI were determined, respectively, from the content of 27 essential nutrients and 6 nutrients associated with negative health outcomes. The third metric, the Nutrient Balance (NB), was derived from the Qualifying Index (QI) and provided key information on the relative content of qualifying nutrients in the food. Because the Qualifying and Disqualifying Indices (QI and DI) were standardized to energy content, both become constants for a given food/meal/diet and a particular consumer age group, making it possible to develop algorithms for predicting nutrition quality when combining different foods. Combining different foods into composite meals and daily diets led to improved nutrition quality, as seen by QI values closer to unity (indicating nutrient density was better equilibrated with energy density), DI values below 1.0 (denoting an acceptable level of consumption of disqualifying nutrients) and increased NB values (signifying complementarity of foods and better provision of qualifying nutrients). The Nutrient Balance Concept (NBC) represents a new approach to nutrient profiling and the first step in the progression from the nutrient evaluation of individual foods to that of multiple foods in the context of meals and total diets.
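
    The general shape of the qualifying and disqualifying indices can be sketched as nutrient contents per 2000 kcal divided by reference amounts and then averaged. The nutrients and reference values below are illustrative, and the paper's exact standardization over 27 qualifying and 6 disqualifying nutrients is not reproduced.

```python
# Schematic sketch of QI/DI-style indices: nutrient content per 2000 kcal
# divided by a reference amount, averaged across the nutrients of interest.
# Reference values and meal contents below are illustrative placeholders.

def index(content_per_2000kcal: dict, reference: dict) -> float:
    ratios = [content_per_2000kcal[n] / reference[n] for n in reference]
    return sum(ratios) / len(ratios)

qualifying_ref = {"protein_g": 50, "fiber_g": 28, "vitamin_c_mg": 90}          # illustrative references
disqualifying_ref = {"sodium_mg": 2300, "added_sugar_g": 50, "sat_fat_g": 20}  # illustrative limits

meal = {"protein_g": 55, "fiber_g": 21, "vitamin_c_mg": 80,
        "sodium_mg": 1900, "added_sugar_g": 35, "sat_fat_g": 16}

qi = index(meal, qualifying_ref)     # near 1: nutrient density matches energy density
di = index(meal, disqualifying_ref)  # below 1: disqualifying nutrients within limits
print(f"QI = {qi:.2f}, DI = {di:.2f}")
```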

  9. A reduced-reference perceptual image and video quality metric based on edge preservation

    Science.gov (United States)

    Martini, Maria G.; Villarini, Barbara; Fiorucci, Federico

    2012-12-01

    In image and video compression and transmission, it is important to rely on an objective image/video quality metric which accurately represents the subjective quality of processed images and video sequences. In some scenarios, it is also important to evaluate the quality of the received video sequence with minimal reference to the transmitted one. For instance, for quality improvement of video transmission through closed-loop optimisation, the video quality measure can be evaluated at the receiver and provided as feedback information to the system controller. The original image/video sequence--prior to compression and transmission--is not usually available at the receiver side, and it is important to rely at the receiver side on an objective video quality metric that does not need reference or needs minimal reference to the original video sequence. The observation that the human eye is very sensitive to edge and contour information of an image underpins the proposal of our reduced reference (RR) quality metric, which compares edge information between the distorted and the original image. Results highlight that the metric correlates well with subjective observations, also in comparison with commonly used full-reference metrics and with a state-of-the-art RR metric.
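
    A minimal sketch of the edge-preservation idea: extract edge maps from the reference and distorted images and report a similarity score. The Sobel operator and the correlation-based score stand in for the paper's actual reduced-reference feature and comparison, and the test images are synthetic.

```python
# Sketch: score how well edges are preserved after a degradation by comparing
# Sobel edge maps of the reference and distorted images.
import numpy as np
from scipy import ndimage

def edge_map(img: np.ndarray) -> np.ndarray:
    gx = ndimage.sobel(img.astype(float), axis=0)
    gy = ndimage.sobel(img.astype(float), axis=1)
    return np.hypot(gx, gy)

def edge_preservation_score(reference: np.ndarray, distorted: np.ndarray) -> float:
    """Pearson correlation between edge maps; 1.0 means edges fully preserved."""
    e_ref, e_dis = edge_map(reference).ravel(), edge_map(distorted).ravel()
    return float(np.corrcoef(e_ref, e_dis)[0, 1])

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(128, 128)).astype(float)
blurred = ndimage.gaussian_filter(reference, sigma=2.0)   # simulated degradation
print("edge preservation score:", round(edge_preservation_score(reference, blurred), 3))
```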

  10. Extracting Patterns from Educational Traces via Clustering and Associated Quality Metrics

    NARCIS (Netherlands)

    Mihaescu, Marian; Tanasie, Alexandru; Dascalu, Mihai; Trausan-Matu, Stefan

    2016-01-01

    Clustering algorithms, pattern mining techniques and associated quality metrics emerged as reliable methods for modeling learners’ performance, comprehension and interaction in given educational scenarios. The specificity of available data such as missing values, extreme values or outliers,

  11. Automated Neuropsychological Assessment Metrics: Repeated Assessment with Two Military Samples

    Science.gov (United States)

    2011-01-01


  12. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    Science.gov (United States)

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.

  13. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    Directory of Open Access Journals (Sweden)

    Scott Emmons

    Full Text Available Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.

  14. Quality metrics for high order meshes: analysis of the mechanical simulation of the heart beat.

    Science.gov (United States)

    Lamata, Pablo; Roy, Ishani; Blazevic, Bojan; Crozier, Andrew; Land, Sander; Niederer, Steven A; Hose, D Rod; Smith, Nicolas P

    2013-01-01

    The quality of a computational mesh is an important characteristic for stable and accurate simulations. Quality depends on the regularity of the initial mesh, and in mechanical simulations it evolves in time, with deformations causing changes in volume and distortion of mesh elements. Mesh quality metrics are therefore relevant for both mesh personalization and the monitoring of the simulation process. This work evaluates the significance, in meshes with high order interpolation, of four quality metrics described in the literature, applying them to analyse the stability of the simulation of the heart beat. It also investigates how image registration and mesh warping parameters affect the quality and stability of meshes. Jacobian-based metrics outperformed or matched the results of coarse geometrical metrics of aspect ratio or orthogonality, although they are more expensive computationally. The stability of simulations of a complete heart cycle was best predicted with a specificity of 61%, sensitivity of 85%, and only nominal differences were found changing the intra-element and per-element combination of quality values. A compromise between fitting accuracy and mesh stability and quality was found. Generic geometrical quality metrics have a limited success predicting stability, and an analysis of the simulation problem may be required for an optimal definition of quality.

  15. Quality metrics in high-dimensional data visualization: an overview and systematization.

    Science.gov (United States)

    Bertini, Enrico; Tatu, Andrada; Keim, Daniel

    2011-12-01

    In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research. © 2010 IEEE

  16. It's All Relative: A Validation of Radiation Quality Comparison Metrics

    Science.gov (United States)

    Chappell, Lori J.; Milder, Caitlin M.; Elgart, S. Robin; Semones, Edward J.

    2017-01-01

    The difference between high-LET and low-LET radiation is quantified by a measure called relative biological effectiveness (RBE). RBE is defined as the ratio of the dose of a reference radiation to that of a test radiation required to achieve the same effect level, and thus is described as an iso-effect, dose-to-dose ratio. A single dose point is not sufficient to calculate an RBE value; therefore, studies with only one dose point usually calculate an effect-to-effect ratio. While not formally used in radiation protection, these iso-dose values may still be informative. Shuryak et al. (2017) investigated the use of an iso-dose metric termed the "radiation effects ratio" (RER) and used both RBE and RER to estimate high-LET risks. To apply RBE or RER to risk prediction, the selected metric must be uniquely defined. That is, the calculated value must be consistent within a model given a constant set of constraints and assumptions, regardless of how effects are defined using statistical transformations from raw endpoint data. We first test the RBE and the RER to determine whether they are uniquely defined using transformations applied to raw data. Then, we test whether both metrics can predict heavy ion response data after simulated effect size scaling between human populations or when converting animal to human endpoints.
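
    Read schematically, and assuming a common dose-effect relationship E(D) for each radiation type, the two metrics compare doses at a fixed effect level and effects at a fixed dose; the exact forms used by the authors may differ.

```latex
% Schematic reading of the two comparison metrics described above.
\[
  \mathrm{RBE} \;=\; \left.\frac{D_{\mathrm{reference}}}{D_{\mathrm{test}}}\right|_{\text{equal effect}}
  \qquad\qquad
  \mathrm{RER} \;=\; \left.\frac{E_{\mathrm{test}}(D)}{E_{\mathrm{reference}}(D)}\right|_{\text{equal dose } D}
\]
```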

  17. Metrics for Assessment of Smart Grid Data Integrity Attacks

    Energy Technology Data Exchange (ETDEWEB)

    Annarita Giani; Miles McQueen; Russell Bent; Kameshwar Poolla; Mark Hinrichs

    2012-07-01

    There is an emerging consensus that the nation’s electricity grid is vulnerable to cyber attacks. This vulnerability arises from the increasing reliance on remote measurements transmitted over legacy data networks to system operators who make critical decisions based on available data. Data integrity attacks are a class of cyber attacks that involve a compromise of information that is processed by the grid operator. This information can include meter readings of injected power at remote generators, power flows on transmission lines, and relay states. These data integrity attacks have consequences only when the system operator responds to compromised data by redispatching generation under normal or contingency protocols. These consequences include (a) financial losses from sub-optimal economic dispatch to service loads, (b) robustness/resiliency losses from placing the grid at operating points that are at greater risk from contingencies, and (c) systemic losses resulting from cascading failures induced by poor operational choices. This paper is focused on understanding the connections between grid operational procedures and cyber attacks. We first offer two examples to illustrate how data integrity attacks can cause economic and physical damage by misleading operators into taking inappropriate decisions. We then focus on unobservable data integrity attacks involving power meter data. These are coordinated attacks where the compromised data are consistent with the physics of power flow, and are therefore passed by any bad data detection algorithm. We develop metrics to assess the economic impact of these attacks under re-dispatch decisions using optimal power flow methods. These metrics can be used to prioritize the adoption of appropriate countermeasures including PMU placement, encryption, hardware upgrades, and advanced attack detection algorithms.

  18. Design of video quality metrics with multi-way data analysis a data driven approach

    CERN Document Server

    Keimel, Christian

    2016-01-01

    This book proposes a data-driven methodology using multi-way data analysis for the design of video-quality metrics. It also enables video-quality metrics to be created using arbitrary features. This data-driven design approach not only requires no detailed knowledge of the human visual system, but also allows a proper consideration of the temporal nature of video using a three-way prediction model, corresponding to the three-way structure of video. Using two simple example metrics, the author demonstrates not only that this purely data-driven approach outperforms state-of-the-art video-quality metrics, which are often optimized for specific properties of the human visual system, but also that multi-way data analysis methods outperform the combination of two-way data analysis methods and temporal pooling.

  19. SU-E-T-776: Use of Quality Metrics for a New Hypo-Fractionated Pre-Surgical Mesothelioma Protocol

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, S; Mehta, V [Swedish Cancer Institute, Seattle, WA (United States)

    2015-06-15

    Purpose: The “SMART” (Surgery for Mesothelioma After Radiation Therapy) approach involves hypo-fractionated radiotherapy of the lung pleura to 25Gy over 5 days followed by surgical resection within 7 days. Early clinical results suggest that this approach is very promising, but also logistically challenging due to the multidisciplinary involvement. Due to the compressed schedule, high dose, and shortened planning time, the delivery of the planned doses was monitored for safety with quality metric software. Methods: Hypo-fractionated IMRT treatment plans were developed for all patients and exported to Quality Reports™ software. Plan quality metrics or PQMs™ were created to calculate an objective scoring function for each plan. This allows for an objective assessment of the quality of the plan and a benchmark for plan improvement for subsequent patients. The priorities of various components were incorporated based on similar hypo-fractionated protocols such as lung SBRT treatments. Results: Five patients have been treated at our institution using this approach. The plans were developed, QA performed, and ready within 5 days of simulation. Plan quality metrics utilized in scoring included doses to OARs and target coverage. All patients tolerated treatment well and proceeded to surgery as scheduled. Reported toxicity included grade 1 nausea (n=1), grade 1 esophagitis (n=1), grade 2 fatigue (n=3). One patient had recurrent fluid accumulation following surgery. No patients experienced any pulmonary toxicity prior to surgery. Conclusion: An accelerated course of pre-operative high dose radiation for mesothelioma is an innovative and promising new protocol. Without historical data, one must proceed cautiously and monitor the data carefully. The development of quality metrics and scoring functions for these treatments allows us to benchmark our plans and monitor improvement. If subsequent toxicities occur, these will be easy to investigate and incorporate into the

  20. A Metric Tool for Predicting Source Code Quality from a PDL Design

    OpenAIRE

    Henry, Sallie M.; Selig, Calvin

    1987-01-01

    The software crisis has increased the demand for automated tools to assist software developers in the production of quality software. Quality metrics have given software developers a tool to measure software quality. These measurements, however, are available only after the software has been produced. Due to high cost, software managers are reluctant to redesign and reimplement low quality software. Ideally, a life cycle which allows early measurement of software quality is a necessary ingre...

  1. Computer Systems Acquisition Metrics Handbook. Volume II. Quality Factor Modules.

    Science.gov (United States)

    1982-05-01

    [OCR fragments of the handbook's quality factor module worksheets; the recoverable content indicates per-criterion metric checklists (e.g., a Design Structure Measure and an Access Audit Checklist) whose metric scores are summed to evaluate each criterion.]

  2. Assessing the suitability of diversity metrics to detect biodiversity change

    NARCIS (Netherlands)

    Santini, L.; Belmaker, J.; Costello, M.J.; Pereira, H.M.; Rossberg, A.G.; Schipper, A.M.; Ceaușu, S.; Dornelas, M.; Hilbers, J.P.; Hortal, J.; Huijbregts, M.A.J.; Navarro, L.M.; Schiffers, K.H.; Visconti, P.; Rondinini, C.

    2016-01-01

    A large number of diversity metrics are available to study and monitor biodiversity, and their responses to biodiversity changes are not necessarily coherent with each other. The choice of biodiversity metrics may thus strongly affect our interpretation of biodiversity change and, hence,

  3. Defining quality metrics and improving safety and outcome in allergy care.

    Science.gov (United States)

    Lee, Stella; Stachler, Robert J; Ferguson, Berrylin J

    2014-04-01

    The delivery of allergy immunotherapy in the otolaryngology office is variable and lacks standardization. Quality metrics encompasses the measurement of factors associated with good patient-centered care. These factors have yet to be defined in the delivery of allergy immunotherapy. We developed and applied quality metrics to 6 allergy practices affiliated with an academic otolaryngic allergy center. This work was conducted at a tertiary academic center providing care to over 1500 patients. We evaluated methods and variability between 6 sites. Tracking of errors and anaphylaxis was initiated across all sites. A nationwide survey of academic and private allergists was used to collect data on current practice and use of quality metrics. The most common types of errors recorded were patient identification errors (n = 4), followed by vial mixing errors (n = 3), and dosing errors (n = 2). There were 7 episodes of anaphylaxis of which 2 were secondary to dosing errors for a rate of 0.01% or 1 in every 10,000 injection visits/year. Site visits showed that 86% of key safety measures were followed. Analysis of nationwide survey responses revealed that quality metrics are still not well defined by either medical or otolaryngic allergy practices. Academic practices were statistically more likely to use quality metrics (p = 0.021) and perform systems reviews and audits in comparison to private practices (p = 0.005). Quality metrics in allergy delivery can help improve safety and quality care. These metrics need to be further defined by otolaryngic allergists in the changing health care environment. © 2014 ARS-AAOA, LLC.

  4. Using business intelligence to monitor clinical quality metrics.

    Science.gov (United States)

    Resetar, Ervina; Noirot, Laura A; Reichley, Richard M; Storey, Patricia; Skiles, Ann M; Traynor, Patrick; Dunagan, W Claiborne; Bailey, Thomas C

    2007-10-11

    BJC HealthCare (BJC) uses a number of industry standard indicators to monitor the quality of services provided by each of its hospitals. By establishing an enterprise data warehouse as a central repository of clinical quality information, BJC is able to monitor clinical quality performance in a timely manner and improve clinical outcomes.

  5. Software Metrics: Measuring Haskell

    OpenAIRE

    Ryder, Chris; Thompson, Simon

    2005-01-01

    Software metrics have been used in software engineering as a mechanism for assessing code quality and for targeting software development activities, such as testing or refactoring, at areas of a program that will most benefit from them. Haskell has many tools for software engineering, such as testing, debugging and refactoring tools, but software metrics have mostly been neglected. The work presented in this paper identifies a collection of software metrics for use with Haskell programs. Thes...

  6. Metrics for assessing retailers based on consumer perception

    Directory of Open Access Journals (Sweden)

    Klimin Anastasii

    2017-01-01

    Full Text Available The article suggests a new way of looking at retail trading locations, which it calls “metrics.” Metrics are a way of viewing the point of sale largely from the buyer’s side. The buyer enters the store and makes a buying decision based on factors that the seller often does not consider, or considers only in part, because the seller “does not see” them, not being a buyer. The article proposes a classification of retailers, the metrics themselves, and a methodology for determining them, and presents the results of an audit of retailers in St. Petersburg using the proposed methodology.

  7. Assessment of hazard metrics for predicting field benthic invertebrate toxicity in the Detroit River, Ontario, Canada.

    Science.gov (United States)

    McPhedran, Kerry N; Grgicak-Mannion, Alice; Paterson, Gord; Briggs, Ted; Ciborowski, Jan Jh; Haffner, G Douglas; Drouillard, Ken G

    2017-03-01

    Numerical sediment quality guidelines (SQGs) are frequently used to interpret site-specific sediment chemistry and predict potential toxicity to benthic communities. These SQGs are useful for a screening line of evidence (LOE) that can be combined with other LOEs in a full weight of evidence (WOE) assessment of impacted sites. Three common multichemical hazard quotient methods (probable effect concentration [PEC]-Qavg, PEC-Qmet, and PEC-Qsum) and a novel (hazard score [HZD]) approach were used in conjunction with a consensus-based set of SQGs to evaluate the ability of different scoring metrics to predict the biological effects of sediment contamination under field conditions. Multivariate analyses were first used to categorize river sediments into distinct habitats based on a set of physicochemical parameters, including gravel, low and high flow sand, and silt. For high flow sand and gravel, no significant dose-response relationships between numerically dominant species and various toxicity metric scores were observed. Significant dose-response relationships were observed for chironomid abundances and toxicity scores in low flow sand and silt habitats. For silt habitats, the HZD scoring metric provided the best predictor of chironomid abundances compared to various PEC-Q methods according to goodness-of-fit tests. For low flow sand habitats, PEC-Qsum, followed by HZD, provided the best predictors of chironomid abundance. Differences in apparent chironomid toxicity between the 2 habitats suggest habitat-specific differences in chemical bioavailability and indicator taxa sensitivity. Using an IBI method, the HZD, PEC-Qavg, and PEC-Qmet approaches provided reasonable correlations with calculated IBI values in both silt and low flow sand habitats but not for gravel or high flow sands. Computation differences between the various multi-chemical toxicity scoring metrics and how this contributes to bias in different estimates of chemical mixture toxicity scores are
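
    A hedged sketch of the mean probable-effect-concentration quotient: each chemical's measured sediment concentration is divided by its consensus PEC and the quotients are averaged. The PEC values and site concentrations below are illustrative placeholders, not the study's data.

```python
# Sketch of a PEC-Q_avg style hazard quotient for one sediment sample.
# Example consensus-based PECs (mg/kg dry weight); values shown for illustration only.
consensus_pec = {"cadmium_mg_kg": 4.98, "lead_mg_kg": 128.0, "total_pah_mg_kg": 22.8}

def pec_q_avg(sample: dict, pec: dict) -> float:
    quotients = [sample[chem] / pec[chem] for chem in pec]
    return sum(quotients) / len(quotients)

site = {"cadmium_mg_kg": 2.1, "lead_mg_kg": 160.0, "total_pah_mg_kg": 30.0}  # hypothetical site
print("PEC-Q_avg =", round(pec_q_avg(site, consensus_pec), 2))  # values near/above 1 flag likely toxicity
```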

  8. Image forgery detection by means of no-reference quality metrics

    Science.gov (United States)

    Battisti, F.; Carli, M.; Neri, A.

    2012-03-01

    In this paper a methodology for digital image forgery detection by means of an unconventional use of image quality assessment is addressed. In particular, the presence of differences in the quality degradations impairing the images is exploited to reveal the mixture of different source patches. The rationale behind this work lies in the hypothesis that any image may be affected by artifacts, visible or not, caused by the processing steps: acquisition (i.e., lens distortion, acquisition sensor imperfections, analog to digital conversion, single sensor to color pattern interpolation), processing (i.e., quantization, storing, JPEG compression, sharpening, deblurring, enhancement), and rendering (i.e., image decoding, color/size adjustment). These defects are generally spatially localized and their strength strictly depends on the content. For these reasons they can be considered as a fingerprint of each digital image. The proposed approach relies on a combination of image quality assessment systems. The adopted no-reference metric does not require any information about the original image, thus allowing an efficient and stand-alone blind system for image forgery detection. The experimental results show the effectiveness of the proposed scheme.

  9. RNA-SeQC: RNA-seq metrics for quality control and process optimization.

    Science.gov (United States)

    DeLuca, David S; Levin, Joshua Z; Sivachenko, Andrey; Fennell, Timothy; Nazaire, Marc-Danie; Williams, Chris; Reich, Michael; Winckler, Wendy; Getz, Gad

    2012-06-01

    RNA-seq, the application of next-generation sequencing to RNA, provides transcriptome-wide characterization of cellular activity. Assessment of sequencing performance and library quality is critical to the interpretation of RNA-seq data, yet few tools exist to address this issue. We introduce RNA-SeQC, a program which provides key measures of data quality. These metrics include yield, alignment and duplication rates; GC bias, rRNA content, regions of alignment (exon, intron and intragenic), continuity of coverage, 3'/5' bias and count of detectable transcripts, among others. The software provides multi-sample evaluation of library construction protocols, input materials and other experimental parameters. The modularity of the software enables pipeline integration and the routine monitoring of key measures of data quality such as the number of alignable reads, duplication rates and rRNA contamination. RNA-SeQC allows investigators to make informed decisions about sample inclusion in downstream analysis. In summary, RNA-SeQC provides quality control measures critical to experiment design, process optimization and downstream computational analysis. See www.genepattern.org to run online, or www.broadinstitute.org/rna-seqc/ for a command line tool.

  10. On the performance of metrics to predict quality in point cloud representations

    Science.gov (United States)

    Alexiou, Evangelos; Ebrahimi, Touradj

    2017-09-01

    Point clouds are a promising alternative for immersive representation of visual contents. Recently, an increased interest has been observed in the acquisition, processing and rendering of this modality. Although subjective and objective evaluations are critical in order to assess the visual quality of media content, they still remain open problems for point cloud representation. In this paper we focus our efforts on subjective quality assessment of point cloud geometry, subject to typical types of impairments such as noise corruption and compression-like distortions. In particular, we propose a subjective methodology that is closer to real-life scenarios of point cloud visualization. The performance of the state-of-the-art objective metrics is assessed by considering the subjective scores as the ground truth. Moreover, we investigate the impact of adopting different test methodologies by comparing them. Advantages and drawbacks of every approach are reported, based on statistical analysis. The results and conclusions of this work provide useful insights that could be considered in future experimentation.

  11. Evaluation of Quality Metrics for Surgically Treated Laryngeal Squamous Cell Carcinoma.

    Science.gov (United States)

    Graboyes, Evan M; Townsend, Melanie E; Kallogjeri, Dorina; Piccirillo, Jay F; Nussenbaum, Brian

    2016-12-01

    Quality metrics for patients with laryngeal squamous cell carcinoma (SCC) exist, but whether compliance with these metrics correlates with improved survival is unknown. To examine whether compliance with proposed quality metrics is associated with improved survival in patients with laryngeal SCC treated with surgery with or without adjuvant therapy. This retrospective cohort study included patients from a tertiary care academic medical center who had previously untreated laryngeal SCC and underwent surgery with or without adjuvant therapy from January 1, 2003, through December 31, 2012. Data analysis was performed from August 4, 2015, through December 13, 2015. Surgery with or without adjuvant therapy. Compliance with quality metrics from the American Head and Neck Society (AHNS), National Comprehensive Cancer Network (NCCN) guidelines, and institutional metrics with face validity covering pretreatment evaluation, treatment, and posttreatment surveillance was evaluated. The association between compliance with the group of metrics and overall survival (OS), disease-specific survival (DSS), and disease-free survival (DFS) was explored using Cox proportional hazards analysis. The association between compliance with individual metrics and survival was similarly determined. A total of 243 patients (184 men and 59 women) were included in the study (median age, 62 years; age range, 23-87 years). No association was found between increasing levels of compliance with the AHNS or NCCN metrics and survival. The only AHNS or NCCN metric for which greater compliance correlated with improved survival on multivariable Cox proportional hazards analysis controlling for pT stage, pN stage, extracapsular spread, margin status, and comorbidity was pretreatment multidisciplinary evaluation for patients with stage cT3-4 or cN1-3 disease (OS adjusted hazard ratio [aHR], 0.47; 95% CI, 0.24-0.94; DFS aHR, 0.45; 95% CI, 0.23-0.85). For the institutional metrics, multidisciplinary evaluation

  12. Treatment plan complexity metrics for predicting IMRT pre-treatment quality assurance results.

    Science.gov (United States)

    Crowe, S B; Kairn, T; Kenny, J; Knight, R T; Hill, B; Langton, C M; Trapp, J V

    2014-09-01

    The planning of IMRT treatments requires a compromise between dose conformity (complexity) and deliverability. This study investigates established and novel treatment complexity metrics for 122 IMRT beams from prostate treatment plans. The Treatment and Dose Assessor software was used to extract the necessary data from exported treatment plan files and calculate the metrics. For most of the metrics, there was strong overlap between the calculated values for plans that passed and failed their quality assurance (QA) tests. However, statistically significant variation between plans that passed and failed QA measurements was found for the established modulation index and for a novel metric describing the proportion of small apertures in each beam. The 'small aperture score' provided threshold values which successfully distinguished deliverable treatment plans from plans that did not pass QA, with a low false negative rate.
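
    A sketch of a "proportion of small apertures" style measure: the fraction of control points whose MLC-defined open area falls below a threshold. The aperture-area calculation and the 2 cm^2 threshold are simplifying assumptions for illustration, not the paper's exact small aperture score.

```python
# Sketch of a small-aperture complexity measure for one IMRT beam.
# Leaf gaps, leaf width, and the area threshold are hypothetical.
import numpy as np

def small_aperture_fraction(areas_cm2: np.ndarray, threshold_cm2: float = 2.0) -> float:
    """Fraction of control points with open area below the threshold."""
    return float(np.mean(areas_cm2 < threshold_cm2))

# Hypothetical beam: 60 control points x 40 leaf pairs, with some control
# points nearly closed to mimic a heavily modulated beam.
rng = np.random.default_rng(0)
modulation = rng.uniform(0.02, 1.0, size=(60, 1))       # per-control-point closing factor
gaps_cm = rng.uniform(0.0, 2.0, size=(60, 40)) * modulation
areas = gaps_cm.sum(axis=1) * 0.5                        # assume 0.5 cm leaf width
print("small-aperture fraction:", small_aperture_fraction(areas))
```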

  13. Objective assessment based on motion-related metrics and technical performance in laparoscopic suturing.

    Science.gov (United States)

    Sánchez-Margallo, Juan A; Sánchez-Margallo, Francisco M; Oropesa, Ignacio; Enciso, Silvia; Gómez, Enrique J

    2017-02-01

    The aim of this study is to present the construct and concurrent validity of a motion-tracking method of laparoscopic instruments based on an optical pose tracker and determine its feasibility as an objective assessment tool of psychomotor skills during laparoscopic suturing. A group of novice ([Formula: see text] laparoscopic procedures), intermediate (11-100 laparoscopic procedures) and experienced ([Formula: see text] laparoscopic procedures) surgeons performed three intracorporeal sutures on an ex vivo porcine stomach. Motion analysis metrics were recorded using the proposed tracking method, which employs an optical pose tracker to determine the laparoscopic instruments' position. Construct validation was measured for all 10 metrics across the three groups and between pairs of groups. Concurrent validation was measured against a previously validated suturing checklist. Checklists were completed by two independent surgeons over blinded video recordings of the task. Eighteen novices, 15 intermediates and 11 experienced surgeons took part in this study. Execution time and path length travelled by the laparoscopic dissector presented construct validity. Experienced surgeons required significantly less time ([Formula: see text]), travelled less distance using both laparoscopic instruments ([Formula: see text]) and made more efficient use of the work space ([Formula: see text]) compared with novice and intermediate surgeons. Concurrent validation showed strong correlation between both the execution time and path length and the checklist score ([Formula: see text] and [Formula: see text], [Formula: see text]). The suturing performance was successfully assessed by the motion analysis method. Construct and concurrent validity of the motion-based assessment method has been demonstrated for the execution time and path length metrics. This study demonstrates the efficacy of the presented method for objective evaluation of psychomotor skills in laparoscopic suturing
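
    Two of the motion metrics above (path length and an economy-of-movement ratio) can be sketched directly from tracked 3D tool-tip positions; the exact definitions used in the study may differ, and the sample track below is synthetic.

```python
# Sketch: motion-analysis metrics from a stream of 3D instrument-tip positions.
import numpy as np

def path_length(positions: np.ndarray) -> float:
    """Total distance travelled by the instrument tip (same units as input)."""
    return float(np.linalg.norm(np.diff(positions, axis=0), axis=1).sum())

def economy_of_movement(positions: np.ndarray) -> float:
    """Straight-line start-to-end distance divided by the actual path length."""
    direct = float(np.linalg.norm(positions[-1] - positions[0]))
    return direct / path_length(positions)

# Hypothetical 20 Hz track of the needle-holder tip during one suture (mm).
rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(0, 1.0, size=(600, 3)), axis=0)
print("path length (mm)    :", round(path_length(track), 1))
print("economy of movement :", round(economy_of_movement(track), 3))
```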

  14. Perceptual Quality Assessment of Screen Content Images.

    Science.gov (United States)

    Yang, Huan; Fang, Yuming; Lin, Weisi

    2015-11-01

    Research on screen content images (SCIs) is becoming important as they are increasingly used in multi-device communication applications. In this paper, we present a study on perceptual quality assessment of distorted SCIs subjectively and objectively. We construct a large-scale screen image quality assessment database (SIQAD) consisting of 20 source and 980 distorted SCIs. In order to get the subjective quality scores and investigate which part (text or picture) contributes more to the overall visual quality, the single-stimulus methodology with an 11-point numerical scale is employed to obtain three kinds of subjective scores corresponding to the entire, textual, and pictorial regions, respectively. According to the analysis of subjective data, we propose a weighting strategy to account for the correlation among these three kinds of subjective scores. Furthermore, we design an objective metric to measure the visual quality of distorted SCIs by considering the visual difference of textual and pictorial regions. The experimental results demonstrate that the proposed SCI perceptual quality assessment scheme, consisting of the objective metric and the weighting strategy, can achieve better performance than 11 state-of-the-art IQA methods. To the best of our knowledge, the SIQAD is the first large-scale database published for quality evaluation of SCIs, and this research is the first attempt to explore the perceptual quality assessment of distorted SCIs.
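    A minimal sketch of the weighting idea, combining separate textual- and pictorial-region scores into one overall score, is shown below. The fixed weight is purely illustrative; in the cited work the weighting is derived from the subjective data rather than assumed.

        def overall_sci_quality(q_text, q_picture, text_weight=0.6):
            """Convex combination of region scores (illustrative weight only)."""
            return text_weight * q_text + (1.0 - text_weight) * q_picture

        # e.g. textual region rated 6.2, pictorial region rated 7.8
        print(overall_sci_quality(6.2, 7.8))    # 6.84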

  15. Visual signal quality assessment quality of experience (QOE)

    CERN Document Server

    Ma, Lin; Lin, Weisi; Ngan, King

    2015-01-01

    This book provides comprehensive coverage of the latest trends/advances in subjective and objective quality evaluation for traditional visual signals, such as 2D images and video, as well as the most recent challenges for the field of multimedia quality assessment and processing, such as mobile video and social media. Readers will learn how to ensure the highest storage/delivery/transmission quality of visual content (including image, video, graphics, animation, etc.) from the server to the consumer, under resource constraints, such as computation, bandwidth, storage space, battery life, etc. Provides an overview of quality assessment for traditional visual signals; Covers newly emerged visual signals such as social media, 3D image/video, mobile video, high dynamic range (HDR) images, graphics/animation, etc., which demand better quality of experience (QoE); Helps readers to develop better quality metrics and processing methods for newly emerged visual signals; Enables testing, optimizing, benchmarking...

  16. Instrument Motion Metrics for Laparoscopic Skills Assessment in Virtual Reality and Augmented Reality.

    Science.gov (United States)

    Fransson, Boel A; Chen, Chi-Ya; Noyes, Julie A; Ragle, Claude A

    2016-11-01

    To determine the construct and concurrent validity of instrument motion metrics for laparoscopic skills assessment in virtual reality and augmented reality simulators. Evaluation study. Veterinarian students (novice, n = 14) and veterinarians (experienced, n = 11) with no or variable laparoscopic experience. Participants' minimally invasive surgery (MIS) experience was determined by hospital records of MIS procedures performed in the Teaching Hospital. Basic laparoscopic skills were assessed by 5 tasks using a physical box trainer. Each participant completed 2 tasks for assessments in each type of simulator (virtual reality: bowel handling and cutting; augmented reality: object positioning and a pericardial window model). Motion metrics such as instrument path length, angle or drift, and economy of motion of each simulator were recorded. None of the motion metrics in a virtual reality simulator showed correlation with experience, or to the basic laparoscopic skills score. All metrics in augmented reality were significantly correlated with experience (time, instrument path, and economy of movement), except for the hand dominance metric. The basic laparoscopic skills score was correlated to all performance metrics in augmented reality. The augmented reality motion metrics differed between American College of Veterinary Surgeons diplomates and residents, whereas basic laparoscopic skills score and virtual reality metrics did not. Our results provide construct validity and concurrent validity for motion analysis metrics for an augmented reality system, whereas a virtual reality system was validated only for the time score. © Copyright 2016 by The American College of Veterinary Surgeons.

  17. DRAW+SneakPeek: analysis workflow and quality metric management for DNA-seq experiments.

    Science.gov (United States)

    Lin, Chiao-Feng; Valladares, Otto; Childress, D Micah; Klevak, Egor; Geller, Evan T; Hwang, Yih-Chii; Tsai, Ellen A; Schellenberg, Gerard D; Wang, Li-San

    2013-10-01

    We report our new DRAW+SneakPeek software for DNA-seq analysis. DNA resequencing analysis workflow (DRAW) automates the workflow of processing raw sequence reads including quality control, read alignment and variant calling on high-performance computing facilities such as Amazon elastic compute cloud. SneakPeek provides an effective interface for reviewing dozens of quality metrics reported by DRAW, so users can assess the quality of data and diagnose problems in their sequencing procedures. Both DRAW and SneakPeek are freely available under the MIT license, and are available as Amazon machine images to be used directly on Amazon cloud with minimal installation. DRAW+SneakPeek is released under the MIT license and is available for academic and nonprofit use for free. The information about source code, Amazon machine images and instructions on how to install and run DRAW+SneakPeek locally and on Amazon elastic compute cloud is available at the National Institute on Aging Genetics of Alzheimer's Disease Data Storage Site (http://www.niagads.org/) and Wang lab Web site (http://wanglab.pcbi.upenn.edu/).

  18. Engineering index : a metric for assessing margin in engineered systems

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, Ronald M.

    2002-01-01

    Inherent in most engineered products is some measure of margin or overdesign. Engineers often do not retain the design and performance knowledge that would let them quantify uncertainties and estimate how much margin their product possesses. When knowledge capture and quantification are neither possible nor permissible, engineers rely on cultural lore and institutionalised practices to assign nominal conditions and tolerances. Often what gets lost along the way is design intent, product requirements, and their relationship with the product's intended application. The Engineering Index was developed to assess the goodness or quality of a product.

  19. Developing a more useful surface quality metric for laser optics

    Science.gov (United States)

    Turchette, Quentin; Turner, Trey

    2011-02-01

    Light scatter due to surface defects on laser resonator optics produces losses which lower system efficiency and output power. The traditional methodology for surface quality inspection involves visual comparison of a component to scratch and dig (SAD) standards under controlled lighting and viewing conditions. Unfortunately, this process is subjective and operator dependent. Also, there is no clear correlation between inspection results and the actual performance impact of the optic in a laser resonator. As a result, laser manufacturers often overspecify surface quality in order to ensure that optics will not degrade laser performance due to scatter. This can drive up component costs and lengthen lead times. Alternatively, an objective test system for measuring optical scatter from defects can be constructed with a microscope, calibrated lighting, a CCD detector and image processing software. This approach is quantitative, highly repeatable and totally operator independent. Furthermore, it is flexible, allowing the user to set threshold levels as to what will or will not constitute a defect. This paper details how this automated, quantitative type of surface quality measurement can be constructed, and shows how its results correlate against conventional loss measurement techniques such as cavity ringdown times.

  20. Developing a Comprehensive Metric for Assessing Discussion Board Effectiveness

    Science.gov (United States)

    Kay, Robin H.

    2006-01-01

    The use of online discussion boards has grown extensively in the past 5 years, yet some researchers argue that our understanding of how to use this tool in an effective and meaningful way is minimal at best. Part of the problem in acquiring more cohesive and useful information rests in the absence of a comprehensive, theory-driven metric to assess…

  1. Embracing the Fog of War: Assessment and Metrics in Counterinsurgency

    Science.gov (United States)

    2012-01-01

    …contradicts itself not only here but also on page II-2, where it states, "System nodes are the tangible elements within a system that can be 'targeted'…" …Afghanistan. Every core metric used in Vietnam, Iraq, and Afghanistan contains internal contradictions like these; they are unavoidable because context…

  2. Data quality monitoring and performance metrics of a prospective, population-based observational study of maternal and newborn health in low resource settings.

    Science.gov (United States)

    Goudar, Shivaprasad S; Stolka, Kristen B; Koso-Thomas, Marion; Honnungar, Narayan V; Mastiholi, Shivanand C; Ramadurg, Umesh Y; Dhaded, Sangappa M; Pasha, Omrana; Patel, Archana; Esamai, Fabian; Chomba, Elwyn; Garces, Ana; Althabe, Fernando; Carlo, Waldemar A; Goldenberg, Robert L; Hibberd, Patricia L; Liechty, Edward A; Krebs, Nancy F; Hambidge, Michael K; Moore, Janet L; Wallace, Dennis D; Derman, Richard J; Bhalachandra, Kodkany S; Bose, Carl L

    2015-01-01

    To describe quantitative data quality monitoring and performance metrics adopted by the Global Network's (GN) Maternal Newborn Health Registry (MNHR), a maternal and perinatal population-based registry (MPPBR) based in low and middle income countries (LMICs). Ongoing prospective, population-based data on all pregnancy outcomes within defined geographical locations participating in the GN have been collected since 2008. Data quality metrics were defined and are implemented at the cluster, site and the central level to ensure data quality. Quantitative performance metrics are described for data collected between 2010 and 2013. Delivery outcome rates over 95% illustrate that all sites are successful in following patients from pregnancy through delivery. Examples of specific performance metric reports illustrate how both the metrics and reporting process are used to identify cluster-level and site-level quality issues and illustrate how those metrics track over time. Other summary reports (e.g. the increasing proportion of measured birth weight compared to estimated and missing birth weight) illustrate how a site has improved quality over time. High quality MPPBRs such as the MNHR provide key information on pregnancy outcomes to local and international health officials where civil registration systems are lacking. The MNHR has measures in place to monitor data collection procedures and improve the quality of data collected. Sites have increasingly achieved acceptable values of performance metrics over time, indicating improvements in data quality, but the quality control program must continue to evolve to optimize the use of the MNHR to assess the impact of community interventions in research protocols in pregnancy and perinatal health. NCT01073475.

  3. National evaluation of multidisciplinary quality metrics for head and neck cancer.

    Science.gov (United States)

    Cramer, John D; Speedy, Sedona E; Ferris, Robert L; Rademaker, Alfred W; Patel, Urjeet A; Samant, Sandeep

    2017-11-15

    The National Quality Forum has endorsed quality-improvement measures for multiple cancer types that are being developed into actionable tools to improve cancer care. No nationally endorsed quality metrics currently exist for head and neck cancer. The authors identified patients with surgically treated, invasive, head and neck squamous cell carcinoma in the National Cancer Data Base from 2004 to 2014 and compared the rate of adherence to 5 different quality metrics and whether compliance with these quality metrics impacted overall survival. The metrics examined included negative surgical margins, neck dissection lymph node (LN) yield ≥ 18, appropriate adjuvant radiation, appropriate adjuvant chemoradiation, adjuvant therapy within 6 weeks, as well as overall quality. In total, 76,853 eligible patients were identified. There was substantial variability in patient-level adherence, which was 80% for negative surgical margins, 73.1% for neck dissection LN yield, 69% for adjuvant radiation, 42.6% for adjuvant chemoradiation, and 44.5% for adjuvant therapy within 6 weeks. Risk-adjusted Cox proportional-hazard models indicated that all metrics were associated with a reduced risk of death: negative margins (hazard ratio [HR] 0.73; 95% confidence interval [CI], 0.71-0.76), LN yield ≥ 18 (HR, 0.93; 95% CI, 0.89-0.96), adjuvant radiation (HR, 0.67; 95% CI, 0.64-0.70), adjuvant chemoradiation (HR, 0.84; 95% CI, 0.79-0.88), and adjuvant therapy ≤6 weeks (HR, 0.92; 95% CI, 0.89-0.96). Patients who received high-quality care had a 19% reduced adjusted hazard of mortality (HR, 0.81; 95% CI, 0.79-0.83). Five head and neck cancer quality metrics were identified that have substantial variability in adherence and meaningfully impact overall survival. These metrics are appropriate candidates for national adoption. Cancer 2017;123:4372-81. © 2017 American Cancer Society.

  4. Quality Metrics of a Fecal Immunochemical Test-Based Colorectal Cancer Screening Program in Korea.

    Science.gov (United States)

    Kim, Dae Ho; Cha, Jae Myung; Kwak, Min Seob; Yoon, Jin Young; Cho, Young-Hak; Jeon, Jung Won; Shin, Hyun Phil; Joo, Kwang Ro; Lee, Joung Il

    2018-03-15

    Knowledge regarding the quality metrics of fecal immunochemical test (FIT)-based colorectal cancer screening programs is limited. The aim of this study was to investigate the performance and quality metrics of a FIT-based screening program. In our screening program, asymptomatic subjects aged ≥50 years underwent an annual FIT, and subjects with positive FIT results underwent a subsequent colonoscopy. The performance of the FIT and colonoscopy was analyzed in individuals with a positive FIT who completed the program between 2009 and 2015 at a university hospital. Among the 51,439 screened participants, 75.1% completed the FIT. The positive rate was 1.1%, and the colonoscopy completion rate in these patients was 68.6%. The positive predictive values of cancer and advanced neoplasia were 5.5% and 19.1%, respectively. The adenoma detection rate in the patients who underwent colonoscopy after a positive FIT was 48.2% (60.0% for men and 33.6% for women). The group with the highest tertile quantitative FIT level showed a significantly higher detection rate of advanced neoplasia than the group with the lowest tertile (odds ratio, 2.6; 95% confidence interval, 1.4 to 5.1; p<0.001). The quality metrics used in the United States and Europe may be directly introduced to other countries, including Korea. However, the optimal quality metrics should be established in each country.

  5. Implementation of a Clinical Documentation Improvement Curriculum Improves Quality Metrics and Hospital Charges in an Academic Surgery Department.

    Science.gov (United States)

    Reyes, Cynthia; Greenbaum, Alissa; Porto, Catherine; Russell, John C

    2017-03-01

    Accurate clinical documentation (CD) is necessary for many aspects of modern health care, including excellent communication, quality metrics reporting, and legal documentation. New requirements have mandated adoption of ICD-10-CM coding systems, adding another layer of complexity to CD. A clinical documentation improvement (CDI) and ICD-10 training program was created for health care providers in our academic surgery department. We aimed to assess the impact of our CDI curriculum by comparing quality metrics, coding, and reimbursement before and after implementation of our CDI program. A CDI/ICD-10 training curriculum was instituted in September 2014 for all members of our university surgery department. The curriculum consisted of didactic lectures, 1-on-1 provider training, case reviews, e-learning modules, and CD queries from nurse CDI staff and hospital coders. Outcome parameters included monthly documentation completion rates, severity of illness (SOI), risk of mortality (ROM), case-mix index (CMI), all-payer refined diagnosis-related groups (APR-DRG), and Surgical Care Improvement Program (SCIP) metrics. Financial gain from responses to CDI queries was determined retrospectively. Surgery department delinquent documentation decreased by 85% after CDI implementation. Compliance with SCIP measures improved from 85% to 97%. Statistically significant increases in surgical SOI, ROM, CMI, and APR-DRG were observed, consistent with more complete documentation and improved quality measures. Copyright © 2016 American College of Surgeons. All rights reserved.

  6. Automating Quality Metrics in the Era of Electronic Medical Records: Digital Signatures for Ventilator Bundle Compliance.

    Science.gov (United States)

    Lan, Haitao; Thongprayoon, Charat; Ahmed, Adil; Herasevich, Vitaly; Sampathkumar, Priya; Gajic, Ognjen; O'Horo, John C

    2015-01-01

    Ventilator-associated events (VAEs) are associated with increased risk of poor outcomes, including death. Bundle practices including thromboembolism prophylaxis, stress ulcer prophylaxis, oral care, and daily sedation breaks and spontaneous breathing trials aim to reduce rates of VAEs and are endorsed as quality metrics in intensive care units. We sought to create electronic search algorithms (digital signatures) to evaluate compliance with ventilator bundle components as the first step in a larger project evaluating the ventilator bundle effect on VAE. We developed digital signatures of bundle compliance using a retrospective cohort of 542 ICU patients from 2010 for derivation and validation, and tested signature accuracy in a random cohort of 100 patients from 2012. Accuracy was evaluated against manual chart review. Overall, digital signatures performed well, with median sensitivity of 100% (range, 94.4%-100%) and median specificity of 100% (range, 99.8%-100%). Automated ascertainment from electronic medical records accurately assesses ventilator bundle compliance and can be used for quality reporting and research in VAE.

  7. Quality-of-life metrics with vagus nerve stimulation for epilepsy from provider survey data.

    Science.gov (United States)

    Englot, Dario J; Hassnain, Kevin H; Rolston, John D; Harward, Stephen C; Sinha, Saurabh R; Haglund, Michael M

    2017-01-01

    Drug-resistant epilepsy is a devastating disorder associated with diminished quality of life (QOL). Surgical resection leads to seizure freedom and improved QOL in many epilepsy patients, but not all individuals are candidates for resection. In these cases, neuromodulation-based therapies such as vagus nerve stimulation (VNS) are often used, but most VNS studies focus exclusively on reduction of seizure frequency. QOL changes and predictors with VNS remain poorly understood. Using the VNS Therapy Patient Outcome Registry, we examined 7 metrics related to QOL after VNS for epilepsy in over 5000 patients (including over 3000 with ≥12 months follow-up), as subjectively assessed by treating physicians. Trends and predictors of QOL changes were examined and related to post-operative seizure outcome and likelihood of VNS generator replacement. After VNS therapy, physicians reported patient improvement in alertness (58-63%, range over follow-up period), post-ictal state (55-62%), cluster seizures (48-56%), mood change (43-49%), verbal communication (38-45%), school/professional achievements (29-39%), and memory (29-38%). Predictors of net QOL improvement included shorter time to implant (odds ratio [OR], 1.3; 95% confidence interval [CI], 1.1-1.6), generalized seizure type (OR, 1.2; 95% CI, 1.0-1.4), female gender (OR, 1.2; 95% CI, 1.0-1.4), and Caucasian ethnicity (OR, 1.3; 95% CI, 1.0-1.5). No significant trends were observed over time. Patients with net QOL improvement were more likely to have favorable seizure outcomes (chi square [χ2] = 148.1, p < 0.001). Overall, VNS therapy was associated with improvement across QOL metrics subjectively rated by physicians. QOL improvement is associated with favorable seizure outcome and a higher likelihood of generator replacement, suggesting satisfaction with therapy. It is important to consider QOL metrics in neuromodulation for epilepsy, given the deleterious effects of seizures on patient QOL. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  8. How good are the data? Feasible approach to validation of metrics of quality derived from an outpatient electronic health record.

    Science.gov (United States)

    Benin, Andrea L; Fenick, Ada; Herrin, Jeph; Vitkauskas, Grace; Chen, John; Brandt, Cynthia

    2011-01-01

    Although electronic health records (EHRs) promise to be efficient resources for measuring metrics of quality, they are not designed for such population-based analyses. Thus, extracting meaningful clinical data from them is not straightforward. To avoid poorly executed measurements, standardized methods to measure and to validate metrics of quality are needed. This study provides and evaluates a use case for a generally applicable approach to validating quality metrics measured electronically from EHR-based data. The authors iteratively refined and validated 4 outpatient quality metrics and classified errors in measurement. Multiple iterations of validation and measurement resulted in high levels of sensitivity and agreement versus the "gold standard" of manual review. In contrast, substantial differences remained for measurement based on coded billing data. Measuring quality metrics using an EHR-based electronic process requires validation to ensure accuracy; approaches to validation such as those described in this study should be used by organizations measuring quality from EHR-based information.
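    The validation step described here amounts to comparing electronically derived metric flags against manual chart review. A small sketch of that comparison, with invented data and hypothetical variable names, is:

        def validation_stats(electronic, manual):
            """Sensitivity and raw agreement of electronic flags vs. manual review.

            electronic, manual: parallel lists of booleans indicating whether a
            quality metric was met for each patient; manual review is treated as
            the gold standard.
            """
            pairs = list(zip(electronic, manual))
            tp = sum(e and m for e, m in pairs)
            fn = sum((not e) and m for e, m in pairs)
            sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
            agreement = sum(e == m for e, m in pairs) / len(pairs)
            return sensitivity, agreement

        sens, agree = validation_stats(
            electronic=[True, True, False, True, False],
            manual=[True, True, True, True, False],
        )
        print(sens, agree)    # 0.75 0.8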

  9. Usability Evaluations of a Wearable Inertial Sensing System and Quality of Movement Metrics for Stroke Survivors by Care Professionals.

    Science.gov (United States)

    Klaassen, Bart; van Beijnum, Bert-Jan F; Held, Jeremia P; Reenalda, Jasper; van Meulen, Fokke B; Veltink, Peter H; Hermens, Hermie J

    2017-01-01

    Inertial motion capture systems are used in many applications, such as measuring movement quality in stroke survivors. The absence of evidence on the clinical effectiveness and usability of these assistive technologies in rehabilitation has delayed the transition of research into clinical practice. Recently, a new inertial motion capture system was developed in a project called INTERACTION to objectively measure the quality of movement (QoM) in stroke survivors during daily-life activity. With INTERACTION, it becomes possible to investigate what happens with patients after discharge from the hospital. The resulting QoM metrics, where a metric is defined as a measure of some property, are subsequently presented to care professionals. Metrics include, for example, reaching distance, walking speed, and hand distribution plots; the latter shows a density plot of the hand position in the transversal plane. The objective of this study is to investigate care professionals' opinions on using these metrics obtained from INTERACTION and the system's usability, by means of a semi-structured interview guided by a presentation of two patient reports. Each report includes results for several QoM metrics (such as reaching distance, hand position density plots, and shoulder abduction) obtained during daily-life measurements and in the clinic, and was evaluated by care professionals not related to the project. The results were compared with those of care professionals involved in the INTERACTION project. Furthermore, two questionnaires (a 5-point Likert scale and an open questionnaire) were administered to rate the usability of the metrics and to investigate whether participants would like such a system in their clinic. Eleven interviews were conducted in Switzerland and The Netherlands, each with a group of two or three care professionals. Evaluation of the case reports (CRs) by participants and INTERACTION members showed a high correlation for both lower and upper extremity metrics

  10. Quality of Experience: From User Perception to Instrumental Metrics (Dagstuhl Seminar 12181)

    OpenAIRE

    Fiedler, Markus; Möller, Sebastian; Reichl, Peter

    2012-01-01

    This report documents the program and the outcomes of Dagstuhl Seminar 12181 "Quality of Experience: From User Perception to Instrumental Metrics". As follow-up of the Dagstuhl Seminar 09192 "From Quality of Service to Quality of Experience", it focused on the further development of an agreed definition of the term Quality of Experience (QoE) in collaboration with the COST Action IC1003 "Qualinet", as well as inventories of possibilities to measure QoE (beyond the usual user polls) and to exp...

  11. Patent Assessment Quality

    DEFF Research Database (Denmark)

    Burke, Paul F.; Reitzig, Markus

    2006-01-01

    The increasing number of patent applications worldwide and the extension of patenting to the areas of software and business methods have triggered a debate on "patent quality". While patent quality may have various dimensions, this paper argues that consistency in the decision making on the side of the patent office is one important dimension, particularly in new patenting areas (emerging technologies). In order to understand whether patent offices appear capable of providing consistent assessments of a patent's technological quality in such novel industries from the beginning, we study the concordance of the European Patent Office's (EPO's) granting and opposition decisions for individual patents. We use the historical example of biotech patents filed between 1978 and 1986, the early stage of the industry. Our results indicate that the EPO shows systematically different assessments of technological quality...

  12. Area of Concern: a new paradigm in life cycle assessment for the development of footprint metrics

    Science.gov (United States)

    Purpose: As a class of environmental metrics, footprints have been poorly defined, have shared an unclear relationship to life cycle assessment (LCA), and the variety of approaches to quantification have sometimes resulted in confusing and contradictory messages in the marketplac...

  13. A comparison of Image Quality Models and Metrics Predicting Object Detection

    Science.gov (United States)

    Rohaly, Ann Marie; Ahumada, Albert J., Jr.; Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    Many models and metrics for image quality predict image discriminability, the visibility of the difference between a pair of images. Some image quality applications, such as the quality of imaging radar displays, are concerned with object detection and recognition. Object detection involves looking for one of a large set of object sub-images in a large set of background images and has been approached from this general point of view. We find that discrimination models and metrics can predict the relative detectability of objects in different images, suggesting that these simpler models may be useful in some object detection and recognition applications. Here we compare three alternative measures of image discrimination, a multiple frequency channel model, a single filter model, and RMS error.
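    Of the three measures compared, the RMS-error metric is the simplest; a sketch of it on synthetic images is shown below for concreteness.

        import numpy as np

        def rms_error(reference, test):
            """Root-mean-square intensity difference between two equally sized images."""
            ref = np.asarray(reference, dtype=float)
            tst = np.asarray(test, dtype=float)
            return float(np.sqrt(np.mean((ref - tst) ** 2)))

        background = np.zeros((8, 8))
        with_object = background.copy()
        with_object[2:4, 2:4] = 4.0             # small 'object' added to the scene
        print(rms_error(background, with_object))   # 1.0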

  14. Improvement in Quality Metrics by the UPMC Enhanced Care Program: A Novel Super-Utilizer Program.

    Science.gov (United States)

    Bryk, Jodie; Fischer, Gary S; Lyons, Anita; Shroff, Swati; Bui, Thuy; Simak, Deborah; Kapoor, Wishwa

    2017-09-25

    The aim was to evaluate pre-post quality of care measures among super-utilizer patients enrolled in the Enhanced Care Program (ECP), a primary care intensive care program. A pre-post analysis of metrics of quality of care for diabetes, hypertension, cancer screenings, and connection to mental health care for participants in the ECP was conducted for patients enrolled in ECP for 6 or more months. Patients enrolled in ECP showed statistically significant improvements in hemoglobin A1c, retinal exams, blood pressure measurements, and screenings for colon cancer, and trends toward improvement in diabetic foot exams and screenings for cervical and breast cancer. There was a significant increase in connecting patients to mental health care. This study shows that super-utilizer patients enrolled in the ECP had significant improvements in quality metrics from those prior to enrollment in ECP.

  15. Development and Evaluation of Quality Metrics for Bioinformatics Analysis of Viral Insertion Site Data Generated Using High Throughput Sequencing.

    Science.gov (United States)

    Gao, Hongyu; Hawkins, Troy; Jasti, Aparna; Chen, Yu-Hsiang; Mockaitis, Keithanne; Dinauer, Mary; Cornetta, Kenneth

    2014-05-06

    Integration of viral vectors into a host genome is associated with insertional mutagenesis and subjects in clinical gene therapy trials must be monitored for this adverse event. Several PCR based methods such as ligase-mediated (LM) PCR, linear-amplification-mediated (LAM) PCR and non-restrictive (nr) LAM PCR were developed to identify sites of vector integration. Coupling the power of next-generation sequencing technologies with various PCR approaches will provide a comprehensive and genome-wide profiling of insertion sites and increase throughput. In this bioinformatics study, we aimed to develop and apply quality metrics to viral insertion data obtained using next-generation sequencing. We developed five simple metrics for assessing next-generation sequencing data from different PCR products and showed how the metrics can be used to objectively compare runs performed with the same methodology as well as data generated using different PCR techniques. The results will help researchers troubleshoot complex methodologies, understand the quality of sequencing data, and provide a starting point for developing standardization of vector insertion site data analysis.

  16. The appropriateness of 30-day mortality as a quality metric in colorectal cancer surgery.

    Science.gov (United States)

    Adam, Mohamed Abdelgadir; Turner, Megan C; Sun, Zhifei; Kim, Jina; Ezekian, Brian; Migaly, John; Mantyh, Christopher R

    2018-01-01

    Our study compares 30-day vs. 90-day mortality following colorectal cancer surgery (CRS), and examines hospital performance ranking based on this assessment. Mortality rates were compared between 30 vs. 90 days following CRS for patients with stage I-III colorectal cancers from the National Cancer Database (2004-2012). Risk-adjusted hierarchical regression models evaluated hospital performance based on mortality. Hospitals were ranked into top (10%), middle (80%), and lowest (10%) performance groups. Among 185,464 patients, 90-day mortality was nearly double the 30-day mortality (4.4% vs. 2.5%). Following risk adjustment 176 hospitals changed performance ranking: 39% in the top 30-day mortality group changed ranking to the middle group; 37% of hospitals in the lowest 30-day group changed ranking to the middle 90-day group. Evaluation of hospital performance based on 30-day mortality is associated with misclassification for 15% of hospitals. Ninety-day mortality may be a better quality metric in oncologic CRS. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. The impact of interhospital transfers on surgical quality metrics for academic medical centers.

    Science.gov (United States)

    Crippen, Cristina J; Hughes, Steven J; Chen, Sugong; Behrns, Kevin E

    2014-07-01

    The emergence of pay-for-performance systems poses a risk to an academic medical center's (AMC) mission to provide care for interhospital surgical transfer patients. This study examines quality metrics and resource consumption for a sample of these patients from the University Health System Consortium (UHC) and our Department of Surgery (DOS). Standard benchmarks, including mortality rate, length of stay (LOS), and cost, were used to evaluate the impact of interhospital surgical transfers versus direct admission (DA) patients from January 2010 to December 2012. For 1,423,893 patients, the case mix index for transfer patients was 38 per cent (UHC) and 21 per cent (DOS) greater than that for DA patients. Mortality rates were 5.70 per cent (UHC) and 6.93 per cent (DOS) in transferred patients compared with 1.79 per cent (UHC) and 2.93 per cent (DOS) for DA patients. Mean LOS for DA patients was 4 days shorter. Mean total costs for transferred patients were greater by $13,613 (UHC) and $13,356 (DOS). Transfer patients have poorer outcomes and consume more resources than DA patients. Early recognition and transfer of complex surgical patients may improve patient rescue and decrease resource consumption. Surgeons at AMCs and in the community should develop collaborative programs that permit collective assessment and decision-making for complicated surgical patients.

  18. How Do Publicly Reported Medicare Quality Metrics for Radiologists Compare With Those of Other Specialty Groups?

    Science.gov (United States)

    Rosenkrantz, Andrew B; Hughes, Danny R; Duszak, Richard

    2016-03-01

    To characterize and compare the performance of radiologists in Medicare's new Physician Compare Initiative with that of other provider groups. CMS Physician Compare data were obtained for all 900,334 health care providers (including 30,614 radiologists) enrolled in Medicare in early 2015. All publicly reported metrics were compared among eight provider categories (radiologists, pathologists, primary care, other medical subspecialists, surgeons, all other physicians, nurse practitioners and physician assistants, and all other nonphysicians). Overall radiologist satisfaction of all six Physician Compare Initiative metrics differed significantly from that of nonradiologists (all P ≤ .005): acceptance of Medicare-approved amount as payment in full, 75.8% versus 85.0%; Electronic Prescribing, 11.2% versus 25.1%; Physician Quality Reporting System (PQRS), 60.5% versus 39.4%; electronic health record participation, 15.8% versus 25.4%; receipt of the PQRS Maintenance of Certification Program Incentive, 4.7% versus 0.3%; and Million Hearts initiative participation, 0.007% versus 0.041%. Among provider categories, radiologists and pathologists demonstrated the highest and second-highest performance levels, respectively, for the two metrics (PQRS and MOC) with specialty-specific designs, but they ranked between fifth and eighth in all remaining non-specialty-specific metrics. The performance of radiologists and pathologists in Medicare's Physician Compare Initiative may relate to the extent to which metrics are tailored to the distinct aspects of their practices as diagnostic information specialists. If more physician participation in these programs is desired, more meaningful specialty-specific (rather than generic) metrics are encouraged. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  19. Measuring the effects of health information technology on quality of care: a novel set of proposed metrics for electronic quality reporting.

    Science.gov (United States)

    Kern, Lisa M; Dhopeshwarkar, Rina; Barrón, Yolanda; Wilcox, Adam; Pincus, Harold; Kaushal, Rainu

    2009-07-01

    Electronic health records (EHRs), in combination with health information exchange, are being promoted in the United States as a strategy for improving quality of care. No single metric set exists for measuring the effectiveness of these interventions. A set of quality metrics was sought that could be retrieved electronically and would be sensitive to the changes in quality that EHRs with health information exchange may contribute to ambulatory care. A literature search identified quality metric sets for ambulatory care. Two rounds of quantitative rating of individual metrics were conducted. Metrics were developed de novo to capture additional expected effects of EHRs with health information exchange. A 36-member national expert panel validated the rating process and final metric set. Seventeen metric sets containing 1,064 individual metrics were identified; 510 metrics met inclusion criteria. Two rounds of rating narrowed these to 59 metrics and then to 18. The final 18 consisted of metrics for asthma, cardiovascular disease, congestive heart failure, diabetes, medication and allergy documentation, mental health, osteoporosis, and prevention. Fourteen metrics were developed de novo to address test ordering, medication management, referrals, follow-up after discharge, and revisits. The novel set of 32 metrics is proposed as suitable for electronic reporting to capture the potential quality effects of EHRs with health information exchange. This metric set may have broad utility as health information technology becomes increasingly common with funding from the federal stimulus package and other sources. This work may also stimulate discussion on improving how data are entered and extracted from clinically rich, electronic sources, with the goal of more accurately measuring and improving care.

  20. Workflow and metrics for image quality control in large-scale high-content screens.

    Science.gov (United States)

    Bray, Mark-Anthony; Fraser, Adam N; Hasaka, Thomas P; Carpenter, Anne E

    2012-02-01

    Automated microscopes have enabled the unprecedented collection of images at a rate that precludes visual inspection. Automated image analysis is required to identify interesting samples and extract quantitative information for high-content screening (HCS). However, researchers are impeded by the lack of metrics and software tools to identify image-based aberrations that pollute data, limiting experiment quality. The authors have developed and validated approaches to identify those image acquisition artifacts that prevent optimal extraction of knowledge from high-content microscopy experiments. They have implemented these as a versatile, open-source toolbox of algorithms and metrics readily usable by biologists to improve data quality in a wide variety of biological experiments.

  1. Multivariate Analyses of Quality Metrics for Crystal Structures in the PDB Archive.

    Science.gov (United States)

    Shao, Chenghua; Yang, Huanwang; Westbrook, John D; Young, Jasmine Y; Zardecki, Christine; Burley, Stephen K

    2017-03-07

    Following deployment of an augmented validation system by the Worldwide Protein Data Bank (wwPDB) partnership, the quality of crystal structures entering the PDB has improved. Of significance are improvements in quality measures now prominently displayed in the wwPDB validation report. Comparisons of PDB depositions made before and after introduction of the new reporting system show improvements in quality measures relating to pairwise atom-atom clashes, side-chain torsion angle rotamers, and local agreement between the atomic coordinate structure model and experimental electron density data. These improvements are largely independent of resolution limit and sample molecular weight. No significant improvement in the quality of associated ligands was observed. Principal component analysis revealed that structure quality could be summarized with three measures (Rfree, real-space R factor Z score, and a combined molecular geometry quality metric), which can in turn be reduced to a single overall quality metric readily interpretable by all PDB archive users. Copyright © 2017 Elsevier Ltd. All rights reserved.
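    The principal component analysis mentioned above can be sketched as follows; the per-entry values are invented, and the three input measures (Rfree, real-space R factor Z score, and a combined geometry score) only approximate the ones used in the study.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Rows: hypothetical PDB entries; columns: Rfree, RSRZ outlier %, geometry score.
        X = np.array([
            [0.22, 2.1, 0.8],
            [0.28, 6.5, 2.3],
            [0.19, 1.2, 0.5],
            [0.31, 9.0, 3.1],
        ])

        Z = StandardScaler().fit_transform(X)          # put measures on a common scale
        summary = PCA(n_components=1).fit_transform(Z).ravel()
        print(summary)   # one overall quality coordinate per entry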

  2. Large performance incentives had the greatest impact on providers whose quality metrics were lowest at baseline.

    Science.gov (United States)

    Greene, Jessica; Hibbard, Judith H; Overton, Valerie

    2015-04-01

    This study examined the impact of Fairview Health Services' primary care provider compensation model, in which 40 percent of compensation was based on clinic-level quality outcomes. Fairview Health Services is a Pioneer accountable care organization in Minnesota. Using publicly reported performance data from 2010 and 2012, we found that Fairview's improvement in quality metrics was not greater than the improvement in other comparable Minnesota medical groups. An analysis of Fairview's administrative data found that the largest predictor of improvement over the first two years of the compensation model was primary care providers' baseline quality performance. Providers whose baseline performance was in the lowest tertile improved three times more, on average, across the three quality metrics studied than those in the middle tertile, and almost six times more than those in the top tertile. As a result, there was a narrowing of variation in performance across all primary care providers at Fairview and a narrowing of the gap in quality between providers who treated the highest-income patient panels and those who treated the lowest-income panels. The large quality incentive fell short of its overall quality improvement aim. However, the results suggest that payment reform may help narrow variation in primary care provider performance, which can translate into narrowing socioeconomic disparities. Project HOPE—The People-to-People Health Foundation, Inc.

  3. Development of quality metrics for ambulatory care in pediatric patients with tetralogy of Fallot.

    Science.gov (United States)

    Villafane, Juan; Edwards, Thomas C; Diab, Karim A; Satou, Gary M; Saarel, Elizabeth; Lai, Wyman W; Serwer, Gerald A; Karpawich, Peter P; Cross, Russell; Schiff, Russell; Chowdhury, Devyani; Hougen, Thomas J

    2017-12-01

    The objective of this study was to develop quality metrics (QMs) relating to the ambulatory care of children after complete repair of tetralogy of Fallot (TOF). A workgroup team (WT) of pediatric cardiologists with expertise in all aspects of ambulatory cardiac management was formed at the request of the American College of Cardiology (ACC) and the Adult Congenital and Pediatric Cardiology Council (ACPC), to review published guidelines and consensus data relating to the ambulatory care of repaired TOF patients under the age of 18 years. A set of quality metrics (QMs) was proposed by the WT. The metrics went through a two-step evaluation process. In the first step, the RAND-UCLA modified Delphi methodology was employed and the metrics were voted on feasibility and validity by an expert panel. In the second step, QMs were put through an "open comments" process where feedback was provided by the ACPC members. The final QMs were approved by the ACPC council. The TOF WT formulated 9 QMs of which only 6 were submitted to the expert panel; 3 QMs passed the modified RAND-UCLA and went through the "open comments" process. Based on the feedback through the open comment process, only 1 metric was finally approved by the ACPC council. The ACPC Council was able to develop QM for ambulatory care of children with repaired TOF. These patients should have documented genetic testing for 22q11.2 deletion. However, lack of evidence in the literature made it a challenge to formulate other evidence-based QMs. © 2017 Wiley Periodicals, Inc.

  4. Simulation of devices mobility to estimate wireless channel quality metrics in 5G networks

    Science.gov (United States)

    Orlov, Yu.; Fedorov, S.; Samuylov, A.; Gaidamaka, Yu.; Molchanov, D.

    2017-07-01

    The problem of channel quality estimation for devices in a wireless 5G network is formulated. As the performance metric of interest we choose the signal-to-interference-plus-noise ratio (SINR), which depends essentially on the distance between the communicating devices. A model with a plurality of moving devices in a bounded three-dimensional space and a simulation algorithm to determine the distances between the devices for a given motion model are devised.
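    The distance dependence referred to here can be made concrete with a simple log-distance path-loss sketch; the transmit power, path-loss exponent, and noise level below are assumptions, not parameters from the cited model.

        import math

        def sinr_db(d_serving, d_interferers, p_tx=1.0, alpha=3.5, noise=1e-9):
            """SINR (in dB) for a receiver at distance d_serving from its serving
            device, with interferers at the given distances; power decays as d**-alpha."""
            signal = p_tx * d_serving ** (-alpha)
            interference = sum(p_tx * d ** (-alpha) for d in d_interferers)
            return 10.0 * math.log10(signal / (interference + noise))

        # Serving device 25 m away, three interferers farther off.
        print(round(sinr_db(25.0, [80.0, 120.0, 200.0]), 1))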

  5. Program analysis methodology Office of Transportation Technologies: Quality Metrics final report

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2002-03-01

    "Quality Metrics" is the analytical process for measuring and estimating future energy, environmental and economic benefits of US DOE Office of Energy Efficiency and Renewable Energy (EE/RE) programs. This report focuses on the projected benefits of the programs currently supported by the Office of Transportation Technologies (OTT) within EE/RE. For analytical purposes, these various benefits are subdivided in terms of Planning Units which are related to the OTT program structure.

  6. Quality control metrics improve repeatability and reproducibility of single-nucleotide variants derived from whole-genome sequencing.

    Science.gov (United States)

    Zhang, W; Soika, V; Meehan, J; Su, Z; Ge, W; Ng, H W; Perkins, R; Simonyan, V; Tong, W; Hong, H

    2015-08-01

    Although many quality control (QC) methods have been developed to improve the quality of single-nucleotide variants (SNVs) in SNV-calling, QC methods for use subsequent to single-nucleotide polymorphism-calling have not been reported. We developed five QC metrics to improve the quality of SNVs using the whole-genome-sequencing data of a monozygotic twin pair from the Korean Personal Genome Project. The QC metrics improved both repeatability between the monozygotic twin pair and reproducibility between SNV-calling pipelines. We demonstrated the QC metrics improve reproducibility of SNVs derived from not only whole-genome-sequencing data but also whole-exome-sequencing data. The QC metrics are calculated based on the reference genome used in the alignment without accessing the raw and intermediate data or knowing the SNV-calling details. Therefore, the QC metrics can be easily adopted in downstream association analysis.

  7. Assessment of Nutrition Information System Using Health Metrics Network Framework

    Directory of Open Access Journals (Sweden)

    Mochamad Iqbal Nurmansyah

    2015-08-01

    The nutrition information system (Sigizi) has been developed by the Directorate of Nutrition Development of the Ministry of Health since 2011. Sigizi data cover toddler weighing at integrated health posts (posyandu), severe malnutrition cases, coverage of iron (Fe) tablet supplementation for pregnant women, iodized salt consumption, vitamin A administration, and exclusive breastfeeding. This study aimed to measure the performance of Sigizi management at the South Tangerang City Health Office using the Health Metrics Network framework issued by WHO in 2008. Sigizi is an information system applied at the national level with a tiered reporting mechanism, from 508 districts/cities to 34 provinces and finally to the national level. In Banten Province, eight districts/cities run Sigizi. There were six study informants: the nutrition section, the health resources and health information system section, two nutrition field officers, and two posyandu cadres. Data were collected from January to April 2013 using interview guides, observation, and document review. Interpretive analysis was used to analyze the data. The results show that there is as yet no policy or training on nutrition surveillance. Monitoring activities have been carried out. Facilities were judged adequate, although their maintenance was lacking. There are six nutrition program indicators that refer to the MDGs. Data grouping and a data dictionary are in place. Data are reported monthly. Graphs and maps are used to present the data. The available data are used for monitoring and decision making in nutrition program activities at the posyandu, community health center (puskesmas), and health office levels. Overall, the implementation of Sigizi at the South Tangerang City Health Office has been adequate.

  8. Developing a composite weighted quality metric to reflect the total benefit conferred by a health plan.

    Science.gov (United States)

    Taskler, Glen B; Braithwaite, R Scott

    2015-03-01

    The objective was to improve on individual health quality measures, which are associated with varying degrees of health benefit, and on composite quality metrics, which weight individual measures identically. We developed a health-weighted composite quality measure reflecting the total health benefit conferred by a health plan annually, using preventive care as a test case. Using national disease prevalence, we simulated a hypothetical insurance panel of individuals aged 25 to 84 years. For each individual, we estimated the gain in life expectancy associated with 1 year of health system exposure to encourage adherence to major preventive care guidelines, controlling for patient characteristics (age, race, gender, comorbidity) and variation in individual adherence rates. This personalized gain in life expectancy was used to proxy for the amount of health benefit conferred by a health plan annually to its members, and formed weights in our health-weighted composite quality measure. We aggregated health benefits across the health insurance membership panel to analyze total health system performance. Our composite quality metric gave the highest weights to health plans that succeeded in implementing tobacco cessation and weight loss. One year of compliance with these goals was associated with 2 to 10 times as much health benefit as compliance with easier-to-follow preventive care services, such as mammography, aspirin, and antihypertensives. For example, for women aged 55 to 64 years, successful interventions to encourage weight loss were associated with 2.1 times the health benefit of blood pressure reduction and 3.9 times the health benefit of increasing adherence with screening mammography. A single health-weighted quality metric may inform measurement of total health system performance.
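    A minimal sketch of such a health-weighted composite is given below: each preventive-care measure is weighted by the life-expectancy gain it confers, and the composite reports the share of achievable benefit actually delivered. The gains and compliance rates are invented for illustration; in the study they come from the simulation described above.

        def health_weighted_composite(measures):
            """measures: list of (life_expectancy_gain_years, compliance_rate) pairs."""
            achievable = sum(gain for gain, _ in measures)
            delivered = sum(gain * rate for gain, rate in measures)
            return delivered / achievable

        panel = [
            (0.50, 0.15),   # tobacco cessation: large gain, low compliance
            (0.30, 0.20),   # weight loss
            (0.10, 0.80),   # blood pressure control
            (0.05, 0.70),   # screening mammography
        ]
        print(round(health_weighted_composite(panel), 3))   # 0.263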

  9. Integrated Metrics for Improving the Life Cycle Approach to Assessing Product System Sustainability

    Directory of Open Access Journals (Sweden)

    Wesley Ingwersen

    2014-03-01

    Life cycle approaches are critical for identifying and reducing environmental burdens of products. While these methods can indicate potential environmental impacts of a product, current Life Cycle Assessment (LCA) methods fail to integrate the multiple impacts of a system into unified measures of social, economic or environmental performance related to sustainability. Integrated metrics that combine multiple aspects of system performance based on a common scientific or economic principle have proven to be valuable for sustainability evaluation. In this work, we propose methods of adapting four integrated metrics for use with LCAs of product systems: ecological footprint, emergy, green net value added, and Fisher information. These metrics provide information on the full product system in land, energy, monetary equivalents, and as a unitless information index; each bundled with one or more indicators for reporting. When used together and for relative comparison, integrated metrics provide a broader coverage of sustainability aspects from multiple theoretical perspectives that is more likely to illuminate potential issues than individual impact indicators. These integrated metrics are recommended for use in combination with traditional indicators used in LCA. Future work will test and demonstrate the value of using these integrated metrics and combinations to assess product system sustainability.

  10. MUSTANG: A Community-Facing Web Service to Improve Seismic Data Quality Awareness Through Metrics

    Science.gov (United States)

    Templeton, M. E.; Ahern, T. K.; Casey, R. E.; Sharer, G.; Weertman, B.; Ashmore, S.

    2014-12-01

    IRIS DMC is engaged in a new effort to provide broad and deep visibility into the quality of data and metadata found in its terabyte-scale geophysical data archive. Taking advantage of large and fast disk capacity, modern advances in open database technologies, and nimble provisioning of virtual machine resources, we are creating an openly accessible treasure trove of data measurements for scientists and the general public to utilize in providing new insights into the quality of this data. We have branded this statistical gathering system MUSTANG, and have constructed it as a component of the web services suite that IRIS DMC offers. MUSTANG measures over forty data metrics addressing issues with archive status, data statistics and continuity, signal anomalies, noise analysis, metadata checks, and station state of health. These metrics could potentially be used both by network operators to diagnose station problems and by data users to sort suitable data from unreliable or unusable data. Our poster details what MUSTANG is, how users can access it, what measurements they can find, and how MUSTANG fits into the IRIS DMC's data access ecosystem. Progress in data processing, approaches to data visualization, and case studies of MUSTANG's use for quality assurance will be presented. We want to illustrate what is possible with data quality assurance, the need for data quality assurance, and how the seismic community will benefit from this freely available analytics service.
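    As an example of how such a service is typically queried, the sketch below requests one metric for one channel over HTTP. The endpoint and parameter names are assumptions based on the publicly documented IRIS MUSTANG measurements service and should be checked against the current service documentation before use.

        import requests

        URL = "https://service.iris.edu/mustang/measurements/1/query"
        params = {
            "metric": "percent_availability",            # one of the MUSTANG metrics
            "net": "IU", "sta": "ANMO", "loc": "00", "cha": "BHZ",
            "start": "2014-01-01", "end": "2014-02-01",
            "format": "text",
        }
        resp = requests.get(URL, params=params, timeout=30)
        resp.raise_for_status()
        print("\n".join(resp.text.splitlines()[:5]))     # first rows of the metric table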

  11. Health-weighted Composite Quality Metrics Offer Promise to Improve Health Outcomes in a Learning Health System.

    Science.gov (United States)

    Braithwaite, Scott; Stine, Nicholas

    2013-01-01

    Health system leaders sometimes adopt quality metrics without robust supporting evidence of improvements in quality and/or quantity of life, which may impair rather than facilitate improved health outcomes. In brief, there is now no easy way to measure how much "health" is conferred by a health system. However, we argue that this goal is achievable. Health-weighted composite quality metrics have the potential to measure "health" by synthesizing individual evidence-based quality metrics into a summary measure, utilizing relative weightings that reflect the relative amount of health benefit conferred by each constituent quality metric. Previously, it has been challenging to create health-weighted composite quality metrics because of methodological and data limitations. However, advances in health information technology and mathematical modeling of disease progression promise to help mitigate these challenges by making patient-level data (eg, from the electronic health record and mobile health (mHealth)) more accessible and more actionable for use. Accordingly, it may now be possible to use health information technology to calculate and track a health-weighted composite quality metric for each patient that reflects the health benefit conferred to that patient by the health system. These health-weighted composite quality metrics can be employed for a multitude of important aims that improve health outcomes, including quality evaluation, population health maximization, health disparity attenuation, panel management, resource allocation, and personalization of care. We describe the necessary attributes, the possible uses, and the likely limitations and challenges of health-weighted composite quality metrics using patient-level health data.

  12. Water Quality Assessment and Management

    Science.gov (United States)

    Overview of the Clean Water Act (CWA) restoration framework including: water quality standards, monitoring/assessment, reporting water quality status, TMDL development, and TMDL implementation (point & nonpoint source control)

  13. Teaching and assessing procedural skills using simulation: metrics and methodology.

    Science.gov (United States)

    Lammers, Richard L; Davenport, Moira; Korley, Frederick; Griswold-Theodorson, Sharon; Fitch, Michael T; Narang, Aneesh T; Evans, Leigh V; Gross, Amy; Rodriguez, Elliot; Dodge, Kelly L; Hamann, Cara J; Robey, Walter C

    2008-11-01

    Simulation allows educators to develop learner-focused training and outcomes-based assessments. However, the effectiveness and validity of simulation-based training in emergency medicine (EM) requires further investigation. Teaching and testing technical skills require methods and assessment instruments that are somewhat different than those used for cognitive or team skills. Drawing from work published by other medical disciplines as well as educational, behavioral, and human factors research, the authors developed six research themes: measurement of procedural skills; development of performance standards; assessment and validation of training methods, simulator models, and assessment tools; optimization of training methods; transfer of skills learned on simulator models to patients; and prevention of skill decay over time. The article reviews relevant and established educational research methodologies and identifies gaps in our knowledge of how physicians learn procedures. The authors present questions requiring further research that, once answered, will advance understanding of simulation-based procedural training and assessment in EM.

  14. Quality assessment of images displayed on LCD screen with local backlight dimming

    DEFF Research Database (Denmark)

    Mantel, Claire; Burini, Nino; Korhonen, Jari

    2013-01-01

    This paper presents a subjective experiment collecting quality assessment of images displayed on an LCD with local backlight dimming using two methodologies: absolute category ratings and paired-comparison. Some well-known objective quality metrics are then applied to the stimuli...... and their respective performances are analyzed. The HDR-VDP metric seems to achieve good performance on every source image....

  15. Beyond metrics? Utilizing ‘soft intelligence’ for healthcare quality and safety

    Science.gov (United States)

    Martin, Graham P.; McKee, Lorna; Dixon-Woods, Mary

    2015-01-01

    Formal metrics for monitoring the quality and safety of healthcare have a valuable role, but may not, by themselves, yield full insight into the range of fallibilities in organizations. ‘Soft intelligence’ is usefully understood as the processes and behaviours associated with seeking and interpreting soft data—of the kind that evade easy capture, straightforward classification and simple quantification—to produce forms of knowledge that can provide the basis for intervention. With the aim of examining current and potential practice in relation to soft intelligence, we conducted and analysed 107 in-depth qualitative interviews with senior leaders, including managers and clinicians, involved in healthcare quality and safety in the English National Health Service. We found that participants were in little doubt about the value of softer forms of data, especially for their role in revealing troubling issues that might be obscured by conventional metrics. Their struggles lay in how to access softer data and turn them into a useful form of knowing. Some of the dominant approaches they used risked replicating the limitations of hard, quantitative data. They relied on processes of aggregation and triangulation that prioritised reliability, or on instrumental use of soft data to animate the metrics. The unpredictable, untameable, spontaneous quality of soft data could be lost in efforts to systematize their collection and interpretation to render them more tractable. A more challenging but potentially rewarding approach involved processes and behaviours aimed at disrupting taken-for-granted assumptions about quality, safety, and organizational performance. This approach, which explicitly values the seeking out and the hearing of multiple voices, is consistent with conceptual frameworks of organizational sensemaking and dialogical understandings of knowledge. Using soft intelligence this way can be challenging and discomfiting, but may offer a critical defence

  16. Quality specifications of routine clinical chemistry methods based on sigma metrics in performance evaluation.

    Science.gov (United States)

    Xia, Jun; Chen, Su-Feng; Xu, Fei; Zhou, Yong-Lie

    2017-06-23

    Sigma metrics were applied to evaluate the performance of 20 routine chemistry assays, and individual quality control criteria were established based on the sigma values of different assays. Precisions were expressed as the average coefficients of variation (CVs) of long-term two-level chemistry controls. The biases of the 20 assays were obtained from the results of trueness programs organized by the National Center for Clinical Laboratories (NCCL, China) in 2016. Four different allowable total error (TEa) targets were chosen from biological variation (minimum, desirable, optimal), the Clinical Laboratory Improvement Amendments (CLIA, US), the Analytical Quality Specification for Routine Analytes in Clinical Chemistry (WS/T 403-2012, China) and the National Cholesterol Education Program (NCEP). The sigma values from different TEa targets varied. The TEa targets for ALT, AMY, Ca, CHOL, CK, Crea, GGT, K, LDH, Mg, Na, TG, TP, UA and Urea were chosen from WS/T 403-2012; the targets for ALP, AST and GLU were chosen from CLIA; the target for K was chosen from desirable biological variation; and the targets for HDL and LDL were chosen from the NCEP. Individual quality criteria were established based on different sigma values. Sigma metrics are an optimal tool to evaluate the performance of different assays. An assay with a high sigma value can use a simple internal quality control rule, while an assay with a low sigma value should be monitored strictly. © 2017 Wiley Periodicals, Inc.

  17. New Adaptive Image Quality Assessment Based on Distortion Classification

    Directory of Open Access Journals (Sweden)

    Xin JIN

    2014-01-01

    Full Text Available This paper proposes a new adaptive image quality assessment (AIQA) method based on distortion classification. AIQA contains two parts: distortion classification and image quality assessment. First, we analyze characteristics of the original and distorted images, including the distribution of wavelet coefficients and the ratio of edge energy to inner energy of the differential image block, and divide distorted images into white-noise distortion, JPEG compression distortion and blur distortion. To evaluate the quality of the first two types of distorted images, we use a pixel-based structural similarity metric and a DCT-based structural similarity metric, respectively. For blurred images, we present a new wavelet-based structural similarity algorithm. According to the experimental results, AIQA takes advantage of the different structural similarity metrics and is able to simulate human visual perception effectively.

  18. Revision and extension of Eco-LCA metrics for sustainability assessment of the energy and chemical processes.

    Science.gov (United States)

    Yang, Shiying; Yang, Siyu; Kraslawski, Andrzej; Qian, Yu

    2013-12-17

    Ecologically based life cycle assessment (Eco-LCA) is an appealing approach for the evaluation of resource utilization and environmental impacts of the process industries at an ecological scale. However, the aggregated metrics of Eco-LCA suffer from some drawbacks: the environmental impact metric has limited applicability; the resource utilization metric ignores indirect consumption; the renewability metric fails to address the quantitative distinction of resource availability; the productivity metric seems self-contradictory. In this paper, the existing Eco-LCA metrics are revised and extended for sustainability assessment of the energy and chemical processes. A new Eco-LCA metrics system is proposed, including four independent dimensions: environmental impact, resource utilization, resource availability, and economic effectiveness. An illustrative example comparing a gas boiler and a solar boiler process provides insight into the features of the proposed approach.

  19. The palmar metric: A novel radiographic assessment of the equine ...

    African Journals Online (AJOL)

    Digital radiographs are often used to subjectively assess the equine digit. Recently, quantitative and objective radiographic measurements have been reported that give new insight into the form and function of the equine digit. We investigated a radio-dense curvilinear profile along the distal phalanx on lateral radiographs ...

  20. Video quality assessment for web content mirroring

    Science.gov (United States)

    He, Ye; Fei, Kevin; Fernandez, Gustavo A.; Delp, Edward J.

    2014-03-01

    Due to the increasing user expectation on watching experience, moving web high quality video streaming content from the small screen in mobile devices to the larger TV screen has become popular. It is crucial to develop video quality metrics to measure the quality change for various devices or network conditions. In this paper, we propose an automated scoring system to quantify user satisfaction. We compare the quality of local videos with the videos transmitted to a TV. Four video quality metrics, namely Image Quality, Rendering Quality, Freeze Time Ratio and Rate of Freeze Events are used to measure video quality change during web content mirroring. To measure image quality and rendering quality, we compare the matched frames between the source video and the destination video using barcode tools. Freeze time ratio and rate of freeze events are measured after extracting video timestamps. Several user studies are conducted to evaluate the impact of each objective video quality metric on the subjective user watching experience.
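
    As a rough illustration of the last two metrics, the sketch below derives a freeze time ratio and a rate of freeze events from extracted frame timestamps. The nominal frame rate and the threshold of twice the expected frame interval for declaring a freeze are assumptions made for the example, not the authors' definitions.

        def freeze_metrics(timestamps, nominal_fps=30.0, gap_factor=2.0):
            """Return (freeze_time_ratio, freeze_events_per_second) from ascending
            frame display timestamps in seconds."""
            if len(timestamps) < 2:
                return 0.0, 0.0
            expected = 1.0 / nominal_fps
            duration = timestamps[-1] - timestamps[0]
            freeze_time, freeze_events = 0.0, 0
            for prev, cur in zip(timestamps, timestamps[1:]):
                gap = cur - prev
                if gap > gap_factor * expected:     # frame arrived much later than expected
                    freeze_time += gap - expected   # count the excess delay as frozen time
                    freeze_events += 1
            return freeze_time / duration, freeze_events / duration

        if __name__ == "__main__":
            ts = [i / 30.0 for i in range(60)]          # 2 s of smooth 30 fps playback...
            ts = ts[:30] + [t + 0.5 for t in ts[30:]]   # ...with a single 0.5 s freeze
            ratio, rate = freeze_metrics(ts)
            print(f"freeze time ratio {ratio:.2f}, freeze events per second {rate:.2f}")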

  1. Object-Oriented Metrics Which Predict Maintainability

    OpenAIRE

    Li, Wei; Henry, Sallie M.

    1993-01-01

    Software metrics have been studied in the procedural paradigm as a quantitative means of assessing the software development process as well as the quality of software products. Several studies have validated that various metrics are useful indicators of maintenance effort in the procedural paradigm. However, software metrics have rarely been studied in the object oriented paradigm. Very few metrics have been proposed to measure object oriented systems, and the proposed ones have not been v...

  2. Advancing efforts to achieve health equity: equity metrics for health impact assessment practice.

    Science.gov (United States)

    Heller, Jonathan; Givens, Marjory L; Yuen, Tina K; Gould, Solange; Jandu, Maria Benkhalti; Bourcier, Emily; Choi, Tim

    2014-10-24

    Equity is a core value of Health Impact Assessment (HIA). Many compelling moral, economic, and health arguments exist for prioritizing and incorporating equity considerations in HIA practice. Decision-makers, stakeholders, and HIA practitioners see the value of HIAs in uncovering the impacts of policy and planning decisions on various population subgroups, developing and prioritizing specific actions that promote or protect health equity, and using the process to empower marginalized communities. There have been several HIA frameworks developed to guide the inclusion of equity considerations. However, the field lacks clear indicators for measuring whether an HIA advanced equity. This article describes the development of a set of equity metrics that aim to guide and evaluate progress toward equity in HIA practice. These metrics also intend to further push the field to deepen its practice and commitment to equity in each phase of an HIA. Over the course of a year, the Society of Practitioners of Health Impact Assessment (SOPHIA) Equity Working Group took part in a consensus process to develop these process and outcome metrics. The metrics were piloted, reviewed, and refined based on feedback from reviewers. The Equity Metrics are comprised of 23 measures of equity organized into four outcomes: (1) the HIA process and products focused on equity; (2) the HIA process built the capacity and ability of communities facing health inequities to engage in future HIAs and in decision-making more generally; (3) the HIA resulted in a shift in power benefiting communities facing inequities; and (4) the HIA contributed to changes that reduced health inequities and inequities in the social and environmental determinants of health. The metrics are comprised of a measurement scale, examples of high scoring activities, potential data sources, and example interview questions to gather data and guide evaluators on scoring each metric.

  3. Survey of Quantitative Research Metrics to Assess Pilot Performance in Upset Recovery

    Science.gov (United States)

    Le Vie, Lisa R.

    2016-01-01

    Accidents attributable to in-flight loss of control are the primary cause for fatal commercial jet accidents worldwide. The National Aeronautics and Space Administration (NASA) conducted a literature review to determine and identify the quantitative standards for assessing upset recovery performance. This review contains current recovery procedures for both military and commercial aviation and includes the metrics researchers use to assess aircraft recovery performance. Metrics include time to first input, recognition time and recovery time and whether that input was correct or incorrect. Other metrics included are: the state of the autopilot and autothrottle, control wheel/sidestick movement resulting in pitch and roll, and inputs to the throttle and rudder. In addition, airplane state measures, such as roll reversals, altitude loss/gain, maximum vertical speed, maximum/minimum air speed, maximum bank angle and maximum g loading are reviewed as well.

  4. Assessing Urban Rail Transit Systems Vulnerability: Metrics vs. Interdiction Models

    OpenAIRE

    Starita, Stefano; Esposito Amideo, Annunziata; Scaparra, Maria Paola

    2018-01-01

    Urban rail transit systems are highly vulnerable to a variety of disruptions, including accidental failures, natural disasters and terrorist attacks. Due to the crucial role that railway infrastructures play in economic development, productivity and social well-being of communities, evaluating their vulnerability and identifying their most critical components is of paramount importance. Two main approaches can be deployed to assess transport infrastructure vulnerabilities: vulnerability metri...

  5. Comparison of Quality Metrics for Pediatric Shunt Surgery and Proposal of the Negative Shunt Revision Rate.

    Science.gov (United States)

    Beez, Thomas; Steiger, Hans-Jakob

    2018-01-01

    Shunt surgery is common in pediatric neurosurgery and is associated with relevant complication rates. We aimed to compare previously published metrics in a single data set and propose the Negative Shunt Revision Rate (NSRR), defined as the proportion of shunt explorations revealing a properly working system, as a new quality metric. Retrospective analysis of our shunt surgery activity in 2015 was performed. Demographic, clinical, and radiologic variables were extracted from electronic medical notes. Surgical Activity Rate, Revision Quotient, 30-day shunt malfunction rate, 90-day global shunt revision rate, Preventable Shunt Revision Rate, and the novel NSRR were calculated. Of 60 shunt operations analyzed, 18 (30%) were new shunt insertions, and 42 (70%) were revisions. Median age was 18 months (range, 0.03-204 months), and main etiologies were posthemorrhagic (n = 16; 41%), congenital (n = 11; 28%), and tumor-associated (n = 8; 21%) hydrocephalus. Within 90 days after index surgery, 13 shunt failures occurred, predominantly owing to proximal failure (n = 6; 46%). Surgical Activity Rate was 0.127, Revision Quotient was 2.333, 30-day shunt malfunction rate was 0.166, 90-day global shunt revision rate was 21.7%, and Preventable Shunt Revision Rate was 38.5%. NSRR was 7.1%. Our results correlate with published values and offer measurement of quality that can be compared across studies and considered patient-oriented, easily measurable, and potentially modifiable. We propose NSRR as a new quality metric, covering an aspect of shunt surgery that was not addressed previously. Copyright © 2017 Elsevier Inc. All rights reserved.
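
    A small sketch of how two of the reported rates can be reproduced from the counts in the abstract. The denominators are assumptions inferred from the reported figures (13 failures among 60 operations yields the 21.7% global revision rate; 3 negative findings among 42 explorations would yield the 7.1% NSRR) rather than the authors' stated formulas.

        def rate(numerator, denominator):
            """Simple proportion, returned as a percentage."""
            return 100.0 * numerator / denominator

        if __name__ == "__main__":
            # 90-day global shunt revision rate: failures within 90 days / all shunt operations.
            print(f"90-day global revision rate: {rate(13, 60):.1f}%")   # 21.7%
            # Negative Shunt Revision Rate: explorations finding a properly working
            # system / all shunt explorations (denominator assumed here).
            print(f"NSRR: {rate(3, 42):.1f}%")                           # 7.1%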

  6. Association of Compliance With Process-Related Quality Metrics and Improved Survival in Oral Cavity Squamous Cell Carcinoma.

    Science.gov (United States)

    Graboyes, Evan M; Gross, Jennifer; Kallogjeri, Dorina; Piccirillo, Jay F; Al-Gilani, Maha; Stadler, Michael E; Nussenbaum, Brian

    2016-05-01

    Quality metrics for patients with head and neck cancer are available, but it is unknown whether compliance with these metrics is associated with improved patient survival. To identify whether compliance with various process-related quality metrics is associated with improved survival in patients with oral cavity squamous cell carcinoma who receive definitive surgery with or without adjuvant therapy. A retrospective cohort study was conducted at a tertiary academic medical center among 192 patients with previously untreated oral cavity squamous cell carcinoma who underwent definitive surgery with or without adjuvant therapy between January 1, 2003, and December 31, 2010. Data analysis was performed from January 26 to August 7, 2015. Surgery with or without adjuvant therapy. Compliance with a collection of process-related quality metrics possessing face validity that covered pretreatment evaluation, treatment, and posttreatment surveillance was evaluated. Association between compliance with these quality metrics and overall survival, disease-specific survival, and disease-free survival was calculated using univariable and multivariable Cox proportional hazards analysis. Among 192 patients, compliance with the individual quality metrics ranged from 19.7% to 93.6% (median, 82.8%). No pretreatment or surveillance metrics were associated with improved survival. Compliance with the following treatment-related quality metrics was associated with improved survival: elective neck dissection with lymph node yield of 18 or more, no unplanned surgery within 14 days of the index surgery, no unplanned 30-day readmissions, and referral for adjuvant radiotherapy for pathologic stage III or IV disease. Increased compliance with a "clinical care signature" composed of these 4 metrics was associated with improved overall survival, disease-specific survival, and disease-free survival on univariable analysis (log-rank test; P metrics was associated with improved overall survival (100

  7. Development and validation of trauma surgical skills metrics: Preliminary assessment of performance after training.

    Science.gov (United States)

    Shackelford, Stacy; Garofalo, Evan; Shalin, Valerie; Pugh, Kristy; Chen, Hegang; Pasley, Jason; Sarani, Babak; Henry, Sharon; Bowyer, Mark; Mackenzie, Colin F

    2015-07-01

    Maintaining trauma-specific surgical skills is an ongoing challenge for surgical training programs. An objective assessment of surgical skills is needed. We hypothesized that a validated surgical performance assessment tool could detect differences following a training intervention. We developed surgical performance assessment metrics based on discussion with expert trauma surgeons, video review of 10 experts and 10 novice surgeons performing three vascular exposure procedures and lower extremity fasciotomy on cadavers, and validated the metrics with interrater reliability testing by five reviewers blinded to level of expertise and a consensus conference. We tested these performance metrics in 12 surgical residents (Year 3-7) before and 2 weeks after vascular exposure skills training in the Advanced Surgical Skills for Exposure in Trauma (ASSET) course. Performance was assessed in three areas as follows: knowledge (anatomic, management), procedure steps, and technical skills. Time to completion of procedures was recorded, and these metrics were combined into a single performance score, the Trauma Readiness Index (TRI). Wilcoxon matched-pairs signed-ranks test compared pretraining/posttraining effects. Mean time to complete procedures decreased by 4.3 minutes (from 13.4 minutes to 9.1 minutes). The performance component most improved by the 1-day skills training was procedure steps, completion of which increased by 21%. Technical skill scores improved by 12%. Overall knowledge improved by 3%, with 18% improvement in anatomic knowledge. TRI increased significantly from 50% to 64% with ASSET training. Interrater reliability of the surgical performance assessment metrics was validated with single intraclass correlation coefficient of 0.7 to 0.98. A trauma-relevant surgical performance assessment detected improvements in specific procedure steps and anatomic knowledge taught during a 1-day course, quantified by the TRI. ASSET training reduced time to complete vascular

  8. Development of quality metrics for ambulatory pediatric cardiology: Transposition of the great arteries after arterial switch operation.

    Science.gov (United States)

    Baker-Smith, Carissa M; Carlson, Karina; Ettedgui, Jose; Tsuda, Takeshi; Jayakumar, K Anitha; Park, Matthew; Tede, Nikola; Uzark, Karen; Fleishman, Craig; Connuck, David; Likes, Maggie; Penny, Daniel J

    2018-01-01

    To develop quality metrics (QMs) for the ambulatory care of patients with transposition of the great arteries following arterial switch operation (TGA/ASO). Under the auspices of the American College of Cardiology Adult Congenital and Pediatric Cardiology (ACPC) Steering committee, the TGA/ASO team generated candidate QMs related to TGA/ASO ambulatory care. Candidate QMs were submitted to the ACPC Steering Committee and were reviewed for validity and feasibility using individual expert panel member scoring according to the RAND-UCLA methodology. QMs were then made available for review by the entire ACC ACPC during an "open comment period." Final approval of each QM was provided by a vote of the ACC ACPC Council. Patients with TGA who had undergone an ASO were included. Patients with complex transposition were excluded. Twelve candidate QMs were generated. Seven metrics passed the RAND-UCLA process. Four passed the "open comment period" and were ultimately approved by the Council. These included: (1) at least 1 echocardiogram performed during the first year of life reporting on the function, aortic dimension, degree of neoaortic valve insufficiency, the patency of the systemic and pulmonary outflows, the patency of the branch pulmonary arteries and coronary arteries, (2) neurodevelopmental (ND) assessment after ASO; (3) lipid profile by age 11 years; and (4) documentation of a transition of care plan to an adult congenital heart disease (CHD) provider by 18 years of age. Application of the RAND-UCLA methodology and linkage of this methodology to the ACPC approval process led to successful generation of 4 QMs relevant to the care of TGA/ASO pediatric patients in the ambulatory setting. These metrics have now been incorporated into the ACPC Quality Network providing guidance for the care of TGA/ASO patients across 30 CHD centers. © 2017 Wiley Periodicals, Inc.

  9. Lyapunov exponent as a metric for assessing the dynamic content and predictability of large-eddy simulations

    Science.gov (United States)

    Nastac, Gabriel; Labahn, Jeffrey W.; Magri, Luca; Ihme, Matthias

    2017-09-01

    Metrics used to assess the quality of large-eddy simulations commonly rely on a statistical assessment of the solution. While these metrics are valuable, a dynamic measure is desirable to further characterize the ability of a numerical simulation for capturing dynamic processes inherent in turbulent flows. To address this issue, a dynamic metric based on the Lyapunov exponent is proposed which assesses the growth rate of the solution separation. This metric is applied to two turbulent flow configurations: forced homogeneous isotropic turbulence and a turbulent jet diffusion flame. First, it is shown that, despite the direct numerical simulation (DNS) and large-eddy simulation (LES) being high-dimensional dynamical systems with O(10^7) degrees of freedom, the separation growth rate qualitatively behaves like a lower-dimensional dynamical system, in which the dimension of the Lyapunov system is substantially smaller than the discretized dynamical system. Second, a grid refinement analysis of each configuration demonstrates that as the LES filter width approaches the smallest scales of the system the Lyapunov exponent asymptotically approaches a plateau. Third, a small perturbation is superimposed onto the initial conditions of each configuration, and the Lyapunov exponent is used to estimate the time required for divergence, thereby providing a direct assessment of the predictability time of simulations. By comparing inert and reacting flows, it is shown that combustion increases the predictability of the turbulent simulation as a result of the dilatation and increased viscosity by heat release. The predictability time is found to scale with the integral time scale in both the reacting and inert jet flows. Fourth, an analysis of the local Lyapunov exponent is performed to demonstrate that this metric can also determine flow-dependent properties, such as regions that are sensitive to small perturbations or conditions of large turbulence within the flow field. Finally
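
    A conceptual sketch of the separation-growth idea (applied to a toy chaotic map rather than a DNS/LES flow field, and not the authors' implementation): the maximal Lyapunov exponent is estimated as the slope of ln(separation) versus time while the perturbed and reference solutions are still close.

        import numpy as np

        def lyapunov_from_separation(ref, pert, dt, sat_fraction=0.01):
            """Fit ln(||ref - pert||) against time over the pre-saturation phase."""
            sep = np.linalg.norm(ref - pert, axis=1)
            t = dt * np.arange(len(sep))
            # fit only the initial segment, before the separation saturates
            end = int(np.argmax(sep > sat_fraction * sep.max()))
            end = end if end > 1 else len(sep)
            slope, _ = np.polyfit(t[:end], np.log(sep[:end]), 1)
            return slope

        if __name__ == "__main__":
            # Toy system: logistic map x_{n+1} = 4 x_n (1 - x_n), whose exponent is ln 2.
            n, eps = 60, 1e-12
            ref, pert = np.empty((n, 1)), np.empty((n, 1))
            ref[0], pert[0] = 0.4, 0.4 + eps
            for i in range(1, n):
                ref[i] = 4 * ref[i - 1] * (1 - ref[i - 1])
                pert[i] = 4 * pert[i - 1] * (1 - pert[i - 1])
            print(f"estimated exponent {lyapunov_from_separation(ref, pert, dt=1.0):.2f} (ln 2 = 0.69)")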

  10. Metric Assessments of Books As Families of Works

    DEFF Research Database (Denmark)

    Zuccala, Alesia Ann; Breum, Mads; Bruun, Kasper

    2017-01-01

    We describe the intellectual and physical properties of books as manifestations, expressions and works and assess the current indexing and metadata structure of monographs in the Book Citation Index (BKCI). Our focus is on the interrelationship of these properties in light of the Functional Requirements for Bibliographic Records (FRBR). Data pertaining to monographs were collected from the Danish PURE repository system as well as the BKCI (2005-2015) via their International Standard Book Numbers (ISBNs). Each ISBN was then matched to the same ISBN and family-related ISBNs cataloged in two

  11. Impact of artefact removal on ChIP quality metrics in ChIP-seq and ChIP-exo data.

    Directory of Open Access Journals (Sweden)

    Thomas Samuel Carroll

    2014-04-01

    Full Text Available With the advent of ChIP-seq multiplexing technologies and the subsequent increase in ChIP-seq throughput, the development of working standards for the quality assessment of ChIP-seq studies has received significant attention. The ENCODE consortium’s large scale analysis of transcription factor binding and epigenetic marks as well as concordant work on ChIP-seq by other laboratories has established a new generation of ChIP-seq quality control measures. The use of these metrics alongside common processing steps has however not been evaluated. In this study, we investigate the effects of blacklisting and removal of duplicated reads on established metrics of ChIP-seq quality and show that the interpretation of these metrics is highly dependent on the ChIP-seq preprocessing steps applied. Further to this we perform the first investigation of the use of these metrics for ChIP-exo data and make recommendations for the adaptation of the NSC statistic to allow for the assessment of ChIP-exo efficiency.

  12. Impact of artifact removal on ChIP quality metrics in ChIP-seq and ChIP-exo data

    Science.gov (United States)

    Carroll, Thomas S.; Liang, Ziwei; Salama, Rafik; Stark, Rory; de Santiago, Ines

    2014-01-01

    With the advent of ChIP-seq multiplexing technologies and the subsequent increase in ChIP-seq throughput, the development of working standards for the quality assessment of ChIP-seq studies has received significant attention. The ENCODE consortium's large scale analysis of transcription factor binding and epigenetic marks as well as concordant work on ChIP-seq by other laboratories has established a new generation of ChIP-seq quality control measures. The use of these metrics alongside common processing steps has however not been evaluated. In this study, we investigate the effects of blacklisting and removal of duplicated reads on established metrics of ChIP-seq quality and show that the interpretation of these metrics is highly dependent on the ChIP-seq preprocessing steps applied. Further to this we perform the first investigation of the use of these metrics for ChIP-exo data and make recommendations for the adaptation of the NSC statistic to allow for the assessment of ChIP-exo efficiency. PMID:24782889

  13. Metric Assessments of Books As Families of Works

    DEFF Research Database (Denmark)

    Zuccala, Alesia Ann

    2017-01-01

    We describe the intellectual and physical properties of books as manifestations, expressions and works and assess the current indexing and metadata structure of monographs in the Book Citation Index (BKCI). Our focus is on the interrelationship of these properties in light of the Functional...... additional databases: OCLC-WorldCat and Goodreads. With the retrieval of all family-related ISBNs, we were able to determine the number of monograph expressions present in the BKCI and their collective relationship to one work. Our results show that the majority of missing expressions from the BKCI...... are emblematic (i.e., first editions of monographs) and that both the indexing and metadata structure of this commercial database could significantly improve with the introduction of distinct expression IDs (i.e., for each distinct edition) and unifying work-related IDs. This improved metadata structure would...

  14. Drinking water quality assessment.

    Science.gov (United States)

    Aryal, J; Gautam, B; Sapkota, N

    2012-09-01

    Drinking water quality is a great public health concern because it is a major risk factor for the high incidence of diarrheal diseases in Nepal. In recent years, the prevalence rate of diarrhoea has been found to be highest in Myagdi district. This study was carried out to assess the quality of drinking water from different natural sources, reservoirs and collection taps at Arthunge VDC of Myagdi district. A cross-sectional study was carried out using a random sampling method in Arthunge VDC of Myagdi district from January to June, 2010. 84 water samples representing natural sources, reservoirs and collection taps from the study area were collected. The physico-chemical and microbiological analysis was performed following standard techniques set by APHA (1998), and statistical analysis was carried out using SPSS 11.5. The results were also compared with national and WHO guidelines. Of the 84 water samples (from natural sources, reservoirs and tap water) analyzed, the drinking water quality parameters (except arsenic and total coliform) of all water samples were found to be within the WHO and national standards. 15.48% of water samples (13) showed pH values higher than the WHO permissible guideline values. Similarly, 85.71% of water samples (72) showed arsenic values higher than the WHO guideline value. Further, the statistical analysis showed no significant difference in water quality between collection tap water samples from winter (January, 2010) and summer (June, 2010). The microbiological examination of water samples revealed the presence of total coliform in 86.90% of water samples. The results obtained from the physico-chemical analysis of water samples were within national and WHO standards except for arsenic. The study also found coliform contamination to be the key problem with drinking water.

  15. Homogeneity and EPR metrics for assessment of regular grids used in CW EPR powder simulations

    Science.gov (United States)

    Crăciun, Cora

    2014-08-01

    CW EPR powder spectra may be approximated numerically using a spherical grid and a Voronoi tessellation-based cubature. For a given spin system, the quality of simulated EPR spectra depends on the grid type, size, and orientation in the molecular frame. In previous work, the grids used in CW EPR powder simulations have been compared mainly from a geometric perspective. However, some grids with similar homogeneity degree generate different quality simulated spectra. This paper evaluates the grids from an EPR perspective, by defining two metrics depending on the spin system characteristics and the grid Voronoi tessellation. The first metric determines if the grid points are EPR-centred in their Voronoi cells, based on the resonance magnetic field variations inside these cells. The second metric verifies if the adjacent Voronoi cells of the tessellation are EPR-overlapping, by computing the common range of their resonance magnetic field intervals. Besides a series of well-known regular grids, the paper investigates a modified ZCW grid and a Fibonacci spherical code, which are new in the context of EPR simulations. For the investigated grids, the EPR metrics bring more information than the homogeneity quantities and are better related to the grids’ EPR behaviour, for different spin system symmetries. The metrics’ efficiency and limits are finally verified for grids generated from the initial ones, by using the original or magnetic field-constrained variants of the Spherical Centroidal Voronoi Tessellation method.

  16. Assessment of data quality in ATLAS

    CERN Document Server

    Wilson, M G

    2008-01-01

    Assessing the quality of data recorded with the ATLAS detector is crucial for commissioning and operating the detector to achieve sound physics measurements. In particular, the fast assessment of complex quantities obtained during event reconstruction and the ability to easily track them over time are especially important given the large data throughput and the distributed nature of the analysis environment. The data are processed once on a computer farm comprising O(1,000) nodes before being distributed on the Grid, and reliable, centralized methods must be used to organize, merge, present, and archive data-quality metrics for performance experts and analysts. A review of the tools and approaches employed by the detector and physics groups in this environment and a summary of their performances during commissioning are presented.

  17. Using research metrics to evaluate the International Atomic Energy Agency guidelines on quality assurance for R&D

    Energy Technology Data Exchange (ETDEWEB)

    Bodnarczuk, M.

    1994-06-01

    The objective of the International Atomic Energy Agency (IAEA) Guidelines on Quality Assurance for R&D is to provide guidance for developing quality assurance (QA) programs for R&D work on items, services, and processes important to safety, and to support the siting, design, construction, commissioning, operation, and decommissioning of nuclear facilities. The standard approach to writing papers describing new quality guidelines documents is to present a descriptive overview of the contents of the document. I will depart from this approach. Instead, I will first discuss a conceptual framework of metrics for evaluating and improving basic and applied experimental science as well as the associated role that quality management should play in understanding and implementing these metrics. I will conclude by evaluating how well the IAEA document addresses the metrics from this conceptual framework and the broader principles of quality management.

  18. Assessing Metrics for Estimating Fire Induced Change in the Forest Understorey Structure Using Terrestrial Laser Scanning

    Directory of Open Access Journals (Sweden)

    Vaibhav Gupta

    2015-06-01

    Full Text Available Quantifying post-fire effects in a forested landscape is important for ascertaining burn severity and ecosystem recovery and for post-fire hazard assessment and mitigation planning. Reporting of such post-fire effects assumes significance in fire-prone countries such as the USA, Australia, Spain, Greece and Portugal, where prescribed burns are routinely carried out. This paper describes the use of Terrestrial Laser Scanning (TLS) to estimate and map change in the forest understorey following a prescribed burn. Eighteen descriptive metrics are derived from bi-temporal TLS and are used to analyse and visualise change in a control plot and a fire-altered plot. The metrics derived are Above Ground Height-based (AGH) percentiles and heights, point count and mean intensity. Metrics such as AGH50change, mean AGHchange and point countchange are sensitive enough to detect subtle fire-induced change (28%–52%) whilst observing little or no change in the control plot (0–4%). A qualitative examination with field measurements of the spatial distribution of burnt areas and percentage area burnt also shows similar patterns. This study is novel in that it examines the behaviour of TLS metrics for estimating and mapping fire-induced change in understorey structure in a single-scan mode with a minimal fixed reference system. Further, the TLS-derived metrics can be used to produce high-resolution maps of change in the understorey landscape.
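
    An illustrative sketch (not the authors' processing chain) of two of the change metrics named above, computed from the above-ground heights of understorey returns in matched pre- and post-burn scans of one plot; the synthetic heights and counts are placeholders.

        import numpy as np

        def agh_change_metrics(pre_heights_m, post_heights_m, percentile=50):
            """Percentage change in an AGH percentile and in return count between scans."""
            pre_p = np.percentile(pre_heights_m, percentile)
            post_p = np.percentile(post_heights_m, percentile)
            agh_change = 100.0 * (pre_p - post_p) / pre_p
            count_change = 100.0 * (len(pre_heights_m) - len(post_heights_m)) / len(pre_heights_m)
            return agh_change, count_change

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            pre = rng.uniform(0.0, 2.0, 5000)    # synthetic understorey return heights (m), pre-burn
            post = rng.uniform(0.0, 1.2, 3200)   # fewer and lower returns after the burn
            agh50, count = agh_change_metrics(pre, post)
            print(f"AGH50 change {agh50:.0f}%, point count change {count:.0f}%")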

  19. Assessing precision, bias and sigma-metrics of 53 measurands of the Alinity ci system.

    Science.gov (United States)

    Westgard, Sten; Petrides, Victoria; Schneider, Sharon; Berman, Marvin; Herzogenrath, Jörg; Orzechowski, Anthony

    2017-12-01

    Assay performance is dependent on the accuracy and precision of a given method. These attributes can be combined into an analytical Sigma-metric, providing a simple value for laboratorians to use in evaluating a test method's capability to meet its analytical quality requirements. Sigma-metrics were determined for 37 clinical chemistry assays, 13 immunoassays, and 3 ICT methods on the Alinity ci system. Analytical Performance Specifications were defined for the assays, following a rationale of using CLIA goals first, then Ricos Desirable goals when CLIA did not regulate the method, and then other sources if the Ricos Desirable goal was unrealistic. A precision study was conducted at Abbott on each assay using the Alinity ci system following the CLSI EP05-A2 protocol. Bias was estimated following the CLSI EP09-A3 protocol using samples with concentrations spanning the assay's measuring interval tested in duplicate on the Alinity ci system and ARCHITECT c8000 and i2000 SR systems, where testing was also performed at Abbott. Using the regression model, the %bias was estimated at an important medical decision point. Then the Sigma-metric was estimated for each assay and plotted on a method decision chart. The Sigma-metric was calculated using the equation: Sigma-metric = (%TEa - |%bias|) / %CV. The Sigma-metrics and Normalized Method Decision charts demonstrate that a majority of the Alinity assays perform at five Sigma or higher at or near critical medical decision levels. More than 90% of the assays performed at five or six Sigma. None performed below three Sigma. Sigma-metrics plotted on Normalized Method Decision charts provide useful evaluations of performance. The majority of Alinity ci system assays had sigma values >5 and thus laboratories can expect excellent or world class performance. Laboratorians can use these tools as aids in choosing high-quality products, further contributing to the delivery of excellent quality healthcare for patients.
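
    The quoted equation translates directly into code; the performance labels below follow the commonly used Six Sigma cut-offs rather than anything specific to this study, and the example inputs are hypothetical.

        def sigma_metric(tea_pct, bias_pct, cv_pct):
            """Sigma-metric = (%TEa - |%bias|) / %CV, as quoted above."""
            return (tea_pct - abs(bias_pct)) / cv_pct

        def performance_label(sigma):
            if sigma >= 6: return "world class"
            if sigma >= 5: return "excellent"
            if sigma >= 4: return "good"
            if sigma >= 3: return "marginal"
            return "poor"

        if __name__ == "__main__":
            # Hypothetical assay: allowable total error 10%, bias 1.5%, CV 1.4%.
            s = sigma_metric(10.0, 1.5, 1.4)
            print(f"Sigma = {s:.1f} ({performance_label(s)})")   # Sigma = 6.1 (world class)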

  20. Testing Quality and Metrics for the LHC Magnet Powering System throughout Past and Future Commissioning

    CERN Document Server

    Anderson, D; Charifoulline, Z; Dragu, M; Fuchsberger, K; Garnier, JC; Gorzawski, AA; Koza, M; Krol, K; Rowan, S; Stamos, K; Zerlauth, M

    2014-01-01

    The LHC magnet powering system is composed of thousands of individual components to assure a safe operation when operating with stored energies as high as 10GJ in the superconducting LHC magnets. Each of these components has to be thoroughly commissioned following interventions and machine shutdown periods to assure their protection function in case of powering failures. As well as having dependable tracking of test executions it is vital that the executed commissioning steps and applied analysis criteria adequately represent the operational state of each component. The Accelerator Testing (AccTesting) framework in combination with a domain specific analysis language provides the means to quantify and improve the quality of analysis for future campaigns. Dedicated tools were developed to analyse in detail the reasons for failures and success of commissioning steps in past campaigns and to compare the results with newly developed quality metrics. Observed shortcomings and discrepancies are used to propose addi...

  1. Using image quality metrics to identify adversarial imagery for deep learning networks

    Science.gov (United States)

    Harguess, Josh; Miclat, Jeremy; Raheema, Julian

    2017-05-01

    Deep learning has continued to gain momentum in applications across many critical areas of research in computer vision and machine learning. In particular, deep learning networks have had much success in image classification, especially when training data are abundantly available, as is the case with the ImageNet project. However, several researchers have exposed potential vulnerabilities of these networks to carefully crafted adversarial imagery. Additionally, researchers have shown the sensitivity of these networks to some types of noise and distortion. In this paper, we investigate the use of no-reference image quality metrics to identify adversarial imagery and images of poor quality that could potentially fool a deep learning network or dramatically reduce its accuracy. Results are shown on several adversarial image databases with comparisons to popular image classification databases.
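
    A toy sketch of the screening idea: score each incoming image with a no-reference measure and flag images whose score falls far outside the range observed on known-clean data. The Laplacian-energy score and the 3-sigma threshold are simple stand-ins chosen for illustration, not the specific metrics evaluated in the paper.

        import numpy as np

        def laplacian_energy(img):
            """Mean squared 4-neighbour Laplacian response; rises sharply with additive noise."""
            lap = (img[1:-1, :-2] + img[1:-1, 2:] + img[:-2, 1:-1] + img[2:, 1:-1]
                   - 4.0 * img[1:-1, 1:-1])
            return float(np.mean(lap ** 2))

        def is_suspicious(img, clean_mean, clean_std, k=3.0):
            """Flag images whose score deviates more than k standard deviations
            from the distribution measured on known-clean images."""
            return abs(laplacian_energy(img) - clean_mean) > k * clean_std

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            clean_imgs = [rng.normal(0.5, 0.05, (64, 64)) for _ in range(20)]
            scores = [laplacian_energy(im) for im in clean_imgs]
            mu, sd = float(np.mean(scores)), float(np.std(scores))
            perturbed = clean_imgs[0] + rng.normal(0.0, 0.2, (64, 64))   # heavy perturbation
            print(is_suspicious(clean_imgs[0], mu, sd), is_suspicious(perturbed, mu, sd))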

  2. The role of metrics and measurements in a software intensive total quality management environment

    Science.gov (United States)

    Daniels, Charles B.

    1992-01-01

    Paramax Space Systems began its mission as a member of the Rockwell Space Operations Company (RSOC) team which was the successful bidder on a massive operations consolidation contract for the Mission Operations Directorate (MOD) at JSC. The contract awarded to the team was the Space Transportation System Operations Contract (STSOC). Our initial challenge was to accept responsibility for a very large, highly complex and fragmented collection of software from eleven different contractors and transform it into a coherent, operational baseline. Concurrently, we had to integrate a diverse group of people from eleven different companies into a single, cohesive team. Paramax executives recognized the absolute necessity to develop a business culture based on the concept of employee involvement to execute and improve the complex process of our new environment. Our executives clearly understood that management needed to set the example and lead the way to quality improvement. The total quality management policy and the metrics used in this endeavor are presented.

  3. A Simple Composite Metric for the Assessment of Glycemic Status from Continuous Glucose Monitoring Data: Implications for Clinical Practice and the Artificial Pancreas

    Science.gov (United States)

    Hirsch, Irl B.; Balo, Andrew K.; Sayer, Kevin; Buckingham, Bruce A.; Peyser, Thomas A.

    2017-01-01

    Background: The potential clinical benefits of continuous glucose monitoring (CGM) have been recognized for many years, but CGM is used by a small fraction of patients with diabetes. One obstacle to greater use of the technology is the lack of simplified tools for assessing glycemic control from CGM data without complicated visual displays of data. Methods: We developed a simple new metric, the personal glycemic state (PGS), to assess glycemic control solely from continuous glucose monitoring data. PGS is a composite index that assesses four domains of glycemic control: mean glucose, glycemic variability, time in range and frequency and severity of hypoglycemia. The metric was applied to data from six clinical studies for the G4 Platinum continuous glucose monitoring system (Dexcom, San Diego, CA). The PGS was also applied to data from a study of artificial pancreas comparing results from open loop and closed loop in adolescents and in adults. Results: The new metric for glycemic control, PGS, was able to characterize the quality of glycemic control in a wide range of study subjects with various mean glucose, minimal, moderate, and excessive glycemic variability and subjects on open loop versus closed loop control. Conclusion: A new composite metric for the assessment of glycemic control based on CGM data has been defined for use in assessing glycemic control in clinical practice and research settings. The new metric may help rapidly identify problems in glycemic control and may assist with optimizing diabetes therapy during time-constrained physician office visits. PMID:28585873
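
    The composite combines four domains; how the published PGS weights and combines them is not restated here, so the sketch below only computes the raw domain inputs from a CGM trace. The 70-180 mg/dL target range and the 54 mg/dL hypoglycemia cut-off are common consensus values used here as assumptions.

        import statistics

        def glycemic_components(glucose_mgdl, low=70, high=180, hypo=54):
            """Raw inputs to a composite glycemic score from CGM readings (mg/dL)."""
            mean_g = statistics.mean(glucose_mgdl)
            cv_pct = 100.0 * statistics.stdev(glucose_mgdl) / mean_g      # glycemic variability as %CV
            time_in_range = sum(low <= g <= high for g in glucose_mgdl) / len(glucose_mgdl)
            hypo_readings = sum(g < hypo for g in glucose_mgdl)           # readings below the hypoglycemia cut-off
            return {"mean_glucose": mean_g, "cv_pct": cv_pct,
                    "time_in_range": time_in_range, "hypo_readings": hypo_readings}

        if __name__ == "__main__":
            cgm = [95, 110, 150, 190, 210, 160, 130, 88, 65, 52, 75, 120]  # 5-minute samples, mg/dL
            print({k: round(v, 2) for k, v in glycemic_components(cgm).items()})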

  4. Information System Quality Assessment Methods

    OpenAIRE

    Korn, Alexandra

    2014-01-01

    This thesis explores the challenging topic of information system quality assessment, focusing mainly on process assessment. In this work the term Information System Quality is defined, and different approaches to defining quality for different domains of information systems are outlined. The main methods of process assessment are reviewed and their relationships are described. Process assessment methods are divided into two categories: ISO standards and best practices. The main objective of this w...

  5. Brief educational interventions to improve performance on novel quality metrics in ambulatory settings in Kenya: A multi-site pre-post effectiveness trial.

    Science.gov (United States)

    Korom, Robert Ryan; Onguka, Stephanie; Halestrap, Peter; McAlhaney, Maureen; Adam, Mary

    2017-01-01

    The quality of primary care delivered in resource-limited settings is low. While some progress has been made using educational interventions, it is not yet clear how to sustainably improve care for common acute illnesses in the outpatient setting. Management of urinary tract infection is particularly important in resource-limited settings, where it is commonly diagnosed and associated with high levels of antimicrobial resistance. We describe an educational programme targeting non-physician health care providers and its effects on various clinical quality metrics for urinary tract infection. We used a series of educational interventions including 1) formal introduction of a clinical practice guideline, 2) peer-to-peer chart review, and 3) peer-reviewed literature describing local antimicrobial resistance patterns. Interventions were conducted for clinical officers (N = 24) at two outpatient centers near Nairobi, Kenya over a one-year period. The medical records of 474 patients with urinary tract infections were scored on five clinical quality metrics, with the primary outcome being the proportion of cases in which the guideline-recommended antibiotic was prescribed. The results at baseline and following each intervention were compared using chi-squared tests and unpaired two-tailed T-tests for significance. Logistic regression analysis was used to assess for possible confounders. Clinician adherence to the guideline-recommended antibiotic improved significantly during the study period, from 19% at baseline to 68% following all interventions (Χ2 = 150.7, p < 0.001). The overall quality score also improved significantly, from an average of 2.16 to 3.00 on a five-point scale (t = 6.58, p < 0.001). These findings suggest that brief educational interventions can improve the quality of care for routine acute illnesses in the outpatient setting. Measurement of quality metrics allows for further targeting of educational interventions depending on the needs of the providers and the community. Further study is needed to expand routine measurement of quality metrics and to identify

  6. Heart rate variability metrics for fine-grained stress level assessment.

    Science.gov (United States)

    Pereira, Tânia; Almeida, Pedro R; Cunha, João P S; Aguiar, Ana

    2017-09-01

    In spite of the existence of a multitude of techniques that allow the estimation of stress from physiological indexes, its fine-grained assessment is still a challenge for biomedical engineering. Short-term assessment of the stress condition overcomes the limits of characterizing stress over long blocks of time and allows evaluation of behavioural change and stress-level dynamics in real-world settings. The aim of the present study was to evaluate time-domain, frequency-domain and nonlinear heart rate variability (HRV) metrics for stress level assessment using a short time window. The electrocardiogram (ECG) signal from 14 volunteers was monitored using the Vital Jacket™ while they performed the Trier Social Stress Test (TSST), a standardized stress-inducing protocol. Window lengths from 220 s to 50 s for HRV analysis were tested in order to evaluate which metrics could be used to monitor stress levels in an almost continuous way. A subset of HRV metrics (AVNN, rMSSD, SDNN and pNN20) showed consistent differences between stress and non-stress phases and proved to be reliable parameters for the assessment of stress levels in short-term analysis. The AVNN metric, using a 50 s window length, was the most reliable metric for recognizing stress level across the four phases of the TSST; it allows a fine-grained analysis of the stress effect as an index of psychological stress and provides insight into the reaction of the autonomic nervous system to stress. Copyright © 2017 Elsevier B.V. All rights reserved.
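
    The four time-domain metrics named in the abstract have standard textbook definitions; the sketch below computes them from a list of normal-to-normal (RR) intervals in milliseconds and is not the authors' processing pipeline.

        import math

        def hrv_time_domain(nn_ms):
            """AVNN, SDNN, rMSSD and pNN20 from successive NN (RR) intervals in ms."""
            diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]
            avnn = sum(nn_ms) / len(nn_ms)                                 # mean NN interval
            sdnn = math.sqrt(sum((x - avnn) ** 2 for x in nn_ms) / (len(nn_ms) - 1))
            rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))      # RMS of successive differences
            pnn20 = 100.0 * sum(abs(d) > 20 for d in diffs) / len(diffs)   # % of successive differences > 20 ms
            return {"AVNN": avnn, "SDNN": sdnn, "rMSSD": rmssd, "pNN20": pnn20}

        if __name__ == "__main__":
            rr = [812, 790, 845, 870, 830, 795, 810, 860, 825, 805]        # ms
            print({k: round(v, 1) for k, v in hrv_time_domain(rr).items()})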

  7. Quality assessment for wireless capsule endoscopy videos compressed via HEVC: From diagnostic quality to visual perception.

    Science.gov (United States)

    Usman, Muhammad Arslan; Usman, Muhammad Rehan; Shin, Soo Young

    2017-12-01

    Maintaining the quality of medical images and videos is an essential part of the e-services provided by the healthcare sector. The convergence of modern communication systems and the healthcare industry necessitates the provision of better quality of service and experience by the service provider. Recent inclusion and standardization of High Efficiency Video Coding (HEVC) has made it possible for medical data to be compressed and transmitted over wireless networks with minimal compromise of quality. Quality evaluation and assessment of these medical videos transmitted over wireless networks is another important research area that requires further exploration and attention. In this paper, we have conducted an in-depth study of video quality assessment for compressed wireless capsule endoscopy (WCE) videos. Our study includes the performance evaluation of several state-of-the-art objective video quality metrics in terms of determining the quality of compressed WCE videos. Subjective video quality experiments were conducted with the assistance of experts and non-experts in order to predict the diagnostic and visual quality of these medical videos, respectively. The evaluation of the metrics is based on three major performance criteria: correlation between the subjective and objective scores, relative statistical performance, and computation time. Results show that the information fidelity criterion (IFC) and visual information fidelity (VIF) metrics, especially pixel-based VIF, stand out as the best performing metrics. Furthermore, our paper reports the performance of HEVC compression on medical videos and, according to the results, it performs optimally in preserving the diagnostic and visual quality of WCE videos at quantization parameter (QP) values of up to 35 and 37, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Evaluating alignment quality between iconic language and reference terminologies using similarity metrics.

    Science.gov (United States)

    Griffon, Nicolas; Kerdelhué, Gaetan; Soualmia, Lina F; Merabti, Tayeb; Grosjean, Julien; Lamy, Jean-Baptiste; Venot, Alain; Duclos, Catherine; Darmoni, Stefan J

    2014-03-11

    Visualization of Concepts in Medicine (VCM) is a compositional iconic language that aims to ease information retrieval in Electronic Health Records (EHR), clinical guidelines and other medical documents. Using the VCM language in medical applications requires alignment with medical reference terminologies. Alignments from the Medical Subject Headings (MeSH) thesaurus and the International Classification of Diseases, tenth revision (ICD10), to VCM are presented here. The aim of this study was to evaluate the quality of the alignments between VCM and other terminologies, using different measures of inter-alignment agreement, before integration in the EHR. For medical literature retrieval and EHR browsing purposes, the MeSH thesaurus and the ICD10, both organized hierarchically, were aligned to the VCM language. Some MeSH to VCM alignments were performed automatically, but others were performed manually and validated. The ICD10 to VCM alignment was performed entirely manually. Inter-alignment agreement was assessed on ICD10 codes and MeSH descriptors sharing the same Concept Unique Identifiers in the Unified Medical Language System (UMLS). Three metrics were used to compare two VCM icons: binary comparison, crude Dice Similarity Coefficient (DSCcrude), and semantic Dice Similarity Coefficient (DSCsemantic), based on Lin similarity. An analysis of discrepancies was performed. The MeSH to VCM alignment resulted in 10,783 relations: 1,830 of which were performed manually and 8,953 were automatically inherited. The ICD10 to VCM alignment led to 19,852 relations. UMLS gathered 1,887 alignments between ICD10 and MeSH, of which only 1,606 were used for this study. Inter-alignment agreement using only validated MeSH to VCM alignments was 74.2% [70.5-78.0]CI95%, DSCcrude was 0.93 [0.91-0.94]CI95%, and DSCsemantic was 0.96 [0.95-0.96]CI95%. Discrepancy analysis revealed that even if two thirds of errors came from the reviewers, UMLS was nevertheless responsible for one third. This study has shown strong overall inter
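
    The crude Dice Similarity Coefficient mentioned above is the standard set-overlap measure, DSC = 2|A∩B| / (|A| + |B|), applied here to the sets of components making up two icons. The component names below are made-up placeholders, and the Lin-similarity-weighted semantic variant is not shown.

        def dsc_crude(components_a, components_b):
            """Crude Dice similarity between two sets of icon components."""
            a, b = set(components_a), set(components_b)
            if not a and not b:
                return 1.0
            return 2 * len(a & b) / (len(a) + len(b))

        if __name__ == "__main__":
            icon_from_mesh = {"red", "heart_shape", "inflammation_pictogram"}
            icon_from_icd10 = {"red", "heart_shape", "infection_pictogram"}
            print(f"crude DSC = {dsc_crude(icon_from_mesh, icon_from_icd10):.2f}")   # 0.67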

  9. What are we assessing when we measure food security? A compendium and review of current metrics.

    Science.gov (United States)

    Jones, Andrew D; Ngure, Francis M; Pelto, Gretel; Young, Sera L

    2013-09-01

    The appropriate measurement of food security is critical for targeting food and economic aid; supporting early famine warning and global monitoring systems; evaluating nutrition, health, and development programs; and informing government policy across many sectors. This important work is complicated by the multiple approaches and tools for assessing food security. In response, we have prepared a compendium and review of food security assessment tools in which we review issues of terminology, measurement, and validation. We begin by describing the evolving definition of food security and use this discussion to frame a review of the current landscape of measurement tools available for assessing food security. We critically assess the purpose/s of these tools, the domains of food security assessed by each, the conceptualizations of food security that underpin each metric, as well as the approaches that have been used to validate these metrics. Specifically, we describe measurement tools that 1) provide national-level estimates of food security, 2) inform global monitoring and early warning systems, 3) assess household food access and acquisition, and 4) measure food consumption and utilization. After describing a number of outstanding measurement challenges that might be addressed in future research, we conclude by offering suggestions to guide the selection of appropriate food security metrics.

  10. Evaluating the impact of a quality care-metric on public health nursing practice: protocol for a mixed methods study.

    Science.gov (United States)

    Giltenane, Martina; Frazer, Kate; Sheridan, Ann

    2016-08-01

    To establish, implement and evaluate the impact of a quality care-metric developed to measure public health nursing practice. Measurement of care practices plays an integral role in quality improvement and promotes positive change in healthcare delivery. Quality care-metrics has been identified as a means of effectively measuring public health nursing practice. Public health nurses in Ireland are 'all-purpose' generalist community-based nurses caring for people across the lifespan, in defined geographical areas, employed by the Health Service Executive. In the public health nurse's child and maternal health role, the 'primary visit' (postnatal visit) has been identified as the most important contact a public health nurse has with a mother and her new baby. Mixed methods using a sequential multiphase design. This study involves three phases. The first phase will include focus group and individual interviews with key healthcare professionals and new mothers, using purposively chosen sampling. Thematic analysis of data will identify key components for the development of a quality care-metric. Phase two will be a RAND appropriateness survey with a panel of experts, to develop and validate the quality care-metric. The third phase will involve implementation and evaluation of the quality care-metric. Descriptive and inferential statistics will be completed using SPSS version 21. Funding for this research study was approved in December 2013. This study will evaluate the impact of introducing a quality care-metric into public health nursing practice. Results will illuminate the quality of public health nursing practice in relation to the primary visit. © 2016 John Wiley & Sons Ltd.

  11. Use of a structured panel process to define quality metrics for antimicrobial stewardship programs.

    Science.gov (United States)

    Morris, Andrew M; Brener, Stacey; Dresser, Linda; Daneman, Nick; Dellit, Timothy H; Avdic, Edina; Bell, Chaim M

    2012-05-01

    Antimicrobial stewardship programs are being implemented in health care to reduce inappropriate antimicrobial use, adverse events, Clostridium difficile infection, and antimicrobial resistance. There is no standardized approach to evaluate the impact of these programs. To use a structured panel process to define quality improvement metrics for evaluating antimicrobial stewardship programs in hospital settings that also have the potential to be used as part of public reporting efforts. A multiphase modified Delphi technique. Paper-based survey supplemented with a 1-day consensus meeting. A 10-member expert panel from Canada and the United States was assembled to evaluate indicators for relevance, effectiveness, and the potential to aid quality improvement efforts. There were a total of 5 final metrics selected by the panel: (1) days of therapy per 1000 patient-days; (2) number of patients with specific organisms that are drug resistant; (3) mortality related to antimicrobial-resistant organisms; (4) conservable days of therapy among patients with community-acquired pneumonia (CAP), skin and soft-tissue infections (SSTI), or sepsis and bloodstream infections (BSI); and (5) unplanned hospital readmission within 30 days after discharge from the hospital in which the most responsible diagnosis was one of CAP, SSTI, sepsis or BSI. The first and second indicators were also identified as useful for accountability purposes, such as public reporting. We have successfully identified 2 measures for public reporting purposes and 5 measures that can be used internally in healthcare settings as quality indicators. These indicators can be implemented across diverse healthcare systems to enable ongoing evaluation of antimicrobial stewardship programs and complement efforts for improved patient safety.
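
    The first selected metric, days of therapy per 1000 patient-days, has a widely used definition that is simple to compute; the counts in the example below are invented.

        def dot_per_1000_patient_days(days_of_therapy, patient_days):
            """Aggregate antimicrobial days of therapy normalized per 1000 patient-days."""
            return 1000.0 * days_of_therapy / patient_days

        if __name__ == "__main__":
            # e.g. 4250 antimicrobial days of therapy over 6800 patient-days in one quarter
            print(f"{dot_per_1000_patient_days(4250, 6800):.0f} DOT per 1000 patient-days")   # 625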

  12. The software product assurance metrics study: JPL's software systems quality and productivity

    Science.gov (United States)

    Bush, Marilyn W.

    1989-01-01

    The findings are reported of the Jet Propulsion Laboratory (JPL)/Software Product Assurance (SPA) Metrics Study, conducted as part of a larger JPL effort to improve software quality and productivity. Until recently, no comprehensive data had been assembled on how JPL manages and develops software-intensive systems. The first objective was to collect data on software development from as many projects and for as many years as possible. Results from five projects are discussed. These results reflect 15 years of JPL software development, representing over 100 data points (systems and subsystems), over a third of a billion dollars, over four million lines of code and 28,000 person months. Analysis of this data provides a benchmark for gauging the effectiveness of past, present and future software development work. In addition, the study is meant to encourage projects to record existing metrics data and to gather future data. The SPA long term goal is to integrate the collection of historical data and ongoing project data with future project estimations.

  13. Perception Is Reality: quality metrics in pancreas surgery - a Central Pancreas Consortium (CPC) analysis of 1399 patients.

    Science.gov (United States)

    Abbott, Daniel E; Martin, Grace; Kooby, David A; Merchant, Nipun B; Squires, Malcolm H; Maithel, Shishir K; Weber, Sharon M; Winslow, Emily R; Cho, Clifford S; Bentrem, David J; Kim, Hong Jin; Scoggins, Charles R; Martin, Robert C; Parikh, Alexander A; Hawkins, William G; Ahmad, Syed A

    2016-05-01

Several groups have defined pancreatic surgery quality metrics that identify centers delivering quality care. Although these metrics are perceived to be associated with good outcomes, their relationship with actual outcomes has not been established. A national cadre of pancreatic surgeons was surveyed regarding perceived quality metrics, which were evaluated against the Central Pancreas Consortium (CPC) database to determine actual performance and relationships with long-term outcomes. The most important metrics were perceived to be participation in clinical trials, appropriate clinical staging, perioperative mortality, and documentation of receipt of adjuvant therapy. Subsequent analysis of 1399 patients in the CPC dataset demonstrated that an R0 retroperitoneal and neck margin was obtained in 79% (n = 1109) and 91.4% (n = 1278) of cases, respectively. 74% of patients (n = 1041) had >10 lymph nodes harvested, and lymph node (LN) positivity was 65% (n = 903). 76% (n = 960) of eligible patients (surgery-first approach) received adjuvant therapy within 60 days of surgery. Multivariate analysis demonstrated margin status, identification of >10 lymph nodes, nodal status, tumor grade and delivery of adjuvant therapy within 60 days to be associated with improved overall survival. These analyses demonstrate that systematic monitoring of surgeons' perceived quality metrics provides critical prognostic information, which is associated with patient survival. Copyright © 2016 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.

  14. Water Quality Assessment Tool 2014

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The Water Quality Assessment Tool project was developed to assess the potential for water-borne contaminants to adversely affect biota and habitats on Service lands.

  15. No training blind image quality assessment

    Science.gov (United States)

    Chu, Ying; Mou, Xuanqin; Ji, Zhen

    2014-03-01

State of the art blind image quality assessment (IQA) methods generally extract perceptual features from training images and feed them into a support vector machine (SVM) to learn a regression model, which is then used to predict the quality scores of testing images. However, these methods require complicated training and learning, and the evaluation results are sensitive to image contents and learning strategies. In this paper, two novel blind IQA metrics that require no training or learning are proposed. The new methods extract perceptual features, i.e., the shape consistency of conditional histograms, from the joint histograms of neighboring divisive normalization transform coefficients of distorted images, and then compare the length attribute of the extracted features with that of the reference images and degraded images in the LIVE database. In the first method, a cluster center is found in the feature attribute space of the natural reference images, and the distance between the feature attribute of the distorted image and the cluster center is adopted as the quality label. The second method utilizes the feature attributes and subjective scores of all the images in the LIVE database to construct a dictionary, and the final quality score is calculated by interpolating the subjective scores of nearby words in the dictionary. Unlike traditional SVM-based blind IQA methods, the proposed metrics have explicit expressions, which reflect the relationships between the perceptual features and image quality well. Experimental results on the publicly available databases LIVE, CSIQ and TID2008 show the effectiveness of the proposed methods, and their performance is fairly acceptable.
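    A minimal sketch of the cluster-centre idea in the first method described above, assuming a one-dimensional "length attribute" per image; the attribute values, and the reduction of the features to a single number, are invented for illustration and do not reproduce the authors' divisive normalization transform features.

```python
# Hypothetical sketch of the cluster-centre method; the scalar "length attribute"
# values stand in for the conditional-histogram features, which are not reproduced here.
import numpy as np

reference_attrs = np.array([0.92, 0.88, 0.95, 0.90, 0.93])  # invented attributes of pristine images
center = reference_attrs.mean()                              # cluster centre in attribute space

def quality_label(distorted_attr: float) -> float:
    # Larger distance from the natural-image cluster centre -> stronger predicted distortion.
    return abs(distorted_attr - center)

print(quality_label(0.70))   # a heavily distorted image sits far from the centre
```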

  16. Area of Concern: A new paradigm in life cycle assessment for the development of footprint metrics

    DEFF Research Database (Denmark)

    Ridoutt, Bradley G.; Pfister, Stephan; Manzardo, Alessandro

    2016-01-01

As a class of environmental metrics, footprints have been poorly defined, have shared an unclear relationship to life cycle assessment (LCA), and the variety of approaches to quantification have sometimes resulted in confusing and contradictory messages in the marketplace. In response, a task force operating under the auspices of the UNEP/SETAC Life Cycle Initiative project on environmental life cycle impact assessment (LCIA) has been working to develop generic guidance for developers of footprint metrics. The purpose of this paper is to introduce a universal footprint definition and related terminology as well as to discuss modelling implications. The task force has worked from the perspective that footprints should be based on LCA methodology, underpinned by the same data systems and models as used in LCA. However, there are important differences in purpose and orientation relative to LCA...

  17. Metrics for assessing the performance of morphodynamic models of braided rivers at event and reach scales

    Science.gov (United States)

    Williams, Richard; Measures, Richard; Hicks, Murray; Brasington, James

    2017-04-01

Advances in geomatics technologies have transformed the monitoring of reach-scale (10⁰-10¹ km) river morphodynamics. Hyperscale Digital Elevation Models (DEMs) can now be acquired at temporal intervals that are commensurate with the frequencies of high-flow events that force morphological change. The low vertical errors associated with such DEMs enable DEMs of Difference (DoDs) to be generated to quantify patterns of erosion and deposition, and derive sediment budgets using the morphological approach. In parallel with reach-scale observational advances, high-resolution, two-dimensional, physics-based numerical morphodynamic models are now computationally feasible for unsteady, reach-scale simulations. In light of this observational and predictive progress, there is a need to identify appropriate metrics that can be extracted from DEMs and DoDs to assess model performance. Nowhere is this more pertinent than in braided river environments, where numerous mobile channels that intertwine around mid-channel bars result in complex patterns of erosion and deposition, thus making model assessment particularly challenging. This paper identifies and evaluates a range of morphological and morphological-change metrics that can be used to assess predictions of braided river morphodynamics at the timescale of single storm events. A depth-averaged, mixed-grainsize Delft3D morphodynamic model was used to simulate morphological change during four discrete high-flow events, ranging from 91 to 403 m³ s⁻¹, along a 2.5 x 0.7 km reach of the braided, gravel-bed Rees River, New Zealand. Pre- and post-event topographic surveys, using a fusion of Terrestrial Laser Scanning and optical-empirical bathymetric mapping, were used to produce 0.5 m resolution DEMs and DoDs. The pre- and post-event DEMs for a moderate (227 m³ s⁻¹) high-flow event were used to calibrate the model. DEMs and DoDs from the other three high-flow events were used for model assessment using two approaches. First
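    A minimal sketch, on invented elevations, of the DEM-of-Difference calculation that underlies the morphological approach mentioned above; the 0.5 m cell size follows the abstract, everything else is assumed.

```python
# Minimal sketch of a DEM of Difference (DoD) on invented elevations (metres);
# cell area follows a 0.5 m grid (0.25 m^2 per cell).
import numpy as np

cell_area = 0.5 * 0.5
dem_pre = np.array([[10.2, 10.4],
                    [10.1, 10.3]])
dem_post = np.array([[10.0, 10.6],
                     [10.1, 10.2]])

dod = dem_post - dem_pre                       # positive = deposition, negative = erosion
deposition = dod[dod > 0].sum() * cell_area    # m^3
erosion = -dod[dod < 0].sum() * cell_area      # m^3
print(f"deposition {deposition:.3f} m3, erosion {erosion:.3f} m3, net {deposition - erosion:.3f} m3")
```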

  18. Metrics, Bayes, and BOGSAT: Recognizing and Assessing Uncertainties in Earthquake Hazard Maps

    Science.gov (United States)

    Stein, S. A.; Brooks, E. M.; Spencer, B. D.

    2015-12-01

    Recent damaging earthquakes in areas predicted to be relatively safe illustrate the need to assess how seismic hazard maps perform. At present, there is no agreed way of assessing how well a map performed. The metric implicit in current maps, that during a time interval predicted shaking will be exceeded only at a specific fraction of sites, is useful but permits maps to be nominally successful although they significantly underpredict or overpredict shaking, or nominally unsuccessful but predict shaking well. We explore metrics that measure the effects of overprediction and underprediction. Although no single metric fully characterizes map behavior, using several metrics can provide useful insight for comparing and improving maps. A related question is whether to regard larger-than-expected shaking as a low-probability event allowed by a map, or to revise the map to show increased hazard. Whether and how much to revise a map is complicated, because a new map that better describes the past may or may not better predict the future. The issue is like deciding after a coin has come up heads a number of times whether to continue assuming that the coin is fair and the run is a low-probability event, or to change to a model in which the coin is assumed to be biased. This decision can be addressed using Bayes' Rule, so that how much to change depends on the degree of one's belief in the prior model. Uncertainties are difficult to assess for hazard maps, which require subjective assessments and choices among many poorly known or unknown parameters. However, even rough uncertainty measures for estimates/predictions from such models, sometimes termed BOGSATs (Bunch Of Guys Sitting Around Table) by risk analysts, can give users useful information to make better decisions. We explore the extent of uncertainty via sensitivity experiments on how the predicted hazard depends on model parameters.
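    A worked illustration of the coin analogy using Bayes' Rule; the prior and the assumed bias of the "biased coin" model are hypothetical numbers chosen only to show how the posterior shifts after a run of heads.

```python
# Bayes' Rule applied to the coin analogy: how much should a run of heads shift belief
# away from the "fair coin" (adequate map) model? All numbers are hypothetical.
fair_p, biased_p = 0.5, 0.9     # P(heads) under each model; the biased value is an assumption
prior_fair = 0.95               # strong prior belief in the original model
n_heads = 6                     # observed run of heads (larger-than-expected shaking)

like_fair = fair_p ** n_heads
like_biased = biased_p ** n_heads
posterior_fair = (like_fair * prior_fair) / (
    like_fair * prior_fair + like_biased * (1.0 - prior_fair)
)
print(f"P(fair | {n_heads} heads) = {posterior_fair:.2f}")  # belief drops sharply but is not ruled out
```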

  19. Workshop summary: 'Integrating air quality and climate mitigation - is there a need for new metrics to support decision making?'

    Science.gov (United States)

    von Schneidemesser, E.; Schmale, J.; Van Aardenne, J.

    2013-12-01

Air pollution and climate change are often treated at national and international level as separate problems under different regulatory or thematic frameworks and different policy departments. With air pollution and climate change being strongly linked with regard to their causes, effects and mitigation options, the integration of policies that steer air pollutant and greenhouse gas emission reductions might result in cost-efficient, more effective and thus more sustainable tackling of the two problems. To support informed decision making and to work towards an integrated air quality and climate change mitigation policy requires the identification, quantification and communication of present-day and potential future co-benefits and trade-offs. The identification of co-benefits and trade-offs requires the application of appropriate metrics that are well rooted in science, easy to understand and reflect the needs of policy, industry and the public for informed decision making. For the purpose of this workshop, metrics were loosely defined as a quantified measure of effect or impact used to inform decision-making and to evaluate mitigation measures. The workshop, held on October 9 and 10 and co-organized by the European Environment Agency and the Institute for Advanced Sustainability Studies, brought together representatives from science, policy, NGOs, and industry to discuss whether currently available metrics are 'fit for purpose' or whether there is a need to develop alternative metrics or reassess the way current metrics are used and communicated. Based on the workshop outcome the presentation will (a) summarize the informational needs and current application of metrics by the end-users, who, depending on their field and area of operation might require health, policy, and/or economically relevant parameters at different scales, (b) provide an overview of the state of the science of currently used and newly developed metrics, and the scientific validity of these

  20. Assessing spelling in kindergarten: further comparison of scoring metrics and their relation to reading skills.

    Science.gov (United States)

    Clemens, Nathan H; Oslund, Eric L; Simmons, Leslie E; Simmons, Deborah

    2014-02-01

    Early reading and spelling development share foundational skills, yet spelling assessment is underutilized in evaluating early reading. This study extended research comparing the degree to which methods for scoring spelling skills at the end of kindergarten were associated with reading skills measured at the same time as well as at the end of first grade. Five strategies for scoring spelling responses were compared: totaling the number of words spelled correctly, totaling the number of correct letter sounds, totaling the number of correct letter sequences, using a rubric for scoring invented spellings, and calculating the Spelling Sensitivity Score (Masterson & Apel, 2010b). Students (N=287) who were identified at kindergarten entry as at risk for reading difficulty and who had received supplemental reading intervention were administered a standardized spelling assessment in the spring of kindergarten, and measures of phonological awareness, decoding, word recognition, and reading fluency were administered concurrently and at the end of first grade. The five spelling scoring metrics were similar in their strong relations with factors summarizing reading subskills (phonological awareness, decoding, and word reading) on a concurrent basis. Furthermore, when predicting first-grade reading skills based on spring-of-kindergarten performance, spelling scores from all five metrics explained unique variance over the autoregressive effects of kindergarten word identification. The practical advantages of using a brief spelling assessment for early reading evaluation and the relative tradeoffs of each scoring metric are discussed. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  1. Irrigation water quality assessments

    Science.gov (United States)

    Increasing demands on fresh water supplies by municipal and industrial users means decreased fresh water availability for irrigated agriculture in semi arid and arid regions. There is potential for agricultural use of treated wastewaters and low quality waters for irrigation but this will require co...

  2. Analysis of primary fine particle national ambient air quality standard metrics.

    Science.gov (United States)

    Johnson, Philip R S; Graham, John J

    2006-02-01

    In accordance with the Clean Air Act, the U.S. Environmental Protection Agency (EPA) is currently reviewing its National Ambient Air Quality Standards for particulate matter, which are required to provide an adequate margin of safety to populations, including susceptible subgroups. Based on the latest scientific, health, and technical information about particle pollution, EPA staff recommends establishing more protective health-based fine particle standards. Since the last standards review, epidemiologic studies have continued to find associations between short-term and long-term exposure to particulate matter and cardiopulmonary morbidity and mortality at current pollution levels. This study analyzed the spatial and temporal variability of fine particulate (PM2.5) monitoring data for the Northeast and the continental United States to assess the protectiveness of various levels, forms, and combinations of 24-hr and annual health-based standards currently recommended by EPA staff and the Clean Air Scientific Advisory Committee. Recommended standards have the potential for modest or substantial increases in protection in the Northeast, ranging from an additional 13-83% of the population of the region who are living in areas not likely to meet new standards and thereby benefiting from compliance with more protective air pollution controls. Within recommended standard ranges, an optimal 24-hr (98th percentile)/annual standard suite occurs at 30/12 microg/m3, providing short- and long-term health protection for a substantial percentage of both Northeast (84%) and U.S. (78%) populations. In addition, the Northeast region will not benefit as widely as the nation as a whole if less stringent standards are selected. Should the 24-hr (98th percentile) standard be set at 35 microg/m3, Northeast and U.S. populations will receive 16-48% and 7-17% less protection than a 30 microg/m3 standard, respectively, depending on the level of the annual standard. A 30/12 microg/m3 standard

  3. A new normalizing algorithm for BAC CGH arrays with quality control metrics.

    Science.gov (United States)

    Miecznikowski, Jeffrey C; Gaile, Daniel P; Liu, Song; Shepherd, Lori; Nowak, Norma

    2011-01-01

The main focus in pin-tip (or print-tip) microarray analysis is determining which probes, genes, or oligonucleotides are differentially expressed. Specifically in array comparative genomic hybridization (aCGH) experiments, researchers search for chromosomal imbalances in the genome. To model this data, scientists apply statistical methods to the structure of the experiment and assume that the data consist of the signal plus random noise. In this paper we propose "SmoothArray", a new method to preprocess comparative genomic hybridization (CGH) bacterial artificial chromosome (BAC) arrays and we show the effects on a cancer dataset. As part of our R software package "aCGHplus," this freely available algorithm removes the variation due to the intensity effects, pin/print-tip, the spatial location on the microarray chip, and the relative location from the well plate. Removal of this variation improves the downstream analysis and subsequent inferences made on the data. Further, we present measures to evaluate the quality of the dataset according to the arrayer pins, 384-well plates, plate rows, and plate columns. We compare our method against competing methods using several metrics to measure the biological signal. With this novel normalization algorithm and quality control measures, the user can improve their inferences on datasets and pinpoint problems that may arise in their BAC aCGH technology.
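    A hedged sketch of the general idea of removing pin-specific offsets by per-print-tip centring of log-ratios; the actual SmoothArray algorithm in the aCGHplus package also models intensity, spatial and plate effects, which are not reproduced here.

```python
# Sketch of per-print-tip centring of log-ratios on invented values; SmoothArray itself
# additionally models intensity, spatial and well-plate effects.
import numpy as np

log_ratios = np.array([0.10, 0.30, -0.20, 0.40, 0.05, -0.10])
print_tip = np.array([0, 0, 0, 1, 1, 1])        # hypothetical pin assignment per probe

normalized = log_ratios.copy()
for tip in np.unique(print_tip):
    mask = print_tip == tip
    normalized[mask] -= np.median(log_ratios[mask])   # remove the pin-specific offset
print(normalized)
```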

  4. A New Normalizing Algorithm for BAC CGH Arrays with Quality Control Metrics

    Directory of Open Access Journals (Sweden)

    Jeffrey C. Miecznikowski

    2011-01-01

Full Text Available The main focus in pin-tip (or print-tip) microarray analysis is determining which probes, genes, or oligonucleotides are differentially expressed. Specifically in array comparative genomic hybridization (aCGH) experiments, researchers search for chromosomal imbalances in the genome. To model this data, scientists apply statistical methods to the structure of the experiment and assume that the data consist of the signal plus random noise. In this paper we propose “SmoothArray”, a new method to preprocess comparative genomic hybridization (CGH) bacterial artificial chromosome (BAC) arrays and we show the effects on a cancer dataset. As part of our R software package “aCGHplus,” this freely available algorithm removes the variation due to the intensity effects, pin/print-tip, the spatial location on the microarray chip, and the relative location from the well plate. Removal of this variation improves the downstream analysis and subsequent inferences made on the data. Further, we present measures to evaluate the quality of the dataset according to the arrayer pins, 384-well plates, plate rows, and plate columns. We compare our method against competing methods using several metrics to measure the biological signal. With this novel normalization algorithm and quality control measures, the user can improve their inferences on datasets and pinpoint problems that may arise in their BAC aCGH technology.

  5. On defining metrics for assessing laparoscopic surgical skills in a virtual training environment.

    Science.gov (United States)

    Payandeh, Shahram; Lomax, Alan J; Dill, John; Mackenzie, Christine L; Cao, Caroline G L

    2002-01-01

    One of the key components of any training environment for surgical education is a method that can be used for assessing surgical skills. Traditionally, defining such a method has been difficult and based mainly on observations. However, through advances in modeling techniques and computer hardware and software, such methods can now be developed using combined visual and haptic rendering of a training scene. This paper presents some ideas on how metrics may be defined and used in the assessment of surgical skills in a virtual laparoscopic training environment.

  6. Tropospheric Ozone Assessment Report: Database and Metrics Data of Global Surface Ozone Observations

    Directory of Open Access Journals (Sweden)

    Martin G. Schultz

    2017-10-01

Full Text Available In support of the first Tropospheric Ozone Assessment Report (TOAR), a relational database of global surface ozone observations has been developed and populated with hourly measurement data and enhanced metadata. A comprehensive suite of ozone data products including standard statistics, health and vegetation impact metrics, and trend information, are made available through a common data portal and a web interface. These data form the basis of the TOAR analyses focusing on human health, vegetation, and climate relevant ozone issues, which are part of this special feature. Cooperation among many data centers and individual researchers worldwide made it possible to build the world's largest collection of 'in-situ' hourly surface ozone data covering the period from 1970 to 2015. By combining the data from almost 10,000 measurement sites around the world with global metadata information, new analyses of surface ozone have become possible, such as the first globally consistent characterisations of measurement sites as either urban or rural/remote. Exploitation of these global metadata allows for new insights into the global distribution, and seasonal and long-term changes of tropospheric ozone and they enable TOAR to perform the first, globally consistent analysis of present-day ozone concentrations and recent ozone changes with relevance to health, agriculture, and climate. Considerable effort was made to harmonize and synthesize data formats and metadata information from various networks and individual data submissions. Extensive quality control was applied to identify questionable and erroneous data, including changes in apparent instrument offsets or calibrations. Such data were excluded from TOAR data products. Limitations of 'a posteriori' data quality assurance are discussed. As a result of the work presented here, global coverage of surface ozone data for scientific analysis has been significantly extended. Yet, large gaps remain in the surface

  7. Quality Research by Using Performance Evaluation Metrics for Software Systems and Components

    Directory of Open Access Journals (Sweden)

    Ion BULIGIU

    2006-01-01

Full Text Available Software performance and evaluation have four basic needs: (1) a well-defined performance testing strategy, requirements, and focuses; (2) correct and effective performance evaluation models; (3) well-defined performance metrics; and (4) cost-effective performance testing and evaluation tools and techniques. This chapter first introduces a performance test process and discusses the performance testing objectives and focus areas. Then, it summarizes the basic challenges and issues in performance testing and evaluation of component-based programs and components. Next, this chapter presents different types of performance metrics for software components and systems, including processing speed, utilization, throughput, reliability, availability, and scalability metrics. Most of the performance metrics covered here can be considered as the application of existing metrics to software components. New performance metrics are needed to support the performance evaluation of component-based programs.

  8. Task-Level vs. Segment-Level Quantitative Metrics for Surgical Skill Assessment.

    Science.gov (United States)

    Vedula, S Swaroop; Malpani, Anand; Ahmidi, Narges; Khudanpur, Sanjeev; Hager, Gregory; Chen, Chi Chiung Grace

    2016-01-01

    Task-level metrics of time and motion efficiency are valid measures of surgical technical skill. Metrics may be computed for segments (maneuvers and gestures) within a task after hierarchical task decomposition. Our objective was to compare task-level and segment (maneuver and gesture)-level metrics for surgical technical skill assessment. Our analyses include predictive modeling using data from a prospective cohort study. We used a hierarchical semantic vocabulary to segment a simple surgical task of passing a needle across an incision and tying a surgeon's knot into maneuvers and gestures. We computed time, path length, and movements for the task, maneuvers, and gestures using tool motion data. We fit logistic regression models to predict experience-based skill using the quantitative metrics. We compared the area under a receiver operating characteristic curve (AUC) for task-level, maneuver-level, and gesture-level models. Robotic surgical skills training laboratory. In total, 4 faculty surgeons with experience in robotic surgery and 14 trainee surgeons with no or minimal experience in robotic surgery. Experts performed the task in shorter time (49.74s; 95% CI = 43.27-56.21 vs. 81.97; 95% CI = 69.71-94.22), with shorter path length (1.63m; 95% CI = 1.49-1.76 vs. 2.23; 95% CI = 1.91-2.56), and with fewer movements (429.25; 95% CI = 383.80-474.70 vs. 728.69; 95% CI = 631.84-825.54) than novices. Experts differed from novices on metrics for individual maneuvers and gestures. The AUCs were 0.79; 95% CI = 0.62-0.97 for task-level models, 0.78; 95% CI = 0.6-0.96 for maneuver-level models, and 0.7; 95% CI = 0.44-0.97 for gesture-level models. There was no statistically significant difference in AUC between task-level and maneuver-level (p = 0.7) or gesture-level models (p = 0.17). Maneuver-level and gesture-level metrics are discriminative of surgical skill and can be used to provide targeted feedback to surgical trainees. Copyright © 2016 Association of Program
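    An illustrative sketch, with synthetic numbers, of the modelling approach described above: fit a logistic regression of expert versus novice status on task-level metrics and report the AUC. The values and the library choice (scikit-learn) are assumptions, not the study's code.

```python
# Synthetic illustration: logistic regression of expert (1) vs. novice (0) on
# task-level metrics (time in s, path length in m, number of movements), with AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.array([[50, 1.6, 430], [55, 1.7, 460], [48, 1.5, 410],
              [82, 2.2, 730], [90, 2.4, 800], [78, 2.1, 700]])
y = np.array([1, 1, 1, 0, 0, 0])

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"in-sample AUC = {auc:.2f}")
```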

  9. Development and clinical application of Vertebral Metrics: using a stereo vision system to assess the spine.

    Science.gov (United States)

    Gabriel, Ana Teresa; Quaresma, Cláudia; Secca, Mário Forjaz; Vieira, Pedro

    2018-01-20

Biomechanical changes in the spinal column are considered to be mainly responsible for rachialgia. Although radiological techniques use ionizing radiation, they are the most widely applied tools to assess the biomechanics of the spine. To address this problem, non-invasive techniques must be developed. Vertebral Metrics is an ionizing-radiation-free instrument designed to detect the 3D position of each vertebra in a standing position. Using a stereo vision system combined with low intensity UV light, recognition is achieved with software capable of distinguishing fluorescent marks. The fluorescent marks are the skin projection of the vertex of the spinous processes. This paper presents a major development of Vertebral Metrics and its evaluation. It performs a scan in less than 45 s with a resolution on the order of 1 mm in each spatial direction, therefore allowing an accurate analysis of the spine. The instrument was applied to patients without associated pathology. Statistically significant differences between consecutive scans were not found. A positive correlation between the 3D positions of each vertebra and the homologous position of the other vertebrae was observed. Using Vertebral Metrics, innovative results can be obtained. It can be used in areas such as orthopedics, neurosurgery, and rehabilitation.
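    For readers unfamiliar with stereo vision, the snippet below illustrates the basic depth-from-disparity relation (Z = f·B/d) that any stereo rig relies on; the focal length and baseline are assumed values, not the instrument's calibration.

```python
# Depth from disparity for a calibrated stereo pair: Z = f * B / d.
# Focal length and baseline below are assumed, not the instrument's calibration.
focal_px = 1200.0     # focal length in pixels
baseline_m = 0.10     # camera separation in metres

def depth_from_disparity(disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(60.0))   # a 60-pixel disparity corresponds to 2.0 m
```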

  10. Metrics for assessing the reliability of a telemedicine remote monitoring system.

    Science.gov (United States)

    Charness, Neil; Fox, Mark; Papadopoulos, Amy; Crump, Cindy

    2013-06-01

    The goal of this study was to assess using new metrics the reliability of a real-time health monitoring system in homes of older adults. The "MobileCare Monitor" system was installed into the homes of nine older adults >75 years of age for a 2-week period. The system consisted of a wireless wristwatch-based monitoring system containing sensors for location, temperature, and impacts and a "panic" button that was connected through a mesh network to third-party wireless devices (blood pressure cuff, pulse oximeter, weight scale, and a survey-administering device). To assess system reliability, daily phone calls instructed participants to conduct system tests and reminded them to fill out surveys and daily diaries. Phone reports and participant diary entries were checked against data received at a secure server. Reliability metrics assessed overall system reliability, data concurrence, study effectiveness, and system usability. Except for the pulse oximeter, system reliability metrics varied between 73% and 92%. Data concurrence for proximal and distal readings exceeded 88%. System usability following the pulse oximeter firmware update varied between 82% and 97%. An estimate of watch-wearing adherence within the home was quite high, about 80%, although given the inability to assess watch-wearing when a participant left the house, adherence likely exceeded the 10 h/day requested time. In total, 3,436 of 3,906 potential measurements were obtained, indicating a study effectiveness of 88%. The system was quite effective in providing accurate remote health data. The different system reliability measures identify important error sources in remote monitoring systems.

  11. Assessing the habitat suitability of agricultural landscapes for characteristic breeding bird guilds using landscape metrics.

    Science.gov (United States)

    Borges, Friederike; Glemnitz, Michael; Schultz, Alfred; Stachow, Ulrich

    2017-04-01

    Many of the processes behind the decline of farmland birds can be related to modifications in landscape structure (composition and configuration), which can partly be expressed quantitatively with measurable or computable indices, i.e. landscape metrics. This paper aims to identify statistical relationships between the occurrence of birds and the landscape structure. We present a method that combines two comprehensive procedures: the "landscape-centred approach" and "guild classification". Our study is based on more than 20,000 individual bird observations based on a 4-year bird monitoring approach in a typical agricultural area in the north-eastern German lowlands. Five characteristic bird guilds, each with three characteristic species, are defined for the typical habitat types of that area: farmland, grassland, hedgerow, forest and settlement. The suitability of each sample plot for each guild is indicated by the level of persistence (LOP) of occurrence of three respective species. Thus, the sample plots can be classified as "preferred" or "less preferred" depending on the lower and upper quartiles of the LOP values. The landscape structure is characterized by 16 different landscape metrics expressing various aspects of landscape composition and configuration. For each guild, the three landscape metrics with the strongest rank correlation with the LOP values and that are not mutually dependent were identified. For four of the bird guilds, the classification success was better than 80%, compared with only 66% for the grassland bird guild. A subset of six landscape metrics proved to be the most meaningful and sufficiently classified the sample areas with respect to bird guild suitability. In addition, derived logistic functions allowed the production of guild-specific habitat suitability maps for the whole landscape. The analytical results show that the proposed approach is appropriate to assess the habitat suitability of agricultural landscapes for characteristic

  12. Quality assessment of urban environment

    Science.gov (United States)

    Ovsiannikova, T. Y.; Nikolaenko, M. N.

    2015-01-01

This paper is dedicated to the applicability of quality management principles to construction products. It proposes expanding the boundaries of quality management in construction by transferring its principles to urban systems, which are economic systems of a higher level whose qualitative characteristics are substantially defined by the quality of construction products. Buildings and structures form the spatial-material basis of cities and the most important component of the life sphere - the urban environment. The authors justify the need for the assessment of urban environment quality as an important factor of social welfare and life quality in urban areas, and suggest a definition of the term "urban environment". The methodology of quality assessment of the urban environment is based on an integrated approach which includes the system analysis of all factors and the application of both quantitative methods of assessment (calculation of particular and integrated indicators) and qualitative methods (expert estimates and surveys). The authors propose a system of indicators characterizing the quality of the urban environment; these indicators fall into four classes, and the methodology for their definition is shown. The paper presents results of quality assessment of the urban environment for several Siberian regions and a comparative analysis of these results.

  13. Indoor Climate Quality Assessment -

    DEFF Research Database (Denmark)

    Ansaldi, Roberta; Asadi, Ehsan; Costa, José Joaquim

This Guidebook gives building professionals useful support in the practical measurements and monitoring of the indoor climate in buildings. It is evident that energy consumption in a building is directly influenced by required and maintained indoor comfort level. Wireless technologies for measurement and monitoring have allowed a significantly increased number of possible applications, especially in existing buildings. The Guidebook illustrates several cases with the instrumentation of the monitoring and assessment of indoor climate.

  14. Disparities in reportable quality metrics by insurance status in the primary spine neoplasm population.

    Science.gov (United States)

    Mehdi, Syed K; Tanenbaum, Joseph E; Alentado, Vincent J; Miller, Jacob A; Lubelski, Daniel; Benzel, Edward C; Mroz, Thomas E

    2017-02-01

    ratio [OR] 1.81 95% confidence interval [CI] 1.11-2.95) relative to privately insured patients. Among patients hospitalized for primary spinal neoplasms, primary payer status predicts the incidence of PSI, an indicator of adverse health-care quality used to determine hospital reimbursement by the CMS. As reimbursement continues to be intertwined with reportable quality metrics, identifying vulnerable populations is critical to improving patient care. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Quality Metrics of Semi Automatic DTM from Large Format Digital Camera

    Science.gov (United States)

    Narendran, J.; Srinivas, P.; Udayalakshmi, M.; Muralikrishnan, S.

    2014-11-01

The high resolution digital images from the Ultracam-D Large Format Digital Camera (LFDC) were used for near-automatic DTM generation. In the past, manual methods for DTM generation were used, which are time consuming and labour intensive. In this study, the LFDC was used in synergy with an accurate position and orientation system and processes such as image matching algorithms, distributed processing and filtering techniques for near-automatic DTM generation. Traditionally, DTM accuracy is reported using check points collected from the field, which are limited in number, time consuming and costly to obtain. This paper discusses the reliability of the near-automatic DTM generated from the Ultracam-D for an operational project covering an area of nearly 600 sq. km, using 21,000 check points captured stereoscopically by experienced operators. The reliability of the DTM for the three study areas with different morphology is presented using a large number of stereo check points and parameters related to the statistical distribution of residuals, such as skewness, kurtosis, standard deviation and linear error at 90% confidence interval. The residuals obtained for the three areas follow a normal distribution, in agreement with the majority of standards on positional accuracy. The quality metrics in terms of reliability were computed for the generated DTMs, and the tables and graphs show the potential of the Ultracam-D for semi-automatic DTM generation over different terrain types.
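    A hedged sketch of the reported reliability statistics computed from hypothetical DTM-minus-checkpoint residuals: standard deviation, skewness, kurtosis and the linear error at 90% confidence (LE90).

```python
# Reliability statistics from hypothetical DTM-minus-checkpoint residuals (metres).
import numpy as np
from scipy import stats

residuals = np.array([0.12, -0.05, 0.30, -0.22, 0.08, -0.15, 0.02, 0.18])

print("std dev :", residuals.std(ddof=1))
print("skewness:", stats.skew(residuals))
print("kurtosis:", stats.kurtosis(residuals))
print("LE90    :", np.percentile(np.abs(residuals), 90))   # 90th percentile of absolute error
```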

  16. Estimated work ability in warm outdoor environments depends on the chosen heat stress assessment metric

    Science.gov (United States)

    Bröde, Peter; Fiala, Dusan; Lemke, Bruno; Kjellstrom, Tord

    2017-04-01

    With a view to occupational effects of climate change, we performed a simulation study on the influence of different heat stress assessment metrics on estimated workability (WA) of labour in warm outdoor environments. Whole-day shifts with varying workloads were simulated using as input meteorological records for the hottest month from four cities with prevailing hot (Dallas, New Delhi) or warm-humid conditions (Managua, Osaka), respectively. In addition, we considered the effects of adaptive strategies like shielding against solar radiation and different work-rest schedules assuming an acclimated person wearing light work clothes (0.6 clo). We assessed WA according to Wet Bulb Globe Temperature (WBGT) by means of an empirical relation of worker performance from field studies (Hothaps), and as allowed work hours using safety threshold limits proposed by the corresponding standards. Using the physiological models Predicted Heat Strain (PHS) and Universal Thermal Climate Index (UTCI)-Fiala, we calculated WA as the percentage of working hours with body core temperature and cumulated sweat loss below standard limits (38 °C and 7.5% of body weight, respectively) recommended by ISO 7933 and below conservative (38 °C; 3%) and liberal (38.2 °C; 7.5%) limits in comparison. ANOVA results showed that the different metrics, workload, time of day and climate type determined the largest part of WA variance. WBGT-based metrics were highly correlated and indicated slightly more constrained WA for moderate workload, but were less restrictive with high workload and for afternoon work hours compared to PHS and UTCI-Fiala. Though PHS showed unrealistic dynamic responses to rest from work compared to UTCI-Fiala, differences in WA assessed by the physiological models largely depended on the applied limit criteria. In conclusion, our study showed that the choice of the heat stress assessment metric impacts notably on the estimated WA. Whereas PHS and UTCI-Fiala can account for
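    For reference, the outdoor WBGT combination that WBGT-based heat stress metrics typically assume is shown below; the example temperatures are invented and the formula is the standard ISO 7243 weighting, not something stated in the abstract.

```python
# Standard outdoor WBGT weighting (ISO 7243): 0.7*T_nwb + 0.2*T_globe + 0.1*T_air (degC).
def wbgt_outdoor(t_nwb: float, t_globe: float, t_air: float) -> float:
    return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_air

# Invented afternoon conditions: natural wet-bulb 26, globe 45, air 33 degC -> WBGT 30.5 degC.
print(wbgt_outdoor(26.0, 45.0, 33.0))
```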

  17. The California stream quality assessment

    Science.gov (United States)

    Van Metre, Peter C.; Egler, Amanda L.; May, Jason T.

    2017-03-06

    In 2017, the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) project is assessing stream quality in coastal California, United States. The USGS California Stream Quality Assessment (CSQA) will sample streams over most of the Central California Foothills and Coastal Mountains ecoregion (modified from Griffith and others, 2016), where rapid urban growth and intensive agriculture in the larger river valleys are raising concerns that stream health is being degraded. Findings will provide the public and policy-makers with information regarding which human and natural factors are the most critical in affecting stream quality and, thus, provide insights about possible approaches to protect the health of streams in the region.

  18. MATLAB-Based Applications for Image Processing and Image Quality Assessment – Part I: Software Description

    Directory of Open Access Journals (Sweden)

    L. Krasula

    2011-12-01

Full Text Available This paper describes several MATLAB-based applications useful for image processing and image quality assessment. The Image Processing Application helps the user to easily modify images, and the Image Quality Adjustment Application enables the user to create series of pictures with different quality. The Image Quality Assessment Application contains objective full reference quality metrics that can be used for image quality assessment. The Image Quality Evaluation Applications represent an easy way to compare subjectively the quality of distorted images with a reference image. Results of these subjective tests can be processed by using the Results Processing Application. All applications provide a Graphical User Interface (GUI) for intuitive usage.

  19. Linear and non-linear heart rate metrics for the assessment of anaesthetists' workload during general anaesthesia.

    Science.gov (United States)

    Martin, J; Schneider, F; Kowalewskij, A; Jordan, D; Hapfelmeier, A; Kochs, E F; Wagner, K J; Schulz, C M

    2016-12-01

Excessive workload may impact the anaesthetists' ability to adequately process information during clinical practice in the operation room and may result in inaccurate situational awareness and performance. This exploratory study investigated heart rate (HR), linear and non-linear heart rate variability (HRV) metrics and subjective rating scales for the assessment of workload associated with the anaesthesia stages of induction, maintenance and emergence. HR and HRV metrics were calculated based on five min segments from each of the three anaesthesia stages. The area under the receiver operating characteristics curve (AUC) of the investigated metrics was calculated to assess their ability to discriminate between the stages of anaesthesia. Additionally, a multiparametric approach based on logistic regression models was performed to further evaluate whether linear or non-linear heart rate metrics are suitable for the assessment of workload. Mean HR and several linear and non-linear HRV metrics including subjective workload ratings differed significantly between stages of anaesthesia. Permutation Entropy (PeEn, AUC=0.828) and mean HR (AUC=0.826) discriminated best between the anaesthesia stages induction and maintenance. In the multiparametric approach using logistic regression models, the model based on non-linear heart rate metrics provided a higher AUC compared with the models based on linear metrics. In this exploratory study based on short ECG segment analysis, PeEn and HR seem to be promising to separate workload levels between different stages of anaesthesia. The multiparametric analysis of the regression models favours non-linear heart rate metrics over linear metrics. © The Author 2016. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
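    A minimal sketch of Permutation Entropy (PeEn) for an RR-interval series, following the usual Bandt-Pompe ordinal-pattern construction with order 3; the interval values are invented and the normalisation choice is an assumption.

```python
# Permutation Entropy (order 3) of an invented RR-interval series, normalised to [0, 1].
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, order=3):
    x = np.asarray(x)
    patterns = [tuple(np.argsort(x[i:i + order])) for i in range(len(x) - order + 1)]
    counts = Counter(patterns)
    n = sum(counts.values())
    pe = -sum((c / n) * math.log(c / n) for c in counts.values())
    return pe / math.log(math.factorial(order))

rr_intervals = [812, 805, 830, 821, 818, 840, 825, 810, 835, 828]   # milliseconds, invented
print(permutation_entropy(rr_intervals))
```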

  20. Specification and implementation of IFC based performance metrics to support building life cycle assessment of hybrid energy systems

    Energy Technology Data Exchange (ETDEWEB)

Morrissey, Elmer; O'Donnell, James; Keane, Marcus; Bazjanac, Vladimir

    2004-03-29

    Minimizing building life cycle energy consumption is becoming of paramount importance. Performance metrics tracking offers a clear and concise manner of relating design intent in a quantitative form. A methodology is discussed for storage and utilization of these performance metrics through an Industry Foundation Classes (IFC) instantiated Building Information Model (BIM). The paper focuses on storage of three sets of performance data from three distinct sources. An example of a performance metrics programming hierarchy is displayed for a heat pump and a solar array. Utilizing the sets of performance data, two discrete performance effectiveness ratios may be computed, thus offering an accurate method of quantitatively assessing building performance.

  1. Evaluation of the performance of a micromethod for measuring urinary iodine by using six sigma quality metrics.

    Science.gov (United States)

    Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud

    2013-09-01

The urinary iodine micromethod (UIMM) is a modification of the conventional method and its performance needs evaluation. UIMM performance was evaluated using the method validation and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM obeyed various method acceptability test criteria with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed that the Sigma metrics were at 2.75, 1.80, and 3.80 for 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc) below the TEa. Against the Ensuring the Quality of UI Procedures (EQUIP) TEas, the performance of all laboratories was ≤ 2.49 Sigma metrics at all concentrations. Only one laboratory had TEcalc below the TEa.
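    The sigma-metric calculation that laboratory studies of this kind conventionally use is sketched below; the allowable error, bias and CV figures are illustrative, not the study's data.

```python
# Conventional laboratory sigma metric: sigma = (TEa - |bias|) / CV, all in percent.
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    return (tea_pct - abs(bias_pct)) / cv_pct

# Illustrative figures: allowable error 25%, bias 4%, imprecision (CV) 7% -> sigma = 3.0.
print(sigma_metric(25.0, 4.0, 7.0))
```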

  2. Comparing apples and oranges: assessment of the relative video quality in the presence of different types of distortions

    DEFF Research Database (Denmark)

    Reiter, Ulrich; Korhonen, Jari; You, Junyong

    2011-01-01

Video quality assessment is essential for the performance analysis of visual communication applications. Objective metrics can be used for estimating the relative quality differences, but they typically give reliable results only if the compared videos contain similar types of quality distortions. In this study, we have evaluated how well different objective quality metrics estimate the relative subjective quality levels for content with different types of quality distortions. Our conclusion is that none of the studied objective metrics works reliably for assessing the co-impact of compression artifacts and transmission errors on the subjective quality. Nevertheless, we have observed that the objective metrics' tendency to either over- or underestimate the perceived impact of transmission errors has a high correlation with the spatial and temporal activity levels of the content. Therefore, our results can be useful for improving

  3. C-17A Sustainment Performance Metrics Assessment: Repair Source Impact on Sustainment for Future Business Case Analysis Development

    Science.gov (United States)

    2013-06-01

Maintenance Depots. June 10, 2010. Report No. D-2010-067. Kaplan, Robert S., and Norton, David P., Putting the Balanced Scorecard to Work. Harvard... metrics. Most applicably, a recognized "best practice", as advocated by Kaplan (1993), used a balanced scorecard to assess cross-functional areas... strategic model to measure the balanced scorecard as cited in Graham (1996). Both Kaplan and Brown viewed performance metrics as the key success

  4. Improving assessment accuracy for lake biological condition by classifying lakes with diatom typology, varying metrics and modeling multimetric indices.

    Science.gov (United States)

    Liu, Bo; Stevenson, R Jan

    2017-12-31

    Site grouping by regions or typologies, site-specific modeling and varying metrics among site groups are four approaches that account for natural variation, which can be a major source of error in ecological assessments. Using a data set from the 2007 National Lakes Assessment project of the USEPA, we compared performances of multimetric indices (MMI) of biological condition that were developed: (1) with different lake grouping methods, ecoregions or diatom typologies; (2) by varying or not varying metrics among site groups; and (3) with different statistical techniques for modeling diatom metric values expected for minimally disturbed condition for each lake. Hierarchical modeling of MMIs, i.e. grouping sites by ecoregions or typologies and then modeling natural variability in metrics among lakes within groups, substantially improved MMI performance compared to using either ecoregions or site-specific modeling alone. Compared with MMIs based on ecoregion site groups, MMI precision and sensitivity to human disturbance were better when sites were grouped by diatom typologies and assessing performance nationwide. However, when MMI performance was evaluated at site group levels, as some government agencies often do, there was little difference in MMI performance between the two site grouping methods. Low numbers of reference and highly impacted sites in some typology groups likely limited MMI performance at the group level of analysis. Varying metrics among site groups did not improve MMI performance. Random forest models for site-specific expected metric values performed better than classification and regression tree and multiple linear regression, except when numbers of reference sites were small in site groups. Then classification and regression tree models were most precise. Based on our results, we recommend hierarchical modeling in future large scale lake assessments where lakes are grouped by ecoregions or diatom typologies and site-specific metric models are
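    A sketch, on synthetic data, of the site-specific modelling step described above: predict a metric's reference expectation from natural gradients with a random forest and score the observed-minus-expected deviation. Variable names and the scikit-learn implementation are assumptions for illustration.

```python
# Synthetic example of site-specific modelling of a reference expectation with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
env = rng.uniform(size=(40, 3))                          # e.g. depth, latitude, alkalinity (scaled)
metric_ref = 2.0 * env[:, 0] + rng.normal(0.0, 0.1, 40)  # metric values at reference lakes

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(env, metric_ref)
observed = 0.9                                           # value measured at an assessed lake
expected = model.predict(env[:1])[0]                     # expectation for that lake's environment
print("deviation from expectation:", observed - expected)
```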

  5. Impact of alternative metrics on estimates of extent of occurrence for extinction risk assessment.

    Science.gov (United States)

    Joppa, Lucas N; Butchart, Stuart H M; Hoffmann, Michael; Bachman, Steve P; Akçakaya, H Resit; Moat, Justin F; Böhm, Monika; Holland, Robert A; Newton, Adrian; Polidoro, Beth; Hughes, Adrian

    2016-04-01

    In International Union for Conservation of Nature (IUCN) Red List assessments, extent of occurrence (EOO) is a key measure of extinction risk. However, the way assessors estimate EOO from maps of species' distributions is inconsistent among assessments of different species and among major taxonomic groups. Assessors often estimate EOO from the area of mapped distribution, but these maps often exclude areas that are not habitat in idiosyncratic ways and are not created at the same spatial resolutions. We assessed the impact on extinction risk categories of applying different methods (minimum convex polygon, alpha hull) for estimating EOO for 21,763 species of mammals, birds, and amphibians. Overall, the percentage of threatened species requiring down listing to a lower category of threat (taking into account other Red List criteria under which they qualified) spanned 11-13% for all species combined (14-15% for mammals, 7-8% for birds, and 12-15% for amphibians). These down listings resulted from larger estimates of EOO and depended on the EOO calculation method. Using birds as an example, we found that 14% of threatened and near threatened species could require down listing based on the minimum convex polygon (MCP) approach, an approach that is now recommended by IUCN. Other metrics (such as alpha hull) had marginally smaller impacts. Our results suggest that uniformly applying the MCP approach may lead to a one-time down listing of hundreds of species but ultimately ensure consistency across assessments and realign the calculation of EOO with the theoretical basis on which the metric was founded. © 2015 Society for Conservation Biology.
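    A hedged sketch of the minimum convex polygon (MCP) approach to EOO, taking the area of the convex hull of occurrence points; the coordinates are toy planar values, whereas real assessments project occurrence records into an equal-area coordinate system first.

```python
# EOO as the area of the minimum convex polygon (convex hull) of occurrence points.
import numpy as np
from scipy.spatial import ConvexHull

points_km = np.array([[0, 0], [10, 2], [4, 9], [12, 11], [2, 6], [8, 5]])   # toy planar coords
hull = ConvexHull(points_km)
print(f"EOO (MCP) = {hull.volume:.1f} km^2")   # for 2-D input, .volume is the enclosed area
```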

  6. Compensating for Type-I Errors in Video Quality Assessment

    DEFF Research Database (Denmark)

    Brunnström, Kjell; Tavakoli, Samira; Søgaard, Jacob

    2015-01-01

This paper analyzes the impact of compensating for Type-I errors in video quality assessment. A Type-I error is to incorrectly conclude that there is an effect. The risk increases with the number of comparisons that are performed in statistical tests. Type-I errors are an issue often neglected in Quality of Experience and video quality assessment analysis. Examples are given for the analysis of subjective experiments and the evaluation of objective metrics by correlation.

  7. Determine metrics and set targets for soil quality on agriculture residue and energy crop pathways

    Energy Technology Data Exchange (ETDEWEB)

    Ian Bonner; David Muth

    2013-09-01

There are three objectives for this project: 1) support OBP in meeting MYPP stated performance goals for the Sustainability Platform, 2) develop integrated feedstock production system designs that increase total productivity of the land, decrease delivered feedstock cost to the conversion facilities, and increase environmental performance of the production system, and 3) deliver to the bioenergy community robust datasets and flexible analysis tools for establishing sustainable and viable use of agricultural residues and dedicated energy crops. The key project outcome to date has been the development and deployment of a sustainable agricultural residue removal decision support framework. The modeling framework has been used to produce a revised national assessment of sustainable residue removal potential. The national assessment datasets are being used to update national resource assessment supply curves using POLYSIS. The residue removal modeling framework has also been enhanced to support high fidelity sub-field scale sustainable removal analyses. The framework has been deployed through a web application and a mobile application. The mobile application is being used extensively in the field with industry, research, and USDA NRCS partners to support and validate sustainable residue removal decisions. The results detailed in this report have set targets for increasing soil sustainability by focusing on primary soil quality indicators (total organic carbon and erosion) in two agricultural residue management pathways and a dedicated energy crop pathway. The two residue pathway targets were set to (1) increase residue removal by 50% while maintaining soil quality, and (2) increase soil quality by 5% as measured by Soil Management Assessment Framework indicators. The energy crop pathway was set to increase soil quality by 10% using these same indicators. To demonstrate the feasibility and impact of each of these targets, seven case studies spanning the US are presented

  8. The health-related quality of life journey of gynecologic oncology surgical patients: Implications for the incorporation of patient-reported outcomes into surgical quality metrics.

    Science.gov (United States)

    Doll, Kemi M; Barber, Emma L; Bensen, Jeannette T; Snavely, Anna C; Gehrig, Paola A

    2016-05-01

    To report the changes in patient-reported quality of life for women undergoing gynecologic oncology surgeries. In a prospective cohort study from 10/2013-10/2014, women were enrolled pre-operatively and completed comprehensive interviews at baseline, 1, 3, and 6months post-operatively. Measures included the disease-specific Functional Assessment of Cancer Therapy-General (FACT-GP), general Patient Reported Outcome Measure Information System (PROMIS) global health and validated measures of anxiety and depression. Bivariate statistics were used to analyze demographic groups and changes in mean scores over time. Of 231 patients completing baseline interviews, 185 (80%) completed 1-month, 170 (74%) 3-month, and 174 (75%) 6-month interviews. Minimally invasive (n=115, 63%) and laparotomy (n=60, 32%) procedures were performed. Functional wellbeing (20 → 17.6, pquality, patients with increased postoperative healthcare resource use were noted to have higher baseline levels of anxiety. For women undergoing gynecologic oncology procedures, temporary declines in functional wellbeing are balanced by improvements in emotional wellbeing and decreased anxiety symptoms after surgery. Not all commonly used QOL surveys are sensitive to changes during the perioperative period and may not be suitable for use in surgical quality metrics. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Assessing visual control during simulated and live operations: gathering evidence for the content validity of simulation using eye movement metrics.

    Science.gov (United States)

    Vine, Samuel J; McGrath, John S; Bright, Elizabeth; Dutton, Thomas; Clark, James; Wilson, Mark R

    2014-06-01

    Although virtual reality (VR) simulators serve an important role in the training and assessment of surgeons, they need to be evaluated for evidence of validity. Eye-tracking technology and measures of visual control have been used as an adjunct to the performance parameters produced by VR simulators to help in objectively establishing the construct validity (experts vs. novices) of VR simulators. However, determining the extent to which VR simulators represent the real procedure and environment (content validity) has largely been a subjective process undertaken by experienced surgeons. This study aimed to examine the content validity of a VR transurethral resection of the prostate (TURP) simulator by comparing visual control metrics taken during simulated and real TURP procedures. Eye-tracking data were collected from seven surgeons performing 14 simulated TURP operations and three surgeons performing 15 real TURP operations on live patients. The data were analyzed offline, and visual control metrics (number and duration of fixations, percentage of time the surgeons fixated on the screen) were calculated. The surgeons displayed more fixations of a shorter duration and spent less time fixating on the video monitor during the real TURP than during the simulated TURP. This could have been due to (1) the increased complexity of the operating room (OR) environment, (2) the decreased quality of the image of the urethra and associated anatomy (compared with the VR simulator), or (3) the impairment of visual attentional control due to the increased levels of stress likely experienced in the OR. The findings suggest that the complexity of the environment surrounding VR simulators needs to be considered in the design of effective simulated training curricula. The study also provides support for the use of eye-tracking technology to assess the content validity of simulation and to examine psychomotor processes during live operations.

  10. Timeliness “at a glance”: assessing the turnaround time through the six sigma metrics.

    Science.gov (United States)

    Ialongo, Cristiano; Bernardini, Sergio

    2016-01-01

    Almost thirty years of systematic analysis have proven turnaround time to be a fundamental dimension for the clinical laboratory. Several indicators are available to date to assess and report quality with respect to timeliness, but they sometimes lack communicative immediacy and accuracy. Six sigma is a paradigm developed within the industrial domain for assessing quality and addressing goals and issues. The sigma level computed through the Z-score method is a simple and straightforward tool which reports quality on a universal, dimensionless scale and can handle non-normal data. Herein we report our preliminary experience in using the sigma level to assess the change in urgent (STAT) test turnaround time due to the implementation of total automation. We found that the Z-score method is a valuable and easy-to-use method for assessing and communicating the quality level of laboratory timeliness, providing a good correspondence with the actual change in efficiency that was retrospectively observed.
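
A minimal sketch of the Z-score route described above, under stated assumptions: a "defect" is a STAT test exceeding a turnaround-time goal, and the conventional 1.5-sigma long-term shift is applied; the goal, the samples, and the shift are illustrative choices, not values from the study.

```python
# Sketch: sigma level of STAT turnaround time via the defect-rate (Z-score) route.
# Assumptions (not from the paper): a defect is any test exceeding the TAT goal,
# and the conventional 1.5-sigma long-term shift is added.
import numpy as np
from scipy.stats import norm

def sigma_level(tat_minutes, tat_goal=60.0, long_term_shift=1.5):
    """Return the sigma level implied by the fraction of TAT defects."""
    tat = np.asarray(tat_minutes, dtype=float)
    defect_rate = np.mean(tat > tat_goal)                # fraction of tests over the goal
    defect_rate = np.clip(defect_rate, 1e-9, 1 - 1e-9)   # keep the quantile finite
    return norm.ppf(1.0 - defect_rate) + long_term_shift

# Illustrative pre- vs post-automation TAT samples (minutes)
pre = np.random.default_rng(0).normal(55, 15, 500)
post = np.random.default_rng(1).normal(45, 10, 500)
print(f"sigma before: {sigma_level(pre):.2f}, after: {sigma_level(post):.2f}")
```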

  11. Organ quality metrics are a poor predictor of costs and resource utilization in deceased donor kidney transplantation.

    Science.gov (United States)

    Stahl, Christopher C; Wima, Koffi; Hanseman, Dennis J; Hoehn, Richard S; Ertel, Audrey; Midura, Emily F; Hohmann, Samuel F; Paquette, Ian M; Shah, Shimul A; Abbott, Daniel E

    2015-12-01

    The desire to provide cost-effective care has led to an investigation of the costs of therapy for end-stage renal disease. Organ quality metrics are one way to attempt to stratify kidney transplants, although the ability of these metrics to predict costs and resource use is undetermined. The Scientific Registry of Transplant Recipients database was linked to the University HealthSystem Consortium Database to identify adult deceased donor kidney transplant recipients from 2009 to 2012. Patients were divided into cohorts by kidney criteria (standard vs expanded) or kidney donor profile index (KDPI) score (Cost was defined as reimbursement based on Medicare cost/charge ratios and included the costs of readmission when applicable. More than 19,500 patients populated the final dataset. Lower-quality kidneys (expanded criteria donor or KDPI 85+) were more likely to be transplanted in older (both P costs compared with standard criteria donor transplants (risk ratio [RR] 0.97, 95% confidence interval [CI] 0.93-1.00, P = .07). KDPI 85+ was associated with slightly lower costs than KDPI quality metrics are less influential predictors of short-term costs than recipient factors. Future studies should focus on recipient characteristics as a way to discern high versus low cost transplantation procedures. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Large radius of curvature measurement based on the evaluation of interferogram-quality metric in non-null interferometry

    Science.gov (United States)

    Yang, Zhongming; Dou, Jiantai; Du, Jinyu; Gao, Zhishan

    2018-03-01

    Non-null interferometry can be used to measure the radius of curvature (ROC); we previously presented a virtual quadratic Newton rings phase-shifting moiré-fringes measurement method for large ROC measurement (Yang et al., 2016). In this paper, we propose a large ROC measurement method based on the evaluation of an interferogram-quality metric in a non-null interferometer. With the multi-configuration model of the non-null interferometric system in ZEMAX, the retrace errors and the phase introduced by the test surface are reconstructed. The interferogram-quality metric is obtained from the normalized phase-shifted testing Newton rings with the spherical surface model in the non-null interferometric system. The radius of curvature of the test spherical surface is obtained when the minimum of the interferogram-quality metric is found. Simulations and experimental results verified the feasibility of the proposed method. For a spherical mirror with a ROC of 41,400 mm, the measurement accuracy is better than 0.13%.

  13. Applying Undertaker to quality assessment

    DEFF Research Database (Denmark)

    Archie, John G.; Paluszewski, Martin; Karplus, Kevin

    2009-01-01

    Our group tested three quality assessment functions in CASP8: a function which used only distance constraints derived from alignments (SAM-T08-MQAO), a function which added other single-model terms to the distance constraints (SAM-T08-MQAU), and a function which used both single-model and consens...

  14. Assessing Question Quality Using NLP

    Science.gov (United States)

    Kopp, Kristopher J.; Johnson, Amy M.; Crossley, Scott A.; McNamara, Danielle S.

    2017-01-01

    An NLP algorithm was developed to assess question quality to inform feedback on questions generated by students within iSTART (an intelligent tutoring system that teaches reading strategies). A corpus of 4575 questions was coded using a four-level taxonomy. NLP indices were calculated for each question and machine learning was used to predict…

  15. QUAST: quality assessment tool for genome assemblies.

    Science.gov (United States)

    Gurevich, Alexey; Saveliev, Vladislav; Vyahhi, Nikolay; Tesler, Glenn

    2013-04-15

    Limitations of genome sequencing techniques have led to dozens of assembly algorithms, none of which is perfect. A number of methods for comparing assemblers have been developed, but none is yet a recognized benchmark. Further, most existing methods for comparing assemblies are only applicable to new assemblies of finished genomes; the problem of evaluating assemblies of previously unsequenced species has not been adequately considered. Here, we present QUAST, a quality assessment tool for evaluating and comparing genome assemblies. This tool improves on leading assembly comparison software with new ideas and quality metrics. QUAST can evaluate assemblies both with and without a reference genome. QUAST produces many reports, summary tables and plots to help scientists in their research and in their publications. In this study, we used QUAST to compare several genome assemblers on three datasets. QUAST tables and plots for all of them are available in the Supplementary Material, and interactive versions of these reports are on the QUAST website. http://bioinf.spbau.ru/quast. Supplementary data are available at Bioinformatics online.

  16. Software Architecture Coupling Metric for Assessing Operational Responsiveness of Trading Systems

    Directory of Open Access Journals (Sweden)

    Claudiu VINTE

    2012-01-01

    Full Text Available The empirical observation that motivates our research relies on the difficulty of assessing the performance of a trading architecture beyond a few synthetic indicators like response time, system latency, availability or volume capacity. Trading systems involve complex software architectures of distributed resources. However, in the context of a large brokerage firm, which offers global coverage from both market and client perspectives, the term distributed gains a critical significance indeed. Offering a low-latency ordering system by today's standards is relatively easy to achieve, but integrating it in a flexible manner within the broader information system architecture of a broker/dealer requires operational aspects to be factored in. We propose a metric for measuring the coupling level within software architecture, and employ it to identify architectural designs that can offer a higher level of operational responsiveness, which ultimately would raise the overall real-world performance of a trading system.

  17. A City and National Metric measuring Isolation from the Global Market for Food Security Assessment

    Science.gov (United States)

    Brown, Molly E.; Silver, Kirk Coleman; Rajagopalan, Krishnan

    2013-01-01

    The World Bank has invested in infrastructure in developing countries for decades. This investment aims to reduce the isolation of markets, reducing both seasonality and variability in food availability and food prices. Here we combine city market price data, global distance to port, and country infrastructure data to create a new Isolation Index for countries and cities around the world. Our index quantifies the isolation of a city from the global market. We demonstrate that an index built at the country level can be applied at a sub-national level to quantify city isolation. In doing so, we offer policy makers an alternative metric to assess food insecurity. We compare our isolation index with other indices and economic data found in the literature. We show that our index measures economic isolation regardless of economic stability using correlation and analysis

  18. Subjective and Objective Quality Assessment of Single-Channel Speech Separation Algorithms

    DEFF Research Database (Denmark)

    Mowlaee, Pejman; Saeidi, Rahim; Christensen, Mads Græsbøll

    2012-01-01

    Previous studies on performance evaluation of single-channel speech separation (SCSS) algorithms mostly focused on automatic speech recognition (ASR) accuracy as their performance measure. Assessing the separated signals by metrics other than this has the benefit that the results are expected to carry over to other applications beyond ASR. In this paper, in addition to conventional speech quality metrics (PESQ and SNRloss), we also evaluate the separation systems' output using different source separation metrics: blind source separation evaluation (BSS EVAL) and perceptual evaluation methods for audio source separation (PEASS). The results show that the PESQ and PEASS quality metrics predict well the subjective quality of the separated signals obtained by the separation systems. From the results it is also observed that the short-time objective intelligibility (STOI) measure predicts the speech intelligibility results.

  19. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery.

    Science.gov (United States)

    Shiraishi, Satomi; Tan, Jun; Olsen, Lindsey A; Moore, Kevin L

    2015-02-01

    The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose-volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V10Gy (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution's VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QMclin - QMpred, and a coefficient of determination, R(2). For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. The most accurate predictions are obtained when plans are stratified based on proximity to OARs and their PTV
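
As a small illustration of the accuracy bookkeeping described above, the sketch below computes δQM = QMclin − QMpred (mean and standard deviation) and the coefficient of determination R² for a set of plans; the QM values are placeholders, not data from the study.

```python
# Sketch: model predictive accuracy as described in the abstract,
# deltaQM = QM_clin - QM_pred (mean, standard deviation) and R^2.
# The QM lists below are placeholders, not values from the study.
import numpy as np

def qm_accuracy(qm_clin, qm_pred):
    qm_clin = np.asarray(qm_clin, dtype=float)
    qm_pred = np.asarray(qm_pred, dtype=float)
    delta = qm_clin - qm_pred
    ss_res = np.sum((qm_clin - qm_pred) ** 2)
    ss_tot = np.sum((qm_clin - qm_clin.mean()) ** 2)
    return delta.mean(), delta.std(ddof=1), 1.0 - ss_res / ss_tot

mean_d, sd_d, r2 = qm_accuracy([4.1, 3.2, 5.0, 2.8], [3.9, 3.5, 4.6, 3.0])
print(f"mean deltaQM = {mean_d:.2f}, sd = {sd_d:.2f}, R^2 = {r2:.2f}")
```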

  20. An evaluation of metrics for assessing maternal exposure to agricultural pesticides.

    Science.gov (United States)

    Warren, Joshua L; Luben, Thomas J; Sanders, Alison P; Brownstein, Naomi C; Herring, Amy H; Meyer, Robert E

    2014-01-01

    We evaluate the use of three different exposure metrics to estimate maternal agricultural pesticide exposure during pregnancy. Using a geographic information system-based method of pesticide exposure estimation, we combine data on crop density and specific pesticide application amounts/dates to create the three exposure metrics. For illustration purposes, we create each metric for a North Carolina cohort of pregnant women, 2003-2005, and analyze the risk of congenital anomaly development with a focus on metric comparisons. Based on the results, and the need to balance data collection efforts/computational efficiency with accuracy, the metric which estimates total chemical exposure using application dates based on crop-specific earliest planting and latest harvesting information is preferred. Benefits and drawbacks of each metric are discussed and recommendations for extending the analysis to other states are provided.

  1. Leveraging multi-channel x-ray detector technology to improve quality metrics for industrial and security applications

    Science.gov (United States)

    Jimenez, Edward S.; Thompson, Kyle R.; Stohn, Adriana; Goodner, Ryan N.

    2017-09-01

    Sandia National Laboratories has recently developed the capability to acquire multi-channel radiographs for multiple research and development applications in industry and security. This capability allows x-ray radiographs or sinogram data to be acquired at up to 300 keV with up to 128 channels per pixel. This work will investigate whether multiple quality metrics for computed tomography can actually benefit from binned projection data compared to traditionally acquired grayscale sinogram data. Features and metrics to be evaluated include the ability to distinguish between two different materials with similar absorption properties, artifact reduction, and signal-to-noise for both raw data and reconstructed volumetric data. The impact of this technology on non-destructive evaluation, national security, and industry is wide-ranging and has the potential to improve upon many inspection methods such as dual-energy methods, material identification, object segmentation, and computer vision on radiographs.
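
As a rough illustration of the signal-to-noise comparison mentioned above, the sketch below computes a simple region-of-interest SNR (mean over standard deviation) for reconstructed slices derived from grayscale versus channel-binned data; the arrays and noise levels are synthetic assumptions, not measurements from this work.

```python
# Sketch: ROI-based SNR for comparing reconstructions from grayscale vs. binned
# projection data. The slices below are synthetic stand-ins, not CT data.
import numpy as np

def roi_snr(slice_2d, roi_mask):
    """SNR of a region of interest: mean signal divided by standard deviation."""
    values = slice_2d[roi_mask]
    return values.mean() / values.std()

rng = np.random.default_rng(3)
grayscale_slice = rng.normal(100, 12, (256, 256))   # higher noise level (assumed)
binned_slice = rng.normal(100, 7, (256, 256))       # lower noise after binning (assumed)
roi = np.zeros((256, 256), dtype=bool)
roi[100:140, 100:140] = True                        # homogeneous region of interest
print(f"SNR grayscale: {roi_snr(grayscale_slice, roi):.1f}, "
      f"binned: {roi_snr(binned_slice, roi):.1f}")
```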

  2. Class Cohesion Metrics for Software Engineering: A Critical Review

    Directory of Open Access Journals (Sweden)

    Habib Izadkhah

    2017-02-01

    Full Text Available Class cohesion, or the degree of relatedness of class members, is considered one of the crucial quality criteria. A class with high cohesion improves understandability, maintainability and reusability. Class cohesion metrics can be measured quantitatively and can therefore be used as a basis for assessing the quality of a design. The main objective of this paper is to identify important research directions in the area of class cohesion metrics that require further attention in order to develop more effective and efficient class cohesion metrics for software engineering. In this paper, we discuss the class cohesion assessment metrics (thirty-two metrics) that have received the most attention in the research community and compare them from different aspects. We also present desirable properties of cohesion metrics to validate class cohesion metrics.
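
For readers unfamiliar with the kind of metric the review surveys, the sketch below computes one classic representative, an LCOM1-style lack-of-cohesion count over method/attribute usage (pairs of methods sharing no attributes minus pairs sharing at least one); the example class is invented for illustration and is not taken from the paper.

```python
# Sketch: an LCOM1-style lack-of-cohesion count (Chidamber & Kemerer flavour):
# P = method pairs sharing no instance attributes, Q = pairs sharing at least one,
# LCOM = max(P - Q, 0). The method/attribute map below is illustrative.
from itertools import combinations

def lcom1(method_attrs):
    """method_attrs: dict mapping method name -> set of attributes it uses."""
    p = q = 0
    for attrs_a, attrs_b in combinations(method_attrs.values(), 2):
        if attrs_a & attrs_b:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

example = {
    "deposit":  {"balance"},
    "withdraw": {"balance"},
    "audit":    {"log"},   # shares nothing with the others, lowering cohesion
}
print("LCOM1 =", lcom1(example))   # 2 non-sharing pairs vs 1 sharing pair -> 1
```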

  3. Alternative Metrics ("Altmetrics") for Assessing Article Impact in Popular General Radiology Journals.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Ayoola, Abimbola; Singh, Kush; Duszak, Richard

    2017-07-01

    Emerging alternative metrics leverage social media and other online platforms to provide immediate measures of biomedical articles' reach among diverse public audiences. We aimed to compare traditional citation and alternative impact metrics for articles in popular general radiology journals. All 892 original investigations published in 2013 issues of Academic Radiology, American Journal of Roentgenology, Journal of the American College of Radiology, and Radiology were included. Each article's content was classified as imaging vs nonimaging. Traditional journal citations to articles were obtained from Web of Science. Each article's Altmetric Attention Score (Altmetric), representing weighted mentions across a variety of online platforms, was obtained from Altmetric.com. Statistical assessment included the McNemar test, the Mann-Whitney test, and the Pearson correlation. Mean and median traditional citation counts were 10.7 ± 15.4 and 5 vs 3.3 ± 13.3 and 0 for Altmetric. Among all articles, 96.4% had ≥1 traditional citation vs 41.8% for Altmetric (P < 0.001). Online platforms for which at least 5% of the articles were represented included Mendeley (42.8%), Twitter (34.2%), Facebook (10.7%), and news outlets (8.4%). Citations and Altmetric were weakly correlated (r = 0.20), with only a 25.0% overlap in terms of articles within their top 10th percentiles. Traditional citations were higher for articles with imaging vs nonimaging content (11.5 ± 16.2 vs 6.9 ± 9.8, P < 0.001), but Altmetric scores were higher in articles with nonimaging content (5.1 ± 11.1 vs 2.8 ± 13.7, P = 0.006). Although overall online attention to radiology journal content was low, alternative metrics exhibited unique trends, particularly for nonclinical articles, and may provide a complementary measure of radiology research impact compared to traditional citation counts. Copyright © 2017 The Association of University Radiologists. Published by
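
A minimal sketch of the two comparisons reported above, the Pearson correlation between citation counts and Altmetric scores and the overlap of their top 10th percentiles, is shown below; the synthetic arrays stand in for the 892 articles and are not the study's data.

```python
# Sketch: Pearson correlation and top-decile overlap between traditional
# citations and Altmetric scores. Arrays are random stand-ins, not study data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
citations = rng.poisson(10, 892).astype(float)
altmetric = rng.poisson(3, 892).astype(float)

r, _ = pearsonr(citations, altmetric)
n_top = len(citations) // 10
top_cited = set(np.argsort(citations)[-n_top:])
top_altmetric = set(np.argsort(altmetric)[-n_top:])
overlap = len(top_cited & top_altmetric) / n_top
print(f"Pearson r = {r:.2f}, top-decile overlap = {overlap:.1%}")
```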

  4. Assessing quality in cardiac surgery

    Directory of Open Access Journals (Sweden)

    Samer A.M. Nashef

    2005-07-01

    Full Text Available There is a strong temporal, if not causal, link between the intervention and the outcome in cardiac surgery, and therefore a link becomes established between operative mortality and the measurement of surgical performance. In Britain the law stipulates that data collected by any public body or using public funds must be made freely available. Tools and mechanisms we devise and develop are likely to form the models on which the quality of care is assessed in other surgical and perhaps medical specialties. Measuring professional performance should be done by the profession. To measure risk there are a number of scores, as crude mortality is not enough. A very important benefit of assessing the risk of death is to use this knowledge in the determination of the indication to operate. The second benefit is in the assessment of the quality of care, as risk prediction gives a standard against which the performance of hospitals and surgeons can be judged. Peer review and “naming and shaming” are two mechanisms to monitor quality. There are two potentially damaging outcomes from the publication of results in a league-table form: the first is the damage to the hospital; the second is a reluctance to operate on high-risk patients. There is a real need for quality monitoring in medicine in general and in cardiac surgery in particular. Good quality surgical work requires robust knowledge of three crucial variables: activity, risk prediction and performance. In Europe, the three major specialist societies have agreed to establish the European Cardiovascular and Thoracic Surgery Institute of Accreditation (ECTSIA). Performance monitoring is soon to become imperative. If we surgeons are not on board, we shall have no control over its final destination, and the consequences may be equally damaging to us and to our patients.

  5. QPLOT: a quality assessment tool for next generation sequencing data.

    Science.gov (United States)

    Li, Bingshan; Zhan, Xiaowei; Wing, Mary-Kate; Anderson, Paul; Kang, Hyun Min; Abecasis, Goncalo R

    2013-01-01

    Next generation sequencing (NGS) is being widely used to identify genetic variants associated with human disease. Although the approach is cost effective, the underlying data is susceptible to many types of error. Importantly, since NGS technologies and protocols are rapidly evolving, with constantly changing steps ranging from sample preparation to data processing software updates, it is important to enable researchers to routinely assess the quality of sequencing and alignment data prior to downstream analyses. Here we describe QPLOT, an automated tool that can facilitate the quality assessment of sequencing run performance. Taking standard sequence alignments as input, QPLOT generates a series of diagnostic metrics summarizing run quality and produces convenient graphical summaries for these metrics. QPLOT is computationally efficient, generates webpages for interactive exploration of detailed results, and can handle the joint output of many sequencing runs. QPLOT is an automated tool that facilitates assessment of sequence run quality. We routinely apply QPLOT to ensure quick detection and diagnosis of sequencing run problems. We hope that QPLOT will be useful to the community as well.

  6. QPLOT: A Quality Assessment Tool for Next Generation Sequencing Data

    Directory of Open Access Journals (Sweden)

    Bingshan Li

    2013-01-01

    Full Text Available Background. Next generation sequencing (NGS) is being widely used to identify genetic variants associated with human disease. Although the approach is cost effective, the underlying data is susceptible to many types of error. Importantly, since NGS technologies and protocols are rapidly evolving, with constantly changing steps ranging from sample preparation to data processing software updates, it is important to enable researchers to routinely assess the quality of sequencing and alignment data prior to downstream analyses. Results. Here we describe QPLOT, an automated tool that can facilitate the quality assessment of sequencing run performance. Taking standard sequence alignments as input, QPLOT generates a series of diagnostic metrics summarizing run quality and produces convenient graphical summaries for these metrics. QPLOT is computationally efficient, generates webpages for interactive exploration of detailed results, and can handle the joint output of many sequencing runs. Conclusion. QPLOT is an automated tool that facilitates assessment of sequence run quality. We routinely apply QPLOT to ensure quick detection and diagnosis of sequencing run problems. We hope that QPLOT will be useful to the community as well.

  7. Ultrasound to assess bone quality.

    Science.gov (United States)

    Raum, Kay; Grimal, Quentin; Varga, Peter; Barkmann, Reinhard; Glüer, Claus C; Laugier, Pascal

    2014-06-01

    Bone quality is determined by a variety of compositional, micro- and ultrastructural properties of the mineralized tissue matrix. In contrast to X-ray-based methods, the interaction of acoustic waves with bone tissue carries information about elastic and structural properties of the tissue. Quantitative ultrasound (QUS) methods represent powerful alternatives to ionizing X-ray-based assessment of fracture risk. New in vivo applicable methods permit measurements of fracture-relevant properties, e.g., cortical thickness and stiffness, at fragile anatomic regions (e.g., the distal radius and the proximal femur). Experimentally, resonance ultrasound spectroscopy and acoustic microscopy can be used to assess the mesoscale stiffness tensor and elastic maps of the tissue matrix at microscale resolution, respectively. QUS methods thus currently represent the most promising approach for noninvasive assessment of components of fragility beyond bone mass and bone microstructure, providing prospects for improved assessment of fracture risk.

  8. Assessing water quality trends in catchments with contrasting hydrological regimes

    Science.gov (United States)

    Sherriff, Sophie C.; Shore, Mairead; Mellander, Per-Erik

    2016-04-01

    Environmental resources are under increasing pressure to simultaneously achieve social, economic and ecological aims. Increasing demand for food production, for example, has expanded and intensified agricultural systems globally. In turn, greater risks of diffuse pollutant delivery (suspended sediment (SS) and phosphorus (P)) from land to water due to higher stocking densities, fertilisation rates and soil erodibility have been attributed to deterioration of chemical and ecological quality of aquatic ecosystems. Development of sustainable and resilient management strategies for agro-ecosystems must detect and consider the impact of land use disturbance on water quality over time. However, assessment of multiple monitoring sites over a region is challenged by hydro-climatic fluctuations and the propagation of events through catchments with contrasting hydrological regimes. Simple water quality metrics, for example flow-weighted pollutant exports, have potential to normalise the impact of catchment hydrology and better identify water quality fluctuations due to land use and short-term climate fluctuations. This paper assesses the utility of flow-weighted water quality metrics to evaluate periods and causes of critical pollutant transfer. Sub-hourly water quality (SS and P) and discharge data were collected from hydrometric monitoring stations at the outlets of five small (~10 km²) agricultural catchments in Ireland. Catchments possess contrasting land uses (predominantly grassland or arable) and soil drainage (poorly, moderately or well drained) characteristics. Flow-weighted water quality metrics were calculated and evaluated according to fluctuations in source pressure and rainfall. Flow-weighted water quality metrics successfully identified fluctuations in pollutant export which could be attributed to land use changes through the agricultural calendar, i.e., groundcover fluctuations. In particular, catchments with predominantly poor or moderate soil drainage
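
One common form of the flow-weighted metrics discussed above is the flow-weighted mean concentration, FWMC = Σ(c_i·q_i·Δt_i) / Σ(q_i·Δt_i); the sketch below applies it to illustrative sub-hourly suspended sediment and discharge values, not to data from these catchments.

```python
# Sketch: flow-weighted mean concentration (FWMC), a common flow-weighted
# water quality metric: FWMC = sum(c_i * q_i * dt_i) / sum(q_i * dt_i).
# The values below are illustrative, not observations from the study catchments.
import numpy as np

def fwmc(concentration, discharge, dt):
    c = np.asarray(concentration, dtype=float)
    q = np.asarray(discharge, dtype=float)
    dt = np.asarray(dt, dtype=float)
    return np.sum(c * q * dt) / np.sum(q * dt)

ss_mg_l = [12.0, 85.0, 40.0, 15.0]    # suspended sediment concentration (mg/L)
flow_m3_s = [0.05, 0.60, 0.20, 0.06]  # discharge at the catchment outlet (m3/s)
dt_s = [900, 900, 900, 900]           # 15-minute (sub-hourly) sampling intervals
print(f"FWMC = {fwmc(ss_mg_l, flow_m3_s, dt_s):.1f} mg/L")
```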

  9. Impact of Constant Rate Factor on Objective Video Quality Assessment

    Directory of Open Access Journals (Sweden)

    Juraj Bienik

    2017-01-01

    Full Text Available This paper deals with the impact of the constant rate factor value on objective video quality assessment using the PSNR and SSIM metrics. The compression efficiency of the H.264 and H.265 codecs configured with different Constant Rate Factor (CRF) values was tested. The assessment was done for eight types of video sequences, depending on content, at High Definition (HD), Full HD (FHD) and Ultra HD (UHD) resolutions. Finally, the performance of both codecs was compared with emphasis on compression ratio and coding efficiency.
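
As a reminder of how the simpler of the two objective metrics above is computed, the sketch below evaluates PSNR, 10·log10(MAX²/MSE), between a source frame and its decoded version; the synthetic frames and noise level merely mimic compression loss and are not the paper's test sequences.

```python
# Sketch: full-reference PSNR between a source frame and its encoded/decoded
# version. The frames below are synthetic stand-ins for real video content.
import numpy as np

def psnr(reference, distorted, max_val=255.0):
    ref = np.asarray(reference, dtype=float)
    dis = np.asarray(distorted, dtype=float)
    mse = np.mean((ref - dis) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (1080, 1920)).astype(float)
decoded = np.clip(frame + rng.normal(0, 3, frame.shape), 0, 255)  # mimic lossy coding
print(f"PSNR = {psnr(frame, decoded):.2f} dB")
```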

  10. Techno-Economic Related Metrics for a Wave Energy Converters Feasibility Assessment

    Directory of Open Access Journals (Sweden)

    Adrian de Andres

    2016-10-01

    Full Text Available When designing “multi-MW arrays” of Wave Energy Converters (WECs), having a low number of converters with high individual power ratings can be beneficial as the Operation and Maintenance (O&M) costs may be reduced. However, having converters of small dimensions or small power ratings could also be beneficial, as suggested by previous works, due to a reduction in material costs as compared to power production, and the use of small, inexpensive vessels. In this work, a case study investigating the optimum size of WEC for a 20 MW array is performed. Analysis is carried out based on the CorPower Ocean technology. In this case study, firstly a Levelized Cost of Energy (LCOE) model is created. This model incorporates the latest Capital Expenditure (CAPEX) estimates for CorPower Ocean’s 250 kW prototype. Using this techno-economic model, several sizes/ratings of WEC are tested for use in a 20 MW array. Operational Expenditure (OPEX) is calculated using two different calculation approaches in order to check its influence on the final indicators. OPEX is firstly calculated as a percentage of CAPEX, as shown in previous works, and secondly using a failure-repair model, taking into account individual failures of WECs in the array. Size/rating analysis is carried out for several European locations in order to establish any dependence between site location and optimal WEC size/rating. Several metrics for techno-economic assessment of marine energy converters, other than LCOE, are compared in this work. A comparison of several devices with each of these metrics is performed within this study.
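
The sketch below shows a generic discounted LCOE of the kind such techno-economic models are built on, discounted lifetime costs divided by discounted lifetime energy; the CAPEX, OPEX, energy yield, and discount rate are illustrative assumptions, not CorPower Ocean figures.

```python
# Sketch: a simple discounted LCOE (lifetime costs / lifetime energy, both
# discounted). All input numbers are illustrative assumptions only.
def lcoe(capex, annual_opex, annual_energy_mwh, lifetime_years, discount_rate):
    discounted_costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                                   for t in range(1, lifetime_years + 1))
    discounted_energy = sum(annual_energy_mwh / (1 + discount_rate) ** t
                            for t in range(1, lifetime_years + 1))
    return discounted_costs / discounted_energy   # currency units per MWh

print(f"LCOE = {lcoe(80e6, 3e6, 60_000, 20, 0.10):.0f} per MWh")
```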

  11. Assessing the functional coherence of gene sets with metrics based on the Gene Ontology graph.

    Science.gov (United States)

    Richards, Adam J; Muller, Brian; Shotwell, Matthew; Cowart, L Ashley; Rohrer, Bäerbel; Lu, Xinghua

    2010-06-15

    The results of initial analyses for many high-throughput technologies commonly take the form of gene or protein sets, and one of the ensuing tasks is to evaluate the functional coherence of these sets. The study of gene set function most commonly makes use of controlled vocabulary in the form of ontology annotations. For a given gene set, the statistical significance of observing these annotations or 'enrichment' may be tested using a number of methods. Instead of testing for significance of individual terms, this study is concerned with the task of assessing the global functional coherence of gene sets, for which novel metrics and statistical methods have been devised. The metrics of this study are based on the topological properties of graphs comprised of genes and their Gene Ontology annotations. A novel aspect of these methods is that both the enrichment of annotations and the relationships among annotations are considered when determining the significance of functional coherence. We applied our methods to perform analyses on an existing database and on microarray experimental results. Here, we demonstrated that our approach is highly discriminative in terms of differentiating coherent gene sets from random ones and that it provides biologically sensible evaluations in microarray analysis. We further used examples to show the utility of graph visualization as a tool for studying the functional coherence of gene sets. The implementation is provided as a freely accessible web application at: http://projects.dbbe.musc.edu/gosteiner. Additionally, the source code written in the Python programming language, is available under the General Public License of the Free Software Foundation. Supplementary data are available at Bioinformatics online.

  12. The Midwest Stream Quality Assessment

    Science.gov (United States)

    ,

    2012-01-01

    In 2013, the U.S. Geological Survey (USGS) National Water-Quality Assessment Program (NAWQA) and USGS Columbia Environmental Research Center (CERC) will be collaborating with the U.S. Environmental Protection Agency (EPA) National Rivers and Streams Assessment (NRSA) to assess stream quality across the Midwestern United States. The sites selected for this study are a subset of the larger NRSA, implemented by the EPA, States and Tribes to sample flowing waters across the United States (http://water.epa.gov/type/rsl/monitoring/riverssurvey/index.cfm). The goals are to characterize water-quality stressors—contaminants, nutrients, and sediment—and ecological conditions in streams throughout the Midwest and to determine the relative effects of these stressors on aquatic organisms in the streams. Findings will contribute useful information for communities and policymakers by identifying which human and environmental factors are the most critical in controlling stream quality. This collaborative study enhances information provided to the public and policymakers and minimizes costs by leveraging and sharing data gathered under existing programs. In the spring and early summer, NAWQA will sample streams weekly for contaminants, nutrients, and sediment. During the same time period, CERC will test sediment and water samples for toxicity, deploy time-integrating samplers, and measure reproductive effects and biomarkers of contaminant exposure in fish or amphibians. NRSA will sample sites once during the summer to assess ecological and habitat conditions in the streams by collecting data on algal, macroinvertebrate, and fish communities and collecting detailed physical-habitat measurements. Study-team members from all three programs will work in collaboration with USGS Water Science Centers and State agencies on study design, execution of sampling and analysis, and reporting.

  13. Next-Generation Metrics: Responsible Metrics & Evaluation for Open Science

    Energy Technology Data Exchange (ETDEWEB)

    Wilsdon, J.; Bar-Ilan, J.; Peters, I.; Wouters, P.

    2016-07-01

    Metrics evoke a mixed reaction from the research community. A commitment to using data to inform decisions makes some enthusiastic about the prospect of granular, real-time analysis of research and its wider impacts. Yet we only have to look at the blunt use of metrics such as journal impact factors, h-indices and grant income targets to be reminded of the pitfalls. Some of the most precious qualities of academic culture resist simple quantification, and individual indicators often struggle to do justice to the richness and plurality of research. Too often, poorly designed evaluation criteria are “dominating minds, distorting behaviour and determining careers” (Lawrence, 2007). Metrics hold real power: they are constitutive of values, identities and livelihoods. How to exercise that power to more positive ends has been the focus of several recent and complementary initiatives, including the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto and The Metric Tide (a UK government review of the role of metrics in research management and assessment). Building on these initiatives, the European Commission, under its new Open Science Policy Platform, is now looking to develop a framework for responsible metrics for research management and evaluation, which can be incorporated into the successor framework to Horizon 2020. (Author)

  14. A multi-model multi-objective study to evaluate the role of metric choice on sensitivity assessment

    Science.gov (United States)

    Haghnegahdar, Amin; Razavi, Saman; Wheater, Howard; Gupta, Hoshin

    2016-04-01

    Sensitivity analysis (SA) is an essential tool for providing insight into model behavior, calibration, and uncertainty assessment. It is often overlooked that the choice of metric can significantly change the assessment of model sensitivity. In order to identify important hydrological processes across various case studies, we conducted a multi-model, multi-criteria sensitivity analysis using a novel and efficient technique, Variogram Analysis of Response Surfaces (VARS). The analysis was conducted using three physically-based hydrological models, applied at various scales ranging from small (hillslope) to large (watershed) scales. In each case, the sensitivity of simulated streamflow to model processes (represented through parameters) was measured using different metrics selected based on various hydrograph characteristics including high flows, low flows, and volume. It is demonstrated that metric choice has a significant influence on SA results and must be aligned with study objectives. Guidelines for identifying important model parameters from a multi-objective SA perspective are discussed as part of this study.

  15. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Shiraishi, Satomi; Moore, Kevin L., E-mail: kevinmoore@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California 92093 (United States); Tan, Jun [Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas 75490 (United States); Olsen, Lindsey A. [Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri 63110 (United States)

    2015-02-15

    Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V10Gy (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through the initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QMclin − QMpred, and a coefficient of determination, R². For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are

  16. Kurtosis corrected sound pressure level as a noise metric for risk assessment of occupational noises.

    Science.gov (United States)

    Goley, G Steven; Song, Won Joon; Kim, Jay H

    2011-03-01

    Current noise guidelines use an energy-based noise metric to predict the risk of hearing loss, and thus ignore the effect of temporal characteristics of the noise. The practice is widely considered to underestimate the risk of a complex noise environment, where impulsive noises are embedded in a steady-state noise. A basic form for noise metrics is designed by combining the equivalent sound pressure level (SPL) and a temporal correction term defined as a function of the kurtosis of the noise. Several noise metrics are developed by varying this basic form and evaluated utilizing existing chinchilla noise exposure data. It is shown that the kurtosis correction term significantly improves the correlation of the noise metric with the measured hearing losses in chinchillas. The average SPL of the frequency components of the noise that define the hearing loss, with a kurtosis correction term, is identified as the best noise metric among those tested. One of the investigated metrics, the kurtosis-corrected A-weighted SPL, is applied to human exposure study data as a preview of applying the metrics to human guidelines. The possibility of applying the noise metrics to human guidelines is discussed. © 2011 Acoustical Society of America
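
The family of metrics described above combines an equivalent SPL with a kurtosis-dependent correction; the sketch below uses one plausible logarithmic form of that correction for illustration only, and the lambda weight and functional form are assumptions rather than the coefficients fitted in the paper.

```python
# Sketch of the metric family described in the abstract: equivalent SPL plus a
# kurtosis correction. The log form and lambda weight are illustrative
# assumptions, not the paper's fitted values.
import numpy as np
from scipy.stats import kurtosis

def kurtosis_corrected_level(pressure_pa, lam=4.0, p_ref=20e-6):
    x = np.asarray(pressure_pa, dtype=float)
    leq = 10.0 * np.log10(np.mean(x ** 2) / p_ref ** 2)   # equivalent SPL (dB)
    beta = kurtosis(x, fisher=False)                       # Pearson kurtosis (3 if Gaussian)
    correction = lam * np.log10(max(beta, 3.0) / 3.0)      # zero for Gaussian noise
    return leq + correction

rng = np.random.default_rng(1)
steady = rng.normal(0.0, 0.02, 48_000)        # one second of Gaussian-like noise
impulsive = steady.copy()
impulsive[::4000] += 2.0                      # embed impulses -> higher kurtosis
print(f"steady: {kurtosis_corrected_level(steady):.1f} dB, "
      f"impulsive: {kurtosis_corrected_level(impulsive):.1f} dB")
```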

  17. Analytical Tools Interface for Landscape Assessments (ATtILA) for landscape metrics

    Science.gov (United States)

    ATtILA is an easy-to-use ArcView extension that calculates many commonly used landscape metrics. By providing an intuitive interface, the extension enables a wide audience to generate landscape metrics regardless of their GIS knowledge level.

  18. Convective Weather Forecast Quality Metrics for Air Traffic Management Decision-Making

    Science.gov (United States)

    Chatterji, Gano B.; Gyarfas, Brett; Chan, William N.; Meyn, Larry A.

    2006-01-01

    the process described in Refs. 5 through 7, in terms of percentage coverage or confidence level, is notionally sound compared to characterizing in terms of probabilities, because the probability of the forecast being correct can only be determined using actual observations. References 5 through 7 only use the forecast data and not the observations. The method for computing the probability of detection, false alarm ratio and several forecast quality metrics (Skill Scores) using both the forecast and observation data is given in Ref. 2. This paper extends the statistical verification method in Ref. 2 to determine co-occurrence probabilities. The method consists of computing the probability that a severe weather cell (grid location) is detected in the observation data in the neighborhood of the severe weather cell in the forecast data. Probabilities of occurrence at the grid location and in its neighborhood with higher severity, and with lower severity, in the observation data compared to that in the forecast data are examined. The method proposed in Refs. 5 through 7 is used for computing the probability that a certain number of cells in the neighborhood of severe weather cells in the forecast data are seen as severe weather cells in the observation data. Finally, the probability of existence of gaps in the observation data in the neighborhood of severe weather cells in forecast data is computed. Gaps are defined as openings between severe weather cells through which an aircraft can safely fly to its intended destination. The rest of the paper is organized as follows. Section II summarizes the statistical verification method described in Ref. 2. The extension of this method for computing the co-occurrence probabilities is discussed in Section III. Numerical examples using NCWF forecast data and NCWD observation data are presented in Section III to elucidate the characteristics of the co-occurrence probabilities. This section also discusses the procedure for computing
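
The contingency-table scores named above have standard definitions, POD = hits / (hits + misses) and FAR = false alarms / (hits + false alarms); the sketch below computes them from gridded forecast and observation masks, with synthetic grids standing in for NCWF/NCWD data.

```python
# Sketch: probability of detection (POD) and false alarm ratio (FAR) computed
# from gridded forecast/observation masks. Grids here are synthetic placeholders.
import numpy as np

def pod_far(forecast, observed):
    f = np.asarray(forecast, dtype=bool)
    o = np.asarray(observed, dtype=bool)
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return pod, far

rng = np.random.default_rng(7)
obs = rng.random((100, 100)) > 0.9              # observed convective cells
fcst = obs ^ (rng.random((100, 100)) > 0.97)    # imperfect forecast of the same field
pod, far = pod_far(fcst, obs)
print(f"POD = {pod:.2f}, FAR = {far:.2f}")
```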

  19. Total Probability of Collision as a Metric for Finite Conjunction Assessment and Collision Risk Management

    Science.gov (United States)

    Frigm, Ryan C.; Hejduk, Matthew D.; Johnson, Lauren C.; Plakalovic, Dragan

    2015-01-01

    On-orbit collision risk is becoming an increasing mission risk to all operational satellites in Earth orbit. Managing this risk can be disruptive to mission and operations, presents challenges for decision-makers, and is time-consuming for all parties involved. With the planned capability improvements in detecting and tracking smaller orbital debris and capacity improvements to routinely predict on-orbit conjunctions, this mission risk will continue to grow in terms of likelihood and effort. It is a very real possibility that the future space environment will not allow collision risk management and mission operations to be conducted in the same manner as they are today. This paper presents the concept of a finite conjunction assessment, one where each discrete conjunction is not treated separately but, rather, as a continuous event that must be managed concurrently. The paper also introduces the Total Probability of Collision as an analogous metric for finite conjunction assessment operations and provides several options for its usage in a Concept of Operations.
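
A simple way to aggregate per-conjunction probabilities into a single figure, assuming independent events, is P_total = 1 − Π(1 − Pc_i); the sketch below uses that common simplification for illustration, and the paper's exact formulation may differ.

```python
# Sketch: aggregating per-conjunction collision probabilities over a finite
# assessment window, assuming independent events (a common simplification;
# the paper's exact formulation may differ).
import numpy as np

def total_pc(individual_pcs):
    pcs = np.asarray(individual_pcs, dtype=float)
    return 1.0 - np.prod(1.0 - pcs)

conjunctions = [1e-5, 3e-6, 8e-5, 2e-4]   # illustrative per-event Pc values
print(f"Total Pc over the window: {total_pc(conjunctions):.2e}")
```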

  20. Serial improvement of quality metrics in pediatric thoracoscopic lobectomy for congenital lung malformation: an analysis of learning curve.

    Science.gov (United States)

    Park, Samina; Kim, Eung Re; Hwang, Yoohwa; Lee, Hyun Joo; Park, In Kyu; Kim, Young Tae; Kang, Chang Hyun

    2017-10-01

    Video-assisted thoracic surgery (VATS) pulmonary resection in children is a technically demanding procedure that requires a relatively long learning period. This study aimed to evaluate the serial improvement of quality metrics according to case volume experience in pediatric VATS pulmonary resection of congenital lung malformation (CLM). Methods: VATS anatomical resection in CLM was attempted in 200 consecutive patients. The learning curve for the operative time was modeled by cumulative sum analysis. Quality metrics were used to measure technical achievement and efficiency outcomes. Results: The median operative time was 95 min. The median length of hospital stay and chest tube indwelling time was 4 and 2 days, respectively. The improvement in operation time was observed persistently until 200 cases. However, two cut-off points, the 50th case and the 110th case, were identified in the learning curve for operative time, and the 110th case was the turning point for stable outcomes with short operation time. Significant reductions in length of hospital stay and chest tube indwelling time were observed after 50 cases (p = .002 and p = .021, respectively). The complication rate decreased but continued at a low rate for the entire study period, and the interval decrease was not statistically significant. The conversion rate decreased significantly (p = .001), and technically challenging procedures were performed more frequently in later cases. Conclusions: Improvements of quality metrics in operation time, conversion rate, length of hospital stay, and chest tube indwelling time were observed in proportion to case volume. A minimum experience of 50 cases is necessary for stable outcomes of pediatric VATS pulmonary resection.

  1. Challenges, Solutions, and Quality Metrics of Personal Genome Assembly in Advancing Precision Medicine.

    Science.gov (United States)

    Xiao, Wenming; Wu, Leihong; Yavas, Gokhan; Simonyan, Vahan; Ning, Baitang; Hong, Huixiao

    2016-04-22

    Even though each of us shares more than 99% of the DNA sequences in our genome, there are millions of sequence codes or structure in small regions that differ between individuals, giving us different characteristics of appearance or responsiveness to medical treatments. Currently, genetic variants in diseased tissues, such as tumors, are uncovered by exploring the differences between the reference genome and the sequences detected in the diseased tissue. However, the public reference genome was derived with the DNA from multiple individuals. As a result of this, the reference genome is incomplete and may misrepresent the sequence variants of the general population. The more reliable solution is to compare sequences of diseased tissue with its own genome sequence derived from tissue in a normal state. As the price to sequence the human genome has dropped dramatically to around $1000, it shows a promising future of documenting the personal genome for every individual. However, de novo assembly of individual genomes at an affordable cost is still challenging. Thus, till now, only a few human genomes have been fully assembled. In this review, we introduce the history of human genome sequencing and the evolution of sequencing platforms, from Sanger sequencing to emerging "third generation sequencing" technologies. We present the currently available de novo assembly and post-assembly software packages for human genome assembly and their requirements for computational infrastructures. We recommend that a combined hybrid assembly with long and short reads would be a promising way to generate good quality human genome assemblies and specify parameters for the quality assessment of assembly outcomes. We provide a perspective view of the benefit of using personal genomes as references and suggestions for obtaining a quality personal genome. Finally, we discuss the usage of the personal genome in aiding vaccine design and development, monitoring host immune-response, tailoring

  2. Challenges, Solutions, and Quality Metrics of Personal Genome Assembly in Advancing Precision Medicine

    Directory of Open Access Journals (Sweden)

    Wenming Xiao

    2016-04-01

    Full Text Available Even though each of us shares more than 99% of the DNA sequences in our genome, there are millions of sequence codes or structure in small regions that differ between individuals, giving us different characteristics of appearance or responsiveness to medical treatments. Currently, genetic variants in diseased tissues, such as tumors, are uncovered by exploring the differences between the reference genome and the sequences detected in the diseased tissue. However, the public reference genome was derived with the DNA from multiple individuals. As a result of this, the reference genome is incomplete and may misrepresent the sequence variants of the general population. The more reliable solution is to compare sequences of diseased tissue with its own genome sequence derived from tissue in a normal state. As the price to sequence the human genome has dropped dramatically to around $1000, it shows a promising future of documenting the personal genome for every individual. However, de novo assembly of individual genomes at an affordable cost is still challenging. Thus, till now, only a few human genomes have been fully assembled. In this review, we introduce the history of human genome sequencing and the evolution of sequencing platforms, from Sanger sequencing to emerging “third generation sequencing” technologies. We present the currently available de novo assembly and post-assembly software packages for human genome assembly and their requirements for computational infrastructures. We recommend that a combined hybrid assembly with long and short reads would be a promising way to generate good quality human genome assemblies and specify parameters for the quality assessment of assembly outcomes. We provide a perspective view of the benefit of using personal genomes as references and suggestions for obtaining a quality personal genome. Finally, we discuss the usage of the personal genome in aiding vaccine design and development, monitoring host

  3. Quality of Life after Brain Injury (QOLIBRI): scale development and metric properties.

    Science.gov (United States)

    von Steinbüchel, Nicole; Wilson, Lindsay; Gibbons, Henning; Hawthorne, Graeme; Höfer, Stefan; Schmidt, Silke; Bullinger, Monika; Maas, Andrew; Neugebauer, Edmund; Powell, Jane; von Wild, Klaus; Zitnay, George; Bakx, Wilbert; Christensen, Anne-Lise; Koskinen, Sanna; Sarajuuri, Jaana; Formisano, Rita; Sasse, Nadine; Truelle, Jean-Luc

    2010-07-01

    The consequences of traumatic brain injury (TBI) for health-related quality of life (HRQoL) are poorly investigated, and a TBI-specific instrument has not previously been available. The cross-cultural development of a new measure to assess HRQoL after TBI is described here. An international TBI Task Force derived a conceptual model from previous work, constructed an initial item bank of 148 items, and then reduced the item set through two successive multicenter validation studies. The first study, with eight language versions of the QOLIBRI, recruited 1528 participants with TBI, and the second with six language versions, recruited 921 participants. The data from 795 participants from the second study who had complete Glasgow Coma Scale (GCS) and Glasgow Outcome Scale (GOS) data were used to finalize the instrument. The final version of the QOLIBRI consists of 37 items in six scales (see Appendix ). Satisfaction is assessed in the areas of "Cognition," "Self," "Daily Life and Autonomy," and "Social Relationships," and feeling bothered by "Emotions," and "Physical Problems." The QOLIBRI scales meet standard psychometric criteria (internal consistency, alpha = 0.75-0.89, test-retest reliability, r(tt) = 0.78-0.85). Test-retest reliability (r(tt) = 0.68-0.87) as well as internal consistency (alpha = 0.81-0.91) were also good in a subgroup of participants with lower cognitive performance. Although there is one strong HRQoL factor, a six-scale structure explaining additional variance was validated by exploratory and confirmatory factor analyses, and with Rasch modeling. The QOLIBRI is a new cross-culturally developed instrument for assessing HRQoL after TBI that fulfills standard psychometric criteria. It is potentially useful for clinicians and researchers conducting clinical trials, for assessing the impact of rehabilitation or other interventions, and for carrying out epidemiological surveys.

  4. Monitoring cognitive function and need with the automated neuropsychological assessment metrics in Decompression Sickness (DCS) research

    Science.gov (United States)

    Nesthus, Thomas E.; Schiflett, Sammuel G.

    1993-01-01

    Hypobaric decompression sickness (DCS) research presents the medical monitor with the difficult task of assessing the onset and progression of DCS largely on the basis of subjective symptoms. Even with the introduction of precordial Doppler ultrasound techniques for the detection of venous gas emboli (VGE), correct prediction of DCS can be made only about 65 percent of the time according to data from the Armstrong Laboratory's (AL's) hypobaric DCS database. An AL research protocol concerned with exercise and its effects on denitrogenation efficiency includes implementation of a performance assessment test battery to evaluate cognitive functioning during a 4-h simulated 30,000 ft (9144 m) exposure. Information gained from such a test battery may assist the medical monitor in identifying early signs of DCS and subtle neurologic dysfunction related to cases of asymptomatic, but advanced, DCS. This presentation concerns the selection and integration of a test battery and the timely graphic display of subject test results for the principal investigator and medical monitor. A subset of the Automated Neuropsychological Assessment Metrics (ANAM) developed through the Office of Military Performance Assessment Technology (OMPAT) was selected. The ANAM software provides a library of simple tests designed for precise measurement of processing efficiency in a variety of cognitive domains. For our application and time constraints, two tests requiring high levels of cognitive processing and memory were chosen along with one test requiring fine psychomotor performance. Accuracy, speed, and processing throughput variables, as well as RMS error, were collected. An automated mood survey provided 'state' information on six scales including anger, happiness, fear, depression, activity, and fatigue. An integrated and interactive LOTUS 1-2-3 macro was developed to import and display past and present task performance and mood-change information.

  5. Requirement Metrics for Risk Identification

    Science.gov (United States)

    Hammer, Theodore; Huffman, Lenore; Wilson, William; Rosenberg, Linda; Hyatt, Lawrence

    1996-01-01

    The Software Assurance Technology Center (SATC) is part of the Office of Mission Assurance of the Goddard Space Flight Center (GSFC). The SATC's mission is to assist National Aeronautics and Space Administration (NASA) projects to improve the quality of software which they acquire or develop. The SATC's efforts are currently focused on the development and use of metric methodologies and tools that identify and assess risks associated with software performance and scheduled delivery. This starts at the requirements phase, where the SATC, in conjunction with software projects at GSFC and other NASA centers, is working to identify tools and metric methodologies to assist project managers in identifying and mitigating risks. This paper discusses requirement metrics currently being used at NASA in a collaborative effort between the SATC and the Quality Assurance Office at GSFC to utilize the information available through the application of requirements management tools.

  6. Fifty shades of grey: Variability in metric-based assessment of surface waters using macroinvertebrates

    NARCIS (Netherlands)

    Keizer-Vlek, H.E.

    2014-01-01

    Since the introduction of the European Water Framework Directive (WFD) in 2000, every member state is obligated to assess the effects of human activities on the ecological quality status of all water bodies and to indicate the level of confidence and precision of the results provided by the

  7. No-reference visual quality assessment for image inpainting

    Science.gov (United States)

    Voronin, V. V.; Frantc, V. A.; Marchuk, V. I.; Sherstobitov, A. I.; Egiazarian, K.

    2015-03-01

    Inpainting has received a lot of attention in recent years, and quality assessment is an important task for evaluating different image reconstruction approaches. In many cases inpainting methods blur sharp transitions and image contours when recovering large areas of missing pixels, and they often fail to recover curved boundary edges. Quantitative metrics for inpainting results currently do not exist, and researchers use human comparisons to evaluate their methodologies and techniques. Most objective quality assessment methods rely on a reference image, which is often not available in inpainting applications. Researchers therefore usually resort to subjective quality assessment by human observers, which is a difficult and time-consuming procedure. This paper focuses on a machine learning approach to no-reference visual quality assessment for image inpainting based on properties of the human visual system. Our method is based on the observation that Local Binary Patterns describe the local structural information of an image well. We use a support vector regression model, learned on images assessed by human observers, to predict the perceived quality of inpainted images. We demonstrate how the predicted quality value correlates with qualitative opinion in a human observer study. Results are shown on a human-scored dataset for different inpainting methods.
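
    The pipeline sketched in this record (local structural descriptors fed to a regression model trained on human scores) can be illustrated with off-the-shelf tools. The following is a minimal, hypothetical sketch assuming uniform LBP histograms as features and an RBF support vector regressor; it is not the authors' implementation and the parameter choices are illustrative only.

    ```python
    # Hypothetical sketch of the LBP-plus-regression idea described above.
    # Feature extraction details and model parameters are assumptions, not the authors' code.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVR

    def lbp_histogram(gray, points=8, radius=1):
        """Uniform LBP histogram as a local-structure descriptor (assumed setup)."""
        codes = local_binary_pattern(gray, points, radius, method="uniform")
        hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
        return hist

    def train_quality_model(train_images, train_scores):
        """train_images: 2-D grayscale arrays; train_scores: human opinion scores."""
        X = np.array([lbp_histogram(img) for img in train_images])
        model = SVR(kernel="rbf", C=10.0)
        model.fit(X, train_scores)
        return model

    def predict_quality(model, image):
        """Predicted perceived quality of a single inpainted image."""
        return float(model.predict(lbp_histogram(image)[None, :])[0])
    ```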

  8. Quality assessment of digital annotated ECG data from clinical trials by the FDA ECG Warehouse.

    Science.gov (United States)

    Sarapa, Nenad

    2007-09-01

    The FDA mandates that digital electrocardiograms (ECGs) from 'thorough' QTc trials be submitted into the ECG Warehouse in Health Level 7 extended markup language format with annotated onset and offset points of waveforms. The FDA did not disclose the exact Warehouse metrics and minimal acceptable quality standards. The author describes the Warehouse scoring algorithms and metrics used by FDA, points out ways to improve FDA review and suggests Warehouse benefits for pharmaceutical sponsors. The Warehouse ranks individual ECGs according to their score for each quality metric and produces histogram distributions with Warehouse-specific thresholds that identify ECGs of questionable quality. Automatic Warehouse algorithms assess the quality of QT annotation and duration of manual QT measurement by the central ECG laboratory.

  9. The palmar metric: A novel radiographic assessment of the equine distal phalanx

    Directory of Open Access Journals (Sweden)

    M.A. Burd

    2014-08-01

    Full Text Available Digital radiographs are often used to subjectively assess the equine digit. Recently, quantitative and objective radiographic measurements have been reported that give new insight into the form and function of the equine digit. We investigated a radio-dense curvilinear profile along the distal phalanx on lateral radiographs, which we term the Palmar Curve (PC), and which we believe provides a measurement of the concavity of the distal phalanx of the horse. A second quantitative measurement, the Palmar Metric (PM), was defined as the percent area under the PC. We correlated the PM and age from 544 radiographs of the distal phalanx from the left and right front feet of horses of various breeds and known age, and 278 radiographs of the front feet of Quarter Horses. The PM was negatively correlated with age and decreased at a rate of 0.28% per year for horses of various breeds and 0.33% per year for Quarter Horses. Therefore, veterinarians should be aware of age-related change in the concave, parietal solar aspect of the distal phalanx in the horse.
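
    As a rough illustration of a "percent area under a curve" measurement like the Palmar Metric, the sketch below integrates a digitized curve and normalizes by a bounding-rectangle area. The digitization, units and normalization are assumptions; the paper's exact construction of the PC and PM is not reproduced here.

    ```python
    # Illustrative sketch only: the exact digitization and normalization used for the
    # Palmar Metric are not given in the abstract; bounding-box normalization is assumed.
    import numpy as np

    def palmar_metric(x, y):
        """Percent area under a digitized Palmar Curve (x, y in the same units, e.g. mm)."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        area_under_curve = np.trapz(y, x)                  # trapezoidal integration
        bounding_area = (x.max() - x.min()) * y.max()      # assumed reference area
        return 100.0 * area_under_curve / bounding_area

    # Example: a shallow synthetic concavity profile
    x = np.linspace(0.0, 60.0, 121)        # distance along the distal phalanx, mm
    y = 8.0 * np.sin(np.pi * x / 60.0)     # synthetic radio-dense profile, mm
    print(round(palmar_metric(x, y), 1))   # ~63.7 for this synthetic curve
    ```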

  10. METRIC CHARACTERISTICS OF THE MEASURING INSTRUMENT FOR ASSESSING THE FLEXIBILITY OF A SHOULDER GIRDLE

    Directory of Open Access Journals (Sweden)

    Ivana Čerkez

    2011-09-01

    Full Text Available The goal of this paper is to apply measuring instruments to assess the flexibility of a shoulder girdle and to establish some metric characteristics of the test. The study was conducted on a sample of 38 second-grade students of the Vocational High School Siroki Brijeg. The sample of variables is composed of 3 standard tests for measuring shoulder girdle flexibility and a modified test for measuring shoulder girdle flexibility. The idea for the modification of the test was obtained "on the ground", when a respondent reached an "extremely high" test result and the previous instruments were unable to measure the movement in its entirety. The results of the research indicate that adequate reliability and homogeneity of the test, as well as pragmatic and factorial validity, were established. The explanation of these results is that the test isolates the shoulder girdle well from the influence of other topological regions. The construction of the test itself allows a wide application, which is very important because it is potentially useful in all age groups.

  11. Compromises Between Quality of Service Metrics and Energy Consumption of Hierarchical and Flat Routing Protocols for Wireless Sensors Network

    Directory of Open Access Journals (Sweden)

    Abdelbari BEN YAGOUTA

    2016-11-01

    Full Text Available A Wireless Sensor Network (WSN) is a wireless network composed of spatially distributed and tiny autonomous nodes, which cooperatively monitor physical or environmental conditions. Among the concerns of these networks is prolonging the lifetime by saving node energy. There are several protocols specially designed for WSNs based on energy conservation. However, many WSN applications require QoS (Quality of Service) criteria, such as latency, reliability and throughput. In this paper, we compare three routing protocols for wireless sensor networks, LEACH (Low Energy Adaptive Clustering Hierarchy), AODV (Ad hoc On-demand Distance Vector) and LABILE (Link Quality-Based Lexical Routing), using the Castalia simulator in terms of energy consumption, throughput, reliability and latency of packets received by the sink under different conditions, to determine the configurations that offer the most suitable compromises between energy conservation and all QoS metrics for each routing protocol. The results show that the configurations offering the most suitable compromises between energy conservation and all QoS metrics are a large number of deployed nodes with a low packet rate for LEACH (300 nodes and 1 packet/s), a medium number of deployed nodes with a low packet rate for AODV (100 nodes and 1 packet/s), and a very low node density with a low packet rate for LABILE (50 nodes and 1 packet/s).

  12. A consistent conceptual framework for applying climate metrics in technology life cycle assessment

    Science.gov (United States)

    Mallapragada, Dharik; Mignone, Bryan K.

    2017-07-01

    Comparing the potential climate impacts of different technologies is challenging for several reasons, including the fact that any given technology may be associated with emissions of multiple greenhouse gases when evaluated on a life cycle basis. In general, analysts must decide how to aggregate the climatic effects of different technologies, taking into account differences in the properties of the gases (differences in atmospheric lifetimes and instantaneous radiative efficiencies) as well as different technology characteristics (differences in emission factors and technology lifetimes). Available metrics proposed in the literature have incorporated these features in different ways and have arrived at different conclusions. In this paper, we develop a general framework for classifying metrics based on whether they measure: (a) cumulative or end point impacts, (b) impacts over a fixed time horizon or up to a fixed end year, and (c) impacts from a single emissions pulse or from a stream of pulses over multiple years. We then use the comparison between compressed natural gas and gasoline-fueled vehicles to illustrate how the choice of metric can affect conclusions about technologies. Finally, we consider tradeoffs involved in selecting a metric, show how the choice of metric depends on the framework that is assumed for climate change mitigation, and suggest which subset of metrics are likely to be most analytically self-consistent.

  13. Color Image Quality Assessment Based on CIEDE2000

    Directory of Open Access Journals (Sweden)

    Yang Yang

    2012-01-01

    Full Text Available Combining the CIEDE2000 color difference formula and the printing industry standard for visual verification, we present an objective color image quality assessment method correlated with subjective visual perception. An objective score conformed to subjective perception (OSCSP Q) is proposed to directly reflect the subjective visual perception. In addition, we present a general method to calibrate correction factors of the color difference formula under real experimental conditions. Our experimental results show that the proposed DE2000-based metric can be consistent with the human visual system in general application environments.
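
    A starting point for a CIEDE2000-based image comparison can be sketched with scikit-image, which provides both the RGB-to-Lab conversion and the DeltaE00 formula. The OSCSP Q score and the calibrated correction factors described in the record are not reproduced; the mean DeltaE00 map below is only an assumed baseline.

    ```python
    # Minimal sketch of a CIEDE2000-based image difference score. The OSCSP Q score and the
    # calibrated correction factors from the paper are not reproduced here; this only shows
    # how a mean DeltaE00 between reference and test images could be obtained.
    import numpy as np
    from skimage.color import rgb2lab, deltaE_ciede2000

    def mean_ciede2000(reference_rgb, test_rgb):
        """Average per-pixel CIEDE2000 difference between two RGB images in [0, 1]."""
        lab_ref = rgb2lab(reference_rgb)
        lab_test = rgb2lab(test_rgb)
        return float(np.mean(deltaE_ciede2000(lab_ref, lab_test)))

    # Example with synthetic data
    rng = np.random.default_rng(0)
    ref = rng.random((64, 64, 3))
    test = np.clip(ref + 0.02 * rng.standard_normal(ref.shape), 0.0, 1.0)
    print(mean_ciede2000(ref, test))   # small value: the images are nearly identical
    ```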

  14. Higher Education Quality Assessment Model: Towards Achieving Educational Quality Standard

    Science.gov (United States)

    Noaman, Amin Y.; Ragab, Abdul Hamid M.; Madbouly, Ayman I.; Khedra, Ahmed M.; Fayoumi, Ayman G.

    2017-01-01

    This paper presents a developed higher education quality assessment model (HEQAM) that can be applied for enhancement of university services. This is because there is no universal unified quality standard model that can be used to assess the quality criteria of higher education institutes. The analytical hierarchy process is used to identify the…

  15. What Are We Assessing When We Measure Food Security? A Compendium and Review of Current Metrics12

    Science.gov (United States)

    Jones, Andrew D.; Ngure, Francis M.; Pelto, Gretel; Young, Sera L.

    2013-01-01

    The appropriate measurement of food security is critical for targeting food and economic aid; supporting early famine warning and global monitoring systems; evaluating nutrition, health, and development programs; and informing government policy across many sectors. This important work is complicated by the multiple approaches and tools for assessing food security. In response, we have prepared a compendium and review of food security assessment tools in which we review issues of terminology, measurement, and validation. We begin by describing the evolving definition of food security and use this discussion to frame a review of the current landscape of measurement tools available for assessing food security. We critically assess the purpose/s of these tools, the domains of food security assessed by each, the conceptualizations of food security that underpin each metric, as well as the approaches that have been used to validate these metrics. Specifically, we describe measurement tools that 1) provide national-level estimates of food security, 2) inform global monitoring and early warning systems, 3) assess household food access and acquisition, and 4) measure food consumption and utilization. After describing a number of outstanding measurement challenges that might be addressed in future research, we conclude by offering suggestions to guide the selection of appropriate food security metrics. PMID:24038241

  16. Multi-elemental profiling and chemo-metric validation revealed nutritional qualities of Zingiber officinale.

    Science.gov (United States)

    Pandotra, Pankaj; Viz, Bhavana; Ram, Gandhi; Gupta, Ajai Prakash; Gupta, Suphla

    2015-04-01

    Ginger rhizome is a valued food, spice and an important ingredient of the traditional systems of medicine of India, China and Japan. Inductively Coupled Plasma-Mass Spectrometry (ICP-MS) based multi-elemental profiling was performed to assess the quantitative complement of elements, nutritional quality and toxicity of 46 ginger germplasms collected from the north-western Himalayan region of India. The abundance of the eighteen elements quantified in the acid-digested rhizomes was observed to be K>Mg>Fe>Ca>Na>Mn>Zn>Ba>Cu>Cr>Ni>Pb>Co>Se>As>Be>Cd. The toxic element Hg was not detected in any of the investigated samples. Chemometric analyses showed positive correlations among most of the elements; no negative correlation was observed among any of the metals under investigation. UPGMA-based clustering analysis of the quantitative data grouped all 46 samples into three major clusters, displaying 88% similarity in their metal composition, while the eighteen metals investigated grouped into two major clusters. Quantitatively, all the elements analyzed were below the permissible limits laid down by the World Health Organization. The results were further validated by cluster analysis (CA) and principal component analysis (PCA) to understand the ionome of the ginger rhizome. The study suggests raw ginger to be a good source of beneficial elements/minerals such as Mg, Ca, Mn, Fe, Cu and Zn and will provide a platform for understanding the functional and physiological status of the ginger rhizome. Copyright © 2014 Elsevier Inc. All rights reserved.
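
    The chemometric validation steps named in this record (UPGMA clustering and PCA) correspond to standard routines; a hedged sketch on a synthetic samples-by-elements matrix is shown below. The data layout and the z-score preprocessing are assumptions, not the study's actual workflow.

    ```python
    # Sketch of the chemometric steps mentioned above (PCA and UPGMA clustering) applied to a
    # hypothetical samples-by-elements concentration matrix; the real data are not reproduced.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    concentrations = rng.gamma(shape=2.0, scale=1.0, size=(46, 18))  # 46 germplasms x 18 elements

    X = StandardScaler().fit_transform(concentrations)

    # UPGMA corresponds to hierarchical clustering with average linkage
    tree = linkage(X, method="average", metric="euclidean")
    groups = fcluster(tree, t=3, criterion="maxclust")      # cut the dendrogram into three clusters

    # Principal component analysis of the (synthetic) ionome
    pca = PCA(n_components=2).fit(X)
    print(groups[:10], pca.explained_variance_ratio_)
    ```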

  17. Optimization of the alpha image reconstruction - an iterative CT-image reconstruction with well-defined image quality metrics.

    Science.gov (United States)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelrieß, Marc

    2017-09-01

    Optimization of the AIR algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. point spread function (PSF) or modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm uses an approach that alternates between the optimization of raw-data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR using a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function for the AIR image. The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters that can be narrowed down to a relatively simple framework to compute high-quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6, while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their non-linearity. A simple set of parameters for the algorithm is discussed that provides
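
    The core operation described here, blending basis images with weighting images α, can be sketched for the two-basis case as a voxel-wise convex combination. The OSSART-like data-fidelity update, the gradient-descent regularization and the MTF monitoring are not reproduced; the sketch below only illustrates the blending step under assumed inputs.

    ```python
    # Minimal sketch of the voxel-wise blending step of AIR: a weighting image alpha mixes a
    # sharp (noisy) basis image with a smooth (low-noise) one. The OSSART data-fidelity update
    # and the MTF-based resolution monitoring described above are not reproduced here.
    import numpy as np

    def blend_basis_images(alpha, sharp_basis, smooth_basis):
        """Voxel-wise convex combination of two basis images (assumed two-basis case)."""
        alpha = np.clip(alpha, 0.0, 1.0)
        return alpha * sharp_basis + (1.0 - alpha) * smooth_basis

    rng = np.random.default_rng(2)
    sharp = rng.normal(100.0, 20.0, size=(128, 128))   # high resolution, high noise
    smooth = rng.normal(100.0, 5.0, size=(128, 128))   # low resolution, low noise
    alpha = np.full((128, 128), 0.3)                   # weighting image (optimized in AIR itself)

    blended = blend_basis_images(alpha, sharp, smooth)
    print(sharp.std(), blended.std())                  # blending lowers the noise level
    ```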

  18. Optimization of the alpha image reconstruction. An iterative CT-image reconstruction with well-defined image quality metrics

    Energy Technology Data Exchange (ETDEWEB)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelriess, Marc [German Cancer Research Center, Heidelberg (Germany)]

    2017-10-01

    Optimization of the AIR algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. point spread function (PSF) or modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm uses an approach that alternates between the optimization of raw-data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR using a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function for the AIR image. The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters that can be narrowed down to a relatively simple framework to compute high-quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6, while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their non-linearity. A simple set of parameters for the algorithm is discussed that provides

  19. Insertion of impairments in test video sequences for quality assessment based on psychovisual characteristics

    OpenAIRE

    López Velasco, Juan Pedro; Rodrigo Ferrán, Juan Antonio; Jiménez Bermejo, David; Menendez Garcia, Jose Manuel

    2014-01-01

    Assessing video quality is a complex task. While most pixel-based metrics do not show enough correlation between objective and subjective results, algorithms need to correspond to human perception when analyzing quality in a video sequence. For analyzing the perceived quality derived from concrete video artifacts in a determined region of interest, we present a novel methodology for generating test sequences which allows analysis of the impact of each individual distortion. Through results obt...

  20. NMF-Based Image Quality Assessment Using Extreme Learning Machine.

    Science.gov (United States)

    Wang, Shuigen; Deng, Chenwei; Lin, Weisi; Huang, Guang-Bin; Zhao, Baojun

    2017-01-01

    Numerous state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage process: distortion description followed by distortion effects pooling. As for the first stage, the distortion descriptors or measurements are expected to be effective representatives of human visual variations, while the second stage should well express the relationship among quality descriptors and the perceptual visual quality. However, most of the existing quality descriptors (e.g., luminance, contrast, and gradient) do not seem to be consistent with human perception, and the effects pooling is often done in ad-hoc ways. In this paper, we propose a novel full-reference IQA metric. It applies non-negative matrix factorization (NMF) to measure image degradations by making use of the parts-based representation of NMF. On the other hand, a new machine learning technique [extreme learning machine (ELM)] is employed to address the limitations of the existing pooling techniques. Compared with neural networks and support vector regression, ELM can achieve higher learning accuracy with faster learning speed. Extensive experimental results demonstrate that the proposed metric has better performance and lower computational complexity in comparison with the relevant state-of-the-art approaches.
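
    A toy version of the two-stage idea described above (NMF-derived degradation features followed by an extreme learning machine) is sketched below. The ELM is implemented here as a random hidden layer with a least-squares readout, and the feature construction on synthetic patches is an assumption; this is not the authors' code.

    ```python
    # Hedged sketch of the two-stage idea: NMF-based features followed by an extreme learning
    # machine (random hidden layer + least-squares readout). Feature details are assumptions.
    import numpy as np
    from sklearn.decomposition import NMF

    class TinyELM:
        def __init__(self, n_hidden=50, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            # Random input weights are fixed; only the readout is learned (core ELM idea).
            self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
            self.b = self.rng.standard_normal(self.n_hidden)
            H = np.tanh(X @ self.W + self.b)
            self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta

    # Synthetic non-negative patch matrix and subjective scores, for illustration only
    rng = np.random.default_rng(3)
    X_patches = rng.random((200, 64))
    y = rng.random(200)

    nmf = NMF(n_components=8, init="nndsvda", max_iter=500)
    features = nmf.fit_transform(X_patches)          # parts-based representation
    model = TinyELM(n_hidden=50).fit(features, y)
    print(model.predict(features[:5]))
    ```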

  1. Using spatial metrics to assess the efficacy of biodiversity conservation within the Romanian Carpathian Convention area

    Directory of Open Access Journals (Sweden)

    Petrişor Alexandru-Ionuţ

    2017-06-01

    Full Text Available The alpine region is of crucial importance for the European Union; as a result, the Carpathian Convention aims at its sustainable development. Since sustainability also implies conservation through natural protected areas, aimed at including regions representative of the national biogeographical space, this article aims at assessing the efficiency of conservation. The methodology consisted of using spatial metrics applied to Romanian and European data on the natural protected areas, land cover and use, and their transitional dynamics. The findings show a very good coverage of the Alpine biogeographical region (98% included in the Convention area, and 43% of it protected within the Convention area) and of the ecological region of Carpathian montane coniferous forests (88% included in the Convention area, and 42% of it protected within the Convention area). The dominant land cover is represented by forests (63% within the Convention area, and 70% of the total protected area). The main transitional dynamics are deforestation (covering 50% of all changed area within the Convention area and 46% of the changed area within its protected area) and forestations – including afforestation, reforestation and colonization of abandoned agricultural areas by forest vegetation (covering 44% of all changed area within the Convention area and 51% of the changed area within its protected area) during 1990-2000, and deforestation (covering 97% of all changed area within the Convention area and 99% of the changed area within its protected area) during 1990-2000. The results suggest that the coverage of biogeographical and ecological zones is good, especially for the most relevant ones, but deforestation is a serious issue, regardless of occurring before or after achieving the protection status.

  2. Machine learning approach for objective inpainting quality assessment

    Science.gov (United States)

    Frantc, V. A.; Voronin, V. V.; Marchuk, V. I.; Sherstobitov, A. I.; Agaian, S.; Egiazarian, K.

    2014-05-01

    This paper focuses on a machine learning approach for objective inpainting quality assessment. Inpainting has received a lot of attention in recent years, and quality assessment is an important task for evaluating different image reconstruction approaches. Quantitative metrics for successful image inpainting currently do not exist; researchers instead rely upon qualitative human comparisons in order to evaluate their methodologies and techniques. We present an approach for objective inpainting quality assessment based on natural image statistics and machine learning techniques. Our method is based on the observation that when images are properly normalized or transferred to a transform domain, local descriptors can be modeled by parametric distributions. The shapes of these distributions differ between non-inpainted and inpainted images. This approach permits obtaining a feature vector strongly correlated with subjective image perception by the human visual system. Next, we use a support vector regression model, learned on images assessed by human observers, to predict the perceived quality of inpainted images. We demonstrate how the predicted quality value correlates with qualitative opinion in a human observer study.

  3. Blind image quality assessment based on aesthetic and statistical quality-aware features

    Science.gov (United States)

    Jenadeleh, Mohsen; Masaeli, Mohammad Masood; Moghaddam, Mohsen Ebrahimi

    2017-07-01

    The main goal of image quality assessment (IQA) methods is the emulation of human perceptual image quality judgments. Therefore, the correlation between the objective scores of these methods and human perceptual scores is considered their performance metric. Human judgment of image quality implicitly includes many factors, such as aesthetics, semantics, context, and various types of visual distortions. The main idea of this paper is to use a host of features that are commonly employed in image aesthetics assessment in order to improve the accuracy of blind image quality assessment (BIQA) methods. We propose an approach that enriches the features of BIQA methods by integrating a host of aesthetic image features with features of natural image statistics derived from multiple domains. The proposed features have been used to augment five different state-of-the-art BIQA methods, which use natural scene statistics features. Experiments were performed on seven benchmark image quality databases. The experimental results showed significant improvement in the accuracy of the methods.

  4. Kurtosis corrected sound pressure level as a noise metric for risk assessment of occupational noises

    OpenAIRE

    Goley, G. Steven; Song, Won Joon; Kim, Jay H.

    2011-01-01

    Current noise guidelines use an energy-based noise metric to predict the risk of hearing loss, and thus ignore the effect of temporal characteristics of the noise. The practice is widely considered to underestimate the risk of a complex noise environment, where impulsive noises are embedded in a steady-state noise. A basic form for noise metrics is designed by combining the equivalent sound pressure level (SPL) and a temporal correction term defined as a function of kurtosis of the noise. Sev...
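
    A minimal sketch of a kurtosis-corrected equivalent level is given below, assuming a correction proportional to the logarithm of the kurtosis relative to the Gaussian value of 3. The specific functional form and coefficient evaluated in the paper are not given in this record, so both are assumptions.

    ```python
    # Sketch of a kurtosis-corrected equivalent SPL. The specific correction term and its
    # coefficient are assumptions for illustration; the paper evaluates several candidate forms.
    import numpy as np
    from scipy.stats import kurtosis

    P_REF = 20e-6  # reference pressure in Pa

    def l_eq(pressure):
        """Equivalent continuous sound pressure level in dB."""
        return 10.0 * np.log10(np.mean(pressure**2) / P_REF**2)

    def kurtosis_corrected_spl(pressure, coeff=4.343):
        """L_eq plus an assumed correction based on kurtosis relative to Gaussian (beta = 3)."""
        beta = kurtosis(pressure, fisher=False)      # Pearson kurtosis; 3 for a Gaussian signal
        return l_eq(pressure) + coeff * np.log(max(beta, 3.0) / 3.0)

    rng = np.random.default_rng(4)
    steady = 0.2 * rng.standard_normal(100_000)      # steady Gaussian noise
    impulses = steady.copy()
    impulses[::5_000] += 3.0                         # embedded impulsive peaks
    print(kurtosis_corrected_spl(steady), kurtosis_corrected_spl(impulses))
    ```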

  5. Assessing Natural Resource Use by Forest-Reliant Communities in Madagascar Using Functional Diversity and Functional Redundancy Metrics

    OpenAIRE

    Brown, Kerry A.; Flynn, Dan F. B.; Abram, Nicola K.; Ingram, J. Carter; Johnson, Steig E.; Wright, Patricia

    2011-01-01

    Biodiversity plays an integral role in the livelihoods of subsistence-based forest-dwelling communities and as a consequence it is increasingly important to develop quantitative approaches that capture not only changes in taxonomic diversity, but also variation in natural resources and provisioning services. We apply a functional diversity metric originally developed for addressing questions in community ecology to assess utilitarian diversity of 56 forest plots in Madagascar. The use categor...

  6. Metrics to assess the mitigation of global warming by carbon capture and storage in the ocean and in geological reservoirs

    OpenAIRE

    Haugan, Peter Mosby; Joos, Fortunat

    2004-01-01

    Different metrics to assess mitigation of global warming by carbon capture and storage are discussed. The climatic impact of capturing 30% of the anthropogenic carbon emission and its storage in the ocean or in geological reservoir are evaluated for different stabilization scenarios using a reduced-form carbon cycle-climate model. The accumulated Global Warming Avoided (GWA) remains, after a ramp-up during the first ~50 years, in the range of 15 to 30% over the next millennium for de...

  7. Sound quality prediction based on systematic metric selection and shrinkage: Comparison of stepwise, lasso, and elastic-net algorithms and clustering preprocessing

    Science.gov (United States)

    Gauthier, Philippe-Aubert; Scullion, William; Berry, Alain

    2017-07-01

    Sound quality is the impression of quality that is transmitted by the sound of a device. Its importance in the sound and acoustical design of consumer products no longer needs to be demonstrated. One of the challenges is the creation of a prediction model that is able to predict the results of a listening test using metrics derived from the sound stimuli. Often, these models are either derived using linear regression on a limited set of experimenter-selected metrics, or using more complex algorithms such as neural networks. In the former case, the user-selected metrics can bias the model and reflect the engineer's preconceived idea of sound quality while missing potential features. In the latter case, although prediction might be efficient, the model is often a black box which is difficult to use as a sound design guideline for engineers. In this paper, preprocessing by participant clustering and three different algorithms are compared in order to construct a sound quality prediction model that does not suffer from these limitations. The lasso, elastic-net and stepwise algorithms are tested on listening tests of a consumer product for which 91 metrics are used as potential predictors. Based on the reported results, the most promising algorithm is the lasso, which is able to (1) efficiently limit the number of metrics, (2) most accurately predict the results of listening tests, and (3) provide a meaningful model that can be used as understandable design guidelines.
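
    The lasso step can be sketched with a cross-validated penalty on standardized predictors, as below. The 91 metrics and the listening-test scores from the study are not available here, so the matrix is synthetic and purely illustrative.

    ```python
    # Minimal sketch of the lasso step: predict listening-test scores from a large pool of
    # candidate sound metrics while letting the penalty discard uninformative ones.
    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(5)
    X = rng.standard_normal((60, 91))                    # 60 stimuli x 91 candidate metrics
    true_coefs = np.zeros(91)
    true_coefs[:4] = [1.5, -1.0, 0.8, 0.5]               # only a few metrics actually matter
    y = X @ true_coefs + 0.1 * rng.standard_normal(60)   # simulated listening-test scores

    model = LassoCV(cv=5).fit(StandardScaler().fit_transform(X), y)
    kept = np.flatnonzero(model.coef_)
    print(f"{kept.size} metrics retained out of 91:", kept)
    ```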

  8. Geographic regions for assessing built environmental correlates with walking trips: A comparison using different metrics and model designs.

    Science.gov (United States)

    Tribby, Calvin P; Miller, Harvey J; Brown, Barbara B; Smith, Ken R; Werner, Carol M

    2017-05-01

    There is growing international evidence that supportive built environments encourage active travel such as walking. An unsettled question is the role of geographic regions for analyzing the relationship between the built environment and active travel. This paper examines the geographic region question by assessing walking trip models that use two different regions: walking activity spaces and self-defined neighborhoods. We also use two types of built environment metrics, perceived and audit data, and two types of study design, cross-sectional and longitudinal, to assess these regions. We find that the built environment associations with walking are dependent on the type of metric and the type of model. Audit measures summarized within walking activity spaces better explain walking trips compared to audit measures within self-defined neighborhoods. Perceived measures summarized within self-defined neighborhoods have mixed results. Finally, results differ based on study design. This suggests that results may not be comparable among different regions, metrics and designs; researchers need to consider carefully these choices when assessing active travel correlates. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Bringing Public Engagement into an Academic Plan and Its Assessment Metrics

    Science.gov (United States)

    Britner, Preston A.

    2012-01-01

    This article describes how public engagement was incorporated into a research university's current Academic Plan, how the public engagement metrics were selected and adopted, and how those processes led to subsequent strategic planning. Some recognition of the importance of civic engagement has followed, although there are many areas in which…

  10. Using Landscape Metrics Analysis and Analytic Hierarchy Process to Assess Water Harvesting Potential Sites in Jordan

    Directory of Open Access Journals (Sweden)

    Abeer Albalawneh

    2015-09-01

    Full Text Available Jordan is characterized as a "water scarce" country. Therefore, conserving ecosystem services such as water regulation and soil retention is challenging. In Jordan, rainwater harvesting has been adapted to meet those challenges. However, the spatial composition and configuration features of a target landscape are rarely considered when selecting a rainwater-harvesting site. This study aimed to introduce landscape spatial features into the schemes for selecting a proper water-harvesting site. Landscape metrics analysis was used to quantify 10 metrics for three potential landscapes (i.e., Watershed 104 (WS 104), Watershed 59 (WS 59), and Watershed 108 (WS 108)) located in the Jordanian Badia region. Results of the metrics analysis showed that the three non-vegetative land cover types in the three landscapes were highly suitable for serving as rainwater harvesting sites. Furthermore, the Analytic Hierarchy Process (AHP) was used to prioritize the fitness of the three target sites by comparing their landscape metrics. Results of the AHP indicate that the non-vegetative land cover in the WS 104 landscape was the most suitable site for rainwater harvesting intervention, based on its dominance, connectivity, shape, and low degree of fragmentation. Our study advances water harvesting network design by considering the landscape spatial pattern.
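
    The AHP prioritization mentioned here can be sketched as the principal-eigenvector weighting of a pairwise comparison matrix with a consistency check. The comparison values below are invented for illustration and do not reflect the study's judgments.

    ```python
    # Sketch of the AHP step used to rank candidate sites: priority weights from the principal
    # eigenvector of a reciprocal pairwise comparison matrix, plus a consistency ratio check.
    import numpy as np

    def ahp_weights(pairwise):
        """Priority weights and consistency ratio for a reciprocal pairwise matrix."""
        vals, vecs = np.linalg.eig(pairwise)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w = w / w.sum()
        n = pairwise.shape[0]
        ci = (vals.real[k] - n) / (n - 1)              # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)   # Saaty's random index
        return w, ci / ri

    # Hypothetical comparison of the three watersheds on overall suitability
    A = np.array([[1.0,   3.0, 5.0],
                  [1/3.0, 1.0, 2.0],
                  [1/5.0, 0.5, 1.0]])
    weights, cr = ahp_weights(A)
    print(weights.round(3), round(cr, 3))   # highest weight -> most suitable site; CR < 0.1 is acceptable
    ```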

  11. Material quality assurance risk assessment.

    Science.gov (United States)

    2013-01-01

    Over the past two decades the role of SHA has shifted from quality control (QC) of materials and : placement techniques to quality assurance (QA) and acceptance. The role of the Office of Materials : Technology (OMT) has been shifting towards assuran...

  12. Use of Frequency Response Metrics to Assess the Planning and Operating Requirements for Reliable Integration of Variable Renewable Generation

    Energy Technology Data Exchange (ETDEWEB)

    Eto, Joseph H.; Undrill, John; Mackin, Peter; Daschmans, Ron; Williams, Ben; Haney, Brian; Hunt, Randall; Ellis, Jeff; Illian, Howard; Martinez, Carlos; O' Malley, Mark; Coughlin, Katie; LaCommare, Kristina Hamachi

    2010-12-20

    An interconnected electric power system is a complex system that must be operated within a safe frequency range in order to reliably maintain the instantaneous balance between generation and load. This is accomplished by ensuring that adequate resources are available to respond to expected and unexpected imbalances and by restoring frequency to its scheduled value in order to ensure uninterrupted electric service to customers. Electrical systems must be flexible enough to reliably operate under a variety of "change" scenarios. System planners and operators must understand how other parts of the system change in response to the initial change, and need tools to manage such changes to ensure reliable operation within the scheduled frequency range. This report presents a systematic approach to identifying metrics that are useful for operating and planning a reliable system with increased amounts of variable renewable generation, which builds on existing industry practices for frequency control after unexpected loss of a large amount of generation. The report introduces a set of metrics or tools for measuring the adequacy of frequency response within an interconnection. Based on the concept of the frequency nadir, these metrics take advantage of new information gathering and processing capabilities that system operators are developing for wide-area situational awareness. Primary frequency response is the leading metric that will be used by this report to assess the adequacy of primary frequency control reserves necessary to ensure reliable operation. It measures what is needed to arrest frequency decline (i.e., to establish frequency nadir) at a frequency higher than the highest set point for under-frequency load shedding within an interconnection. These metrics can be used to guide the reliable operation of an interconnection under changing circumstances.
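
    A simplified illustration of a frequency-nadir-based metric is sketched below: given a post-event frequency trace and the size of the generation loss, it reports the nadir and a MW-per-0.1-Hz response figure. The report's actual metric definitions are more detailed; the trace, numbers and conventions used here are assumptions.

    ```python
    # Illustrative computation of a frequency-nadir-based metric from a frequency trace after a
    # generation loss. The report's full metric definitions are more detailed; the synthetic
    # trace and the MW-per-0.1-Hz convention used here are assumptions for the sketch.
    import numpy as np

    def primary_frequency_response(freq_hz, lost_mw, pre_event_hz=60.0):
        """Lost generation divided by the frequency decline to the nadir, per 0.1 Hz."""
        nadir = float(np.min(freq_hz))
        decline = pre_event_hz - nadir
        return nadir, lost_mw / (decline / 0.1)

    t = np.linspace(0.0, 30.0, 3001)
    freq = 60.0 - 0.25 * (1 - np.exp(-t / 3.0)) * np.exp(-t / 20.0)  # synthetic post-event trace
    nadir, pfr = primary_frequency_response(freq, lost_mw=1200.0)
    print(round(nadir, 3), round(pfr, 1))   # e.g. nadir near 59.8 Hz, response in MW per 0.1 Hz
    ```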

  13. Measuring scientific impact beyond academia: An assessment of existing impact metrics and proposed improvements.

    Directory of Open Access Journals (Sweden)

    James Ravenscroft

    Full Text Available How does scientific research affect the world around us? Being able to answer this question is of great importance in order to appropriately channel efforts and resources in science. The impact by scientists in academia is currently measured by citation based metrics such as h-index, i-index and citation counts. These academic metrics aim to represent the dissemination of knowledge among scientists rather than the impact of the research on the wider world. In this work we are interested in measuring scientific impact beyond academia, on the economy, society, health and legislation (comprehensive impact. Indeed scientists are asked to demonstrate evidence of such comprehensive impact by authoring case studies in the context of the Research Excellence Framework (REF. We first investigate the extent to which existing citation based metrics can be indicative of comprehensive impact. We have collected all recent REF impact case studies from 2014 and we have linked these to papers in citation networks that we constructed and derived from CiteSeerX, arXiv and PubMed Central using a number of text processing and information retrieval techniques. We have demonstrated that existing citation-based metrics for impact measurement do not correlate well with REF impact results. We also consider metrics of online attention surrounding scientific works, such as those provided by the Altmetric API. We argue that in order to be able to evaluate wider non-academic impact we need to mine information from a much wider set of resources, including social media posts, press releases, news articles and political debates stemming from academic work. We also provide our data as a free and reusable collection for further analysis, including the PubMed citation network and the correspondence between REF case studies, grant applications and the academic literature.

  14. Compression-based classification of biological sequences and structures via the Universal Similarity Metric: experimental assessment

    Directory of Open Access Journals (Sweden)

    Manzini Giovanni

    2007-07-01

    Full Text Available Abstract Background Similarity of sequences is a key mathematical notion for Classification and Phylogenetic studies in Biology. It is currently primarily handled using alignments. However, the alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined only to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric has gained prominence. It is based on the deep theory of Kolmogorov Complexity and universality is its most novel striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity, NCD (Normalized Compression Dissimilarity and CD (Compression Dissimilarity. Their applicability and robustness is tested on various data sets yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM and mostly at a qualitative level, no comparison among UCD, NCD and CD is available and no comparison of USM with existing methods, both based on alignments and not, seems to be available. Results We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to Molecular Biology. This offers the first systematic and quantitative experimental assessment of this methodology, that naturally complements the many theoretical and the preliminary experimental results available. Moreover, we compare the USM methodology both with methods based on alignments and not. We may group our experiments into two sets. The first one, performed via ROC
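
    Of the three approximations discussed, NCD is straightforward to sketch with a general-purpose compressor. The example below uses zlib and the standard NCD formula; the paper's systematic comparison across 25 compressors is not reproduced, and the toy sequences are made up.

    ```python
    # Minimal sketch of the NCD approximation to the Universal Similarity Metric using zlib as
    # the compressor. The paper compares 25 compressors and three approximations (UCD, NCD, CD).
    import zlib

    def c(data: bytes) -> int:
        """Compressed length in bytes."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized Compression Dissimilarity: (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
        cx, cy, cxy = c(x), c(y), c(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    seq_a = b"ATGGCGTACGTTAGC" * 40
    seq_b = b"ATGGCGTACGTTAGG" * 40          # nearly identical sequence
    seq_c = b"TTACCGGAACCTTGA" * 40          # unrelated sequence
    print(round(ncd(seq_a, seq_b), 3), round(ncd(seq_a, seq_c), 3))  # smaller value = more similar
    ```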

  15. Pragmatic model of translation quality assessment

    OpenAIRE

    Vorobjeva, S.; Podrezenko, V.

    2006-01-01

    The study analyses various approaches to translation quality assessment. A functional and pragmatic translation quality evaluation model, based on the target text function being equivalent to the source text function, has been proposed.

  16. Towards Quality Assessment in an EFL Programme

    Science.gov (United States)

    Ali, Holi Ibrahim Holi; Al Ajmi, Ahmed Ali Saleh

    2013-01-01

    Assessment is central in education and the teaching-learning process. This study attempts to explore the perspectives and views about quality assessment among teachers of English as a Foreign Language (EFL), and to find ways of promoting quality assessment. Quantitative methodology was used to collect data. To answer the study questions, a…

  17. Elliptical Local Vessel Density: a Fast and Robust Quality Metric for Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL]; Chaum, Edward [ORNL]; Karnowski, Thomas Paul [ORNL]; Meriaudeau, Fabrice [ORNL]; Tobin Jr, Kenneth William [ORNL]; Abramoff, M.D. [University of Iowa]

    2008-01-01

    A great effort of the research community is geared towards the creation of an automatic screening system able to promptly detect diabetic retinopathy with the use of fundus cameras. In addition, there are some documented approaches to the problem of automatically judging the image quality. We propose a new set of features, independent of Field of View or resolution, to describe the morphology of the patient's vessels. Our initial results suggest that they can be used to estimate the image quality in a time one order of magnitude shorter than previous techniques.

  18. Habitat connectivity as a metric for aquatic microhabitat quality: Application to Chinook salmon spawning habitat

    Science.gov (United States)

    Ryan Carnie; Daniele Tonina; Jim McKean; Daniel Isaak

    2016-01-01

    Quality of fish habitat at the scale of a single fish, at metre resolution, which we defined here as microhabitat, has primarily been evaluated on short reaches, and the results have been extended through long river segments with methods that do not account for connectivity, a measure of the spatial distribution of habitat patches. However, recent...

  19. Modeling the interannual variability of microbial quality metrics of irrigation water in a Pennsylvanian stream

    Science.gov (United States)

    Knowledge of the microbial quality of irrigation waters is extremely limited. For this reason, the US FDA has promulgated the Produce Rule, mandating the testing of irrigation water sources for many farms. The rule requires the collection and analysis of at least 20 water samples over two to four ye...

  20. A metrics-based comparison of secondary user quality between iOS and Android

    NARCIS (Netherlands)

    T. Amman

    2014-01-01

    Native mobile applications are gaining popularity in the commercial market. There is no other economic sector that grows as fast. A lot of economic research is done in this sector, but there is very little research that deals with qualities for mobile application developers. This paper

  1. Audiovisual quality assessment in communications applications: Current status, trends and challenges

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2010-01-01

    Audiovisual quality assessment is one of the major challenges in multimedia communications. Traditionally, algorithm-based (objective) assessment methods have focused primarily on the compression artifacts. However, compression is only one of the numerous factors influencing the perception. In communications applications, transmission errors, including packet losses and bit errors, can be a significant source of quality degradation. Also the environmental factors, such as background noise, ambient light and display characteristics, pose an impact on perception. A third aspect that has not been widely addressed in practical quality metrics is the co-impact of audio and video qualities. This paper provides an overview of the current trends and challenges in objective audiovisual quality assessment, with emphasis on communication applications.

  2. Quality assurance in performance assessments

    Energy Technology Data Exchange (ETDEWEB)

    Maul, P.R.; Watkins, B.M.; Salter, P.; Mcleod, R. [QuantiSci Ltd, Henley-on-Thames (United Kingdom)]

    1999-01-01

    Following publication of the Site-94 report, SKI wishes to review how Quality Assurance (QA) issues could be treated in future work both in undertaking their own Performance Assessment (PA) calculations and in scrutinising documents supplied by SKB (on planning a repository for spent fuels in Sweden). The aim of this report is to identify the key QA issues and to outline the nature and content of a QA plan which would be suitable for SKI, bearing in mind the requirements and recommendations of relevant standards. Emphasis is on issues which are specific to Performance Assessments for deep repositories for radioactive wastes, but consideration is also given to issues which need to be addressed in all large projects. Given the long time over which the performance of a deep repository system must be evaluated, the demonstration that a repository is likely to perform satisfactorily relies on the use of computer-generated model predictions of system performance. This raises particular QA issues which are generally not encountered in other technical areas (for instance, power station operations). The traceability of the arguments used is a key QA issue, as are conceptual model uncertainty, and code verification and validation; these were all included in the consideration of overall uncertainties in the Site-94 project. Additionally, issues which are particularly relevant to SKI include: How QA in a PA fits in with the general QA procedures of the organisation undertaking the work. The relationship between QA as applied by the regulator and the implementor of a repository development programme. Section 2 introduces the discussion of these issues by reviewing the standards and guidance which are available from national and international organisations. This is followed in Section 3 by a review of specific issues which arise from the Site-94 exercise. An outline procedure for managing QA issues in SKI is put forward as a basis for discussion in Section 4. It is hoped that

  3. The Use of Performance Metrics for the Assessment of Safeguards Effectiveness at the State Level

    Energy Technology Data Exchange (ETDEWEB)

    Bachner, K. M.; Anzelon, George (Lawrence Livermore National Laboratory, Livermore, CA); Feldman, Yana (Lawrence Livermore National Laboratory, Livermore, CA); Goodman, Mark (Department of State, Washington, DC); Lockwood, Dunbar (National Nuclear Security Administration, Washington, DC); Sanborn, Jonathan B. (JBS Consulting, LLC, Arlington, VA)

    2016-07-24

    In the ongoing evolution of International Atomic Energy Agency (IAEA) safeguards at the state level, many safeguards implementation principles have been emphasized: effectiveness, efficiency, non-discrimination, transparency, focus on sensitive materials, centrality of material accountancy for detecting diversion, independence, objectivity, and grounding in technical considerations, among others. These principles are subject to differing interpretations and prioritizations and sometimes conflict. This paper is an attempt to develop metrics and address some of the potential tradeoffs inherent in choices about how various safeguards policy principles are implemented. The paper (1) carefully defines effective safeguards, including in the context of safeguards approaches that take account of the range of state-specific factors described by the IAEA Secretariat and taken note of by the Board in September 2014, and (2) makes use of performance metrics to help document, and to make transparent, how safeguards implementation would meet such effectiveness requirements.

  4. Quality of life and functional capacity outcomes in the MOMENTUM 3 trial at 6 months: A call for new metrics for left ventricular assist device patients.

    Science.gov (United States)

    Cowger, Jennifer A; Naka, Yoshifumi; Aaronson, Keith D; Horstmanshof, Douglas; Gulati, Sanjeev; Rinde-Hoffman, Debbie; Pinney, Sean; Adatya, Sirtaz; Farrar, David J; Jorde, Ulrich P

    2018-01-01

    The Multicenter Study of MAGLEV Technology in Patients Undergoing Mechanical Circulatory Support Therapy with HeartMate 3 (MOMENTUM 3) clinical trial demonstrated improved 6-month event-free survival, but a detailed analysis of health-related quality of life (HR-QOL) and functional capacity (FC) was not presented. Further, the effect of early serious adverse events (SAEs) on these metrics and on the general ability to live well while supported with a left ventricular assist system (LVAS) warrants evaluation. FC (New York Heart Association [NYHA] and 6-minute walk test [6MWT]) and HR-QOL (European Quality of Life [EQ-5D-5L] and the Kansas City Cardiomyopathy [KCCQ]) assessments were obtained at baseline and 6 months after HeartMate 3 (HM3, n = 151; Abbott, Abbott Park, IL) or HeartMate II (HMII, n = 138; Abbott) implant as part of the MOMENTUM 3 clinical trial. Metrics were compared between devices and in those with and without events. The proportion of patients "living well on an LVAS" at 6 months, defined as alive with satisfactory FC (NYHA I/II or 6MWT > 300 meters) and HR-QOL (overall KCCQ > 50), was evaluated. The median (25th-75th percentile) patient KCCQ (change for HM3: +28 [10-46]; HMII: +29 [9-48]) and EQ-5D-5L (change for HM3: -1 [-5 to 0]; HMII: -2 [-6 to 0]) scores improved from baseline to 6 months (p < 0.05). Likewise, there was an equivalent improvement in 6MWT distance at 6 months in HM3 (+94 [1-274] meters) and HMII (+188 [43-340] meters) from baseline. In patients with SAEs (n = 188), 6MWT distances increased from baseline, but HR-QOL metrics did not change. The development of left ventricular assist device-specific HR-QOL tools is needed to better characterize the effect of SAEs on a patient's well-being. MOMENTUM 3 clinical trial #NCT02224755. Copyright © 2018 International Society for the Heart and Lung Transplantation. Published by Elsevier Inc. All rights reserved.

  5. Statistical air quality predictions for public health surveillance: evaluation and generation of county level metrics of PM2.5 for the environmental public health tracking network.

    Science.gov (United States)

    Vaidyanathan, Ambarish; Dimmick, William Fred; Kegler, Scott R; Qualters, Judith R

    2013-03-14

    The Centers for Disease Control and Prevention (CDC) developed county level metrics for the Environmental Public Health Tracking Network (Tracking Network) to characterize potential population exposure to airborne particles with an aerodynamic diameter of 2.5 μm or less (PM(2.5)). These metrics are based on Federal Reference Method (FRM) air monitor data in the Environmental Protection Agency (EPA) Air Quality System (AQS); however, monitor data are limited in space and time. In order to understand air quality in all areas and on days without monitor data, the CDC collaborated with the EPA in the development of hierarchical Bayesian (HB) based predictions of PM(2.5) concentrations. This paper describes the generation and evaluation of HB-based county level estimates of PM(2.5). We used three geo-imputation approaches to convert grid-level predictions to county level estimates. We used Pearson (r) and Kendall Tau-B (τ) correlation coefficients to assess the consistency of the relationship, and examined the direct differences (by county) between HB-based estimates and AQS-based concentrations at the daily level. We further compared the annual averages using Tukey mean-difference plots. During the year 2005, fewer than 20% of the counties in the conterminous United States (U.S.) had PM(2.5) monitoring and 32% of the conterminous U.S. population resided in counties with no AQS monitors. County level estimates resulting from population-weighted centroid containment approach were correlated more strongly with monitor-based concentrations (r = 0.9; τ = 0.8) than were estimates from other geo-imputation approaches. The median daily difference was -0.2 μg/m(3) with an interquartile range (IQR) of 1.9 μg/m(3) and the median relative daily difference was -2.2% with an IQR of 17.2%. Under-prediction was more prevalent at higher concentrations and for counties in the western U.S. While the relationship between county level HB-based estimates and AQS-based concentrations is
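
    The agreement statistics used in this record (Pearson r, Kendall tau-b and relative differences between monitor-based and model-based values) can be computed directly with SciPy, as sketched below on synthetic data; the real AQS and HB county values are not reproduced.

    ```python
    # Sketch of the agreement checks described above: Pearson r, Kendall tau-b, and median
    # relative difference between monitor-based and model-based county values (synthetic data).
    import numpy as np
    from scipy.stats import pearsonr, kendalltau

    rng = np.random.default_rng(6)
    aqs = rng.gamma(shape=4.0, scale=3.0, size=500)        # monitor-based PM2.5, ug/m3
    hb = aqs + rng.normal(-0.2, 1.5, size=500)             # model-based estimate with small bias

    r, _ = pearsonr(aqs, hb)
    tau, _ = kendalltau(aqs, hb)                           # tau-b is the default variant
    rel_diff = 100.0 * (hb - aqs) / aqs
    print(round(r, 2), round(tau, 2), round(np.median(rel_diff), 1))
    ```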

  6. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and depl...... Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals....

  7. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2012-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and depl...... Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals....

  8. Low-cost oblique illumination: an image quality assessment.

    Science.gov (United States)

    Ruiz-Santaquiteria, Jesus; Espinosa-Aranda, Jose Luis; Deniz, Oscar; Sanchez, Carlos; Borrego-Ramos, Maria; Blanco, Saul; Cristobal, Gabriel; Bueno, Gloria

    2018-01-01

    We study the effectiveness of several low-cost oblique illumination filters to improve overall image quality, in comparison with standard bright field imaging. For this purpose, a dataset composed of 3360 diatom images belonging to 21 taxa was acquired. Subjective and objective image quality assessments were performed. The subjective evaluation was carried out by a group of diatom experts through a psychophysical test in which resolution, focus, and contrast were assessed. Moreover, some objective no-reference image quality metrics were applied to the same image dataset to complete the study, together with the calculation of several texture features to analyze the effect of these filters in terms of textural properties. Both image quality evaluation methods, subjective and objective, showed better results for images acquired using these illumination filters in comparison with the unfiltered image. These promising results confirm that this kind of illumination filter can be a practical way to improve image quality, thanks to the simple and low-cost design and manufacturing process. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  9. How to assess the quality of your analytical method?

    Science.gov (United States)

    Topic, Elizabeta; Nikolac, Nora; Panteghini, Mauro; Theodorsson, Elvar; Salvagno, Gian Luca; Miler, Marijana; Simundic, Ana-Maria; Infusino, Ilenia; Nordin, Gunnar; Westgard, Sten

    2015-10-01

    Laboratory medicine is amongst the fastest growing fields in medicine, crucial in diagnosis, support of prevention and in the monitoring of disease for individual patients and for the evaluation of treatment for populations of patients. Therefore, high quality and safety in laboratory testing has a prominent role in high-quality healthcare. Applied knowledge and competencies of professionals in laboratory medicine increase the clinical value of laboratory results by decreasing laboratory errors, increasing appropriate utilization of tests, and increasing cost effectiveness. This collective paper provides insights into how to validate laboratory assays and assess the quality of methods. It is a synopsis of the lectures at the 15th European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Continuing Postgraduate Course in Clinical Chemistry and Laboratory Medicine entitled "How to assess the quality of your method?" (Zagreb, Croatia, 24-25 October 2015). The leading topics discussed include who should carry out validation/verification of methods, what it should cover and when it should be done; verification of imprecision and bias; verification of reference intervals; verification of qualitative test procedures; verification of blood collection systems; comparability of results among methods and analytical systems; limit of detection, limit of quantification and limit of decision; how to assess measurement uncertainty; the optimal use of Internal Quality Control and External Quality Assessment data; Six Sigma metrics; performance specifications; as well as biological variation. This article, which continues the annual tradition of collective papers from the EFLM continuing postgraduate courses in clinical chemistry and laboratory medicine, aims to provide further contributions by discussing the quality of laboratory methods and measurements and, at the same time, to offer continuing professional development to the attendees.

  10. A Web-Based Graphical Food Frequency Assessment System: Design, Development and Usability Metrics

    Science.gov (United States)

    Alawadhi, Balqees; Fallaize, Rosalind; Lovegrove, Julie A; Hwang, Faustina

    2017-01-01

    Background Food frequency questionnaires (FFQs) are well established in the nutrition field, but there remain important questions around how to develop online tools in a way that can facilitate wider uptake. Also, FFQ user acceptance and evaluation have not been investigated extensively. Objective This paper presents a Web-based graphical food frequency assessment system that addresses challenges of reproducibility, scalability, mobile friendliness, security, and usability and also presents the utilization metrics and user feedback from a deployment study. Methods The application design employs a single-page application Web architecture with back-end services (database, authentication, and authorization) provided by Google Firebase’s free plan. Its design and responsiveness take advantage of the Bootstrap framework. The FFQ was deployed in Kuwait as part of the EatWellQ8 study during 2016. The EatWellQ8 FFQ contains 146 food items (including drinks). Participants were recruited in Kuwait without financial incentive. Completion time was based on browser timestamps and usability was measured using the System Usability Scale (SUS), scoring between 0 and 100. Products with a SUS higher than 70 are considered to be good. Results A total of 235 participants created accounts in the system, and 163 completed the FFQ. Of those 163 participants, 142 reported their gender (93 female, 49 male) and 144 reported their date of birth (mean age of 35 years, range from 18-65 years). The mean completion time for all FFQs (n=163), excluding periods of interruption, was 14.2 minutes (95% CI 13.3-15.1 minutes). Female participants (n=93) completed in 14.1 minutes (95% CI 12.9-15.3 minutes) and male participants (n=49) completed in 14.3 minutes (95% CI 12.6-15.9 minutes). Participants using laptops or desktops (n=69) completed the FFQ in an average of 13.9 minutes (95% CI 12.6-15.1 minutes) and participants using smartphones or tablets (n=91) completed in an average of 14.5 minutes (95

  11. A Web-Based Graphical Food Frequency Assessment System: Design, Development and Usability Metrics.

    Science.gov (United States)

    Franco, Rodrigo Zenun; Alawadhi, Balqees; Fallaize, Rosalind; Lovegrove, Julie A; Hwang, Faustina

    2017-05-08

    Food frequency questionnaires (FFQs) are well established in the nutrition field, but there remain important questions around how to develop online tools in a way that can facilitate wider uptake. Also, FFQ user acceptance and evaluation have not been investigated extensively. This paper presents a Web-based graphical food frequency assessment system that addresses challenges of reproducibility, scalability, mobile friendliness, security, and usability and also presents the utilization metrics and user feedback from a deployment study. The application design employs a single-page application Web architecture with back-end services (database, authentication, and authorization) provided by Google Firebase's free plan. Its design and responsiveness take advantage of the Bootstrap framework. The FFQ was deployed in Kuwait as part of the EatWellQ8 study during 2016. The EatWellQ8 FFQ contains 146 food items (including drinks). Participants were recruited in Kuwait without financial incentive. Completion time was based on browser timestamps and usability was measured using the System Usability Scale (SUS), scoring between 0 and 100. Products with a SUS higher than 70 are considered to be good. A total of 235 participants created accounts in the system, and 163 completed the FFQ. Of those 163 participants, 142 reported their gender (93 female, 49 male) and 144 reported their date of birth (mean age of 35 years, range from 18-65 years). The mean completion time for all FFQs (n=163), excluding periods of interruption, was 14.2 minutes (95% CI 13.3-15.1 minutes). Female participants (n=93) completed in 14.1 minutes (95% CI 12.9-15.3 minutes) and male participants (n=49) completed in 14.3 minutes (95% CI 12.6-15.9 minutes). Participants using laptops or desktops (n=69) completed the FFQ in an average of 13.9 minutes (95% CI 12.6-15.1 minutes) and participants using smartphones or tablets (n=91) completed in an average of 14.5 minutes (95% CI 13.2-15.8 minutes). The median SUS
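
    The SUS score reported in both records follows the standard scoring rule (odd items contribute the rating minus 1, even items contribute 5 minus the rating, and the sum is scaled by 2.5 to a 0-100 range); a minimal sketch with hypothetical responses:

```python
def sus_score(responses):
    """responses: ten 1-5 ratings in questionnaire order. Returns a 0-100 SUS score."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd items positive, even negative
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0, above the 70 "good" mark
```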

  12. Operationalizing the Measuring What Matters Spirituality Quality Metric in a Population of Hospitalized, Critically Ill Patients and Their Family Members.

    Science.gov (United States)

    Aslakson, Rebecca A; Kweku, Josephine; Kinnison, Malonnie; Singh, Sarabdeep; Crowe, Thomas Y

    2017-03-01

    Measuring What Matters (MWM) quality indicators support measurement of the percentage of patients who have spiritual discussions, if desired. The objective of this study was to 1) determine the ease of, and barriers to, prospectively collecting MWM spirituality quality measure data and 2) further explore the importance of spirituality in a seriously ill, hospitalized population of critically ill patients and their family members. Electronic medical record (EMR) review and cross-sectional survey of intensive care unit (ICU) patients and their family members from October to December 2015. Participants were in four adult ICUs totaling 68 beds at a single academic, urban, tertiary care center which has ICU-assigned chaplains and an in-house, 24-hour, on-call chaplain. All patients had a "Spiritual Risk Screen" which included two questions identifying patient religion and whether a chaplain visit was desired. Approximately 2/3 of ICU patients were eligible, and there were 144 respondents (50% female; 57% patient and 43% family member), with the majority being Caucasian or African American (68% and 21%, respectively). Common religious identifications were Christian or no faith tradition (76% and 11%, respectively). Approximately half of patients had an EMR chaplain note although it did not document presence of a "spiritual discussion." No study patients received palliative care consultation. A majority (85%) noted that spirituality was "important to them" and that prevalence remained high across respondent age, race, faith tradition, or admitting ICU. Operationalizing the MWM spirituality quality indicator was challenging as elements of a "spiritual screening" or documentation of a "spiritual discussion" were not clearly documented in the EMR. The high prevalence of spirituality among respondents validates the importance of spirituality as a potential quality metric. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All

  13. Automated FMV image quality assessment based on power spectrum statistics

    Science.gov (United States)

    Kalukin, Andrew

    2015-05-01

    Factors that degrade image quality in video and other sensor collections, such as noise, blurring, and poor resolution, also affect the spatial power spectrum of imagery. Prior research in human vision and image science from the last few decades has shown that the image power spectrum can be useful for assessing the quality of static images. The research in this article explores the possibility of using the image power spectrum to automatically evaluate full-motion video (FMV) imagery frame by frame. This procedure makes it possible to identify anomalous images and scene changes, and to keep track of gradual changes in quality as collection progresses. This article will describe a method to apply power spectral image quality metrics for images subjected to simulated blurring, blocking, and noise. As a preliminary test on videos from multiple sources, image quality measurements for image frames from 185 videos are compared to analyst ratings based on ground sampling distance. The goal of the research is to develop an automated system for tracking image quality during real-time collection, and to assign ratings to video clips for long-term storage, calibrated to standards such as the National Imagery Interpretability Rating System (NIIRS).
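
    A minimal sketch of the core statistic described above, a radially averaged spatial power spectrum computed frame by frame; the exact quality metrics derived from it in the article are not reproduced here, and frame loading is assumed to happen elsewhere.

```python
import numpy as np

def radial_power_spectrum(frame: np.ndarray, nbins: int = 64) -> np.ndarray:
    """Mean spectral power in nbins radial frequency bins of one frame."""
    f = np.fft.fftshift(np.fft.fft2(frame.astype(float)))
    power = np.abs(f) ** 2
    h, w = frame.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.linspace(0, r.max(), nbins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, nbins - 1)
    total = np.bincount(idx, weights=power.ravel(), minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    return total / np.maximum(counts, 1)

# Blurring depresses the high-frequency tail, noise raises it, and blocking adds
# peaks at multiples of the block frequency.
```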

  14. Assessing Field Spectroscopy Metadata Quality

    Directory of Open Access Journals (Sweden)

    Barbara A. Rasaiah

    2015-04-01

    Full Text Available This paper presents the proposed criteria for measuring the quality and completeness of field spectroscopy metadata in a spectral archive. Definitions for metadata quality and completeness for field spectroscopy datasets are introduced. Unique methods for measuring quality and completeness of metadata to meet the requirements of field spectroscopy datasets are presented. Field spectroscopy metadata quality can be defined in terms of (but is not limited to) logical consistency, lineage, semantic and syntactic error rates, compliance with a quality standard, quality assurance by a recognized authority, and reputational authority of the data owners/data creators. Two spectral libraries are examined as case studies of operationalized metadata policies, and the degree to which they are aligned with the needs of field spectroscopy scientists. The case studies reveal that the metadata in publicly available spectral datasets are underperforming on the quality and completeness measures. This paper is part two in a series examining the issues central to a metadata standard for field spectroscopy datasets.

  15. Privacy Metrics and Boundaries

    NARCIS (Netherlands)

    L-F. Pau (Louis-François)

    2005-01-01

    This paper aims at defining a set of privacy metrics (quantitative and qualitative) for the relation between a privacy protector and an information gatherer. The aims of such metrics are: to allow assessing and comparing different user scenarios and their differences; for

  16. Integration of Classification Tree Analyses and Spatial Metrics to Assess Changes in Supraglacial Lakes in the Karakoram Himalaya

    Science.gov (United States)

    Bulley, H. N.; Bishop, M. P.; Shroder, J. F.; Haritashya, U. K.

    2007-12-01

    Alpine glacier responses to climate change reveal increases in retreat with corresponding increases in production of glacier melt water and development of supraglacial lakes. The rate of occurrence and spatial extent of lakes in the Himalaya are difficult to determine because current spectral-based image analyses of glacier surfaces are limited by anisotropic reflectance and the lack of high quality digital elevation models. Additionally, the limitations of multivariate classification algorithms to adequately segregate glacier features in satellite imagery have led to an increased interest in non-parametric methods, such as classification and regression trees. Our objectives are to demonstrate the utility of a semi-automated approach that integrates classification-tree-based image segmentation and object-oriented analysis to differentiate supraglacial lakes from glacier debris, ice cliffs, lateral and medial moraines. The classification-tree process involves a binary, recursive, partitioning non-parametric method that can account for non-linear relationships. We used 2002 and 2004 ASTER VNIR and SWIR imagery to assess the Baltoro Glacier in the Karakoram Himalaya. Other input variables include the normalized difference water index (NDWI), ratio images, Moran's I image, and fractal dimension. The classification tree was used to generate initial image segments and it was particularly effective in differentiating glacier features. The object-oriented analysis included the use of shape and spatial metrics to refine the classification-tree output. Classification-tree results show that NDWI is the most important single variable for characterizing the glacier-surface features, followed by NIR/IR ratio, IR band, and IR/Red ratio variables. Lake features extracted from both images show there were 142 lakes in 2002 as compared to 188 lakes in 2004. In general, there was a significant increase in planimetric area from 2002 to 2004, and we documented the formation of 46 new
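
    As a rough illustration of two ingredients named above, the sketch below computes an NDWI layer (McFeeters form) and fits a generic classification tree; the band arrays, feature builder and training labels are hypothetical placeholders, not the authors' actual workflow.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized difference water index, (Green - NIR) / (Green + NIR)."""
    return (green - nir) / np.maximum(green + nir, 1e-6)

# Hypothetical workflow: stack per-pixel predictors (NDWI, band ratios, texture
# measures, ...) into X with shape (n_pixels, n_features) and class labels y
# (lake / debris / ice cliff / moraine), then segment the image with the tree.
# X, y = build_features(aster_vnir, aster_swir), training_labels
# tree = DecisionTreeClassifier(max_depth=6).fit(X, y)
# segments = tree.predict(X).reshape(image_shape)
```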

  17. Metrics, Dose, and Dose Concept: The Need for a Proper Dose Concept in the Risk Assessment of Nanoparticles

    Directory of Open Access Journals (Sweden)

    Myrtill Simkó

    2014-04-01

    Full Text Available In order to calculate the dose for nanoparticles (NP), (i) relevant information about the dose metrics and (ii) a proper dose concept are crucial. Since the appropriate metrics for NP toxicity are yet to be elaborated, a general dose calculation model for nanomaterials is not available. Here we propose how to develop a dose assessment model for NP in analogy to the radiation protection dose calculation, introducing the so-called “deposited and the equivalent dose”. As a dose metric we propose the total deposited NP surface area (SA), which has been shown frequently to determine toxicological responses e.g. of lung tissue. The deposited NP dose is proportional to the total surface area of deposited NP per tissue mass, and takes into account primary and agglomerated NP. By using several weighting factors the equivalent dose additionally takes into account various physico-chemical properties of the NP which are influencing the biological responses. These weighting factors consider the specific surface area, the surface textures, the zeta-potential as a measure for surface charge, the particle morphology such as the shape and the length-to-diameter ratio (aspect ratio), the band gap energy levels of metal and metal oxide NP, and the particle dissolution rate. Furthermore, we discuss how these weighting factors influence the equivalent dose of the deposited NP.
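
    A heavily simplified sketch of the proposed dose concept, assuming the weighting factors combine multiplicatively by analogy with radiation-protection weighting; the abstract does not specify the combination rule, and all numbers below are hypothetical.

```python
def deposited_dose(particle_surface_area_cm2: float, tissue_mass_g: float) -> float:
    """Deposited NP dose: total deposited particle surface area per tissue mass."""
    return particle_surface_area_cm2 / tissue_mass_g

def equivalent_dose(deposited: float, weighting_factors: dict) -> float:
    """Equivalent dose: deposited dose scaled by property weighting factors
    (multiplicative combination assumed here; the abstract does not specify it)."""
    w = 1.0
    for factor in weighting_factors.values():
        w *= factor
    return deposited * w

dose = deposited_dose(particle_surface_area_cm2=2.5, tissue_mass_g=0.8)
eq = equivalent_dose(dose, {"surface_charge": 1.2, "aspect_ratio": 1.5, "dissolution": 0.8})
print(dose, eq)
```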

  18. Quality assessment of occupational health services instruments

    NARCIS (Netherlands)

    van Dijk, F. J.; de Kort, W. L.; Verbeek, J. H.

    1993-01-01

    Interest in the quality of instruments for occupational health services is growing as a result of European legislation on preventive services stressing, for example, risk identification and assessment. The quality of the services can be enhanced when the quality of the applied instruments can be

  19. Assessing Quality in Home Visiting Programs

    Science.gov (United States)

    Korfmacher, Jon; Laszewski, Audrey; Sparr, Mariel; Hammel, Jennifer

    2013-01-01

    Defining quality and designing a quality assessment measure for home visitation programs is a complex and multifaceted undertaking. This article summarizes the process used to create the Home Visitation Program Quality Rating Tool (HVPQRT) and identifies next steps for its development. The HVPQRT measures both structural and dynamic features of…

  20. Assessing quality in volcanic ash soils

    Science.gov (United States)

    Terry L. Craigg; Steven W. Howes

    2007-01-01

    Forest managers must understand how changes in soil quality resulting from project implementation affect long-term productivity and watershed health. Volcanic ash soils have unique properties that affect their quality and function; and which may warrant soil quality standards and assessment techniques that are different from other soils. We discuss the concept of soil...

  1. STATISTICS IN SERVICE QUALITY ASSESSMENT

    Directory of Open Access Journals (Sweden)

    Dragana Gardašević

    2012-09-01

    Full Text Available For any quality evaluation in sports, science, education, and so on, it is useful to collect data in order to construct a strategy to improve the quality of services offered to the user. For this purpose, we use statistical software packages to process the collected data with the aim of increasing customer satisfaction. The principle is demonstrated with the example of student satisfaction ratings at Belgrade Polytechnic (as users) of the quality of the institution (Belgrade Polytechnic). Here, the emphasis is on statistical analysis as a tool for quality control aimed at improvement, not on the interpretation of results. Therefore, the above can be used as a model in sport to improve overall results.

  2. Comparative assessment of GIS-based methods and metrics for estimating long-term exposures to air pollution

    Science.gov (United States)

    Gulliver, John; de Hoogh, Kees; Fecht, Daniela; Vienneau, Danielle; Briggs, David

    2011-12-01

    The development of geographical information system techniques has opened up a wide array of methods for air pollution exposure assessment. The extent to which these provide reliable estimates of air pollution concentrations is nevertheless not clearly established. Nor is it clear which methods or metrics should be preferred in epidemiological studies. This paper compares the performance of ten different methods and metrics in terms of their ability to predict mean annual PM10 concentrations across 52 monitoring sites in London, UK. Metrics analysed include indicators (distance to nearest road, traffic volume on nearest road, heavy duty vehicle (HDV) volume on nearest road, road density within 150 m, traffic volume within 150 m and HDV volume within 150 m) and four modelling approaches: based on the nearest monitoring site, kriging, dispersion modelling and land use regression (LUR). Measures were computed in a GIS, and resulting metrics calibrated and validated against monitoring data using a form of grouped jack-knife analysis. The results show that PM10 concentrations across London show little spatial variation. As a consequence, most methods can predict the average without serious bias. Few of the approaches, however, show good correlations with monitored PM10 concentrations, and most predict no better than a simple classification based on site type. Only land use regression reaches acceptable levels of correlation (R2 = 0.47), though this can be improved by also including information on site type. This might therefore be taken as a recommended approach in many studies, though care is needed in developing meaningful land use regression models, and like any method they need to be validated against local data before their application as part of epidemiological studies.
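
    A minimal land use regression sketch in the spirit of the comparison above: regress monitored annual-mean PM10 on GIS-derived predictors and validate against held-out sites. The paper used a grouped jack-knife; simple leave-one-out is shown here as a stand-in, and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.random((52, 3))                 # e.g. road density, traffic within 150 m, site type
y = 20 + X @ np.array([5.0, 3.0, 1.5]) + rng.normal(0, 1, 52)  # synthetic annual PM10

pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
r2 = np.corrcoef(y, pred)[0, 1] ** 2    # validation R^2 against held-out sites
print(f"cross-validated R^2 = {r2:.2f}")
```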

  3. When can we measure stress noninvasively? Postdeposition effects on a fecal stress metric confound a multiregional assessment.

    Science.gov (United States)

    Wilkening, Jennifer L; Ray, Chris; Varner, Johanna

    2016-01-01

    Measurement of stress hormone metabolites in fecal samples has become a common method to assess physiological stress in wildlife populations. Glucocorticoid metabolite (GCM) measurements can be collected noninvasively, and studies relating this stress metric to anthropogenic disturbance are increasing. However, environmental characteristics (e.g., temperature) can alter measured GCM concentration when fecal samples cannot be collected immediately after defecation. This effect can confound efforts to separate environmental factors causing predeposition physiological stress in an individual from those acting on a fecal sample postdeposition. We used fecal samples from American pikas (Ochotona princeps) to examine the influence of environmental conditions on GCM concentration by (1) comparing GCM concentration measured in freshly collected control samples to those placed in natural habitats for timed exposure, and (2) relating GCM concentration in samples collected noninvasively throughout the western United States to local environmental characteristics measured before and after deposition. Our timed-exposure trials clarified the spatial scale at which exposure to environmental factors postdeposition influences GCM concentration in pika feces. Also, fecal samples collected from occupied pika habitats throughout the species' range revealed significant relationships between GCM and metrics of climate during the postdeposition period (maximum temperature, minimum temperature, and precipitation during the month of sample collection). Conversely, we found no such relationships between GCM and metrics of climate during the predeposition period (prior to the month of sample collection). Together, these results indicate that noninvasive measurement of physiological stress in pikas across the western US may be confounded by climatic conditions in the postdeposition environment when samples cannot be collected immediately after defecation. Our results reiterate the importance

  4. Clinical Music Study Quality Assessment Scale (MUSIQUAS)

    NARCIS (Netherlands)

    Jaschke, A.C.; Eggermont, L.H.P.; Scherder, E.J.A.; Shippton, M.; Hiomonides, I.

    2013-01-01

    AIMS Quality assessment of studies is essential for the understanding and application of these in systematic reviews and meta analyses, the two “gold standards” of medical sciences. Publications in scientific journals have extensively used assessment scales to address poor methodological quality,

  5. Moving beyond the concept of "primary forest" as a metric of forest environment quality.

    Science.gov (United States)

    Bernier, P Y; Paré, D; Stinson, G; Bridge, S R J; Kishchuk, B E; Lemprière, T C; Thiffault, E; Titus, B D; Vasbinder, W

    2017-03-01

    The United Nations Food and Agriculture Organization (FAO) has been reporting country-level area in primary forests in its Global Forest Resource Assessment since 2005. The FAO definition of a primary forest (naturally regenerated forest of native species where there are no clearly visible indications of human activities and the ecological processes are not significantly disturbed) is generally accepted as authoritative and is being used in policy making. However, problems with this definition undermine our capacity to obtain globally coherent estimates. In addition, the current reporting on primary forests fails to consider the complementarity of non-primary forests toward the maintenance of ecosystem services. These issues undermine the appropriate tracking of changes in primary and non-primary forests, and the assessment of impacts of such changes on ecosystem services. We present the case for an operational reconsideration of the primary forest concept and discuss how alternatives or supplements might be developed. © 2016 by the Ecological Society of America.

  6. Open access colonoscopy: Critical appraisal of indications, quality metrics and outcomes.

    Science.gov (United States)

    Ghaoui, Rony; Ramdass, Sheryl; Friderici, Jennifer; Desilets, David J

    2016-08-01

    In an era of cost containment and measurement of value, screening for colon cancer represents a clear target for better accountability. Bundling payment is a real possibility and will likely have to rely on open-access colonoscopy (OAC). OAC is a method to allow patients to undergo endoscopy without prior evaluation by a gastroenterologist. We conducted a cross-sectional study to evaluate the indications and outcomes among patients scheduled for OAC or traditional colonoscopy at a tertiary medical center. We hypothesized that outcomes in OAC patients would be similar to those from traditional referral modes. Using a standardized data abstraction form, we documented indications for colonoscopy, clinical outcomes (complications, emergency room visits, phone calls), and compliance with quality indicators (QI) in a random sample of 1000 patients who underwent an outpatient colonoscopy at an academic medical center in 2013. We compared baseline characteristics and outcomes between two cohorts: OAC vs. patients who were scheduled after previous evaluation by a gastroenterologist or physician assistant, or non-open access colonoscopy (NOAC). Patients in the OAC group were more likely to be male, non-Hispanic, to be privately insured, and to have a screening (vs. diagnostic) indication. However, they were significantly less likely than those in the NOAC group to have a procedure performed once scheduled (45.5% vs. 66.9%), while compliance with quality metrics such as documentation of prep quality was similar (99.8% vs. 98.8%, p=0.24). Patients undergoing OAC are more likely to have a screening colonoscopy but with overall similar clinical outcomes and compliance with QI to patients scheduled as NOAC. OAC remains handicapped by high cancellation and no-show rates. Copyright © 2016 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  7. Assessing technical skill in surgery and endoscopy: a set of metrics and an algorithm (C-PASS) to assess skills in surgical and endoscopic procedures.

    Science.gov (United States)

    Stylopoulos, Nicholas; Vosburgh, Kirby G

    2007-06-01

    Historically, the performance of surgeons has been assessed subjectively by senior surgical staff in both training and operating environments. In this work, the position and motion of surgical instruments are analyzed through an objective process, denoted C-PASS, to measure surgeon performance of laparoscopic, endoscopic, and image-guided procedures. To develop C-PASS, clinically relevant performance characteristics were identified. Then measurement techniques for parameters that represented each characteristic were derived, and analytic techniques were implemented to transform these parameters into explicit, robust metrics. The metrics comprise the C-PASS performance assessment method, which has been validated over the last 3 years in studies of laparoscopy and endoscopy. These studies show that C-PASS is straightforward, reproducible, and accurate. It is sufficiently powerful to assess the efficiency of these complex processes. It is likely that C-PASS and similar approaches will improve skills acquisition and learning and also enable the objective comparison of systems and techniques.

  8. Physical function metric over measure: An illustration with the Patient-Reported Outcomes Measurement Information System (PROMIS) and the Functional Assessment of Cancer Therapy (FACT).

    Science.gov (United States)

    Kaat, Aaron J; Schalet, Benjamin D; Rutsohn, Joshua; Jensen, Roxanne E; Cella, David

    2017-09-08

    Measuring patient-reported outcomes (PROs) is becoming an integral component of quality improvement initiatives, clinical care, and research studies in cancer, including comparative effectiveness research. However, the number of PROs limits comparability across studies. Herein, the authors attempted to link the Functional Assessment of Cancer Therapy-General Physical Well-Being (FACT-G PWB) subscale with the Patient-Reported Outcomes Measurement Information System (PROMIS) Physical Function (PF) calibrated item bank. They also sought to augment a subset of the conceptually most similar FACT-G PWB items with PROMIS PF items to improve the linking. Baseline data from 5506 participants in the Measuring Your Health (MY-Health) study were used to identify the optimal items for linking FACT-G PWB with PROMIS PF. A mixed methods approach identified the optimal items for creating the 5-item FACT/PROMIS-PF5 scale. Both the linked and augmented relationships were cross-validated using the follow-up MY-Health data. A 5-item FACT-G PWB item subset was found to be optimal for linking with PROMIS PF. In addition, a 2-item subset, including only items that were conceptually very similar to the PROMIS item bank content, was augmented with 3 PROMIS PF items. This new FACT/PROMIS-PF5 provided superior score recovery. The PROMIS PF metric allows for the evaluation of the extent to which similar questionnaires can be linked and therefore expressed on the same metric. These results allow for the aggregation of existing data and provide an optimal measure for future studies wishing to use the FACT yet also report on the PROMIS PF metric. Cancer 2017. © 2017 American Cancer Society.

  9. The quest for 'diagnostically lossless' medical image compression: a comparative study of objective quality metrics for compressed medical images

    Science.gov (United States)

    Kowalik-Urbaniak, Ilona; Brunet, Dominique; Wang, Jiheng; Koff, David; Smolarski-Koff, Nadine; Vrscay, Edward R.; Wallace, Bill; Wang, Zhou

    2014-03-01

    Our study, involving a collaboration with radiologists (DK, NSK) as well as a leading international developer of medical imaging software (AGFA), is primarily concerned with improved methods of assessing the diagnostic quality of compressed medical images and the investigation of compression artifacts resulting from JPEG and JPEG2000. In this work, we compare the performances of the Structural Similarity quality measure (SSIM), MSE/PSNR, compression ratio CR and JPEG quality factor Q, based on experimental data collected in two experiments involving radiologists. An ROC and Kolmogorov-Smirnov analysis indicates that compression ratio is not always a good indicator of visual quality. Moreover, SSIM demonstrates the best performance, i.e., it provides the closest match to the radiologists' assessments. We also show that a weighted Youden index and curve-fitting method can provide SSIM and MSE thresholds for acceptable compression ratios.
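
    The two full-reference measures compared above, SSIM and MSE/PSNR, can be computed as sketched below for an original versus compressed image pair; image loading and the diagnostic-quality thresholds derived in the study are outside this sketch.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def compare(original: np.ndarray, compressed: np.ndarray):
    """Return (SSIM, PSNR) for a reference/compressed pair of grayscale images."""
    data_range = float(original.max() - original.min())
    ssim = structural_similarity(original, compressed, data_range=data_range)
    psnr = peak_signal_noise_ratio(original, compressed, data_range=data_range)
    return ssim, psnr
```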

  10. Data connectivity: A critical tool for external quality assessment.

    Science.gov (United States)

    Cheng, Ben; Cunningham, Brad; Boeras, Debrah I; Mafaune, Patron; Simbi, Raiva; Peeling, Rosanna W

    2016-01-01

    Point-of-care (POC) tests have been useful in increasing access to testing and treatment monitoring for HIV. Decentralising testing from laboratories to hundreds of sites around a country presents tremendous challenges in training and quality assurance. In order to address these concerns, companies are now either embedding connectivity in their new POC diagnostic instruments or providing some form of channel for electronic result exchange. These channels allow key performance and operational metrics to be sent automatically from devices in the field to a central database. Setting up connectivity between these POC devices and a central database at the Ministries of Health will allow automated data transmission, creating an opportunity for real-time information on diagnostic instrument performance as well as the competency of the operator through external quality assessment. A pilot programme in Zimbabwe shows that connectivity has significantly improved the turn-around time of external quality assessment result submissions and allows corrective actions to be provided in a timely manner. Furthermore, by linking the data to existing supply chain management software, stock-outs can be minimised. As countries are looking forward to achieving the 90-90-90 targets for HIV, such innovative technologies can automate disease surveillance, improve the quality of testing and strengthen the efficiency of health systems.

  11. Data connectivity: A critical tool for external quality assessment

    Directory of Open Access Journals (Sweden)

    Ben Cheng

    2016-10-01

    Full Text Available Point-of-care (POC) tests have been useful in increasing access to testing and treatment monitoring for HIV. Decentralising testing from laboratories to hundreds of sites around a country presents tremendous challenges in training and quality assurance. In order to address these concerns, companies are now either embedding connectivity in their new POC diagnostic instruments or providing some form of channel for electronic result exchange. These channels allow key performance and operational metrics to be sent automatically from devices in the field to a central database. Setting up connectivity between these POC devices and a central database at the Ministries of Health will allow automated data transmission, creating an opportunity for real-time information on diagnostic instrument performance as well as the competency of the operator through external quality assessment. A pilot programme in Zimbabwe shows that connectivity has significantly improved the turn-around time of external quality assessment result submissions and allows corrective actions to be provided in a timely manner. Furthermore, by linking the data to existing supply chain management software, stock-outs can be minimised. As countries are looking forward to achieving the 90-90-90 targets for HIV, such innovative technologies can automate disease surveillance, improve the quality of testing and strengthen the efficiency of health systems.

  12. Algal Attributes: An Autecological Classification of Algal Taxa Collected by the National Water-Quality Assessment Program

    Science.gov (United States)

    Porter, Stephen D.

    2008-01-01

    Algae are excellent indicators of water-quality conditions, notably nutrient and organic enrichment, and also are indicators of major ion, dissolved oxygen, and pH concentrations and stream microhabitat conditions. The autecology, or physiological optima and tolerance, of algal species for various water-quality contaminants and conditions is relatively well understood for certain groups of freshwater algae, notably diatoms. However, applications of autecological information for water-quality assessments have been limited because of challenges associated with compiling autecological literature from disparate sources, tracking name changes for a large number of algal species, and creating an autecological data base from which algal-indicator metrics can be calculated. A comprehensive summary of algal autecological attributes for North American streams and rivers does not exist. This report describes a large, digital data file containing 28,182 records for 5,939 algal taxa, generally species or variety, collected by the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program. The data file includes 37 algal attributes classified by over 100 algal-indicator codes or metrics that can be calculated easily with readily available software. Algal attributes include qualitative classifications based on European and North American autecological literature, and semi-quantitative, weighted-average regression approaches for estimating optima using regional and national NAWQA data. Applications of algal metrics in water-quality assessments are discussed and national quartile distributions of metric scores are shown for selected indicator metrics.
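
    The weighted-average approach mentioned above for estimating species optima reduces, in its simplest form, to an abundance-weighted mean of an environmental variable across the sites where a taxon occurs; the arrays below are hypothetical.

```python
import numpy as np

def weighted_average_optimum(abundances: np.ndarray, env_values: np.ndarray) -> float:
    """Abundance-weighted mean of an environmental variable (e.g. total nitrogen)."""
    return float(np.sum(abundances * env_values) / np.sum(abundances))

# Hypothetical taxon counts at three sites and the sites' nutrient concentrations:
print(weighted_average_optimum(np.array([5, 20, 2]), np.array([0.8, 1.6, 3.1])))
```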

  13. Investigation of 2 models to set and evaluate quality targets for hb a1c: biological variation and sigma-metrics.

    Science.gov (United States)

    Weykamp, Cas; John, Garry; Gillery, Philippe; English, Emma; Ji, Linong; Lenters-Westra, Erna; Little, Randie R; Roglic, Gojka; Sacks, David B; Takei, Izumi

    2015-05-01

    A major objective of the IFCC Task Force on Implementation of HbA1c Standardization is to develop a model to define quality targets for glycated hemoglobin (Hb A1c). Two generic models, biological variation and sigma-metrics, are investigated. We selected variables in the models for Hb A1c and used data from external quality assurance/proficiency testing programs to evaluate the suitability of the models to set and evaluate quality targets within and between laboratories. In the biological variation model, 48% of individual laboratories and none of the 26 instrument groups met the minimum performance criterion. In the sigma-metrics model, with a total allowable error (TAE) set at 5 mmol/mol (0.46% NGSP), 77% of the individual laboratories and 12 of 26 instrument groups met the 2σ criterion. The biological variation and sigma-metrics models were demonstrated to be suitable for setting and evaluating quality targets within and between laboratories. The sigma-metrics model is more flexible, as both the TAE and the risk of failure can be adjusted to the situation, for example, to requirements related to diagnosis/monitoring or to international authorities. With the aim of reaching (inter)national consensus on advice regarding quality targets for Hb A1c, the Task Force suggests the sigma-metrics model as the model of choice, with default values of 5 mmol/mol (0.46%) for TAE and risk levels of 2σ and 4σ for routine laboratories and laboratories performing clinical trials, respectively. These goals should serve as a starting point for discussion with international stakeholders in the field of diabetes. © 2015 American Association for Clinical Chemistry.
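
    The sigma-metrics model has a standard closed form, sigma = (TAE − |bias|) / CV; a minimal sketch with the abstract's TAE of 5 mmol/mol and hypothetical laboratory bias and imprecision values:

```python
def sigma_metric(tae: float, bias: float, cv: float) -> float:
    """Sigma = (TAE - |bias|) / CV, all in the same units (e.g. mmol/mol Hb A1c)."""
    return (tae - abs(bias)) / cv

print(sigma_metric(tae=5.0, bias=1.0, cv=1.8))  # ~2.2: meets the 2-sigma routine criterion
```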

  14. Quality-assessment expectations and quality-assessment reality in ...

    African Journals Online (AJOL)

    Various data sets were used to examine whether there would be a discrepancy between what lecturers in a particular academic department emphasised when they first considered the feasibility of this type of educational interpreting, and what they actually focused on when assessing the interpreters' performance.

  15. Adaptive Optics Metrics & QC Scheme

    Science.gov (United States)

    Girard, Julien H.

    2017-09-01

    "There are many Adaptive Optics (AO) fed instruments on Paranal and more to come. To monitor their performances and assess the quality of the scientific data, we have developed a scheme and a set of tools and metrics adapted to each flavour of AO and each data product. Our decisions to repeat observations or not depends heavily on this immediate quality control "zero" (QC0). Atmospheric parameters monitoring can also help predict performances . At the end of the chain, the user must be able to find the data that correspond to his/her needs. In Particular, we address the special case of SPHERE."

  16. Assessing the colour quality of LED sources

    DEFF Research Database (Denmark)

    Jost-Boissard, S.; Avouac, P.; Fontoynont, Marc

    2015-01-01

    The CIE General Colour Rendering Index is currently the criterion used to describe and measure the colour-rendering properties of light sources. But over the past years, there has been increasing evidence of its limitations, particularly its ability to predict the perceived colour quality of light sources and especially some LEDs. In this paper, several aspects of perceived colour quality are investigated using a side-by-side paired comparison method and the following criteria: naturalness of fruits and vegetables, colourfulness of the Macbeth Color Checker chart, visual appreciation ... but also with a preference index or a memory index calculated without blue and purple hues. A very low correlation was found between appreciation and naturalness, indicating that colour quality needs more than one metric to describe subjective aspects.

  17. Perceived Quality of Full HD Video - Subjective Quality Assessment

    Directory of Open Access Journals (Sweden)

    Juraj Bienik

    2016-01-01

    Full Text Available In recent years, interest in multimedia services has become a global trend, and this trend is still rising. Video quality is a very significant part of the bundle of multimedia services, which leads to a requirement for quality assessment in the video domain. The quality of video streamed across IP networks is generally influenced by two factors: transmission link imperfections and the efficiency of the compression standards. This paper deals with subjective video quality assessment and the impact of the compression standards H.264, H.265 and VP9 on perceived video quality. The evaluation is done for four full HD sequences that differ in content; the distinction is based on the Spatial (SI) and Temporal (TI) Index of the test sequences. Finally, experimental results show bitrate reductions of up to 30% for H.265 and VP9 compared with the reference H.264.
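
    The SI and TI indices used to characterise scene content have standard definitions in ITU-T P.910 (SI is the maximum over time of the spatial standard deviation of Sobel-filtered frames, TI the maximum standard deviation of frame differences); a minimal sketch, assuming frames are 2-D luminance arrays:

```python
import numpy as np
from scipy import ndimage

def si_ti(frames):
    """frames: iterable of 2-D luminance arrays. Returns (SI, TI) per ITU-T P.910."""
    si_vals, ti_vals, prev = [], [], None
    for f in frames:
        f = f.astype(float)
        sobel = np.hypot(ndimage.sobel(f, axis=0), ndimage.sobel(f, axis=1))
        si_vals.append(sobel.std())
        if prev is not None:
            ti_vals.append((f - prev).std())
        prev = f
    return max(si_vals), (max(ti_vals) if ti_vals else 0.0)
```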

  18. Detectability and image quality metrics based on robust statistics: following non-linear, noise-reduction filters

    Science.gov (United States)

    Tkaczyk, J. Eric; Haneda, Eri; Palma, Giovanni; Iordache, Razvan; Klausz, Remy; Garayt, Mathieu; Carton, Ann-Katherine

    2014-03-01

    Non-linear image processing and reconstruction algorithms that reduce noise while preserving edge detail are currently being evaluated in the medical imaging research literature. We have implemented a robust statistics analysis of four widely utilized methods. This work demonstrates consistent trends in filter impact by which such non-linear algorithms can be evaluated. We calculate observer model test statistics and propose metrics based on measured non-Gaussian distributions that can serve as image quality measures analogous to SDNR and detectability. The filter algorithms, which vary significantly in their approach to noise reduction, include median (MD), bilateral (BL), anisotropic diffusion (AD) and total-variance regularization (TV). It is shown that the detectability of objects limited by Poisson noise is not significantly improved after filtration. There is no benefit to the fraction of correct responses in repeated n-alternate forced choice experiments, for n=2-25. Nonetheless, multi-pixel objects with contrast above the detectability threshold appear visually to benefit from non-linear processing algorithms. In such cases, calculations on highly repeated trials show increased separation of the object-level histogram from the background-level distribution. Increased conspicuity is objectively characterized by robust statistical measures of distribution separation.
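
    The paper's exact robust metrics are not given in the abstract; as one plausible illustration of the idea, the sketch below forms a robust analogue of SDNR using medians and the median absolute deviation in place of means and standard deviations. The ROI masks are assumed inputs.

```python
import numpy as np

def robust_sdnr(image: np.ndarray, object_mask: np.ndarray, background_mask: np.ndarray) -> float:
    """Object/background separation using medians and MAD instead of mean/std."""
    obj, bkg = image[object_mask], image[background_mask]
    mad = np.median(np.abs(bkg - np.median(bkg))) * 1.4826  # consistent with sigma for Gaussian data
    return float((np.median(obj) - np.median(bkg)) / mad)
```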

  19. Teachers’ opinions on quality criteria for Competency Assessment Programs

    NARCIS (Netherlands)

    Baartman, L.K.J.; Bastiaens, T.J.; Kirschner, P.A.; Vleuten, C.P.M. van der

    2006-01-01

    Quality control policies towards Dutch vocational schools have changed dramatically because the government questioned examination quality. Schools must now demonstrate assessment quality to a new Examination Quality Center. Since teachers often design assessments, they must be involved in quality

  20. Fighter agility metrics. M.S. Thesis

    Science.gov (United States)

    Liefer, Randall K.

    1990-01-01

    Fighter flying qualities and combat capabilities are currently measured and compared in terms relating to vehicle energy, angular rates and sustained acceleration. Criteria based on these measurable quantities have evolved over the past several decades and are routinely used to design aircraft structures, aerodynamics, propulsion and control systems. While these criteria, or metrics, have the advantage of being well understood, easily verified and repeatable during test, they tend to measure the steady state capability of the aircraft and not its ability to transition quickly from one state to another. Proposed new metrics to assess fighter aircraft agility are collected and analyzed. A framework for classification of these new agility metrics is developed and applied. A complete set of transient agility metrics is evaluated with a high fidelity, nonlinear F-18 simulation. Test techniques and data reduction methods are proposed. A method of providing cuing information to the pilot during flight test is discussed. The sensitivity of longitudinal and lateral agility metrics to deviations from the pilot cues is studied in detail. The metrics are shown to be largely insensitive to reasonable deviations from the nominal test pilot commands. Instrumentation required to quantify agility via flight test is also considered. With one exception, each of the proposed new metrics may be measured with instrumentation currently available.

  1. Health outcomes in diabetics measured with Minnesota Community Measurement quality metrics

    Directory of Open Access Journals (Sweden)

    Takahashi PY

    2014-12-01

    Full Text Available Paul Y Takahashi,1 Jennifer L St Sauver,2 Lila J Finney Rutten,2 Robert M Jacobson,3 Debra J Jacobson,2 Michaela E McGree,2 Jon O Ebbert1 1Department of Internal Medicine, Division of Primary Care Internal Medicine, 2Department of Health Sciences Research, Mayo Clinic Robert D and Patricia E Kern Center for the Science of Health Care Delivery, 3Department of Pediatric and Adolescent Medicine, Division of Community Pediatrics, Mayo Clinic, Rochester, MN, USA Objective: Our objective was to understand the relationship between optimal diabetes control, as defined by Minnesota Community Measurement (MCM), and adverse health outcomes including emergency department (ED) visits, hospitalizations, 30-day rehospitalization, intensive care unit (ICU) stay, and mortality. Patients and methods: In 2009, we conducted a retrospective cohort study of empaneled Employee and Community Health patients with diabetes mellitus. We followed patients from 1 September 2009 until 30 June 2011 for hospitalization and until 5 January 2014 for mortality. Optimal control of diabetes mellitus was defined as achieving the following three measures: low-density lipoprotein (LDL) cholesterol <100 mg/dL, blood pressure <140/90 mmHg, and hemoglobin A1c <8%. Using the electronic medical record, we assessed hospitalizations, ED visits, ICU stays, 30-day rehospitalizations, and mortality. The chi-square or Wilcoxon rank-sum tests were used to compare those with and without optimal control. We used Cox proportional hazard models to estimate the associations between optimal diabetes mellitus status and each outcome. Results: We identified 5,731 empaneled patients with diabetes mellitus; 2,842 (49.6%) were in the optimal control category. After adjustment, we observed that non-optimally controlled patients had higher risks for hospitalization (hazard ratio [HR] 1.11; 95% confidence interval [CI] 1.00–1.23), ED visits (HR 1.15; 95% CI 1.06–1.25), and mortality (HR 1.29; 95% CI 1.09–1

  2. Data Matching, Integration, and Interoperability for a Metric Assessment of Monographs

    DEFF Research Database (Denmark)

    Zuccala, Alesia Ann; Cornacchia, Roberto

    2016-01-01

    This paper details a unique data experiment carried out at the University of Amsterdam, Center for Digital Humanities. Data pertaining to monographs were collected from three autonomous resources, the Scopus Journal Index, WorldCat.org and Goodreads, and linked according to unique identifiers...... in a new Microsoft SQL database. The purpose of the experiment was to investigate co-varied metrics for a list of book titles based on their citation impact (from Scopus), presence in international libraries (WorldCat.org) and visibility as publicly reviewed items (Goodreads). The results of our data...... experiment highlighted current problems related to citation indices and the way that books are recorded by different citing authors. Our research further demonstrates the primary problem of matching book titles as ‘cited objects’ with book titles held in a union library catalog, given that books are always...

  3. Dental metric assessment of the omo fossils: implications for the phylogenetic position of Australopithecus africanus.

    Science.gov (United States)

    Hunt, K; Vitzthum, V J

    1986-10-01

    The discovery of Australopithecus afarensis has led to new interpretations of hominid phylogeny, some of which reject A. africanus as an ancestor of Homo. Analysis of buccolingual tooth crown dimensions in australopithecines and Homo species by Johanson and White (Science 202:321-330, 1979) revealed that the South African gracile australopithecines are intermediate in size between Laetoli/Hadar hominids and South African robust hominids. Homo, on the other hand, displays dimensions similar to those of A. afarensis and smaller than those of other australopithecines. These authors conclude, therefore, that A. africanus is derived in the direction of A. robustus and is not an ancestor of the Homo clade. However, there is a considerable time gap (ca. 800,000 years) between the Laetoli/Hadar specimens and the earliest Homo specimens; "gracile" hominids from Omo fit into this chronological gap and are from the same geographic area. Because the early specimens at Omo have been designated A. afarensis and the later specimens classified as Homo habilis, Omo offers a unique opportunity to test hypotheses concerning hominid evolution, especially regarding the phylogenetic status of A. africanus. Comparisons of mean cheek teeth breadths disclosed the significant (P less than or equal to 0.05) differences between the Omo sample and the Laetoli/Hadar fossils (P4, M2, and M3), the Homo fossils (P3, P4, M1, M2, and M1), and A. africanus (M3). Of the several possible interpretations of these data, it appears that the high degree of similarity between the Omo sample and the South African gracile australopithecine material warrants considering the two as geographical variants of A. africanus. The geographic, chronologic, and metric attributes of the Omo sample argue for its lineal affinity with A. afarensis and Homo. In conclusion, a consideration of hominid postcanine dental metrics provides no basis for removing A. africanus from the ancestry of the Homo lineage.

  4. Assessing the quality of Smilacis Glabrae Rhizoma (Tufuling) by colormetrics and UPLC-Q-TOF-MS.

    Science.gov (United States)

    He, Xicheng; Yi, Tao; Tang, Yina; Xu, Jun; Zhang, Jianye; Zhang, Yazhou; Dong, Lisha; Chen, Hubiao

    2016-01-01

    The quality of the materials used in Chinese medicine (CM) is generally assessed based on an analysis of their chemical components (e.g., chromatographic fingerprint analysis). However, there is a growing interest in the use of color metrics as an indicator of quality in CM. The aim of this study was to investigate the accuracy and feasibility of using color metrics and chemical fingerprint analysis to determine the quality of Smilacis Glabrae Rhizoma (Tufuling) (SGR). The SGR samples were divided into two categories based on their cross-sectional coloration, including red SGR (R-SGR) and white SGR (W-SGR). Forty-three samples of SGR were collected and their colors were quantized based on an RGB color model using the Photoshop software. An ultra-performance liquid chromatography/quadrupole time-of-flight mass spectrometry (UPLC/QTOF MS) system was used for chromatographic fingerprint analysis to evaluate the quality of the different SGR samples. Hierarchical cluster analysis and dimensional reduction were used to evaluate the data generated from the different samples. Pearson correlation coefficient was used to evaluate the relationship between the color metrics and the chemical compositions of R-SGR and W-SGR. The SGR samples were divided into two different groups based on their cross-sectional color, including color A (CLA) and B (CLB), as well as into two separate classes based on their chemical composition, including chemical A (CHA) and B (CHB). Standard fingerprint chromatograms were established for CHA and CHB. Statistical analysis revealed a significant correlation (Pearson's r = -0.769) between the color metrics and the results of the chemical fingerprint analysis. The SGR samples were divided into two major clusters, and the variations in the colors of these samples reflected differences in the quality of the SGR material. Furthermore, we observed a statistically significant correlation between the color metrics and the quality of the SGR material.

  5. Forensic Metrics

    Science.gov (United States)

    Bort, Nancy

    2005-01-01

    One of the most important review topics the author teaches in middle school is the use of metric measurement for problem solving and inquiry. For many years, she had students measuring various objects around the room using the tools of metric measurement. She dutifully taught hypothesizing, data collecting, and drawing conclusions. It was…

  6. Measuring Research Quality Using the Journal Impact Factor, Citations and "Ranked Journals": Blunt Instruments or Inspired Metrics?

    Science.gov (United States)

    Jarwal, Som D.; Brion, Andrew M.; King, Maxwell L.

    2009-01-01

    This paper examines whether three bibliometric indicators--the journal impact factor, citations per paper and the Excellence in Research for Australia (ERA) initiative's list of "ranked journals"--can predict the quality of individual research articles as assessed by international experts, both overall and within broad disciplinary…

  7. mCSQAM: Service Quality Assessment Model in Mobile Cloud Services Environment

    Directory of Open Access Journals (Sweden)

    Young-Rok Shin

    2016-01-01

    Full Text Available Cloud computing is a high technology that extends existing IT capabilities and requirements. Recently, the cloud computing paradigm has been moving towards mobile with advances in mobile networks and personal devices. With the emergence of the mobile cloud concept, the number of providers of various mobile cloud services is rapidly increasing. Despite the development of cloud computing, most service providers have used their own policies to deliver their services to users. In other words, quality criteria for mobile cloud service assessment are not clearly established yet. To solve the problem, some studies have proposed models for service quality assessment. However, they did not consider a sufficiently broad set of metrics for assessing service quality, and research that did consider various metrics did not account for newly generated Service Level Agreements. In this paper, to solve the problem, we propose a mobile cloud service assessment model called mCSQAM and verify our model through a few case studies. To suit the mobile cloud, the proposed assessment model is adapted from ISO/IEC 9126, which is an international standard for software quality assessment. mCSQAM can provide service quality assessment and determine the ranking of services. Furthermore, if a Cloud Service Broker includes mCSQAM, appropriate services can be recommended to service users based on user and service conditions.

  8. Air Quality Assessment Using Interpolation Technique

    OpenAIRE

    Awkash Kumar; Rashmi S. Patil; Anil Kumar Dikshit; Rakesh Kumar

    2016-01-01

    Air pollution is increasing rapidly in almost all cities around the world due to population growth. Mumbai city in India is one of the mega cities where air quality is deteriorating at a very rapid rate. Air quality monitoring stations have been installed in the city to support air pollution control strategies that reduce the air pollution level. In this paper, air quality assessment has been carried out over the sample region using interpolation techniques. The technique Inverse Distance...
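
    The interpolation technique the abstract begins to name, Inverse Distance Weighting, has a simple basic form in which the value at an unmonitored point is a distance-weighted average of monitored values; a minimal sketch with hypothetical station data:

```python
import numpy as np

def idw(xy_stations: np.ndarray, values: np.ndarray, xy_target: np.ndarray, power: float = 2.0) -> float:
    """Inverse-distance-weighted estimate at xy_target from monitored values."""
    d = np.linalg.norm(xy_stations - xy_target, axis=1)
    if np.any(d == 0):                       # target coincides with a station
        return float(values[d == 0][0])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # hypothetical coordinates
pm10 = np.array([80.0, 95.0, 70.0])                        # hypothetical concentrations
print(idw(stations, pm10, np.array([0.4, 0.4])))
```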

  9. Quality assessment for online iris images

    CSIR Research Space (South Africa)

    Makinana, S

    2015-01-01

    Full Text Available Sisanda Makinana, Tendani Malumedzha, and Fulufhelo V Nelwamondo, Modelling and Digital Science, CSIR, Pretoria, South...

  10. Quality Assessment on Environmental Conservation Interventions in ...

    African Journals Online (AJOL)

    African Journal of Economic Review, Volume III, Issue I, January 2015. Quality Assessment on Environmental Conservation Interventions in three Selected Councils of Dodoma Region, Tanzania. Gaspar Peter Mwananchipeta Mwembezi. Abstract. The study highlights some of conservation challenges, quality ...

  11. Quality assessment of differently treated mackerel ( Scomber ...

    African Journals Online (AJOL)

    The quality assessment of differently treated mackerel (Scomber scombrus), sourced from commercial cold storage facilities in Owerri, was carried out using 1kg of fish ... were therefore observed to have varying effects on the keeping quality of mackerel (Scomber scombrus), with brining and salting being the most preferred.

  12. Hydrogeochemistry and groundwater quality assessment of Ranipet ...

    Indian Academy of Sciences (India)

    A study was carried out to assess the groundwater pollution and identify major variables affecting the groundwater quality in Ranipet industrial area. Twenty five wells were monitored during pre- and post-monsoon in 2008 and analyzed for the major physico-chemical variables. The water quality variables such as total ...

  13. Academic Accountability, Quality and Assessment of Higher ...

    African Journals Online (AJOL)

    This study examined quality assurance and academic accountability in ten higher education institutions in Nigeria, using UNESCO's input-processoutput framework for assessing the quality of education. Data were collected from staff and students of the universities as well as opinion leaders drawn from the communities ...

  14. Surface water quality assessment using factor analysis

    African Journals Online (AJOL)

    2006-01-16

    Jan 16, 2006 ... surface water by rain and stormwater. On the other hand, run-off water increases pollutant concentrations, thereby decreasing quality. To assess the water quality of the Buyuk Menderes River under high-flow conditions, factor analysis was applied to data sets obtained from 21 monitoring stations between ...

  15. ON SOIL QUALITY AND ITS ASSESSING

    Directory of Open Access Journals (Sweden)

    N. Florea

    2007-10-01

    Full Text Available The term “soil quality” has been used until now with different connotations; its meaning has become more comprehensive over time. The most adequate definition of “soil quality” is: “the capacity of a specific kind of soil to function, within natural or managed ecosystem boundaries, to sustain plant and animal productivity, maintain or enhance water and air quality and support human health and habitation” (Karlen et al., 1998). One distinguishes a native soil quality, under natural conditions, and a meta-native soil quality, under managed conditions. One can also distinguish a stable side and a variable side of soil quality. It is useful to consider as well the term “soilscape quality”, defined as the weighted average of the soil qualities of all the soils forming the soil cover and their arrangement (expressed by the pedogeographical assemblage). Soil quality can be assessed indirectly by a set of indicators; the kind and number of quality indicators depend on the evaluation scale and the objective of the assessment. New research is needed to define soil quality more accurately and to develop its evaluation. Assessing and monitoring soil quality have global implications for the environment and society.

  16. No-reference quality assessment based on visual perception

    Science.gov (United States)

    Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao

    2014-11-01

    The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and an NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, two main features of the HVS: a typical HVS captures scenes by sparse coding and uses experienced knowledge to perceive objects. In this paper, we propose a novel IQA approach based on visual perception. Firstly, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is obtained with this model; then, the mapping between sparse codes and subjective quality scores is trained with least-squares support vector machine (LS-SVM) regression, yielding a regressor that can predict image quality; finally, the visual quality of an image is predicted with the trained regressor. We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database; the specific distortion types present in the database are: 227 images of JPEG2000, 233
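    To make the pipeline described in this abstract concrete, the sketch below mimics its three stages with scikit-learn on synthetic data: sparse-code image patches against a learned dictionary, pool the codes into a feature vector, and regress pooled codes onto subjective scores. Kernel ridge regression is used here as a stand-in for LS-SVM (both solve a regularized least-squares problem), and all images, scores, and parameter values are illustrative assumptions rather than the authors' actual setup or the LIVE data.

```python
# Sketch: sparse-code patches -> pooled code features -> quality regressor.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

def extract_patches(image, patch=8, step=8):
    """Collect non-overlapping patch x patch blocks as flattened vectors."""
    h, w = image.shape
    return np.asarray([image[r:r + patch, c:c + patch].ravel()
                       for r in range(0, h - patch + 1, step)
                       for c in range(0, w - patch + 1, step)])

# Synthetic "training set": noisier images get lower mock subjective scores.
images, scores = [], []
for noise in rng.uniform(0.0, 0.5, size=30):
    images.append(rng.random((64, 64)) * 0.2 + noise * rng.standard_normal((64, 64)))
    scores.append(1.0 - noise)                      # hypothetical MOS-like score

# 1) Learn a dictionary from pooled training patches (stands in for the HVS model).
train_patches = np.vstack([extract_patches(im) for im in images])
dico = DictionaryLearning(n_components=48, alpha=0.5, max_iter=10,
                          random_state=0).fit(train_patches)

def image_features(image):
    """Sparse-code the patches and max-pool |coefficients| per dictionary atom."""
    codes = sparse_encode(extract_patches(image), dico.components_,
                          algorithm='lasso_lars', alpha=0.1)
    return np.abs(codes).max(axis=0)

X = np.vstack([image_features(im) for im in images])
y = np.asarray(scores)

# 2) Regress pooled sparse codes onto subjective scores (kernel ridge ~ LS-SVM).
regressor = KernelRidge(kernel='rbf', alpha=1e-2, gamma=0.5).fit(X, y)

# 3) Predict the quality of an unseen image.
test_img = rng.random((64, 64)) * 0.2 + 0.3 * rng.standard_normal((64, 64))
print("predicted quality:", regressor.predict([image_features(test_img)])[0])
```

    The synthetic data only exercise the mechanics; in the paper the dictionary models an HVS sparse representation and the regressor is trained on subjective scores from LIVE.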

  17. Quality Assessment in the Primary care

    Directory of Open Access Journals (Sweden)

    Muharrem Ak

    2013-04-01

    Full Text Available Quality Assessment in the Primary Care. Dear Editor; I have read the article titled “Implementation of Rogi Kalyan Samiti (RKS) at Primary Health Centre Durvesh” with great interest. Shrivastava et al. concluded that the assessment mechanism for the achievement of objectives for the suggested RKS model was not successful (1). Hereby I would like to emphasize the importance of quality assessment (QA), especially in the era of newly established primary care implementations in our country. Promotion of quality has been a fundamental part of primary care health services; nevertheless, variations in the quality of care exist even in developed countries. Accomplishment of quality in primary care faces barriers such as administrative and directorial factors, absence of evidence-based medicine practice, and lack of continuous medical education. Quality of health care is no doubt a multifaceted model that covers all components of health structures and processes of care. Quality in the primary care setting includes the patient-physician relationship, immunization, maternal, adolescent, adult and geriatric health care, referral, non-communicable disease management and prescribing (2). Most countries are only recently beginning the implementation of quality assessments across healthcare. Organizations like the European Society for Quality and Safety in Family Practice (EQuiP) endeavor to accomplish quality by collaboration. There are reported developments and experiments related to the methodology, processes and outcomes of quality assessments of health care. Quality assessments will not only contribute to the accomplishment of the program/project but also detect the areas where obstacles exist. In order to speed up the adoption of QA and to circumvent the occurrence of mistakes, health policy makers and family physicians from different parts of the world should share their experiences. Consensus on quality in preventive medicine implementations can help to yield

  18. Surveillance Metrics Sensitivity Study

    Energy Technology Data Exchange (ETDEWEB)

    Bierbaum, R; Hamada, M; Robertson, A

    2011-11-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to develop a more quantitative and/or qualitative metric(s) describing the results of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intending to answer level-of-confidence type questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of four metrics types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.
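    As a deliberately simplified illustration of the kinds of power and tolerance-limit calculations mentioned above, the sketch below treats surveillance as a pass/fail binomial test: it computes the probability of catching a defect type that occurs at a postulated rate, the sample size needed for a target detection probability, and a zero-failure upper tolerance bound. The defect rate, confidence level, and sample sizes are hypothetical inputs, and this is not the Tri-Lab metric set itself.

```python
# Illustrative pass/fail surveillance calculations (hypothetical inputs).
import math

def detection_power(defect_rate: float, n_tests: int) -> float:
    """Probability that at least one defective unit appears in n_tests samples,
    assuming defects occur independently at the given rate."""
    return 1.0 - (1.0 - defect_rate) ** n_tests

def tests_needed(defect_rate: float, confidence: float) -> int:
    """Smallest sample size whose detection power reaches the target confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - defect_rate))

def upper_tolerance_bound(n_tests: int, alpha: float = 0.10) -> float:
    """One-sided upper (1 - alpha) confidence bound on the defect probability
    after n_tests samples with zero observed failures (binomial zero-failure case)."""
    return 1.0 - alpha ** (1.0 / n_tests)

# Example: how well does a yearly draw of 11 units guard against a 5% defect rate?
print(f"power with 11 tests:  {detection_power(0.05, 11):.2f}")
print(f"tests for 90% power:  {tests_needed(0.05, 0.90)}")
print(f"90% upper bound after 11 clean tests: {upper_tolerance_bound(11):.3f}")
```

    In this toy setting the risk parameters discussed in the report correspond to choosing the postulated defect rate and the confidence level that the surveillance sample must achieve.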

  19. Surveillance metrics sensitivity study.

    Energy Technology Data Exchange (ETDEWEB)

    Hamada, Michael S. (Los Alamos National Laboratory); Bierbaum, Rene Lynn; Robertson, Alix A. (Lawrence Livermore Laboratory)

    2011-09-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to develop a more quantitative and/or qualitative metric(s) describing the results of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intending to answer level-of-confidence type questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of four metrics types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.

  20. Michigan lakes: An assessment of water quality

    Science.gov (United States)

    Minnerick, R.J.

    2004-01-01

    Michigan has more than 11,000 inland lakes that provide countless recreational opportunities and are an important resource, making tourism and recreation a $15-billion-per-year industry in the State (Stynes, 2002). Knowledge of the water-quality characteristics of inland lakes is essential for the current and future management of these resources. Historically, the U.S. Geological Survey (USGS) and the Michigan Department of Environmental Quality (MDEQ) have jointly monitored water quality in Michigan's lakes and rivers. During the 1990s, however, funding for surface-water-quality monitoring was greatly reduced. In 1998, the citizens of Michigan passed the Clean Michigan Initiative to clean up, protect, and enhance Michigan's environmental infrastructure. Because of expanding water-quality-data needs, the MDEQ and the USGS jointly redesigned and implemented the Lake Water-Quality Assessment (LWQA) Monitoring Program (Michigan Department of Environmental Quality, 1997).

  1. Quality control in public participation assessments of water quality: the OPAL Water Survey.

    Science.gov (United States)

    Rose, N L; Turner, S D; Goldsmith, B; Gosling, L; Davidson, T A

    2016-07-22

    Public participation in scientific data collection is a rapidly expanding field. In water quality surveys, the involvement of the public, usually as trained volunteers, generally includes the identification of aquatic invertebrates to a broad taxonomic level. However, quality assurance is often not addressed and remains a key concern for the acceptance of publicly-generated water quality data. The Open Air Laboratories (OPAL) Water Survey, launched in May 2010, aimed to encourage interest and participation in water science by developing a 'low-barrier-to-entry' water quality survey. During 2010, over 3000 participant-selected lakes and ponds were surveyed making this the largest public participation lake and pond survey undertaken to date in the UK. But the OPAL approach of using untrained volunteers and largely anonymous data submission exacerbates quality control concerns. A number of approaches were used in order to address data quality issues including: sensitivity analysis to determine differences due to operator, sampling effort and duration; direct comparisons of identification between participants and experienced scientists; the use of a self-assessment identification quiz; the use of multiple participant surveys to assess data variability at single sites over short periods of time; comparison of survey techniques with other measurement variables and with other metrics generally considered more accurate. These quality control approaches were then used to screen the OPAL Water Survey data to generate a more robust dataset. The OPAL Water Survey results provide a regional and national assessment of water quality as well as a first national picture of water clarity (as suspended solids concentrations). Less than 10 % of lakes and ponds surveyed were 'poor' quality while 26.8 % were in the highest water quality band. It is likely that there will always be a question mark over untrained volunteer generated data simply because quality assurance is uncertain

  2. Quality of Experience Assessment of Video Quality in Social Clouds

    Directory of Open Access Journals (Sweden)

    Asif Ali Laghari

    2017-01-01

    Full Text Available Video sharing on social clouds is popular among users around the world. High-definition (HD) videos have large file sizes, so storing them in cloud storage and streaming them at high quality from the cloud to the client is a challenge for service providers. Social clouds compress videos to save storage and to stream over slow networks while providing quality of service (QoS). Compression decreases quality compared with the original video, and parameters change both during online play and after download. Degradation of video quality due to compression decreases the quality of experience (QoE) of end users. To assess the QoE of video compression, we conducted subjective QoE experiments by uploading, sharing, and playing videos from social clouds. Three popular social clouds, Facebook, Tumblr, and Twitter, were selected to upload and play videos online for users. The QoE was recorded using a questionnaire in which users reported their experience of the video quality they perceived. Results show that Facebook and Twitter compressed HD videos more than the other clouds. However, Facebook gives better quality in its compressed videos than Twitter. Accordingly, users assigned lower ratings to Twitter for online video quality compared with Tumblr, which provided high-quality online play of videos with less compression.

  3. Image quality assessment using deep convolutional networks

    Science.gov (United States)

    Li, Yezhou; Ye, Xiang; Li, Yong

    2017-12-01

    This paper proposes a method for accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters to learn features of images captured by different sensors in order to assess their quality. These methods may not be able to learn the semantic features that are intimately related to those used in human subjective assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of arbitrary size as input, spatial pyramid pooling (SPP) is introduced to connect the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method takes an image as input, carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the quality of images taken by different sensors and of varying sizes.
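    The sketch below illustrates the key architectural idea, a CNN whose top convolutional features are spatial-pyramid pooled so that inputs of arbitrary size produce a fixed-length vector for the fully connected quality regressor. It is written in PyTorch with illustrative layer sizes and pyramid levels; it is not the authors' network.

```python
# Minimal CNN + spatial pyramid pooling sketch for size-agnostic quality regression.
import torch
import torch.nn as nn

class SPPIQANet(nn.Module):
    def __init__(self, pyramid_levels=(1, 2, 4), channels=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        # One adaptive pooling stage per pyramid level; each produces a
        # level x level grid regardless of the input resolution.
        self.spp = nn.ModuleList([nn.AdaptiveMaxPool2d(l) for l in pyramid_levels])
        feat_dim = channels * sum(l * l for l in pyramid_levels)
        self.regressor = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),               # scalar quality score
        )

    def forward(self, x):
        f = self.features(x)
        pooled = [pool(f).flatten(start_dim=1) for pool in self.spp]
        return self.regressor(torch.cat(pooled, dim=1))

# The same network accepts inputs of different spatial sizes.
net = SPPIQANet()
for h, w in [(96, 128), (200, 160)]:
    score = net(torch.randn(1, 1, h, w))
    print(f"input {h}x{w} -> predicted quality {score.item():.3f}")
```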

  4. A metric space for Type Ia supernova spectra: a new method to assess explosion scenarios

    Science.gov (United States)

    Sasdelli, Michele; Hillebrandt, W.; Kromer, M.; Ishida, E. E. O.; Röpke, F. K.; Sim, S. A.; Pakmor, R.; Seitenzahl, I. R.; Fink, M.

    2017-04-01

    Over the past years, Type Ia supernovae (SNe Ia) have become a major tool for determining the expansion history of the Universe, and considerable attention has been given to both observations and models of these events. However, their progenitors are still not known. The observed diversity of light curves and spectra seems to point at different progenitor channels and explosion mechanisms. Here, we present a new way to compare model predictions with observations in a systematic way. Our method is based on the construction of a metric space for SN Ia spectra by means of linear principal component analysis, taking care of missing and/or noisy data, and making use of partial least-squares regression to find correlations between spectral properties and photometric data. We investigate realizations of the three major classes of explosion models that are presently discussed: delayed-detonation Chandrasekhar-mass explosions, sub-Chandrasekhar-mass detonations and double-degenerate mergers, and compare them with data. We show that in the principal component space, all scenarios have observed counterparts, supporting the idea that different progenitors are likely. However, all classes of models face problems in reproducing the observed correlations between spectral properties and light curves and colours. Possible reasons are briefly discussed.
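    The sketch below illustrates the general analysis pattern described in the abstract, using synthetic stand-in spectra: PCA defines a low-dimensional metric space, a model spectrum is projected into it and compared with observations by Euclidean distance, and partial least squares relates the spectral components to a photometric quantity. All arrays, dimensions, and the "photometric property" are placeholders, not the authors' data or pipeline.

```python
# PCA metric space + PLS regression sketch on synthetic stand-in spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_obs, n_wave = 40, 200                      # observed spectra x wavelength bins
observed = rng.standard_normal((n_obs, n_wave)).cumsum(axis=1)   # smooth fake spectra
photometry = observed[:, :50].mean(axis=1) + 0.1 * rng.standard_normal(n_obs)

# 1) Linear PCA defines the low-dimensional metric space.
pca = PCA(n_components=5).fit(observed)
obs_scores = pca.transform(observed)

# 2) Project a synthetic "model" spectrum into the same space and measure
#    its distance to every observed spectrum (the metric-space comparison).
model_spectrum = rng.standard_normal((1, n_wave)).cumsum(axis=1)
model_scores = pca.transform(model_spectrum)
distances = np.linalg.norm(obs_scores - model_scores, axis=1)
print("closest observed counterpart:", int(distances.argmin()))

# 3) PLS regression links spectral principal components to photometric data.
pls = PLSRegression(n_components=3).fit(obs_scores, photometry)
print("predicted photometric property:", float(pls.predict(model_scores).ravel()[0]))
```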

  5. What is "fallback"?: metrics needed to assess telemetry tag effects on anadromous fish behavior

    Science.gov (United States)

    Frank, Holly J.; Mather, Martha E.; Smith, Joseph M.; Muth, Robert M.; Finn, John T.; McCormick, Stephen D.

    2009-01-01

    Telemetry has allowed researchers to document the upstream migrations of anadromous fish in freshwater. In many anadromous alosine telemetry studies, researchers use downstream movements (“fallback”) as a behavioral field bioassay for adverse tag effects. However, these downstream movements have not been uniformly reported or interpreted. We quantified movement trajectories of radio-tagged anadromous alewives (Alosa pseudoharengus) in the Ipswich River, Massachusetts (USA) and tested blood chemistry of tagged and untagged fish held 24 h. A diverse repertoire of movements was observed, which could be quantified using (a) direction of initial movements, (b) timing, and (c) characteristics of bouts of coupled upstream and downstream movements (e.g., direction, distance, duration, and speed). Because downstream movements of individual fish were almost always made in combination with upstream movements, these should be examined together. Several of the movement patterns described here could fall under the traditional definition of “fallback” but were not necessarily aberrant. Because superficially similar movements could have quite different interpretations, post-tagging trajectories need more precise definitions. The set of metrics we propose here will help quantify tag effects in the field, and provide the basis for a conceptual framework that helps define the complicated behaviors seen in telemetry studies on alewives and other fish in the field.

  6. ASSESSMENT OF QUALITY OF INNOVATIVE TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    Larisa Alexejevna Ismagilova

    2016-12-01

    Full Text Available We consider the topical issue of implementing innovative technologies in the aircraft engine building industry, where products with high reliability requirements are developed and mass-produced and combine the latest achievements of science and technology. The decision to implement an innovative technology rests on a comprehensive assessment, which affects how efficiently the innovation is realized. Consequently, assessing the quality of innovative technologies is a key aspect in selecting technological processes for implementation. The suggested method addresses the problems of assessing the quality of new technologies and production processes from a new standpoint. The developed method stands out for its system of qualimetric characteristics ensuring the effectiveness, efficiency and adaptability of innovative technologies and processes. The distinguishing feature of the suggested assessment system is that it is based on principles of matching and grouping quality indicators of innovative technologies with the characteristics of technological processes. The indicators are assessed from the standpoint of feasibility, technological competitiveness and commercial demand for the products. In this paper, we discuss an example of applying the approach to assessing the quality of an innovative technology for high-tech products such as turbine aircraft engines.

  7. Using the Consumer Experience with Pharmacy Services Survey as a quality metric for ambulatory care pharmacies: older adults' perspectives

    Science.gov (United States)

    Shiyanbola, Olayinka O; Mott, David A; Croes, Kenneth D

    2016-01-01

    Objectives To describe older adults' perceptions of evaluating and comparing pharmacies based on the Consumer Experience with Pharmacy Services Survey (CEPSS), describe older adults' perceived importance of the CEPSS and its specific domains, and explore older adults' perceptions of the influence of specific CEPSS domains in choosing/switching pharmacies. Design Focus group methodology was combined with the administration of a questionnaire. The focus groups explored participants' perceived importance of the CEPSS and their perception of using the CEPSS to choose and/or switch pharmacies. Then, using the questionnaire, participants rated their perceived importance of each CEPSS domain in evaluating a pharmacy, and the likelihood of using CEPSS to switch pharmacies if their current pharmacy had low ratings. Descriptive and thematic analyses were done. Setting 6 semistructured focus groups were conducted in a private meeting room in a Mid-Western state in the USA. Participants 60 English-speaking adults who were at least 65 years, and had filled a prescription at a retail pharmacy within 90 days. Results During the focus groups, the older adults perceived the CEPSS to have advantages and disadvantages in evaluating and comparing pharmacies. Older adults thought the CEPSS was important in choosing the best pharmacies and avoiding the worst pharmacies. The perceived influence of the CEPSS in switching pharmacies varied depending on the older adult's personal experience or trust of other consumers' experience. Questionnaire results showed that participants perceived health/medication-focused communication as very important or extremely important (n=47, 82.5%) in evaluating pharmacies and would be extremely likely (n=21, 36.8%) to switch pharmacies if their pharmacy had low ratings in this domain. Conclusions The older adults in this study are interested in using patient experiences as a quality metric for avoiding the worst pharmacies. Pharmacists' communication

  8. Quantifying the Assessment Loads of Students and Staff: The Challenge of Selecting Appropriate Metrics

    Science.gov (United States)

    Scott, Shirley V.

    2015-01-01

    Assessment is central to learning. It is also central to the cost of providing higher education. Choosing how much and what forms of assessment are questions not only of good teaching but of good policy. Measuring the amount of assessment set in each course provides a basis on which to determine equitable and appropriate workloads for students and…

  9. ASSESSING SUBJECTIVE SLEEP QUALITY IN SENIORS

    Directory of Open Access Journals (Sweden)

    Iveta Kukliczová

    2017-03-01

    Full Text Available Aim: The study aimed at assessing the quality of sleep in seniors. Another objective was to determine the impact of gender, age, type of residence and taking sleeping medication on the quality of sleep. Design: A cross-sectional study. Methods: Data were collected using the standardized Pittsburgh Sleep Quality Index (PSQI) questionnaire. The sample comprised 146 seniors living in the Moravian-Silesian Region, Czech Republic. The survey was conducted from January 2014 to the end of October 2014 in a long-term chronic care department of a selected hospital, two retirement homes and among seniors living in their own homes. Results: Thirty-five (24%) seniors had global PSQI scores of 5 (the highest score indicating good sleep quality) or less. The remaining 111 (76%) participants were shown to suffer from impaired sleep quality, as their global PSQI scores were 6 or higher. There were statistically significant differences in component scores between seniors with global PSQI scores of 5 or less and those with higher scores. The best quality of sleep was observed in females, seniors in the 65–74 age category and those sharing their own homes with their spouses or partners. Conclusion: Subjective sleep quality assessment varies significantly with respect to gender, age, type of residence and use of sleeping medication. Keywords: sleep quality, PSQI, subjective assessment, senior.

  10. LASSOing the MUSTANG: Using Tools Developed by IRIS for Actionable Seismic Data Quality Assessment

    Science.gov (United States)

    Frassetto, A.; Sumy, D. F.; Casey, R. E.; Woodward, R.; Ahern, T. K.

    2016-12-01

    MUSTANG, short for Modular Utility for STAtistical kNowledge Gathering, is a system developed by IRIS Data Services to bring data quality analysis web services to the IRIS DMC, covering the entirety of the data archive from past to present time. It provides station operators and earth scientists ready access to measurements that are scientifically useful and reflect the state of instrumentation that record seismic data. LASSO (Latest Assessment of Seismic Station Observations) is a software client, developed by Instrumental Software Technologies Inc. in conjunction with IRIS Instrumentation Services, which allows the viewer to download, display, sort, and analyze values of different metrics from MUSTANG. LASSO runs within a web-browser, allowing its user to interactively select from a number of preset metric groupings, or develop a more customized query. For many networks archiving data with the DMC, MUSTANG metrics are now available within days, making LASSO an ideal tool for assessing network status. Here, we demonstrate the features of LASSO and how Instrumentation Services commonly uses both LASSO and MUSTANG directly to provide actionable reports and visualizations of data quality. We also highlight the Mustang Data Browser, a tool for the simplified, rapid display of MUSTANG metrics developed by Data Services. Those interested in learning more about LASSO (lasso.iris.edu), MUSTANG (http://service.iris.edu/mustang/), and the Data Browser (https://ds.iris.edu/mustang/databrowser/) are encouraged to use these tools and visit our presentation.

  11. hydrochemical characteristics and quality assessment of ...

    African Journals Online (AJOL)

    Physicochemical assessment of shallow groundwater in the Gboloko area was carried out to determine its suitability for drinking and irrigation purposes. ...

  12. Improvement in Total Joint Replacement Quality Metrics: Year One Versus Year Three of the Bundled Payments for Care Improvement Initiative.

    Science.gov (United States)

    Dundon, John M; Bosco, Joseph; Slover, James; Yu, Stephen; Sayeed, Yousuf; Iorio, Richard

    2016-12-07

    In January 2013, a large, tertiary, urban academic medical center began participation in the Bundled Payments for Care Improvement (BPCI) initiative for total joint arthroplasty, a program implemented by the Centers for Medicare & Medicaid Services (CMS) in 2011. Medicare Severity-Diagnosis Related Groups (MS-DRGs) 469 and 470 were included. We participated in BPCI Model 2, by which an episode of care includes the inpatient and all post-acute care costs through 90 days following discharge. The goal for this initiative is to improve patient care and quality through a patient-centered approach with increased care coordination supported through payment innovation. Length of stay (LOS), readmissions, discharge disposition, and cost per episode of care were analyzed for year 3 compared with year 1 of the initiative. Multiple programs were implemented after the first year to improve performance metrics: a surgeon-directed preoperative risk-factor optimization program, enhanced care coordination and home services, a change in venous thromboembolic disease (VTED) prophylaxis to a risk-stratified protocol, infection-prevention measures, a continued emphasis on discharge to home rather than to an inpatient facility, and a quality-dependent gain-sharing program among surgeons. There were 721 Medicare primary total joint arthroplasty patients in year 1 and 785 in year 3; their data were compared. The average hospital LOS decreased from 3.58 to 2.96 days. The rate of discharge to an inpatient facility decreased from 44% to 28%. The 30-day all-cause readmission rate decreased from 7% to 5%; the 60-day all-cause readmission rate decreased from 11% to 6%; and the 90-day all-cause readmission rate decreased from 13% to 8%. The average 90-day cost per episode decreased by 20%. Mid-term results from the implementation of Medicare BPCI Model 2 for primary total joint arthroplasty demonstrated decreased LOS, decreased discharges to inpatient facilities, decreased readmissions, and

  13. Mean absolute error and root mean square error: which is the better metric for assessing model performance?

    Science.gov (United States)

    Brassington, Gary

    2017-04-01

    The mean absolute error (MAE) and root mean square error (RMSE) are two metrics that are often used interchangeably as measures of ocean forecast accuracy. Recent literature has debated which of these should be preferred, though the conclusions have largely been based on empirical arguments. We note that in general RMSE² = MAE² + Var_k[|ε|], such that RMSE includes both the MAE and additional information related to the variance (biased estimator) of the absolute errors ε with sample size k. The greater sensitivity of RMSE to a small number of outliers is directly attributable to this variance of the absolute error. Further statistical properties of both metrics are derived and compared under the assumption that the errors are Gaussian. For an unbiased (or bias-corrected) model, both MAE and RMSE are shown to estimate the total error standard deviation to within a constant coefficient, such that MAE ≈ √(2/π)·RMSE. Both metrics behave comparably in response to model bias and asymptote to the model bias as the bias increases. MAE is shown to be an unbiased estimator while RMSE is a biased estimator; MAE also has a lower sample variance than RMSE, indicating that MAE is the more robust choice. For real-time applications where there is a likelihood of "bad" observations, we recommend TESD = √(π/2)·MAE ± (1/√k)·√(π/2 − 1)·√(π/2)·MAE as an unbiased estimator of the total error standard deviation, with error estimates (one standard deviation) based on the sample variance and expressed as a scaling of the MAE itself. A sample size (k) on the order of 90 or 9000 provides an error scaling of 10% or 1%, respectively. Nonetheless, if model performance is being analysed using a large sample of delayed-mode, quality-controlled observations, then RMSE might be preferred where second-moment sensitivity to large model errors is important. Alternatively, for model intercomparisons the information might be compactly represented by a
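    A short numerical check of the relationships reconstructed above can be helpful. The sketch below uses synthetic Gaussian errors in NumPy (not ocean-forecast data) to verify that RMSE² ≈ MAE² + Var[|ε|], that MAE ≈ √(2/π)·RMSE for unbiased Gaussian errors, and to compute the MAE-based estimate of the total error standard deviation with its one-sigma band; the sample size k = 9000 is an arbitrary illustration.

```python
# Numerical illustration of the MAE/RMSE relationships with synthetic Gaussian errors.
import numpy as np

rng = np.random.default_rng(42)
k = 9000
errors = rng.normal(loc=0.0, scale=1.0, size=k)     # unbiased model errors

mae = np.mean(np.abs(errors))
rmse = np.sqrt(np.mean(errors ** 2))

# RMSE^2 = MAE^2 + Var_k[|e|]  (variance of the absolute errors, biased form)
var_abs = np.mean((np.abs(errors) - mae) ** 2)
print(f"RMSE^2            = {rmse**2:.4f}")
print(f"MAE^2 + Var|e|    = {mae**2 + var_abs:.4f}")

# For Gaussian errors, MAE ~ sqrt(2/pi) * RMSE
print(f"MAE               = {mae:.4f}")
print(f"sqrt(2/pi)*RMSE   = {np.sqrt(2 / np.pi) * rmse:.4f}")

# Total error standard deviation estimated from MAE, with its one-sigma band:
# TESD = sqrt(pi/2)*MAE +/- (1/sqrt(k)) * sqrt(pi/2 - 1) * sqrt(pi/2) * MAE
tesd = np.sqrt(np.pi / 2) * mae
band = np.sqrt(np.pi / 2 - 1) * tesd / np.sqrt(k)
print(f"TESD estimate     = {tesd:.4f} +/- {band:.4f}   (~{100 * band / tesd:.1f}%)")
```

    With k = 9000 the relative width of the band comes out near 1%, consistent with the scaling quoted in the abstract; k near 90 gives roughly 10%.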

  14. Quality assessment in meta-analisys

    Directory of Open Access Journals (Sweden)

    Giuseppe La Torre

    2006-06-01

    Full Text Available

    Background: An important characteristic of meta-analysis is that the results are determined both by the management of the meta-analysis process and by the features of the studies included. The scientific rigor of potential primary studies varies considerably, and a common objection to meta-analytic summaries is that they combine results from studies of different quality. Researchers began by developing quality scales for experimental studies; however, interest is now also focusing on observational studies. Since 1980, when Chalmers developed the first quality scale to assess primary studies included in meta-analysis, more than 100 scales have been developed, which vary dramatically in the quality and quantity of the items included. No standard list of items exists, and the quality scales in use lack empirically-supported components.

    Methods: Two of the most important and widely used quality scales for experimental studies, the Jadad system and Chalmers’ scale, and a quality scale used for observational studies, developed by Angelillo et al., are described and compared.

    Conclusion: The fallibility of meta-analysis is not surprising, considering the various biases that may be introduced by the processes of locating and selecting studies, including publication bias, language bias and citation bias. Quality assessment of the studies offers an estimate of the likelihood that their results will express the truth.

  15. Research Quality Assessment and Planning Journals. The Italian Perspective.

    Directory of Open Access Journals (Sweden)

    Bruno Zanon

    2014-02-01

    Full Text Available Assessment of research products is a crucial issue for universities and research institutions faced with internationalization and competition. Disciplines are reacting differently to this challenge, and planning, in its various forms – from urban design to process-oriented sectors – is under strain because the increasingly common assessment procedures based on the number of articles published in ranked journals and on citation data are not generally accepted. The reputation of journals, the impact of publications, and the profiles of scholars are increasingly defined by means of indexes such as impact factor and citation counts, but these metrics are questioned because they do not take account of all journals and magazines – in particular those published in languages other than English – and they do not consider teaching and other activities that are typical of academics and have a real impact on planning practices at the local level. In Italy the discussion is particularly heated because assessment procedures are recent, the disciplinary community is not used to publishing in ranked international journals, and the Italian literature is not attuned to the international quality criteria. The paper reviews the recent debate on planning journals and research assessment. It focuses on the Italian case from the perspective of improving current practices.

  16. Assessing Woody Vegetation Trends in Sahelian Drylands Using MODIS Based Seasonal Metrics

    Science.gov (United States)

    Brandt, Martin; Hiernaux, Pierre; Rasmussen, Kjeld; Mbow, Cheikh; Kergoat, Laurent; Tagesson, Torbern; Ibrahim, Yahaya Z.; Wele, Abdoulaye; Tucker, Compton J.; Fensholt, Rasmus

    2016-01-01

    Woody plants play a major role in the resilience of drylands and in people's livelihoods. However, due to their scattered distribution, quantifying and monitoring woody cover over space and time is challenging. We develop a phenology-driven model and train/validate MODIS (MCD43A4, 500 m) derived metrics with 178 ground observations from Niger, Senegal and Mali to estimate woody cover trends from 2000 to 2014 over the entire Sahel. The annual woody cover estimation at 500 m scale is fairly accurate with an RMSE of 4.3 (woody cover %) and r² = 0.74. Over the 15-year period we observed an average increase of 1.7 (±5.0) woody cover (%) with large spatial differences: no clear change can be observed in densely populated areas (0.2 ± 4.2), whereas a positive change is seen in sparsely populated areas (2.1 ± 5.2). Woody cover is generally stable in cropland areas (0.9 ± 4.6), reflecting the protective management of parkland trees by farmers. Positive changes are observed in savannas (2.5 ± 5.4) and woodland areas (3.9 ± 7.3). The major pattern of woody cover change reveals strong increases in the sparsely populated Sahel zones of eastern Senegal, western Mali and central Chad, but a decreasing trend is observed in the densely populated western parts of Senegal, northern Nigeria, Sudan and southwestern Niger. This decrease is often local and limited to woodlands, an indication of ongoing expansion of cultivated areas and selective logging. We show that an overall positive trend is found in areas of low anthropogenic pressure, demonstrating the potential of these ecosystems to provide services such as carbon storage, if not over-utilized. Taken together, our results provide an unprecedented synthesis of woody cover dynamics in the Sahel, and point to land use and human population density as important drivers, although these only partially and locally offset a general post-drought increase.

  17. Comparing Institution Nitrogen Footprints: Metrics for Assessing and Tracking Environmental Impact

    Science.gov (United States)

    When multiple institutions with strong sustainability initiatives use a new environmental impact assessment tool, there is an impulse to compare. The first seven institutions to calculate their nitrogen footprints using the nitrogen footprint tool have worked collaboratively to i...

  18. Development of an Interdisciplinary Team Communication Framework and Quality Metrics for Home-Based Medical Care Practices.

    Science.gov (United States)

    Fathi, Roya; Sheehan, Orla C; Garrigues, Sarah K; Saliba, Debra; Leff, Bruce; Ritchie, Christine S

    2016-08-01

    The unique needs of homebound adults receiving home-based medical care (HBMC) (ie, home-based primary care and home-based palliative care services) are ideally provided by interdisciplinary care teams (IDTs) that provide coordinated care. The composition of team members from an array of organizations and the unique dimension of providing care in the home present specific challenges to timely access and communication of patient care information. The objective of this work was to develop a conceptual framework and corresponding quality indicators (QIs) that assess how IDT members for HBMC practices access and communicate key patient information with each other. A systematic review of peer-reviewed and gray literature was performed to inform a framework for care coordination in the home and the development of candidate QIs to assess processes by which all IDT members optimally access and use patient information. A technical expert panel (TEP) participated in a modified Delphi process to assess the validity and feasibility of each QI and to identify which would be most suitable for testing in the field. Thematic analysis of literature revealed 4 process themes for how HBMC practices might engage in high-quality care coordination: using electronic medical records, conducting interdisciplinary team meetings, sharing standardized patient assessments, and communicating via secure e-messaging. Based on these themes, 9 candidate QIs were developed to reflect these processes. Three candidate QIs were assessed by the TEP as valid and feasible to measure in an HBMC practice setting. These indicators focused on use of IDT meetings, standardized patient assessments, and secure e-messaging. Translating the complex issue of care coordination into QIs will improve care delivered to vulnerable home-limited adults who receive HBMC. Guided by the literature, we developed a framework to reflect optimal care coordination in the home setting and identified 3 candidate QIs to field-test in

  19. Quality Management Plan for the Environmental Assessment and Innovation Division

    Science.gov (United States)

    Quality management plan (QMP) identifying the mission, roles, and responsibilities of personnel with regard to quality assurance and quality management for the Environmental Assessment and Innovation Division.

  20. Image Quality Assessment via Quality-aware Group Sparse Coding

    Directory of Open Access Journals (Sweden)

    Minglei Tong

    2014-12-01

    Full Text Available Image quality assessment has been attracting growing attention at an accelerated pace over the past decade in the fields of image processing, vision and machine learning. In particular, general-purpose blind image quality assessment is technically challenging, and many state-of-the-art approaches have been developed to solve this problem, most under the supervised learning framework, where human-scored samples are needed for training a regression model. In this paper, we propose an unsupervised learning approach that works without human labels. In the off-line stage, our method trains a dictionary covering patch atoms of different image quality levels across the training samples without knowing the human scores, where each atom is associated with a quality score induced from the reference image; in the on-line stage, given each image patch, our method performs group sparse coding to encode the sample, such that the sample quality can be estimated from the few labeled atoms whose encoding coefficients are nonzero. Experimental results on a public dataset show the promising performance of our approach, and future research directions are also discussed.
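    A much-simplified sketch of the on-line inference step is given below: atoms of a dictionary carry quality scores, a test patch is sparse-coded, and its quality is estimated from the atoms with nonzero coefficients. Ordinary lasso coding stands in for the paper's group sparse coding, and the dictionary and atom scores are random placeholders rather than the result of the off-line training stage.

```python
# Quality-aware coding sketch: weight atom quality scores by sparse coefficients.
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(3)
n_atoms, patch_dim = 64, 8 * 8

dictionary = normalize(rng.standard_normal((n_atoms, patch_dim)))  # unit-norm atoms
atom_quality = rng.uniform(0.0, 1.0, size=n_atoms)                 # score per atom

def estimate_patch_quality(patch_vector):
    codes = sparse_encode(patch_vector.reshape(1, -1), dictionary,
                          algorithm='lasso_lars', alpha=0.05)[0]
    weights = np.abs(codes)
    active = weights > 0
    if not active.any():                        # nothing selected: no estimate
        return float('nan')
    return float(np.average(atom_quality[active], weights=weights[active]))

patch = rng.standard_normal(patch_dim)
print("estimated patch quality:", estimate_patch_quality(patch))
```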

  1. Contributions of the EMERALD project to assessing and improving microarray data quality.

    Science.gov (United States)

    Beisvåg, Vidar; Kauffmann, Audrey; Malone, James; Foy, Carole; Salit, Marc; Schimmel, Heinz; Bongcam-Rudloff, Erik; Landegren, Ulf; Parkinson, Helen; Huber, Wolfgang; Brazma, Alvis; Sandvik, Arne K; Kuiper, Martin

    2011-01-01

    While minimum information about a microarray experiment (MIAME) standards have helped to increase the value of the microarray data deposited into public databases like ArrayExpress and Gene Expression Omnibus (GEO), limited means have been available to assess the quality of this data or to identify the procedures used to normalize and transform raw data. The EMERALD FP6 Coordination Action was designed to deliver approaches to assess and enhance the overall quality of microarray data and to disseminate these approaches to the microarray community through an extensive series of workshops, tutorials, and symposia. Tools were developed for assessing data quality and used to demonstrate how the removal of poor-quality data could improve the power of statistical analyses and facilitate analysis of multiple joint microarray data sets. These quality metrics tools have been disseminated through publications and through the software package arrayQualityMetrics. Within the framework provided by the Ontology of Biomedical Investigations, an ontology was developed to describe data transformations, and a software ontology was developed for gene expression analysis software. In addition, the consortium has advocated for the development and use of external reference standards in microarray hybridizations and created the Molecular Methods (MolMeth) database, which provides a central source for methods and protocols focusing on microarray-based technologies.

  2. Assessing Public Metabolomics Metadata, Towards Improving Quality.

    Science.gov (United States)

    Ferreira, João D; Inácio, Bruno; Salek, Reza M; Couto, Francisco M

    2017-12-13

    Public resources need to be appropriately annotated with metadata in order to make them discoverable, reproducible and traceable, further enabling them to be interoperable or integrated with other datasets. While data-sharing policies exist to promote the annotation process by data owners, these guidelines are still largely ignored. In this manuscript, we analyse automatic measures of metadata quality, and suggest their application as a means to encourage data owners to increase the metadata quality of their resources and submissions, thereby contributing to higher quality data, improved data sharing, and the overall accountability of scientific publications. We analyse these metadata quality measures in the context of a real-world repository of metabolomics data (i.e. MetaboLights), including a manual validation of the measures, and an analysis of their evolution over time. Our findings suggest that the proposed measures can be used to mimic a manual assessment of metadata quality.

  3. Imaging quality assessment of multi-modal miniature microscope

    Science.gov (United States)

    Lee, Junwon; Rogers, Jeremy D.; Descour, Michael R.; Hsu, Elizabeth; Aaron, Jesse S.; Sokolov, Konstantin; Richards-Kortum, Rebecca R.

    2003-06-01

    We are developing a multi-modal miniature microscope (4M device) to image morphology and cytochemistry in vivo and provide better delineation of tumors. The 4M device is designed to be a complete microscope on a chip, including optical, micro-mechanical, and electronic components. It has advantages such as compact size and capability for microscopic-scale imaging. This paper presents an optics-only prototype 4M device, the very first imaging system made of sol-gel material. The micro-optics used in the 4M device have a diameter of 1.3 mm. Metrology for the imaging quality assessment of the prototype device is presented, and we describe causes of imaging performance degradation in order to improve the fabrication process. We built a multi-modal imaging test-bed to measure first-order properties and to assess the imaging quality of the 4M device. The 4M prototype has a field of view of 290 µm in diameter, a magnification of -3.9, a working distance of 250 µm and a depth of field of 29.6 ± 6 µm. We report the modulation transfer function (MTF) of the 4M device as a quantitative metric of imaging quality. Based on the MTF data, we calculated a Strehl ratio of 0.59. In order to investigate the cause of imaging quality degradation, the surface characterization of the lenses in 4M devices is measured and reported. We also imaged both polystyrene microspheres similar in size to epithelial cell nuclei and cervical cancer cells. Imaging results indicate that the 4M prototype can resolve the cellular detail necessary for detection of precancer.
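    For readers unfamiliar with the MTF-to-Strehl step, the sketch below shows one common approximation: the Strehl ratio estimated as the ratio of the area under the measured MTF to the area under the diffraction-limited MTF of an ideal circular aperture, here evaluated along a single radial profile for simplicity. The cutoff frequency and the "measured" curve are synthetic placeholders, not the 4M device data.

```python
# Strehl ratio approximated from a 1-D MTF profile (synthetic example).
import numpy as np

def diffraction_limited_mtf(nu, nu_cutoff):
    """Incoherent MTF of an aberration-free circular aperture."""
    x = np.clip(nu / nu_cutoff, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x ** 2))

nu_cutoff = 500.0                       # cycles/mm, illustrative value
nu = np.linspace(0.0, nu_cutoff, 501)

ideal = diffraction_limited_mtf(nu, nu_cutoff)
measured = ideal * np.exp(-nu / 250.0)  # stand-in for a degraded, measured MTF

strehl_estimate = np.trapz(measured, nu) / np.trapz(ideal, nu)
print(f"estimated Strehl ratio: {strehl_estimate:.2f}")
```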

  4. Metriculator: quality assessment for mass spectrometry-based proteomics.

    Science.gov (United States)

    Taylor, Ryan M; Dance, Jamison; Taylor, Russ J; Prince, John T

    2013-11-15

    Quality control in mass spectrometry-based proteomics remains subjective, labor-intensive and inconsistent between laboratories. We introduce Metriculator, a software package designed to facilitate long-term storage of the extensive performance metrics introduced by NIST in 2010. Metriculator features a web interface that generates interactive comparison plots for contextual understanding of metric values, together with an automated metric generation toolkit. The comparison plots are designed for at-a-glance identification of outliers and trends in the datasets, together with relevant statistical comparisons. Easy-to-use quantitative comparisons and a framework for integration plugins will encourage a culture of quality assurance within the proteomics community. Available under the MIT license at http://github.com/princelab/metriculator.

  5. Using the Consumer Experience with Pharmacy Services Survey as a quality metric for ambulatory care pharmacies: older adults' perspectives.

    Science.gov (United States)

    Shiyanbola, Olayinka O; Mott, David A; Croes, Kenneth D

    2016-05-26

    To describe older adults' perceptions of evaluating and comparing pharmacies based on the Consumer Experience with Pharmacy Services Survey (CEPSS), describe older adults' perceived importance of the CEPSS and its specific domains, and explore older adults' perceptions of the influence of specific CEPSS domains in choosing/switching pharmacies. Focus group methodology was combined with the administration of a questionnaire. The focus groups explored participants' perceived importance of the CEPSS and their perception of using the CEPSS to choose and/or switch pharmacies. Then, using the questionnaire, participants rated their perceived importance of each CEPSS domain in evaluating a pharmacy, and the likelihood of using CEPSS to switch pharmacies if their current pharmacy had low ratings. Descriptive and thematic analyses were done. 6 semistructured focus groups were conducted in a private meeting room in a Mid-Western state in the USA. 60 English-speaking adults who were at least 65 years, and had filled a prescription at a retail pharmacy within 90 days. During the focus groups, the older adults perceived the CEPSS to have advantages and disadvantages in evaluating and comparing pharmacies. Older adults thought the CEPSS was important in choosing the best pharmacies and avoiding the worst pharmacies. The perceived influence of the CEPSS in switching pharmacies varied depending on the older adult's personal experience or trust of other consumers' experience. Questionnaire results showed that participants perceived health/medication-focused communication as very important or extremely important (n=47, 82.5%) in evaluating pharmacies and would be extremely likely (n=21, 36.8%) to switch pharmacies if their pharmacy had low ratings in this domain. The older adults in this study are interested in using patient experiences as a quality metric for avoiding the worst pharmacies. Pharmacists' communication about health and medicines is perceived important and likely

  6. End-to-end image quality assessment

    Science.gov (United States)

    Raventos, Joaquin

    2012-05-01

    An innovative computerized benchmarking approach (US patent pending, Sep 2011), based on extensive application of photometry, geometrical optics, and digital media and using a randomized target, allows a standard observer to assess the image quality of video imaging systems at different daytime and low-light luminance levels. It takes into account the target's contrast and color characteristics, as well as the observer's visual acuity and dynamic response. This includes human vision as part of the "extended video imaging system" (EVIS), and allows image quality assessment by several standard observers simultaneously.

  7. Assessing Journal Quality in Mathematics Education

    Science.gov (United States)

    Nivens, Ryan Andrew; Otten, Samuel

    2017-01-01

    In this Research Commentary, we describe 3 journal metrics--the Web of Science's Impact Factor, Scopus's SCImago Journal Rank, and Google Scholar Metrics' h5-index--and compile the rankings (if they exist) for 69 mathematics education journals. We then discuss 2 paths that the mathematics education community should consider with regard to these…

  8. Change in visual acuity is well correlated with change in image-quality metrics for both normal and keratoconic wavefront errors.

    Science.gov (United States)

    Ravikumar, Ayeswarya; Marsack, Jason D; Bedell, Harold E; Shi, Yue; Applegate, Raymond A

    2013-11-26

    We determined the degree to which change in visual acuity (VA) correlates with change in optical quality using image-quality (IQ) metrics for both normal and keratoconic wavefront errors (WFEs). VA was recorded for five normal subjects reading simulated, logMAR acuity charts generated from the scaled WFEs of 15 normal and seven keratoconic eyes. We examined the correlations over a large range of acuity loss (up to 11 lines) and a smaller, more clinically relevant range (up to four lines). Nine IQ metrics were well correlated for both ranges. Over the smaller range of primary interest, eight were also accurate and precise in estimating the variations in logMAR acuity in both normal and keratoconic WFEs. The accuracy for these eight best metrics in estimating the mean change in logMAR acuity ranged between ±0.0065 to ±0.017 logMAR (all less than one letter), and the precision ranged between ±0.10 to ±0.14 logMAR (all less than seven letters).

  9. Ecological Status of a Patagonian Mountain River: Usefulness of Environmental and Biotic Metrics for Rehabilitation Assessment

    Science.gov (United States)

    Laura, Miserendino M.; Adriana, M. Kutschker; Cecilia, Brand; La Ludmila, Manna; Cecilia, Prinzio Y. Di; Gabriela, Papazian; José, Bava

    2016-06-01

    This work evaluates the consequences of anthropogenic pressures at different sections of a Patagonian mountain river using a set of environmental and biological measures. A map of the risk of soil erosion at the basin scale was also produced. The study was conducted at 12 sites along the Percy River system, where physicochemical parameters, riparian ecosystem quality, habitat condition, plants, and macroinvertebrates were investigated. While livestock raising and wood collection, the dominant activities at upper and middle basin sites, resulted in an important loss of forest cover, the riparian ecosystem remains in a relatively good state of conservation, as do the in-stream habitat conditions and physicochemical features. Moreover, most indicators based on macroinvertebrates revealed that both the upper and middle basin sections supported similar assemblages, richness, density, and most functional feeding group attributes. In contrast, the lower, urbanized basin showed increases in conductivity and nutrient values and poor quality of the riparian ecosystem and habitat condition. According to the multivariate analysis, ammonia level, elevation, current velocity, and habitat conditions had explanatory power for benthos assemblages. Discharge, naturalness of the river channel, flood plain morphology, conservation status, and percent of urban area were important moderators of plant composition. Finally, although present land use in the basin would not produce a significant risk of soil erosion, unsustainable practices that promote the substitution of the forest by shrubs would lead to severe consequences. Mitigation efforts should be directed at protecting headwater forest, restoring the altered riparian ecosystem, and controlling the incipient eutrophication process.

  10. Quantitative metrics for assessing predicted climate change pressure on North American tree species

    Science.gov (United States)

    Kevin M. Potter; William W. Hargrove

    2013-01-01

    Changing climate may pose a threat to forest tree species, forcing three potential population-level responses: toleration/adaptation, movement to suitable environmental conditions, or local extirpation. Assessments that prioritize and classify tree species for management and conservation activities in the face of climate change will need to incorporate estimates of the...

  11. From Log Files to Assessment Metrics: Measuring Students' Science Inquiry Skills Using Educational Data Mining

    Science.gov (United States)

    Gobert, Janice D.; Sao Pedro, Michael; Raziuddin, Juelaila; Baker, Ryan S.

    2013-01-01

    We present a method for assessing science inquiry performance, specifically for the inquiry skill of designing and conducting experiments, using educational data mining on students' log data from online microworlds in the Inq-ITS system (Inquiry Intelligent Tutoring System; www.inq-its.org). In our approach, we use a 2-step process: First we use…

  12. Quality Assessment of Collection 6 MODIS Atmospheric Science Products

    Science.gov (United States)

    Manoharan, V. S.; Ridgway, B.; Platnick, S. E.; Devadiga, S.; Mauoka, E.

    2015-12-01

    Since the launch of the NASA Terra and Aqua satellites in December 1999 and May 2002, respectively, atmosphere and land data acquired by the MODIS (Moderate Resolution Imaging Spectroradiometer) sensor on-board these satellites have been reprocessed five times at the MODAPS (MODIS Adaptive Processing System) located at NASA GSFC. The global land and atmosphere products use science algorithms developed by the NASA MODIS science team investigators. MODAPS completed Collection 6 reprocessing of MODIS Atmosphere science data products in April 2015 and is currently generating the Collection 6 products using the latest version of the science algorithms. This reprocessing has generated one of the longest time series of consistent data records for understanding cloud, aerosol, and other constituents in the earth's atmosphere. It is important to carefully evaluate and assess the quality of this data and remove any artifacts to maintain a useful climate data record. Quality Assessment (QA) is an integral part of the processing chain at MODAPS. This presentation will describe the QA approaches and tools adopted by the MODIS Land/Atmosphere Operational Product Evaluation (LDOPE) team to assess the quality of MODIS operational Atmospheric products produced at MODAPS. Some of the tools include global high resolution images, time series analysis and statistical QA metrics. The new high resolution global browse images with pan and zoom have provided the ability to perform QA of products in real time through synoptic QA on the web. This global browse generation has been useful in identifying production error, data loss, and data quality issues from calibration error, geolocation error and algorithm performance. A time series analysis for various science datasets in the Level-3 monthly product was recently developed for assessing any long term drifts in the data arising from instrument errors or other artifacts. This presentation will describe and discuss some test cases from the

  13. Metric learning

    CERN Document Server

    Bellet, Aurelien; Sebban, Marc

    2015-01-01

    Similarity between objects plays an important role in both human cognitive processes and artificial systems for recognition and categorization. How to appropriately measure such similarities for a given task is crucial to the performance of many machine learning, pattern recognition and data mining methods. This book is devoted to metric learning, a set of techniques to automatically learn similarity and distance functions from data that has attracted a lot of interest in machine learning and related fields in the past ten years. In this book, we provide a thorough review of the metric learnin

  14. Harmonizing exposure metrics and methods for sustainability assessments of food contact materials

    DEFF Research Database (Denmark)

    Ernstoff, Alexi; Jolliet, Olivier; Niero, Monia

    2016-01-01

    We aim to develop harmonized and operational methods for quantifying exposure to chemicals in food packaging specifically for sustainability assessments. Thousands of chemicals are approved for food packaging and numerous contaminates occur, e.g. through recycling. Chemical migration into food......, like LCA, finally facilitates including exposure to chemicals as a sustainable packaging design issue. Results were demonstrated in context of the pilot-scale Product Environmental Footprint regulatory method in the European Union. Increasing recycled content, decreasing greenhouse gas emissions...... by selecting plastics over glass, and adding chemicals with a design function were identified as risk management issues. We conclude developing an exposure framework, suitable for sustainability assessments commonly used for food packaging, is feasible to help guide packaging design to consider both...

  15. Automated Neuropsychological Assessment Metrics, Version 4 (ANAM4): Examination of Select Psychometric Properties and Administration Procedures

    Science.gov (United States)

    2016-12-01

    performance; 2) assess the test-retest reliability and practice effects of individual ANAM4 test modules; 3) examine the validity of the ANAM4 Mood Scale ... individual ANAM4 test modules. Study 3 examines the validity of the ANAM4 Mood Scale. Study 4 aims to establish a nationally representative normative... parametric approach using major advances on spectroscopic methods and neuroimaging to identify biomarkers that can be used to distinguish between post

  16. Assessing Metrics for Estimating Fire Induced Change in the Forest Understorey Structure Using Terrestrial Laser Scanning

    OpenAIRE

    Gupta, Vaibhav; Reinke, Karin; Jones, Simon; Wallace, Luke; Holden, Lucas

    2015-01-01

    Quantifying post-fire effects in a forested landscape is important for ascertaining burn severity and ecosystem recovery and for post-fire hazard assessment and mitigation planning. Reporting of such post-fire effects assumes significance in fire-prone countries such as the USA, Australia, Spain, Greece, and Portugal, where prescribed burns are routinely carried out. This paper describes the use of Terrestrial Laser Scanning (TLS) to estimate and map change in the forest understorey following a prescribed bu...

  17. A novel no-reference objective stereoscopic video quality assessment method based on visual saliency analysis

    Science.gov (United States)

    Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin

    2017-07-01

    This paper proposes a no-reference objective stereoscopic video quality assessment method, motivated by the goal of bringing the results of objective experiments closer to those of subjective evaluation. We believe that image regions with different degrees of visual saliency should not receive the same weights when designing an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions of strong, general, and weak saliency. In addition, local features such as blockiness, zero-crossings, and depth are extracted and combined in a mathematical model to calculate a quality assessment score, with regions of different saliency assigned different weights in the model. Experimental results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.
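
    A minimal sketch of the saliency-tiered weighting idea described above, assuming the saliency map (e.g., from GBVS) and a per-pixel distortion feature such as blockiness are already computed; the tier thresholds and weights below are illustrative assumptions, not the paper's values.

    ```python
    import numpy as np

    def saliency_weighted_score(distortion_map, saliency_map, weights=(0.6, 0.3, 0.1)):
        """Pool a per-pixel distortion feature (e.g. blockiness) over three
        saliency tiers (strong / general / weak) with hypothetical weights."""
        lo, hi = np.percentile(saliency_map, [33, 66])
        tiers = [distortion_map[saliency_map >= hi],                          # strong
                 distortion_map[(saliency_map >= lo) & (saliency_map < hi)],  # general
                 distortion_map[saliency_map < lo]]                           # weak
        return sum(w * t.mean() for w, t in zip(weights, tiers) if t.size)

    # Toy example with random maps standing in for real frame features
    rng = np.random.default_rng(0)
    print(saliency_weighted_score(rng.random((480, 640)), rng.random((480, 640))))
    ```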

  18. A condition metric for Eucalyptus woodland derived from expert evaluations.

    Science.gov (United States)

    Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D

    2018-02-01

    The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
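
    The ensemble described above (30 bagged regression trees mapping 13 site variables to perceived quality) can be sketched with scikit-learn; the synthetic training data and score scale below are placeholders for the expert-elicited evaluations.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.ensemble import BaggingRegressor

    rng = np.random.default_rng(1)
    # Placeholder for the expert data set: hypothetical sites described by
    # 13 site variables (e.g. shrub cover, native forb richness) and an
    # expert-assigned quality score for each site.
    X = rng.uniform(0.0, 1.0, size=(200, 13))
    y = 100.0 * X[:, :4].mean(axis=1) + rng.normal(0.0, 5.0, size=200)

    # Ensemble of 30 bagged regression trees, as in the metric described above
    model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=30, random_state=1)
    model.fit(X, y)
    print(model.predict(X[:3]))   # predicted condition scores for field sites
    ```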

  19. Objective assessment of the impact of frame rate on video quality

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Korhonen, Jari; Forchhammer, Søren

    2012-01-01

    In this paper, we present a novel objective quality metric that takes the impact of frame rate into account. The proposed metric uses PSNR, frame rate and a content dependent parameter that can easily be obtained from spatial and temporal activity indices. The results have been validated on data...... from a subjective quality study, where the test subjects have been choosing the preferred path from the lowest quality to the best quality, at each step making a choice in favor of higher frame rate or lower distortion. A comparison with other relevant objective metrics shows that the proposed metric...
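
    A hedged sketch of how a frame-rate-aware metric of this kind might combine PSNR, frame rate, and a content-dependent parameter c; the power-law form and the values of c below are illustrative assumptions, not the formula proposed in the paper.

    ```python
    def frame_rate_quality(psnr_db, frame_rate, max_frame_rate=30.0, c=0.5):
        """Hypothetical combination: attenuate PSNR-based quality as the frame
        rate drops, with a content-dependent exponent c (larger c for high-motion
        content, derived in practice from spatial/temporal activity indices)."""
        return psnr_db * (frame_rate / max_frame_rate) ** c

    # A high-motion clip (large c) loses more quality at 15 fps than a static one
    print(frame_rate_quality(38.0, 15.0, c=0.8), frame_rate_quality(38.0, 15.0, c=0.2))
    ```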

  20. Quality assessment of pacemaker implantations in Denmark

    DEFF Research Database (Denmark)

    Møller, M; Arnsbo, P; Asklund, Mogens

    2002-01-01

    AIMS: Quality assessment of therapeutic procedures is essential to ensure a cost-effective health care system. Pacemaker implantation is a common procedure with more than 500,000 implantations worldwide per year, but the general complication rate is not well described. We studied procedure related...

  1. Water quality assessment and hydrochemical characteristics of ...

    Indian Academy of Sciences (India)

    Water quality assessment and hydrochemical characteristics of groundwater on the aspect of metals in an old town, Foshan, south China. Guanxing Huang, Zongyu Chen, Jichao Sun. Journal of Earth System Science, Volume 123, Issue 1, February 2014, pp. 91-100 ...

  2. Water quality issues and energy assessments

    Energy Technology Data Exchange (ETDEWEB)

    Davis, M.J.; Chiu, S.

    1980-11-01

    This report identifies and evaluates the significant water quality issues related to regional and national energy development. In addition, it recommends improvements in the Office assessment capability. Handbook-style formatting, which includes a system of cross-references and prioritization, is designed to help the reader use the material.

  3. Quality assessment of human behavior models

    NARCIS (Netherlands)

    Doesburg, W.A. van

    2007-01-01

    Accurate and efficient models of human behavior offer great potential in military and crisis management applications. However, little attention has been given to the manner in which it can be determined whether this potential is actually realized. In this study a quality assessment approach that

  4. Physicochemical and bacteriological quality assessment of the ...

    African Journals Online (AJOL)

    ALAKEH

    Physicochemical and bacteriological quality assessment of the Bambui community drinking water in the North West Region ... the water samples were contaminated to different extents by bacteria and heavy metals due to lack of ... water source and the decision to purify or not water.

  5. Retinal image quality assessment through a visual similarity index

    Science.gov (United States)

    Pérez, Jorge; Espinosa, Julián; Vázquez, Carmen; Mas, David

    2013-04-01

    Retinal image quality is commonly analyzed through parameters inherited from instrumental optics. These parameters are defined for 'good optics' so they are hard to translate into visual quality metrics. Instead of using point or artificial functions, we propose a quality index that takes into account properties of natural images. These images usually show strong local correlations that help to interpret the image. Our aim is to derive an objective index that quantifies the quality of vision by taking into account the local structure of the scene, instead of focusing on a particular aberration. As we show, this index highly correlates with visual acuity and allows inter-comparison of natural images around the retina. The usefulness of the index is proven through the analysis of real eyes before and after undergoing corneal surgery, which usually are hard to analyze with standard metrics.
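
    As a rough illustration of exploiting local correlations in natural images, the sketch below scores an image by the correlation between neighbouring pixels; this is a simplified stand-in for the idea, not the index proposed by the authors.

    ```python
    import numpy as np

    def local_correlation_index(image, shift=1):
        """Correlation between the image and a copy shifted by one pixel; blur
        and noise both disturb the strong local correlations of natural images."""
        image = np.asarray(image, dtype=float)
        a = image[:, :-shift].ravel()
        b = image[:, shift:].ravel()
        return float(np.corrcoef(a, b)[0, 1])

    # Toy check: a smooth gradient is highly correlated, white noise is not
    smooth = np.tile(np.linspace(0, 1, 128), (128, 1))
    noise = np.random.default_rng(0).random((128, 128))
    print(local_correlation_index(smooth), local_correlation_index(noise))
    ```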

  6. Methodologies and Metrics for Assessing the Strength of Relationships between Entities within Semantic Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Hickling, T L; Hanley, W G

    2005-09-29

    Semantic graphs are becoming a valuable tool for organizing and discovering information in an increasingly complex analysis environment. This paper investigates the use of graph topology to measure the strength of relationships in a semantic graph. These relationships are comprised of some number of distinct paths, whose length and configuration jointly characterize the strength of association. We explore these characteristics through the use of three distinct algorithms respectively based upon an electrical conductance model, Newman and Girvan's measure of betweenness [5], and cutsets. Algorithmic performance is assessed based upon a collection of partially ordered subgraphs which were constructed according to our subjective beliefs regarding strength of association.
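
    The electrical-conductance view of association strength can be illustrated with NetworkX, where conductance is the reciprocal of the resistance distance between two entities; the toy graph and entity names below are hypothetical.

    ```python
    import networkx as nx

    # Hypothetical semantic graph: entities as nodes, observed relations as edges
    G = nx.Graph()
    G.add_edges_from([("personA", "orgX"), ("orgX", "eventY"),
                      ("personA", "eventY"), ("personB", "eventY"),
                      ("personB", "orgX")])

    # Effective conductance between two entities: many short, partly independent
    # paths lower the resistance distance and therefore raise the conductance.
    strength = 1.0 / nx.resistance_distance(G, "personA", "personB")
    print(f"association strength (conductance): {strength:.3f}")

    # Newman-Girvan style edge betweenness of the connecting edges, for comparison
    print(nx.edge_betweenness_centrality(G))
    ```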

  7. Assessing Quality of Data Standards: Framework and Illustration Using XBRL GAAP Taxonomy

    Science.gov (United States)

    Zhu, Hongwei; Wu, Harris

    The primary purpose of data standards or metadata schemas is to improve the interoperability of data created by multiple standard users. Given the high cost of developing data standards, it is desirable to assess the quality of data standards. We develop a set of metrics and a framework for assessing data standard quality. The metrics include completeness and relevancy. Standard quality can also be indirectly measured by assessing interoperability of data instances. We evaluate the framework using data from the financial sector: the XBRL (eXtensible Business Reporting Language) GAAP (Generally Accepted Accounting Principles) taxonomy and US Securities and Exchange Commission (SEC) filings produced using the taxonomy by approximately 500 companies. The results show that the framework is useful and effective. Our analysis also reveals quality issues of the GAAP taxonomy and provides useful feedback to taxonomy users. The SEC has mandated that all publicly listed companies must submit their filings using XBRL. Our findings are timely and have practical implications that will ultimately help improve the quality of financial data.
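
    One plausible reading of the completeness and relevancy metrics, sketched for a toy taxonomy; the set-based definitions and tag names below are assumptions for illustration, not the paper's exact formulation.

    ```python
    def completeness(standard_elements, needed_elements):
        """Share of the elements users need that the standard actually defines."""
        return len(standard_elements & needed_elements) / len(needed_elements)

    def relevancy(standard_elements, used_elements):
        """Share of the defined elements that filers actually use."""
        return len(standard_elements & used_elements) / len(standard_elements)

    # Toy taxonomy and observed usage, standing in for XBRL GAAP tags and SEC filings
    taxonomy = {"Assets", "Liabilities", "Revenues", "RarelyUsedTag"}
    needed   = {"Assets", "Liabilities", "Revenues", "CashFlow"}
    used     = {"Assets", "Liabilities", "Revenues"}
    print(completeness(taxonomy, needed), relevancy(taxonomy, used))
    ```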

  8. Air Quality Assessment Using Interpolation Technique

    Directory of Open Access Journals (Sweden)

    Awkash Kumar

    2016-07-01

    Air pollution is increasing rapidly in almost all cities around the world due to population growth. Mumbai, India, is one of the megacities where air quality is deteriorating at a very rapid rate. Air quality monitoring stations have been installed in the city to support air pollution control strategies aimed at reducing pollution levels. In this paper, air quality assessment has been carried out over the sample region using interpolation techniques. The Inverse Distance Weighting (IDW) technique of a Geographical Information System (GIS) has been used to perform interpolation with the help of air quality concentration data at three locations in Mumbai for the year 2008. The classification was done for the spatial and temporal variation in air quality levels for the Mumbai region. The seasonal and annual variations of air quality levels for SO2, NOx, and SPM (Suspended Particulate Matter) are the focus of this study. Results show that SPM concentrations always exceeded the permissible limit of the National Ambient Air Quality Standard. Also, the seasonal concentration of SPM was lower during the monsoon due to rainfall. The findings of this study will help to formulate control strategies for rational management of air pollution and can be used for many other regions.
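
    A compact sketch of the IDW interpolation described above: each unmonitored location is estimated as a distance-weighted average of the monitoring-station values (the station coordinates and SPM concentrations below are made up for illustration).

    ```python
    import numpy as np

    def idw(station_xy, station_values, query_xy, power=2.0):
        """Inverse Distance Weighting: each query point is the weighted average
        of the station values, with weights proportional to 1 / distance**power."""
        station_xy = np.asarray(station_xy, dtype=float)
        station_values = np.asarray(station_values, dtype=float)
        estimates = []
        for q in np.atleast_2d(np.asarray(query_xy, dtype=float)):
            d = np.linalg.norm(station_xy - q, axis=1)
            if np.any(d == 0):                       # query coincides with a station
                estimates.append(station_values[d == 0][0])
                continue
            w = 1.0 / d ** power
            estimates.append(np.sum(w * station_values) / np.sum(w))
        return np.array(estimates)

    # Hypothetical SPM concentrations (ug/m3) at three monitoring stations
    stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
    spm = [180.0, 220.0, 150.0]
    print(idw(stations, spm, [(5.0, 5.0), (1.0, 1.0)]))
    ```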

  9. Capability Assessment and Performance Metrics for the Titan Multispectral Mapping Lidar

    Directory of Open Access Journals (Sweden)

    Juan Carlos Fernandez-Diaz

    2016-11-01

    In this paper we present a description of a new multispectral airborne mapping light detection and ranging (lidar) along with performance results obtained from two years of data collection and test campaigns. The Titan multiwave lidar is manufactured by Teledyne Optech Inc. (Toronto, ON, Canada) and emits laser pulses in the 1550, 1064 and 532 nm wavelengths simultaneously through a single oscillating mirror scanner at pulse repetition frequencies (PRF) that range from 50 to 300 kHz per wavelength (max combined PRF of 900 kHz). The Titan system can perform simultaneous mapping in terrestrial and very shallow water environments and its multispectral capability enables new applications, such as the production of false color active imagery derived from the lidar return intensities and the automated classification of target and land covers. Field tests and mapping projects performed over the past two years demonstrate capabilities to classify five land covers in urban environments with an accuracy of 90%, map bathymetry under more than 15 m of water, and map thick vegetation canopies at sub-meter vertical resolutions. In addition to its multispectral and performance characteristics, the Titan system is designed with several redundancies and diversity schemes that have proven to be beneficial for both operations and the improvement of data quality.

  10. MO-D-213-06: Quantitative Image Quality Metrics Are for Physicists, Not Radiologists: How to Communicate to Your Radiologists Using Their Language

    Energy Technology Data Exchange (ETDEWEB)

    Szczykutowicz, T; Rubert, N; Ranallo, F [University Wisconsin-Madison, Madison, WI (United States)

    2015-06-15

    Purpose: A framework for explaining differences in image quality to non-technical audiences in medical imaging is needed. Currently, this task is something that is learned “on the job.” The lack of a formal methodology for communicating optimal acquisition parameters into the clinic effectively mitigates many technological advances. As a community, medical physicists need to be held responsible not only for advancing image science, but also for ensuring its proper use in the clinic. This work outlines a framework that bridges the gap between the results of quantitative image quality metrics like detectability, MTF, and NPS and their effect on specific anatomical structures present in diagnostic imaging tasks. Methods: Specific structures of clinical importance were identified for a body, an extremity, a chest, and a temporal bone protocol. Using these structures, quantitative metrics were used to identify the parameter space that should yield optimal image quality, constrained within the confines of clinical logistics and dose considerations. The reading room workflow for presenting the proposed changes for imaging each of these structures is presented. The workflow consists of displaying, for physician review, images acquired with different combinations of acquisition parameters guided by quantitative metrics. Examples of using detectability index, MTF, NPS, noise, and noise non-uniformity are provided. During review, the physicians were forced to judge the image quality solely on those features they need for diagnosis, not on the overall “look” of the image. Results: We found that in many cases, use of this framework settled disagreements between physicians. Once forced to judge images on the ability to detect specific structures, inter-reader agreement was obtained. Conclusion: This framework will provide consulting, research/industrial, or in-house physicists with clinically relevant imaging tasks to guide reading room image review. This framework avoids use
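
    As one example of the quantitative metrics named above (NPS), the sketch below computes a two-dimensional noise power spectrum from flat-field ROIs using the standard flat-field definition; the simulated ROIs and pixel size are placeholders, not data from the abstract.

    ```python
    import numpy as np

    def nps_2d(flat_rois, pixel_size_mm=0.1):
        """2-D noise power spectrum from an ensemble of uniform-exposure ROIs,
        NPS = (dx*dy / (Nx*Ny)) * <|FFT(I - mean)|^2>."""
        rois = np.asarray(flat_rois, dtype=float)
        n_rois, ny, nx = rois.shape
        rois = rois - rois.mean(axis=(1, 2), keepdims=True)    # remove the DC term
        spectra = np.abs(np.fft.fft2(rois)) ** 2
        nps = spectra.mean(axis=0) * (pixel_size_mm ** 2) / (nx * ny)
        return np.fft.fftshift(nps)

    # Simulated stand-in: 16 flat-field ROIs of pure white noise
    rois = np.random.default_rng(0).normal(0.0, 5.0, size=(16, 64, 64))
    print(nps_2d(rois).shape)   # (64, 64) spectrum, DC at the centre
    ```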

  11. Deep Aesthetic Quality Assessment With Semantic Information.

    Science.gov (United States)

    Kao, Yueying; He, Ran; Huang, Kaiqi

    2017-03-01

    Human beings often assess the aesthetic quality of an image coupled with the identification of the image's semantic content. This paper addresses the correlation issue between automatic aesthetic quality assessment and semantic recognition. We cast the assessment problem as the main task within a multi-task deep model, and argue that the semantic recognition task offers the key to addressing this problem. Based on convolutional neural networks, we employ a single and simple multi-task framework to efficiently utilize the supervision of aesthetic and semantic labels. A correlation item between these two tasks is further introduced into the framework by incorporating inter-task relationship learning. This item not only provides some useful insight about the correlation but also improves the assessment accuracy of the aesthetic task. In particular, an effective strategy is developed to keep a balance between the two tasks, which facilitates optimization of the framework's parameters. Extensive experiments on the challenging Aesthetic Visual Analysis dataset and the Photo.net dataset validate the importance of semantic recognition in aesthetic quality assessment, and demonstrate that multi-task deep models can discover an effective aesthetic representation to achieve state-of-the-art results.
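
    A minimal PyTorch sketch of a shared convolutional trunk with aesthetic and semantic heads trained under a weighted joint loss, illustrating the multi-task idea; the architecture, sizes, and loss weighting are assumptions for illustration, not the paper's network.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiTaskAesthetic(nn.Module):
        """Shared convolutional trunk with two heads: an aesthetic score
        (regression) and semantic labels (multi-label classification)."""
        def __init__(self, n_semantic=10):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.aesthetic_head = nn.Linear(32, 1)
            self.semantic_head = nn.Linear(32, n_semantic)

        def forward(self, x):
            h = self.trunk(x)
            return self.aesthetic_head(h).squeeze(1), self.semantic_head(h)

    model = MultiTaskAesthetic()
    images = torch.randn(4, 3, 64, 64)                      # toy mini-batch
    aesthetic_score, semantic_logits = model(images)
    loss = F.mse_loss(aesthetic_score, torch.rand(4)) \
         + 0.5 * F.binary_cross_entropy_with_logits(semantic_logits,
                                                     torch.randint(0, 2, (4, 10)).float())
    loss.backward()                                          # joint training step
    ```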

  12. Mauve assembly metrics.

    Science.gov (United States)

    Darling, Aaron E; Tritt, Andrew; Eisen, Jonathan A; Facciotti, Marc T

    2011-10-01

    High-throughput DNA sequencing technologies have spurred the development of numerous novel methods for genome assembly. With few exceptions, these algorithms are heuristic and require one or more parameters to be manually set by the user. One approach to parameter tuning involves assembling data from an organism with an available high-quality reference genome, and measuring assembly accuracy using some metrics. We developed a system to measure assembly quality under several scoring metrics, and to compare assembly quality across a variety of assemblers, sequence data types, and parameter choices. When used in conjunction with training data such as a high-quality reference genome and sequence reads from the same organism, our program can be used to manually identify an optimal sequencing and assembly strategy for de novo sequencing of related organisms. GPL source code and a usage tutorial is at http://ngopt.googlecode.com aarondarling@ucdavis.edu Supplementary data is available at Bioinformatics online.

  13. Parasitology: United Kingdom National Quality Assessment Scheme.

    Science.gov (United States)

    Hawthorne, M.; Chiodini, P. L.; Snell, J. J.; Moody, A. H.; Ramsay, A.

    1992-01-01

    AIMS: To assess the results from parasitology laboratories taking part in a quality assessment scheme between 1986 and 1991; and to compare performance with repeat specimens. METHODS: Quality assessment of blood parasitology, including tissue parasites (n = 444; 358 UK, 86 overseas), and faecal parasitology, including extra-intestinal parasites (n = 205; 141 UK, 64 overseas), was performed. RESULTS: Overall, the standard of performance was poor. A questionnaire distributed to participants showed that a wide range of methods was used, some of which were considered inadequate to achieve reliable results. Teaching material was distributed to participants from time to time in an attempt to improve standards. CONCLUSIONS: Since the closure of the IMLS fellowship course in 1972, fewer opportunities for specialised training in parasitology are available: more training is needed. Poor performance in the detection of malarial parasites is mainly attributable to incorrect speciation, misidentification, and lack of equipment such as an eyepiece graticule. PMID:1452791

  14. Arbuscular mycorrhiza in soil quality assessment

    DEFF Research Database (Denmark)

    Kling, M.; Jakobsen, I.

    1998-01-01

    Arbuscular mycorrhizal (AM) fungi constitute a living bridge for the transport of nutrients from soil to plant roots, and are considered as the group of soil microorganisms that is of most direct importance to nutrient uptake by herbaceous plants. AM fungi also contribute to the formation of soil...... aggregates and to the protection of plants against drought and root pathogens. Assessment of soil quality, defined as the capacity of a soil to function within ecosystem boundaries to sustain biological productivity, maintain environmental quality, and promote plant health, should therefore include both...... quantitative and qualitative measurements of this important biological resource. Various methods for the assessment of the potential for mycorrhiza formation and function are presented. Examples are given of the application of these methods to assess the impact of pesticides on the mycorrhiza....

  15. Metrics for Success: Strategies for Enabling Core Facility Performance and Assessing Outcomes.

    Science.gov (United States)

    Turpen, Paula B; Hockberger, Philip E; Meyn, Susan M; Nicklin, Connie; Tabarini, Diane; Auger, Julie A

    2016-04-01

    Core Facilities are key elements in the research portfolio of academic and private research institutions. Administrators overseeing core facilities (core administrators) require assessment tools for evaluating the need for and effectiveness of these facilities at their institutions. This article discusses ways to promote best practices in core facilities as well as ways to evaluate their performance across the following eight categories: general management, research and technical staff, financial management, customer base and satisfaction, resource management, communications, institutional impact, and strategic planning. For each category, we provide lessons learned that we believe contribute to the effective and efficient overall management of core facilities. If done well, we believe that encouraging best practices and evaluating performance in core facilities will demonstrate and reinforce the importance of core facilities in the research and educational mission of institutions. It will also increase job satisfaction of those working in core facilities and improve the likelihood of sustainability of both facilities and personnel.

  16. Quality assessment of palliative home care in Italy.

    Science.gov (United States)

    Scaccabarozzi, Gianlorenzo; Lovaglio, Pietro Giorgio; Limonta, Fabrizio; Floriani, Maddalena; Pellegrini, Giacomo

    2017-08-01

    The complexity of end-of-life care, delivered by a large number of units of different organizational types caring for dying patients, underlines the importance of measuring the quality of the care provided. Despite law 38/2010, promulgated to remove barriers and provide affordable access to palliative care, measurement and monitoring of the processes of home care providers in Italy had not been attempted. Using data drawn from an institutional voluntary observatory established in Italy in 2013, covering home palliative care units caring for people between January and December 2013, we assess the degree to which Italian home palliative care teams endorse a set of standards required by law 38/2010 and best practices emerging from the literature. The evaluation strategy is based on Rasch analysis, which allows both the performance of facilities and the difficulty of quality indicators to be measured objectively on the same metric, using 14 quality indicators identified by the observatory's steering committee. Globally, 195 home care teams were registered in the observatory, reporting 40,955 patients cared for in 2013 and representing 66% of the home palliative care units active in Italy in 2013. Rasch analysis identified 5 indicators ("interview" with caregivers, continuous training provided to medical and nursing staff, provision of specialized multidisciplinary interventions, psychological support to the patient and family, and drug supply at home) that were easy for health care providers to endorse, and 3 problematic indicators (presence of a formally established Local Network of Palliative Care in the area of reference, provision of care for the most problematic patients requiring high intensity of care, and the percentage of cancer patients dying at home). The lack of a Local Network of Palliative Care, required by law 38/2010, is at present the main barrier to its application. However, the adopted methodology suggests that a clear roadmap for health facilities

  17. Identified metabolic signature for assessing red blood cell unit quality is associated with endothelial damage markers and clinical outcomes

    DEFF Research Database (Denmark)

    Bordbar, Aarash; Johansson, Pär I.; Paglia, Giuseppe

    2016-01-01

    shown no difference of clinical outcome for patients receiving old or fresh RBCs. An overlooked but essential issue in assessing RBC unit quality and ultimately designing the necessary clinical trials is a metric for what constitutes an old or fresh RBC unit. STUDY DESIGN AND METHODS: Twenty RBC units...... years and endothelial damage markers in healthy volunteers undergoing autologous transfusions. CONCLUSION: The state of RBC metabolism may be a better indicator of cellular quality than traditional hematologic variables....

  18. Categorizing biomarkers of the human exposome and developing metrics for assessing environmental sustainability.

    Science.gov (United States)

    Pleil, Joachim D

    2012-01-01

    The concept of maintaining environmental sustainability broadly encompasses all human activities that impact the global environment, including the production of energy, use and management of finite resources such as petrochemicals, metals, food production (farmland, fresh and ocean waters), and potable water sources (rivers, lakes, aquifers), as well as preserving the diversity of the surrounding ecosystems. The ultimate concern is how one can manage Spaceship Earth in the long term to sustain the life, health, and welfare of the human species and the planet's flora and fauna. On a more intimate scale, one needs to consider the human interaction with the environment as expressed in the form of the exposome, which is defined as all exogenous and endogenous exposures from conception onward, including exposures from diet, lifestyle, and internal biology, as a quantity of critical interest to disease etiology. Current status and subsequent changes in the measurable components of the exposome, the human biomarkers, could thus conceivably be used to assess the sustainability of the environmental conditions with respect to human health. The basic theory is that a shift away from sustainability will be reflected in outlier measurements of human biomarkers. In this review, the philosophy of long-term environmental sustainability is explored in the context of human biomarker measurements and how empirical data can be collected and interpreted to assess if solutions to existing environmental problems might have unintended consequences. The first part discusses four conventions in the literature for categorizing environmental biomarkers and how different types of biomarker measurements might fit into the various grouping schemes. The second part lays out a sequence of data management strategies to establish statistics and patterns within the exposome that reflect human homeostasis and how changes or perturbations might be interpreted in light of external environmental

  19. Estimation of sex from the metric assessment of digital hand radiographs in a Western Australian population.

    Science.gov (United States)

    DeSilva, Rebecca; Flavel, Ambika; Franklin, Daniel

    2014-11-01

    The forensic anthropologist is responsible for contributing to the identification of an unknown by constructing a biological profile from their skeletal remains. Towards achieving this goal, anthropologists can apply population and temporally specific standards with known error margins to morphometric data collected from a decedent. Recent research relating to the formulation of sex estimation standards has focussed on the assessment of bones other than the traditionally favoured pelvis and cranium, such as long bones of the appendicular skeleton. In particular, sex estimation standards based on morphometric data from metacarpals and phalanges have reported classification accuracy rates of 80% (and above) based on a narrow range of populations. The purpose of this study is to provide population-specific hand bone sex-estimation standards for a contemporary Western Australian population. The present study examines digital right hand radiographs of 300 adults of known age, equally represented by sex. A total of 40 measurements were taken in each hand (metacarpals and proximal phalanges); the measurements were then analysed using univariate statistics and cross-validated direct and stepwise discriminant function analysis. All hand bone measurements were significantly sexually dimorphic, with a tendency for the width measurements to express a higher degree of dimorphism than the length measurements. A maximum cross-validated classification accuracy of 91% was achieved with a sex bias of -6%. The standards presented here can be used in future forensic investigations that require sex estimation of hand bones in a Western Australian population. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
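
    The cross-validated discriminant function analysis described above can be sketched with scikit-learn's linear discriminant analysis; the simulated measurements and the 10-fold cross-validation below are stand-ins for the Western Australian radiograph data and the paper's validation scheme.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Simulated stand-in: 300 individuals (150 per sex), 40 hand-bone
    # measurements, with a modest mean shift to mimic sexual dimorphism
    sex = np.repeat([0, 1], 150)
    X = rng.normal(0.0, 1.0, size=(300, 40)) + 0.6 * sex[:, None]

    lda = LinearDiscriminantAnalysis()
    accuracy = cross_val_score(lda, X, sex, cv=10).mean()
    print(f"cross-validated classification accuracy: {accuracy:.1%}")
    ```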

  20. Quality Assessment of Domesticated Animal Genome Assemblies

    DEFF Research Database (Denmark)

    Seemann, Stefan E; Anthon, Christian; Palasca, Oana

    2015-01-01

    domesticated animal genomes still need to be sequenced deeper in order to produce high-quality assemblies. In the meanwhile, ironically, the extent to which RNAseq and other next-generation data is produced frequently far exceeds that of the genomic sequence. Furthermore, basic comparative analysis is often...... affected by the lack of genomic sequence. Herein, we quantify the quality of the genome assemblies of 20 domesticated animals and related species by assessing a range of measurable parameters, and we show that there is a positive correlation between the fraction of mappable reads from RNAseq data...

  1. Quantitative Metrics and Risk Assessment: The Three Tenets Model of Cybersecurity

    Directory of Open Access Journals (Sweden)

    Jeff Hughes

    2013-08-01

    Full Text Available Progress in operational cybersecurity has been difficult to demonstrate. In spite of the considerable research and development investments made for more than 30 years, many government, industrial, financial, and consumer information systems continue to be successfully attacked and exploited on a routine basis. One of the main reasons that progress has been so meagre is that most technical cybersecurity solutions that have been proposed to-date have been point solutions that fail to address operational tradeoffs, implementation costs, and consequent adversary adaptations across the full spectrum of vulnerabilities. Furthermore, sound prescriptive security principles previously established, such as the Orange Book, have been difficult to apply given current system complexity and acquisition approaches. To address these issues, the authors have developed threat-based descriptive methodologies to more completely identify system vulnerabilities, to quantify the effectiveness of possible protections against those vulnerabilities, and to evaluate operational consequences and tradeoffs of possible protections. This article begins with a discussion of the tradeoffs among seemingly different system security properties such as confidentiality, integrity, and availability. We develop a quantitative framework for understanding these tradeoffs and the issues that arise when those security properties are all in play within an organization. Once security goals and candidate protections are identified, risk/benefit assessments can be performed using a novel multidisciplinary approach, called “QuERIES.” The article ends with a threat-driven quantitative methodology, called “The Three Tenets”, for identifying vulnerabilities and countermeasures in networked cyber-physical systems. The goal of this article is to offer operational guidance, based on the techniques presented here, for informed decision making about cyber-physical system security.

  2. Learnometrics: Metrics for Learning Objects (Learnometrics: metrieken voor leerobjecten)

    OpenAIRE

    Ochoa, Xavier

    2008-01-01

    - Introduction - Quantitative Analysis of the Publication of Learning Objects - Quantitative Analysis of the Reuse of Learning Objects - Metadata Quality Metrics for Learning Objects - Relevance Ranking Metrics for Learning Objects - Metrics Service Architecture and Use Cases - Conclusions

  3. Trajectory-Oriented Approach to Managing Traffic Complexity: Trajectory Flexibility Metrics and Algorithms and Preliminary Complexity Impact Assessment

    Science.gov (United States)

    Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek

    2009-01-01

    This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.

  4. Assessing natural resource use by forest-reliant communities in Madagascar using functional diversity and functional redundancy metrics.

    Directory of Open Access Journals (Sweden)

    Kerry A Brown

    Biodiversity plays an integral role in the livelihoods of subsistence-based forest-dwelling communities and as a consequence it is increasingly important to develop quantitative approaches that capture not only changes in taxonomic diversity, but also variation in natural resources and provisioning services. We apply a functional diversity metric originally developed for addressing questions in community ecology to assess utilitarian diversity of 56 forest plots in Madagascar. The use categories for utilitarian plants were determined using expert knowledge and household questionnaires. We used a null model approach to examine the utilitarian (functional) diversity and utilitarian redundancy present within ecological communities. Additionally, variables that might influence fluctuations in utilitarian diversity and redundancy--specifically number of felled trees, number of trails, basal area, canopy height, elevation, distance from village--were analyzed using Generalized Linear Models (GLMs). Eighteen of the 56 plots showed utilitarian diversity values significantly higher than expected. This result indicates that these habitats exhibited a low degree of utilitarian redundancy and were therefore comprised of plants with relatively distinct utilitarian properties. One implication of this finding is that minor losses in species richness may result in reductions in utilitarian diversity and redundancy, which may limit local residents' ability to switch between alternative choices. The GLM analysis showed that the most predictive model included basal area, canopy height and distance from village, which suggests that variation in utilitarian redundancy may be a result of local residents harvesting resources from the protected area. Our approach permits an assessment of the diversity of provisioning services available to local communities, offering unique insights that would not be possible using traditional taxonomic diversity measures. These analyses

  5. Assessing natural resource use by forest-reliant communities in Madagascar using functional diversity and functional redundancy metrics.

    Science.gov (United States)

    Brown, Kerry A; Flynn, Dan F B; Abram, Nicola K; Ingram, J Carter; Johnson, Steig E; Wright, Patricia

    2011-01-01

    Biodiversity plays an integral role in the livelihoods of subsistence-based forest-dwelling communities and as a consequence it is increasingly important to develop quantitative approaches that capture not only changes in taxonomic diversity, but also variation in natural resources and provisioning services. We apply a functional diversity metric originally developed for addressing questions in community ecology to assess utilitarian diversity of 56 forest plots in Madagascar. The use categories for utilitarian plants were determined using expert knowledge and household questionnaires. We used a null model approach to examine the utilitarian (functional) diversity and utilitarian redundancy present within ecological communities. Additionally, variables that might influence fluctuations in utilitarian diversity and redundancy--specifically number of felled trees, number of trails, basal area, canopy height, elevation, distance from village--were analyzed using Generalized Linear Models (GLMs). Eighteen of the 56 plots showed utilitarian diversity values significantly higher than expected. This result indicates that these habitats exhibited a low degree of utilitarian redundancy and were therefore comprised of plants with relatively distinct utilitarian properties. One implication of this finding is that minor losses in species richness may result in reductions in utilitarian diversity and redundancy, which may limit local residents' ability to switch between alternative choices. The GLM analysis showed that the most predictive model included basal area, canopy height and distance from village, which suggests that variation in utilitarian redundancy may be a result of local residents harvesting resources from the protected area. Our approach permits an assessment of the diversity of provisioning services available to local communities, offering unique insights that would not be possible using traditional taxonomic diversity measures. These analyses introduce another
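
    A simplified sketch of a null-model test of utilitarian diversity, here counting distinct use categories per plot and shuffling the species-to-use assignments; the real analysis used a functional diversity metric from community ecology, so this stand-in, with hypothetical species and uses, only illustrates the general logic.

    ```python
    import numpy as np

    def utilitarian_diversity(plot_species, species_uses):
        """Number of distinct use categories represented by the species in a plot."""
        return len({use for sp in plot_species for use in species_uses[sp]})

    def null_utilitarian_diversity(plot_species, species_uses, n_iter=999, seed=0):
        """Shuffle the species-to-use assignments and recompute the diversity,
        giving a null distribution for the observed plot value."""
        rng = np.random.default_rng(seed)
        names = list(species_uses)
        use_sets = list(species_uses.values())
        null = []
        for _ in range(n_iter):
            order = rng.permutation(len(use_sets))
            shuffled = {name: use_sets[i] for name, i in zip(names, order)}
            null.append(utilitarian_diversity(plot_species, shuffled))
        return np.array(null)

    species_uses = {"sp1": {"food"}, "sp2": {"timber", "medicine"},
                    "sp3": {"food", "fuel"}, "sp4": {"fuel"}}
    plot = ["sp1", "sp2", "sp4"]
    observed = utilitarian_diversity(plot, species_uses)
    null = null_utilitarian_diversity(plot, species_uses)
    print(observed, (null >= observed).mean())   # observed value and one-sided p-value
    ```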

  6. Cyber threat metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Frye, Jason Neal; Veitch, Cynthia K.; Mateski, Mark Elliot; Michalski, John T.; Harris, James Mark; Trevino, Cassandra M.; Maruoka, Scott

    2012-03-01

    Threats are generally much easier to list than to describe, and much easier to describe than to measure. As a result, many organizations list threats. Fewer describe them in useful terms, and still fewer measure them in meaningful ways. This is particularly true in the dynamic and nebulous domain of cyber threats - a domain that tends to resist easy measurement and, in some cases, appears to defy any measurement. We believe the problem is tractable. In this report we describe threat metrics and models for characterizing threats consistently and unambiguously. The purpose of this report is to support the Operational Threat Assessment (OTA) phase of risk and vulnerability assessment. To this end, we focus on the task of characterizing cyber threats using consistent threat metrics and models. In particular, we address threat metrics and models for describing malicious cyber threats to US FCEB agencies and systems.

  7. Validation of no-reference image quality index for the assessment of digital mammographic images

    Science.gov (United States)

    de Oliveira, Helder C. R.; Barufaldi, Bruno; Borges, Lucas R.; Gabarda, Salvador; Bakic, Predrag R.; Maidment, Andrew D. A.; Schiabel, Homero; Vieira, Marcelo A. C.

    2016-03-01

    To ensure optimal clinical performance of digital mammography, it is necessary to obtain images with high spatial resolution and low noise, keeping radiation exposure as low as possible. These requirements directly affect the radiologists' interpretation. The quality of a digital image should be assessed using objective measurements. In general, these methods measure the similarity between a degraded image and an ideal image without degradation (ground truth), used as a reference. These methods are called Full-Reference Image Quality Assessment (FR-IQA). However, for digital mammography, an image without degradation is not available in clinical practice; thus, an objective method to assess the quality of mammograms must operate without a reference. The purpose of this study is to present a Normalized Anisotropic Quality Index (NAQI), based on the Rényi entropy in the pseudo-Wigner domain, to assess mammography images in terms of spatial resolution and noise without any reference. The method was validated using synthetic images acquired through an anthropomorphic breast software phantom, and clinical exposures on anthropomorphic breast physical phantoms and patients' mammograms. The results reported by this no-reference index follow the same behavior as other well-established full-reference metrics, e.g., the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Reductions of 50% in the radiation dose in phantom images were translated as a decrease of 4 dB in PSNR, 25% in SSIM and 33% in NAQI, evidencing that the proposed metric is sensitive to the noise resulting from dose reduction. The clinical results showed that images reduced to 53% and 30% of the standard radiation dose showed reductions of 15% and 25% in NAQI, respectively. Thus, this index may be used in clinical practice as an image quality indicator to improve the quality assurance programs in mammography; hence, the proposed method reduces the subjectivity
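
    A heavily simplified, hypothetical stand-in for a directional-entropy quality index: Rényi entropies of gradient energy at a few orientations, with their spread used as an anisotropy score. The published NAQI instead uses a directional pseudo-Wigner distribution, so this sketch only illustrates the general idea, not the actual index.

    ```python
    import numpy as np
    from scipy import ndimage

    def renyi_entropy(p, alpha=3):
        """Renyi entropy of order alpha for a non-negative distribution p."""
        p = p / p.sum()
        return np.log2(np.sum(p ** alpha)) / (1 - alpha)

    def anisotropy_index(image, angles=(0, 45, 90, 135)):
        """Spread of directional Renyi entropies of gradient energy: sharper,
        less noisy images tend to show stronger directional differences."""
        image = np.asarray(image, dtype=float)
        entropies = []
        for angle in angles:
            rotated = ndimage.rotate(image, angle, reshape=False, mode="reflect")
            grad = np.abs(np.diff(rotated, axis=1)).ravel() + 1e-12
            entropies.append(renyi_entropy(grad))
        return float(np.std(entropies))

    # Random noise standing in for a phantom image
    phantom = np.random.default_rng(0).random((128, 128))
    print(anisotropy_index(phantom))
    ```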

  8. Multiple Image Arrangement for Subjective Quality Assessment

    Science.gov (United States)

    Wang, Yan; Zhai, Guangtao

    2017-12-01

    Subjective quality assessment serves as the foundation for almost all visual quality research. The size of image quality databases has expanded from dozens to thousands of images over the last decades. Since each subjective rating has to be averaged over quite a few participants, the ever-increasing size of those databases calls for an evolution of existing subjective test methods. Traditional single/double-stimulus approaches are being replaced by multiple-image tests, where several distorted versions of the original are displayed and rated at once. This naturally raises the question of how to arrange those multiple images on screen during the test. In this paper, we answer this question by performing subjective viewing tests with an eye tracker for different types of arrangements. Our research indicates that an isometric arrangement imposes less strain on participants and yields a more uniform distribution of eye fixations and movements, and is therefore expected to generate more reliable subjective ratings.

  9. Quality Assessment of Urinary Stone Analysis

    DEFF Research Database (Denmark)

    Siener, Roswitha; Buchholz, Noor; Daudon, Michel

    2016-01-01

    , between 2010 and 2014. Each participant received the same blinded test samples for stone analysis. A total of 24 samples, comprising pure substances and mixtures of two or three components, were analysed. The evaluation of the quality of the laboratory in the present study was based on the attainment...... and chemical analysis. The aim of the present study was to assess the quality of urinary stone analysis of laboratories in Europe. Nine laboratories from eight European countries participated in six quality control surveys for urinary calculi analyses of the Reference Institute for Bioanalytics, Bonn, Germany...... of 75% of the maximum total points, i.e. 99 points. The methods of stone analysis used were infrared spectroscopy (n = 7), chemical analysis (n = 1) and X-ray diffraction (n = 1). In the present study only 56% of the laboratories, four using infrared spectroscopy and one using X-ray diffraction...

  10. Evaluating the Good Ontology Design Guideline (GoodOD) with the ontology quality requirements and evaluation method and metrics (OQuaRE).

    Directory of Open Access Journals (Sweden)

    Astrid Duque-Ramos

    OBJECTIVE: To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold standard and competency questions based evaluation methods, respectively. BACKGROUND: In the last decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. METHODS: In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained for the previous studies based on the same data. RESULTS: Our results show a significant effect of the GoodOD training over developed ontologies by topics: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. CONCLUSION: The GoodOD guideline had a significant effect over the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies.

  11. Evaluating the Good Ontology Design Guideline (GoodOD) with the ontology quality requirements and evaluation method and metrics (OQuaRE).

    Science.gov (United States)

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold standard and competency questions based evaluation methods, respectively. In the last decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained for the previous studies based on the same data. Our results show a significant effect of the GoodOD training over developed ontologies by topics: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. The GoodOD guideline had a significant effect over the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies.

  12. A proposed metric for assessing the potential of community annoyance from wind turbine low-frequency noise emissions

    Science.gov (United States)

    Kelley, N. D.

    1987-11-01

    Given our initial experience with the low frequency, impulsive noise emissions from the MOD-1 wind turbine and their impact on the surrounding community, the ability to assess the potential of interior low frequency annoyance in homes located near wind turbine installations may be important. Since there are currently no universally accepted metrics or descriptors for low frequency community annoyance, we performed a limited program using volunteers to see if we could identify a method suitable for wind turbine noise applications. We electronically simulated three interior environments resulting from low frequency acoustical loads radiated from both individual turbines and groups of upwind and downwind turbines. The written comments of the volunteers exposed to these interior stimuli were correlated with a number of descriptors which have been proposed for predicting low frequency annoyance. The results are presented in this paper. We discuss our modification of the highest correlated predictor to include the internal dynamic pressure effects associated with the response of residential structures to low frequency acoustic loads. Finally, we outline a proposed procedure for establishing both a low frequency figure of merit for a particular wind turbine design and, using actual measurements, estimate the potential for annoyance to nearby communities.

  13. Visual quality assessment by machine learning

    CERN Document Server

    Xu, Long; Kuo, C -C Jay

    2015-01-01

    The book covers state-of-the-art visual quality assessment (VQA) and learning-based visual quality assessment (LB-VQA) by providing a comprehensive overview of the existing relevant methods. It gives readers the basic knowledge, a systematic overview, and recent developments in VQA. It also covers the preliminary machine learning (ML) knowledge required for VQA tasks and newly developed ML techniques for the purpose. Hence, firstly, it is particularly helpful to beginners (including research students) entering the VQA field in general and LB-VQA in particular. Secondly, new developments in VQA, and LB-VQA in particular, are detailed in this book, giving peer researchers and engineers new insights into VQA.

  14. Ecosystem approaches to environmental quality assessment

    Science.gov (United States)

    Nip, Maarten J.; Udo de Haes, Helias A.

    1995-01-01

    Environmental quality assessment has to focus more on the quality of whole ecosystems, instead of focusing on the direct effects of a specific stressor, because of a more integrated environmental policy approach. Yet, how can the ecosystem quality be measured? Partly this is a normative question, a question of what is considered good and bad. At the same time, it is a scientific question, dealing with the problem of how the state of a system as complex as an ecosystem can be measured. Measuring all abiotic and biotic components, not to mention their many relationships, is not feasible. In this article we review several approaches dealing with this scientific question. Three approaches are distinguished; they differ in the type of variable set and ecosystem model used. As a result of this, the information about the state of the ecosystem differs: ultimate breadth, comprising information about the whole ecosystem, is at the expense of detail, while ultimate detail is at the expense of breadth. We discuss whether the resultant quality assessments differ in character and are therefore suitable to answer different policy questions.

  15. Collembase : a repository for springtail genomics and soil quality assessment

    NARCIS (Netherlands)

    Timmermans, Martijn J T N; de Boer, Muriel E; Nota, Benjamin; de Boer, Tjalf E; Mariën, Janine; Klein-Lankhorst, Rene M; van Straalen, Nico M; Roelofs, Dick

    2007-01-01

    BACKGROUND: Environmental quality assessment is traditionally based on responses of reproduction and survival of indicator organisms.