WorldWideScience

Sample records for assessments quality metrics

  1. Assessing Software Quality Through Visualised Cohesion Metrics

    Directory of Open Access Journals (Sweden)

    Timothy Shih

    2001-05-01

    Cohesion is one of the most important factors for software quality, as well as for maintainability, reliability and reusability. Module cohesion is defined as a quality attribute that measures the singleness of purpose of a module. A module of poor quality can be a serious obstacle to system quality. To design software of good quality, software managers and engineers need cohesion metrics to measure and produce desirable software, since highly cohesive software is considered a desirable construction. In this paper, we propose a function-oriented cohesion metric based on the analysis of live variables and live span and on the visualization of the processing-element dependency graph. We measure six typical cohesion examples as our experiments and justification. The result is a well-defined, well-normalized, well-visualized and well-experimented cohesion metric that indicates, and thus helps enhance, software cohesion strength. Furthermore, this cohesion metric can easily be incorporated into software CASE tools to help software engineers improve software quality.
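
    The paper's metric itself is built on live-variable analysis and a processing-element dependency graph; the flavor of a pairwise cohesion measure can still be shown with a deliberately simplified toy. This is an illustration only, not the authors' metric, and the element sets below are invented.

    ```python
    # Toy illustration only -- NOT the paper's live-variable metric: each
    # processing element is reduced to the set of variables it uses, and
    # cohesion is the mean pairwise Jaccard overlap of those sets.
    from itertools import combinations

    def toy_cohesion(elements):
        pairs = list(combinations(elements, 2))
        return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

    # Elements that all work on the same data suggest a cohesive module...
    print(toy_cohesion([{"total", "n"}, {"total", "n", "mean"}, {"mean", "n"}]))
    # ...while elements touching disjoint data suggest low cohesion.
    print(toy_cohesion([{"total", "n"}, {"path"}, {"user", "retries"}]))
    ```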

  2. A software quality model and metrics for risk assessment

    Science.gov (United States)

    Hyatt, L.; Rosenberg, L.

    1996-01-01

    A software quality model and its associated attributes are defined and used as the basis for a discussion on risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.

  3. Supporting analysis and assessments quality metrics: Utility market sector

    Energy Technology Data Exchange (ETDEWEB)

    Ohi, J. [National Renewable Energy Lab., Golden, CO (United States)

    1996-10-01

    In FY96, NREL was asked to coordinate all analysis tasks so that in FY97 these tasks will be part of an integrated analysis agenda that will begin to define a 5-15 year R&D roadmap and portfolio for the DOE Hydrogen Program. The purpose of the Supporting Analysis and Assessments task at NREL is to provide this coordination and to conduct specific analysis tasks. One of these tasks is to prepare the Quality Metrics (QM) for the Program as part of the overall QM effort at DOE/EERE. The Hydrogen Program is one of 39 program planning units conducting QM, a process begun in FY94 to assess the benefits and costs of DOE/EERE programs. The purpose of QM is to inform decision making during budget formulation by describing the expected outcomes of programs in the budget request. QM is expected to establish a first step toward merit-based budget formulation and to allow DOE/EERE to get the "most bang for its (R&D) buck." In FY96, NREL coordinated a QM team that prepared a preliminary QM for the utility market sector. In the electricity supply sector, the QM analysis shows hydrogen fuel cells capturing 5% (or 22 GW) of the total market of 390 GW of new capacity additions through 2020. Hydrogen consumption in the utility sector increases from 0.009 Quads in 2005 to 0.4 Quads in 2020. Hydrogen fuel cells are projected to displace over 0.6 Quads of primary energy in 2020. In future work, NREL will assess the market for decentralized, on-site generation; develop cost credits for distributed generation benefits (such as deferral of transmission and distribution investments and uninterruptible power service), for by-products such as heat and potable water, and for environmental benefits (reduction of criteria air pollutants and greenhouse gas emissions); compete different fuel cell technologies against each other for market share; and begin to address economic benefits, especially employment.

  4. Software Quality Assurance Metrics

    Science.gov (United States)

    McRae, Kalindra A.

    2004-01-01

    Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards and procedures. In software development, software quality means meeting requirements and a degree of excellence and refinement of a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product: the process is measured to improve it, and the product is measured to increase quality throughout the software life cycle. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of software development can be measured. If software metrics are implemented in software development, they can save time and money, and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects but are not currently being used by the SA team, and to report them to the Software Assurance team to see whether any can be implemented in their software assurance life cycle process.

  5. Metric qualities of the cognitive behavioral assessment for outcome evaluation to estimate psychological treatment effects.

    Science.gov (United States)

    Bertolotti, Giorgio; Michielin, Paolo; Vidotto, Giulio; Sanavio, Ezio; Bottesi, Gioia; Bettinardi, Ornella; Zotti, Anna Maria

    2015-01-01

    Cognitive behavioral assessment for outcome evaluation was developed to evaluate psychological treatment interventions, especially counseling and psychotherapy. It is made up of 80 items and five scales: anxiety, well-being, perception of positive change, depression, and psychological distress. The aim of the study was to present the metric qualities and to show the validity and reliability of the five constructs of the questionnaire in both nonclinical and clinical subjects. Four steps were completed to assess reliability and factor structure: criterion-related and concurrent validity, responsiveness, and convergent-divergent validity. A nonclinical group of 269 subjects was enrolled, as was a clinical group comprising 168 adults undergoing psychotherapy and psychological counseling provided by the Italian public health service. Cronbach's alphas were between 0.80 and 0.91 for the clinical sample and between 0.74 and 0.91 for the nonclinical one. We observed excellent structural validity for the five interrelated dimensions. The clinical group showed higher scores on the anxiety, depression, and psychological distress scales, as well as lower scores on the well-being and perception of positive change scales, than the nonclinical group. Responsiveness was large for the anxiety, well-being, and depression scales; the psychological distress and perception of positive change scales showed a moderate effect. The questionnaire showed excellent psychometric properties, demonstrating that it is a good evaluative instrument with which to assess pre- and post-treatment outcomes.
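
    For reference, the Cronbach's alpha values reported above follow the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch on invented scores (the data below are hypothetical):

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                         # number of items
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Hypothetical example: 5 respondents x 4 items
    scores = np.array([[3, 4, 3, 4],
                       [2, 2, 3, 2],
                       [4, 5, 4, 5],
                       [1, 2, 1, 2],
                       [3, 3, 4, 3]])
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```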

  6. Quality Assessment of Adaptive Bitrate Videos using Image Metrics and Machine Learning

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Brunnström, Kjell

    2015-01-01

    Adaptive bitrate (ABR) streaming is widely used for distribution of videos over the internet. In this work, we investigate how well we can predict the quality of such videos using well-known image metrics, information about the bitrate levels, and a relatively simple machine learning method...

  7. SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric and even a standard to compare amongst candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics’ quality, based on a novel perspective of the metric as surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates’ ability in prognosticating the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates’ quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with surrogate metric exemplified by several widely-used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates’ behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with eCNR of 0.12 resulted in statistically better segmentation, with mean DSC of about 0.85 and first and third quartiles of (0.83, 0.89), compared to MSD with eCNR of 0.10, mean DSC of 0.84 and first and third quartiles of (0.81, 0.89). Conclusion: The designed eCNR is capable of characterizing surrogate metrics’ quality in prognosticating the oracle relevance value. It has been demonstrated to be

  8. Software quality metrics aggregation in industry

    NARCIS (Netherlands)

    Mordal, K.; Anquetil, N.; Laval, J.; Serebrenik, A.; Vasilescu, B.N.; Ducasse, S.

    2013-01-01

    With the growing need for quality assessment of entire software systems in the industry, new issues are emerging. First, because most software quality metrics are defined at the level of individual software components, there is a need for aggregation methods to summarize the results at the system

  9. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    Science.gov (United States)

    Post, J. V.

    1981-01-01

    Software quality metrics was extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  10. Application of sigma metrics for the assessment of quality control in clinical chemistry laboratory in Ghana: A pilot study.

    Science.gov (United States)

    Afrifa, Justice; Gyekye, Seth A; Owiredu, William K B A; Ephraim, Richard K D; Essien-Baidoo, Samuel; Amoah, Samuel; Simpong, David L; Arthur, Aaron R

    2015-01-01

    Sigma metrics provide a uniquely defined scale with which we can assess the performance of a laboratory. The objective of this study was to assess the internal quality control (QC) in the clinical chemistry laboratory of the University of Cape Coast Hospital (UCC) using six sigma metrics. We used commercial control serum [normal (L1) and pathological (L2)] for validation of quality control. Metabolites (glucose, urea, and creatinine), lipids [triglycerides (TG), total cholesterol, high-density lipoprotein cholesterol (HDL-C)], enzymes [alkaline phosphatase (ALP), alanine aminotransferase (ALT)], electrolytes (sodium, potassium, chloride) and total protein were assessed. Between-day imprecision (CV), inaccuracy (bias) and sigma values were calculated for each control level. Apart from sodium (2.40%, 3.83%) and chloride (2.52% and 2.51%) for L1 and L2 respectively, and glucose (4.82%) and cholesterol (4.86%) for L2, CVs for all other parameters (both L1 and L2) were >5%. Four parameters (HDL-C, urea, creatinine and potassium) achieved sigma levels >1 for both controls. Chloride and sodium achieved sigma levels >1 for L1 but sigma levels <1 for L2. Glucose and ALP achieved a sigma level >1 for both control levels, whereas TG achieved a sigma level >2 for both control levels. Unsatisfactory sigma levels (<3) point to the need to review quality control procedures in order to approach six sigma levels for the laboratory.
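
    This record and record 12 below rest on the same standard laboratory formula, sigma = (TEa - |bias|) / CV, with total allowable error (TEa), bias, and imprecision (CV) all expressed in percent. A minimal sketch; the TEa, bias, and CV values below are hypothetical:

    ```python
    def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
        """Sigma = (TEa - |bias|) / CV, all terms in percent."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Hypothetical glucose example: TEa of 10%, observed bias 2%, CV 4.8%
    print(f"sigma = {sigma_metric(10.0, 2.0, 4.8):.1f}")  # ~1.7, i.e. below 3 sigma
    ```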

  11. A No Reference Image Quality Assessment Metric Based on Visual Perception

    Directory of Open Access Journals (Sweden)

    Yan Fu

    2016-12-01

    Nowadays, how to evaluate image quality reasonably is a basic and challenging problem. Existing no-reference evaluation methods cannot accurately reflect human visual perception of image quality. In this paper, we propose an efficient general-purpose no-reference image quality assessment (NRIQA) method based on visual perception that effectively integrates human visual characteristics into the NRIQA field. First, a novel algorithm for salient-region extraction is presented: two characteristic maps of the texture and edges of the original image are added to the Itti model. Because the normalized luminance coefficients of natural images obey a generalized Gaussian probability distribution, we exploit this property to extract statistical features in the regions of interest (ROI) and regions of non-interest, respectively. The extracted features are then fused into an input used to train a support vector regression (SVR) model. Finally, the trained IQA model is used to predict image quality. Experimental results show that the method has good predictive ability and that its evaluation performance is better than existing classical algorithms. Moreover, the predicted results are more consistent with human subjective perception, accurately reflecting human visual perception of image quality.
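
    The "normalized luminance coefficients" referenced above are commonly computed as mean-subtracted contrast-normalized (MSCN) coefficients, whose distribution in natural images is well modeled by a generalized Gaussian (the recipe behind BRISQUE-style NR-IQA). A sketch of the MSCN step, assuming a Gaussian local window; the saliency maps and SVR stage are omitted:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mscn_coefficients(gray: np.ndarray, sigma: float = 7/6, c: float = 1.0):
        """Mean-subtracted contrast-normalized (MSCN) luminance coefficients."""
        gray = gray.astype(float)
        mu = gaussian_filter(gray, sigma)                    # local mean
        var = gaussian_filter(gray * gray, sigma) - mu * mu  # local variance
        std = np.sqrt(np.maximum(var, 0.0))
        return (gray - mu) / (std + c)

    # Statistics of the MSCN values (e.g. their variance, or fitted generalized
    # Gaussian parameters), pooled per region, would form the SVR feature vector.
    img = np.random.rand(64, 64) * 255   # stand-in for a real luminance image
    mscn = mscn_coefficients(img)
    print(mscn.mean(), mscn.var())
    ```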

  12. Application of Sigma Metrics Analysis for the Assessment and Modification of Quality Control Program in the Clinical Chemistry Laboratory of a Tertiary Care Hospital.

    Science.gov (United States)

    Iqbal, Sahar; Mustansar, Tazeen

    2017-03-01

    Sigma is a metric that quantifies the performance of a process as a rate of defects per million opportunities. In clinical laboratories, sigma metric analysis is used to assess the performance of the laboratory process system. The sigma metric is also used as a quality management strategy for a laboratory process, improving quality by addressing errors after identification. The aim of this study is to evaluate the errors in quality control of the analytical phase of the laboratory system by sigma metrics. For this purpose, sigma metric analysis was done for analytes using internal and external quality control as quality indicators, and the results were used to identify gaps and the need for modification in the laboratory's quality control strategy. The sigma metric was calculated for the quality control program of ten clinical chemistry analytes, including glucose, chloride, cholesterol, triglyceride, HDL, albumin, direct bilirubin, total bilirubin, protein and creatinine, at two control levels. To calculate the sigma metric, imprecision and bias were calculated from internal and external quality control data, respectively. The minimum acceptable performance was considered to be 3 sigma. Westgard sigma rules were applied to customize the quality control procedure. The sigma level was found acceptable (≥3) for glucose (L2), cholesterol, triglyceride, HDL, direct bilirubin and creatinine at both levels of control. For the rest of the analytes, the sigma metric was found <3 at one or both control levels; the highest values observed were 8.8 and 8.0 (at L2 and L3, respectively). We conclude that analytes with a sigma value <3 require strict monitoring and modification of the quality control procedure. In this study, the application of sigma rules provided a practical solution for an improved and focused design of the QC procedure.

  13. A management-oriented framework for selecting metrics used to assess habitat- and path-specific quality in spatially structured populations

    Science.gov (United States)

    Nicol, Sam; Wiederholt, Ruscena; Diffendorfer, James E.; Mattsson, Brady; Thogmartin, Wayne E.; Semmens, Darius J.; Laura Lopez-Hoffman,; Norris, Ryan

    2016-01-01

    Mobile species with complex spatial dynamics can be difficult to manage because their population distributions vary across space and time, and because the consequences of managing particular habitats are uncertain when evaluated at the level of the entire population. Metrics to assess the importance of habitats and pathways connecting habitats in a network are necessary to guide a variety of management decisions. Given the many metrics developed for spatially structured models, it can be challenging to select the most appropriate one for a particular decision. To guide the management of spatially structured populations, we define three classes of metrics describing habitat and pathway quality based on their data requirements (graph-based, occupancy-based, and demographic-based metrics) and synopsize the ecological literature relating to these classes. Applying the first steps of a formal decision-making approach (problem framing, objectives, and management actions), we assess the utility of metrics for particular types of management decisions. Our framework can help managers with problem framing, with choosing metrics of habitat and pathway quality, and with elucidating the data needs of a particular metric. Our goal is to help managers narrow the range of suitable metrics for a management project and to aid decision-making so as to make the best use of limited resources.

  14. Landscape pattern metrics and regional assessment

    Science.gov (United States)

    O'Neill, R. V.; Riitters, K.H.; Wickham, J.D.; Jones, K.B.

    1999-01-01

    The combination of remote imagery data, geographic information systems software, and landscape ecology theory provides a unique basis for monitoring and assessing large-scale ecological systems. The unique feature of the work has been the need to develop and interpret quantitative measures of spatial pattern: the landscape indices. This article reviews what is known about the statistical properties of these pattern metrics and suggests some additional metrics based on island biogeography, percolation theory, hierarchy theory, and economic geography. Assessment applications of this approach have required interpreting the pattern metrics in terms of specific environmental endpoints, such as wildlife and water quality, and research into how to represent synergistic effects of many overlapping sources of stress.

  15. Survival As a Quality Metric of Cancer Care: Use of the National Cancer Data Base to Assess Hospital Performance.

    Science.gov (United States)

    Shulman, Lawrence N; Palis, Bryan E; McCabe, Ryan; Mallin, Kathy; Loomis, Ashley; Winchester, David; McKellar, Daniel

    2018-01-01

    Survival is considered an important indicator of the quality of cancer care, but the validity of different methodologies to measure comparative survival rates is less well understood. We explored whether the National Cancer Data Base (NCDB) could serve as a source of unadjusted and risk-adjusted cancer survival data and whether these data could be used as quality indicators for individual hospitals or in the aggregate by hospital type. The NCDB, an aggregate of > 1,500 hospital cancer registries, was queried to analyze unadjusted and risk-adjusted hazards of death for patients with stage III breast cancer (n = 116,787) and stage IIIB or IV non-small-cell lung cancer (n = 252,392). Data were analyzed at the individual hospital level and by hospital type. At the hospital level, after risk adjustment, few hospitals had comparative risk-adjusted survival rates that were statistically better or worse. By hospital type, National Cancer Institute-designated comprehensive cancer centers had risk-adjusted survival ratios that were statistically significantly better than those of academic cancer centers and community hospitals. Using the NCDB as the data source, survival rates for patients with stage III breast cancer and stage IIIB or IV non-small-cell lung cancer were statistically better at National Cancer Institute-designated comprehensive cancer centers when compared with other hospital types. Compared with academic hospitals, risk-adjusted survival was lower in community hospitals. At the individual hospital level, after risk adjustment, few hospitals were shown to have statistically better or worse survival, suggesting that, using NCDB data, survival may not be a good metric to determine relative quality of cancer care at this level.

  16. Relevance of motion-related assessment metrics in laparoscopic surgery.

    Science.gov (United States)

    Oropesa, Ignacio; Chmarra, Magdalena K; Sánchez-González, Patricia; Lamata, Pablo; Rodrigues, Sharon P; Enciso, Silvia; Sánchez-Margallo, Francisco M; Jansen, Frank-Willem; Dankelman, Jenny; Gómez, Enrique J

    2013-06-01

    Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation processes of basic psychomotor laparoscopic skills and their correlation with the different abilities sought to measure. A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, 3 novel tasks for surgical assessment were designed. Face and construct validation was performed, with focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents, and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. Time, path length, and depth showed construct validity for all 3 tasks. Motion smoothness and idle time also showed validity for tasks involving bimanual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed a high internal consistency, proving them to be the most task-independent of all the metrics analyzed. Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight on the relevance of the results shown in this study.
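
    The motion metrics named above (time, path length, average speed, motion smoothness) are typically derived from time-stamped 3-D instrument-tip positions. The exact TrEndo definitions may differ, so the following is a generic sketch on simulated tracking data, with smoothness based on jerk (the third derivative of position):

    ```python
    import numpy as np

    def motion_metrics(pos: np.ndarray, t: np.ndarray) -> dict:
        """Basic motion-analysis metrics from tracked 3-D tip positions."""
        seg_len = np.linalg.norm(np.diff(pos, axis=0), axis=1)  # step lengths
        speed = seg_len / np.diff(t)
        accel = np.gradient(np.gradient(pos, t, axis=0), t, axis=0)
        jerk = np.gradient(accel, t, axis=0)                    # 3rd derivative
        return {
            "time": t[-1] - t[0],
            "path_length": seg_len.sum(),
            "average_speed": speed.mean(),
            "mean_squared_jerk": (np.linalg.norm(jerk, axis=1) ** 2).mean(),
        }

    t = np.linspace(0, 10, 500)                 # simulated 50 Hz track
    pos = np.c_[np.sin(t), np.cos(t), 0.1 * t]  # simulated tip trajectory
    print(motion_metrics(pos, t))
    ```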

  17. Metrics design for safety assessment

    NARCIS (Netherlands)

    Luo, Yaping; van den Brand, M.G.J.

    2016-01-01

    Context: In the safety domain, safety assessment is used to show that safety-critical systems meet the required safety objectives. This process is also referred to as safety assurance and certification. During this procedure, safety standards are used as development guidelines to keep the risk at an

  18. Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling

    Science.gov (United States)

    Mog, Robert A.

    1997-01-01

    Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.

  19. Performance evaluation of objective quality metrics for HDR image compression

    Science.gov (United States)

    Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic

    2014-09-01

    Due to the much larger luminance and contrast characteristics of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists in computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim at providing a better comprehension of the limits and the potentialities of this approach, by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
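
    A sketch of the "fidelity metric on perceptually encoded luminance" idea, using PSNR with a simple logarithmic encoding as a stand-in for a true perceptually uniform (PU) encoding, which is a fitted curve; all data below are simulated:

    ```python
    import numpy as np

    def pu_like_encode(lum: np.ndarray) -> np.ndarray:
        """Log encoding as a crude stand-in for perceptual-uniform encoding."""
        return np.log10(np.clip(lum, 1e-4, None))

    def psnr(ref: np.ndarray, test: np.ndarray) -> float:
        mse = np.mean((ref - test) ** 2)
        peak = ref.max() - ref.min()               # dynamic range of encoded ref
        return 10 * np.log10(peak ** 2 / mse)

    ref = np.random.rand(256, 256) * 1e4           # stand-in HDR luminance (cd/m^2)
    test = ref + np.random.randn(256, 256) * 50    # simulated compression error
    print(f"{psnr(pu_like_encode(ref), pu_like_encode(test)):.1f} dB")
    ```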

  20. Decision Analysis for Metric Selection on a Clinical Quality Scorecard.

    Science.gov (United States)

    Guth, Rebecca M; Storey, Patricia E; Vitale, Michael; Markan-Aurora, Sumita; Gordon, Randolph; Prevost, Traci Q; Dunagan, Wm Claiborne; Woeltje, Keith F

    2016-09-01

    Clinical quality scorecards are used by health care institutions to monitor clinical performance and drive quality improvement. Because of the rapid proliferation of quality metrics in health care, BJC HealthCare found it increasingly difficult to select the most impactful scorecard metrics while still monitoring metrics for regulatory purposes. A 7-step measure selection process was implemented incorporating Kepner-Tregoe Decision Analysis, which is a systematic process that considers key criteria that must be satisfied in order to make the best decision. The decision analysis process evaluates what metrics will most appropriately fulfill these criteria, as well as identifies potential risks associated with a particular metric in order to identify threats to its implementation. Using this process, a list of 750 potential metrics was narrowed to 25 that were selected for scorecard inclusion. This decision analysis process created a more transparent, reproducible approach for selecting quality metrics for clinical quality scorecards. © The Author(s) 2015.
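
    Kepner-Tregoe decision analysis ultimately reduces candidate metrics to weighted scores against agreed criteria. A minimal sketch; the criteria, weights, and candidate metrics below are invented for illustration:

    ```python
    # Hypothetical criteria (with weights) and candidate scorecard metrics.
    criteria = {"regulatory_need": 5, "actionability": 4, "data_availability": 3}

    candidates = {
        "CLABSI rate":      {"regulatory_need": 9, "actionability": 7, "data_availability": 8},
        "Door-to-doc time": {"regulatory_need": 4, "actionability": 9, "data_availability": 6},
        "Readmission rate": {"regulatory_need": 8, "actionability": 5, "data_availability": 9},
    }

    def weighted_score(scores: dict) -> int:
        """Sum of criterion weight times the candidate's score on that criterion."""
        return sum(w * scores[name] for name, w in criteria.items())

    # Rank candidates by weighted score, highest first.
    for metric, scores in sorted(candidates.items(), key=lambda kv: -weighted_score(kv[1])):
        print(f"{metric:18s} {weighted_score(scores)}")
    ```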

  1. A Single Conjunction Risk Assessment Metric: the F-Value

    Science.gov (United States)

    Frigm, Ryan Clayton; Newman, Lauri K.

    2009-01-01

    The Conjunction Assessment Team at NASA Goddard Space Flight Center provides conjunction risk assessment for many NASA robotic missions. These risk assessments are based on several figures of merit, such as miss distance, probability of collision, and orbit determination solution quality. However, these individual metrics do not singly capture the overall risk associated with a conjunction, making it difficult for someone without this complete understanding to take action, such as an avoidance maneuver. The goal of this analysis is to introduce a single risk index metric that can easily convey the level of risk without all of the technical details. The proposed index is called the conjunction "F-value." This paper presents the concept of the F-value and the tuning of the metric for use in routine Conjunction Assessment operations.

  2. Towards Video Quality Metrics Based on Colour Fractal Geometry

    Directory of Open Access Journals (Sweden)

    Richard Noël

    2010-01-01

    Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology and colour. Unfortunately, so far all these elements have been considered independently in the development of image and video quality metrics; we therefore propose an approach that blends them together. Our approach analyses the complexity of colour images in the RGB colour space, based on a probabilistic algorithm for calculating fractal dimension and lacunarity. Given that existing fractal approaches are defined only for gray-scale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the change in fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal and can be used as metrics for user-perceived video quality degradation. We validated them through experimental results obtained for an MPEG-4 video streaming application; finally, the results are compared against those given by widely accepted metrics and against subjective tests.
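
    The paper uses a probabilistic algorithm for the colour fractal dimension; a simplified classical box-counting sketch conveys the idea by treating each pixel as a point in a 5-D spatial-colour space (the image below is random noise, used only as a stand-in):

    ```python
    import numpy as np

    def box_counting_dimension(points: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
        """Box-counting fractal dimension of a point set in [0, 1)^d."""
        counts = []
        for s in sizes:
            # Count occupied boxes at grid resolution s per axis.
            boxes = np.unique(np.floor(points * s).astype(int), axis=0)
            counts.append(len(boxes))
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return slope

    h, w = 64, 64
    ys, xs = np.mgrid[0:h, 0:w]
    rgb = np.random.rand(h, w, 3)                  # stand-in colour frame
    # Each pixel becomes a 5-D point: (y, x, R, G, B), all scaled to [0, 1).
    pts = np.column_stack([ys.ravel() / h, xs.ravel() / w, rgb.reshape(-1, 3)])
    print(f"estimated dimension: {box_counting_dimension(pts):.2f}")
    ```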

  3. A no-reference image and video visual quality metric based on machine learning

    Science.gov (United States)

    Frantc, Vladimir; Voronin, Viacheslav; Semenishchev, Evgenii; Minkin, Maxim; Delov, Aliy

    2018-04-01

    The paper presents a novel visual quality metric for the quality assessment of lossy compressed video. A high degree of correlation with subjective quality estimates is achieved by using a convolutional neural network trained on a large number of video sequence-subjective quality score pairs. We demonstrate how our predicted no-reference quality metric correlates with qualitative opinion in a human observer study. Results are shown on the EVVQ dataset with comparison to existing approaches.

  4. Quality metric for spherical panoramic video

    Science.gov (United States)

    Zakharchenko, Vladyslav; Choi, Kwang Pyo; Park, Jeong Hoon

    2016-09-01

    Virtual reality (VR) and augmented reality (AR) applications allow users to view artificial content of a surrounding space, simulating a presence effect with the help of special applications or devices. Synthetic content production is a well-known process in the computer graphics domain, and its pipeline is already established in the industry. However, emerging multimedia formats for immersive entertainment applications, such as free-viewpoint television (FTV) and spherical panoramic video, require different approaches to content management and quality assessment. International standardization of FTV has been promoted by MPEG. This paper discusses an immersive media distribution format and a quality estimation process. The accuracy and reliability of the proposed objective quality estimation method were verified on spherical panoramic images, demonstrating good correlation with subjective quality estimates from a group of experts.
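
    The abstract does not spell out its objective method; a widely used sphere-aware measure for equirectangular panoramic content is WS-PSNR, which weights each image row by the spherical area it represents. A sketch of that approach (not necessarily the authors' exact method):

    ```python
    import numpy as np

    def ws_psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
        """Latitude-weighted PSNR for equirectangular frames (H x W, grayscale)."""
        h, w = ref.shape
        rows = np.arange(h)
        # Sphere-area weight per row: cos((i + 0.5 - H/2) * pi / H)
        weights = np.cos((rows + 0.5 - h / 2) * np.pi / h)
        wmap = np.tile(weights[:, None], (1, w))
        wmse = np.sum(wmap * (ref.astype(float) - test) ** 2) / wmap.sum()
        return 10 * np.log10(max_val ** 2 / wmse)

    ref = np.random.randint(0, 256, (180, 360)).astype(float)   # stand-in frame
    test = np.clip(ref + np.random.randn(180, 360) * 5, 0, 255) # simulated distortion
    print(f"WS-PSNR = {ws_psnr(ref, test):.1f} dB")
    ```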

  5. Development of quality metrics for ambulatory pediatric cardiology: Infection prevention.

    Science.gov (United States)

    Johnson, Jonathan N; Barrett, Cindy S; Franklin, Wayne H; Graham, Eric M; Halnon, Nancy J; Hattendorf, Brandy A; Krawczeski, Catherine D; McGovern, James J; O'Connor, Matthew J; Schultz, Amy H; Vinocur, Jeffrey M; Chowdhury, Devyani; Anderson, Jeffrey B

    2017-12-01

    In 2012, the American College of Cardiology's (ACC) Adult Congenital and Pediatric Cardiology Council established a program to develop quality metrics to guide ambulatory practices for pediatric cardiology. The council chose five areas on which to focus its efforts: chest pain, Kawasaki disease, tetralogy of Fallot, transposition of the great arteries after arterial switch, and infection prevention. Here, we describe the process, evaluation, and results of the Infection Prevention Committee's metric design effort. The infection prevention metrics team consisted of 12 members from 11 institutions in North America. The group agreed to work on specific infection prevention topics including antibiotic prophylaxis for endocarditis, rheumatic fever, and asplenia/hyposplenism; influenza vaccination and respiratory syncytial virus (RSV) prophylaxis (palivizumab); preoperative methods to reduce intraoperative infections; vaccinations after cardiopulmonary bypass; hand hygiene; and testing to identify splenic function in patients with heterotaxy. An extensive literature review was performed. When available, previously published guidelines were used fully in determining metrics. The committee chose eight metrics to submit to the ACC Quality Metric Expert Panel for review. Ultimately, metrics regarding hand hygiene and influenza vaccination recommendations for patients did not pass the RAND analysis. Both endocarditis prophylaxis metrics and the RSV/palivizumab metric passed the RAND analysis but fell out during the open comment period. Three metrics passed all analyses: antibiotic prophylaxis in patients with heterotaxy/asplenia, influenza vaccination compliance in healthcare personnel, and adherence to recommended regimens of secondary prevention of rheumatic fever. The lack of convincing data to guide quality improvement initiatives in pediatric cardiology is widespread, particularly in infection prevention. Despite this, three metrics were

  6. Experiences with Software Quality Metrics in the EMI middleware

    OpenAIRE

    Alandes, M; Kenny, E M; Meneses, D; Pucciani, G

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristi...

  7. Degraded visual environment image/video quality metrics

    Science.gov (United States)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.

  8. Evaluating which plan quality metrics are appropriate for use in lung SBRT.

    Science.gov (United States)

    Yaparpalvi, Ravindra; Garg, Madhur K; Shen, Jin; Bodner, William R; Mynampati, Dinesh K; Gafar, Aleiya; Kuo, Hsiang-Chi; Basavatia, Amar K; Ohri, Nitin; Hong, Linda X; Kalnicki, Shalom; Tome, Wolfgang A

    2018-02-01

    Several dose metrics in the categories of homogeneity, coverage, conformity and gradient have been proposed in the literature for evaluating treatment plan quality. In this study, we applied these metrics to characterize and identify the plan quality metrics that would merit plan quality assessment in lung stereotactic body radiation therapy (SBRT) dose distributions. Treatment plans of 90 lung SBRT patients, comprising 91 targets, treated in our institution were retrospectively reviewed. Dose calculations were performed using the anisotropic analytical algorithm (AAA) with heterogeneity correction. A literature review of published plan quality metrics in the categories of coverage, homogeneity, conformity and gradient was performed. For each patient, plan quality metric values were quantified and analysed using dose-volume histogram data. For the study, the radiation therapy oncology group (RTOG)-defined plan quality metrics were: coverage (0.90 ± 0.08); homogeneity (1.27 ± 0.07); conformity (1.03 ± 0.07) and gradient (4.40 ± 0.80). Geometric conformity strongly correlated with conformity index (p < 0.001). Based on our analysis, we suggest the following lung SBRT plan quality guidelines: coverage % (ICRU 62), conformity (CN or CIPaddick) and gradient (R50%). Furthermore, we strongly recommend that RTOG lung SBRT protocols adopt either CN or CIPaddick in place of the prescription isodose to target volume ratio for conformity index evaluation. Advances in knowledge: Our study metrics are valuable tools for establishing lung SBRT plan quality guidelines.
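
    The indices named above have standard definitions: RTOG conformity index CI = PIV/TV, Paddick conformation number CN = TV_PIV^2 / (TV x PIV), and R50% = (volume of the 50% isodose) / PTV. A sketch with hypothetical volumes in cc:

    ```python
    def rtog_ci(piv: float, tv: float) -> float:
        """RTOG conformity index: prescription isodose volume / target volume."""
        return piv / tv

    def paddick_cn(tv_piv: float, tv: float, piv: float) -> float:
        """Paddick conformation number: (TV covered by PIV)^2 / (TV * PIV)."""
        return tv_piv ** 2 / (tv * piv)

    def r50(v50: float, ptv: float) -> float:
        """Dose-gradient metric: 50% isodose volume / PTV volume."""
        return v50 / ptv

    # Hypothetical volumes: PTV 30 cc, prescription isodose 31 cc,
    # overlap of the two 27 cc, 50% isodose 130 cc.
    print(rtog_ci(31, 30), paddick_cn(27, 30, 31), r50(130, 30))
    ```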

  9. Research on quality metrics of wireless adaptive video streaming

    Science.gov (United States)

    Li, Xuefei

    2018-04-01

    With the development of wireless networks and intelligent terminals, video traffic has increased dramatically, and adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, good QoS (Quality of Service) in the wireless network does not always guarantee that all customers have a good experience, so new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate quality metrics for wireless adaptive video streaming. A wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, a PSNR model, an SSIM model and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC and RMSE), which compare subjective and predicted MOS (Mean Opinion Score), are calculated. From these performance metrics, the monotonicity, linearity and accuracy of the quality metrics can be observed.
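
    The three performance metrics map directly onto standard statistics: SROCC (Spearman rank correlation, monotonicity), PLCC (Pearson correlation, linearity), and RMSE (accuracy). A sketch on invented MOS values:

    ```python
    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    subjective = np.array([4.5, 3.1, 2.0, 4.0, 1.5, 3.6])   # subjective MOS
    predicted  = np.array([4.2, 3.4, 2.3, 3.8, 1.9, 3.3])   # model-predicted MOS

    srocc = spearmanr(subjective, predicted)[0]              # monotonicity
    plcc = pearsonr(subjective, predicted)[0]                # linearity
    rmse = np.sqrt(np.mean((subjective - predicted) ** 2))   # accuracy
    print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}  RMSE={rmse:.3f}")
    ```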

  10. [Clinical trial data management and quality metrics system].

    Science.gov (United States)

    Chen, Zhao-hua; Huang, Qin; Deng, Ya-zhong; Zhang, Yue; Xu, Yu; Yu, Hao; Liu, Zong-fan

    2015-11-01

    A data quality management system is essential to ensure accurate, complete, consistent, and reliable data collection in clinical research. This paper is devoted to various choices of data quality metrics. They are categorized by study status, e.g. study start-up, conduct, and close-out. In each category, metrics for different purposes are listed according to ALCOA+ principles such as completeness, accuracy, timeliness, traceability, etc. Some frequently used general quality metrics are also introduced. The paper gives as much detail as possible for each metric, providing its definition, purpose, evaluation, referenced benchmark, and recommended targets in favor of real practice. It is important that sponsors and data management service providers establish a robust integrated clinical trial data quality management system to ensure sustainable high quality of clinical trial deliverables. It will also support enterprise-level data evaluation and benchmarking of data quality across projects, sponsors, and data management service providers by using objective metrics from real clinical trials. We hope this will be a significant input to accelerate the improvement of clinical trial data quality in the industry.

  11. A universal color image quality metric

    NARCIS (Netherlands)

    Toet, A.; Lucassen, M.P.

    2003-01-01

    We extend a recently introduced universal grayscale image quality index to a newly developed perceptually decorrelated color space. The resulting color image quality index quantifies the distortion of a processed color image relative to its original version. We evaluated the new color image quality
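
    The underlying grayscale index here is presumably the Wang-Bovik universal image quality index, Q = 4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y)) * (mean(x)^2+mean(y)^2)), normally averaged over sliding windows. A global-version sketch:

    ```python
    import numpy as np

    def uiqi(x: np.ndarray, y: np.ndarray) -> float:
        """Wang-Bovik universal image quality index, computed globally for
        brevity (the original averages the index over sliding windows)."""
        x, y = x.astype(float).ravel(), y.astype(float).ravel()
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

    ref = np.random.rand(64, 64)                    # stand-in reference image
    dist = ref + np.random.randn(64, 64) * 0.05     # simulated distortion
    print(f"Q = {uiqi(ref, dist):.3f}")             # 1.0 only for identical images
    ```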

  12. Experiences with Software Quality Metrics in the EMI middleware

    International Nuclear Information System (INIS)

    Alandes, M; Meneses, D; Pucciani, G; Kenny, E M

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project to extract “code metrics” on the status of the software products and “process metrics” related to the quality of the development and support process such as reaction time to critical bugs, requirements tracking and delays in product releases.

  13. Experiences with Software Quality Metrics in the EMI Middleware

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project t...

  14. Development of soil quality metrics using mycorrhizal fungi

    Energy Technology Data Exchange (ETDEWEB)

    Baar, J.

    2010-07-01

    Following the 1992 Rio de Janeiro Convention on Biological Diversity, aimed at maintaining and increasing biodiversity, several countries have started programmes monitoring soil quality and above- and below-ground biodiversity. Within the European Union, policy makers are working on legislation for soil protection and management. Therefore, indicators are needed to monitor the status of soils, and these indicators, reflecting soil quality, can be integrated in working standards or soil quality metrics. Soil micro-organisms, particularly arbuscular mycorrhizal fungi (AMF), are indicative of soil changes. These soil fungi live in symbiosis with the great majority of plants and are sensitive to changes in the physico-chemical conditions of the soil. The aim of this study was to investigate whether AMF are reliable and sensitive indicators of disturbances in soils and can be used for the development of soil quality metrics. It was also studied whether soil quality metrics based on AMF meet the applicability requirements of users and policy makers. Ecological criteria were set for the development of soil quality metrics for different soils. Multiple root samples containing AMF from various locations in The Netherlands were analyzed, and the results were related to the defined criteria. This resulted in two soil quality metrics, one for sandy soils and a second for clay soils, with six categories ranging from very bad to very good. These soil quality metrics meet the majority of the applicability requirements and are potentially useful for the development of legislation for the protection of soil quality. (Author) 23 refs.

  15. Software metrics to improve software quality in HEP

    International Nuclear Information System (INIS)

    Lancon, E.

    1996-01-01

    The maintainability of the ALEPH reconstruction program has been evaluated with a CASE tool implementing an ISO-standard methodology based on software metrics. It was found that the overall quality of the program is good and has improved over the past five years. Frequently modified routines exhibit lower quality, and most bugs were located in routines with particularly low quality. Applying quality criteria from the beginning could have avoided time lost to bug corrections. (author)

  16. Metrics for Objective Assessment of Surgical Skills Workshop

    National Research Council Canada - National Science Library

    Satava, Richard

    2001-01-01

    On 9-10 July, 2001 the Metrics for Objective Assessment of Surgical Skills Workshop convened an international assemblage of subject matter experts in objective assessment of surgical technical skills...

  17. Pragmatic quality metrics for evolutionary software development models

    Science.gov (United States)

    Royce, Walker

    1990-01-01

    Due to the large number of product, project, and people parameters which impact large custom software development efforts, measurement of software product quality is a complex undertaking. Furthermore, the absolute perspective from which quality is measured (customer satisfaction) is intangible. While we probably can't say what the absolute quality of a software product is, we can determine the relative quality, the adequacy of this quality with respect to pragmatic considerations, and identify good and bad trends during development. While no two software engineers will ever agree on an optimum definition of software quality, they will agree that the most important perspective of software quality is its ease of change. We can call this flexibility, adaptability, or some other vague term, but the critical characteristic of software is that it is soft. The easier the product is to modify, the easier it is to achieve any other software quality perspective. This paper presents objective quality metrics derived from consistent lifecycle perspectives of rework which, when used in concert with an evolutionary development approach, can provide useful insight to produce better quality per unit cost/schedule or to achieve adequate quality more efficiently. The usefulness of these metrics is evaluated by applying them to a large, real world, Ada project.

  18. Development of Quality Metrics in Ambulatory Pediatric Cardiology.

    Science.gov (United States)

    Chowdhury, Devyani; Gurvitz, Michelle; Marelli, Ariane; Anderson, Jeffrey; Baker-Smith, Carissa; Diab, Karim A; Edwards, Thomas C; Hougen, Tom; Jedeikin, Roy; Johnson, Jonathan N; Karpawich, Peter; Lai, Wyman; Lu, Jimmy C; Mitchell, Stephanie; Newburger, Jane W; Penny, Daniel J; Portman, Michael A; Satou, Gary; Teitel, David; Villafane, Juan; Williams, Roberta; Jenkins, Kathy

    2017-02-07

    The American College of Cardiology Adult Congenital and Pediatric Cardiology (ACPC) Section had attempted to create quality metrics (QM) for ambulatory pediatric practice, but limited evidence made the process difficult. The ACPC sought to develop QMs for ambulatory pediatric cardiology practice. Five areas of interest were identified, and QMs were developed in a 2-step review process. In the first step, an expert panel, using the modified RAND-UCLA methodology, rated each QM for feasibility and validity. The second step sought input from ACPC Section members; final approval was by a vote of the ACPC Council. Work groups proposed a total of 44 QMs. Thirty-one metrics passed the RAND process and, after the open comment period, the ACPC council approved 18 metrics. The project resulted in successful development of QMs in ambulatory pediatric cardiology for a range of ambulatory domains. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  19. Performance evaluation of no-reference image quality metrics for face biometric images

    Science.gov (United States)

    Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick

    2018-03-01

    The accuracy of face recognition systems is significantly affected by the quality of face sample images. Recently established standardization efforts propose several important aspects for the assessment of face sample quality. Many existing no-reference image quality metrics (IQMs) are able to assess natural image quality by taking into account image-based quality attributes similar to those introduced in the standardization. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality in accordance with system performance. We also analyze the strengths and weaknesses of the different IQMs, as well as why some of them fail to assess face sample quality. Retraining an original IQM on a face database can improve its performance. The contribution of this paper can also be used for the evaluation of IQMs on other biometric modalities and for the development of multimodal biometric IQMs.

  20. Neurosurgical virtual reality simulation metrics to assess psychomotor skills during brain tumor resection.

    Science.gov (United States)

    Azarnoush, Hamed; Alzhrani, Gmaan; Winkler-Schwartz, Alexander; Alotaibi, Fahad; Gelinas-Phaneuf, Nicholas; Pazos, Valérie; Choudhury, Nusrat; Fares, Jawad; DiRaddo, Robert; Del Maestro, Rolando F

    2015-05-01

    Virtual reality simulator technology together with novel metrics could advance our understanding of expert neurosurgical performance and modify and improve resident training and assessment. This pilot study introduces innovative metrics that can be measured by the state-of-the-art simulator to assess performance. Such metrics cannot be measured in an operating room and have not been used previously to assess performance. Three sets of performance metrics were assessed utilizing the NeuroTouch platform in six scenarios with simulated brain tumors having different visual and tactile characteristics. Tier 1 metrics included percentage of brain tumor resected and volume of simulated "normal" brain tissue removed. Tier 2 metrics included instrument tip path length, time taken to resect the brain tumor, pedal activation frequency, and sum of applied forces. Tier 3 metrics included sum of forces applied to different tumor regions and the force bandwidth derived from the force histogram. The results outlined are from a novice resident in the second year of training and an expert neurosurgeon. The three tiers of metrics obtained from the NeuroTouch simulator do encompass the wide variability of technical performance observed during novice/expert resections of simulated brain tumors and can be employed to quantify the safety, quality, and efficiency of technical performance during simulated brain tumor resection. Tier 3 metrics derived from force pyramids and force histograms may be particularly useful in assessing simulated brain tumor resections. Our pilot study demonstrates that the safety, quality, and efficiency of novice and expert operators can be measured using metrics derived from the NeuroTouch platform, helping to understand how specific operator performance is dependent on both psychomotor ability and cognitive input during multiple virtual reality brain tumor resections.

  1. PQSM-based RR and NR video quality metrics

    Science.gov (United States)

    Lu, Zhongkang; Lin, Weisi; Ong, Eeping; Yang, Xiaokang; Yao, Susu

    2003-06-01

    This paper presents a new and general concept, the PQSM (Perceptual Quality Significance Map), for use in measuring visual distortion. It exploits the selectivity of the HVS (Human Visual System), which pays more attention to certain areas/regions of a visual signal due to one or more of the following factors: salient features in the image/video, cues from domain knowledge, and associated media (e.g., speech or audio). A PQSM is an array whose elements represent the relative perceptual-quality significance levels of the corresponding areas/regions of an image or video. Due to its generality, a PQSM can be incorporated into any visual distortion metric: to improve the effectiveness or efficiency of perceptual metrics, or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show the scheme can improve the performance of current image/video distortion metrics.

  2. Studying the added value of visual attention in objective image quality metrics based on eye movement data

    NARCIS (Netherlands)

    Liu, H.; Heynderickx, I.E.J.

    2009-01-01

    Current research on image quality assessment tends to include visual attention in objective metrics to further enhance their performance. A variety of computational models of visual attention are implemented in different metrics, but their accuracy in representing human visual attention is not fully

  3. SU-E-T-222: How to Define and Manage Quality Metrics in Radiation Oncology.

    Science.gov (United States)

    Harrison, A; Cooper, K; DeGregorio, N; Doyle, L; Yu, Y

    2012-06-01

    Since the 2001 IOM report Crossing the Quality Chasm: A New Health System for the 21st Century, the need to provide quality metrics in health care has increased. Quality metrics have yet to be defined for the field of radiation oncology. This study represents one institute's initial efforts at defining and measuring quality metrics, using our electronic medical record and verify system (EMR) as the primary data collection tool. The effort began by selecting meaningful quality metrics rooted in the IOM definition of quality (safe, timely, efficient, effective, equitable and patient-centered care) that were also measurable targets given current data input and workflow. Elekta MOSAIQ 2.30.04D1 was used to generate reports on the number of Special Physics Consults (SPC) charged, as a surrogate for treatment complexity; daily patient time in the department (DTP), as a measure of efficiency and timeliness; and time from CT simulation to first LINAC appointment (STL). The number of IMRT QAs delivered in the department was also analyzed to assess complexity. Although initial MOSAIQ reports were easily generated, the data needed to be assessed and adjusted for outliers. Patients with delays outside of radiation oncology, such as chemotherapy or surgery, were excluded from the STL data. We found an average STL of six days for all CT-simulated patients and an average DTP of 52 minutes total time, with 23 minutes in the LINAC vault. Annually, 7.3% of all patients require additional physics support, as indicated by SPC. Utilizing our EMR, an entire year's worth of useful data characterizing our clinical experience was analyzed in less than one day. Having baseline quality metrics is necessary to improve patient care. Future plans include dissecting these data into more specific categories such as IMRT DTP, workflow timing following CT simulation, beam-on hours, chart review outcomes, and dosimetric quality indicators. © 2012 American Association of Physicists in Medicine.

  4. Assessment of the Log-Euclidean Metric Performance in Diffusion Tensor Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mostafa Charmi

    2010-06-01

    Introduction: Appropriate definition of the distance measure between diffusion tensors has a deep impact on Diffusion Tensor Image (DTI) segmentation results. The geodesic metric is the best distance measure since it yields high-quality segmentation results. However, the important problem with the geodesic metric is the high computational cost of the algorithms based on it. The main goal of this paper is to assess the possible substitution of the geodesic metric with the Log-Euclidean one to reduce the computational cost of a statistical surface evolution algorithm. Materials and Methods: We incorporated the Log-Euclidean metric in the statistical surface evolution algorithm framework. To achieve this goal, the statistics and gradients of diffusion tensor images were defined using the Log-Euclidean metric. Numerical implementation of the segmentation algorithm was performed in the MATLAB software using finite difference techniques. Results: In the statistical surface evolution framework, the Log-Euclidean metric was able to discriminate the torus and helix patterns in synthetic datasets and rat spinal cords in biological phantom datasets from the background better than the Euclidean and J-divergence metrics. In addition, similar results were obtained with the geodesic metric. However, the main advantage of the Log-Euclidean metric over the geodesic metric was the dramatic reduction of the computational cost of the segmentation algorithm, by a factor of at least 70. Discussion and Conclusion: The qualitative and quantitative results have shown that the Log-Euclidean metric is a good substitute for the geodesic metric when using a statistical surface evolution algorithm in DTI segmentation.
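
    The Log-Euclidean distance itself is straightforward to compute; a minimal sketch (toy tensors, not the paper's MATLAB implementation):

    ```python
    import numpy as np
    from scipy.linalg import logm

    def log_euclidean_distance(t1, t2):
        """||log(T1) - log(T2)||_F for symmetric positive-definite tensors.

        Unlike the geodesic (affine-invariant) metric, the matrix logarithms
        can be precomputed once per voxel, which is where the reported
        speed-up of segmentation algorithms comes from."""
        return np.linalg.norm(logm(t1) - logm(t2), ord="fro")

    t1 = np.diag([1.7e-3, 0.3e-3, 0.3e-3])   # strongly anisotropic tensor
    t2 = np.diag([0.9e-3, 0.8e-3, 0.7e-3])   # nearly isotropic tensor
    print(log_euclidean_distance(t1, t2))
    ```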

  5. A Validation of Object-Oriented Design Metrics as Quality Indicators

    Science.gov (United States)

    Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio

    1997-01-01

    This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described previously, where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development process.
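
    For readers unfamiliar with the Chidamber and Kemerer suite, a toy sketch of two of its metrics is shown below; the study itself analyzed C++ systems with static analysis, whereas this illustration uses Python reflection and sets every method's complexity to 1 in WMC (a common simplification):

    ```python
    import inspect

    def weighted_methods_per_class(cls):
        """WMC with unit method complexity, i.e., a simple method count."""
        return len(inspect.getmembers(cls, predicate=inspect.isfunction))

    def depth_of_inheritance_tree(cls):
        """DIT: longest inheritance path from the class up to the root."""
        if cls is object:
            return 0
        return 1 + max(depth_of_inheritance_tree(b) for b in cls.__bases__)

    class Shape:
        def area(self): ...
    class Circle(Shape):
        def radius(self): ...

    print(weighted_methods_per_class(Circle))   # 2 (area, radius)
    print(depth_of_inheritance_tree(Circle))    # 2 (Circle -> Shape -> object)
    ```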

  6. Quality Assessment in Oncology

    International Nuclear Information System (INIS)

    Albert, Jeffrey M.; Das, Prajnan

    2012-01-01

    The movement to improve healthcare quality has led to a need for carefully designed quality indicators that accurately reflect the quality of care. Many different measures have been proposed and continue to be developed by governmental agencies and accrediting bodies. However, given the inherent differences in the delivery of care among medical specialties, the same indicators will not be valid across all of them. Specifically, oncology is a field in which it can be difficult to develop quality indicators, because the effectiveness of an oncologic intervention is often not immediately apparent, and the multidisciplinary nature of the field necessarily involves many different specialties. Existing and emerging comparative effectiveness data are helping to guide evidence-based practice, and the increasing availability of these data provides the opportunity to identify key structure and process measures that predict for quality outcomes. The increasing emphasis on quality and efficiency will continue to compel the medical profession to identify appropriate quality measures to facilitate quality improvement efforts and to guide accreditation, credentialing, and reimbursement. Given the wide-reaching implications of quality metrics, it is essential that they be developed and implemented with scientific rigor. The aims of the present report were to review the current state of quality assessment in oncology, identify existing indicators with the best evidence to support their implementation, and propose a framework for identifying and refining measures most indicative of true quality in oncologic care.

  7. Quality Assessment in Oncology

    Energy Technology Data Exchange (ETDEWEB)

    Albert, Jeffrey M. [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Das, Prajnan, E-mail: prajdas@mdanderson.org [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)

    2012-07-01

    The movement to improve healthcare quality has led to a need for carefully designed quality indicators that accurately reflect the quality of care. Many different measures have been proposed and continue to be developed by governmental agencies and accrediting bodies. However, given the inherent differences in the delivery of care among medical specialties, the same indicators will not be valid across all of them. Specifically, oncology is a field in which it can be difficult to develop quality indicators, because the effectiveness of an oncologic intervention is often not immediately apparent, and the multidisciplinary nature of the field necessarily involves many different specialties. Existing and emerging comparative effectiveness data are helping to guide evidence-based practice, and the increasing availability of these data provides the opportunity to identify key structure and process measures that predict for quality outcomes. The increasing emphasis on quality and efficiency will continue to compel the medical profession to identify appropriate quality measures to facilitate quality improvement efforts and to guide accreditation, credentialing, and reimbursement. Given the wide-reaching implications of quality metrics, it is essential that they be developed and implemented with scientific rigor. The aims of the present report were to review the current state of quality assessment in oncology, identify existing indicators with the best evidence to support their implementation, and propose a framework for identifying and refining measures most indicative of true quality in oncologic care.

  8. Energy-Based Metrics for Arthroscopic Skills Assessment.

    Science.gov (United States)

    Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa

    2017-08-05

    Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
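
    A sketch of the evaluation loop, using placeholder feature arrays; scikit-learn's LeaveOneGroupOut reproduces a leave-one-subject-out protocol when the group labels are subject IDs (data shapes and names here are assumptions):

    ```python
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(130, 6))            # 26 subjects x 5 trials x 6 normalized energy metrics
    y = rng.integers(0, 2, size=130)         # 0 = novice, 1 = expert (placeholder labels)
    subjects = np.repeat(np.arange(26), 5)   # subject ID per trial

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
    print(f"leave-one-subject-out accuracy: {scores.mean():.2f}")
    ```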

  9. Beyond metrics? Utilizing 'soft intelligence' for healthcare quality and safety.

    Science.gov (United States)

    Martin, Graham P; McKee, Lorna; Dixon-Woods, Mary

    2015-10-01

    Formal metrics for monitoring the quality and safety of healthcare have a valuable role, but may not, by themselves, yield full insight into the range of fallibilities in organizations. 'Soft intelligence' is usefully understood as the processes and behaviours associated with seeking and interpreting soft data-of the kind that evade easy capture, straightforward classification and simple quantification-to produce forms of knowledge that can provide the basis for intervention. With the aim of examining current and potential practice in relation to soft intelligence, we conducted and analysed 107 in-depth qualitative interviews with senior leaders, including managers and clinicians, involved in healthcare quality and safety in the English National Health Service. We found that participants were in little doubt about the value of softer forms of data, especially for their role in revealing troubling issues that might be obscured by conventional metrics. Their struggles lay in how to access softer data and turn them into a useful form of knowing. Some of the dominant approaches they used risked replicating the limitations of hard, quantitative data. They relied on processes of aggregation and triangulation that prioritised reliability, or on instrumental use of soft data to animate the metrics. The unpredictable, untameable, spontaneous quality of soft data could be lost in efforts to systematize their collection and interpretation to render them more tractable. A more challenging but potentially rewarding approach involved processes and behaviours aimed at disrupting taken-for-granted assumptions about quality, safety, and organizational performance. This approach, which explicitly values the seeking out and the hearing of multiple voices, is consistent with conceptual frameworks of organizational sensemaking and dialogical understandings of knowledge. Using soft intelligence this way can be challenging and discomfiting, but may offer a critical defence against the

  10. Model assessment using a multi-metric ranking technique

    Science.gov (United States)

    Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.

    2017-12-01

    Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and the identification of adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as root mean square difference and rank correlation, were also explored but removed when their information was found to be generally duplicative of the other metrics. While equal weights are applied, the weights could be altered to favor preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts, instead of distance, along-track, and cross-track errors, is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but were found useful in an independent context and will be briefly reported.
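
    A minimal sketch of such a weighted rank tally (the metric list and weights here are placeholders, not the study's exact configuration):

    ```python
    import numpy as np

    def consolidated_ranks(scores, lower_is_better, weights=None):
        """Rank models on each metric, then combine ranks via a weighted tally.

        scores: (n_models, n_metrics) array of metric values.
        lower_is_better: one boolean per metric (True for error-type metrics).
        weights: per-metric weights; equal weighting by default, as in the abstract.
        """
        n_models, n_metrics = scores.shape
        weights = np.ones(n_metrics) if weights is None else np.asarray(weights, float)
        ranks = np.empty_like(scores, dtype=float)
        for j in range(n_metrics):
            key = scores[:, j] if lower_is_better[j] else -scores[:, j]
            ranks[:, j] = key.argsort().argsort()   # 0 = best on this metric
        return ranks @ weights                       # lower total = better model

    # Three models scored on MAE (lower better) and correlation (higher better).
    scores = np.array([[1.2, 0.80], [0.9, 0.85], [1.5, 0.60]])
    print(consolidated_ranks(scores, lower_is_better=[True, False]))  # [2. 0. 4.]
    ```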

  11. Landscape metrics application in ecological and visual landscape assessment

    Directory of Open Access Journals (Sweden)

    Gavrilović Suzana

    2017-01-01

    The development and application of the landscape-ecological approach in spatial planning provides exact theoretical and empirical evidence for monitoring the ecological consequences of natural and/or anthropogenic factors, particularly the changes in spatial structures caused by them. The landscape pattern, which carries diverse landscape values, is the bearer of a unique landscape character at different spatial levels and represents a perceptual domain for its users. In landscape metrics, the parameters of landscape composition and configuration are mathematical algorithms that quantify specific spatial characteristics used to interpret landscape features and processes (the physical and ecological aspect), as well as the forms (the visual aspect) and meaning (the cognitive aspect) of the landscape. Landscape metrics have been applied mostly in ecological and biodiversity assessments, as well as in determining the level of structural change of the landscape, but they are increasingly applied in the assessment of the visual character of the landscape. Based on a review of the relevant literature, the aim of this work is to show the main trends of landscape metrics within the aspects of ecological and visual assessment. The research methodology is based on the analysis, classification and systematization of research studies published from 2000 to 2016 in which landscape metrics were applied to: (1) the analysis of landscape pattern and its changes, (2) the analysis of biodiversity and habitat function, and (3) visual landscape assessment. By selecting representative metric parameters of landscape composition and configuration for each category, a basis is formed for further research on landscape metrics and their application to the integrated ecological and visual assessment of landscape values. The contemporary conceptualization of the landscape is holistic, and future research should be directed towards the development of integrated landscape assessment.
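
    Of the metrics mentioned, the Shannon Diversity Index is easy to state concretely; a minimal sketch from class-area proportions:

    ```python
    import numpy as np

    def shannon_diversity_index(class_areas):
        """SHDI = -sum(p_i * ln p_i), where p_i is the proportion of the
        landscape occupied by cover class i. SHDI is 0 for a one-class
        landscape and grows with class richness and evenness."""
        a = np.asarray(class_areas, dtype=float)
        p = a[a > 0] / a.sum()
        return float(-(p * np.log(p)).sum())

    # Landscape with four cover classes (areas in hectares).
    print(shannon_diversity_index([120.0, 60.0, 15.0, 5.0]))  # ~0.95
    ```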

  12. Framework for Information Age Assessment Metrics

    National Research Council Canada - National Science Library

    Augustine, Thomas H; Broyles, James W

    2004-01-01

    .... In a spiral development process, design and assessment are adjacent phases, and both profit from this powerful intellectual construct. Both the strengths and weaknesses of a particular technology can be the stimulus for a successful integration that provides the capabilities that will satisfy the needs and goals of the users.

  13. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed...

  14. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2012-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed...

  15. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)*

    Science.gov (United States)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H.; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W.; Deutsch, Eric W.; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L.; Omenn, Gilbert S.; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L.; Simpson, Richard J.; Slotta, Douglas; Smith, Richard D.; Stein, Stephen E.; Tabb, David L.; Tagle, Danilo; Yates, John R.; Rodriguez, Henry

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the “International Workshop on Proteomic Data Quality Metrics” in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: 1) an evolving list of comprehensive quality metrics and 2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals. PMID:22052993

  16. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    Science.gov (United States)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H.; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W.; Deutsch, Eric W.; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L.; Omenn, Gilbert S.; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L.; Simpson, Richard J.; Slotta, Douglas; Smith, Richard D.; Stein, Stephen E.; Tabb, David L.; Tagle, Danilo; Yates, John R.; Rodriguez, Henry

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the U.S. National Cancer Institute (NCI) convened the “International Workshop on Proteomic Data Quality Metrics” in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: (1) an evolving list of comprehensive quality metrics and (2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals. PMID:22053864

  17. Attention modeling for video quality assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2010-01-01

    The proposed model combines a global quality, computed by averaged spatiotemporal pooling, with a local quality derived from visual attention modeling and quality variations over frames. Saliency, motion, and contrast information are taken into account in modeling visual attention, which is then integrated into IQMs to calculate the local quality of a video frame; the overall quality is obtained as an average between the global quality and the local quality. Experimental results demonstrate that the combination of the global quality and local quality outperforms both the sole global quality and the sole local quality, as well as other quality models, in video quality assessment. In addition, the proposed video quality modeling algorithm can improve the performance of image quality metrics on video quality assessment compared to the normal averaged spatiotemporal pooling scheme.

  18. Proxy Graph: Visual Quality Metrics of Big Graph Sampling.

    Science.gov (United States)

    Nguyen, Quan Hoang; Hong, Seok-Hee; Eades, Peter; Meidiana, Amyra

    2017-06-01

    Data sampling has been extensively studied for large scale graph mining. Many analyses and tasks become more efficient when performed on graph samples of much smaller size. The use of proxy objects is common in software engineering for analysis and interaction with heavy objects or systems. In this paper, we coin the term 'proxy graph' and empirically investigate how well a proxy graph visualization can represent a big graph. Our investigation focuses on proxy graphs obtained by sampling; this is one of the most common proxy approaches. Despite the plethora of data sampling studies, this is the first evaluation of sampling in the context of graph visualization. For an objective evaluation, we propose a new family of quality metrics for visual quality of proxy graphs. Our experiments cover popular sampling techniques. Our experimental results lead to guidelines for using sampling-based proxy graphs in visualization.
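
    The paper's metrics target visual quality specifically; as a simpler statistical analogue of proxy-graph fidelity, the sketch below samples nodes and compares degree distributions (the sampling scheme and the KS-based check are assumptions, not the authors' metrics):

    ```python
    import networkx as nx
    import numpy as np
    from scipy.stats import ks_2samp

    def random_node_sample(g, fraction, seed=0):
        rng = np.random.default_rng(seed)
        keep = rng.choice(list(g.nodes), size=int(fraction * g.number_of_nodes()),
                          replace=False)
        return g.subgraph(keep).copy()

    g = nx.barabasi_albert_graph(2000, 3, seed=1)
    proxy = random_node_sample(g, fraction=0.2)

    # 0 means the proxy preserves the original degree distribution exactly.
    stat, _ = ks_2samp([d for _, d in g.degree()], [d for _, d in proxy.degree()])
    print(f"degree-distribution KS statistic: {stat:.3f}")
    ```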

  19. Duration of Postoperative Mechanical Ventilation as a Quality Metric for Pediatric Cardiac Surgical Programs.

    Science.gov (United States)

    Gaies, Michael; Werho, David K; Zhang, Wenying; Donohue, Janet E; Tabbutt, Sarah; Ghanayem, Nancy S; Scheurer, Mark A; Costello, John M; Gaynor, J William; Pasquali, Sara K; Dimick, Justin B; Banerjee, Mousumi; Schwartz, Steven M

    2018-02-01

    Few metrics exist to assess quality of care at pediatric cardiac surgical programs, limiting opportunities for benchmarking and quality improvement. Postoperative duration of mechanical ventilation (POMV) may be an important quality metric because of its association with complications and resource utilization. In this study we modelled case-mix-adjusted POMV duration and explored hospital performance across POMV metrics. This study used the Pediatric Cardiac Critical Care Consortium clinical registry to analyze 4,739 hospitalizations from 15 hospitals (October 2013 to August 2015). All patients admitted to pediatric cardiac intensive care units after an index cardiac operation were included. We fitted a model to predict duration of POMV accounting for patient characteristics. Robust estimates of SEs were obtained using bootstrap resampling. We created performance metrics based on observed-to-expected (O/E) POMV to compare hospitals. Overall, 3,108 patients (65.6%) received POMV; the remainder were extubated intraoperatively. Our model was well calibrated across groups; neonatal age had the largest effect on predicted POMV. These comparisons suggested clinically and statistically important variation in POMV duration across centers with a threefold difference observed in O/E ratios (0.6 to 1.7). We identified 1 hospital with better-than-expected and 3 hospitals with worse-than-expected performance (p < 0.05) based on the O/E ratio. We developed a novel case-mix-adjusted model to predict POMV duration after congenital heart operations. We report variation across hospitals on metrics of O/E duration of POMV that may be suitable for benchmarking quality of care. Identifying high-performing centers and practices that safely limit the duration of POMV could stimulate quality improvement efforts. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
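
    The O/E construction reduces to a simple ratio once the risk model has produced an expected duration per patient; a minimal sketch with made-up numbers:

    ```python
    import pandas as pd

    # Hypothetical patient-level data: observed ventilation hours and the
    # case-mix-adjusted expectation from the risk model.
    df = pd.DataFrame({
        "hospital":       ["A", "A", "B", "B", "C", "C"],
        "observed_hours": [30.0, 12.0, 80.0, 40.0, 10.0, 6.0],
        "expected_hours": [25.0, 15.0, 50.0, 30.0, 18.0, 12.0],
    })

    # O/E < 1 suggests better-than-expected performance, > 1 worse, subject
    # to the statistical uncertainty the study handled via bootstrapping.
    grouped = df.groupby("hospital")
    oe = grouped["observed_hours"].sum() / grouped["expected_hours"].sum()
    print(oe)
    ```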

  20. An entropy generation metric for non-energy systems assessments

    International Nuclear Information System (INIS)

    Sekulic, Dusan P.

    2009-01-01

    Processes in non-energy systems have not been as frequent a subject of sustainability studies based on thermodynamics as have processes in energy systems. This paper offers insight into thermodynamic thinking devoted to the selection of a sustainability, energy-related metric based on entropy balancing of a non-energy system. An underlying objective in this sustainability-oriented study is product quality in thermal processing during manufacturing vs. resource utilization (say, energy). The product quality for the considered family of materials processing for manufacturing is postulated as inherently controlled by the imposed temperature non-uniformity margins. These temperature non-uniformities can be converted into a thermodynamic metric which can be related either to the destruction of exergy of the available resource or, on a more fundamental level of process quality, to the entropy generation inherent to the considered manufacturing system. Hence, a manufacturing system can be considered as if it were an energy system, although in the latter case the system objective would be quite different. In a non-energy process, a metric may indicate the level of perfection of the process (not necessarily energy efficiency) and may be related to the sustainability footprint or, as advocated in this paper, to product quality. Controlled atmosphere brazing (CAB) of aluminum, a state-of-the-art manufacturing process involving mass production of compact heat exchangers for the automotive, aerospace and process industries, is used as an example.
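
    For the simplest case of heat transfer across a finite temperature difference, the entropy generation rate has a closed form; a toy illustration (numbers are invented, not from the CAB study):

    ```python
    def entropy_generation_heat_transfer(q_watts, t_hot_k, t_cold_k):
        """S_gen = Q * (1/T_cold - 1/T_hot) [W/K] for steady heat transfer Q
        from a hot to a cold region; larger temperature non-uniformity means
        more entropy generated, the penalty this metric tracks."""
        return q_watts * (1.0 / t_cold_k - 1.0 / t_hot_k)

    # 5 kW transferred from a furnace zone at 900 K to a part at 850 K.
    print(entropy_generation_heat_transfer(5000.0, 900.0, 850.0))  # ~0.33 W/K
    ```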

  1. Operator-based metric for nuclear operations automation assessment

    Energy Technology Data Exchange (ETDEWEB)

    Zacharias, G.L.; Miao, A.X.; Kalkan, A. [Charles River Analytics Inc., Cambridge, MA (United States)] [and others]

    1995-04-01

    Continuing advances in real-time computational capabilities will support enhanced levels of smart automation and AI-based decision-aiding systems in the nuclear power plant (NPP) control room of the future. To support development of these aids, we describe in this paper a research tool, and more specifically, a quantitative metric, to assess the impact of proposed automation/aiding concepts in a manner that can account for a number of interlinked factors in the control room environment. In particular, we describe a cognitive operator/plant model that serves as a framework for integrating the operator's information-processing capabilities with his procedural knowledge, to provide insight as to how situations are assessed by the operator, decisions made, procedures executed, and communications conducted. Our focus is on the situation assessment (SA) behavior of the operator, the development of a quantitative metric reflecting overall operator awareness, and the use of this metric in evaluating automation/aiding options. We describe the results of a model-based simulation of a selected emergency scenario, and metric-based evaluation of a range of contemplated NPP control room automation/aiding options. The results demonstrate the feasibility of model-based analysis of contemplated control room enhancements, and highlight the need for empirical validation.

  2. Quality Metrics in Neonatal and Pediatric Critical Care Transport: A National Delphi Project.

    Science.gov (United States)

    Schwartz, Hamilton P; Bigham, Michael T; Schoettker, Pamela J; Meyer, Keith; Trautman, Michael S; Insoft, Robert M

    2015-10-01

    The transport of neonatal and pediatric patients to tertiary care facilities for specialized care demands monitoring the quality of care delivered during transport and its impact on patient outcomes. In 2011, pediatric transport teams in Ohio met to identify quality indicators permitting comparisons among programs. However, no set of national consensus quality metrics exists for benchmarking transport teams. The aim of this project was to achieve national consensus on appropriate neonatal and pediatric transport quality metrics. Modified Delphi technique. The first round of consensus determination was via electronic mail survey, followed by rounds of consensus determination in person at the American Academy of Pediatrics Section on Transport Medicine's 2012 Quality Metrics Summit. All attendees of the American Academy of Pediatrics Section on Transport Medicine Quality Metrics Summit, conducted on October 21-23, 2012, in New Orleans, LA, were eligible to participate. Candidate quality metrics were identified through literature review and those metrics currently tracked by participating programs. Participants were asked in a series of rounds to identify "very important" quality metrics for transport. It was determined a priori that consensus on a metric's importance was achieved when at least 70% of respondents were in agreement. This is consistent with other Delphi studies. Eighty-two candidate metrics were considered initially. Ultimately, 12 metrics achieved consensus as "very important" to transport. These include metrics related to airway management, team mobilization time, patient and crew injuries, and adverse patient care events. Definitions were assigned to the 12 metrics to facilitate uniform data tracking among programs. The authors succeeded in achieving consensus among a diverse group of national transport experts on 12 core neonatal and pediatric transport quality metrics. We propose that transport teams across the country use these metrics to

  3. Applicability of Existing Objective Metrics of Perceptual Quality for Adaptive Video Streaming

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Krasula, Lukás; Shahid, Muhammad

    2016-01-01

    Objective video quality metrics are designed to estimate the quality of experience of the end user. However, these objective metrics are usually validated with video streams degraded under common distortion types. In the presented work, we analyze the performance of published and known full-reference and no-reference quality metrics in estimating the perceived quality of adaptive bit-rate video streams, knowingly out of scope. Experimental results indicate, not surprisingly, that state-of-the-art objective quality metrics overlook the perceived degradations in adaptive video streams and perform poorly...
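
    Performance of an objective metric against subjective scores is conventionally quantified with Pearson (accuracy) and Spearman (monotonicity) correlations; a minimal sketch with placeholder values:

    ```python
    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    # Placeholder per-sequence metric outputs and mean opinion scores (MOS).
    metric_scores = np.array([34.1, 30.5, 28.2, 36.8, 25.0, 31.9])
    mos           = np.array([4.2, 3.6, 2.9, 4.5, 2.1, 3.8])

    plcc, _ = pearsonr(metric_scores, mos)    # prediction accuracy
    srocc, _ = spearmanr(metric_scores, mos)  # prediction monotonicity
    print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
    ```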

  4. Assessment of six dissimilarity metrics for climate analogues

    Science.gov (United States)

    Grenier, Patrick; Parent, Annie-Claude; Huard, David; Anctil, François; Chaumont, Diane

    2013-04-01

    Spatial analogue techniques consist in identifying locations whose recent-past climate is similar in some aspects to the future climate anticipated at a reference location. When identifying analogues, one key step is the quantification of the dissimilarity between two climates separated in time and space, which involves the choice of a metric. In this communication, spatial analogues and their usefulness are briefly discussed. Next, six metrics are presented (the standardized Euclidean distance, the Kolmogorov-Smirnov statistic, the nearest-neighbor distance, the Zech-Aslan energy statistic, the Friedman-Rafsky runs statistic and the Kullback-Leibler divergence), along with a set of criteria used for their assessment. The related case study involves the use of numerical simulations performed with the Canadian Regional Climate Model (CRCM-v4.2.3), from which three annual indicators (total precipitation, heating degree-days and cooling degree-days) are calculated over 30-year periods (1971-2000 and 2041-2070). Results indicate that the six metrics identify comparable analogue regions at a relatively large scale, but best analogues may differ substantially. For best analogues, it is also shown that the uncertainty stemming from the metric choice generally does not exceed that stemming from the simulation or model choice. A synthesis of the advantages and drawbacks of each metric is finally presented, in which the Zech-Aslan energy statistic stands out as the most recommended metric for analogue studies, whereas the Friedman-Rafsky runs statistic is the least recommended, based on this case study.
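
    The first of the six metrics is the easiest to state concretely; a minimal sketch of the standardized Euclidean distance over the three annual indicators (toy values, with interannual standard deviations as the assumed scale factors):

    ```python
    import numpy as np

    def standardized_euclidean(candidate, reference, scale):
        """Dissimilarity between two climates described by K indicators,
        each difference divided by a scale factor such as the interannual
        standard deviation at the reference location."""
        z = (np.asarray(candidate, float) - np.asarray(reference, float)) / np.asarray(scale, float)
        return float(np.sqrt((z ** 2).sum()))

    # Indicators: total precipitation [mm], heating and cooling degree-days.
    future_reference = [950.0, 4200.0, 310.0]   # anticipated future climate
    candidate_past   = [900.0, 4500.0, 280.0]   # recent past at a candidate site
    interannual_sd   = [110.0, 250.0, 60.0]
    print(standardized_euclidean(candidate_past, future_reference, interannual_sd))
    ```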

  5. Quality Evaluation in Wireless Imaging Using Feature-Based Objective Metrics

    OpenAIRE

    Engelke, Ulrich; Zepernick, Hans-Jürgen

    2007-01-01

    This paper addresses the evaluation of image quality in the context of wireless systems using feature-based objective metrics. The considered metrics comprise a weighted combination of feature values that are used to quantify the extent to which the related artifacts are present in a processed image. In view of imaging applications in mobile radio and wireless communication systems, reduced-reference objective quality metrics are investigated for quantifying user-perceived quality. The exa...

  6. Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.

    Science.gov (United States)

    Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel

    2017-07-28

    New challenges have been brought out along with the emergence of 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), with its applications in remote surveillance, remote education, etc., based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn a wide range of researchers' attention. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. However, existing assessment metrics do not render human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to refine the proposed blind quality metric, improving it by a sizable margin. Experiments validate the superiority of our no-reference quality method as compared with prevailing full-, reduced- and no-reference models.
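
    A much-simplified sketch of the AR idea: fit one global least-squares predictor of each pixel from its 8 neighbors and inspect the residual map (the paper uses local AR models; the global fit here only keeps the example short):

    ```python
    import numpy as np

    def ar_residual_map(img):
        """Absolute error of predicting each interior pixel from its 8
        neighbors with one global least-squares AR model; geometric
        distortions predict poorly and light up in the residual map."""
        img = img.astype(float)
        h, w = img.shape
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]
        A = np.stack([img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].ravel()
                      for dy, dx in offsets], axis=1)
        b = img[1:-1, 1:-1].ravel()
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        return np.abs(A @ coeffs - b).reshape(h - 2, w - 2)

    rng = np.random.default_rng(0)
    toy = rng.normal(size=(64, 64)).cumsum(axis=1)   # smooth-ish toy image
    print(ar_residual_map(toy).mean())
    ```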

  7. SU-E-J-155: Automatic Quantitative Decision Making Metric for 4DCT Image Quality

    International Nuclear Information System (INIS)

    Kiely, J Blanco; Olszanski, A; Both, S; White, B; Low, D

    2015-01-01

    Purpose: To develop a quantitative decision making metric for automatically detecting irregular breathing, using a large patient population that received phase-sorted 4DCT. Methods: This study employed two patient cohorts. Cohort#1 contained 256 patients who received a phase-sorted 4DCT. Cohort#2 contained 86 patients who received three weekly phase-sorted 4DCT scans. A previously published technique used a single abdominal surrogate to calculate the ratio of extreme inhalation tidal volume to normal inhalation tidal volume, referred to as the κ metric. Since a single surrogate is standard for phase-sorted 4DCT in radiation oncology clinical practice, tidal volume was not quantified. Without tidal volume, the absolute κ metric could not be determined, so a relative κ (κrel) metric was defined based on the measured surrogate amplitude instead of tidal volume. Receiver operating characteristic (ROC) curves were used to quantitatively determine the optimal cutoff value (jk) and efficiency cutoff value (τk) of κrel to automatically identify irregular breathing that would reduce the image quality of phase-sorted 4DCT. The discriminatory accuracy (area under the ROC curve) of κrel was calculated by a trapezoidal numeric integration technique. Results: The discriminatory accuracy of κrel was found to be 0.746. The key values of jk and τk were calculated to be 1.45 and 1.72, respectively. For values of κrel such that jk ≤ κrel ≤ τk, the decision to reacquire the 4DCT would be at the discretion of the physician. This accounted for only 11.9% of the patients in this study. The magnitude of κrel held consistent over 3 weeks for 73% of the patients in cohort#2. Conclusion: The decision making metric, κrel, was shown to be an accurate classifier of irregular breathing patients in a large patient population. This work provided an automatic quantitative decision making metric to quickly and accurately assess the extent to which irregular breathing is occurring during phase

  8. SU-E-J-155: Automatic Quantitative Decision Making Metric for 4DCT Image Quality

    Energy Technology Data Exchange (ETDEWEB)

    Kiely, J Blanco; Olszanski, A; Both, S; White, B [University of Pennsylvania, Philadelphia, PA (United States); Low, D [Deparment of Radiation Oncology, University of California Los Angeles, Los Angeles, CA (United States)

    2015-06-15

    Purpose: To develop a quantitative decision making metric for automatically detecting irregular breathing, using a large patient population that received phase-sorted 4DCT. Methods: This study employed two patient cohorts. Cohort#1 contained 256 patients who received a phase-sorted 4DCT. Cohort#2 contained 86 patients who received three weekly phase-sorted 4DCT scans. A previously published technique used a single abdominal surrogate to calculate the ratio of extreme inhalation tidal volume to normal inhalation tidal volume, referred to as the κ metric. Since a single surrogate is standard for phase-sorted 4DCT in radiation oncology clinical practice, tidal volume was not quantified. Without tidal volume, the absolute κ metric could not be determined, so a relative κ (κrel) metric was defined based on the measured surrogate amplitude instead of tidal volume. Receiver operating characteristic (ROC) curves were used to quantitatively determine the optimal cutoff value (jk) and efficiency cutoff value (τk) of κrel to automatically identify irregular breathing that would reduce the image quality of phase-sorted 4DCT. The discriminatory accuracy (area under the ROC curve) of κrel was calculated by a trapezoidal numeric integration technique. Results: The discriminatory accuracy of κrel was found to be 0.746. The key values of jk and τk were calculated to be 1.45 and 1.72, respectively. For values of κrel such that jk ≤ κrel ≤ τk, the decision to reacquire the 4DCT would be at the discretion of the physician. This accounted for only 11.9% of the patients in this study. The magnitude of κrel held consistent over 3 weeks for 73% of the patients in cohort#2. Conclusion: The decision making metric, κrel, was shown to be an accurate classifier of irregular breathing patients in a large patient population. This work provided an automatic quantitative decision making metric to quickly and accurately assess the extent to which irregular breathing is occurring during phase
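
    The ROC machinery behind such cutoffs is standard; a sketch with invented κrel values (the paper's jk/τk definitions may differ from the Youden-style choice used here):

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, auc

    # Invented kappa_rel values and labels marking scans whose 4DCT image
    # quality was actually degraded by irregular breathing.
    kappa_rel = np.array([1.1, 1.3, 1.5, 1.6, 1.8, 2.0, 1.2, 1.7, 1.9, 1.4])
    degraded  = np.array([0,   0,   0,   1,   1,   1,   0,   1,   1,   0])

    fpr, tpr, thresholds = roc_curve(degraded, kappa_rel)
    print("discriminatory accuracy (AUC):", auc(fpr, tpr))

    j = tpr - fpr   # Youden's J = sensitivity + specificity - 1
    print("cutoff maximizing J:", thresholds[np.argmax(j)])
    ```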

  9. A Novel Scoring Metrics for Quality Assurance of Ocean Color Observations

    Science.gov (United States)

    Wei, J.; Lee, Z.

    2016-02-01

    Interpretation of ocean bio-optical properties from ocean color observations depends on the quality of the ocean color data, specifically the spectrum of remote sensing reflectance (Rrs). The in situ and remotely measured Rrs spectra are inevitably subject to errors induced by instrument calibration, sea-surface correction and atmospheric correction, and other environmental factors. Great effort has been devoted to ocean color calibration and validation, yet there exist no objective, consensus criteria for assessing ocean color data quality. In this study, the gap is filled by developing a novel metric for such data quality assurance and quality control (QA/QC). This new QA metric is not intended to discard "suspicious" Rrs spectra from available datasets. Rather, it takes into account the Rrs spectral shapes and amplitudes as a whole and grades each Rrs spectrum. The scoring system is developed based on a large ensemble of in situ hyperspectral remote sensing reflectance data measured in various aquatic environments and processed with robust procedures. The system is further tested with the NASA bio-Optical Marine Algorithm Data set (NOMAD), with results indicating significant improvements in the estimation of bio-optical properties when Rrs spectra marked with higher quality assurance are used. The scoring system is further verified with simulated data and satellite ocean color data in various regions, and we envision higher-quality ocean color products with the implementation of such a quality screening system.

  10. Urban Landscape Metrics for Climate and Sustainability Assessments

    Science.gov (United States)

    Cochran, F. V.; Brunsell, N. A.

    2014-12-01

    To test metrics for rapid identification of urban classes and sustainable urban forms, we examine the configuration of urban landscapes using satellite remote sensing data. We adopt principles from landscape ecology and urban planning to evaluate urban heterogeneity and design themes that may constitute more sustainable urban forms, including compactness (connectivity), density, mixed land uses, diversity, and greening. Using 2-D wavelet and multi-resolution analysis, landscape metrics, and satellite-derived indices of vegetation fraction and impervious surface, the spatial variability of Landsat and MODIS data from the metropolitan areas of Manaus and São Paulo, Brazil is investigated. Landscape metrics for density, connectivity, and diversity, such as the Shannon Diversity Index, are used to assess the diversity of urban buildings, geographic extent, and connectedness. Rapid detection of urban classes for low-density, medium-density, high-density, and tall-building districts at the 1-km scale is needed for use in climate models. If the complexity of finer-scale urban characteristics can be related to the neighborhood scale, both climate and sustainability assessments may become more attainable across urban areas.

  11. Assessment and improvement of radiation oncology trainee contouring ability utilizing consensus-based penalty metrics

    International Nuclear Information System (INIS)

    Hallock, Abhirami; Read, Nancy; D'Souza, David

    2012-01-01

    The objective of this study was to develop and assess the feasibility of utilizing consensus-based penalty metrics for the purpose of critical structure and organ at risk (OAR) contouring quality assurance and improvement. A Delphi study was conducted to obtain consensus on contouring penalty metrics to assess trainee-generated OAR contours. Voxel-based penalty metric equations were used to score regions of discordance between trainee and expert contour sets. The utility of these penalty metric scores for objective feedback on contouring quality was assessed using cases prepared for weekly radiation oncology trainee treatment planning rounds. In two Delphi rounds, six radiation oncology specialists reached agreement on clinical importance/impact and organ radiosensitivity as the two primary criteria for the creation of the Critical Structure Inter-comparison of Segmentation (CriSIS) penalty functions. Linear/quadratic penalty scoring functions (for over- and under-contouring) with one of four levels of severity (none, low, moderate and high) were assigned for each of 20 OARs in order to generate a CriSIS score when new OAR contours are compared with reference/expert standards. Six cases (central nervous system, head and neck, gastrointestinal, genitourinary, gynaecological and thoracic) were then used to validate 18 OAR metrics through comparison of trainee and expert contour sets using the consensus-derived CriSIS functions. For 14 OARs, there was an improvement in CriSIS score post-educational intervention. The use of consensus-based contouring penalty metrics to provide quantitative information for contouring improvement is feasible.
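
    A generic sketch of a voxel-based over/under-contouring penalty (not the published CriSIS functions, whose severity levels and coefficients are organ-specific):

    ```python
    import numpy as np

    def contour_penalty(trainee_mask, expert_mask, severity=1.0, quadratic=False):
        """Score discordant voxels between trainee and expert contours,
        counting over- and under-contoured voxels separately."""
        over = trainee_mask & ~expert_mask     # contoured, but not in reference
        under = ~trainee_mask & expert_mask    # reference voxels missed
        n_over, n_under = int(over.sum()), int(under.sum())
        if quadratic:
            return severity * (n_over ** 2 + n_under ** 2)
        return severity * (n_over + n_under)

    # Toy 2-D slices standing in for 3-D masks.
    expert = np.zeros((10, 10), bool);  expert[3:7, 3:7] = True
    trainee = np.zeros((10, 10), bool); trainee[4:8, 3:7] = True
    print(contour_penalty(trainee, expert))   # 8 discordant voxels
    ```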

  12. Software metrics: The key to quality software on the NCC project

    Science.gov (United States)

    Burns, Patricia J.

    1993-01-01

    Network Control Center (NCC) Project metrics are captured during the implementation and testing phases of the NCCDS software development lifecycle. The metrics data collection and reporting function has interfaces with all elements of the NCC project. Close collaboration with all project elements has resulted in the development of a defined and repeatable set of metrics processes. The resulting data are used to plan and monitor release activities on a weekly basis. The use of graphical outputs facilitates the interpretation of progress and status. The successful application of metrics throughout the NCC project has been instrumental in the delivery of quality software. The use of metrics on the NCC Project supports the needs of the technical and managerial staff. This paper describes the project, the functions supported by metrics, the data that are collected and reported, how the data are used, and the improvements in the quality of deliverable software since the metrics processes and products have been in use.

  13. Development of quality metrics for ambulatory pediatric cardiology: Chest pain.

    Science.gov (United States)

    Lu, Jimmy C; Bansal, Manish; Behera, Sarina K; Boris, Jeffrey R; Cardis, Brian; Hokanson, John S; Kakavand, Bahram; Jedeikin, Roy

    2017-12-01

    As part of the American College of Cardiology Adult Congenital and Pediatric Cardiology Section effort to develop quality metrics (QMs) for ambulatory pediatric practice, the chest pain subcommittee aimed to develop QMs for evaluation of chest pain. A group of 8 pediatric cardiologists formulated candidate QMs in the areas of history, physical examination, and testing. Consensus candidate QMs were submitted to an expert panel for scoring by the RAND-UCLA modified Delphi process. Recommended QMs were then available for open comments from all members. These QMs are intended for use in patients 5-18 years old, referred for initial evaluation of chest pain in an ambulatory pediatric cardiology clinic, with no known history of pediatric or congenital heart disease. A total of 10 candidate QMs were submitted; 2 were rejected by the expert panel, and 5 were removed after the open comment period. The 3 approved QMs included: (1) documentation of family history of cardiomyopathy, early coronary artery disease or sudden death, (2) performance of electrocardiogram in all patients, and (3) performance of an echocardiogram to evaluate coronary arteries in patients with exertional chest pain. Despite practice variation and limited prospective data, 3 QMs were approved, with measurable data points which may be extracted from the medical record. However, further prospective studies are necessary to define practice guidelines and to develop appropriate use criteria in this population. © 2017 Wiley Periodicals, Inc.

  14. Constructing a no-reference H.264/AVC bitstream-based video quality metric using genetic programming-based symbolic regression

    OpenAIRE

    Staelens, Nicolas; Deschrijver, Dirk; Vladislavleva, E; Vermeulen, Brecht; Dhaene, Tom; Demeester, Piet

    2013-01-01

    In order to ensure optimal quality of experience toward end users during video streaming, automatic video quality assessment becomes an important field-of-interest to video service providers. Objective video quality metrics try to estimate perceived quality with high accuracy and in an automated manner. In traditional approaches, these metrics model the complex properties of the human visual system. More recently, however, it has been shown that machine learning approaches can also yield comp...
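
    For readers wanting to experiment with the general technique, gplearn provides an off-the-shelf genetic-programming symbolic regressor (this is not the authors' toolchain, and the features below are invented stand-ins for bitstream parameters):

    ```python
    import numpy as np
    from gplearn.genetic import SymbolicRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(200, 3))   # e.g., QP, motion, % intra blocks
    y = 4.5 - 2.0 * X[:, 0] + 0.8 * X[:, 1] * X[:, 2] + rng.normal(0, 0.05, 200)

    est = SymbolicRegressor(population_size=500, generations=20,
                            function_set=("add", "sub", "mul", "div"),
                            parsimony_coefficient=0.01, random_state=0)
    est.fit(X, y)
    print(est._program)   # evolved closed-form quality model
    ```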

  15. A guide to calculating habitat-quality metrics to inform conservation of highly mobile species

    Science.gov (United States)

    Bieri, Joanna A.; Sample, Christine; Thogmartin, Wayne E.; Diffendorfer, James E.; Earl, Julia E.; Erickson, Richard A.; Federico, Paula; Flockhart, D. T. Tyler; Nicol, Sam; Semmens, Darius J.; Skraber, T.; Wiederholt, Ruscena; Mattsson, Brady J.

    2018-01-01

    Many metrics exist for quantifying the relative value of habitats and pathways used by highly mobile species. Properly selecting and applying such metrics requires substantial background in mathematics and understanding of the relevant management arena. To address this multidimensional challenge, we demonstrate and compare three measurements of habitat quality: graph-, occupancy-, and demographic-based metrics. Each metric provides insights into system dynamics, at the expense of increasing amounts and complexity of data and models. Our descriptions and comparisons of diverse habitat-quality metrics provide means for practitioners to overcome the modeling challenges associated with management or conservation of such highly mobile species. Whereas previous guidance for applying habitat-quality metrics has been scattered in diversified tracks of literature, we have brought this information together into an approachable format, including accessible descriptions and a modeling case study for a typical example that conservation professionals can adapt for their own decision contexts and focal populations. Considerations for Resource Managers: Management objectives, proposed actions, data availability and quality, and model assumptions are all relevant considerations when applying and interpreting habitat-quality metrics. Graph-based metrics answer questions related to habitat centrality and connectivity, are suitable for populations with any movement pattern, quantify basic spatial and temporal patterns of occupancy and movement, and require the least data. Occupancy-based metrics answer questions about likelihood of persistence or colonization, are suitable for populations that undergo localized extinctions, quantify spatial and temporal patterns of occupancy and movement, and require a moderate amount of data. Demographic-based metrics answer questions about relative or absolute population size, are suitable for populations with any movement pattern, quantify demographic
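
    As a minimal sketch of the graph-based family only (node names and weights invented), betweenness centrality flags patches that many likely movement routes pass through:

    ```python
    import networkx as nx

    # Nodes are habitat patches; edge weights act as traversal costs
    # (lower = easier movement between patches).
    habitat = nx.Graph()
    habitat.add_weighted_edges_from([
        ("breeding_N", "stopover_1", 0.8),
        ("breeding_S", "stopover_1", 0.6),
        ("stopover_1", "stopover_2", 0.9),
        ("stopover_2", "wintering",  0.7),
        ("breeding_S", "stopover_2", 0.3),
    ])

    centrality = nx.betweenness_centrality(habitat, weight="weight")
    for patch, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
        print(f"{patch:12s} {score:.2f}")
    ```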

  16. Metrics-based assessments of research: incentives for 'institutional plagiarism'?

    Science.gov (United States)

    Berry, Colin

    2013-06-01

    The issue of plagiarism (claiming credit for work that is not one's own) rightly continues to cause concern in the academic community. An analysis is presented that shows the effects that may arise from metrics-based assessments of research, when credit for an author's outputs (chiefly publications) is given to an institution that did not support the research but which subsequently employs the author. The incentives for what is termed here "institutional plagiarism" are demonstrated with reference to the UK Research Assessment Exercise, in which submitting units of assessment are shown in some instances to derive around twice the credit for papers produced elsewhere by new recruits, compared to papers produced 'in-house'.

  17. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done against the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.

  18. Performance metrics for the assessment of satellite data products: an ocean color case study

    Science.gov (United States)

    Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage, and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coeffic...

  19. Effective dose efficiency: an application-specific metric of quality and dose for digital radiography

    Energy Technology Data Exchange (ETDEWEB)

    Samei, Ehsan; Ranger, Nicole T; Dobbins, James T III; Ravin, Carl E, E-mail: samei@duke.edu [Carl E Ravin Advanced Imaging Laboratories, Department of Radiology (United States)

    2011-08-21

    The detective quantum efficiency (DQE) and the effective DQE (eDQE) are relevant metrics of image quality for digital radiography detectors and systems, respectively. The current study further extends the eDQE methodology to technique optimization using a new metric, the effective dose efficiency (eDE), reflecting both the image quality and the effective dose (ED) attributes of the imaging system. Using phantoms representing pediatric, adult and large adult body habitus, image quality measurements were made at 80, 100, 120 and 140 kVp using the standard eDQE protocol and exposures. ED was computed using Monte Carlo methods. The eDE was then computed as the ratio of image quality to ED for each of the phantom/spectral conditions. The eDQE and eDE results showed the same trends across tube potential, with 80 kVp yielding the highest values and 120 kVp the lowest. The eDE results for the pediatric phantom were markedly lower than those for the adult phantom at spatial frequencies below 1.2-1.7 mm⁻¹, primarily due to a correspondingly higher value of ED per entrance exposure. The relative performance for the adult and large adult phantoms was generally comparable but affected by kVp. The eDE results for the large adult configuration were lower than those for the adult phantom across all spatial frequencies (120 and 140 kVp) and at spatial frequencies greater than 1.0 mm⁻¹ (80 and 100 kVp). Demonstrated for chest radiography, the eDE shows promise as an application-specific metric of imaging performance, reflective of body habitus and radiographic technique, with utility for radiography protocol assessment and optimization.
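
    As a worked illustration of the eDE defined above (an image-quality metric divided by effective dose, per spatial frequency), the following sketch computes eDE curves for two body habitus. The eDQE values and effective doses are invented placeholders, not measurements from the study.

```python
import numpy as np

spatial_freq = np.linspace(0.5, 3.0, 6)                    # mm^-1
edqe_pediatric = np.array([0.30, 0.26, 0.22, 0.18, 0.14, 0.10])
edqe_adult     = np.array([0.28, 0.25, 0.21, 0.17, 0.13, 0.09])
ed_pediatric = 0.08     # effective dose per exposure (mSv), placeholder
ed_adult     = 0.05

# eDE(f) = image quality / effective dose for each phantom condition.
ede_pediatric = edqe_pediatric / ed_pediatric
ede_adult     = edqe_adult / ed_adult

for f, p, a in zip(spatial_freq, ede_pediatric, ede_adult):
    print(f"f={f:.1f} mm^-1  eDE(pediatric)={p:.2f}  eDE(adult)={a:.2f}")
```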

  20. Supporting visual quality assessment with machine learning

    NARCIS (Netherlands)

    Gastaldo, P.; Zunino, R.; Redi, J.

    2013-01-01

    Objective metrics for visual quality assessment often base their reliability on the explicit modeling of the highly non-linear behavior of human perception; as a result, they may be complex and computationally expensive. Conversely, machine learning (ML) paradigms make it possible to tackle the quality

  1. Extracting Patterns from Educational Traces via Clustering and Associated Quality Metrics

    NARCIS (Netherlands)

    Mihaescu, Marian; Tanasie, Alexandru; Dascalu, Mihai; Trausan-Matu, Stefan

    2016-01-01

    Clustering algorithms, pattern mining techniques and associated quality metrics emerged as reliable methods for modeling learners’ performance, comprehension and interaction in given educational scenarios. The specificity of available data such as missing values, extreme values or outliers,

  2. National Quality Forum Colon Cancer Quality Metric Performance: How Are Hospitals Measuring Up?

    Science.gov (United States)

    Mason, Meredith C; Chang, George J; Petersen, Laura A; Sada, Yvonne H; Tran Cao, Hop S; Chai, Christy; Berger, David H; Massarweh, Nader N

    2017-12-01

    To evaluate the impact of care at high-performing hospitals on the National Quality Forum (NQF) colon cancer metrics. The NQF endorses evaluating ≥12 lymph nodes (LNs), adjuvant chemotherapy (AC) for stage III patients, and AC within 4 months of diagnosis as colon cancer quality indicators. Data on hospital-level metric performance and its association with survival are unclear. Retrospective cohort study of 218,186 patients with resected stage I to III colon cancer in the National Cancer Data Base (2004-2012). High-performing hospitals (>75% achievement) were identified by the proportion of patients achieving each measure. The association between hospital performance and survival was evaluated using Cox shared frailty modeling. Only hospital LN performance improved over time (15.8% in 2004 vs 80.7% in 2012). The adjusted hazard of death decreased as the number of metrics achieved increased [0 metrics, reference; 1, hazard ratio (HR) 0.96 (0.89-1.03); 2, HR 0.92 (0.87-0.98); 3, HR 0.85 (0.80-0.90); 2 vs 1, HR 0.96 (0.91-1.01); 3 vs 1, HR 0.89 (0.84-0.93); 3 vs 2, HR 0.95 (0.89-0.95)]. Performance on metrics in combination was associated with lower risk of death [LN + AC, HR 0.86 (0.78-0.95); AC + timely AC, HR 0.92 (0.87-0.98); LN + AC + timely AC, HR 0.85 (0.80-0.90)], whereas individual measures were not [LN, HR 0.95 (0.88-1.04); AC, HR 0.95 (0.87-1.05)]. Less than half of hospitals perform well on these NQF colon cancer metrics concurrently, and high performance on individual measures is not associated with improved survival. Quality improvement efforts should shift focus from individual measures to defining composite measures that encompass the overall multimodal care pathway and capture successful transitions from one care modality to another.

  3. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    Science.gov (United States)

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
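
    The stand-alone and information-recovery metrics named above are available in standard libraries. A minimal sketch using networkx and scikit-learn on a toy graph (not the paper's benchmark; the 2-way split below is just a candidate partition):

```python
import networkx as nx
from networkx.algorithms.community import modularity
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

# Score one clustering of a small graph with stand-alone metrics
# (modularity, conductance) and with information-recovery metrics
# against known ground-truth labels.
G = nx.karate_club_graph()
clusters = [set(range(0, 17)), set(range(17, 34))]   # candidate 2-way split

print("modularity:", modularity(G, clusters))
print("conductance of cluster 0:", nx.conductance(G, clusters[0]))

# Information-recovery metrics compare per-node labels to ground truth.
predicted = [0 if n < 17 else 1 for n in G.nodes]
truth = [0 if G.nodes[n]["club"] == "Mr. Hi" else 1 for n in G.nodes]
print("adjusted Rand:", adjusted_rand_score(truth, predicted))
print("NMI:", normalized_mutual_info_score(truth, predicted))
```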

  4. Better Metrics to Automatically Predict the Quality of a Text Summary

    Directory of Open Access Journals (Sweden)

    Judith D. Schlesinger

    2012-09-01

    Full Text Available In this paper we demonstrate a family of metrics for estimating the quality of a text summary relative to one or more human-generated summaries. The improved metrics are based on features automatically computed from the summaries to measure content and linguistic quality. The features are combined using one of three methods—robust regression, non-negative least squares, or canonical correlation, an eigenvalue method. The new metrics significantly outperform the previous standard for automatic text summarization evaluation, ROUGE.
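
    Of the three combination methods named, non-negative least squares is the simplest to sketch. The toy example below, with randomly generated features standing in for the automatically computed content and linguistic features, learns non-negative weights mapping feature vectors to human quality scores:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_summaries, n_features = 40, 6
X = rng.random((n_summaries, n_features))         # summary features
y = X @ np.array([0.5, 0.0, 1.2, 0.3, 0.0, 0.8])  # synthetic "human" scores
y += 0.05 * rng.standard_normal(n_summaries)

weights, residual = nnls(X, y)        # weights constrained to be >= 0
predicted_quality = X @ weights
print("learned weights:", np.round(weights, 2))
print("fit residual:", round(residual, 3))
```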

  5. Metrics for Assessment of Smart Grid Data Integrity Attacks

    Energy Technology Data Exchange (ETDEWEB)

    Annarita Giani; Miles McQueen; Russell Bent; Kameshwar Poolla; Mark Hinrichs

    2012-07-01

    There is an emerging consensus that the nation's electricity grid is vulnerable to cyber attacks. This vulnerability arises from the increasing reliance on remote measurements that are transmitted over legacy data networks to system operators, who make critical decisions based on the available data. Data integrity attacks are a class of cyber attacks that involve a compromise of information that is processed by the grid operator. This information can include meter readings of injected power at remote generators, power flows on transmission lines, and relay states. These data integrity attacks have consequences only when the system operator responds to compromised data by redispatching generation under normal or contingency protocols. These consequences include (a) financial losses from sub-optimal economic dispatch to service loads, (b) robustness/resiliency losses from placing the grid at operating points that are at greater risk from contingencies, and (c) systemic losses resulting from cascading failures induced by poor operational choices. This paper is focused on understanding the connections between grid operational procedures and cyber attacks. We first offer two examples to illustrate how data integrity attacks can cause economic and physical damage by misleading operators into taking inappropriate decisions. We then focus on unobservable data integrity attacks involving power meter data. These are coordinated attacks where the compromised data are consistent with the physics of power flow, and are therefore passed by any bad data detection algorithm. We develop metrics to assess the economic impact of these attacks under re-dispatch decisions using optimal power flow methods. These metrics can be used to prioritize the adoption of appropriate countermeasures including PMU placement, encryption, hardware upgrades, and advanced attack detection algorithms.

  6. A Metric Tool for Predicting Source Code Quality from a PDL Design

    OpenAIRE

    Henry, Sallie M.; Selig, Calvin

    1987-01-01

    The software crisis has increased the demand for automated tools to assist software developers in the production of quality software. Quality metrics have given software developers a tool to measure software quality. These measurements, however, are available only after the software has been produced. Due to high cost, software managers are reluctant to redesign and reimplement low quality software. Ideally, a life cycle which allows early measurement of software quality is a necessary ingre...

  7. Using business intelligence to monitor clinical quality metrics.

    Science.gov (United States)

    Resetar, Ervina; Noirot, Laura A; Reichley, Richard M; Storey, Patricia; Skiles, Ann M; Traynor, Patrick; Dunagan, W Claiborne; Bailey, Thomas C

    2007-10-11

    BJC HealthCare (BJC) uses a number of industry standard indicators to monitor the quality of services provided by each of its hospitals. By establishing an enterprise data warehouse as a central repository of clinical quality information, BJC is able to monitor clinical quality performance in a timely manner and improve clinical outcomes.

  8. SU-E-T-776: Use of Quality Metrics for a New Hypo-Fractionated Pre-Surgical Mesothelioma Protocol

    International Nuclear Information System (INIS)

    Richardson, S; Mehta, V

    2015-01-01

    Purpose: The “SMART” (Surgery for Mesothelioma After Radiation Therapy) approach involves hypo-fractionated radiotherapy of the lung pleura to 25 Gy over 5 days, followed by surgical resection within 7 days. Early clinical results suggest that this approach is very promising, but also logistically challenging due to the multidisciplinary involvement. Because of the compressed schedule, high dose, and shortened planning time, the delivery of the planned doses was monitored for safety with quality metric software. Methods: Hypo-fractionated IMRT treatment plans were developed for all patients and exported to Quality Reports™ software. Plan quality metrics, or PQMs™, were created to calculate an objective scoring function for each plan. This allows for an objective assessment of the quality of the plan and a benchmark for plan improvement for subsequent patients. The priorities of various components were incorporated based on similar hypo-fractionated protocols such as lung SBRT treatments. Results: Five patients have been treated at our institution using this approach. The plans were developed, QA performed, and ready within 5 days of simulation. Plan quality metrics utilized in scoring included doses to OAR and target coverage. All patients tolerated treatment well and proceeded to surgery as scheduled. Reported toxicity included grade 1 nausea (n=1), grade 1 esophagitis (n=1), and grade 2 fatigue (n=3). One patient had recurrent fluid accumulation following surgery. No patients experienced any pulmonary toxicity prior to surgery. Conclusion: An accelerated course of pre-operative high-dose radiation for mesothelioma is an innovative and promising new protocol. Without historical data, one must proceed cautiously and monitor the data carefully. The development of quality metrics and scoring functions for these treatments allows us to benchmark our plans and monitor improvement. If subsequent toxicities occur, these will be easy to investigate and incorporate into the

  10. Defining quality metrics and improving safety and outcome in allergy care.

    Science.gov (United States)

    Lee, Stella; Stachler, Robert J; Ferguson, Berrylin J

    2014-04-01

    The delivery of allergy immunotherapy in the otolaryngology office is variable and lacks standardization. Quality metrics encompass the measurement of factors associated with good patient-centered care. These factors have yet to be defined in the delivery of allergy immunotherapy. We developed and applied quality metrics to 6 allergy practices affiliated with an academic otolaryngic allergy center. This work was conducted at a tertiary academic center providing care to over 1500 patients. We evaluated methods and variability between the 6 sites. Tracking of errors and anaphylaxis was initiated across all sites. A nationwide survey of academic and private allergists was used to collect data on current practice and use of quality metrics. The most common types of errors recorded were patient identification errors (n = 4), followed by vial mixing errors (n = 3) and dosing errors (n = 2). There were 7 episodes of anaphylaxis, of which 2 were secondary to dosing errors, for a rate of 0.01%, or 1 in every 10,000 injection visits per year. Site visits showed that 86% of key safety measures were followed. Analysis of nationwide survey responses revealed that quality metrics are still not well defined by either medical or otolaryngic allergy practices. Academic practices were statistically more likely to use quality metrics (p = 0.021) and to perform systems reviews and audits than private practices (p = 0.005). Quality metrics in allergy delivery can help improve safety and quality of care. These metrics need to be further defined by otolaryngic allergists in the changing health care environment. © 2014 ARS-AAOA, LLC.

  11. Metrics for assessing retailers based on consumer perception

    Directory of Open Access Journals (Sweden)

    Klimin Anastasii

    2017-01-01

    Full Text Available The article suggests a new way of looking at trading platforms, called "metrics." Metrics are a way of looking at a point of sale largely from the buyer's side. The buyer enters the store and makes buying decisions based on factors that the seller often does not consider, or considers only in part, because he "does not see" them, not being a buyer himself. The article proposes a classification of retailers, the metrics themselves, and a methodology for determining them, and presents the results of an audit of retailers in St. Petersburg using the proposed methodology.

  12. Metrics to assess injury prevention programs for young workers in high-risk occupations: a scoping review of the literature

    Directory of Open Access Journals (Sweden)

    Jennifer Smith

    2018-05-01

    Full Text Available Introduction: Despite legal protections for young workers in Canada, youth aged 15-24 are at high risk of traumatic occupational injury. While many injury prevention initiatives targeting young workers exist, the challenge faced by youth advocates and employers is deciding which aspect(s) of prevention will be the most effective focus for their efforts. A review of the academic and grey literature was undertaken to compile the metrics (both the indicators being evaluated and the methods of measurement) commonly used to assess injury prevention programs for young workers. Metrics are standards of measurement through which efficiency, performance, progress, or quality of a plan, process, or product can be assessed. Methods: A PICO framework was used to develop search terms. Medline, PubMed, OVID, EMBASE, CCOHS, PsychINFO, CINAHL, NIOSHTIC, Google Scholar and the grey literature were searched for articles in English published between 1975 and 2015. Two independent reviewers screened the resulting list and categorized the metrics into three domains of injury prevention: education, environment and enforcement. Results: Of 174 acquired articles meeting the inclusion criteria, 21 both described and assessed an intervention. Half were educational in nature (n = 11). Commonly assessed metrics included knowledge, perceptions, self-reported behaviours or intentions, hazardous exposures, injury claims, and injury counts. One study outlined a method for developing metrics to predict injury rates. Conclusion: Metrics specific to the evaluation of young worker injury prevention programs are needed, as current metrics are insufficient to predict reduced injuries following program implementation. One study, which the review brought to light, could be an appropriate model for future research to develop valid leading metrics specific to young workers, and then apply these metrics to injury prevention programs for youth.

  13. Metrics for analyzing the quality of model transformations

    NARCIS (Netherlands)

    Amstel, van M.F.; Lange, C.F.J.; Brand, van den M.G.J.; Falcone, G.; Guéhéneuc, Y.G.; Lange, C.F.J.; Porkoláb, Z.; Sahraoui, H.A.

    2008-01-01

    Model transformations become increasingly important with the emergence of model driven engineering of, amongst others, objectoriented software systems. It is therefore necessary to define and evaluate the quality of model transformations. The goal of our research is to make the quality of model

  14. Sigma metrics as a tool for evaluating the performance of internal quality control in a clinical chemistry laboratory.

    Science.gov (United States)

    Kumar, B Vinodh; Mohan, Thuthi

    2018-01-01

    Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a sigma scale by calculating the sigma metrics for each parameter, and to follow the Westgard guidelines for the appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. This is a retrospective study, and the data required were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study are the IQC coefficient of variation percentage and the External Quality Assurance Scheme (EQAS) bias percentage for 16 biochemical parameters. For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma, and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed average performance on the sigma scale; for the level 2 IQC, the same four analytes from level 1 performed at ≥6 sigma, and four analytes (urea, albumin, cholesterol, and potassium) showed average performance. For analytes below the ideal sigma level, the quality goal index (QGI) of 1.2 indicated inaccuracy. This study shows that the sigma metric is a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes.
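
    The sigma metric and quality goal index referred to above are commonly computed as Sigma = (TEa - |bias|) / CV and QGI = |bias| / (1.5 x CV), with all quantities in percent. A minimal sketch with invented numbers (the total allowable error target below is not necessarily the one used in the study):

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (TEa - |bias|) / CV, all values in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def quality_goal_index(bias_pct, cv_pct):
    """QGI = |bias| / (1.5 * CV); values above ~1.2 suggest inaccuracy,
    below ~0.8 imprecision (common rule of thumb)."""
    return abs(bias_pct) / (1.5 * cv_pct)

# Example: one analyte's level-1 IQC data (made-up numbers).
tea, bias, cv = 10.0, 2.0, 1.2   # percent
print(f"sigma = {sigma_metric(tea, bias, cv):.1f}")
print(f"QGI   = {quality_goal_index(bias, cv):.2f}")
```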

  15. RNA-SeQC: RNA-seq metrics for quality control and process optimization.

    Science.gov (United States)

    DeLuca, David S; Levin, Joshua Z; Sivachenko, Andrey; Fennell, Timothy; Nazaire, Marc-Danie; Williams, Chris; Reich, Michael; Winckler, Wendy; Getz, Gad

    2012-06-01

    RNA-seq, the application of next-generation sequencing to RNA, provides transcriptome-wide characterization of cellular activity. Assessment of sequencing performance and library quality is critical to the interpretation of RNA-seq data, yet few tools exist to address this issue. We introduce RNA-SeQC, a program which provides key measures of data quality. These metrics include yield, alignment and duplication rates; GC bias, rRNA content, regions of alignment (exon, intron and intragenic), continuity of coverage, 3'/5' bias and count of detectable transcripts, among others. The software provides multi-sample evaluation of library construction protocols, input materials and other experimental parameters. The modularity of the software enables pipeline integration and the routine monitoring of key measures of data quality such as the number of alignable reads, duplication rates and rRNA contamination. RNA-SeQC allows investigators to make informed decisions about sample inclusion in downstream analysis. In summary, RNA-SeQC provides quality control measures critical to experiment design, process optimization and downstream computational analysis. See www.genepattern.org to run online, or www.broadinstitute.org/rna-seqc/ for a command line tool.

  16. On the performance of metrics to predict quality in point cloud representations

    Science.gov (United States)

    Alexiou, Evangelos; Ebrahimi, Touradj

    2017-09-01

    Point clouds are a promising alternative for immersive representation of visual contents. Recently, an increased interest has been observed in the acquisition, processing and rendering of this modality. Although subjective and objective evaluations are critical in order to assess the visual quality of media content, they still remain open problems for point cloud representation. In this paper we focus our efforts on subjective quality assessment of point cloud geometry, subject to typical types of impairments such as noise corruption and compression-like distortions. In particular, we propose a subjective methodology that is closer to real-life scenarios of point cloud visualization. The performance of the state-of-the-art objective metrics is assessed by considering the subjective scores as the ground truth. Moreover, we investigate the impact of adopting different test methodologies by comparing them. Advantages and drawbacks of every approach are reported, based on statistical analysis. The results and conclusions of this work provide useful insights that could be considered in future experimentation.

  17. Quality of Service Metrics in Wireless Sensor Networks: A Survey

    Science.gov (United States)

    Snigdh, Itu; Gupta, Nisha

    2016-03-01

    Wireless ad hoc networks are characterized by autonomous nodes communicating with each other by forming a multi-hop radio network and maintaining connectivity in a decentralized manner. This paper presents a systematic approach to the interdependencies and analogies among the various factors that affect and constrain a wireless sensor network. The article elaborates on the quality-of-service parameters (methods of deployment, coverage, and connectivity) that affect the lifetime of the network, as addressed to date in the literature. It also discusses indispensable factors that determine the quality of service achieved but have not yet received due attention.

  18. The use of quality metrics in service centres

    NARCIS (Netherlands)

    Petkova, V.T.; Sander, P.C.; Brombacher, A.C.

    2000-01-01

    In industry it is not well realised that a service centre is potentially one of the major contributors to quality improvement. Service is able to collect vital information about the field behaviour of products in interaction with customers. If this information is well analysed and communicated, the

  19. Instrument Motion Metrics for Laparoscopic Skills Assessment in Virtual Reality and Augmented Reality.

    Science.gov (United States)

    Fransson, Boel A; Chen, Chi-Ya; Noyes, Julie A; Ragle, Claude A

    2016-11-01

    To determine the construct and concurrent validity of instrument motion metrics for laparoscopic skills assessment in virtual reality and augmented reality simulators. Evaluation study. Veterinarian students (novice, n = 14) and veterinarians (experienced, n = 11) with no or variable laparoscopic experience. Participants' minimally invasive surgery (MIS) experience was determined by hospital records of MIS procedures performed in the Teaching Hospital. Basic laparoscopic skills were assessed by 5 tasks using a physical box trainer. Each participant completed 2 tasks for assessments in each type of simulator (virtual reality: bowel handling and cutting; augmented reality: object positioning and a pericardial window model). Motion metrics such as instrument path length, angle or drift, and economy of motion of each simulator were recorded. None of the motion metrics in a virtual reality simulator showed correlation with experience, or to the basic laparoscopic skills score. All metrics in augmented reality were significantly correlated with experience (time, instrument path, and economy of movement), except for the hand dominance metric. The basic laparoscopic skills score was correlated to all performance metrics in augmented reality. The augmented reality motion metrics differed between American College of Veterinary Surgeons diplomates and residents, whereas basic laparoscopic skills score and virtual reality metrics did not. Our results provide construct validity and concurrent validity for motion analysis metrics for an augmented reality system, whereas a virtual reality system was validated only for the time score. © Copyright 2016 by The American College of Veterinary Surgeons.
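
    Instrument path length and economy of motion, two of the metrics above, can be computed directly from a time series of instrument-tip positions. A minimal sketch on a synthetic trajectory (the definitions used here are common ones; the simulators in the study may compute them differently):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy 3-D instrument-tip trajectory (columns x, y, z in mm), 500 samples.
positions = np.cumsum(rng.standard_normal((500, 3)), axis=0)

steps = np.diff(positions, axis=0)
path_length = np.linalg.norm(steps, axis=1).sum()   # total distance moved

# Economy of motion: straight-line start-to-end distance relative to the
# actual path length (closer to 1 means more economical movement).
straight = np.linalg.norm(positions[-1] - positions[0])
economy = straight / path_length

print(f"path length: {path_length:.1f} mm")
print(f"economy of motion: {economy:.3f}")
```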

  20. Utility Validation of a New Fingerprint Quality Metric

    OpenAIRE

    Yao, Zhigang; Charrier, Christophe; Rosenberger, Christophe

    2014-01-01

    International audience; Fingerprint recognition can be regarded as a relatively full-fledged application of biometrics. The use of this biometric modality is not limited to the traditional public security area, but has spread into daily life, for instance in smartphone authentication and e-payment. However, quality control of biometric samples remains a necessary task in order to optimize operational performance. Research has shown that biometric system performance could be great...

  1. Developing a more useful surface quality metric for laser optics

    Science.gov (United States)

    Turchette, Quentin; Turner, Trey

    2011-02-01

    Light scatter due to surface defects on laser resonator optics produces losses which lower system efficiency and output power. The traditional methodology for surface quality inspection involves visual comparison of a component to scratch and dig (SAD) standards under controlled lighting and viewing conditions. Unfortunately, this process is subjective and operator dependent. Also, there is no clear correlation between inspection results and the actual performance impact of the optic in a laser resonator. As a result, laser manufacturers often overspecify surface quality in order to ensure that optics will not degrade laser performance due to scatter. This can drive up component costs and lengthen lead times. Alternatively, an objective test system for measuring optical scatter from defects can be constructed with a microscope, calibrated lighting, a CCD detector and image processing software. This approach is quantitative, highly repeatable and totally operator independent. Furthermore, it is flexible, allowing the user to set threshold levels as to what will or will not constitute a defect. This paper details how this automated, quantitative type of surface quality measurement can be constructed, and shows how its results correlate against conventional loss measurement techniques such as cavity ringdown times.
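
    The described objective system reduces, at its core, to thresholding a calibrated image and measuring the bright defect regions. A minimal OpenCV sketch on a synthetic dark-field image (the threshold and size limits are placeholders, not the paper's calibration):

```python
import cv2
import numpy as np

# Synthetic stand-in for a calibrated dark-field capture: dark background
# with one "scratch" and one "dig" drawn in as bright features.
rng = np.random.default_rng(5)
img = (10 * rng.random((400, 400))).astype(np.uint8)
cv2.line(img, (50, 60), (300, 80), color=200, thickness=1)
cv2.circle(img, (250, 250), 4, color=220, thickness=-1)

# Threshold so defects appear as bright blobs, then label and measure them.
_, binary = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

min_area_px = 5           # user-set threshold: ignore single-pixel noise
defects = [s for s in stats[1:] if s[cv2.CC_STAT_AREA] >= min_area_px]
total_area = sum(int(s[cv2.CC_STAT_AREA]) for s in defects)
print(f"{len(defects)} defects, total area {total_area} px")
```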

  2. Estimating the assessment difficulty of CVSS environmental metrics : an experiment

    NARCIS (Netherlands)

    Allodi, L.; Biagioni, S.; Crispo, B.; Labunets, K.; Massacci, F.; Santos, W.; Dang, T.K.; Wagner, R.; Küng, J.; Thoai, N.; Takizawa, M.; Neuhold, E.J.

    2017-01-01

    [Context] The CVSS framework provides several dimensions to score vulnerabilities. The environmental metrics allow security analysts to downgrade or upgrade vulnerability scores based on a company’s computing environments and security requirements. [Question] How difficult is it for a human assessor to

  4. Advanced Metrics for Assessing Holistic Care: The "Epidaurus 2" Project.

    Science.gov (United States)

    Foote, Frederick O; Benson, Herbert; Berger, Ann; Berman, Brian; DeLeo, James; Deuster, Patricia A; Lary, David J; Silverman, Marni N; Sternberg, Esther M

    2018-01-01

    In response to the challenge of military traumatic brain injury and posttraumatic stress disorder, the US military developed a wide range of holistic care modalities at the new Walter Reed National Military Medical Center, Bethesda, MD, from 2001 to 2017, guided by civilian expert consultation via the Epidaurus Project. These projects spanned a range from healing buildings to wellness initiatives and healing through nature, spirituality, and the arts. The next challenge was to develop whole-body metrics to guide the use of these therapies in clinical care. Under the "Epidaurus 2" Project, a national search produced 5 advanced metrics for measuring whole-body therapeutic effects: genomics, integrated stress biomarkers, language analysis, machine learning, and "Star Glyphs." This article describes the metrics, their current use in guiding holistic care at Walter Reed, and their potential for operationalizing personalized care, patient self-management, and the improvement of public health. Development of these metrics allows the scientific integration of holistic therapies with organ-system-based care, expanding the powers of medicine.

  5. National evaluation of multidisciplinary quality metrics for head and neck cancer.

    Science.gov (United States)

    Cramer, John D; Speedy, Sedona E; Ferris, Robert L; Rademaker, Alfred W; Patel, Urjeet A; Samant, Sandeep

    2017-11-15

    The National Quality Forum has endorsed quality-improvement measures for multiple cancer types that are being developed into actionable tools to improve cancer care. No nationally endorsed quality metrics currently exist for head and neck cancer. The authors identified patients with surgically treated, invasive, head and neck squamous cell carcinoma in the National Cancer Data Base from 2004 to 2014 and compared the rate of adherence to 5 different quality metrics and whether compliance with these quality metrics impacted overall survival. The metrics examined included negative surgical margins, neck dissection lymph node (LN) yield ≥ 18, appropriate adjuvant radiation, appropriate adjuvant chemoradiation, adjuvant therapy within 6 weeks, as well as overall quality. In total, 76,853 eligible patients were identified. There was substantial variability in patient-level adherence, which was 80% for negative surgical margins, 73.1% for neck dissection LN yield, 69% for adjuvant radiation, 42.6% for adjuvant chemoradiation, and 44.5% for adjuvant therapy within 6 weeks. Risk-adjusted Cox proportional-hazard models indicated that all metrics were associated with a reduced risk of death: negative margins (hazard ratio [HR] 0.73; 95% confidence interval [CI], 0.71-0.76), LN yield ≥ 18 (HR, 0.93; 95% CI, 0.89-0.96), adjuvant radiation (HR, 0.67; 95% CI, 0.64-0.70), adjuvant chemoradiation (HR, 0.84; 95% CI, 0.79-0.88), and adjuvant therapy ≤6 weeks (HR, 0.92; 95% CI, 0.89-0.96). Patients who received high-quality care had a 19% reduced adjusted hazard of mortality (HR, 0.81; 95% CI, 0.79-0.83). Five head and neck cancer quality metrics were identified that have substantial variability in adherence and meaningfully impact overall survival. These metrics are appropriate candidates for national adoption. Cancer 2017;123:4372-81. © 2017 American Cancer Society.

  6. Quality metric for accurate overlay control in <20nm nodes

    Science.gov (United States)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the Overlay (OVL) budget is getting tighter at these advanced nodes, accuracy in each nanometer of OVL error is critical. When process owners select OVL targets and methods for their process, they must do so wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Moving toward the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named 'Qmerit' for its imaging-based OVL (IBO) targets, which is obtained on-the-fly for each OVL measurement point in X and Y. This Qmerit score will enable process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  7. No Reference Prediction of Quality Metrics for H.264 Compressed Infrared Image Sequences for UAV Applications

    DEFF Research Database (Denmark)

    Hossain, Kabir; Mantel, Claire; Forchhammer, Søren

    2018-01-01

    The framework for this research work is the acquisition of Infrared (IR) images from Unmanned Aerial Vehicles (UAV). In this paper we consider the No-Reference (NR) prediction of Full Reference Quality Metrics for Infrared (IR) video sequences which are compressed and thus distorted by an H.264...

  8. Synthesized view comparison method for no-reference 3D image quality assessment

    Science.gov (United States)

    Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun

    2018-04-01

    We develop a no-reference image quality assessment metric to evaluate the quality of synthesized view rendered from the Multi-view Video plus Depth (MVD) format. Our metric is named Synthesized View Comparison (SVC), which is designed for real-time quality monitoring at the receiver side in a 3D-TV system. The metric utilizes the virtual views in the middle which are warped from left and right views by Depth-image-based rendering algorithm (DIBR), and compares the difference between the virtual views rendered from different cameras by Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for the synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
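
    The heart of the SVC approach, comparing two independently synthesized versions of the same middle view with SSIM, can be sketched in a few lines. The arrays below are random stand-ins for DIBR-warped views, not output of an actual renderer:

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(2)
# Placeholder "virtual views" of the same middle viewpoint, one warped
# from the left camera and one from the right (here: noise-perturbed copies).
view_from_left = rng.random((240, 320))
view_from_right = np.clip(
    view_from_left + 0.05 * rng.standard_normal((240, 320)), 0.0, 1.0)

# SSIM between the two synthesized views serves as the quality indicator.
score = structural_similarity(view_from_left, view_from_right, data_range=1.0)
print(f"SSIM between the two synthesized views: {score:.3f}")
```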

  9. Visual signal quality assessment quality of experience (QOE)

    CERN Document Server

    Ma, Lin; Lin, Weisi; Ngan, King

    2015-01-01

    This book provides comprehensive coverage of the latest trends/advances in subjective and objective quality evaluation for traditional visual signals, such as 2D images and video, as well as the most recent challenges for the field of multimedia quality assessment and processing, such as mobile video and social media. Readers will learn how to ensure the highest storage/delivery/transmission quality of visual content (including image, video, graphics, animation, etc.) from the server to the consumer, under resource constraints, such as computation, bandwidth, storage space, battery life, etc. The book: provides an overview of quality assessment for traditional visual signals; covers newly emerged visual signals such as social media, 3D image/video, mobile video, high dynamic range (HDR) images, graphics/animation, etc., which demand better quality of experience (QoE); helps readers to develop better quality metrics and processing methods for newly emerged visual signals; enables testing, optimizing, benchmarking...

  10. Simulation and assessment of urbanization impacts on runoff metrics

    DEFF Research Database (Denmark)

    Zhang, Yongyong; Xia, Jun; Yu, Jingjie

    2018-01-01

    Urbanization-induced landuse changes alter runoff regimes in complex ways. In this study, a detailed investigation of the urbanization impacts on runoff regimes is provided by using multiple runoff metrics and with consideration of landuse dynamics; a catchment hydrological model is modified to represent these changes. The Qing River catchment, a peri-urban catchment in the Beijing metropolitan area, is selected as our study region. Results show that: (1) dryland agriculture decreased from 13.9% to 1.5% of the total catchment area in the years 2000-2015, while the percentage of impervious surface … The results provide information for urban planning such as Sponge City design.

  11. Assessment of various supervised learning algorithms using different performance metrics

    Science.gov (United States)

    Susheel Kumar, S. M.; Laxkar, Deepak; Adhikari, Sourav; Vijayarajan, V.

    2017-11-01

    Our work presents a comparison of the performance of supervised machine learning algorithms on a binary classification task. The supervised machine learning algorithms taken into consideration are Support Vector Machine (SVM), Decision Tree (DT), K-Nearest Neighbour (KNN), Naïve Bayes (NB), and Random Forest (RF). This paper focuses on comparing the performance of the above-mentioned algorithms on one binary classification task by analysing metrics such as accuracy, F-measure, G-measure, precision, misclassification rate, false positive rate, true positive rate, specificity, and prevalence.
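
    The listed metrics all derive from the binary confusion matrix and are available in scikit-learn. A minimal sketch on placeholder predictions (G-measure is taken here as the geometric mean of precision and recall, one common definition):

```python
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

# Placeholder labels and predictions standing in for any classifier's output.
y_true = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)          # also the true positive rate

print("accuracy:", accuracy_score(y_true, y_pred))
print("precision:", precision)
print("recall / TPR:", recall)
print("F-measure:", f1_score(y_true, y_pred))
print("G-measure:", (precision * recall) ** 0.5)
print("FPR:", fp / (fp + tn))
print("specificity:", tn / (tn + fp))
print("misclassification rate:", (fp + fn) / len(y_true))
print("prevalence:", sum(y_true) / len(y_true))
```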

  12. Area of Concern: a new paradigm in life cycle assessment for the development of footprint metrics

    Science.gov (United States)

    Purpose: As a class of environmental metrics, footprints have been poorly defined, have had an unclear relationship to life cycle assessment (LCA), and the variety of approaches to their quantification has sometimes resulted in confusing and contradictory messages in the marketplac...

  13. Power quality assessment

    International Nuclear Information System (INIS)

    Fathi, H.M.E.

    2012-01-01

    Electrical power systems are exposed to different types of power quality disturbances. Assessment of power quality is necessary for maintaining the accurate operation of sensitive equipment, especially in nuclear installations; it also ensures that unnecessary energy losses in a power system are kept to a minimum, which leads to greater profits. With advances in technology, industrial and commercial facilities are growing in many regions, and power quality problems have become a major concern among engineers, particularly in industrial environments with much large-scale equipment. Thus, it is useful to investigate and mitigate power quality problems. Assessment of power quality requires the identification of any anomalous behaviour on a power system which adversely affects the normal operation of electrical or electronic equipment. The choice of monitoring equipment in a survey is also important for ascertaining a solution to these power quality problems. A power quality assessment involves gathering data resources; analyzing the data with reference to power quality standards; and then, if problems exist, recommending mitigation techniques. The main objective of the present work is to investigate and mitigate power quality problems in nuclear installations. Normally, electrical power is supplied to the installations via two sources to maintain reliability, each designed to carry the full load. The assessment of power quality was performed at the nuclear installations for both sources under different operating conditions. The thesis begins with a discussion of power quality definitions and the results of previous studies in power quality monitoring. The assessment determined that one source of electricity had relatively good power quality, although several disturbances exceeded the thresholds, among them fifth harmonic, voltage swell, overvoltage and flicker. While the second

  14. Effect of thematic map misclassification on landscape multi-metric assessment.

    Science.gov (United States)

    Kleindl, William J; Powell, Scott L; Hauer, F Richard

    2015-06-01

    Advancements in remote sensing and computational tools have increased our awareness of large-scale environmental problems, thereby creating a need for monitoring, assessment, and management at these scales. Over the last decade, several watershed and regional multi-metric indices have been developed to assist decision-makers with planning actions of these scales. However, these tools use remote-sensing products that are subject to land-cover misclassification, and these errors are rarely incorporated in the assessment results. Here, we examined the sensitivity of a landscape-scale multi-metric index (MMI) to error from thematic land-cover misclassification and the implications of this uncertainty for resource management decisions. Through a case study, we used a simplified floodplain MMI assessment tool, whose metrics were derived from Landsat thematic maps, to initially provide results that were naive to thematic misclassification error. Using a Monte Carlo simulation model, we then incorporated map misclassification error into our MMI, resulting in four important conclusions: (1) each metric had a different sensitivity to error; (2) within each metric, the bias between the error-naive metric scores and simulated scores that incorporate potential error varied in magnitude and direction depending on the underlying land cover at each assessment site; (3) collectively, when the metrics were combined into a multi-metric index, the effects were attenuated; and (4) the index bias indicated that our naive assessment model may overestimate floodplain condition of sites with limited human impacts and, to a lesser extent, either over- or underestimated floodplain condition of sites with mixed land use.
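
    The Monte Carlo approach described above can be sketched generically: resample the class of each pixel according to assumed misclassification probabilities and recompute the metric to obtain its error distribution. Everything below (map, confusion matrix, and the simple "share of natural cover" metric) is an invented placeholder, not the study's floodplain MMI:

```python
import numpy as np

rng = np.random.default_rng(3)
true_map = rng.integers(0, 3, size=(30, 30))  # 0=natural, 1=agric., 2=urban

# confusion[i, j]: probability a pixel mapped as class i is truly class j.
confusion = np.array([[0.85, 0.10, 0.05],
                      [0.12, 0.80, 0.08],
                      [0.05, 0.10, 0.85]])

def metric(class_map):
    return (class_map == 0).mean()            # fraction of natural cover

scores = []
for _ in range(500):
    # Redraw every pixel's class from its row of the confusion matrix.
    noisy = np.array([rng.choice(3, p=confusion[c])
                      for c in true_map.ravel()])
    scores.append(metric(noisy.reshape(true_map.shape)))

print(f"error-naive metric: {metric(true_map):.3f}")
print(f"with misclassification: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```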

  15. Assessing the influence of multiple stressors on stream diatom metrics in the upper Midwest, USA

    Science.gov (United States)

    Munn, Mark D.; Waite, Ian R.; Konrad, Christopher P.

    2018-01-01

    Water resource managers face increasing challenges in identifying which physical and chemical stressors are responsible for the alteration of biological conditions in streams. The objective of this study was to assess the comparative influence of multiple stressors on benthic diatoms at 98 sites that spanned a range of stressors in an agriculturally dominated region in the upper Midwest, USA. The primary stressors of interest included nutrients, herbicides and fungicides, sediment, and streamflow, although the influence of physical habitat was also incorporated in the assessment. Boosted Regression Trees were used to examine both the sensitivity of various diatom metrics and the relative importance of the primary stressors. Percent Sensitive Taxa, percent Highly Motile Taxa, and percent High Phosphorus Taxa had the strongest responses to stressors. Habitat and total phosphorus were the most common discriminators of diatom metrics, with herbicides as secondary factors. A Classification and Regression Tree (CART) model was used to examine conditional relations among stressors; it indicated that fine-grain streams had a lower percentage of Sensitive Taxa than coarse-grain streams, with Sensitive Taxa decreasing further with increased water temperature (>30 °C) and triazine concentrations (>1500 ng/L). In contrast, streams dominated by coarse-grain substrate contained a higher percentage of Sensitive Taxa, with relative abundance increasing at lower water temperatures and shallower water depths. Water temperature appears to be a major limiting factor in Midwest streams, whereas both total phosphorus and percent fines showed a slight subsidy-stress response. While using benthic algae for assessing stream quality can be challenging, field-based studies can elucidate stressor effects and interactions when the response variables are appropriate, sufficient stressor resolution is achieved, and the number and type of sites represent a gradient of stressor conditions and at least a quasi

  16. Beyond metrics? Utilizing ‘soft intelligence’ for healthcare quality and safety

    OpenAIRE

    Martin, Graham P.; McKee, Lorna; Dixon-Woods, Mary

    2015-01-01

    Formal metrics for monitoring the quality and safety of healthcare have a valuable role, but may not, by themselves, yield full insight into the range of fallibilities in organizations. ‘Soft intelligence’ is usefully understood as the processes and behaviours associated with seeking and interpreting soft data—of the kind that evade easy capture, straightforward classification and simple quantification—to produce forms of knowledge that can provide the basis for intervention. With the aim of ...

  17. Simulation of devices mobility to estimate wireless channel quality metrics in 5G networks

    Science.gov (United States)

    Orlov, Yu.; Fedorov, S.; Samuylov, A.; Gaidamaka, Yu.; Molchanov, D.

    2017-07-01

    The problem of channel quality estimation for devices in a wireless 5G network is formulated. As the performance metric of interest we choose the signal-to-interference-plus-noise ratio (SINR), which depends essentially on the distance between the communicating devices. A model with a plurality of moving devices in a bounded three-dimensional space and a simulation algorithm to determine the distances between the devices for a given motion model are devised.
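
    A minimal sketch of such a setup follows, with devices placed at random in a bounded space and SINR computed under a simple power-law path-loss model; all constants (transmit power, path-loss exponent, noise floor) are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
n_devices, box = 20, 100.0                 # devices in a 100 m cube
pos = rng.random((n_devices, 3)) * box
tx_power, alpha, noise = 1.0, 3.5, 1e-9    # W, path-loss exponent, W

def rx_power(d):
    """Received power under a d^-alpha path-loss model (d clamped to 1 m)."""
    return tx_power * max(d, 1.0) ** (-alpha)

# SINR at device 0 listening to device 1; all other devices interfere.
signal = rx_power(np.linalg.norm(pos[0] - pos[1]))
interference = sum(rx_power(np.linalg.norm(pos[0] - pos[k]))
                   for k in range(2, n_devices))
sinr_db = 10 * np.log10(signal / (noise + interference))
print(f"SINR at device 0: {sinr_db:.1f} dB")
```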

  18. Performance of different colour quality metrics proposed to CIE TC 1-91

    OpenAIRE

    Bhusal, Pramod; Dangol, Rajendra

    2017-01-01

    The main aim of the article is to find out the performance of different metrics proposed to CIE TC 1-91. Currently, six different indexes have been proposed to CIE TC 1-91: Colour Quality Scale (CQS), Feeling of Contrast Index (FCI), Memory colour rendering index (MCRI), Preference of skin (PS), Relative gamut area index (RGAI) and Illuminating Engineering society Method for evaluating light source colour rendition (IES TM-30). The evaluation and analysis are based on previously conducted exp...

  19. Patent Assessment Quality

    DEFF Research Database (Denmark)

    Burke, Paul F.; Reitzig, Markus

    2006-01-01

    The increasing number of patent applications worldwide and the extension of patenting to the areas of software and business methods have triggered a debate on "patent quality". While patent quality may have various dimensions, this paper argues that consistency in the decision making on the side of the patent office is one important dimension, particularly in new patenting areas (emerging technologies). In order to understand whether patent offices appear capable of providing consistent assessments of a patent's technological quality in such novel industries from the beginning, we study the concordance of the European Patent Office's (EPO's) granting and opposition decisions for individual patents. We use the historical example of biotech patents filed between 1978 and 1986, the early stage of the industry. Our results indicate that the EPO shows systematically different assessments of technological quality

  20. NASA Aviation Safety Program Systems Analysis/Program Assessment Metrics Review

    Science.gov (United States)

    Louis, Garrick E.; Anderson, Katherine; Ahmad, Tisan; Bouabid, Ali; Siriwardana, Maya; Guilbaud, Patrick

    2003-01-01

    The goal of this project is to evaluate the metrics and processes used by NASA's Aviation Safety Program in assessing technologies that contribute to NASA's aviation safety goals. There were three objectives for reaching this goal. First, NASA's main objectives for aviation safety were documented and their consistency was checked against the main objectives of the Aviation Safety Program. Next, the metrics used for technology investment by the Program Assessment function of AvSP were evaluated. Finally, other metrics that could be used by the Program Assessment Team (PAT) were identified and evaluated. This investigation revealed that the objectives are in fact consistent across organizational levels at NASA and with the FAA. Some of the major issues discussed in this study which should be further investigated, are the removal of the Cost and Return-on-Investment metrics, the lack of the metrics to measure the balance of investment and technology, the interdependencies between some of the metric risk driver categories, and the conflict between 'fatal accident rate' and 'accident rate' in the language of the Aviation Safety goal as stated in different sources.

  1. Developing a composite weighted quality metric to reflect the total benefit conferred by a health plan.

    Science.gov (United States)

    Taskler, Glen B; Braithwaite, R Scott

    2015-03-01

    To improve on individual health quality measures, which are associated with varying degrees of health benefit, and on composite quality metrics, which weight individual measures identically, we developed a health-weighted composite quality measure reflecting the total health benefit conferred by a health plan annually, using preventive care as a test case. Using national disease prevalence, we simulated a hypothetical insurance panel of individuals aged 25 to 84 years. For each individual, we estimated the gain in life expectancy associated with 1 year of health system exposure to encourage adherence to major preventive care guidelines, controlling for patient characteristics (age, race, gender, comorbidity) and variation in individual adherence rates. This personalized gain in life expectancy was used as a proxy for the amount of health benefit conferred by a health plan annually to its members, and formed the weights in our health-weighted composite quality measure. We aggregated health benefits across the health insurance membership panel to analyze total health system performance. Our composite quality metric gave the highest weights to health plans that succeeded in implementing tobacco cessation and weight loss. One year of compliance with these goals was associated with 2 to 10 times as much health benefit as compliance with easier-to-follow preventive care services, such as mammography, aspirin, and antihypertensives. For example, for women aged 55 to 64 years, successful interventions to encourage weight loss were associated with 2.1 times the health benefit of blood pressure reduction and 3.9 times the health benefit of increasing adherence with screening mammography. A single health-weighted quality metric may inform measurement of total health system performance.
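
    The weighting idea can be sketched as follows: each measure's weight is its estimated life-expectancy gain, and a plan's score is the fraction of achievable benefit it actually delivers to the panel. All gains and adherence rates below are invented placeholders, not the study's estimates:

```python
measures = {
    # measure: (life-expectancy gain in years if adhered to, adherence rate)
    "tobacco_cessation": (2.0, 0.25),
    "weight_loss":       (1.6, 0.20),
    "blood_pressure":    (0.8, 0.60),
    "mammography":       (0.4, 0.70),
    "aspirin":           (0.3, 0.65),
}

def composite_score(measures):
    """Fraction of achievable life-expectancy benefit actually delivered."""
    gained = sum(gain * adherence for gain, adherence in measures.values())
    possible = sum(gain for gain, _ in measures.values())
    return gained / possible

print(f"health-weighted composite: {composite_score(measures):.2f}")
```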

  2. ROBUSTNESS AND PREDICTION ACCURACY OF MACHINE LEARNING FOR OBJECTIVE VISUAL QUALITY ASSESSMENT

    OpenAIRE

    Hines, Andrew; Kendrick, Paul; Barri, Adriaan; Narwaria, Manish; Redi, Judith A.

    2014-01-01

    Machine Learning (ML) is a powerful tool to support the development of objective visual quality assessment metrics, serving as a substitute model for the perceptual mechanisms acting in visual quality appreciation. Nevertheless, the reliability of ML-based techniques within objective quality assessment metrics is often questioned. In this study, the robustness of ML in supporting objective quality assessment is investigated, specifically when the feature set adopted for prediction is suboptim...

  3. Integrated Metrics for Improving the Life Cycle Approach to Assessing Product System Sustainability

    Directory of Open Access Journals (Sweden)

    Wesley Ingwersen

    2014-03-01

    Full Text Available Life cycle approaches are critical for identifying and reducing environmental burdens of products. While these methods can indicate potential environmental impacts of a product, current Life Cycle Assessment (LCA methods fail to integrate the multiple impacts of a system into unified measures of social, economic or environmental performance related to sustainability. Integrated metrics that combine multiple aspects of system performance based on a common scientific or economic principle have proven to be valuable for sustainability evaluation. In this work, we propose methods of adapting four integrated metrics for use with LCAs of product systems: ecological footprint, emergy, green net value added, and Fisher information. These metrics provide information on the full product system in land, energy, monetary equivalents, and as a unitless information index; each bundled with one or more indicators for reporting. When used together and for relative comparison, integrated metrics provide a broader coverage of sustainability aspects from multiple theoretical perspectives that is more likely to illuminate potential issues than individual impact indicators. These integrated metrics are recommended for use in combination with traditional indicators used in LCA. Future work will test and demonstrate the value of using these integrated metrics and combinations to assess product system sustainability.

  4. MUSTANG: A Community-Facing Web Service to Improve Seismic Data Quality Awareness Through Metrics

    Science.gov (United States)

    Templeton, M. E.; Ahern, T. K.; Casey, R. E.; Sharer, G.; Weertman, B.; Ashmore, S.

    2014-12-01

    IRIS DMC is engaged in a new effort to provide broad and deep visibility into the quality of data and metadata found in its terabyte-scale geophysical data archive. Taking advantage of large and fast disk capacity, modern advances in open database technologies, and nimble provisioning of virtual machine resources, we are creating an openly accessible treasure trove of data measurements for scientists and the general public to utilize in providing new insights into the quality of this data. We have branded this statistical gathering system MUSTANG, and have constructed it as a component of the web services suite that IRIS DMC offers. MUSTANG measures over forty data metrics addressing issues with archive status, data statistics and continuity, signal anomalies, noise analysis, metadata checks, and station state of health. These metrics could potentially be used both by network operators to diagnose station problems and by data users to sort suitable data from unreliable or unusable data. Our poster details what MUSTANG is, how users can access it, what measurements they can find, and how MUSTANG fits into the IRIS DMC's data access ecosystem. Progress in data processing, approaches to data visualization, and case studies of MUSTANG's use for quality assurance will be presented. We want to illustrate what is possible with data quality assurance, the need for data quality assurance, and how the seismic community will benefit from this freely available analytics service.

  5. Homogeneity and EPR metrics for assessment of regular grids used in CW EPR powder simulations.

    Science.gov (United States)

    Crăciun, Cora

    2014-08-01

    CW EPR powder spectra may be approximated numerically using a spherical grid and a Voronoi tessellation-based cubature. For a given spin system, the quality of simulated EPR spectra depends on the grid type, size, and orientation in the molecular frame. In previous work, the grids used in CW EPR powder simulations have been compared mainly from a geometric perspective. However, some grids with similar degrees of homogeneity generate different quality simulated spectra. This paper evaluates the grids from an EPR perspective, by defining two metrics depending on the spin system characteristics and the grid Voronoi tessellation. The first metric determines if the grid points are EPR-centred in their Voronoi cells, based on the resonance magnetic field variations inside these cells. The second metric verifies if the adjacent Voronoi cells of the tessellation are EPR-overlapping, by computing the common range of their resonance magnetic field intervals. Besides a series of well-known regular grids, the paper investigates a modified ZCW grid and a Fibonacci spherical code, which are new in the context of EPR simulations. For the investigated grids, the EPR metrics bring more information than the homogeneity quantities and are better related to the grids' EPR behaviour, for different spin system symmetries. The metrics' efficiency and limits are finally verified for grids generated from the initial ones, by using the original or magnetic field-constrained variants of the Spherical Centroidal Voronoi Tessellation method. Copyright © 2014 Elsevier Inc. All rights reserved.
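
    The second metric lends itself to a compact sketch, assuming each cell's resonance magnetic field interval is already known; the abstract does not give the paper's normalization, so the value returned here is simply the width of the common range:

        def epr_overlap(interval_a, interval_b):
            """Width of the common range of two resonance-field intervals
            of adjacent Voronoi cells (same field units); 0.0 means the
            cells are not EPR-overlapping."""
            lo = max(interval_a[0], interval_b[0])
            hi = min(interval_a[1], interval_b[1])
            return max(0.0, hi - lo)

        print(epr_overlap((330.0, 335.0), (334.0, 340.0)))  # 1.0 (e.g., mT)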

  6. qcML: an exchange format for quality control metrics from mass spectrometry experiments.

    Science.gov (United States)

    Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W P; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A; Kelstrup, Christian D; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S; Olsen, Jesper V; Heck, Albert J R; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart

    2014-08-01

    Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exchange format for reporting these performance metrics. We therefore developed the qcML format, an XML-based standard that follows the design principles of the related mzML, mzIdentML, mzQuantML, and TraML standards from the HUPO-PSI (Proteomics Standards Initiative). In addition to the XML format, we also provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. We here describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.

  7. qcML: An Exchange Format for Quality Control Metrics from Mass Spectrometry Experiments*

    Science.gov (United States)

    Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W. P.; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A.; Kelstrup, Christian D.; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S.; Olsen, Jesper V.; Heck, Albert J. R.; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart

    2014-01-01

    Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exchange format for reporting these performance metrics. We therefore developed the qcML format, an XML-based standard that follows the design principles of the related mzML, mzIdentML, mzQuantML, and TraML standards from the HUPO-PSI (Proteomics Standards Initiative). In addition to the XML format, we also provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. We here describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml. PMID:24760958

  8. Parameter Search Algorithms for Microwave Radar-Based Breast Imaging: Focal Quality Metrics as Fitness Functions.

    Science.gov (United States)

    O'Loughlin, Declan; Oliveira, Bárbara L; Elahi, Muhammad Adnan; Glavin, Martin; Jones, Edward; Popović, Milica; O'Halloran, Martin

    2017-12-06

    Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate although this is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves without additional hardware. In this work, qualities of accurately reconstructed images are identified from point spread functions. As the qualities of accurately reconstructed microwave images are similar to the qualities of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated using experimental dielectrically heterogeneous phantoms on the three-dimensional volumetric image. Based on a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.
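
    A sketch of the idea, with `reconstruct` standing in as a hypothetical beamformer that maps an assumed average permittivity to an image; the gradient-energy focal quality metric below is one common choice from the focus literature, not necessarily the metric used in the paper:

        import numpy as np

        def gradient_energy(image):
            # Focal quality metric: well-focused images have sharper edges,
            # hence larger summed squared intensity gradients.
            gy, gx = np.gradient(image)
            return float(np.sum(gx**2 + gy**2))

        def estimate_permittivity(reconstruct, candidates):
            # Exhaustive parameter search; a real implementation would use a
            # smarter optimiser with the same metric as its fitness function.
            return max(candidates, key=lambda eps: gradient_energy(reconstruct(eps)))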

  9. Revision and extension of Eco-LCA metrics for sustainability assessment of the energy and chemical processes.

    Science.gov (United States)

    Yang, Shiying; Yang, Siyu; Kraslawski, Andrzej; Qian, Yu

    2013-12-17

    Ecologically based life cycle assessment (Eco-LCA) is an appealing approach for the evaluation of resource utilization and environmental impacts of the process industries on an ecological scale. However, the aggregated metrics of Eco-LCA suffer from some drawbacks: the environmental impact metric has limited applicability; the resource utilization metric ignores indirect consumption; the renewability metric fails to address the quantitative distinction of resource availability; the productivity metric seems self-contradictory. In this paper, the existing Eco-LCA metrics are revised and extended for sustainability assessment of the energy and chemical processes. A new Eco-LCA metrics system is proposed, including four independent dimensions: environmental impact, resource utilization, resource availability, and economic effectiveness. An illustrative example comparing a gas boiler and a solar boiler process provides insight into the features of the proposed approach.

  10. Assessment of water quality

    International Nuclear Information System (INIS)

    Qureshi, I.H.

    2002-01-01

    Water is the most essential component of all living things and it supports the life process. Without water, it would not have been possible to sustain life on this planet. The total quantity of water on earth is estimated to be 1.4 trillion cubic meters. Of this, less than 1%, present in rivers and groundwater resources, is available to meet our requirements. These resources are being contaminated with toxic substances due to ever-increasing environmental pollution. To reduce this contamination, many countries have established standards for the discharge of municipal and industrial waste into water streams. We use water for various purposes, and for each purpose we require water of appropriate quality. The quality of water is assessed by evaluating the physical, chemical, biological and radiological characteristics of water. Water for drinking and food preparation must be free from turbidity, colour, odour and objectionable tastes, as well as from disease-causing organisms and inorganic and organic substances which may produce adverse physiological effects. Such water is referred to as potable water and is produced by treatment of raw water, involving various unit operations. The effectiveness of the treatment processes is checked by assessing the various parameters of water quality, which involves sampling and analysis of water and comparison with the National Quality Standards or WHO standards. Water which conforms to these standards is considered safe and palatable for human consumption. Periodic assessment of water is necessary to ensure the quality of water supplied to the public. This requires proper sampling at specified locations and analysis of water, employing reliable analytical techniques. (author)

  11. Metric Assessments of Books As Families of Works

    DEFF Research Database (Denmark)

    Zuccala, Alesia Ann; Breum, Mads; Bruun, Kasper

    2017-01-01

    We describe the intellectual and physical properties of books as manifestations, expressions and works and assess the current indexing and metadata structure of monographs in the Book Citation Index (BKCI). Our focus is on the interrelationship of these properties in light of the Functional Requirements for Bibliographic Records (FRBR). Data pertaining to monographs were collected from the Danish PURE repository system as well as the BKCI (2005-2015) via their International Standard Book Numbers (ISBNs). Each ISBN was then matched to the same ISBN and family-related ISBNs cataloged in two...

  12. Survey of Quantitative Research Metrics to Assess Pilot Performance in Upset Recovery

    Science.gov (United States)

    Le Vie, Lisa R.

    2016-01-01

    Accidents attributable to in-flight loss of control are the primary cause of fatal commercial jet accidents worldwide. The National Aeronautics and Space Administration (NASA) conducted a literature review to determine and identify the quantitative standards for assessing upset recovery performance. This review contains current recovery procedures for both military and commercial aviation and includes the metrics researchers use to assess aircraft recovery performance. Metrics include time to first input, recognition time, and recovery time, as well as whether that input was correct or incorrect. Other metrics include the state of the autopilot and autothrottle, control wheel/sidestick movement resulting in pitch and roll, and inputs to the throttle and rudder. In addition, airplane state measures, such as roll reversals, altitude loss/gain, maximum vertical speed, maximum/minimum air speed, maximum bank angle and maximum g loading, are reviewed as well.

  13. Development and validation of trauma surgical skills metrics: Preliminary assessment of performance after training.

    Science.gov (United States)

    Shackelford, Stacy; Garofalo, Evan; Shalin, Valerie; Pugh, Kristy; Chen, Hegang; Pasley, Jason; Sarani, Babak; Henry, Sharon; Bowyer, Mark; Mackenzie, Colin F

    2015-07-01

    Maintaining trauma-specific surgical skills is an ongoing challenge for surgical training programs. An objective assessment of surgical skills is needed. We hypothesized that a validated surgical performance assessment tool could detect differences following a training intervention. We developed surgical performance assessment metrics based on discussion with expert trauma surgeons, video review of 10 experts and 10 novice surgeons performing three vascular exposure procedures and lower extremity fasciotomy on cadavers, and validated the metrics with interrater reliability testing by five reviewers blinded to level of expertise and a consensus conference. We tested these performance metrics in 12 surgical residents (Year 3-7) before and 2 weeks after vascular exposure skills training in the Advanced Surgical Skills for Exposure in Trauma (ASSET) course. Performance was assessed in three areas as follows: knowledge (anatomic, management), procedure steps, and technical skills. Time to completion of procedures was recorded, and these metrics were combined into a single performance score, the Trauma Readiness Index (TRI). Wilcoxon matched-pairs signed-ranks test compared pretraining/posttraining effects. Mean time to complete procedures decreased by 4.3 minutes (from 13.4 minutes to 9.1 minutes). The performance component most improved by the 1-day skills training was procedure steps, completion of which increased by 21%. Technical skill scores improved by 12%. Overall knowledge improved by 3%, with 18% improvement in anatomic knowledge. TRI increased significantly from 50% to 64% with ASSET training. Interrater reliability of the surgical performance assessment metrics was validated with single intraclass correlation coefficient of 0.7 to 0.98. A trauma-relevant surgical performance assessment detected improvements in specific procedure steps and anatomic knowledge taught during a 1-day course, quantified by the TRI. ASSET training reduced time to complete vascular

  14. Application of a simple, affordable quality metric tool to colorectal, upper gastrointestinal, hernia, and hepatobiliary surgery patients: the HARM score.

    Science.gov (United States)

    Brady, Justin T; Ko, Bona; Hohmann, Samuel F; Crawshaw, Benjamin P; Leinicke, Jennifer A; Steele, Scott R; Augestad, Knut M; Delaney, Conor P

    2018-06-01

    Quality is the major driver for both clinical and financial assessment. There remains a need for simple, affordable, quality metric tools to evaluate patient outcomes, which led us to develop the HospitAl length of stay, Readmission and Mortality (HARM) score. We hypothesized that the HARM score would be a reliable tool to assess patient outcomes across various surgical specialties. From 2011 to 2015, we identified colorectal, hepatobiliary, upper gastrointestinal, and hernia surgery admissions using the Vizient Clinical Database. Individual and hospital HARM scores were calculated from length of stay, 30-day readmission, and mortality rates. We evaluated the correlation of HARM scores with complication rates using the Clavien-Dindo classification. We identified 525,083 surgical patients: 206,981 colorectal, 164,691 hepatobiliary, 97,157 hernia, and 56,254 upper gastrointestinal. Overall, 53.8% of patients were admitted electively with a mean HARM score of 2.24; 46.2% were admitted emergently with a mean HARM score of 1.45 (p < 0.0001). For elective admissions, complication rates rose with increasing HARM score (p < 0.0001); for HARM scores of < 2, 2 to 3, 3 to 4, and > 4, complication rates were 9.3, 23.2, 38.8, and 71.6%, respectively. There was a similar trend for increasing HARM score in emergent admissions as well. For all surgical procedure categories, increasing HARM score, with and without risk adjustment, correlated with increasing severity of complications by Clavien-Dindo classification. The HARM score is an easy-to-use quality metric that correlates with increasing complication rates and complication severity across multiple surgical disciplines when evaluated on a large administrative database. This inexpensive tool could be adopted across multiple institutions to compare the quality of surgical care.

  15. The art of assessing quality for images and video

    International Nuclear Information System (INIS)

    Deriche, M.

    2011-01-01

    The early years of this century have witnessed a tremendous growth in the use of digital multimedia data for different communication applications. Researchers from around the world are spending substantial research efforts in developing techniques for improving the appearance of images/video. However, as we know, preserving high quality is a challenging task. Images are subject to distortions during acquisition, compression, transmission, analysis, and reconstruction. For this reason, the research area focusing on image and video quality assessment has attracted a lot of attention in recent years. In particular, compression applications and other multimedia applications need powerful techniques for evaluating quality objectively without human interference. This tutorial will cover the different faces of image quality assessment. We will motivate the need for robust image quality assessment techniques, then discuss the main algorithms found in the literature with a critical perspective. We will present the different metrics used for full reference, reduced reference and no reference applications. We will then discuss the difference between image and video quality assessment. In all of the above, we will take a critical approach to explain which metric can be used for which application. Finally, we will discuss the different approaches to analyze the performance of image/video quality metrics, and end the tutorial with some perspectives on newly introduced metrics and their potential applications.

  16. Robustness and prediction accuracy of machine learning for objective visual quality assessment

    OpenAIRE

    HINES, ANDREW

    2014-01-01

    Machine Learning (ML) is a powerful tool to support the development of objective visual quality assessment metrics, serving as a substitute model for the perceptual mechanisms acting in visual quality appreciation. Nevertheless, the reliability of ML-based techniques within objective quality assessment metrics is often questioned. In this study, the robustness of ML in supporting objective quality assessment is investigated, specifically...

  17. Alternative "global warming" metrics in life cycle assessment: a case study with existing transportation data.

    Science.gov (United States)

    Peters, Glen P; Aamaas, Borgar; T Lund, Marianne; Solli, Christian; Fuglestvedt, Jan S

    2011-10-15

    The Life Cycle Assessment (LCA) impact category "global warming" compares emissions of long-lived greenhouse gases (LLGHGs) using Global Warming Potential (GWP) with a 100-year time-horizon as specified in the Kyoto Protocol. Two weaknesses of this approach are (1) the exclusion of short-lived climate forcers (SLCFs) and biophysical factors despite their established importance, and (2) the use of a particular emission metric (GWP) with a choice of specific time-horizons (20, 100, and 500 years). The GWP and the three time-horizons were based on an illustrative example with value judgments and vague interpretations. Here we illustrate, using LCA data of the transportation sector, the importance of SLCFs relative to LLGHGs, different emission metrics, and different treatments of time. We find that both the inclusion of SLCFs and the choice of emission metric can alter results and thereby change mitigation priorities. The explicit inclusion of time, both for emissions and impacts, can remove value-laden assumptions and provide additional information for impact assessments. We believe that our results show that a debate is needed in the LCA community on the impact category "global warming" covering which emissions to include, the emission metric(s) to use, and the treatment of time.
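
    For reference, the GWP underlying this impact category is the standard IPCC definition: the radiative forcing of a pulse emission of gas x, integrated over a time horizon H, relative to that of CO2 (in LaTeX notation):

        \mathrm{GWP}_x(H) = \frac{\int_0^H A_x\,[x(t)]\,dt}{\int_0^H A_{\mathrm{CO_2}}\,[\mathrm{CO_2}(t)]\,dt}

    Here A is the radiative efficiency of each gas and [x(t)] the decay of its pulse abundance; the choice of H (20, 100, or 500 years) is precisely the value-laden assumption the authors question.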

  18. Transfusion rate as a quality metric: is blood conservation a learnable skill?

    Science.gov (United States)

    Paone, Gaetano; Brewer, Robert; Likosky, Donald S; Theurer, Patricia F; Bell, Gail F; Cogan, Chad M; Prager, Richard L

    2013-10-01

    Between January 2008 and December 2012, a multicenter quality collaborative initiated a focus on blood conservation as a quality metric, with educational presentations and quarterly reporting of institutional-level perioperative transfusion rates and outcomes. This prospective cohort study was undertaken to determine the effect of that initiative on transfusion rates after isolated coronary artery bypass grafting (CABG). Between January 1, 2008, and December 31, 2012, 30,271 patients underwent isolated CABG in Michigan. Evaluated were annual crude and adjusted trends in overall transfusion rates for red blood cells (RBCs), fresh frozen plasma (FFP), and platelets, and in operative death. Transfusion rates continuously decreased for all blood products. RBC use decreased from 56.4% in 2008 (baseline) to 38.3% in 2012, FFP use decreased from 14.8% to 9.1%, and platelet use decreased from 20.5% to 13.4% (p trend < 0.001 for all). Increased adoption of blood conservation techniques, coincident with regular reporting and review of perioperative transfusion rates as a quality metric, was associated with a significant decrease in blood product utilization. These reductions were concurrent with significant improvement in most perioperative outcomes. This intervention was also safe, as it was not associated with any increases in mortality. Copyright © 2013 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  19. Development of quality metrics for ambulatory pediatric cardiology: Transposition of the great arteries after arterial switch operation.

    Science.gov (United States)

    Baker-Smith, Carissa M; Carlson, Karina; Ettedgui, Jose; Tsuda, Takeshi; Jayakumar, K Anitha; Park, Matthew; Tede, Nikola; Uzark, Karen; Fleishman, Craig; Connuck, David; Likes, Maggie; Penny, Daniel J

    2018-01-01

    To develop quality metrics (QMs) for the ambulatory care of patients with transposition of the great arteries following arterial switch operation (TGA/ASO). Under the auspices of the American College of Cardiology Adult Congenital and Pediatric Cardiology (ACPC) Steering committee, the TGA/ASO team generated candidate QMs related to TGA/ASO ambulatory care. Candidate QMs were submitted to the ACPC Steering Committee and were reviewed for validity and feasibility using individual expert panel member scoring according to the RAND-UCLA methodology. QMs were then made available for review by the entire ACC ACPC during an "open comment period." Final approval of each QM was provided by a vote of the ACC ACPC Council. Patients with TGA who had undergone an ASO were included. Patients with complex transposition were excluded. Twelve candidate QMs were generated. Seven metrics passed the RAND-UCLA process. Four passed the "open comment period" and were ultimately approved by the Council. These included: (1) at least 1 echocardiogram performed during the first year of life reporting on the function, aortic dimension, degree of neoaortic valve insufficiency, the patency of the systemic and pulmonary outflows, the patency of the branch pulmonary arteries and coronary arteries, (2) neurodevelopmental (ND) assessment after ASO; (3) lipid profile by age 11 years; and (4) documentation of a transition of care plan to an adult congenital heart disease (CHD) provider by 18 years of age. Application of the RAND-UCLA methodology and linkage of this methodology to the ACPC approval process led to successful generation of 4 QMs relevant to the care of TGA/ASO pediatric patients in the ambulatory setting. These metrics have now been incorporated into the ACPC Quality Network providing guidance for the care of TGA/ASO patients across 30 CHD centers. © 2017 Wiley Periodicals, Inc.

  20. Lyapunov exponent as a metric for assessing the dynamic content and predictability of large-eddy simulations

    Science.gov (United States)

    Nastac, Gabriel; Labahn, Jeffrey W.; Magri, Luca; Ihme, Matthias

    2017-09-01

    Metrics used to assess the quality of large-eddy simulations commonly rely on a statistical assessment of the solution. While these metrics are valuable, a dynamic measure is desirable to further characterize the ability of a numerical simulation for capturing dynamic processes inherent in turbulent flows. To address this issue, a dynamic metric based on the Lyapunov exponent is proposed which assesses the growth rate of the solution separation. This metric is applied to two turbulent flow configurations: forced homogeneous isotropic turbulence and a turbulent jet diffusion flame. First, it is shown that, despite the direct numerical simulation (DNS) and large-eddy simulation (LES) being high-dimensional dynamical systems with O(10^7) degrees of freedom, the separation growth rate qualitatively behaves like a lower-dimensional dynamical system, in which the dimension of the Lyapunov system is substantially smaller than the discretized dynamical system. Second, a grid refinement analysis of each configuration demonstrates that as the LES filter width approaches the smallest scales of the system the Lyapunov exponent asymptotically approaches a plateau. Third, a small perturbation is superimposed onto the initial conditions of each configuration, and the Lyapunov exponent is used to estimate the time required for divergence, thereby providing a direct assessment of the predictability time of simulations. By comparing inert and reacting flows, it is shown that combustion increases the predictability of the turbulent simulation as a result of the dilatation and increased viscosity by heat release. The predictability time is found to scale with the integral time scale in both the reacting and inert jet flows. Fourth, an analysis of the local Lyapunov exponent is performed to demonstrate that this metric can also determine flow-dependent properties, such as regions that are sensitive to small perturbations or conditions of large turbulence within the flow field. Finally
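
    The metric rests on the standard leading Lyapunov exponent, written here in LaTeX for the separation \delta(t) between perturbed and unperturbed solutions, together with the textbook predictability-time estimate implied by exponential growth (the paper's exact estimator is not given in the abstract):

        \lambda = \lim_{t \to \infty} \lim_{\|\delta(0)\| \to 0} \frac{1}{t}\,\ln\frac{\|\delta(t)\|}{\|\delta(0)\|},
        \qquad
        t_p \approx \frac{1}{\lambda}\,\ln\frac{\Delta_{\mathrm{tol}}}{\|\delta(0)\|}

    where \Delta_{\mathrm{tol}} is the tolerated divergence between the two solutions.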

  1. Water Quality Assessment and Management

    Science.gov (United States)

    Overview of Clean Water Act (CWA) restoration framework including; water quality standards, monitoring/assessment, reporting water quality status, TMDL development, TMDL implementation (point & nonpoint source control)

  2. Localized Multi-Model Extremes Metrics for the Fourth National Climate Assessment

    Science.gov (United States)

    Thompson, T. R.; Kunkel, K.; Stevens, L. E.; Easterling, D. R.; Biard, J.; Sun, L.

    2017-12-01

    We have performed localized analysis of scenario-based datasets for the Fourth National Climate Assessment (NCA4). These datasets include CMIP5-based Localized Constructed Analogs (LOCA) downscaled simulations at daily temporal resolution and 1/16th-degree spatial resolution. Over 45 temperature and precipitation extremes metrics have been processed using LOCA data, including threshold, percentile, and degree-days calculations. The localized analysis calculates trends in the temperature and precipitation extremes metrics for relatively small regions such as counties, metropolitan areas, climate zones, administrative areas, or economic zones. For NCA4, we are currently addressing metropolitan areas as defined by U.S. Census Bureau Metropolitan Statistical Areas. Such localized analysis provides essential information for adaptation planning at scales relevant to local planning agencies and businesses. Nearly 30 such regions have been analyzed to date. Each locale is defined by a closed polygon that is used to extract LOCA-based extremes metrics specific to the area. For each metric, single-model data at each LOCA grid location are first averaged over several 30-year historical and future periods. Then, for each metric, the spatial average across the region is calculated using model weights based on both model independence and reproducibility of current climate conditions. The range of single-model results is also captured on the same localized basis, and then combined with the weighted ensemble average for each region and each metric. For example, Boston-area cooling degree days and maximum daily temperature is shown below for RCP8.5 (red) and RCP4.5 (blue) scenarios. We also discuss inter-regional comparison of these metrics, as well as their relevance to risk analysis for adaptation planning.
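
    Two of the computations named above are simple enough to sketch: a degree-days metric and the weighted multi-model average. The 65 °F base and the weighting inputs are assumptions for illustration; the NCA4 weights combine model independence and skill at reproducing current climate:

        import numpy as np

        def cooling_degree_days(daily_mean_f, base=65.0):
            # Standard CDD: sum of positive excesses of daily mean
            # temperature (deg F) over the base temperature.
            t = np.asarray(daily_mean_f, dtype=float)
            return float(np.sum(np.maximum(t - base, 0.0)))

        def weighted_ensemble_mean(metric_by_model, weight_by_model):
            # Model-weighted spatial-average metric for one region.
            models = sorted(metric_by_model)
            w = np.array([weight_by_model[m] for m in models])
            x = np.array([metric_by_model[m] for m in models])
            return float(np.dot(w, x) / w.sum())

        print(cooling_degree_days([60, 70, 80]))                              # 20.0
        print(weighted_ensemble_mean({"a": 10, "b": 20}, {"a": 2, "b": 1}))   # ~13.3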

  3. Holistic Metrics for Assessment of the Greenness of Chemical Reactions in the Context of Chemical Education

    Science.gov (United States)

    Ribeiro, M. Gabriela T. C.; Machado, Adelio A. S. C.

    2013-01-01

    Two new semiquantitative green chemistry metrics, the green circle and the green matrix, have been developed for quick assessment of the greenness of a chemical reaction or process, even without performing the experiment from a protocol if enough detail is provided in it. The evaluation is based on the 12 principles of green chemistry. The…

  4. Impact of artefact removal on ChIP quality metrics in ChIP-seq and ChIP-exo data.

    Directory of Open Access Journals (Sweden)

    Thomas Samuel Carroll

    2014-04-01

    Full Text Available With the advent of ChIP-seq multiplexing technologies and the subsequent increase in ChIP-seq throughput, the development of working standards for the quality assessment of ChIP-seq studies has received significant attention. The ENCODE consortium’s large-scale analysis of transcription factor binding and epigenetic marks as well as concordant work on ChIP-seq by other laboratories has established a new generation of ChIP-seq quality control measures. The use of these metrics alongside common processing steps has, however, not been evaluated. In this study, we investigate the effects of blacklisting and removal of duplicated reads on established metrics of ChIP-seq quality and show that the interpretation of these metrics is highly dependent on the ChIP-seq preprocessing steps applied. Further to this, we perform the first investigation of the use of these metrics for ChIP-exo data and make recommendations for the adaptation of the NSC statistic to allow for the assessment of ChIP-exo efficiency.
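
    As a reference point for the NSC discussion, a minimal sketch of the ENCODE-style normalized strand cross-correlation coefficient, computed from a precalculated cross-correlation-vs-shift profile (assumed as input here; real pipelines derive it from read start positions on each strand):

        import numpy as np

        def nsc(cc_profile):
            """Normalized strand cross-correlation: the peak of the
            cross-correlation profile divided by its background minimum;
            values near 1 indicate weak ChIP enrichment."""
            cc = np.asarray(cc_profile, dtype=float)
            return float(cc.max() / cc.min())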

  5. Homogeneity and EPR metrics for assessment of regular grids used in CW EPR powder simulations

    Science.gov (United States)

    Crăciun, Cora

    2014-08-01

    CW EPR powder spectra may be approximated numerically using a spherical grid and a Voronoi tessellation-based cubature. For a given spin system, the quality of simulated EPR spectra depends on the grid type, size, and orientation in the molecular frame. In previous work, the grids used in CW EPR powder simulations have been compared mainly from a geometric perspective. However, some grids with similar degrees of homogeneity generate different quality simulated spectra. This paper evaluates the grids from an EPR perspective, by defining two metrics depending on the spin system characteristics and the grid Voronoi tessellation. The first metric determines if the grid points are EPR-centred in their Voronoi cells, based on the resonance magnetic field variations inside these cells. The second metric verifies if the adjacent Voronoi cells of the tessellation are EPR-overlapping, by computing the common range of their resonance magnetic field intervals. Besides a series of well-known regular grids, the paper investigates a modified ZCW grid and a Fibonacci spherical code, which are new in the context of EPR simulations. For the investigated grids, the EPR metrics bring more information than the homogeneity quantities and are better related to the grids’ EPR behaviour, for different spin system symmetries. The metrics’ efficiency and limits are finally verified for grids generated from the initial ones, by using the original or magnetic field-constrained variants of the Spherical Centroidal Voronoi Tessellation method.

  6. Image-guided radiotherapy quality control: Statistical process control using image similarity metrics.

    Science.gov (United States)

    Shiraishi, Satomi; Grams, Michael P; Fong de Los Santos, Luis E

    2018-05-01

    The purpose of this study was to demonstrate an objective quality control framework for the image review process. A total of 927 cone-beam computed tomography (CBCT) registrations were retrospectively analyzed for 33 bilateral head and neck cancer patients who received definitive radiotherapy. Two registration tracking volumes (RTVs) - cervical spine (C-spine) and mandible - were defined, within which a similarity metric was calculated and used as a registration quality tracking metric over the course of treatment. First, sensitivity to large misregistrations was analyzed for normalized cross-correlation (NCC) and mutual information (MI) in the context of statistical analysis. The distribution of metrics was obtained for displacements that varied according to a normal distribution with standard deviation of σ = 2 mm, and the detectability of displacements greater than 5 mm was investigated. Then, similarity metric control charts were created using a statistical process control (SPC) framework to objectively monitor the image registration and review process. Patient-specific control charts were created using NCC values from the first five fractions to set a patient-specific process capability limit. Population control charts were created using the average of the first five NCC values for all patients in the study. For each patient, the similarity metrics were calculated as a function of unidirectional translation, referred to as the effective displacement. Patient-specific action limits corresponding to 5 mm effective displacements were defined. Furthermore, effective displacements of the ten registrations with the lowest similarity metrics were compared with a three dimensional (3DoF) couch displacement required to align the anatomical landmarks. Normalized cross-correlation identified suboptimal registrations more effectively than MI within the framework of SPC. Deviations greater than 5 mm were detected at 2.8σ and 2.1σ from the mean for NCC and MI
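
    A sketch of the two building blocks, assuming CBCT registration tracking volumes are available as arrays; the three-sigma individuals-chart limits below are a generic SPC choice, not necessarily the capability limit used in the study:

        import numpy as np

        def ncc(a, b):
            # Zero-lag normalized cross-correlation of two registration
            # tracking volumes (e.g., C-spine or mandible RTVs).
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            return float(np.mean(a * b))

        def patient_limits(first_five_ncc, n_sigma=3.0):
            # Patient-specific control limits set from the NCC values of
            # the first five fractions, per the SPC framework above.
            mu = np.mean(first_five_ncc)
            sd = np.std(first_five_ncc, ddof=1)
            return mu - n_sigma * sd, mu + n_sigma * sd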

  7. Global-cognitive health metrics: A novel approach for assessing cognition impairment in adult population.

    Directory of Open Access Journals (Sweden)

    Chia-Kuang Tsai

    Full Text Available Dementia is a major worldwide burden on welfare and health care systems in the 21st century. The early identification and control of the modifiable risk factors of dementia are important. Global-cognitive health (GCH) metrics, encompassing controllable cardiovascular health (CVH) and non-CVH risk factors of dementia, is a newly developed approach to assess the risk of cognitive impairment. The components of ideal GCH metrics include better education, non-obesity, normal blood pressure, no smoking, no depression, ideal physical activity, good social integration, normal glycated hemoglobin (HbA1c), and normal hearing. This study focuses on the association between ideal GCH metrics and cognitive function in young adults by investigating the Third National Health and Nutrition Examination Survey (NHANES III) database, which has not been reported previously. A total of 1243 participants aged 17 to 39 years were recruited in this study. Cognitive functioning was evaluated by the simple reaction time test (SRTT), symbol-digit substitution test (SDST), and serial digit learning test (SDLT). Participants with significantly higher scores of GCH metrics had better cognitive performance (p for trend < 0.01) in all three cognitive tests. Moreover, better education, ideal physical activity, good social integration and normal glycated hemoglobin were the components of ideal GCH metrics associated with better cognitive performance after adjusting for covariates (p < 0.05) in all three cognitive tests. These findings emphasize the importance of a preventive strategy for modifiable dementia risk factors to enhance cognitive functioning during adulthood.

  8. Using research metrics to evaluate the International Atomic Energy Agency guidelines on quality assurance for R&D

    Energy Technology Data Exchange (ETDEWEB)

    Bodnarczuk, M.

    1994-06-01

    The objective of the International Atomic Energy Agency (IAEA) Guidelines on Quality Assurance for R&D is to provide guidance for developing quality assurance (QA) programs for R&D work on items, services, and processes important to safety, and to support the siting, design, construction, commissioning, operation, and decommissioning of nuclear facilities. The standard approach to writing papers describing new quality guidelines documents is to present a descriptive overview of the contents of the document. I will depart from this approach. Instead, I will first discuss a conceptual framework of metrics for evaluating and improving basic and applied experimental science as well as the associated role that quality management should play in understanding and implementing these metrics. I will conclude by evaluating how well the IAEA document addresses the metrics from this conceptual framework and the broader principles of quality management.

  9. Assessing precision, bias and sigma-metrics of 53 measurands of the Alinity ci system.

    Science.gov (United States)

    Westgard, Sten; Petrides, Victoria; Schneider, Sharon; Berman, Marvin; Herzogenrath, Jörg; Orzechowski, Anthony

    2017-12-01

    Assay performance is dependent on the accuracy and precision of a given method. These attributes can be combined into an analytical Sigma-metric, providing a simple value for laboratorians to use in evaluating a test method's capability to meet its analytical quality requirements. Sigma-metrics were determined for 37 clinical chemistry assays, 13 immunoassays, and 3 ICT methods on the Alinity ci system. Analytical Performance Specifications were defined for the assays, following a rationale of using CLIA goals first, then Ricos Desirable goals when CLIA did not regulate the method, and then other sources if the Ricos Desirable goal was unrealistic. A precision study was conducted at Abbott on each assay using the Alinity ci system following the CLSI EP05-A2 protocol. Bias was estimated following the CLSI EP09-A3 protocol using samples with concentrations spanning the assay's measuring interval tested in duplicate on the Alinity ci system and ARCHITECT c8000 and i2000 SR systems, where testing was also performed at Abbott. Using the regression model, the %bias was estimated at an important medical decision point. Then the Sigma-metric was estimated for each assay and was plotted on a method decision chart. The Sigma-metric was calculated using the equation: Sigma-metric = (%TEa - |%bias|) / %CV. The Sigma-metrics and Normalized Method Decision charts demonstrate that a majority of the Alinity assays perform at five Sigma or higher, at or near critical medical decision levels. More than 90% of the assays performed at five or six Sigma. None performed below three Sigma. Sigma-metrics plotted on Normalized Method Decision charts provide useful evaluations of performance. The majority of Alinity ci system assays had sigma values >5 and thus laboratories can expect excellent or world class performance. Laboratorians can use these tools as aids in choosing high-quality products, further contributing to the delivery of excellent quality healthcare for patients.
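
    The abstract gives the formula outright, so a worked example is immediate; the assay numbers below are invented for illustration:

        def sigma_metric(tea_pct, bias_pct, cv_pct):
            # Sigma-metric = (%TEa - |%bias|) / %CV, as stated in the abstract.
            return (tea_pct - abs(bias_pct)) / cv_pct

        # Hypothetical assay: TEa = 10%, bias = 1.5%, CV = 1.4%
        print(round(sigma_metric(10.0, 1.5, 1.4), 1))  # 6.1 -> six-sigma ("world class") performance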

  10. Testing Quality and Metrics for the LHC Magnet Powering System throughout Past and Future Commissioning

    CERN Document Server

    Anderson, D; Charifoulline, Z; Dragu, M; Fuchsberger, K; Garnier, JC; Gorzawski, AA; Koza, M; Krol, K; Rowan, S; Stamos, K; Zerlauth, M

    2014-01-01

    The LHC magnet powering system is composed of thousands of individual components to assure a safe operation when operating with stored energies as high as 10 GJ in the superconducting LHC magnets. Each of these components has to be thoroughly commissioned following interventions and machine shutdown periods to assure their protection function in case of powering failures. As well as having dependable tracking of test executions, it is vital that the executed commissioning steps and applied analysis criteria adequately represent the operational state of each component. The Accelerator Testing (AccTesting) framework in combination with a domain-specific analysis language provides the means to quantify and improve the quality of analysis for future campaigns. Dedicated tools were developed to analyse in detail the reasons for failures and success of commissioning steps in past campaigns and to compare the results with newly developed quality metrics. Observed shortcomings and discrepancies are used to propose addi...

  11. The role of metrics and measurements in a software intensive total quality management environment

    Science.gov (United States)

    Daniels, Charles B.

    1992-01-01

    Paramax Space Systems began its mission as a member of the Rockwell Space Operations Company (RSOC) team which was the successful bidder on a massive operations consolidation contract for the Mission Operations Directorate (MOD) at JSC. The contract awarded to the team was the Space Transportation System Operations Contract (STSOC). Our initial challenge was to accept responsibility for a very large, highly complex and fragmented collection of software from eleven different contractors and transform it into a coherent, operational baseline. Concurrently, we had to integrate a diverse group of people from eleven different companies into a single, cohesive team. Paramax executives recognized the absolute necessity to develop a business culture based on the concept of employee involvement to execute and improve the complex process of our new environment. Our executives clearly understood that management needed to set the example and lead the way to quality improvement. The total quality management policy and the metrics used in this endeavor are presented.

  12. Hospital readiness for health information exchange: development of metrics associated with successful collaboration for quality improvement.

    Science.gov (United States)

    Korst, Lisa M; Aydin, Carolyn E; Signer, Jordana M K; Fink, Arlene

    2011-08-01

    The development of readiness metrics for organizational participation in health information exchange is critical for monitoring progress toward, and achievement of, successful inter-organizational collaboration. In preparation for the development of a tool to measure readiness for data-sharing, we tested whether organizational capacities known to be related to readiness were associated with successful participation in an American data-sharing collaborative for quality improvement. Cross-sectional design, using an on-line survey of hospitals in a large, mature data-sharing collaborative organized for benchmarking and improvement in nursing care quality. Factor analysis was used to identify salient constructs, and identified factors were analyzed with respect to "successful" participation. "Success" was defined as the incorporation of comparative performance data into the hospital dashboard. The most important factor in predicting success included survey items measuring the strength of organizational leadership in fostering a culture of quality improvement (QI Leadership): (1) presence of a supportive hospital executive; (2) the extent to which a hospital values data; (3) the presence of leaders' vision for how the collaborative advances the hospital's strategic goals; (4) hospital use of the collaborative data to track quality outcomes; and (5) staff recognition of a strong mandate for collaborative participation (α=0.84, correlation with Success 0.68 [P<0.0001]). The data emphasize the importance of hospital QI Leadership in collaboratives that aim to share data for QI or safety purposes. Such metrics should prove useful in the planning and development of this complex form of inter-organizational collaboration. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  13. The Relationship between the Level and Modality of HRM Metrics, Quality of HRM Practice and Organizational Performance

    OpenAIRE

    Nina Pološki Vokić

    2011-01-01

    The paper explores the relationship between the way organizations measure HRM and overall quality of HRM activities, as well as the relationship between HRM metrics used and financial performance of an organization. In the theoretical part of the paper modalities of HRM metrics are grouped into five groups (evaluating HRM using accounting principles, evaluating HRM using management techniques, evaluating individual HRM activities, aggregate evaluation of HRM, and evaluating HRM de...

  14. Assessing the metrics of climate change. Current methods and future possibilities

    Energy Technology Data Exchange (ETDEWEB)

    Fuglestveit, Jan S.; Berntsen, Terje K.; Godal, Odd; Sausen, Robert; Shine, Keith P.; Skodvin, Tora

    2001-07-01

    With the principle of comprehensiveness embedded in the UN Framework Convention on Climate Change (Art. 3), a multi-gas abatement strategy with emphasis also on non-CO2 greenhouse gases as targets for reduction and control measures has been adopted in the international climate regime. In the Kyoto Protocol, the comprehensive approach is made operative as the aggregate anthropogenic carbon dioxide equivalent emissions of six specified greenhouse gases or groups of gases (Art. 3). With this operationalisation, the emissions of a set of greenhouse gases with very different atmospheric lifetimes and radiative properties are transformed into one common unit - CO2 equivalents. This transformation is based on the Global Warming Potential (GWP) index, which in turn is based on the concept of radiative forcing. The GWP metric and its application in policy making has been debated, and several other alternative concepts have been suggested. In this paper, we review existing and alternative metrics of climate change, with particular emphasis on radiative forcing and GWPs, in terms of their scientific performance. This assessment focuses on questions such as the climate impact (end point) against which gases are weighted; the extent to which and how temporality is included, both with regard to emission control and with regard to climate impact; how cost issues are dealt with; and the sensitivity of the metrics to various assumptions. It is concluded that the radiative forcing concept is a robust and useful metric of the potential climatic impact of various agents and that there are prospects for improvement by weighing different forcings according to their effectiveness. We also find that although the GWP concept is associated with serious shortcomings, it retains advantages over any of the proposed alternatives in terms of political feasibility. Alternative metrics, however, make a significant contribution to addressing important issues, and this contribution should be taken

  15. Assessing the metrics of climate change. Current methods and future possibilities

    International Nuclear Information System (INIS)

    Fuglestveit, Jan S.; Berntsen, Terje K.; Godal, Odd; Sausen, Robert; Shine, Keith P.; Skodvin, Tora

    2001-01-01

    With the principle of comprehensiveness embedded in the UN Framework Convention on Climate Change (Art. 3), a multi-gas abatement strategy with emphasis also on non-CO2 greenhouse gases as targets for reduction and control measures has been adopted in the international climate regime. In the Kyoto Protocol, the comprehensive approach is made operative as the aggregate anthropogenic carbon dioxide equivalent emissions of six specified greenhouse gases or groups of gases (Art. 3). With this operationalisation, the emissions of a set of greenhouse gases with very different atmospheric lifetimes and radiative properties are transformed into one common unit - CO2 equivalents. This transformation is based on the Global Warming Potential (GWP) index, which in turn is based on the concept of radiative forcing. The GWP metric and its application in policy making has been debated, and several other alternative concepts have been suggested. In this paper, we review existing and alternative metrics of climate change, with particular emphasis on radiative forcing and GWPs, in terms of their scientific performance. This assessment focuses on questions such as the climate impact (end point) against which gases are weighted; the extent to which and how temporality is included, both with regard to emission control and with regard to climate impact; how cost issues are dealt with; and the sensitivity of the metrics to various assumptions. It is concluded that the radiative forcing concept is a robust and useful metric of the potential climatic impact of various agents and that there are prospects for improvement by weighing different forcings according to their effectiveness. We also find that although the GWP concept is associated with serious shortcomings, it retains advantages over any of the proposed alternatives in terms of political feasibility. Alternative metrics, however, make a significant contribution to addressing important issues, and this contribution should be taken

  16. Climate Classification is an Important Factor in Assessing Hospital Performance Metrics

    Science.gov (United States)

    Boland, M. R.; Parhi, P.; Gentine, P.; Tatonetti, N. P.

    2017-12-01

    Context/Purpose: Climate is a known modulator of disease, but its impact on hospital performance metrics remains unstudied. Methods: We assess the relationship between Köppen-Geiger climate classification and hospital performance metrics, specifically 30-day mortality, as reported in Hospital Compare, and collected for the period July 2013 through June 2014 (7/1/2013 - 06/30/2014). A hospital-level multivariate linear regression analysis was performed while controlling for known socioeconomic factors to explore the relationship between all-cause mortality and climate. Hospital performance scores were obtained from 4,524 hospitals belonging to 15 distinct Köppen-Geiger climates and 2,373 unique counties. Results: Model results revealed that hospital performance metrics for mortality showed significant climate dependence (p < 0.05) even after adjusting for socioeconomic factors. Interpretation: Currently, hospitals are reimbursed by government agencies using 30-day mortality rates along with 30-day readmission rates. These metrics allow government agencies to rank hospitals according to their 'performance' along these metrics. Various socioeconomic factors are taken into consideration when determining individual hospitals' performance. However, no climate-based adjustment is made within the existing framework. Our results indicate that climate-based variability in 30-day mortality rates does exist even after socioeconomic confounder adjustment. Standardized high-level climate classification systems (such as Köppen-Geiger) would be useful to incorporate in future metrics. Conclusion: Climate is a significant factor in evaluating hospital 30-day mortality rates. These results demonstrate that climate classification is an important factor when comparing hospital performance across the United States.

  17. Brief educational interventions to improve performance on novel quality metrics in ambulatory settings in Kenya: A multi-site pre-post effectiveness trial.

    Science.gov (United States)

    Korom, Robert Ryan; Onguka, Stephanie; Halestrap, Peter; McAlhaney, Maureen; Adam, Mary

    2017-01-01

    The quality of primary care delivered in resource-limited settings is low. While some progress has been made using educational interventions, it is not yet clear how to sustainably improve care for common acute illnesses in the outpatient setting. Management of urinary tract infection is particularly important in resource-limited settings, where it is commonly diagnosed and associated with high levels of antimicrobial resistance. We describe an educational programme targeting non-physician health care providers and its effects on various clinical quality metrics for urinary tract infection. We used a series of educational interventions including 1) formal introduction of a clinical practice guideline, 2) peer-to-peer chart review, and 3) peer-reviewed literature describing local antimicrobial resistance patterns. Interventions were conducted for clinical officers (N = 24) at two outpatient centers near Nairobi, Kenya over a one-year period. The medical records of 474 patients with urinary tract infections were scored on five clinical quality metrics, with the primary outcome being the proportion of cases in which the guideline-recommended antibiotic was prescribed. The results at baseline and following each intervention were compared using chi-squared tests and unpaired two-tailed T-tests for significance. Logistic regression analysis was used to assess for possible confounders. Clinician adherence to the guideline-recommended antibiotic improved significantly during the study period, from 19% at baseline to 68% following all interventions (Χ2 = 150.7, p < 0.001). The overall quality score also improved significantly from an average of 2.16 to 3.00 on a five-point scale (t = 6.58, p < 0.001). These results demonstrate that brief educational interventions can dramatically improve the quality of care for routine acute illnesses in the outpatient setting. Measurement of quality metrics allows for further targeting of educational interventions depending on the needs of the providers and the community. Further study is needed to expand
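
    A sketch of the before/after comparison using a 2x2 chi-squared test; the chart counts are invented to match only the reported 19% and 68% adherence rates, so the statistic will not reproduce the paper's Χ2 = 150.7 (which comes from 474 records):

        from scipy.stats import chi2_contingency

        baseline = [19, 81]   # adherent vs non-adherent charts (per 100, illustrative)
        post     = [68, 32]
        chi2, p, dof, expected = chi2_contingency([baseline, post])
        print(f"chi2 = {chi2:.1f}, p = {p:.2g}")  # large chi2, p << 0.001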

  18. The software product assurance metrics study: JPL's software systems quality and productivity

    Science.gov (United States)

    Bush, Marilyn W.

    1989-01-01

    The findings are reported of the Jet Propulsion Laboratory (JPL)/Software Product Assurance (SPA) Metrics Study, conducted as part of a larger JPL effort to improve software quality and productivity. Until recently, no comprehensive data had been assembled on how JPL manages and develops software-intensive systems. The first objective was to collect data on software development from as many projects and for as many years as possible. Results from five projects are discussed. These results reflect 15 years of JPL software development, representing over 100 data points (systems and subsystems), over a third of a billion dollars, over four million lines of code and 28,000 person months. Analysis of this data provides a benchmark for gauging the effectiveness of past, present and future software development work. In addition, the study is meant to encourage projects to record existing metrics data and to gather future data. The SPA long term goal is to integrate the collection of historical data and ongoing project data with future project estimations.

  19. Mask industry quality assessment

    Science.gov (United States)

    Strott, Al; Bassist, Larry

    1994-12-01

    Product quality and timely delivery are two of the most important parameters in determining the success of a mask manufacturing facility. Because of the sensitivity of this data, very little was known about industry performance in these areas until an assessment was authored and presented at the 1993 BACUS Symposium by Larry Regis of Intel Corporation, Neil Paulsen of Intel Corporation, and James A. Reynolds of Reynolds Consulting. This data has been updated and will be published and presented at this year's BACUS Symposium. Contributor identities will again remain protected by utilizing Arthur Andersen & Company to compile the submittals. Participation was consistent with last year's representation of over 75% of the total merchant and captive mask volume in the United States. The data compiled includes shipments, customer return rate, customer return reasons from 1988 through Q2, 1994, performance to schedule, plate survival yield, and throughput time (TPT).

  20. Drinking water quality assessment.

    Science.gov (United States)

    Aryal, J; Gautam, B; Sapkota, N

    2012-09-01

    Drinking water quality is a great public health concern because it is a major risk factor for the high incidence of diarrheal diseases in Nepal. In recent years, the prevalence rate of diarrhoea has been found to be the highest in Myagdi district. This study was carried out to assess the quality of drinking water from different natural sources, reservoirs and collection taps at Arthunge VDC of Myagdi district. A cross-sectional study was carried out using a random sampling method in Arthunge VDC of Myagdi district from January to June, 2010. 84 water samples representing natural sources, reservoirs and collection taps from the study area were collected. The physico-chemical and microbiological analyses were performed following standard techniques set by APHA 1998, and statistical analysis was carried out using SPSS 11.5. The results were also compared with national and WHO guidelines. Out of 84 water samples (from natural sources, reservoirs and tap water) analyzed, the drinking water quality parameters (except arsenic and total coliform) of all water samples were found to be within the WHO and national standards. 15.48% of water samples (13) showed pH higher than the WHO permissible guideline values. Similarly, 85.71% of water samples (72) showed higher arsenic values than the WHO guideline value. Further, the statistical analysis showed no significant difference (P > 0.05) in drinking water quality between the collection tap water samples of winter (January, 2010) and summer (June, 2010). The microbiological examination of water samples revealed the presence of total coliform in 86.90% of water samples. The results obtained from the physico-chemical analysis of water samples were within national and WHO standards except for arsenic. The study also found coliform contamination to be the key problem with drinking water.

  1. Modeling Relationships between Surface Water Quality and Landscape Metrics Using the Adaptive Neuro-Fuzzy Inference System, A Case Study in Mazandaran Province

    Directory of Open Access Journals (Sweden)

    mohsen Mirzayi

    2016-03-01

    Full Text Available Landscape indices can be used as an approach for predicting water quality changes to monitor non-point source pollution. In the present study, the data collected over the period from 2012 to 2013 from 81 water quality stations along the rivers flowing in Mazandaran Province were analyzed. Upstream boundaries were drawn and landscape metrics were extracted for each of the sub-watersheds at class and landscape levels. Principal component analysis was used to single out the relevant water quality parameters, and forward linear regression was employed to determine the optimal metrics for the description of each parameter. The first five components were able to describe 96.61% of the variation in water quality in Mazandaran Province. The Adaptive Neuro-fuzzy Inference System (ANFIS) and multiple linear regression were used to model the relationship between landscape metrics and water quality parameters. The results indicate that multiple regression was able to predict SAR, TDS, pH, NO3‒, and PO43‒ in the test step, with R2 values equal to 0.81, 0.56, 0.73, 0.44, and 0.63, respectively. The corresponding R2 values of ANFIS in the test step were 0.82, 0.79, 0.82, 0.31, and 0.36, respectively. Clearly, ANFIS exhibited a better performance in each case than did the linear regression model. This indicates a nonlinear relationship between the water quality parameters and landscape metrics. Since different land cover/uses have considerable impacts on both the outflow water quality and the available and dissolved pollutants in rivers, the method can be reasonably used for regional planning and environmental impact assessment in development projects in the region.
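
    The R2 values reported above are the usual coefficient of determination; the sketch below shows how such test-step values can be computed (a generic formula, not the authors' code, and the variable names in the usage comment are hypothetical).

        import numpy as np

        def r_squared(y_obs, y_pred):
            """Coefficient of determination: R2 = 1 - SS_res / SS_tot."""
            y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
            ss_res = np.sum((y_obs - y_pred) ** 2)
            ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
            return 1.0 - ss_res / ss_tot

        # e.g. r_squared(tds_test, tds_pred_anfis) vs
        #      r_squared(tds_test, tds_pred_regression) on held-out stations
        print(r_squared([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]))  # ~0.98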

  2. Noisy EEG signals classification based on entropy metrics. Performance assessment using first and second generation statistics.

    Science.gov (United States)

    Cuesta-Frau, David; Miró-Martínez, Pau; Jordán Núñez, Jorge; Oltra-Crespo, Sandra; Molina Picó, Antonio

    2017-08-01

    This paper evaluates the performance of first generation entropy metrics, featured by the well known and widely used Approximate Entropy (ApEn) and Sample Entropy (SampEn) metrics, and what can be considered an evolution of these, Fuzzy Entropy (FuzzyEn), in the Electroencephalogram (EEG) signal classification context. The study uses the commonest artifacts found in real EEGs, such as white noise, and muscular, cardiac, and ocular artifacts. Using two different sets of publicly available EEG records, and a realistic range of amplitudes for interfering artifacts, this work optimises these metrics and assesses their robustness against artifacts in terms of class segmentation probability. The results show that the qualitative behaviour of the two datasets is similar, with SampEn and FuzzyEn performing the best, and that the noise and muscular artifacts are the most confounding factors. By contrast, there is wide variability as regards initialization parameters. The poor performance achieved by ApEn suggests that this metric should not be used in these contexts. Copyright © 2017 Elsevier Ltd. All rights reserved.
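
    For reference, a compact and simplified implementation of Sample Entropy, one of the first-generation metrics evaluated above; the defaults m = 2 and r = 0.2·SD are common choices assumed here, not the paper's optimised parameters.

        import numpy as np

        def sample_entropy(x, m=2, r=None):
            """Simplified SampEn: -ln(A/B), where B counts template matches of
            length m and A matches of length m+1 (Chebyshev distance <= r)."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            r = 0.2 * np.std(x) if r is None else r

            def matches(length):
                t = np.array([x[i:i + length] for i in range(n - length + 1)])
                total = 0
                for i in range(len(t) - 1):
                    d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
                    total += int(np.sum(d <= r))
                return total

            b, a = matches(m), matches(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else float("inf")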

  3. On use of image quality metrics for perceptual blur modeling: image/video compression case

    Science.gov (United States)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed from a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate perceived MTF has supported that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  4. Area of Concern: A new paradigm in life cycle assessment for the development of footprint metrics

    DEFF Research Database (Denmark)

    Ridoutt, Bradley G.; Pfister, Stephan; Manzardo, Alessandro

    2016-01-01

    As a class of environmental metrics, footprints have been poorly defined, have shared an unclear relationship to life cycle assessment (LCA), and the variety of approaches to quantification have sometimes resulted in confusing and contradictory messages in the marketplace. In response, a task force...... operating under the auspices of the UNEP/SETAC Life Cycle Initiative project on environmental life cycle impact assessment (LCIA) has been working to develop generic guidance for developers of footprint metrics. The purpose of this paper is to introduce a universal footprint definition and related...... terminology as well as to discuss modelling implications. The task force has worked from the perspective that footprints should be based on LCA methodology, underpinned by the same data systems and models as used in LCA. However, there are important differences in purpose and orientation relative to LCA...

  5. Information System Quality Assessment Methods

    OpenAIRE

    Korn, Alexandra

    2014-01-01

    This thesis explores the challenging topic of information system quality assessment, focusing mainly on process assessment. In this work the term Information System Quality is defined, and different approaches to defining quality for different domains of information systems are outlined. The main methods of process assessment are reviewed and their relationships described. Process assessment methods are divided into two categories: ISO standards and best practices. The main objective of this w...

  6. Quality assessment of laparoscopic hysterectomy

    NARCIS (Netherlands)

    Driessen, S.R.C.

    2017-01-01

    Quality assessment in surgical care is very important, though very difficult. With this thesis we attempted to overcome the limitations of currently used quality indicators and developed a dynamic, unique quality assessment tool to reflect upon individual surgical performance with case-mix

  7. Workshop summary: 'Integrating air quality and climate mitigation - is there a need for new metrics to support decision making?'

    Science.gov (United States)

    von Schneidemesser, E.; Schmale, J.; Van Aardenne, J.

    2013-12-01

    Air pollution and climate change are often treated at national and international level as separate problems under different regulatory or thematic frameworks and different policy departments. With air pollution and climate change being strongly linked with regard to their causes, effects and mitigation options, the integration of policies that steer air pollutant and greenhouse gas emission reductions might result in cost-efficient, more effective and thus more sustainable tackling of the two problems. To support informed decision making and to work towards an integrated air quality and climate change mitigation policy requires the identification, quantification and communication of present-day and potential future co-benefits and trade-offs. The identification of co-benefits and trade-offs requires the application of appropriate metrics that are well rooted in science, easy to understand and reflect the needs of policy, industry and the public for informed decision making. For the purpose of this workshop, metrics were loosely defined as a quantified measure of effect or impact used to inform decision-making and to evaluate mitigation measures. The workshop, held on October 9 and 10 and co-organized by the European Environment Agency and the Institute for Advanced Sustainability Studies, brought together representatives from science, policy, NGOs, and industry to discuss whether currently available metrics are 'fit for purpose' or whether there is a need to develop alternative metrics or reassess the way current metrics are used and communicated. Based on the workshop outcome the presentation will (a) summarize the informational needs and current application of metrics by the end-users, who, depending on their field and area of operation might require health, policy, and/or economically relevant parameters at different scales, (b) provide an overview of the state of the science of currently used and newly developed metrics, and the scientific validity of these

  8. Assessing spelling in kindergarten: further comparison of scoring metrics and their relation to reading skills.

    Science.gov (United States)

    Clemens, Nathan H; Oslund, Eric L; Simmons, Leslie E; Simmons, Deborah

    2014-02-01

    Early reading and spelling development share foundational skills, yet spelling assessment is underutilized in evaluating early reading. This study extended research comparing the degree to which methods for scoring spelling skills at the end of kindergarten were associated with reading skills measured at the same time as well as at the end of first grade. Five strategies for scoring spelling responses were compared: totaling the number of words spelled correctly, totaling the number of correct letter sounds, totaling the number of correct letter sequences, using a rubric for scoring invented spellings, and calculating the Spelling Sensitivity Score (Masterson & Apel, 2010b). Students (N=287) who were identified at kindergarten entry as at risk for reading difficulty and who had received supplemental reading intervention were administered a standardized spelling assessment in the spring of kindergarten, and measures of phonological awareness, decoding, word recognition, and reading fluency were administered concurrently and at the end of first grade. The five spelling scoring metrics were similar in their strong relations with factors summarizing reading subskills (phonological awareness, decoding, and word reading) on a concurrent basis. Furthermore, when predicting first-grade reading skills based on spring-of-kindergarten performance, spelling scores from all five metrics explained unique variance over the autoregressive effects of kindergarten word identification. The practical advantages of using a brief spelling assessment for early reading evaluation and the relative tradeoffs of each scoring metric are discussed. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
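
    Of the five scoring strategies compared, correct letter sequences is the least self-explanatory; the sketch below is a hypothetical approximation of the standard CLS scheme (boundary-padded adjacent letter pairs, each target pair credited at most once), not the authors' scoring code.

        def correct_letter_sequences(target, response):
            """Correct letter sequences (CLS): boundary-padded adjacent letter
            pairs of the response that match pairs of the target, each target
            pair credited at most once. An approximation of the standard CLS
            scheme, not the study's scoring code."""
            t = f"^{target.lower()}$"   # pad with boundary markers
            r = f"^{response.lower()}$"
            target_pairs = [t[i:i + 2] for i in range(len(t) - 1)]
            response_pairs = [r[i:i + 2] for i in range(len(r) - 1)]
            score = 0
            for pair in target_pairs:
                if pair in response_pairs:
                    response_pairs.remove(pair)
                    score += 1
            return score

        print(correct_letter_sequences("cat", "kat"))  # 2: "at" and "t$" match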

  9. Seizure control as a new metric in assessing efficacy of tumor treatment in low-grade glioma trials

    Science.gov (United States)

    Chamberlain, Marc; Schiff, David; Reijneveld, Jaap C.; Armstrong, Terri S.; Ruda, Roberta; Wen, Patrick Y.; Weller, Michael; Koekkoek, Johan A. F.; Mittal, Sandeep; Arakawa, Yoshiki; Choucair, Ali; Gonzalez-Martinez, Jorge; MacDonald, David R.; Nishikawa, Ryo; Shah, Aashit; Vecht, Charles J.; Warren, Paula; van den Bent, Martin J.; DeAngelis, Lisa M.

    2017-01-01

    Patients with low-grade glioma frequently have brain tumor–related epilepsy, which is more common than in patients with high-grade glioma. Treatment for tumor-associated epilepsy usually comprises a combination of surgery, anti-epileptic drugs (AEDs), chemotherapy, and radiotherapy. Response to tumor-directed treatment is measured primarily by overall survival and progression-free survival. However, seizure frequency has been observed to respond to tumor-directed treatment with chemotherapy or radiotherapy. A review of the current literature regarding seizure assessment for low-grade glioma patients reveals a heterogeneous manner in which seizure response has been reported. There is a need for a systematic approach to seizure assessment and its influence on health-related quality-of-life outcomes in patients enrolled in low-grade glioma therapeutic trials. In view of the need to have an adjunctive metric of tumor response in these patients, a method of seizure assessment as a metric in brain tumor treatment trials is proposed. PMID:27651472

  10. Using animation quality metric to improve efficiency of global illumination computation for dynamic environments

    Science.gov (United States)

    Myszkowski, Karol; Tawara, Takehiro; Seidel, Hans-Peter

    2002-06-01

    In this paper, we consider applications of perception-based video quality metrics to improve the performance of global lighting computations for dynamic environments. For this purpose we extend the Visible Difference Predictor (VDP) developed by Daly to handle computer animations. We incorporate into the VDP the spatio-velocity CSF model developed by Kelly. The CSF model requires data on the velocity of moving patterns across the image plane. We use the 3D image warping technique to compensate for the camera motion, and we conservatively assume that the motion of animated objects (usually strong attractors of the visual attention) is fully compensated by the smooth pursuit eye motion. Our global illumination solution is based on stochastic photon tracing and takes advantage of temporal coherence of lighting distribution, by processing photons both in the spatial and temporal domains. The VDP is used to keep noise inherent in stochastic methods below the sensitivity level of the human observer. As a result a perceptually-consistent quality across all animation frames is obtained.

  11. A new normalizing algorithm for BAC CGH arrays with quality control metrics.

    Science.gov (United States)

    Miecznikowski, Jeffrey C; Gaile, Daniel P; Liu, Song; Shepherd, Lori; Nowak, Norma

    2011-01-01

    The main focus in pin-tip (or print-tip) microarray analysis is determining which probes, genes, or oligonucleotides are differentially expressed. Specifically in array comparative genomic hybridization (aCGH) experiments, researchers search for chromosomal imbalances in the genome. To model this data, scientists apply statistical methods to the structure of the experiment and assume that the data consist of the signal plus random noise. In this paper we propose "SmoothArray", a new method to preprocess comparative genomic hybridization (CGH) bacterial artificial chromosome (BAC) arrays and we show the effects on a cancer dataset. As part of our R software package "aCGHplus," this freely available algorithm removes the variation due to the intensity effects, pin/print-tip, the spatial location on the microarray chip, and the relative location from the well plate. Removal of this variation improves the downstream analysis and subsequent inferences made on the data. Further, we present measures to evaluate the quality of the dataset according to the arrayer pins, 384-well plates, plate rows, and plate columns. We compare our method against competing methods using several metrics to measure the biological signal. With this novel normalization algorithm and quality control measures, the user can improve their inferences on datasets and pinpoint problems that may arise in their BAC aCGH technology.
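
    A greatly reduced sketch of this kind of systematic-effect removal is shown below, covering only the pin/print-tip component; the function name and the median-centering choice are illustrative assumptions, not the published algorithm, which also models intensity, chip-location, and well-plate effects.

        import numpy as np

        def remove_pin_medians(log_ratios, pin_ids):
            """Subtract each pin/print-tip group's median log-ratio so probes
            are comparable across pins (a toy stand-in for full aCGH
            normalization)."""
            out = np.asarray(log_ratios, dtype=float).copy()
            pin_ids = np.asarray(pin_ids)
            for pin in np.unique(pin_ids):
                mask = pin_ids == pin
                out[mask] -= np.median(out[mask])
            return out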

  12. A New Normalizing Algorithm for BAC CGH Arrays with Quality Control Metrics

    Directory of Open Access Journals (Sweden)

    Jeffrey C. Miecznikowski

    2011-01-01

    Full Text Available The main focus in pin-tip (or print-tip) microarray analysis is determining which probes, genes, or oligonucleotides are differentially expressed. Specifically in array comparative genomic hybridization (aCGH) experiments, researchers search for chromosomal imbalances in the genome. To model this data, scientists apply statistical methods to the structure of the experiment and assume that the data consist of the signal plus random noise. In this paper we propose “SmoothArray”, a new method to preprocess comparative genomic hybridization (CGH) bacterial artificial chromosome (BAC) arrays, and we show the effects on a cancer dataset. As part of our R software package “aCGHplus,” this freely available algorithm removes the variation due to the intensity effects, pin/print-tip, the spatial location on the microarray chip, and the relative location from the well plate. Removal of this variation improves the downstream analysis and subsequent inferences made on the data. Further, we present measures to evaluate the quality of the dataset according to the arrayer pins, 384-well plates, plate rows, and plate columns. We compare our method against competing methods using several metrics to measure the biological signal. With this novel normalization algorithm and quality control measures, the user can improve their inferences on datasets and pinpoint problems that may arise in their BAC aCGH technology.

  13. Social Advertising Quality: Assessment Criteria

    Directory of Open Access Journals (Sweden)

    S. B. Kalmykov

    2017-01-01

    Full Text Available Purpose: the purpose of this publication is to develop the existing criteria-based assessment in the social advertising sphere. The following objectives support this aim: to establish the research methodology, to develop the author's version of the necessary notional apparatus and conceptual generalization, to determine the elements of social advertising quality, to establish the factors of its quality, to systematize the existing criteria and measuring instruments of quality assessment, to form new criteria of social advertising quality, and to apply the results to the development of criteria-based assessment and the identification of further research perspectives. Methods: a methodology is proposed for studying the management of social advertising interaction with the target audience; it has a dynamic, procedural character and draws on the multivariate paradigmatic status of sociological knowledge. Results: the primary results are: a multivariate paradigmatic research basis drawing on the works of well-known domestic and foreign scholars in sociology, qualimetry and management; definitions of social advertising, its quality, a sociological quality assurance system, and a model of target audience behavior during interaction with social advertising; quality factors organized into three groups by their level of effect on the consumer; a systematization of existing quality assessment criteria and measuring instruments by the identified elements of social advertising quality; two new criteria and corresponding measuring instruments for assessing management quality in the social advertising sphere; a refinement of adaptability, one of the common groups of production quality criteria, taking into account the new management quality criteria and the conducted systematization of existing criteria for assessing social advertising creative quality; and the perspective of further improvement of criteria-based quality assessment based on social advertising

  14. Tropospheric Ozone Assessment Report: Database and Metrics Data of Global Surface Ozone Observations

    Directory of Open Access Journals (Sweden)

    Martin G. Schultz

    2017-10-01

    Full Text Available In support of the first Tropospheric Ozone Assessment Report (TOAR a relational database of global surface ozone observations has been developed and populated with hourly measurement data and enhanced metadata. A comprehensive suite of ozone data products including standard statistics, health and vegetation impact metrics, and trend information, are made available through a common data portal and a web interface. These data form the basis of the TOAR analyses focusing on human health, vegetation, and climate relevant ozone issues, which are part of this special feature. Cooperation among many data centers and individual researchers worldwide made it possible to build the world's largest collection of 'in-situ' hourly surface ozone data covering the period from 1970 to 2015. By combining the data from almost 10,000 measurement sites around the world with global metadata information, new analyses of surface ozone have become possible, such as the first globally consistent characterisations of measurement sites as either urban or rural/remote. Exploitation of these global metadata allows for new insights into the global distribution, and seasonal and long-term changes of tropospheric ozone and they enable TOAR to perform the first, globally consistent analysis of present-day ozone concentrations and recent ozone changes with relevance to health, agriculture, and climate. Considerable effort was made to harmonize and synthesize data formats and metadata information from various networks and individual data submissions. Extensive quality control was applied to identify questionable and erroneous data, including changes in apparent instrument offsets or calibrations. Such data were excluded from TOAR data products. Limitations of 'a posteriori' data quality assurance are discussed. As a result of the work presented here, global coverage of surface ozone data for scientific analysis has been significantly extended. Yet, large gaps remain in the surface
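
    Among the health-relevant products computed from hourly data in such databases is the maximum daily 8-hour average (MDA8); a minimal sketch for a single day is shown below, ignoring TOAR's data-completeness rules and windows that cross midnight (both assumptions of this example).

        import numpy as np

        def mda8(hourly_o3):
            """Maximum daily 8-h average ozone from 24 hourly values; windows
            crossing midnight into the next day are ignored in this sketch."""
            hourly_o3 = np.asarray(hourly_o3, dtype=float)
            windows = [hourly_o3[i:i + 8].mean()
                       for i in range(len(hourly_o3) - 7)]  # 17 full windows
            return max(windows)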

  15. The effect of assessment scale and metric selection on the greenhouse gas benefits of woody biomass

    International Nuclear Information System (INIS)

    Galik, Christopher S.; Abt, Robert C.

    2012-01-01

    Recent attention has focused on the net greenhouse gas (GHG) implications of using woody biomass to produce energy. In particular, a great deal of controversy has erupted over the appropriate manner and scale at which to evaluate these GHG effects. Here, we conduct a comparative assessment of six different assessment scales and four different metric calculation techniques against the backdrop of a common biomass demand scenario. We evaluate the net GHG balance of woody biomass co-firing in existing coal-fired facilities in the state of Virginia, finding that assessment scale and metric calculation technique do in fact strongly influence the net GHG balance yielded by this common scenario. Those assessment scales that do not include possible market effects attributable to increased biomass demand, including changes in forest area, forest management intensity, and traditional industry production, generally produce less-favorable GHG balances than those that do. Given the potential difficulty small operators may have generating or accessing information on the extent of these market effects, however, it is likely that stakeholders and policy makers will need to balance accuracy and comprehensiveness with reporting and administrative simplicity. -- Highlights: ► Greenhouse gas (GHG) effects of co-firing forest biomass with coal are assessed. ► GHG effect of replacing coal with forest biomass linked to scale, analytic approach. ► Not accounting for indirect market effects yields poorer relative GHG balances. ► Accounting systems must balance comprehensiveness with administrative simplicity.

  16. Sugar concentration in nectar: a quantitative metric of crop attractiveness for refined pollinator risk assessments.

    Science.gov (United States)

    Knopper, Loren D; Dan, Tereza; Reisig, Dominic D; Johnson, Josephine D; Bowers, Lisa M

    2016-10-01

    Those involved with pollinator risk assessment know that agricultural crops vary in attractiveness to bees. Intuitively, this means that exposure to agricultural pesticides is likely greatest for attractive plants and lowest for unattractive plants. While crop attractiveness in the risk assessment process has been qualitatively remarked on by some authorities, absent is direction on how to refine the process with quantitative metrics of attractiveness. At a high level, attractiveness of crops to bees appears to depend on several key variables, including but not limited to: floral, olfactory, visual and tactile cues; seasonal availability; physical and behavioral characteristics of the bee; plant and nectar rewards. Notwithstanding the complexities and interactions among these variables, sugar content in nectar stands out as a suitable quantitative metric by which to refine pollinator risk assessments for attractiveness. Provided herein is a proposed way to use sugar nectar concentration to adjust the exposure parameter (with what is called a crop attractiveness factor) in the calculation of risk quotients in order to derive crop-specific tier I assessments. This Perspective is meant to invite discussion on incorporating such changes in the risk assessment process. © 2016 The Authors. Pest Management Science published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.
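
    A minimal sketch of the adjustment proposed above: scale the exposure term of a tier I risk quotient by a crop attractiveness factor (CAF) derived from nectar sugar content. The linear CAF form and the 50% w/w reference concentration are illustrative assumptions, not values from the paper.

        def adjusted_risk_quotient(exposure_dose_ug, ld50_ug_per_bee,
                                   nectar_sugar_frac, reference_sugar_frac=0.5):
            """Tier I risk quotient with the exposure term scaled by a CAF.
            Linear CAF and the 0.5 (50% w/w) reference are assumptions."""
            caf = min(nectar_sugar_frac / reference_sugar_frac, 1.0)
            return (exposure_dose_ug * caf) / ld50_ug_per_bee

        # e.g. a crop with 30% nectar sugar: CAF = 0.6, RQ scaled accordingly
        print(adjusted_risk_quotient(0.2, 0.04, 0.30))  # 3.0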

  17. Evaluation of the performance of a micromethod for measuring urinary iodine by using six sigma quality metrics.

    Science.gov (United States)

    Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud

    2013-09-01

    The urinary iodine micromethod (UIMM) is a modification of the conventional method and its performance needs evaluation. UIMM performance was evaluated using the method validation and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM obeyed various method acceptability test criteria with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed that the Sigma metrics were at 2.75, 1.80, and 3.80 for 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc) below the TEa together with acceptable Sigma metrics at all concentrations. Only one laboratory had TEcalc
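
    The Sigma metrics above follow the standard Six Sigma formula for laboratory methods, Sigma = (TEa − |bias|) / CV, with all terms in percent; a minimal sketch with illustrative inputs (not the study's validation data) is given below.

        def sigma_metric(tea_pct, bias_pct, cv_pct):
            """Six Sigma quality metric for a laboratory method:
            Sigma = (TEa - |bias|) / CV, all terms in percent."""
            return (tea_pct - abs(bias_pct)) / cv_pct

        # Illustrative inputs only: TEa = 25%, bias = 5%, CV = 7.3% -> ~2.7,
        # comparable to the low-concentration performance reported above.
        print(round(sigma_metric(25.0, 5.0, 7.3), 2))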

  18. Genome Assembly Forensics: Metrics for Assessing Assembly Correctness (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    Energy Technology Data Exchange (ETDEWEB)

    Pop, Mihai

    2011-10-13

    University of Maryland's Mihai Pop on Genome Assembly Forensics: Metrics for Assessing Assembly Correctness at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  19. Estimated work ability in warm outdoor environments depends on the chosen heat stress assessment metric.

    Science.gov (United States)

    Bröde, Peter; Fiala, Dusan; Lemke, Bruno; Kjellstrom, Tord

    2018-03-01

    With a view to occupational effects of climate change, we performed a simulation study on the influence of different heat stress assessment metrics on estimated workability (WA) of labour in warm outdoor environments. Whole-day shifts with varying workloads were simulated using as input meteorological records for the hottest month from four cities with prevailing hot (Dallas, New Delhi) or warm-humid conditions (Managua, Osaka), respectively. In addition, we considered the effects of adaptive strategies like shielding against solar radiation and different work-rest schedules assuming an acclimated person wearing light work clothes (0.6 clo). We assessed WA according to Wet Bulb Globe Temperature (WBGT) by means of an empirical relation of worker performance from field studies (Hothaps), and as allowed work hours using safety threshold limits proposed by the corresponding standards. Using the physiological models Predicted Heat Strain (PHS) and Universal Thermal Climate Index (UTCI)-Fiala, we calculated WA as the percentage of working hours with body core temperature and cumulated sweat loss below standard limits (38 °C and 7.5% of body weight, respectively) recommended by ISO 7933 and below conservative (38 °C; 3%) and liberal (38.2 °C; 7.5%) limits in comparison. ANOVA results showed that the different metrics, workload, time of day and climate type determined the largest part of WA variance. WBGT-based metrics were highly correlated and indicated slightly more constrained WA for moderate workload, but were less restrictive with high workload and for afternoon work hours compared to PHS and UTCI-Fiala. Though PHS showed unrealistic dynamic responses to rest from work compared to UTCI-Fiala, differences in WA assessed by the physiological models largely depended on the applied limit criteria. In conclusion, our study showed that the choice of the heat stress assessment metric impacts notably on the estimated WA. Whereas PHS and UTCI-Fiala can account for
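
    For orientation, the outdoor WBGT used above follows the standard ISO 7243 weighting, and the Hothaps work-ability relation is a logistic curve in WBGT; in the sketch below the logistic parameter values are illustrative assumptions for a moderate workload, not the study's calibrated values.

        def wbgt_outdoor(t_air, t_nat_wet_bulb, t_globe):
            """Outdoor Wet Bulb Globe Temperature (ISO 7243):
            WBGT = 0.7*Tnwb + 0.2*Tg + 0.1*Ta, all in deg C."""
            return 0.7 * t_nat_wet_bulb + 0.2 * t_globe + 0.1 * t_air

        def workability_hothaps(wbgt, alpha1=34.9, alpha2=16.5):
            """Hothaps-style logistic work-ability curve:
            WA = 0.1 + 0.9 / (1 + (WBGT/alpha1)**alpha2).
            alpha1 and alpha2 are assumed, illustrative parameters."""
            return 0.1 + 0.9 / (1.0 + (wbgt / alpha1) ** alpha2)

        # e.g. a hot afternoon: Ta=38, Tnwb=29, Tg=50 -> WBGT=34.1, WA~0.64
        print(workability_hothaps(wbgt_outdoor(38.0, 29.0, 50.0)))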

  1. Mask quality assessment

    Science.gov (United States)

    Regis, Larry; Paulson, Neil; Reynolds, James A.

    1994-02-01

    Product quality and timely delivery are two of the most important parameters, determining the success of a mask manufacturing facility. Because of the sensitivity of this data, however, very little is known about industry performance in these areas. Using Arthur Andersen & Co. to protect contributor identity, the authors have conducted a blind quality survey of mask shops which represents over 75% of the total merchant and captive mask volume in the US. Quantities such as return rate, plate survival yield, performance to schedule and reason for return were requested from 1988 through Q2 1993. Data is analyzed and conclusions are presented.

  2. Harmonizing exposure metrics and methods for sustainability assessments of food contact materials

    DEFF Research Database (Denmark)

    Ernstoff, Alexi; Jolliet, Olivier; Niero, Monia

    2016-01-01

    ) and Cradle to Cradle to support packaging design. Each assessment has distinct context and goals, but can help manage exposure to toxic chemicals and other environmental impacts. Metrics and methods to quantify and characterize exposure to potentially toxic chemicals specifically in food packaging are......, however, notably lacking from such assessments. Furthermore, previous case studies demonstrated that sustainable packaging design focuses, such as decreasing greenhouse gas emissions or resource consumption, can increase exposure to toxic chemicals through packaging. Thereby, developing harmonized methods...... for quantifying exposure to chemicals in food packaging is critical to ensure ‘sustainable packages’ do not increase exposure to toxic chemicals. Therefore we developed modelling methods suitable for first-tier risk screening and environmental assessments. The modelling framework was based on the new product...

  3. The health-related quality of life journey of gynecologic oncology surgical patients: Implications for the incorporation of patient-reported outcomes into surgical quality metrics.

    Science.gov (United States)

    Doll, Kemi M; Barber, Emma L; Bensen, Jeannette T; Snavely, Anna C; Gehrig, Paola A

    2016-05-01

    To report the changes in patient-reported quality of life for women undergoing gynecologic oncology surgeries. In a prospective cohort study from 10/2013-10/2014, women were enrolled pre-operatively and completed comprehensive interviews at baseline, 1, 3, and 6 months post-operatively. Measures included the disease-specific Functional Assessment of Cancer Therapy-General (FACT-GP), general Patient Reported Outcome Measure Information System (PROMIS) global health and validated measures of anxiety and depression. Bivariate statistics were used to analyze demographic groups and changes in mean scores over time. Of 231 patients completing baseline interviews, 185 (80%) completed 1-month, 170 (74%) 3-month, and 174 (75%) 6-month interviews. Minimally invasive (n=115, 63%) and laparotomy (n=60, 32%) procedures were performed. Functional wellbeing declined temporarily after surgery (20 → 17.6, p < 0.001), independent of adjuvant therapy administration. In an exploratory analysis of the interaction of QOL and quality, patients with increased postoperative healthcare resource use were noted to have higher baseline levels of anxiety. For women undergoing gynecologic oncology procedures, temporary declines in functional wellbeing are balanced by improvements in emotional wellbeing and decreased anxiety symptoms after surgery. Not all commonly used QOL surveys are sensitive to changes during the perioperative period and may not be suitable for use in surgical quality metrics. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Energetic fitness: Field metabolic rates assessed via 3D accelerometry complement conventional fitness metrics

    Science.gov (United States)

    Grémillet, David; Lescroël, Amelie; Ballard, Grant; Dugger, Katie M.; Massaro, Melanie; Porzig, Elizabeth L.; Ainley, David G.

    2018-01-01

    Evaluating the fitness of organisms is an essential step towards understanding their responses to environmental change. Connections between energy expenditure and fitness have been postulated for nearly a century. However, testing this premise among wild animals is constrained by difficulties in measuring energy expenditure while simultaneously monitoring conventional fitness metrics such as survival and reproductive output. We addressed this issue by exploring the functional links between field metabolic rate (FMR), body condition, sex, age and reproductive performance in a wild population. We deployed 3D accelerometers on 115 Adélie penguins Pygoscelis adeliae during four breeding seasons at one of the largest colonies of this species, Cape Crozier, on Ross Island, Antarctica. The demography of this population has been studied for the past 18 years. From accelerometry recordings, collected for birds of known age and breeding history, we determined the vector of the dynamic body acceleration (VeDBA) and used it as a proxy for FMR. This allowed us to demonstrate relationships among FMR, a breeding quality index (BQI) and body condition. Notably, we found a significant quadratic relationship between mean VeDBA during foraging and BQI for experienced breeders, and individuals in better body condition showed lower rates of energy expenditure. We conclude that using FMR as a fitness component complementary to more conventional fitness metrics will yield greater understanding of evolutionary and conservation physiology.
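
    The FMR proxy used above, VeDBA, is the vector norm of the gravity-corrected acceleration; a minimal sketch follows, assuming a running-mean separation of the static component with a 2 s window (a common but here assumed smoothing choice, not necessarily the authors' processing pipeline).

        import numpy as np

        def vedba(acc_xyz, fs_hz, window_s=2.0):
            """Vector of the Dynamic Body Acceleration: subtract a running
            mean per axis (the static, gravitational component), then take
            the vector norm of the remaining dynamic acceleration.
            acc_xyz is an (N, 3) array sampled at fs_hz."""
            n = max(1, int(window_s * fs_hz))
            kernel = np.ones(n) / n
            static = np.column_stack(
                [np.convolve(acc_xyz[:, i], kernel, mode="same")
                 for i in range(3)])
            dynamic = acc_xyz - static
            return np.sqrt((dynamic ** 2).sum(axis=1))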

  5. Audiovisual quality assessment in communications applications: Current status, trends and challenges

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2010-01-01

    Audiovisual quality assessment is one of the major challenges in multimedia communications. Traditionally, algorithm-based (objective) assessment methods have focused primarily on the compression artifacts. However, compression is only one of the numerous factors influencing the perception...... addressed in practical quality metrics is the co-impact of audio and video qualities. This paper provides an overview of the current trends and challenges in objective audiovisual quality assessment, with emphasis on communication applications...

  6. Critical Assessment of the Foundations of Power Transmission and Distribution Reliability Metrics and Standards.

    Science.gov (United States)

    Nateghi, Roshanak; Guikema, Seth D; Wu, Yue Grace; Bruss, C Bayan

    2016-01-01

    The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events. © 2015 Society for Risk Analysis.
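
    Two of the most widely used distribution reliability indices of the kind reviewed above are the IEEE 1366 measures SAIFI and SAIDI; a minimal sketch of their textbook definitions is given below (generic formulas with illustrative numbers, not the article's code or data).

        def saifi(total_customer_interruptions, customers_served):
            """System Average Interruption Frequency Index:
            interruptions experienced per customer served per year."""
            return total_customer_interruptions / customers_served

        def saidi(total_customer_interruption_minutes, customers_served):
            """System Average Interruption Duration Index:
            outage minutes per customer served per year."""
            return total_customer_interruption_minutes / customers_served

        # e.g. 150,000 customer interruptions and 18,000,000 customer-minutes
        # across 100,000 customers -> SAIFI = 1.5, SAIDI = 180 minutes
        print(saifi(150_000, 100_000), saidi(18_000_000, 100_000))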

  7. Alternative Metrics ("Altmetrics") for Assessing Article Impact in Popular General Radiology Journals.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Ayoola, Abimbola; Singh, Kush; Duszak, Richard

    2017-07-01

    Emerging alternative metrics leverage social media and other online platforms to provide immediate measures of biomedical articles' reach among diverse public audiences. We aimed to compare traditional citation and alternative impact metrics for articles in popular general radiology journals. All 892 original investigations published in 2013 issues of Academic Radiology, American Journal of Roentgenology, Journal of the American College of Radiology, and Radiology were included. Each article's content was classified as imaging vs nonimaging. Traditional journal citations to articles were obtained from Web of Science. Each article's Altmetric Attention Score (Altmetric), representing weighted mentions across a variety of online platforms, was obtained from Altmetric.com. Statistical assessment included the McNemar test, the Mann-Whitney test, and the Pearson correlation. Mean and median traditional citation counts were 10.7 ± 15.4 and 5 vs 3.3 ± 13.3 and 0 for Altmetric. Among all articles, 96.4% had ≥1 traditional citation vs 41.8% for Altmetric (P < 0.001). Traditional citations were higher for articles with imaging than nonimaging content (11.5 ± 16.2 vs 6.9 ± 9.8, P < 0.001), whereas Altmetric scores were higher for articles with nonimaging content (5.1 ± 11.1 vs 2.8 ± 13.7, P = 0.006). Although overall online attention to radiology journal content was low, alternative metrics exhibited unique trends, particularly for nonclinical articles, and may provide a complementary measure of radiology research impact compared to traditional citation counts. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
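
    A minimal sketch of the nonparametric comparison named above (the Mann-Whitney test), with hypothetical per-article counts standing in for the study's Web of Science and Altmetric.com data.

        from scipy.stats import mannwhitneyu

        # Hypothetical per-article citation counts (not the study's data).
        imaging = [12, 9, 15, 4, 22, 7, 3, 18]
        nonimaging = [5, 8, 3, 11, 6, 2, 9, 4]

        u, p = mannwhitneyu(imaging, nonimaging, alternative="two-sided")
        print(f"U = {u}, p = {p:.3f}")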

  8. Brief educational interventions to improve performance on novel quality metrics in ambulatory settings in Kenya: A multi-site pre-post effectiveness trial

    Science.gov (United States)

    Onguka, Stephanie; Halestrap, Peter; McAlhaney, Maureen; Adam, Mary

    2017-01-01

    Background The quality of primary care delivered in resource-limited settings is low. While some progress has been made using educational interventions, it is not yet clear how to sustainably improve care for common acute illnesses in the outpatient setting. Management of urinary tract infection is particularly important in resource-limited settings, where it is commonly diagnosed and associated with high levels of antimicrobial resistance. We describe an educational programme targeting non-physician health care providers and its effects on various clinical quality metrics for urinary tract infection. Methods We used a series of educational interventions including 1) formal introduction of a clinical practice guideline, 2) peer-to-peer chart review, and 3) peer-reviewed literature describing local antimicrobial resistance patterns. Interventions were conducted for clinical officers (N = 24) at two outpatient centers near Nairobi, Kenya over a one-year period. The medical records of 474 patients with urinary tract infections were scored on five clinical quality metrics, with the primary outcome being the proportion of cases in which the guideline-recommended antibiotic was prescribed. The results at baseline and following each intervention were compared using chi-squared tests and unpaired two-tailed T-tests for significance. Logistic regression analysis was used to assess for possible confounders. Findings Clinician adherence to the guideline-recommended antibiotic improved significantly during the study period, from 19% at baseline to 68% following all interventions (Χ2 = 150.7, p < 0.001). The secondary outcome of composite quality score also improved significantly from an average of 2.16 to 3.00 on a five-point scale (t = 6.58, p < 0.001). Interventions had different effects at different clinical sites; the primary outcome of appropriate antibiotic prescription was met 83% of the time at Penda Health, and 50% of the time at AICKH, possibly reflecting

  9. Institutional Consequences of Quality Assessment

    Science.gov (United States)

    Joao Rosa, Maria; Tavares, Diana; Amaral, Alberto

    2006-01-01

    This paper analyses the opinions of Portuguese university rectors and academics on the quality assessment system and its consequences at the institutional level. The results obtained show that university staff (rectors and academics, with more of the former than the latter) held optimistic views of the positive consequences of quality assessment…

  10. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2012-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development...... of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed....... This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical...

  11. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development...... of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed....... This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical...

  12. Impact of alternative metrics on estimates of extent of occurrence for extinction risk assessment.

    Science.gov (United States)

    Joppa, Lucas N; Butchart, Stuart H M; Hoffmann, Michael; Bachman, Steve P; Akçakaya, H Resit; Moat, Justin F; Böhm, Monika; Holland, Robert A; Newton, Adrian; Polidoro, Beth; Hughes, Adrian

    2016-04-01

    In International Union for Conservation of Nature (IUCN) Red List assessments, extent of occurrence (EOO) is a key measure of extinction risk. However, the way assessors estimate EOO from maps of species' distributions is inconsistent among assessments of different species and among major taxonomic groups. Assessors often estimate EOO from the area of mapped distribution, but these maps often exclude areas that are not habitat in idiosyncratic ways and are not created at the same spatial resolutions. We assessed the impact on extinction risk categories of applying different methods (minimum convex polygon, alpha hull) for estimating EOO for 21,763 species of mammals, birds, and amphibians. Overall, the percentage of threatened species requiring down-listing to a lower category of threat (taking into account other Red List criteria under which they qualified) spanned 11-13% for all species combined (14-15% for mammals, 7-8% for birds, and 12-15% for amphibians). These down-listings resulted from larger estimates of EOO and depended on the EOO calculation method. Using birds as an example, we found that 14% of threatened and near threatened species could require down-listing based on the minimum convex polygon (MCP) approach, an approach that is now recommended by IUCN. Other metrics (such as alpha hull) had marginally smaller impacts. Our results suggest that uniformly applying the MCP approach may lead to a one-time down-listing of hundreds of species but ultimately ensure consistency across assessments and realign the calculation of EOO with the theoretical basis on which the metric was founded. © 2015 Society for Conservation Biology.
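
    A minimal sketch of the minimum convex polygon (MCP) estimate of EOO: the area of the convex hull of occurrence records. It assumes coordinates are already projected to an equal-area system in kilometres (an assumption of this example, and a simplification of real Red List workflows).

        import numpy as np
        from scipy.spatial import ConvexHull

        def eoo_mcp_km2(points_xy_km):
            """EOO as the area of the minimum convex polygon (convex hull)
            around occurrence records; for 2-D input, hull.volume is area."""
            return ConvexHull(np.asarray(points_xy_km, dtype=float)).volume

        # Three records spanning a right triangle with 100 km legs -> 5000 km^2
        print(eoo_mcp_km2([[0, 0], [100, 0], [0, 100]]))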

  13. Ability to Work among Patients with ESKD: Relevance of Quality Care Metrics.

    Science.gov (United States)

    Kutner, Nancy G; Zhang, Rebecca

    2017-08-07

    Enabling patient ability to work was a key rationale for enacting the United States (US) Medicare program that provides financial entitlement to renal replacement therapy for persons with end-stage kidney disease (ESKD). However, fewer than half of working-age individuals in the US report the ability to work after starting maintenance hemodialysis (HD). Quality improvement is a well-established objective in oversight of the dialysis program, but a more patient-centered quality assessment approach is increasingly advocated. The ESKD Quality Incentive Program (QIP) initiated in 2012 emphasizes clinical performance indicators, but a newly-added measure requires the monitoring of patient depression-an issue that is important for work ability and employment. We investigated depression scores and four dialysis-specific QIP measures in relation to work ability reported by a multi-clinic cohort of 528 working-age maintenance HD patients. The prevalence of elevated depression scores was substantially higher among patients who said they were not able to work, while only one of the four dialysis-specific clinical measures differed for patients able/not able to work. Ability to work may be among patients' top priorities. As the parameters of quality assessment continue to evolve, increased attention to patient priorities might facilitate work ability and employment outcomes.

  14. Determine metrics and set targets for soil quality on agriculture residue and energy crop pathways

    Energy Technology Data Exchange (ETDEWEB)

    Ian Bonner; David Muth

    2013-09-01

    There are three objectives for this project: 1) support OBP in meeting MYPP stated performance goals for the Sustainability Platform, 2) develop integrated feedstock production system designs that increase total productivity of the land, decrease delivered feedstock cost to the conversion facilities, and increase environmental performance of the production system, and 3) deliver to the bioenergy community robust datasets and flexible analysis tools for establishing sustainable and viable use of agricultural residues and dedicated energy crops. The key project outcome to date has been the development and deployment of a sustainable agricultural residue removal decision support framework. The modeling framework has been used to produce a revised national assessment of sustainable residue removal potential. The national assessment datasets are being used to update national resource assessment supply curves using POLYSIS. The residue removal modeling framework has also been enhanced to support high fidelity sub-field scale sustainable removal analyses. The framework has been deployed through a web application and a mobile application. The mobile application is being used extensively in the field with industry, research, and USDA NRCS partners to support and validate sustainable residue removal decisions. The results detailed in this report have set targets for increasing soil sustainability by focusing on primary soil quality indicators (total organic carbon and erosion) in two agricultural residue management pathways and a dedicated energy crop pathway. The two residue pathway targets were set to, 1) increase residue removal by 50% while maintaining soil quality, and 2) increase soil quality by 5% as measured by Soil Management Assessment Framework indicators. The energy crop pathway was set to increase soil quality by 10% using these same indicators. To demonstrate the feasibility and impact of each of these targets, seven case studies spanning the US are presented

  15. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    Directory of Open Access Journals (Sweden)

    Daniel Laney

    2014-01-01

    Full Text Available This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
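
    A toy illustration of the approach described above: apply a lossy transform, then evaluate a physics-motivated metric (here, drift in a conserved total) rather than a signal-processing norm. The uniform quantizer and tolerance are stand-ins for the real compressors and codes tested in the paper.

        import numpy as np

        def quantize(field, abs_tol):
            """Toy lossy compressor: uniform quantization with a fixed
            absolute error bound (stand-in for the real compressors)."""
            step = 2.0 * abs_tol
            return np.round(field / step) * step

        def relative_energy_drift(field, abs_tol):
            """Physics-motivated metric: relative change in a conserved
            total (here the field's sum) instead of an L2 signal norm."""
            return abs(quantize(field, abs_tol).sum() - field.sum()) / abs(field.sum())

        rng = np.random.default_rng(0)
        density = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
        print(f"relative energy drift: {relative_energy_drift(density, 1e-3):.2e}")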

  16. WE-AB-209-07: Explicit and Convex Optimization of Plan Quality Metrics in Intensity-Modulated Radiation Therapy Treatment Planning

    International Nuclear Information System (INIS)

    Engberg, L; Eriksson, K; Hardemark, B; Forsgren, A

    2016-01-01

    Purpose: To formulate objective functions of a multicriteria fluence map optimization model that correlate well with plan quality metrics, and to solve this multicriteria model by convex approximation. Methods: In this study, objectives of a multicriteria model are formulated to explicitly either minimize or maximize a dose-at-volume measure. Given the widespread agreement that dose-at-volume levels play important roles in plan quality assessment, these objectives correlate well with plan quality metrics. This is in contrast to the conventional objectives, which are to maximize clinical goal achievement by relating to deviations from given dose-at-volume thresholds: while balancing the new objectives means explicitly balancing dose-at-volume levels, balancing the conventional objectives effectively means balancing deviations. Constituted by the inherently non-convex dose-at-volume measure, the new objectives are approximated by the convex mean-tail-dose measure (CVaR measure), yielding a convex approximation of the multicriteria model. Results: Advantages of using the convex approximation are investigated through juxtaposition with the conventional objectives in a computational study of two patient cases. Clinical goals of each case respectively point out three ROI dose-at-volume measures to be considered for plan quality assessment. This is translated in the convex approximation into minimizing three mean-tail-dose measures. Evaluations of the three ROI dose-at-volume measures on Pareto optimal plans are used to represent plan quality of the Pareto sets. Besides providing increased accuracy in terms of feasibility of solutions, the convex approximation generates Pareto sets with overall improved plan quality. In one case, the Pareto set generated by the convex approximation entirely dominates that generated with the conventional objectives. Conclusion: The initial computational study indicates that the convex approximation outperforms the conventional objectives
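
    As a rough illustration of the convex surrogate named above, mean-tail-dose (the CVaR measure) averages the hottest or coldest fraction of voxel doses, in contrast to the non-convex dose-at-volume point value; a sketch under that standard definition, with synthetic doses:

    ```python
    import numpy as np

    def mean_tail_dose(dose, tail_fraction, upper=True):
        """CVaR-style measure: mean over the hottest (upper=True) or coldest
        tail_fraction of voxel doses; a convex stand-in for dose-at-volume."""
        d = np.sort(np.asarray(dose, dtype=float))
        k = max(1, int(round(tail_fraction * d.size)))
        return d[-k:].mean() if upper else d[:k].mean()

    doses = np.random.default_rng(1).normal(60.0, 2.0, size=10_000)  # Gy
    print(mean_tail_dose(doses, 0.05))  # mean dose to the hottest 5% of voxels
    print(np.percentile(doses, 95))     # the corresponding dose-at-volume level
    ```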

  17. Indoor Climate Quality Assessment

    DEFF Research Database (Denmark)

    Ansaldi, Roberta; Asadi, Ehsan; Costa, José Joaquim

    This Guidebook gives building professionals useful support in the practical measurement and monitoring of the indoor climate in buildings. It is evident that energy consumption in a building is directly influenced by the required and maintained indoor comfort level. Wireless technologies for measurement and monitoring have allowed a significantly increased number of possible applications, especially in existing buildings. The Guidebook illustrates several cases with the instrumentation for the monitoring and assessment of indoor climate.

  18. The Northeast Stream Quality Assessment

    Science.gov (United States)

    Van Metre, Peter C.; Riva-Murray, Karen; Coles, James F.

    2016-04-22

    In 2016, the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) is assessing stream quality in the northeastern United States. The goal of the Northeast Stream Quality Assessment (NESQA) is to assess the quality of streams in the region by characterizing multiple water-quality factors that are stressors to aquatic life and evaluating the relation between these stressors and biological communities. The focus of NESQA in 2016 will be on the effects of urbanization and agriculture on stream quality in all or parts of eight states: Connecticut, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont. Findings will provide the public and policymakers with information about the most critical factors affecting stream quality, thus providing insights about possible approaches to protect the health of streams in the region. The NESQA study will be the fourth regional study conducted as part of NAWQA and will be of similar design and scope to the first three, in the Midwest in 2013, the Southeast in 2014, and the Pacific Northwest in 2015 (http://txpub.usgs.gov/RSQA/).

  19. Timeliness “at a glance”: assessing the turnaround time through the six sigma metrics.

    Science.gov (United States)

    Ialongo, Cristiano; Bernardini, Sergio

    2016-01-01

    Almost thirty years of systematic analysis have proven the turnaround time (TAT) to be a fundamental dimension for the clinical laboratory. Several indicators are available to date to assess and report quality with respect to timeliness, but they sometimes lack communicative immediacy and accuracy. Six sigma is a paradigm developed within the industrial domain for assessing quality and addressing goals and issues. The sigma level computed through the Z-score method is a simple and straightforward tool which expresses quality on a universal dimensionless scale and can handle non-normal data. Herein we report our preliminary experience in using the sigma level to assess the change in urgent (STAT) test turnaround time due to the implementation of total automation. We found that the Z-score method is a valuable and easy-to-use method for assessing and communicating the quality level of laboratory timeliness, providing a good correspondence with the actual change in efficiency which was retrospectively observed.
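
    A hedged sketch of the Z-score idea, with assumptions made explicit: STAT tests exceeding the TAT goal are counted as defects, and the conventional 1.5-sigma long-term shift is applied; the paper's exact convention may differ.

    ```python
    from statistics import NormalDist

    def sigma_level(turnaround_min, goal_min, long_term_shift=1.5):
        """Six sigma via the Z-score method: defect rate -> sigma level."""
        defect_rate = sum(t > goal_min for t in turnaround_min) / len(turnaround_min)
        return NormalDist().inv_cdf(1.0 - defect_rate) + long_term_shift

    # here 25% of STAT tests exceed a 60-minute goal -> sigma level of about 2.2
    print(sigma_level([45, 50, 58, 62] * 25, goal_min=60))
    ```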

  20. The application of simple metrics in the assessment of glycaemic variability.

    Science.gov (United States)

    Monnier, L; Colette, C; Owens, D R

    2018-03-06

    The assessment of glycaemic variability (GV) remains a subject of debate, with many indices proposed to represent either short-term GV (acute glucose fluctuations) or long-term GV (variations of HbA1c). For the assessment of short-term within-day GV, the coefficient of variation for glucose (%CV), defined as the standard deviation adjusted on the 24-h mean glucose concentration, is easy to compute, and a threshold of 36%, recently adopted by the international consensus on use of continuous glucose monitoring, separates stable from labile glycaemic states. More complex metrics such as the Low Blood Glucose Index (LBGI) or High Blood Glucose Index (HBGI) allow the risk of hypo- or hyperglycaemic episodes, respectively, to be assessed, although in clinical practice their application is limited by the need for more complex computation. This also applies to other indices of short-term intraday GV, including the mean amplitude of glycemic excursions (MAGE), Schlichtkrull's M-value and CONGA. GV is important clinically, as exaggerated glucose fluctuations are associated with an enhanced risk of adverse cardiovascular outcomes due primarily to hypoglycaemia. In contrast, there is at present no compelling evidence that elevated short-term GV is an independent risk factor for microvascular complications of diabetes. Concerning long-term GV, numerous studies support its association with an enhanced risk of cardiovascular events. However, this association raises the question as to whether the impact of long-term variability is not simply the consequence of repeated exposure to short-term GV or to ambient chronic hyperglycaemia. The renewed emphasis on glucose monitoring with the introduction of continuous glucose monitoring technologies can benefit from the introduction and application of simple metrics for describing GV, along with supporting recommendations.
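
    The %CV itself is a one-liner; a minimal sketch (glucose values assumed in mg/dL sampled over 24 h, with the 36% threshold from the consensus cited above):

    ```python
    import statistics

    def percent_cv(glucose):
        """Coefficient of variation: SD adjusted on the 24-h mean glucose."""
        return 100.0 * statistics.stdev(glucose) / statistics.mean(glucose)

    profile = [110, 145, 180, 95, 122, 160, 101, 138]
    labile = percent_cv(profile) > 36.0  # consensus threshold for labile GV
    print(percent_cv(profile), labile)
    ```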

  1. In Data We Trust? Comparison of Electronic Versus Manual Abstraction of Antimicrobial Prescribing Quality Metrics for Hospitalized Veterans With Pneumonia.

    Science.gov (United States)

    Jones, Barbara E; Haroldsen, Candace; Madaras-Kelly, Karl; Goetz, Matthew B; Ying, Jian; Sauer, Brian; Jones, Makoto M; Leecaster, Molly; Greene, Tom; Fridkin, Scott K; Neuhauser, Melinda M; Samore, Matthew H

    2018-07-01

    Electronic health records provide the opportunity to assess system-wide quality measures. Veterans Affairs Pharmacy Benefits Management Center for Medication Safety uses medication use evaluation (MUE) through manual review of the electronic health records. To compare an electronic MUE approach versus human/manual review for extraction of antibiotic use (choice and duration) and severity metrics. Retrospective. Hospitalizations for uncomplicated pneumonia occurring during 2013 at 30 Veterans Affairs facilities. We compared summary statistics, individual hospitalization-level agreement, facility-level consistency, and patterns of variation between electronic and manual MUE for initial severity, antibiotic choice, daily clinical stability, and antibiotic duration. Among 2004 hospitalizations, electronic and manual abstraction methods showed high individual hospitalization-level agreement for initial severity measures (agreement=86%-98%, κ=0.5-0.82), antibiotic choice (agreement=89%-100%, κ=0.70-0.94), and facility-level consistency for empiric antibiotic choice (anti-MRSA r=0.97, P<0.001; antipseudomonal r=0.95, P<0.001) and therapy duration (r=0.77, P<0.001) but lower facility-level consistency for days to clinical stability (r=0.52, P=0.006) or excessive duration of therapy (r=0.55, P=0.005). Both methods identified widespread facility-level variation in antibiotic choice, but we found additional variation in manual estimation of excessive antibiotic duration and initial illness severity. Electronic and manual MUE agreed well for illness severity, antibiotic choice, and duration of therapy in pneumonia at both the individual and facility levels. Manual MUE showed additional reviewer-level variation in estimation of initial illness severity and excessive antibiotic use. Electronic MUE allows for reliable, scalable tracking of national patterns of antimicrobial use, enabling the examination of system-wide interventions to improve quality.
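
    For the hospitalization-level agreement statistics quoted above, a minimal sketch of percent agreement and Cohen's kappa for two abstraction methods over paired categorical labels (illustrative labels only, not the study's data):

    ```python
    from collections import Counter

    def agreement_and_kappa(electronic, manual):
        """Percent agreement and Cohen's kappa for paired categorical labels."""
        n = len(electronic)
        observed = sum(e == m for e, m in zip(electronic, manual)) / n
        count_e, count_m = Counter(electronic), Counter(manual)
        # Chance agreement from the two methods' marginal label frequencies.
        expected = sum(count_e[c] * count_m[c]
                       for c in set(electronic) | set(manual)) / n**2
        kappa = (observed - expected) / (1.0 - expected)
        return observed, kappa

    e = ["anti-MRSA", "none", "none", "anti-MRSA", "none"]
    m = ["anti-MRSA", "none", "anti-MRSA", "anti-MRSA", "none"]
    print(agreement_and_kappa(e, m))  # (0.8, kappa ~ 0.62)
    ```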

  2. A City and National Metric measuring Isolation from the Global Market for Food Security Assessment

    Science.gov (United States)

    Brown, Molly E.; Silver, Kirk Coleman; Rajagopalan, Krishnan

    2013-01-01

    The World Bank has invested in infrastructure in developing countries for decades. This investment aims to reduce the isolation of markets, reducing both seasonality and variability in food availability and food prices. Here we combine city market price data, global distance to port, and country infrastructure data to create a new Isolation Index for countries and cities around the world. Our index quantifies the isolation of a city from the global market. We demonstrate that an index built at the country level can be applied at a sub-national level to quantify city isolation. In doing so, we offer policy makers an alternative metric to assess food insecurity. We compare our Isolation Index with other indices and economic data found in the literature. We show that our index measures economic isolation regardless of economic stability, using correlation and analysis

  3. Software Architecture Coupling Metric for Assessing Operational Responsiveness of Trading Systems

    Directory of Open Access Journals (Sweden)

    Claudiu VINTE

    2012-01-01

    Full Text Available The empirical observation that motivates our research relies on the difficulty of assessing the performance of a trading architecture beyond a few synthetic indicators like response time, system latency, availability or volume capacity. Trading systems involve complex software architectures of distributed resources. However, in the context of a large brokerage firm, which offers global coverage from both market and client perspectives, the term distributed gains critical significance. Offering a low-latency ordering system by today's standards is relatively easily achievable, but integrating it in a flexible manner within the broader information system architecture of a broker/dealer requires operational aspects to be factored in. We propose a metric for measuring the coupling level within a software architecture, and employ it to identify architectural designs that can offer a higher level of operational responsiveness, which ultimately would raise the overall real-world performance of a trading system.

  4. Leveraging multi-channel x-ray detector technology to improve quality metrics for industrial and security applications

    Science.gov (United States)

    Jimenez, Edward S.; Thompson, Kyle R.; Stohn, Adriana; Goodner, Ryan N.

    2017-09-01

    Sandia National Laboratories has recently developed the capability to acquire multi-channel radiographs for multiple research and development applications in industry and security. This capability allows x-ray radiographs or sinogram data to be acquired at up to 300 keV with up to 128 channels per pixel. This work will investigate whether multiple quality metrics for computed tomography can actually benefit from binned projection data compared to traditionally acquired grayscale sinogram data. Features and metrics to be evaluated include the ability to distinguish between two different materials with similar absorption properties, artifact reduction, and signal-to-noise for both raw data and reconstructed volumetric data. The impact of this technology on non-destructive evaluation, national security, and industry is wide-ranging and has the potential to improve upon many inspection methods such as dual-energy methods, material identification, object segmentation, and computer vision on radiographs.

  5. Next-Generation Metrics: Responsible Metrics & Evaluation for Open Science

    Energy Technology Data Exchange (ETDEWEB)

    Wilsdon, J.; Bar-Ilan, J.; Peters, I.; Wouters, P.

    2016-07-01

    Metrics evoke a mixed reaction from the research community. A commitment to using data to inform decisions makes some enthusiastic about the prospect of granular, real-time analysis of research and its wider impacts. Yet we only have to look at the blunt use of metrics such as journal impact factors, h-indices and grant income targets to be reminded of the pitfalls. Some of the most precious qualities of academic culture resist simple quantification, and individual indicators often struggle to do justice to the richness and plurality of research. Too often, poorly designed evaluation criteria are “dominating minds, distorting behaviour and determining careers” (Lawrence, 2007). Metrics hold real power: they are constitutive of values, identities and livelihoods. How to exercise that power to more positive ends has been the focus of several recent and complementary initiatives, including the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto and The Metric Tide (a UK government review of the role of metrics in research management and assessment). Building on these initiatives, the European Commission, under its new Open Science Policy Platform, is now looking to develop a framework for responsible metrics for research management and evaluation, which can be incorporated into the successor framework to Horizon 2020. (Author)

  6. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery

    International Nuclear Information System (INIS)

    Shiraishi, Satomi; Moore, Kevin L.; Tan, Jun; Olsen, Lindsey A.

    2015-01-01

    Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V10Gy (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through the initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QMclin − QMpred, and a coefficient of determination, R². For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are stratified based on
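
    A minimal sketch of the accuracy statistics named in the record (δQM and R²), with arrays standing in for clinical and model-predicted quality metrics:

    ```python
    import numpy as np

    def prediction_accuracy(qm_clin, qm_pred):
        """Mean and SD of deltaQM = QM_clin - QM_pred, plus R^2."""
        qm_clin, qm_pred = np.asarray(qm_clin), np.asarray(qm_pred)
        delta = qm_clin - qm_pred
        r2 = 1.0 - np.sum(delta**2) / np.sum((qm_clin - qm_clin.mean())**2)
        return delta.mean(), delta.std(ddof=1), r2
    ```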

  7. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Shiraishi, Satomi; Moore, Kevin L., E-mail: kevinmoore@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California 92093 (United States); Tan, Jun [Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas 75490 (United States); Olsen, Lindsey A. [Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri 63110 (United States)

    2015-02-15

    Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V10Gy (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through the initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QMclin − QMpred, and a coefficient of determination, R². For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are

  8. QUAST: quality assessment tool for genome assemblies.

    Science.gov (United States)

    Gurevich, Alexey; Saveliev, Vladislav; Vyahhi, Nikolay; Tesler, Glenn

    2013-04-15

    Limitations of genome sequencing techniques have led to dozens of assembly algorithms, none of which is perfect. A number of methods for comparing assemblers have been developed, but none is yet a recognized benchmark. Further, most existing methods for comparing assemblies are only applicable to new assemblies of finished genomes; the problem of evaluating assemblies of previously unsequenced species has not been adequately considered. Here, we present QUAST, a quality assessment tool for evaluating and comparing genome assemblies. This tool improves on leading assembly comparison software with new ideas and quality metrics. QUAST can evaluate assemblies both with and without a reference genome. QUAST produces many reports, summary tables and plots to help scientists in their research and in their publications. In this study, we used QUAST to compare several genome assemblers on three datasets. QUAST tables and plots for all of them are available in the Supplementary Material, and interactive versions of these reports are on the QUAST website. http://bioinf.spbau.ru/quast. Supplementary data are available at Bioinformatics online.
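
    Among the reference-free quality metrics a tool like QUAST reports, N50 is the classic one; a sketch of its standard definition (the smallest contig length at which the length-sorted contigs cover half of the total assembly length):

    ```python
    def n50(contig_lengths):
        """Smallest length L such that contigs of length >= L cover
        at least half of the total assembly length."""
        total = sum(contig_lengths)
        running = 0
        for length in sorted(contig_lengths, reverse=True):
            running += length
            if running >= total / 2:
                return length

    print(n50([100, 80, 60, 40, 20]))  # total 300; 100+80 >= 150 -> N50 = 80
    ```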

  9. Total Probability of Collision as a Metric for Finite Conjunction Assessment and Collision Risk Management

    Science.gov (United States)

    Frigm, Ryan C.; Hejduk, Matthew D.; Johnson, Lauren C.; Plakalovic, Dragan

    2015-01-01

    On-orbit collision risk is becoming an increasing mission risk to all operational satellites in Earth orbit. Managing this risk can be disruptive to mission and operations, present challenges for decision-makers, and is time-consuming for all parties involved. With the planned capability improvements in detecting and tracking smaller orbital debris and capacity improvements to routinely predict on-orbit conjunctions, this mission risk will continue to grow in terms of likelihood and effort. It is a very real possibility that the future space environment will not allow collision risk management and mission operations to be conducted in the same manner as they are today. This paper presents the concept of a finite conjunction assessment: one where each discrete conjunction is not treated separately but, rather, as a continuous event that must be managed concurrently. The paper also introduces the Total Probability of Collision as an analogous metric for finite conjunction assessment operations and provides several options for its usage in a Concept of Operations.
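
    One natural way to aggregate discrete conjunction probabilities into a single figure is the union of independent events; a hedged sketch (the paper's exact formulation is not given in this record):

    ```python
    import math

    def total_probability_of_collision(pc_list):
        """P_total = 1 - prod(1 - Pc_i) over all predicted conjunctions,
        assuming the conjunction events are independent."""
        return 1.0 - math.prod(1.0 - pc for pc in pc_list)

    print(total_probability_of_collision([1e-4, 5e-5, 2e-6]))  # ~1.5e-4
    ```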

  10. Interactions of visual attention and quality perception

    NARCIS (Netherlands)

    Redi, J.A.; Liu, H.; Zunino, R.; Heynderickx, I.E.J.R.

    2011-01-01

    Several attempts to integrate visual saliency information in quality metrics are described in the literature, albeit with contradictory results. The way saliency is integrated in quality metrics should reflect the mechanisms underlying the interaction between image quality assessment and visual attention.

  11. Subjective and Objective Quality Assessment of Single-Channel Speech Separation Algorithms

    DEFF Research Database (Denmark)

    Mowlaee, Pejman; Saeidi, Rahim; Christensen, Mads Græsbøll

    2012-01-01

    Previous studies on performance evaluation of single-channel speech separation (SCSS) algorithms mostly focused on automatic speech recognition (ASR) accuracy as their performance measure. Assessing the separated signals by metrics other than this has the benefit that the results are expected to carry over to other applications beyond ASR. In this paper, in addition to conventional speech quality metrics (PESQ and SNRloss), we also evaluate the separation systems' output using different source separation metrics: blind source separation evaluation (BSS EVAL) and the perceptual evaluation methods for audio source separation (PEASS). The results show that PESQ and PEASS quality metrics predict well the subjective quality of separated signals obtained by the separation systems. It is also observed that the short-time objective intelligibility (STOI) measure predicts the speech intelligibility results.
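
    As a small illustration of the source-separation side of this evaluation, the simplest BSS-EVAL-style figure is a signal-to-distortion ratio against the reference source; this sketch omits the projection steps of the full BSS EVAL decomposition:

    ```python
    import numpy as np

    def sdr_db(reference, estimate):
        """Signal-to-distortion ratio in dB, treating everything that is not
        the reference as distortion (no projection step, unlike full BSS EVAL)."""
        reference, estimate = np.asarray(reference), np.asarray(estimate)
        distortion = estimate - reference
        return 10.0 * np.log10(np.sum(reference**2) / np.sum(distortion**2))
    ```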

  12. Challenges, Solutions, and Quality Metrics of Personal Genome Assembly in Advancing Precision Medicine

    Directory of Open Access Journals (Sweden)

    Wenming Xiao

    2016-04-01

    Full Text Available Even though each of us shares more than 99% of the DNA sequences in our genome, there are millions of sequence codes or structures in small regions that differ between individuals, giving us different characteristics of appearance or responsiveness to medical treatments. Currently, genetic variants in diseased tissues, such as tumors, are uncovered by exploring the differences between the reference genome and the sequences detected in the diseased tissue. However, the public reference genome was derived with the DNA from multiple individuals. As a result, the reference genome is incomplete and may misrepresent the sequence variants of the general population. The more reliable solution is to compare sequences of diseased tissue with the same individual's genome sequence derived from tissue in a normal state. As the price to sequence the human genome has dropped dramatically to around $1000, documenting the personal genome for every individual has a promising future. However, de novo assembly of individual genomes at an affordable cost is still challenging. Thus, to date, only a few human genomes have been fully assembled. In this review, we introduce the history of human genome sequencing and the evolution of sequencing platforms, from Sanger sequencing to emerging “third generation sequencing” technologies. We present the currently available de novo assembly and post-assembly software packages for human genome assembly and their requirements for computational infrastructures. We recommend that a combined hybrid assembly with long and short reads would be a promising way to generate good-quality human genome assemblies, and we specify parameters for the quality assessment of assembly outcomes. We provide a perspective view of the benefit of using personal genomes as references and suggestions for obtaining a quality personal genome. Finally, we discuss the usage of the personal genome in aiding vaccine design and development, monitoring host

  13. A Metric and Workflow for Quality Control in the Analysis of Heterogeneity in Phenotypic Profiles and Screens

    Science.gov (United States)

    Gough, Albert; Shun, Tongying; Taylor, D. Lansing; Schurdak, Mark

    2016-01-01

    Heterogeneity is well recognized as a common property of cellular systems that impacts biomedical research and the development of therapeutics and diagnostics. Several studies have shown that analysis of heterogeneity gives insight into mechanisms of action of perturbagens, can be used to predict optimal combination therapies, and can quantify heterogeneity in tumors, where heterogeneity is believed to be associated with adaptation and resistance. Cytometry methods including high content screening (HCS), high throughput microscopy, flow cytometry, mass spectrometry imaging and digital pathology capture cell-level data for populations of cells. However, it is often assumed that the population response is normally distributed and therefore that the average adequately describes the results. A deeper understanding of the results of the measurements and more effective comparison of perturbagen effects requires analysis that takes into account the distribution of the measurements, i.e. the heterogeneity. However, the reproducibility of heterogeneous data collected on different days, and in different plates/slides, has not previously been evaluated. Here we show that conventional assay quality metrics alone are not adequate for quality control of the heterogeneity in the data. To address this need, we demonstrate the use of the Kolmogorov-Smirnov statistic as a metric for monitoring the reproducibility of heterogeneity in an SAR screen and describe a workflow for quality control in heterogeneity analysis. One major challenge in high throughput biology is the evaluation and interpretation of heterogeneity in thousands of samples, such as compounds in a cell-based screen. In this study we also demonstrate that three previously reported heterogeneity indices capture the shapes of the distributions and provide a means to filter and browse big data sets of cellular distributions in order to compare and identify distributions of interest. These metrics and methods are presented as a
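
    A minimal sketch of the QC idea, assuming SciPy and two replicate wells' single-cell measurements; a large KS statistic flags distributions that fail to reproduce even when plate-level means agree:

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    def heterogeneity_reproducible(cells_day1, cells_day2, threshold=0.05):
        """Two-sample Kolmogorov-Smirnov test between replicate cell-level
        distributions; p >= threshold is taken as reproducible here."""
        statistic, p_value = ks_2samp(cells_day1, cells_day2)
        return statistic, p_value, p_value >= threshold

    rng = np.random.default_rng(7)
    print(heterogeneity_reproducible(rng.normal(1.0, 0.2, 500),
                                     rng.normal(1.0, 0.2, 500)))
    ```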

  14. Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images

    Science.gov (United States)

    Kamble, V. M.; Bhurchandi, K.

    2018-03-01

    Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of the wavelet transform coefficients and then obtain a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of ±4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimum training and yield the noise estimate and image quality score. Images from the Laboratory for Image and Video Processing (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
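
    The initial estimate described above is in the spirit of the classic median-of-detail-coefficients estimator; a sketch assuming the PyWavelets package (the paper's curve-fitting refinement is omitted):

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def estimate_noise_sigma(image):
        """Robust sigma estimate from the diagonal detail subband:
        median(|cD|) / 0.6745 (Donoho-style estimator)."""
        _, (_, _, diagonal) = pywt.dwt2(np.asarray(image, dtype=float), "db1")
        return float(np.median(np.abs(diagonal)) / 0.6745)
    ```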

  15. Fifty shades of grey: Variability in metric-based assessment of surface waters using macroinvertebrates

    NARCIS (Netherlands)

    Keizer-Vlek, H.E.

    2014-01-01

    Since the introduction of the European Water Framework Directive (WFD) in 2000, every member state is obligated to assess the effects of human activities on the ecological quality status of all water bodies and to indicate the level of confidence and precision of the results provided by the

  16. Quality assessment of images displayed on LCD screen with local backlight dimming

    DEFF Research Database (Denmark)

    Mantel, Claire; Burini, Nino; Korhonen, Jari

    2013-01-01

    This paper presents a subjective experiment collecting quality assessment of images displayed on a LCD with local backlight dimming using two methodologies: absolute category ratings and paired-comparison. Some well-known objective quality metrics are then applied to the stimuli and their respect...

  17. Quality assessment in pancreatic surgery: what might tomorrow require?

    Science.gov (United States)

    Kalish, Brian T; Vollmer, Charles M; Kent, Tara S; Nealon, William H; Tseng, Jennifer F; Callery, Mark P

    2013-01-01

    The Institute of Medicine (IOM) defines healthcare quality across six domains: safety, timeliness, effectiveness, patient-centeredness, efficiency, and equitability. We asked experts in pancreatic surgery (PS) whether improved quality metrics are needed, and how they could align to contemporary IOM healthcare quality domains. We created and distributed a web-based survey to pancreatic surgeons. Respondents ranked 62 proposed PS quality metrics on level of importance (LoI) and aligned each metric to one or more IOM quality domains (multi-domain alignment (MDA)). LoI and MDA scores for a given quality metric were averaged together to render a total quality score (TQS) normalized to a 100-point scale. One hundred six surgeons (21%) completed the survey. Ninety percent of respondents indicated a definite or probable need for improved quality metrics in PS. Metrics related to mortality, to rates and severity of complications, and to access to multidisciplinary services had the highest TQS. Metrics related to patient satisfaction, costs, and patient demographics had the lowest TQS. The least represented IOM domains were equitability, efficiency, and patient-centeredness. Experts in pancreatic surgery have significant consensus on 12 proposed metrics of quality that they view as both highly important and aligned with more than one IOM healthcare quality domain.
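
    A hedged sketch of the scoring arithmetic as described (importance and multi-domain alignment averaged, normalized to 100); the survey's actual scale endpoints are not stated in the record, so the 5-point LoI scale and six IOM domains are assumptions:

    ```python
    def total_quality_score(loi, mda, loi_max=5, mda_max=6):
        """TQS: LoI and MDA averaged together, normalized to a 100-point scale.
        loi_max and mda_max are assumed endpoints, not the survey's own."""
        return 100.0 * (loi / loi_max + mda / mda_max) / 2.0

    print(total_quality_score(loi=4.6, mda=2.8))  # high importance, ~3 domains
    ```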

  18. Use of plan quality degradation to evaluate tradeoffs in delivery efficiency and clinical plan metrics arising from IMRT optimizer and sequencer compromises

    Science.gov (United States)

    Wilkie, Joel R.; Matuszak, Martha M.; Feng, Mary; Moran, Jean M.; Fraass, Benedick A.

    2013-01-01

    Purpose: Plan degradation resulting from compromises made to enhance delivery efficiency is an important consideration for intensity modulated radiation therapy (IMRT) treatment plans. IMRT optimization and/or multileaf collimator (MLC) sequencing schemes can be modified to generate more efficient treatment delivery, but the effect those modifications have on plan quality is often difficult to quantify. In this work, the authors present a method for quantitative assessment of overall plan quality degradation due to tradeoffs between delivery efficiency and treatment plan quality, illustrated using comparisons between plans developed allowing different numbers of intensity levels in IMRT optimization and/or MLC sequencing for static segmental MLC IMRT plans. Methods: A plan quality degradation method to evaluate delivery efficiency and plan quality tradeoffs was developed and used to assess planning for 14 prostate and 12 head and neck patients treated with static IMRT. Plan quality was evaluated using a physician's predetermined “quality degradation” factors for relevant clinical plan metrics associated with the plan optimization strategy. Delivery efficiency and plan quality were assessed for a range of optimization and sequencing limitations. The “optimal” (baseline) plan for each case was derived using a clinical cost function with an unlimited number of intensity levels. These plans were sequenced with a clinical MLC leaf sequencer which uses >100 segments, assuring delivered intensities to be within 1% of the optimized intensity pattern. Each patient's optimal plan was also sequenced limiting the number of intensity levels (20, 10, and 5), and then separately optimized with these same numbers of intensity levels. Delivery time was measured for all plans, and direct evaluation of the tradeoffs between delivery time and plan degradation was performed. Results: When considering tradeoffs, the optimal number of intensity levels depends on the treatment

  19. Impact of Constant Rate Factor on Objective Video Quality Assessment

    Directory of Open Access Journals (Sweden)

    Juraj Bienik

    2017-01-01

    Full Text Available This paper deals with the impact of the constant rate factor value on objective video quality assessment using PSNR and SSIM metrics. Compression efficiency of H.264 and H.265 codecs defined by different Constant Rate Factor (CRF) values was tested. The assessment was done for eight types of video sequences, depending on content, for High Definition (HD), Full HD (FHD) and Ultra HD (UHD) resolutions. Finally, the performance of both codecs, with emphasis on compression ratio and coding efficiency, was compared.
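
    For reference, a minimal PSNR implementation of the kind used in such assessments (8-bit frames assumed; SSIM requires a structural model and is omitted):

    ```python
    import numpy as np

    def psnr_db(reference, test, peak=255.0):
        """Peak signal-to-noise ratio between two same-size frames."""
        mse = np.mean((np.asarray(reference, float) - np.asarray(test, float))**2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
    ```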

  20. Assessing quality in cardiac surgery

    Directory of Open Access Journals (Sweden)

    Samer A.M. Nashef

    2005-07-01

    Full Text Available There is a strong temporal, if not causal, link between the intervention and the outcome in cardiac surgery, and therefore a link becomes established between operative mortality and the measurement of surgical performance. In Britain the law stipulates that data collected by any public body or using public funds must be made freely available. Tools and mechanisms we devise and develop are likely to form the models on which the quality of care is assessed in other surgical and perhaps medical specialties. Measuring professional performance should be done by the profession. To measure risk, a number of scores are available, as crude mortality alone is not enough. A very important benefit of assessing the risk of death is to use this knowledge in determining the indication to operate. The second benefit is in the assessment of the quality of care, as risk prediction gives a standard against which the performance of hospitals and surgeons can be measured. Peer review and “naming and shaming” are two mechanisms to monitor quality. There are two potentially damaging outcomes from the publication of results in league-table form: the first is the damage to the hospital; the second is the incentive to refuse to operate on high-risk patients. There is a real need for quality monitoring in medicine in general and in cardiac surgery in particular. Good-quality surgical work requires robust knowledge of three crucial variables: activity, risk prediction and performance. In Europe, the three major specialist societies have agreed to establish the European Cardiovascular and Thoracic Surgery Institute of Accreditation (ECTSIA). Performance monitoring is soon to become imperative. If we surgeons are not on board, we shall have no control over its final destination, and the consequences may be equally damaging to us and to our patients.
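
    One standard way to turn risk prediction into a performance standard is the observed-to-expected mortality ratio; a hedged sketch of that generic convention, not necessarily the author's exact formulation:

    ```python
    def observed_expected_ratio(observed_deaths, predicted_risks):
        """O/E ratio against a risk model: a unit performs better than the
        model predicts if the ratio is below 1."""
        return observed_deaths / sum(predicted_risks)

    # 4 deaths among 100 patients whose model-predicted risks sum to 5.6
    print(observed_expected_ratio(4, [0.056] * 100))  # ~0.71
    ```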

  1. Assessing water quality trends in catchments with contrasting hydrological regimes

    Science.gov (United States)

    Sherriff, Sophie C.; Shore, Mairead; Mellander, Per-Erik

    2016-04-01

    Environmental resources are under increasing pressure to simultaneously achieve social, economic and ecological aims. Increasing demand for food production, for example, has expanded and intensified agricultural systems globally. In turn, greater risks of diffuse pollutant delivery (suspended sediment (SS) and phosphorus (P)) from land to water, due to higher stocking densities, fertilisation rates and soil erodibility, have contributed to deterioration of the chemical and ecological quality of aquatic ecosystems. Development of sustainable and resilient management strategies for agro-ecosystems must detect and consider the impact of land use disturbance on water quality over time. However, assessment of multiple monitoring sites over a region is challenged by hydro-climatic fluctuations and the propagation of events through catchments with contrasting hydrological regimes. Simple water quality metrics, for example flow-weighted pollutant exports, have the potential to normalise the impact of catchment hydrology and better identify water quality fluctuations due to land use and short-term climate fluctuations. This paper assesses the utility of flow-weighted water quality metrics to evaluate periods and causes of critical pollutant transfer. Sub-hourly water quality (SS and P) and discharge data were collected from hydrometric monitoring stations at the outlets of five small (~10 km2) agricultural catchments in Ireland. Catchments possess contrasting land use (predominantly grassland or arable) and soil drainage (poorly, moderately or well drained) characteristics. Flow-weighted water quality metrics were calculated and evaluated according to fluctuations in source pressure and rainfall. Flow-weighted water quality metrics successfully identified fluctuations in pollutant export which could be attributed to land use changes through the agricultural calendar, i.e., groundcover fluctuations. In particular, catchments with predominantly poor or moderate soil drainage
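
    A minimal sketch of the flow-weighting idea for paired concentration/discharge series (units assumed mg/L and L/s; the paper's exact export metrics may differ):

    ```python
    def flow_weighted_mean_concentration(concentrations, flows):
        """FWMC = sum(c_i * q_i) / sum(q_i): normalizes pollutant metrics for
        discharge so catchments with contrasting hydrology can be compared."""
        return sum(c * q for c, q in zip(concentrations, flows)) / sum(flows)

    # storm samples dominate the flow-weighted value (~29 mg/L here)
    print(flow_weighted_mean_concentration([12.0, 35.0, 8.0], [40.0, 310.0, 55.0]))
    ```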

  2. Quality assessment of immobilized wastes

    International Nuclear Information System (INIS)

    Rzyski, B.M.; Suarez, A.A.

    1988-01-01

    A final repository concept for LLW and ILW is being studied in Brazil. It is thus now possible to assess in a systematic way the requirements on the waste packages in each step of treatment, conditioning, storage, transport and disposal, and the quality control procedures needed to show that the requirements are fulfilled. The methodology to perform this assessment is discussed in this paper. The results of this methodology are proposed as a basis for the licensing of the disposal of different waste packages in Brazil. (author)

  3. Assessing primary care data quality.

    Science.gov (United States)

    Lim, Yvonne Mei Fong; Yusof, Maryati; Sivasampu, Sheamini

    2018-04-16

    Purpose The purpose of this paper is to assess National Medical Care Survey data quality. Design/methodology/approach Data completeness and representativeness were computed for all observations while other data quality measures were assessed using a 10 per cent sample from the National Medical Care Survey database; i.e., 12,569 primary care records from 189 public and private practices were included in the analysis. Findings Data field completion ranged from 69 to 100 per cent. Error rates for data transfer from paper to web-based application varied between 0.5 and 6.1 per cent. Error rates arising from diagnosis and clinical process coding were higher than medication coding. Data fields that involved free text entry were more prone to errors than those involving selection from menus. The authors found that completeness, accuracy, coding reliability and representativeness were generally good, while data timeliness needs to be improved. Research limitations/implications Only data entered into a web-based application were examined. Data omissions and errors in the original questionnaires were not covered. Practical implications Results from this study provided informative and practicable approaches to improve primary health care data completeness and accuracy especially in developing nations where resources are limited. Originality/value Primary care data quality studies in developing nations are limited. Understanding errors and missing data enables researchers and health service administrators to prevent quality-related problems in primary care data.
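
    A minimal sketch of the completeness computation reported above, with a list of dicts standing in for survey records (the field name is hypothetical):

    ```python
    def field_completeness(records, field):
        """Per cent of records with a non-empty value for the given field."""
        filled = sum(1 for r in records if r.get(field) not in (None, ""))
        return 100.0 * filled / len(records)

    records = [{"diagnosis": "J18.9"}, {"diagnosis": ""}, {"diagnosis": "I10"}]
    print(field_completeness(records, "diagnosis"))  # 66.7
    ```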

  4. Compromises Between Quality of Service Metrics and Energy Consumption of Hierarchical and Flat Routing Protocols for Wireless Sensors Network

    Directory of Open Access Journals (Sweden)

    Abdelbari BEN YAGOUTA

    2016-11-01

    Full Text Available A Wireless Sensor Network (WSN) is a wireless network composed of spatially distributed, tiny autonomous nodes which cooperatively monitor physical or environmental conditions. Among the concerns of these networks is prolonging the lifetime by saving node energy. There are several protocols specially designed for WSNs based on energy conservation. However, many WSN applications require QoS (Quality of Service) criteria, such as latency, reliability and throughput. In this paper, we compare three routing protocols for wireless sensor networks, LEACH (Low Energy Adaptive Clustering Hierarchy), AODV (Ad hoc On-demand Distance Vector) and LABILE (Link Quality-Based Lexical Routing), using the Castalia simulator in terms of energy consumption, throughput, reliability and latency of packets received by the sink under different conditions, to determine the best configurations that offer the most suitable compromises between energy conservation and all QoS metrics for each routing protocol. The results show that the best configurations offering suitable compromises between energy conservation and all QoS metrics are a large number of deployed nodes with a low packet rate for LEACH (300 nodes and 1 packet/s), a medium number of deployed nodes with a low packet rate for AODV (100 nodes and 1 packet/s), and a very low node density with a low packet rate for LABILE (50 nodes and 1 packet/s).

  5. Objective Methodology to Assess Meaningful Research Productivity by Orthopaedic Residency Departments: Validation Against Widely Distributed Ranking Metrics and Published Surrogates.

    Science.gov (United States)

    Jones, Louis B; Goel, Sameer; Hung, Leroy Y; Graves, Matthew L; Spitler, Clay A; Russell, George V; Bergin, Patrick F

    2018-04-01

    The mission of any academic orthopaedic training program can be divided into 3 general areas of focus: clinical care, academic performance, and research. Clinical care is evaluated on clinical volume, patient outcomes, and patient satisfaction, and is becoming increasingly focused on data-driven quality metrics. Academic performance of a department can be used to motivate individual surgeons, but objective measures are used to define a residency program. Annual in-service examinations serve as a marker of resident knowledge base, and board pass rates are clearly scrutinized. Research productivity, however, has proven harder to objectively quantify. In an effort to improve transparency and better account for conflicts of interest, bias, and self-citation, multiple bibliometric measures have been developed. Rather than using individuals' research productivity as a surrogate for departmental research, we sought to establish an objective methodology to better assess a residency program's ability to conduct meaningful research. In this study, we describe a process to assess the number and quality of publications produced by an orthopaedic residency department. This would allow chairmen and program directors to benchmark their current production and make measurable goals for future research investment. The main goal of the benchmarking system is to create an "h-index" for residency programs. To do this, we needed to create a list of relevant articles in the orthopaedic literature. We used the Journal Citation Reports, which lists all orthopaedic journals that are given an impact factor rating every year. When we accessed the Journal Citation Reports database, there were 72 journals included in the orthopaedic literature section. To ensure only relevant, impactful journals were included, we selected journals with an impact factor greater than 0.95 and an Eigenfactor Score greater than 0.00095. After excluding journals not meeting these criteria, we were left with 45
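
    The core of the proposed benchmarking arithmetic is the standard h-index applied to a department's pooled publications; a minimal sketch with hypothetical citation counts:

    ```python
    def h_index(citation_counts):
        """Largest h such that h publications have at least h citations each."""
        ranked = sorted(citation_counts, reverse=True)
        return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

    print(h_index([25, 8, 5, 3, 3, 1]))  # h = 3 (hypothetical departmental counts)
    ```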

  6. 1995 mask industry quality assessment

    Science.gov (United States)

    Bishop, Chris; Strott, Al

    1995-12-01

    The third annual mask industry assessment will again survey various industry companies for key performance measurements in the areas of quality and delivery. This year's assessment is enhanced to include the area of safety and a further breakdown of the data into 5-inch vs. 6-inch. The data compiled include shipments, customer return rate, customer return reason, performance to schedule, plate survival yield, and throughput time (TPT) from 1988 through Q2 1995. Contributor identities remain protected by utilizing Arthur Andersen & Company to ensure participant confidentiality. Past participation represented over 75% of the total merchant and captive mask volume in the United States. Participation is expected to expand this year, with all domestic mask suppliers again invited and international suppliers invited as well.

  7. Optimization of the alpha image reconstruction. An iterative CT-image reconstruction with well-defined image quality metrics

    International Nuclear Information System (INIS)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelriess, Marc

    2017-01-01

    Optimization of the AIR algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. point spread function (PSF) or modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm uses an approach that alternates between optimization of raw-data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR, which used a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function for the AIR image. The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters that can be narrowed down to a relatively simple framework to compute high-quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6, while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their nonlinearity. A simple set of parameters for the algorithm is discussed that provides
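
    A toy illustration of the alpha-blending at the heart of AIR, reduced to two basis images; the real algorithm derives the weighting images by the alternating optimization described above, which this sketch does not implement:

    ```python
    import numpy as np

    def air_blend(alpha, basis_sharp, basis_smooth):
        """Voxelwise convex blend of two basis reconstructions with
        complementary properties (high resolution vs. low noise).
        Linearity in the basis images is what keeps PSF/MTF well defined."""
        alpha = np.clip(alpha, 0.0, 1.0)
        return alpha * basis_sharp + (1.0 - alpha) * basis_smooth
    ```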

  8. Optimization of the alpha image reconstruction. An iterative CT-image reconstruction with well-defined image quality metrics

    Energy Technology Data Exchange (ETDEWEB)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelriess, Marc [German Cancer Research Center, Heidelberg (Germany).

    2017-10-01

    Optimization of the AIR algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. point spread function (PSF) or modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm uses an approach that alternates between optimization of raw-data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR, which used a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function for the AIR image. The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters that can be narrowed down to a relatively simple framework to compute high-quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6, while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their nonlinearity. A simple set of parameters for the algorithm is discussed that provides

  9. Assessing the Greenness of Chemical Reactions in the Laboratory Using Updated Holistic Graphic Metrics Based on the Globally Harmonized System of Classification and Labeling of Chemicals

    Science.gov (United States)

    Ribeiro, M. Gabriela T. C.; Yunes, Santiago F.; Machado, Adelio A. S. C.

    2014-01-01

    Two graphic holistic metrics for assessing the greenness of synthesis, the "green star" and the "green circle", have been presented previously. These metrics assess the greenness by the degree of accomplishment of each of the 12 principles of green chemistry that apply to the case under evaluation. The criteria for assessment…

  10. lessons and challenges from software quality assessment

    African Journals Online (AJOL)

    DJFLEX

    ASSESSMENT: THE CASE OF SPACE SYSTEMS SOFTWARE. ... KEYWORDS: Software, Software Quality, Quality Standard, Characteristics, ... and communication, etc.

  11. Simulation and assessment of urbanization impacts on runoff metrics: insights from landuse changes

    Science.gov (United States)

    Zhang, Yongyong; Xia, Jun; Yu, Jingjie; Randall, Mark; Zhang, Yichi; Zhao, Tongtiegang; Pan, Xingyao; Zhai, Xiaoyan; Shao, Quanxi

    2018-05-01

    Urbanization-induced landuse changes alter runoff regimes in complex ways. In this study, a detailed investigation of urbanization impacts on runoff regimes is provided by using multiple runoff metrics and considering landuse dynamics. A catchment hydrological model is modified by coupling a simplified flow routing module of the urban drainage system and landuse dynamics to improve long-term urban runoff simulations. Moreover, a multivariate statistical approach is adopted to mine the spatial variations of runoff metrics so as to further identify critical impact factors of landuse changes. The Qing River catchment, a peri-urban catchment in the Beijing metropolitan area, is selected as our study region. Results show that: (1) dryland agriculture decreased from 13.9% to 1.5% of the total catchment area in the years 2000-2015, while the percentages of impervious surface, forest and grass increased from 63.5% to 72.4%, 13.5% to 16.6% and 5.1% to 6.5%, respectively. The most dramatic landuse changes occur in the middle and downstream regions; (2) the combined landuse changes do not alter the average flow metrics obviously at the catchment outlet, but slightly increase the high flow metrics, particularly the extreme high flows; (3) the impacts on runoff metrics in the sub-catchments are more obvious than those at the catchment outlet. For the average flow metrics, the most impacted metric is the runoff depth in the dry season (October-May), with a relative change from -10.9% to 11.6%, and the critical impact factors are the impervious surface and grass. For the high flow metrics, the extreme high flow depth is increased most significantly, with a relative change from -0.6% to 10.5%, and the critical impact factors are the impervious surface and dryland agriculture; (4) the runoff depth metrics in the sub-catchments are increased because of the landuse changes from dryland agriculture to impervious surface, but are decreased because of the

  12. No-reference visual quality assessment for image inpainting

    Science.gov (United States)

    Voronin, V. V.; Frantc, V. A.; Marchuk, V. I.; Sherstobitov, A. I.; Egiazarian, K.

    2015-03-01

    Inpainting has received a lot of attention in recent years, and quality assessment is an important task in evaluating different image reconstruction approaches. In many cases inpainting methods introduce blur in sharp transitions and image contours when recovering large areas of missing pixels, and often fail to recover curvy boundary edges. Quantitative metrics for inpainting results currently do not exist, and researchers use human comparisons to evaluate their methodologies and techniques. Most objective quality assessment methods rely on a reference image, which is often not available in inpainting applications. Usually researchers resort to subjective quality assessment by human observers, a difficult and time-consuming procedure. This paper focuses on a machine learning approach to no-reference visual quality assessment for image inpainting based on properties of the human visual system. Our method is based on the observation that Local Binary Patterns describe local structural information of the image well. We use support vector regression, trained on images assessed by humans, to predict the perceived quality of inpainted images. We demonstrate how our predicted quality value correlates with qualitative opinion in a human observer study. Results are shown on a human-scored dataset for different inpainting methods.
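
    A minimal sketch of the pipeline as described (LBP histograms as features, support vector regression as the learner), assuming scikit-image and scikit-learn; a human-scored training set is required:

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVR

    def lbp_histogram(gray_image, points=8, radius=1):
        """Uniform LBP histogram as a structural feature vector."""
        lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2),
                               density=True)
        return hist

    # train on human-scored inpainted images, then score a new one:
    # svr = SVR(kernel="rbf").fit([lbp_histogram(im) for im in train_images],
    #                             human_scores)
    # predicted_quality = svr.predict([lbp_histogram(new_image)])
    ```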

  13. Using spatial metrics to assess the efficacy of biodiversity conservation within the Romanian Carpathian Convention area

    Directory of Open Access Journals (Sweden)

    Petrişor Alexandru-Ionuţ

    2017-06-01

    Full Text Available The alpine region is of crucial importance for the European Union; as a result, the Carpathian Convention aims at its sustainable development. Since sustainability also implies conservation through natural protected areas, intended to include regions representative of the national biogeographical space, this article assesses the efficiency of conservation. The methodology consisted of applying spatial metrics to Romanian and European data on natural protected areas, land cover and use, and their transitional dynamics. The findings show a very good coverage of the Alpine biogeographical region (98% included in the Convention area, and 43% of it protected within the Convention area) and of the ecological region of Carpathian montane coniferous forests (88% included in the Convention area, and 42% of it protected within the Convention area). The dominant land cover is forest (63% within the Convention area, and 70% of the total protected area). The main transitional dynamics are deforestation (covering 50% of all changed area within the Convention area and 46% of the changed area within its protected area) and forestations, including afforestation, reforestation and colonization of abandoned agricultural areas by forest vegetation (covering 44% of all changed area within the Convention area and 51% of the changed area within its protected area), during 1990-2000, and deforestation (covering 97% of all changed area within the Convention area and 99% of the changed area within its protected area) during 1990-2000. The results suggest that the coverage of biogeographical and ecological zones is good, especially for the most relevant ones, but deforestation is a serious issue, regardless of whether it occurs before or after protection status is achieved.
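
    The coverage percentages reported above are simple area-overlap statistics; one way such a figure can be computed with geopandas is sketched below. The file names and the equal-area CRS choice are illustrative assumptions, not details from the study.

        import geopandas as gpd

        # Hypothetical inputs: the Alpine biogeographical region and the
        # Carpathian Convention area, reprojected to a European equal-area CRS.
        region = gpd.read_file("alpine_region.shp").to_crs(epsg=3035)
        convention = gpd.read_file("convention_area.shp").to_crs(epsg=3035)

        overlap = gpd.overlay(region, convention, how="intersection")
        pct_covered = 100 * overlap.geometry.area.sum() / region.geometry.area.sum()
        print(f"{pct_covered:.0f}% of the region lies inside the Convention area")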

  14. qcML : an exchange format for quality control metrics from mass spectrometry experiments

    NARCIS (Netherlands)

    Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W P|info:eu-repo/dai/nl/31093205X; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A; Kelstrup, Christian D; Gatto, Laurent; van Breukelen, Bas|info:eu-repo/dai/nl/244219087; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S; Olsen, Jesper V; Heck, Albert J R|info:eu-repo/dai/nl/105189332; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart

    Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data

  15. Quality assessment of digital annotated ECG data from clinical trials by the FDA ECG Warehouse.

    Science.gov (United States)

    Sarapa, Nenad

    2007-09-01

    The FDA mandates that digital electrocardiograms (ECGs) from 'thorough' QTc trials be submitted to the ECG Warehouse in Health Level 7 extensible markup language (XML) format with annotated onset and offset points of waveforms. The FDA did not disclose the exact Warehouse metrics and minimal acceptable quality standards. The author describes the Warehouse scoring algorithms and metrics used by the FDA, points out ways to improve FDA review, and suggests Warehouse benefits for pharmaceutical sponsors. The Warehouse ranks individual ECGs according to their score on each quality metric and produces histogram distributions with Warehouse-specific thresholds that identify ECGs of questionable quality. Automatic Warehouse algorithms assess the quality of QT annotation and the duration of manual QT measurement by the central ECG laboratory.

  16. In Search of Helpful Group Awareness Metrics in Closed-Type Formative Assessment Tools

    DEFF Research Database (Denmark)

    Papadopoulos, Pantelis M.; Natsis, Antonios; Obwegeser, Nikolaus

    2017-01-01

    For 4 weeks, a total of 91 sophomore students started their classes with a short multiple-choice quiz. The students had to answer the quiz individually, view feedback on class activity, revise their initial answers, and discuss the correct answers with the teacher. The percentage of students...... that selected each question choice and their self-reported confidence and preparation were the three metrics included in the feedback. Results showed that students were relying mainly on the percentage metric. However, statistical analysis also revealed a significant main effect for confidence and preparation...

  17. Selection of metrics based on the seagrass Cymodocea nodosa and development of a biotic index (CYMOX) for assessing ecological status of coastal and transitional waters

    Science.gov (United States)

    Oliva, Silvia; Mascaró, Oriol; Llagostera, Izaskun; Pérez, Marta; Romero, Javier

    2012-12-01

    Bioindicators, based on a large variety of organisms, have been increasingly used in the assessment of the status of aquatic systems. In marine coastal waters, seagrasses have shown great potential as bioindicator organisms, probably owing to both their environmental sensitivity and the large amount of knowledge available. However, as far as we are aware, only little attention has been paid to euryhaline species suitable for biomonitoring both transitional and marine waters. With the aim of contributing to this expanding field and providing new and useful tools for managers, we develop here a multi-bioindicator index based on the seagrass Cymodocea nodosa. We first compiled from the literature a suite of 54 candidate metrics, i.e., measurable attributes of the organism or community concerned that adequately reflect properties of the environment, obtained from C. nodosa and its associated ecosystem and putatively responding to environmental deterioration. We then evaluated them empirically, obtaining a complete dataset on these metrics along a gradient of anthropogenic disturbance. Using this dataset, we selected the metrics to construct the index, applying, successively: (i) ANOVA, to assess their capacity to discriminate among sites of different environmental conditions; (ii) PCA, to check the existence of a common pattern among the metrics reflecting the environmental gradient; and (iii) feasibility and cost-effectiveness criteria. Finally, 10 metrics (out of the 54 tested), spanning the physiological (δ15N, δ34S, % N, % P content of rhizomes), individual (shoot size), population (root weight ratio) and community (epiphyte load) organisation levels, plus some metallic pollution descriptors (Cd, Cu and Zn content of rhizomes), were retained and integrated into a single index (CYMOX) using the scores of the sites on the first axis of a PCA. These scores were reduced to a 0-1 (Ecological Quality Ratio) scale by referring the values to the
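
    The final integration step (site scores on the first PCA axis, rescaled to a 0-1 Ecological Quality Ratio) can be sketched as follows. The abstract is truncated before the rescaling reference values are given, so a plain min-max rescaling is assumed here, and metrics_matrix (sites x 10 retained metrics) is hypothetical data.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # metrics_matrix: rows = sites, columns = the 10 retained metrics.
        z = StandardScaler().fit_transform(metrics_matrix)
        scores = PCA(n_components=1).fit_transform(z).ravel()  # PC1 site scores
        eqr = (scores - scores.min()) / (scores.max() - scores.min())  # 0-1 EQR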

  18. THE MAQC PROJECT: ESTABLISHING QC METRICS AND THRESHOLDS FOR MICROARRAY QUALITY CONTROL

    Science.gov (United States)

    Microarrays represent a core technology in pharmacogenomics and toxicogenomics; however, before this technology can successfully and reliably be applied in clinical practice and regulatory decision-making, standards and quality measures need to be developed. The Microarray Qualit...

  19. Elliptical local vessel density: a fast and robust quality metric for retinal images

    OpenAIRE

    Giancardo, L.; Abramoff, M.D.; Chaum, E.; Karnowski, T.P.; Meriaudeau, F.; Tobin, K.W.

    2008-01-01

    A great effort of the research community is geared towards the creation of an automatic screening system able to promptly detect diabetic retinopathy with the use of fundus cameras. In addition, there are some documented approaches for automatically judging the image quality. We propose a new set of features independent of field of view or resolution to describe the morphology of the patient's vessels. Our initial results suggest that these features can be used to estimate the image quality i...

  20. Assessing the quality of restored images in optical long-baseline interferometry

    Science.gov (United States)

    Gomes, Nuno; Garcia, Paulo J. V.; Thiébaut, Éric

    2017-03-01

    Assessing the quality of aperture synthesis maps is relevant for benchmarking image reconstruction algorithms, for the scientific exploitation of data from optical long-baseline interferometers, and for the design and upgrade of new and existing interferometric imaging facilities. Although metrics have been proposed in these contexts, no systematic study has been conducted on the selection of a robust metric for quality assessment. This article addresses the question: what is the best metric for assessing the quality of a reconstructed image? It starts by considering several metrics and selecting a few based on general properties. Then, a variety of image reconstruction cases are considered. The observational scenarios are phase closure and phase referencing at the Very Large Telescope Interferometer (VLTI), for combinations of two, three, four and six telescopes. End-to-end image reconstruction is accomplished with the MIRA software, and several merit functions are put to the test. It is found that convolution by an effective point spread function is required for proper image quality assessment. The effective angular resolution of the images is superior to the naive expectation based on the maximum frequency sampled by the array, owing to the prior information used in the aperture synthesis algorithm and to the nature of the objects considered. The ℓ1-norm is the most robust of all considered metrics because, being linear, it is less sensitive to image smoothing at high regularization levels. For the cases considered, this metric allows the implementation of automatic quality assessment of reconstructed images, with a performance similar to human selection.
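
    A minimal sketch of the recommended comparison, an l1-norm computed after convolving the true object with an effective point spread function, is given below; the Gaussian PSF and its width are assumptions standing in for the array-specific effective PSF.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def l1_metric(reconstructed, true_image, psf_sigma=1.5):
            # Convolve the ground truth with an effective PSF before comparing,
            # as the study finds is required for a fair quality assessment.
            reference = gaussian_filter(true_image, sigma=psf_sigma)
            return np.sum(np.abs(reconstructed - reference)) / reconstructed.size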

  1. Color Image Quality Assessment Based on CIEDE2000

    Directory of Open Access Journals (Sweden)

    Yang Yang

    2012-01-01

    Full Text Available Combining the CIEDE2000 color difference formula and the printing industry standard for visual verification, we present an objective color image quality assessment method correlated with subjective visual perception. An objective score conforming to subjective perception (OSCSP Q) is proposed to directly reflect subjective visual perception. In addition, we present a general method to calibrate the correction factors of the color difference formula under real experimental conditions. Our experimental results show that the present DE2000-based metric can be consistent with the human visual system in general application environments.
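
    The core of such a metric, pooling per-pixel CIEDE2000 colour differences between a reference and a distorted image, can be sketched with scikit-image as below; the paper's OSCSP Q mapping and its experimentally calibrated correction factors are not reproduced here.

        import numpy as np
        from skimage.color import rgb2lab, deltaE_ciede2000

        def mean_de2000(reference_rgb, distorted_rgb):
            # Average CIEDE2000 colour difference over all pixels (lower = better).
            de = deltaE_ciede2000(rgb2lab(reference_rgb), rgb2lab(distorted_rgb))
            return float(np.mean(de))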

  2. Accounting for no net loss: A critical assessment of biodiversity offsetting metrics and methods.

    Science.gov (United States)

    Carreras Gamarra, Maria Jose; Lassoie, James Philip; Milder, Jeffrey

    2018-08-15

    Biodiversity offset strategies are based on the explicit calculation of both the losses and the gains necessary to establish ecological equivalence between impact and offset areas. Given the importance of quantifying biodiversity values, various accounting methods and metrics are continuously being developed and tested for this purpose. Considering the wide array of alternatives, selecting an appropriate one for a specific project can be not only challenging but also crucial; accounting methods can strongly influence the biodiversity outcomes of an offsetting strategy, and if they are not well suited to the context and values being offset, a no-net-loss outcome might not be delivered. To date there has been no systematic review or comparative classification of the available biodiversity accounting alternatives that aims at facilitating metric selection, and no tools that guide decision-makers through such a complex process. We fill this gap by developing a set of analyses to support (i) identifying the spectrum of available alternatives, (ii) understanding the characteristics of each and, ultimately, (iii) making the most sensible and sound decision about which one to implement. The metric menu, scoring matrix, and decision tree developed can be used by biodiversity offsetting practitioners to help select an existing metric, and thus achieve successful outcomes that advance the goal of no net loss of biodiversity. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Advanced Metrics for Assessing Holistic Care: The “Epidaurus 2” Project

    Science.gov (United States)

    Foote, Frederick O; Benson, Herbert; Berger, Ann; Berman, Brian; DeLeo, James; Deuster, Patricia A.; Lary, David J; Silverman, Marni N.; Sternberg, Esther M

    2018-01-01

    In response to the challenge of military traumatic brain injury and posttraumatic stress disorder, the US military developed a wide range of holistic care modalities at the new Walter Reed National Military Medical Center, Bethesda, MD, from 2001 to 2017, guided by civilian expert consultation via the Epidaurus Project. These projects spanned a range from healing buildings to wellness initiatives and healing through nature, spirituality, and the arts. The next challenge was to develop whole-body metrics to guide the use of these therapies in clinical care. Under the “Epidaurus 2” Project, a national search produced 5 advanced metrics for measuring whole-body therapeutic effects: genomics, integrated stress biomarkers, language analysis, machine learning, and “Star Glyphs.” This article describes the metrics, their current use in guiding holistic care at Walter Reed, and their potential for operationalizing personalized care, patient self-management, and the improvement of public health. Development of these metrics allows the scientific integration of holistic therapies with organ-system-based care, expanding the powers of medicine. PMID:29497586

  4. Higher Education Quality Assessment Model: Towards Achieving Educational Quality Standard

    Science.gov (United States)

    Noaman, Amin Y.; Ragab, Abdul Hamid M.; Madbouly, Ayman I.; Khedra, Ahmed M.; Fayoumi, Ayman G.

    2017-01-01

    This paper presents a developed higher education quality assessment model (HEQAM) that can be applied for enhancement of university services. This is because there is no universal unified quality standard model that can be used to assess the quality criteria of higher education institutes. The analytical hierarchy process is used to identify the…
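
    The abstract is truncated, but the analytic hierarchy process it names derives criterion weights from a pairwise comparison matrix; a generic sketch with hypothetical judgment values (Saaty's 1-9 scale) follows.

        import numpy as np

        # A[i, j] = how much more important criterion i is than criterion j.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()                         # criterion priorities
        ci = (eigvals[k].real - len(A)) / (len(A) - 1)   # consistency index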

  5. Measuring scientific impact beyond academia: An assessment of existing impact metrics and proposed improvements.

    Science.gov (United States)

    Ravenscroft, James; Liakata, Maria; Clare, Amanda; Duma, Daniel

    2017-01-01

    How does scientific research affect the world around us? Being able to answer this question is of great importance in order to appropriately channel efforts and resources in science. The impact of scientists in academia is currently measured by citation-based metrics such as the h-index, the i-index and citation counts. These academic metrics aim to represent the dissemination of knowledge among scientists rather than the impact of the research on the wider world. In this work we are interested in measuring scientific impact beyond academia: on the economy, society, health and legislation (comprehensive impact). Indeed, scientists are asked to demonstrate evidence of such comprehensive impact by authoring case studies in the context of the Research Excellence Framework (REF). We first investigate the extent to which existing citation-based metrics can be indicative of comprehensive impact. We have collected all recent REF impact case studies from 2014 and we have linked these to papers in citation networks that we constructed and derived from CiteSeerX, arXiv and PubMed Central using a number of text processing and information retrieval techniques. We have demonstrated that existing citation-based metrics for impact measurement do not correlate well with REF impact results. We also consider metrics of online attention surrounding scientific works, such as those provided by the Altmetric API. We argue that in order to evaluate wider non-academic impact we need to mine information from a much wider set of resources, including social media posts, press releases, news articles and political debates stemming from academic work. We also provide our data as a free and reusable collection for further analysis, including the PubMed citation network and the correspondence between REF case studies, grant applications and the academic literature.

  6. Measuring scientific impact beyond academia: An assessment of existing impact metrics and proposed improvements.

    Directory of Open Access Journals (Sweden)

    James Ravenscroft

    Full Text Available How does scientific research affect the world around us? Being able to answer this question is of great importance in order to appropriately channel efforts and resources in science. The impact of scientists in academia is currently measured by citation-based metrics such as the h-index, the i-index and citation counts. These academic metrics aim to represent the dissemination of knowledge among scientists rather than the impact of the research on the wider world. In this work we are interested in measuring scientific impact beyond academia: on the economy, society, health and legislation (comprehensive impact). Indeed, scientists are asked to demonstrate evidence of such comprehensive impact by authoring case studies in the context of the Research Excellence Framework (REF). We first investigate the extent to which existing citation-based metrics can be indicative of comprehensive impact. We have collected all recent REF impact case studies from 2014 and we have linked these to papers in citation networks that we constructed and derived from CiteSeerX, arXiv and PubMed Central using a number of text processing and information retrieval techniques. We have demonstrated that existing citation-based metrics for impact measurement do not correlate well with REF impact results. We also consider metrics of online attention surrounding scientific works, such as those provided by the Altmetric API. We argue that in order to evaluate wider non-academic impact we need to mine information from a much wider set of resources, including social media posts, press releases, news articles and political debates stemming from academic work. We also provide our data as a free and reusable collection for further analysis, including the PubMed citation network and the correspondence between REF case studies, grant applications and the academic literature.

  7. Impact of Fellowship Training Level on Colonoscopy Quality and Efficiency Metrics.

    Science.gov (United States)

    Bitar, Hussein; Zia, Hassaan; Bashir, Muhammad; Parava, Pratyusha; Hanafi, Muhammad; Tierney, William; Madhoun, Mohammad

    2018-04-18

    Previous studies have described variable effects of fellow involvement on the adenoma detection rate (ADR), but few have stratified this effect by level of training. We aimed to evaluate the "fellow effect" on multiple procedural metrics, including a newly defined adenoma management efficiency index, which may have a role in documenting colonoscopy proficiency for trainees. We also describe the impact of level of training on moderate sedation use. We performed a retrospective review of 2024 patients (mean age 60.9 ± 10 years; 94% male) who underwent outpatient colonoscopy between June 2012 and December 2014 at our Veterans Affairs Medical Center. Colonoscopies were divided into 5 groups. The first 2 groups were first year fellows in the first 6 months and the last 6 months of the training year. Second year fellows, third year fellows and attending-only procedures accounted for one group each. We collected data on doses of sedatives used, frequency of adjunctive agent use, and procedural times, as well as location, size and histology of polyps. We defined the adenoma management efficiency index as the average time required per adenoma resected during withdrawal. 1675 colonoscopies involved a fellow; 349 were performed by the attending alone. There was no difference in ADR between fellows according to level of training (P=0.8), or between fellows and attending-only procedures (P=0.67). Procedural times decreased consistently during training, and declined further for attending-only procedures. This translated into improvement in the adenoma management efficiency index (fellow groups by ascending level of training: 23.5, 18.3, 13.7 and 13.4 minutes, vs 11.7 minutes for the attending group). Efficiency of detecting and resecting polyps improved throughout training without reaching attending level. Fellow involvement led to greater use of moderate sedation, which may relate to longer procedure duration and an evolving experience in endoscopic technique.

  8. Compression-based classification of biological sequences and structures via the Universal Similarity Metric: experimental assessment

    Directory of Open Access Journals (Sweden)

    Manzini Giovanni

    2007-07-01

    Full Text Available Abstract Background Similarity of sequences is a key mathematical notion for Classification and Phylogenetic studies in Biology. It is currently primarily handled using alignments. However, the alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined only to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov Complexity and universality is its most novel striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness are tested on various data sets, yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM and mostly at a qualitative level, no comparison among UCD, NCD and CD is available and no comparison of USM with existing methods, both based on alignments and not, seems to be available. Results We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to Molecular Biology. This offers the first systematic and quantitative experimental assessment of this methodology, which naturally complements the many theoretical and the preliminary experimental results available. Moreover, we compare the USM methodology both with methods based on alignments and not. We may group our experiments into two sets. The first one, performed via ROC

  9. Compression-based classification of biological sequences and structures via the Universal Similarity Metric: experimental assessment.

    Science.gov (United States)

    Ferragina, Paolo; Giancarlo, Raffaele; Greco, Valentina; Manzini, Giovanni; Valiente, Gabriel

    2007-07-13

    Similarity of sequences is a key mathematical notion for Classification and Phylogenetic studies in Biology. It is currently primarily handled using alignments. However, the alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined only to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov Complexity and universality is its most novel striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness are tested on various data sets, yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM and mostly at a qualitative level, no comparison among UCD, NCD and CD is available and no comparison of USM with existing methods, both based on alignments and not, seems to be available. We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to Molecular Biology. This offers the first systematic and quantitative experimental assessment of this methodology, which naturally complements the many theoretical and the preliminary experimental results available. Moreover, we compare the USM methodology both with methods based on alignments and not. We may group our experiments into two sets. The first one, performed via ROC (Receiver Operating Curve) analysis, aims at
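
    Of the three USM approximations, NCD is the most widely used and is straightforward to sketch with a general-purpose compressor; zlib below stands in for any of the 25 compressors tested, and the formula follows the standard NCD definition.

        import zlib

        def ncd(x: bytes, y: bytes) -> float:
            # Normalized Compression Dissimilarity: near 0 for similar strings.
            cx, cy = len(zlib.compress(x, 9)), len(zlib.compress(y, 9))
            cxy = len(zlib.compress(x + y, 9))
            return (cxy - min(cx, cy)) / max(cx, cy)

        print(ncd(b"ACGTACGTACGT" * 50, b"ACGTACGAACGT" * 50))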

  10. NMF-Based Image Quality Assessment Using Extreme Learning Machine.

    Science.gov (United States)

    Wang, Shuigen; Deng, Chenwei; Lin, Weisi; Huang, Guang-Bin; Zhao, Baojun

    2017-01-01

    Numerous state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage process: distortion description followed by distortion effects pooling. As for the first stage, the distortion descriptors or measurements are expected to be effective representatives of human visual variations, while the second stage should well express the relationship among quality descriptors and the perceptual visual quality. However, most of the existing quality descriptors (e.g., luminance, contrast, and gradient) do not seem to be consistent with human perception, and the effects pooling is often done in ad-hoc ways. In this paper, we propose a novel full-reference IQA metric. It applies non-negative matrix factorization (NMF) to measure image degradations by making use of the parts-based representation of NMF. On the other hand, a new machine learning technique [extreme learning machine (ELM)] is employed to address the limitations of the existing pooling techniques. Compared with neural networks and support vector regression, ELM can achieve higher learning accuracy with faster learning speed. Extensive experimental results demonstrate that the proposed metric has better performance and lower computational complexity in comparison with the relevant state-of-the-art approaches.
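
    A compact sketch of the two stages, parts-based features from NMF followed by an ELM-style regressor (random hidden layer with a ridge-regularized linear readout), is given below; the patch matrix and subjective scores are hypothetical training data, and this is a simplification of, not a substitute for, the authors' full-reference pipeline.

        import numpy as np
        from sklearn.decomposition import NMF

        def elm_fit(X, y, hidden=200, ridge=1e-3, seed=0):
            # Extreme learning machine: fixed random hidden layer, solve readout.
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(X.shape[1], hidden))
            b = rng.normal(size=hidden)
            H = np.tanh(X @ W + b)
            beta = np.linalg.solve(H.T @ H + ridge * np.eye(hidden), H.T @ y)
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta

        # patches: non-negative image patches (rows); scores: subjective ratings.
        nmf = NMF(n_components=16, init="nndsvda", max_iter=500)
        features = nmf.fit_transform(patches)    # parts-based degradation features
        W, b, beta = elm_fit(features, scores)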

  11. Elliptical Local Vessel Density: a Fast and Robust Quality Metric for Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Chaum, Edward [ORNL; Karnowski, Thomas Paul [ORNL; Meriaudeau, Fabrice [ORNL; Tobin Jr, Kenneth William [ORNL; Abramoff, M.D. [University of Iowa

    2008-01-01

    A great effort of the research community is geared towards the creation of an automatic screening system able to promptly detect diabetic retinopathy with the use of fundus cameras. In addition, there are some documented approaches to the problem of automatically judging the image quality. We propose a new set of features, independent of field of view or resolution, to describe the morphology of the patient's vessels. Our initial results suggest that they can be used to estimate the image quality in a time one order of magnitude shorter than previous techniques.

  12. Habitat connectivity as a metric for aquatic microhabitat quality: Application to Chinook salmon spawning habitat

    Science.gov (United States)

    Ryan Carnie; Daniele Tonina; Jim McKean; Daniel Isaak

    2016-01-01

    Quality of fish habitat at the scale of a single fish, at the metre resolution, which we defined here as microhabitat, has been primarily evaluated on short reaches, and their results have been extended through long river segments with methods that do not account for connectivity, a measure of the spatial distribution of habitat patches. However, recent...

  13. A metrics-based comparison of secondary user quality between iOS and Android

    NARCIS (Netherlands)

    T. Amman

    2014-01-01

    Native mobile applications are gaining popularity in the commercial market. No other economic sector grows as fast. A lot of economic research has been done in this sector, but there is very little research that deals with qualities for mobile application developers. This paper

  14. [Establishing IAQ Metrics and Baseline Measures.] "Indoor Air Quality Tools for Schools" Update #20

    Science.gov (United States)

    US Environmental Protection Agency, 2009

    2009-01-01

    This issue of "Indoor Air Quality Tools for Schools" Update ("IAQ TfS" Update) contains the following items: (1) News and Events; (2) IAQ Profile: Establishing Your Baseline for Long-Term Success (Feature Article); (3) Insight into Excellence: Belleville Township High School District #201, 2009 Leadership Award Winner; and (4) Have Your Questions…

  15. Tracker Performance Metric

    National Research Council Canada - National Science Library

    Olson, Teresa; Lee, Harry; Sanders, Johnnie

    2002-01-01

    .... We have developed the Tracker Performance Metric (TPM) specifically for this purpose. It was designed to measure the output performance, on a frame-by-frame basis, using its output position and quality...

  16. Blind image quality assessment based on aesthetic and statistical quality-aware features

    Science.gov (United States)

    Jenadeleh, Mohsen; Masaeli, Mohammad Masood; Moghaddam, Mohsen Ebrahimi

    2017-07-01

    The main goal of image quality assessment (IQA) methods is the emulation of human perceptual image quality judgments. Therefore, the correlation between the objective scores of these methods and human perceptual scores is considered their performance metric. Human judgment of image quality implicitly includes many factors, such as aesthetics, semantics, context, and various types of visual distortion. The main idea of this paper is to use a host of features that are commonly employed in image aesthetics assessment in order to improve the accuracy of blind image quality assessment (BIQA) methods. We propose an approach that enriches the features of BIQA methods by integrating a host of image aesthetics features with features of natural image statistics derived from multiple domains. The proposed features have been used to augment five different state-of-the-art BIQA methods, which use natural scene statistics features. Experiments were performed on seven benchmark image quality databases. The experimental results showed significant improvement in the accuracy of the methods.

  17. The Use of Performance Metrics for the Assessment of Safeguards Effectiveness at the State Level

    Energy Technology Data Exchange (ETDEWEB)

    Bachner, K. M.; George Anzelon, Lawrence Livermore National Laboratory, Livermore, CA; Yana Feldman, Lawrence Livermore National Laboratory, Livermore, CA; Mark Goodman, Department of State, Washington, DC; Dunbar Lockwood, National Nuclear Security Administration, Washington, DC; Jonathan B. Sanborn, JBS Consulting, LLC, Arlington, VA.

    2016-07-24

    In the ongoing evolution of International Atomic Energy Agency (IAEA) safeguards at the state level, many safeguards implementation principles have been emphasized: effectiveness, efficiency, non-discrimination, transparency, focus on sensitive materials, centrality of material accountancy for detecting diversion, independence, objectivity, and grounding in technical considerations, among others. These principles are subject to differing interpretations and prioritizations and sometimes conflict. This paper is an attempt to develop metrics and address some of the potential tradeoffs inherent in choices about how various safeguards policy principles are implemented. The paper (1) carefully defines effective safeguards, including in the context of safeguards approaches that take account of the range of state-specific factors described by the IAEA Secretariat and taken note of by the Board in September 2014, and (2) makes use of performance metrics to help document, and to make transparent, how safeguards implementation would meet such effectiveness requirements.

  18. A Web-Based Graphical Food Frequency Assessment System: Design, Development and Usability Metrics.

    Science.gov (United States)

    Franco, Rodrigo Zenun; Alawadhi, Balqees; Fallaize, Rosalind; Lovegrove, Julie A; Hwang, Faustina

    2017-05-08

    Food frequency questionnaires (FFQs) are well established in the nutrition field, but there remain important questions around how to develop online tools in a way that can facilitate wider uptake. Also, FFQ user acceptance and evaluation have not been investigated extensively. This paper presents a Web-based graphical food frequency assessment system that addresses challenges of reproducibility, scalability, mobile friendliness, security, and usability and also presents the utilization metrics and user feedback from a deployment study. The application design employs a single-page application Web architecture with back-end services (database, authentication, and authorization) provided by Google Firebase's free plan. Its design and responsiveness take advantage of the Bootstrap framework. The FFQ was deployed in Kuwait as part of the EatWellQ8 study during 2016. The EatWellQ8 FFQ contains 146 food items (including drinks). Participants were recruited in Kuwait without financial incentive. Completion time was based on browser timestamps and usability was measured using the System Usability Scale (SUS), scoring between 0 and 100. Products with a SUS higher than 70 are considered to be good. A total of 235 participants created accounts in the system, and 163 completed the FFQ. Of those 163 participants, 142 reported their gender (93 female, 49 male) and 144 reported their date of birth (mean age of 35 years, range from 18-65 years). The mean completion time for all FFQs (n=163), excluding periods of interruption, was 14.2 minutes (95% CI 13.3-15.1 minutes). Female participants (n=93) completed in 14.1 minutes (95% CI 12.9-15.3 minutes) and male participants (n=49) completed in 14.3 minutes (95% CI 12.6-15.9 minutes). Participants using laptops or desktops (n=69) completed the FFQ in an average of 13.9 minutes (95% CI 12.6-15.1 minutes) and participants using smartphones or tablets (n=91) completed in an average of 14.5 minutes (95% CI 13.2-15.8 minutes). The median SUS
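
    SUS scoring itself is standard and easy to reproduce: odd-numbered items are positively worded (each contributes its score minus 1), even-numbered items are negatively worded (each contributes 5 minus its score), and the summed adjustments are multiplied by 2.5 to give a 0-100 score. A sketch:

        def sus_score(responses):
            # responses: ten 1-5 ratings for the ten SUS items, in order.
            assert len(responses) == 10
            adjusted = [(r - 1) if i % 2 == 0 else (5 - r)  # i=0 is item 1 (odd)
                        for i, r in enumerate(responses)]
            return 2.5 * sum(adjusted)

        print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0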

  19. Trends in Surface Level Ozone Observations from Human-health Relevant Metrics: Results from the Tropospheric Ozone Assessment Report (TOAR)

    Science.gov (United States)

    Fleming, Z. L.; von Schneidemesser, E.; Doherty, R. M.; Malley, C.; Cooper, O. R.; Pinto, J. P.; Colette, A.; Xu, X.; Simpson, D.; Schultz, M.; Hamad, S.; Moola, R.; Solberg, S.; Feng, Z.

    2017-12-01

    Ozone is an air pollutant formed in the atmosphere from precursor species (NOx, VOCs, CH4, CO) that is detrimental to human health and ecosystems. The global Tropospheric Ozone Assessment Report (TOAR) initiative has assembled a global database of surface ozone observations and generated ozone exposure metrics at thousands of measurement sites around the world. This talk will present results from the assessment focused on those indicators most relevant to human health. Specifically, the trends in ozone, comparing different time periods and patterns across regions and among metrics will be addressed. In addition, the fraction of population exposed to high ozone levels and how this has changed between 2000 and 2014 will also be discussed. The core time period analyzed for trends was 2000-2014, selected to include a greater number of sites in East Asia. Negative trends were most commonly observed at many US and some European sites, whereas many sites in East Asia showed positive trends, while sites in Japan showed more of a mix of positive and negative trends. More than half of the sites showed a common direction and significance in the trends for all five human-health relevant metrics. The peak ozone metrics indicate a reduction in exposure to peak levels of ozone related to photochemical episodes in Europe and the US. A considerable number of European countries and states within the US have shown a decrease in population-weighted ozone over time. The 2000-2014 results will be augmented and compared to the trend analysis for additional time periods that cover a greater number of years, but by necessity are based on fewer sites. Trends are found to be statistically significant at a larger fraction of sites with longer time series, compared to the shorter (2000-2014) time series.

  20. Landscape Classifications for Landscape Metrics-based Assessment of Urban Heat Island: A Comparative Study

    International Nuclear Information System (INIS)

    Zhao, X F; Deng, L; Wang, H N; Chen, F; Hua, L Z

    2014-01-01

    In recent years, some studies have been carried out on the landscape analysis of urban thermal patterns. With the prevalence of the thermal landscape approach, a key problem has emerged: how to classify the thermal landscape into thermal patches. Existing research has used different methods of thermal landscape classification, such as the standard deviation (SD) method and the R method. To find out the differences, a comparative study was carried out in Xiamen using a 20-year series of wintertime Landsat images. After retrieval of the land surface temperature (LST), the thermal landscape was classified using the two methods separately. Then landscape metrics, 6 at the class level and 14 at the landscape level, were calculated and analyzed using Fragstats 3.3. We found that: (1) at the class level, all the metrics under the SD method were evened out and did not show an obvious trend along with the process of urbanization, while those under the R method did; (2) at the landscape level, 6 of the 14 metrics retained similar trends, 5 differed at local turning points of the curve, and 3 differed completely in curve shape; (3) when examined against visual interpretation, the SD method tended to exaggerate urban heat island effects compared with the R method.
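
    For concreteness, a common variant of the standard-deviation classification reads as follows; the number of classes and the 0.5/1.5-sigma break multipliers are assumptions, since these vary between studies.

        import numpy as np

        def classify_sd(lst, k=0.5):
            # Split a land-surface-temperature image into 5 thermal classes
            # using breaks at mean +/- k*std and mean +/- 3k*std.
            m, s = np.nanmean(lst), np.nanstd(lst)
            bins = [m - 3 * k * s, m - k * s, m + k * s, m + 3 * k * s]
            return np.digitize(lst, bins)  # 0 = coolest ... 4 = hottest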

  1. Comparison of Two Probabilistic Fatigue Damage Assessment Approaches Using Prognostic Performance Metrics

    Directory of Open Access Journals (Sweden)

    Xuefei Guan

    2011-01-01

    Full Text Available In this paper, two probabilistic prognosis updating schemes are compared. One is based on the classical Bayesian approach and the other on the newly developed maximum relative entropy (MRE) approach. The performance of the two models is evaluated using a set of recently developed prognostics-based metrics. Various uncertainties from measurements, modeling, and parameter estimation are integrated into the prognosis framework as random input variables for fatigue damage of materials. Measurements of response variables are then used to update the statistical distributions of the random variables, and the prognosis results are updated using posterior distributions. A Markov chain Monte Carlo (MCMC) technique is employed to provide the posterior samples for model updating in the framework. Experimental data are used to demonstrate the operation of the proposed probabilistic prognosis methodology. A set of prognostics-based metrics is employed to quantitatively evaluate the prognosis performance and compare the proposed entropy method with the classical Bayesian updating algorithm. In particular, model accuracy, precision, robustness and convergence are rigorously evaluated, in addition to the qualitative visual comparison. Following this, potential developments and improvements of the prognostics-based metrics are discussed in detail.
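
    The MCMC model-updating step can be sketched with a random-walk Metropolis sampler, the simplest member of the family; the step size and sample count are illustrative, and log_posterior stands for whichever fatigue-damage likelihood and prior are being updated.

        import numpy as np

        def metropolis(log_posterior, theta0, n_samples=5000, step=0.1, seed=0):
            # Random-walk Metropolis: propose, accept with prob min(1, ratio).
            rng = np.random.default_rng(seed)
            theta = np.atleast_1d(np.asarray(theta0, dtype=float))
            logp = log_posterior(theta)
            samples = []
            for _ in range(n_samples):
                proposal = theta + step * rng.normal(size=theta.shape)
                logp_prop = log_posterior(proposal)
                if np.log(rng.uniform()) < logp_prop - logp:
                    theta, logp = proposal, logp_prop
                samples.append(theta.copy())
            return np.array(samples)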

  2. Metrics to assess ecological condition, change, and impacts in sandy beach ecosystems.

    Science.gov (United States)

    Schlacher, Thomas A; Schoeman, David S; Jones, Alan R; Dugan, Jenifer E; Hubbard, David M; Defeo, Omar; Peterson, Charles H; Weston, Michael A; Maslo, Brooke; Olds, Andrew D; Scapini, Felicita; Nel, Ronel; Harris, Linda R; Lucrezi, Serena; Lastra, Mariano; Huijbers, Chantal M; Connolly, Rod M

    2014-11-01

    Complexity is increasingly the hallmark in environmental management practices of sandy shorelines. This arises primarily from meeting growing public demands (e.g., real estate, recreation) whilst reconciling economic demands with expectations of coastal users who have modern conservation ethics. Ideally, shoreline management is underpinned by empirical data, but selecting ecologically-meaningful metrics to accurately measure the condition of systems, and the ecological effects of human activities, is a complex task. Here we construct a framework for metric selection, considering six categories of issues that authorities commonly address: erosion; habitat loss; recreation; fishing; pollution (litter and chemical contaminants); and wildlife conservation. Possible metrics were scored in terms of their ability to reflect environmental change, and against criteria that are widely used for judging the performance of ecological indicators (i.e., sensitivity, practicability, costs, and public appeal). From this analysis, four types of broadly applicable metrics that also performed very well against the indicator criteria emerged: 1.) traits of bird populations and assemblages (e.g., abundance, diversity, distributions, habitat use); 2.) breeding/reproductive performance sensu lato (especially relevant for birds and turtles nesting on beaches and in dunes, but equally applicable to invertebrates and plants); 3.) population parameters and distributions of vertebrates associated primarily with dunes and the supralittoral beach zone (traditionally focused on birds and turtles, but expandable to mammals); 4.) compound measurements of the abundance/cover/biomass of biota (plants, invertebrates, vertebrates) at both the population and assemblage level. Local constraints (i.e., the absence of birds in highly degraded urban settings or lack of dunes on bluff-backed beaches) and particular issues may require alternatives. Metrics - if selected and applied correctly - provide

  3. Privacy Metrics and Boundaries

    NARCIS (Netherlands)

    L-F. Pau (Louis-François)

    2005-01-01

    This paper aims at defining a set of privacy metrics (quantitative and qualitative) for the relation between a privacy protector and an information gatherer. The aims of such metrics are: to allow the assessment and comparison of different user scenarios and their differences; for

  4. Material quality assurance risk assessment.

    Science.gov (United States)

    2013-01-01

    Over the past two decades the role of SHA has shifted from quality control (QC) of materials and placement techniques to quality assurance (QA) and acceptance. The role of the Office of Materials Technology (OMT) has been shifting towards assuran...

  5. Semantic metrics

    OpenAIRE

    Hu, Bo; Kalfoglou, Yannis; Dupplaw, David; Alani, Harith; Lewis, Paul; Shadbolt, Nigel

    2006-01-01

    In the context of the Semantic Web, many ontology-related operations, e.g. ontology ranking, segmentation, alignment, articulation, reuse, evaluation, can be boiled down to one fundamental operation: computing the similarity and/or dissimilarity among ontological entities, and in some cases among ontologies themselves. In this paper, we review standard metrics for computing distance measures and we propose a series of semantic metrics. We give a formal account of semantic metrics drawn from a...

  6. Surface water quality assessment using factor analysis

    African Journals Online (AJOL)

    2006-01-16

    Surface water and groundwater quality assessment ... Urbanisation influences the water cycle through changes in flow and water ... protection of aquatic life, CCME Water Quality Index ... Water, Air, & Soil Pollut.

  7. Quality-assessment expectations and quality-assessment reality in ...

    African Journals Online (AJOL)

    dissonance between stated and actual quality criteria among a group of lecturers in a particular ... The feasibility study undertaken by the SU Language Centre and its impact on interpreting at SU will be discussed in detail in a ... Communication & Cognition 38(1-2): 27-46. Kopczynski, A.

  8. Towards Quality Assessment in an EFL Programme

    Science.gov (United States)

    Ali, Holi Ibrahim Holi; Al Ajmi, Ahmed Ali Saleh

    2013-01-01

    Assessment is central in education and the teaching-learning process. This study attempts to explore the perspectives and views about quality assessment among teachers of English as a Foreign Language (EFL), and to find ways of promoting quality assessment. Quantitative methodology was used to collect data. To answer the study questions, a…

  9. DOE JGI Quality Metrics; Approaches to Scaling and Improving Metagenome Assembly (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    Energy Technology Data Exchange (ETDEWEB)

    Copeland, Alex; Brown, C. Titus

    2011-10-13

    DOE JGI's Alex Copeland on "DOE JGI Quality Metrics" and Michigan State University's C. Titus Brown on "Approaches to Scaling and Improving Metagenome Assembly" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  10. Use of a line-pair resolution phantom for comprehensive quality assurance of electronic portal imaging devices based on fundamental imaging metrics

    International Nuclear Information System (INIS)

    Gopal, Arun; Samant, Sanjiv S.

    2009-01-01

    linear systems metrics such as robustness, sensitivity across the full spatial frequency range of interest, and normalization to imaging conditions (magnification, system gain settings, and exposure), with the simplicity, ease, and speed of traditional phantom imaging. The algorithm was analyzed for accuracy and sensitivity by comparing with a commercial portal imaging QA method (PIPSPRO, Standard Imaging, Middleton, WI) on both first-generation lens-coupled and modern a-Si flat-panel based clinical EPID systems. The bar-pattern based QA measurements were found to be far more sensitive to even small levels of degradation in spatial resolution and noise. The bar-pattern based QA methodology offers a comprehensive image quality assessment tool suitable for both commissioning and routine EPID QA.

  11. A simple metric to predict stream water quality from storm runoff in an urban watershed.

    Science.gov (United States)

    Easton, Zachary M; Sullivan, Patrick J; Walter, M Todd; Fuka, Daniel R; Petrovic, A Martin; Steenhuis, Tammo S

    2010-01-01

    The contribution of runoff from various land uses to stream channels in a watershed is often speculated and used to underpin many model predictions. However, these contributions, often based on little or no measurements in the watershed, fail to appropriately consider the influence of the hydrologic location of a particular landscape unit in relation to the stream network. A simple model was developed to predict storm runoff and the phosphorus (P) status of a perennial stream in an urban watershed in New York State using the covariance structure of runoff from different landscape units in the watershed to predict runoff in time. One hundred and twenty-seven storm events were divided into parameterization (n = 85) and forecasting (n = 42) data sets. Runoff, dissolved P (DP), and total P (TP) were measured at nine sites distributed among three land uses (high maintenance, unmaintained, wooded), three positions in the watershed (near the outlet, midwatershed, upper watershed), and in the stream at the watershed outlet. The autocorrelation among runoff and P concentrations from the watershed landscape units (n = 9) and the covariance between measurements from the landscape units and measurements from the stream were calculated and used to predict the stream response. Models, validated using leave-one-out cross-validation and a forecasting method, were able to correctly capture temporal trends in streamflow and stream P chemistry (Nash-Sutcliffe efficiencies, 0.49-0.88). The analysis suggests that the covariance structure was consistent for all models, indicating that the physical processes governing runoff and P loss from these landscape units were stationary in time and that landscapes located in hydraulically active areas have a direct hydraulic link to the stream. This methodology provides insight into the impact of various urban landscape units on stream water quantity and quality.
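
    The Nash-Sutcliffe efficiencies quoted above follow a standard formula worth stating: NSE equals 1 minus the ratio of the squared model error to the variance of the observations about their mean. A direct sketch:

        import numpy as np

        def nash_sutcliffe(observed, simulated):
            # 1.0 = perfect fit; 0.0 = no better than predicting the mean.
            observed, simulated = np.asarray(observed), np.asarray(simulated)
            return 1 - np.sum((observed - simulated) ** 2) / np.sum(
                (observed - observed.mean()) ** 2)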

  12. Perceptual video quality assessment in H.264 video coding standard using objective modeling.

    Science.gov (United States)

    Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu

    2014-01-01

    Since usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method for perceptual quality assessment of the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute a perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra- and inter-prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into subjective quality estimation is proposed. The proposed model calculates the objective quality metric using subjective impairments (blockiness, blur and jerkiness) rather than the existing bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metrics is compared against popular full-reference objective methods as defined by the VQEG.
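
    Of the three impairments named, blockiness is the most mechanical to estimate; the sketch below measures mean luminance jumps across 8x8 block boundaries and is a generic simplification, not the paper's calibrated impairment model.

        import numpy as np

        def blockiness(frame, block=8):
            # Mean absolute luminance step across block edges (higher = blockier).
            f = frame.astype(float)
            v = np.abs(np.diff(f, axis=1))             # horizontal gradients
            h = np.abs(np.diff(f, axis=0))             # vertical gradients
            col_edges = v[:, block - 1::block].mean()  # steps at column boundaries
            row_edges = h[block - 1::block, :].mean()  # steps at row boundaries
            return (col_edges + row_edges) / 2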

  13. Quality assurance in performance assessments

    International Nuclear Information System (INIS)

    Maul, P.R.; Watkins, B.M.; Salter, P.; Mcleod, R

    1999-01-01

    Following publication of the Site-94 report, SKI wishes to review how Quality Assurance (QA) issues could be treated in future work both in undertaking their own Performance Assessment (PA) calculations and in scrutinising documents supplied by SKB (on planning a repository for spent fuels in Sweden). The aim of this report is to identify the key QA issues and to outline the nature and content of a QA plan which would be suitable for SKI, bearing in mind the requirements and recommendations of relevant standards. Emphasis is on issues which are specific to Performance Assessments for deep repositories for radioactive wastes, but consideration is also given to issues which need to be addressed in all large projects. Given the long time over which the performance of a deep repository system must be evaluated, the demonstration that a repository is likely to perform satisfactorily relies on the use of computer-generated model predictions of system performance. This raises particular QA issues which are generally not encountered in other technical areas (for instance, power station operations). The traceability of the arguments used is a key QA issue, as are conceptual model uncertainty, and code verification and validation; these were all included in the consideration of overall uncertainties in the Site-94 project. Additionally, issues which are particularly relevant to SKI include: How QA in a PA fits in with the general QA procedures of the organisation undertaking the work. The relationship between QA as applied by the regulator and the implementor of a repository development programme. Section 2 introduces the discussion of these issues by reviewing the standards and guidance which are available from national and international organisations. This is followed in Section 3 by a review of specific issues which arise from the Site-94 exercise. An outline procedure for managing QA issues in SKI is put forward as a basis for discussion in Section 4. It is hoped that

  14. Quality assurance in performance assessments

    Energy Technology Data Exchange (ETDEWEB)

    Maul, P.R.; Watkins, B.M.; Salter, P.; Mcleod, R [QuantiSci Ltd, Henley-on-Thames (United Kingdom)

    1999-01-01

    Following publication of the Site-94 report, SKI wishes to review how Quality Assurance (QA) issues could be treated in future work both in undertaking their own Performance Assessment (PA) calculations and in scrutinising documents supplied by SKB (on planning a repository for spent fuels in Sweden). The aim of this report is to identify the key QA issues and to outline the nature and content of a QA plan which would be suitable for SKI, bearing in mind the requirements and recommendations of relevant standards. Emphasis is on issues which are specific to Performance Assessments for deep repositories for radioactive wastes, but consideration is also given to issues which need to be addressed in all large projects. Given the long time over which the performance of a deep repository system must be evaluated, the demonstration that a repository is likely to perform satisfactorily relies on the use of computer-generated model predictions of system performance. This raises particular QA issues which are generally not encountered in other technical areas (for instance, power station operations). The traceability of the arguments used is a key QA issue, as are conceptual model uncertainty, and code verification and validation; these were all included in the consideration of overall uncertainties in the Site-94 project. Additionally, issues which are particularly relevant to SKI include: How QA in a PA fits in with the general QA procedures of the organisation undertaking the work. The relationship between QA as applied by the regulator and the implementor of a repository development programme. Section 2 introduces the discussion of these issues by reviewing the standards and guidance which are available from national and international organisations. This is followed in Section 3 by a review of specific issues which arise from the Site-94 exercise. An outline procedure for managing QA issues in SKI is put forward as a basis for discussion in Section 4. It is hoped that

  15. Linear associations between clinically assessed upper motor neuron disease and diffusion tensor imaging metrics in amyotrophic lateral sclerosis.

    Science.gov (United States)

    Woo, John H; Wang, Sumei; Melhem, Elias R; Gee, James C; Cucchiara, Andrew; McCluskey, Leo; Elman, Lauren

    2014-01-01

    To assess the relationship between clinically assessed Upper Motor Neuron (UMN) disease in Amyotrophic Lateral Sclerosis (ALS) and local diffusion alterations measured in the brain corticospinal tract (CST) by a tractography-driven template-space region-of-interest (ROI) analysis of Diffusion Tensor Imaging (DTI). This cross-sectional study included 34 patients with ALS, on whom DTI was performed. Clinical measures were separately obtained including the Penn UMN Score, a summary metric based upon standard clinical methods. After normalizing all DTI data to a population-specific template, tractography was performed to determine a region-of-interest (ROI) outlining the CST, in which average Mean Diffusivity (MD) and Fractional Anisotropy (FA) were estimated. Linear regression analyses were used to investigate associations of DTI metrics (MD, FA) with clinical measures (Penn UMN Score, ALSFRS-R, duration-of-disease), along with age, sex, handedness, and El Escorial category as covariates. For MD, the regression model was significant (p = 0.02), and the only significant predictors were the Penn UMN Score (p = 0.005) and age (p = 0.03). The FA regression model was also significant (p = 0.02); the only significant predictor was the Penn UMN Score (p = 0.003). Measured by the template-space ROI method, both MD and FA were linearly associated with the Penn UMN Score, supporting the hypothesis that DTI alterations reflect UMN pathology as assessed by the clinical examination.
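
    The regression design described (a DTI metric regressed on the Penn UMN Score with demographic covariates) can be sketched with statsmodels; the data frame below is synthetic stand-in data with illustrative column names, not the study's measurements.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 34
        df = pd.DataFrame({
            "FA": rng.normal(0.45, 0.05, n),       # synthetic stand-ins for the
            "penn_umn": rng.integers(0, 32, n),    # study's real measurements
            "age": rng.integers(35, 80, n),
            "sex": rng.choice(["M", "F"], n),
        })
        fit = smf.ols("FA ~ penn_umn + age + C(sex)", data=df).fit()
        print(fit.pvalues["penn_umn"])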

  16. The Pacific northwest stream quality assessment

    Science.gov (United States)

    Van Metre, Peter C.; Morace, Jennifer L.; Sheibley, Rich W.

    2015-01-01

    In 2015, the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) program is assessing stream quality in the Pacific Northwest. The goals of the Pacific Northwest Stream Quality Assessment (Pacific Northwest study) are to assess the quality of streams in the region by characterizing multiple water-quality factors that are stressors to aquatic life and to evaluate the relation between these stressors and biological communities. The effects of urbanization and agriculture on stream quality for the Puget Lowlands and Willamette Valley are the focus of this regional study. Findings will provide the public and policymakers with information regarding which human and environmental factors are the most critical in affecting stream quality and, thus, provide insights about possible approaches to protect or improve the health of streams in the region.

  17. Does Objective Quality of Physicians Correlate with Patient Satisfaction Measured by Hospital Compare Metrics in New York State?

    Science.gov (United States)

    Bekelis, Kimon; Missios, Symeon; MacKenzie, Todd A; O'Shaughnessy, Patrick M

    2017-07-01

    It is unclear whether publicly reported benchmarks correlate with quality of physicians and institutions. We investigated the association of patient satisfaction measures from a public reporting platform with performance of neurosurgeons in New York State. This cohort study comprised patients undergoing neurosurgical operations from 2009 to 2013 who were registered in the Statewide Planning and Research Cooperative System database. The cohort was merged with publicly available data from the Centers for Medicare and Medicaid Services Hospital Compare website. Propensity-adjusted regression analysis was used to investigate the association of patient satisfaction metrics with neurosurgeon quality, as measured by the neurosurgeon's individual rate of mortality and average length of stay. During the study period, 166,365 patients underwent neurosurgical procedures. Using propensity-adjusted multivariable regression analysis, we demonstrated that undergoing neurosurgical operations in hospitals with a greater percentage of patient-assigned "high" scores was associated with a higher chance of being treated by a physician with superior performance in terms of mortality (odds ratio 1.90, 95% confidence interval 1.86-1.95), and a higher chance of being treated by a physician with superior performance in terms of length of stay (odds ratio 1.24, 95% confidence interval 1.21-1.27). Similar associations were identified for hospitals with a higher percentage of patients who claimed they would recommend these institutions to others. Merging a comprehensive all-payer cohort of neurosurgery patients in New York State with data from the Hospital Compare website, we observed an association of superior hospital-level patient satisfaction measures with objective performance of individual neurosurgeons in the corresponding hospitals.

  18. STATISTICS IN SERVICE QUALITY ASSESSMENT

    Directory of Open Access Journals (Sweden)

    Dragana Gardašević

    2012-09-01

    For any quality evaluation in sports, science, education, and similar fields, it is useful to collect data in order to construct a strategy for improving the quality of services offered to the user. For this purpose, statistical software packages are used to process the collected data with the aim of increasing customer satisfaction. The principle is demonstrated by the example of student satisfaction ratings at Belgrade Polytechnic (the students being the users of the institution's quality). The emphasis here is on statistical analysis as a tool for quality control aimed at improvement, rather than on the interpretation of results. The approach can therefore be used as a model in sport to improve overall results.

  19. Utility of whole-lesion ADC histogram metrics for assessing the malignant potential of pancreatic intraductal papillary mucinous neoplasms (IPMNs).

    Science.gov (United States)

    Hoffman, David H; Ream, Justin M; Hajdu, Christina H; Rosenkrantz, Andrew B

    2017-04-01

    To evaluate whole-lesion ADC histogram metrics for assessing the malignant potential of pancreatic intraductal papillary mucinous neoplasms (IPMNs), including in comparison with conventional MRI features. Eighteen branch-duct IPMNs underwent MRI with DWI prior to resection (n = 16) or FNA (n = 2). A blinded radiologist placed 3D volumes-of-interest on the entire IPMN on the ADC map, from which whole-lesion histogram metrics were generated. The reader also assessed IPMN size, mural nodularity, and adjacent main-duct dilation. Benign (low-to-intermediate grade dysplasia; n = 10) and malignant (high-grade dysplasia or invasive adenocarcinoma; n = 8) IPMNs were compared. Whole-lesion ADC histogram metrics demonstrating significant differences between benign and malignant IPMNs were: entropy (5.1 ± 0.2 vs. 5.4 ± 0.2; p = 0.01, AUC = 86%); mean of the bottom 10th percentile (2.2 ± 0.4 vs. 1.6 ± 0.7; p = 0.03; AUC = 81%); and mean of the 10-25th percentile (2.8 ± 0.4 vs. 2.3 ± 0.6; p = 0.04; AUC = 79%). The overall mean ADC, skewness, and kurtosis were not significantly different between groups (p ≥ 0.06; AUC = 50-78%). For entropy (highest performing histogram metric), an optimal threshold of >5.3 achieved a sensitivity of 100%, a specificity of 70%, and an accuracy of 83% for predicting malignancy. No significant difference (p = 0.18-0.64) was observed between benign and malignant IPMNs for cyst size ≥3 cm, adjacent main-duct dilatation, or mural nodule. At multivariable analysis of entropy in combination with all other ADC histogram and conventional MRI features, entropy was the only significant independent predictor of malignancy (p = 0.004). Although requiring larger studies, ADC entropy obtained from 3D whole-lesion histogram analysis may serve as a biomarker for identifying the malignant potential of IPMNs, independent of conventional MRI features.
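
    The whole-lesion histogram metrics named above reduce to a few lines of array arithmetic. Here is a sketch assuming the voxel values come from a 3D volume-of-interest on the ADC map; the bin count and the log base for the entropy are my own choices, not necessarily the paper's.

```python
import numpy as np

def adc_histogram_metrics(adc_voxels, bins=128):
    """Whole-lesion histogram metrics for a 1-D array of ADC voxel values."""
    counts, _ = np.histogram(adc_voxels, bins=bins)
    p = counts[counts > 0] / counts.sum()
    ent = float(-(p * np.log2(p)).sum())  # Shannon entropy, in bits

    v = np.sort(adc_voxels)
    n = v.size
    mean_bottom_10 = v[: max(1, n // 10)].mean()               # 0-10th pct
    mean_10_25 = v[n // 10 : max(n // 10 + 1, n // 4)].mean()  # 10-25th pct
    return ent, mean_bottom_10, mean_10_25

# Toy lesion sample: malignant IPMNs in the study showed higher entropy and
# lower low-percentile ADC means than benign ones.
voxels = np.random.default_rng(1).normal(2.4, 0.4, 5000)
print(adc_histogram_metrics(voxels))
```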

  20. Clinical Music Study Quality Assessment Scale (MUSIQUAS)

    NARCIS (Netherlands)

    Jaschke, A.C.; Eggermont, L.H.P.; Scherder, E.J.A.; Shippton, M.; Hiomonides, I.

    2013-01-01

    AIMS: Quality assessment of studies is essential for the understanding and application of these in systematic reviews and meta-analyses, the two "gold standards" of medical sciences. Publications in scientific journals have extensively used assessment scales to address poor methodological quality,

  1. Assessing the performance of macroinvertebrate metrics in the Challhuaco-Ñireco System (Northern Patagonia, Argentina

    Directory of Open Access Journals (Sweden)

    Melina Mauad

    2015-09-01

    Seven sites were examined in the Challhuaco-Ñireco system, located in the reserve of the Nahuel Huapi National Park; part of the catchment is urbanised, with San Carlos de Bariloche (150,000 inhabitants) located in the lower part of the basin. Physico-chemical variables were measured and benthic macroinvertebrates were collected during three consecutive years at seven sites from the headwater to the river outlet. Sites near the source of the river were characterised by Plecoptera, Ephemeroptera, Trichoptera and Diptera, whereas sites close to the river mouth were dominated by Diptera, Oligochaeta and Mollusca. Regarding functional feeding groups, collector-gatherers were dominant at all sites, and this pattern was consistent among years. Ordination analysis (RDA) revealed that the distribution of species assemblages responded to the climatic and topographic gradient (temperature and elevation) but was also associated with variables related to human impact (conductivity, nitrate and phosphate contents). Species assemblages at headwaters were mostly represented by sensitive insects, whereas tolerant taxa such as Tubificidae, Lumbriculidae, Chironomidae and the crustacean Aegla sp. were dominant at urbanised sites. Among the macroinvertebrate metrics employed, total richness, EPT taxa, the Shannon diversity index and the Biotic Monitoring Patagonian Stream index proved fairly consistent and revealed different levels of disturbance along the stream, indicating that these measures are suitable for evaluating the status of Patagonian mountain streams.

  2. Targeted Assessment for Prevention of Healthcare-Associated Infections: A New Prioritization Metric.

    Science.gov (United States)

    Soe, Minn M; Gould, Carolyn V; Pollock, Daniel; Edwards, Jonathan

    2015-12-01

    To develop a method for calculating the number of healthcare-associated infections (HAIs) that must be prevented to reach an HAI reduction goal, and for identifying and prioritizing the healthcare facilities where the largest reductions can be achieved. Setting: acute care hospitals that report HAI data to the Centers for Disease Control and Prevention's National Healthcare Safety Network (NHSN). Methods: the cumulative attributable difference (CAD) is calculated by subtracting a numerical prevention target from an observed number of HAIs. The prevention target is the product of the predicted number of HAIs and a standardized infection ratio (SIR) goal, which represents the HAI reduction goal. The CAD is a numeric value that, if positive, is the number of infections that must be prevented to reach the HAI reduction goal. We calculated the CAD for catheter-associated urinary tract infections (CAUTIs) for each of the 3,639 hospitals that reported such data to the NHSN in 2013 and ranked the hospitals by their CAD values in descending order. Of the 1,578 hospitals with positive CAD values, preventing 10,040 CAUTIs at the 293 hospitals (19%) with the highest CAD would enable achievement of the national 25% CAUTI reduction goal. The CAD is a new metric that facilitates the ranking of facilities, and of locations within facilities, to prioritize HAI prevention efforts where the greatest impact can be achieved toward an HAI reduction goal.
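
    The CAD itself is a one-line formula: CAD = observed HAIs minus (predicted HAIs times the SIR goal). The sketch below shows the calculation and the ranking step; the column names and counts are hypothetical stand-ins for the NHSN fields.

```python
import pandas as pd

SIR_GOAL = 0.75  # a 25% national reduction goal expressed as a target SIR

# Hypothetical per-hospital CAUTI counts; real inputs would come from NHSN.
hai = pd.DataFrame({
    "hospital": ["A", "B", "C", "D"],
    "observed": [42, 8, 17, 3],
    "predicted": [30.0, 9.5, 12.0, 5.0],
})

# CAD: observed HAIs minus the prevention target (predicted x SIR goal).
hai["cad"] = hai["observed"] - hai["predicted"] * SIR_GOAL

# A positive CAD is the number of infections to prevent to meet the goal;
# ranking in descending order prioritizes facilities with the most to gain.
print(hai.sort_values("cad", ascending=False))
```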

  3. A metric space for Type Ia supernova spectra: a new method to assess explosion scenarios

    Science.gov (United States)

    Sasdelli, Michele; Hillebrandt, W.; Kromer, M.; Ishida, E. E. O.; Röpke, F. K.; Sim, S. A.; Pakmor, R.; Seitenzahl, I. R.; Fink, M.

    2017-04-01

    Over the past years, Type Ia supernovae (SNe Ia) have become a major tool to determine the expansion history of the Universe, and considerable attention has been given to, both, observations and models of these events. However, until now, their progenitors are not known. The observed diversity of light curves and spectra seems to point at different progenitor channels and explosion mechanisms. Here, we present a new way to compare model predictions with observations in a systematic way. Our method is based on the construction of a metric space for SN Ia spectra by means of linear principal component analysis, taking care of missing and/or noisy data, and making use of partial least-squares regression to find correlations between spectral properties and photometric data. We investigate realizations of the three major classes of explosion models that are presently discussed: delayed-detonation Chandrasekhar-mass explosions, sub-Chandrasekhar-mass detonations and double-degenerate mergers, and compare them with data. We show that in the principal component space, all scenarios have observed counterparts, supporting the idea that different progenitors are likely. However, all classes of models face problems in reproducing the observed correlations between spectral properties and light curves and colours. Possible reasons are briefly discussed.
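
    A condensed sketch of the pipeline the abstract describes follows: linear PCA to build the metric space, a Euclidean distance in that space to compare model spectra with observations, and partial least squares to link spectral coordinates to photometry. All arrays below are synthetic stand-ins (the paper's handling of missing and noisy data is skipped), and the shapes and names are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
spectra = rng.normal(size=(120, 2000))       # SNe x wavelength bins, common grid
dm15 = rng.uniform(0.8, 1.8, size=(120, 1))  # a light-curve decline-rate stand-in

# Metric space: project the spectra onto a handful of principal components.
pca = PCA(n_components=5)
coords = pca.fit_transform(spectra)

# Euclidean distance in PC space compares a model spectrum with observations.
def metric_distance(model_spectrum, obs_coords):
    m = pca.transform(model_spectrum.reshape(1, -1))
    return np.linalg.norm(obs_coords - m, axis=1)

print(metric_distance(spectra[0], coords[:5]))

# Partial least squares links spectral coordinates to photometric properties.
pls = PLSRegression(n_components=2).fit(coords, dm15)
print(pls.score(coords, dm15))  # R^2 of the spectral-photometric correlation
```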

  4. Computing eye gaze metrics for the automatic assessment of radiographer performance during X-ray image interpretation.

    Science.gov (United States)

    McLaughlin, Laura; Bond, Raymond; Hughes, Ciara; McConnell, Jonathan; McFadden, Sonyia

    2017-09-01

    each group could be reflected in the variability of their eye tracking heat maps. Participants' thoughts and decisions were quantified using the eye tracking data. Eye tracking metrics also reflected the different search strategies that each group of participants adopted during their image interpretations. This is the first study to use eye tracking technology to assess image interpretation skills across groups with different levels of experience in radiography, especially on a combination of the MSK system, chest cavity and a variety of pathologies.

  5. Automated Neuropsychological Assessment Metrics, Version 4 (ANAM4): Examination of Select Psychometric Properties and Administration Procedures

    Science.gov (United States)

    2018-03-01

    Enrollment (flattened table fragment): … 301; Minnesota, 306; Kentucky, 193; Texas, 188; total, 1,461. Task 27 (Months 85-96): Continue data quality control checks and preliminary … research hypotheses – COMPLETED. Data management and data quality control checks have been completed with all data collected as part of this effort … summarizing data from Studies 1-3 are being finalized. Data management procedures for Study 4 are completed and analyses and manuscript preparations are

  6. Perceived Quality of Full HD Video - Subjective Quality Assessment

    Directory of Open Access Journals (Sweden)

    Juraj Bienik

    2016-01-01

    In recent years, interest in multimedia services has become a global trend, and this trend is still rising. Video quality is a very significant part of the bundle of multimedia services, which leads to a requirement for quality assessment in the video domain. The quality of video streamed across IP networks is generally influenced by two factors: transmission link imperfection and the efficiency of compression standards. This paper deals with subjective video quality assessment and the impact of the compression standards H.264, H.265 and VP9 on the perceived video quality. The evaluation is done for four full HD sequences that differ in content; the distinction is based on the Spatial Information (SI) and Temporal Information (TI) indices of the test sequences. Finally, experimental results show up to a 30% bitrate reduction for H.265 and VP9 compared with the reference H.264.

  7. Soldier Quality of Life Assessment

    Science.gov (United States)

    2016-09-01

    Keywords: sustainment; logistics; emotions; quality of life; mental readiness; fuel demand reduction; feedback; Army personnel; Army … QoL as a measure of how well a given camp supports the physical and mental (to include the cognitive, social, and emotional dimensions) readiness of … housing fewer than 1,000 personnel. Larger FOBs have significantly more capabilities (e.g., food courts with contractor-prepared, name-brand fast foods

  8. Improvement in Total Joint Replacement Quality Metrics: Year One Versus Year Three of the Bundled Payments for Care Improvement Initiative.

    Science.gov (United States)

    Dundon, John M; Bosco, Joseph; Slover, James; Yu, Stephen; Sayeed, Yousuf; Iorio, Richard

    2016-12-07

    In January 2013, a large, tertiary, urban academic medical center began participation in the Bundled Payments for Care Improvement (BPCI) initiative for total joint arthroplasty, a program implemented by the Centers for Medicare & Medicaid Services (CMS) in 2011. Medicare Severity-Diagnosis Related Groups (MS-DRGs) 469 and 470 were included. We participated in BPCI Model 2, by which an episode of care includes the inpatient and all post-acute care costs through 90 days following discharge. The goal for this initiative is to improve patient care and quality through a patient-centered approach with increased care coordination supported through payment innovation. Length of stay (LOS), readmissions, discharge disposition, and cost per episode of care were analyzed for year 3 compared with year 1 of the initiative. Multiple programs were implemented after the first year to improve performance metrics: a surgeon-directed preoperative risk-factor optimization program, enhanced care coordination and home services, a change in venous thromboembolic disease (VTED) prophylaxis to a risk-stratified protocol, infection-prevention measures, a continued emphasis on discharge to home rather than to an inpatient facility, and a quality-dependent gain-sharing program among surgeons. There were 721 Medicare primary total joint arthroplasty patients in year 1 and 785 in year 3; their data were compared. The average hospital LOS decreased from 3.58 to 2.96 days. The rate of discharge to an inpatient facility decreased from 44% to 28%. The 30-day all-cause readmission rate decreased from 7% to 5%; the 60-day all-cause readmission rate decreased from 11% to 6%; and the 90-day all-cause readmission rate decreased from 13% to 8%. The average 90-day cost per episode decreased by 20%. Mid-term results from the implementation of Medicare BPCI Model 2 for primary total joint arthroplasty demonstrated decreased LOS, decreased discharges to inpatient facilities, decreased readmissions, and

  9. Mass Customization Measurements Metrics

    DEFF Research Database (Denmark)

    Nielsen, Kjeld; Brunø, Thomas Ditlev; Jørgensen, Kaj Asbjørn

    2014-01-01

    A recent survey has indicated that 17% of companies have ceased mass customizing less than 1 year after initiating the effort. This paper presents measurements for a company's mass customization performance, utilizing metrics within the three fundamental capabilities: robust process design, choice navigation, and solution space development. A mass customizer assessing performance with these metrics can identify the areas in which improvement would increase competitiveness the most, enabling a more efficient transition to mass customization.

  10. Welfare Quality assessment protocol for laying hens = Welfare Quality assessment protocol voor leghennen

    NARCIS (Netherlands)

    Niekerk, van T.G.C.M.; Gunnink, H.; Reenen, van C.G.

    2012-01-01

    Results of a study on the Welfare Quality® assessment protocol for laying hens. The report describes the integration of welfare assessment into scores per criterion, as well as a simplification of the Welfare Quality® assessment protocol. Results are given from assessments of 122 farms.

  11. Assessment of multi-version NPP I and C systems safety. Metric-based approach, technique and tool

    International Nuclear Information System (INIS)

    Kharchenko, Vyacheslav; Volkovoy, Andrey; Bakhmach, Eugenii; Siora, Alexander; Duzhyi, Vyacheslav

    2011-01-01

    The challenges related to the problem of assessing the actual diversity level and evaluating the safety of diversity-oriented NPP I and C systems are analyzed. There are risks of inaccurate assessment, and the probability of common cause failures (CCFs) may not be decreased sufficiently. The CCF probability of safety-critical systems may be essentially decreased through the application of several different types of diversity (multi-diversity). Different diversity types of FPGA-based NPP I and C systems, the general approach, and the stages of diversity and safety assessment as a whole are described. The objectives of the report are: (a) analysis of the challenges caused by use of the diversity approach in NPP I and C systems in the context of FPGA and other modern technologies; (b) development of a multi-version NPP I and C systems assessment technique and tool based on a check-list and metric-oriented approach; (c) a case study of the technique: assessment of a multi-version FPGA-based NPP I and C system developed using the Radiy™ Platform. (author)

  12. Assessing Woody Vegetation Trends in Sahelian Drylands Using MODIS Based Seasonal Metrics

    Science.gov (United States)

    Brandt, Martin; Hiernaux, Pierre; Rasmussen, Kjeld; Mbow, Cheikh; Kergoat, Laurent; Tagesson, Torbern; Ibrahim, Yahaya Z.; Wele, Abdoulaye; Tucker, Compton J.; Fensholt, Rasmus

    2016-01-01

    Woody plants play a major role in the resilience of drylands and in people's livelihoods. However, due to their scattered distribution, quantifying and monitoring woody cover over space and time is challenging. We develop a phenology-driven model and train/validate MODIS (MCD43A4, 500 m) derived metrics with 178 ground observations from Niger, Senegal and Mali to estimate woody cover trends from 2000 to 2014 over the entire Sahel. The annual woody cover estimation at the 500 m scale is fairly accurate, with an RMSE of 4.3 (woody cover %) and r² = 0.74. Over the 15-year period we observed an average increase of 1.7 (±5.0) woody cover (%), with large spatial differences: no clear change can be observed in densely populated areas (0.2 ± 4.2), whereas a positive change is seen in sparsely populated areas (2.1 ± 5.2). Woody cover is generally stable in cropland areas (0.9 ± 4.6), reflecting the protective management of parkland trees by the farmers. Positive changes are observed in savannas (2.5 ± 5.4) and woodland areas (3.9 ± 7.3). The major pattern of woody cover change reveals strong increases in the sparsely populated Sahel zones of eastern Senegal, western Mali and central Chad, but a decreasing trend is observed in the densely populated western parts of Senegal, northern Nigeria, Sudan and southwestern Niger. This decrease is often local and limited to woodlands, an indication of ongoing expansion of cultivated areas and selective logging. We show that an overall positive trend is found in areas of low anthropogenic pressure, demonstrating the potential of these ecosystems to provide services such as carbon storage, if not over-utilized. Taken together, our results provide an unprecedented synthesis of woody cover dynamics in the Sahel, and point to land use and human population density as important drivers, though only partially and locally offsetting a general post-drought increase.
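
    The reported accuracy figures (RMSE of 4.3 % woody cover, r² = 0.74) correspond to a standard validation against ground observations, and the per-pixel trend is a linear fit to the annual estimates. A sketch with synthetic stand-ins for the 178 field plots and the annual image stack:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
observed = rng.uniform(0, 40, 178)               # field-measured woody cover (%)
predicted = observed + rng.normal(0, 4.3, 178)   # phenology-model estimates

rmse = mean_squared_error(observed, predicted) ** 0.5
print(f"RMSE = {rmse:.1f} % cover, r2 = {r2_score(observed, predicted):.2f}")

# Per-pixel change over 2000-2014: slope of a linear fit to annual estimates.
annual = rng.uniform(0, 40, size=(15, 1000))     # years x pixels
slopes = np.polyfit(np.arange(2000, 2015), annual, deg=1)[0]
change_2000_2014 = slopes * 14                   # total change in % cover
print(change_2000_2014.mean())
```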

  13. Quality of life and functional capacity outcomes in the MOMENTUM 3 trial at 6 months: A call for new metrics for left ventricular assist device patients.

    Science.gov (United States)

    Cowger, Jennifer A; Naka, Yoshifumi; Aaronson, Keith D; Horstmanshof, Douglas; Gulati, Sanjeev; Rinde-Hoffman, Debbie; Pinney, Sean; Adatya, Sirtaz; Farrar, David J; Jorde, Ulrich P

    2018-01-01

    The Multicenter Study of MAGLEV Technology in Patients Undergoing Mechanical Circulatory Support Therapy with HeartMate 3 (MOMENTUM 3) clinical trial demonstrated improved 6-month event-free survival, but a detailed analysis of health-related quality of life (HR-QOL) and functional capacity (FC) was not presented. Further, the effect of early serious adverse events (SAEs) on these metrics, and on the general ability to live well while supported with a left ventricular assist system (LVAS), warrants evaluation. FC (New York Heart Association [NYHA] class and 6-minute walk test [6MWT]) and HR-QOL (European Quality of Life [EQ-5D-5L] and Kansas City Cardiomyopathy Questionnaire [KCCQ]) assessments were obtained at baseline and 6 months after HeartMate 3 (HM3, n = 151; Abbott, Abbott Park, IL) or HeartMate II (HMII, n = 138; Abbott) implant as part of the MOMENTUM 3 clinical trial. Metrics were compared between devices and in those with and without events. The proportion of patients "living well on an LVAS" at 6 months, defined as alive with satisfactory FC (NYHA I/II or 6MWT > 300 meters) and HR-QOL (overall KCCQ > 50), was evaluated. Median (25th-75th percentile) KCCQ (change for HM3: +28 [10-46]; HMII: +29 [9-48]) and EQ-5D-5L (change for HM3: -1 [-5 to 0]; HMII: -2 [-6 to 0]) scores improved from baseline to 6 months, with no significant difference between devices (p > 0.05). Likewise, there was an equivalent improvement in 6MWT distance at 6 months in HM3 (+94 [1-274] meters) and HMII (+188 [43-340] meters) from baseline. In patients with SAEs (n = 188), 6MWT distances increased from baseline (p < 0.001), but gains for both devices were smaller than in those without SAEs (HM3: +74 [-9 to 183] meters with SAEs vs +140 [35-329] meters without; HMII: +177 [47-356] meters with SAEs vs +192 [23-337] meters without; both p < 0.003). SAEs did not affect the 6-month HR-QOL scores. The "living well" end point was achieved in 145 HM3 (63%) and 120 HMII (68%) patients (p = 0.44). Gains in HR-QOL and FC were similar early after HM3

  14. General discussion of data quality challenges in social media metrics: Extensive comparison of four major altmetric data aggregators

    Science.gov (United States)

    2018-01-01

    The data collection and reporting approaches of four major altmetric data aggregators are studied. The main aim of this study is to understand how differences in social media tracking and data collection methodologies can have effects on the analytical use of altmetric data. For this purpose, discrepancies in the metrics across aggregators have been studied in order to understand how the methodological choices adopted by these aggregators can explain the discrepancies found. Our results show that different forms of accessing the data from diverse social media platforms, together with different approaches of collecting, processing, summarizing, and updating social media metrics cause substantial differences in the data and metrics offered by these aggregators. These results highlight the importance that methodological choices in the tracking, collecting, and reporting of altmetric data can have in the analytical value of the data. Some recommendations for altmetric users and data aggregators are proposed and discussed. PMID:29772003

  15. Metric properties of the "timed get up and go- modified version" test, in risk assessment of falls in active women.

    Science.gov (United States)

    Alfonso Mora, Margareth Lorena

    2017-03-30

    To analyse the metric properties of the Timed Get Up and Go-Modified Version test (TGUGM) in assessing the risk of falls in a group of physically active women. A sample of 202 women over 55 years of age was assessed through a cross-sectional study. The TGUGM was applied to assess their fall risk. The test was analysed by comparing the qualitative and quantitative information and by factor analysis. A logistic regression model explained the risk of falls according to the test components. The TGUGM was useful for assessing the risk of falls in the studied group. The test revealed two factors: the Get Up and the Gait with dual task. Fewer than twelve points in the evaluation, or run times higher than 35 seconds, were associated with a high risk of falling. More than 35 seconds on the test indicated a fall-risk probability greater than 0.50, and scores of less than 12 points were associated with a delay of 7 seconds more in the execution of the test (p = 0.0016). Factor analysis of the TGUGM revealed two dimensions that can be independent predictors of the risk of falling: the Get Up, which explains between 64% and 87% of the risk of falling, and the Gait with dual task, which explains between 77% and 95% of the risk of falling.
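
    The abstract's cut-offs (12 points, 35 seconds) suggest a logistic model of fall risk on the two TGUGM outputs. The sketch below fits such a model on synthetic data; the individual-level measurements are not public, so the outcome rule used to generate the labels is a toy assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for the 202 participants (individual data not public):
scores = rng.integers(6, 21, size=202)            # TGUGM points
times = rng.normal(30, 8, size=202).clip(15, 60)  # completion time (s)
risk = 1 / (1 + np.exp(-(0.3 * (times - 35) - 0.5 * (scores - 12))))
falls = (rng.random(202) < risk).astype(int)      # toy fall outcome

X = np.column_stack([scores, times])
model = LogisticRegression(max_iter=1000).fit(X, falls)

# Predicted fall probability for 11 points in 36 s; the abstract reports
# probabilities above 0.50 once the test takes longer than 35 seconds.
print(model.predict_proba([[11, 36]])[0, 1])
```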

  16. A multi-scale metrics approach to forest fragmentation for Strategic Environmental Impact Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eunyoung, E-mail: eykim@kei.re.kr [Korea Environment Institute, 215 Jinheungno, Eunpyeong-gu, Seoul 122-706 (Korea, Republic of); Song, Wonkyong, E-mail: wksong79@gmail.com [Suwon Research Institute, 145 Gwanggyo-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do 443-270 (Korea, Republic of); Lee, Dongkun, E-mail: dklee7@snu.ac.kr [Department of Landscape Architecture and Rural System Engineering, Seoul National University, 599 Gwanakro, Gwanak-gu, Seoul 151-921 (Korea, Republic of); Research Institute for Agriculture and Life Sciences, Seoul National University, Seoul 151-921 (Korea, Republic of)

    2013-09-15

    Forests are becoming severely fragmented as a result of land development. South Korea has responded to changing community concerns about environmental issues. The nation has developed and is extending a broad range of tools for use in environmental management. Although legally mandated environmental compliance requirements in South Korea have been implemented to predict and evaluate the impacts of land-development projects, these legal instruments are often insufficient to assess the subsequent impact of development on the surrounding forests. It is especially difficult to examine impacts on multiple (e.g., regional and local) scales in detail. Forest configuration and size, including forest fragmentation by land development, are considered on a regional scale. Moreover, forest structure and composition, including biodiversity, are considered on a local scale in the Environmental Impact Assessment process. Recently, the government amended the Environmental Impact Assessment Act, including the SEA, EIA, and small-scale EIA, to require an integrated approach. Therefore, the purpose of this study was to establish an impact assessment system that minimizes the impacts of land development using an approach that is integrated across multiple scales. This study focused on forest fragmentation due to residential development and road construction sites in selected Congestion Restraint Zones (CRZs) in the Greater Seoul Area of South Korea. Based on a review of multiple-scale impacts, this paper integrates models that assess the impacts of land development on forest ecosystems. The applicability of the integrated model for assessing impacts on forest ecosystems through the SEIA process is considered. On a regional scale, it is possible to evaluate the location and size of a land-development project by considering aspects of forest fragmentation, such as the stability of the forest structure and the degree of fragmentation. On a local scale, land-development projects should

  17. Assessing Participation in Secondary Education Quality Enhancement

    African Journals Online (AJOL)

    Assessing Participation in Secondary Education Quality Enhancement: Teachers, Parents and Communities in Cross River State. … An ailing economy, low moral values, and a philosophy that the end justifies the means were cited as reasons for the low involvement of parents and communities in secondary education quality improvement.

  18. the research quality plus (rq+) assessment instrument

    International Development Research Centre (IDRC) Digital Library (Canada)

    Thomas Schwandt

    THE RESEARCH QUALITY PLUS (RQ+) ASSESSMENT INSTRUMENT … consistent way to allow for further meta-analysis about research quality over time. … Addresses complex and integrative problems, requiring systems-based approaches … benefits or financial costs for participants that might not be appropriate in the …

  19. Metric learning

    CERN Document Server

    Bellet, Aurelien; Sebban, Marc

    2015-01-01

    Similarity between objects plays an important role in both human cognitive processes and artificial systems for recognition and categorization. How to appropriately measure such similarities for a given task is crucial to the performance of many machine learning, pattern recognition and data mining methods. This book is devoted to metric learning, a set of techniques for automatically learning similarity and distance functions from data, which has attracted a lot of interest in machine learning and related fields over the past ten years. In this book, we provide a thorough review of the metric learning

  20. Quality Assessment in the Primary care

    Directory of Open Access Journals (Sweden)

    Muharrem Ak

    2013-04-01

    Quality Assessment in Primary Care. Dear Editor, I have read the article titled "Implementation of Rogi Kalyan Samiti (RKS) at Primary Health Centre Durvesh" with great interest. Shrivastava et al. concluded that the assessment mechanism for the achievement of objectives for the suggested RKS model was not successful (1). I would hereby like to emphasize the importance of quality assessment (QA), especially in the era of newly established primary care implementations in our country. Promotion of quality has been a fundamental part of primary care health services; nevertheless, variations in quality of care exist even in developed countries. Accomplishment of quality in primary care faces barriers such as administrative and directorial factors, absence of evidence-based medicine practice, and lack of continuing medical education. Quality of health care is no doubt a multifaceted model that covers all components of health structures and processes of care. Quality in the primary care setting includes the patient-physician relationship, immunization, maternal, adolescent, adult and geriatric health care, referral, non-communicable disease management and prescribing (2). Most countries are only now beginning the implementation of quality assessments across healthcare. Organizations like the European Society for Quality and Safety in Family Practice (EQuiP) endeavor to accomplish quality through collaboration. There are reported developments and experiments related to the methodology, processes and outcomes of quality assessments of health care. Quality assessments will not only contribute to the accomplishment of the program/project but also detect the areas where obstacles exist. In order to speed up the adoption of QA and to avoid mistakes, health policy makers and family physicians from different parts of the world should share their experiences. Consensus on quality in preventive medicine implementations can help to yield

  1. Quality control in public participation assessments of water quality: the OPAL Water Survey.

    Science.gov (United States)

    Rose, N L; Turner, S D; Goldsmith, B; Gosling, L; Davidson, T A

    2016-07-22

    Public participation in scientific data collection is a rapidly expanding field. In water quality surveys, the involvement of the public, usually as trained volunteers, generally includes the identification of aquatic invertebrates to a broad taxonomic level. However, quality assurance is often not addressed and remains a key concern for the acceptance of publicly-generated water quality data. The Open Air Laboratories (OPAL) Water Survey, launched in May 2010, aimed to encourage interest and participation in water science by developing a 'low-barrier-to-entry' water quality survey. During 2010, over 3000 participant-selected lakes and ponds were surveyed making this the largest public participation lake and pond survey undertaken to date in the UK. But the OPAL approach of using untrained volunteers and largely anonymous data submission exacerbates quality control concerns. A number of approaches were used in order to address data quality issues including: sensitivity analysis to determine differences due to operator, sampling effort and duration; direct comparisons of identification between participants and experienced scientists; the use of a self-assessment identification quiz; the use of multiple participant surveys to assess data variability at single sites over short periods of time; comparison of survey techniques with other measurement variables and with other metrics generally considered more accurate. These quality control approaches were then used to screen the OPAL Water Survey data to generate a more robust dataset. The OPAL Water Survey results provide a regional and national assessment of water quality as well as a first national picture of water clarity (as suspended solids concentrations). Less than 10 % of lakes and ponds surveyed were 'poor' quality while 26.8 % were in the highest water quality band. It is likely that there will always be a question mark over untrained volunteer generated data simply because quality assurance is uncertain

  2. Quality of Experience Assessment of Video Quality in Social Clouds

    Directory of Open Access Journals (Sweden)

    Asif Ali Laghari

    2017-01-01

    Video sharing on social clouds is popular among users around the world. High-definition (HD) videos have large file sizes, so storing them in cloud storage and streaming them at high quality from the cloud to the client are significant problems for service providers. Social clouds compress the videos to save storage and to stream over slow networks while providing quality of service (QoS). Compression decreases quality compared with the original video, and parameters are changed during online play as well as after download. Degradation of video quality due to compression decreases the quality of experience (QoE) of end users. To assess the QoE of video compression, we conducted subjective QoE experiments by uploading, sharing, and playing videos from social clouds. Three popular social clouds, Facebook, Tumblr, and Twitter, were selected to upload and play videos online for users. QoE was recorded using a questionnaire in which users reported their experience of the video quality they perceived. Results show that Facebook and Twitter compressed HD videos more than the other clouds did; however, Facebook delivered better quality in its compressed videos than Twitter. Therefore, users assigned lower ratings to Twitter for online video quality compared with Tumblr, which provided high-quality online play of videos with less compression.

  3. Image quality assessment using deep convolutional networks

    Science.gov (United States)

    Li, Yezhou; Ye, Xiang; Li, Yong

    2017-12-01

    This paper proposes a method for accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are closely related to those used in human subjective assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of any arbitrary size as input, spatial pyramid pooling (SPP) is introduced to connect the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method takes an image as input, carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the quality of images of varying sizes taken by different sensors.
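
    The key architectural point, SPP between the top convolutional layer and the fully connected head so that inputs of any size map to a fixed-length feature vector, can be sketched in a few lines. The toy PyTorch network below illustrates the idea under my own choices of layer sizes and pooling levels; it is not the authors' architecture, and the framework is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPP(nn.Module):
    """Spatial pyramid pooling: fixed-length output for any input size."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):
        feats = [F.adaptive_max_pool2d(x, l).flatten(1) for l in self.levels]
        return torch.cat(feats, dim=1)

class IQANet(nn.Module):
    """Toy no-reference IQA network in the spirit of the abstract."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.spp = SPP((1, 2, 4))          # 64 * (1 + 4 + 16) = 1344 features
        self.head = nn.Linear(64 * 21, 1)  # regress a single quality score

    def forward(self, x):
        return self.head(self.spp(self.features(x)))

net = IQANet()
print(net(torch.rand(1, 3, 300, 457)).shape)  # arbitrary input size -> (1, 1)
```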

  4. Using the Consumer Experience with Pharmacy Services Survey as a quality metric for ambulatory care pharmacies: older adults' perspectives.

    Science.gov (United States)

    Shiyanbola, Olayinka O; Mott, David A; Croes, Kenneth D

    2016-05-26

    To describe older adults' perceptions of evaluating and comparing pharmacies based on the Consumer Experience with Pharmacy Services Survey (CEPSS), describe older adults' perceived importance of the CEPSS and its specific domains, and explore older adults' perceptions of the influence of specific CEPSS domains in choosing/switching pharmacies. Focus group methodology was combined with the administration of a questionnaire. The focus groups explored participants' perceived importance of the CEPSS and their perception of using the CEPSS to choose and/or switch pharmacies. Then, using the questionnaire, participants rated their perceived importance of each CEPSS domain in evaluating a pharmacy, and the likelihood of using CEPSS to switch pharmacies if their current pharmacy had low ratings. Descriptive and thematic analyses were done. 6 semistructured focus groups were conducted in a private meeting room in a Mid-Western state in the USA. 60 English-speaking adults who were at least 65 years, and had filled a prescription at a retail pharmacy within 90 days. During the focus groups, the older adults perceived the CEPSS to have advantages and disadvantages in evaluating and comparing pharmacies. Older adults thought the CEPSS was important in choosing the best pharmacies and avoiding the worst pharmacies. The perceived influence of the CEPSS in switching pharmacies varied depending on the older adult's personal experience or trust of other consumers' experience. Questionnaire results showed that participants perceived health/medication-focused communication as very important or extremely important (n=47, 82.5%) in evaluating pharmacies and would be extremely likely (n=21, 36.8%) to switch pharmacies if their pharmacy had low ratings in this domain. The older adults in this study are interested in using patient experiences as a quality metric for avoiding the worst pharmacies. Pharmacists' communication about health and medicines is perceived important and likely

  5. From Log Files to Assessment Metrics: Measuring Students' Science Inquiry Skills Using Educational Data Mining

    Science.gov (United States)

    Gobert, Janice D.; Sao Pedro, Michael; Raziuddin, Juelaila; Baker, Ryan S.

    2013-01-01

    We present a method for assessing science inquiry performance, specifically for the inquiry skill of designing and conducting experiments, using educational data mining on students' log data from online microworlds in the Inq-ITS system (Inquiry Intelligent Tutoring System; www.inq-its.org). In our approach, we use a 2-step process: First we use…

  6. A Comparison of Vector and Raster GIS Methods for Calculating Landscape Metrics Used in Environmental Assessments

    Science.gov (United States)

    Timothy G. Wade; James D. Wickham; Maliha S. Nash; Anne C. Neale; Kurt H. Riitters; K. Bruce Jones

    2003-01-01

    GIS-based measurements that combine native raster and native vector data are commonly used in environmental assessments. Most of these measurements can be calculated using either raster or vector data formats and processing methods. Raster processes are more commonly used because they can be significantly faster computationally...

  7. Automated Neuropsychological Assessment Metrics Version 4 (ANAM4): Select Psychometric Properties and Administration Procedures

    Science.gov (United States)

    2013-12-01

    disorders (including attention deficit hyperactivity disorder [ADHD]), and no gross visual (no worse than 20/30 corrected or uncorrected) or hearing … of consciousness, substance abuse problems/treatment, known neurological disorders, major psychiatric disorders (including attention deficit hyperactivity disorder), vision worse than 20/30 after correction, and hearing problems. Family history of psychiatric disorders was not assessed.

  8. ASSESSMENT OF QUALITY OF INNOVATIVE TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    Larisa Alexejevna Ismagilova

    2016-12-01

    We consider the topical issue of implementing innovative technologies in the aircraft engine building industry. In this industry, products with high reliability requirements are developed and mass-produced, combining the latest achievements of science and technology. To decide on the implementation of innovative technologies, a comprehensive assessment is carried out, which affects the efficiency with which the innovations are realized. The assessment of the quality of innovative technologies is therefore a key aspect in selecting technological processes for implementation. The suggested method addresses the assessment of the quality of new technologies and production processes from a new standpoint. The developed method stands out for its system of qualimetric characteristics ensuring the effectiveness, efficiency and adaptability of innovative technologies and processes. A feature of the suggested assessment system is that it is based on principles of matching and grouping quality indicators of innovative technologies with the characteristics of technological processes. The indicators are assessed from the standpoint of feasibility, technological competitiveness and commercial demand for the products. As an example, we discuss the application of the approach to assessing the quality of innovative technology for high-tech products such as turbine aircraft engines.

  9. No-Reference Video Quality Assessment Based on Statistical Analysis in 3D-DCT Domain.

    Science.gov (United States)

    Li, Xuelong; Guo, Qun; Lu, Xiaoqiang

    2016-05-13

    It is an important task to design models for universal no-reference video quality assessment (NR-VQA) in multiple video processing and computer vision applications. However, most existing NR-VQA metrics are designed for specific distortion types, which are often not known in practical applications. A further deficiency is that the spatial and temporal information of videos is rarely considered simultaneously. In this paper, we propose a new NR-VQA metric based on spatiotemporal natural video statistics (NVS) in the 3D discrete cosine transform (3D-DCT) domain. In the proposed method, a set of features is first extracted based on the statistical analysis of 3D-DCT coefficients to characterize the spatiotemporal statistics of videos in different views. These features are then used to predict the perceived video quality via an efficient linear support vector regression (SVR) model. The contributions of this paper are: 1) we explore the spatiotemporal statistics of videos in the 3D-DCT domain, which has an inherent spatiotemporal encoding advantage over other widely used 2D transformations; 2) we extract a small set of simple but effective statistical features for video visual quality prediction; and 3) the proposed method is universal for multiple types of distortions and robust across different databases. The proposed method is tested on four widely used video databases. Extensive experimental results demonstrate that the proposed method is competitive with state-of-the-art NR-VQA metrics and the top-performing FR-VQA and RR-VQA metrics.
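
    A simplified sketch of this kind of pipeline: blockwise 3D-DCT over space-time, summary statistics of the coefficients as features, and a linear SVR mapping features to subjective scores. The two statistics below are placeholders of my own choosing; the paper's actual NVS feature set is richer, and all data here are synthetic.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVR

def dct3d_features(video, bs=8):
    """video: (frames, height, width) grayscale array -> small feature vector."""
    t, h, w = (d - d % bs for d in video.shape)
    stats = []
    for k in range(0, t, bs):
        for i in range(0, h, bs):
            for j in range(0, w, bs):
                # 3D-DCT of one spatiotemporal block; drop the DC coefficient.
                c = dctn(video[k:k+bs, i:i+bs, j:j+bs], norm="ortho").ravel()[1:]
                stats.append([c.std(), np.abs(c).mean()])  # toy NVS statistics
    return np.mean(stats, axis=0)

rng = np.random.default_rng(0)
videos = [rng.random((24, 64, 64)) for _ in range(20)]  # synthetic clips
mos = rng.uniform(1, 5, 20)                             # synthetic MOS labels

X = np.stack([dct3d_features(v) for v in videos])
model = SVR(kernel="linear").fit(X, mos)                # quality predictor
print(model.predict(X[:3]))
```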

  10. Capability Assessment and Performance Metrics for the Titan Multispectral Mapping Lidar

    Directory of Open Access Journals (Sweden)

    Juan Carlos Fernandez-Diaz

    2016-11-01

    In this paper we present a description of a new multispectral airborne mapping light detection and ranging (lidar) system, along with performance results obtained from two years of data collection and test campaigns. The Titan multiwave lidar is manufactured by Teledyne Optech Inc. (Toronto, ON, Canada) and emits laser pulses at the 1550, 1064 and 532 nm wavelengths simultaneously through a single oscillating-mirror scanner, at pulse repetition frequencies (PRFs) that range from 50 to 300 kHz per wavelength (maximum combined PRF of 900 kHz). The Titan system can perform simultaneous mapping in terrestrial and very shallow water environments, and its multispectral capability enables new applications, such as the production of false color active imagery derived from the lidar return intensities and the automated classification of targets and land covers. Field tests and mapping projects performed over the past two years demonstrate capabilities to classify five land covers in urban environments with an accuracy of 90%, map bathymetry under more than 15 m of water, and map thick vegetation canopies at sub-meter vertical resolutions. In addition to its multispectral and performance characteristics, the Titan system is designed with several redundancy and diversity schemes that have proven to be beneficial for both operations and the improvement of data quality.

  11. Assessing product image quality for online shopping

    Science.gov (United States)

    Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq

    2012-01-01

    Assessing product-image quality is important in the context of online shopping. A high-quality image that conveys more information about a product can boost the buyer's confidence and attract more attention. However, the notion of image quality for product images is not the same as in other domains. The perception of quality of product images depends not only on various photographic quality features but also on high-level features such as the clarity of the foreground or the goodness of the background. In this paper, we define a notion of product-image quality based on various such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay's images. We formulate a multi-class classification problem for modeling image quality by classifying images into good, fair and poor quality based on the guided perceptual notions from the judges. We also conduct experiments with regression using average crowd-sourced human judgments as the target. We compute a pseudo-regression score as the expected average of the predicted classes, and also compute a score from the regression technique. We design many experiments with various sampling and voting schemes on the crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracies (70% or greater) on the test data set. We observe that our computed image quality score has a high (0.66) rank correlation with the average votes from the crowd-sourced human judgments.
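
    The "pseudo-regression score" is simply the expectation of the class value under the classifier's predicted probability distribution. A sketch follows; the ordinal coding (poor = 0, fair = 1, good = 2) is my assumption for illustration, and the score/vote arrays are synthetic.

```python
import numpy as np
from scipy.stats import spearmanr

class_values = np.array([0.0, 1.0, 2.0])  # poor, fair, good (assumed coding)

def pseudo_regression_score(class_probs):
    """Expected class value under the predicted probability distribution."""
    return class_probs @ class_values

# One image's predicted distribution -> a continuous quality score.
print(pseudo_regression_score(np.array([0.1, 0.3, 0.6])))  # -> 1.5

# Rank correlation against average crowd votes, as reported in the abstract.
scores = np.array([1.5, 0.4, 1.9, 1.1])   # model scores (synthetic)
votes = np.array([3.2, 1.8, 4.5, 2.9])    # mean human votes (synthetic)
print(spearmanr(scores, votes).correlation)
```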

  12. Research Quality Assessment and Planning Journals. The Italian Perspective.

    Directory of Open Access Journals (Sweden)

    Bruno Zanon

    2014-02-01

    Assessment of research products is a crucial issue for universities and research institutions faced with internationalization and competition. Disciplines are reacting differently to this challenge, and planning, in its various forms – from urban design to process-oriented sectors – is under strain because the increasingly common assessment procedures, based on the number of articles published in ranked journals and on citation data, are not generally accepted. The reputation of journals, the impact of publications, and the profiles of scholars are increasingly defined by means of indexes such as impact factor and citation counts, but these metrics are questioned because they do not take account of all journals and magazines – in particular those published in languages other than English – and they do not consider teaching and other activities typical of academics which have a real impact on planning practices at the local level. In Italy the discussion is particularly heated because assessment procedures are recent, the disciplinary community is not used to publishing in ranked international journals, and the Italian literature is not attuned to the international quality criteria. The paper reviews the recent debate on planning journals and research assessment. It focuses on the Italian case from the perspective of improving current practices.

  13. A condition metric for Eucalyptus woodland derived from expert evaluations.

    Science.gov (United States)

    Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D

    2018-02-01

    The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem.
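
    An ensemble of 30 bagged regression trees, as described, is available off the shelf. A minimal sketch using scikit-learn, where the expert dataset is replaced by a synthetic stand-in (500 hypothetical sites with 13 variables and a toy quality rule); BaggingRegressor's default base learner is a decision tree, matching the abstract's description.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the expert data: 13 site variables per hypothetical
# site (e.g. shrub cover, native forb richness), quality scored 0-100.
X = rng.random((500, 13))
y = 100 * X.mean(axis=1)  # toy "expert quality score"

# An ensemble of 30 bagged regression trees acts as the condition metric.
metric = BaggingRegressor(n_estimators=30, random_state=0).fit(X, y)

# Site variables in, predicted quality score out, for any new site.
site = rng.random((1, 13))
print(metric.predict(site))
```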

  14. MO-D-213-06: Quantitative Image Quality Metrics Are for Physicists, Not Radiologists: How to Communicate to Your Radiologists Using Their Language

    International Nuclear Information System (INIS)

    Szczykutowicz, T; Rubert, N; Ranallo, F

    2015-01-01

    Purpose: A framework for explaining differences in image quality to non-technical audiences in medical imaging is needed. Currently, this task is something that is learned "on the job." The lack of a formal methodology for communicating optimal acquisition parameters into the clinic effectively mitigates many technological advances. As a community, medical physicists need to be held responsible not only for advancing image science, but also for ensuring its proper use in the clinic. This work outlines a framework that bridges the gap between the results from quantitative image quality metrics like detectability, MTF, and NPS and their effect on specific anatomical structures present in diagnostic imaging tasks. Methods: Specific structures of clinical importance were identified for a body, an extremity, a chest, and a temporal bone protocol. Using these structures, quantitative metrics were used to identify the parameter space that should yield optimal image quality, constrained within the confines of clinical logistics and dose considerations. The reading room workflow for presenting the proposed changes for imaging each of these structures is presented. The workflow consists of displaying images for physician review consisting of different combinations of acquisition parameters guided by quantitative metrics. Examples of using detectability index, MTF, NPS, noise and noise non-uniformity are provided. During review, the physician was forced to judge the image quality solely on those features needed for diagnosis, not on the overall "look" of the image. Results: We found that in many cases, use of this framework settled disagreements between physicians. Once forced to judge images on the ability to detect specific structures, inter-reader agreement was obtained. Conclusion: This framework will provide consulting, research/industrial, or in-house physicists with clinically relevant imaging tasks to guide reading room image review. This framework avoids use

  15. MO-D-213-06: Quantitative Image Quality Metrics Are for Physicists, Not Radiologists: How to Communicate to Your Radiologists Using Their Language

    Energy Technology Data Exchange (ETDEWEB)

    Szczykutowicz, T; Rubert, N; Ranallo, F [University Wisconsin-Madison, Madison, WI (United States)

    2015-06-15

    Purpose: A framework for explaining differences in image quality to non-technical audiences in medical imaging is needed. Currently, this task is something that is learned "on the job." The lack of a formal methodology for communicating optimal acquisition parameters into the clinic effectively mitigates many technological advances. As a community, medical physicists need to be held responsible not only for advancing image science, but also for ensuring its proper use in the clinic. This work outlines a framework that bridges the gap between the results from quantitative image quality metrics like detectability, MTF, and NPS and their effect on specific anatomical structures present in diagnostic imaging tasks. Methods: Specific structures of clinical importance were identified for a body, an extremity, a chest, and a temporal bone protocol. Using these structures, quantitative metrics were used to identify the parameter space that should yield optimal image quality, constrained within the confines of clinical logistics and dose considerations. The reading room workflow for presenting the proposed changes for imaging each of these structures is presented. The workflow consists of displaying images for physician review consisting of different combinations of acquisition parameters guided by quantitative metrics. Examples of using detectability index, MTF, NPS, noise and noise non-uniformity are provided. During review, the physician was forced to judge the image quality solely on those features needed for diagnosis, not on the overall "look" of the image. Results: We found that in many cases, use of this framework settled disagreements between physicians. Once forced to judge images on the ability to detect specific structures, inter-reader agreement was obtained. Conclusion: This framework will provide consulting, research/industrial, or in-house physicists with clinically relevant imaging tasks to guide reading room image review. This framework avoids use

  16. Drawing a baseline in aesthetic quality assessment

    Science.gov (United States)

    Rubio, Fernando; Flores, M. Julia; Puerta, Jose M.

    2018-04-01

    Aesthetic classification of images is an inherently subjective task. There is no validated collection of images/photographs labeled by experts as having good or bad quality. Currently, the closest approximation is to use databases of photos where a group of users rates each image. Hence, there is not a unique good/bad label but a rating distribution produced by user votes. Because of this, binary supervised aesthetic classification cannot be stated as directly as other Computer Vision tasks. Recent literature follows an approach in which researchers use the average rating from the users for each image and establish an arbitrary threshold to determine its class or label. In this way, images above the threshold are considered of good quality, while images below the threshold are considered of bad quality. This paper analyzes the current literature and reviews the attributes able to represent an image, grouping them into three families: specific, general and deep features. Among those that have proved most competitive, we have selected a representative subset, our main goal being to establish a clear experimental framework. Finally, once the features were selected, we used them on the full AVA dataset. We remark that, for validation, we report not only accuracy values, which are not very informative in this case, but also metrics able to evaluate classification power on imbalanced datasets. We conducted a series of experiments in which several distinct, well-known classifiers were learned from the data. In this way, the paper provides what we consider valuable and valid baseline results for the given problem.
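
    The thresholding protocol described above can be sketched as follows. Feature extraction from AVA images is out of scope here, so random stand-in features and a hypothetical threshold of 5.0 are assumed, and validation reports imbalance-aware metrics rather than plain accuracy:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import balanced_accuracy_score, f1_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    mean_rating = rng.normal(5.4, 0.8, n)     # stand-in for AVA mean user votes
    features = np.c_[mean_rating + rng.normal(0, 1.0, n),  # weakly informative
                     rng.normal(0, 1, (n, 9))]             # uninformative filler

    threshold = 5.0                            # the arbitrary cut-off discussed above
    labels = (mean_rating >= threshold).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(features, labels,
                                              stratify=labels, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    # Accuracy alone is misleading on imbalanced classes, hence these metrics:
    print("balanced accuracy:", balanced_accuracy_score(y_te, pred))
    print("F1:", f1_score(y_te, pred))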

  17. Metrication manual

    International Nuclear Information System (INIS)

    Harper, A.F.A.; Digby, R.B.; Thong, S.P.; Lacey, F.

    1978-04-01

    In April 1978 a meeting of senior metrication officers, convened by the Commonwealth Science Council of the Commonwealth Secretariat, was held in London. The participants were drawn from Australia, Bangladesh, Britain, Canada, Ghana, Guyana, India, Jamaica, Papua New Guinea, Solomon Islands and Trinidad and Tobago. Among other things, the meeting resolved to develop a set of guidelines to assist countries in changing to SI and to compile these guidelines in the form of a working manual.

  18. Image Quality Assessment via Quality-aware Group Sparse Coding

    Directory of Open Access Journals (Sweden)

    Minglei Tong

    2014-12-01

    Full Text Available Image quality assessment has been attracting growing attention at an accelerated pace over the past decade in the fields of image processing, vision and machine learning. In particular, general-purpose blind image quality assessment is technically challenging, and many state-of-the-art approaches have been developed to solve this problem, most under the supervised learning framework, where human-scored samples are needed for training a regression model. In this paper, we propose an unsupervised learning approach that works without human labels. In the off-line stage, our method trains a dictionary of patch atoms covering different levels of image quality across the training samples, without knowing the human scores, where each atom is associated with a quality score induced from the reference image; at the on-line stage, given an image patch, our method performs group sparse coding to encode the sample, such that the sample quality can be estimated from the few labeled atoms whose encoding coefficients are nonzero. Experimental results on the public dataset show the promising performance of our approach, and future research directions are also discussed.
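
    A minimal sketch of the on-line estimation step, assuming plain sparse coding as a stand-in for the group sparse coding described above; the per-atom quality scores and training patches are simulated here rather than induced from reference images:

    import numpy as np
    from sklearn.decomposition import DictionaryLearning, sparse_encode

    rng = np.random.default_rng(1)
    train_patches = rng.normal(size=(500, 64))      # 8x8 patches, flattened
    atom_quality = rng.uniform(0, 1, 32)            # assumed per-atom quality scores

    # Off-line stage: learn a dictionary of patch atoms (no human labels)
    dico = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20,
                              random_state=1).fit(train_patches)

    # On-line stage: sparse-encode a test patch, then average the quality
    # scores of the atoms with nonzero coefficients
    test_patch = rng.normal(size=(1, 64))
    codes = sparse_encode(test_patch, dico.components_,
                          algorithm="lasso_lars", alpha=0.5)[0]
    active = np.abs(codes) > 0
    estimate = (np.average(atom_quality[active], weights=np.abs(codes[active]))
                if active.any() else float("nan"))
    print(f"estimated patch quality: {estimate:.3f}")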

  19. Quantitative Metrics and Risk Assessment: The Three Tenets Model of Cybersecurity

    Directory of Open Access Journals (Sweden)

    Jeff Hughes

    2013-08-01

    Full Text Available Progress in operational cybersecurity has been difficult to demonstrate. In spite of the considerable research and development investments made for more than 30 years, many government, industrial, financial, and consumer information systems continue to be successfully attacked and exploited on a routine basis. One of the main reasons that progress has been so meagre is that most technical cybersecurity solutions that have been proposed to-date have been point solutions that fail to address operational tradeoffs, implementation costs, and consequent adversary adaptations across the full spectrum of vulnerabilities. Furthermore, sound prescriptive security principles previously established, such as the Orange Book, have been difficult to apply given current system complexity and acquisition approaches. To address these issues, the authors have developed threat-based descriptive methodologies to more completely identify system vulnerabilities, to quantify the effectiveness of possible protections against those vulnerabilities, and to evaluate operational consequences and tradeoffs of possible protections. This article begins with a discussion of the tradeoffs among seemingly different system security properties such as confidentiality, integrity, and availability. We develop a quantitative framework for understanding these tradeoffs and the issues that arise when those security properties are all in play within an organization. Once security goals and candidate protections are identified, risk/benefit assessments can be performed using a novel multidisciplinary approach, called “QuERIES.” The article ends with a threat-driven quantitative methodology, called “The Three Tenets”, for identifying vulnerabilities and countermeasures in networked cyber-physical systems. The goal of this article is to offer operational guidance, based on the techniques presented here, for informed decision making about cyber-physical system security.

  20. Assessment Quality in Tertiary Education: An Integrative Literature Review

    OpenAIRE

    Gerritsen-van Leeuwenkamp, Karin; Joosten-ten Brinke, Desirée; Kester, Liesbeth

    2018-01-01

    In tertiary education, inferior assessment quality is a problem that has serious consequences for students, teachers, government, and society. A lack of a clear and overarching conceptualization of assessment quality can cause difficulties in guaranteeing assessment quality in practice. Thus, the aim of this study is to conceptualize assessment quality in tertiary education by providing an overview of the assessment quality criteria, their influences, the evaluation of the assessment quality ...

  1. Quality Management Plan for the Environmental Assessment and Innovation Division

    Science.gov (United States)

    Quality management plan (QMP) identifying the mission and the roles and responsibilities of personnel with regard to quality assurance and quality management for the Environmental Assessment and Innovation Division.

  2. Cyber threat metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Frye, Jason Neal; Veitch, Cynthia K.; Mateski, Mark Elliot; Michalski, John T.; Harris, James Mark; Trevino, Cassandra M.; Maruoka, Scott

    2012-03-01

    Threats are generally much easier to list than to describe, and much easier to describe than to measure. As a result, many organizations list threats. Fewer describe them in useful terms, and still fewer measure them in meaningful ways. This is particularly true in the dynamic and nebulous domain of cyber threats - a domain that tends to resist easy measurement and, in some cases, appears to defy any measurement. We believe the problem is tractable. In this report we describe threat metrics and models for characterizing threats consistently and unambiguously. The purpose of this report is to support the Operational Threat Assessment (OTA) phase of risk and vulnerability assessment. To this end, we focus on the task of characterizing cyber threats using consistent threat metrics and models. In particular, we address threat metrics and models for describing malicious cyber threats to US FCEB agencies and systems.

  3. Assessment of every day extremely low frequency (Elf) electromagnetic fields (50-60 Hz) exposure: which metrics?

    International Nuclear Information System (INIS)

    Verrier, A.; Magne, I.; Souqes, M.; Lambrozo, J.

    2006-01-01

    Because electricity is encountered at every moment of the day, at home with household appliances or in every type of transportation, people are exposed most of the time to extremely low frequency (ELF) electromagnetic fields (50-60 Hz) in various ways. Due to a lack of knowledge about the biological mechanisms of 50 Hz magnetic fields, studies seeking to identify health effects of exposure use central tendency metrics. The objective of our study is to provide better information about these exposure measurements from three categories of metrics. We calculated exposure metrics from data series (79 subjects exposed every day), made up of approximately 20,000 recordings of magnetic fields, measured every 30 seconds for 7 days with an EMDEX II dosimeter. These indicators were divided into three categories: central tendency metrics, dispersion metrics and variability metrics. We used Principal Component Analysis (PCA), a multidimensional technique, to examine the relations between the different exposure metrics for a group of subjects. The first two principal components accounted for 71.7% of the variance. The first component (42.7%) was characterized by central tendency; the second (29.0%) was composed of dispersion characteristics. The third component (17.2%) was composed of variability characteristics. This study confirms the need to improve exposure measurements by using at least two dimensions: intensity and dispersion. (authors)
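
    A sketch of the three metric families and the PCA step described above. The field recordings (one series of 30-second samples per subject over roughly 7 days) are simulated here; the real study used EMDEX II dosimeter data:

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    subjects = {f"subj{i}": rng.lognormal(mean=-2 + 0.3 * rng.random(),
                                          sigma=0.5, size=20160)  # ~7 days @ 30 s
                for i in range(79)}

    rows = []
    for name, x in subjects.items():
        rows.append({
            "mean": x.mean(),                          # central tendency
            "median": np.median(x),
            "std": x.std(),                            # dispersion
            "iqr": np.subtract(*np.percentile(x, [75, 25])),
            "p95": np.percentile(x, 95),
            "rcm": np.abs(np.diff(x)).mean(),          # variability (rate of change)
        })
    metrics = pd.DataFrame(rows, index=list(subjects))

    z = (metrics - metrics.mean()) / metrics.std()     # standardize before PCA
    pca = PCA(n_components=3).fit(z)
    print("explained variance ratio:", pca.explained_variance_ratio_.round(3))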

  4. Assessing Journal Quality in Mathematics Education

    Science.gov (United States)

    Nivens, Ryan Andrew; Otten, Samuel

    2017-01-01

    In this Research Commentary, we describe 3 journal metrics--the Web of Science's Impact Factor, Scopus's SCImago Journal Rank, and Google Scholar Metrics' h5-index--and compile the rankings (if they exist) for 69 mathematics education journals. We then discuss 2 paths that the mathematics education community should consider with regard to these…

  5. Assessing indoor air quality in New York City nail salons.

    Science.gov (United States)

    Pavilonis, Brian; Roelofs, Cora; Blair, Carly

    2018-05-01

    Nail salons are an important business and employment sector for recent immigrants, offering popular services to a diverse range of customers across the United States. However, due to the nature of nail products and services, salon air can be burdened with a mix of low levels of hazardous airborne contaminants. Surveys of nail technicians have commonly found increased work-related symptoms, such as headaches and respiratory irritation, that are consistent with indoor air quality problems. In an effort to improve indoor air quality in nail salons, the state of New York recently promulgated regulations to require increased outdoor air and "source capture" of contaminants. Existing indoor air quality in New York State salons is unknown. In advance of the full implementation of the rules by 2021, we sought to establish reliable and usable baseline indoor air quality metrics to determine the feasibility and effectiveness of the requirement. In this pilot study, we measured total volatile organic compound (TVOC) and carbon dioxide (CO2) concentrations in 10 nail salons located in New York City to assess temporal and spatial trends. Within-salon contaminant variation was generally minimal, indicating a well-mixed room and similar general exposure regardless of the task being performed. TVOC and CO2 concentrations were strongly positively correlated (ρ = 0.81), suggesting that CO2 could serve as a surrogate indicator of air quality for the purposes of compliance with the standard. An almost tenfold increase in TVOC concentration was observed when the American National Standards Institute/American Society of Heating, Refrigerating and Air-Conditioning Engineers (ANSI/ASHRAE) target CO2 concentration of 850 ppm was exceeded compared to when this target was met.

  6. Quality Assessment of Collection 6 MODIS Atmospheric Science Products

    Science.gov (United States)

    Manoharan, V. S.; Ridgway, B.; Platnick, S. E.; Devadiga, S.; Mauoka, E.

    2015-12-01

    Since the launch of the NASA Terra and Aqua satellites in December 1999 and May 2002, respectively, atmosphere and land data acquired by the MODIS (Moderate Resolution Imaging Spectroradiometer) sensor on-board these satellites have been reprocessed five times at the MODAPS (MODIS Adaptive Processing System) located at NASA GSFC. The global land and atmosphere products use science algorithms developed by the NASA MODIS science team investigators. MODAPS completed Collection 6 reprocessing of MODIS Atmosphere science data products in April 2015 and is currently generating the Collection 6 products using the latest version of the science algorithms. This reprocessing has generated one of the longest time series of consistent data records for understanding cloud, aerosol, and other constituents in the earth's atmosphere. It is important to carefully evaluate and assess the quality of this data and remove any artifacts to maintain a useful climate data record. Quality Assessment (QA) is an integral part of the processing chain at MODAPS. This presentation will describe the QA approaches and tools adopted by the MODIS Land/Atmosphere Operational Product Evaluation (LDOPE) team to assess the quality of MODIS operational Atmospheric products produced at MODAPS. Some of the tools include global high resolution images, time series analysis and statistical QA metrics. The new high resolution global browse images with pan and zoom have provided the ability to perform QA of products in real time through synoptic QA on the web. This global browse generation has been useful in identifying production error, data loss, and data quality issues from calibration error, geolocation error and algorithm performance. A time series analysis for various science datasets in the Level-3 monthly product was recently developed for assessing any long term drifts in the data arising from instrument errors or other artifacts. This presentation will describe and discuss some test cases from the

  7. Contribution to a quantitative assessment model for reliability-based metrics of electronic and programmable safety-related functions

    International Nuclear Information System (INIS)

    Hamidi, K.

    2005-10-01

    The use of fault-tolerant EP architectures has induced growing constraints, whose influence on reliability-based performance metrics is no longer negligible. To face up to the growing influence of simultaneous failures, this thesis proposes, for safety-related functions, a new reliability assessment method based on better accounting for temporal behavior. This report introduces the concept of information and uses it to interpret the failure modes of a safety-related function as the direct result of the initiation and propagation of erroneous information up to the actuator level. The main idea is to distinguish between the appearance and disappearance of erroneous states, which are intrinsically dependent on hardware characteristics and maintenance policies, and their possible activation, constrained by architectural choices, which leads to the failure of the safety-related function. This approach is based, at a low level, on deterministic discrete-event (SED) models of the architecture, and uses non-homogeneous Markov chains to describe the time evolution of the probabilities of errors. (author)

  8. A metric-based assessment of flood risk and vulnerability of rural communities in the Lower Shire Valley, Malawi

    Science.gov (United States)

    Adeloye, A. J.; Mwale, F. D.; Dulanya, Z.

    2015-06-01

    In response to the increasing frequency and economic damages of natural disasters globally, disaster risk management has evolved to incorporate risk assessments that are multi-dimensional, integrated and metric-based. This is to support knowledge-based decision making and hence sustainable risk reduction. In Malawi and most of Sub-Saharan Africa (SSA), however, flood risk studies remain focussed on understanding causation, impacts, perceptions and coping and adaptation measures. Using the IPCC Framework, this study has quantified and profiled risk to flooding of rural, subsistent communities in the Lower Shire Valley, Malawi. Flood risk was obtained by integrating hazard and vulnerability. Flood hazard was characterised in terms of flood depth and inundation area obtained through hydraulic modelling in the valley with Lisflood-FP, while the vulnerability was indexed through analysis of exposure, susceptibility and capacity that were linked to social, economic, environmental and physical perspectives. Data on these were collected through structured interviews of the communities. The implementation of the entire analysis within GIS enabled the visualisation of spatial variability in flood risk in the valley. The results show predominantly medium levels in hazardousness, vulnerability and risk. The vulnerability is dominated by a high to very high susceptibility. Economic and physical capacities tend to be predominantly low but social capacity is significantly high, resulting in overall medium levels of capacity-induced vulnerability. Exposure manifests as medium. The vulnerability and risk showed marginal spatial variability. The paper concludes with recommendations on how these outcomes could inform policy interventions in the Valley.
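
    A minimal sketch of the hazard-vulnerability integration described above, with hypothetical community values, equal-weight aggregation of the vulnerability components, and an assumed multiplicative risk form (the study's actual index weights and normalization scheme are not reproduced here):

    import numpy as np

    def normalize(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())

    # Per-community inputs (hypothetical values)
    flood_depth = np.array([0.4, 1.2, 2.1, 0.9])      # m, from a hydraulic model
    exposure = np.array([0.3, 0.6, 0.8, 0.5])
    susceptibility = np.array([0.7, 0.8, 0.9, 0.6])
    capacity = np.array([0.6, 0.3, 0.2, 0.5])         # higher = more coping ability

    hazard = normalize(flood_depth)
    # Capacity reduces vulnerability, hence the (1 - capacity) term
    vulnerability = np.mean([exposure, susceptibility, 1 - capacity], axis=0)
    risk = hazard * vulnerability                     # common multiplicative form

    classes = ["low", "medium", "high", "very high"]
    for i, r in enumerate(risk):
        print(f"community {i}: risk={r:.2f} "
              f"class={classes[np.digitize(r, [0.2, 0.4, 0.6])]}")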

  9. Considerations of the Software Metric-based Methodology for Software Reliability Assessment in Digital I and C Systems

    International Nuclear Information System (INIS)

    Ha, J. H.; Kim, M. K.; Chung, B. S.; Oh, H. C.; Seo, M. R.

    2007-01-01

    Analog I and C systems have been replaced by digital I and C systems because the digital systems have many potential benefits to nuclear power plants in terms of operational and safety performance. For example, digital systems are essentially free of drifts, have higher data handling and storage capabilities, and provide improved performance by accuracy and computational capabilities. In addition, analog replacement parts become more difficult to obtain since they are obsolete and discontinued. There are, however, challenges to the introduction of digital technology into the nuclear power plants because digital systems are more complex than analog systems and their operation and failure modes are different. Especially, software, which can be the core of functionality in the digital systems, does not wear out physically like hardware and its failure modes are not yet defined clearly. Thus, some researches to develop the methodology for software reliability assessment are still proceeding in the safety-critical areas such as nuclear system, aerospace and medical devices. Among them, software metric-based methodology has been considered for the digital I and C systems of Korean nuclear power plants. Advantages and limitations of that methodology are identified and requirements for its application to the digital I and C systems are considered in this study

  10. [Certification assessment and quality and risk management].

    Science.gov (United States)

    Papin-Morardet, Maud

    2018-03-01

    Organised by the French National Health Authority (HAS), certification is an external assessment process which is obligatory for all public and private health facilities, whatever their size or activity. The aim is to independently evaluate the quality of the health care provision of hospitals and clinics in France. This article looks at the investigation methods and the procedure used during the certification assessment of Henri Mondor University Hospitals in 2016.

  11. A novel no-reference objective stereoscopic video quality assessment method based on visual saliency analysis

    Science.gov (United States)

    Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin

    2017-07-01

    This paper proposes a no-reference objective stereoscopic video quality assessment method, motivated by bringing the results of objective experiments close to those of subjective assessment. We believe that image regions with different degrees of visual salience should not have the same weights when designing an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions of strong, general and weak saliency. In addition, local feature information such as blockiness, zero-crossing and depth is extracted and combined in a mathematical model to calculate a quality assessment score. Regions with different degrees of salience are assigned different weights in the model. Experimental results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.
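
    A sketch of the saliency-weighted pooling idea above: per-region quality scores are pooled with larger weights for strongly salient regions. The GBVS saliency maps, the blockiness/zero-crossing/depth features, and the weight values are all assumed inputs here, not the paper's trained model:

    import numpy as np

    def pooled_score(region_scores, saliency_class, weights=(0.6, 0.3, 0.1)):
        """region_scores: local quality scores per region;
        saliency_class: 0=strong, 1=general, 2=weak saliency per region."""
        region_scores = np.asarray(region_scores, float)
        w = np.asarray(weights)[np.asarray(saliency_class)]
        return float(np.sum(w * region_scores) / np.sum(w))

    # Example: pool each view by saliency, then average left and right
    left = pooled_score([0.8, 0.5, 0.2], [0, 1, 2])
    right = pooled_score([0.7, 0.6, 0.3], [0, 1, 2])
    print(f"stereo quality score: {(left + right) / 2:.3f}")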

  12. Assessment of Performance Measures for Security of the Maritime Transportation Network, Port Security Metrics : Proposed Measurement of Deterrence Capability

    Science.gov (United States)

    2007-01-03

    This report is the third in a series describing the development of performance measures pertaining to the security of the maritime transportation network (port security metrics). The development of measures to guide improvements in maritime security ...

  13. Sustainability Assessment of a Military Installation: A Template for Developing a Mission Sustainability Framework, Goals, Metrics and Reporting System

    Science.gov (United States)

    2009-08-01

    …integration across base. MSF Category: Neighbors and Stakeholders (NS). Conceptual metric NS1: "Walkable" on-base community design (clustering of facilities, presence of sidewalks, need for a car, access to public transit); measured by adapting LEED for Neighborhood Development (ND) as a 0-100 index based on the score of walkable-community indicators.

  14. The biological basis for environmental quality assessments

    International Nuclear Information System (INIS)

    Karpov, V.I.; Kudritsky, Y.K.; Georgievsky, A.B.

    1991-01-01

    A systematic approach to environmental quality assessment is required with regard to the Baltic regions in order to address the problem of pollution abatement. The proposed systematization of adaptive states stems from the general theory of adaptation. The various types of adaptation are described. (AB)

  15. Assessment of physicochemical qualities, heavy metal ...

    African Journals Online (AJOL)

    Ogbe

    2012-08-23

    Aug 23, 2012 ... dominance of metals in the water followed the sequence: Al > Zn > Cu > Fe > Mn > Cd > Pb > Hg > As. ... ted and treated waters poses a considerable health risk ..... quently used to assess the general hygienic quality of water ...

  16. Quality assessment of pacemaker implantations in Denmark

    DEFF Research Database (Denmark)

    Møller, M; Arnsbo, P; Asklund, Mogens

    2002-01-01

    AIMS: Quality assessment of therapeutic procedures is essential to ensure a cost-effective health care system. Pacemaker implantation is a common procedure, with more than 500,000 implantations world-wide per year, but the general complication rate is not well described. We studied procedure relat...

  17. A Quality Approach to Writing Assessment.

    Science.gov (United States)

    Andrade, Joanne; Ryley, Helen

    1992-01-01

    A Colorado elementary school began its Total Quality Management work about a year ago after several staff members participated in an IBM Leadership Training Program addressing applications of Deming's theories. The school's new writing assessment has increased collegiality and cross-grade collaboration. (MLH)

  18. Metrical Phonology and SLA.

    Science.gov (United States)

    Tice, Bradley S.

    Metrical phonology, a linguistic process of phonological stress assessment and diagrammatic simplification of sentence and word stress, is discussed as it is found in the English language with the intention that it may be used in second language instruction. Stress is defined by its physical and acoustical correlates, and the principles of…

  19. Engineering performance metrics

    Science.gov (United States)

    Delozier, R.; Snyder, N.

    1993-03-01

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved the normal systems design phases: conceptual design, detailed design, implementation, and integration. The lessons learned from this effort are explored in this paper and may provide a starting point for other large engineering organizations seeking to institute a performance measurement system. To facilitate this effort, a team consisting of customers and Engineering staff members was chartered to assist in the development of the metrics system and to ensure that the needs and views of the customers were considered. The development of a system of metrics is no different from the development of any other type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  20. Assessing Quality of Data Standards: Framework and Illustration Using XBRL GAAP Taxonomy

    Science.gov (United States)

    Zhu, Hongwei; Wu, Harris

    The primary purpose of data standards or metadata schemas is to improve the interoperability of data created by multiple standard users. Given the high cost of developing data standards, it is desirable to assess the quality of data standards. We develop a set of metrics and a framework for assessing data standard quality. The metrics include completeness and relevancy. Standard quality can also be indirectly measured by assessing interoperability of data instances. We evaluate the framework using data from the financial sector: the XBRL (eXtensible Business Reporting Language) GAAP (Generally Accepted Accounting Principles) taxonomy and US Securities and Exchange Commission (SEC) filings produced using the taxonomy by approximately 500 companies. The results show that the framework is useful and effective. Our analysis also reveals quality issues of the GAAP taxonomy and provides useful feedback to taxonomy users. The SEC has mandated that all publicly listed companies must submit their filings using XBRL. Our findings are timely and have practical implications that will ultimately help improve the quality of financial data.
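
    As a toy illustration of the completeness and relevancy metrics named above (the paper's exact operationalizations are not reproduced here), the sketch below compares a hypothetical taxonomy against the element sets actually used in filings; real XBRL parsing is out of scope:

    # Hypothetical taxonomy elements and per-filing element usage
    taxonomy = {"Assets", "Liabilities", "Revenues", "NetIncome", "Goodwill",
                "LegacyItemA", "LegacyItemB"}
    filings = [
        {"Assets", "Liabilities", "Revenues", "NetIncome", "CustomExtension1"},
        {"Assets", "Revenues", "NetIncome", "CustomExtension2"},
    ]

    used = set().union(*filings)
    standard_used = used & taxonomy
    extensions = used - taxonomy          # filer-defined elements signal gaps

    # completeness: how much of what filers need is covered by the standard
    completeness = len(standard_used) / len(used)
    # relevancy: how much of the standard is actually used by filers
    relevancy = len(standard_used) / len(taxonomy)

    print(f"completeness={completeness:.2f} relevancy={relevancy:.2f} "
          f"extensions={sorted(extensions)}")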

  1. Air Quality Assessment Using Interpolation Technique

    Directory of Open Access Journals (Sweden)

    Awkash Kumar

    2016-07-01

    Full Text Available Air pollution is increasing rapidly in almost all cities around the world due to increases in population. Mumbai in India is one of the mega cities where air quality is deteriorating at a very rapid rate. Air quality monitoring stations have been installed in the city to support air pollution control strategies that reduce air pollution levels. In this paper, an air quality assessment has been carried out over the sample region using interpolation techniques. The Inverse Distance Weighting (IDW) technique of Geographical Information Systems (GIS) has been used to perform interpolation with the help of air quality concentration data at three locations in Mumbai for the year 2008. The spatial and temporal variations in air quality levels for the Mumbai region were classified. The seasonal and annual variations of air quality levels for SO2, NOx and SPM (Suspended Particulate Matter) are the focus of this study. Results show that SPM concentrations always exceeded the permissible limit of the National Ambient Air Quality Standard. Also, SPM levels were lower in the monsoon season due to rainfall. The findings of this study will help to formulate control strategies for rational management of air pollution and can be used for many other regions.
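
    A minimal inverse distance weighting (IDW) sketch of the interpolation step above. The station coordinates and SPM concentrations are illustrative, not the study's data:

    import numpy as np

    def idw(x, y, stations, values, power=2.0, eps=1e-12):
        """Interpolate at (x, y) from station coords (n, 2) and observed values (n,)."""
        d = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
        if np.any(d < eps):                    # query point sits on a station
            return float(values[np.argmin(d)])
        w = 1.0 / d ** power                   # closer stations weigh more
        return float(np.sum(w * values) / np.sum(w))

    stations = np.array([[72.82, 18.94], [72.86, 19.07], [72.93, 19.12]])  # lon, lat
    spm = np.array([210.0, 185.0, 240.0])      # SPM, ug/m3 (hypothetical)

    print(f"SPM at (72.88, 19.02): {idw(72.88, 19.02, stations, spm):.1f} ug/m3")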

  2. Arbuscular mycorrhiza in soil quality assessment

    DEFF Research Database (Denmark)

    Kling, M.; Jakobsen, I.

    1998-01-01

    Arbuscular mycorrhizal (AM) fungi constitute a living bridge for the transport of nutrients from soil to plant roots, and are considered the group of soil microorganisms of most direct importance to nutrient uptake by herbaceous plants. AM fungi also contribute to the formation of soil aggregates and to the protection of plants against drought and root pathogens. Assessment of soil quality, defined as the capacity of a soil to function within ecosystem boundaries to sustain biological productivity, maintain environmental quality, and promote plant health, should therefore include both quantitative and qualitative measurements of this important biological resource. Various methods for the assessment of the potential for mycorrhiza formation and function are presented. Examples are given of the application of these methods to assess the impact of pesticides on the mycorrhiza.

  3. SU-F-T-600: Influence of Acuros XB and AAA Dose Calculation Algorithms On Plan Quality Metrics and Normal Lung Doses in Lung SBRT

    International Nuclear Information System (INIS)

    Yaparpalvi, R; Mynampati, D; Kuo, H; Garg, M; Tome, W; Kalnicki, S

    2016-01-01

    Purpose: To study the influence of the superposition-beam model (AAA) and determinant-photon transport-solver (Acuros XB) dose calculation algorithms on treatment plan quality metrics and on normal lung dose in lung SBRT. Methods: Treatment plans of 10 lung SBRT patients were randomly selected. Patients were prescribed a total dose of 50-54 Gy in 3-5 fractions (10 Gy × 5 or 18 Gy × 3). Plans were optimized for 6 MV beams using two arcs (VMAT). Doses were calculated using the AAA algorithm with heterogeneity correction. For each plan, plan quality metrics in the categories of coverage, homogeneity, conformity and gradient were quantified. Repeat dosimetry for these AAA treatment plans was performed using the AXB algorithm with heterogeneity correction for the same beam and MU parameters. Plan quality metrics were again evaluated and compared with the AAA plan metrics. For normal lung dose, V20 and V5 of (total lung − GTV) were evaluated. Results: The results are summarized in Supplemental Table 1. Mean PTV volume was 11.4 (±3.3) cm³. Comparing RTOG 0813 protocol criteria for conformality, AXB plans yielded, on average, similar PITV ratios (individual PITV ratio differences varied from −9 to +15%), reduced target coverage (−1.6%) and increased R50% (+2.6%). Comparing normal lung doses, lung V20 (+3.1%) and V5 (+1.5%) were slightly higher for AXB plans than for AAA plans. High-dose spillage ((V105%PD − PTV)/PTV) was slightly lower for AXB plans, but the low-dose spillage (D2cm) was similar between the two calculation algorithms. Conclusion: The AAA algorithm overestimates lung target dose. Routinely adopting AXB for dose calculations in lung SBRT planning may improve dose calculation accuracy, as AXB-based calculations have been shown to be closer to Monte Carlo dose predictions in accuracy, with relatively faster computational time. For clinical practice, revisiting dose fractionation in lung SBRT to correct for dose overestimates attributable to algorithm

  4. SU-F-T-600: Influence of Acuros XB and AAA Dose Calculation Algorithms On Plan Quality Metrics and Normal Lung Doses in Lung SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Yaparpalvi, R; Mynampati, D; Kuo, H; Garg, M; Tome, W; Kalnicki, S [Montefiore Medical Center, Bronx, NY (United States)

    2016-06-15

    Purpose: To study the influence of the superposition-beam model (AAA) and determinant-photon transport-solver (Acuros XB) dose calculation algorithms on treatment plan quality metrics and on normal lung dose in lung SBRT. Methods: Treatment plans of 10 lung SBRT patients were randomly selected. Patients were prescribed a total dose of 50-54 Gy in 3-5 fractions (10 Gy × 5 or 18 Gy × 3). Plans were optimized for 6 MV beams using two arcs (VMAT). Doses were calculated using the AAA algorithm with heterogeneity correction. For each plan, plan quality metrics in the categories of coverage, homogeneity, conformity and gradient were quantified. Repeat dosimetry for these AAA treatment plans was performed using the AXB algorithm with heterogeneity correction for the same beam and MU parameters. Plan quality metrics were again evaluated and compared with the AAA plan metrics. For normal lung dose, V20 and V5 of (total lung − GTV) were evaluated. Results: The results are summarized in Supplemental Table 1. Mean PTV volume was 11.4 (±3.3) cm³. Comparing RTOG 0813 protocol criteria for conformality, AXB plans yielded, on average, similar PITV ratios (individual PITV ratio differences varied from −9 to +15%), reduced target coverage (−1.6%) and increased R50% (+2.6%). Comparing normal lung doses, lung V20 (+3.1%) and V5 (+1.5%) were slightly higher for AXB plans than for AAA plans. High-dose spillage ((V105%PD − PTV)/PTV) was slightly lower for AXB plans, but the low-dose spillage (D2cm) was similar between the two calculation algorithms. Conclusion: The AAA algorithm overestimates lung target dose. Routinely adopting AXB for dose calculations in lung SBRT planning may improve dose calculation accuracy, as AXB-based calculations have been shown to be closer to Monte Carlo dose predictions in accuracy, with relatively faster computational time. For clinical practice, revisiting dose fractionation in lung SBRT to correct for dose overestimates

  5. Quality assessment of palliative home care in Italy.

    Science.gov (United States)

    Scaccabarozzi, Gianlorenzo; Lovaglio, Pietro Giorgio; Limonta, Fabrizio; Floriani, Maddalena; Pellegrini, Giacomo

    2017-08-01

    The complexity of end-of-life care, delivered by a large number of units of different organizational types caring for dying patients, motivates the importance of measuring the quality of the care provided. Despite law 38/2010, promulgated to remove barriers and provide affordable access to palliative care, measurement and monitoring of the processes of home care providers in Italy had not been attempted. Using data drawn from an institutional voluntary observatory established in Italy in 2013, covering home palliative care units caring for people between January and December 2013, we assess the degree to which Italian home palliative care teams endorse a set of standards required by law 38/2010 and best practices as they have emerged from the literature. The evaluation strategy is based on Rasch analysis, which allows both the performance of facilities and the difficulty of the quality indicators to be objectively measured on the same metric, using 14 quality indicators identified by the observatory's steering committee. Globally, 195 home care teams were registered in the observatory, reporting 40 955 patients cared for in 2013 and representing 66% of the home palliative care units active in Italy in 2013. Rasch analysis identifies 5 indicators ("interview" with caregivers, continuous training provided to medical and nursing staff, provision of specialized multidisciplinary interventions, psychological support to the patient and family, and drug supply at home) that are easy for health care providers to endorse, and 3 problematic indicators (presence of a formally established Local Network of Palliative Care in the area of reference, provision of care for the most problematic patients requiring high intensity of care, and the percentage of cancer patients dying at home). The lack of a Local Network of Palliative Care, required by law 38/2010, is at present the main barrier to its application. However, the adopted methodology suggests that a clear roadmap for health facilities

  6. Identified metabolic signature for assessing red blood cell unit quality is associated with endothelial damage markers and clinical outcomes

    DEFF Research Database (Denmark)

    Bordbar, Aarash; Johansson, Pär I.; Paglia, Giuseppe

    2016-01-01

    …shown no difference in clinical outcome for patients receiving old or fresh RBCs. An overlooked but essential issue in assessing RBC unit quality, and ultimately in designing the necessary clinical trials, is a metric for what constitutes an old or fresh RBC unit. STUDY DESIGN AND METHODS: Twenty RBC units... years and endothelial damage markers in healthy volunteers undergoing autologous transfusions. CONCLUSION: The state of RBC metabolism may be a better indicator of cellular quality than traditional hematologic variables.

  7. A farm platform approach to optimizing temperate grazing-livestock systems: metrics for trade-off assessments and future innovations

    Science.gov (United States)

    Harris, Paul; Takahashi, Taro; Blackwell, Martin; Cardenas, Laura; Collins, Adrian; Dungait, Jennifer; Eisler, Mark; Hawkins, Jane; Misselbrook, Tom; Mcauliffe, Graham; Mcfadzean, Jamie; Murray, Phil; Orr, Robert; Jordana Rivero, M.; Wu, Lianhai; Lee, Michael

    2017-04-01

    The platform collected baseline data on hydrology, emissions, nutrient cycling, biodiversity, productivity and livestock welfare/health for 2 years (April 2011 to March 2013). Since April 2013, the platform has been progressively modified across three distinct ca. 22 ha farmlets, the underlying principle being to improve sustainability (economic, social and environmental) by comparing contrasting pasture-based systems (permanent pasture, grass and clover swards, and reseeding of high-quality germplasm on a regular cycle). This modification or transitional period ended in July 2015, when the platform assumed full post-baseline status. In this paper, we summarise the sustainability trade-off metrics developed to compare the three systems, together with the farm platform data collections used to create them; collections that can be viewed as 'big data' when considered in their entirety. We concentrate on the baseline and transitional periods and discuss potential innovations to optimise grazing livestock systems utilising an experimental farm platform approach.

  8. Global Ozone Distribution relevant to Human Health: Metrics and present day levels from the Tropospheric Ozone Assessment Report (TOAR)

    Science.gov (United States)

    Fleming, Z. L.; Doherty, R. M.; von Schneidemesser, E.; Cooper, O. R.; Malley, C.; Colette, A.; Xu, X.; Pinto, J. P.; Simpson, D.; Schultz, M. G.; Hamad, S.; Moola, R.; Solberg, S.; Feng, Z.

    2017-12-01

    Using stations from the TOAR surface ozone database, this study quantifies present-day global and regional distributions of five ozone metrics relevant for both short-term and long-term human exposure. These metrics were explored at ozone monitoring sites globally, and re-classified for this project as urban or non-urban using population densities and night-time lights. National surface ozone limit values are usually related to an annual number of exceedances of daily maximum 8-hour running mean (MDA8), with many countries not even having any ozone limit values. A discussion and comparison of exceedances in the different ozone metrics, their locations and the seasonality of exceedances provides clues as to the regions that potentially have more serious ozone health implications. Present day ozone levels (2010-2014) have been compared globally and show definite geographical differences (see Figure showing the annual 4th highest MDA8 for present day ozone for all non-urban stations). Higher ozone levels are seen in western compared to eastern US, and between southern and northern Europe, and generally higher levels in east Asia. The metrics reflective of peak concentrations show highest values in western North America, southern Europe and East Asia. A number of the metrics show similar distributions of North-South gradients, most prominent across Europe and Japan. The interquartile range of the regional ozone metrics was largest in East Asia, higher for urban stations in Asia but higher for non-urban stations in Europe and North America. With over 3000 monitoring stations included in this analysis and despite the higher densities of monitoring stations in Europe, north America and East Asia, this study provides the most comprehensive global picture to date of surface ozone levels in terms of health-relevant metrics.
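
    To make the headline metric concrete, the sketch below computes MDA8 (the daily maximum of the 8-hour running mean of hourly ozone) from a synthetic hourly series. Regulatory definitions differ slightly in which 8-hour windows count toward a day, so the window handling here is a simplifying assumption:

    import numpy as np

    rng = np.random.default_rng(3)
    hours = np.arange(72)                      # three synthetic days of hourly ozone
    ozone = 40 + 20 * np.sin(2 * np.pi * (hours % 24 - 8) / 24) + rng.normal(0, 3, 72)

    def mda8(hourly):
        """Daily max of 8-h running means (ppb); assumes complete 24-h days."""
        run8 = np.convolve(hourly, np.ones(8) / 8, mode="valid")  # means of h..h+7
        days = []
        for d in range(len(hourly) // 24):
            # Keep 8-h windows that start within day d and fit in the record
            start, stop = d * 24, min(d * 24 + 24, len(run8))
            days.append(run8[start:stop].max())
        return np.array(days)

    print("MDA8 per day (ppb):", mda8(ozone).round(1))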

  9. Metrical Phonology: German Sound System.

    Science.gov (United States)

    Tice, Bradley S.

    Metrical phonology, a linguistic process of phonological stress assessment and diagrammatic simplification of sentence and word stress, is discussed as it is found in the English and German languages. The objective is to promote use of metrical phonology as a tool for enhancing instruction in stress patterns in words and sentences, particularly in…

  10. The assessment of quality of products using selected quality instruments

    Directory of Open Access Journals (Sweden)

    Edyta Kardas

    2016-03-01

    Full Text Available The quality parameters of products should be controlled at every stage of the production process, since this allows problems to be detected even in the initial stages of production and their causes to be removed during manufacturing. Final inspection of products is intended to capture non-conforming products before they reach customers. The results of such inspections should be analysed continuously. Such analysis can help to detect the most common problems, determine dependencies, or identify the causes of such situations. Many different instruments that support process improvement can be used for this kind of analysis. The paper presents the possibility of using selected tools to support the analysis and assessment of product quality at different stages of the production process. A quality analysis of an exemplary product using selected quality methods and tools is carried out. The component studied was a metal sleeve that forms part of an electronic control subassembly of an ABS anti-lock braking system.

  11. A multi-metric assessment of environmental contaminant exposure and effects in an urbanized reach of the Charles River near Watertown, Massachusetts

    Science.gov (United States)

    Smith, Stephen B.; Anderson, Patrick J.; Baumann, Paul C.; DeWeese, Lawrence R.; Goodbred, Steven L.; Coyle, James J.; Smith, David S.

    2012-01-01

    The Charles River Project provided an opportunity to simultaneously deploy a combination of biomonitoring techniques routinely used by the U.S. Geological Survey National Water Quality Assessment Program, the Biomonitoring of Environmental Status and Trends Project, and the Contaminant Biology Program at an urban site suspected to be contaminated with polycyclic aromatic hydrocarbons. In addition to these standardized methods, additional techniques were used to further elucidate contaminant exposure and potential impacts of exposure on biota. The purpose of the study was to generate a comprehensive, multi-metric data set to support assessment of contaminant exposure and effects at the site. Furthermore, the data set could be assessed to determine the relative performance of the standardized method suites typically used by the National Water Quality Assessment Program and the Biomonitoring of Environmental Status and Trends Project, as well as the additional biomonitoring methods used in the study to demonstrate ecological effects of contaminant exposure. The Contaminant Effects Workgroup, an advisory committee of the U.S. Geological Survey/Contaminant Biology Program, identified polycyclic aromatic hydrocarbons as the contaminant class of greatest concern in urban streams of all sizes. The reach of the Charles River near Watertown, Massachusetts, was selected as the site for this study based on the suspected presence of polycyclic aromatic hydrocarbon contamination and the presence of common carp (Cyprinus carpio), largemouth bass (Micropterus salmoides), and white sucker (Catostomus commersoni). All of these fish have extensive contaminant-exposure profiles related to polycyclic aromatic hydrocarbons and other environmental contaminants. This project represented a collaboration of universities, Department of the Interior bureaus including multiple components of the USGS (Biological Resources Discipline and Water Resources Discipline Science Centers, the

  12. Quality Assessment of Domesticated Animal Genome Assemblies

    DEFF Research Database (Denmark)

    Seemann, Stefan E; Anthon, Christian; Palasca, Oana

    2015-01-01

    …affected by the lack of genomic sequence. Herein, we quantify the quality of the genome assemblies of 20 domesticated animals and related species by assessing a range of measurable parameters, and we show that there is a positive correlation between the fraction of mappable reads from RNAseq data... domesticated animal genomes still need to be sequenced deeper in order to produce high-quality assemblies. In the meanwhile, ironically, the extent to which RNAseq and other next-generation data are produced frequently far exceeds that of the genomic sequence. Furthermore, basic comparative analysis is often

  13. Drinking Water Quality Assessment in Tetova Region

    OpenAIRE

    B. H. Durmishi; M. Ismaili; A. Shabani; Sh. Abduli

    2012-01-01

    Problem statement: The quality of drinking water is a crucial factor for human health. The objective of this study was the assessment of the physical, chemical and bacteriological quality of the drinking water in the city of Tetova and several surrounding villages in the Republic of Macedonia for the period May 2007-2008. The sampling and analysis were conducted in accordance with State Regulation No. 57/2004, which is in compliance with EU and WHO standards. A total of 415 samples were taken for ...

  14. Evaluating how variants of floristic quality assessment indicate wetland condition.

    Science.gov (United States)

    Kutcher, Thomas E; Forrester, Graham E

    2018-03-28

    Biological indicators are useful tools for the assessment of ecosystem condition. Multi-metric and multi-taxa indicators may respond to a broader range of disturbances than simpler indicators, but their complexity can make them difficult to interpret, which is critical to indicator utility for ecosystem management. Floristic Quality Assessment (FQA) is an example of a biological assessment approach that has been widely tested for indicating freshwater wetland condition, but less attention has been given to clarifying the factors controlling its response. FQA quantifies the aggregate of vascular plant species tolerance to habitat degradation (conservatism), and model variants have incorporated species richness, abundance, and indigenity (native or non-native). To assess bias, we tested FQA variants in open-canopy freshwater wetlands against three independent reference measures, using practical vegetation sampling methods. FQA variants incorporating species richness did not correlate with our reference measures and were influenced by wetland size and hydrogeomorphic class. In contrast, FQA variants lacking measures of species richness responded linearly to reference measures quantifying individual and aggregate stresses, suggesting a broad response to cumulative degradation. FQA variants incorporating non-native species, and a variant additionally incorporating relative species abundance, improved performance over using only native species. We relate our empirical findings to ecological theory to clarify the functional properties and implications of the FQA variants. Our analysis indicates that (1) aggregate conservatism reliably declines with increased disturbance; (2) species richness has varying relationships with disturbance and increases with site area, confounding FQA response; and (3) non-native species signal human disturbance. We propose that incorporating species abundance can improve FQA site-level relevance with little extra sampling effort. Using our
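
    For concreteness, a minimal sketch of the two families of FQA variants contrasted above: mean conservatism alone versus the classic richness-weighted index FQI = mean(C) * sqrt(S). The species list and coefficient-of-conservatism values are hypothetical:

    import numpy as np

    # Hypothetical coefficients of conservatism (0-10; 0 = non-native)
    c_values = {"Typha latifolia": 1, "Carex stricta": 7,
                "Osmunda regalis": 8, "Phragmites australis": 0}

    site_species = ["Typha latifolia", "Carex stricta",
                    "Osmunda regalis", "Phragmites australis"]
    c = np.array([c_values[s] for s in site_species], dtype=float)

    mean_c = c.mean()                    # richness-free variant
    fqi = mean_c * np.sqrt(len(c))       # variant incorporating richness S
    print(f"mean C = {mean_c:.2f}, FQI = {fqi:.2f}")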

  15. Quality assurance in diagnostic radiology - assessing the fluoroscopic image quality

    International Nuclear Information System (INIS)

    Tabakov, S.

    1995-01-01

    The X-ray fluoroscopic image has a considerably lower resolution than the radiographic one. This requires a careful quality control aiming at optimal use of the fluoroscopic equipment. The basic procedures for image quality assessment of Image Intensifier/TV image are described. Test objects from Leeds University (UK) are used as prototypes. The results from examining 50 various fluoroscopic devices are shown. Their limiting spatial resolution varies between 0.8 lp/mm (at maximum II field size) and 2.24 lp/mm (at minimum field size). The mean value of the limiting spatial resolution for a 23 cm Image Intensifier is about 1.24 lp/mm. The mean limits of variation of the contrast/detail diagram for various fluoroscopic equipment are graphically expressed. 14 refs., 1 fig. (author)

  16. An assessment of tropical cyclone representation in a regional reanalysis and a shape metric methodology for studying the evolving precipitation structure prior to and during landfall

    Science.gov (United States)

    Zick, Stephanie E.

    Tropical cyclone (TC) precipitation is intricately organized with multiple scales of phenomena collaborating to harness the massive energy required to support these storms. During landfall, a TC leaves the tropical oceanic environment and encounters a wide range of continental air mass regimes. Although evolving precipitation patterns are qualitatively observed in these storms during landfall, the timing and spatial variability of these structural changes have yet to be quantified or documented. This dissertation integrates meteorological and geographic concepts to explore the representation and evolution of TC rainfall at the crucial time of landfall when coastal and inland communities and environments are most vulnerable to TC-associated flooding. This research begins with a two-part assessment of TC representation in the North American Regional Reanalysis (NARR), which is selected for its documented skill in characterizing North American precipitation patterns. Due to the sparsely available data over the tropical oceans, spatial biases exist in both global and regional reanalysis datasets. However, within the NARR the introduction of over-ocean precipitation assimilation in 2004 leads to an improved analysis of TC warm core structure, which results in an improved precipitation forecast. Collectively, these studies highlight the need for sophisticated observational and data assimilation systems. Specifically, the development of new, novel precipitation assimilation techniques will be valuable to the construction of better-quality forecasting tools with more authentic TC representation. In the third study, the fundamental geographic concept of compactness is utilized to construct a shape metric methodology for investigating (a) the overall evolution of and (b) the spatiotemporal positions of significant changes to synoptic-scale precipitation structure. These metrics encompass the characteristic geometries of TCs moving into the mid-latitudes: asymmetry

  17. Modeling LCD Displays with Local Backlight Dimming for Image Quality Assessment

    DEFF Research Database (Denmark)

    Korhonen, Jari; Burini, Nino; Forchhammer, Søren

    2011-01-01

    …for evaluating the signal quality distortion related directly to digital signal processing, such as compression. However, the physical characteristics of the display device also have a significant impact on the overall perception. In order to facilitate image quality assessment on modern liquid crystal displays (LCD) using light emitting diode (LED) backlights with local dimming, we present the essential considerations and guidelines for modeling the characteristics of displays with high dynamic range (HDR) and locally adjustable backlight segments. The representation of the image generated by the model can be assessed using traditional objective metrics, and therefore the proposed approach is useful for assessing the performance of different backlight dimming algorithms in terms of resulting quality and power consumption in a simulated environment. We have implemented the proposed model in C++ and compared…

  18. Sustainability Metrics: The San Luis Basin Project

    Science.gov (United States)

    Sustainability is about promoting humanly desirable dynamic regimes of the environment. Metrics: ecological footprint, net regional product, exergy, emergy, and Fisher Information. Adaptive management: (1) metrics assess problem, (2) specific problem identified, and (3) managemen...

  19. Air quality assessment in Salim Slam Tunnel

    International Nuclear Information System (INIS)

    El-Fadel, M.; Hashisho, Z.; Saikaly, P.

    1999-01-01

    Full text. Vehicle emissions constitute a serious occupational and environmental hazard, particularly in confined spaces such as tunnels and underground parking garages. These emissions, at elevated concentrations, can cause adverse health effects, which range from nausea and eye irritation to mutagenicity, carcinogenicity and even death. This paper presents an environmental air quality assessment in a tunnel located in a highly congested urban area. For this purpose, air samples were collected and analyzed for the presence of primary air pollutants, priority metals, and volatile organic carbons. Air quality modeling was conducted to simulate variations of pollutant concentrations in the tunnel under worst-case scenarios, including traffic congestion and no ventilation. Field measurements and mathematical simulation results were used to develop a strategy for proper air quality management in tunnels

  20. Repeatability of FDG PET/CT metrics assessed in free breathing and deep inspiration breath hold in lung cancer patients.

    Science.gov (United States)

    Nygård, Lotte; Aznar, Marianne C; Fischer, Barbara M; Persson, Gitte F; Christensen, Charlotte B; Andersen, Flemming L; Josipovic, Mirjana; Langer, Seppo W; Kjær, Andreas; Vogelius, Ivan R; Bentzen, Søren M

    2018-01-01

    We measured the repeatability of FDG PET/CT uptake metrics when acquiring scans in free breathing (FB) conditions compared with deep inspiration breath hold (DIBH) for locally advanced lung cancer. Twenty patients were enrolled in this prospective study. Two FDG PET/CT scans per patient were conducted a few days apart and in two breathing conditions (FB and DIBH), resulting in four scans per patient. Up to four FDG PET-avid lesions per patient were contoured. The following FDG metrics were measured in all lesions and in all four scans: standardized uptake value (SUV) metrics SUVpeak, SUVmax and SUVmean, metabolic tumor volume (MTV) and total lesion glycolysis (TLG), based on an isocontour of 50% of SUVmax. FDG PET-avid volumes were delineated by a nuclear medicine physician. The gross tumor volumes (GTV) were contoured on the corresponding CT scans. Nineteen patients were available for analysis. Test-retest standard deviations of FDG uptake metrics in FB/DIBH were: SUVpeak 16.2%/16.5%; SUVmax 18.2%/22.1%; SUVmean 18.3%/22.1%; TLG 32.4%/40.5%. DIBH compared to FB resulted in higher values, with mean differences in SUVmax of 12.6%, SUVpeak 4.4% and SUVmean 11.9%. MTV, TLG and GTV were all significantly smaller on day 1 in DIBH compared to FB. However, the differences between metrics under FB and DIBH were in all cases smaller than 1 SD of the day-to-day repeatability. FDG acquisition in DIBH does not have a clinically relevant impact on the uptake metrics and does not improve the test-retest repeatability of FDG uptake metrics in lung cancer patients.
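
    A sketch of one common test-retest repeatability summary for paired scans: the within-subject coefficient of variation (wCV) estimated from the relative differences between the two measurements, plus the derived repeatability coefficient. The values are synthetic, and this is a generic QIBA-style formulation rather than the paper's exact statistics:

    import numpy as np

    rng = np.random.default_rng(4)
    suv_day1 = rng.lognormal(mean=2.0, sigma=0.4, size=30)        # e.g. SUVmax
    suv_day2 = suv_day1 * rng.lognormal(mean=0.0, sigma=0.18, size=30)

    pair_mean = (suv_day1 + suv_day2) / 2
    rel_diff = (suv_day2 - suv_day1) / pair_mean
    wcv = rel_diff.std(ddof=1) / np.sqrt(2)        # within-subject CV
    print(f"test-retest wCV: {100 * wcv:.1f}%")
    # Repeatability coefficient: 95% limits for a difference between two scans
    print(f"RC: {100 * 1.96 * np.sqrt(2) * wcv:.1f}%")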

  1. Validation of Metrics for Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2008-01-01

    Full Text Available This paper describes new concepts for the validation of collaborative systems metrics. It defines the quality characteristics of collaborative systems, proposes a metric to estimate the quality level of collaborative systems, and reports measurements of collaborative systems quality performed using specially designed software.

  2. Validation of Metrics for Collaborative Systems

    OpenAIRE

    Ion IVAN; Cristian CIUREA

    2008-01-01

    This paper describes new concepts for the validation of collaborative systems metrics. It defines the quality characteristics of collaborative systems, proposes a metric to estimate the quality level of collaborative systems, and reports measurements of collaborative systems quality performed using specially designed software.

  3. Quality of assessments within reach: Review study of research and results of the quality of assessments

    NARCIS (Netherlands)

    Maassen, Nathalie Anthonia Maria; Hopster-den Otter, Dorothea; Wools, S.; Hemker, B.T.; Straetmans, G.J.J.M.; Eggen, Theodorus Johannes Hendrikus Maria

    2015-01-01

    Educational tests and assessments are important instruments to measure a student’s knowledge and skills. The question that is addressed in this review study is: “which aspects are currently considered as important to the quality of educational assessments?” Furthermore, it is explored how this

  4. Using Data Mining for Wine Quality Assessment

    Science.gov (United States)

    Cortez, Paulo; Teixeira, Juliana; Cerdeira, António; Almeida, Fernando; Matos, Telmo; Reis, José

    Certification and quality assessment are crucial issues within the wine industry. Currently, wine quality is mostly assessed by physicochemical (e.g., alcohol levels) and sensory (e.g., human expert evaluation) tests. In this paper, we propose a data mining approach to predict wine preferences that is based on easily available analytical tests at the certification step. A large dataset of white vinho verde samples from the Minho region of Portugal is considered. Wine quality is modeled under a regression approach, which preserves the order of the grades. Explanatory knowledge is given in terms of a sensitivity analysis, which measures the response changes when a given input variable is varied through its domain. Three regression techniques were applied, under a computationally efficient procedure that performs simultaneous variable and model selection and that is guided by the sensitivity analysis. The support vector machine achieved promising results, outperforming the multiple regression and neural network methods. Such a model is useful for understanding how physicochemical tests affect the sensory preferences. Moreover, it can support the wine expert evaluations and ultimately improve the production.
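
    The modeling pipeline described above can be sketched compactly. The snippet below is an illustrative sketch only: it assumes the publicly available winequality-white.csv file from the UCI repository (semicolon-separated) and uses illustrative SVR hyperparameters; it does not reproduce the paper's simultaneous variable and model selection guided by sensitivity analysis.

    ```python
    import pandas as pd
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Assumed input: the UCI white wine quality file (semicolon-separated)
    df = pd.read_csv("winequality-white.csv", sep=";")
    X, y = df.drop(columns="quality"), df["quality"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

    # Regression keeps the ordinal structure of the quality grades
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=3.0, gamma="scale"))
    model.fit(X_tr, y_tr)
    print("MAD:", mean_absolute_error(y_te, model.predict(X_te)))
    ```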

  5. A comparison of metrics for assessing state-of-the-art climate models and implications for probabilistic projections of climate change

    Science.gov (United States)

    Ring, Christoph; Pollinger, Felix; Kaspar-Ott, Irena; Hertig, Elke; Jacobeit, Jucundus; Paeth, Heiko

    2018-03-01

    A major task of climate science is to provide reliable projections of future climate change. To enable more solid statements and to decrease the range of uncertainty, global general circulation models and regional climate models are evaluated based on a 2 × 2 contingency table approach to generate model weights. These weights are compared among different methodologies, and their impact on probabilistic projections of temperature and precipitation changes is investigated. Simulated seasonal precipitation and temperature, for both 50-year trends and climatological means, are assessed at two spatial scales: in seven study regions around the globe and in eight sub-regions of the Mediterranean area. Overall, 24 models of phase 3 and 38 models of phase 5 of the Coupled Model Intercomparison Project, comprising altogether 159 transient simulations of precipitation and 119 of temperature from four emissions scenarios, are evaluated against the ERA-20C reanalysis over the 20th century. The results show high conformity with previous model evaluation studies. The metrics reveal that the precipitation mean and both the temperature mean and trend agree well with the reference dataset, and they indicate improvement for the more recent ensemble mean, especially for temperature. The method is highly transferable to a variety of further applications in climate science. Overall, there are regional differences in simulation quality; however, these are less pronounced than those between the results for 50-year means and trends. The trend results are suitable for assigning weighting factors to climate models. Yet the implications for probabilistic climate projections are strictly dependent on the region and season.
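
    The abstract does not spell out how the 2 × 2 contingency table is turned into a weight, so the following sketch is only one plausible reading, stated as an assumption: each model's simulated regional trend signs are cross-tabulated against the reference trend signs, and the weight is the fraction of agreeing cells.

    ```python
    import numpy as np

    def contingency_weight(sim_trends: np.ndarray, ref_trends: np.ndarray) -> float:
        """Weight a model by sign agreement of its regional trends with a reference.

        Builds the 2x2 table (simulated sign x reference sign) and returns the
        fraction of matching cells -- one plausible weighting rule; the paper's
        exact construction may differ.
        """
        sim_pos = sim_trends > 0
        ref_pos = ref_trends > 0
        hits = np.sum(sim_pos & ref_pos)           # both positive
        correct_neg = np.sum(~sim_pos & ~ref_pos)  # both non-positive
        return (hits + correct_neg) / len(ref_trends)

    # Example: 50-year precipitation trends in seven study regions
    ref = np.array([0.3, -0.1, 0.2, -0.4, 0.1, 0.0, -0.2])
    sim = np.array([0.2, -0.2, -0.1, -0.3, 0.2, 0.1, -0.1])
    print(contingency_weight(sim, ref))  # -> 5/7 agreement
    ```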

  6. Weighted-MSE based on saliency map for assessing video quality of H.264 video streams

    Science.gov (United States)

    Boujut, H.; Benois-Pineau, J.; Hadar, O.; Ahmed, T.; Bonnet, P.

    2011-01-01

    The human visual system is very complex and has been studied for many years, specifically for purposes of efficient encoding of visual content, e.g. video content from digital TV. There is physiological and psychological evidence indicating that viewers do not pay equal attention to all exposed visual information, but only focus on certain areas known as focus of attention (FOA) or saliency regions. In this work, we propose a novel saliency-based objective quality assessment metric for assessing the perceptual quality of decoded video sequences affected by transmission errors and packet losses. The proposed method weights the Mean Square Error (MSE) at each pixel according to the calculated saliency map, yielding a Weighted-MSE (WMSE). Our method was validated through subjective quality experiments.
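
    The weighting scheme described above reduces to a per-pixel weighted average of squared errors. Below is a minimal sketch, assuming grayscale frames as numpy arrays and a saliency map with values in [0, 1]; the paper's exact normalization and pooling over frames may differ.

    ```python
    import numpy as np

    def wmse(ref: np.ndarray, dist: np.ndarray, saliency: np.ndarray) -> float:
        """Saliency-weighted MSE: squared errors weighted per pixel by the
        saliency map and normalized by the total saliency mass."""
        w = saliency.astype(np.float64)
        err = (ref.astype(np.float64) - dist.astype(np.float64)) ** 2
        return float((w * err).sum() / w.sum())

    # Example with synthetic frames and a uniform saliency map
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64)).astype(np.float64)
    dist = ref + rng.normal(0, 4, ref.shape)
    print(wmse(ref, dist, np.ones_like(ref)))  # equals plain MSE in this case
    ```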

  7. Sequencing quality assessment tools to enable data-driven informatics for high throughput genomics

    Directory of Open Access Journals (Sweden)

    Richard Mark Leggett

    2013-12-01

    Full Text Available The processes of quality assessment and control are an active area of research at The Genome Analysis Centre (TGAC. Unlike other sequencing centres that often concentrate on a certain species or technology, TGAC applies expertise in genomics and bioinformatics to a wide range of projects, often requiring bespoke wet lab and in silico workflows. TGAC is fortunate to have access to a diverse range of sequencing and analysis platforms, and we are at the forefront of investigations into library quality and sequence data assessment. We have developed and implemented a number of algorithms, tools, pipelines and packages to ascertain, store, and expose quality metrics across a number of next-generation sequencing platforms, allowing rapid and in-depth cross-platform QC bioinformatics. In this review, we describe these tools as a vehicle for data-driven informatics, offering the potential to provide richer context for downstream analysis and to inform experimental design.

  8. Assessment of the quality of educational portals

    Directory of Open Access Journals (Sweden)

    R. G. Bolbakov

    2017-01-01

    Full Text Available The article describes the results of theoretical and experimental studies on evaluating the quality of educational information placed on information and educational portals. The methodology allows comparing not only portals, but also the results of training by exam scores and test scores. The methodological basis of the assessment is the cognitive approach and the negentropic approach. The article compares entropy and negentropy, and on the basis of this comparison the authors propose a negentropic approach to assessing the quality of educational resources obtained as a result of information retrieval. The search results are evaluated by cognitive and perceptual scores. These estimates are introduced into the entropy formula and converted into the formula of negentropy. The negentropic approach serves as the basis for calculating the statistical amount of information obtained as a result of information retrieval. The cognitive approach serves as a basis for assessing the qualitative characteristics of educational information, such as visibility, perceptibility and interpretability. Open information portals are the source of educational resources. The article shows that modern information portals are often clogged with unreliable or unnecessary information, which makes it difficult to find relevant educational information. In contrast to the widespread methods of assessing only the relevance of information retrieval, this article differentiates the notion of relevance and introduces three qualitatively different notions: formal, semantic and perceptual relevance. The article also introduces new additional characteristics of the quality of information search, the coefficient of cognition and the coefficient of perception. These coefficients are introduced into the formula for estimating entropy to obtain a cognitive-entropy formula. As a result, a new method for assessing the content of

  9. Quality Assessment of Landsat Surface Reflectance Products Using MODIS Data

    Science.gov (United States)

    Feng, Min; Huang, Chengquan; Channan, Saurabh; Vermote, Eric; Masek, Jeffrey G.; Townshend, John R.

    2012-01-01

    Surface reflectance adjusted for atmospheric effects is a primary input for land cover change detection and for developing many higher level surface geophysical parameters. With the development of automated atmospheric correction algorithms, it is now feasible to produce large quantities of surface reflectance products using Landsat images. Validation of these products requires in situ measurements, which either do not exist or are difficult to obtain for most Landsat images. The surface reflectance products derived using data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), however, have been validated more comprehensively. Because MODIS on the Terra platform and Landsat 7 follow the same orbit only half an hour apart, and each of the 6 Landsat spectral bands overlaps with a MODIS band, good agreement between MODIS and Landsat surface reflectance values can be considered an indicator of the reliability of the Landsat products, while disagreements may suggest potential quality problems that need to be further investigated. Here we develop a system called the Landsat-MODIS Consistency Checking System (LMCCS). This system automatically matches Landsat data with MODIS observations acquired on the same date over the same locations and uses them to calculate a set of agreement metrics. To maximize its portability, Java and open-source libraries were used in developing this system, and object-oriented programming (OOP) principles were followed to make it more flexible for future expansion. As a highly automated system designed to run as a stand-alone package or as a component of other Landsat data processing systems, this system can be used to assess the quality of essentially every Landsat surface reflectance image where spatially and temporally matching MODIS data are available. The effectiveness of this system was demonstrated by using it to assess preliminary surface reflectance products derived using the Global Land Survey (GLS) Landsat

  10. Assessment of daylight quality in simple rooms

    DEFF Research Database (Denmark)

    Johnsen, Kjeld; Dubois, Marie-Claude; Sørensen, Karl Grau

    The present report documents the results of a study on daylight conditions in simple rooms of residential buildings. The overall objective of the study was to develop a basis for a method for the assessment of daylight quality in a room with simple geometry and window configurations. As a tool ... in daylighting conditions for a number of lighting parameters. The results gave clear indications of, for instance, which room would be the brightest, under which conditions glare might be a problem, and which type of window would yield the greatest luminous variation (or visual interest), etc.

  11. Service Quality and Process Maturity Assessment

    Directory of Open Access Journals (Sweden)

    Serek Radomir

    2013-12-01

    Full Text Available This article deals with service quality and the methods for its measurement and improvement to reach so-called service excellence. Besides older methods such as SERVQUAL and SERVPERF, capability maturity models are briefly described, on the basis of which the authors' own methodology is developed and used for process maturity assessment in organizations providing technical services. The method is described and accompanied by illustrated examples. The functionality of the method is verified by examining the correlation between service employee satisfaction and average process maturity in a service organization. The results seem quite promising and open an arena for further studies.

  12. DAF: differential ACE filtering image quality assessment by automatic color equalization

    Science.gov (United States)

    Ouni, S.; Chambah, M.; Saint-Jean, C.; Rizzi, A.

    2008-01-01

    Ideally, a quality assessment system would perceive and measure image or video impairments just like a human being. But in reality, objective quality metrics do not necessarily correlate well with perceived quality [1]. Moreover, some measures assume that there exists a reference in the form of an "original" to compare to, which prevents their use in the digital restoration field, where often there is no reference to compare to. That is why subjective evaluation has been the most used and most reliable approach up to now. But subjective assessment is expensive and time consuming, and hence does not meet economic requirements [2,3]. Thus, reliable automatic methods for visual quality assessment are needed in the field of digital film restoration. The ACE method, for Automatic Color Equalization [4,6], is an algorithm for unsupervised enhancement of digital images. It is based on a new computational approach that tries to model the perceptual response of our vision system, merging the Gray World and White Patch equalization mechanisms in a global and local way. Like our visual system, ACE is able to adapt to widely varying lighting conditions and to extract visual information from the environment efficaciously. Moreover, ACE can be run in an unsupervised manner. Hence it is very useful as a digital film restoration tool, since no a priori information is available. In this paper we deepen the investigation of using the ACE algorithm as a basis for reference-free image quality evaluation. This new metric, called DAF for Differential ACE Filtering [7], is an objective quality measure that can be used in several image restoration and image quality assessment systems. In this paper, we compare, on different image databases, the results obtained with DAF and with subjective image quality assessments (Mean Opinion Score, MOS, as a measure of perceived image quality). We also study the correlation between the objective measure and MOS. In our experiments, we have used for the first image

  13. Assessment of sleep quality in powernapping

    DEFF Research Database (Denmark)

    Kooravand Takht Sabzy, Bashaer; Thomsen, Carsten E

    2011-01-01

    The purpose of this study is to assess the Sleep Quality (SQ) in powernapping. The contributing factors for SQ assessment are time of Sleep Onset (SO), Sleep Length (SL), Sleep Depth (SD), and detection of sleep events (K-complex (KC) and Sleep Spindle (SS)). Data from daytime naps for 10 subjects, 2 days each, including EEG and ECG, were recorded. The SD and sleep events were analyzed by applying spectral analysis. The SO time was detected by a combination of signal spectral analysis, Slow Rolling Eye Movement (SREM) detection, Heart Rate Variability (HRV) analysis and EEG segmentation using both Autocorrelation Function (ACF) and Crosscorrelation Function (CCF) methods. The EEG derivation FP1-FP2, filtered in a narrow band, was used as an alternative to EOG for SREM detection. The ACF and CCF segmentation methods were also applied for detection of sleep events. The ACF method detects segment boundaries...

  14. Assessment of American Heart Association's Ideal Cardiovascular Health Metrics Among Employees of a Large Healthcare Organization: The Baptist Health South Florida Employee Study.

    Science.gov (United States)

    Ogunmoroti, Oluseye; Younus, Adnan; Rouseff, Maribeth; Spatz, Erica S; Das, Sankalp; Parris, Don; Aneni, Ehimen; Holzwarth, Leah; Guzman, Henry; Tran, Thinh; Roberson, Lara; Ali, Shozab S; Agatston, Arthur; Maziak, Wasim; Feldman, Theodore; Veledar, Emir; Nasir, Khurram

    2015-07-01

    Healthcare organizations and their employees are critical role models for healthy living in their communities. The American Heart Association (AHA) 2020 impact goal provides a national framework that can be used to track the success of employee wellness programs with a focus on improving cardiovascular (CV) health. This study aimed to assess the CV health of the employees of Baptist Health South Florida (BHSF), a large nonprofit healthcare organization. The AHA's 7 CV health metrics (diet, physical activity, smoking, body mass index, blood pressure, total cholesterol, and blood glucose), each categorized as ideal, intermediate, or poor, were estimated among employees of BHSF participating voluntarily in an annual health risk assessment (HRA) and wellness fair. Age and gender differences were analyzed using the χ² test. The sample consisted of 9364 employees who participated in the 2014 annual HRA and wellness fair (mean age [standard deviation], 43 [12] years; 74% women). Sixty (1%) individuals met the AHA's definition of ideal CV health. Women were more likely than men to meet the ideal criteria for more than 5 CV health metrics. The proportion of participants meeting the ideal criteria for more than 5 CV health metrics decreased with age. A combination of HRAs and wellness examinations can provide useful insights into the cardiovascular health status of an employee population. Future tracking of the CV health metrics will provide critical feedback on the impact of system-wide wellness efforts, as well as identifying proactive programs to assist in making substantial progress toward the AHA 2020 impact goal. © 2015 Wiley Periodicals, Inc.

  15. QUALIMETRIC QUALITY ASSESSMENT OF IODINE SUPPLEMENTS

    Directory of Open Access Journals (Sweden)

    F. S. Bazrova

    2015-01-01

    Full Text Available The article discusses new iodine-containing supplements (ID) derived from organic media: collagenous animal protein (pork rind, carpatina and collagen) and protein concentrates of the SCANGEN and PROMIL C95 brands. It is shown that the use of these proteins as carriers of iodine is due to their high content of the amino acids glycine and alanine, which correlates with the degree of binding of iodine. In addition to their special focus, the new additives improve the rheological properties of foods, including texture, appearance and functional properties. To assess the quality of ID and select the preferred option, a qualimetric assessment and a systematic approach are proposed: each ID is considered as a system, its elements are identified, the principles of its construction and the requirements imposed on it are justified, and a general decision tree is built. For the construction of a complex criterion for assessing the quality of ID, a formalization procedure is proposed based on the selection and evaluation of single indicators, the determination of the laws of their change depending on dose, duration and temperature of exposure, and functional efficiency. For comparative evaluation of single indicators and calculation of group indicators, all of them were reduced to a single dimension by introducing dimensionless coefficients adequately describing the analyzed indicators. The article presents the calculated values of single and group indicators characterizing the technological properties of ID: the degree of binding of iodine, the rate of binding of iodine, heat losses of iodine, and the basic functional and technological properties of meat stuffing systems (water-binding, moisture-holding and emulsifying capacity, and emulsion stability) obtained by introducing the studied ID into the stuffing systems. At the final stage, the best ID is selected on the basis of an assessment of the group indicators.

  16. Assessing Assessment Quality: Criteria for Quality Assurance in Design of (Peer) Assessment for Learning--A Review of Research Studies

    Science.gov (United States)

    Tillema, Harm; Leenknecht, Martijn; Segers, Mien

    2011-01-01

    The interest in "assessment for learning" (AfL) has resulted in a search for new modes of assessment that are better aligned to students' learning how to learn. However, with the introduction of new assessment tools, also questions arose with respect to the quality of its measurement. On the one hand, the appropriateness of traditional,…

  17. Coverage and quality: A comparison of Web of Science and Scopus databases for reporting faculty nursing publication metrics.

    Science.gov (United States)

    Powell, Kimberly R; Peterson, Shenita R

    Web of Science and Scopus are the leading databases of scholarly impact. Recent studies outside the field of nursing report differences in their journal coverage and quality. This study is a comparative analysis of the reported impact of nursing publications. Journal coverage by each database for the field of nursing was compared. Additionally, publications by 2014 nursing faculty were collected in both databases and compared for overall coverage and reported quality, as modeled by SCImago Journal Rank, peer review status, and MEDLINE inclusion. Individual author impact, modeled by the h-index, was calculated in each database for comparison. Scopus offered significantly higher journal coverage. For 2014 faculty publications, 100% of journals were found in Scopus, while Web of Science offered 82%. No significant difference was found in the quality of the covered journals. Author h-indexes were found to be higher in Scopus. When reporting faculty publications and scholarly impact, academic nursing programs may be better represented by Scopus, without compromising journal quality. Programs with strong interdisciplinary work should examine all areas of strength to ensure appropriate coverage. Copyright © 2017 Elsevier Inc. All rights reserved.
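
    The h-index used in this comparison has a simple definition: the largest h such that the author has h papers with at least h citations each. A minimal sketch (function and variable names are illustrative):

    ```python
    def h_index(citations: list[int]) -> int:
        """Largest h such that h papers have >= h citations each."""
        cites = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(cites, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    # The same author can score differently per database because each
    # database indexes a different set of citing documents.
    print(h_index([10, 8, 5, 4, 3]))  # -> 4
    ```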

  18. Quality Markers in Cardiology. Main Markers to Measure Quality of Results (Outcomes) and Quality Measures Related to Better Results in Clinical Practice (Performance Metrics). INCARDIO (Indicadores de Calidad en Unidades Asistenciales del Área del Corazón): A SEC/SECTCV Consensus Position Paper.

    Science.gov (United States)

    López-Sendón, José; González-Juanatey, José Ramón; Pinto, Fausto; Cuenca Castillo, José; Badimón, Lina; Dalmau, Regina; González Torrecilla, Esteban; López-Mínguez, José Ramón; Maceira, Alicia M; Pascual-Figal, Domingo; Pomar Moya-Prats, José Luis; Sionis, Alessandro; Zamorano, José Luis

    2015-11-01

    Cardiology practice requires complex organization that impacts overall outcomes and may differ substantially among hospitals and communities. The aim of this consensus document is to define quality markers in cardiology, including markers to measure the quality of results (outcomes metrics) and quality measures related to better results in clinical practice (performance metrics). The document is mainly intended for the Spanish health care system and may serve as a basis for similar documents in other countries. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  19. Water-quality impact assessment for hydropower

    International Nuclear Information System (INIS)

    Daniil, E.I.; Gulliver, J.; Thene, J.R.

    1991-01-01

    A methodology to assess the impact of a hydropower facility on downstream water quality is described. Negative impacts can result from the substitution of discharges aerated over a spillway with minimally aerated turbine discharges that are often withdrawn from lower reservoir levels, where dissolved oxygen (DO) is typically low. Three case studies illustrate the proposed method and problems that can be encountered. Historic data are used to establish the probability of low-dissolved-oxygen occurrences. Synoptic surveys, combined with downstream monitoring, give an overall picture of the water-quality dynamics in the river and the reservoir. Spillway aeration is determined through measurements and adjusted for temperature. Theoretical computations of selective withdrawal are sensitive to boundary conditions, such as the location of the outlet relative to the reservoir bottom, but withdrawal from the different layers is estimated from measured upstream and downstream temperatures and dissolved-oxygen profiles. Based on field measurements, the downstream water quality under hydropower operation is predicted. Improving selective withdrawal characteristics or diverting part of the flow over the spillway provided cost-effective mitigation solutions for small hydropower facilities (less than 15 MW) because of the low capital investment required.

  20. Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...

  1. Full-reference quality assessment of stereoscopic images by learning binocular receptive field properties.

    Science.gov (United States)

    Shao, Feng; Li, Kemeng; Lin, Weisi; Jiang, Gangyi; Yu, Mei; Dai, Qionghai

    2015-10-01

    Quality assessment of 3D images encounters more challenges than its 2D counterpart. Directly applying 2D image quality metrics is not the solution. In this paper, we propose a new full-reference quality assessment method for stereoscopic images that learns binocular receptive field properties so as to be more in line with human visual perception. To be more specific, in the training phase, we learn a multiscale dictionary from the training database, so that the latent structure of images can be represented as a set of basis vectors. In the quality estimation phase, we compute a sparse feature similarity index based on the estimated sparse coefficient vectors, considering their phase difference and amplitude difference, and a global luminance similarity index that accounts for luminance changes. The final quality score is obtained by incorporating binocular combination based on sparse energy and sparse complexity. Experimental results on five public 3D image quality assessment databases demonstrate that, in comparison with the most related existing methods, the devised algorithm achieves high consistency with subjective assessment.

  2. Towards web documents quality assessment for digital humanities scholars

    NARCIS (Netherlands)

    Ceolin, D.; Noordegraaf, Julia; Aroyo, L.M.; van Son, C.M.; Nejdl, Wolfgang; Hall, Wendy; Parigi, Paolo; Staab, Steffen

    2016-01-01

    We present a framework for assessing the quality of Web documents, and a baseline of three quality dimensions: trustworthiness, objectivity and basic scholarly quality. Assessing Web document quality is a "deep data" problem necessitating approaches to handle both data size and complexity.

  3. Quality assessment of orthodontic radiography in children.

    Science.gov (United States)

    Pakbaznejad Esmaeili, Elmira; Ekholm, Marja; Haukka, Jari; Waltimo-Sirén, Janna

    2016-02-01

    Numbers of dental panoramic tomographs (DPTs) and lateral cephalometric radiographs (LCRs) outweigh those of other radiographic examinations in 7- to 12-year-old Finns. Orthodontists and general practitioners (GPs) involved in orthodontics therefore bear the greatest responsibility for the exposure of children to ionising radiation and its risks. Against this background, the lack of reports on the quality of orthodontic radiography is surprising. The purpose of our study was to shed some light on the subject and draw the awareness of the orthodontic community to it by analyzing the quality of orthodontic radiography in the Oral Healthcare Department of the City of Helsinki, the capital of Finland. We analyzed 241 randomly selected patient files with DPTs and 118 patient files with LCRs of 7- to 12-year-olds for the indications of radiography, quality of referrals, status of interpretation, and number of failed radiographs. The majority of DPTs (95%) and all LCRs had been ordered for orthodontic reasons. Of the DPTs, 60% were ordered by GPs, and of the LCRs, 64% by orthodontists. The referrals were adequate for most DPTs (78%) and LCRs (73%), orthodontists being responsible for the majority of inadequate referrals. Of the DPTs, 80% had been interpreted. Of the LCRs, 65% lacked interpretation, but 67% had been analysed cephalometrically. Failed radiographs, leading to repeated exposure, were found in 2-3%. The quality assessment revealed that orthodontic radiography may not completely fulfill the criteria of good practice. Our results stress the further need for continuing education in radiation protection among both orthodontists and GPs involved in orthodontics. © The Author 2015. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  4. Assessment Quality in Tertiary Education: An Integrative Literature Review

    NARCIS (Netherlands)

    Gerritsen-van Leeuwenkamp, Karin; Joosten-ten Brinke, Desirée; Kester, Liesbeth

    2018-01-01

    In tertiary education, inferior assessment quality is a problem that has serious consequences for students, teachers, government, and society. A lack of a clear and overarching conceptualization of assessment quality can cause difficulties in guaranteeing assessment quality in practice. Thus, the

  5. The use of the kurtosis metric in the evaluation of occupational hearing loss in workers in China: Implications for hearing risk assessment

    Directory of Open Access Journals (Sweden)

    Robert I Davis

    2012-01-01

    Full Text Available This study examined: (1) the value of using the statistical metric, kurtosis [β(t)], along with an energy metric to determine the hazard to hearing from high-level industrial noise environments, and (2) the accuracy of the International Standard Organization (ISO-1999:1990) model for median noise-induced permanent threshold shift (NIPTS) estimates against actual recent epidemiological data obtained on 240 highly screened workers exposed to high-level industrial noise in China. A cross-sectional approach was used in this study. Shift-long temporal waveforms of the noise that workers were exposed to, for evaluation of noise exposures, and audiometric threshold measures were obtained on all selected subjects. The subjects were exposed to only one occupational noise exposure without the use of hearing protection devices. The results suggest that: (1) the kurtosis metric is an important variable in determining the hazards to hearing posed by a high-level industrial noise environment for hearing conservation purposes, i.e., the kurtosis differentiated between the hazardous effects produced by Gaussian and non-Gaussian noise environments; (2) the ISO-1999 predictive model does not accurately estimate the degree of median NIPTS incurred from high-level, high-kurtosis industrial noise; and (3) the inherent large variability in NIPTS among subjects emphasizes the need to develop and analyze a larger database of workers with well-documented exposures to better understand the effect of kurtosis on NIPTS incurred from high-level industrial noise exposures. A better understanding of the role of the kurtosis metric may lead to its incorporation into a new generation of more predictive hearing risk assessment for occupational noise exposure.

  6. The use of the kurtosis metric in the evaluation of occupational hearing loss in workers in China: implications for hearing risk assessment.

    Science.gov (United States)

    Davis, Robert I; Qiu, Wei; Heyer, Nicholas J; Zhao, Yiming; Qiuling Yang, M S; Li, Nan; Tao, Liyuan; Zhu, Liangliang; Zeng, Lin; Yao, Daohua

    2012-01-01

    This study examined: (1) the value of using the statistical metric, kurtosis [β(t)], along with an energy metric to determine the hazard to hearing from high-level industrial noise environments, and (2) the accuracy of the International Standard Organization (ISO-1999:1990) model for median noise-induced permanent threshold shift (NIPTS) estimates against actual recent epidemiological data obtained on 240 highly screened workers exposed to high-level industrial noise in China. A cross-sectional approach was used in this study. Shift-long temporal waveforms of the noise that workers were exposed to, for evaluation of noise exposures, and audiometric threshold measures were obtained on all selected subjects. The subjects were exposed to only one occupational noise exposure without the use of hearing protection devices. The results suggest that: (1) the kurtosis metric is an important variable in determining the hazards to hearing posed by a high-level industrial noise environment for hearing conservation purposes, i.e., the kurtosis differentiated between the hazardous effects produced by Gaussian and non-Gaussian noise environments; (2) the ISO-1999 predictive model does not accurately estimate the degree of median NIPTS incurred from high-level, high-kurtosis industrial noise; and (3) the inherent large variability in NIPTS among subjects emphasizes the need to develop and analyze a larger database of workers with well-documented exposures to better understand the effect of kurtosis on NIPTS incurred from high-level industrial noise exposures. A better understanding of the role of the kurtosis metric may lead to its incorporation into a new generation of more predictive hearing risk assessment for occupational noise exposure.
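
    The kurtosis metric β referenced in the two records above is the fourth standardized moment of the noise waveform; in this literature it is typically computed over consecutive time windows of the shift-long recording, whereas the sketch below, for brevity, computes a single whole-signal value on synthetic data with illustrative parameters:

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    def beta(waveform: np.ndarray) -> float:
        """Kurtosis beta: fourth standardized moment of the pressure signal.
        A Gaussian signal gives beta = 3; impulsive noise gives beta > 3."""
        return kurtosis(waveform, fisher=False)  # fisher=False -> Pearson form

    rng = np.random.default_rng(1)
    gaussian = rng.normal(size=100_000)
    impulsive = gaussian.copy()
    impulsive[rng.integers(0, impulsive.size, 200)] += rng.normal(0, 15, 200)
    print(f"Gaussian beta ~ {beta(gaussian):.2f}, "
          f"impulsive beta ~ {beta(impulsive):.2f}")
    ```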

  7. Computing and Interpreting Fisher Information as a Metric of Sustainability: Regime Changes in the United States Air Quality

    Science.gov (United States)

    As a key tool in information theory, Fisher Information has been used to explore the observable behavior of a variety of systems. In particular, recent work has demonstrated its ability to assess the dynamic order of real and model systems. However, in order to solidify the use o...

  8. National Assessment of Quality Programs in Emergency Medical Services.

    Science.gov (United States)

    Redlener, Michael; Olivieri, Patrick; Loo, George T; Munjal, Kevin; Hilton, Michael T; Potkin, Katya Trudeau; Levy, Michael; Rabrich, Jeffrey; Gunderson, Michael R; Braithwaite, Sabina A

    2018-01-01

    This study aims to understand the adoption of clinical quality measurement throughout the United States at the EMS agency level, the features of agencies that do participate in quality measurement, and the level of physician involvement. It also aims to identify barriers to implementing quality improvement initiatives in EMS. A 46-question survey was developed to gather agency-level data on current quality improvement practices and measurement. The survey was distributed nationally via State EMS Offices to EMS agencies nationwide using SurveyMonkey©. A convenience sample of respondents was enrolled between August and November, 2015. Univariate, bivariate and multiple logistic regression analyses were conducted to describe demographics and relationships between outcomes of interest and their covariates using SAS 9.3©. A total of 1,733 surveys were initiated and 1,060 surveys had complete or near-complete responses. This includes agencies from 45 states representing over 6.23 million 9-1-1 responses annually. In total, 70.5% (747) of agencies reported dedicated QI personnel, 62.5% (663) followed clinical metrics and 33.3% (353) participated in an outside quality or research program. Medical director hours varied; notably, 61.5% (649) of EMS agencies had quality measures, compared to fire-based agencies. Agencies in rural-only environments were less likely to follow clinical quality metrics (OR 0.47, CI 0.31-0.72). Agencies vary in quality improvement resources, medical direction and specific clinical quality measures. More research is needed to understand the impact of this variation on patient care outcomes.

  9. Material quality assurance risk assessment : [summary].

    Science.gov (United States)

    2013-01-01

    With the shift from quality control (QC) of materials and placement techniques to quality assurance (QA) and acceptance over the years, the role of the Office of Materials Technology (OMT) has been shifting towards assurance of material quality...

  10. Innovative technique for assessment of groundwater quality

    International Nuclear Information System (INIS)

    Ahmad, N.; Ahmad, M.; Sajjad, M.I.

    2001-07-01

    Groundwater quality of a part of Chaj Doab has been assessed with innovative techniques which have not previously been reported in the literature. The concept of triangular coordinates is modified to multi-rectangular ones for the classification of major cations and anions analysed in the groundwater. A Multi-Rectangular Diagram (MRD) has been developed from a combination of rectangular coordinates, by virtue of which milli-equivalent per litre percentages (meq/l %) of major cations and anions can be classified into different categories more efficiently than with classical trilinear diagrams. Both the Piper diagram and the MRD are used for the assessment of 259 data sets analysed from groundwater of the Chaj Doab area, Pakistan. The groundwater types differentiated with the MRD in the study area are calcium bicarbonate, magnesium bicarbonate, sodium bicarbonate and sodium sulfate. Sodium bicarbonate emerges as the most abundant type of groundwater in the study area. A map showing spatial variation of groundwater quality has been constructed with the help of the MRD. This map shows that, in the vicinity of the rivers Chenab and Jhelum, calcium bicarbonate type waters occur, while the central area is mainly covered by sodium bicarbonate dominant waters. Groundwaters near the upper Jhelum canal are dominated by sodium sulfate. An important relation between calcium and sodium is proposed which explains the movement history of groundwater in the aquifer. Hydrogeochemical processes have been evaluated with new methods. Ion exchange between calcium and sodium, precipitation of calcium bicarbonate and dissolution of rock-forming minerals are the major delineated hydrogeochemical processes. (author)

  11. Objective assessment of the impact of frame rate on video quality

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Korhonen, Jari; Forchhammer, Søren

    2012-01-01

    In this paper, we present a novel objective quality metric that takes the impact of frame rate into account. The proposed metric uses PSNR, frame rate and a content dependent parameter that can easily be obtained from spatial and temporal activity indices. The results have been validated on data ...
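
    The abstract gives only the ingredients of this metric (PSNR, frame rate, and a content-dependent parameter), not the formula, so the sketch below shows one plausible shape under stated assumptions: PSNR scaled by a saturating function of frame rate, normalized to a 30 fps reference, with the constant `a` standing in for the content-dependent parameter derived from spatial and temporal activity.

    ```python
    import numpy as np

    def psnr(ref: np.ndarray, dist: np.ndarray, peak: float = 255.0) -> float:
        """Peak signal-to-noise ratio between a reference and a distorted frame."""
        mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
        return float(10.0 * np.log10(peak ** 2 / mse))

    def frame_rate_adjusted_quality(psnr_db: float, fps: float, a: float = 0.1) -> float:
        """Assumed form only: PSNR scaled by a saturating function of frame
        rate, normalized to 30 fps; `a` stands in for the paper's
        content-dependent parameter."""
        return psnr_db * (1.0 - np.exp(-a * fps)) / (1.0 - np.exp(-a * 30.0))

    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64)).astype(np.float64)
    dist = ref + rng.normal(0, 5, ref.shape)
    print(frame_rate_adjusted_quality(psnr(ref, dist), fps=30))
    print(frame_rate_adjusted_quality(psnr(ref, dist), fps=15))
    ```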

  12. Metrics of quantum states

    International Nuclear Information System (INIS)

    Ma Zhihao; Chen Jingling

    2011-01-01

    In this work we study metrics of quantum states, which are natural generalizations of the usual trace metric and Bures metric. Some useful properties of the metrics are proved, such as joint convexity and contractivity under quantum operations. Our result has a potential application in studying the geometry of quantum states as well as entanglement detection.
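
    The two classical examples this record generalizes, the trace metric and the Bures metric, can be computed directly for density matrices from their standard definitions, D(ρ,σ) = ½ Tr|ρ − σ| and D_B = sqrt(2 − 2√F) with fidelity F = (Tr sqrt(√ρ σ √ρ))². A small sketch:

    ```python
    import numpy as np
    from scipy.linalg import sqrtm

    def trace_distance(rho: np.ndarray, sigma: np.ndarray) -> float:
        """D(rho, sigma) = (1/2) * Tr|rho - sigma| (the usual trace metric)."""
        eigs = np.linalg.eigvalsh(rho - sigma)
        return 0.5 * float(np.abs(eigs).sum())

    def bures_distance(rho: np.ndarray, sigma: np.ndarray) -> float:
        """Bures metric D_B = sqrt(2 - 2*sqrt(F)) via the Uhlmann fidelity."""
        s = sqrtm(rho)
        fid = np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2
        return float(np.sqrt(max(0.0, 2.0 - 2.0 * np.sqrt(fid))))

    rho = np.diag([0.9, 0.1])    # a nearly pure single-qubit state
    sigma = np.diag([0.5, 0.5])  # the maximally mixed state
    print(trace_distance(rho, sigma), bures_distance(rho, sigma))
    ```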

  13. Audiovisual quality assessment and prediction for videotelephony

    CERN Document Server

    Belmudez, Benjamin

    2015-01-01

    The work presented in this book focuses on modeling audiovisual quality as perceived by the users of IP-based solutions for video communication like videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects related to the quality perception of entire video calls, namely, the quality estimation of the single audio and video modalities in an interactive context, the audiovisual quality integration of these modalities and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.

  14. Return to intended oncologic treatment (RIOT): a novel metric for evaluating the quality of oncosurgical therapy for malignancy.

    Science.gov (United States)

    Aloia, Thomas A; Zimmitti, Giuseppe; Conrad, Claudius; Gottumukalla, Vijaya; Kopetz, Scott; Vauthey, Jean-Nicolas

    2014-08-01

    After cancer surgery, complications and disability prevent some patients from receiving subsequent treatments. Given that an inability to complete all intended cancer therapies might negate the oncologic benefits of surgical therapy, strategies to improve return to intended oncologic treatment (RIOT), including minimally invasive surgery (MIS), are being investigated. This project was designed to evaluate liver tumor patients to determine the RIOT rate, risk factors for inability to RIOT, and its impact on survivals. Outcomes for a homogenous cohort of 223 patients who underwent open-approach surgery for metachronous colorectal liver metastases and a group of 27 liver tumor patients treated with MIS hepatectomy were examined. Of the 223 open-approach patients, 167 were offered postoperative therapy, yielding a RIOT rate of 75%. The remaining 56 (25%) patients were unable to receive further treatment due to surgical complications (n = 29 pts) or poor performance status (n = 27 pts). Risk factors associated with inability to RIOT were hypertension (OR 2.2, P = 0.025), multiple preoperative chemotherapy regimens (OR 5.9, P = 0.039), and postoperative complications (OR 2.0, P = 0.039). Inability to RIOT correlated with shorter disease-free and overall survivals. The relationship between RIOT and long-term oncologic outcomes suggests that RIOT rates for both open- and MIS-approach cancer surgery should routinely be reported as a quality indicator. © 2014 Wiley Periodicals, Inc.

  15. The Challenges of Data Quality and Data Quality Assessment in the Big Data Era

    Directory of Open Access Journals (Sweden)

    Li Cai

    2015-05-01

    Full Text Available High-quality data are the precondition for analyzing and using big data and for guaranteeing the value of the data. Currently, comprehensive analysis and research of quality standards and quality assessment methods for big data are lacking. First, this paper summarizes reviews of data quality research. Second, this paper analyzes the data characteristics of the big data environment, presents quality challenges faced by big data, and formulates a hierarchical data quality framework from the perspective of data users. This framework consists of big data quality dimensions, quality characteristics, and quality indexes. Finally, on the basis of this framework, this paper constructs a dynamic assessment process for data quality. This process has good expansibility and adaptability and can meet the needs of big data quality assessment. The research results enrich the theoretical scope of big data and lay a solid foundation for the future by establishing an assessment model and studying evaluation algorithms.

  16. Groundwater quality data from the National Water-Quality Assessment Project, May 2012 through December 2013

    Science.gov (United States)

    Arnold, Terri L.; Desimone, Leslie A.; Bexfield, Laura M.; Lindsey, Bruce D.; Barlow, Jeannie R.; Kulongoski, Justin T.; Musgrove, MaryLynn; Kingsbury, James A.; Belitz, Kenneth

    2016-06-20

    Groundwater-quality data were collected from 748 wells as part of the National Water-Quality Assessment Project of the U.S. Geological Survey National Water-Quality Program from May 2012 through December 2013. The data were collected from four types of well networks: principal aquifer study networks, which assess the quality of groundwater used for public water supply; land-use study networks, which assess land-use effects on shallow groundwater quality; major aquifer study networks, which assess the quality of groundwater used for domestic supply; and enhanced trends networks, which evaluate the time scales during which groundwater quality changes. Groundwater samples were analyzed for a large number of water-quality indicators and constituents, including major ions, nutrients, trace elements, volatile organic compounds, pesticides, and radionuclides. These groundwater quality data are tabulated in this report. Quality-control samples also were collected; data from blank and replicate quality-control samples are included in this report.

  17. Balancing Attended and Global Stimuli in Perceived Video Quality Assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2011-01-01

    The visual attention mechanism plays a key role in the human perception system and it has a significant impact on our assessment of perceived video quality. In spite of receiving less attention from the viewers, unattended stimuli can still contribute to the understanding of the visual content. This paper proposes a quality model based on the late attention selection theory, assuming that the video quality is perceived via two mechanisms: global and local quality assessment. First we model several visual features influencing the visual attention in quality assessment scenarios to derive an attention map using appropriate fusion techniques. The global quality assessment, based on the assumption that viewers allocate their attention equally to the entire visual scene, is modeled by four carefully designed quality features. By employing these same quality features, the local quality model...

  18. Assessment and improvement of sound quality in cochlear implant users.

    Science.gov (United States)

    Caldwell, Meredith T; Jiam, Nicole T; Limb, Charles J

    2017-06-01

    Cochlear implants (CIs) have successfully provided speech perception to individuals with sensorineural hearing loss. Recent research has focused on more challenging acoustic stimuli such as music and voice emotion. The purpose of this review is to evaluate and describe sound quality in CI users, with the aims of summarizing novel findings and crucial information about how CI users experience complex sounds. Here we review the existing literature on PubMed and Scopus to present what is known about perceptual sound quality in CI users, discuss existing measures of sound quality, explore how sound quality may be effectively studied, and examine potential strategies for improving sound quality in the CI population. Sound quality, defined here as the perceived richness of an auditory stimulus, is an attribute of implant-mediated listening that remains poorly studied. Sound quality is distinct from appraisal, which is generally defined as the subjective likability or pleasantness of a sound. Existing studies suggest that sound quality perception in the CI population is limited by a range of factors, most notably pitch distortion and dynamic range compression. Although there are currently very few objective measures of sound quality, the CI-MUSHRA has been used as a means of evaluating sound quality. There exist a number of promising strategies to improve sound quality perception in the CI population, including apical cochlear stimulation, pitch tuning, and noise reduction processing strategies. In the published literature, sound quality perception is severely limited among CI users. Future research should focus on developing systematic, objective, and quantitative sound quality metrics and designing therapies to mitigate poor sound quality perception in CI users.

  19. $\\eta$-metric structures

    OpenAIRE

    Gaba, Yaé Ulrich

    2017-01-01

    In this paper, we discuss recent results about generalized metric spaces and fixed point theory. We introduce the notion of $\eta$-cone metric spaces, give some topological properties and prove some fixed point theorems for contractive type maps on these spaces. In particular we show that these $\eta$-cone metric spaces are natural generalizations of both cone metric spaces and metric type spaces.

  20. Assessment of integrated watershed health based on the natural environment, hydrology, water quality, and aquatic ecology

    Directory of Open Access Journals (Sweden)

    S. R. Ahn

    2017-11-01

    Full Text Available Watershed health, including the natural environment, hydrology, water quality, and aquatic ecology, is assessed for the Han River basin (34 148 km2) in South Korea by using the Soil and Water Assessment Tool (SWAT). The evaluation procedures follow those of the Healthy Watersheds Assessment by the U.S. Environmental Protection Agency (EPA). Six components of the watershed are examined to evaluate watershed health: landscape condition (basin natural capacity), stream geomorphology, hydrology, water quality, aquatic habitat condition, and biological condition. In particular, the SWAT is applied to the study basin for the hydrology and water-quality components, covering 237 sub-watersheds (within a standard watershed on the Korea Hydrologic Unit Map) along with three multipurpose dams, one hydroelectric dam, and three multifunction weirs. The SWAT is calibrated (2005–2009) and validated (2010–2014) by using each dam and weir operation, the flux-tower evapotranspiration, the time-domain reflectometry (TDR) soil moisture, and groundwater-level data for the hydrology assessment, and by using sediment, total phosphorus, and total nitrogen data for the water-quality assessment. The water balance, which considers the surface–groundwater interactions and variations in the stream-water quality, is quantified according to the sub-watershed-scale relationship between the watershed hydrologic cycle and stream-water quality. We assess the integrated watershed health according to the U.S. EPA evaluation process, based on the vulnerability levels of the natural environment, water resources, water quality, and ecosystem components. The results indicate that the watershed's health declined during the most recent 10-year period of 2005–2014, as indicated by the worse results for the surface process metric and soil water dynamics compared to those of the 1995–2004 period. The integrated watershed health tended to decrease farther downstream within the watershed.

  1. Nonintrusive Method Based on Neural Networks for Video Quality of Experience Assessment

    Directory of Open Access Journals (Sweden)

    Diego José Luis Botia Valderrama

    2016-01-01

    Full Text Available The measurement and evaluation of QoE (Quality of Experience) have become one of the main focuses in telecommunications for providing services with the quality expected by their users. However, factors such as network parameters and codification can affect video quality, limiting the correlation between objective and subjective metrics, which increases the complexity of evaluating the real quality of video perceived by users. In this paper, a model based on artificial neural networks, namely BPNNs (Backpropagation Neural Networks) and RNNs (Random Neural Networks), is applied to evaluate the subjective quality metric MOS (Mean Opinion Score) and the objective metrics PSNR (Peak Signal Noise Ratio), SSIM (Structural Similarity Index Metric), VQM (Video Quality Metric), and QIBF (Quality Index Based Frame). The proposed model allows establishing QoS (Quality of Service) based on the DiffServ strategy. The metrics were analyzed through Pearson's and Spearman's correlation coefficients, RMSE (Root Mean Square Error), and outlier rate. Correlation values greater than 90% were obtained for all the evaluated metrics.
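
    The agreement statistics used in this record (Pearson's and Spearman's coefficients and RMSE) can be reproduced in a few lines. A minimal sketch with made-up MOS and prediction vectors:

    ```python
    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    def evaluate_metric(predicted: np.ndarray, mos: np.ndarray) -> dict:
        """Agreement between predicted quality scores and subjective MOS."""
        rmse = float(np.sqrt(np.mean((predicted - mos) ** 2)))
        return {
            "pearson": pearsonr(predicted, mos)[0],    # linear agreement
            "spearman": spearmanr(predicted, mos)[0],  # rank monotonicity
            "rmse": rmse,                              # absolute error
        }

    mos = np.array([4.5, 3.2, 2.1, 4.0, 1.5])   # illustrative subjective scores
    pred = np.array([4.3, 3.5, 2.4, 3.8, 1.9])  # illustrative model outputs
    print(evaluate_metric(pred, mos))
    ```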

  2. Metric-based vs peer-reviewed evaluation of a research output: Lesson learnt from UK's national research assessment exercise.

    Directory of Open Access Journals (Sweden)

    Kushwanth Koya

    Full Text Available There is a general inquisition regarding the monetary value of a research output, as a substantial amount of funding in modern academia is essentially awarded to good research presented in the form of journal articles, conference papers, performances, compositions, exhibitions, books and book chapters, which eventually leads to the further question of whether the value varies across disciplines. Answers to these questions will not only assist academics and researchers, but will also help higher education institutions (HEIs) make informed decisions in their administrative and research policies. To examine both questions, we applied the United Kingdom's recently concluded national research assessment exercise, the Research Excellence Framework (REF) 2014, as a case study. All the data for this study are sourced from the openly available publications arising from the digital repositories of REF's results and HEFCE's funding allocations. A world-leading output earns between £7504 and £14,639 per year within the REF cycle, whereas an internationally excellent output earns between £1876 and £3659, varying according to the area of research. Secondly, an investigation into the impact rating of 25315 journal articles submitted in five areas of research by UK HEIs and their awarded funding revealed a linear relationship between the percentage of quartile-one journal publications and the percentage of 4* outputs in the Clinical Medicine, Physics and Psychology/Psychiatry/Neuroscience UoAs, and no relationship in the Classics and Anthropology/Development Studies UoAs, due to the fact that most publications in the latter two disciplines are not journal articles. The findings provide an indication of the monetary value of a research output, from the perspective of government funding for research, and also of what makes a good output, i.e. whether a relationship exists between good quality output and the source of its publication. The

  3. Metric-based vs peer-reviewed evaluation of a research output: Lesson learnt from UK's national research assessment exercise.

    Science.gov (United States)

    Koya, Kushwanth; Chowdhury, Gobinda

    2017-01-01

    There is a general inquisition regarding the monetary value of a research output, as a substantial amount of funding in modern academia is essentially awarded to good research presented in the form of journal articles, conference papers, performances, compositions, exhibitions, books and book chapters, which eventually leads to the further question of whether the value varies across disciplines. Answers to these questions will not only assist academics and researchers, but will also help higher education institutions (HEIs) make informed decisions in their administrative and research policies. To examine both questions, we applied the United Kingdom's recently concluded national research assessment exercise, the Research Excellence Framework (REF) 2014, as a case study. All the data for this study are sourced from the openly available publications arising from the digital repositories of REF's results and HEFCE's funding allocations. A world-leading output earns between £7504 and £14,639 per year within the REF cycle, whereas an internationally excellent output earns between £1876 and £3659, varying according to the area of research. Secondly, an investigation into the impact rating of 25315 journal articles submitted in five areas of research by UK HEIs and their awarded funding revealed a linear relationship between the percentage of quartile-one journal publications and the percentage of 4* outputs in the Clinical Medicine, Physics and Psychology/Psychiatry/Neuroscience UoAs, and no relationship in the Classics and Anthropology/Development Studies UoAs, due to the fact that most publications in the latter two disciplines are not journal articles. The findings provide an indication of the monetary value of a research output, from the perspective of government funding for research, and also of what makes a good output, i.e. whether a relationship exists between good quality output and the source of its publication. The findings may also

  4. Multi-Robot Assembly Strategies and Metrics

    Science.gov (United States)

    MARVEL, JEREMY A.; BOSTELMAN, ROGER; FALCO, JOE

    2018-01-01

    We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies. PMID:29497234

  5. Multi-Robot Assembly Strategies and Metrics.

    Science.gov (United States)

    Marvel, Jeremy A; Bostelman, Roger; Falco, Joe

    2018-02-01

    We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies.

  6. Monitoring and Assessment of Youshui River Water Quality in Youyang

    Science.gov (United States)

    Wang, Xue-qin; Wen, Juan; Chen, Ping-hua; Liu, Na-na

    2018-02-01

    Based on monitoring of the water quality of Youshui River from January 2016 to December 2016, and on the indicator grading and assessment standards for water quality, formulas for three types of water quality index were established. These three indexes, the single indicator index Ai, the single moment index Ak and the comprehensive water quality index A, were used to quantitatively evaluate the quality of each single indicator, the overall water quality, and the change of water quality over time. The results show that the total phosphorus and fecal coliform indicators exceeded the standard, while the other 16 indicators met it. The water quality index of Youshui River is 0.93 and the comprehensive water quality assessment grade is level 2, which indicates that the water quality of Youshui River is good, with room for further improvement. To this end, several protection measures for Youshui River environmental management and pollution treatment are proposed.
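
    The abstract names the three indexes but not their formulas. Purely as an illustration of how such a scheme fits together, the sketch below assumes a common ratio-based construction: a single indicator index Ai as measured concentration over the standard limit, a moment index Ak averaging the indicators at one sampling time, and a comprehensive index A averaging the moments over the year. The functions and sample values are assumptions, not the paper's definitions:

        # Illustrative sketch only; the paper's exact formulas are not given
        # in the abstract, so a ratio-based construction is assumed.
        def single_indicator_index(c_measured, c_standard):
            """Ai: measured concentration over the standard limit (>1 exceeds)."""
            return c_measured / c_standard

        def single_moment_index(concentrations, standards):
            """Ak: mean of all indicator indexes at one sampling moment."""
            ratios = [single_indicator_index(c, s)
                      for c, s in zip(concentrations, standards)]
            return sum(ratios) / len(ratios)

        def comprehensive_index(moment_indexes):
            """A: mean of the moment indexes over the monitoring period."""
            return sum(moment_indexes) / len(moment_indexes)

        # Example: two indicators sampled at three moments.
        standards = [0.2, 1000.0]  # e.g. total phosphorus, fecal coliform limits
        samples = [[0.25, 800.0], [0.18, 1200.0], [0.15, 900.0]]
        moments = [single_moment_index(s, standards) for s in samples]
        print(round(comprehensive_index(moments), 2))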

  7. Assessing water quality in Lake Naivasha

    NARCIS (Netherlands)

    Ndungu, J.N.

    2014-01-01

    Water quality in aquatic systems is important because it maintains the ecological processes that support biodiversity. However, declining water quality due to environmental perturbations threatens the stability of the biotic integrity and therefore hinders the ecosystem services and functions of

  8. Quality assessment for online iris images

    CSIR Research Space (South Africa)

    Makinana, S

    2015-01-01

    Full Text Available Iris recognition systems have attracted much attention for their uniqueness, stability and reliability. However, the performance of such a system depends on the quality of the iris image. Therefore, there is a need to select good-quality images before features can...

  9. Adaptive testing for video quality assessment

    NARCIS (Netherlands)

    Menkovski, V.; Exarchakos, G.; Liotta, A.; Damásio, M.J.; Cardoso, G.; Quico, C.; Geerts, D.

    2011-01-01

    Optimizing the Quality of Experience and avoiding under- or over-provisioning in video delivery services requires understanding of how different resources affect the perceived quality. The utility of resources, such as bit-rate, is directly calculated by proportioning the improvement in quality over...

  10. assessing participation in secondary education quality enhancement

    African Journals Online (AJOL)

    PROF. BARTH EKWEME

    for low parent and community involvement in secondary education quality improvement. It was recommended that the quality of instruction in ... concern on the standard of education hinges on the quality of instruction the children are ... (NTI, 2000). This implies that teachers have a duty of helping students under their care to...

  11. Indoor Air Quality Building Education and Assessment Model

    Science.gov (United States)

    The Indoor Air Quality Building Education and Assessment Model (I-BEAM), released in 2002, is a guidance tool designed for use by building professionals and others interested in indoor air quality in commercial buildings.

  12. Indoor Air Quality Building Education and Assessment Model Forms

    Science.gov (United States)

    The Indoor Air Quality Building Education and Assessment Model (I-BEAM) is a guidance tool designed for use by building professionals and others interested in indoor air quality in commercial buildings.

  13. Web metrics for library and information professionals

    CERN Document Server

    Stuart, David

    2014-01-01

    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional.

  14. Building quality into performance and safety assessment software

    International Nuclear Information System (INIS)

    Wojciechowski, L.C.

    2011-01-01

    Quality assurance is integrated throughout the development lifecycle for performance and safety assessment software. The software used in the performance and safety assessment of a Canadian deep geological repository (DGR) follows the CSA quality assurance standard CSA-N286.7 [1], Quality Assurance of Analytical, Scientific and Design Computer Programs for Nuclear Power Plants. Quality assurance activities in this standard include tasks such as verification and inspection; however, much more is involved in producing a high-quality computer program. The types of errors found with different verification methods are described. The integrated quality process ensures that defects are found and corrected as early as possible. (author)

  15. Disturbance metrics predict a wetland Vegetation Index of Biotic Integrity

    Science.gov (United States)

    Stapanian, Martin A.; Mack, John; Adams, Jean V.; Gara, Brian; Micacchion, Mick

    2013-01-01

    Indices of biological integrity of wetlands based on vascular plants (VIBIs) have been developed in many areas in the USA. Knowledge of the best predictors of VIBIs would enable management agencies to make better decisions regarding mitigation site selection and performance monitoring criteria. We use a novel statistical technique to develop predictive models for an established index of wetland vegetation integrity (Ohio VIBI), using as independent variables 20 indices and metrics of habitat quality, wetland disturbance, and buffer area land use from 149 wetlands in Ohio, USA. For emergent and forest wetlands, predictive models explained 61% and 54% of the variability, respectively, in Ohio VIBI scores. In both cases the most important predictor of Ohio VIBI score was a metric that assessed habitat alteration and development in the wetland. Of secondary importance as a predictor was a metric that assessed microtopography, interspersion, and quality of vegetation communities in the wetland. Metrics and indices assessing disturbance and land use of the buffer area were generally poor predictors of Ohio VIBI scores. Our results suggest that vegetation integrity of emergent and forest wetlands could be most directly enhanced by minimizing substrate and habitat disturbance within the wetland. Such efforts could include reducing or eliminating any practices that disturb the soil profile, such as nutrient enrichment from adjacent farm land, mowing, grazing, or cutting or removing woody plants.
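
    As an illustration of the kind of model described above (a VIBI score predicted from habitat and disturbance metrics, judged by the variance explained), here is a sketch using ordinary least squares on synthetic data. The abstract does not name the "novel statistical technique", and every variable name and number below is invented:

        # Illustration only: predict a VIBI-like score from disturbance
        # metrics on synthetic data and report the variance explained.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n = 149  # the study's number of wetlands; the data here are synthetic
        habitat_alteration = rng.uniform(0, 10, n)
        microtopography = rng.uniform(0, 10, n)
        buffer_land_use = rng.uniform(0, 10, n)

        # Simulate a VIBI that depends mostly on within-wetland metrics.
        vibi = (40 - 3.0 * habitat_alteration + 2.0 * microtopography
                + 0.2 * buffer_land_use + rng.normal(0, 5, n))

        X = np.column_stack([habitat_alteration, microtopography, buffer_land_use])
        model = LinearRegression().fit(X, vibi)
        print("variance explained (R^2):", round(model.score(X, vibi), 2))
        print("coefficients:", model.coef_.round(2))  # buffer metric matters least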

  16. Development of a dementia assessment quality database

    DEFF Research Database (Denmark)

    Johannsen, P.; Jørgensen, Kasper; Korner, A.

    2011-01-01

    OBJECTIVE: Increased focus on the quality of health care requires tools and information to address and improve quality. One tool to evaluate and report the quality of clinical health services is quality indicators based on a clinical database. METHOD: The Capital Region of Denmark runs a quality database for dementia evaluation in the secondary health system. One volume and seven process quality indicators on dementia evaluations are monitored. Indicators include the frequency of demented patients, the percentage of patients evaluated within three months, and whether the work-up included blood tests, Mini-Mental State Examination ... for the data analyses. RESULTS: The database was constructed in 2005 and covers 30% of the Danish population. Data from all consecutive cases evaluated for dementia in the secondary health system in the Capital Region of Denmark are entered. The database has shown that the basic diagnostic work-up programme...

  17. Quality assessment of home births in Denmark.

    Science.gov (United States)

    Jensen, Sabrina; Colmorn, Lotte B; Schroll, Anne-Mette; Krebs, Lone

    2017-05-01

    The safety of home births has been widely debated. Observational studies examining maternal and neonatal outcomes of home births have become more frequent, and the quality of these studies has improved. The aim of the present study was to describe neonatal outcomes of home births compared with hospital births and to discuss which data are needed to evaluate the safety of home births. This was a register-based cohort study. Data on all births in Denmark (2003-2013) were collected from the Danish Medical Birth Registry (DMBR). The cohort included healthy women with uncomplicated pregnancies and no medical interventions during delivery. A total of 6,395 home births and 266,604 hospital births were eligible for analysis. Comparative analyses were performed separately in nulliparous and multiparous women. The outcome measures were neonatal mortality and morbidity. Frequencies of admission to a neonatal intensive care unit and treatment with continuous positive airway pressure were significantly lower in infants born at home than in infants born at a hospital. A slight but significant increase in the rate of early neonatal death was found among infants delivered at home by nulliparous women. This study indicates that home births in Denmark are characterized by a high level of safety owing to low rates of perinatal mortality and morbidity. Missing registration of intrapartum transfers and of planned versus unplanned home births in the DMBR are, however, major limitations to the validity and utility of the reported results. Registration of these items of information is necessary to make reasonable assessments of home births in the future. FUNDING: none. TRIAL REGISTRATION: not relevant.

  18. A Review of Quality Measures for Assessing the Impact of Antimicrobial Stewardship Programs in Hospitals

    Directory of Open Access Journals (Sweden)

    Mary Richard Akpan

    2016-01-01

    Full Text Available The growing problem of antimicrobial resistance (AMR) has led to calls for antimicrobial stewardship programs (ASPs) to control antibiotic use in healthcare settings. Key strategies include prospective audit with feedback and intervention, and formulary restriction and preauthorization. Education, guidelines, clinical pathways, de-escalation, and intravenous to oral conversion are also part of some programs. The impact and quality of an ASP can be assessed using process or outcome measures. Outcome measures are categorized as microbiological, patient or financial outcomes. The objective of this review was to provide an overview of quality measures for assessing ASPs and the reported impact of ASPs in peer-reviewed studies, focusing particularly on patient outcomes. A literature search of papers published in English between 1990 and June 2015 was conducted in five databases using a combination of search terms. Primary studies of any design were included. A total of 63 studies were included in this review. Four studies defined quality metrics for evaluating ASPs. Twenty-one studies assessed the impact of ASPs on antimicrobial utilization and cost, and 25 studies evaluated the impact on resistance patterns and/or the rate of Clostridium difficile infection (CDI). Thirteen studies assessed the impact on patient outcomes including mortality, length of stay (LOS) and readmission rates. Six of these 13 studies reported a non-significant difference in mortality between pre- and post-ASP intervention, and five reported reductions in mortality rate. On LOS, six studies reported shorter LOS post intervention; a significant reduction was reported in one of these studies. Of note, this latter study reported significantly (p < 0.001) higher unplanned readmissions related to infections post-ASP. Patient outcomes need to be a key component of ASP evaluation. The choice of metrics is influenced by data and resource availability. Controlling for confounders must be considered in the design of...

  19. A comparative study on assessment procedures and metric properties of two scoring systems of the Coma Recovery Scale-Revised items: standard and modified scores.

    Science.gov (United States)

    Sattin, Davide; Lovaglio, Piergiorgio; Brenna, Greta; Covelli, Venusia; Rossi Sebastiano, Davide; Duran, Dunja; Minati, Ludovico; Giovannetti, Ambra Mara; Rosazza, Cristina; Bersano, Anna; Nigri, Anna; Ferraro, Stefania; Leonardi, Matilde

    2017-09-01

    The study compared the metric characteristics (discriminant capacity and factorial structure) of two different methods for scoring the items of the Coma Recovery Scale-Revised, and it analysed scale scores collected using the standard assessment procedure and a newly proposed method. Cross-sectional design/methodological study. Inpatient, neurological unit. A total of 153 patients with disorders of consciousness were consecutively enrolled between 2011 and 2013. All patients were assessed with the Coma Recovery Scale-Revised using the standard (rater 1) and inverted (rater 2) procedures. Coma Recovery Scale-Revised score, number of cognitive and reflex behaviours and diagnosis. Regarding patient assessment, rater 1, using the standard procedure, and rater 2, using the inverted procedure, obtained the same best scores for each subscale of the Coma Recovery Scale-Revised for all patients, so no clinical (or statistical) difference was found between the two procedures. In 11 patients (7.7%), rater 2 noted that some Coma Recovery Scale-Revised codified behavioural responses were not found during assessment, although higher response categories were present. A total of 51 (36%) patients presented the same Coma Recovery Scale-Revised scores of 7 or 8 using the standard score, whereas no overlap was found using the modified score. Unidimensionality was confirmed for both scoring systems. The Coma Recovery Scale Modified Score showed a higher discriminant capacity than the standard score, and a monofactorial structure was also supported. The inverted assessment procedure could be a useful evaluation method for the assessment of patients with a diagnosis of disorder of consciousness.

  20. Unsupervised deep learning for real-time assessment of video streaming services

    NARCIS (Netherlands)

    Torres Vega, M.; Mocanu, D.C.; Liotta, A.

    2017-01-01

    Evaluating quality of experience in video streaming services requires a quality metric that works in real time and for a broad range of video types and network conditions. This means that subjective video quality assessment studies, or complex objective video quality assessment metrics, which would...

  1. Selected Malaysia air quality pollutants assessment using ...

    African Journals Online (AJOL)

    Principal component analysis (PCA), factor analysis (FA), and the KMO and Bartlett's tests were applied to five main air quality pollutants (O3, NO2, SO2, CO and PM10) measured across Malaysia. The analysis showed that pollutant concentrations across Malaysia from 2008 to 2011 were acceptable, and the most dominant major ...
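
    The snippet does not say which software produced these analyses. The sketch below reproduces the same workflow on synthetic data, implementing Bartlett's test of sphericity and the KMO measure from their standard formulas and then running PCA; the data and the 0.5 adequacy threshold are illustrative:

        # Sketch on synthetic data: Bartlett's sphericity test and the KMO
        # measure from their standard formulas, followed by PCA.
        import numpy as np
        from scipy.stats import chi2
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        n, p = 365, 5                    # daily records, five pollutants
        base = rng.normal(size=(n, 1))   # shared pollution signal
        X = base + 0.8 * rng.normal(size=(n, p))  # synthetic O3, NO2, SO2, CO, PM10

        R = np.corrcoef(X, rowvar=False)

        # Bartlett's test of sphericity: H0 says R is an identity matrix.
        stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
        df = p * (p - 1) / 2
        print("Bartlett chi2 = %.1f, p = %.3g" % (stat, chi2.sf(stat, df)))

        # KMO: compares correlations with partial correlations.
        inv = np.linalg.inv(R)
        partial = -inv / np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
        off = ~np.eye(p, dtype=bool)
        kmo = (R[off] ** 2).sum() / ((R[off] ** 2).sum() + (partial[off] ** 2).sum())
        print("KMO = %.2f" % kmo)        # above 0.5 is commonly taken as adequate

        pca = PCA().fit(X)
        print("explained variance ratios:", pca.explained_variance_ratio_.round(2))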

  2. Microbiological methods for assessing soil quality

    NARCIS (Netherlands)

    Bloem, J.; Hopkins, D.W.; Benedetti, A.

    2006-01-01

    This book provides a selection of microbiological methods that are already applied in regional or national soil quality monitoring programs. It is split into two parts: part one gives an overview of approaches to monitoring, evaluating and managing soil quality. Part two provides a selection of

  3. Water quality assessment of bioenergy production

    Science.gov (United States)

    Rocio Diaz-Chavez; Goran Berndes; Dan Neary; Andre Elia Neto; Mamadou Fall

    2011-01-01

    Water quality is a measurement of the biological, chemical, and physical characteristics of water against certain standards set to ensure ecological and/or human health. Biomass production and conversion to fuels and electricity can impact water quality in lakes, rivers, and aquifers with consequences for aquatic ecosystem health and also human water uses. Depending on...

  4. Regionally Varying Assessments of Tropical Width in Reanalyses and CMIP5 Models Using a Tropopause Break Metric

    Science.gov (United States)

    Homeyer, C. R.; Martin, E. R.; McKinzie, R.; McCarthy, K.

    2017-12-01

    The boundary between the tropics and the extratropics in each hemisphere is not fixed in space or time. Variations in the north-south width of the tropics are directly connected to changes in weather and climate. These fluctuations have been shown to impact tropical biodiversity, the spread of vector borne diseases, atmospheric chemistry, and additional natural and human sectors. However, there is no unanimous definition of the tropical boundary. This has led to a disagreement on the magnitude of changes in the tropical width during the past 30 years and a lack of understanding concerning its spatial and temporal variability. This study identifies the variability of the tropical width in modern reanalyses (ERA-Interim, JRA-55, CFSR, MERRA, and MERRA-2) and CMIP5 models (all models with available 6-hourly output) using a novel analysis metric: the tropopause "break" (i.e., the sharp discontinuity in tropopause altitude between the tropics and extratropics). Similarities and differences are found amongst the reanalyses, with some degree of tropical narrowing in the Eastern Pacific between 1981 and 2010. Historical simulations from the CMIP5 models agree well with the tropopause break latitudes depicted by the reanalyses, with considerable differences in estimated trends over the relatively short overlapping time period of the datasets. For future projections under the RCP8.5 scenario from 2006 to 2100, CMIP5 models generally show statistically significant increases in tropical width (at the 99% level) throughout each hemisphere, with regional variability of 1-2 degrees in poleward latitude trends. The impact of CMIP5 model grid resolution and other factors on the results of the tropopause break analysis will be discussed.
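
    One simple way to operationalize a tropopause-break metric is to scan a meridional profile for the sharpest poleward drop in tropopause altitude (from roughly 16 km in the tropics to roughly 10 km in the extratropics). The sketch below is an assumed, minimal detector on synthetic data, not the algorithm used in the study:

        # Minimal sketch: locate the tropopause "break" as the largest
        # poleward drop in tropopause altitude along a latitude profile.
        import numpy as np

        def tropopause_break_latitude(lats, tropopause_km):
            """Latitude of the sharpest altitude drop between neighbouring
            grid points, taken as the candidate tropopause break."""
            drops = np.diff(tropopause_km)   # poleward altitude change
            i = np.argmin(drops)             # most negative = sharpest drop
            return 0.5 * (lats[i] + lats[i + 1])

        # Synthetic Northern Hemisphere profile: ~16.5 km tropical tropopause,
        # ~10 km extratropical, with a sharp transition near 32N.
        lats = np.arange(0.0, 61.0, 1.0)
        z = np.where(lats < 32, 16.5, 10.0)
        z = z + np.random.default_rng(2).normal(0, 0.2, lats.size)
        print("break near %.1f deg N" % tropopause_break_latitude(lats, z))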

  5. Toward automated assessment of health Web page quality using the DISCERN instrument.

    Science.gov (United States)

    Allam, Ahmed; Schulz, Peter J; Krauthammer, Michael

    2017-05-01

    As the Internet becomes the number one destination for obtaining health-related information, there is an increasing need to identify health Web pages that convey an accurate and current view of medical knowledge. In response, the research community has created multicriteria instruments for reliably assessing online medical information quality. One such instrument is DISCERN, which measures health Web page quality by assessing an array of features. In order to scale up use of the instrument, there is interest in automating the quality evaluation process by building machine learning (ML)-based DISCERN Web page classifiers. The paper addresses 2 key issues that are essential before constructing automated DISCERN classifiers: (1) generation of a robust DISCERN training corpus useful for training classification algorithms, and (2) assessment of the usefulness of the current DISCERN scoring schema as a metric for evaluating the performance of these algorithms. Using DISCERN, 272 Web pages discussing treatment options in breast cancer, arthritis, and depression were evaluated and rated by trained coders. First, different consensus models were compared to obtain a robust aggregated rating among the coders, suitable for a DISCERN ML training corpus. Second, a new DISCERN scoring criterion was proposed (features-based score) as an ML performance metric that is more reflective of the score distribution across different DISCERN quality criteria. First, we found that a probabilistic consensus model applied to the DISCERN instrument was robust against noise (random ratings) and superior to other approaches for building a training corpus. Second, we found that the established DISCERN scoring schema (overall score) is ill-suited to measure ML performance for automated classifiers. Use of a probabilistic consensus model is advantageous for building a training corpus for the DISCERN instrument, and use of a features-based score is an appropriate ML metric for automated DISCERN
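
    To make the contrast concrete, the sketch below aggregates several coders' per-item DISCERN ratings with a simple per-item median (a stand-in for the paper's probabilistic consensus model, which is not reproduced here) and derives a features-based score from the aggregated items rather than from a single overall-quality rating. All ratings are invented:

        # Sketch: median consensus over coders' per-item DISCERN ratings
        # (1-5), then a "features-based" score as the mean of the items.
        import statistics

        def consensus(ratings_by_coder):
            """ratings_by_coder: one list of item ratings (1-5) per coder."""
            return [statistics.median(item) for item in zip(*ratings_by_coder)]

        coders = [
            [4, 3, 5, 2, 4],  # coder 1's ratings on five DISCERN items
            [4, 2, 5, 3, 4],  # coder 2
            [3, 3, 4, 2, 5],  # coder 3
        ]
        items = consensus(coders)
        features_based = sum(items) / len(items)  # reflects all criteria
        overall = 4                               # the single overall-quality item
        print(items, round(features_based, 2), overall)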

  6. A Risk-based Assessment And Management Framework For Multipollutant Air Quality

    Science.gov (United States)

    Frey, H. Christopher; Hubbell, Bryan

    2010-01-01

    The National Research Council recommended both a risk- and performance-based multipollutant approach to air quality management. Specifically, management decisions should be based on minimizing the exposure to, and risk of adverse effects from, multiple sources of air pollution and that the success of these decisions should be measured by how well they achieved this objective. We briefly describe risk analysis and its application within the current approach to air quality management. Recommendations are made as to how current practice could evolve to support a fully risk- and performance-based multipollutant air quality management system. The ability to implement a risk assessment framework in a credible and policy-relevant manner depends on the availability of component models and data which are scientifically sound and developed with an understanding of their application in integrated assessments. The same can be said about accountability assessments used to evaluate the outcomes of decisions made using such frameworks. The existing risk analysis framework, although typically applied to individual pollutants, is conceptually well suited for analyzing multipollutant management actions. Many elements of this framework, such as emissions and air quality modeling, already exist with multipollutant characteristics. However, the framework needs to be supported with information on exposure and concentration response relationships that result from multipollutant health studies. Because the causal chain that links management actions to emission reductions, air quality improvements, exposure reductions and health outcomes is parallel between prospective risk analyses and retrospective accountability assessments, both types of assessment should be placed within a single framework with common metrics and indicators where possible. Improvements in risk reductions can be obtained by adopting a multipollutant risk analysis framework within the current air quality management

  7. Assessing Requirements Quality through Requirements Coverage

    Science.gov (United States)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, that of determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software...
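
    A requirements-coverage measure of the kind referenced above can be stated in a few lines as the fraction of formalized requirements exercised by at least one test. The sketch below is a toy illustration with invented names, not the coverage metrics defined in [9]:

        # Toy sketch of a requirements-coverage metric: the fraction of
        # requirements exercised by at least one test case.
        def requirements_coverage(requirements, tests, satisfies):
            """satisfies(test, req) -> True if the test exercises the requirement."""
            covered = {r for r in requirements
                       if any(satisfies(t, r) for t in tests)}
            return len(covered) / len(requirements)

        reqs = ["R1", "R2", "R3", "R4"]
        tests = [{"R1", "R2"}, {"R2", "R3"}]  # requirements each test touches
        print(requirements_coverage(reqs, tests, lambda t, r: r in t))  # 0.75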

  8. Ljubljana quality selection (LQS) - innovative case of restaurant assessment system

    OpenAIRE

    Maja Uran Maravić; Daniela Gračan; Zrinka Zadel

    2014-01-01

    Purpose – The purpose of this paper is to briefly present the most well-known restaurant assessment systems in which restaurants are assessed by experts, and to highlight the strengths and weaknesses of each system. Design – The special focus is on answering three questions: how restaurants are assessed by experts, what the elements and standards of assessment are, and whether they are consistent with the quality dimensions advocated in the theory of service quality. Methodology ...

  9. Assessing Community Quality of Health Care.

    Science.gov (United States)

    Herrin, Jeph; Kenward, Kevin; Joshi, Maulik S; Audet, Anne-Marie J; Hines, Stephen J

    2016-02-01

    To determine the agreement of measures of care in different settings (hospitals, nursing homes (NHs), and home health agencies (HHAs)) and identify communities with high-quality care in all settings. Publicly available quality measures for hospitals, NHs, and HHAs, linked to hospital service areas (HSAs). We constructed composite quality measures for hospitals, HHAs, and nursing homes. We used these measures to identify HSAs with exceptionally high- or low-quality care across all settings, or with only high hospital quality, and compared these with respect to sociodemographic and health system factors. We identified three dimensions of hospital quality, four HHA dimensions, and two NH dimensions; these were poorly correlated across the three care settings. HSAs that ranked high on all dimensions had more general practitioners per capita, and fewer specialists per capita, than HSAs that ranked highly on only the hospital measures. Higher quality hospital, HHA, and NH care are not correlated at the regional level; regions where all dimensions of care are high differ systematically from regions which score well on only hospital measures and from those which score well on none.

  10. DEVELOPMENT OF THE METHOD AND U.S. NORMALIZATION DATABASE FOR LIFE CYCLE IMPACT ASSESSMENT AND SUSTAINABILITY METRICS

    Science.gov (United States)

    Normalization is an optional step within Life Cycle Impact Assessment (LCIA) that may be used to assist in the interpretation of life cycle inventory data as well as life cycle impact assessment results. Normalization transforms the magnitude of LCI and LCIA results into relative...
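
    In practice, normalization divides each characterized impact result by a reference value for that category (for example, total annual emissions for a chosen region), expressing results as shares of the reference. A one-function sketch; the category names and reference numbers below are made up:

        # Sketch of LCIA normalization: divide each impact-category result
        # by a reference value for that category (made-up numbers).
        def normalize(impacts, references):
            return {cat: impacts[cat] / references[cat] for cat in impacts}

        impacts = {"global_warming_kgCO2e": 1200.0, "acidification_kgSO2e": 3.5}
        references = {"global_warming_kgCO2e": 7.4e12, "acidification_kgSO2e": 2.6e10}
        print(normalize(impacts, references))  # dimensionless shares of the reference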

  11. Quality Assurance of Assessment and Moderation Discourses Involving Sessional Staff

    Science.gov (United States)

    Grainger, Peter; Adie, Lenore; Weir, Katie

    2016-01-01

    Quality assurance is a major agenda in tertiary education. The casualisation of academic work, especially in teaching, is also a quality assurance issue. Casual or sessional staff members teach and assess more than 50% of all university courses in Australia, and yet the research in relation to the role sessional staff play in quality assurance of…

  12. Quality Assurance--Best Practices for Assessing Online Programs

    Science.gov (United States)

    Wang, Qi

    2006-01-01

    Educators have long sought to define quality in education. With the proliferation of distance education and online learning powered by the Internet, the tasks required to assess the quality of online programs become even more challenging. To assist educators and institutions in search of quality assurance methods to continuously improve their…

  13. Assessment of the Quality Management Models in Higher Education

    Science.gov (United States)

    Basar, Gulsun; Altinay, Zehra; Dagli, Gokmen; Altinay, Fahriye

    2016-01-01

    This study involves the assessment of the quality management models in Higher Education by explaining the importance of quality in higher education and by examining the higher education quality assurance system practices in other countries. The qualitative study was carried out with the members of the Higher Education Planning, Evaluation,…

  14. Service Quality and Customer Satisfaction: An Assessment and Future Directions.

    Science.gov (United States)

    Hernon, Peter; Nitecki, Danuta A.; Altman, Ellen

    1999-01-01

    Reviews the literature of library and information science to examine issues related to service quality and customer satisfaction in academic libraries. Discusses assessment, the application of a business model to higher education, a multiple constituency approach, decision areas regarding service quality, resistance to service quality, and future…

  15. Predictive no-reference assessment of video quality

    NARCIS (Netherlands)

    Torres Vega, M.; Mocanu, D.C.; Stavrou, S.; Liotta, A.

    2017-01-01

    Among the various means to evaluate the quality of video streams, light-weight No-Reference (NR) methods have low computational cost and may be executed on thin clients. Thus, these methods would be perfect candidates in cases of real-time quality assessment, automated quality control and adaptive...
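
    As a concrete example of how light-weight an NR method can be (this is a generic sharpness heuristic, not the method proposed in the record above), the variance of the Laplacian of a decoded frame is a classic one-line no-reference blur estimate:

        # Generic light-weight no-reference metric (not the record's method):
        # variance of the Laplacian as a per-frame blur/sharpness score.
        import cv2

        def blur_score(frame_bgr):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            return cv2.Laplacian(gray, cv2.CV_64F).var()  # lower = blurrier

        cap = cv2.VideoCapture("stream_sample.mp4")  # hypothetical input file
        ok, frame = cap.read()
        if ok:
            print("sharpness:", blur_score(frame))
        cap.release()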

  16. The Emergence of Quality Assessment in Brazilian Basic Education

    Science.gov (United States)

    Kauko, Jaakko; Centeno, Vera Gorodski; Candido, Helena; Shiroma, Eneida; Klutas, Anni

    2016-01-01

    The focus in this article is on Brazilian education policy, specifically quality assurance and evaluation. The starting point is that quality, measured by means of large-scale assessments, is one of the key discursive justifications for educational change. The article addresses the questions of how quality evaluation became a significant feature…

  17. ASSESSMENT OF WATER QUALITY INDEX FOR GROUNDWATER ...

    African Journals Online (AJOL)

    2013-12-31

    Dec 31, 2013 ... The advantages of an index include its ability to represent measurements of a ... "Fair": water quality is usually protected but occasionally threatened or ... The Electrical Conductivity (EC) value is an index to represent the total ...
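
    For background, one widely used construction behind groundwater quality indices is the weighted arithmetic WQI, in which each parameter receives a sub-index scaled against its standard and a weight inversely proportional to that standard. The sketch below assumes that form, since the record's own formulation is not recoverable from the snippet; the sample values are invented:

        # Background sketch of a weighted arithmetic Water Quality Index.
        def weighted_wqi(measured, standards, ideals=None):
            ideals = ideals or [0.0] * len(measured)
            # Sub-index: percent of the allowed excursion above the ideal value.
            q = [100.0 * (m - i) / (s - i)
                 for m, s, i in zip(measured, standards, ideals)]
            w = [1.0 / s for s in standards]  # weight inversely to the standard
            return sum(qi * wi for qi, wi in zip(q, w)) / sum(w)

        # Example: EC (uS/cm), nitrate (mg/l) and pH (ideal value 7.0).
        print(round(weighted_wqi([900.0, 30.0, 8.1],
                                 [1500.0, 45.0, 8.5],
                                 [0.0, 0.0, 7.0]), 1))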

  18. Quality assessment of gamma camera systems

    International Nuclear Information System (INIS)

    Kindler, M.

    1985-01-01

    There are methods and equipment in nuclear medical diagnostics that allow selective visualisation of the functioning of organs or organ systems, using radioactive substances for labelling and for demonstrating metabolic processes. Following a previous contribution on the fundamentals and system components of a gamma camera system, the present article deals with the quality characteristics of such a system and with practical quality control and its significance for clinical applications. [de]

  19. Real Time Face Quality Assessment for Face Log Generation

    DEFF Research Database (Denmark)

    Kamal, Nasrollahi; Moeslund, Thomas B.

    2009-01-01

    Summarizing a long surveillance video to just a few best-quality face images of each subject, a face log, is of great importance in surveillance systems. Face quality assessment is the backbone for face log generation, and improving the quality assessment makes the face logs more reliable. Developing a real-time face quality assessment system using the most important facial features and employing it for face log generation are the concerns of this paper. Extensive tests using four databases are carried out to validate the usability of the system.

  20. Validity of portfolio assessment: which qualities determine ratings?

    NARCIS (Netherlands)

    Driessen, E.W.; Overeem, K.; Tartwijk, J. van; Vleuten, C.P.M. van der; Muijtjens, A.M.M.

    2006-01-01

    The portfolio is becoming increasingly accepted as a valuable tool for learning and assessment. The validity of portfolio assessment, however, may suffer from bias due to irrelevant qualities, such as lay-out and writing style. We examined the possible effects of such qualities in a portfolio

  1. 42 CFR 493.1289 - Standard: Analytic systems quality assessment.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 (2010-10-01). Standard: Analytic systems quality assessment. Section 493.1289, Public Health, Centers for Medicare & Medicaid Services, Department of Health and ... through 493.1283. (b) The analytic systems quality assessment must include a review of the effectiveness...

  2. 42 CFR 493.1299 - Standard: Postanalytic systems quality assessment.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 (2010-10-01). Standard: Postanalytic systems quality assessment. Section 493.1299, Public Health, Centers for Medicare & Medicaid Services, Department of Health and ... 493.1291. (b) The postanalytic systems quality assessment must include a review of the effectiveness of...

  3. 42 CFR 493.1249 - Standard: Preanalytic systems quality assessment.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 (2010-10-01). Standard: Preanalytic systems quality assessment. Section 493.1249, Public Health, Centers for Medicare & Medicaid Services, Department of Health and ... 493.1241 through 493.1242. (b) The preanalytic systems quality assessment must include a review of the...

  4. Standard setting and quality of assessment: A conceptual approach ...

    African Journals Online (AJOL)

    Quality performance standards and the effect of assessment outcomes are important in the educational milieu, as assessment remains the representative ... not be seen as a methodological process of setting pass/fail cut-off points only, but as a powerful catalyst for quality improvements in HPE by promoting excellence in ...

  5. Educational Quality, Outcomes Assessment, and Policy Change: The Virginia Example

    Science.gov (United States)

    Culver, Steve

    2010-01-01

    The higher education system in the Commonwealth of Virginia in the United States provides a case model for how discussions regarding educational quality and assessment of that quality have affected institutions' policy decisions and implementation. Using Levin's (1998) policy analysis framework, this essay explores how assessment of student…

  6. Using spatial metrics and surveys for the assessment of trans-boundary deforestation in protected areas of the Maya Mountain Massif: Belize-Guatemala border.

    Science.gov (United States)

    Chicas, S D; Omine, K; Ford, J B; Sugimura, K; Yoshida, K

    2017-02-01

    Understanding the trans-boundary deforestation history and patterns in protected areas along the Belize-Guatemala border is of regional and global importance. To assess deforestation history and patterns in our study area along a section of the Belize-Guatemala border, we incorporated multi-temporal deforestation rate analysis and spatial metrics with survey results. This multi-faceted approach provides spatial analysis with relevant insights from local stakeholders to better understand historic deforestation dynamics, spatial characteristics and human perspectives regarding the underlying causes thereof. During the study period 1991-2014, forest cover declined in Belize's protected areas: Vaca Forest Reserve 97.88%-87.62%, Chiquibul National Park 99.36%-92.12%, Caracol Archeological Reserve 99.47%-78.10% and Colombia River Forest Reserve 89.22%-78.38% respectively. A comparison of deforestation rates and spatial metrics indices indicated that between time periods 1991-1995 and 2012-2014 deforestation and fragmentation increased in protected areas. The major underlying causes, drivers, impacts, and barriers to bi-national collaboration and solutions of deforestation along the Belize-Guatemala border were identified by community leaders and stakeholders. The Mann-Whitney U test identified significant differences between leaders and stakeholders regarding the ranking of challenges faced by management organizations in the Maya Mountain Massif, except for the lack of assessment and quantification of deforestation (LD, SH: 18.67, 23.25, U = 148, p > 0.05). The survey results indicated that failure to integrate buffer communities, coordinate among managing organizations and establish strong bi-national collaboration has resulted in continued ecological and environmental degradation. The information provided by this research should aid managing organizations in their continued aim to implement effective deforestation mitigation strategies.
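
    The between-group comparison reported above (for example, U = 148 with p > 0.05 on the deforestation-quantification item) is the kind of test scipy exposes directly. A sketch with hypothetical rankings, not the study's survey data:

        # Sketch of a Mann-Whitney U comparison of two groups' rankings.
        from scipy.stats import mannwhitneyu

        leader_ranks = [3, 5, 2, 4, 6, 1, 5, 4, 3, 2]       # community leaders
        stakeholder_ranks = [6, 4, 5, 7, 3, 6, 5, 4, 7, 5]  # other stakeholders

        u, p = mannwhitneyu(leader_ranks, stakeholder_ranks,
                            alternative="two-sided")
        print("U = %.0f, p = %.3f" % (u, p))  # small p -> groups rank differently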

  7. Assessment of Performance Measures for Security of the Maritime Transportation Network. Port Security Metrics: Proposed Measurement of Deterrence Capability

    National Research Council Canada - National Science Library

    Hoaglund, Robert; Gazda, Walter

    2007-01-01

    The goal of this analysis is to provide ASCO and its customers with a comprehensive approach to the development of quantitative performance measures to assess security improvements to the port system...

  8. Dried fruits quality assessment by hyperspectral imaging

    Science.gov (United States)

    Serranti, Silvia; Gargiulo, Aldo; Bonifazi, Giuseppe

    2012-05-01

    Dried fruit products present different market values according to their quality. Such quality is usually quantified in terms of freshness of the products, as well as the presence of contaminants (pieces of shell, husk, and small stones), defects, mould and decays. The combination of these parameters, in terms of relative presence, represents a fundamental set of attributes conditioning the human-senses-detectable attributes of dried fruits (visual appearance, organoleptic properties, etc.) and their overall quality as marketable products. Sorting-selection strategies exist, but they sometimes fail when a higher degree of detection is required, especially when aiming to discriminate between dried fruits of relatively small dimensions or to perform an "early detection" of the pathogen agents responsible for future mould and decay development. Surface characteristics of dried fruits can be investigated by hyperspectral imaging (HSI). In this paper, specific and "ad hoc" applications addressed to propose quality detection logics, adopting a hyperspectral imaging (HSI) based approach, are described, compared and critically evaluated. Reflectance spectra of selected dried fruits (hazelnuts) of different quality and characterized by the presence of different contaminants and defects were acquired by a laboratory device equipped with two HSI systems working in two different spectral ranges: the visible-near infrared field (400-1000 nm) and the near infrared field (1000-1700 nm). The spectra were processed and the results evaluated adopting both a simple and fast wavelength band ratio approach and a more sophisticated classification logic based on principal component analysis (PCA).
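
    The "simple and fast wavelength band ratio approach" mentioned above reduces the hypercube to a single discriminant image by dividing the reflectance at one band by that at another. A sketch on a synthetic cube; the band indices and threshold are illustrative, not the paper's choices:

        # Band-ratio sketch on a synthetic hyperspectral cube
        # (rows x cols x bands), e.g. reflectance over 1000-1700 nm.
        import numpy as np

        rng = np.random.default_rng(3)
        cube = rng.uniform(0.05, 1.0, size=(64, 64, 121))

        def band_ratio(cube, num_band, den_band, eps=1e-6):
            """Per-pixel ratio of two spectral bands; highlights materials
            whose reflectance differs between the chosen wavelengths."""
            return cube[:, :, num_band] / (cube[:, :, den_band] + eps)

        ratio = band_ratio(cube, num_band=30, den_band=90)
        mask = ratio > 1.5  # e.g. separating shell fragments from kernels
        print("flagged pixels:", int(mask.sum()))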

  9. INFORMATION AND COMMUNICATION TECHNOLOGIES TRAINING: CRITERIA FOR INTERNAL QUALITY ASSESSMENT

    Directory of Open Access Journals (Sweden)

    Oleg M. Spirin

    2011-02-01

    Full Text Available In this article, the concept of information and communication technologies training is specified. Internal criteria for assessing the quality of information and communication technologies training are grounded, based on experience of organizing, carrying out and analysing the results of experimental work on assessing the quality of the design, development and efficiency of the methodical system for the base vocational training of informatics teachers under credit-modular technology. Indicators, and approaches to their assessment for determining the degree of each criterion, are presented. Indicators for the criteria "level differentiation", "individualization" and "intensification" of the educational process are specified for assessing the quality of information and communication technologies training.

  10. METRIC context unit architecture

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, R.O.

    1988-01-01

    METRIC is an architecture for a simple but powerful Reduced Instruction Set Computer (RISC). Its speed comes from the simultaneous processing of several instruction streams, with instructions from the various streams being dispatched into METRIC's execution pipeline as they become available for execution. The pipeline is thus kept full, with a mix of instructions for several contexts in execution at the same time. True parallel programming is supported within a single execution unit, the METRIC Context Unit. METRIC's architecture provides for expansion through the addition of multiple Context Units and of specialized Functional Units. The architecture thus spans a range of size and performance from a single-chip microcomputer up through large and powerful multiprocessors. This research concentrates on the specification of the METRIC Context Unit at the architectural level. Performance tradeoffs made during METRIC's design are discussed, and projections of METRIC's performance are made based on simulation studies.

  11. Quality Assessment of Urinary Stone Analysis

    DEFF Research Database (Denmark)

    Siener, Roswitha; Buchholz, Noor; Daudon, Michel

    2016-01-01

    After stone removal, accurate analysis of urinary stone composition is the most crucial laboratory diagnostic procedure for treatment and recurrence prevention in the stone-forming patient. The most common techniques for routine analysis of stones are infrared spectroscopy, X-ray diffraction ... fulfilled the quality requirements. According to the current standard, chemical analysis is considered insufficient for stone analysis, whereas infrared spectroscopy or X-ray diffraction is mandatory. However, the poor results of infrared spectroscopy highlight the importance of equipment, reference spectra and the qualification of the staff for an accurate analysis of stone composition. Regular quality control is essential in carrying out routine stone analysis.

  12. Doctors or technicians: assessing quality of medical education

    Directory of Open Access Journals (Sweden)

    Tayyab Hasan

    2010-09-01

    Full Text Available Medical education institutions usually adapt industrial quality management models that measure the quality of the process of a program but not the quality of the product. The purpose of this paper is to analyze the impact of industrial quality management models on medical education and students, and to highlight the importance of introducing a proper educational quality management model. Industrial quality management models can measure the training component in terms of competencies, but they lack the educational component measurement. These models use performance indicators to assess their process improvement efforts. Researchers suggest that the performance indicators used in educational institutions may only measure their fiscal efficiency without measuring the quality of the educational experience of the students. In most of the institutions, where industrial models are used for quality assurance, students are considered as customers and are provided with the maximum services and facilities possible. Institutions are required to fulfill a list of recommendations from the quality control agencies in order to enhance student satisfaction and to guarantee standard services. Quality of medical education should be assessed by measuring the impact of the educational program and quality improvement procedures in terms of knowledge base development, behavioral change, and patient care. Industrial quality models may focus on academic support services and processes, but educational quality models should be introduced in parallel to focus on educational standards and products. Keywords: educational quality, medical education, quality control, quality assessment, quality management models

  13. METRIC EVALUATION PIPELINE FOR 3D MODELING OF URBAN SCENES

    Directory of Open Access Journals (Sweden)

    M. Bosch

    2017-05-01

    Full Text Available Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state of the art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline developed as publicly available open source software to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software is made publicly available to enable further research and planned benchmarking activities.

  14. Metric Evaluation Pipeline for 3d Modeling of Urban Scenes

    Science.gov (United States)

    Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.

    2017-05-01

    Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state of the art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline developed as publicly available open source software to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software is made publicly available to enable further research and planned benchmarking activities.

  15. Evaluation of HVS models in the application of medical image quality assessment

    Science.gov (United States)

    Zhang, L.; Cavaro-Menard, C.; Le Callet, P.

    2012-03-01

    In this study, four of the most widely used Human Visual System (HVS) models are applied to Magnetic Resonance (MR) images for a signal detection task. Their performances are evaluated against a gold standard derived from the radiologists' majority decision. Task-based image quality assessment requires taking into account the specificities of human perception, for which various HVS models have been proposed. However, to our knowledge, no work has been conducted to evaluate and compare the suitability of these models with respect to the assessment of medical image quality. This pioneering study investigates the performances of different HVS models on medical images in terms of approximation to radiologist performance. We propose to score the performance of each HVS model using the AUC (Area Under the receiver operating characteristic Curve) and its variance estimate as the figure of merit. The radiologists' majority decision is used as the gold standard, so that the estimated AUC measures the distance between the HVS model and radiologist perception. To calculate the variance estimate of the AUC, we adopted the one-shot method, which is independent of the HVS model's output range. The results of this study will help to provide arguments for the application of an HVS model in our future medical image quality assessment metric.
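
    Scoring a model against the radiologists' majority decision reduces, in its simplest form, to computing the AUC of the model's outputs with the majority votes as labels. A sketch with synthetic outputs; the study's one-shot variance estimator is not reproduced here:

        # Sketch: AUC of an HVS model's detection outputs against the
        # radiologists' majority decision (synthetic data).
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        signal_present = rng.integers(0, 2, 200)                 # majority decision
        model_output = signal_present + rng.normal(0, 0.8, 200)  # HVS model scores

        auc = roc_auc_score(signal_present, model_output)
        print("AUC vs. radiologist gold standard: %.3f" % auc)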

  16. Dehazed Image Quality Assessment by Haze-Line Theory

    Science.gov (United States)

    Song, Yingchao; Luo, Haibo; Lu, Rongrong; Ma, Junkai

    2017-06-01

    Images captured in bad weather suffer from low contrast and faint color. Recently, plenty of dehazing algorithms have been proposed to enhance visibility and restore color. However, there is a lack of evaluation metrics to assess the performance of these algorithms or rate them. In this paper, an indicator of contrast enhancement is proposed based on the recently proposed haze-line theory. The theory assumes that the colors of a haze-free image are well approximated by a few hundred distinct colors, which form tight clusters in RGB space. The presence of haze makes each color cluster form a line, which is named a haze-line. By using these haze-lines, we assess the performance of dehazing algorithms designed to enhance contrast by measuring the inter-cluster deviations between the different colors of the dehazed image. Experimental results demonstrate that the proposed Color Contrast (CC) index correlates well with human judgments of image contrast obtained in a subjective test on various scenes of dehazed images and performs better than state-of-the-art metrics.
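
    The CC index rests on measuring how far apart the distinct color clusters of the dehazed image sit. A sketch of that idea, clustering pixels in RGB with k-means and taking the mean pairwise distance between cluster centres as a contrast proxy; the paper's exact weighting and cluster count are not reproduced:

        # Sketch of a haze-line-style contrast proxy: cluster pixels in RGB
        # and measure the spread of the cluster centres.
        import numpy as np
        from scipy.spatial.distance import pdist
        from sklearn.cluster import KMeans

        def color_contrast_proxy(image_rgb, n_colors=50):
            pixels = image_rgb.reshape(-1, 3).astype(float)
            km = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels)
            return pdist(km.cluster_centers_).mean()  # mean inter-cluster deviation

        rng = np.random.default_rng(5)
        hazy = rng.uniform(100, 160, size=(64, 64, 3))   # washed-out colors
        dehazed = rng.uniform(0, 255, size=(64, 64, 3))  # spread-out colors
        print(color_contrast_proxy(hazy) < color_contrast_proxy(dehazed))  # True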

  17. Assessing Students' Spiritual and Religious Qualities

    Science.gov (United States)

    Astin, Alexander W.; Astin, Helen S.; Lindholm, Jennifer A.

    2011-01-01

    This paper describes a comprehensive set of 12 new measures for studying undergraduate students' spiritual and religious development. The three measures of spirituality, four measures of "spiritually related" qualities, and five measures of religiousness demonstrate satisfactory reliability, robustness, and both concurrent and predictive validity.…

  18. Process of technical document quality assessment | Djebabra ...

    African Journals Online (AJOL)

    The most used instrument in training and scientific research is obviously the book, which has always occupied a place of choice. Indeed, the book, and more particularly the technical book, is used as a support in basic training as well as in ... the document is of good quality in order to have confidence in the services ...

  19. Microbiological Quality Assessment and Physico-chemical ...

    African Journals Online (AJOL)

    Two commercial poultry diets, namely chick mash and grower mash, were obtained from five (5) major poultry feed millers in Ilorin metropolis, Nigeria. A total of seventy-five (75) samples were collected, and the diets were examined for their microbiological and physico-chemical qualities. Total bacterial counts in the chick ...

  20. Quality assessment of a placental perfusion protocol

    DEFF Research Database (Denmark)

    Mathiesen, Line; Mose, Tina; Mørck, Thit Juul

    2010-01-01

    ... ml h(-1) from the fetal reservoir) when adding 2 mg (n=7) and 20 mg (n=9) FITC-dextran per 100 ml fetal perfusion media. The success rate of the Copenhagen placental perfusions is provided in this study, including considerations and quality control parameters. Three checkpoints are suggested to determine success rate...

  1. Quality and effectiveness of strategic environmental assessment ...

    African Journals Online (AJOL)

    However, the SEA also achieved significant successes in terms of 'indirect outputs': it supported a more holistic approach to water management, facilitated more effective public participation and contributed to broader strategic planning in the department. The paper concludes by making recommendations to improve the quality ...

  2. Physicochemical and bacteriological quality assessment of the ...

    African Journals Online (AJOL)

    In order to ascertain water quality for human consumption, physical and chemical parameters, together with faecal forms of bacteria, were evaluated in the drinking water resources of the Bambui community in the North West region of Cameroon. This study was necessitated by the occasional presence of suspended ...

  3. Assessing air quality impacts of managed lanes.

    Science.gov (United States)

    2010-12-01

    Impacts on transit bus performance and air quality were investigated for a case study high-occupancy/toll (HOT) lane project on a corridor of I-95 near Miami. Trends in air pollutant concentration monitoring data in the study area were first analyzed...

  4. Soil quality assessment in rice production systems

    NARCIS (Netherlands)

    Rodrigues de Lima, A.C.

    2007-01-01

    In the state of Rio Grande do Sul, Brazil, rice production is one of the most important regional activities. Farmers are concerned that the land use practices for rice production in the Camaquã region may not be sustainable because of detrimental effects on soil quality. The study presented in this

  5. Quality assessment and improvements in pathology practice

    NARCIS (Netherlands)

    Kuijpers, C.C.H.J.

    2016-01-01

    Every patient has the right to receive optimal quality health care. With regard to pathology practice, a small (interpretational) difference can have major impact for the patient, because prognosis and treatment selection are often based on the pathology report. Unfortunately, it is inevitable that

  6. Total Quality Management: Implications for Educational Assessment.

    Science.gov (United States)

    Rankin, Stuart C.

    1992-01-01

    Deming's "System of Profound Knowledge" is even more fundamental than his 14-principle system transformation guide and is based on 4 elements: systems theory, statistical variation, a theory of knowledge, and psychology. Management should revamp total system processes so that quality of product is continually improved. Implications for…

  7. Doctors or technicians: assessing quality of medical education.

    Science.gov (United States)

    Hasan, Tayyab

    2010-01-01

    Medical education institutions usually adapt industrial quality management models that measure the quality of the process of a program but not the quality of the product. The purpose of this paper is to analyze the impact of industrial quality management models on medical education and students, and to highlight the importance of introducing a proper educational quality management model. Industrial quality management models can measure the training component in terms of competencies, but they lack the educational component measurement. These models use performance indicators to assess their process improvement efforts. Researchers suggest that the performance indicators used in educational institutions may only measure their fiscal efficiency without measuring the quality of the educational experience of the students. In most of the institutions, where industrial models are used for quality assurance, students are considered as customers and are provided with the maximum services and facilities possible. Institutions are required to fulfill a list of recommendations from the quality control agencies in order to enhance student satisfaction and to guarantee standard services. Quality of medical education should be assessed by measuring the impact of the educational program and quality improvement procedures in terms of knowledge base development, behavioral change, and patient care. Industrial quality models may focus on academic support services and processes, but educational quality models should be introduced in parallel to focus on educational standards and products.

  8. Daily Encounter Cards—Evaluating the Quality of Documented Assessments

    Science.gov (United States)

    Cheung, Warren J.; Dudek, Nancy; Wood, Timothy J.; Frank, Jason R.

    2016-01-01

    Background: Concerns over the quality of work-based assessment (WBA) completion have resulted in faculty development and rater training initiatives. Daily encounter cards (DECs) are a common form of WBA used in ambulatory care and shift work settings. A tool is needed to evaluate initiatives aimed at improving the quality of completion of this widely used form of WBA. Objective: The completed clinical evaluation report rating (CCERR) was designed to provide a measure of the quality of documented assessments on in-training evaluation reports. The purpose of this study was to provide validity evidence to support using the CCERR to assess the quality of DEC completion. Methods: Six experts in resident assessment grouped 60 DECs into 3 quality categories (high, average, and poor) based on how informative each DEC was for reporting judgments of the resident's performance. Eight supervisors (blinded to the expert groupings) scored the 10 most representative DECs in each group using the CCERR. Mean scores were compared to determine if the CCERR could discriminate based on DEC quality. Results: Statistically significant differences in CCERR scores were observed between all quality groups (P < .001). A generalizability analysis demonstrated the majority of score variation was due to differences in DECs. The reliability with a single rater was 0.95. Conclusions: The CCERR is a reliable and valid tool to evaluate DEC quality. It can serve as an outcome measure for studying interventions targeted at improving the quality of assessments documented on DECs. PMID:27777675

  9. Daily Encounter Cards-Evaluating the Quality of Documented Assessments.

    Science.gov (United States)

    Cheung, Warren J; Dudek, Nancy; Wood, Timothy J; Frank, Jason R

    2016-10-01

    Concerns over the quality of work-based assessment (WBA) completion have resulted in faculty development and rater training initiatives. Daily encounter cards (DECs) are a common form of WBA used in ambulatory care and shift work settings. A tool is needed to evaluate initiatives aimed at improving the quality of completion of this widely used form of WBA. The completed clinical evaluation report rating (CCERR) was designed to provide a measure of the quality of documented assessments on in-training evaluation reports. The purpose of this study was to provide validity evidence to support using the CCERR to assess the quality of DEC completion. Six experts in resident assessment grouped 60 DECs into 3 quality categories (high, average, and poor) based on how informative each DEC was for reporting judgments of the resident's performance. Eight supervisors (blinded to the expert groupings) scored the 10 most representative DECs in each group using the CCERR. Mean scores were compared to determine if the CCERR could discriminate based on DEC quality. Statistically significant differences in CCERR scores were observed between all quality groups (P < .001), indicating that the CCERR is a reliable and valid tool to evaluate DEC quality. It can serve as an outcome measure for studying interventions targeted at improving the quality of assessments documented on DECs.

  10. Center to Advance Palliative Care palliative care clinical care and customer satisfaction metrics consensus recommendations.

    Science.gov (United States)

    Weissman, David E; Morrison, R Sean; Meier, Diane E

    2010-02-01

    Data collection and analysis are vital for strategic planning, quality improvement, and demonstration of palliative care program impact to hospital administrators, private funders and policymakers. Since 2000, the Center to Advance Palliative Care (CAPC) has provided technical assistance to hospitals, health systems and hospices working to start, sustain, and grow nonhospice palliative care programs. CAPC convened a consensus panel in 2008 to develop recommendations for specific clinical and customer metrics that programs should track. The panel agreed on four key domains of clinical metrics and two domains of customer metrics. Clinical metrics include: daily assessment of physical/psychological/spiritual symptoms by a symptom assessment tool; establishment of patient-centered goals of care; support to patient/family caregivers; and management of transitions across care sites. For customer metrics, consensus was reached on two domains that should be tracked to assess satisfaction: patient/family satisfaction, and referring clinician satisfaction. In an effort to ensure access to reliably high-quality palliative care data throughout the nation, hospital palliative care programs are encouraged to collect and report outcomes for each of the metric domains described here.

  11. Landscape morphology metrics for urban areas: analysis of the role of vegetation in the management of the quality of urban environment

    Directory of Open Access Journals (Sweden)

    Danilo Marques de Magalhães

    2013-05-01

    This study demonstrates the applicability of landscape metric analysis to fragments of urban land use. More specifically, it focuses on low vegetation cover, arboreal and shrubbery vegetation, and their distribution across land uses. Differences of vegetation cover in dense urban areas are explained, and the state of the art in Landscape Ecology and landscape metrics is briefly discussed. As an example, a case study was developed in Belo Horizonte, Minas Gerais, Brazil, using area metrics and the relations between area, perimeter, core, and circumscribed circle. From this analysis, the paper proposes the definition of priority areas for conservation, urban parks, free spaces of common land, linear parks and green corridors. It is demonstrated that, in order to design urban landscape, studies of two-dimensional landscape representations are still useful, but they should consider the systemic relation between the different factors related to shape and land use.
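
    The area/perimeter relations mentioned in this record are standard patch-level landscape-ecology shape metrics. The sketch below (Python) illustrates how such measures can be computed; the function names and FRAGSTATS-style formulas are our illustrative assumptions, not necessarily the exact metrics used in the study:

      import math

      def shape_index(area, perimeter):
          """Shape complexity: 1.0 for a circular patch, larger when convoluted."""
          return perimeter / (2.0 * math.sqrt(math.pi * area))

      def related_circumscribing_circle(area, circumscribing_circle_area):
          """FRAGSTATS-style CIRCLE metric: 0 for a circle, near 1 for
          narrow, elongated patches."""
          return 1.0 - area / circumscribing_circle_area

      def core_area_index(core_area, area):
          """Share of a patch remaining after an edge buffer is removed."""
          return core_area / area

      # Example: a 5,000 m^2 vegetation fragment with an 800 m perimeter
      print(shape_index(5000.0, 800.0))   # ~3.2, far from compact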

  12. No-reference image quality assessment for horizontal-path imaging scenarios

    Science.gov (United States)

    Rios, Carlos; Gladysz, Szymon

    2013-05-01

    There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.
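
    As an illustration of what a blind, "no-reference" metric looks like in practice, the following sketch scores frame sharpness by the variance of a Laplacian response, a criterion commonly used for "lucky imaging" frame selection; it is our example, not necessarily one of the metrics the authors collected:

      import numpy as np

      def laplacian_variance(gray):
          """Blind sharpness score: variance of the 4-neighbour Laplacian.
          Higher values mean more high-frequency detail (a sharper frame)."""
          g = gray.astype(float)
          resp = (-4.0 * g[1:-1, 1:-1]
                  + g[:-2, 1:-1] + g[2:, 1:-1]
                  + g[1:-1, :-2] + g[1:-1, 2:])
          return float(resp.var())

      # e.g. "lucky imaging": keep the sharpest frame of a sequence
      # best_frame = max(frames, key=laplacian_variance)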

  13. Exploring the Notion of Quality in Quality Higher Education Assessment in a Collaborative Future

    Science.gov (United States)

    Maguire, Kate; Gibbs, Paul

    2013-01-01

    The purpose of this article is to contribute to the debate on the notion of quality in higher education with particular focus on "objectifying through articulation" the assessment of quality by professional experts. The article gives an overview of the differentiations of quality as used in higher education. It explores a substantial…

  14. Developing and validating a psychometric scale for image quality assessment

    International Nuclear Information System (INIS)

    Mraity, H.; England, A.; Hogg, P.

    2014-01-01

    Purpose: Using AP pelvis as a catalyst, this paper explains how a psychometric scale for image quality assessment can be created using Bandura's theory of self-efficacy. Background: Establishing an accurate diagnosis is highly dependent upon the quality of the radiographic image. Image quality, as a construct (i.e. the set of attributes that makes up image quality), continues to play an essential role in the field of diagnostic radiography. The process of assessing image quality can be facilitated by using criteria, such as the European Commission (EC) guidelines for quality criteria published in 1996. However, with the advent of new technology (Computed Radiography and Digital Radiography), some of the EC criteria may no longer be suitable for assessing the visual quality of a digital radiographic image. Moreover, the lack of validated visual image quality scales in the literature can also lead to significant variations in image quality evaluation. Creating and validating visual image quality scales, using a robust methodology, could reduce variability and improve the validity and reliability of perceptual image quality evaluations.

  15. Assessing the link between coastal urbanization and the quality of nekton habitat in mangrove tidal tributaries

    Science.gov (United States)

    Krebs, Justin M.; Bell, Susan S.; McIvor, Carole C.

    2014-01-01

    To assess the potential influence of coastal development on habitat quality for estuarine nekton, we characterized body condition and reproduction for common nekton from tidal tributaries classified as undeveloped, industrial, urban or man-made (i.e., mosquito-control ditches). We then evaluated these metrics of nekton performance, along with several abundance-based metrics and community structure from a companion paper (Krebs et al. 2013), to determine which metrics best reflected variation in land use and in-stream habitat among tributaries. Body condition was not significantly different among undeveloped, industrial, and man-made tidal tributaries for six of nine taxa; however, three of those taxa were in significantly better condition in urban compared to undeveloped tributaries. Palaemonetes shrimp were the only taxon in significantly poorer condition in urban tributaries. For Poecilia latipinna, there was no difference in body condition (length-weight) between undeveloped and urban tributaries, but energetic condition was significantly better in urban tributaries. Reproductive output was reduced for both P. latipinna (i.e., fecundity) and grass shrimp (i.e., very low densities, few ovigerous females) in urban tributaries; however, a tradeoff between fecundity and offspring size confounded meaningful interpretation of reproduction among land-use classes for P. latipinna. Reproductive allotment by P. latipinna did not differ significantly among land-use classes. Canonical correspondence analysis differentiated urban and non-urban tributaries based on greater impervious surface, less natural mangrove shoreline, higher frequency of hypoxia and lower, more variable salinities in urban tributaries. These characteristics explained 36% of the variation in nekton performance, including high densities of poeciliid fishes, greater energetic condition of sailfin mollies, and low densities of several common nekton and economically important taxa in urban tributaries.

  16. Environmental Monitoring, Water Quality - Lakes Assessments - Attaining

    Data.gov (United States)

    NSGIC Education | GIS Inventory — This layer shows only attaining lakes of the Integrated List. The Lakes Integrated List represents lake assessments in an integrated format for the Clean Water Act...

  17. Physicochemical and bacteriological quality assessment of the ...

    African Journals Online (AJOL)

    assessment of the Bambui community drinking water in the North West Region of ... The detection of bacteria such as E. coli and faecal coliform provides ... Water contamination may be due to leakage of pipes, cross contamination with ...

  18. Virtual reality as a metric for the assessment of laparoscopic psychomotor skills. Learning curves and reliability measures.

    Science.gov (United States)

    Gallagher, A G; Satava, R M

    2002-12-01

    The objective assessment of the psychomotor skills of surgeons is now a priority; however, this is a difficult task because of measurement difficulties associated with the assessment of surgery in vivo. In this study, virtual reality (VR) was used to overcome these problems. Twelve experienced (>50 minimal-access procedures) and 12 inexperienced laparoscopic surgeons (<10 minimal-access procedures) performed tasks on the Minimally Invasive Surgical Trainer-Virtual Reality (MIST VR). Experienced laparoscopic surgeons performed the tasks significantly (p < 0.01) faster, with less error, more economy in the movement of instruments and the use of diathermy, and with greater consistency in performance. The standardized coefficient alpha for the performance measures ranged from 0.89 to 0.98, showing high internal measurement consistency. Test-retest reliability ranged from r = 0.96 to r = 0.5. VR is a useful tool for evaluating the psychomotor skills needed to perform laparoscopic surgery.
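
    The standardized coefficient alpha reported here (0.89 to 0.98) is conventionally computed from the mean inter-measure correlation. A minimal sketch, assuming performance scores are arranged as an (n_subjects, k_measures) array (a layout we assume for illustration):

      import numpy as np

      def standardized_alpha(scores):
          """Standardized Cronbach's alpha for an (n_subjects, k_measures) array."""
          k = scores.shape[1]
          corr = np.corrcoef(scores, rowvar=False)
          r_bar = (corr.sum() - k) / (k * (k - 1))    # mean off-diagonal correlation
          return k * r_bar / (1.0 + (k - 1) * r_bar)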

  19. Development, evaluation, and application of sediment quality targets for assessing and managing contaminated sediments in Tampa Bay, Florida

    Science.gov (United States)

    MacDonald, D.D.; Carr, R.S.; Eckenrod, D.; Greening, H.; Grabe, S.; Ingersoll, C.G.; Janicki, S.; Janicki, T.; Lindskoog, R.A.; Long, E.R.; Pribble, R.; Sloane, G.; Smorong, D.E.

    2004-01-01

    Tampa Bay is a large, urban estuary that is located in west central Florida. Although water quality conditions represent an important concern in this estuary, information from numerous sources indicates that sediment contamination also has the potential to adversely affect aquatic organisms, aquatic-dependent wildlife, and human health. As such, protecting relatively uncontaminated areas of the bay from contamination and reducing the amount of toxic chemicals in contaminated sediments have been identified as high-priority sediment management objectives for Tampa Bay. To address concerns related to sediment contamination in the bay, an ecosystem-based framework for assessing and managing sediment quality conditions was developed that included identification of sediment quality issues and concerns, development of ecosystem goals and objectives, selection of ecosystem health indicators, establishment of metrics and targets for key indicators, and incorporation of key indicators, metrics, and targets into watershed management plans and decision-making processes. This paper describes the process that was used to select and evaluate numerical sediment quality targets (SQTs) for assessing and managing contaminated sediments. These SQTs included measures of sediment chemistry, whole-sediment and pore-water toxicity, and benthic invertebrate community structure. In addition, the paper describes how the SQTs were used to develop site-specific concentration-response models that describe how the frequency of adverse biological effects changes with increasing concentrations of chemicals of potential concern. Finally, a key application of the SQTs for defining sediment management areas is discussed.

  20. Assessment of river quality in a subtropical Austral river system: a combined approach using benthic diatoms and macroinvertebrates

    Science.gov (United States)

    Nhiwatiwa, Tamuka; Dalu, Tatenda; Sithole, Tatenda

    2017-12-01

    River systems support areas of high human population density owing to their favourable conditions for agriculture, water supply and transport networks. Despite human dependence on river systems, anthropogenic activities severely degrade water quality. The main aim of this study was to assess the river health of the Ngamo River using diatom and macroinvertebrate community structure, based on multivariate analyses and community metrics. Ammonia, pH, salinity, total phosphorus and temperature were found to differ significantly among the study seasons. Diatom and macroinvertebrate taxa richness increased downstream, suggesting an improvement in water quality with increasing distance from the pollution point sources. Canonical correspondence analyses identified nutrients (total nitrogen and reactive phosphorus) as important variables structuring the diatom and macroinvertebrate communities. The community metrics and diversity indices for both bioindicators highlighted that the water quality of the river system was very poor. These findings indicate that both methods can be used for water quality assessments, e.g. of sewage and agricultural pollution, and they show high potential for use in water quality monitoring programmes in other regions.

  1. Assessing the colour quality of LED sources

    DEFF Research Database (Denmark)

    Jost-Boissard, S.; Avouac, P.; Fontoynont, Marc

    2015-01-01

    The CIE General Colour Rendering Index is currently the criterion used to describe and measure the colour-rendering properties of light sources. But over the past years, there has been increasing evidence of its limitations, particularly its ability to predict the perceived colour quality of light sources, especially some LEDs. In this paper, several aspects of perceived colour quality are investigated using a side-by-side paired comparison method and the following criteria: naturalness of fruits and vegetables, colourfulness of the Macbeth Color Checker chart, visual appreciation (attractiveness/preference) and colour difference estimations for both visual scenes. Forty-five observers with normal colour vision evaluated nine light sources at 3000 K, and 36 observers evaluated eight light sources at 4000 K. Our results indicate that perceived colour differences are better dealt ...

  2. Assessing future trends in indoor air quality

    International Nuclear Information System (INIS)

    van de Wiel, H.J.; Lebret, E.; van der Lingen, W.K.; Eerens, H.C.; Vaas, L.H.; Leupen, M.J.

    1990-01-01

    Several national and international health organizations have derived concentration levels below which adverse effects on humans are not expected, or below which the excess risk for individuals is less than a specified value. For every priority pollutant, indoor concentrations below this limit are considered healthy. The percentage of Dutch homes exceeding such a limit is taken as a measure of indoor air quality for that component. The present and future indoor air quality of the Dutch housing stock is described for fourteen air pollutants. The highest percentages are scored by radon, environmental tobacco smoke, nitrogen dioxide from unvented combustion, and the potential presence of house dust mite and mould allergens in damp houses. Although the trend for all priority pollutants is downward, the most serious ones remain high in the coming decades if no additional measures are instituted.

  3. ASSESSMENT OF QUALITY OF INNOVATIVE TECHNOLOGIES

    OpenAIRE

    Larisa Alexejevna Ismagilova; Nadegda Aleksandrovna Sukhova

    2016-01-01

    We consider the topical issue of implementation of innovative technologies in the aircraft engine building industry. In this industry, products with high reliability requirements are developed and mass-produced. These products combine the latest achievements of science and technology. To make a decision on implementation of innovative technologies, a comprehensive assessment is carried out. It affects the efficiency of the innovations realization. In connection with this, the assessment of qu...

  4. Considerations on the Assessment and Use of Cycling Performance Metrics and their Integration in the Athlete's Biological Passport

    Directory of Open Access Journals (Sweden)

    Paolo Menaspà

    2017-11-01

    Over the past few decades the possibility to capture real-time data from road cyclists has drastically improved. Given the increasing pressure for improved transparency and openness, there has been an increase in publication of cyclists' physiological and performance data. Recently, it has been suggested that such performance biometrics may be used to strengthen the sensitivity and applicability of the Athlete Biological Passport (ABP) and aid in the fight against doping. This is an interesting concept which has merit, although there are several important factors that need to be considered. These factors include the accuracy of the data collected and the validity (and reliability) of the subsequent performance modeling. In order to guarantee high quality standards, the implementation of well-structured quality systems within sporting organizations should be considered, and external certifications may be required. Various modeling techniques have been developed, many of which are based on fundamental intensity/time relationships. These models have increased our understanding of performance but are currently limited in their application, for example due to the largely unaccounted effects of environmental factors such as heat and altitude. In conclusion, in order to use power data as a performance biometric to be integrated in the biological passport, a number of actions must be taken to ensure accuracy of the data and to better understand road cycling performance in the field. This article outlines considerations in the quantification of cycling performance, and also presents an alternative method (i.e., monitoring race results) to allow for the determination of unusual performance improvements.
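
    One of the "fundamental intensity/time relationships" alluded to is the classical critical-power model, P(t) = CP + W'/t, which is linear in 1/t. A minimal sketch with hypothetical maximal-mean-power data; the paper itself does not prescribe this particular model:

      import numpy as np

      # Hypothetical maximal mean power outputs (W) over four durations (s)
      durations = np.array([180.0, 360.0, 720.0, 1200.0])
      powers = np.array([420.0, 385.0, 355.0, 340.0])

      # Critical-power model P(t) = CP + W'/t is linear in 1/t:
      # slope = W' (J), intercept = CP (W)
      w_prime, cp = np.polyfit(1.0 / durations, powers, 1)
      print(f"CP ~ {cp:.0f} W, W' ~ {w_prime / 1000:.1f} kJ")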

  5. Inter-rater Reliability for Metrics Scored in a Binary Fashion-Performance Assessment for an Arthroscopic Bankart Repair.

    Science.gov (United States)

    Gallagher, Anthony G; Ryu, Richard K N; Pedowitz, Robert A; Henn, Patrick; Angelo, Richard L

    2018-05-02

    To determine the inter-rater reliability (IRR) of a procedure-specific checklist scored in a binary fashion for the evaluation of surgical skill, and whether it meets the minimum level of agreement (≥0.8 between 2 raters) required for high-stakes assessment. In a prospective, randomized and blinded fashion, and after detailed assessment training, 10 Arthroscopy Association of North America Master/Associate Master faculty arthroscopic surgeons (in 5 pairs) with an average of 21 years of surgical experience assessed the video-recorded 3-anchor arthroscopic Bankart repair performance of 44 postgraduate year 4 or 5 residents from 21 Accreditation Council for Graduate Medical Education orthopaedic residency training programs from across the United States. No paired scores of resident surgeon performance evaluated by the 5 teams of faculty assessors dropped below the 0.8 IRR level (mean = 0.93; range 0.84-0.99; standard deviation = 0.035). A comparison between the 5 assessor groups with 1-factor analysis of variance showed that there was no significant difference between the groups (P = .205). Pearson's product-moment correlation coefficient revealed a strong and statistically significant negative correlation, that is, -0.856 (P ...). Procedure-specific metrics scored in a binary fashion meet the need and can show a high (>80%) IRR.
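
    The abstract does not state which IRR statistic was used; for two raters scoring checklist items in a binary fashion, chance-corrected agreement is often summarized with Cohen's kappa. A minimal sketch on hypothetical checklist data:

      def cohens_kappa(rater_a, rater_b):
          """Cohen's kappa for two raters scoring checklist items as 0/1."""
          n = len(rater_a)
          p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed agreement
          p_a = sum(rater_a) / n                                    # P(rater A scores 1)
          p_b = sum(rater_b) / n
          p_e = p_a * p_b + (1 - p_a) * (1 - p_b)                   # chance agreement
          return (p_o - p_e) / (1 - p_e)

      # Hypothetical 10-item checklist scored by two raters
      print(cohens_kappa([1, 1, 0, 1, 1, 1, 0, 1, 1, 0],
                         [1, 1, 0, 1, 0, 1, 0, 1, 1, 0]))   # ~0.78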

  6. Crowdsourcing based subjective quality assessment of adaptive video streaming

    DEFF Research Database (Denmark)

    Shahid, M.; Søgaard, Jacob; Pokhrel, J.

    2014-01-01

    In order to cater for users' quality of experience (QoE) requirements, HTTP adaptive streaming (HAS) based solutions for video services have become popular recently. User QoE feedback can be instrumental in improving the capabilities of such services. Perceptual quality experiments that involve humans are considered to be the most valid method of the assessment of QoE. Besides lab-based subjective experiments, crowdsourcing based subjective assessment of video quality is gaining popularity as an alternative method. This paper presents insights into a study that investigates perceptual preferences of various adaptive video streaming scenarios through crowdsourcing based subjective quality assessment.

  7. Quality assessment before and after knee replacement

    Directory of Open Access Journals (Sweden)

    Paweł Węgorowski

    2017-07-01

    On the basis of the research, it was concluded that the main cause of prosthesis implantation was a knee injury (54%). Prior to implantation of the knee prosthesis, the disease had deteriorated physical fitness in 28% of respondents, while 34% described their fitness as very good. The quality of life after implantation of the knee prosthesis improved significantly in 57% of respondents, and the capacity for self-care after surgery improved considerably in 23% of respondents.

  8. QRS detection based ECG quality assessment

    International Nuclear Information System (INIS)

    Hayn, Dieter; Jammerbund, Bernhard; Schreier, Günter

    2012-01-01

    Although immediate feedback concerning ECG signal quality during recording is useful, up to now not much literature describing quality measures is available. We have implemented and evaluated four ECG quality measures. The empty lead criterion (A), spike detection criterion (B) and lead crossing point criterion (C) were calculated from basic signal properties, while measure D quantified the robustness of QRS detection when applied to the signal. An advanced Matlab-based algorithm combining all four measures and a simplified algorithm for Android platforms, excluding measure D, were developed. Both algorithms were evaluated by taking part in the Computing in Cardiology Challenge 2011. Each measure's accuracy and computing time was evaluated separately. During the challenge, the advanced algorithm correctly classified 93.3% of the ECGs in the training set and 91.6% in the test set. Scores for the simplified algorithm were 0.834 in event 2 and 0.873 in event 3. Computing time for measure D was almost five times higher than for the other measures. Required accuracy levels depend on the application and are related to computing time. While our simplified algorithm may be accurate enough for real-time feedback during ECG self-recordings, QRS detection based measures can further increase the performance if sufficient computing power is available.
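
    The paper's thresholds are not given in this abstract, but measures A and B can be approximated from basic signal properties as described. A rough sketch; the thresholds and exact definitions below are our assumptions, and measure D (QRS-detection robustness) is omitted:

      import numpy as np

      def empty_lead(signal, flat_range=10.0):
          """Measure A (sketch): a lead is 'empty' if its peak-to-peak
          amplitude stays below a small threshold (flat or disconnected)."""
          return float(np.ptp(signal)) < flat_range

      def has_spikes(signal, k=8.0):
          """Measure B (sketch): flag implausibly large sample-to-sample
          jumps relative to the typical slew of the lead."""
          diffs = np.abs(np.diff(signal))
          return bool(np.any(diffs > k * (np.median(diffs) + 1e-9)))

      def recording_acceptable(leads):
          """Accept a multi-lead recording only if no lead fails A or B."""
          return all(not empty_lead(l) and not has_spikes(l) for l in leads)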

  9. E-Services quality assessment framework for collaborative networks

    Science.gov (United States)

    Stegaru, Georgiana; Danila, Cristian; Sacala, Ioan Stefan; Moisescu, Mihnea; Mihai Stanescu, Aurelian

    2015-08-01

    In a globalised networked economy, collaborative networks (CNs) are formed to take advantage of new business opportunities. Collaboration involves shared resources and capabilities, such as e-Services that can be dynamically composed to automate CN participants' business processes. Quality is essential for the success of business process automation. Current approaches mostly focus on quality of service (QoS)-based service selection and ranking algorithms, overlooking the process of service composition which requires interoperable, adaptable and secure e-Services to ensure seamless collaboration, data confidentiality and integrity. Lack of assessment of these quality attributes can result in e-Service composition failure. The quality of e-Service composition relies on the quality of each e-Service and on the quality of the composition process. Therefore, there is the need for a framework that addresses quality from both views: product and process. We propose a quality of e-Service composition (QoESC) framework for quality assessment of e-Service composition for CNs which comprises of a quality model for e-Service evaluation and guidelines for quality of e-Service composition process. We implemented a prototype considering a simplified telemedicine use case which involves a CN in e-Healthcare domain. To validate the proposed quality-driven framework, we analysed service composition reliability with and without using the proposed framework.

  10. Factors Influencing Assessment Quality in Higher Vocational Education

    Science.gov (United States)

    Baartman, Liesbeth; Gulikers, Judith; Dijkstra, Asha

    2013-01-01

    The development of assessments that are fit to assess professional competence in higher vocational education requires a reconsideration of assessment methods, quality criteria and (self)evaluation. This article examines the self-evaluations of nine courses of a large higher vocational education institute. Per course, 4-11 teachers and 3-10…

  11. Assessment report for Hanford analytical services quality assurance plan

    International Nuclear Information System (INIS)

    Taylor, L.H.

    1994-11-01

    This report documents the assessment results of DOE/RL-94-55, Hanford Analytical Services Quality Assurance Plan. The assessment was conducted using the Requirement and Self-Assessment Database (RSAD), which contains mandatory and nonmandatory DOE Order statements for the relevant DOE orders

  12. App Usage Factor: A Simple Metric to Compare the Population Impact of Mobile Medical Apps.

    Science.gov (United States)

    Lewis, Thomas Lorchan; Wyatt, Jeremy C

    2015-08-19

    One factor when assessing the quality of mobile apps is quantifying the impact of a given app on a population. There is currently no metric which can be used to compare the population impact of a mobile app across different health care disciplines. The objective of this study is to create a novel metric to characterize the impact of a mobile app on a population. We developed a simple novel metric, the app usage factor (AUF), defined as the logarithm of the product of the number of active users of a mobile app and the median number of daily uses of the app. The behavior of this metric was modeled through simulations in Python, a general-purpose programming language. Three simulations were conducted to explore the temporal and numerical stability of our metric and a simulated app ecosystem model using a simulated dataset of 20,000 apps. Simulations confirmed the metric was stable between predicted usage limits and remained stable at extremes of these limits. Analysis of a simulated dataset of 20,000 apps calculated an average value for the app usage factor of 4.90 (SD 0.78). A temporal simulation showed that the metric remained stable over time, and suitable limits for its use were identified. A key component when assessing app risk and potential harm is understanding the potential population impact of each mobile app. Our metric has many potential uses for a wide range of stakeholders in the app ecosystem, including users, regulators, developers, and health care professionals. Furthermore, this metric forms part of the overall estimate of risk and potential for harm or benefit posed by a mobile medical app. We identify the merits and limitations of this metric, as well as potential avenues for future validation and research.
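
    The AUF definition above is directly computable. A minimal sketch; the logarithm base is not stated in the abstract, so base 10 is assumed here, which reproduces values on the reported ~4.9 scale:

      import math
      import statistics

      def app_usage_factor(active_users, daily_uses_per_user):
          """AUF = log(active users x median daily uses per user);
          base 10 assumed, since the abstract does not state the base."""
          return math.log10(active_users * statistics.median(daily_uses_per_user))

      # Hypothetical app: 50,000 active users, median 1.6 uses per user per day
      print(app_usage_factor(50_000, [1.2, 1.6, 2.0]))   # ~4.9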

  13. Diagnostic on the appropriation of metrics in software medium enterprises of Medellin city

    Directory of Open Access Journals (Sweden)

    Piedad Metaute P.

    2016-06-01

    This article resulted from the research project "Ownership and use of metrics in medium-sized software companies of the city of Medellín." The objective of the research was to assess the ownership and use of metrics, in order to make recommendations that strengthen both academia and the productive sector on this topic. The methodology was based on a documentary review of international norms, standards, methodologies, guides and tools that address software quality metrics, especially those applicable during software engineering; the main sources consulted were books, journals and articles. Field research was also used, applied to medium-sized companies engaged in building software products, through contact with the people involved in these processes, from whom data pertaining to the real contexts where the events occur were obtained. Topics addressed included project control, process control, software engineering, control of software product quality, the timing of metric application, the application of metrics at different stages, certifications in metrics, methodologies, tools used, processes where metrics contribute, and the types of tests applied. The result was a discussion of findings grounded in the respective regulations, best practices and the needs of the different contexts where software metrics are applied, together with conclusions and practical implications that allowed an assessment of the ownership and use of metrics in medium-sized software companies of Medellín, as well as suggestions for academia aimed at strengthening the subjects responsible for building software engineering skills, especially in metrics, so as to make significant contributions to industry.

  14. Implementation of a channelized Hotelling observer model to assess image quality of x-ray angiography systems.

    Science.gov (United States)

    Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A

    2015-01-01

    Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to the data needed to calculate Fourier-domain measurements, such as the modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) of disks of different sizes and to compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
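
    For reference, a textbook channelized Hotelling observer reduces each region of interest to a few channel outputs and computes the detectability index from the channel-space Hotelling template. The sketch below is a generic implementation under that textbook formulation, not the authors' exact pipeline (their channel choice and uncertainty estimation are not reproduced):

      import numpy as np

      def cho_detectability(sig_rois, bkg_rois, channels):
          """Channelized Hotelling observer detectability index (sketch).
          sig_rois, bkg_rois: (n, h*w) signal-present / signal-absent images.
          channels: (h*w, c) matrix of channel templates (e.g. a Gabor bank)."""
          v_sig = sig_rois @ channels                 # channel outputs, (n, c)
          v_bkg = bkg_rois @ channels
          delta = v_sig.mean(axis=0) - v_bkg.mean(axis=0)
          k = 0.5 * (np.cov(v_sig, rowvar=False) +
                     np.cov(v_bkg, rowvar=False))     # mean class covariance
          w = np.linalg.solve(k, delta)               # Hotelling template
          return float(np.sqrt(delta @ w))            # detectability index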

  15. Image quality assessment for determining efficacy and limitations of Super-Resolution Convolutional Neural Network (SRCNN)

    Science.gov (United States)

    Ward, Chris M.; Harguess, Joshua; Crabb, Brendan; Parameswaran, Shibin

    2017-09-01

    Traditional metrics for evaluating the efficacy of image processing techniques do not lend themselves to understanding the capabilities and limitations of modern image processing methods, particularly those enabled by deep learning. When applying image processing in engineering solutions, a scientist or engineer needs to justify their design decisions with clear metrics. By applying the blind/referenceless image spatial quality evaluator (BRISQUE), structural similarity (SSIM) index scores, and peak signal-to-noise ratio (PSNR) to images before and after image processing, we can quantify quality improvements in a meaningful way and determine the lowest recoverable image quality for a given method.
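
    Of the three scores named, PSNR and SSIM are full-reference measures available in scikit-image; BRISQUE requires a separately trained model and is omitted here. A minimal before/after sketch:

      from skimage.metrics import peak_signal_noise_ratio, structural_similarity

      def before_after_scores(reference, processed, data_range=255):
          """Full-reference quality of a processed image against its reference.
          Both inputs are 2-D grayscale arrays on the same intensity scale."""
          psnr = peak_signal_noise_ratio(reference, processed, data_range=data_range)
          ssim = structural_similarity(reference, processed, data_range=data_range)
          return psnr, ssim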

  16. Objective and Subjective Assessment of Digital Pathology Image Quality

    Directory of Open Access Journals (Sweden)

    Prarthana Shrestha

    2015-03-01

    The quality of an image produced by Whole Slide Imaging (WSI) scanners is of critical importance for using the image in clinical diagnosis, so it is very important to monitor and ensure the quality of images. Since subjective image quality assessment by pathologists is very time-consuming, expensive and difficult to reproduce, we propose a method for objective assessment based on clinically relevant and perceptual image parameters derived from a survey of pathologists: sharpness, contrast, brightness, uniform illumination and color separation. We developed techniques to quantify these parameters based on content-dependent absolute pixel performance, and to manipulate the parameters in a predefined range, resulting in images with content-independent relative quality measures. The method does not require a prior reference model. A subjective assessment of image quality was performed involving 69 pathologists and 372 images (including 12 optimal-quality images and their distorted versions per parameter at 6 different levels). To address inter-reader variability, a representative rating is determined as a one-tailed 95% confidence interval of the mean rating. The results of the subjective assessment support the validity of the proposed objective image quality assessment method to model the readers' perception of image quality. The subjective assessment also provides thresholds for determining the acceptable level of objective quality per parameter. The images for both the subjective and objective quality assessments are based on HercepTest™ slides scanned by the Philips Ultra Fast Scanners, developed at Philips Digital Pathology Solutions. However, the method is also applicable to other types of slides and scanners.
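
    The paper's exact parameter definitions are tied to the described scanners and are not given in this abstract; generic per-image stand-ins for three of the surveyed parameters can nevertheless be computed. A minimal sketch under those assumed definitions:

      import numpy as np

      def perceptual_parameters(gray):
          """Generic stand-ins for three of the surveyed parameters
          (illustrative definitions, not the vendor's)."""
          g = gray.astype(float)
          gy, gx = np.gradient(g)
          return {
              "brightness": float(g.mean()),                 # overall luminance
              "contrast": float(g.std()),                    # RMS contrast
              "sharpness": float(np.hypot(gx, gy).mean()),   # mean gradient magnitude
          }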

  17. Metric diffusion along foliations

    CERN Document Server

    Walczak, Szymon M

    2017-01-01

    Up-to-date research in metric diffusion along compact foliations is presented in this book. Beginning with fundamentals from optimal transportation theory and the theory of foliations, the book moves on to cover the Wasserstein distance, the Kantorovich duality theorem, and the metrization of the weak topology by the Wasserstein distance. Metric diffusion is defined, the topology of the metric space is studied, and the limits of diffused metrics along compact foliations are discussed. Essentials on foliations, holonomy, heat diffusion, and compact foliations are detailed, and vital technical lemmas are proved to aid understanding. Graduate students and researchers in geometry, topology and the dynamics of foliations and laminations will find this supplement useful, as it presents facts about metric diffusion along non-compact foliations and provides a full description of the limit for metrics diffused along foliations with at least one compact leaf in two dimensions.

  18. Metric modular spaces

    CERN Document Server

    Chistyakov, Vyacheslav

    2015-01-01

    Aimed toward researchers and graduate students familiar with elements of functional analysis, linear algebra, and general topology, this book contains a general study of modulars, modular spaces, and metric modular spaces. Modulars may be thought of as generalized velocity fields and serve two important purposes: they generate metric spaces in a unified manner, and they provide a weaker convergence, the modular convergence, whose topology is non-metrizable in general. Metric modular spaces are extensions of metric spaces, metric linear spaces, and classical modular linear spaces. The topics covered include the classification of modulars, metrizability of modular spaces, modular transforms and duality between modular spaces, and metric and modular topologies. Applications illustrated in this book include: the description of superposition operators acting in modular spaces, the existence of regular selections of set-valued mappings, new interpretations of spaces of Lipschitzian and absolutely continuous mappings, the existe...

  19. STUDY OF POND WATER QUALITY BY THE ASSESSMENT OF PHYSICOCHEMICAL PARAMETERS AND WATER QUALITY INDEX

    OpenAIRE

    Vinod Jena; Satish Dixit; Ravi Shrivastava; Sapana Gupta

    2013-01-01

    Water quality index (WQI) is a dimensionless number that combines multiple water quality factors into a single number by normalizing values to subjective rating curves. Conventionally it has been used for evaluating the quality of water resources such as rivers, streams and lakes. The present work is aimed at assessing the water quality index (WQI) of pond water and the impact of human activities on it. Physicochemical parameters were monitored for the calculation of the WQI for ...
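
    A common convention for such an index is the weighted arithmetic WQI, WQI = Σ(q_i·w_i) / Σw_i, where q_i is the 0-100 rating of parameter i and w_i its weight; the paper's exact weights and rating curves are not given in this abstract. A minimal sketch with illustrative values:

      def wqi(ratings, weights):
          """Weighted arithmetic water quality index.
          ratings: sub-index q_i (0-100) per parameter; weights: w_i per parameter."""
          total_w = sum(weights[p] for p in ratings)
          return sum(ratings[p] * weights[p] for p in ratings) / total_w

      # Hypothetical pond sample (parameters, ratings and weights are illustrative)
      print(wqi({"pH": 80.0, "DO": 55.0, "BOD": 40.0},
                {"pH": 0.2, "DO": 0.5, "BOD": 0.3}))   # -> 55.5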

  20. Many quality measurements, but few quality measures assessing the quality of breast cancer care in women: A systematic review

    Directory of Open Access Journals (Sweden)

    Zhang Li

    2006-12-01

    Background: Breast cancer in women is increasingly frequent, and care is complex, onerous and expensive, all of which lend urgency to improvements in care. Quality measurement is essential to monitor effectiveness and to guide improvements in healthcare. Methods: Ten databases, including Medline, were searched electronically to identify measures assessing the quality of breast cancer care in women (diagnosis, treatment, follow-up, documentation of care). Eligible studies measured adherence to standards of breast cancer care in women diagnosed with, or in treatment for, any histological type of adenocarcinoma of the breast. Reference lists of studies, review articles, web sites, and files of experts were searched manually. Evidence appraisal entailed dual independent assessments of data (e.g., indicators used in quality me...