WorldWideScience

Sample records for assessments quality metrics

  1. Assessing Software Quality Through Visualised Cohesion Metrics

    Directory of Open Access Journals (Sweden)

    Timothy Shih

    2001-05-01

    Cohesion is one of the most important factors for software quality, as well as for maintainability, reliability and reusability. Module cohesion is defined as a quality attribute that measures the singleness of purpose of a module. A module of poor quality can be a serious obstacle to system quality. In order to design software of good quality, software managers and engineers need to introduce cohesion metrics to measure and produce desirable software. Highly cohesive software is considered a desirable construction. In this paper, we propose a function-oriented cohesion metric based on the analysis of live variables, live span and the visualization of the processing-element dependency graph. We measure six typical cohesion examples as our experiments and justification. The result is a well-defined, well-normalized, well-visualized and well-experimented cohesion metric that indicates, and thus helps enhance, software cohesion strength. Furthermore, this cohesion metric can easily be incorporated into a software CASE tool to help software engineers improve software quality.
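
    The abstract does not spell out the metric's formula; as a loose illustration of the idea behind variable-based cohesion measures, the sketch below scores a module by how much its statements' variable sets overlap. The statement-to-variables representation and the Jaccard-overlap scoring are assumptions for illustration, not the authors' live-variable metric.

    ```python
    from itertools import combinations

    def cohesion(stmt_vars):
        """Toy cohesion score: mean Jaccard overlap of the variable sets
        referenced by each pair of statements. 1.0 means every statement
        touches the same variables (high cohesion); 0.0 means no overlap."""
        pairs = list(combinations(stmt_vars, 2))
        if not pairs:
            return 1.0  # a single-statement module is trivially cohesive
        def jaccard(a, b):
            return len(a & b) / len(a | b) if (a | b) else 1.0
        return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

    # Example: three statements and the variables each references
    print(cohesion([{"x", "y"}, {"x", "z"}, {"x"}]))  # ~0.44, moderate cohesion
    ```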

  2. Software Quality Metrics for Geant4: An Initial Assessment

    CERN Document Server

    Ronchieri, Elisabetta; Giacomini, Francesco

    2016-01-01

    In the context of critical applications, such as shielding and radiation protection, ensuring the quality of simulation software they depend on is of utmost importance. The assessment of simulation software quality is important not only to determine its adoption in experimental applications, but also to guarantee reproducibility of outcome over time. In this study, we present initial results from an ongoing analysis of Geant4 code based on established software metrics. The analysis evaluates the current status of the code to quantify its characteristics with respect to documented quality standards; further assessments concern evolutions over a series of release distributions. We describe the selected metrics that quantify software attributes ranging from code complexity to maintainability, and highlight what metrics are most effective at evaluating radiation transport software quality. The quantitative assessment of the software is initially focused on a set of Geant4 packages, which play a key role in a wide...

  3. Image quality assessment metrics by using directional projection

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Objective image quality measurement, a fundamental and challenging problem in image processing, aims to evaluate image quality automatically and consistently with human perception. On the assumption that any image distortion can be modeled as the difference between the directional-projection-based maps of the reference and distorted images, we propose a new objective quality assessment method based on directional projection for the full-reference model. Experimental results show that the proposed metrics are well consistent with subjective quality scores.
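
    The abstract gives the modeling assumption but not the construction; a minimal sketch of a full-reference score built from directional projections, assuming the projections are plain intensity sums along the horizontal, vertical and two diagonal directions and the score is a normalized L1 difference of projection vectors:

    ```python
    import numpy as np

    def directional_projections(img):
        """Project an image onto four directions by summing intensities:
        per-column, per-row, and along both sets of diagonals."""
        offsets = range(-img.shape[0] + 1, img.shape[1])
        return [img.sum(axis=0),
                img.sum(axis=1),
                np.array([np.trace(img, k) for k in offsets]),
                np.array([np.trace(np.fliplr(img), k) for k in offsets])]

    def projection_distance(ref, dist):
        """Toy full-reference score: mean normalized L1 distance between
        the directional projections of the two images (0 = identical)."""
        diffs = [np.abs(p - q).sum() / (np.abs(p).sum() + 1e-12)
                 for p, q in zip(directional_projections(ref),
                                 directional_projections(dist))]
        return float(np.mean(diffs))
    ```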

  4. Toward an impairment metric for stereoscopic video: a full-reference video quality metric to assess compressed stereoscopic video.

    Science.gov (United States)

    De Silva, Varuna; Arachchi, Hemantha Kodikara; Ekmekcioglu, Erhan; Kondoz, Ahmet

    2013-09-01

    The quality assessment of impaired stereoscopic video is a key element in designing and deploying advanced immersive media distribution platforms. A widely accepted quality metric to measure impairments of stereoscopic video is, however, still to be developed. As a step toward finding a solution to this problem, this paper proposes a full reference stereoscopic video quality metric to measure the perceptual quality of compressed stereoscopic video. A comprehensive set of subjective experiments is performed with 14 different stereoscopic video sequences, which are encoded using both the H.264 and high efficiency video coding compliant video codecs, to develop a subjective test results database of 116 test stimuli. The subjective results are analyzed using statistical techniques to uncover different patterns of subjective scoring for symmetrically and asymmetrically encoded stereoscopic video. The subjective result database is subsequently used for training and validating a simple but effective stereoscopic video quality metric considering heuristics of binocular vision. The proposed metric performs significantly better than state-of-the-art stereoscopic image and video quality metrics in predicting the subjective scores. The proposed metric and the subjective result database will be made publicly available, and it is expected that the proposed metric and the subjective assessments will have important uses in advanced 3D media delivery systems.

  5. Supporting analysis and assessments quality metrics: Utility market sector

    Energy Technology Data Exchange (ETDEWEB)

    Ohi, J. [National Renewable Energy Lab., Golden, CO (United States)

    1996-10-01

    In FY96, NREL was asked to coordinate all analysis tasks so that in FY97 these tasks will be part of an integrated analysis agenda that will begin to define a 5-15 year R&D roadmap and portfolio for the DOE Hydrogen Program. The purpose of the Supporting Analysis and Assessments task at NREL is to provide this coordination and conduct specific analysis tasks. One of these tasks is to prepare the Quality Metrics (QM) for the Program as part of the overall QM effort at DOE/EERE. The Hydrogen Program is one of 39 program planning units conducting QM, a process begun in FY94 to assess the benefits and costs of DOE/EERE programs. The purpose of QM is to inform decision making during the budget formulation process by describing the expected outcomes of programs during the budget request process. QM is expected to establish a first step toward merit-based budget formulation and allow DOE/EERE to get the "most bang for its (R&D) buck." In FY96, NREL coordinated a QM team that prepared a preliminary QM for the utility market sector. In the electricity supply sector, the QM analysis shows hydrogen fuel cells capturing 5% (or 22 GW) of the total market of 390 GW of new capacity additions through 2020. Hydrogen consumption in the utility sector increases from 0.009 quads in 2005 to 0.4 quads in 2020. Hydrogen fuel cells are projected to displace over 0.6 quads of primary energy in 2020. In future work, NREL will assess the market for decentralized, on-site generation; develop cost credits for distributed generation benefits (such as deferral of transmission and distribution investments and uninterruptible power service), for by-products such as heat and potable water, and for environmental benefits (reduction of criteria air pollutants and greenhouse gas emissions); compete different fuel cell technologies against each other for market share; and begin to address economic benefits, especially employment.

  6. A no-reference video quality assessment metric based on ROI

    Science.gov (United States)

    Jia, Lixiu; Zhong, Xuefei; Tu, Yan; Niu, Wenjuan

    2015-01-01

    A no-reference video quality assessment metric based on the region of interest (ROI) is proposed in this paper. In the metric, objective video quality is evaluated by integrating the quality of two compression artifacts, i.e. blurring distortion and blocking distortion. The Gaussian kernel function was used to extract human density maps for the H.264-coded videos from subjective eye-tracking data. An objective bottom-up ROI extraction model was built from the magnitude discrepancy of the discrete wavelet transform between two consecutive frames, a center-weighted color opponent model, a luminance contrast model, and a frequency saliency model based on spectral residual. Then only the objective saliency maps were used to compute the objective blurring and blocking quality. The results indicate that the objective ROI extraction model has a higher area under the curve (AUC) value. Compared with conventional video quality assessment metrics that measure entire video frames, the metric proposed in this paper not only decreases computational complexity but also improves the correlation between subjective mean opinion scores (MOS) and objective scores.
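
    A rough sketch of the pooling step described here, i.e. per-pixel artifact maps weighted by a saliency/ROI map; the normalized-weight pooling and the alpha blend between blur and blocking scores are assumptions the abstract does not fix:

    ```python
    import numpy as np

    def saliency_weighted_score(distortion_map, saliency_map):
        """Pool a per-pixel distortion map (e.g., blur or blockiness strength)
        with an ROI/saliency map so errors in salient regions dominate."""
        w = saliency_map / (saliency_map.sum() + 1e-12)
        return float((w * distortion_map).sum())

    def video_score(blur_maps, block_maps, sal_maps, alpha=0.5):
        """Blend the two artifact scores per frame, then average over frames."""
        per_frame = [alpha * saliency_weighted_score(b, s)
                     + (1 - alpha) * saliency_weighted_score(k, s)
                     for b, k, s in zip(blur_maps, block_maps, sal_maps)]
        return float(np.mean(per_frame))
    ```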

  7. The compressed average image intensity metric for stereoscopic video quality assessment

    Science.gov (United States)

    Wilczewski, Grzegorz

    2016-09-01

    The following article presents the design, creation and testing of a new metric for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis, with its core functionality designed to serve as a versatile tool for effective 3DTV service quality assessment. As an objective quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under a provider's evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines on a selected set of stereoscopic video samples. The designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.

  8. Quality Metrics in Inpatient Neurology.

    Science.gov (United States)

    Dhand, Amar

    2015-12-01

    Quality of care in the context of inpatient neurology is the standard of performance by neurologists and the hospital system as measured against ideal models of care. There are growing regulatory pressures to define health care value through concrete quantifiable metrics linked to reimbursement. Theoretical models of quality acknowledge its multimodal character with quantitative and qualitative dimensions. For example, the Donabedian model distils quality as a phenomenon of three interconnected domains, structure-process-outcome, with each domain mutually influential. The actual measurement of quality may be implicit, as in peer review in morbidity and mortality rounds, or explicit, in which criteria are prespecified and systemized before assessment. As a practical contribution, this article proposes a set of candidate quality indicators for inpatient neurology based on an updated review of treatment guidelines. These quality indicators may serve as an initial blueprint for explicit quality metrics long overdue for inpatient neurology.

  9. Quality Assessment of Adaptive Bitrate Videos using Image Metrics and Machine Learning

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Brunnström, Kjell

    2015-01-01

    Adaptive bitrate (ABR) streaming is widely used for distribution of videos over the internet. In this work, we investigate how well we can predict the quality of such videos using well-known image metrics, information about the bitrate levels, and a relatively simple machine learning method...
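
    The truncated abstract names the ingredients (image metrics, bitrate information, a simple learner) without fixing a model; a minimal sketch with invented feature values and a random-forest regressor as stand-ins, not the authors' actual features or learner:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    # Hypothetical features per video segment: mean PSNR, mean SSIM,
    # bitrate level, and number of bitrate switches.
    X = np.array([[38.2, 0.95, 3, 1],
                  [31.5, 0.88, 1, 4],
                  [35.0, 0.92, 2, 2],
                  [29.8, 0.85, 1, 5]])
    y = np.array([4.3, 2.9, 3.8, 2.4])  # subjective MOS per segment

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    mae = -cross_val_score(model, X, y, cv=2,
                           scoring="neg_mean_absolute_error").mean()
    model.fit(X, y)
    print(mae, model.predict([[36.0, 0.93, 2, 2]]))
    ```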

  10. Metrics and Assessment

    Directory of Open Access Journals (Sweden)

    Todd Carpenter

    2015-07-01

    An important and timely plenary session at the 2015 UKSG Conference and Exhibition focused on the role of metrics in research assessment. The two excellent speakers had slightly divergent views. Todd Carpenter from NISO (National Information Standards Organization) argued that altmetrics aren't "alt" anymore and that downloads and other forms of digital interaction, including social media references, reference tracking, personal library saving, and secondary linking activity, now provide mainstream approaches to the assessment of scholarly impact. James Wilsdon is professor of science and democracy in the Science Policy Research Unit at the University of Sussex and is chair of the Independent Review of the Role of Metrics in Research Assessment commissioned by the Higher Education Funding Council for England (HEFCE). The outcome of this review will inform the work of HEFCE and the other UK higher education funding bodies as they prepare for the future of the Research Excellence Framework. He is more circumspect, arguing that metrics cannot and should not be used as a substitute for informed judgement. This article provides a summary of both presentations.

  11. Software Quality Metrics

    Science.gov (United States)

    1991-07-01

    March 1979, pp. 121-128. Gorla, Narasimhaiah, Alan C. Benander, and Barbara A. Benander, "Debugging Effort Estimation Using Software Metrics", IEEE...Society, IEEE Guide for the Use of IEEE Standard Dictionary of Measures to Produce Reliable Software, IEEE Std 982.2-1988, June 1989. Jones, Capers

  12. SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, T; Ruan, D [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: With ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric, and even a standard for comparing candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics' quality, based on a novel perspective of the metric as a surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates' ability to prognosticate the oracle relevance value for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates' quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with the surrogate metric exemplified by several widely used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates' behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with eCNR of 0.12 resulted in statistically better segmentation than MSD with eCNR of 0.10, with mean DSC of about 0.85 and first and third quartiles of (0.83, 0.89), versus mean DSC of 0.84 and first and third quartiles of (0.81, 0.89) for MSD. Conclusion: The designed eCNR is capable of characterizing surrogate metrics' quality in prognosticating the oracle relevance value. It has been demonstrated to be
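
    A small sketch of the surrogate idea itself: rank atlases for a target image by an image-space similarity such as NCC, standing in for the inaccessible label-space (oracle) agreement; the atlas data structure and the top-k selection are illustrative assumptions:

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation between two intensity images,
        used here as a surrogate for label-space agreement."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())

    def select_atlases(target, atlases, k=3):
        """Keep the k atlases whose images are most similar to the target;
        each atlas is assumed to be a dict with "image" and "labels" arrays."""
        return sorted(atlases, key=lambda atl: ncc(target, atl["image"]),
                      reverse=True)[:k]
    ```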

  13. Quality through metrics.

    Science.gov (United States)

    Frederick, L; Kallal, T; Krook, H

    1999-01-01

    The Quality Assurance Unit analyzed 18 months of departmental data regarding the report-audit cycle. Process mapping was utilized to identify milestones in the cycle for measurement. Five milestones were identified in the audit cycle, as follows: (1) time from report receipt in quality assurance to start of audit, (2) total calendar days to audit a report, (3) actual person-hours to perform a report audit, (4) time from completion of audit to issuance of report, and (5) total time a report is in quality assurance. An interrelationship digraph is a quality tool used to identify which activities impact the overall report-auditing process. Once the data collection procedure is defined, a spreadsheet is constructed that captures the data. The resulting information is presented in time charts and bar graphs to visually aid interpretation and analysis. Using these quality tools and statistical analyses, the Quality Assurance Unit identified areas needing improvement and confirmed or dispelled previous assumptions regarding the report-auditing process. By mapping, measuring, analyzing, and displaying the data, the overall process was examined critically. This resulted in the identification of areas needing improvement and a greater understanding of the report-audit cycle. A further benefit of our increased knowledge was the ability to explain our findings objectively to our client groups. This sharing of information gave impetus to our clients to examine their report-generation process and to make improvements.

  14. Biotic, water-quality, and hydrologic metrics calculated for the analysis of temporal trends in National Water Quality Assessment Program Data in the Western United States

    Science.gov (United States)

    Wiele, Stephen M.; Brasher, Anne M.D.; Miller, Matthew P.; May, Jason T.; Carpenter, Kurt D.

    2012-01-01

    The U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program was established by Congress in 1991 to collect long-term, nationally consistent information on the quality of the Nation's streams and groundwater. The NAWQA Program utilizes interdisciplinary and dynamic studies that link the chemical and physical conditions of streams (such as flow and habitat) with ecosystem health and the biologic condition of algae, aquatic invertebrates, and fish communities. This report presents metrics derived from NAWQA data and the U.S. Geological Survey streamgaging network for sampling sites in the Western United States, as well as associated chemical, habitat, and streamflow properties. The metrics characterize the conditions of algae, aquatic invertebrates, and fish. In addition, we have compiled climate records and basin characteristics related to the NAWQA sampling sites. The calculated metrics and compiled data can be used to analyze ecohydrologic trends over time.

  15. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    Science.gov (United States)

    Post, J. V.

    1981-01-01

    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  16. A Medical Image Watermarking Technique for Embedding EPR and Its Quality Assessment Using No-Reference Metrics

    Directory of Open Access Journals (Sweden)

    Rupinder Kaur

    2013-01-01

    Digital watermarking can be used as an important tool for the security and copyright protection of digital multimedia content. The present paper explores its application as a quality indicator of a watermarked medical image subjected to intentional (noise, cropping, alteration) or unintentional (compression, transmission or filtering) operations. The watermark carries EPR data along with a binary mark used for quality assessment. The binary mark serves as a No-Reference (NR) quality metric that blindly estimates the quality of an image without the need for the original image. It is a semi-fragile watermark which degrades at around the same rate as the original image and thus gives an indication of the quality degradation of the host image at the receiving end. In the proposed method, the original image is divided into two parts: ROI and non-ROI. The ROI is an area that contains diagnostically important information and must be processed without any distortion. The binary mark and EPR are embedded into the DCT domain of the non-ROI. Embedding EPR within a medical image reduces storage and transmission overheads, and no additional file has to be sent along with the image. The watermark (binary mark and EPR) is extracted from the non-ROI part at the receiving end, and a measure of the degradation of the binary mark is used to estimate the quality of the original image. The performance of the proposed method is evaluated by calculating the MSE and PSNR of the original and extracted mark.
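
    The abstract specifies DCT-domain embedding in the non-ROI region but not the exact scheme; a minimal parity-quantization sketch on 8x8 blocks, where the mid-frequency coefficient position (3, 4) and the quantization strength are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def embed_bit(block, bit, strength=12.0):
        """Embed one bit into a mid-frequency DCT coefficient of an 8x8
        non-ROI block by forcing the parity of its quantized value."""
        c = dctn(block, norm="ortho")
        q = int(np.round(c[3, 4] / strength))
        if q % 2 != bit:
            q += 1  # flip parity to encode the bit
        c[3, 4] = q * strength
        return idctn(c, norm="ortho")

    def extract_bit(block, strength=12.0):
        c = dctn(block, norm="ortho")
        return int(np.round(c[3, 4] / strength)) % 2
    ```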

  17. Application of sigma metrics for the assessment of quality control in clinical chemistry laboratory in Ghana: A pilot study

    Directory of Open Access Journals (Sweden)

    Justice Afrifa

    2015-01-01

    Background: Sigma metrics provide a uniquely defined scale with which we can assess the performance of a laboratory. The objective of this study was to assess the internal quality control (QC) in the clinical chemistry laboratory of the University of Cape Coast Hospital (UCC) using the six sigma metrics application. Materials and Methods: We used commercial control sera [normal (L1) and pathological (L2)] for validation of quality control. Metabolites (glucose, urea, and creatinine), lipids [triglycerides (TG), total cholesterol, high-density lipoprotein cholesterol (HDL-C)], enzymes [alkaline phosphatase (ALP), alanine aminotransferase (AST)], electrolytes (sodium, potassium, chloride) and total protein were assessed. Between-day imprecision (CV), inaccuracy (bias) and sigma values were calculated for each control level. Results: Apart from sodium (2.40%, 3.83%) and chloride (2.52%, 2.51%) for L1 and L2 respectively, and glucose (4.82%) and cholesterol (4.86%) for L2, CVs for all other parameters (both L1 and L2) were >5%. Four parameters (HDL-C, urea, creatinine and potassium) achieved sigma levels >1 for both controls. Chloride and sodium achieved sigma levels >1 for L1 but <1 for L2. Glucose and ALP achieved a sigma level <1 for both control levels, whereas TG achieved a sigma level >2 for both control levels. Conclusion: Unsatisfactory sigma levels (<3) were achieved for all parameters using both control levels, which indicates instability and low consistency of results. There is a need for detailed assessment of the analytical procedures and strengthening of the laboratory control systems in order to achieve effective six sigma levels in the laboratory.
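
    Sigma values in studies like this one are conventionally computed with the standard laboratory sigma-metric formula, sigma = (TEa − |bias|) / CV; the numbers in the example below are hypothetical, not taken from the study:

    ```python
    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Laboratory sigma metric: sigma = (TEa - |bias|) / CV, with total
        allowable error (TEa), bias and imprecision (CV) all in percent."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Hypothetical analyte: TEa 10%, bias 2%, between-day CV 4% -> sigma 2.0,
    # below the conventional 3-sigma threshold for acceptable performance.
    print(sigma_metric(10.0, 2.0, 4.0))
    ```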

  18. Informing the judgments of fingerprint analysts using quality metric and statistical assessment tools.

    Science.gov (United States)

    Langenburg, Glenn; Champod, Christophe; Genessay, Thibault

    2012-06-10

    The aim of this research was to evaluate how fingerprint analysts would incorporate information from newly developed tools into their decision making processes. Specifically, we assessed effects using the following: (1) a quality tool to aid in the assessment of the clarity of the friction ridge details, (2) a statistical tool to provide likelihood ratios representing the strength of the corresponding features between compared fingerprints, and (3) consensus information from a group of trained fingerprint experts. The measured variables for the effect on examiner performance were the accuracy and reproducibility of the conclusions against the ground truth (including the impact on error rates) and the analyst accuracy and variation for feature selection and comparison. The results showed that participants using the consensus information from other fingerprint experts demonstrated more consistency and accuracy in minutiae selection. They also demonstrated higher accuracy, sensitivity, and specificity in the decisions reported. The quality tool also affected minutiae selection (which, in turn, had limited influence on the reported decisions); the statistical tool did not appear to influence the reported decisions.

  19. A management-oriented framework for selecting metrics used to assess habitat- and path-specific quality in spatially structured populations

    Science.gov (United States)

    Sam Nicol,; Ruscena Wiederholt,; Diffendorfer, James E.; Brady Mattsson,; Thogmartin, Wayne E.; Semmens, Darius J.; Laura Lopez-Hoffman,; Ryan Norris,

    2016-01-01

    Mobile species with complex spatial dynamics can be difficult to manage because their population distributions vary across space and time, and because the consequences of managing particular habitats are uncertain when evaluated at the level of the entire population. Metrics to assess the importance of habitats and pathways connecting habitats in a network are necessary to guide a variety of management decisions. Given the many metrics developed for spatially structured models, it can be challenging to select the most appropriate one for a particular decision. To guide the management of spatially structured populations, we define three classes of metrics describing habitat and pathway quality based on their data requirements (graph-based, occupancy-based, and demographic-based metrics) and synopsize the ecological literature relating to these classes. Applying the first steps of a formal decision-making approach (problem framing, objectives, and management actions), we assess the utility of metrics for particular types of management decisions. Our framework can help managers with problem framing, choosing metrics of habitat and pathway quality, and to elucidate the data needs for a particular metric. Our goal is to help managers to narrow the range of suitable metrics for a management project, and aid in decision-making to make the best use of limited resources.

  20. Assessments of habitat preferences and quality depend on spatial scale and metrics of fitness

    Science.gov (United States)

    Chalfoun, A.D.; Martin, T.E.

    2007-01-01

    1. Identifying the habitat features that influence habitat selection and enhance fitness is critical for effective management. Ecological theory predicts that habitat choices should be adaptive, such that fitness is enhanced in preferred habitats. However, studies often report mismatches between habitat preferences and fitness consequences across a wide variety of taxa based on a single spatial scale and/or a single fitness component. 2. We examined whether habitat preferences of a declining shrub steppe songbird, the Brewer's sparrow Spizella breweri, were adaptive when multiple reproductive fitness components and spatial scales (landscape, territory and nest patch) were considered. 3. We found that birds settled earlier and in higher densities, together suggesting preference, in landscapes with greater shrub cover and height. Yet nest success was not higher in these landscapes; nest success was primarily determined by nest predation rates. Thus landscape preferences did not match nest predation risk. Instead, nestling mass and the number of nesting attempts per pair increased in preferred landscapes, raising the possibility that landscapes were chosen on the basis of food availability rather than safe nest sites. 4. At smaller spatial scales (territory and nest patch), birds preferred different habitat features (i.e. density of potential nest shrubs) that reduced nest predation risk and allowed greater season-long reproductive success. 5. Synthesis and applications. Habitat preferences reflect the integration of multiple environmental factors across multiple spatial scales, and individuals may have more than one option for optimizing fitness via habitat selection strategies. Assessments of habitat quality for management prescriptions should ideally include analysis of diverse fitness consequences across multiple ecologically relevant spatial scales. © 2007 The Authors.

  1. Towards Reliable Stereoscopic 3D Quality Evaluation: Subjective Assessment and Objective Metrics

    OpenAIRE

    Xing, Liyuan

    2013-01-01

    Stereoscopic three-dimensional (3D) services have become more popular recently amid the promise of providing immersive quality of experience (QoE) to end-users with the help of binocular depth. However, various artifacts arising in the stereoscopic 3D processing chain might cause discomfort and severely degrade the QoE. Unfortunately, although the causes and nature of these artifacts are already clearly understood, it is impossible to eliminate them under the limitations of current stereoscopic...

  2. The Development and Demonstration of The Metric Assessment Tool

    Science.gov (United States)

    1993-09-01

    motivate continuous improvement and likewise quality. Attributes of Meaningful Metrics Section Overview. The importance of metrics cannot be overstated...some of the attributes of meaningful measures discussed earlier in this chapter. The Metrics Handbook. This guide is utilized by a variety of Air...Metric Assessment Tool. Metric Selection. The metric assessment tool was designed to apply to any type of metric. Two criteria were established for

  3. Quality Metric Development Framework (qMDF)

    Directory of Open Access Journals (Sweden)

    K. Mustafa

    2005-01-01

    Several object-oriented metrics have been developed and used in conjunction with quality models to predict the overall quality of software. However, it may not be enough to propose metrics; the fundamental question is their validity, utility and reliability. It is far more significant to be sure that these metrics are really useful, and for that their construct validity must be assured. Thereby, good quality metrics must be developed using a foolproof and sound framework or model. A critical review of the literature on attempts in this regard reveals that there is no standard framework or model available for such an important activity. This study presents a framework for quality metric development called the Quality Metric Development Framework (qMDF), which is prescriptive in nature. qMDF is a general framework, but it has been established especially with ideas from object-oriented metrics. qMDF has been implemented to develop a good quality design metric, as a validation of the proposed framework. Finally, it is argued that adoption of qMDF by metric developers would yield good quality metrics, while ensuring their construct validity, utility and reliability, and reducing development effort.

  4. THE QUALITY METRICS OF INFORMATION SYSTEMS

    Directory of Open Access Journals (Sweden)

    Zora Arsovski

    2008-06-01

    An information system is a special kind of product that depends on a great number of variables related to its nature, the conditions during implementation, and the organizational climate and culture. Quality metrics of information systems (QMIS) therefore have to reflect all of these aspects. This paper presents the basic elements of QMIS, characteristics of implementation and operation metrics for IS, team-management quality metrics for IS, and organizational aspects of quality metrics. The second part of the paper presents the results of a study of QMIS in the area of management information systems (MIS).

  5. Application of sigma metrics for the assessment of quality assurance in clinical biochemistry laboratory in India: a pilot study.

    Science.gov (United States)

    Singh, Bhawna; Goswami, Binita; Gupta, Vinod Kumar; Chawla, Ranjna; Mallika, Venkatesan

    2011-04-01

    Ensuring quality of laboratory services is the need of the hour in the field of health care. Keeping in mind the revolution ushered in by the six sigma concept in the corporate world, the health care sector may reap the benefits of the same. Six sigma provides a general methodology to describe performance on the sigma scale. We aimed to gauge our laboratory performance by sigma metrics. Internal quality control (QC) data were analyzed retrospectively over a period of 6 months from July 2009 to December 2009. Laboratory mean, standard deviation and coefficient of variation were calculated for all the parameters. Sigma was calculated for both levels of internal QC. Satisfactory sigma values (>6) were elicited for creatinine, triglycerides, SGOT, CPK-Total and amylase. Blood urea performed poorly on the sigma scale, with sigma values <3 at both levels of internal QC, indicating the need for stricter quality control to reach six sigma standards for all the analytical processes.

  6. Program for implementing software quality metrics

    Energy Technology Data Exchange (ETDEWEB)

    Yule, H.P.; Riemer, C.A.

    1992-04-01

    This report describes a program by which the Veterans Benefit Administration (VBA) can implement metrics to measure the performance of automated data systems and demonstrate that they are improving over time. It provides a definition of quality, particularly with regard to software. Requirements for management and staff to achieve a successful metrics program are discussed. It lists the attributes of high-quality software, then describes the metrics or calculations that can be used to measure these attributes in a particular system. Case studies of some successful metrics programs used by business are presented. The report ends with suggestions on which metrics the VBA should use and the order in which they should be implemented.

  7. How to evaluate objective video quality metrics reliably

    DEFF Research Database (Denmark)

    Korhonen, Jari; Burini, Nino; You, Junyong;

    2012-01-01

    The typical procedure for evaluating the performance of different objective quality metrics and indices involves comparisons between subjective quality ratings and the quality indices obtained using the objective metrics in question on the known video sequences. Several correlation indicators can...
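
    Two correlation indicators conventionally used for this comparison are the Pearson linear correlation (PLCC) and the Spearman rank-order correlation (SROCC) between subjective ratings and metric outputs; a minimal sketch with made-up numbers:

    ```python
    import numpy as np
    from scipy import stats

    mos = np.array([4.5, 3.2, 2.1, 3.8, 1.9])          # subjective ratings
    scores = np.array([0.92, 0.75, 0.55, 0.83, 0.40])  # objective metric output

    plcc, _ = stats.pearsonr(scores, mos)    # prediction accuracy (linearity)
    srocc, _ = stats.spearmanr(scores, mos)  # prediction monotonicity (rank order)
    print(f"PLCC={plcc:.3f}, SROCC={srocc:.3f}")
    # In practice a nonlinear (e.g., logistic) mapping is usually fitted
    # between metric scores and MOS before computing PLCC and RMSE.
    ```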

  8. Contribution of landscape metrics to the assessment of scenic quality – the example of the landscape structure plan Havelland/Germany

    Directory of Open Access Journals (Sweden)

    H. Herbst

    2009-03-01

    The scenic quality of a landscape is a natural resource that is to be preserved according to German and international law. One important indicator for the evaluation of this value is the structural diversity of the landscape. Although Landscape Metrics (LM) represent a well-known instrument for the quantification of landscape patterns, they are hardly used in applied landscape and environmental planning. This study shows possibilities for integrating LM into a commonly used method of assessing scenic quality, using the example of a Landscape Structure Plan. First results indicate that Shannon's Diversity Index and Edge Density in particular are suitable for an objective evaluation of structural diversity as an indicator of scenic quality. The addition of qualitative parameters to the objective structural analysis is discussed. Moreover, the use of landscape scenery units and raster cells as basic geometries is compared. The results show that LM can support the evaluation of aesthetic quality in environmental planning, especially when integrated into commonly used evaluation methods.
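
    Both indices singled out above have standard definitions; a small sketch computing them from a labeled land-cover raster (the units of edge density depend on the assumed cell size):

    ```python
    import numpy as np

    def shannon_diversity(landcover):
        """Shannon's Diversity Index, SHDI = -sum(p_i * ln p_i), where p_i is
        the proportion of the raster covered by land-cover class i."""
        _, counts = np.unique(landcover, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log(p)).sum())

    def edge_density(landcover, cell_size=1.0):
        """Edge density: total length of class boundaries per unit area,
        counted from neighboring cell pairs with different labels."""
        horiz = (landcover[:, 1:] != landcover[:, :-1]).sum()
        vert = (landcover[1:, :] != landcover[:-1, :]).sum()
        return float((horiz + vert) * cell_size / (landcover.size * cell_size ** 2))
    ```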

  9. Towards Video Quality Metrics Based on Colour Fractal Geometry

    Directory of Open Access Journals (Sweden)

    Richard Noël

    2010-01-01

    Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology and colour. Unfortunately, so far these elements have been taken into consideration independently in the development of image and video quality metrics, so we propose an approach that blends them all together. Our approach allows for the analysis of the complexity of colour images in the RGB colour space, based on a probabilistic algorithm for calculating the fractal dimension and lacunarity. Given that the existing fractal approaches are defined only for gray-scale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the modification of the fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal and can be used as metrics for user-perceived video quality degradation, and we validated them through experimental results obtained for an MPEG-4 video streaming application; finally, the results are compared against those given by widely accepted metrics and subjective tests.
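
    The probabilistic colour extension is not spelled out in the abstract; for orientation, here is a classical box-counting estimate of the fractal dimension of a binary image, which the colour approach generalizes (assumes a non-empty mask):

    ```python
    import numpy as np

    def box_counting_dimension(mask):
        """Estimate fractal dimension by counting occupied boxes N(s) at
        dyadic box sizes s and fitting log N(s) = -D log s."""
        n = min(mask.shape)
        sizes = [2 ** k for k in range(1, int(np.log2(n)))]
        counts = []
        for s in sizes:
            h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
            blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(int(blocks.any(axis=(1, 3)).sum()))
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope
    ```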

  10. Ocean Model Assessment with Lagrangian Metrics

    Science.gov (United States)

    2016-06-07

    The goals of this project are to aid in the development of accurate modeling of upper ocean circulation by using data on circulation observations to test models. These tests, or metrics, will be statistical measures of model and data comparisons. It is believed that having accurate models of upper ocean currents will

  11. Quality metric for spherical panoramic video

    Science.gov (United States)

    Zakharchenko, Vladyslav; Choi, Kwang Pyo; Park, Jeong Hoon

    2016-09-01

    Virtual reality (VR) and augmented reality (AR) applications allow users to view artificial content of a surrounding space, simulating a presence effect with the help of special applications or devices. Synthetic content production is a well-known process in the computer graphics domain, and its pipeline is already established in the industry. However, emerging multimedia formats for immersive entertainment applications, such as free-viewpoint television (FTV) or spherical panoramic video, require different approaches to content management and quality assessment. International standardization of FTV has been promoted by MPEG. This paper discusses an immersive media distribution format and a quality estimation process. The accuracy and reliability of the proposed objective quality estimation method were verified with spherical panoramic images, demonstrating good correlation with subjective quality estimation performed by a group of experts.
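
    The abstract does not disclose the proposed estimation method. Shown here as related background rather than as the paper's method: a common ingredient of objective measures for equirectangular panoramas is latitude weighting of pixel errors (WS-PSNR-style), compensating for the oversampling of polar regions in the sphere-to-plane projection:

    ```python
    import numpy as np

    def ws_psnr(ref, dist, peak=255.0):
        """Latitude-weighted PSNR for an equirectangular panorama: errors are
        weighted by cos(latitude), the relative solid angle of each pixel row."""
        h, w = ref.shape
        lat = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2  # -pi/2 .. pi/2
        wgt = np.repeat(np.cos(lat)[:, None], w, axis=1)
        mse = (wgt * (ref - dist) ** 2).sum() / wgt.sum()
        return float(10 * np.log10(peak ** 2 / mse))
    ```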

  12. A Metric to Assess the Performance of MLIR Services

    Directory of Open Access Journals (Sweden)

    N. Moganarangan

    2014-03-01

    Information Retrieval plays a vital role in the extraction of relevant information. Many researchers have been working on satisfying user needs, though the problem arises when accessing multilingual information. A multilingual environment provides a platform where a query can be formed in one language and the result can be in the same language and/or different languages. Performance evaluation of Information Retrieval for monolingual environments, especially for English, has been developed and standardized from its inception, but there is no specialized evaluation model available for evaluating the performance of services related to multilingual environments or systems. The unavailability of MLIR domain-specific standards is a challenge. This paper presents an enhanced metric to assess the performance of MLIR systems over its counterpart IR metric. The analysis shows that the performance of the enhanced metric is better than that of the conventional metric, and that the metric can help researchers and developers to improve the quality of MLIR systems in present and future scenarios.

  13. Improved structural similarity metric for the visible quality measurement of images

    Science.gov (United States)

    Lee, Daeho; Lim, Sungsoo

    2016-11-01

    The visible quality assessment of images is important to evaluate the performance of image processing methods such as image correction, compression, and enhancement. Structural similarity is widely used to determine visible quality; however, existing structural similarity metrics cannot correctly assess the perceived human visibility of images that have been slightly geometrically transformed or that have undergone significant regional distortion. We propose an improved structural similarity metric that is closer to human visual evaluation. Compared with the existing metrics, the proposed method can more correctly evaluate the similarity between an original image and various distorted images.
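
    For reference, the structural similarity index this work builds on combines luminance, contrast and structure comparisons; a simplified single-window version (the standard metric applies this locally with a sliding window and averages the resulting map):

    ```python
    import numpy as np

    def global_ssim(x, y, L=255.0):
        """Single-window SSIM over whole images; C1 and C2 are the usual
        stabilizing constants from the original SSIM definition."""
        C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cxy = ((x - mx) * (y - my)).mean()
        return float(((2 * mx * my + C1) * (2 * cxy + C2))
                     / ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2)))
    ```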

  14. [Clinical trial data management and quality metrics system].

    Science.gov (United States)

    Chen, Zhao-hua; Huang, Qin; Deng, Ya-zhong; Zhang, Yue; Xu, Yu; Yu, Hao; Liu, Zong-fan

    2015-11-01

    A data quality management system is essential to ensure accurate, complete, consistent, and reliable data collection in clinical research. This paper is devoted to various choices of data quality metrics. They are categorized by study status, e.g. study start-up, conduct, and close-out. In each category, metrics for different purposes are listed according to ALCOA+ principles such as completeness, accuracy, timeliness, traceability, etc. Some frequently used general quality metrics are also introduced. This paper provides as much detail as possible for each metric, including definition, purpose, evaluation, referenced benchmark, and recommended targets in favor of real practice. It is important that sponsors and data management service providers establish a robust integrated clinical trial data quality management system to ensure sustainable high quality of clinical trial deliverables. It will also support enterprise-level data evaluation and benchmarking of data quality across projects, sponsors, and data management service providers by using objective metrics from real clinical trials. We hope this will be a significant input to accelerate the improvement of clinical trial data quality in the industry.

  15. Experiences with Software Quality Metrics in the EMI middleware

    Science.gov (United States)

    Alandes, M.; Kenny, E. M.; Meneses, D.; Pucciani, G.

    2012-12-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering - Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project to extract “code metrics” on the status of the software products and “process metrics” related to the quality of the development and support process such as reaction time to critical bugs, requirements tracking and delays in product releases.

  16. A universal color image quality metric

    NARCIS (Netherlands)

    Toet, A.; Lucassen, M.P.

    2003-01-01

    We extend a recently introduced universal grayscale image quality index to a newly developed perceptually decorrelated color space. The resulting color image quality index quantifies the distortion of a processed color image relative to its original version. We evaluated the new color image quality

  17. Experiences with Software Quality Metrics in the EMI middleware

    CERN Document Server

    Alandes, M; Meneses, D; Pucciani, G

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project to ...

  18. Experiences with Software Quality Metrics in the EMI Middleware

    CERN Document Server

    CERN. Geneva

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project t...

  19. Quality metrics can help the expert during neurological clinical trials

    Science.gov (United States)

    Mahé, L.; Autrusseau, F.; Desal, H.; Guédon, J.; Der Sarkissian, H.; Le Teurnier, Y.; Davila, S.

    2016-03-01

    Carotid surgery is a frequent procedure, corresponding to 15 to 20 thousand operations per year in France. Cerebral perfusion has to be tracked before and after carotid surgery. In this paper, a diagnostic support method using quality metrics is proposed to detect vascular lesions on MR images. Our key aim is to provide a detection tool mimicking the behavior of the human visual system during visual inspection. Relevant Human Visual System (HVS) properties should be integrated in our lesion detection method, which must be robust to common distortions in medical images. Our goal is twofold: to help the neuroradiologist perform the task better and faster, but also to provide a way to reduce the risk of bias in image analysis. Objective quality metrics (OQM) are methods whose goal is to predict perceived quality. In this work, we use objective quality metrics to detect perceivable differences between pairs of images.

  20. "Assessment of different bioequivalent metrics in Rifampin bioequivalence study "

    OpenAIRE

    "Rouini MR; Tajer Zadeh H; Valad Khani M "

    2002-01-01

    The use of secondary metrics has become of special interest in bioequivalency studies. The applicability of the partial area method, truncated AUC and Cmax/AUC has been argued by many authors. This study aims to evaluate the possible superiority of these metrics to the primary metrics (i.e. AUCinf, Cmax and Tmax). The suitability of truncated AUC for assessment of absorption extent, as well as Cmax/AUC and partial AUC for the evaluation of absorption rate in bioequivalency determination, was investigated ...

  1. A priori discretization quality metrics for distributed hydrologic modeling applications

    Science.gov (United States)

    Liu, Hongli; Tolson, Bryan; Craig, James; Shafii, Mahyar; Basu, Nandita

    2016-04-01

    In distributed hydrologic modelling, a watershed is treated as a set of small homogeneous units that address the spatial heterogeneity of the watershed being simulated. The ability of models to reproduce observed spatial patterns firstly depends on the spatial discretization, which is the process of defining homogeneous units in the form of grid cells, subwatersheds, or hydrologic response units etc. It is common for hydrologic modelling studies to simply adopt a nominal or default discretization strategy without formally assessing alternative discretization levels. This approach lacks formal justifications and is thus problematic. More formalized discretization strategies are either a priori or a posteriori with respect to building and running a hydrologic simulation model. A posteriori approaches tend to be ad-hoc and compare model calibration and/or validation performance under various watershed discretizations. The construction and calibration of multiple versions of a distributed model can become a seriously limiting computational burden. Current a priori approaches are more formalized and compare overall heterogeneity statistics of dominant variables between candidate discretization schemes and input data or reference zones. While a priori approaches are efficient and do not require running a hydrologic model, they do not fully investigate the internal spatial pattern changes of variables of interest. Furthermore, the existing a priori approaches focus on landscape and soil data and do not assess impacts of discretization on stream channel definition even though its significance has been noted by numerous studies. The primary goals of this study are to (1) introduce new a priori discretization quality metrics considering the spatial pattern changes of model input data; (2) introduce a two-step discretization decision-making approach to compress extreme errors and meet user-specified discretization expectations through non-uniform discretization threshold

  2. Metrics for assessing improvements in primary health care.

    Science.gov (United States)

    Stange, Kurt C; Etz, Rebecca S; Gullett, Heidi; Sweeney, Sarah A; Miller, William L; Jaén, Carlos Roberto; Crabtree, Benjamin F; Nutting, Paul A; Glasgow, Russell E

    2014-01-01

    Metrics focus attention on what is important. Balanced metrics of primary health care inform purpose and aspiration as well as performance. Purpose in primary health care is about improving the health of people and populations in their community contexts. It is informed by metrics that include long-term, meaning- and relationship-focused perspectives. Aspirational uses of metrics inspire evolving insights and iterative improvement, using a collaborative, developmental perspective. Performance metrics assess the complex interactions among primary care tenets of accessibility, a whole-person focus, integration and coordination of care, and ongoing relationships with individuals, families, and communities; primary health care principles of inclusion and equity, a focus on people's needs, multilevel integration of health, collaborative policy dialogue, and stakeholder participation; basic and goal-directed health care, prioritization, development, and multilevel health outcomes. Environments that support reflection, development, and collaborative action are necessary for metrics to advance health and minimize unintended consequences.

  3. Pragmatic quality metrics for evolutionary software development models

    Science.gov (United States)

    Royce, Walker

    1990-01-01

    Due to the large number of product, project, and people parameters which impact large custom software development efforts, measurement of software product quality is a complex undertaking. Furthermore, the absolute perspective from which quality is measured (customer satisfaction) is intangible. While we probably can't say what the absolute quality of a software product is, we can determine the relative quality, the adequacy of this quality with respect to pragmatic considerations, and identify good and bad trends during development. While no two software engineers will ever agree on an optimum definition of software quality, they will agree that the most important perspective of software quality is its ease of change. We can call this flexibility, adaptability, or some other vague term, but the critical characteristic of software is that it is soft. The easier the product is to modify, the easier it is to achieve any other software quality perspective. This paper presents objective quality metrics derived from consistent lifecycle perspectives of rework which, when used in concert with an evolutionary development approach, can provide useful insight to produce better quality per unit cost/schedule or to achieve adequate quality more efficiently. The usefulness of these metrics is evaluated by applying them to a large, real world, Ada project.

  4. Spread spectrum image watermarking based on perceptual quality metric.

    Science.gov (United States)

    Zhang, Fan; Liu, Wenyu; Lin, Weisi; Ngan, King Ngi

    2011-11-01

    Efficient image watermarking calls for full exploitation of the perceptual distortion constraint. Second-order statistics of visual stimuli are regarded as critical features for perception. This paper proposes a second-order statistics (SOS)-based image quality metric, which considers the texture masking effect and the contrast sensitivity in Karhunen-Loève transform domain. Compared with the state-of-the-art metrics, the quality prediction by SOS better correlates with several subjectively rated image databases, in which the images are impaired by the typical coding and watermarking artifacts. With the explicit metric definition, spread spectrum watermarking is posed as an optimization problem: we search for a watermark to minimize the distortion of the watermarked image and to maximize the correlation between the watermark pattern and the spread spectrum carrier. The simple metric guarantees the optimal watermark a closed-form solution and a fast implementation. The experiments show that the proposed watermarking scheme can take full advantage of the distortion constraint and improve the robustness in return.
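
    The abstract states the search objective in words; a hedged formalization, with symbols introduced here for illustration (w: watermark pattern, c: spread spectrum carrier, x: host image, ε: perceptual distortion budget under the SOS metric):

    ```latex
    % maximize carrier correlation subject to the perceptual constraint
    \max_{w} \ \langle w, c \rangle
    \quad \text{subject to} \quad
    D_{\mathrm{SOS}}(x,\, x + w) \le \varepsilon
    ```

    The closed-form solution claimed in the abstract is consistent with this structure: a quadratic (second-order) distortion measure combined with a linear objective yields a Lagrangian whose stationary point can be written analytically.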

  5. Recommendations for mass spectrometry data quality metrics for open access data (corollary to the Amsterdam Principles).

    Science.gov (United States)

    Kinsinger, Christopher R; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W; Deutsch, Eric W; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L; Omenn, Gilbert S; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L; Simpson, Richard J; Slotta, Douglas; Smith, Richard D; Stein, Stephen E; Tabb, David L; Tagle, Danilo; Yates, John R; Rodriguez, Henry

    2012-02-03

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the U.S. National Cancer Institute (NCI) convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: (1) an evolving list of comprehensive quality metrics and (2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals.

  6. Design For Six Sigma with Critical-To-Quality Metrics for Research Investments

    Energy Technology Data Exchange (ETDEWEB)

    Logan, R W

    2005-06-22

    Design for Six Sigma (DFSS) has evolved as a worthy predecessor to the application of Six-Sigma principles to production, process control, and quality. At Livermore National Laboratory (LLNL), we are exploring the interrelation of our current research, development, and design safety standards as they would relate to the principles of DFSS and Six-Sigma. We have had success in prioritization of research and design using a quantitative scalar metric for value, so we further explore the use of scalar metrics to represent the outcome of our use of the DFSS process. We use the design of an automotive component as an example of combining DFSS metrics into a scalar decision quantity. We then extend this concept to a high-priority, personnel safety example representing work that is toward the mature end of DFSS, and begins the transition into Six-Sigma for safety assessments in a production process. This latter example and objective involve the balance of research investment, quality control, and system operation and maintenance of high explosive handling at LLNL and related production facilities. Assuring a sufficiently low probability of failure (reaction of a high explosive given an accidental impact) is a Critical-To-Quality (CTQ) component of our weapons and stockpile stewardship operation and cost. Our use of DFSS principles, with quantification and merging of CTQ metrics, provides ways to quantify clear (preliminary) paths forward for both the automotive example and the explosive safety example. The presentation of simple, scalar metrics to quantify the path forward then provides a focal point for qualitative caveats and discussion for inclusion of other metrics besides a single, provocative scalar. In this way, carrying a scalar decision metric along with the DFSS process motivates further discussion and ideas for process improvement from the DFSS into the Six-Sigma phase of the product. We end with an example of how our DFSS-generated scalar metric could be

  7. Assessment of the Log-Euclidean Metric Performance in Diffusion Tensor Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mostafa Charmi

    2010-06-01

    Full Text Available Introduction: Appropriate definition of the distance measure between diffusion tensors has a deep impact on Diffusion Tensor Image (DTI) segmentation results. The geodesic metric is the best distance measure since it yields high-quality segmentation results. However, the important problem with the geodesic metric is the high computational cost of the algorithms based on it. The main goal of this paper is to assess the possible substitution of the geodesic metric with the Log-Euclidean one to reduce the computational cost of a statistical surface evolution algorithm. Materials and Methods: We incorporated the Log-Euclidean metric in the statistical surface evolution algorithm framework. To achieve this goal, the statistics and gradients of diffusion tensor images were defined using the Log-Euclidean metric. Numerical implementation of the segmentation algorithm was performed in the MATLAB software using finite difference techniques. Results: In the statistical surface evolution framework, the Log-Euclidean metric was able to discriminate the torus and helix patterns in synthetic datasets and rat spinal cords in biological phantom datasets from the background better than the Euclidean and J-divergence metrics. In addition, similar results were obtained with the geodesic metric. However, the main advantage of the Log-Euclidean metric over the geodesic metric was the dramatic reduction of the computational cost of the segmentation algorithm, by at least a factor of 70. Discussion and Conclusion: The qualitative and quantitative results have shown that the Log-Euclidean metric is a good substitute for the geodesic metric when using a statistical surface evolution algorithm in DTI segmentation.
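
    As a minimal illustration of the metric itself (not of the surface evolution algorithm), the Log-Euclidean distance between two symmetric positive-definite tensors is simply the Frobenius norm of the difference of their matrix logarithms, which is why it is so much cheaper than the geodesic distance. The example tensors below are hypothetical.

    ```python
    import numpy as np
    from scipy.linalg import logm

    def log_euclidean_distance(T1, T2):
        """Log-Euclidean distance between two SPD diffusion tensors."""
        return np.linalg.norm(logm(T1) - logm(T2), ord='fro')

    T_fiber = np.diag([1.7e-3, 0.3e-3, 0.3e-3])   # anisotropic, fiber-like tensor
    T_iso   = np.diag([0.9e-3, 0.9e-3, 0.9e-3])   # isotropic tensor
    print(log_euclidean_distance(T_fiber, T_iso))
    ```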

  8. Quality metric in matched Laplacian of Gaussian response domain for blind adaptive optics image deconvolution

    Science.gov (United States)

    Guo, Shiping; Zhang, Rongzhi; Yang, Yikang; Xu, Rong; Liu, Changhai; Li, Jisheng

    2016-04-01

    Adaptive optics (AO) in conjunction with subsequent postprocessing techniques has markedly improved the resolution of turbulence-degraded images in ground-based astronomical observation and in the detection and identification of artificial space objects. However, important tasks involved in AO image postprocessing, such as frame selection, stopping iterative deconvolution, and algorithm comparison, commonly need manual intervention and cannot be performed automatically due to a lack of widely agreed-upon image quality metrics. In this work, based on the Laplacian of Gaussian (LoG) local contrast feature detection operator, we propose a LoG domain matching operation to obtain effective and universal image quality statistics. Further, we extract two no-reference quality assessment indices in the matched LoG domain that can be used for a variety of postprocessing tasks. Three typical space object images with distinct structural features are tested to verify the consistency of the proposed metric with perceptual image quality through subjective evaluation.
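
    The LoG operator at the heart of the method is widely available; a minimal sketch of computing a LoG response map (using scipy rather than the authors' implementation, and a synthetic blob image) might look like this:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace

    # Synthetic test image: one bright Gaussian blob on a dark background.
    y, x = np.mgrid[0:64, 0:64]
    img = np.exp(-((x - 32.0)**2 + (y - 32.0)**2) / (2 * 4.0**2))

    # The LoG response is strongly negative at blob centers when sigma matches
    # the blob scale; its statistics can then feed a no-reference quality index.
    response = gaussian_laplace(img, sigma=4.0)
    print(response.min(), np.unravel_index(response.argmin(), response.shape))
    ```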

  9. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)*

    Science.gov (United States)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H.; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W.; Deutsch, Eric W.; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L.; Omenn, Gilbert S.; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L.; Simpson, Richard J.; Slotta, Douglas; Smith, Richard D.; Stein, Stephen E.; Tabb, David L.; Tagle, Danilo; Yates, John R.; Rodriguez, Henry

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the “International Workshop on Proteomic Data Quality Metrics” in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: 1) an evolving list of comprehensive quality metrics and 2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals. PMID:22052993

  10. Detection of image quality metamers based on the metric for unified image quality

    Science.gov (United States)

    Miyata, Kimiyoshi; Tsumura, Norimichi

    2012-01-01

    In this paper, we introduce the concept of image quality metamerism as an expanded version of the metamerism defined in color science. The concept is used to unify different image quality attributes, and is applied to introduce a metric showing the degree of image quality metamerism in the analysis of a cultural property. Our global goal is to build a metric to evaluate the total quality of images acquired by different imaging systems and observed under different viewing conditions. As a basic step toward that goal, the metric in this research consists of color, spectral, and texture information, and is applied to detect image quality metamers in order to investigate the cultural property. The property investigated is the oldest extant version of folding screen paintings depicting the thriving city of Kyoto, designated as a nationally important cultural property in Japan. Gold-colored areas, painted with colorants of higher granularity than the other color areas in the property, are evaluated based on the metric, and the metric is then visualized as a map showing the possibility of an image quality metamer relative to the reference pixel.

  11. Comparison of macroinvertebrate-derived stream quality metrics between snag and riffle habitats

    Science.gov (United States)

    Stepenuck, K.F.; Crunkilton, R.L.; Bozek, Michael A.; Wang, L.

    2008-01-01

    We compared benthic macroinvertebrate assemblage structure at snag and riffle habitats in 43 Wisconsin streams across a range of watershed urbanization using a variety of stream quality metrics. Discriminant analysis indicated that dominant taxa at riffles and snags differed; hydropsychid caddisflies (Hydropsyche betteni and Cheumatopsyche spp.) and elmid beetles (Optioservus spp. and Stenelmis spp.) typified riffles, whereas isopods (Asellus intermedius) and amphipods (Hyalella azteca and Gammarus pseudolimnaeus) predominated in snags. Analysis of covariance indicated that samples from snag and riffle habitats differed significantly in their response to the urbanization gradient for the Hilsenhoff biotic index (BI), Shannon's diversity index, and percent of filterers, shredders, and pollution-intolerant Ephemeroptera, Plecoptera, and Trichoptera (EPT) at each stream site (p ≤ 0.10). These differences suggest that although macroinvertebrate assemblages present in either habitat type are sensitive to detecting the effects of urbanization, metrics derived from different habitats should not be intermixed when assessing stream quality through biomonitoring. This can be a limitation to resource managers who wish to compare water quality among streams where the same habitat type is not available at all stream locations, or where a specific habitat type (i.e., a riffle) is required to determine a metric value (i.e., BI). To account for differences in stream quality at sites lacking riffle habitat, snag-derived metric values can be adjusted based on those obtained from riffles that have been exposed to the same level of urbanization. Comparison of nonlinear regression equations that related stream quality metric values from the two habitat types to percent watershed urbanization indicated that snag habitats had percent EPT values on average 30.2 points lower, a lower diversity index value, and a BI value 0.29 greater than riffles. © 2008 American Water Resources Association.
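
    For readers unfamiliar with the BI referenced above, the Hilsenhoff biotic index is the tolerance-weighted mean of the sample. A minimal sketch, with illustrative rather than official tolerance values:

    ```python
    def hilsenhoff_biotic_index(counts, tolerances):
        """BI = sum(n_i * t_i) / N, where n_i is the count of taxon i and t_i its
        pollution-tolerance value (0 = intolerant, 10 = tolerant)."""
        total = sum(counts.values())
        return sum(n * tolerances[taxon] for taxon, n in counts.items()) / total

    counts     = {'Hydropsyche': 40, 'Optioservus': 25, 'Asellus': 35}    # hypothetical sample
    tolerances = {'Hydropsyche': 5.0, 'Optioservus': 4.0, 'Asellus': 8.0}  # illustrative values
    print(round(hilsenhoff_biotic_index(counts, tolerances), 2))           # higher = more degraded
    ```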

  12. Environmental Quality and Aquatic Invertebrate Metrics Relationships at Patagonian Wetlands Subjected to Livestock Grazing Pressures

    Science.gov (United States)

    2015-01-01

    Livestock grazing can compromise the biotic integrity and health of wetlands, especially in remote areas like Patagonia, which provide habitat for several endemic terrestrial and aquatic species. Understanding the effects of these land use practices on invertebrate communities can help prevent the deterioration of wetlands and provide insights for restoration. In this contribution, we assessed the responses of 36 metrics based on the structural and functional attributes of invertebrates (130 taxa) at 30 Patagonian wetlands that were subject to different levels of livestock grazing intensity. These levels were categorized as low, medium and high based on eight features (livestock stock densities plus seven wetland measurements). Significant changes in environmental features were detected across the gradient of wetlands, mainly related to pH, conductivity, and nutrient values. Regardless of rainfall gradient, symptoms of eutrophication were remarkable at some highly disturbed sites. Seven invertebrate metrics consistently and accurately responded to livestock grazing on wetlands. All of them were negatively related to increased levels of grazing disturbance, with the number of insect families appearing as the most robust measure. A multivariate approach (RDA) revealed that invertebrate metrics were significantly affected by environmental variables related to water quality: in particular, pH, conductivity, dissolved oxygen, nutrient concentrations, and the richness and coverage of aquatic plants. Our results suggest that the seven aforementioned metrics could be used to assess ecological quality in the arid and semi-arid wetlands of Patagonia, helping to ensure the creation of protected areas and their associated ecological services. PMID:26448652

  13. Environmental Quality and Aquatic Invertebrate Metrics Relationships at Patagonian Wetlands Subjected to Livestock Grazing Pressures.

    Directory of Open Access Journals (Sweden)

    Luis Beltrán Epele

    Full Text Available Livestock grazing can compromise the biotic integrity and health of wetlands, especially in remote areas like Patagonia, which provide habitat for several endemic terrestrial and aquatic species. Understanding the effects of these land use practices on invertebrate communities can help prevent the deterioration of wetlands and provide insights for restoration. In this contribution, we assessed the responses of 36 metrics based on the structural and functional attributes of invertebrates (130 taxa) at 30 Patagonian wetlands that were subject to different levels of livestock grazing intensity. These levels were categorized as low, medium and high based on eight features (livestock stock densities plus seven wetland measurements). Significant changes in environmental features were detected across the gradient of wetlands, mainly related to pH, conductivity, and nutrient values. Regardless of rainfall gradient, symptoms of eutrophication were remarkable at some highly disturbed sites. Seven invertebrate metrics consistently and accurately responded to livestock grazing on wetlands. All of them were negatively related to increased levels of grazing disturbance, with the number of insect families appearing as the most robust measure. A multivariate approach (RDA) revealed that invertebrate metrics were significantly affected by environmental variables related to water quality: in particular, pH, conductivity, dissolved oxygen, nutrient concentrations, and the richness and coverage of aquatic plants. Our results suggest that the seven aforementioned metrics could be used to assess ecological quality in the arid and semi-arid wetlands of Patagonia, helping to ensure the creation of protected areas and their associated ecological services.

  14. "Assessment of different bioequivalent metrics in Rifampin bioequivalence study "

    Directory of Open Access Journals (Sweden)

    "Rouini MR

    2002-08-01

    Full Text Available The use of secondary metrics has become of special interest in bioequivalence studies. The applicability of the partial area method, truncated AUC, and Cmax/AUC has been argued by many authors. This study aims to evaluate the possible superiority of these metrics to the primary metrics (i.e., AUCinf, Cmax, and Tmax). The suitability of truncated AUC for assessment of absorption extent, as well as Cmax/AUC and partial AUC for the evaluation of absorption rate in bioequivalence determination, was investigated following administration of the same product as test and reference to 7 healthy volunteers. Among the pharmacokinetic parameters obtained, Cmax/AUCinf was a better indicator of absorption rate, and AUCinf was more sensitive than truncated AUC in the evaluation of absorption extent.

  15. Operator-based metric for nuclear operations automation assessment

    Energy Technology Data Exchange (ETDEWEB)

    Zacharias, G.L.; Miao, A.X.; Kalkan, A. [Charles River Analytics Inc., Cambridge, MA (United States)] [and others]

    1995-04-01

    Continuing advances in real-time computational capabilities will support enhanced levels of smart automation and AI-based decision-aiding systems in the nuclear power plant (NPP) control room of the future. To support development of these aids, we describe in this paper a research tool, and more specifically, a quantitative metric, to assess the impact of proposed automation/aiding concepts in a manner that can account for a number of interlinked factors in the control room environment. In particular, we describe a cognitive operator/plant model that serves as a framework for integrating the operator's information-processing capabilities with his procedural knowledge, to provide insight as to how situations are assessed by the operator, decisions made, procedures executed, and communications conducted. Our focus is on the situation assessment (SA) behavior of the operator, the development of a quantitative metric reflecting overall operator awareness, and the use of this metric in evaluating automation/aiding options. We describe the results of a model-based simulation of a selected emergency scenario, and metric-based evaluation of a range of contemplated NPP control room automation/aiding options. The results demonstrate the feasibility of model-based analysis of contemplated control room enhancements, and highlight the need for empirical validation.

  16. Sustainability metrics: life cycle assessment and green design in polymers.

    Science.gov (United States)

    Tabone, Michaelangelo D; Cregg, James J; Beckman, Eric J; Landis, Amy E

    2010-11-01

    This study evaluates the efficacy of green design principles such as the "12 Principles of Green Chemistry" and the "12 Principles of Green Engineering" with respect to environmental impacts found using life cycle assessment (LCA) methodology. A case study of 12 polymers is presented, seven derived from petroleum, four derived from biological sources, and one derived from both. The environmental impacts of each polymer's production are assessed using LCA methodology standardized by the International Organization for Standardization (ISO). Each polymer is also assessed for its adherence to green design principles using metrics generated specifically for this paper. Metrics include atom economy, mass from renewable sources, biodegradability, percent recycled, distance of furthest feedstock, price, life cycle health hazards and life cycle energy use. A decision matrix is used to generate single value metrics for each polymer evaluating either adherence to green design principles or life-cycle environmental impacts. Results from this study show a qualified positive correlation between adherence to green design principles and a reduction of the environmental impacts of production. The qualification results from a disparity between biopolymers and petroleum polymers. While biopolymers rank highly in terms of green design, they exhibit relatively large environmental impacts from production. Biopolymers rank 1, 2, 3, and 4 based on green design metrics; however they rank in the middle of the LCA rankings. Polyolefins rank 1, 2, and 3 in the LCA rankings, whereas complex polymers, such as PET, PVC, and PC, place at the bottom of both ranking systems.

  17. Content based no-reference image quality metrics

    OpenAIRE

    Marini, A.C.

    2012-01-01

    Images are playing a more and more important role in sharing, expressing, mining and exchanging information in our daily lives. Now we can all easily capture and share images anywhere and anytime. Since digital images are subject to a wide variety of distortions during acquisition, processing, compression, storage, transmission and reproduction, it becomes necessary to assess image quality. In this thesis, starting from an organized overview of available Image Quality Assessment methods, ...

  18. Enhancing the quality metric of protein microarray image

    Institute of Scientific and Technical Information of China (English)

    王立强; 倪旭翔; 陆祖康; 郑旭峰; 李映笙

    2004-01-01

    The novel method of improving the quality metric of protein microarray images presented in this paper reduces impulse noise by using an adaptive median filter that employs a switching scheme based on local statistical characteristics, and achieves impulse detection by using the difference between the standard deviation of the pixels within the filter window and the current pixel of concern. It also uses a top-hat filter to correct the background variation. In order to decrease time consumption, the top-hat filter core has a cross structure. The experimental results showed that, for a protein microarray image contaminated by impulse noise and with slow background variation, the new method can significantly increase the signal-to-noise ratio, correct the trends in the background, and enhance the flatness of the background and the consistency of the signal intensity.
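
    A simplified, non-adaptive sketch of the two-stage pipeline (a plain median filter with a cross footprint standing in for the authors' switching adaptive median filter, followed by a white top-hat for background correction) could read:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter, white_tophat

    rng = np.random.default_rng(1)
    img = rng.poisson(50, size=(128, 128)).astype(float)   # stand-in for a microarray image

    # Cross-shaped footprint, echoing the paper's cross-structured filter core.
    cross = np.array([[0, 1, 0],
                      [1, 1, 1],
                      [0, 1, 0]], dtype=bool)

    denoised  = median_filter(img, footprint=cross)    # impulse-noise suppression
    flattened = white_tophat(denoised, size=(15, 15))  # removes slow background variation
    ```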

  19. Software Metrics to Estimate Software Quality using Software Component Reusability

    Directory of Open Access Journals (Sweden)

    Prakriti Trivedi

    2012-03-01

    Full Text Available Today most applications are developed using existing libraries, code, open sources, etc. A piece of code accessed by a program is represented as a software component; Java beans and .NET ActiveX controls are examples of such components. These components are ready-to-use code or controls that accelerate code development. A component-based software system builds on the concept of software reusability. When using these components, the main question that arises is whether their use is beneficial or not. In this work we attempt to answer that question by presenting a set of software metrics that check the interconnection between a software component and the application; the strength of this relation indicates the software quality after the component is used. The overall metric returns the final result in terms of how tightly the component is bound to the application.

  20. Metrics and the effective computational scientist: process, quality and communication.

    Science.gov (United States)

    Baldwin, Eric T

    2012-09-01

    Recent treatments of computational knowledge worker productivity have focused upon the value the discipline brings to drug discovery using positive anecdotes. While this big picture approach provides important validation of the contributions of these knowledge workers, the impact accounts do not provide the granular detail that can help individuals and teams perform better. I suggest balancing the impact-focus with quantitative measures that can inform the development of scientists. Measuring the quality of work, analyzing and improving processes, and the critical evaluation of communication can provide immediate performance feedback. The introduction of quantitative measures can complement the longer term reporting of impacts on drug discovery. These metric data can document effectiveness trends and can provide a stronger foundation for the impact dialogue.

  1. Macroinvertebrate and diatom metrics as indicators of water-quality conditions in connected depression wetlands in the Mississippi Alluvial Plain

    Science.gov (United States)

    Justus, Billy; Burge, David; Cobb, Jennifer; Marsico, Travis; Bouldin, Jennifer

    2016-01-01

    Methods for assessing wetland conditions must be established so wetlands can be monitored and ecological services can be protected. We evaluated biological indices compiled from macroinvertebrate and diatom metrics developed primarily for streams to assess their ability to indicate water quality in connected depression wetlands. We collected water-quality and biological samples at 24 connected depressions dominated by water tupelo (Nyssa aquatica) or bald cypress (Taxodium distichum) (water depths = 0.5–1.0 m). Water quality of the least-disturbed connected depressions was characteristic of swamps in the southeastern USA, which tend to have low specific conductance, nutrient concentrations, and pH. We compared 162 macroinvertebrate metrics and 123 diatom metrics with a water-quality disturbance gradient. For most metrics, we evaluated richness, % richness, abundance, and % relative abundance values. Three of the 4 macroinvertebrate metrics that were most beneficial for identifying disturbance in connected depressions decreased along the disturbance gradient even though they normally increase relative to stream disturbance. The negative relationship to disturbance of some taxa (e.g., dipterans, mollusks, and crustaceans) that are considered tolerant in streams suggests that the tolerance scale for some macroinvertebrates can differ markedly between streams and wetlands. Three of the 4 metrics chosen for the diatom index reflected published tolerances or fit the usual perception of metric response to disturbance. Both biological indices may be useful in connected depressions elsewhere in the Mississippi Alluvial Plain Ecoregion and could have application in other wetland types. Given the paradoxical relationship of some macroinvertebrate metrics to dissolved O2 (DO), we suggest that the diatom metrics may be easier to interpret and defend for wetlands with low DO concentrations in least-disturbed conditions.

  2. Sigma metrics in clinical chemistry laboratory – A guide to quality control

    Directory of Open Access Journals (Sweden)

    Usha S. Adiga

    2015-10-01

    Full Text Available Background: Six Sigma is a quality measurement and improvement program used in industry. Sigma methodology can be applied wherever an outcome of a process is to be measured. A poor outcome is counted as an error or defect, quantified as defects per million (DPM). Six Sigma provides a more quantitative framework for evaluating process performance, with evidence for process improvement, and describes how many sigma fit within the tolerance limits. Sigma metrics can be used effectively in laboratory services. The present study was undertaken to evaluate the quality of the analytical performance of a clinical chemistry laboratory by calculating sigma metrics. Methodology: The study was conducted in the clinical biochemistry laboratory of Karwar Institute of Medical Sciences, Karwar. Sigma metrics of 15 parameters measured with an automated chemistry analyzer (Transasia XL 640) were analyzed. The analytes assessed were glucose, urea, creatinine, uric acid, total bilirubin (BT), direct bilirubin (BD), total protein, albumin, SGOT, SGPT, ALP, total cholesterol, triglycerides, HDL, and calcium. Results: Sigma values were <3 for urea, ALT, BD, BT, calcium, and creatinine (L1) and for urea, AST, and BD (L2). Sigma was between 3 and 6 for glucose, AST, cholesterol, uric acid, and total protein (L1) and for ALT, cholesterol, BT, calcium, creatinine, and glucose (L2). Sigma was more than 6 for triglycerides, ALP, HDL, and albumin (L1) and for triglycerides, uric acid, ALP, HDL, albumin, and total protein (L2). Conclusion: Sigma metrics help to assess analytical methodologies and augment laboratory performance. They act as a guide for planning quality control strategy and can be a self-assessment tool regarding the functioning of the clinical laboratory.
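
    The sigma metric behind these numbers has a one-line formula, sigma = (TEa - |bias|) / CV, with all quantities expressed as percentages. A minimal sketch with hypothetical glucose QC figures:

    ```python
    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Sigma = (TEa - |bias|) / CV, all expressed in percent."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Hypothetical glucose QC data: allowable total error 10%, bias 1.5%, CV 2%.
    print(sigma_metric(10.0, 1.5, 2.0))   # 4.25 -> falls in the 3-6 band above
    ```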

  3. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale

    CERN Document Server

    Emmons, Scott; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie network clustering. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms -- Blondel, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 o...
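
    Both families of measures named above are available off the shelf. A small sketch comparing a stand-alone quality metric (modularity) with two information recovery metrics (adjusted Rand score and NMI) on toy inputs, assuming networkx and scikit-learn:

    ```python
    import networkx as nx
    from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

    # Information recovery metrics: compare found labels against ground truth.
    truth = [0, 0, 0, 1, 1, 1, 2, 2, 2]
    found = [0, 0, 1, 1, 1, 1, 2, 2, 0]
    print(adjusted_rand_score(truth, found))
    print(normalized_mutual_info_score(truth, found))

    # Stand-alone quality metric: modularity of a clustering of a real graph.
    G = nx.karate_club_graph()
    communities = nx.algorithms.community.greedy_modularity_communities(G)
    print(nx.algorithms.community.modularity(G, communities))
    ```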

  4. Urban Landscape Metrics for Climate and Sustainability Assessments

    Science.gov (United States)

    Cochran, F. V.; Brunsell, N. A.

    2014-12-01

    To test metrics for rapid identification of urban classes and sustainable urban forms, we examine the configuration of urban landscapes using satellite remote sensing data. We adopt principles from landscape ecology and urban planning to evaluate urban heterogeneity and design themes that may constitute more sustainable urban forms, including compactness (connectivity), density, mixed land uses, diversity, and greening. Using 2-D wavelet and multi-resolution analysis, landscape metrics, and satellite-derived indices of vegetation fraction and impervious surface, the spatial variability of Landsat and MODIS data from the metropolitan areas of Manaus and São Paulo, Brazil is investigated. Landscape metrics for density, connectivity, and diversity, like the Shannon Diversity Index, are used to assess the diversity of urban buildings, geographic extent, and connectedness. Rapid detection of urban classes for low density, medium density, high density, and tall building districts at the 1-km scale is needed for use in climate models. If the complexity of finer-scale urban characteristics can be related to the neighborhood scale, both climate and sustainability assessments may be more attainable across urban areas.
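
    Of the metrics listed, the Shannon Diversity Index is the simplest to reproduce. A sketch over a hypothetical 1-km cell classified into four urban density classes:

    ```python
    import numpy as np

    def shannon_diversity(class_map):
        """H = -sum(p_i * ln p_i) over the proportions of each class in the cell."""
        _, counts = np.unique(class_map, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log(p))

    rng = np.random.default_rng(2)
    cell = rng.choice([0, 1, 2, 3], size=(30, 30), p=[0.5, 0.25, 0.15, 0.1])
    print(shannon_diversity(cell))   # 0 = single class; ln(4) = maximal mixing
    ```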

  5. Applicability of Existing Objective Metrics of Perceptual Quality for Adaptive Video Streaming

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Krasula, Lukás; Shahid, Muhammad;

    2016-01-01

    Objective video quality metrics are designed to estimate the quality of experience of the end user. However, these objective metrics are usually validated with video streams degraded under common distortion types. In the presented work, we analyze the performance of published and known full-refer...

  6. Design and Implementation of Performance Metrics for Evaluation of Assessments Data

    CERN Document Server

    Ahmed, Irfan

    2015-01-01

    The objective of this paper is to design performance metrics and respective formulas to quantitatively evaluate the achievement of set objectives and expected outcomes at both the course and program levels. Evaluation is defined as one or more processes for interpreting the data acquired through the assessment processes in order to determine how well the set objectives and outcomes are being attained. Even though assessment processes for accreditation are well documented, the existence of an evaluation process is simply assumed. This paper focuses on the evaluation process to provide insights and techniques for data interpretation. It gives a complete evaluation process, from data collection through various assessment methods and performance metrics to the presentation of results in the form of tables and graphs. The authors hope that the articulated description of the evaluation formulas will help convergence toward a high quality standard in the evaluation process.

  7. Economic Benefits: Metrics and Methods for Landscape Performance Assessment

    Directory of Open Access Journals (Sweden)

    Zhen Wang

    2016-04-01

    Full Text Available This paper introduces an expanding research frontier in the landscape architecture discipline, landscape performance research, which embraces the scientific dimension of landscape architecture through evidence-based designs that are anchored in quantitative performance assessment. Specifically, this paper summarizes metrics and methods for determining landscape-derived economic benefits that have been utilized in the Landscape Performance Series (LPS) initiated by the Landscape Architecture Foundation. This paper identifies 24 metrics and 32 associated methods for the assessment of economic benefits found in 82 published case studies. Common issues arising through research in quantifying economic benefits for the LPS are discussed and the various approaches taken by researchers are clarified. The paper also provides an analysis of three case studies from the LPS that are representative of common research methods used to quantify economic benefits. The paper suggests that high(er) levels of sustainability in the built environment require the integration of economic benefits into landscape performance assessment portfolios in order to forecast project success and reduce uncertainties. Therefore, evidence-based design approaches increase the scientific rigor of landscape architecture education and research, and elevate the status of the profession.

  8. Metrics-based assessments of research: incentives for 'institutional plagiarism'?

    Science.gov (United States)

    Berry, Colin

    2013-06-01

    The issue of plagiarism (claiming credit for work that is not one's own) rightly continues to cause concern in the academic community. An analysis is presented that shows the effects that may arise from metrics-based assessments of research, when credit for an author's outputs (chiefly publications) is given to an institution that did not support the research but which subsequently employs the author. The incentives for what is termed here "institutional plagiarism" are demonstrated with reference to the UK Research Assessment Exercise, in which submitting units of assessment are shown in some instances to derive around twice the credit for papers produced elsewhere by new recruits, compared to papers produced 'in-house'.

  9. Recommendations for mass spectrometry data quality metrics for open access data (corollary to the Amsterdam principles)

    Energy Technology Data Exchange (ETDEWEB)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark S.; Bian, Xiaopeng; Borchers, Christoph H.; Bradshaw, Ralph A.; Brusniak, Mi-Youn; Chan, Daniel W.; Deutsch, Eric W.; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William S.; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L.; Omenn, Gilbert S.; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L.; Simpson, Richard J.; Slotta, Douglas; Smith, Richard D.; Stein, Stephen E.; Tabb, David L.; Tagle, Danilo; Yates, John R.; Rodriguez, Henry

    2011-12-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the U.S. National Cancer Institute (NCI) convened the 'International Workshop on Proteomic Data Quality Metrics' in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: (i) an evolving list of comprehensive quality metrics and (ii) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in Proteomics, Proteomics Clinical Applications, Journal of Proteome Research, and Molecular and Cellular Proteomics, as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals.

  10. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become a more and more important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from the standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made through the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of a previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
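
    The combined score is a calibrated aggregation of individual measurements. As a purely illustrative sketch (the weights and normalization below are hypothetical, not the paper's calibration), a weighted sum over metrics pre-normalized to [0, 1] captures the idea:

    ```python
    def benchmark_score(metrics, weights):
        """Weighted sum of metrics already normalized to [0, 1], 1 being best."""
        return sum(weights[name] * metrics[name] for name in weights)

    phone = {'image_quality': 0.82, 'shooting_speed': 0.64, 'visual_noise': 0.71}
    w     = {'image_quality': 0.50, 'shooting_speed': 0.30, 'visual_noise': 0.20}
    print(benchmark_score(phone, w))   # single benchmarking score for ranking
    ```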

  11. Evaluation of cassette-based digital radiography detectors using standardized image quality metrics: AAPM TG-150 Draft Image Detector Tests.

    Science.gov (United States)

    Li, Guang; Greene, Travis C; Nishino, Thomas K; Willis, Charles E

    2016-09-08

    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that a fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG-150 tests can be
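
    To make the ROI dependence concrete, here is one plausible ROI-based definition of global signal nonuniformity; the exact TG-150 formulas differ in detail, and the image and ROI size below are hypothetical:

    ```python
    import numpy as np

    def signal_nonuniformity(img, roi=64):
        """(max - min) / mean of ROI means over non-overlapping roi x roi blocks."""
        h, w = img.shape
        means = np.array([img[r:r + roi, c:c + roi].mean()
                          for r in range(0, h - roi + 1, roi)
                          for c in range(0, w - roi + 1, roi)])
        return (means.max() - means.min()) / means.mean()

    rng = np.random.default_rng(3)
    flat = rng.normal(1000, 30, size=(512, 512))   # stand-in flat-field exposure
    print(signal_nonuniformity(flat, roi=64))      # grows as roi shrinks: ROI size matters
    ```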

  12. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2012-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: 1) an evolving list of comprehensive quality metrics and 2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics...

  13. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: 1) an evolving list of comprehensive quality metrics and 2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics...

  14. Research on Layered Quality Metrics Model for Java Programme

    Institute of Scientific and Technical Information of China (English)

    黄璜; 周欣; 孙家骕

    2003-01-01

    A metrics model is in fact a cluster of criteria for assessing software, which can show the characteristics of different software systems or modules and thus serve different user demands. Research on software metrics tries to give characteristic evaluations of software components during component extraction, and thereby supports users in selecting reusable components of high quality. Java has become one of the main languages today. Considering the characteristics of Java and building on research on general metrics models, our model, the Quality Metrics Model for Java, was developed. Following the principle of "Factor-Criterion-Metrics", detailed descriptions of the factors, criteria, and metrics of our model are given. The metrics model offers a structured way of thinking; through this model, we hope to normalize users' points of view. In JavaSQMM, four activities organize software quality evaluation: understanding, function implementation, maintenance, and reuse; four corresponding quality factors follow, each composed of criteria and metrics. When designing our Java metrics model, the earlier development of the Object-Oriented Metrics Model Tool for Java (OOMTJava) provided support for carrying out the metrics process semi-automatically.

  15. Large-scale seismic waveform quality metric calculation using Hadoop

    Science.gov (United States)

    Magana-Zook, S.; Gaylord, J. M.; Knapp, D. R.; Dodge, D. A.; Ruppert, S. D.

    2016-09-01

    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data of which 5.1 TB of data were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of 0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. These experiments were conducted multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely
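
    The waveform metrics themselves parallelize naturally over files. A minimal PySpark sketch of the pattern (the file paths, the binary sample format, and the RMS metric here are placeholders, not the study's actual pipeline):

    ```python
    import numpy as np
    from pyspark import SparkContext

    sc = SparkContext(appName="WaveformQualityMetrics")

    def rms_amplitude(path):
        # Placeholder reader: raw 32-bit floats; real miniSEED data would need
        # a proper parser (e.g., ObsPy) available on every executor.
        samples = np.fromfile(path, dtype=np.float32)
        return path, float(np.sqrt(np.mean(samples ** 2)))

    paths = ["waveforms/sta1.f32", "waveforms/sta2.f32"]   # placeholder file list
    metrics = sc.parallelize(paths).map(rms_amplitude).collect()
    print(metrics)
    ```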

  16. Metrical Segmentation in Dutch: Vowel Quality or Stress?

    Science.gov (United States)

    Quene, Hugo; Koster, Mariette L.

    1998-01-01

    Examines metrical segmentation strategy in Dutch. The first experiment shows that stress strongly affects Dutch listeners' ability and speed in spotting Dutch monosyllabic words in disyllabic nonwords. The second experiment finds the same stress effect when only the target words are presented without a subsequent syllable triggering segmentation.…

  17. Extracting Patterns from Educational Traces via Clustering and Associated Quality Metrics

    NARCIS (Netherlands)

    Mihaescu, Marian; Tanasie, Alexandru; Dascalu, Mihai; Trausan-Matu, Stefan

    2016-01-01

    Clustering algorithms, pattern mining techniques and associated quality metrics emerged as reliable methods for modeling learners’ performance, comprehension and interaction in given educational scenarios. The specificity of available data such as missing values, extreme values or outliers, creates

  18. Quality Metrics and Reliability Analysis of Laser Communication System

    Directory of Open Access Journals (Sweden)

    A. Arockia Bazil Raj

    2016-03-01

    Full Text Available Beam wandering is the main cause of major power loss in laser communication. To analyse this effect in our environment, a 155 Mbps data transmission experimental setup is built with the necessary optoelectronic components for a link range of 0.5 km at an altitude of 15.25 m. A neuro-controller is developed inside the FPGA and used to stabilise the received beam at the centre of the detector plane. The Q-factor and bit error rate variation profiles are calculated using the signal statistics obtained from the eye-diagram. The performance improvements in the laser communication system due to the incorporation of beam wandering mitigation control are investigated and discussed in terms of various key communication quality assessment parameters. Defence Science Journal, Vol. 66, No. 2, March 2016, pp. 175-185, DOI: http://dx.doi.org/10.14429/dsj.66.9707
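
    The Q-factor and BER figures follow directly from the eye-diagram statistics of the two logic levels under the usual Gaussian-noise assumption; the level statistics below are hypothetical:

    ```python
    from math import erfc, sqrt

    def q_factor(mu1, mu0, sigma1, sigma0):
        """Q from eye-diagram statistics: (mean_1 - mean_0) / (std_1 + std_0)."""
        return (mu1 - mu0) / (sigma1 + sigma0)

    def ber_from_q(q):
        """Gaussian-noise estimate: BER = 0.5 * erfc(Q / sqrt(2))."""
        return 0.5 * erfc(q / sqrt(2))

    q = q_factor(mu1=1.0, mu0=0.1, sigma1=0.08, sigma0=0.05)
    print(q, ber_from_q(q))   # Q of about 6.9 gives a BER on the order of 1e-12
    ```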

  19. Better Metrics to Automatically Predict the Quality of a Text Summary

    Directory of Open Access Journals (Sweden)

    Judith D. Schlesinger

    2012-09-01

    Full Text Available In this paper we demonstrate a family of metrics for estimating the quality of a text summary relative to one or more human-generated summaries. The improved metrics are based on features automatically computed from the summaries to measure content and linguistic quality. The features are combined using one of three methods—robust regression, non-negative least squares, or canonical correlation, an eigenvalue method. The new metrics significantly outperform the previous standard for automatic text summarization evaluation, ROUGE.
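
    Of the three combination methods named, non-negative least squares is the easiest to sketch; the feature matrix and human scores below are made up for illustration:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Rows = summaries, columns = automatically computed features;
    # y = human-assigned quality scores the combination should predict.
    X = np.array([[0.4, 0.7, 0.2],
                  [0.8, 0.5, 0.6],
                  [0.3, 0.9, 0.1],
                  [0.6, 0.6, 0.5]])
    y = np.array([0.45, 0.70, 0.40, 0.60])

    weights, residual = nnls(X, y)   # non-negative weight per feature
    print(weights)
    print(X @ weights)               # predicted quality for each summary
    ```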

  20. Quality metrics in high-dimensional data visualization: an overview and systematization.

    Science.gov (United States)

    Bertini, Enrico; Tatu, Andrada; Keim, Daniel

    2011-12-01

    In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research.

  1. The use of Software Quality Metrics in Software Maintenance

    OpenAIRE

    Kafura, Dennis G.; Reddy, Geereddy R.

    1985-01-01

    This paper reports on a modest study which relates seven different software complexity metrics to the experience of maintenance activities performed on a medium-size software system. Three different versions of the system, which evolved over a period of three years, were analyzed in this study. A major revision of the system, while still in its design phase, was also analyzed. The results of this study indicate: (1) that the growth in system complexity as determined by the software

  2. Pragmatic guidelines and quality metrics in business process modeling: a case study

    Directory of Open Access Journals (Sweden)

    Isel Moreno-Montes-de-Oca

    2014-04-01

    Full Text Available Business process modeling is one of the first steps towards achieving organizational goals. This is why business process modeling quality is an essential aspect of the development and technical support of any company. This work focuses on the quality of business process models at a conceptual level (design and evaluation). In the literature there are works that propose practical guidelines for modeling, while others focus on quality metrics that allow the evaluation of the models. In this paper we use practical guidelines during the modeling phase of a business process for postgraduate studies. We applied a set of quality metrics and compared the results with those obtained from a similar model that did not use guidelines. The results provide support for the use of guidelines as a way to improve business process modeling quality, and for the practical utility of quality metrics in their evaluation.

  3. Metrics for Assessment of Smart Grid Data Integrity Attacks

    Energy Technology Data Exchange (ETDEWEB)

    Annarita Giani; Miles McQueen; Russell Bent; Kameshwar Poolla; Mark Hinrichs

    2012-07-01

    There is an emerging consensus that the nation’s electricity grid is vulnerable to cyber attacks. This vulnerability arises from the increasing reliance on remote measurements, transmitted over legacy data networks to system operators who make critical decisions based on available data. Data integrity attacks are a class of cyber attacks that involve a compromise of information that is processed by the grid operator. This information can include meter readings of injected power at remote generators, power flows on transmission lines, and relay states. These data integrity attacks have consequences only when the system operator responds to compromised data by redispatching generation under normal or contingency protocols. These consequences include (a) financial losses from sub-optimal economic dispatch to service loads, (b) robustness/resiliency losses from placing the grid at operating points that are at greater risk from contingencies, and (c) systemic losses resulting from cascading failures induced by poor operational choices. This paper is focused on understanding the connections between grid operational procedures and cyber attacks. We first offer two examples to illustrate how data integrity attacks can cause economic and physical damage by misleading operators into taking inappropriate decisions. We then focus on unobservable data integrity attacks involving power meter data. These are coordinated attacks where the compromised data are consistent with the physics of power flow, and are therefore passed by any bad data detection algorithm. We develop metrics to assess the economic impact of these attacks under re-dispatch decisions using optimal power flow methods. These metrics can be used to prioritize the adoption of appropriate countermeasures including PMU placement, encryption, hardware upgrades, and advanced attack detection algorithms.
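
    The unobservability property is easy to demonstrate in a linearized (DC) state-estimation model: any attack vector lying in the column space of the measurement matrix H leaves the bad-data residual untouched. The matrices below are random stand-ins, not a real grid model:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    H = rng.standard_normal((8, 3))            # measurement matrix (z = Hx + e)
    x = rng.standard_normal(3)                 # true system state
    z = H @ x + 0.01 * rng.standard_normal(8)  # noisy measurements

    def residual_norm(z, H):
        x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)  # least-squares estimate
        return np.linalg.norm(z - H @ x_hat)           # bad-data test statistic

    c = np.array([0.5, -1.0, 2.0])   # state shift the attacker wants to induce
    z_attacked = z + H @ c           # "unobservable" attack: a = Hc

    print(residual_norm(z, H))           # about the noise floor
    print(residual_norm(z_attacked, H))  # identical: residual-based detection fails
    ```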

  4. SVD-based quality metric for image and video using machine learning.

    Science.gov (United States)

    Narwaria, Manish; Lin, Weisi

    2012-04-01

    We study the use of machine learning for visual quality evaluation with comprehensive singular value decomposition (SVD)-based visual features. In this paper, the two-stage process and the relevant work in the existing visual quality metrics are first introduced followed by an in-depth analysis of SVD for visual quality assessment. Singular values and vectors form the selected features for visual quality assessment. Machine learning is then used for the feature pooling process and demonstrated to be effective. This is to address the limitations of the existing pooling techniques, like simple summation, averaging, Minkowski summation, etc., which tend to be ad hoc. We advocate machine learning for feature pooling because it is more systematic and data driven. The experiments show that the proposed method outperforms the eight existing relevant schemes. Extensive analysis and cross validation are performed with ten publicly available databases (eight for images with a total of 4042 test images and two for video with a total of 228 videos). We use all publicly accessible software and databases in this study, as well as making our own software public, to facilitate comparison in future research.
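
    As a minimal illustration of the feature extraction step (not of the learned pooling), the per-block difference of singular-value spectra is a typical SVD feature; the blocks below are synthetic:

    ```python
    import numpy as np

    def svd_feature(ref_block, dist_block):
        """Absolute difference of singular-value spectra: one feature vector
        per block, which a learned pooling stage would then combine."""
        s_ref = np.linalg.svd(ref_block, compute_uv=False)
        s_dis = np.linalg.svd(dist_block, compute_uv=False)
        return np.abs(s_ref - s_dis)

    rng = np.random.default_rng(5)
    ref  = rng.random((8, 8))                        # reference image block
    dist = ref + 0.05 * rng.standard_normal((8, 8))  # distorted version
    print(svd_feature(ref, dist))
    ```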

  5. Design of video quality metrics with multi-way data analysis: a data-driven approach

    CERN Document Server

    Keimel, Christian

    2016-01-01

    This book proposes a data-driven methodology using multi-way data analysis for the design of video-quality metrics. It also enables video-quality metrics to be created using arbitrary features. This data-driven design approach not only requires no detailed knowledge of the human visual system, but also allows a proper consideration of the temporal nature of video using a three-way prediction model, corresponding to the three-way structure of video. Using two simple example metrics, the author demonstrates not only that this purely data-driven approach outperforms state-of-the-art video-quality metrics, which are often optimized for specific properties of the human visual system, but also that multi-way data analysis methods outperform the combination of two-way data analysis methods and temporal pooling.

  6. SU-E-T-776: Use of Quality Metrics for a New Hypo-Fractionated Pre-Surgical Mesothelioma Protocol

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, S; Mehta, V [Swedish Cancer Institute, Seattle, WA (United States)

    2015-06-15

    Purpose: The “SMART” (Surgery for Mesothelioma After Radiation Therapy) approach involves hypo-fractionated radiotherapy of the lung pleura to 25Gy over 5 days followed by surgical resection within 7 days. Early clinical results suggest that this approach is very promising, but also logistically challenging due to the multidisciplinary involvement. Due to the compressed schedule, high dose, and shortened planning time, the delivery of the planned doses was monitored for safety with quality metric software. Methods: Hypo-fractionated IMRT treatment plans were developed for all patients and exported to Quality Reports™ software. Plan quality metrics or PQMs™ were created to calculate an objective scoring function for each plan. This allows for an objective assessment of the quality of the plan and a benchmark for plan improvement for subsequent patients. The priorities of various components were incorporated based on similar hypo-fractionated protocols such as lung SBRT treatments. Results: Five patients have been treated at our institution using this approach. The plans were developed, QA performed, and ready within 5 days of simulation. Plan quality metrics utilized in scoring included doses to OAR and target coverage. All patients tolerated treatment well and proceeded to surgery as scheduled. Reported toxicity included grade 1 nausea (n=1), grade 1 esophagitis (n=1), and grade 2 fatigue (n=3). One patient had recurrent fluid accumulation following surgery. No patients experienced any pulmonary toxicity prior to surgery. Conclusion: An accelerated course of pre-operative high dose radiation for mesothelioma is an innovative and promising new protocol. Without historical data, one must proceed cautiously and monitor the data carefully. The development of quality metrics and scoring functions for these treatments allows us to benchmark our plans and monitor improvement. If subsequent toxicities occur, these will be easy to investigate and incorporate into the
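
    A plan quality metric of this kind is typically a weighted scoring function over planned dose statistics. The sketch below is hypothetical: the metric names, thresholds, and weights are invented for illustration and are not the protocol's actual values.

        # Hypothetical plan-quality scoring in the spirit of PQMs: each metric maps
        # a dose statistic to a partial score; the plan score is their sum.
        def linear_score(value, ideal, unacceptable, weight):
            """Full credit at `ideal`, zero at `unacceptable`, linear in between."""
            lower_is_better = ideal < unacceptable
            if (lower_is_better and value <= ideal) or (not lower_is_better and value >= ideal):
                return weight
            if (lower_is_better and value >= unacceptable) or (not lower_is_better and value <= unacceptable):
                return 0.0
            return weight * abs(unacceptable - value) / abs(unacceptable - ideal)

        plan = {"lung_V20_pct": 28.0, "ptv_V95_pct": 97.5, "cord_Dmax_Gy": 18.0}
        metrics = [                                # (statistic, ideal, unacceptable, weight)
            ("lung_V20_pct", 25.0, 35.0, 30.0),    # OAR dose: lower is better
            ("ptv_V95_pct", 99.0, 90.0, 50.0),     # target coverage: higher is better
            ("cord_Dmax_Gy", 15.0, 22.0, 20.0),
        ]
        total = sum(linear_score(plan[k], ideal, unacc, w) for k, ideal, unacc, w in metrics)
        print(f"plan score: {total:.1f} / {sum(w for *_, w in metrics):.1f}")

    Benchmarking in such a scheme amounts to tracking the total score across successive patients and investigating any metric whose partial score drops.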

  7. Study on the quality evaluation metrics for compressed spaceborne hyperspectral data

    Institute of Scientific and Technical Information of China (English)

    LI; Xiaohui; ZHANG; Jing; LI; Chuanrong; LIU; Yi; LI; Ziyang; ZHU; Jiajia; ZENG; Xiangzhao

    2015-01-01

    Based on the raw data of spaceborne dispersive and interferometric imaging spectrometers, a set of quality evaluation metrics for compressed hyperspectral data is initially established in this paper. These quality evaluation metrics, which consist of four aspects including compression statistical distortion, sensor performance evaluation, data application performance, and image quality, are suited to the comprehensive and systematic analysis of the impact of lossy compression on spaceborne hyperspectral remote sensing data quality. Furthermore, the evaluation results would be helpful for the selection and optimization of satellite data compression schemes.
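
    Two of the compression statistical distortion measures commonly applied to hyperspectral cubes, band-averaged PSNR and the mean spectral angle, can be sketched as follows; this is illustrative only, since the paper's metric set also covers sensor performance and application-level effects:

        import numpy as np

        def psnr(ref, rec):
            """Peak signal-to-noise ratio over the whole cube (dB)."""
            mse = np.mean((ref - rec) ** 2)
            return 10 * np.log10(ref.max() ** 2 / mse)

        def mean_spectral_angle(ref, rec):
            """Mean angle (radians) between per-pixel spectra: spectral fidelity."""
            r = ref.reshape(ref.shape[0], -1)      # bands x pixels
            d = rec.reshape(rec.shape[0], -1)
            cos = (r * d).sum(0) / (np.linalg.norm(r, axis=0) * np.linalg.norm(d, axis=0))
            return np.arccos(np.clip(cos, -1.0, 1.0)).mean()

        cube = np.random.rand(50, 32, 32)                   # stand-in for raw spectrometer data
        recon = cube + 0.01 * np.random.randn(50, 32, 32)   # stand-in for decompressed data
        print(psnr(cube, recon), mean_spectral_angle(cube, recon))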

  8. Development of a noise metric for assessment of exposure risk to complex noises.

    Science.gov (United States)

    Zhu, Xiangdong; Kim, Jay H; Song, Won Joon; Murphy, William J; Song, Seongho

    2009-08-01

    Many noise guidelines currently use the A-weighted equivalent sound pressure level L(Aeq) as the noise metric and the equal energy hypothesis to assess the risk of occupational noises. Because of the time-averaging effect involved in the procedure, the current guidelines may significantly underestimate the risk associated with complex noises. This study develops and evaluates several new noise metrics for more accurate assessment of exposure risks to complex and impulsive noises. The analytic wavelet transform was used to obtain time-frequency characteristics of the noise. Six basic, unique metric forms that reflect the time-frequency characteristics were developed, from which 14 noise metrics were derived. The noise metrics were evaluated utilizing existing animal test data obtained by exposing 23 groups of chinchillas to different types of noise. Correlations of the metrics with the hearing losses observed in the chinchillas were compared and the most promising noise metric was identified.
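
    The time-frequency step can be sketched with an analytic (complex Morlet) wavelet transform, e.g. via PyWavelets. The 14 study-specific metrics are not reproduced here; the sketch only shows a peak-level summary that, unlike an Leq-style time average, preserves impulsive content:

        import numpy as np
        import pywt

        fs = 48000                                   # sampling rate (Hz)
        t = np.arange(0, 0.5, 1 / fs)
        noise = np.random.randn(t.size)              # steady background noise
        noise[10000:10050] += 20 * np.hanning(50)    # an impulsive component

        # Analytic wavelet transform: a complex Morlet wavelet gives magnitude envelopes.
        scales = np.geomspace(2, 256, num=64)
        coefs, freqs = pywt.cwt(noise, scales, 'cmor1.5-1.0', sampling_period=1 / fs)
        level = 20 * np.log10(np.abs(coefs) + 1e-12)  # time-frequency level map (dB)

        # Impulsiveness-sensitive summary: the peak level per frequency band survives,
        # whereas time averaging would dilute the impulse in an Leq-style metric.
        print(level.max(axis=1).max(), level.mean())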

  9. Visual signal quality assessment quality of experience (QOE)

    CERN Document Server

    Ma, Lin; Lin, Weisi; Ngan, King

    2015-01-01

    This book provides comprehensive coverage of the latest trends/advances in subjective and objective quality evaluation for traditional visual signals, such as 2D images and video, as well as the most recent challenges for the field of multimedia quality assessment and processing, such as mobile video and social media. Readers will learn how to ensure the highest storage/delivery/transmission quality of visual content (including image, video, graphics, animation, etc.) from the server to the consumer, under resource constraints, such as computation, bandwidth, storage space, battery life, etc. Provides an overview of quality assessment for traditional visual signals; covers newly emerged visual signals such as social media, 3D image/video, mobile video, high dynamic range (HDR) images, graphics/animation, etc., which demand better quality of experience (QoE); helps readers to develop better quality metrics and processing methods for newly emerged visual signals; enables testing, optimizing, benchmarking...

  10. Using business intelligence to monitor clinical quality metrics.

    Science.gov (United States)

    Resetar, Ervina; Noirot, Laura A; Reichley, Richard M; Storey, Patricia; Skiles, Ann M; Traynor, Patrick; Dunagan, W Claiborne; Bailey, Thomas C

    2007-10-11

    BJC HealthCare (BJC) uses a number of industry standard indicators to monitor the quality of services provided by each of its hospitals. By establishing an enterprise data warehouse as a central repository of clinical quality information, BJC is able to monitor clinical quality performance in a timely manner and improve clinical outcomes.

  11. A NEW OBJECT BASED QUALITY METRIC BASED ON SIFT AND SSIM

    OpenAIRE

    Decombas, Marc; Dufaux, Frederic; Renan, Erwann; Pesquet-Popescu, Beatrice; Capman, Francois

    2012-01-01

    ICIP2012; We propose a full reference visual quality metric to evaluate a semantic coding system which may not preserve exactly the position and/or the shape of objects. The metric is based on Scale-Invariant Feature Transform (SIFT) points. More specifically, Structural SIMilarity (SSIM) on windows around the SIFT points measures the compression artifacts (SSIM_SIFT). Conversely, the standard deviation of the matching distance between the SIFT points measures the geometric distortion (GEOMET...
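
    A rough sketch of the two ingredients using OpenCV's SIFT and scikit-image's SSIM (the window size and scoring are simplified relative to the paper, and the file names are placeholders):

        import cv2
        import numpy as np
        from skimage.metrics import structural_similarity

        ref = cv2.imread('reference.png', cv2.IMREAD_GRAYSCALE)   # placeholder paths
        deg = cv2.imread('degraded.png', cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(ref, None)
        kp2, des2 = sift.detectAndCompute(deg, None)
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)

        half, ssims, shifts = 8, [], []
        for m in matches:
            x1, y1 = map(int, kp1[m.queryIdx].pt)
            x2, y2 = map(int, kp2[m.trainIdx].pt)
            if min(x1, y1, x2, y2) < half:           # skip keypoints near the border
                continue
            w1 = ref[y1-half:y1+half, x1-half:x1+half]
            w2 = deg[y2-half:y2+half, x2-half:x2+half]
            if w1.shape == (2*half, 2*half) and w2.shape == (2*half, 2*half):
                ssims.append(structural_similarity(w1, w2))    # compression artifacts
                shifts.append(np.hypot(x1 - x2, y1 - y2))      # geometric displacement

        print('artifact score (mean SSIM around SIFT points):', np.mean(ssims))
        print('geometric score (std of match distances):', np.std(shifts))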

  12. An Approach towards Software Quality Assessment

    Science.gov (United States)

    Srivastava, Praveen Ranjan; Kumar, Krishan

    The software engineer needs to determine the real purpose of the software, and a prime point to keep in mind is that the customer’s needs come first, and they include particular levels of quality, not just functionality. Thus, the software engineer has a responsibility to elicit quality requirements that may not even be explicit at the outset and to discuss their importance and the difficulty of attaining them. All processes associated with software quality (e.g. building, checking, improving quality) will be designed with these in mind and carry costs based on the design. Therefore, it is important to have in mind some of the possible attributes of quality. We start by identifying the metrics and measurement approaches that can be used to assess the quality of a software product. Most of them can only be measured subjectively because there are no solid statistics regarding them. In this paper, we propose an approach to measure software quality statistically.

  13. Patent Assessment Quality

    DEFF Research Database (Denmark)

    Burke, Paul F.; Reitzig, Markus

    2006-01-01

    The increasing number of patent applications worldwide and the extension of patenting to the areas of software and business methods have triggered a debate on "patent quality". While patent quality may have various dimensions, this paper argues that consistency in the decision making on the side of the patent office is one important dimension, particularly in new patenting areas (emerging technologies). In order to understand whether patent offices appear capable of providing consistent assessments of a patent's technological quality in such novel industries from the beginning, we study the concordance of the European Patent Office's (EPO's) granting and opposition decisions for individual patents. We use the historical example of biotech patents filed from 1978 to 1986, the early stage of the industry. Our results indicate that the EPO shows systematically different assessments of technological quality...

  14. Diet quality assessment indexes

    Directory of Open Access Journals (Sweden)

    Kênia Mara Baiocchi de Carvalho

    2014-10-01

    Various indices and scores based on admittedly healthy dietary patterns or food guides for the general population, or aimed at the prevention of diet-related diseases, have been developed to assess diet quality. The four indices preferred by most studies are: the Diet Quality Index; the Healthy Eating Index; the Mediterranean Diet Score; and the Overall Nutritional Quality Index. Other instruments based on these indices have been developed, with the words 'adapted', 'revised', or 'new version I, II or III' added to their names. Even validated indices usually find only modest associations between diet and risk of disease or death, raising questions about their limitations and the complexity associated with measuring the causal relationship between diet and health parameters. The objective of this review is to describe the main instruments used for assessing diet quality, and the applications and limitations related to their use and interpretation.

  15. Using full-reference image quality metrics for automatic image sharpening

    Science.gov (United States)

    Krasula, Lukas; Fliegel, Karel; Le Callet, Patrick; Klíma, Miloš

    2014-05-01

    Image sharpening is a post-processing technique employed for the artificial enhancement of perceived sharpness by shortening the transitions between luminance levels or increasing the contrast on the edges. The greatest challenge in this area is to determine the level of perceived sharpness that is optimal for human observers. This task is complex because the enhancement helps only up to a certain threshold; beyond it, the quality of the resulting image drops due to the presence of annoying artifacts. Despite the effort dedicated to automatic sharpness estimation, none of the existing metrics is designed to localize this threshold. Nevertheless, this is a very important step towards automatic image sharpening. In this work, the possible usage of full-reference image quality metrics for finding the optimal amount of sharpening is proposed and investigated. An intentionally over-sharpened "anchor image" was included in the calculation as an "anti-reference", and the final metric score was computed from the differences between the reference, processed, and anchor versions of the scene. Quality scores obtained from the subjective experiment were used to determine the optimal combination of partial metric values. Five popular fidelity metrics, SSIM, MS-SSIM, IW-SSIM, VIF, and FSIM, were tested. The performance of the proposed approach was then verified in the subjective experiment.
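
    Under simplified assumptions (unsharp masking as the sharpening operator and SSIM as the only fidelity metric, whereas the paper combines five), the anchor idea can be sketched as scoring each sharpening level by its closeness to the reference and its distance from the over-sharpened anti-reference:

        import numpy as np
        from skimage import data, filters
        from skimage.metrics import structural_similarity

        ref = data.camera().astype(np.float64)

        def unsharp(img, amount, sigma=2):
            """Basic unsharp masking; float input keeps its [0, 255] range."""
            blurred = filters.gaussian(img, sigma=sigma)
            return np.clip(img + amount * (img - blurred), 0, 255)

        anchor = unsharp(ref, 8.0)     # intentionally over-sharpened anti-reference
        for amount in [0.5, 1.0, 2.0, 4.0]:
            proc = unsharp(ref, amount)
            s_ref = structural_similarity(proc, ref, data_range=255)
            s_anchor = structural_similarity(proc, anchor, data_range=255)
            # Heuristic score: stay similar to the reference, far from the anchor;
            # the paper instead learns the combination from subjective scores.
            print(amount, s_ref - s_anchor)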

  16. Comparison of surface-based and image-based quality metrics for the analysis of dimensional computed tomography data

    Directory of Open Access Journals (Sweden)

    Francisco A. Arenhart

    2016-11-01

    This paper presents a comparison of surface-based and image-based quality metrics for dimensional X-ray computed tomography (CT) data. The chosen metrics are used to characterize two key aspects of acquiring signals with CT systems: the loss of information (blurring) and the adding of unwanted information (noise). A set of structured experiments was designed to test the response of the metrics to different influencing factors. It is demonstrated that, under certain circumstances, the results of the two types of metrics become conflicting, emphasizing the importance of using surface information for evaluating the quality of dimensional CT data. Specific findings using both types of metrics are also discussed.

  17. An Objective Quality Assessment Metric for Stereoscopic Images Based on Perceptual Significance

    Institute of Scientific and Technical Information of China (English)

    段芬芳; 邵枫; 蒋刚毅; 郁梅; 李福翠

    2013-01-01

    Stereoscopic image quality assessment is an effective way to evaluate the performance of stereoscopic video systems. However, how to utilize human visual characteristics in quality assessment is still an unsolved issue. In this paper, an objective stereoscopic image quality assessment method is proposed based on perceptual significance. Firstly, by analyzing the effects of visual saliency and distortion on perceptual quality, we construct a perceptual significance model of stereoscopic images. Then, we separate the stereoscopic image into four types of regions: salient distortion regions, salient non-distortion regions, non-salient distortion regions, and non-salient non-distortion regions, and evaluate them independently. Finally, all evaluation results are integrated into an overall score by assigning different weights to the regions. Experimental results show that the proposed method achieves higher consistency with subjective assessments of stereoscopic images and effectively reflects the human visual system.
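
    Schematically, the pooling stage partitions pixels by thresholded saliency and distortion maps and weights each region's mean local quality. The weights and thresholds below are invented for illustration; the paper derives its maps and weights from the stereo pair and subjective data:

        import numpy as np

        def pooled_score(quality_map, saliency_map, distortion_map, thr_s=0.5, thr_d=0.5):
            s = saliency_map >= thr_s
            d = distortion_map >= thr_d
            regions = {                               # (mask, illustrative weight)
                'salient, distorted':        (s & d,   0.4),
                'salient, undistorted':      (s & ~d,  0.3),
                'non-salient, distorted':    (~s & d,  0.2),
                'non-salient, undistorted':  (~s & ~d, 0.1),
            }
            score = 0.0
            for mask, w in regions.values():
                if mask.any():
                    score += w * quality_map[mask].mean()   # evaluate regions independently
            return score

        q, sal, dist = (np.random.rand(64, 64) for _ in range(3))  # stand-in maps
        print(pooled_score(q, sal, dist))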

  18. Quality metric for accurate overlay control in <20nm nodes

    Science.gov (United States)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the Overlay (OVL) budget gets tighter at these advanced nodes, accuracy in each nanometer of OVL error becomes critical. When process owners select OVL targets and methods for their process, they must do so wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going toward the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named 'Qmerit' for its imaging based OVL (IBO) targets, which is obtained on-the-fly for each OVL measurement point in X & Y. This Qmerit score will enable process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  19. ENHANCED ENSEMBLE PREDICTION ALGORITHMS FOR DETECTING FAULTY MODULES IN OBJECT ORIENTED SYSTEMS USING QUALITY METRICS

    Directory of Open Access Journals (Sweden)

    M. Punithavalli

    2012-01-01

    The high usage of software systems poses high quality demands from users, which results in increased software complexity. To address these complexities, software quality engineering (SQE) methods should be updated accordingly to enhance their quality assurance. Fault prediction, a sub-task of SQE, is designed to address this issue and provide a strategy to identify the faulty parts of a program, so that the testing process can concentrate only on those regions. This improves the testing process and indirectly helps to reduce the development life cycle, project risks, and resource and infrastructure costs. Measuring quality using software metrics for fault identification is gaining wide interest in the software industry as it helps to reduce time and cost. Existing systems use either traditional simple metrics or object oriented metrics during fault detection, combined with a single classifier prediction system. This study combines the use of simple and object oriented metrics and uses a multiple classifier prediction system to identify module faults. In this study, a total of 20 metrics combining both traditional and OO metrics are used for fault detection. To analyze the performance of these metrics on fault module detection, the study proposes the use of ensemble classifiers built from three frequently used classifiers: Back Propagation Neural Network (BPNN), Support Vector Machine (SVM), and K-Nearest Neighbour (KNN). A novel classifier aggregation method is proposed to combine the classification results. Four methods, Sequential Selection, Random Selection with No Replacement, Selection with Bagging, and Selection with Boosting, are used to generate different variants of the input dataset. The three classifiers were grouped together as 2-classifier and 3-classifier prediction ensemble models. A total of 16 ensemble models were proposed for fault prediction. The performance of the proposed prediction models was analyzed using accuracy, precision, recall, and F-measure.
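
    A generic sketch of such a 3-classifier ensemble with scikit-learn; the study's own aggregation method and datasets are not reproduced, and 20 synthetic features stand in for the combined traditional and OO metrics:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import BaggingClassifier, VotingClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # 20 module metrics, imbalanced classes (few faulty modules).
        X, y = make_classification(n_samples=400, n_features=20, weights=[0.8], random_state=0)

        members = [
            ('bpnn', make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0))),
            ('svm',  make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
            ('knn',  make_pipeline(StandardScaler(), KNeighborsClassifier())),
        ]
        voting = VotingClassifier(members, voting='soft')                   # 3-classifier ensemble
        bagged = BaggingClassifier(voting, n_estimators=5, random_state=0)  # "selection with bagging"

        for name, model in [('voting', voting), ('bagged voting', bagged)]:
            print(name, cross_val_score(model, X, y, cv=5, scoring='f1').mean())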

  20. The impact of interhospital transfers on surgical quality metrics for academic medical centers.

    Science.gov (United States)

    Crippen, Cristina J; Hughes, Steven J; Chen, Sugong; Behrns, Kevin E

    2014-07-01

    The emergence of pay-for-performance systems poses a risk to an academic medical center's (AMC) mission to provide care for interhospital surgical transfer patients. This study examines quality metrics and resource consumption for a sample of these patients from the University Health System Consortium (UHC) and our Department of Surgery (DOS). Standard benchmarks, including mortality rate, length of stay (LOS), and cost, were used to evaluate the impact of interhospital surgical transfers versus direct admission (DA) patients from January 2010 to December 2012. For 1,423,893 patients, the case mix index for transfer patients was 38 per cent (UHC) and 21 per cent (DOS) greater than for DA patients. Mortality rates were 5.70 per cent (UHC) and 6.93 per cent (DOS) in transferred patients compared with 1.79 per cent (UHC) and 2.93 per cent (DOS) for DA patients. Mean LOS for DA patients was 4 days shorter. Mean total costs for transferred patients were greater by $13,613 (UHC) and $13,356 (DOS). Transfer patients have poorer outcomes and consume more resources than DA patients. Early recognition and transfer of complex surgical patients may improve patient rescue and decrease resource consumption. Surgeons at AMCs and in the community should develop collaborative programs that permit collective assessment and decision-making for complicated surgical patients.

  1. Program analysis methodology Office of Transportation Technologies: Quality Metrics final report

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2002-03-01

    "Quality Metrics" is the analytical process for measuring and estimating future energy, environmental and economic benefits of US DOE Office of Energy Efficiency and Renewable Energy (EE/RE) programs. This report focuses on the projected benefits of the programs currently supported by the Office of Transportation Technologies (OTT) within EE/RE. For analytical purposes, these various benefits are subdivided in terms of Planning Units which are related to the OTT program structure.

  2. Reduced-reference image quality assessment using moment method

    Science.gov (United States)

    Yang, Diwei; Shen, Yuantong; Shen, Yongluo; Li, Hongwei

    2016-10-01

    Reduced-reference image quality assessment (RR IQA) aims to evaluate the perceptual quality of a distorted image through partial information about the corresponding reference image. In this paper, a novel RR IQA metric is proposed using the moment method. We claim that the first and second moments of the wavelet coefficients of natural images follow an approximately regular pattern that is disturbed by different types of distortion, and that this disturbance can be relevant to human perception of quality. We measure the difference in these statistical parameters between the reference and distorted images to predict the visual quality degradation. The introduced IQA metric is suitable for implementation and has relatively low computational complexity. The experimental results on the Laboratory for Image and Video Engineering (LIVE) and Tampere Image Database (TID) image databases indicate that the proposed metric has good predictive performance.
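
    Under simplified assumptions, the reduced-reference features can be sketched as per-subband first and second moments of wavelet coefficient magnitudes, compared between the reference and distorted images; only these few numbers, not the reference image itself, need to be transmitted:

        import numpy as np
        import pywt

        def subband_moments(img, wavelet='db2', levels=3):
            """First and second moments of detail-coefficient magnitudes per subband."""
            coeffs = pywt.wavedec2(img, wavelet, level=levels)
            feats = []
            for detail in coeffs[1:]:          # (cH, cV, cD) triplet per level
                for band in detail:
                    m = np.abs(band)
                    feats += [m.mean(), m.var()]
            return np.array(feats)

        ref = np.random.rand(128, 128)                 # stand-in reference image
        dist = ref + 0.1 * np.random.randn(128, 128)   # stand-in distorted image
        # Deviations of the moments are the raw material for the quality prediction.
        print(np.abs(subband_moments(ref) - subband_moments(dist)))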

  3. Integrated Metrics for Improving the Life Cycle Approach to Assessing Product System Sustainability

    Directory of Open Access Journals (Sweden)

    Wesley Ingwersen

    2014-03-01

    Life cycle approaches are critical for identifying and reducing the environmental burdens of products. While these methods can indicate potential environmental impacts of a product, current Life Cycle Assessment (LCA) methods fail to integrate the multiple impacts of a system into unified measures of social, economic, or environmental performance related to sustainability. Integrated metrics that combine multiple aspects of system performance based on a common scientific or economic principle have proven to be valuable for sustainability evaluation. In this work, we propose methods of adapting four integrated metrics for use with LCAs of product systems: ecological footprint, emergy, green net value added, and Fisher information. These metrics provide information on the full product system in land, energy, and monetary equivalents, and as a unitless information index; each is bundled with one or more indicators for reporting. When used together and for relative comparison, integrated metrics provide broader coverage of sustainability aspects from multiple theoretical perspectives and are more likely to illuminate potential issues than individual impact indicators. These integrated metrics are recommended for use in combination with traditional indicators used in LCA. Future work will test and demonstrate the value of using these integrated metrics and combinations to assess product system sustainability.

  4. Implementing Composite Quality Metrics for Bipolar Disorder: Towards a More Comprehensive Approach to Quality Measurement

    Science.gov (United States)

    Kilbourne, Amy M.; Farmer, Carrie; Welsh, Deborah; Pincus, Harold Alan; Lasky, Elaine; Perron, Brian; Bauer, Mark S.

    2011-01-01

    Objective We implemented a set of processes of care measures for bipolar disorder that reflect psychosocial, patient preference, and continuum of care approaches to mental health, and examined whether veterans with bipolar disorder receive care concordant with these practices. Method Data from medical record reviews were used to assess key processes of care for 433 VA mental health outpatients with bipolar disorder. Both composite and individual processes of care measures were operationalized. Results Based on composite measures, 17% had documented assessment of psychiatric symptoms (e.g., psychotic, hallucinatory), 28% had documented patient treatment preferences (e.g., reasons for treatment discontinuation), 56% had documented substance abuse and psychiatric comorbidity assessment, and 62% had documentation of adequate cardiometabolic assessment. No-show visits were followed up 20% of the time and monitoring of weight gain was noted in only 54% of the patient charts. In multivariate analyses, history of homelessness (OR=1.61; 95% CI=1.05-2.46) and nonwhite race (OR=1.74; 95%CI=1.02-2.98) were associated with documentation of psychiatric symptoms and comorbidities, respectively. Conclusions Only half of patients diagnosed with bipolar disorder received care in accordance with clinical practice guidelines. High quality treatment of bipolar disorder includes not only adherence to treatment guidelines but also patient-centered care processes. PMID:21112457

  5. Assessing group-level participation in fluid teams: testing a new metric.

    Science.gov (United States)

    Paletz, Susannah B F; Schunn, Christian D

    2011-06-01

    Participation is an important factor in team success. We propose a new metric of participation equality that provides an unbiased estimate across groups of different sizes and across those that change size over time. Using 11 h of transcribed utterances from informal, fluid, colocated workgroup meetings, we compared the associations of this metric with coded equality of participation and standard deviation. While coded participation and our metric had similar patterns of findings, standard deviation had a somewhat different pattern, suggesting that it might lead to incorrect assessments with fluid teams. Exploratory analyses suggest that, as compared with mixed-age/status groups, groups of younger faculty had more equal participation and that the presence of negative affect words was associated with more dominated participation. Future research can take advantage of this new metric to further theory on team processes in both face-to-face and distributed settings.

  6. Information System Quality Assessment Methods

    OpenAIRE

    2014-01-01

    This thesis explores the challenging topic of information system quality assessment, mainly process assessment. In this work, the term Information System Quality is defined, and different approaches to defining quality for different domains of information systems are outlined. The main methods of process assessment are reviewed and their relationships described. Process assessment methods are divided into two categories: ISO standards and best practices. The main objective of this w...

  7. Assessment of data quality in ATLAS

    CERN Document Server

    Wilson, M G

    2008-01-01

    Assessing the quality of data recorded with the ATLAS detector is crucial for commissioning and operating the detector to achieve sound physics measurements. In particular, the fast assessment of complex quantities obtained during event reconstruction and the ability to easily track them over time are especially important given the large data throughput and the distributed nature of the analysis environment. The data are processed once on a computer farm comprising O(1,000) nodes before being distributed on the Grid, and reliable, centralized methods must be used to organize, merge, present, and archive data-quality metrics for performance experts and analysts. A review of the tools and approaches employed by the detector and physics groups in this environment and a summary of their performances during commissioning are presented.

  8. Revision and extension of Eco-LCA metrics for sustainability assessment of the energy and chemical processes.

    Science.gov (United States)

    Yang, Shiying; Yang, Siyu; Kraslawski, Andrzej; Qian, Yu

    2013-12-17

    Ecologically based life cycle assessment (Eco-LCA) is an appealing approach for evaluating the resource utilization and environmental impacts of the process industries on an ecological scale. However, the aggregated metrics of Eco-LCA suffer from some drawbacks: the environmental impact metric has limited applicability; the resource utilization metric ignores indirect consumption; the renewability metric fails to address the quantitative distinction of resource availability; the productivity metric seems self-contradictory. In this paper, the existing Eco-LCA metrics are revised and extended for sustainability assessment of energy and chemical processes. A new Eco-LCA metrics system is proposed, including four independent dimensions: environmental impact, resource utilization, resource availability, and economic effectiveness. An illustrative example comparing the assessment of a gas boiler and a solar boiler process provides insight into the features of the proposed approach.

  9. Mass, surface area and number metrics in diesel occupational exposure assessment.

    Science.gov (United States)

    Ramachandran, Gurumurthy; Paulsen, Dwane; Watts, Winthrop; Kittelson, David

    2005-07-01

    While diesel aerosol exposure assessment has traditionally been based on the mass concentration metric, recent studies have suggested that particle number and surface area concentrations may be more health-relevant. In this study, we evaluated the exposures of three occupational groups (bus drivers, parking garage attendants, and bus mechanics) using the mass concentration of elemental carbon (EC) as well as surface area and number concentrations. These occupational groups are exposed to mixtures of diesel and gasoline exhaust on a regular basis in various ratios. The three groups had significantly different exposures to workshift TWA EC, with the highest levels observed in the bus garage mechanics and the lowest levels in the parking ramp booth attendants. In terms of surface area, parking ramp attendants had significantly greater exposures than bus garage mechanics, who in turn had significantly greater exposures than bus drivers. In terms of number concentrations, the exposures of garage mechanics exceeded those of ramp booth attendants by a factor of 5-6. Depending on the exposure metric chosen, the three occupational groups had quite different exposure rankings. This illustrates the importance of the choice of exposure metric in epidemiological studies. If these three occupational groups were part of an epidemiological study, then depending on the metric used, they may or may not be part of the same similarly exposed group (SEG). The exposure rankings (e.g., low, medium, or high) of the three groups also change with the metric used. If the incorrect metric is used, significant misclassification errors may occur.

  10. Application of Fractal Dimension to Assess the Effect of Scale on the Sensitivity of Landscape Metrics

    Directory of Open Access Journals (Sweden)

    R. Afrakhteh

    2016-09-01

    The sensitivity of landscape metrics to the scale effect is one of the most challenging issues in landscape ecology and the quantification of land use spatial patterns. In this study, fractal dimension was employed to assess the effect of scale on the sensitivity of landscape metrics in the north of Iran (around Sari) as the case study. Land use/cover maps were derived from Landsat-8 (OLI) image processing, and the spatial scale was downgraded to 30, 60, 120, 150, 200, 250, and 300 m by the cell-center method. After that, landscape-level metrics were quantified. Finally, linear regressions were formed for every metric based on the logarithmic transformation, and the coefficient of determination and fractal dimension were computed. The coefficient of determination for all measures of diversity was zero, and the other measures fell into two general categories: high sensitivity (R: redundant) and no sensitivity (N: no effect). Results acquired from the two indicators clearly delineated the sensitivity of landscape metrics to the scale effect (coefficient of determination) as well as the direction and magnitude of the landscape metrics (fractal dimension).

  11. Optimal Rate Control in H.264 Video Coding Based on Video Quality Metric

    Directory of Open Access Journals (Sweden)

    R. Karthikeyan

    2014-05-01

    The aim of this research is to find a method for providing better visual quality across the complete video sequence in the H.264 video coding standard. The H.264 video coding standard, with its significantly improved coding efficiency, finds important applications in digital video streaming, storage, and broadcast. To achieve comparable quality across the complete video sequence under the constraints of bandwidth availability and buffer fullness, it is important to allocate more bits to frames with high complexity or a scene change and fewer bits to other, less complex frames. A frame layer bit allocation scheme is proposed based on a perceptual quality metric as the indicator of frame complexity. The proposed model computes the Quality Index ratio (QIr) of the predicted quality index of the current frame to the average quality index of all the previous frames in the group of pictures, which is used for bit allocation to the current frame along with bits computed based on buffer availability. The standard deviation of the perceptual quality indicator MOS computed for the proposed model is significantly lower, which means the quality is nearly uniform throughout the full video sequence. The experimental results thus show that the proposed model effectively handles scene changes and scenes with high motion for better visual quality.
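
    Schematically, the frame-level allocation scales a base bit budget by the quality index ratio, subject to buffer constraints. The names, clipping policy, and direction of the scaling below are illustrative assumptions, not the paper's exact rate-control law:

        # Sketch of QIr-style frame bit allocation (all constants illustrative).
        def allocate_bits(pred_qi, history_qi, buffer_bits, base_bits, lo=0.5, hi=2.0):
            avg_qi = sum(history_qi) / len(history_qi) if history_qi else pred_qi
            qir = pred_qi / avg_qi if avg_qi else 1.0   # Quality Index ratio
            qir = min(max(qir, lo), hi)                 # keep the allocation bounded
            target = base_bits * qir                    # scale the base frame budget
            return min(target, 0.5 * buffer_bits)       # respect buffer fullness

        history = []
        for frame, qi in enumerate([1.0, 1.1, 0.9, 2.2, 1.0]):  # 2.2 ~ scene change
            bits = allocate_bits(qi, history, buffer_bits=400_000, base_bits=60_000)
            history.append(qi)
            print(frame, int(bits))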

  12. Image Signature Based Mean Square Error for Image Quality Assessment

    Institute of Scientific and Technical Information of China (English)

    CUI Ziguan; GAN Zongliang; TANG Guijin; LIU Feng; ZHU Xiuchang

    2015-01-01

    Motivated by the importance of the Human visual system (HVS) in image processing, we propose a novel Image signature based mean square error (ISMSE) metric for full reference Image quality assessment (IQA). An efficient image signature based descriptor is used to predict the visual saliency map of the reference image. The saliency map is incorporated into the luminance difference between the reference and distorted images to obtain the image quality score. The effect of luminance difference on visual quality in regions with larger saliency values, which usually correspond to foreground objects, is highlighted. Experimental results on LIVE database release 2 show that by integrating the effects of image signature based saliency on luminance difference, the proposed ISMSE metric outperforms several state-of-the-art HVS-based IQA metrics but with lower complexity.
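
    A minimal sketch of a saliency-weighted MSE in this spirit; the image-signature saliency of the paper is approximated here by a crude sign-of-DCT reconstruction without the smoothing step:

        import numpy as np
        from scipy.fft import dctn, idctn

        def signature_saliency(img):
            """Crude image-signature-style saliency: squared inverse DCT of sign(DCT)."""
            recon = idctn(np.sign(dctn(img, norm='ortho')), norm='ortho')
            sal = recon ** 2
            return sal / sal.max()

        def weighted_mse(ref, dist, alpha=1.0):
            w = 1.0 + alpha * signature_saliency(ref)   # emphasize salient regions
            return np.mean(w * (ref - dist) ** 2) / np.mean(w)

        ref = np.random.rand(64, 64)                   # stand-in reference image
        dist = ref + 0.05 * np.random.randn(64, 64)    # stand-in distorted image
        print(weighted_mse(ref, dist))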

  13. Survey of Quantitative Research Metrics to Assess Pilot Performance in Upset Recovery

    Science.gov (United States)

    Le Vie, Lisa R.

    2016-01-01

    Accidents attributable to in-flight loss of control are the primary cause of fatal commercial jet accidents worldwide. The National Aeronautics and Space Administration (NASA) conducted a literature review to determine and identify the quantitative standards for assessing upset recovery performance. This review covers current recovery procedures for both military and commercial aviation and includes the metrics researchers use to assess aircraft recovery performance. Metrics include time to first input, recognition time, recovery time, and whether the input was correct or incorrect. Other metrics include the state of the autopilot and autothrottle, control wheel/sidestick movement resulting in pitch and roll, and inputs to the throttle and rudder. In addition, airplane state measures, such as roll reversals, altitude loss/gain, maximum vertical speed, maximum/minimum airspeed, maximum bank angle, and maximum g loading, are reviewed as well.

  14. Portfolio Assessment and Quality Teaching

    Science.gov (United States)

    Kim, Youb; Yazdian, Lisa Sensale

    2014-01-01

    Our article focuses on using portfolio assessment to craft quality teaching. Extant research literature on portfolio assessment suggests that the primary purpose of assessment is to serve learning, and portfolio assessments facilitate the process of making linkages among assessment, curriculum, and student learning (Asp, 2000; Bergeron, Wermuth,…

  15. Advancing Efforts to Achieve Health Equity: Equity Metrics for Health Impact Assessment Practice

    Directory of Open Access Journals (Sweden)

    Jonathan Heller

    2014-10-01

    Equity is a core value of Health Impact Assessment (HIA). Many compelling moral, economic, and health arguments exist for prioritizing and incorporating equity considerations in HIA practice. Decision-makers, stakeholders, and HIA practitioners see the value of HIAs in uncovering the impacts of policy and planning decisions on various population subgroups, developing and prioritizing specific actions that promote or protect health equity, and using the process to empower marginalized communities. Several HIA frameworks have been developed to guide the inclusion of equity considerations. However, the field lacks clear indicators for measuring whether an HIA advanced equity. This article describes the development of a set of equity metrics that aim to guide and evaluate progress toward equity in HIA practice. These metrics are also intended to push the field to deepen its practice of and commitment to equity in each phase of an HIA. Over the course of a year, the Society of Practitioners of Health Impact Assessment (SOPHIA) Equity Working Group took part in a consensus process to develop these process and outcome metrics. The metrics were piloted, reviewed, and refined based on feedback from reviewers. The Equity Metrics comprise 23 measures of equity organized into four outcomes: (1) the HIA process and products focused on equity; (2) the HIA process built the capacity and ability of communities facing health inequities to engage in future HIAs and in decision-making more generally; (3) the HIA resulted in a shift in power benefiting communities facing inequities; and (4) the HIA contributed to changes that reduced health inequities and inequities in the social and environmental determinants of health. Each metric comprises a measurement scale, examples of high-scoring activities, potential data sources, and example interview questions to gather data and guide evaluators on scoring.

  16. Impact of artifact removal on ChIP quality metrics in ChIP-seq and ChIP-exo data.

    Science.gov (United States)

    Carroll, Thomas S; Liang, Ziwei; Salama, Rafik; Stark, Rory; de Santiago, Ines

    2014-01-01

    With the advent of ChIP-seq multiplexing technologies and the subsequent increase in ChIP-seq throughput, the development of working standards for the quality assessment of ChIP-seq studies has received significant attention. The ENCODE consortium's large scale analysis of transcription factor binding and epigenetic marks as well as concordant work on ChIP-seq by other laboratories has established a new generation of ChIP-seq quality control measures. The use of these metrics alongside common processing steps has however not been evaluated. In this study, we investigate the effects of blacklisting and removal of duplicated reads on established metrics of ChIP-seq quality and show that the interpretation of these metrics is highly dependent on the ChIP-seq preprocessing steps applied. Further to this we perform the first investigation of the use of these metrics for ChIP-exo data and make recommendations for the adaptation of the NSC statistic to allow for the assessment of ChIP-exo efficiency.
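
    The NSC statistic referred to above comes from strand cross-correlation: 5' read-start coverage on the two strands is correlated at increasing shifts, and NSC is the peak correlation relative to the background minimum. A toy sketch on synthetic coverage (real pipelines such as phantompeakqualtools work on genome-wide BAM data):

        import numpy as np

        rng = np.random.default_rng(1)
        genome = 100_000
        plus, minus = np.zeros(genome), np.zeros(genome)
        for site in rng.integers(500, genome - 500, size=200):  # synthetic binding sites
            plus[site - 80] += rng.poisson(5)                   # fragment length ~160 bp
            minus[site + 80] += rng.poisson(5)

        def strand_cc(shift):
            """Pearson correlation between shifted strand coverages."""
            return np.corrcoef(plus[:genome - shift], minus[shift:])[0, 1]

        shifts = np.arange(10, 400, 10)
        cc = np.array([strand_cc(s) for s in shifts])
        background = max(cc.min(), 1e-3)   # clamped for this toy; real data has a positive floor
        print('peak shift:', shifts[cc.argmax()], 'NSC:', round(cc.max() / background, 1))

    Blacklisted artefact regions and duplicated reads can distort such correlations, which is one way the preprocessing steps discussed above change the interpretation of these metrics.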

  17. Impact of artefact removal on ChIP quality metrics in ChIP-seq and ChIP-exo data.

    Directory of Open Access Journals (Sweden)

    Thomas Samuel Carroll

    2014-04-01

    With the advent of ChIP-seq multiplexing technologies and the subsequent increase in ChIP-seq throughput, the development of working standards for the quality assessment of ChIP-seq studies has received significant attention. The ENCODE consortium’s large scale analysis of transcription factor binding and epigenetic marks as well as concordant work on ChIP-seq by other laboratories has established a new generation of ChIP-seq quality control measures. The use of these metrics alongside common processing steps has however not been evaluated. In this study, we investigate the effects of blacklisting and removal of duplicated reads on established metrics of ChIP-seq quality and show that the interpretation of these metrics is highly dependent on the ChIP-seq preprocessing steps applied. Further to this we perform the first investigation of the use of these metrics for ChIP-exo data and make recommendations for the adaptation of the NSC statistic to allow for the assessment of ChIP-exo efficiency.

  18. Alternative "global warming" metrics in life cycle assessment: a case study with existing transportation data.

    Science.gov (United States)

    Peters, Glen P; Aamaas, Borgar; T Lund, Marianne; Solli, Christian; Fuglestvedt, Jan S

    2011-10-15

    The Life Cycle Assessment (LCA) impact category "global warming" compares emissions of long-lived greenhouse gases (LLGHGs) using Global Warming Potential (GWP) with a 100-year time-horizon as specified in the Kyoto Protocol. Two weaknesses of this approach are (1) the exclusion of short-lived climate forcers (SLCFs) and biophysical factors despite their established importance, and (2) the use of a particular emission metric (GWP) with a choice of specific time-horizons (20, 100, and 500 years). The GWP and the three time-horizons were based on an illustrative example with value judgments and vague interpretations. Here we illustrate, using LCA data of the transportation sector, the importance of SLCFs relative to LLGHGs, different emission metrics, and different treatments of time. We find that both the inclusion of SLCFs and the choice of emission metric can alter results and thereby change mitigation priorities. The explicit inclusion of time, both for emissions and impacts, can remove value-laden assumptions and provide additional information for impact assessments. We believe that our results show that a debate is needed in the LCA community on the impact category "global warming" covering which emissions to include, the emission metric(s) to use, and the treatment of time.
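
    The effect of metric choice is easy to see on a toy inventory: converting the same emissions with GWP20 versus GWP100 factors can change totals enough to reorder mitigation priorities. The factors below are approximate AR5-era values and the inventory is hypothetical:

        # Illustrative CO2-equivalent totals under two time-horizons (approximate
        # AR5 factors; exact values depend on the IPCC report and feedbacks used).
        factors = {
            'GWP20':  {'CO2': 1.0, 'CH4': 84.0, 'N2O': 264.0},
            'GWP100': {'CO2': 1.0, 'CH4': 28.0, 'N2O': 265.0},
        }
        inventory = {'CO2': 0.180, 'CH4': 2.0e-4, 'N2O': 5.0e-6}  # kg per vehicle-km (hypothetical)

        for metric, f in factors.items():
            co2e = sum(inventory[g] * f[g] for g in inventory)
            print(metric, round(co2e * 1000, 2), 'g CO2-eq per vkm')

    Note that SLCFs such as black carbon carry no factors in the Kyoto basket at all, which is the paper's first point: they are excluded before any metric choice is even made.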

  19. Social Advertising Quality: Assessment Criteria

    Directory of Open Access Journals (Sweden)

    S. B. Kalmykov

    2017-01-01

    Purpose: the purpose of this publication is the development of existing criteria-based assessment in the social advertising sphere. The objectives for achieving it are: to establish the research methodology; to develop the author’s version of the necessary notional apparatus and conceptual generalization; to determine the elements of social advertising quality; to establish the factors of its quality; to systematize the existing criteria and measuring instruments of quality assessment; to form new criteria of social advertising quality; and to apply the received results to the development of criteria-based assessment and to determine further research perspectives. Methods: a methodology for researching the management of social advertising interaction with the target audience is proposed; it has a dynamic procedural character and draws on the multivariate paradigmatic status of sociological knowledge. Results: the primary results received are: a multivariate paradigmatic research basis drawing on the works of famous domestic and foreign scientists in the sociology, qualimetry, and management spheres; proposed definitions of social advertising, its quality, a sociological quality provision system, and a model of target audience behavior during social advertising interaction; quality factors established in three groups by level of effect on the consumer; a systematization of the existing quality assessment criteria and measuring instruments by the detected elements of social advertising quality; two new criteria and corresponding measuring instruments for management quality assessment in the social advertising sphere; the development of adaptability, one of the common groups of production quality criteria, with consideration of the new management quality criteria and the conducted systematization of existing social advertising creative quality assessment criteria; and the perspective of further perfection of criteria-based quality assessment based on social advertising

  20. Holistic Metrics for Assessment of the Greenness of Chemical Reactions in the Context of Chemical Education

    Science.gov (United States)

    Ribeiro, M. Gabriela T. C.; Machado, Adelio A. S. C.

    2013-01-01

    Two new semiquantitative green chemistry metrics, the green circle and the green matrix, have been developed for quick assessment of the greenness of a chemical reaction or process, even without performing the experiment from a protocol if enough detail is provided in it. The evaluation is based on the 12 principles of green chemistry. The…

  1. A convolutional neural network approach for objective video quality assessment.

    Science.gov (United States)

    Le Callet, Patrick; Viard-Gaudin, Christian; Barba, Dominique

    2006-09-01

    This paper describes an application of neural networks in the field of objective measurement methods designed to automatically assess the perceived quality of digital videos. This challenging issue aims to emulate human judgment and to replace very complex and time-consuming subjective quality assessment. Several metrics have been proposed in the literature to tackle this issue. They are based on a general framework that combines different stages, each of them addressing complex problems. The ambition of this paper is not to present a globally perfect quality metric but rather to focus on an original way to use neural networks in such a framework in the context of a reduced reference (RR) quality metric. In particular, we point out the value of such a tool for combining features and pooling them in order to compute quality scores. The proposed approach solves some problems inherent to objective metrics that should predict the subjective quality score obtained using the single stimulus continuous quality evaluation (SSCQE) method. The latter has been adopted by the Video Quality Experts Group (VQEG) in its recently finalized reduced referenced and no reference (RRNR-TV) test plan. The originality of this approach, compared to previous attempts to use neural networks for quality assessment, relies on the use of a convolutional neural network (CNN) that allows continuous-time scoring of the video. Objective features are extracted on a frame-by-frame basis on both the reference and the distorted sequences; they are derived from a perceptual-based representation and integrated along the temporal axis using a time-delay neural network (TDNN). Experiments conducted on different MPEG-2 videos, with bit rates ranging from 2 to 6 Mb/s, show the effectiveness of the proposed approach in building a plausible model of temporal pooling from the human vision system (HVS) point of view. More specifically, a linear correlation criterion between objective and subjective scoring of up to 0.92 has been obtained on
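
    The temporal pooling stage can be sketched, under simplified assumptions, as 1D convolutions over frame-level objective features that produce a continuous per-frame quality score; the paper's exact CNN/TDNN layout, features, and training protocol are not reproduced here:

        import torch
        import torch.nn as nn

        class TemporalPooler(nn.Module):
            """Schematic TDNN-style pooling: temporal context via 1D convolutions."""
            def __init__(self, n_features=16, hidden=32, delay=9):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(n_features, hidden, kernel_size=delay, padding=delay // 2),
                    nn.ReLU(),
                    nn.Conv1d(hidden, 1, kernel_size=delay, padding=delay // 2),
                )

            def forward(self, feats):              # feats: (batch, n_features, n_frames)
                return self.net(feats).squeeze(1)  # (batch, n_frames) per-frame scores

        model = TemporalPooler()
        frame_feats = torch.randn(1, 16, 250)      # e.g., 10 s of features at 25 fps
        print(model(frame_feats).shape)            # continuous SSCQE-style scoring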

  2. What are we assessing when we measure food security? A compendium and review of current metrics.

    Science.gov (United States)

    Jones, Andrew D; Ngure, Francis M; Pelto, Gretel; Young, Sera L

    2013-09-01

    The appropriate measurement of food security is critical for targeting food and economic aid; supporting early famine warning and global monitoring systems; evaluating nutrition, health, and development programs; and informing government policy across many sectors. This important work is complicated by the multiple approaches and tools for assessing food security. In response, we have prepared a compendium and review of food security assessment tools in which we review issues of terminology, measurement, and validation. We begin by describing the evolving definition of food security and use this discussion to frame a review of the current landscape of measurement tools available for assessing food security. We critically assess the purpose(s) of these tools, the domains of food security assessed by each, the conceptualizations of food security that underpin each metric, as well as the approaches that have been used to validate these metrics. Specifically, we describe measurement tools that 1) provide national-level estimates of food security, 2) inform global monitoring and early warning systems, 3) assess household food access and acquisition, and 4) measure food consumption and utilization. After describing a number of outstanding measurement challenges that might be addressed in future research, we conclude by offering suggestions to guide the selection of appropriate food security metrics.

  3. The role of metrics and measurements in a software intensive total quality management environment

    Science.gov (United States)

    Daniels, Charles B.

    1992-01-01

    Paramax Space Systems began its mission as a member of the Rockwell Space Operations Company (RSOC) team which was the successful bidder on a massive operations consolidation contract for the Mission Operations Directorate (MOD) at JSC. The contract awarded to the team was the Space Transportation System Operations Contract (STSOC). Our initial challenge was to accept responsibility for a very large, highly complex and fragmented collection of software from eleven different contractors and transform it into a coherent, operational baseline. Concurrently, we had to integrate a diverse group of people from eleven different companies into a single, cohesive team. Paramax executives recognized the absolute necessity to develop a business culture based on the concept of employee involvement to execute and improve the complex process of our new environment. Our executives clearly understood that management needed to set the example and lead the way to quality improvement. The total quality management policy and the metrics used in this endeavor are presented.

  4. Definition of Metric Dependencies for Monitoring the Impact of Quality of Services on Quality of Processes

    OpenAIRE

    2007-01-01

    Service providers have to monitor the quality of offered services and ensure compliance with the service levels that provider and requester agreed on. A service provider should notify a service requester about violations of service level agreements (SLAs). Furthermore, the provider should point to impacts on affected processes in which the services are invoked. For that purpose, a model is needed to define dependencies between the quality of processes and the quality of invoked services. In order to...

  5. Assessing the metrics of climate change. Current methods and future possibilities

    Energy Technology Data Exchange (ETDEWEB)

    Fuglestveit, Jan S.; Berntsen, Terje K.; Godal, Odd; Sausen, Robert; Shine, Keith P.; Skodvin, Tora

    2001-07-01

    With the principle of comprehensiveness embedded in the UN Framework Convention on Climate Change (Art. 3), a multi-gas abatement strategy with emphasis also on non-CO2 greenhouse gases as targets for reduction and control measures has been adopted in the international climate regime. In the Kyoto Protocol, the comprehensive approach is made operative as the aggregate anthropogenic carbon dioxide equivalent emissions of six specified greenhouse gases or groups of gases (Art. 3). With this operationalisation, the emissions of a set of greenhouse gases with very different atmospheric lifetimes and radiative properties are transformed into one common unit: CO2 equivalents. This transformation is based on the Global Warming Potential (GWP) index, which in turn is based on the concept of radiative forcing. The GWP metric and its application in policy making have been debated, and several alternative concepts have been suggested. In this paper, we review existing and alternative metrics of climate change, with particular emphasis on radiative forcing and GWPs, in terms of their scientific performance. This assessment focuses on questions such as the climate impact (end point) against which gases are weighted; the extent to which and how temporality is included, both with regard to emission control and with regard to climate impact; how cost issues are dealt with; and the sensitivity of the metrics to various assumptions. It is concluded that the radiative forcing concept is a robust and useful metric of the potential climatic impact of various agents and that there are prospects for improvement by weighting different forcings according to their effectiveness. We also find that although the GWP concept is associated with serious shortcomings, it retains advantages over any of the proposed alternatives in terms of political feasibility. Alternative metrics, however, make a significant contribution to addressing important issues, and this contribution should be taken into account.

  6. A Code Level Based Programmer Assessment and Selection Criterion Using Metric Tools

    Directory of Open Access Journals (Sweden)

    Ezekiel U. Okike

    2014-11-01

    This study presents a code level measurement of computer programs developed by computer programmers, using the Chidamber and Kemerer Java metrics (CKJM) tool and the Myers Briggs Type Indicator (MBTI) tool. The identification of potential computer programmers using personality trait factors does not seem to be the best approach without a code level measurement of the quality of their programs. Hence the need to evolve a metric tool which measures both the personality traits of programmers and the code level quality of the programs they develop. This is the focus of this study. In this experiment, a set of Java-based programming tasks was given to 33 student programmers who could confidently use the Java programming language. The code developed by these students was analyzed for quality using the CKJM tool. Cohesion, coupling, and number of public methods (NPM) metrics were used in the study. These three metrics were chosen from the CKJM suite because they are useful in measuring well designed code. By examining the cohesion values of classes, high cohesion (in the range [0,1]) and low coupling imply well designed code. Also, the number of methods (NPM) in a well-designed class is always less than 5 when the cohesion range is [0,1]. Results from this study show that 19 of the 33 programmers developed good and cohesive programs while 14 did not. Further analysis revealed the personality traits of the programmers and the number of good programs written by them. Programmers with Introverted Sensing Thinking Judging (ISTJ) traits produced the highest number of good programs, followed by Introverted iNtuitive Thinking Perceiving (INTP), Introverted iNtuitive Feeling Perceiving (INFP), and Extroverted Sensing Thinking Judging (ESTJ).

  7. Application of allowable total error in sigma metrics for assessing the analytical quality of clinical chemistry determination

    Institute of Scientific and Technical Information of China (English)

    张路; 王薇; 王治国

    2015-01-01

    Objective To investigate the importance of the allowable total error (TEa) source in sigma (σ) metrics for assessing the analytical quality of clinical chemistry determinations. Methods In this study, data were collected from the second internal quality control exercise of routine chemistry and the first external quality assessment of routine chemistry in 2014, organized by the National Center for Clinical Laboratories. One of the laboratories was selected, and the coefficient of variation (CV) and bias of 19 clinical chemistry items were obtained from the data. σ values of 2 control runs were calculated using 5 different TEa sources, and the performance of the σ metrics for assessing the analytical quality of clinical chemistry determination was analyzed comparatively. Results σ metrics varied with the changes of TEa and imprecision. Under the National Health Industry Standard, most σ values (68.4%) for control 1 ranged from 2 to 4, and most for control 2 (58%) from 3 to 6. Under RiliBÄK, except for triglyceride (negative) and alanine aminotransferase (ALT), σ values for control 1 were >3, up to 7.69, and 84.2% of control 2 showed a σ value >3, up to 10.43. Under biological variability, the σ value of control 1 ranged from 1 to 5, mostly (63%) <3, and that of control 2 ranged from 1 to 6, with 9 of 19 items <3. Under the Australian TEa, the σ value of control 1 was <3, and that of 79% of control 2 was <3. The σ value of control 2 was generally higher than that of control 1. Conclusions The 6σ approach is an efficient way to control quality, but the lack of TEa for many analytes and the inconsistent TEa from different sources are important variables for the interpretation of σ metrics in a routine clinical laboratory.
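
    The calculation behind these comparisons is the standard sigma metric, σ = (TEa - |bias|) / CV, with all three terms expressed in percent. A minimal sketch with hypothetical values shows how the TEa source alone can move a method across quality classes:

        # Standard sigma-metric calculation; the analyte values below are hypothetical.
        def sigma_metric(tea_pct, bias_pct, cv_pct):
            return (tea_pct - abs(bias_pct)) / cv_pct

        bias, cv = 2.0, 2.5   # one method's bias and imprecision (percent)
        for source, tea in [('TEa source A', 10.0), ('TEa source B', 14.0), ('TEa source C', 6.9)]:
            print(source, round(sigma_metric(tea, bias, cv), 2))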

  8. Irrigation water quality assessments

    Science.gov (United States)

    Increasing demands on fresh water supplies by municipal and industrial users means decreased fresh water availability for irrigated agriculture in semi arid and arid regions. There is potential for agricultural use of treated wastewaters and low quality waters for irrigation but this will require co...

  9. Area of Concern: A new paradigm in life cycle assessment for the development of footprint metrics

    DEFF Research Database (Denmark)

    Ridoutt, Bradley G.; Pfister, Stephan; Manzardo, Alessandro

    2016-01-01

    operating under the auspices of the UNEP/SETAC Life Cycle Initiative project on environmental life cycle impact assessment (LCIA) has been working to develop generic guidance for developers of footprint metrics. The purpose of this paper is to introduce a universal footprint definition and related...... terminology as well as to discuss modelling implications. The task force has worked from the perspective that footprints should be based on LCA methodology, underpinned by the same data systems and models as used in LCA. However, there are important differences in purpose and orientation relative to LCA...... area of concern as the basis for a universal footprint definition. In the same way that LCA uses impact category indicators to assess impacts that follow a common cause-effect pathway toward areas of protection, footprint metrics address areas of concern. The critical difference is that areas of concern...

  10. New perspectives on article-level metrics: developing ways to assess research uptake and impact online

    Directory of Open Access Journals (Sweden)

    Jean Liu

    2013-07-01

    Full Text Available Altmetrics were born from a desire to see and measure research impact differently. Complementing traditional citation analysis, altmetrics are intended to reflect broader views of research impact by taking into account the use of digital scholarly communication tools. Aggregating online attention paid to individual scholarly articles and data sets is the approach taken by Altmetric LLP, an altmetrics tool provider. Potential uses for article-level metrics collected by Altmetric include: (1) the assessment of an article's impact within a particular community, (2) the assessment of the overall impact of a body of scholarly work, and (3) the characterization of entire author and reader communities that engage with particular articles online. Although attention metrics are still being refined, qualitative altmetrics data are beginning to illustrate the rich new world of scholarly communication, and are emerging as ways to highlight the immediate societal impacts of research.

  11. Assessing spelling in kindergarten: further comparison of scoring metrics and their relation to reading skills.

    Science.gov (United States)

    Clemens, Nathan H; Oslund, Eric L; Simmons, Leslie E; Simmons, Deborah

    2014-02-01

    Early reading and spelling development share foundational skills, yet spelling assessment is underutilized in evaluating early reading. This study extended research comparing the degree to which methods for scoring spelling skills at the end of kindergarten were associated with reading skills measured at the same time as well as at the end of first grade. Five strategies for scoring spelling responses were compared: totaling the number of words spelled correctly, totaling the number of correct letter sounds, totaling the number of correct letter sequences, using a rubric for scoring invented spellings, and calculating the Spelling Sensitivity Score (Masterson & Apel, 2010b). Students (N=287) who were identified at kindergarten entry as at risk for reading difficulty and who had received supplemental reading intervention were administered a standardized spelling assessment in the spring of kindergarten, and measures of phonological awareness, decoding, word recognition, and reading fluency were administered concurrently and at the end of first grade. The five spelling scoring metrics were similar in their strong relations with factors summarizing reading subskills (phonological awareness, decoding, and word reading) on a concurrent basis. Furthermore, when predicting first-grade reading skills based on spring-of-kindergarten performance, spelling scores from all five metrics explained unique variance over the autoregressive effects of kindergarten word identification. The practical advantages of using a brief spelling assessment for early reading evaluation and the relative tradeoffs of each scoring metric are discussed.
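    Two of the five scoring strategies are straightforward to operationalize; the sketch below is a simplification (real correct-letter-sequence scoring aligns responses to targets more carefully than positional comparison):

```python
def words_correct(responses: list[str], targets: list[str]) -> int:
    """Total words spelled exactly right (case-insensitive)."""
    return sum(r.lower() == t.lower() for r, t in zip(responses, targets))

def correct_letter_sequences(response: str, target: str) -> int:
    """Simplified CLS: pad with boundary markers and count aligned adjacent
    pairs that match the target; assumes letter-by-letter alignment."""
    r, t = f"^{response.lower()}$", f"^{target.lower()}$"
    return sum(r[i:i + 2] == t[i:i + 2] for i in range(min(len(r), len(t)) - 1))
```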

  12. Quality assessment of urban environment

    Science.gov (United States)

    Ovsiannikova, T. Y.; Nikolaenko, M. N.

    2015-01-01

    This paper is dedicated to research on the applicability of quality management principles to construction products. It is proposed to expand the borders of quality management in construction, transferring its principles to urban systems as economic systems of a higher level, whose qualitative characteristics are substantially defined by the quality of construction products. Buildings and structures form the spatial-material basis of cities and the most important component of the life sphere: the urban environment. The authors justify the need for the assessment of urban environment quality as an important factor of social welfare and life quality in urban areas, and suggest a definition of the term "urban environment". The methodology of quality assessment of the urban environment is based on an integrated approach which includes the system analysis of all factors and the application of both quantitative methods of assessment (calculation of particular and integrated indicators) and qualitative methods (expert estimates and surveys). The authors propose a system of indicators characterizing the quality of the urban environment; these indicators fall into four classes, and the methodology for defining them is shown. The paper presents results of quality assessment of the urban environment for several Siberian regions and a comparative analysis of these results.

  13. Workshop summary: 'Integrating air quality and climate mitigation - is there a need for new metrics to support decision making?'

    Science.gov (United States)

    von Schneidemesser, E.; Schmale, J.; Van Aardenne, J.

    2013-12-01

    Air pollution and climate change are often treated at the national and international levels as separate problems under different regulatory or thematic frameworks and different policy departments. With air pollution and climate change being strongly linked with regard to their causes, effects and mitigation options, the integration of policies that steer air pollutant and greenhouse gas emission reductions might result in cost-efficient, more effective and thus more sustainable tackling of the two problems. Supporting informed decision making and working towards an integrated air quality and climate change mitigation policy requires the identification, quantification and communication of present-day and potential future co-benefits and trade-offs. The identification of co-benefits and trade-offs requires the application of appropriate metrics that are well rooted in science, easy to understand, and reflect the needs of policy, industry and the public for informed decision making. For the purpose of this workshop, metrics were loosely defined as a quantified measure of effect or impact used to inform decision-making and to evaluate mitigation measures. The workshop, held on October 9 and 10 and co-organized by the European Environment Agency and the Institute for Advanced Sustainability Studies, brought together representatives from science, policy, NGOs, and industry to discuss whether currently available metrics are 'fit for purpose' or whether there is a need to develop alternative metrics or reassess the way current metrics are used and communicated. Based on the workshop outcome, the presentation will (a) summarize the informational needs and current application of metrics by the end users, who, depending on their field and area of operation, might require health, policy, and/or economically relevant parameters at different scales, (b) provide an overview of the state of the science of currently used and newly developed metrics, and the scientific validity of these

  14. Assessing the quality of restored images in optical long-baseline interferometry

    CERN Document Server

    Gomes, Nuno; Thiébaut, Éric

    2016-01-01

    Assessing the quality of aperture synthesis maps is relevant for benchmarking image reconstruction algorithms, for the scientific exploitation of data from optical long-baseline interferometers, and for the design/upgrade of new/existing interferometric imaging facilities. Although metrics have been proposed in these contexts, no systematic study has been conducted on the selection of a robust metric for quality assessment. This article addresses the question: what is the best metric to assess the quality of a reconstructed image? It starts by considering several metrics, and selecting a few based on general properties. Then, a variety of image reconstruction cases is considered. The observational scenarios are phase closure and phase referencing at the Very Large Telescope Interferometer (VLTI), for a combination of two, three, four and six telescopes. End-to-end image reconstruction is accomplished with the MiRA software, and several merit functions are put to test. It is found that convolution by an effect...

  15. The Northeast Stream Quality Assessment

    Science.gov (United States)

    Van Metre, Peter C.; Riva-Murray, Karen; Coles, James F.

    2016-04-22

    In 2016, the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) is assessing stream quality in the northeastern United States. The goal of the Northeast Stream Quality Assessment (NESQA) is to assess the quality of streams in the region by characterizing multiple water-quality factors that are stressors to aquatic life and evaluating the relation between these stressors and biological communities. The focus of NESQA in 2016 will be on the effects of urbanization and agriculture on stream quality in all or parts of eight states: Connecticut, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont. Findings will provide the public and policymakers with information about the most critical factors affecting stream quality, thus providing insights about possible approaches to protect the health of streams in the region. The NESQA study will be the fourth regional study conducted as part of NAWQA and will be of similar design and scope to the first three, in the Midwest in 2013, the Southeast in 2014, and the Pacific Northwest in 2015 (http://txpub.usgs.gov/RSQA/).

  16. The California stream quality assessment

    Science.gov (United States)

    Van Metre, Peter C.; Egler, Amanda L.; May, Jason T.

    2017-03-06

    In 2017, the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) project is assessing stream quality in coastal California, United States. The USGS California Stream Quality Assessment (CSQA) will sample streams over most of the Central California Foothills and Coastal Mountains ecoregion (modified from Griffith and others, 2016), where rapid urban growth and intensive agriculture in the larger river valleys are raising concerns that stream health is being degraded. Findings will provide the public and policy-makers with information regarding which human and natural factors are the most critical in affecting stream quality and, thus, provide insights about possible approaches to protect the health of streams in the region.

  17. Indoor Climate Quality Assessment -

    DEFF Research Database (Denmark)

    Ansaldi, Roberta; Asadi, Ehsan; Costa, José Joaquim

    This Guidebook gives building professionals useful support in the practical measurement and monitoring of the indoor climate in buildings. It is evident that energy consumption in a building is directly influenced by the required and maintained indoor comfort level. Wireless technologies for measurement and monitoring have allowed a significantly increased number of possible applications, especially in existing buildings. The Guidebook illustrates several cases with the instrumentation of the monitoring and assessment of indoor climate.

  18. Implementing composite quality metrics for bipolar disorder: towards a more comprehensive approach to quality measurement

    OpenAIRE

    Kilbourne, Amy M.; Farmer Teh, Carrie; Welsh, Deborah; Pincus, Harold Alan; Lasky, Elaine; Perron, Brian; Bauer, Mark S

    2010-01-01

    Objective We implemented a set of processes of care measures for bipolar disorder that reflect psychosocial, patient preference, and continuum of care approaches to mental health, and examined whether veterans with bipolar disorder receive care concordant with these practices. Method Data from medical record reviews were used to assess key processes of care for 433 VA mental health outpatients with bipolar disorder. Both composite and individual processes of care measures were ope...

  19. A New Normalizing Algorithm for BAC CGH Arrays with Quality Control Metrics

    Directory of Open Access Journals (Sweden)

    Jeffrey C. Miecznikowski

    2011-01-01

    Full Text Available The main focus in pin-tip (or print-tip) microarray analysis is determining which probes, genes, or oligonucleotides are differentially expressed. Specifically in array comparative genomic hybridization (aCGH) experiments, researchers search for chromosomal imbalances in the genome. To model these data, scientists apply statistical methods to the structure of the experiment and assume that the data consist of the signal plus random noise. In this paper we propose "SmoothArray", a new method to preprocess comparative genomic hybridization (CGH) bacterial artificial chromosome (BAC) arrays, and we show the effects on a cancer dataset. As part of our R software package "aCGHplus," this freely available algorithm removes the variation due to intensity effects, pin/print-tip, the spatial location on the microarray chip, and the relative location from the well plate. Removal of this variation improves the downstream analysis and subsequent inferences made on the data. Further, we present measures to evaluate the quality of the dataset according to the arrayer pins, 384-well plates, plate rows, and plate columns. We compare our method against competing methods using several metrics to measure the biological signal. With this novel normalization algorithm and quality control measures, the user can improve their inferences on datasets and pinpoint problems that may arise in their BAC aCGH technology.

  20. Blind image quality assessment via deep learning.

    Science.gov (United States)

    Hou, Weilong; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2015-06-01

    This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into the numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, lots of learning-based IQA models are proposed by analyzing the mapping from the images to numerical ratings. However, the learnt mapping can hardly be accurate enough because some information has been lost in such an irreversible conversion from the linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model, which learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades, corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than the regression-based models, but also robust to the small sample size problem. Thorough experiments are conducted on popular databases to verify the model's effectiveness, efficiency, and robustness.
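    The final pooling step described here, converting a five-grade distribution into a numeric score, can be approximated by a probability-weighted average; the grade-to-score mapping below is an illustrative assumption, not the paper's exact pooling:

```python
import numpy as np

GRADE_SCORES = np.array([100.0, 75.0, 50.0, 25.0, 0.0])  # excellent..bad (assumed)

def pool_quality(grade_probs: np.ndarray) -> float:
    """Convert a classifier's 5-way grade distribution into a scalar score."""
    return float(grade_probs @ GRADE_SCORES)

# e.g. an image judged mostly "good" with some "fair":
print(pool_quality(np.array([0.1, 0.6, 0.25, 0.05, 0.0])))  # 68.75
```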

  1. Calculating Air Quality and Climate Co-Benefits Metrics from Adjoint Elasticities in Chemistry-Climate Models

    Science.gov (United States)

    Spak, S.; Henze, D. K.; Carmichael, G. R.

    2013-12-01

    The science and policy communities both need common metrics that clearly, comprehensively, and intuitively communicate the relative sensitivities of air quality and climate to emissions control strategies, include emissions and process uncertainties, and minimize the range of error that is transferred to the metric. This is particularly important because most emissions control policies impact multiple short-lived climate forcing agents, and non-linear climate and health responses in space and time limit the accuracy and policy value of simple emissions-based calculations. Here we describe and apply new second-order elasticity metrics to support the direct comparison of emissions control policies for air quality and health co-benefits analyses using adjoint chemical transport and chemistry-climate models. Borrowing an econometric concept, the simplest elasticities in the atmospheric system are the percentage changes in concentrations due to a percentage change in the emissions. We propose a second-order elasticity metric, the Emissions Reduction Efficiency, which supports comparison across compounds, to long-lived climate forcing agents like CO2, and to other air quality impacts, at any temporal or spatial scale. These adjoint-based metrics (1) possess a single uncertainty range; (2) allow for the inclusion of related health and other impacts effects within the same framework; (3) take advantage of adjoint and forward sensitivity models; and (4) are easily understood. Using global simulations with the adjoint of GEOS-Chem, we apply these metrics to identify spatial and sectoral variability in the climate and health co-benefits of sectoral emissions controls on black carbon, sulfur dioxide, and PM2.5. We find spatial gradients in optimal control strategies on every continent, along with differences among megacities.
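    The first-order elasticity the abstract defines is simply the ratio of fractional responses; the second-order Emissions Reduction Efficiency itself is not reproduced here. A minimal sketch with made-up numbers:

```python
def elasticity(conc_base: float, conc_perturbed: float,
               emis_base: float, emis_perturbed: float) -> float:
    """Percentage change in concentration per percentage change in emissions."""
    d_conc = (conc_perturbed - conc_base) / conc_base
    d_emis = (emis_perturbed - emis_base) / emis_base
    return d_conc / d_emis

# A 10% emissions cut that lowers concentration by 4% has elasticity 0.4:
print(elasticity(20.0, 19.2, 100.0, 90.0))
```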

  2. Research on Quality Metrics of Mobile Applications

    Institute of Scientific and Technical Information of China (English)

    刘莉芳

    2016-01-01

    This paper analyzes the characteristics of mobile applications and the quality attributes that deserve attention, and puts forward corresponding metric indicators and measurement methods for each quality attribute.

  3. Application Comparison of Two Sources of Allowable Total Errors in σ Metrics for Assessing the Analytical Quality and Selecting Quality Control Procedures for Automated Clinical Chemistry

    Institute of Scientific and Technical Information of China (English)

    张路; 王薇; 王治国

    2015-01-01

    Objective To evaluate the difference between two sources of allowable total errors, provided by the National Health Industry Standard (WS/T 403-2012, analytical quality specifications for routine analytes in clinical biochemistry) and the National Standard (GB/T 20470-2006, requirements of external quality assessment for clinical laboratories), in assessing analytical quality by σ metrics and in selecting quality control procedures using operational process specifications graphs. Methods One of the laboratories participating in the internal quality control activity of routine chemistry of February 2014 and the first external quality assessment activity of routine chemistry in 2014, organized by the National Center for Clinical Laboratories, was selected for its coefficient of variation and bias on nineteen clinical chemistry tests. With the CV% and bias%, σ metrics of controls at two analyte concentrations were calculated using the two different allowable total error targets (National Health Industry Standard WS/T 403-2012 and National Standard GB/T 20470-2006). An operational process specifications graph, from which quality control procedures could be selected, was obtained using the quality control computer simulation software developed by the National Center for Clinical Laboratories and the company Zhongchuangyida. Results The σ metrics under the National Health Industry Standard (WS/T 403-2012) were from 0 to 7. Most of the values (86% and 76.2%) under the National Standard (GB/T 20470-2006) were from 3 to 15. On the normalized method decision chart, the assay quality using the allowable total error targets of the National Standard (GB/T 20470-2006) was at least one level higher than that using the National Health Industry Standard (WS/T 403-2012). The quality control rules under the National Health Industry Standard (WS/T 403-2012) were obviously more strict than those under the National Standard (GB/T 20470-2006). Among the control procedures using the National Health Industry Standard (WS/T 403-2012), multirule (n=4): ALB, ALP, Ca, Cl, TC, Crea, Glu, LDH, K, Na

  4. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    Science.gov (United States)

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    Objective approaches to 3D image quality assessment play a key role in the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterparts. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs) when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors (binocular combination and binocular frequency integration) are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency can be reached between the measured MOS and the proposed metrics, in which the correlation coefficient between them can go up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of synthesized color-plus-depth 3D images well. Therefore, it is our belief that the binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.

  5. NEW VISUAL PERCEPTUAL POOLING STRATEGY FOR IMAGE QUALITY ASSESSMENT

    Institute of Scientific and Technical Information of China (English)

    Zhou Wujie; Jiang Gangyi; Yu Mei

    2012-01-01

    Most Image Quality Assessment (IQA) metrics consist of two processes. In the first process, a quality map of the image is measured locally. In the second process, the final quality score is converted from the quality map by using a pooling strategy. The first process has seen effective and significant progress, while the second process has usually been done in simple ways. In the second process, the optimal perceptual pooling weights should be determined and computed according to the Human Visual System (HVS). Thus, a reliable spatial pooling mathematical model based on the HVS is an important issue worthy of study. In this paper, a new Visual Perceptual Pooling Strategy (VPPS) for IQA is presented based on the contrast sensitivity and luminance sensitivity of the HVS. Experimental results with the LIVE database show that the visual perceptual weights obtained by the proposed pooling strategy can effectively and significantly improve the performance of IQA metrics based on Mean Structural SIMilarity (MSSIM) or Phase Quantization Code (PQC). It is confirmed that the proposed VPPS demonstrates promising results for improving the performance of existing IQA metrics.
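    In the generic weighted form that strategies like VPPS instantiate, the final score is a weighted average over the local quality map; how the weight map is built from contrast and luminance sensitivity is the paper's contribution and is not reproduced here:

```python
import numpy as np

def pooled_score(quality_map: np.ndarray, weight_map: np.ndarray) -> float:
    """Weighted spatial pooling of a local quality map (e.g., an SSIM map)."""
    return float((quality_map * weight_map).sum() / weight_map.sum())
```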

  6. Comparing apples and oranges: assessment of the relative video quality in the presence of different types of distortions

    DEFF Research Database (Denmark)

    Reiter, Ulrich; Korhonen, Jari; You, Junyong

    2011-01-01

    Video quality assessment is essential for the performance analysis of visual communication applications. Objective metrics can be used for estimating the relative quality differences, but they typically give reliable results only if the compared videos contain similar types of quality distortion. However, video compression typically produces different kinds of visual artifacts than transmission errors. In this article, we focus on a novel subjective quality assessment method that is suitable for comparing different types of quality distortions. The proposed method has been used to evaluate how well different objective quality metrics estimate the relative subjective quality levels for content with different types of quality distortions. Our conclusion is that none of the studied objective metrics works reliably for assessing the co-impact of compression artifacts and transmission errors.

  7. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2012-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development...... of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed....... This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical...

  8. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development...... of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed....... This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical...

  9. Timeliness “at a glance”: assessing the turnaround time through the six sigma metrics.

    Science.gov (United States)

    Ialongo, Cristiano; Bernardini, Sergio

    2016-01-01

    Almost thirty years of systematic analysis have proven the turnaround time to be a fundamental dimension for the clinical laboratory. Several indicators are available to assess and report quality with respect to timeliness, but they sometimes lack communicative immediacy and accuracy. Six sigma is a paradigm developed within the industrial domain for assessing quality and addressing goals and issues. The sigma level computed through the Z-score method is a simple and straightforward tool which reports quality on a universal dimensionless scale and can handle non-normal data. Herein we report our preliminary experience in using the sigma level to assess the change in urgent (STAT) test turnaround time due to the implementation of total automation. We found that the Z-score method is a valuable and easy-to-use method for assessing and communicating the quality level of laboratory timeliness, providing a good correspondence with the actual change in efficiency which was retrospectively observed.
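    A common route from timeliness data to a sigma level is to treat TATs beyond the target as defects and pass the defect rate through the inverse normal CDF; the customary 1.5σ long-term shift below is a convention, and the paper's exact Z-score computation may differ:

```python
from statistics import NormalDist

def sigma_level(n_over_target: int, n_total: int, shift: float = 1.5) -> float:
    """Sigma level from the fraction of STAT TATs exceeding the target."""
    defect_rate = n_over_target / n_total
    return NormalDist().inv_cdf(1.0 - defect_rate) + shift

print(round(sigma_level(66, 1000), 2))  # 6.6% defects -> about 3.0 sigma
```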

  10. Determine metrics and set targets for soil quality on agriculture residue and energy crop pathways

    Energy Technology Data Exchange (ETDEWEB)

    Ian Bonner; David Muth

    2013-09-01

    There are three objectives for this project: (1) support OBP in meeting MYPP stated performance goals for the Sustainability Platform, (2) develop integrated feedstock production system designs that increase total productivity of the land, decrease delivered feedstock cost to the conversion facilities, and increase environmental performance of the production system, and (3) deliver to the bioenergy community robust datasets and flexible analysis tools for establishing sustainable and viable use of agricultural residues and dedicated energy crops. The key project outcome to date has been the development and deployment of a sustainable agricultural residue removal decision support framework. The modeling framework has been used to produce a revised national assessment of sustainable residue removal potential. The national assessment datasets are being used to update national resource assessment supply curves using POLYSIS. The residue removal modeling framework has also been enhanced to support high-fidelity sub-field scale sustainable removal analyses. The framework has been deployed through a web application and a mobile application. The mobile application is being used extensively in the field with industry, research, and USDA NRCS partners to support and validate sustainable residue removal decisions. The results detailed in this report have set targets for increasing soil sustainability by focusing on primary soil quality indicators (total organic carbon and erosion) in two agricultural residue management pathways and a dedicated energy crop pathway. The two residue pathway targets were set to (1) increase residue removal by 50% while maintaining soil quality, and (2) increase soil quality by 5% as measured by Soil Management Assessment Framework indicators. The energy crop pathway target was set to increase soil quality by 10% using these same indicators. To demonstrate the feasibility and impact of each of these targets, seven case studies spanning the US are presented.

  11. Quality assessment of images displayed on LCD screen with local backlight dimming

    DEFF Research Database (Denmark)

    Mantel, Claire; Burini, Nino; Korhonen, Jari;

    2013-01-01

    This paper presents a subjective experiment collecting quality assessments of images displayed on an LCD with local backlight dimming, using two methodologies: absolute category ratings and paired comparison. Some well-known objective quality metrics are then applied to the stimuli and their respect...

  12. Critical Assessment of the Foundations of Power Transmission and Distribution Reliability Metrics and Standards.

    Science.gov (United States)

    Nateghi, Roshanak; Guikema, Seth D; Wu, Yue Grace; Bruss, C Bayan

    2016-01-01

    The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events.
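    The distribution reliability indices such reviews examine are typically the IEEE 1366 averages, SAIFI and SAIDI (their relevance here is an editorial assumption; the abstract does not name specific indices). Minimal sketches with hypothetical event data:

```python
def saifi(customers_interrupted: list[int], customers_served: int) -> float:
    """SAIFI: sustained interruptions per customer served per period."""
    return sum(customers_interrupted) / customers_served

def saidi(outage_minutes: list[float], customers_interrupted: list[int],
          customers_served: int) -> float:
    """SAIDI: customer-minutes of interruption per customer served."""
    total = sum(m * c for m, c in zip(outage_minutes, customers_interrupted))
    return total / customers_served
```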

  13. Enforcing Quality Metrics over Equipment Utilization Rates as Means to Reduce Centers for Medicare and Medicaid Services Imaging Costs and Improve Quality of Care

    Directory of Open Access Journals (Sweden)

    Amit Sura

    2011-01-01

    Examination of quality metrics, such as appropriateness criteria and pre-authorization, has shown promising results. The development and enforcement of appropriateness criteria lowers overutilization of studies without requiring unattainable fixed rates. Pre-authorization educates ordering physicians as to when imaging is indicated.

  14. TerrorCat: a translation error categorization-based MT quality metric

    OpenAIRE

    2012-01-01

    We present TerrorCat, a submission to the WMT’12 metrics shared task. TerrorCat uses frequencies of automatically obtained translation error categories as base for pairwise comparison of translation hypotheses, which is in turn used to generate a score for every translation. The metric shows high overall correlation with human judgements on the system level and more modest results on the level of individual sentences.

  15. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    Directory of Open Access Journals (Sweden)

    Daniel Laney

    2014-01-01

    Full Text Available This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.

  16. Assessing water quality trends in catchments with contrasting hydrological regimes

    Science.gov (United States)

    Sherriff, Sophie C.; Shore, Mairead; Mellander, Per-Erik

    2016-04-01

    Environmental resources are under increasing pressure to simultaneously achieve social, economic and ecological aims. Increasing demand for food production, for example, has expanded and intensified agricultural systems globally. In turn, greater risks of diffuse pollutant delivery (suspended sediment (SS) and phosphorus (P)) from land to water, due to higher stocking densities, fertilisation rates and soil erodibility, have contributed to deterioration of the chemical and ecological quality of aquatic ecosystems. Development of sustainable and resilient management strategies for agro-ecosystems must detect and consider the impact of land use disturbance on water quality over time. However, assessment of multiple monitoring sites over a region is challenged by hydro-climatic fluctuations and the propagation of events through catchments with contrasting hydrological regimes. Simple water quality metrics, for example flow-weighted pollutant exports, have the potential to normalise the impact of catchment hydrology and better identify water quality fluctuations due to land use and short-term climate fluctuations. This paper assesses the utility of flow-weighted water quality metrics to evaluate periods and causes of critical pollutant transfer. Sub-hourly water quality (SS and P) and discharge data were collected from hydrometric monitoring stations at the outlets of five small (~10 km2) agricultural catchments in Ireland. Catchments possess contrasting land use (predominantly grassland or arable) and soil drainage (poorly, moderately or well drained) characteristics. Flow-weighted water quality metrics were calculated and evaluated according to fluctuations in source pressure and rainfall. Flow-weighted water quality metrics successfully identified fluctuations in pollutant export which could be attributed to land use changes through the agricultural calendar, i.e., groundcover fluctuations. In particular, catchments with predominantly poor or moderate soil drainage
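    A flow-weighted mean concentration normalizes each sample by the discharge at which it was taken, FWMC = Σ(c_i q_i Δt) / Σ(q_i Δt); a sketch for equally spaced samples:

```python
def flow_weighted_mean_concentration(conc: list[float], flow: list[float]) -> float:
    """FWMC for equally spaced samples: sum(c_i * q_i) / sum(q_i)."""
    return sum(c * q for c, q in zip(conc, flow)) / sum(flow)
```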

  17. Orion Entry Handling Qualities Assessments

    Science.gov (United States)

    Bihari, B.; Tiggers, M.; Strahan, A.; Gonzalez, R.; Sullivan, K.; Stephens, J. P.; Hart, J.; Law, H., III; Bilimoria, K.; Bailey, R.

    2011-01-01

    The Orion Command Module (CM) is a capsule designed to bring crew back from the International Space Station (ISS), the moon and beyond. The atmospheric entry portion of the flight is designed to be flown in autopilot mode for nominal situations. However, there exists the possibility for the crew to take over manual control in off-nominal situations. In these instances, the spacecraft must meet specific handling qualities criteria. To address these criteria, two separate assessments of the Orion CM's entry Handling Qualities (HQ) were conducted at NASA's Johnson Space Center (JSC) using the Cooper-Harper scale (Cooper & Harper, 1969). These assessments were conducted in the summers of 2008 and 2010 using the Advanced NASA Technology Architecture for Exploration Studies (ANTARES) six-degree-of-freedom, high-fidelity Guidance, Navigation, and Control (GN&C) simulation. This paper will address the specifics of the handling qualities criteria, the vehicle configuration, the scenarios flown, the simulation background and setup, crew interfaces and displays, piloting techniques, ratings and crew comments, pre- and post-flight briefings, lessons learned and changes made to improve the overall system performance. The data collection tools, methods, data reduction and output reports will also be discussed. The objective of the 2008 entry HQ assessment was to evaluate the handling qualities of the CM during a lunar skip return. A lunar skip entry case was selected because it was considered the most demanding of all bank control scenarios. Even though skip entry is not planned to be flown manually, it was hypothesized that if a pilot could fly the harder skip entry case, then they could also fly a simpler loads managed or ballistic (constant bank rate command) entry scenario. In addition, with the evaluation set-up of multiple tasks within the entry case, handling qualities ratings collected in the evaluation could be used to assess other scenarios such as the constant bank angle

  18. A City and National Metric measuring Isolation from the Global Market for Food Security Assessment

    Science.gov (United States)

    Brown, Molly E.; Silver, Kirk Coleman; Rajagopalan, Krishnan

    2013-01-01

    The World Bank has invested in infrastructure in developing countries for decades. This investment aims to reduce the isolation of markets, reducing both seasonality and variability in food availability and food prices. Here we combine city market price data, global distance to port, and country infrastructure data to create a new Isolation Index for countries and cities around the world. Our index quantifies the isolation of a city from the global market. We demonstrate that an index built at the country level can be applied at a sub-national level to quantify city isolation. In doing so, we offer policy makers an alternative metric to assess food insecurity. We compare our isolation index with other indices and economic data found in the literature. We show that our Index measures economic isolation regardless of economic stability using correlation and analysis

  19. Software Architecture Coupling Metric for Assessing Operational Responsiveness of Trading Systems

    Directory of Open Access Journals (Sweden)

    Claudiu VINTE

    2012-01-01

    Full Text Available The empirical observation that motivates our research relies on the difficulty of assessing the performance of a trading architecture beyond a few synthetic indicators like response time, system latency, availability or volume capacity. Trading systems involve complex software architectures of distributed resources. However, in the context of a large brokerage firm, which offers global coverage from both market and client perspectives, the term distributed gains a critical significance indeed. Offering a low-latency ordering system by today's standards is relatively easily achievable, but integrating it in a flexible manner within the broader information system architecture of a broker/dealer requires operational aspects to be factored in. We propose a metric for measuring the coupling level within a software architecture, and employ it to identify architectural designs that can offer a higher level of operational responsiveness, which ultimately would raise the overall real-world performance of a trading system.

  20. Is your ethics committee efficient? Using "IRB Metrics" as a self-assessment tool for continuous improvement at the Faculty of Tropical Medicine, Mahidol University, Thailand.

    Science.gov (United States)

    Adams, Pornpimon; Kaewkungwal, Jaranit; Limphattharacharoen, Chanthima; Prakobtham, Sukanya; Pengsaa, Krisana; Khusmith, Srisin

    2014-01-01

    Tensions between researchers and ethics committees have been reported in several institutions. Some reports suggest researchers lack confidence in the quality of institutional review board (IRB) reviews, and that emphasis on strict procedural compliance and ethical issues raised by the IRB might unintentionally lead to delays in correspondence between researchers and ethics committees, and/or even encourage prevarication/equivocation, if researchers perceive committee concerns and criticisms unjust. This study systematically analyzed the efficiency of different IRB functions, and the relationship between efficiency and perceived quality of the decision-making process. The major purposes of this study were thus (1) to use the IRB Metrics developed by the Faculty of Tropical Medicine, Mahidol University, Thailand (FTM-EC) to assess the operational efficiency and perceived effectiveness of its ethics committees, and (2) to determine ethical issues that may cause the duration of approval process to be above the target limit of 60 days. Based on a literature review of definitions and methods used and proposed for use, in assessing aspects of IRB quality, an "IRB Metrics" was developed to assess IRB processes using a structure-process-outcome measurement model. To observe trends in the indicators evaluated, data related to all protocols submitted to the two panels of the FTM-EC (clinical and non-clinical), between January 2010-September 2013, were extracted and analyzed. Quantitative information based on IRB Metrics structure-process-outcome illuminates different areas for internal-process improvement. Ethical issues raised with researchers by the IRB, which were associated with the duration of the approval process in protocol review, could be considered root causes of tensions between the parties. The assessment of IRB structure-process-outcome thus provides a valuable opportunity to strengthen relationships and reduce conflicts between IRBs and researchers, with

  1. Landscape Metric Modeling - a Technique for Forest Disturbance Assessment in Shendurney Wildlife Sanctuary

    Directory of Open Access Journals (Sweden)

    Subin Jose

    2011-12-01

    Full Text Available Deforestation and forest degradation are associated, progressive processes that result from anthropogenic stress and climate change, converting forest into a mosaic of mature forest fragments, pasture, and degraded habitat. The present study addresses forest degradation assessment of a landscape using landscape metrics. Geospatial techniques, including GIS, remote sensing and FRAGSTATS-based methods, are powerful tools in the assessment of forest degradation. The present study is carried out in Shendurney Wildlife Sanctuary, located in the mega-biodiversity hotspot of the Western Ghats, Kerala. A large extent of forest is affected by degradation in this region, leading to depletion of forest biodiversity. For conservation of forest biodiversity and implementation of conservation strategies, degradation assessment of areas of habitat destruction is important. Two types of data are used in the study, i.e. spatial and non-spatial; the non-spatial data include both anthropogenic stress and climate data. The study shows that the disturbance index value ranges from 2.5 to 7.5 and has been reclassified into four disturbance zones: low, medium, high, and very high. The analysis would play a key role in the formulation and implementation of forest conservation and management strategies.

  2. Higher Education Quality Assessment Model: Towards Achieving Educational Quality Standard

    Science.gov (United States)

    Noaman, Amin Y.; Ragab, Abdul Hamid M.; Madbouly, Ayman I.; Khedra, Ahmed M.; Fayoumi, Ayman G.

    2017-01-01

    This paper presents a developed higher education quality assessment model (HEQAM) that can be applied for enhancement of university services. This is because there is no universal unified quality standard model that can be used to assess the quality criteria of higher education institutes. The analytical hierarchy process is used to identify the…

  3. A multi-model multi-objective study to evaluate the role of metric choice on sensitivity assessment

    Science.gov (United States)

    Haghnegahdar, Amin; Razavi, Saman; Wheater, Howard; Gupta, Hoshin

    2016-04-01

    Sensitivity analysis (SA) is an essential tool for providing insight into model behavior, calibration, and uncertainty assessment. It is often overlooked that the metric choice can significantly change the assessment of model sensitivity. In order to identify important hydrological processes across various case studies, we conducted a multi-model multi-criteria sensitivity analysis using a novel and efficient technique, Variogram Analysis of Response Surfaces (VARS). The analysis was conducted using three physically-based hydrological models, applied at various scales ranging from small (hillslope) to large (watershed) scales. In each case, the sensitivity of simulated streamflow to model processes (represented through parameters) was measured using different metrics selected on the basis of various hydrograph characteristics, including high flows, low flows, and volume. It is demonstrated that metric choice has a significant influence on SA results and must be aligned with study objectives. Guidelines for identifying important model parameters from a multi-objective SA perspective are discussed as part of this study.

  4. Quality assessment of digital annotated ECG data from clinical trials by the FDA ECG Warehouse.

    Science.gov (United States)

    Sarapa, Nenad

    2007-09-01

    The FDA mandates that digital electrocardiograms (ECGs) from 'thorough' QTc trials be submitted to the ECG Warehouse in Health Level 7 extensible markup language (XML) format with annotated onset and offset points of waveforms. The FDA did not disclose the exact Warehouse metrics and minimal acceptable quality standards. The author describes the Warehouse scoring algorithms and metrics used by the FDA, points out ways to improve FDA review, and suggests Warehouse benefits for pharmaceutical sponsors. The Warehouse ranks individual ECGs according to their score for each quality metric and produces histogram distributions with Warehouse-specific thresholds that identify ECGs of questionable quality. Automatic Warehouse algorithms assess the quality of QT annotation and the duration of manual QT measurement by the central ECG laboratory.

  5. Total Probability of Collision as a Metric for Finite Conjunction Assessment and Collision Risk Management

    Science.gov (United States)

    Frigm, R.; Johnson, L.

    The Probability of Collision (Pc) has become a universal metric and statement of on-orbit collision risk. Although several flavors of the computation exist and are well-documented in the literature, the basic calculation requires the same input: estimates for the position, position uncertainty, and sizes of the two objects involved. The Pc is used operationally to make decisions on whether a given conjunction poses significant collision risk to the primary object (or space asset of concern). It is also used to determine the necessity and degree of mitigative action (typically in the form of an orbital maneuver) to be performed. The predicted post-maneuver Pc also informs the maneuver planning process regarding the timing, direction, and magnitude of the maneuver needed to mitigate the collision risk. Although the data sources, techniques, decision calculus, and workflows vary for different agencies and organizations, they all have a common thread. The standard conjunction assessment and collision risk concept of operations (CONOPS) predicts conjunctions, assesses the collision risk (typically via the Pc), and plans and executes avoidance activities for conjunctions as discrete events. As the space debris environment continues to grow and improvements are made to remote sensing capabilities and sensitivities to detect, track, and predict smaller debris objects, the number of conjunctions will in turn continue to increase. The expected order-of-magnitude increase in the number of predicted conjunctions will challenge the paradigm of treating each conjunction as a discrete event. The challenge will not be limited to workload issues, such as manpower and computing performance, but also the ability for satellite owner/operators to successfully execute their mission while also managing on-orbit collision risk. Executing a propulsive maneuver occasionally can easily be absorbed into the mission planning and operations tempo, whereas continuously planning evasive

  6. Assessing the quality of restored images in optical long-baseline interferometry

    Science.gov (United States)

    Gomes, Nuno; Garcia, Paulo J. V.; Thiébaut, Éric

    2017-03-01

    Assessing the quality of aperture synthesis maps is relevant for benchmarking image reconstruction algorithms, for the scientific exploitation of data from optical long-baseline interferometers, and for the design/upgrade of new/existing interferometric imaging facilities. Although metrics have been proposed in these contexts, no systematic study has been conducted on the selection of a robust metric for quality assessment. This article addresses the question: what is the best metric to assess the quality of a reconstructed image? It starts by considering several metrics and selecting a few based on general properties. Then, a variety of image reconstruction cases are considered. The observational scenarios are phase closure and phase referencing at the Very Large Telescope Interferometer (VLTI), for a combination of two, three, four and six telescopes. End-to-end image reconstruction is accomplished with the MIRA software, and several merit functions are put to test. It is found that convolution by an effective point spread function is required for proper image quality assessment. The effective angular resolution of the images is superior to naive expectation based on the maximum frequency sampled by the array. This is due to the prior information used in the aperture synthesis algorithm and to the nature of the objects considered. The ℓ1-norm is the most robust of all considered metrics, because being linear it is less sensitive to image smoothing by high regularization levels. For the cases considered, this metric allows the implementation of automatic quality assessment of reconstructed images, with a performance similar to human selection.
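    The recommended workflow, convolving the reference by an effective point spread function before comparing and then taking the ℓ1-norm, can be sketched as follows (a Gaussian is used as a stand-in for the instrument-specific effective PSF, which is an assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def l1_metric(reconstruction: np.ndarray, reference: np.ndarray,
              effective_beam_sigma: float) -> float:
    """l1 distance to the reference after smoothing it to the effective
    resolution (Gaussian stand-in for the interferometer's effective PSF)."""
    smoothed_ref = gaussian_filter(reference, sigma=effective_beam_sigma)
    return float(np.abs(reconstruction - smoothed_ref).sum())
```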

  7. Fifty shades of grey: Variability in metric-based assessment of surface waters using macroinvertebrates

    NARCIS (Netherlands)

    Keizer-Vlek, H.E.

    2014-01-01

    Since the introduction of the European Water Framework Directive (WFD) in 2000, every member state is obligated to assess the effects of human activities on the ecological quality status of all water bodies and to indicate the level of confidence and precision of the results provided by the monitori

  8. Color Image Quality Assessment Based on CIEDE2000

    Directory of Open Access Journals (Sweden)

    Yang Yang

    2012-01-01

    Full Text Available Combining the CIEDE2000 color difference formula and the printing industry standard for visual verification, we present an objective color image quality assessment method correlated with subjective visual perception. An objective score conformed to subjective perception (OSCSP Q) is proposed to directly reflect subjective visual perception. In addition, we present a general method to calibrate the correction factors of the color difference formula under real experimental conditions. Our experimental results show that the proposed DE2000-based metric can be consistent with the human visual system in general application environments.
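    The underlying ΔE00 aggregation can be sketched with scikit-image; the paper's OSCSP Q score and its experimentally calibrated correction factors are not reproduced, only a plain mean color difference over the image:

```python
import numpy as np
from skimage import color

def mean_ciede2000(rgb_reference: np.ndarray, rgb_test: np.ndarray) -> float:
    """Mean CIEDE2000 difference between two float RGB images in [0, 1]."""
    lab_ref = color.rgb2lab(rgb_reference)
    lab_test = color.rgb2lab(rgb_test)
    return float(color.deltaE_ciede2000(lab_ref, lab_test).mean())
```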

  9. Total Probability of Collision as a Metric for Finite Conjunction Assessment and Collision Risk Management

    Science.gov (United States)

    Frigm, Ryan C.; Hejduk, Matthew D.; Johnson, Lauren C.; Plakalovic, Dragan

    2015-01-01

    On-orbit collision risk is becoming an increasing mission risk to all operational satellites in Earth orbit. Managing this risk can be disruptive to mission and operations, present challenges for decision-makers, and is time-consuming for all parties involved. With the planned capability improvements to detecting and tracking smaller orbital debris and capacity improvements to routinely predict on-orbit conjunctions, this mission risk will continue to grow in terms of likelihood and effort. It is a very real possibility that the future space environment will not allow collision risk management and mission operations to be conducted in the same manner as they are today. This paper presents the concept of a finite conjunction assessment: one where each discrete conjunction is not treated separately but, rather, as a continuous event that must be managed concurrently. The paper also introduces the Total Probability of Collision as an analogous metric for finite conjunction assessment operations and provides several options for its usage in a Concept of Operations.
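    Assuming the individual conjunction events are independent, a total probability of collision over a window of n conjunctions combines the per-event Pc values as P_total = 1 − Π(1 − Pc_i); a minimal sketch:

```python
import math

def total_pc(pc_values: list[float]) -> float:
    """Total Pc over a set of conjunctions, assuming independent events."""
    return 1.0 - math.prod(1.0 - pc for pc in pc_values)

print(total_pc([1e-4, 5e-5, 2e-4]))  # ~3.5e-4 for three conjunctions
```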

  10. Challenges, Solutions, and Quality Metrics of Personal Genome Assembly in Advancing Precision Medicine.

    Science.gov (United States)

    Xiao, Wenming; Wu, Leihong; Yavas, Gokhan; Simonyan, Vahan; Ning, Baitang; Hong, Huixiao

    2016-04-22

    Even though each of us shares more than 99% of the DNA sequences in our genome, there are millions of small regions whose sequence or structure differs between individuals, giving us different characteristics of appearance or responsiveness to medical treatments. Currently, genetic variants in diseased tissues, such as tumors, are uncovered by exploring the differences between the reference genome and the sequences detected in the diseased tissue. However, the public reference genome was derived from the DNA of multiple individuals. As a result, the reference genome is incomplete and may misrepresent the sequence variants of the general population. The more reliable solution is to compare sequences of diseased tissue with the same individual's genome sequence derived from tissue in a normal state. As the price of sequencing a human genome has dropped dramatically to around $1000, documenting the personal genome of every individual is a promising prospect. However, de novo assembly of individual genomes at an affordable cost is still challenging; thus, to date, only a few human genomes have been fully assembled. In this review, we introduce the history of human genome sequencing and the evolution of sequencing platforms, from Sanger sequencing to emerging "third generation sequencing" technologies. We present the currently available de novo assembly and post-assembly software packages for human genome assembly and their requirements for computational infrastructures. We recommend that a combined hybrid assembly with long and short reads would be a promising way to generate good quality human genome assemblies and specify parameters for the quality assessment of assembly outcomes. We provide a perspective view of the benefit of using personal genomes as references and suggestions for obtaining a quality personal genome. Finally, we discuss the usage of the personal genome in aiding vaccine design and development, monitoring host immune-response, tailoring

  11. Convective Weather Forecast Quality Metrics for Air Traffic Management Decision-Making

    Science.gov (United States)

    Chatterji, Gano B.; Gyarfas, Brett; Chan, William N.; Meyn, Larry A.

    2006-01-01

    the process described in Refs. 5 through 7, in terms of percentage coverage or confidence level is notionally sound compared to characterizing in terms of probabilities because the probability of the forecast being correct can only be determined using actual observations. References 5 through 7 only use the forecast data and not the observations. The method for computing the probability of detection, false alarm ratio and several forecast quality metrics (Skill Scores) using both the forecast and observation data is given in Ref. 2. This paper extends the statistical verification method in Ref. 2 to determine co-occurrence probabilities. The method consists of computing the probability that a severe weather cell (grid location) is detected in the observation data in the neighborhood of the severe weather cell in the forecast data. Probabilities of occurrence, at the grid location and in its neighborhood, of higher severity and of lower severity in the observation data compared to the forecast data are examined. The method proposed in Refs. 5 through 7 is used for computing the probability that a certain number of cells in the neighborhood of severe weather cells in the forecast data are seen as severe weather cells in the observation data. Finally, the probability of existence of gaps in the observation data in the neighborhood of severe weather cells in forecast data is computed. Gaps are defined as openings between severe weather cells through which an aircraft can safely fly to its intended destination. The rest of the paper is organized as follows. Section II summarizes the statistical verification method described in Ref. 2. The extension of this method for computing the co-occurrence probabilities is discussed in Section III. Numerical examples using NCWF forecast data and NCWD observation data are presented in Section III to elucidate the characteristics of the co-occurrence probabilities. This section also discusses the procedure for computing
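
    The basic contingency-table quantities that the extension builds on are straightforward to compute. The sketch below assumes boolean forecast/observation grids of identical shape and shows the standard probability of detection and false alarm ratio; it is an illustration of the textbook definitions, not the paper's full verification method.

        import numpy as np

        def pod_far(forecast, observed):
            """Probability of detection and false alarm ratio from
            boolean severe-weather grids of identical shape."""
            hits = np.sum(forecast & observed)
            misses = np.sum(~forecast & observed)
            false_alarms = np.sum(forecast & ~observed)
            pod = hits / (hits + misses)
            far = false_alarms / (hits + false_alarms)
            return pod, far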

  12. Kurtosis corrected sound pressure level as a noise metric for risk assessment of occupational noises.

    Science.gov (United States)

    Goley, G Steven; Song, Won Joon; Kim, Jay H

    2011-03-01

    Current noise guidelines use an energy-based noise metric to predict the risk of hearing loss, and thus ignore the effect of temporal characteristics of the noise. The practice is widely considered to underestimate the risk of a complex noise environment, where impulsive noises are embedded in a steady-state noise. A basic form for noise metrics is designed by combining the equivalent sound pressure level (SPL) and a temporal correction term defined as a function of the kurtosis of the noise. Several noise metrics are developed by varying this basic form and evaluated utilizing existing chinchilla noise exposure data. It is shown that the kurtosis correction term significantly improves the correlation of the noise metric with the measured hearing losses in chinchillas. The average SPL of the frequency components of the noise that define the hearing loss, with a kurtosis correction term, is identified as the best of the tested noise metrics. One of the investigated metrics, the kurtosis-corrected A-weighted SPL, is applied to data from a human exposure study as a preview of applying the metrics to human guidelines. The possibility of applying the noise metrics to human guidelines is discussed.
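
    A minimal sketch of a metric of this general form follows: the equivalent SPL plus a term proportional to log10 of the waveform kurtosis relative to the Gaussian value of 3. The weighting constant lam and the use of the plain (unweighted) level are illustrative assumptions, not the exact metric selected in the study.

        import numpy as np
        from scipy.stats import kurtosis

        def kurtosis_corrected_level(pressure, p_ref=20e-6, lam=4.0):
            """Equivalent SPL with a kurtosis correction term of the form
            L_eq + lam * log10(beta / 3), beta being the Pearson kurtosis."""
            l_eq = 10.0 * np.log10(np.mean(pressure**2) / p_ref**2)
            beta = kurtosis(pressure, fisher=False)  # Gaussian noise gives 3
            return l_eq + lam * np.log10(beta / 3.0)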

  13. Perceptual full-reference quality assessment of stereoscopic images by considering binocular visual characteristics.

    Science.gov (United States)

    Shao, Feng; Lin, Weisi; Gu, Shanbo; Jiang, Gangyi; Srikanthan, Thambipillai

    2013-05-01

    Perceptual quality assessment is a challenging issue in 3D signal processing research. It is important to study the 3D signal directly, rather than simply extending 2D metrics to the 3D case as in some previous studies. In this paper, we propose a new perceptual full-reference quality assessment metric for stereoscopic images that considers binocular visual characteristics. The major technical contribution of this paper is that binocular perception and combination properties are considered in quality assessment. To be more specific, we first perform left-right consistency checks and compare the matching error between corresponding pixels in the binocular disparity calculation, and classify the stereoscopic images into non-corresponding, binocular fusion, and binocular suppression regions. Also, local phase and local amplitude maps are extracted from the original and distorted stereoscopic images as features in quality assessment. Then, each region is evaluated independently by considering its binocular perception property, and all evaluation results are integrated into an overall score. In addition, a binocular just noticeable difference model is used to reflect the visual sensitivity for the binocular fusion and suppression regions. Experimental results show that, compared with the relevant existing metrics, the proposed metric achieves higher consistency with subjective assessment of stereoscopic images.

  14. Applying Undertaker to quality assessment

    DEFF Research Database (Denmark)

    Archie, John G.; Paluszewski, Martin; Karplus, Kevin

    2009-01-01

    Our group tested three quality assessment functions in CASP8: a function which used only distance constraints derived from alignments (SAM-T08-MQAO), a function which added other single-model terms to the distance constraints (SAM-T08-MQAU), and a function which used both single-model and consensus terms (SAM-T08-MQAC). We analyzed the functions both for ranking models for a single target and for producing an accurate estimate of GDT_TS. Our functions were optimized for the ranking problem, so are perhaps more appropriate for metaserver applications than for providing trustworthiness estimates for single models. On the CASP8 test, the functions with more terms performed better. The MQAC consensus method was substantially better than either single-model function, and the MQAU function was substantially better than the MQAO function that used only constraints from alignments.

  15. Visual Perception Based Objective Stereo Image Quality Assessment for 3D Video Communication

    Directory of Open Access Journals (Sweden)

    Gangyi Jiang

    2014-04-01

    Full Text Available Stereo image quality assessment is a crucial and challenging issue in 3D video communication. One of the major difficulties is how to weigh the binocular masking effect. In order to establish an assessment model more in line with the human visual system, the Watson model is adopted in this study, which defines the visibility threshold under no distortion as composed of contrast sensitivity, masking effect and error. As a result, we propose an Objective Stereo Image Quality Assessment method (OSIQA), organically combining a new Left-Right view Image Quality Assessment (LR-IQA) metric and a Depth Perception Image Quality Assessment (DP-IQA) metric. The new LR-IQA metric is first given to calculate the changes of perception coefficients in each sub-band, utilizing the Watson model and the human visual system, after wavelet decomposition of the left and right images in a stereo image pair. Then, a concept of the absolute difference map is defined to describe the abstract differential value between the left and right view images, and the DP-IQA metric is presented to measure structure distortion between the original and distorted abstract difference maps through a luminance function, error sensitivity and a contrast function. Finally, the OSIQA metric is generated by a weighted multiplicative fitting of the LR-IQA and DP-IQA metrics. Experimental results show that the proposed method is highly correlated with human visual judgments (Mean Opinion Score), and the correlation coefficient and monotonicity exceed 0.92 under five types of distortion: Gaussian blur, Gaussian noise, JP2K compression, JPEG compression and H.264 compression.

  16. Assessing the effect of scale on the ability of landscape structure metrics to discriminate landscape types in Mediterranean forest districts

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Feced, C.; Saura, S.; Elena-Rosello, R.

    2010-07-01

    Scale is a key concept in landscape ecology. Although several studies have analyzed the effect of scale on landscape structure metrics, there is still a need to focus on the ability of these metrics to discriminate between landscape types at different scales, particularly in Mediterranean forest landscapes. In this paper we assess the scaling behavior and correlation patterns of eight commonly-used landscape metrics in two Spanish forest districts (Pinares in Burgos and Soria, and Alto Tajo in Guadalajara) in order to detect at which grain sizes the landscape type differences are emphasized. This occurred in both districts at fine spatial resolutions (25 m) for the metrics related to shape complexity and the amount of boundaries, while a coarser spatial resolution (500 m) was required for the landscape diversity and mixture metrics, suggesting that the differences in the spatial and compositional diversity of these landscape types are not so large locally (alpha diversity) but are amplified at broader scales (gamma diversity). The maximum variability for the fragmentation-related metrics did not appear at the same scale in both districts, because forest fragmentation in the Pinares district is mainly driven by harvesting treatments that operate at considerably different scales from those in the less intensively managed district of Alto Tajo. Our methodology and results make it possible to identify and separately assess those complex land cover mosaics that result from a similar set of biological and social forces and constraints. This should be valuable for improved forest landscape planning and monitoring with a quantitative ecological basis in the Mediterranean and other temperate areas. (Author) 43 refs.

  17. Assessing the Greenness of Chemical Reactions in the Laboratory Using Updated Holistic Graphic Metrics Based on the Globally Harmonized System of Classification and Labeling of Chemicals

    Science.gov (United States)

    Ribeiro, M. Gabriela T. C.; Yunes, Santiago F.; Machado, Adelio A. S. C.

    2014-01-01

    Two graphic holistic metrics for assessing the greenness of synthesis, the "green star" and the "green circle", have been presented previously. These metrics assess the greenness by the degree of accomplishment of each of the 12 principles of green chemistry that apply to the case under evaluation. The criteria for assessment…

  18. Software Metrics Evaluation Based on Entropy

    CERN Document Server

    Selvarani, R; Ramachandran, Muthu; Prasad, Kamakshi

    2010-01-01

    Software engineering activities in industry have come a long way, with various improvements brought into the stages of the software development life cycle. The complexity of modern software, the commercial constraints and the expectation for high quality products demand accurate fault prediction based on OO design metrics at the class level in the early stages of software development. Object-oriented class metrics are used as quality predictors throughout the entire OO software development life cycle, even when a highly iterative, incremental model or agile software process is employed. Recent research has shown that some of the OO design metrics are useful for predicting the fault-proneness of classes. In this paper the empirical validation of a set of metrics proposed by Chidamber and Kemerer is performed to assess their ability in predicting software quality in terms of fault proneness and degradation. We have also proposed the design complexity of object-oriented software with Weighted Methods per Class m...

  19. Algorithm for automatic forced spirometry quality assessment: technological developments.

    Directory of Open Access Journals (Sweden)

    Umberto Melia

    Full Text Available We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.

  20. Algorithm for automatic forced spirometry quality assessment: technological developments.

    Science.gov (United States)

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.

  1. Healthcare quality maturity assessment model based on quality drivers.

    Science.gov (United States)

    Ramadan, Nadia; Arafeh, Mazen

    2016-04-18

    Purpose - Healthcare providers differ in their readiness and maturity levels regarding quality and quality management systems applications. The purpose of this paper is to provide a useful quantitative quality maturity-level assessment tool for healthcare organizations. Design/methodology/approach - The model proposes five quality maturity levels (chaotic, primitive, structured, mature and proficient) based on six quality drivers: top management, people, operations, culture, quality focus and accreditation. Findings - Healthcare managers can apply the model to identify the status quo and quality shortcomings and to evaluate ongoing progress. Practical implications - The model has been incorporated in an interactive Excel worksheet that visually displays the quality maturity-level risk meter. The tool has been applied successfully to local hospitals. Originality/value - The proposed six quality driver scales appear to measure healthcare provider maturity levels on a single quality meter.
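
    A minimal sketch of how such a quality meter could be scored is given below; the equal driver weights, the 1-5 scoring scale and the linear mapping to levels are assumptions for illustration, not the paper's calibrated tool.

        def maturity_level(driver_scores, weight=1.0 / 6):
            """Map 1-5 scores on the six quality drivers to one of the
            five maturity levels via a (here equally) weighted average."""
            drivers = ["top management", "people", "operations",
                       "culture", "quality focus", "accreditation"]
            score = sum(driver_scores[d] * weight for d in drivers)
            levels = ["chaotic", "primitive", "structured",
                      "mature", "proficient"]
            return levels[min(int(round(score)), 5) - 1], score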

  2. How to assess the quality of your analytical method?

    Science.gov (United States)

    Topic, Elizabeta; Nikolac, Nora; Panteghini, Mauro; Theodorsson, Elvar; Salvagno, Gian Luca; Miler, Marijana; Simundic, Ana-Maria; Infusino, Ilenia; Nordin, Gunnar; Westgard, Sten

    2015-10-01

    Laboratory medicine is amongst the fastest growing fields in medicine, crucial in diagnosis, support of prevention and in the monitoring of disease for individual patients and for the evaluation of treatment for populations of patients. Therefore, high quality and safety in laboratory testing has a prominent role in high-quality healthcare. Applied knowledge and competencies of professionals in laboratory medicine increase the clinical value of laboratory results by decreasing laboratory errors, increasing appropriate utilization of tests, and increasing cost effectiveness. This collective paper provides insights into how to validate laboratory assays and assess the quality of methods. It is a synopsis of the lectures at the 15th European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Continuing Postgraduate Course in Clinical Chemistry and Laboratory Medicine entitled "How to assess the quality of your method?" (Zagreb, Croatia, 24-25 October 2015). The leading topics discussed include the who, what and when of method validation/verification, verification of imprecision and bias, verification of reference intervals, verification of qualitative test procedures, verification of blood collection systems, comparability of results among methods and analytical systems, limit of detection, limit of quantification and limit of decision, how to assess measurement uncertainty, the optimal use of Internal Quality Control and External Quality Assessment data, Six Sigma metrics, performance specifications, and biological variation. This article, which continues the annual tradition of collective papers from the EFLM continuing postgraduate courses in clinical chemistry and laboratory medicine, aims to provide further contributions by discussing the quality of laboratory methods and measurements and, at the same time, to offer continuing professional development to the attendees.

  3. Quality assurance in performance assessments

    Energy Technology Data Exchange (ETDEWEB)

    Maul, P.R.; Watkins, B.M.; Salter, P.; Mcleod, R. [QuantiSci Ltd, Henley-on-Thames (United Kingdom)]

    1999-01-01

    Following publication of the Site-94 report, SKI wishes to review how Quality Assurance (QA) issues could be treated in future work, both in undertaking their own Performance Assessment (PA) calculations and in scrutinising documents supplied by SKB (on planning a repository for spent fuel in Sweden). The aim of this report is to identify the key QA issues and to outline the nature and content of a QA plan which would be suitable for SKI, bearing in mind the requirements and recommendations of relevant standards. Emphasis is on issues which are specific to Performance Assessments for deep repositories for radioactive wastes, but consideration is also given to issues which need to be addressed in all large projects. Given the long time over which the performance of a deep repository system must be evaluated, the demonstration that a repository is likely to perform satisfactorily relies on the use of computer-generated model predictions of system performance. This raises particular QA issues which are generally not encountered in other technical areas (for instance, power station operations). The traceability of the arguments used is a key QA issue, as are conceptual model uncertainty and code verification and validation; these were all included in the consideration of overall uncertainties in the Site-94 project. Additionally, issues which are particularly relevant to SKI include how QA in a PA fits in with the general QA procedures of the organisation undertaking the work, and the relationship between QA as applied by the regulator and by the implementor of a repository development programme. Section 2 introduces the discussion of these issues by reviewing the standards and guidance which are available from national and international organisations. This is followed in Section 3 by a review of specific issues which arise from the Site-94 exercise. An outline procedure for managing QA issues in SKI is put forward as a basis for discussion in Section 4. It is hoped that

  4. Fuzzy Multiple Metrics Link Assessment for Routing in Mobile Ad-Hoc Network

    Science.gov (United States)

    Soo, Ai Luang; Tan, Chong Eng; Tay, Kai Meng

    2011-06-01

    In this work, we investigate the use of a Sugeno fuzzy inference system (FIS) for route selection in mobile ad-hoc networks (MANETs). The Sugeno FIS is introduced into the Ad-Hoc On Demand Multipath Distance Vector (AOMDV) routing protocol, which is derived from its predecessor, the Ad-Hoc On Demand Distance Vector (AODV) protocol. Instead of the conventional approach of considering only a single metric to choose the best route, our proposed fuzzy decision-making model considers up to three metrics. In the model, the crisp inputs of the three parameters are fed into an FIS and processed in stages, i.e., fuzzification, inference, and defuzzification. Finally, after passing through all the stages, a single score is generated from the combined metrics, which is used to measure the credibility of all discovered routes. Results obtained from simulations show a promising improvement compared to AOMDV and AODV.

  5. Assessing Quality in Home Visiting Programs

    Science.gov (United States)

    Korfmacher, Jon; Laszewski, Audrey; Sparr, Mariel; Hammel, Jennifer

    2013-01-01

    Defining quality and designing a quality assessment measure for home visitation programs is a complex and multifaceted undertaking. This article summarizes the process used to create the Home Visitation Program Quality Rating Tool (HVPQRT) and identifies next steps for its development. The HVPQRT measures both structural and dynamic features of…

  6. Adding A Spending Metric To Medicare's Value-Based Purchasing Program Rewarded Low-Quality Hospitals.

    Science.gov (United States)

    Das, Anup; Norton, Edward C; Miller, David C; Ryan, Andrew M; Birkmeyer, John D; Chen, Lena M

    2016-05-01

    In fiscal year 2015 the Centers for Medicare and Medicaid Services expanded its Hospital Value-Based Purchasing program by rewarding or penalizing hospitals for their performance on both spending and quality. This represented a sharp departure from the program's original efforts to incentivize hospitals for quality alone. How this change redistributed hospital bonuses and penalties was unknown. Using data from 2,679 US hospitals that participated in the program in fiscal years 2014 and 2015, we found that the new emphasis on spending rewarded not only low-spending hospitals but some low-quality hospitals as well. Thirty-eight percent of low-spending hospitals received bonuses in fiscal year 2014, compared to 100 percent in fiscal year 2015. However, low-quality hospitals also began to receive bonuses (0 percent in fiscal year 2014 compared to 17 percent in 2015). All high-quality hospitals received bonuses in both years. The Centers for Medicare and Medicaid Services should consider incorporating a minimum quality threshold into the Hospital Value-Based Purchasing program to avoid rewarding low-quality, low-spending hospitals.

  7. Cost of Quality (CoQ) metrics for telescope operations and project management

    Science.gov (United States)

    Radziwill, Nicole M.

    2006-06-01

    This study describes the goals, foundational work, and early returns associated with establishing a pilot quality cost program at the Robert C. Byrd Green Bank Telescope (GBT). Quality costs provide a means to communicate the results of process improvement efforts in the universal language of project management: money. This scheme stratifies prevention, appraisal, internal failure and external failure costs, and seeks to quantify and compare the up-front investment in planning and risk management versus the cost of rework. An activity-based Cost of Quality (CoQ) model was blended with the Cost of Software Quality (CoSQ) model that has been successfully deployed at Raytheon Electronic Systems (RES) for this pilot program, analyzing the efforts of the GBT Software Development Division. Using this model, questions that can now be answered include: What is an appropriate length for our development cycle? Are some observing modes more reliable than others? Are we testing too much, or not enough? How good is our software quality, not in terms of defects reported and fixed, but in terms of its impact on the user? The ultimate goal is to provide a higher quality of service to customers of the telescope.
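
    The stratification described above can be captured in a few lines; the bucket names follow the abstract, while the dictionary layout and the conformance/non-conformance split are illustrative assumptions rather than the GBT program's actual model.

        def cost_of_quality(costs):
            """Aggregate activity costs into the four classic CoQ buckets
            and report conformance vs. non-conformance spend."""
            prevention = sum(costs.get("prevention", []))
            appraisal = sum(costs.get("appraisal", []))
            internal = sum(costs.get("internal_failure", []))
            external = sum(costs.get("external_failure", []))
            return {
                "conformance": prevention + appraisal,
                "non_conformance": internal + external,
                "total": prevention + appraisal + internal + external,
            }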

  8. Research on Process and Model of Software Quality Metrics

    Institute of Scientific and Technical Information of China (English)

    杜金环; 金璐璐

    2014-01-01

    Software quality measurement is important work for strengthening software project management: it can predict potential defects in the software, measure the software before the product is complete, and improve software quality based on the measurement results. Given that software quality is difficult to measure, this paper presents a comprehensive study of the measurement process and model. First, the software quality measurement process is studied with reference to the relevant IEEE and CMMI literature. Second, a hierarchical analysis structure model of the software quality metrics index system is constructed. Then, a software quality measurement model is built on the basis of the linear weighted synthesis method. Finally, a case study illustrates the specific application of the model. This work provides a new method for software measurement; however, because software quality involves many uncertain factors, practical applications should fully take into account the special nature of software and draw on measurement methods from other disciplines.
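
    The linear weighted synthesis at the heart of such a model reduces to Q = sum_i w_i * s_i with the weights normalised to 1. A minimal sketch follows, with illustrative metric names and weights (e.g. as produced by a hierarchical index system); it is not the paper's exact model.

        def weighted_quality_score(scores, weights):
            """Linear weighted synthesis of per-metric quality scores."""
            total = sum(weights.values())
            return sum(scores[k] * weights[k] / total for k in weights)

        q = weighted_quality_score(
            scores={"functionality": 0.8, "reliability": 0.7, "maintainability": 0.9},
            weights={"functionality": 0.5, "reliability": 0.3, "maintainability": 0.2},
        )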

  9. Assessing Field Spectroscopy Metadata Quality

    Directory of Open Access Journals (Sweden)

    Barbara A. Rasaiah

    2015-04-01

    Full Text Available This paper presents the proposed criteria for measuring the quality and completeness of field spectroscopy metadata in a spectral archive. Definitions for metadata quality and completeness for field spectroscopy datasets are introduced. Unique methods for measuring quality and completeness of metadata to meet the requirements of field spectroscopy datasets are presented. Field spectroscopy metadata quality can be defined in terms of (but is not limited to) logical consistency, lineage, semantic and syntactic error rates, compliance with a quality standard, quality assurance by a recognized authority, and reputational authority of the data owners/data creators. Two spectral libraries are examined as case studies of operationalized metadata policies, and the degree to which they are aligned with the needs of field spectroscopy scientists. The case studies reveal that the metadata in publicly available spectral datasets are underperforming on the quality and completeness measures. This paper is part two in a series examining the issues central to a metadata standard for field spectroscopy datasets.

  10. Compression-based classification of biological sequences and structures via the Universal Similarity Metric: experimental assessment

    Directory of Open Access Journals (Sweden)

    Manzini Giovanni

    2007-07-01

    Full Text Available Background: Similarity of sequences is a key mathematical notion for classification and phylogenetic studies in biology. It is currently primarily handled using alignments. However, the alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined only to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov complexity, and universality is its most novel striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness are tested on various data sets, yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM, and mostly at a qualitative level; no comparison among UCD, NCD and CD is available; and no comparison of USM with existing methods, both based on alignments and not, seems to be available. Results: We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to molecular biology. This offers the first systematic and quantitative experimental assessment of this methodology, which naturally complements the many theoretical and the preliminary experimental results available. Moreover, we compare the USM methodology both with methods based on alignments and not. We may group our experiments into two sets. The first one, performed via ROC
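
    The NCD approximation mentioned above has a compact, well-known form, sketched here with zlib standing in for the compressor; real studies would substitute stronger compressors such as those benchmarked in the paper.

        import zlib

        def ncd(x: bytes, y: bytes) -> float:
            """Normalized Compression Dissimilarity:
            (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
            cx = len(zlib.compress(x))
            cy = len(zlib.compress(y))
            cxy = len(zlib.compress(x + y))
            return (cxy - min(cx, cy)) / max(cx, cy)

        print(ncd(b"ACGTACGTACGT", b"ACGTACGAACGT"))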

  11. Use of Frequency Response Metrics to Assess the Planning and Operating Requirements for Reliable Integration of Variable Renewable Generation

    Energy Technology Data Exchange (ETDEWEB)

    Eto, Joseph H.; Undrill, John; Mackin, Peter; Daschmans, Ron; Williams, Ben; Haney, Brian; Hunt, Randall; Ellis, Jeff; Illian, Howard; Martinez, Carlos; O' Malley, Mark; Coughlin, Katie; LaCommare, Kristina Hamachi

    2010-12-20

    An interconnected electric power system is a complex system that must be operated within a safe frequency range in order to reliably maintain the instantaneous balance between generation and load. This is accomplished by ensuring that adequate resources are available to respond to expected and unexpected imbalances and restoring frequency to its scheduled value in order to ensure uninterrupted electric service to customers. Electrical systems must be flexible enough to reliably operate under a variety of "change" scenarios. System planners and operators must understand how other parts of the system change in response to the initial change, and need tools to manage such changes to ensure reliable operation within the scheduled frequency range. This report presents a systematic approach to identifying metrics that are useful for operating and planning a reliable system with increased amounts of variable renewable generation which builds on existing industry practices for frequency control after unexpected loss of a large amount of generation. The report introduces a set of metrics or tools for measuring the adequacy of frequency response within an interconnection. Based on the concept of the frequency nadir, these metrics take advantage of new information gathering and processing capabilities that system operators are developing for wide-area situational awareness. Primary frequency response is the leading metric that will be used by this report to assess the adequacy of primary frequency control reserves necessary to ensure reliable operation. It measures what is needed to arrest frequency decline (i.e., to establish frequency nadir) at a frequency higher than the highest set point for under-frequency load shedding within an interconnection. These metrics can be used to guide the reliable operation of an interconnection under changing circumstances.
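
    A minimal sketch of a nadir-based check is given below; the nominal frequency, the load-shedding set point and the input format are illustrative assumptions, not the report's defined metrics.

        import numpy as np

        def frequency_nadir_metrics(t, f, f_nominal=60.0, ufls_setpoint=59.5):
            """Locate the post-event frequency nadir and its margin to the
            highest under-frequency load-shedding set point."""
            i = int(np.argmin(f))
            return {
                "nadir_hz": f[i],
                "time_of_nadir_s": t[i],
                "margin_to_ufls_hz": f[i] - ufls_setpoint,
                "max_deviation_hz": f_nominal - f[i],
            }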

  12. Local diffusion homogeneity (LDH): an inter-voxel diffusion MRI metric for assessing inter-subject white matter variability.

    Science.gov (United States)

    Gong, Gaolang

    2013-01-01

    Many diffusion parameters and indices (e.g., fractional anisotropy [FA] and mean diffusivity [MD]) have been derived from diffusion magnetic resonance imaging (MRI) data. These parameters have been extensively applied as imaging markers for localizing white matter (WM) changes under various conditions (e.g., development, degeneration and disease). However, the vast majority of the existing parameters is derived from intra-voxel analyses and represents the diffusion properties solely within the voxel unit. Other types of parameters that characterize inter-voxel relationships have been largely overlooked. In the present study, we propose a novel inter-voxel metric referred to as the local diffusion homogeneity (LDH). This metric quantifies the local coherence of water molecule diffusion in a model-free manner. It can serve as an additional marker for evaluating the WM microstructural properties of the brain. To assess the distinguishing features between LDH and FA/MD, the metrics were systematically compared across space and subjects. As an example, both the LDH and FA/MD metrics were applied to measure age-related WM changes. The results indicate that LDH reveals unique inter-subject variability in specific WM regions (e.g., cerebral peduncle, internal capsule and splenium). Furthermore, there are regions in which measurements of age-related WM alterations with the LDH and FA/MD metrics yield discrepant results. These findings suggest that LDH and FA/MD have different sensitivities to specific WM microstructural properties. Taken together, the present study shows that LDH is complementary to the conventional diffusion-MRI markers and may provide additional insights into inter-subject WM variability. Further studies, however, are needed to uncover the neuronal mechanisms underlying the LDH.
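
    In the spirit of LDH, the local coherence of diffusion can be scored with Kendall's coefficient of concordance over a voxel neighbourhood. The sketch below assumes the diffusion-weighted signal series of the neighbouring voxels are stacked as rows, and it ignores tie corrections; it is an illustration, not the published LDH implementation.

        import numpy as np
        from scipy.stats import rankdata

        def kendalls_w(series):
            """Kendall's W for an (m, n) array: m voxels' series of n
            diffusion-weighted measurements; 1 means full coherence."""
            m, n = series.shape
            ranks = np.apply_along_axis(rankdata, 1, series)
            col_sums = ranks.sum(axis=0)
            s = ((col_sums - col_sums.mean()) ** 2).sum()
            return 12.0 * s / (m**2 * (n**3 - n))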

  13. Local diffusion homogeneity (LDH): an inter-voxel diffusion MRI metric for assessing inter-subject white matter variability.

    Directory of Open Access Journals (Sweden)

    Gaolang Gong

    Full Text Available Many diffusion parameters and indices (e.g., fractional anisotropy [FA] and mean diffusivity [MD]) have been derived from diffusion magnetic resonance imaging (MRI) data. These parameters have been extensively applied as imaging markers for localizing white matter (WM) changes under various conditions (e.g., development, degeneration and disease). However, the vast majority of the existing parameters is derived from intra-voxel analyses and represents the diffusion properties solely within the voxel unit. Other types of parameters that characterize inter-voxel relationships have been largely overlooked. In the present study, we propose a novel inter-voxel metric referred to as the local diffusion homogeneity (LDH). This metric quantifies the local coherence of water molecule diffusion in a model-free manner. It can serve as an additional marker for evaluating the WM microstructural properties of the brain. To assess the distinguishing features between LDH and FA/MD, the metrics were systematically compared across space and subjects. As an example, both the LDH and FA/MD metrics were applied to measure age-related WM changes. The results indicate that LDH reveals unique inter-subject variability in specific WM regions (e.g., cerebral peduncle, internal capsule and splenium). Furthermore, there are regions in which measurements of age-related WM alterations with the LDH and FA/MD metrics yield discrepant results. These findings suggest that LDH and FA/MD have different sensitivities to specific WM microstructural properties. Taken together, the present study shows that LDH is complementary to the conventional diffusion-MRI markers and may provide additional insights into inter-subject WM variability. Further studies, however, are needed to uncover the neuronal mechanisms underlying the LDH.

  14. Using Landscape Metrics Analysis and Analytic Hierarchy Process to Assess Water Harvesting Potential Sites in Jordan

    Directory of Open Access Journals (Sweden)

    Abeer Albalawneh

    2015-09-01

    Full Text Available Jordan is characterized as a "water scarce" country. Therefore, conserving ecosystem services such as water regulation and soil retention is challenging. In Jordan, rainwater harvesting has been adapted to meet those challenges. However, the spatial composition and configuration features of a target landscape are rarely considered when selecting a rainwater-harvesting site. This study aimed to introduce landscape spatial features into the schemes for selecting a proper water-harvesting site. Landscape metrics analysis was used to quantify 10 metrics for three potential landscapes (i.e., Watershed 104 (WS 104), Watershed 59 (WS 59), and Watershed 108 (WS 108)) located in the Jordanian Badia region. Results of the metrics analysis showed that the three non-vegetative land cover types in the three landscapes were highly suitable for serving as rainwater harvesting sites. Furthermore, the Analytic Hierarchy Process (AHP) was used to prioritize the fitness of the three target sites by comparing their landscape metrics. Results of the AHP indicate that the non-vegetative land cover in the WS 104 landscape was the most suitable site for rainwater harvesting intervention, based on its dominance, connectivity, shape, and low degree of fragmentation. Our study advances water harvesting network design by considering the landscape spatial pattern.
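
    A minimal sketch of the AHP step follows: the priority vector of a pairwise-comparison matrix via its principal eigenvector, with Saaty's consistency check. The example matrix comparing the three watersheds is invented for illustration, not taken from the study.

        import numpy as np

        def ahp_priorities(pairwise, random_index=0.58):
            """Priority weights and consistency ratio of an AHP
            pairwise-comparison matrix (random_index is Saaty's RI,
            0.58 for a 3x3 matrix)."""
            a = np.asarray(pairwise, dtype=float)
            vals, vecs = np.linalg.eig(a)
            k = np.argmax(vals.real)
            w = np.abs(vecs[:, k].real)
            w /= w.sum()
            n = a.shape[0]
            ci = (vals.real[k] - n) / (n - 1)  # consistency index
            return w, ci / random_index

        weights, cr = ahp_priorities([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])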

  15. EMF exposure assessment in the Finnish garment industry: evaluation of proposed EMF exposure metrics.

    Science.gov (United States)

    Hansen, N H; Sobel, E; Davanipour, Z; Gillette, L M; Niiranen, J; Wilson, B W

    2000-01-01

    Recently published studies indicate that having worked in occupations that involve moderate to high electromagnetic field (EMF) exposure is a risk factor for neurodegenerative diseases, including Alzheimer's disease. In these studies, the occupational groups most over-represented for EMF exposure comprised seamstresses, dressmakers, and tailors. Future epidemiologic studies designed to evaluate the possibility of a causal relationship between exposure to EMF and a neurodegenerative disease endpoint, such as incidence of Alzheimer's disease, will benefit from the measurement of electromagnetic field metrics with potential biological relevance. Data collection methodology in such studies would be highly dependent upon how the metrics are defined. In this research the authors developed and demonstrated (1) protocols for collecting EMF exposure data suitable for estimating a variety of exposure metrics that may have biological relevance, and (2) analytical methods for calculation of these metrics. The authors show how exposure might be estimated under each of the three prominent EMF health-effects mechanism theories and evaluate the assertion that relative exposure ranking is dependent on which mechanism is assumed. The authors also performed AC RMS magnetic flux density measurements, confirming previously reported findings. The results indicate that seamstresses, as an occupational group, should be considered for study of the possible health effects of long-term EMF exposure.

  16. How the choice of flood damage metrics influences urban flood risk assessment

    NARCIS (Netherlands)

    Ten Veldhuis, J.A.E.

    2011-01-01

    This study presents a first attempt to quantify tangible and intangible flood damage according to two different damage metrics: monetary values and number of people affected by flooding. Tangible damage includes material damage to buildings and infrastructure; intangible damage includes damages that

  17. Quality Assessment in the Primary care

    OpenAIRE

    Muharrem Ak

    2013-01-01

    Quality Assessment in Primary Care. Dear Editor: I have read the article titled "Implementation of Rogi Kalyan Samiti (RKS) at Primary Health Centre Durvesh" with great interest. Shrivastava et al. concluded that the assessment mechanism for the achievement of objectives for the suggested RKS model was not successful (1). Hereby I would like to emphasize the importance of quality assessment (QA), especially in the era of newly established primary care implementations in our coun...

  18. Metric for Estimating Congruity between Quantum Images

    Directory of Open Access Journals (Sweden)

    Abdullah M. Iliyasu

    2016-10-01

    Full Text Available An enhanced quantum-based image fidelity metric, the QIFM metric, is proposed as a tool to assess the "congruity" between two or more quantum images. The often confounding contrariety that distinguishes between classical and quantum information processing makes the widely accepted peak-signal-to-noise-ratio (PSNR) ill-suited for use in the quantum computing framework, whereas the prohibitive cost of the probability-based similarity score makes it imprudent for use as an effective image quality metric. Unlike the aforementioned image quality measures, the proposed QIFM metric is calibrated as a pixel difference-based image quality measure that is sensitive to the intricacies inherent to quantum image processing (QIP). As proposed, the QIFM is configured with in-built non-destructive measurement units that preserve the coherence necessary for quantum computation. This design moderates the cost of executing the QIFM in order to estimate congruity between two or more quantum images. A statistical analysis also shows that our proposed QIFM metric has a better correlation with digital expectation of likeness between images than other available quantum image quality measures. Therefore, the QIFM offers a competent substitute for the PSNR as an image quality measure in the quantum computing framework, thereby providing a tool to effectively assess fidelity between images in quantum watermarking, quantum movie aggregation and other applications in QIP.

  19. Measuring scientific impact beyond academia: An assessment of existing impact metrics and proposed improvements

    Science.gov (United States)

    Liakata, Maria; Clare, Amanda; Duma, Daniel

    2017-01-01

    How does scientific research affect the world around us? Being able to answer this question is of great importance in order to appropriately channel efforts and resources in science. The impact of scientists in academia is currently measured by citation-based metrics such as h-index, i-index and citation counts. These academic metrics aim to represent the dissemination of knowledge among scientists rather than the impact of the research on the wider world. In this work we are interested in measuring scientific impact beyond academia, on the economy, society, health and legislation (comprehensive impact). Indeed scientists are asked to demonstrate evidence of such comprehensive impact by authoring case studies in the context of the Research Excellence Framework (REF). We first investigate the extent to which existing citation-based metrics can be indicative of comprehensive impact. We have collected all recent REF impact case studies from 2014 and we have linked these to papers in citation networks that we constructed and derived from CiteSeerX, arXiv and PubMed Central using a number of text processing and information retrieval techniques. We have demonstrated that existing citation-based metrics for impact measurement do not correlate well with REF impact results. We also consider metrics of online attention surrounding scientific works, such as those provided by the Altmetric API. We argue that in order to be able to evaluate wider non-academic impact we need to mine information from a much wider set of resources, including social media posts, press releases, news articles and political debates stemming from academic work. We also provide our data as a free and reusable collection for further analysis, including the PubMed citation network and the correspondence between REF case studies, grant applications and the academic literature. PMID:28278243

  20. Elliptical Local Vessel Density: a Fast and Robust Quality Metric for Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL]; Chaum, Edward [ORNL]; Karnowski, Thomas Paul [ORNL]; Meriaudeau, Fabrice [ORNL]; Tobin Jr, Kenneth William [ORNL]; Abramoff, M.D. [University of Iowa]

    2008-01-01

    A great effort of the research community is geared towards the creation of an automatic screening system able to promptly detect diabetic retinopathy with the use of fundus cameras. In addition, there are some documented approaches to the problem of automatically judging the image quality. We propose a new set of features, independent of field of view or resolution, to describe the morphology of the patient's vessels. Our initial results suggest that they can be used to estimate the image quality in a time one order of magnitude shorter than previous techniques.

  1. Data connectivity: A critical tool for external quality assessment

    Directory of Open Access Journals (Sweden)

    Ben Cheng

    2016-10-01

    Full Text Available Point-of-care (POC) tests have been useful in increasing access to testing and treatment monitoring for HIV. Decentralising testing from laboratories to hundreds of sites around a country presents tremendous challenges in training and quality assurance. In order to address these concerns, companies are now either embedding connectivity in their new POC diagnostic instruments or providing some form of channel for electronic result exchange, allowing automated transmission of key performance and operational metrics from devices in the field to a central database. Setting up connectivity between these POC devices and a central database at the Ministries of Health will allow automated data transmission, creating an opportunity for real-time information on diagnostic instrument performance as well as the competency of the operator through external quality assessment. A pilot programme in Zimbabwe shows that connectivity has significantly improved the turn-around time of external quality assessment result submissions and allowed corrective actions to be provided in a timely manner. Furthermore, by linking the data to existing supply chain management software, stock-outs can be minimised. As countries look forward to achieving the 90-90-90 targets for HIV, such innovative technologies can automate disease surveillance, improve the quality of testing and strengthen the efficiency of health systems.

  2. A metrics-based comparison of secondary user quality between iOS and Android

    NARCIS (Netherlands)

    Amman, T.

    2014-01-01

    Native mobile applications gain popularity in the commercial market. There is no other economic sector that grows as fast. A lot of economic research is done in this sector, but there is very little research that deals with quality for mobile application developers. This paper compares the q

  3. Metrics of Risk Associated with Defects Rediscovery

    CERN Document Server

    Miranskyy, Andriy V; Reesor, Mark

    2011-01-01

    Software defects rediscovered by a large number of customers affect various stakeholders and may: 1) hint at gaps in a software manufacturer's Quality Assurance (QA) processes, 2) lead to an overload of a software manufacturer's support and maintenance teams, and 3) consume customers' resources, leading to a loss of reputation and a decrease in sales. Quantifying risk associated with the rediscovery of defects can help all of these stakeholders. In this chapter we present a set of metrics needed to quantify the risks. The metrics are designed to help: 1) the QA team to assess their processes; 2) the support and maintenance teams to allocate their resources; and 3) the customers to assess the risk associated with using the software product. The paper includes a validation case study which applies the risk metrics to industrial data. To calculate the metrics we use mathematical instruments like the heavy-tailed Kappa distribution and the G/M/k queuing model.

  4. Algal Attributes: An Autecological Classification of Algal Taxa Collected by the National Water-Quality Assessment Program

    Science.gov (United States)

    Porter, Stephen D.

    2008-01-01

    Algae are excellent indicators of water-quality conditions, notably nutrient and organic enrichment, and also are indicators of major ion, dissolved oxygen, and pH concentrations and stream microhabitat conditions. The autecology, or physiological optima and tolerance, of algal species for various water-quality contaminants and conditions is relatively well understood for certain groups of freshwater algae, notably diatoms. However, applications of autecological information for water-quality assessments have been limited because of challenges associated with compiling autecological literature from disparate sources, tracking name changes for a large number of algal species, and creating an autecological data base from which algal-indicator metrics can be calculated. A comprehensive summary of algal autecological attributes for North American streams and rivers does not exist. This report describes a large, digital data file containing 28,182 records for 5,939 algal taxa, generally species or variety, collected by the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program. The data file includes 37 algal attributes classified by over 100 algal-indicator codes or metrics that can be calculated easily with readily available software. Algal attributes include qualitative classifications based on European and North American autecological literature, and semi-quantitative, weighted-average regression approaches for estimating optima using regional and national NAWQA data. Applications of algal metrics in water-quality assessments are discussed and national quartile distributions of metric scores are shown for selected indicator metrics.
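
    The weighted-average approach mentioned above has a simple closed form: a taxon's optimum is the abundance-weighted mean of the environmental values at the sites where it occurs, and its tolerance the weighted standard deviation. A minimal sketch with illustrative inputs:

        import numpy as np

        def weighted_average_optimum(abundance, gradient):
            """Abundance-weighted optimum and tolerance of a taxon along
            an environmental gradient (e.g. total phosphorus)."""
            a = np.asarray(abundance, dtype=float)
            x = np.asarray(gradient, dtype=float)
            optimum = np.sum(a * x) / np.sum(a)
            tolerance = np.sqrt(np.sum(a * (x - optimum) ** 2) / np.sum(a))
            return optimum, tolerance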

  5. Subjective and Objective Quality Assessment of Single-Channel Speech Separation Algorithms

    DEFF Research Database (Denmark)

    Mowlaee, Pejman; Saeidi, Rahim; Christensen, Mads Græsbøll;

    2012-01-01

    Previous studies on performance evaluation of single-channel speech separation (SCSS) algorithms mostly focused on automatic speech recognition (ASR) accuracy as their performance measure. Assessing the separated signals by metrics other than this has the benefit that the results ... methods for audio source separation (PEASS) measures. In our experiments, we apply these measures to the separated signals obtained by two well-known systems in the SCSS challenge to assess the objective and subjective quality of their output signals. Comparing subjective and objective measurements shows that the PESQ and PEASS quality metrics predict well the subjective quality of separated signals obtained by the separation systems. From the results it is observed that the short-time objective intelligibility (STOI) measure predicts the speech intelligibility results.
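
    For the intelligibility result, the STOI measure mentioned above can be computed with the third-party pystoi package; the file names below are placeholders, and pystoi is an assumption for illustration rather than the toolchain the study used.

        import soundfile as sf
        from pystoi import stoi

        clean, fs = sf.read("clean.wav")          # reference signal
        separated, _ = sf.read("separated.wav")   # SCSS system output
        score = stoi(clean, separated, fs, extended=False)  # ~[0, 1], higher is better
        print(f"STOI: {score:.3f}")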

  6. Iris Image Quality Assessment for Biometric Application

    Directory of Open Access Journals (Sweden)

    U. M. Chaskar

    2012-05-01

    Full Text Available Image quality assessment plays an important role in the performance of biometric systems involving iris images. Data quality assessment is a key issue in broadening the applicability of iris biometrics to unconstrained imaging conditions. In this paper, we propose quality factors for individual iris images, assessing their prominent factors through quality scores. The work has been carried out on the following databases: CASIA, UBIRIS, UPOL, MMU and our own COEP database, created using the HIS 5000 HUVITZ iris camera. A comparison with the existing databases is also made, which in turn can act as a benchmark for increasing the efficiency of further processing.

  7. A subjective study to evaluate video quality assessment algorithms

    Science.gov (United States)

    Seshadrinathan, Kalpana; Soundararajan, Rajiv; Bovik, Alan C.; Cormack, Lawrence K.

    2010-02-01

    Automatic methods to evaluate the perceptual quality of a digital video sequence have widespread applications wherever the end-user is a human. Several objective video quality assessment (VQA) algorithms exist, whose performance is typically evaluated using the results of a subjective study performed by the video quality experts group (VQEG) in 2000. There is a great need for a free, publicly available subjective study of video quality that embodies state-of-the-art in video processing technology and that is effective in challenging and benchmarking objective VQA algorithms. In this paper, we present a study and a resulting database, known as the LIVE Video Quality Database, where 150 distorted video sequences obtained from 10 different source video content were subjectively evaluated by 38 human observers. Our study includes videos that have been compressed by MPEG-2 and H.264, as well as videos obtained by simulated transmission of H.264 compressed streams through error prone IP and wireless networks. The subjective evaluation was performed using a single stimulus paradigm with hidden reference removal, where the observers were asked to provide their opinion of video quality on a continuous scale. We also present the performance of several freely available objective, full reference (FR) VQA algorithms on the LIVE Video Quality Database. The recent MOtion-based Video Integrity Evaluation (MOVIE) index emerges as the leading objective VQA algorithm in our study, while the performance of the Video Quality Metric (VQM) and the Multi-Scale Structural SIMilarity (MS-SSIM) index is noteworthy. The LIVE Video Quality Database is freely available for download and we hope that our study provides researchers with a valuable tool to benchmark and improve the performance of objective VQA algorithms.
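
    Benchmarking an objective VQA algorithm against such a database typically reduces to correlating its predictions with the subjective scores; a minimal sketch using SciPy follows (variable names are illustrative).

        from scipy import stats

        def benchmark_vqa(mos, predicted):
            """Spearman rank-order and Pearson linear correlation between
            subjective scores (MOS/DMOS) and an objective metric's outputs."""
            srocc, _ = stats.spearmanr(mos, predicted)
            lcc, _ = stats.pearsonr(mos, predicted)
            return {"SROCC": srocc, "LCC": lcc}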

  8. The Use of Performance Metrics for the Assessment of Safeguards Effectiveness at the State Level

    Energy Technology Data Exchange (ETDEWEB)

    Bachner, K. M.; Anzelon, George, Lawrence Livermore National Laboratory, Livermore, CA; Feldman, Yana, Lawrence Livermore National Laboratory, Livermore, CA; Goodman, Mark, Department of State, Washington, DC; Lockwood, Dunbar, National Nuclear Security Administration, Washington, DC; Sanborn, Jonathan B., JBS Consulting, LLC, Arlington, VA

    2016-07-24

    In the ongoing evolution of International Atomic Energy Agency (IAEA) safeguards at the state level, many safeguards implementation principles have been emphasized: effectiveness, efficiency, non-discrimination, transparency, focus on sensitive materials, centrality of material accountancy for detecting diversion, independence, objectivity, and grounding in technical considerations, among others. These principles are subject to differing interpretations and prioritizations and sometimes conflict. This paper is an attempt to develop metrics and address some of the potential tradeoffs inherent in choices about how various safeguards policy principles are implemented. The paper (1) carefully defines effective safeguards, including in the context of safeguards approaches that take account of the range of state-specific factors described by the IAEA Secretariat and taken note of by the Board in September 2014, and (2) makes use of performance metrics to help document, and to make transparent, how safeguards implementation would meet such effectiveness requirements.

  9. How the choice of flood damage metrics influences urban flood risk assessment

    OpenAIRE

    J. A. E. ten Veldhuis

    2011-01-01

    This study presents a first attempt to quantify tangible and intangible flood damage according to two different damage metrics: monetary values and number of people affected by flooding. Tangible damage includes material damage to buildings and infrastructure; intangible damage includes damages that are difficult to quantify exactly, such as stress and inconvenience. The data used are representative of lowland flooding incidents with return periods up to 10 years. The results show that moneta...

  10. Comparison of Two Probabilistic Fatigue Damage Assessment Approaches Using Prognostic Performance Metrics

    Directory of Open Access Journals (Sweden)

    Xuefei Guan

    2011-01-01

    Full Text Available In this paper, two probabilistic prognosis updating schemes are compared. One is based on the classical Bayesian approach and the other is based on the newly developed maximum relative entropy (MRE) approach. The algorithm performance of the two models is evaluated using a set of recently developed prognostics-based metrics. Various uncertainties from measurements, modeling, and parameter estimations are integrated into the prognosis framework as random input variables for fatigue damage of materials. Measures of response variables are then used to update the statistical distributions of random variables and the prognosis results are updated using posterior distributions. The Markov Chain Monte Carlo (MCMC) technique is employed to provide the posterior samples for model updating in the framework. Experimental data are used to demonstrate the operation of the proposed probabilistic prognosis methodology. A set of prognostics-based metrics are employed to quantitatively evaluate the prognosis performance and compare the proposed entropy method with the classical Bayesian updating algorithm. In particular, model accuracy, precision, robustness and convergence are rigorously evaluated in addition to the qualitative visual comparison. Following this, potential development and improvement for the prognostics-based metrics are discussed in detail.
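
    The updating step described above can be illustrated with a minimal random-walk Metropolis-Hastings sampler. The Gaussian likelihood, the uniform prior and the measurement values below are illustrative assumptions, not the paper's fatigue model.

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.array([1.02, 0.97, 1.10, 1.05])   # hypothetical damage measurements

def log_posterior(theta):
    if not (0.0 < theta < 5.0):              # uniform prior on (0, 5)
        return -np.inf
    return -0.5 * np.sum((data - theta) ** 2) / 0.05   # Gaussian likelihood, var = 0.05

samples, theta = [], 1.0
for _ in range(20000):
    proposal = theta + rng.normal(scale=0.1)           # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal                               # accept
    samples.append(theta)

posterior = np.array(samples[5000:])                   # discard burn-in
print(f"posterior mean = {posterior.mean():.3f}, sd = {posterior.std():.3f}")
```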

  11. Quality Assessment for a University Curriculum.

    Science.gov (United States)

    Hjalmered, Jan-Olof; Lumsden, Kenth

    1994-01-01

    In 1992, a national quality assessment report covering courses in all the Swedish schools of mechanical engineering was presented. This article comments on the general ideas and specific proposals presented, and offers an analysis of the consequences. Presents overall considerations regarding quality issues, the philosophy behind the new…

  12. Privacy Metrics and Boundaries

    NARCIS (Netherlands)

    L-F. Pau (Louis-François)

    2005-01-01

    This paper aims at defining a set of privacy metrics (quantitative and qualitative) for the relation between a privacy protector and an information gatherer. The aims of such metrics are: to allow the assessment and comparison of different user scenarios and their differences; for ex

  13. ON SOIL QUALITY AND ITS ASSESSING

    Directory of Open Access Journals (Sweden)

    N. Florea

    2007-10-01

    Full Text Available The term “soil quality” has been used until now with different connotations; its meaning has nowadays become more comprehensive. The most adequate definition of “soil quality” is: “the capacity of a specific kind of soil to function, within natural or managed ecosystem boundaries, to sustain plant and animal productivity, maintain or enhance water and air quality and support human health and habitation” (Karlen et al., 1998). One distinguishes a native soil quality, in natural conditions, and a meta-native soil quality, in managed conditions. Also, one can distinguish a stable side and a variable side of soil quality. It is useful to consider also the term “soilscape quality”, defined as the weighted average of the soil qualities of all the soils entering the soil cover and their arrangement (expressed by the pedogeographical assemblage). Soil quality can be assessed indirectly by a set of indicators. The kind and number of quality indicators depend on the evaluation scale and the objective of the assessment. New research is necessary to define soil quality more accurately and to develop its evaluation. Assessing and monitoring soil quality have global implications for the environment and society.

  14. SIMPLE QUALITY ASSESSMENT FOR BINARY IMAGES

    Institute of Scientific and Technical Information of China (English)

    Zhang Chun'e; Qiu Zhengding

    2007-01-01

    Image assessment methods are usually classified into two categories: subjective assessments and objective ones. The latter are judged by their correlation coefficient with the subjective quality measurement MOS (Mean Opinion Score). This paper presents an objective quality assessment algorithm specifically for binary images. In the algorithm, noise energy is measured by the Euclidean distance between noises and signals, and the structural effects caused by noise are described by the change in Euler number. The assessment of image quality is calculated quantitatively in terms of PSNR (Peak Signal to Noise Ratio). Our experiments show that the results of the algorithm are highly correlated with subjective MOS and that the algorithm is simpler and more computationally economical than traditional objective assessment methods.
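
    A sketch of the two ingredients the abstract names, under simple assumptions: PSNR computed on binary arrays (peak value 1) and the change in Euler number as a structural term. The euler_number helper is assumed to be available in a recent scikit-image release.

```python
import numpy as np
from skimage.measure import euler_number   # assumed available in recent scikit-image

def binary_quality(reference, degraded):
    """Return (PSNR in dB, absolute Euler number change) for boolean images."""
    mse = np.mean((reference.astype(float) - degraded.astype(float)) ** 2)
    psnr = 10 * np.log10(1.0 / mse) if mse > 0 else np.inf   # peak value is 1
    d_euler = abs(euler_number(degraded) - euler_number(reference))
    return psnr, d_euler

ref = np.zeros((64, 64), dtype=bool)
ref[16:48, 16:48] = True                   # one solid square object
noisy = ref.copy()
noisy[8, 8] = True                         # a single speckle adds a connected component
print(binary_quality(ref, noisy))
```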

  15. Quality Assessment in the Primary care

    Directory of Open Access Journals (Sweden)

    Muharrem Ak

    2013-04-01

    Full Text Available Quality Assessment in Primary Care. Dear Editor; I have read the article titled “Implementation of Rogi Kalyan Samiti (RKS) at Primary Health Centre Durvesh” with great interest. Shrivastava et al. concluded that the assessment mechanism for the achievement of objectives for the suggested RKS model was not successful (1). Hereby I would like to emphasize the importance of quality assessment (QA), especially in the era of newly established primary care implementations in our country. Promotion of quality has been a fundamental part of primary care health services. Nevertheless, variations in quality of care exist even in developed countries. Accomplishment of quality in primary care faces barriers such as administrative and directorial factors, absence of evidence-based medicine practice, and lack of continuous medical education. Quality of health care is no doubt a multifaceted model that covers all components of health structures and processes of care. Quality in the primary care setting includes the patient-physician relationship, immunization, maternal, adolescent, adult and geriatric health care, referral, non-communicable disease management and prescribing (2). Many countries have only recently begun implementing quality assessments in all walks of healthcare. Organizations like the European society for quality and safety in family practice (EQuiP) endeavor to accomplish quality by collaboration. There are reported developments and experiments related to the methodology, processes and outcomes of quality assessments of health care. Quality assessments will not only contribute to the accomplishment of the program/project but also detect the areas where obstacles exist. In order to speed up the adoption of QA and to circumvent the occurrence of mistakes, health policy makers and family physicians from different parts of the world should share their experiences. Consensus on quality in preventive medicine implementations can help to yield

  16. ASSESSMENT OF QUALITY OF INNOVATIVE TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    Larisa Alexejevna Ismagilova

    2016-12-01

    Full Text Available We consider the topical issue of implementing innovative technologies in the aircraft engine building industry. In this industry, products with high reliability requirements are developed and mass-produced, combining the latest achievements of science and technology. Decisions on implementing an innovative technology rest on a comprehensive assessment, which affects how efficiently the innovation is realized. The assessment of the quality of innovative technologies is therefore a key aspect in selecting technological processes for implementation, and the suggested method approaches the quality of new technologies and production processes from new positions. The developed method of assessing the quality of innovative technologies stands out for its system of qualimetric characteristics ensuring the effectiveness, efficiency and adaptability of innovative technologies and processes. The distinctive feature of the suggested assessment system is that it is based on principles of matching and grouping quality indicators of innovative technologies with the characteristics of technological processes. The indicators are assessed from the standpoint of feasibility, competitiveness of the technologies and commercial demand for the products. In this paper, we discuss an example of applying the approach to assessing the quality of an innovative technology for high-tech products such as a turbine aircraft engine.

  17. Data Matching, Integration, and Interoperability for a Metric Assessment of Monographs

    DEFF Research Database (Denmark)

    Zuccala, Alesia Ann; Cornacchia, Roberto

    2016-01-01

    This paper details a unique data experiment carried out at the University of Amsterdam, Center for Digital Humanities. Data pertaining to monographs were collected from three autonomous resources, the Scopus Journal Index, WorldCat.org and Goodreads, and linked according to unique identifiers...... in a new Microsoft SQL database. The purpose of the experiment was to investigate co-varied metrics for a list of book titles based on their citation impact (from Scopus), presence in international libraries (WorldCat.org) and visibility as publicly reviewed items (Goodreads). The results of our data...

  18. Measuring Research Quality Using the Journal Impact Factor, Citations and "Ranked Journals": Blunt Instruments or Inspired Metrics?

    Science.gov (United States)

    Jarwal, Som D.; Brion, Andrew M.; King, Maxwell L.

    2009-01-01

    This paper examines whether three bibliometric indicators--the journal impact factor, citations per paper and the Excellence in Research for Australia (ERA) initiative's list of "ranked journals"--can predict the quality of individual research articles as assessed by international experts, both overall and within broad disciplinary…

  19. SOFTWARE METRICS VALIDATION METHODOLOGIES IN SOFTWARE ENGINEERING

    Directory of Open Access Journals (Sweden)

    K.P. Srinivasan

    2014-12-01

    Full Text Available In software measurement, validating software metrics is a very difficult task due to the lack of theoretical and empirical methodology [41, 44, 45]. During recent years, a number of researchers have addressed the issue of validating software metrics. At present, software metrics are validated theoretically using properties of measures. Further, software measurement plays an important role in understanding and controlling software development practices and products. The major requirement in software measurement is that the measures must accurately represent the attributes they purport to quantify, and validation is critical to the success of software measurement. Normally, validation is a collection of analysis and testing activities across the full life cycle that complements the efforts of other quality engineering functions; it is a critical task in any engineering project. The objective of validation is to discover defects in a system and to assess whether or not the system is useful and usable in an operational situation. In the case of software engineering, validation is one of the disciplines that help build quality into software. The major objective of the software validation process is to determine that the software performs its intended functions correctly and to provide information about its quality and reliability. This paper discusses the validation methodology, techniques and different properties of measures that are used for software metrics validation. In most cases, theoretical and empirical validations are conducted for software metrics validation in software engineering [1-50].

  20. MICROWAVE REMOTE SENSING IN SOIL QUALITY ASSESSMENT

    Directory of Open Access Journals (Sweden)

    S. K. Saha

    2012-08-01

    Full Text Available Information on the spatial and temporal variations of soil quality (soil properties) is required for various purposes of sustainable agricultural development and management. Traditionally, soil quality characterization is done by in situ point soil sampling and subsequent laboratory analysis. Such a methodology is limited in assessing the spatial variability of soil quality. Various researchers in the recent past showed the potential utility of hyperspectral remote sensing techniques for spatial estimation of soil properties. However, few research studies have been carried out showing the potential of microwave remote sensing data for spatial estimation of soil properties other than soil moisture. This paper reviews the status of microwave remote sensing techniques (active and passive) for spatial assessment of soil quality parameters such as soil salinity, soil erosion, soil physical properties (soil texture and hydraulic properties), drainage condition and soil surface roughness. Past and recent research studies showed that both active and passive microwave remote sensing techniques have great potential for assessment of these soil qualities (soil properties). However, more research on the use of multi-frequency and fully polarimetric microwave remote sensing data, and on modelling the interaction of such data with soil, is very much needed for operational use of satellite microwave remote sensing data in soil quality assessment.

  1. Health outcomes in diabetics measured with Minnesota Community Measurement quality metrics

    Directory of Open Access Journals (Sweden)

    Takahashi PY

    2014-12-01

    Full Text Available Paul Y Takahashi,1 Jennifer L St Sauver,2 Lila J Finney Rutten,2 Robert M Jacobson,3 Debra J Jacobson,2 Michaela E McGree,2 Jon O Ebbert1 1Department of Internal Medicine, Division of Primary Care Internal Medicine, 2Department of Health Sciences Research, Mayo Clinic Robert D and Patricia E Kern Center for the Science of Health Care Delivery, 3Department of Pediatric and Adolescent Medicine, Division of Community Pediatrics, Mayo Clinic, Rochester, MN, USA Objective: Our objective was to understand the relationship between optimal diabetes control, as defined by Minnesota Community Measurement (MCM), and adverse health outcomes including emergency department (ED) visits, hospitalizations, 30-day rehospitalization, intensive care unit (ICU) stay, and mortality. Patients and methods: In 2009, we conducted a retrospective cohort study of empaneled Employee and Community Health patients with diabetes mellitus. We followed patients from 1 September 2009 until 30 June 2011 for hospitalization and until 5 January 2014 for mortality. Optimal control of diabetes mellitus was defined as achieving the following three measures: low-density lipoprotein (LDL) cholesterol <100 mg/dL, blood pressure <140/90 mmHg, and hemoglobin A1c <8%. Using the electronic medical record, we assessed hospitalizations, ED visits, ICU stays, 30-day rehospitalizations, and mortality. The chi-square or Wilcoxon rank-sum tests were used to compare those with and without optimal control. We used Cox proportional hazard models to estimate the associations between optimal diabetes mellitus status and each outcome. Results: We identified 5,731 empaneled patients with diabetes mellitus; 2,842 (49.6%) were in the optimal control category. After adjustment, we observed that non-optimally controlled patients had higher risks for hospitalization (hazard ratio [HR] 1.11; 95% confidence interval [CI] 1.00–1.23), ED visits (HR 1.15; 95% CI 1.06–1.25), and mortality (HR 1.29; 95% CI 1.09–1
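
    A minimal sketch of the composite "optimal control" flag the study uses, with the three thresholds quoted above (LDL cholesterol, blood pressure, hemoglobin A1c). The field names and patient records are illustrative; the study's data model is not reproduced here.

```python
def optimal_diabetes_control(ldl, systolic, diastolic, hba1c):
    """All three MCM measures must be met: LDL < 100 mg/dL, BP < 140/90 mmHg, A1c < 8%."""
    return ldl < 100 and systolic < 140 and diastolic < 90 and hba1c < 8.0

patients = [  # hypothetical records
    {"ldl": 92, "systolic": 128, "diastolic": 76, "hba1c": 7.1},
    {"ldl": 120, "systolic": 150, "diastolic": 88, "hba1c": 8.4},
]
flags = [optimal_diabetes_control(**p) for p in patients]
print(f"optimally controlled: {sum(flags)} of {len(flags)}")
```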

  2. Quality Management Plan for the Environmental Assessment and Innovation Division

    Science.gov (United States)

    This quality management plan (QMP) identifies the mission and the roles and responsibilities of personnel with regard to quality assurance and quality management for the Environmental Assessment and Innovation Division.

  3. Assessing product image quality for online shopping

    Science.gov (United States)

    Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq

    2012-01-01

    Assessing product-image quality is important in the context of online shopping. A high-quality image that conveys more information about a product can boost the buyer's confidence and can get more attention. However, the notion of image quality for product-images is not the same as in other domains. The perception of quality of product-images depends not only on various photographic quality features but also on various high-level features such as clarity of the foreground or goodness of the background. In this paper, we define a notion of product-image quality based on various such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay's images. We formulate a multi-class classification problem for modeling image quality by classifying images into good, fair and poor quality based on the guided perceptual notions from the judges. We also conduct experiments with regression using average crowd-sourced human judgments as the target. We compute a pseudo-regression score as the expected average of predicted classes and also compute a score from the regression technique. We design many experiments with various sampling and voting schemes with crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracies (greater than or equal to 70%) on the test data set. We observe that our computed image quality score has a high (0.66) rank correlation with the average votes from the crowd-sourced human judgments.

  4. Quality assessment in meta-analysis

    Directory of Open Access Journals (Sweden)

    Giuseppe La Torre

    2006-06-01

    Full Text Available

    Background: An important characteristic of meta-analysis is that the results are determined both by the management of the meta-analysis process and by the features of the studies included. The scientific rigor of potential primary studies varies considerably, and a common objection to meta-analytic summaries is that they combine results from studies of different quality. Researchers began by developing quality scales for experimental studies; now, however, their interest is also focusing on observational studies. Since 1980, when Chalmers developed the first quality scale to assess primary studies included in meta-analysis, more than 100 scales have been developed, which vary dramatically in the quality and quantity of the items included. No standard lists of items exist, and the quality scales in use lack empirically supported components.

    Methods: Two of the most important and widespread quality scales for experimental studies, the Jadad system and Chalmers’ scale, and a quality scale used for observational studies, developed by Angelillo et al., are described and compared.

    Conclusion: The fallibility of meta-analysis is not surprising, considering the various biases that may be introduced by the processes of locating and selecting studies, including publication bias, language bias and citation bias. Quality assessment of the studies offers an estimate of the likelihood that their results will express the truth.

  5. Data Matching, Integration, and Interoperability for a Metric Assessment of Monographs

    DEFF Research Database (Denmark)

    Zuccala, Alesia Ann; Cornacchia, Roberto

    2016-01-01

    This paper details a unique data experiment carried out at the University of Amsterdam, Center for Digital Humanities. Data pertaining to monographs were collected from three autonomous resources, the Scopus Journal Index, WorldCat.org and Goodreads, and linked according to unique identifiers in a new Microsoft SQL database. The purpose of the experiment was to investigate co-varied metrics for a list of book titles based on their citation impact (from Scopus), presence in international libraries (WorldCat.org) and visibility as publicly reviewed items (Goodreads). The results of our data experiment highlighted current problems related to citation indices and the way that books are recorded by different citing authors. Our research further demonstrates the primary problem of matching book titles as ‘cited objects’ with book titles held in a union library catalog, given that books are always...

  6. User-Perceived Quality Assessment for VoIP Applications

    CERN Document Server

    Beuran, R.; CERN, Geneva

    2004-01-01

    We designed and implemented a system that permits the measurement of network Quality of Service (QoS) parameters. This system allows us to objectively evaluate the requirements of network applications for delivering user-acceptable quality. To do this, we accurately compute the network QoS parameters: one-way delay, jitter, packet loss and throughput. The measurement system makes use of a global clock to synchronise the time measurements at different points of the network. To study the behaviour of real network applications, specific metrics must be defined in order to assess the user-perceived quality (UPQ) of each application. Since we measure network QoS and application UPQ simultaneously, we are able to correlate them. Determining application requirements has two main uses: (i) to predict the expected UPQ for an application running over a given network (based on the corresponding measured QoS parameters) and understand the causes of application failure; (ii) to design/configure networks that provide the ne...
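
    Two of the QoS computations named above lend themselves to a short sketch: packet loss from sequence numbers and an RFC 3550-style smoothed interarrival jitter. It assumes probe packets are sequence-numbered and timestamped at both ends against the synchronised clock; the data below are made up.

```python
def packet_loss(sent_seq, received_seq):
    """Fraction of sent sequence numbers never received."""
    return 1.0 - len(set(received_seq)) / len(set(sent_seq))

def interarrival_jitter(send_times, recv_times):
    """RFC 3550 running jitter estimate over matched send/receive timestamp pairs."""
    jitter, prev_transit = 0.0, None
    for s, r in zip(send_times, recv_times):
        transit = r - s                     # one-way delay of this packet
        if prev_transit is not None:
            jitter += (abs(transit - prev_transit) - jitter) / 16.0
        prev_transit = transit
    return jitter

print(packet_loss(range(100), range(0, 100, 2)))       # every other packet lost -> 0.5
print(interarrival_jitter([0, 20, 40], [5, 27, 44]))   # timestamps in ms
```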

  7. Electrical Inspection Oriented Thermal Image Quality Assessment

    Science.gov (United States)

    Lin, Ying; Wang, Menglin; Gong, Xiaojin; Guo, Zhihong; Geng, Yujie; Bai, Demeng

    2017-01-01

    This paper presents an approach to assess the quality of thermal images that are specifically used in electrical inspection. In this application, no reference images are given for quality assessment. Therefore, we first analyze the characteristics of these thermal images. Then, four quantitative measurements, which are one-dimensional (1D) entropy, two-dimensional (2D) entropy, centrality, and No-Reference Structural Sharpness (NRSS), are investigated to measure the information content, the centrality of objects of interest, and the sharpness of the images. Moreover, in order to provide a more intuitive measure for human operators, we assign each image a discrete rating based on these quantitative measurements via the k-nearest neighbor (KNN) method. The proposed approach has been validated on a dataset composed of 2,336 images. Experiments show that our quality assessment results are consistent with subjective assessment.
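
    The first of the four measurements, 1D (grey-level) entropy, is simple enough to sketch; the random frame below merely stands in for a real inspection image, and the 256-level binning is an assumption.

```python
import numpy as np

def grey_entropy(image, levels=256):
    """Shannon entropy of the grey-level histogram, in bits per pixel."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                            # drop empty bins before taking logs
    return -np.sum(p * np.log2(p))

frame = np.random.default_rng(1).integers(0, 256, size=(240, 320))
print(f"1D entropy = {grey_entropy(frame):.2f} bits")
```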

  8. Quality Assessment of Domesticated Animal Genome Assemblies.

    Science.gov (United States)

    Seemann, Stefan E; Anthon, Christian; Palasca, Oana; Gorodkin, Jan

    2015-01-01

    The era of high-throughput sequencing has made it relatively simple to sequence genomes and transcriptomes of individuals from many species. In order to analyze the resulting sequencing data, high-quality reference genome assemblies are required. However, this is still a major challenge, and many domesticated animal genomes still need to be sequenced deeper in order to produce high-quality assemblies. In the meanwhile, ironically, the volume of RNAseq and other next-generation data produced frequently far exceeds that of the genomic sequence. Furthermore, basic comparative analysis is often affected by the lack of genomic sequence. Herein, we quantify the quality of the genome assemblies of 20 domesticated animals and related species by assessing a range of measurable parameters, and we show that there is a positive correlation between the fraction of mappable reads from RNAseq data and genome assembly quality. We rank the genomes by their assembly quality and discuss the implications for genotype analyses.

  9. Can we go beyond burned area assessment with fire patch metrics from global remote sensing?

    Science.gov (United States)

    Nogueira Pereira Messias, Joana; Ruffault, Julien; Chuvieco, Emilio; Mouillot, Florent

    2016-04-01

    Fire is a major event influencing global biogeochemical cycles and contributes to the emissions of CO2 and other greenhouse gases to the atmosphere. Global burned area (BA) datasets from remote sensing have provided fruitful information for quantifying carbon emissions in global biogeochemical models and for DGVM benchmarking. Patch-level analysis of pixel-level information recently emerged as an informative additional feature of the fire regime, such as the fire size distribution. The aim of this study is to evaluate the ability of global BA products to accurately represent characteristics of fire patches (size, shape complexity and spatial orientation). We selected a site in the Brazilian savannas (Cerrado), one of the most fire-prone biomes and one of the validation test sites for the ESA fire-Cci project. We used the pixel-level burned area detected by Landsat, MCD45A1 and the newly delivered MERIS ESA fire-Cci product for the period 2002-2009. A flood-fill algorithm adapted from Archibald and Roy (2009) was used to identify the individual fire patches (patch ID) according to the burned date (BD). For each patch ID, we calculated a panel of patch metrics: area, perimeter and core area; shape complexity (shape index and fractal dimension); and the features of the ellipse fitted over the spatial distribution of pixels composing the patch (eccentricity and direction of the main axis). Paired fire patches overlapping between BA products were compared. Correlations between patch metrics were evaluated by linear regression models for each inter-product comparison according to fire size class. Our results showed significant patch overlaps (>30%) between products for patches with areas larger than 270 ha, with more than 90% of patches overlapping between MERIS and MCD45A1. Fire patch metric correlations showed R2>0.6 for all comparisons of patch area and core area, with a slope of 0.99 between MERIS and MCD45A1 illustrating the agreement between the two global products.
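
    A sketch of patch identification and two of the patch metrics, under simplifying assumptions: connected-component labelling stands in for the flood-fill step (the burned-date constraint of the actual algorithm is omitted), and the pixel footprint is arbitrary.

```python
import numpy as np
from scipy import ndimage

burned = np.zeros((50, 50), dtype=bool)
burned[5:15, 5:20] = True                   # hypothetical patch 1
burned[30:34, 30:45] = True                 # hypothetical patch 2

labels, n_patches = ndimage.label(burned)   # connected components ~ fire patches
pixel_area_ha = 25.0                        # assumed pixel footprint

for patch_id in range(1, n_patches + 1):
    rows, cols = np.nonzero(labels == patch_id)
    area = rows.size * pixel_area_ha
    # Ellipse eccentricity from the pixel-coordinate covariance (eigenvalues ~ axis lengths).
    eig = np.linalg.eigvalsh(np.cov(np.vstack([rows, cols])))
    eccentricity = np.sqrt(1.0 - eig[0] / eig[1])
    print(f"patch {patch_id}: area = {area:.0f} ha, eccentricity = {eccentricity:.2f}")
```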

  10. Metrical Quantization

    CERN Document Server

    Klauder, J R

    1998-01-01

    Canonical quantization may be approached from several different starting points. The usual approaches involve promotion of c-numbers to q-numbers, or path integral constructs, each of which generally succeeds only in Cartesian coordinates. All quantization schemes that lead to Hilbert space vectors and Weyl operators---even those that eschew Cartesian coordinates---implicitly contain a metric on a flat phase space. This feature is demonstrated by studying the classical and quantum ``aggregations'', namely, the set of all facts and properties resident in all classical and quantum theories, respectively. Metrical quantization is an approach that elevates the flat phase space metric inherent in any canonical quantization to the level of a postulate. Far from being an unwanted structure, the flat phase space metric carries essential physical information. It is shown how the metric, when employed within a continuous-time regularization scheme, gives rise to an unambiguous quantization procedure that automatically ...

  11. Assessing the performance of macroinvertebrate metrics in the Challhuaco-Ñireco System (Northern Patagonia, Argentina

    Directory of Open Access Journals (Sweden)

    Melina Mauad

    2015-09-01

    Full Text Available ABSTRACT Seven sites were examined in the Challhuaco-Ñireco system, located in the reserve of the Nahuel Huapi National Park; part of the catchment is urbanized, with San Carlos de Bariloche (150,000 inhabitants) located in the lower part of the basin. Physico-chemical variables were measured and benthic macroinvertebrates were collected during three consecutive years at seven sites from the headwater to the river outlet. Sites near the source of the river were characterised by Plecoptera, Ephemeroptera, Trichoptera and Diptera, whereas sites close to the river mouth were dominated by Diptera, Oligochaeta and Mollusca. Regarding functional feeding groups, collector-gatherers were dominant at all sites and this pattern was consistent among years. Ordination analysis (RDA) revealed that the distribution of species assemblages responded to the climatic and topographic gradient (temperature and elevation), but was also associated with variables related to human impact (conductivity, nitrate and phosphate contents). Species assemblages at headwaters were mostly represented by sensitive insects, whereas tolerant taxa such as Tubificidae, Lumbriculidae, Chironomidae and the crustacean Aegla sp. were dominant at urbanised sites. Among the macroinvertebrate metrics employed, total richness, EPT taxa, the Shannon diversity index and the Biotic Monitoring Patagonian Stream index proved fairly consistent and evidenced different levels of disturbance along the stream, meaning that these measures are suitable for evaluating the status of Patagonian mountain streams.

  12. What is "fallback"?: metrics needed to assess telemetry tag effects on anadromous fish behavior

    Science.gov (United States)

    Frank, Holly J.; Mather, Martha E.; Smith, Joseph M.; Muth, Robert M.; Finn, John T.; McCormick, Stephen D.

    2009-01-01

    Telemetry has allowed researchers to document the upstream migrations of anadromous fish in freshwater. In many anadromous alosine telemetry studies, researchers use downstream movements (“fallback”) as a behavioral field bioassay for adverse tag effects. However, these downstream movements have not been uniformly reported or interpreted. We quantified movement trajectories of radio-tagged anadromous alewives (Alosa pseudoharengus) in the Ipswich River, Massachusetts (USA) and tested blood chemistry of tagged and untagged fish held 24 h. A diverse repertoire of movements was observed, which could be quantified using (a) direction of initial movements, (b) timing, and (c) characteristics of bouts of coupled upstream and downstream movements (e.g., direction, distance, duration, and speed). Because downstream movements of individual fish were almost always made in combination with upstream movements, these should be examined together. Several of the movement patterns described here could fall under the traditional definition of “fallback” but were not necessarily aberrant. Because superficially similar movements could have quite different interpretations, post-tagging trajectories need more precise definitions. The set of metrics we propose here will help quantify tag effects in the field, and provide the basis for a conceptual framework that helps define the complicated behaviors seen in telemetry studies on alewives and other fish in the field.

  13. Soil quality assessment under emerging regulatory requirements.

    Science.gov (United States)

    Bone, James; Head, Martin; Barraclough, Declan; Archer, Michael; Scheib, Catherine; Flight, Dee; Voulvoulis, Nikolaos

    2010-08-01

    New and emerging policies that aim to set standards for the protection and sustainable use of soil are likely to require identification of geographical risk/priority areas. Soil degradation can be seen as a change or disturbance in soil quality, and it is therefore crucial that soil and soil quality are well understood in order to protect soils and to meet legislative requirements. To increase this understanding, a review of the soil quality definition evaluated its development, with a formal scientific approach to assessment beginning in the 1970s, followed by a period of discussion and refinement. A number of reservations about soil quality assessment expressed in the literature are summarised. Taking these concerns into account, a definition of soil quality incorporating soil's ability to meet multifunctional requirements, to provide ecosystem services, and the potential for soils to affect other environmental media is described. Assessment using this definition requires a large number of soil-function-dependent indicators that can be expensive, laborious, prone to error, and problematic in comparison. The findings demonstrate the need for a method that is not function dependent but uses a number of cross-functional indicators instead. Such a method, systematically prioritising areas where detailed investigation is required using a ranking against a desired level of action, could be relatively quick, easy and cost-effective. As such it has the potential to fill gaps in, and complement, existing monitoring programs and to assist in the development and implementation of current and future soil protection legislation.

  14. Quality Assessment of Urinary Stone Analysis

    DEFF Research Database (Denmark)

    Siener, Roswitha; Buchholz, Noor; Daudon, Michel

    2016-01-01

    and chemical analysis. The aim of the present study was to assess the quality of urinary stone analysis of laboratories in Europe. Nine laboratories from eight European countries participated in six quality control surveys for urinary calculi analyses of the Reference Institute for Bioanalytics, Bonn, Germany......, between 2010 and 2014. Each participant received the same blinded test samples for stone analysis. A total of 24 samples, comprising pure substances and mixtures of two or three components, were analysed. The evaluation of the quality of the laboratory in the present study was based on the attainment...... spectra and qualification of the staff for an accurate analysis of stone composition. Regular quality control is essential in carrying out routine stone analysis....

  15. Surface water quality assessment by environmetric methods.

    Science.gov (United States)

    Boyacioglu, Hülya; Boyacioglu, Hayal

    2007-08-01

    This environmetric study deals with the interpretation of river water monitoring data from the basin of the Buyuk Menderes River and its tributaries in Turkey. Eleven variables were measured to estimate water quality at 17 sampling sites. Factor analysis was applied to explain the correlations between the observations in terms of underlying factors. Results revealed that water quality was strongly affected by agricultural use. Cluster analysis was used to classify stations with similar properties, and the results distinguished three groups of stations. Water quality downstream of the river was quite different from that in the remaining parts. It is recommended to involve environmetric data treatment as a substantial procedure in the assessment of water quality data.
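
    A sketch of the workflow with common stand-ins: standardisation, PCA in place of the factor-analysis step, and k-means in place of the clustering. The 17x11 measurement matrix is synthetic, matching only the shape of the study's data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = rng.normal(size=(17, 11))               # 17 stations x 11 water-quality variables

Z = StandardScaler().fit_transform(X)       # put variables on a common scale
scores = PCA(n_components=3).fit_transform(Z)          # "underlying factors"
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(groups)                               # station groups, cf. the three clusters found
```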

  16. OBJECTIVE QUALITY ASSESSMENT OF IMAGE ENHANCEMENT METHODS IN DIGITAL MAMMOGRAPHY-A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Sheba K.U

    2016-08-01

    Full Text Available Mammography is the primary and most reliable technique for the detection of breast cancer. Mammograms are examined for the presence of malignant masses and indirect signs of malignancy such as microcalcifications, architectural distortion and bilateral asymmetry. However, mammograms are X-ray images taken with a low radiation dosage, which results in low-contrast, noisy images. Also, malignancies in dense breasts are difficult to detect due to the opaque uniform background in mammograms. Hence, techniques for improving the visual screening of mammograms are essential. Image enhancement techniques are used to improve the visual quality of the images. This paper presents a comparative study of different preprocessing techniques used for the enhancement of mammograms in the mini-MIAS database. The performance of the image enhancement techniques is evaluated using objective image quality assessment techniques. These include simple statistical error metrics like PSNR and human visual system (HVS) feature-based metrics such as SSIM, NCC, UIQI, and discrete entropy.
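
    The objective evaluation step can be sketched with scikit-image's implementations of two of the metrics listed. Histogram equalisation stands in for one enhancement candidate; the image is a synthetic placeholder for a mini-MIAS mammogram.

```python
import numpy as np
from skimage import exposure
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

original = np.random.default_rng(3).random((256, 256))   # placeholder mammogram in [0, 1]
enhanced = exposure.equalize_hist(original)              # one enhancement candidate

print(f"PSNR = {peak_signal_noise_ratio(original, enhanced, data_range=1.0):.2f} dB")
print(f"SSIM = {structural_similarity(original, enhanced, data_range=1.0):.3f}")
```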

  17. Assessing Quality of Data Standards: Framework and Illustration Using XBRL GAAP Taxonomy

    Science.gov (United States)

    Zhu, Hongwei; Wu, Harris

    The primary purpose of data standards or metadata schemas is to improve the interoperability of data created by multiple standard users. Given the high cost of developing data standards, it is desirable to assess their quality. We develop a set of metrics and a framework for assessing data standard quality. The metrics include completeness and relevancy. Standard quality can also be measured indirectly by assessing the interoperability of data instances. We evaluate the framework using data from the financial sector: the XBRL (eXtensible Business Reporting Language) GAAP (Generally Accepted Accounting Principles) taxonomy and US Securities and Exchange Commission (SEC) filings produced using the taxonomy by approximately 500 companies. The results show that the framework is useful and effective. Our analysis also reveals quality issues in the GAAP taxonomy and provides useful feedback to taxonomy users. The SEC has mandated that all publicly listed companies must submit their filings using XBRL. Our findings are timely and have practical implications that will ultimately help improve the quality of financial data.
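
    Under a simple set-based reading of the two metrics (an assumption, not the paper's exact formulation), completeness is the share of concepts users actually need that the standard defines, and relevancy is the share of defined concepts that are actually used. The element names are illustrative.

```python
standard_elements = {"Assets", "Liabilities", "Equity", "Revenue", "LegacyItem"}
elements_used = {"Assets", "Liabilities", "Equity", "Revenue", "CustomTag"}

shared = standard_elements & elements_used
completeness = len(shared) / len(elements_used)      # user needs covered by the standard
relevancy = len(shared) / len(standard_elements)     # defined elements actually used
print(f"completeness = {completeness:.2f}, relevancy = {relevancy:.2f}")
```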

  18. Air Quality Assessment Using Interpolation Technique

    Directory of Open Access Journals (Sweden)

    Awkash Kumar

    2016-07-01

    Full Text Available Air pollution is increasing rapidly in almost all cities around the world due to population growth. Mumbai city in India is one of the megacities where air quality is deteriorating at a very rapid rate. Air quality monitoring stations have been installed in the city to support air pollution control strategies that reduce pollution levels. In this paper, air quality assessment has been carried out over the sample region using interpolation techniques. The Inverse Distance Weighting (IDW) technique of a Geographical Information System (GIS) has been used to perform interpolation with the help of air quality concentration data at three locations in Mumbai for the year 2008. Classification was done for the spatial and temporal variation in air quality levels over the Mumbai region. The seasonal and annual variations of air quality levels for SO2, NOx and SPM (Suspended Particulate Matter) are the focus of this study. Results show that the SPM concentration always exceeded the permissible limit of the National Ambient Air Quality Standard. Also, the seasonal level of SPM was low in the monsoon due to rainfall. The findings of this study will help to formulate control strategies for the rational management of air pollution and can be used for many other regions.
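
    A minimal IDW sketch of the interpolation step: the concentration at an unsampled point is a distance-weighted average of the station values. Station coordinates and SPM values are made up; the power parameter is the conventional default of 2.

```python
import numpy as np

def idw(stations, values, target, power=2.0):
    """Inverse Distance Weighting of station values at a target point."""
    d = np.linalg.norm(stations - target, axis=1)
    if np.any(d == 0):                      # target coincides with a station
        return values[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # station x, y (km)
spm = np.array([180.0, 240.0, 150.0])                         # SPM, ug/m3
print(f"interpolated SPM = {idw(stations, spm, np.array([4.0, 3.0])):.1f} ug/m3")
```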

  19. Water Quality Assessment using Satellite Remote Sensing

    Science.gov (United States)

    Haque, Saad Ul

    2016-07-01

    The two main global issues related to water are its declining quality and quantity. Population growth, industrialization, an increase in agricultural land and urbanization are the main causes of the increasing water demand confronting inland water bodies. The quality of surface water has also been degraded in many countries over the past few decades due to inputs of nutrients and sediments, especially into lakes and reservoirs. Since water is essential not only for meeting human needs but also for maintaining natural ecosystem health and integrity, there are efforts worldwide to assess and restore the quality of surface waters. Remote sensing techniques provide a tool for continuous water quality information in order to identify and minimize sources of pollutants that are harmful to human and aquatic life. The proposed methodology is focused on assessing the quality of water at selected lakes in Pakistan (Sindh), namely HUBDAM, KEENJHAR LAKE, HALEEJI and HADEERO. These lakes are drinking water sources for several major cities of Pakistan, including Karachi. Satellite imagery from Landsat 7 (ETM+) is used to identify the variation in water quality of these lakes in terms of their optical properties. All bands of the Landsat 7 (ETM+) image are analyzed to select only those that may be correlated with water quality parameters (e.g. suspended solids, chlorophyll a). The Optimum Index Factor (OIF) developed by Chavez et al. (1982) is used for selection of the optimum combination of bands. The OIF is calculated by dividing the sum of the standard deviations of any three bands by the sum of their respective correlation coefficients (absolute values). It is assumed that a band with a higher standard deviation contains a higher amount of 'information' than the other bands. Therefore, OIF values are ranked and the three bands with the highest OIF are selected for visual interpretation. A color composite image is created using these three bands. The water quality
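
    The OIF formula as stated in the abstract translates directly into code: the sum of the standard deviations of a three-band combination divided by the sum of the absolute pairwise correlation coefficients. The six random arrays below merely stand in for ETM+ bands.

```python
import numpy as np
from itertools import combinations

def oif(bands):
    """Optimum Index Factor of a band combination (Chavez et al., 1982)."""
    sd = sum(np.std(b) for b in bands)
    corr = sum(abs(np.corrcoef(a.ravel(), b.ravel())[0, 1])
               for a, b in combinations(bands, 2))
    return sd / corr

rng = np.random.default_rng(4)
image = [rng.random((100, 100)) for _ in range(6)]   # six stand-in ETM+ bands
best = max(combinations(range(6), 3), key=lambda idx: oif([image[i] for i in idx]))
print(f"band triplet with highest OIF: {best}")
```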

  20. Integrating bioassessment and ecological risk assessment: an approach to developing numerical water-quality criteria.

    Science.gov (United States)

    King, Ryan S; Richardson, Curtis J

    2003-06-01

    Bioassessment is used worldwide to monitor aquatic health but is infrequently used with risk-assessment objectives, such as supporting the development of defensible, numerical water-quality criteria. To this end, we present a generalized approach for detecting potential ecological thresholds using assemblage-level attributes and a multimetric index (Index of Biological Integrity, IBI) as endpoints in response to numerical changes in water quality. To illustrate the approach, we used existing macroinvertebrate and surface-water total phosphorus (TP) datasets from an observed P gradient and a P-dosing experiment in wetlands of the south Florida coastal plain nutrient ecoregion. Ten assemblage attributes were identified as potential metrics using the observational data, and five were validated in the experiment. These five core metrics were subjected individually, and as an aggregated Nutrient-IBI, to nonparametric changepoint analysis (nCPA) to estimate cumulative probabilities of a threshold response to TP. Threshold responses were evident for all metrics and the IBI, and were repeatable through time. Results from the observed gradient indicated that a threshold was >=50% probable between 12.6 and 19.4 microg/L TP for individual metrics and at 14.8 microg/L TP for the IBI. Results from the P-dosing experiment revealed a >=50% probability of a response between 11.2 and 13.0 microg/L TP for the metrics and at 12.3 microg/L TP for the IBI. Uncertainty analysis indicated a low (typically <=5%) probability that an IBI threshold occurred at the lowest observed TP concentrations, and >=95% certainty that exceeding a threshold of 12-15 microg/L TP is likely to cause degradation of macroinvertebrate assemblage structure and function, a reflection of biological integrity, in the study area. This finding may assist in the development of a numerical water-quality criterion for TP in this ecoregion, and illustrates the utility of bioassessment for environmental decision-making.
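
    A sketch of the threshold-detection idea: every candidate changepoint along the TP gradient is scored by the reduction in summed squared deviance it achieves, and the best split estimates the threshold. The synthetic data embed a step near 15 microg/L; the actual nCPA adds bootstrapping to obtain cumulative threshold probabilities, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(5)
tp = np.sort(rng.uniform(5, 60, size=80))                  # total P gradient, ug/L
ibi = np.where(tp < 15, 80, 45) + rng.normal(0, 6, 80)     # IBI with a built-in threshold

def deviance_reduction(split):
    left, right = ibi[:split], ibi[split:]
    total = np.sum((ibi - ibi.mean()) ** 2)
    return total - np.sum((left - left.mean()) ** 2) - np.sum((right - right.mean()) ** 2)

best = max(range(5, len(ibi) - 5), key=deviance_reduction)  # keep >= 5 points per side
print(f"estimated changepoint near TP = {tp[best]:.1f} ug/L")
```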

  1. Mass Customization Measurements Metrics

    DEFF Research Database (Denmark)

    Nielsen, Kjeld; Brunø, Thomas Ditlev; Jørgensen, Kaj Asbjørn

    2014-01-01

    A recent survey has indicated that 17% of companies have ceased mass customizing less than one year after initiating the effort. This paper presents measurements for a company's mass customization performance, utilizing metrics within the three fundamental capabilities: robust process design, choice navigation, and solution space development. When assessing performance with these metrics, a mass customizer can identify the areas in which improvement would increase competitiveness the most and enable a more efficient transition to mass customization.

  2. Assessing the Quality of Diabetic Patients Care

    Directory of Open Access Journals (Sweden)

    Belkis Vicente Sánchez

    2012-12-01

    Full Text Available Background: improving the efficiency and effectiveness of the actions of family doctors and nurses in this area is an indispensable requisite for achieving comprehensive health care. Objective: to assess the quality of health care provided to diabetic patients by family doctors in the Abreus health area. Methods: a descriptive and observational study was conducted, based on the application of tools to assess the performance of family doctors in the treatment of diabetes mellitus in the five family doctor consultations of the Abreus health area from January to July 2011. The five doctors working in these consultations, as well as 172 diabetic patients, were included in the study. At the same time, 172 randomly selected medical records were also reviewed. Through observation, the availability of some necessary material resources, the quality of the doctors' performance and the quality of the medical records were evaluated. Patient criteria served to assess the quality of the health care provided. Results: scientific and technical training on diabetes mellitus has been insufficient; the necessary equipment for the appropriate care and monitoring of patients with diabetes is available; in 2.9% of the medical records reviewed, the interrogation appears in its complete form, with the complete physical examination in 12 of them and complete medical indications in 26. Conclusions: the quality of comprehensive medical care to the diabetic patients included in the study is compromised. The doctors interviewed recognized the need to be trained in the diagnosis and treatment of diabetes in order to improve their professional performance and enhance the quality of the health care provided to these patients.

  3. QoS Metrics for Cloud Computing Services Evaluation

    Directory of Open Access Journals (Sweden)

    Amid Khatibi Bardsiri

    2014-11-01

    Full Text Available Cloud systems are transforming the information technology trade by enabling companies to provide access to their infrastructure and software products on a subscription basis. Because of the vast range of Cloud solutions delivered, it has become difficult from the customer's perspective to decide whose services to use and on what basis to choose them. In particular, employing suitable metrics is vital in assessing practices. Nevertheless, to the best of our knowledge, there is no systematic description of metrics for evaluating Cloud products and services. QoS (Quality of Service) metrics play an important role in selecting Cloud providers and in optimizing resource utilization efficiency. While many reports are devoted to exploiting QoS metrics, relatively few tools support the observation and investigation of the QoS metrics of Cloud applications. To guarantee that a specialized product is published, describing metrics for assessing QoS is an essential necessity. This article therefore suggests various QoS metrics for service vendors, with particular attention to the consumer's concerns. The metrics list presented may support future study and assessment in the field of Cloud service evaluation.

  4. A multi-scale metrics approach to forest fragmentation for Strategic Environmental Impact Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eunyoung, E-mail: eykim@kei.re.kr [Korea Environment Institute, 215 Jinheungno, Eunpyeong-gu, Seoul 122-706 (Korea, Republic of); Song, Wonkyong, E-mail: wksong79@gmail.com [Suwon Research Institute, 145 Gwanggyo-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do 443-270 (Korea, Republic of); Lee, Dongkun, E-mail: dklee7@snu.ac.kr [Department of Landscape Architecture and Rural System Engineering, Seoul National University, 599 Gwanakro, Gwanak-gu, Seoul 151-921 (Korea, Republic of); Research Institute for Agriculture and Life Sciences, Seoul National University, Seoul 151-921 (Korea, Republic of)

    2013-09-15

    Forests are becoming severely fragmented as a result of land development. South Korea has responded to changing community concerns about environmental issues. The nation has developed and is extending a broad range of tools for use in environmental management. Although legally mandated environmental compliance requirements in South Korea have been implemented to predict and evaluate the impacts of land-development projects, these legal instruments are often insufficient to assess the subsequent impact of development on the surrounding forests. It is especially difficult to examine impacts on multiple (e.g., regional and local) scales in detail. Forest configuration and size, including forest fragmentation by land development, are considered on a regional scale. Moreover, forest structure and composition, including biodiversity, are considered on a local scale in the Environmental Impact Assessment process. Recently, the government amended the Environmental Impact Assessment Act, including the SEA, EIA, and small-scale EIA, to require an integrated approach. Therefore, the purpose of this study was to establish an impact assessment system that minimizes the impacts of land development using an approach that is integrated across multiple scales. This study focused on forest fragmentation due to residential development and road construction sites in selected Congestion Restraint Zones (CRZs) in the Greater Seoul Area of South Korea. Based on a review of multiple-scale impacts, this paper integrates models that assess the impacts of land development on forest ecosystems. The applicability of the integrated model for assessing impacts on forest ecosystems through the SEIA process is considered. On a regional scale, it is possible to evaluate the location and size of a land-development project by considering aspects of forest fragmentation, such as the stability of the forest structure and the degree of fragmentation. On a local scale, land-development projects should

  5. Alternative metrics

    Science.gov (United States)

    2012-11-01

    As the old 'publish or perish' adage is brought into question, additional research-impact indices, known as altmetrics, are offering new evaluation alternatives. But such metrics may need to adjust to the evolution of science publishing.

  6. Quality Assessment of Domesticated Animal Genome Assemblies

    DEFF Research Database (Denmark)

    Seemann, Stefan E; Anthon, Christian; Palasca, Oana;

    2015-01-01

    domesticated animal genomes still need to be sequenced deeper in order to produce high-quality assemblies. In the meanwhile, ironically, the volume of RNAseq and other next-generation data produced frequently far exceeds that of the genomic sequence. Furthermore, basic comparative analysis is often...... affected by the lack of genomic sequence. Herein, we quantify the quality of the genome assemblies of 20 domesticated animals and related species by assessing a range of measurable parameters, and we show that there is a positive correlation between the fraction of mappable reads from RNAseq data......

  7. From Log Files to Assessment Metrics: Measuring Students' Science Inquiry Skills Using Educational Data Mining

    Science.gov (United States)

    Gobert, Janice D.; Sao Pedro, Michael; Raziuddin, Juelaila; Baker, Ryan S.

    2013-01-01

    We present a method for assessing science inquiry performance, specifically for the inquiry skill of designing and conducting experiments, using educational data mining on students' log data from online microworlds in the Inq-ITS system (Inquiry Intelligent Tutoring System; www.inq-its.org). In our approach, we use a 2-step process: First we use…

  8. Assessment of Navigation Using a Hybrid Cognitive/Metric World Model

    Science.gov (United States)

    2015-01-01

    assessment was conducted to evaluate progress toward this goal. The robot's ability to receive instructions in structured text, and to interpret those...runs based on observation of things that went wrong during the runs. Environmental issues describe problems with driving through snow and over ice

  9. Visual quality assessment by machine learning

    CERN Document Server

    Xu, Long; Kuo, C -C Jay

    2015-01-01

    The book encompasses state-of-the-art visual quality assessment (VQA) and learning-based visual quality assessment (LB-VQA) by providing a comprehensive overview of the existing relevant methods. It gives readers the basic knowledge, a systematic overview and the new developments of VQA. It also covers the preliminary knowledge of applying Machine Learning (ML) to VQA tasks and newly developed ML techniques for this purpose. Hence, firstly, it is particularly helpful for beginning readers (including research students) entering the field of VQA in general and LB-VQA in particular. Secondly, new developments in VQA, and LB-VQA in particular, are detailed in this book, which will give peer researchers and engineers new insights into VQA.

  10. Quality Assessment of Urinary Stone Analysis

    DEFF Research Database (Denmark)

    Siener, Roswitha; Buchholz, Noor; Daudon, Michel;

    2016-01-01

    After stone removal, accurate analysis of urinary stone composition is the most crucial laboratory diagnostic procedure for the treatment and recurrence prevention in the stone-forming patient. The most common techniques for routine analysis of stones are infrared spectroscopy, X-ray diffraction...... and chemical analysis. The aim of the present study was to assess the quality of urinary stone analysis of laboratories in Europe. Nine laboratories from eight European countries participated in six quality control surveys for urinary calculi analyses of the Reference Institute for Bioanalytics, Bonn, Germany......, between 2010 and 2014. Each participant received the same blinded test samples for stone analysis. A total of 24 samples, comprising pure substances and mixtures of two or three components, were analysed. The evaluation of the quality of the laboratory in the present study was based on the attainment...

  11. Ecological Status of a Patagonian Mountain River: Usefulness of Environmental and Biotic Metrics for Rehabilitation Assessment

    Science.gov (United States)

    Miserendino, M. Laura; Kutschker, Adriana M.; Brand, Cecilia; La Manna, Ludmila; Di Prinzio, Cecilia Y.; Papazian, Gabriela; Bava, José

    2016-06-01

    This work evaluates the consequences of anthropogenic pressures at different sections of a Patagonian mountain river using a set of environmental and biological measures. A map of the risk of soil erosion at the basin scale was also produced. The study was conducted at 12 sites along the Percy River system, where physicochemical parameters, riparian ecosystem quality, habitat condition, plants, and macroinvertebrates were investigated. While livestock raising and wood collection, the dominant activities at upper- and mid-basin sites, resulted in an important loss of forest cover, the riparian ecosystem remains in a relatively good state of conservation, as do the in-stream habitat conditions and physicochemical features. Moreover, most indicators based on macroinvertebrates revealed that both the upper and middle basin sections supported similar assemblages, richness, density, and functional feeding group attributes. In contrast, the urbanized lower basin showed increases in conductivity and nutrient values and poor quality of the riparian ecosystem and habitat condition. According to the multivariate analysis, ammonia level, elevation, current velocity, and habitat conditions had explanatory power for benthos assemblages. Discharge, naturalness of the river channel, floodplain morphology, conservation status, and percentage of urban area were important moderators of plant composition. Finally, although present land use in the basin would not produce a significant risk of soil erosion, unsustainable practices that promote the substitution of forest by shrubs would lead to severe consequences. Mitigation efforts should be directed at protecting headwater forests, restoring the altered riparian ecosystem, and controlling the incipient eutrophication process.

  12. Quality Markers in Cardiology. Main Markers to Measure Quality of Results (Outcomes) and Quality Measures Related to Better Results in Clinical Practice (Performance Metrics). INCARDIO (Indicadores de Calidad en Unidades Asistenciales del Área del Corazón): A SEC/SECTCV Consensus Position Paper.

    Science.gov (United States)

    López-Sendón, José; González-Juanatey, José Ramón; Pinto, Fausto; Cuenca Castillo, José; Badimón, Lina; Dalmau, Regina; González Torrecilla, Esteban; López-Mínguez, José Ramón; Maceira, Alicia M; Pascual-Figal, Domingo; Pomar Moya-Prats, José Luis; Sionis, Alessandro; Zamorano, José Luis

    2015-11-01

    Cardiology practice requires complex organization that impacts overall outcomes and may differ substantially among hospitals and communities. The aim of this consensus document is to define quality markers in cardiology, including markers to measure the quality of results (outcomes metrics) and quality measures related to better results in clinical practice (performance metrics). The document is mainly intended for the Spanish health care system and may serve as a basis for similar documents in other countries.

  13. Comparing concentration-based (AOT40) and stomatal uptake (PODY) metrics for ozone risk assessment to European forests.

    Science.gov (United States)

    Anav, Alessandro; De Marco, Alessandra; Proietti, Chiara; Alessandri, Andrea; Dell'Aquila, Alessandro; Cionni, Irene; Friedlingstein, Pierre; Khvorostyanov, Dmitry; Menut, Laurent; Paoletti, Elena; Sicard, Pierre; Sitch, Stephen; Vitale, Marcello

    2016-04-01

    Tropospheric ozone (O3) produces harmful effects on forests and crops, reducing land carbon assimilation and consequently influencing the land sink and crop yield. To assess the potential negative impacts of O3 on vegetation, the European Union uses the Accumulated Ozone over Threshold of 40 ppb (AOT40). This index has been chosen for its simplicity and flexibility in handling different ecosystems, as well as for its linear relationship with yield or biomass loss. However, AOT40 gives no information on the physiological O3 uptake into the leaves, since it includes no environmental constraints on O3 uptake through stomata. An index based on stomatal O3 uptake (i.e. PODY), which describes the amount of O3 entering the leaves, would therefore be more appropriate. Specifically, the PODY metric considers the effects of multiple climatic factors, vegetation characteristics, and local and phenological inputs, rather than atmospheric O3 concentration alone. For this reason, the use of PODY in O3 risk assessment for vegetation is increasingly recommended. We compare potential O3 risk assessments based on the two methodologies (AOT40 and stomatal O3 uptake) using a framework of mesoscale models that produces hourly meteorological and O3 data at high spatial resolution (12 km) over Europe for the period 2000-2005. Results indicate a remarkable spatial and temporal inconsistency between the two indices, suggesting that a new definition of the European legislative standard is needed in the near future. Moreover, our risk assessment based on AOT40 shows good consistency with both in-situ data and other model-based datasets, whereas the assessment based on stomatal O3 uptake shows different spatial patterns; this inconsistency is likely related to differences in vegetation cover and its associated parameterizations.
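
    To make the AOT40 definition above concrete, the following minimal sketch accumulates hourly exceedances of the 40 ppb threshold over daylight hours. The daylight window and growing-season handling are simplified assumptions; regulatory implementations use radiation-based daylight criteria and species-specific accumulation periods.

```python
import numpy as np

def aot40(hourly_o3_ppb, daylight_mask):
    """AOT40 in ppb.h: accumulated hourly exceedance of the 40 ppb
    threshold over the selected (daylight, growing-season) hours."""
    o3 = np.asarray(hourly_o3_ppb, dtype=float)
    exceedance = np.clip(o3 - 40.0, 0.0, None)   # hours below 40 ppb add nothing
    return float(exceedance[np.asarray(daylight_mask, bool)].sum())

# One week of synthetic hourly ozone; daylight taken as 08:00-20:00 local time.
hours = np.arange(7 * 24)
o3 = 35.0 + 25.0 * np.sin(np.pi * (hours % 24) / 24) ** 2   # ppb, peaks midday
daylight = (hours % 24 >= 8) & (hours % 24 < 20)
print(f"AOT40 = {aot40(o3, daylight):.0f} ppb h")
```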

  14. Change in visual acuity is well correlated with change in image-quality metrics for both normal and keratoconic wavefront errors.

    Science.gov (United States)

    Ravikumar, Ayeswarya; Marsack, Jason D; Bedell, Harold E; Shi, Yue; Applegate, Raymond A

    2013-11-26

    We determined the degree to which change in visual acuity (VA) correlates with change in optical quality using image-quality (IQ) metrics for both normal and keratoconic wavefront errors (WFEs). VA was recorded for five normal subjects reading simulated logMAR acuity charts generated from the scaled WFEs of 15 normal and seven keratoconic eyes. We examined the correlations over a large range of acuity loss (up to 11 lines) and a smaller, more clinically relevant range (up to four lines). Nine IQ metrics were well correlated for both ranges. Over the smaller range of primary interest, eight were also accurate and precise in estimating the variations in logMAR acuity in both normal and keratoconic WFEs. The accuracy of these eight best metrics in estimating the mean change in logMAR acuity ranged from ±0.0065 to ±0.017 logMAR (all less than one letter), and the precision ranged from ±0.10 to ±0.14 logMAR (all less than seven letters).

  15. Harmonizing exposure metrics and methods for sustainability assessments of food contact materials

    DEFF Research Database (Denmark)

    Ernstoff, Alexi; Jolliet, Olivier; Niero, Monia

    2016-01-01

    We aim to develop harmonized and operational methods for quantifying exposure to chemicals in food packaging specifically for sustainability assessments. Thousands of chemicals are approved for food packaging and numerous contaminants occur, e.g. through recycling. Chemical migration into food… Such a framework, like LCA, finally facilitates including exposure to chemicals as a sustainable packaging design issue. Results were demonstrated in the context of the pilot-scale Product Environmental Footprint regulatory method in the European Union. Increasing recycled content, decreasing greenhouse gas emissions by selecting plastics over glass, and adding chemicals with a design function were identified as risk management issues. We conclude that developing an exposure framework, suitable for the sustainability assessments commonly used for food packaging, is feasible and can help guide packaging design to consider both…

  16. Quality of assessments within reach: Review study of research and results of the quality of assessments

    NARCIS (Netherlands)

    Maassen, N.A.M.; Otter, den D.; Wools, S.; Hemker, B.T.; Straetmans, G.J.J.M.; Eggen, T.J.H.M.

    2015-01-01

    Educational tests and assessments are important instruments to measure a student's knowledge and skills. The question addressed in this review study is: "which aspects are currently considered important to the quality of educational assessments?" Furthermore, it explores how this information…

  17. Assessing anthropogenic pressures on estuarine fish nurseries along the Portuguese coast: a multi-metric index and conceptual approach.

    Science.gov (United States)

    Vasconcelos, R P; Reis-Santos, P; Fonseca, V; Maia, A; Ruano, M; França, S; Vinagre, C; Costa, M J; Cabral, H

    2007-03-15

    Estuaries are among the most productive ecosystems and simultaneously among the most threatened by conflicting human activities, which damage their ecological functions, namely their nursery role for many fish species. A thorough assessment of the anthropogenic pressures in Portuguese estuarine systems (Douro, Ria de Aveiro, Mondego, Tejo, Sado, Mira, Ria Formosa and Guadiana) was made by applying an aggregating multi-metric index, which quantitatively evaluates influences from key components: dams, population and industry, port activities, and resource exploitation. Estuaries were ranked from most (Tejo) to least pressured (Mira), and the most influential types of pressure were identified. In most estuaries, overall pressure was generated by a dominant group of pressure components, with several systems being afflicted by similar problematic sources. An evaluation of the influence of anthropogenic pressures on the most important Sparidae, Soleidae, Pleuronectidae, Moronidae and Clupeidae species that use these estuaries as nurseries was also performed. To consolidate information and promote management, an ecological conceptual model was built to identify potential problems for the nursery function played by these estuaries, identifying pressure agents, ecological impacts, and endpoints for the anthropogenic sources quantified in the assessment. This will be important baseline information to safeguard these vital areas, articulating information and forecasting the potential efficacy of future management options.
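
    The abstract does not give the exact formulation of the aggregating multi-metric index, so the following is only a generic sketch of such an index: each pressure component is min-max normalised across estuaries and combined with weights. All scores and weights below are hypothetical.

```python
import numpy as np

# Hypothetical component scores per estuary (rows): dams, population and
# industry, port activities, resource exploitation (columns).
estuaries = ["Tejo", "Douro", "Mira"]
raw = np.array([[0.9, 1.0, 0.9, 0.7],
                [0.7, 0.8, 0.6, 0.4],
                [0.1, 0.2, 0.1, 0.3]])
weights = np.full(4, 0.25)                       # assumed equal weighting

# Min-max normalise each component across estuaries, then aggregate.
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
index = norm @ weights
for name, score in sorted(zip(estuaries, index), key=lambda t: -t[1]):
    print(f"{name}: {score:.2f}")                # most to least pressured
```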

  18. Metric Properties of the Neighborhood Inventory for Environmental Typology (NIfETy): An Environmental Assessment Tool for Measuring Indicators of Violence, Alcohol, Tobacco, and Other Drug Exposures

    Science.gov (United States)

    Furr-Holden, C. D. M.; Campbell, K. D. M.; Milam, A. J.; Smart, M. J.; Ialongo, N. A.; Leaf, P. J.

    2010-01-01

    Objectives: Establish metric properties of the Neighborhood Inventory for Environmental Typology (NIfETy). Method: A total of 919 residential block faces were assessed by paired raters using the NIfETy. Reliability was evaluated via interrater and internal consistency reliability; validity by comparing NIfETy data with youth self-reported…

  19. MO-D-213-06: Quantitative Image Quality Metrics Are for Physicists, Not Radiologists: How to Communicate to Your Radiologists Using Their Language

    Energy Technology Data Exchange (ETDEWEB)

    Szczykutowicz, T; Rubert, N; Ranallo, F [University Wisconsin-Madison, Madison, WI (United States)

    2015-06-15

    Purpose: A framework is needed for explaining differences in image quality to non-technical audiences in medical imaging. Currently, this task is something that is learned "on the job." The lack of a formal methodology for communicating optimal acquisition parameters into the clinic effectively mitigates many technological advances. As a community, medical physicists need to be held responsible not only for advancing image science, but also for ensuring its proper use in the clinic. This work outlines a framework that bridges the gap between the results of quantitative image quality metrics such as detectability, MTF, and NPS and their effect on specific anatomical structures present in diagnostic imaging tasks. Methods: Structures of clinical importance were identified for a body, an extremity, a chest, and a temporal bone protocol. Using these structures, quantitative metrics were used to identify the parameter space that should yield optimal image quality, constrained within the confines of clinical logistics and dose considerations. The reading-room workflow for presenting the proposed changes for imaging each of these structures is presented. The workflow consists of displaying images for physician review with different combinations of acquisition parameters guided by quantitative metrics. Examples of using detectability index, MTF, NPS, noise, and noise non-uniformity are provided. During review, the physician was forced to judge the image quality solely on the features needed for diagnosis, not on the overall "look" of the image. Results: We found that in many cases, use of this framework settled disagreements between physicians. Once forced to judge images on the ability to detect specific structures, inter-reader agreement was obtained. Conclusion: This framework will provide consulting, research/industrial, or in-house physicists with clinically relevant imaging tasks to guide reading-room image review. This framework avoids use…
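
    As an illustration of how two of the quantitative metrics named above can be computed, this sketch estimates a 2-D noise power spectrum (NPS) from mean-detrended uniform-image ROIs and a simple noise non-uniformity figure. ROI selection, detrending order, and averaging choices are assumptions, not the authors' protocol.

```python
import numpy as np

def nps_2d(uniform_rois, pixel_mm):
    """2-D noise power spectrum from mean-detrended uniform-image ROIs:
    NPS(u,v) = (dx*dy / (Nx*Ny)) * <|DFT2{ROI - mean(ROI)}|^2> over ROIs."""
    rois = np.asarray(uniform_rois, dtype=float)      # shape (n_rois, Ny, Nx)
    _, ny, nx = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)
    spectra = np.abs(np.fft.fft2(detrended)) ** 2     # FFT over last two axes
    return pixel_mm * pixel_mm / (nx * ny) * spectra.mean(axis=0)

def noise_nonuniformity(image, grid=4):
    """Relative spread of ROI noise (std) across a grid of sub-regions."""
    h, w = image.shape
    stds = [image[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].std()
            for i in range(grid) for j in range(grid)]
    return (max(stds) - min(stds)) / float(np.mean(stds))
```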

  20. Metrics for Success: Strategies for Enabling Core Facility Performance and Assessing Outcomes.

    Science.gov (United States)

    Turpen, Paula B; Hockberger, Philip E; Meyn, Susan M; Nicklin, Connie; Tabarini, Diane; Auger, Julie A

    2016-04-01

    Core facilities are key elements in the research portfolio of academic and private research institutions. Administrators overseeing core facilities (core administrators) require assessment tools for evaluating the need for and effectiveness of these facilities at their institutions. This article discusses ways to promote best practices in core facilities as well as ways to evaluate their performance across eight categories: general management, research and technical staff, financial management, customer base and satisfaction, resource management, communications, institutional impact, and strategic planning. For each category, we provide lessons learned that we believe contribute to the effective and efficient overall management of core facilities. If done well, we believe that encouraging best practices and evaluating performance in core facilities will demonstrate and reinforce the importance of core facilities in the research and educational mission of institutions. It will also increase the job satisfaction of those working in core facilities and improve the likelihood of sustainability of both facilities and personnel.

  1. Categorizing biomarkers of the human exposome and developing metrics for assessing environmental sustainability.

    Science.gov (United States)

    Pleil, Joachim D

    2012-01-01

    The concept of maintaining environmental sustainability broadly encompasses all human activities that impact the global environment, including the production of energy, use and management of finite resources such as petrochemicals, metals, food production (farmland, fresh and ocean waters), and potable water sources (rivers, lakes, aquifers), as well as preserving the diversity of the surrounding ecosystems. The ultimate concern is how one can manage Spaceship Earth in the long term to sustain the life, health, and welfare of the human species and the planet's flora and fauna. On a more intimate scale, one needs to consider the human interaction with the environment as expressed in the form of the exposome, which is defined as all exogenous and endogenous exposures from conception onward, including exposures from diet, lifestyle, and internal biology, as a quantity of critical interest to disease etiology. Current status and subsequent changes in the measurable components of the exposome, the human biomarkers, could thus conceivably be used to assess the sustainability of the environmental conditions with respect to human health. The basic theory is that a shift away from sustainability will be reflected in outlier measurements of human biomarkers. In this review, the philosophy of long-term environmental sustainability is explored in the context of human biomarker measurements and how empirical data can be collected and interpreted to assess if solutions to existing environmental problems might have unintended consequences. The first part discusses four conventions in the literature for categorizing environmental biomarkers and how different types of biomarker measurements might fit into the various grouping schemes. The second part lays out a sequence of data management strategies to establish statistics and patterns within the exposome that reflect human homeostasis and how changes or perturbations might be interpreted in light of external environmental

  2. Metrics for Event Driven Software

    Directory of Open Access Journals (Sweden)

    Neha Chaudhary

    2016-01-01

    The evaluation of a Graphical User Interface (GUI) plays a significant role in improving its quality, yet very few metrics exist for GUI evaluation. The purpose of metrics is to obtain better measurements in terms of risk management, reliability forecasting, project scheduling, and cost repression. In this paper a structural complexity metric is proposed for the evaluation of Graphical User Interfaces. Structural complexity of a GUI is considered an indicator of its complexity, and the goal of identifying structural complexity is to measure GUI testability. The proposed testability evaluation measures the complexity of the user interface from a testing perspective. For GUI evaluation and calculation of structural complexity, an assessment process is designed based on types of events, and a fuzzy model is developed to evaluate the structural complexity of the GUI. This model takes five types of events as input and returns the structural complexity of the GUI as output. Further, a relationship is established between structural complexity and the testability of event-driven software. The proposed model is evaluated with four different applications. It is evident from the results that the higher the complexity, the lower the testability of the application.

  3. Assessing natural resource use by forest-reliant communities in Madagascar using functional diversity and functional redundancy metrics.

    Directory of Open Access Journals (Sweden)

    Kerry A Brown

    Biodiversity plays an integral role in the livelihoods of subsistence-based forest-dwelling communities, and as a consequence it is increasingly important to develop quantitative approaches that capture not only changes in taxonomic diversity, but also variation in natural resources and provisioning services. We apply a functional diversity metric originally developed for addressing questions in community ecology to assess the utilitarian diversity of 56 forest plots in Madagascar. The use categories for utilitarian plants were determined using expert knowledge and household questionnaires. We used a null model approach to examine the utilitarian (functional) diversity and utilitarian redundancy present within ecological communities. Additionally, variables that might influence fluctuations in utilitarian diversity and redundancy--specifically number of felled trees, number of trails, basal area, canopy height, elevation, and distance from village--were analyzed using Generalized Linear Models (GLMs). Eighteen of the 56 plots showed utilitarian diversity values significantly higher than expected. This result indicates that these habitats exhibited a low degree of utilitarian redundancy and were therefore comprised of plants with relatively distinct utilitarian properties. One implication of this finding is that minor losses in species richness may result in reductions in utilitarian diversity and redundancy, which may limit local residents' ability to switch between alternative choices. The GLM analysis showed that the most predictive model included basal area, canopy height and distance from village, which suggests that variation in utilitarian redundancy may be a result of local residents harvesting resources from the protected area. Our approach permits an assessment of the diversity of provisioning services available to local communities, offering unique insights that would not be possible using traditional taxonomic diversity measures. These analyses…
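
    A null model of the kind described can be sketched as follows: the observed utilitarian diversity of a plot (here the mean pairwise mismatch between binary species-by-use-category profiles, an assumed choice of distance) is compared against a distribution from random draws of equally sized species sets from the regional pool.

```python
import numpy as np

rng = np.random.default_rng(0)

def utilitarian_diversity(profiles):
    """Mean pairwise mismatch between binary species-by-use profiles."""
    n = len(profiles)
    d = [np.mean(profiles[i] != profiles[j])
         for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(d))

def null_model_test(plot_profiles, pool_profiles, n_iter=999):
    """P-value for observed diversity exceeding the random expectation."""
    obs = utilitarian_diversity(plot_profiles)
    k = len(plot_profiles)
    null = [utilitarian_diversity(
                pool_profiles[rng.choice(len(pool_profiles), k, replace=False)])
            for _ in range(n_iter)]
    p = (np.sum(np.asarray(null) >= obs) + 1) / (n_iter + 1)
    return obs, p
```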

  4. Quantitative Metrics and Risk Assessment: The Three Tenets Model of Cybersecurity

    Directory of Open Access Journals (Sweden)

    Jeff Hughes

    2013-08-01

    Progress in operational cybersecurity has been difficult to demonstrate. In spite of the considerable research and development investments made for more than 30 years, many government, industrial, financial, and consumer information systems continue to be successfully attacked and exploited on a routine basis. One of the main reasons progress has been so meagre is that most technical cybersecurity solutions proposed to date have been point solutions that fail to address operational tradeoffs, implementation costs, and consequent adversary adaptations across the full spectrum of vulnerabilities. Furthermore, sound prescriptive security principles previously established, such as the Orange Book, have been difficult to apply given current system complexity and acquisition approaches. To address these issues, the authors have developed threat-based descriptive methodologies to more completely identify system vulnerabilities, to quantify the effectiveness of possible protections against those vulnerabilities, and to evaluate operational consequences and tradeoffs of possible protections. This article begins with a discussion of the tradeoffs among seemingly different system security properties such as confidentiality, integrity, and availability. We develop a quantitative framework for understanding these tradeoffs and the issues that arise when those security properties are all in play within an organization. Once security goals and candidate protections are identified, risk/benefit assessments can be performed using a novel multidisciplinary approach called "QuERIES." The article ends with a threat-driven quantitative methodology, called "The Three Tenets," for identifying vulnerabilities and countermeasures in networked cyber-physical systems. The goal of this article is to offer operational guidance, based on the techniques presented here, for informed decision making about cyber-physical system security.

  5. Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...

  6. Service Quality and Process Maturity Assessment

    Directory of Open Access Journals (Sweden)

    Serek Radomir

    2013-12-01

    This article deals with service quality and the methods for its measurement and improvement to reach so-called service excellence. Besides older methods such as SERVQUAL and SERVPERF, capability maturity models are briefly described; based on these, our own methodology is developed and used for process maturity assessment in organizations providing technical services. This method is likewise described and accompanied by examples in figures. The functionality of the method is verified by exploring the correlation between service employee satisfaction and average process maturity in a service organization. The results seem quite promising and open an arena for further studies.

  7. Quality Assessment of Landsat Surface Reflectance Products Using MODIS Data

    Science.gov (United States)

    Feng, Min; Huang, Chengquan; Channan, Saurabh; Vermote, Eric; Masek, Jeffrey G.; Townshend, John R.

    2012-01-01

    Surface reflectance adjusted for atmospheric effects is a primary input for land cover change detection and for developing many higher level surface geophysical parameters. With the development of automated atmospheric correction algorithms, it is now feasible to produce large quantities of surface reflectance products using Landsat images. Validation of these products requires in situ measurements, which either do not exist or are difficult to obtain for most Landsat images. The surface reflectance products derived using data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), however, have been validated more comprehensively. Because the MODIS on the Terra platform and the Landsat 7 are only half an hour apart following the same orbit, and each of the 6 Landsat spectral bands overlaps with a MODIS band, good agreements between MODIS and Landsat surface reflectance values can be considered indicators of the reliability of the Landsat products, while disagreements may suggest potential quality problems that need to be further investigated. Here we develop a system called Landsat-MODIS Consistency Checking System (LMCCS). This system automatically matches Landsat data with MODIS observations acquired on the same date over the same locations and uses them to calculate a set of agreement metrics. To maximize its portability, Java and open-source libraries were used in developing this system, and object-oriented programming (OOP) principles were followed to make it more flexible for future expansion. As a highly automated system designed to run as a stand-alone package or as a component of other Landsat data processing systems, this system can be used to assess the quality of essentially every Landsat surface reflectance image where spatially and temporally matching MODIS data are available. The effectiveness of this system was demonstrated using it to assess preliminary surface reflectance products derived using the Global Land Survey (GLS) Landsat
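
    The specific agreement metrics computed by LMCCS are not enumerated in the abstract; the sketch below shows typical ones (bias, RMSD, and a least-squares slope/intercept) for matched Landsat and MODIS surface reflectance pairs of one band.

```python
import numpy as np

def agreement_metrics(landsat_band, modis_band):
    """Bias, RMSD and least-squares slope/intercept between matched
    Landsat and MODIS surface reflectance values for one band pair."""
    x = np.asarray(modis_band, float).ravel()
    y = np.asarray(landsat_band, float).ravel()
    ok = np.isfinite(x) & np.isfinite(y)          # drop unmatched/masked pixels
    x, y = x[ok], y[ok]
    slope, intercept = np.polyfit(x, y, 1)
    return {"bias": float(np.mean(y - x)),
            "rmsd": float(np.sqrt(np.mean((y - x) ** 2))),
            "slope": float(slope), "intercept": float(intercept)}
```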

  8. Towards Web Documents Quality Assessment for Digital Humanities Scholars

    NARCIS (Netherlands)

    Ceolin, D.; Noordegraaf, J.; Aroyo, L.; van Son, C.; Wolfgang, N.

    2016-01-01

    We present a framework for assessing the quality of Web documents, and a baseline of three quality dimensions: trustworthiness, objectivity and basic scholarly quality. Assessing Web document quality is a "deep data" problem necessitating approaches to handle both data size and complexity.

  9. Objective assessment of speech and audio quality - Technology and applications

    NARCIS (Netherlands)

    Rix, A.W.; Beerends, J.G.; Kim, D.-S.; Kroon, P.; Ghitza, O.

    2006-01-01

    In the past few years, objective quality assessment models have become increasingly used for assessing or monitoring speech and audio quality. By measuring perceived quality on an easily-understood subjective scale, such as listening quality (excellent, good, fair, poor, bad), these methods provide

  10. Assessment of sleep quality in powernapping

    DEFF Research Database (Denmark)

    Kooravand Takht Sabzy, Bashaer; Thomsen, Carsten E

    2011-01-01

    The purpose of this study is to assess Sleep Quality (SQ) in powernapping. The contributing factors for SQ assessment are time of Sleep Onset (SO), Sleep Length (SL), Sleep Depth (SD), and detection of sleep events (K-complex (KC) and Sleep Spindle (SS)). Data from daytime naps of 10 subjects, 2 days each, including EEG and ECG, were recorded. The SD and sleep events were analyzed by applying spectral analysis. The SO time was detected by a combination of signal spectral analysis, Slow Rolling Eye Movement (SREM) detection, Heart Rate Variability (HRV) analysis, and EEG segmentation using both Autocorrelation Function (ACF) and Crosscorrelation Function (CCF) methods. The EEG derivation FP1-FP2, filtered in a narrow band, was used as an alternative to EOG for SREM detection. The ACF and CCF segmentation methods were also applied for detection of sleep events. The ACF method detects segment boundaries…

  11. DAF: differential ACE filtering image quality assessment by automatic color equalization

    Science.gov (United States)

    Ouni, S.; Chambah, M.; Saint-Jean, C.; Rizzi, A.

    2008-01-01

    Ideally, a quality assessment system would perceive and measure image or video impairments just like a human being. But in reality, objective quality metrics do not necessarily correlate well with perceived quality [1]. Moreover, some measures assume that there exists a reference in the form of an "original" to compare to, which prevents their use in the digital restoration field, where often there is no reference. That is why subjective evaluation has been the most used and most efficient approach up to now. But subjective assessment is expensive, time consuming, and hence does not meet economic requirements [2,3]. Thus, reliable automatic methods for visual quality assessment are needed in the field of digital film restoration. The ACE method, for Automatic Color Equalization [4,6], is an algorithm for unsupervised enhancement of digital images. It is based on a new computational approach that tries to model the perceptual response of our vision system, merging the Gray World and White Patch equalization mechanisms in a global and local way. Like our vision system, ACE is able to adapt to widely varying lighting conditions and to extract visual information from the environment efficaciously. Moreover, ACE can be run in an unsupervised manner, which makes it very useful as a digital film restoration tool since no a priori information is available. In this paper we deepen the investigation of using the ACE algorithm as a basis for reference-free image quality evaluation. This new metric, called DAF for Differential ACE Filtering [7], is an objective quality measure that can be used in several image restoration and image quality assessment systems. We compare, on different image databases, the results obtained with DAF and with subjective image quality assessments (Mean Opinion Score, MOS, as a measure of perceived image quality), and we study the correlation between the objective measure and MOS. In our experiments, we have used for the first image…

  12. QUALIMETRIC QUALITY ASSESSMENT OF IODINE SUPPLEMENTS

    Directory of Open Access Journals (Sweden)

    F. S. Bazrova

    2015-01-01

    The article discusses new iodine-containing supplements (ID) derived from organic media: collagenous animal protein (pork rind, carpatina and collagen) and protein concentrates of the SCANGEN and PROMIL C95 brands. It is shown that the use of these proteins as carriers of iodine is due to their high content of the amino acids glycine and alanine, which correlates with the degree of iodine binding. In addition to their special purpose, the new additives improve the rheological properties of foods, including texture, appearance, and functional properties. To assess the quality of the ID and select the preferred option, a qualimetric assessment and a systematic approach are proposed: each ID is considered as a system, its elements are identified, the principles of its construction and the requirements imposed on it are justified, and a general decision tree is built. For the construction of a complex criterion for assessing ID quality, a formalization procedure is proposed based on the selection and evaluation of individual indicators, the determination of the laws of their change depending on the dose, duration and temperature of exposure, and functional efficiency. For comparative evaluation of single indicators and calculation of group indicators, all of them were reduced to a single dimension by introducing dimensionless coefficients adequately describing the analyzed indicators. The article presents the calculated values of single and group indicators characterizing the technological properties of the ID: the degree of iodine binding, the rate of iodine binding, heat losses of iodine, and the basic functional and technological properties of minced-meat systems (water-binding, moisture-holding and emulsifying capacity, and emulsion stability) obtained by introducing the studied ID into the minced-meat system. At the final stage, the best ID is selected on the basis of an assessment of the group indicators.

  13. Cyber threat metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Frye, Jason Neal; Veitch, Cynthia K.; Mateski, Mark Elliot; Michalski, John T.; Harris, James Mark; Trevino, Cassandra M.; Maruoka, Scott

    2012-03-01

    Threats are generally much easier to list than to describe, and much easier to describe than to measure. As a result, many organizations list threats. Fewer describe them in useful terms, and still fewer measure them in meaningful ways. This is particularly true in the dynamic and nebulous domain of cyber threats - a domain that tends to resist easy measurement and, in some cases, appears to defy any measurement. We believe the problem is tractable. In this report we describe threat metrics and models for characterizing threats consistently and unambiguously. The purpose of this report is to support the Operational Threat Assessment (OTA) phase of risk and vulnerability assessment. To this end, we focus on the task of characterizing cyber threats using consistent threat metrics and models. In particular, we address threat metrics and models for describing malicious cyber threats to US FCEB agencies and systems.

  14. A multi-metric assessment of environmental contaminant exposure and effects in an urbanized reach of the Charles River near Watertown, Massachusetts

    Science.gov (United States)

    Smith, Stephen B.; Anderson, Patrick J.; Baumann, Paul C.; DeWeese, Lawrence R.; Goodbred, Steven L.; Coyle, James J.; Smith, David S.

    2012-01-01

    The Charles River Project provided an opportunity to simultaneously deploy a combination of biomonitoring techniques routinely used by the U.S. Geological Survey National Water Quality Assessment Program, the Biomonitoring of Environmental Status and Trends Project, and the Contaminant Biology Program at an urban site suspected to be contaminated with polycyclic aromatic hydrocarbons. In addition to these standardized methods, additional techniques were used to further elucidate contaminant exposure and potential impacts of exposure on biota. The purpose of the study was to generate a comprehensive, multi-metric data set to support assessment of contaminant exposure and effects at the site. Furthermore, the data set could be assessed to determine the relative performance of the standardized method suites typically used by the National Water Quality Assessment Program and the Biomonitoring of Environmental Status and Trends Project, as well as the additional biomonitoring methods used in the study to demonstrate ecological effects of contaminant exposure. The Contaminant Effects Workgroup, an advisory committee of the U.S. Geological Survey/Contaminant Biology Program, identified polycyclic aromatic hydrocarbons as the contaminant class of greatest concern in urban streams of all sizes. The reach of the Charles River near Watertown, Massachusetts, was selected as the site for this study based on the suspected presence of polycyclic aromatic hydrocarbon contamination and the presence of common carp (Cyprinus carpio), largemouth bass (Micropterus salmoides), and white sucker (Catostomus commersoni). All of these fish have extensive contaminant-exposure profiles related to polycyclic aromatic hydrocarbons and other environmental contaminants. This project represented a collaboration of universities, Department of the Interior bureaus including multiple components of the USGS (Biological Resources Discipline and Water Resources Discipline Science Centers, the

  15. Video quality assessment based on correlation between spatiotemporal motion energies

    Science.gov (United States)

    Yan, Peng; Mou, Xuanqin

    2016-09-01

    Video quality assessment (VQA) has become a hot research topic because of the rapidly increasing demand for video communication. From the earliest PSNR metric to advanced perceptually aware models, researchers have made great progress in this field by introducing properties of the human visual system (HVS) into VQA model design. Among the various algorithms that model how the HVS perceives motion, the spatiotemporal energy model has been validated as highly consistent with psychophysical experiments. In this paper, we incorporate the spatiotemporal energy model into VQA model design in the following steps. 1) Following the original spatiotemporal energy model proposed by Adelson et al., we apply linear filters, oriented in space-time and tuned in spatial frequency, to the reference and test videos respectively; the outputs of quadrature pairs of these filters are then squared and summed to give two measures of motion energy, named the rightward and leftward energy responses. 2) Based on this model, we calculate the summation of the rightward and leftward energy responses as spatiotemporal features representing perceptual quality information for videos, named total spatiotemporal motion energy maps. 3) The proposed FR-VQA model, named STME, is computed from statistics based on the pixel-wise correlation between the total spatiotemporal motion energy maps of the reference and distorted videos. The STME model was validated on the LIVE VQA Database by comparison with existing FR-VQA models. Experimental results show that STME achieves excellent prediction accuracy and remains among state-of-the-art VQA models.
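
    The quadrature-pair energy computation in step 1 can be sketched as follows for a single space-time (x, t) slice. Filter sizes, frequencies, and bandwidths are illustrative assumptions, not the parameters used by STME.

```python
import numpy as np
from scipy.ndimage import convolve

def st_gabor(fx, ft, sigma=2.0, size=9):
    """Quadrature (cosine/sine) pair of space-time Gabor filters."""
    ax = np.arange(size) - size // 2
    x, t = np.meshgrid(ax, ax)                 # columns: space, rows: time
    envelope = np.exp(-(x**2 + t**2) / (2.0 * sigma**2))
    phase = 2.0 * np.pi * (fx * x + ft * t)
    return envelope * np.cos(phase), envelope * np.sin(phase)

def total_motion_energy(xt_slice, fx=0.25, ft=0.25):
    """Sum of the two opponent (left/right) motion energy responses:
    each response is the squared even output plus squared odd output."""
    s = np.asarray(xt_slice, dtype=float)
    total = np.zeros_like(s)
    for ft_signed in (ft, -ft):                # the two motion directions
        even, odd = st_gabor(fx, ft_signed)
        total += convolve(s, even) ** 2 + convolve(s, odd) ** 2
    return total
```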

  16. Content-aware objective video quality assessment

    Science.gov (United States)

    Ortiz-Jaramillo, Benhur; Niño-Castañeda, Jorge; Platiša, Ljiljana; Philips, Wilfried

    2016-01-01

    Since the end-user of video-based systems is often a human observer, prediction of user-perceived video quality (PVQ) is an important task for increasing the user satisfaction. Despite the large variety of objective video quality measures (VQMs), their lack of generalizability remains a problem. This is mainly due to the strong dependency between PVQ and video content. Although this problem is well known, few existing VQMs directly account for the influence of video content on PVQ. Recently, we proposed a method to predict PVQ by introducing relevant video content features in the computation of video distortion measures. The method is based on analyzing the level of spatiotemporal activity in the video and using those as parameters of the anthropomorphic video distortion models. We focus on the experimental evaluation of the proposed methodology based on a total of five public databases, four different objective VQMs, and 105 content related indexes. Additionally, relying on the proposed method, we introduce an approach for selecting the levels of video distortions for the purpose of subjective quality assessment studies. Our results suggest that when adequately combined with content related indexes, even very simple distortion measures (e.g., peak signal to noise ratio) are able to achieve high performance, i.e., high correlation between the VQM and the PVQ. In particular, we have found that by incorporating video content features, it is possible to increase the performance of the VQM by up to 20% relative to its noncontent-aware baseline.
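
    A minimal sketch of the idea: compute ITU-T P.910-style spatial and temporal information indexes for the reference content and use them to adjust a simple distortion measure such as PSNR. The combination function and its coefficients below are purely hypothetical; the paper's actual anthropomorphic distortion models differ.

```python
import numpy as np
from scipy.ndimage import sobel

def spatial_information(frame):
    """SI: std of the Sobel gradient magnitude of a frame."""
    f = np.asarray(frame, float)
    return float(np.hypot(sobel(f, 0), sobel(f, 1)).std())

def temporal_information(frames):
    """TI: std of successive frame differences."""
    return float(np.diff(np.asarray(frames, float), axis=0).std())

def psnr(ref, dist, peak=255.0):
    mse = np.mean((np.asarray(ref, float) - np.asarray(dist, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def content_aware_quality(ref_frames, dist_frames, a=1.0, b=0.05):
    """Hypothetical combination: penalise the distortion score by the
    spatiotemporal activity of the reference content."""
    base = np.mean([psnr(r, d) for r, d in zip(ref_frames, dist_frames)])
    activity = spatial_information(ref_frames[0]) * temporal_information(ref_frames)
    return a * base - b * np.log1p(activity)
```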

  17. Quadrupolar metrics

    CERN Document Server

    Quevedo, Hernando

    2016-01-01

    We review the problem of describing the gravitational field of compact stars in general relativity. We focus on the deviations from spherical symmetry which are expected to be due to rotation and to the natural deformations of mass distributions. We assume that the relativistic quadrupole moment takes into account these deviations, and consider the class of axisymmetric static and stationary quadrupolar metrics which satisfy Einstein's equations in empty space and in the presence of matter represented by a perfect fluid. We formulate the physical conditions that must be satisfied for a particular spacetime metric to describe the gravitational field of compact stars. We present a brief review of the main static and axisymmetric exact solutions of Einstein's vacuum equations satisfying all the physical conditions. We discuss how to derive particular stationary and axisymmetric solutions with quadrupolar properties by using the solution generating techniques which correspond either to Lie symmetries or to Bäcklund transformations…
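
    For orientation, static axisymmetric vacuum metrics of the kind reviewed here are commonly written in the Weyl form (sign and coordinate conventions vary by author):

```latex
ds^2 = -e^{2\psi(\rho,z)}\,dt^2
       + e^{2[\gamma(\rho,z)-\psi(\rho,z)]}\left(d\rho^2 + dz^2\right)
       + e^{-2\psi(\rho,z)}\,\rho^2\,d\varphi^2
```

    Here \psi satisfies the flat-space Laplace equation and \gamma follows by quadratures; quadrupolar solutions correspond to particular multipole choices for \psi.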

  18. Toward a No-Reference Image Quality Assessment Using Statistics of Perceptual Color Descriptors.

    Science.gov (United States)

    Lee, Dohyoung; Plataniotis, Konstantinos N

    2016-08-01

    Analysis of the statistical properties of natural images has played a vital role in the design of no-reference (NR) image quality assessment (IQA) techniques. In this paper, we propose parametric models describing the general characteristics of chromatic data in natural images. They provide informative cues for quantifying visual discomfort caused by the presence of chromatic image distortions. The established models capture the correlation of chromatic data between spatially adjacent pixels by means of color invariance descriptors. The use of color invariance descriptors is inspired by their relevance to visual perception, since they provide less sensitive descriptions of image scenes against viewing geometry and illumination variations than luminances. In order to approximate the visual quality perception of chromatic distortions, we devise four parametric models derived from invariance descriptors representing independent aspects of color perception: 1) hue; 2) saturation; 3) opponent angle; and 4) spherical angle. The practical utility of the proposed models is examined by deploying them in our new general-purpose NR IQA metric. The metric initially estimates the parameters of the proposed chromatic models from an input image to constitute a collection of quality-aware features (QAF). Thereafter, a machine learning technique is applied to predict visual quality given a set of extracted QAFs. Experimentation performed on large-scale image databases demonstrates that the proposed metric correlates well with the provided subjective ratings of image quality over commonly encountered achromatic and chromatic distortions, indicating that it can be deployed on a wide variety of color image processing problems as a generalized IQA solution.

  19. 2003 SNL ASCI applications software quality engineering assessment report.

    Energy Technology Data Exchange (ETDEWEB)

    Schofield, Joseph Richard, Jr.; Ellis, Molly A.; Williamson, Charles Michael; Bonano, Lora A.

    2004-02-01

    This document describes the 2003 SNL ASCI Software Quality Engineering (SQE) assessment of twenty ASCI application code teams and the results of that assessment. The purpose of this assessment was to determine code team compliance with the Sandia National Laboratories ASCI Applications Software Quality Engineering Practices, Version 2.0 as part of an overall program assessment.

  20. Metrical Phonology and SLA.

    Science.gov (United States)

    Tice, Bradley S.

    Metrical phonology, a linguistic process of phonological stress assessment and diagrammatic simplification of sentence and word stress, is discussed as it is found in the English language with the intention that it may be used in second language instruction. Stress is defined by its physical and acoustical correlates, and the principles of…

  1. THE METHODOLOGICAL PROBLEMS OF CORRELATION (OR COMPLIANCE) AND QUALITY METRIC ASSESSMENTS IN NEUROPSYCHOLOGY

    OpenAIRE

    2013-01-01

    This article highlights the strengths and weaknesses of two research directions in neuropsychology, domestic and foreign, and identifies possible areas of integration. One of the most acute problems is the development of experimental psychological methods to determine the quantitative and qualitative characteristics of psychic phenomena by flexibly combining qualitative and quantitative approaches, with a view to putting into practice foreign neuroscience principles and standards…

  2. Assessment of Soil Quality of Tidal Marshes in Shanghai City

    OpenAIRE

    Wang, Qing; TAN, JUAN; SHA, Chenyan; RUAN, Junjie; Min WANG; HUANG, Shenfa; Wu, Jianqiang

    2013-01-01

    We take three types of tidal marshes in Shanghai City as the study objects: tidal marshes on the mainland coast, tidal marshes on the rims of islands, and shoals in the Yangtze estuary. On the basis of assessing nutrient quality and environmental quality, respectively, we use a soil quality index (SQI) to assess the soil quality of the tidal flats, formulate quality grading standards, and analyze their current situation and characteristics. The results show that, except for the north of Hangzhou Bay, N…

  3. Measuring soil physical properties to assess soil quality

    OpenAIRE

    Raczkowski, C.W.

    2007-01-01

    Soil quality is the capacity of a soil to function within ecosystem boundaries to sustain biological productivity, maintain environmental quality, and promote plant, animal and human health. A quantitative assessment of soil quality is invaluable in determining the sustainability of land management systems. Criteria for soil quality assessment are: 1) choose indicators of soil quality based on the multiple functions of soil that maintain productivity and environmental health, 2) must include s…

  4. Dynamic time warping assessment of high-resolution melt curves provides a robust metric for fungal identification

    Science.gov (United States)

    Phatak, Sayali S.; Li, Dongmei; Luka, Janos; Calderone, Richard A.

    2017-01-01

    Fungal infections are a global problem imposing considerable disease burden. One of the unmet needs in addressing these infections is rapid, sensitive diagnostics. A promising molecular diagnostic approach is high-resolution melt analysis (HRM). However, there has been little effort in leveraging HRM data for automated, objective identification of fungal species. The purpose of these studies was to assess the utility of distance methods developed for comparison of time series data to classify HRM curves as a means of fungal species identification. Dynamic time warping (DTW), first introduced in the context of speech recognition to identify temporal distortion of similar sounds, is an elastic distance measure that has been successfully applied to a wide range of time series data. Comparison of HRM curves of the rDNA internal transcribed spacer (ITS) region from 51 strains of 18 fungal species using DTW distances allowed accurate classification and clustering of all 51 strains. The utility of DTW distances for species identification was demonstrated by matching HRM curves from 243 previously identified clinical isolates against a database of curves from standard reference strains. The results revealed a number of prior misclassifications, discriminated species that are not resolved by routine phenotypic tests, and accurately identified all 243 test strains. In addition to DTW, several other distance functions, Edit Distance on Real sequence (EDR) and Shape-based Distance (SBD), showed promise. It is concluded that DTW-based distances provide a useful metric for the automated identification of fungi based on HRM curves of the ITS region and that this provides the foundation for a robust and automatable method applicable to the clinical setting. PMID:28264030
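
    The core of the approach is the classic dynamic-programming DTW distance; a minimal sketch follows, with curve preprocessing (normalisation, negative-derivative transform of the raw melt signal) omitted as assumptions outside the scope of the snippet.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-programming DTW between two 1-D melt curves; smaller
    values mean more similar curve shapes despite temporal distortion."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return float(D[n, m])

def identify(unknown_curve, reference_curves):
    """Nearest reference species by DTW distance; reference_curves is a
    dict mapping species name to its reference HRM curve."""
    return min(reference_curves,
               key=lambda sp: dtw_distance(unknown_curve, reference_curves[sp]))
```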

  5. Metrical Phonology: German Sound System.

    Science.gov (United States)

    Tice, Bradley S.

    Metrical phonology, a linguistic process of phonological stress assessment and diagrammatic simplification of sentence and word stress, is discussed as it is found in the English and German languages. The objective is to promote use of metrical phonology as a tool for enhancing instruction in stress patterns in words and sentences, particularly in…

  6. Engineering performance metrics

    Science.gov (United States)

    Delozier, R.; Snyder, N.

    1993-03-01

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved the normal systems design phases: conceptual design, detailed design, implementation, and integration. The lessons learned from this effort are explored in this paper and may provide a starting point for other large engineering organizations seeking to institute a performance measurement system. To facilitate this effort, a team consisting of customers and Engineering staff members was chartered to assist in the development of the metrics system and to ensure that the needs and views of the customers were considered. The development of a system of metrics is no different from the development of any type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  7. The Challenges of Data Quality and Data Quality Assessment in the Big Data Era

    Directory of Open Access Journals (Sweden)

    Li Cai

    2015-05-01

    High-quality data are the precondition for analyzing and using big data and for guaranteeing the value of the data. Currently, comprehensive analysis and research of quality standards and quality assessment methods for big data are lacking. First, this paper summarizes reviews of data quality research. Second, it analyzes the data characteristics of the big data environment, presents the quality challenges faced by big data, and formulates a hierarchical data quality framework from the perspective of data users. This framework consists of big data quality dimensions, quality characteristics, and quality indexes. Finally, on the basis of this framework, the paper constructs a dynamic assessment process for data quality. This process has good expansibility and adaptability and can meet the needs of big data quality assessment. The research results enrich the theoretical scope of big data and lay a solid foundation for future work by establishing an assessment model and studying evaluation algorithms.

  8. Groundwater quality data from the National Water-Quality Assessment Project, May 2012 through December 2013

    Science.gov (United States)

    Arnold, Terri L.; DeSimone, Leslie A.; Bexfield, Laura M.; Lindsey, Bruce D.; Barlow, Jeannie R.; Kulongoski, Justin T.; Musgrove, Marylynn; Kingsbury, James A.; Belitz, Kenneth

    2016-06-20

    Groundwater-quality data were collected from 748 wells as part of the National Water-Quality Assessment Project of the U.S. Geological Survey National Water-Quality Program from May 2012 through December 2013. The data were collected from four types of well networks: principal aquifer study networks, which assess the quality of groundwater used for public water supply; land-use study networks, which assess land-use effects on shallow groundwater quality; major aquifer study networks, which assess the quality of groundwater used for domestic supply; and enhanced trends networks, which evaluate the time scales during which groundwater quality changes. Groundwater samples were analyzed for a large number of water-quality indicators and constituents, including major ions, nutrients, trace elements, volatile organic compounds, pesticides, and radionuclides. These groundwater quality data are tabulated in this report. Quality-control samples also were collected; data from blank and replicate quality-control samples are included in this report.

  9. Performance assessment of geospatial simulation models of land-use change--a landscape metric-based approach.

    Science.gov (United States)

    Sakieh, Yousef; Salmanmahiny, Abdolrassoul

    2016-03-01

    Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics was employed, including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index. The model takes advantage of three decision rules: neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting index demonstrated insignificant differences between the spatial patterns of the ground-truth and simulated layers, there was considerable inconsistency between the simulation results and the real dataset for the remaining metrics. Specifically, simulation outputs were simplistic and the model tended to underestimate the number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared with conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics exploit the full potential of the observed and simulated layers for performance evaluation, provide a basis for more robust interpretation of a calibration process, and deepen the modeler's insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.
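
    Several of the composition- and configuration-based metrics named above can be computed with connected-component labelling; the following is a sketch for a single class of a binary developed/undeveloped map, using FRAGSTATS-style definitions and default 4-connectivity (an assumption; the study's exact settings are not given in the abstract).

```python
import numpy as np
from scipy import ndimage

def landscape_metrics(binary_map, cell_area=1.0):
    """Number of patches, class area and largest patch index (LPI, %)
    for one class of a categorical map, via connected-component labelling."""
    labeled, n_patches = ndimage.label(binary_map)     # 4-connectivity default
    sizes = ndimage.sum(binary_map, labeled, index=range(1, n_patches + 1))
    return {"np": int(n_patches),
            "ca": float(binary_map.sum()) * cell_area,
            "lpi": 100.0 * float(np.max(sizes)) / binary_map.size}

# Compare the spatial pattern of ground-truth and simulated development maps.
rng = np.random.default_rng(1)
truth = rng.random((100, 100)) > 0.7
simulated = rng.random((100, 100)) > 0.7
print(landscape_metrics(truth), landscape_metrics(simulated))
```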

  10. Balancing Attended and Global Stimuli in Perceived Video Quality Assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2011-01-01

    The visual attention mechanism plays a key role in the human perception system and has a significant impact on our assessment of perceived video quality. In spite of receiving less attention from viewers, unattended stimuli can still contribute to the understanding of the visual content. This paper proposes a quality model based on the late attention selection theory, assuming that video quality is perceived via two mechanisms: global and local quality assessment. First, we model several visual features influencing visual attention in quality assessment scenarios and derive an attention map using appropriate fusion techniques. The global quality assessment, based on the assumption that viewers allocate their attention equally to the entire visual scene, is modeled by four carefully designed quality features. By employing these same quality features, the local quality model…
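
    The fusion of local (attended) and global quality is not specified in the abstract; the following is only a generic sketch of attention-weighted pooling balanced against a global score, with the weighting parameter alpha purely hypothetical.

```python
import numpy as np

def fused_quality(local_q_map, attention_map, global_q, alpha=0.6):
    """Balance attention-weighted local quality against a global score.

    local_q_map   : per-pixel quality of the distorted frame
    attention_map : non-negative saliency weights, same shape
    alpha         : hypothetical attended/global balance parameter
    """
    w = attention_map / (attention_map.sum() + 1e-12)
    local_q = float((w * local_q_map).sum())       # attention-weighted pooling
    return alpha * local_q + (1.0 - alpha) * global_q
```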

  11. The quality of assessment visits in community nursing.

    NARCIS (Netherlands)

    Kerkstra, A.; Beemster, F.

    1994-01-01

    The aim of this study was the measurement of the quality of assessment visits of community nurses in The Netherlands. Process criteria were derived for the quality of the assessment visits from the quality standards of community nursing care established by Appelman et al. Over a period of 8 weeks, a

  12. Audiovisual quality assessment and prediction for videotelephony

    CERN Document Server

    Belmudez, Benjamin

    2015-01-01

    The work presented in this book focuses on modeling audiovisual quality as perceived by the users of IP-based solutions for video communication like videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects related to the quality perception of entire video calls, namely, the quality estimation of the single audio and video modalities in an interactive context, the audiovisual quality integration of these modalities and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.

  13. A new air quality perception scale for global assessment of air pollution health effects.

    Science.gov (United States)

    Deguen, Séverine; Ségala, Claire; Pédrono, Gaëlle; Mesbah, Mounir

    2012-12-01

    Despite improvements in air quality in developed countries, air pollution remains a major public health issue. To fully assess the health impact, we must consider that air pollution exposure has both physical and psychological effects; this latter dimension, less documented, is more difficult to measure and subjective indicators constitute an appropriate alternative. In this context, this work presents the methodological development of a new scale to measure the perception of air quality, useful as an exposure or risk appraisal metric in public health contexts. On the basis of the responses from 2,522 subjects in eight French cities, psychometric methods are used to construct the scale from 22 items that assess risk perception (anxiety about health and quality of life) and the extent to which air pollution is a nuisance (sensorial perception and symptoms). The scale is robust, reproducible, and discriminates between subpopulations more susceptible to poor air pollution perception. The individual risk factors of poor air pollution perception are coherent with those findings in the risk perception literature. Perception of air pollution by the general public is a key issue in the development of comprehensive risk assessment studies as well as in air pollution risk management and policy. This study offers a useful new tool to measure such efforts and to help set priorities for air quality improvements in combination with air quality measurements.

  14. Food quality assessment by NIR hyperspectral imaging

    Science.gov (United States)

    Whitworth, Martin B.; Millar, Samuel J.; Chau, Astor

    2010-04-01

    Near infrared reflectance (NIR) spectroscopy is well established in the food industry for rapid compositional analysis of bulk samples. NIR hyperspectral imaging provides new opportunities to measure the spatial distribution of components such as moisture and fat, and to identify and measure specific regions of composite samples. An NIR hyperspectral imaging system has been constructed for food research applications, incorporating a SWIR camera with a cooled 14 bit HgCdTe detector and N25E spectrograph (Specim Ltd, Finland). Samples are scanned in a pushbroom mode using a motorised stage. The system has a spectral resolution of 256 pixels covering a range of 970-2500 nm and a spatial resolution of 320 pixels covering a swathe adjustable from 8 to 300 mm. Images are acquired at a rate of up to 100 lines s-1, enabling samples to be scanned within a few seconds. Data are captured using SpectralCube software (Specim) and analysed using ENVI and IDL (ITT Visual Information Solutions). Several food applications are presented. The strength of individual absorbance bands enables the distribution of particular components to be assessed. Examples are shown for detection of added gluten in wheat flour and to study the effect of processing conditions on fat distribution in chips/French fries. More detailed quantitative calibrations have been developed to study evolution of the moisture distribution in baguettes during storage at different humidities, to assess freshness of fish using measurements of whole cod and fillets, and for prediction of beef quality by identification and separate measurement of lean and fat regions.

  15. Blind image quality assessment using statistical independence in the divisive normalization transform domain

    Science.gov (United States)

    Chu, Ying; Mou, Xuanqin; Fu, Hong; Ji, Zhen

    2015-11-01

    We present a general purpose blind image quality assessment (IQA) method using the statistical independence hidden in the joint distributions of divisive normalization transform (DNT) representations for natural images. The DNT simulates the redundancy reduction process of the human visual system and has good statistical independence for natural undistorted images; meanwhile, this statistical independence changes as the images suffer from distortion. Inspired by this, we investigate the changes in statistical independence between neighboring DNT outputs across the space and scale for distorted images and propose an independence uncertainty index as a blind IQA (BIQA) feature to measure the image changes. The extracted features are then fed into a regression model to predict the image quality. The proposed BIQA metric is called statistical independence (STAIND). We evaluated STAIND on five public databases: LIVE, CSIQ, TID2013, IRCCyN/IVC Art IQA, and intentionally blurred background images. The performances are relatively high for both single- and cross-database experiments. When compared with the state-of-the-art BIQA algorithms, as well as representative full-reference IQA metrics, such as SSIM, STAIND shows fairly good performance in terms of quality prediction accuracy, stability, robustness, and computational costs.
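
    For orientation, a divisive normalization stage divides each coefficient by the pooled energy of its neighbourhood. The sketch below is a deliberately simplified single-band, spatial-domain version of that idea; the Gaussian pooling and constants are assumptions, and the paper's wavelet-domain DNT and joint-statistics features are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def divisive_normalization(coeffs: np.ndarray, sigma: float = 2.0,
                           c: float = 0.1) -> np.ndarray:
    """y = x / sqrt(c + local Gaussian-weighted mean of x**2).

    Simplified stand-in for a DNT stage: each coefficient is divided by
    the pooled energy of its neighbours, flattening local contrast.
    """
    local_energy = gaussian_filter(coeffs ** 2, sigma=sigma)
    return coeffs / np.sqrt(c + local_energy)

# Hypothetical use on a single subband (or grayscale patch)
rng = np.random.default_rng(1)
band = rng.normal(size=(64, 64))
normalized = divisive_normalization(band)
```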

  16. Quality of life assessment in haemophilia.

    Science.gov (United States)

    Bullinger, Monika; von Mackensen, Sylvia

    2004-03-01

    Quality of life (QoL) is a recent focus of research in haemophilia. It can be defined--in analogy to the World Health Organization (WHO) definition of health--as patient-perceived wellbeing and function in terms of physical, emotional, mental, social and behavioural life domains. The paper describes conceptual, methodological and practical foundations of QoL research in adults and children at an international level. It then proceeds to review the QoL literature in the field of haemophilia. With regard to assessment of QoL in haemophilia patients, both generic and, very recently, targeted instruments have been applied. Recent publications have focused on describing QoL in adults, showing specific impairments in terms of physical function (arthropathy) and mental wellbeing (HIV infection), as well as on the cost-benefit (QoL) ratio of haemophilia care. In paediatric haemophilia, research has suggested beneficial QoL outcomes with prophylaxis and stressed the role of the family for patients' wellbeing and function. QoL research is a relevant area of haemophilia research which should be pursued further.

  17. Kinematic Metrics Based on the Virtual Reality System Toyra as an Assessment of the Upper Limb Rehabilitation in People with Spinal Cord Injury

    Directory of Open Access Journals (Sweden)

    Fernando Trincado-Alonso

    2014-01-01

    Full Text Available The aim of this study was to develop new strategies based on virtual reality that can provide additional information to clinicians for the rehabilitation assessment. The virtual reality system Toyra has been used to record kinematic information of 15 patients with cervical spinal cord injury (SCI) while performing evaluation sessions using the mentioned system. A positive correlation, with moderate and very strong associations, has been found between clinical scales and kinematic data, considering only the subscales most closely related to upper limb function. A set of metrics was defined combining these kinematic data to obtain parameters of reaching amplitude, joint amplitude, agility, accuracy, and repeatability during the evaluation sessions of the virtual reality system Toyra. Strong and moderate correlations have also been found between the reaching and joint amplitude metrics and the clinical scales.

  18. Attention modeling for video quality assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2010-01-01

    averaged spatiotemporal pooling. The local quality is derived from visual attention modeling and quality variations over frames. Saliency, motion, and contrast information are taken into account in modeling visual attention, which is then integrated into IQMs to calculate the local quality of a video frame...

  19. MODERN PRINCIPLES OF QUALITY ASSESSMENT OF CARDIOVASCULAR DISEASES TREATMENT

    Directory of Open Access Journals (Sweden)

    A. Yu. Suvorov

    2014-01-01

    Full Text Available The most common approaches used abroad to assess the treatment of cardiovascular diseases, and the ways such assessment methods are created, are considered, along with data on the principles of treatment assessment in Russia. Some foreign registers of acute myocardial infarction, whose aim was therapy quality assessment, are given as examples. The problem of high-quality treatment based on data from evidence-based medicine and some legal aspects related to clinical guidelines in Russia are considered, as well as various ways of assessing treatment quality.

  20. Assessing the relationship between patient satisfaction and clinical quality in an ambulatory setting.

    Science.gov (United States)

    Bosko, Tawnya; Wilson, Kathryn

    2016-10-10

    Purpose The purpose of this paper is to assess the relationship between patient satisfaction and a variety of clinical quality measures in an ambulatory setting to determine if there is significant overlap between patient satisfaction and clinical quality or if they are separate domains of overall physician quality. Assessing this relationship will help to determine whether there is congruence between different types of clinical quality performance and patient satisfaction and therefore provide insight into appropriate financial structures for physicians. Design/methodology/approach Ordered probit regression analysis is conducted with the overall rating of the physician from patient satisfaction responses to the Clinician and Groups Consumer Assessment of Healthcare Providers and Systems survey as the dependent variable. Physician clinical quality is measured across five composite groups based on 26 Healthcare Effectiveness Data and Information Set (HEDIS) measures aggregated from patient electronic health records. Physician and patient demographic variables are also included in the model. Findings Better physician performance on HEDIS measures is correlated with increases in patient satisfaction for three composite measures: antibiotics, generics, and vaccination; it has no relationship for chronic conditions and is correlated with a decrease in patient satisfaction for preventative measures, although the negative relationship for preventative measures is not robust in sensitivity analysis. In addition, younger physicians and male physicians have higher satisfaction scores even with the HEDIS quality measures in the regression. Research limitations/implications There are four primary limitations to this study. First, the data for the study come from a single hospital provider organization. Second, the survey response rate for the satisfaction measure is low. Third, the physician clinical quality measure is the percent of the physician's relevant patient population that met
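
    The analysis hinges on an ordered probit of an ordinal satisfaction rating on quality composites. A minimal sketch with statsmodels' OrderedModel follows; the composite names, physician covariates and synthetic data are hypothetical stand-ins for the survey and HEDIS variables.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical data: ordinal physician rating regressed on five HEDIS-style
# composites plus physician demographics (no constant: thresholds absorb it).
rng = np.random.default_rng(2)
n = 500
X = pd.DataFrame({
    "antibiotics": rng.uniform(0, 1, n),
    "generics": rng.uniform(0, 1, n),
    "vaccination": rng.uniform(0, 1, n),
    "chronic": rng.uniform(0, 1, n),
    "preventive": rng.uniform(0, 1, n),
    "physician_age": rng.integers(30, 70, n),
    "physician_male": rng.integers(0, 2, n),
})
latent = 2 * X["antibiotics"] + X["vaccination"] + rng.normal(size=n)
rating = pd.cut(latent, bins=5, labels=False)  # ordinal outcome 0..4

result = OrderedModel(rating, X, distr="probit").fit(method="bfgs", disp=False)
print(result.summary())
```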

  1. Toward metrics and model validation in web-site QEM

    OpenAIRE

    Olsina Santos, Luis Antonio; Pons, Claudia; Rossi, Gustavo Héctor

    2000-01-01

    In this work, a conceptual framework and the associated strategies for metrics and model validation are analyzed regarding website measurement and evaluation. Particularly, we have conducted three case studies in different Web domains in order to evaluate and compare the quality of sites. For such an end the quantitative, model-based methodology, so-called Web-site QEM (Quality Evaluation Methodology), was utilized. In the assessment process of sites, definition of attributes and measurements...

  2. Survey and Assessment of Land Ecological Quality in Cixi City

    Institute of Scientific and Technical Information of China (English)

    Junbao LIU; Zhiyuan CHEN; Weifeng PAN; Shaojuan XIE

    2013-01-01

    Soil, atmosphere, water and the quality of agricultural products constitute the content of land ecological quality. Cixi City, through a pilot survey project of basic farmland quality, carried out a high-precision soil geochemical survey and a survey of agricultural products, irrigation water and air quality, and established an ecological quality evaluation model of land. Based on the evaluation of soil geochemical quality, we conducted a comprehensive quality assessment of atmosphere, water and agricultural products, and assessed the ecological quality of agricultural land in Cixi City. The evaluation results show that the ecological quality of most agricultural land in Cixi City is excellent, and that there is ecological risk only in some local areas such as the urban periphery. The results provide a demonstration and basis for the fine management of basic farmland and for ecological protection.

  3. Self-Assessment of High-Quality Academic Enrichment Practices

    Science.gov (United States)

    Holstead, Jenell; King, Mindy Hightower

    2011-01-01

    Self-assessment is an often-overlooked alternative to external assessment. Program staff can use self-assessment processes to systematically review the quality of their afterschool programming and to facilitate discussions on ways to enhance it. Self-assessment of point-of-service activities, which can provide a wealth of valuable information…

  4. A Review of Quality Measures for Assessing the Impact of Antimicrobial Stewardship Programs in Hospitals

    Directory of Open Access Journals (Sweden)

    Mary Richard Akpan

    2016-01-01

    Full Text Available The growing problem of antimicrobial resistance (AMR) has led to calls for antimicrobial stewardship programs (ASP) to control antibiotic use in healthcare settings. Key strategies include prospective audit with feedback and intervention, and formulary restriction and preauthorization. Education, guidelines, clinical pathways, de-escalation, and intravenous to oral conversion are also part of some programs. Impact and quality of ASP can be assessed using process or outcome measures. Outcome measures are categorized as microbiological, patient or financial outcomes. The objective of this review was to provide an overview of quality measures for assessing ASP and the reported impact of ASP in peer-reviewed studies, focusing particularly on patient outcomes. A literature search of papers published in English between 1990 and June 2015 was conducted in five databases using a combination of search terms. Primary studies of any design were included. A total of 63 studies were included in this review. Four studies defined quality metrics for evaluating ASP. Twenty-one studies assessed the impact of ASP on antimicrobial utilization and cost, 25 studies evaluated impact on resistance patterns and/or rate of Clostridium difficile infection (CDI). Thirteen studies assessed impact on patient outcomes including mortality, length of stay (LOS) and readmission rates. Six of these 13 studies reported non-significant difference in mortality between pre- and post-ASP intervention, and five reported reductions in mortality rate. On LOS, six studies reported shorter LOS post intervention; a significant reduction was reported in one of these studies. Of note, this latter study reported significantly (p < 0.001) higher unplanned readmissions related to infections post-ASP. Patient outcomes need to be a key component of ASP evaluation. The choice of metrics is influenced by data and resource availability. Controlling for confounders must be considered in the design of

  5. Integration of MODIS-derived metrics to assess interannual variability in snowpack, lake ice, and NDVI in southwest Alaska

    Science.gov (United States)

    Reed, Bradley C.; Budde, Michael E.; Spencer, Page; Miller, Amy E.

    2009-01-01

    Impacts of global climate change are expected to result in greater variation in the seasonality of snowpack, lake ice, and vegetation dynamics in southwest Alaska. All have wide-reaching physical and biological ecosystem effects in the region. We used Moderate Resolution Imaging Spectroradiometer (MODIS) calibrated radiance, snow cover extent, and vegetation index products for interpreting interannual variation in the duration and extent of snowpack, lake ice, and vegetation dynamics for southwest Alaska. The approach integrates multiple seasonal metrics across large ecological regions. Throughout the observation period (2001-2007), snow cover duration was stable within ecoregions, with variable start and end dates. The start of the lake ice season lagged the snow season by 2 to 3 months. Within a given lake, freeze-up dates varied in timing and duration, while break-up dates were more consistent. Vegetation phenology varied less than snow and ice metrics, with start-of-season dates comparatively consistent across years. The start of growing season and snow melt were related to one another as they are both temperature dependent. Higher than average temperatures during the El Niño winter of 2002-2003 were expressed in anomalous ice and snow season patterns. We are developing a consistent, MODIS-based dataset that will be used to monitor temporal trends of each of these seasonal metrics and to map areas of change for the study area.

  6. Water depletion: An improved metric for incorporating seasonal and dry-year water scarcity into water risk assessments

    Directory of Open Access Journals (Sweden)

    Kate A. Brauman

    2016-01-01

    Full Text Available Abstract We present an improved water-scarcity metric we call water depletion, calculated as the fraction of renewable water consumptively used for human activities. We employ new data from the WaterGAP3 integrated global water resources model to illustrate water depletion for 15,091 watersheds worldwide, constituting 90% of total land area. Our analysis illustrates that moderate water depletion at an annual time scale is better characterized as high depletion at a monthly time scale and we are thus able to integrate seasonal and dry-year depletion into the water depletion metric, providing a more accurate depiction of water shortage that could affect irrigated agriculture, urban water supply, and freshwater ecosystems. Applying the metric, we find that the 2% of watersheds that are more than 75% depleted on an average annual basis are home to 15% of global irrigated area and 4% of large cities. An additional 30% of watersheds are depleted by more than 75% seasonally or in dry years. In total, 71% of world irrigated area and 47% of large cities are characterized as experiencing at least periodic water shortage.
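
    The metric itself is simple arithmetic: depletion is consumptive use divided by renewable supply, checked annually and month by month. A toy sketch with invented volumes, using the 75% threshold quoted in the abstract:

```python
def water_depletion(consumption: list, renewable: list) -> str:
    """Classify a watershed by the fraction of renewable water consumed.

    `consumption` and `renewable` hold 12 monthly volumes; annual depletion
    is total use over total supply, and seasonal (or dry-year) depletion is
    caught by checking individual months against the same threshold.
    """
    annual = sum(consumption) / sum(renewable)
    monthly_max = max(c / r for c, r in zip(consumption, renewable) if r > 0)
    if annual > 0.75:
        return "more than 75% depleted annually"
    if monthly_max > 0.75:
        return "more than 75% depleted seasonally"
    return "not heavily depleted"

# Hypothetical watershed with a heavy dry-season irrigation draw
use = [5, 5, 6, 8, 12, 18, 20, 19, 12, 7, 5, 5]
supply = [30, 28, 25, 20, 16, 22, 24, 23, 20, 24, 27, 30]
print(water_depletion(use, supply))  # -> seasonally depleted
```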

  7. Assessing Pre-Service Teachers' Quality Teaching Practices

    Science.gov (United States)

    Chen, Weiyun; Hendricks, Kristin; Archibald, Kelsi

    2011-01-01

    The purpose of this study was to design and validate the Assessing Quality Teaching Rubrics (AQTR) that assesses the pre-service teachers' quality teaching practices in a live lesson or a videotaped lesson. Twenty-one lessons taught by 13 Physical Education Teacher Education (PETE) students were videotaped. The videotaped lessons were evaluated…

  8. Validity of portfolio assessment: which qualities determine ratings?

    NARCIS (Netherlands)

    Driessen, E.W.; Overeem, K.; Tartwijk, J. van; Vleuten, C.P.M. van der; Muijtjens, A.M.M.

    2006-01-01

    The portfolio is becoming increasingly accepted as a valuable tool for learning and assessment. The validity of portfolio assessment, however, may suffer from bias due to irrelevant qualities, such as lay-out and writing style. We examined the possible effects of such qualities in a portfolio programme.

  9. Higher Education Quality Assessment in China: An Impact Study

    Science.gov (United States)

    Liu, Shuiyun

    2015-01-01

    This research analyses an external higher education quality assessment scheme in China, namely, the Quality Assessment of Undergraduate Education (QAUE) scheme. Case studies were conducted in three Chinese universities with different statuses. Analysis shows that the evaluated institutions responded to the external requirements of the QAUE…

  10. Assessing the Quality of a Student-Generated Question Repository

    Science.gov (United States)

    Bates, Simon P.; Galloway, Ross K.; Riise, Jonathan; Homer, Danny

    2014-01-01

    We present results from a study that categorizes and assesses the quality of questions and explanations authored by students in question repositories produced as part of the summative assessment in introductory physics courses over two academic sessions. Mapping question quality onto the levels in the cognitive domain of Bloom's taxonomy, we find…

  11. Real Time Face Quality Assessment for Face Log Generation

    DEFF Research Database (Denmark)

    Kamal, Nasrollahi; Moeslund, Thomas B.

    2009-01-01

    Summarizing a long surveillance video to just a few best-quality face images of each subject, a face log, is of great importance in surveillance systems. Face quality assessment is the backbone of face log generation, and improving the quality assessment makes the face logs more reliable. Developing a real-time face quality assessment system using the most important facial features and employing it for face log generation are the concerns of this paper. Extensive tests using four databases are carried out to validate the usability of the system.

  12. Quality Assurance of Assessment and Moderation Discourses Involving Sessional Staff

    Science.gov (United States)

    Grainger, Peter; Adie, Lenore; Weir, Katie

    2016-01-01

    Quality assurance is a major agenda in tertiary education. The casualisation of academic work, especially in teaching, is also a quality assurance issue. Casual or sessional staff members teach and assess more than 50% of all university courses in Australia, and yet the research in relation to the role sessional staff play in quality assurance of…

  13. The Emergence of Quality Assessment in Brazilian Basic Education

    Science.gov (United States)

    Kauko, Jaakko; Centeno, Vera Gorodski; Candido, Helena; Shiroma, Eneida; Klutas, Anni

    2016-01-01

    The focus in this article is on Brazilian education policy, specifically quality assurance and evaluation. The starting point is that quality, measured by means of large-scale assessments, is one of the key discursive justifications for educational change. The article addresses the questions of how quality evaluation became a significant feature…

  14. Evaluation of a malting barley quality assessment system

    NARCIS (Netherlands)

    Lonkhuijsen, H.J. van; Douma, A.C.; Angelino, S.A.G.F.

    1998-01-01

    New malting barley varieties are annually tested for their malting and brewing potential according to a field trial set-up combined with quality evaluation on pilot scale. To assess the effects of trial year and location on quality evaluation data, a data base consisting of quality data from Dutch m

  15. Assessment of the Quality Management Models in Higher Education

    Science.gov (United States)

    Basar, Gulsun; Altinay, Zehra; Dagli, Gokmen; Altinay, Fahriye

    2016-01-01

    This study involves the assessment of the quality management models in Higher Education by explaining the importance of quality in higher education and by examining the higher education quality assurance system practices in other countries. The qualitative study was carried out with the members of the Higher Education Planning, Evaluation,…

  16. On Improving Higher Vocational College Education Quality Assessment

    Science.gov (United States)

    Wu, Xiang; Chen, Yan; Zhang, Jie; Wang, Yi

    Teaching quality assessment is a judgment process that uses the theory and techniques of educational evaluation to test whether the process and result of teaching have reached a certain quality level. Many vocational schools have established teaching quality assessment systems with their own characteristics as the basic means of self-examination and teaching behavior adjustment. Combining the characteristics and requirements of vocational education, and by analyzing the problems that exist in contemporary vocational schools, this paper considers the content, assessment criteria and feedback system of teaching quality assessment in order to optimize the system, to complete the teaching quality information network, and to offer suggestions for feedback channels, thereby promoting the institutionalization and standardization of vocational schools and contributing to the overall improvement of their quality.

  17. Food quality assessment in parent-offspring dyads

    DEFF Research Database (Denmark)

    Bech-Larsen, Tino; Jensen, Birger Boutrup

    When the buyer and the consumer of a food product are not identical, the risk of discrepancies between food quality expectations and experiences is even higher. We introduce the concept of dyadic quality assessment and apply it to an exploration of parents' willingness to pay for new and healthier in-between meals for their children. Results show poor congruence between parent and child quality assessment due to the two parties emphasising quite different quality aspects. Improved parental knowledge of their children's quality experience however has a significant effect on parents' willingness to pay. Accordingly, both parents and children should be involved when developing and testing healthy in-between meals.

  18. Perceived image quality assessment for color images on mobile displays

    Science.gov (United States)

    Jang, Hyesung; Kim, Choon-Woo

    2015-01-01

    With the increase in size and resolution of mobile displays and advances in embedded processors for image enhancement, the perceived quality of images on mobile displays has improved drastically. This paper presents a quantitative method to evaluate the perceived image quality of color images on mobile displays. Three image quality attributes, colorfulness, contrast and brightness, are chosen to represent perceived image quality. Image quality assessment models are constructed based on the results of human visual experiments. In this paper, three-phase human visual experiments are designed to achieve credible outcomes while reducing the time and resources needed for visual experiments. Values of the parameters of the image quality assessment models are estimated from the results of the human visual experiments. The performances of different image quality assessment models are compared.

  19. Microbiological methods for assessing soil quality

    NARCIS (Netherlands)

    Bloem, J.; Hopkins, D.W.; Benedetti, A.

    2006-01-01

    This book provides a selection of microbiological methods that are already applied in regional or national soil quality monitoring programs. It is split into two parts: part one gives an overview of approaches to monitoring, evaluating and managing soil quality. Part two provides a selection of methods.

  20. SOIL QUALITY ASSESSMENT USING FUZZY MODELING

    Science.gov (United States)

    Maintaining soil productivity is essential if agriculture production systems are to be sustainable, thus soil quality is an essential issue. However, there is a paucity of tools for measurement for the purpose of understanding changes in soil quality. Here the possibility of using fuzzy modeling t...

  1. Assessing water quality in Lake Naivasha

    NARCIS (Netherlands)

    Ndungu, Jane Njeri

    2014-01-01

    Water quality in aquatic systems is important because it maintains the ecological processes that support biodiversity. However, declining water quality due to environmental perturbations threatens the stability of the biotic integrity and therefore hinders the ecosystem services and functions of aquatic systems.

  2. The use of the kurtosis metric in the evaluation of occupational hearing loss in workers in China: Implications for hearing risk assessment

    Directory of Open Access Journals (Sweden)

    Robert I Davis

    2012-01-01

    Full Text Available This study examined: (1) the value of using the statistical metric, kurtosis [β(t)], along with an energy metric to determine the hazard to hearing from high level industrial noise environments, and (2) the accuracy of the International Standard Organization (ISO-1999:1990) model for median noise-induced permanent threshold shift (NIPTS) estimates with actual recent epidemiological data obtained on 240 highly screened workers exposed to high-level industrial noise in China. A cross-sectional approach was used in this study. Shift-long temporal waveforms of the noise that workers were exposed to for evaluation of noise exposures and audiometric threshold measures were obtained on all selected subjects. The subjects were exposed to only one occupational noise exposure without the use of hearing protection devices. The results suggest that: (1) the kurtosis metric is an important variable in determining the hazards to hearing posed by a high-level industrial noise environment for hearing conservation purposes, i.e., the kurtosis differentiated between the hazardous effects produced by Gaussian and non-Gaussian noise environments, (2) the ISO-1999 predictive model does not accurately estimate the degree of median NIPTS incurred from high-level, high-kurtosis industrial noise, and (3) the inherent large variability in NIPTS among subjects emphasizes the need to develop and analyze a larger database of workers with well-documented exposures to better understand the effect of kurtosis on NIPTS incurred from high level industrial noise exposures. A better understanding of the role of the kurtosis metric may lead to its incorporation into a new generation of more predictive hearing risk assessment for occupational noise exposure.
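
    The kurtosis metric β(t) is the ordinary fourth standardized moment computed over successive windows of the noise record. A short SciPy sketch follows; the sampling rate, window length and synthetic impulsive noise are all hypothetical.

```python
import numpy as np
from scipy.stats import kurtosis

def window_kurtosis(waveform: np.ndarray, fs: int, win_s: float = 1.0):
    """Pearson kurtosis beta over consecutive windows of a noise waveform.

    Gaussian noise gives beta ~= 3; impulsive industrial noise drives beta
    well above 3, the non-Gaussian case the study flags as more hazardous.
    """
    win = int(fs * win_s)
    n_windows = len(waveform) // win
    frames = waveform[: n_windows * win].reshape(n_windows, win)
    return kurtosis(frames, axis=1, fisher=False)  # fisher=False -> beta, not excess

# Hypothetical shift recording: Gaussian noise plus sparse impacts
rng = np.random.default_rng(3)
fs = 8000
x = rng.normal(size=fs * 60)
x[rng.integers(0, x.size, 200)] += rng.normal(scale=15.0, size=200)
print(window_kurtosis(x, fs).mean())
```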

  3. MEASURING OBJECT-ORIENTED SYSTEMS BASED ON THE EXPERIMENTAL ANALYSIS OF THE COMPLEXITY METRICS

    Directory of Open Access Journals (Sweden)

    J.S.V.R.S. SASTRY

    2011-05-01

    Full Text Available Metrics are used to help a software engineer in quantitative analysis to assess the quality of a design before a system is built. The focus of object-oriented metrics is on the class, which is the fundamental building block of the object-oriented architecture. These metrics are focused on internal object structure and external object structure. Internal object structure reflects the complexity of each individual entity, such as methods and classes. External complexity measures the interaction among entities, such as coupling and inheritance. This paper mainly focuses on a set of object-oriented metrics that can be used to measure the quality of an object-oriented design, covering two types of complexity metrics in the object-oriented paradigm: MOOD metrics and Lorenz & Kidd metrics. MOOD metrics consist of method inheritance factor (MIF), coupling factor (CF), attribute inheritance factor (AIF), method hiding factor (MHF), attribute hiding factor (AHF) and polymorphism factor (PF). Lorenz & Kidd metrics consist of number of operations overridden (NOO), number of operations added (NOA) and specialization index (SI). MOOD metrics and Lorenz & Kidd metrics measurements are used mainly by designers and testers. Designers use these metrics to assess the software early in the process, making changes that will reduce complexity and improve the continuing capability of the design. Testers use them to test the software for finding the complexity and performance of the system and the quality of the software. This paper reviews how MOOD metrics and Lorenz & Kidd metrics are validated theoretically and empirically. In this paper, work has been done to explore the quality of design of software components using the object-oriented paradigm. A number of object-oriented metrics have been proposed in the literature for measuring design attributes such as inheritance, coupling and polymorphism. In this paper, metrics have been used to analyze various features of software components. Complexity of methods
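
    As a concrete illustration, the MOOD method inheritance factor (MIF) is the share of available methods that classes inherit rather than define locally. The sketch below approximates it for Python classes via introspection; this reading of the definition is a simplification for illustration, not a faithful MOOD tool.

```python
import inspect

def method_inheritance_factor(classes) -> float:
    """MIF ~= inherited methods / all available methods, summed over classes."""
    inherited = available = 0
    for cls in classes:
        for name, _ in inspect.getmembers(cls, inspect.isfunction):
            if name.startswith("__"):
                continue
            available += 1
            if name not in cls.__dict__:   # defined in an ancestor, not locally
                inherited += 1
    return inherited / available if available else 0.0

# Hypothetical toy hierarchy
class Shape:
    def area(self): ...
    def describe(self): ...

class Circle(Shape):
    def area(self): ...        # overridden locally; describe() is inherited

print(method_inheritance_factor([Shape, Circle]))  # 1 of 4 methods inherited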

  4. Quality Assessment of Compressed Video for Automatic License Plate Recognition

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Støttrup-Andersen, Jesper; Forchhammer, Søren;

    2014-01-01

    Definition of video quality requirements for video surveillance poses new questions in the area of quality assessment. This paper presents a quality assessment experiment for an automatic license plate recognition scenario. We explore the influence of the compression by H.264/AVC and H.265/HEVC...... recognition in our study has a behavior similar to human recognition, allowing the use of the same mathematical models. We furthermore propose an application of one of the models for video surveillance systems...

  5. Modeling the Color Image and Video Quality on Liquid Crystal Displays with Backlight Dimming

    DEFF Research Database (Denmark)

    Korhonen, Jari; Mantel, Claire; Burini, Nino;

    2013-01-01

    ) and show how the modeled image can be used as an input to quality assessment algorithms. For quality assessment, we propose an image quality metric, based on Peak Signal-to-Noise Ratio (PSNR) computation in the CIE L*a*b* color space. The metric takes luminance reduction, color distortion and loss...
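
    A minimal version of the metric's core idea, PSNR computed in CIE L*a*b* rather than in RGB, might look like the following; the peak value and the crude dimming stand-in are assumptions, and the luminance-reduction and color-distortion terms of the actual model are not included.

```python
import numpy as np
from skimage.color import rgb2lab

def psnr_lab(reference: np.ndarray, distorted: np.ndarray) -> float:
    """PSNR over all three CIE L*a*b* channels; inputs are RGB in [0, 1].

    L* spans roughly [0, 100], so 100 is used as the peak value; a*/b*
    share that normalisation here purely for simplicity.
    """
    mse = np.mean((rgb2lab(reference) - rgb2lab(distorted)) ** 2)
    return 10 * np.log10(100.0 ** 2 / max(mse, 1e-12))

# Hypothetical pair: original vs. a crude dimmed-backlight rendering
rng = np.random.default_rng(4)
img = rng.uniform(size=(32, 32, 3))
dimmed = np.clip(img * 0.8, 0, 1)
print(f"{psnr_lab(img, dimmed):.1f} dB")
```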

  6. Computing and Interpreting Fisher Information as a Metric of Sustainability: Regime Changes in the United States Air Quality

    Science.gov (United States)

    As a key tool in information theory, Fisher Information has been used to explore the observable behavior of a variety of systems. In particular, recent work has demonstrated its ability to assess the dynamic order of real and model systems. However, in order to solidify the use o...

  7. National Water Quality Assessment (NAWQA) Program

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — National scope of NAWQA water-quality sample- and laboratory-result data and other supporting information obtained from NWIS systems hosted by individual Water...

  8. QUALITY ASSESSMENT OF BISCUITS USING COMPUTER VISION

    Directory of Open Access Journals (Sweden)

    Archana A. Bade

    2016-08-01

    Full Text Available As developments and customer expectations in high-quality foods increase day by day, it becomes essential for food industries to maintain the quality of their products, so a quality inspection system is needed before packaging. Automation in industry gives better inspection speed than human vision, and automation based on computer vision is cost effective and flexible, providing one of the best alternatives for a more accurate, fast inspection system. Image processing and image analysis are vital parts of a computer vision system. In this paper, we discuss real-time quality inspection of premium-class biscuits using computer vision, covering the design of the system, its implementation and verification, and the installation of the complete system at a biscuit factory. The overall system comprises image acquisition, preprocessing, feature extraction using segmentation, color variation analysis and interpretation, and the system hardware.

  9. Doctors or technicians: assessing quality of medical education

    Directory of Open Access Journals (Sweden)

    Tayyab Hasan

    2010-09-01

    Full Text Available Tayyab Hasan, PAPRSB Institute of Health Sciences, University Brunei Darussalam, Bandar Seri Begawan, Brunei. Abstract: Medical education institutions usually adapt industrial quality management models that measure the quality of the process of a program but not the quality of the product. The purpose of this paper is to analyze the impact of industrial quality management models on medical education and students, and to highlight the importance of introducing a proper educational quality management model. Industrial quality management models can measure the training component in terms of competencies, but they lack the educational component measurement. These models use performance indicators to assess their process improvement efforts. Researchers suggest that the performance indicators used in educational institutions may only measure their fiscal efficiency without measuring the quality of the educational experience of the students. In most of the institutions, where industrial models are used for quality assurance, students are considered as customers and are provided with the maximum services and facilities possible. Institutions are required to fulfill a list of recommendations from the quality control agencies in order to enhance student satisfaction and to guarantee standard services. Quality of medical education should be assessed by measuring the impact of the educational program and quality improvement procedures in terms of knowledge base development, behavioral change, and patient care. Industrial quality models may focus on academic support services and processes, but educational quality models should be introduced in parallel to focus on educational standards and products. Keywords: educational quality, medical education, quality control, quality assessment, quality management models

  10. Making metrics meaningful

    Directory of Open Access Journals (Sweden)

    Linda Bennett

    2013-07-01

    Full Text Available The continued purchase of AHSS resources is more threatened by library budget squeezes than that of STM resources. Librarians must justify all expenditure, but quantitative metrical analysis to assess the value to the institution of journals and specialized research databases for AHSS subjects can be inconclusive; often the number of recorded transactions is lower than for STM, as the resource may be relevant to a smaller number of users. This paper draws on a literature review and extensive primary research, including a survey of 570 librarians and academics across the Anglophone countries, findings from focus group meetings and the analysis of user behaviour at a UK university before and after the installation of the Summon discovery system. It concludes that providing a new approach to metrics can help to develop resource strategies that meet changing user needs, and that usage statistics can be complemented with supplementary ROI measures to make them more meaningful.

  11. Coastal Water Quality Assessment by Self-Organizing Map

    Institute of Scientific and Technical Information of China (English)

    NIU Zhiguang; ZHANG Hongwei; ZHANG Ying

    2005-01-01

    A new approach to coastal water quality assessment was put forward through a study on the self-organizing map (SOM). Firstly, the water quality data of Bohai Bay from 1999 to 2002 were prepared. Then, a set of software for coastal water quality assessment was developed based on the batch version algorithm of SOM and the SOM toolbox in the MATLAB environment. Furthermore, the training results of SOM could be analyzed with single water quality indexes, the value of N:P (atomic ratio) and the eutrophication index E, so that the data were clustered into five different pollution types using the k-means clustering method. Finally, it was realized that the monitoring data serial trajectory could be tracked and that new data could be classified and assessed automatically. Through application it is found that this study helps to analyze and assess coastal water quality by several kinds of graphics, which offers easy decision support for recognizing pollution status and taking corresponding measures.
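
    A rough sketch of the described pipeline, training a SOM on normalised water-quality indexes and then clustering its codebook into five pollution types with k-means, is given below. It assumes the third-party minisom package; the indexes, grid size and parameters are invented.

```python
import numpy as np
from minisom import MiniSom            # assumed SOM implementation
from sklearn.cluster import KMeans

# Hypothetical matrix: monitoring samples x normalised water-quality indexes
rng = np.random.default_rng(5)
data = rng.uniform(size=(200, 4))      # e.g. DIN, phosphate, COD, chlorophyll-a

som = MiniSom(8, 8, input_len=4, sigma=1.0, learning_rate=0.5, random_seed=5)
som.train_random(data, num_iteration=2000)

# Cluster the trained codebook vectors into five pollution types,
# mirroring the paper's k-means step on the SOM output.
codebook = som.get_weights().reshape(-1, 4)
types = KMeans(n_clusters=5, n_init=10, random_state=5).fit_predict(codebook)

# Classify a new monitoring sample via its best-matching unit
i, j = som.winner(data[0])
print("pollution type:", types[i * 8 + j])
```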

  12. Metrics for Radiologists in the Era of Value-based Health Care Delivery.

    Science.gov (United States)

    Sarwar, Ammar; Boland, Giles; Monks, Annamarie; Kruskal, Jonathan B

    2015-01-01

    Accelerated by the Patient Protection and Affordable Care Act of 2010, health care delivery in the United States is poised to move from a model that rewards the volume of services provided to one that rewards the value provided by such services. Radiology department operations are currently managed by an array of metrics that assess various departmental missions, but many of these metrics do not measure value. Regulators and other stakeholders also influence what metrics are used to assess medical imaging. Metrics such as the Physician Quality Reporting System are increasingly being linked to financial penalties. In addition, metrics assessing radiology's contribution to cost or outcomes are currently lacking. In fact, radiology is widely viewed as a contributor to health care costs without an adequate understanding of its contribution to downstream cost savings or improvement in patient outcomes. The new value-based system of health care delivery and reimbursement will measure a provider's contribution to reducing costs and improving patient outcomes with the intention of making reimbursement commensurate with adherence to these metrics. The authors describe existing metrics and their application to the practice of radiology, discuss the so-called value equation, and suggest possible metrics that will be useful for demonstrating the value of radiologists' services to their patients.

  13. The utility metric: a novel method to assess the overall performance of discrete brain-computer interfaces.

    Science.gov (United States)

    Dal Seno, Bernardo; Matteucci, Matteo; Mainardi, Luca T

    2010-02-01

    A relevant issue in a brain-computer interface (BCI) is the capability to efficiently convert user intentions into correct actions, and how to properly measure this efficiency. Usually, the evaluation of a BCI system is approached through the quantification of the classifier performance, which is often measured by means of the information transfer rate (ITR). A shortcoming of this approach is that the control interface design is neglected, and hence a poor description of the overall performance is obtained for real systems. To overcome this limitation, we propose a novel metric based on the computation of BCI Utility. The new metric can accurately predict the overall performance of a BCI system, as it takes into account both the classifier and the control interface characteristics. It is therefore suitable for design purposes, where we have to select the best options among different components and different parameters setup. In the paper, we compute Utility in two scenarios, a P300 speller and a P300 speller with an error correction system (ECS), for different values of accuracy of the classifier and recall of the ECS. Monte Carlo simulations confirm that Utility predicts the performance of a BCI better than ITR.
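
    For reference, the ITR criticised here as an incomplete measure is usually the Wolpaw formula, bits per selection scaled to bits per minute. A small sketch with hypothetical speller parameters:

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, trial_s: float) -> float:
    """Wolpaw information transfer rate in bits/min for an N-target BCI.

    This classical ITR ignores how the control interface turns selections
    into useful actions (error correction, deletions), which is exactly
    the gap the Utility metric is meant to close.
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / trial_s

# Hypothetical P300 speller: 36 symbols, 85% accuracy, 10 s per selection
print(f"{wolpaw_itr(36, 0.85, 10.0):.1f} bits/min")
```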

  14. Assessment time of the Welfare Quality protocol for dairy cattle

    NARCIS (Netherlands)

    Vries, de M.; Engel, B.; Uijl, I.; Schaik, van G.; Dijkstra, T.; Boer, de I.J.M.; Bokkers, E.A.M.

    2013-01-01

    The Welfare Quality® (WQ) protocols are increasingly used for assessing welfare of farm animals. These protocols are time consuming (about one day per farm) and, therefore, costly. Our aim was to assess the scope for reduction of on-farm assessment time of the WQ protocol for dairy cattle. Seven tra

  15. Factors Influencing Assessment Quality in Higher Vocational Education

    Science.gov (United States)

    Baartman, Liesbeth; Gulikers, Judith; Dijkstra, Asha

    2013-01-01

    The development of assessments that are fit to assess professional competence in higher vocational education requires a reconsideration of assessment methods, quality criteria and (self)evaluation. This article examines the self-evaluations of nine courses of a large higher vocational education institute. Per course, 4-11 teachers and 3-10…

  16. Factors influencing assessment quality in higher vocational education

    NARCIS (Netherlands)

    Baartman, L.; Gulikers, J.T.M.; Dijkstra, A.

    2013-01-01

    The development of assessments that are fit to assess professional competence in higher vocational education requires a reconsideration of assessment methods, quality criteria and (self)evaluation. This article examines the self-evaluations of nine courses of a large higher vocational education inst

  17. Dried fruits quality assessment by hyperspectral imaging

    Science.gov (United States)

    Serranti, Silvia; Gargiulo, Aldo; Bonifazi, Giuseppe

    2012-05-01

    Dried fruit products present different market values according to their quality. Such quality is usually quantified in terms of freshness of the products, as well as the presence of contaminants (pieces of shell, husk, and small stones), defects, mould and decay. The combination of these parameters, in terms of relative presence, represents a fundamental set of attributes of dried fruits detectable by the human senses (visual appearance, organoleptic properties, etc.) and conditions their overall quality as marketable products. Sorting-selection strategies exist, but they sometimes fail when a higher degree of detection is required, especially when discriminating between dried fruits of relatively small dimensions or when aiming at "early detection" of the pathogens responsible for future mould and decay development. Surface characteristics of dried fruits can be investigated by hyperspectral imaging (HSI). In this paper, specific "ad hoc" applications proposing quality detection logics based on an HSI approach are described, compared and critically evaluated. Reflectance spectra of selected dried fruits (hazelnuts) of different quality and characterized by the presence of different contaminants and defects have been acquired by a laboratory device equipped with two HSI systems working in two different spectral ranges: the visible-near infrared field (400-1000 nm) and the near infrared field (1000-1700 nm). The spectra have been processed and the results evaluated adopting both a simple and fast wavelength band ratio approach and a more sophisticated classification logic based on principal component analysis (PCA).
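
    The "simple and fast wavelength band ratio approach" amounts to dividing one reflectance band by another across the cube. The band pair and cube below are invented for illustration; the paper's actual bands are not stated here.

```python
import numpy as np

def band_ratio_map(cube: np.ndarray, wavelengths: np.ndarray,
                   num_nm: float, den_nm: float) -> np.ndarray:
    """Ratio image of the two bands nearest the requested wavelengths.

    `cube` is (rows, cols, bands) reflectance; high or low ratios can then
    be thresholded to flag candidate defects or contaminants.
    """
    i = int(np.abs(wavelengths - num_nm).argmin())
    j = int(np.abs(wavelengths - den_nm).argmin())
    return cube[..., i] / np.maximum(cube[..., j], 1e-6)

# Hypothetical NIR cube covering 1000-1700 nm in 128 bands
rng = np.random.default_rng(9)
wl = np.linspace(1000, 1700, 128)
cube = rng.uniform(0.1, 0.9, size=(64, 64, 128))
ratio = band_ratio_map(cube, wl, 1450.0, 1200.0)  # plausible, not the paper's pair
```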

  18. Iowa Child Care Quality Rating System: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    Science.gov (United States)

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Iowa's Child Care Quality Rating System prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile is divided into the following categories: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for Family Child Care Programs;…

  19. Exploring the Notion of Quality in Quality Higher Education Assessment in a Collaborative Future

    Science.gov (United States)

    Maguire, Kate; Gibbs, Paul

    2013-01-01

    The purpose of this article is to contribute to the debate on the notion of quality in higher education with particular focus on "objectifying through articulation" the assessment of quality by professional experts. The article gives an overview of the differentiations of quality as used in higher education. It explores a substantial piece of…

  1. Virginia Star Quality Initiative: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    Science.gov (United States)

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Virginia's Star Quality Initiative prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators…

  2. Arbuscular mycorrhiza in soil quality assessment

    DEFF Research Database (Denmark)

    Kling, M.; Jakobsen, I.

    1998-01-01

    quantitative and qualitative measurements of this important biological resource. Various methods for the assessment of the potential for mycorrhiza formation and function are presented. Examples are given of the application of these methods to assess the impact of pesticides on the mycorrhiza....

  3. Assessing the colour quality of LED sources

    DEFF Research Database (Denmark)

    Jost-Boissard, S.; Avouac, P.; Fontoynont, Marc

    2015-01-01

    The CIE General Colour Rendering Index is currently the criterion used to describe and measure the colour-rendering properties of light sources. But over the past years, there has been increasing evidence of its limitations, particularly its ability to predict the perceived colour quality of light sources and especially some LEDs. In this paper, several aspects of perceived colour quality are investigated using a side-by-side paired comparison method and the following criteria: naturalness of fruits and vegetables, colourfulness of the Macbeth Color Checker chart, visual appreciation...

  4. Image and Video Quality Assessment Using Neural Network and SVM

    Institute of Scientific and Technical Information of China (English)

    DING Wenrui; TONG Yubing; ZHANG Qishan; YANG Dongkai

    2008-01-01

    An image and video quality assessment method was developed using neural network and support vector machines (SVM), with the peak signal to noise ratio (PSNR) and the structure similarity indexes used to describe image quality. The neural network was used to obtain the mapping functions between the objective quality assessment indexes and subjective quality assessment. The SVM was used to classify the images into different types, which were assessed using different mapping functions. Video quality was assessed based on the quality of each frame in the video sequence, with various weights to describe motion and scene changes in the video. The number of isolated points in the correlations of the image and video subjective and objective quality assessments was reduced by this method. Simulation results show that the method accurately assesses image quality. The monotonicity of the method for images is 6.94% higher than with the PSNR method, and the root mean square error is at least 35.90% lower than with the PSNR.

  5. Objective and Subjective Assessment of Digital Pathology Image Quality

    Directory of Open Access Journals (Sweden)

    Prarthana Shrestha

    2015-03-01

    Full Text Available The quality of an image produced by the Whole Slide Imaging (WSI) scanners is of critical importance for using the image in clinical diagnosis. Therefore, it is very important to monitor and ensure the quality of images. Since subjective image quality assessments by pathologists are very time-consuming, expensive and difficult to reproduce, we propose a method for objective assessment based on clinically relevant and perceptual image parameters: sharpness, contrast, brightness, uniform illumination and color separation; derived from a survey of pathologists. We developed techniques to quantify the parameters based on content-dependent absolute pixel performance and to manipulate the parameters in a predefined range resulting in images with content-independent relative quality measures. The method does not require a prior reference model. A subjective assessment of the image quality is performed involving 69 pathologists and 372 images (including 12 optimal quality images and their distorted versions per parameter at 6 different levels). To address the inter-reader variability, a representative rating is determined as a one-tailed 95% confidence interval of the mean rating. The results of the subjective assessment support the validity of the proposed objective image quality assessment method to model the readers’ perception of image quality. The subjective assessment also provides thresholds for determining the acceptable level of objective quality per parameter. The images for both the subjective and objective quality assessment are based on the HercepTest™ slides scanned by the Philips Ultra Fast Scanners, developed at Philips Digital Pathology Solutions. However, the method is applicable also to other types of slides and scanners.
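
    Generic pixel-level proxies for a few of the surveyed parameters are easy to sketch, though the paper's content-dependent formulations are more involved and are not reproduced here. The measures below (mean, RMS contrast, variance of the Laplacian) are common stand-ins, not the authors' definitions.

```python
import numpy as np
from scipy.ndimage import laplace

def slide_image_parameters(gray: np.ndarray) -> dict:
    """Crude proxies for brightness, contrast and sharpness of a tile."""
    return {
        "brightness": float(gray.mean()),         # global luminance level
        "contrast": float(gray.std()),            # RMS contrast
        "sharpness": float(laplace(gray).var()),  # variance of Laplacian
    }

# Hypothetical grayscale tile from a scanned slide, values in [0, 1]
rng = np.random.default_rng(6)
tile = rng.uniform(size=(256, 256))
print(slide_image_parameters(tile))
```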

  6. E-Services quality assessment framework for collaborative networks

    Science.gov (United States)

    Stegaru, Georgiana; Danila, Cristian; Sacala, Ioan Stefan; Moisescu, Mihnea; Mihai Stanescu, Aurelian

    2015-08-01

    In a globalised networked economy, collaborative networks (CNs) are formed to take advantage of new business opportunities. Collaboration involves shared resources and capabilities, such as e-Services that can be dynamically composed to automate CN participants' business processes. Quality is essential for the success of business process automation. Current approaches mostly focus on quality of service (QoS)-based service selection and ranking algorithms, overlooking the process of service composition which requires interoperable, adaptable and secure e-Services to ensure seamless collaboration, data confidentiality and integrity. Lack of assessment of these quality attributes can result in e-Service composition failure. The quality of e-Service composition relies on the quality of each e-Service and on the quality of the composition process. Therefore, there is the need for a framework that addresses quality from both views: product and process. We propose a quality of e-Service composition (QoESC) framework for quality assessment of e-Service composition for CNs which comprises of a quality model for e-Service evaluation and guidelines for quality of e-Service composition process. We implemented a prototype considering a simplified telemedicine use case which involves a CN in e-Healthcare domain. To validate the proposed quality-driven framework, we analysed service composition reliability with and without using the proposed framework.

  7. Development of a dementia assessment quality database

    DEFF Research Database (Denmark)

    Johannsen, P.; Jørgensen, Kasper; Korner, A.

    2011-01-01

    database for dementia evaluation in the secondary health system. One volume and seven process quality indicators on dementia evaluations are monitored. Indicators include frequency of demented patients, percentage of patients evaluated within three months, whether the work-up included blood tests, Mini...

  8. Soil quality assessment in rice production systems

    NARCIS (Netherlands)

    Rodrigues de Lima, A.C.

    2007-01-01

    In the state of Rio Grande do Sul, Brazil, rice production is one of the most important regional activities. Farmers are concerned that the land use practices for rice production in the Camaquã region may not be sustainable because of detrimental effects on soil quality. The study presented in this

  9. Assessment of Quality Management Practices Within the Healthcare Industry

    Directory of Open Access Journals (Sweden)

    William J. Miller

    2009-01-01

    Full Text Available Problem Statement: Considerable effort has been devoted over the years by many organizations to adopt quality management practices, but few studies have assessed critical factors that affect quality practices in healthcare organizations. The problem addressed in this study was to assess the critical factors influencing the quality management practices in a single important industry (i.e., healthcare). Approach: A survey instrument was adapted from business quality literature and was sent to all hospitals in a large US Southeastern state. Valid responses were received from 147 of 189 hospitals yielding a 75.6% response rate. Factor analysis using principal component analysis with an orthogonal rotation was performed to assess 58 survey items designed to measure ten dimensions of hospital quality management practices. Results: Eight factors were shown to have a statistically significant effect on quality management practices and were classified into two groups: (1) four strategic factors (role of management leadership, role of the physician, customer focus, training resources investment) and (2) four operational factors (role of quality department, quality data/reporting, process management/training and employee relations). The results of this study showed that a valid and reliable instrument was developed and used to assess quality management practices in hospitals throughout a large US state. Conclusion: The implications of this study provided an understanding that management of quality required both a focus on longer-term strategic leadership, as well as day-to-day operational management. It was recommended that healthcare researchers and practitioners focus on the critical factors identified and employ this survey instrument to manage and better understand the nature of hospital quality management practices across wider geographical regions and over longer time periods. Furthermore, this study extended the scope of existing quality management
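
    The statistical core is factor extraction with an orthogonal rotation over the 58 survey items. A sketch using scikit-learn's FactorAnalysis with a varimax rotation follows; it stands in for the paper's principal-component extraction, and the matrix sizes and factor count are assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical survey matrix: 147 hospitals x 58 Likert-scale items
rng = np.random.default_rng(7)
items = rng.normal(size=(147, 58))

# Factor analysis with an orthogonal (varimax) rotation, standing in for
# the paper's principal component analysis with orthogonal rotation.
fa = FactorAnalysis(n_components=8, rotation="varimax", random_state=7)
fa.fit(items)
loadings = fa.components_.T              # (items x factors) loading matrix

# Items loading most heavily on the first factor (e.g. leadership items)
print(np.argsort(-np.abs(loadings[:, 0]))[:5])
```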

  10. Rating methodological quality: toward improved assessment and investigation.

    Science.gov (United States)

    Moyer, Anne; Finney, John W

    2005-01-01

    Assessing methodological quality is considered essential in deciding what investigations to include in research syntheses and in detecting potential sources of bias in meta-analytic results. Quality assessment is also useful in characterizing the strengths and limitations of the research in an area of study. Although numerous instruments to measure research quality have been developed, they have lacked empirically-supported components. In addition, different summary quality scales have yielded different findings when they were used to weight treatment effect estimates for the same body of research. Suggestions for developing improved quality instruments include: distinguishing distinct domains of quality, such as internal validity, external validity, the completeness of the study report, and adherence to ethical practices; focusing on individual aspects, rather than domains of quality; and focusing on empirically-verified criteria. Other ways to facilitate the constructive use of quality assessment are to improve and standardize the reporting of research investigations, so that the quality of studies can be more equitably and thoroughly compared, and to identify optimal methods for incorporating study quality ratings into meta-analyses.

  11. Exoatmospheric Kill Vehicle Quality Assurance and Reliability Assessment - Part A

    Science.gov (United States)

    2014-09-08

    the prime contractor Boeing. Our assessment resulted in two separate reports. Part A (Unclassified): Assess Raytheon conformity to Aerospace Standard (AS)9100C, “Quality Management Systems”. MDA conformed to the requirements of DoD Directive 7650.3; therefore, we do not require additional comments.

  12. Development of ambient air quality population-weighted metrics for use in time-series health studies.

    Science.gov (United States)

    Ivy, Diane; Mulholland, James A; Russell, Armistead G

    2008-05-01

    A robust methodology was developed to compute population-weighted daily measures of ambient air pollution for use in time-series studies of acute health effects. Ambient data, including criteria pollutants and four fine particulate matter (PM) components, from monitors located in the 20-county metropolitan Atlanta area over the time period of 1999-2004 were normalized, spatially resolved using inverse distance-square weighting to Census tracts, denormalized using descriptive spatial models, and population-weighted. Error associated with applying this procedure with fewer than the maximum number of observations was also calculated. In addition to providing more representative measures of ambient air pollution for the health study population than provided by a central monitor alone and dampening effects of measurement error and local source impacts, results were used to evaluate spatial variability and to identify air pollutants for which ambient concentrations are poorly characterized. The decrease in correlation of daily monitor observations with daily population-weighted average values with increasing distance of the monitor from the urban center was much greater for primary pollutants than for secondary pollutants. Of the criteria pollutant gases, sulfur dioxide observations were least representative because of the failure of ambient networks to capture the spatial variability of this pollutant for which concentrations are dominated by point source impacts. Daily fluctuations in PM of particles less than 10 µm in aerodynamic diameter (PM10) mass were less well characterized than PM of particles less than 2.5 µm in aerodynamic diameter (PM2.5) mass because of a smaller number of PM10 monitors with daily observations. Of the PM2.5 components, the carbon fractions were less well spatially characterized than sulfate and nitrate both because of primary emissions of elemental and organic carbon and because of differences in measurement techniques used to assess
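
    The spatial step lends itself to a compact sketch: interpolate daily monitor values to census tracts with inverse distance-square weights, then average across tracts weighted by population. The normalise/denormalise steps are omitted and all data below are invented.

```python
import numpy as np

def population_weighted_metric(monitor_xy, monitor_vals, tract_xy, tract_pop):
    """One day's population-weighted ambient concentration.

    Inverse distance-square interpolation of monitor values to tract
    centroids, then a population-weighted average over tracts.
    """
    d2 = ((tract_xy[:, None, :] - monitor_xy[None, :, :]) ** 2).sum(-1)
    w = 1.0 / np.maximum(d2, 1e-9)               # inverse distance-square
    tract_vals = (w * monitor_vals).sum(1) / w.sum(1)
    return np.average(tract_vals, weights=tract_pop)

# Hypothetical day: 5 monitors and 40 tracts in projected km coordinates
rng = np.random.default_rng(8)
monitors = rng.uniform(0, 50, size=(5, 2))
values = rng.uniform(5, 25, size=5)              # e.g. PM2.5 in ug/m3
tracts = rng.uniform(0, 50, size=(40, 2))
pop = rng.integers(500, 20000, size=40)
print(f"{population_weighted_metric(monitors, values, tracts, pop):.1f} ug/m3")
```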

  13. Acoustical Quality Assessment of the Classroom Environment

    CERN Document Server

    George, Marian

    2012-01-01

    Teaching is one of the most important factors affecting any education system. Many research efforts have been conducted to facilitate the presentation modes used by instructors in classrooms, as well as to provide means for students to review lectures through web browsers. Other studies have provided acoustical design recommendations for classrooms, such as room size and reverberation times. However, using the acoustical features of classrooms to provide education systems with feedback about the learning process was not thoroughly investigated in any of these studies. We propose a system that extracts different sound features of students and instructors, and then uses machine learning techniques to evaluate the acoustical quality of any learning environment. We infer conclusions about the students' satisfaction with the quality of lectures. Using classifiers instead of surveys and other subjective measures can facilitate and speed up such experiments, which enables us to perform them continuously...

  14. National Water-Quality Assessment (NAWQA) Area-Characterization Toolbox

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This is release 1.0 of the National Water-Quality Assessment (NAWQA) Area-Characterization Toolbox. These tools are designed to be accessed using ArcGIS Desktop...

  15. Water quality assessment of razorback sucker grow-out ponds

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — Water quality parameters had never been assessed in these grow-out ponds. Historically growth, condition, and survival of razorback suckers have been variable...

  16. National Impact Assessment of CMS Quality Measures Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The National Impact Assessment of the Centers for Medicare and Medicaid Services (CMS) Quality Measures Reports (Impact Reports) are mandated by section 3014(b), as...

  17. PIE Nacelle Flow Analysis and TCA Inlet Flow Quality Assessment

    Science.gov (United States)

    Shieh, C. F.; Arslan, Alan; Sundaran, P.; Kim, Suk; Won, Mark J.

    1999-01-01

    This presentation includes three topics: (1) Analysis of isolated boattail drag; (2) Computation of Technology Concept Airplane (TCA)-installed nacelle effects on aerodynamic performance; and (3) Assessment of TCA inlet flow quality.

  18. Assessment of Water Quality Conditions: Agassiz National Wildlife Refuge, 2012

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This is an assessment of water quality data collected from source water, discharge and within Agassiz Pool. In the summer of 2012, the U.S. Fish and Wildlife Service...

  19. Quality assessment of forest cutting with chainsaw

    Directory of Open Access Journals (Sweden)

    Octávio Barbosa Plaster

    2012-06-01

    Full Text Available This research evaluated the quality of forest harvesting with chainsaws on farms in the south of Espirito Santo state, Brazil, considering aspects of cut quality and the loss of wood left in the stumps. Plots of 250 m² were established to collect data on forest cutting with chainsaws, evaluating cut quality with respect to: presence of spikes; crack damage; stump heights outside the standard range; stumps without the directional notch; and the remaining stump height, in order to measure the loss of wood retained in the stumps. The main results were: spikes were present in 21.9% of the stumps, cracks in 17.2%, non-standard stump heights in 44.6%, and missing directional notches in 34.5% of the evaluations. To check the influence of the directional notch on stump height, a t-test at 5% probability showed that stump heights were greater where the cut was made without the directional notch. The amount of wood retained in the stumps above the recommended maximum was, on average, 2.43 m³.ha-1, representing a loss of R$ 172.53 ha-1. The loss of timber remaining in eucalyptus stumps was higher where the directional notch was not made before felling. The items evaluated showed uneven quality, indicating the need to improve cutting with chainsaws.

  20. Using big data for quality assessment in oncology.

    Science.gov (United States)

    Broughman, James R; Chen, Ronald C

    2016-05-01

    There is increasing attention in the US healthcare system on the delivery of high-quality care, an issue central to oncology. In the report 'Crossing the Quality Chasm', the Institute of Medicine identified six aims for improving healthcare quality: safe, effective, patient-centered, timely, efficient and equitable. This article describes how current big data resources can be used to assess these six dimensions, and provides examples of published studies in oncology. Strengths and limitations of current big data resources for the evaluation of quality of care are also discussed. Finally, this article outlines a vision where big data can be used not only to retrospectively assess the quality of oncologic care, but help physicians deliver high-quality care in real time.

  1. Assessing the quality of a student-generated question repository

    CERN Document Server

    Bates, Simon P; Homer, Danny; Riise, Jonathan

    2013-01-01

    We present results from a study that categorizes and assesses the quality of questions and explanations authored by students, in question repositories produced as part of the summative assessment in introductory physics courses over the past two years. Mapping question quality onto the levels in the cognitive domain of Bloom's taxonomy, we find that students produce questions of high quality. More than three-quarters of questions fall into categories beyond simple recall, in contrast to similar studies of student-authored content in different subject domains. Similarly, the quality of student-authored explanations for questions was also high, with approximately 60% of all explanations classified as being of high or outstanding quality. Overall, 75% of questions met combined quality criteria, which we hypothesize is due in part to the in-class scaffolding activities that we provided for students ahead of requiring them to author questions.

  2. Impact Factor and other metrics for evaluating science: essentials for public health practitioners.

    Directory of Open Access Journals (Sweden)

    Angelo G. Solimini

    2011-03-01

    Full Text Available Abstract: The quality of scientific evidence is doubly tied to the quality of all the research activities that generate it (including the “value” of the scientists involved) and is usually, but not always, reflected in the reporting quality of the scientific publication(s). Public health practitioners, whether at the research, academic or management level, should be aware of the current metrics used to assess the quality and value of journals, single publications, research projects, individual scientists or entire research groups. However, this task is complicated by a vast variety of different metrics and assessment methods. Here we briefly review the most widely used metrics, highlighting the pros and cons of each. The rigid application of quantitative metrics to judge the quality of a journal, a single publication or a researcher suffers from many problems and is open to reasonable criticism. A sensible way forward is probably qualitative assessment founded on the indications coming from a few robust quantitative metrics.

  3. Assessing the link between coastal urbanization and the quality of nekton habitat in mangrove tidal tributaries

    Science.gov (United States)

    Krebs, Justin M.; Bell, Susan S.; McIvor, Carole C.

    2014-01-01

    To assess the potential influence of coastal development on habitat quality for estuarine nekton, we characterized body condition and reproduction for common nekton from tidal tributaries classified as undeveloped, industrial, urban or man-made (i.e., mosquito-control ditches). We then evaluated these metrics of nekton performance, along with several abundance-based metrics and community structure from a companion paper (Krebs et al. 2013) to determine which metrics best reflected variation in land-use and in-stream habitat among tributaries. Body condition was not significantly different among undeveloped, industrial, and man-made tidal tributaries for six of nine taxa; however, three of those taxa were in significantly better condition in urban compared to undeveloped tributaries. Palaemonetes shrimp were the only taxon in significantly poorer condition in urban tributaries. For Poecilia latipinna, there was no difference in body condition (length–weight) between undeveloped and urban tributaries, but energetic condition was significantly better in urban tributaries. Reproductive output was reduced for both P. latipinna (i.e., fecundity) and grass shrimp (i.e., very low densities, few ovigerous females) in urban tributaries; however a tradeoff between fecundity and offspring size confounded meaningful interpretation of reproduction among land-use classes for P. latipinna. Reproductive allotment by P. latipinna did not differ significantly among land-use classes. Canonical correspondence analysis differentiated urban and non-urban tributaries based on greater impervious surface, less natural mangrove shoreline, higher frequency of hypoxia and lower, more variable salinities in urban tributaries. These characteristics explained 36 % of the variation in nekton performance, including high densities of poeciliid fishes, greater energetic condition of sailfin mollies, and low densities of several common nekton and economically important taxa from urban tributaries

  4. Quality Assessment of Library Website of Iranian State Universities:

    OpenAIRE

    Farideh Osareh; Zeinab Papi

    2008-01-01

    The present study carries out a quality assessment of the library websites of Iranian state universities in order to rank them accordingly. The evaluation tool used is the normalized Web Quality Evaluation Tools (WQET). Forty-one active library websites were studied and assessed qualitatively over two time periods (February 2006 and May 2006) using the WQET. Data were collected by direct observation of the websites. The evaluation was based on user characteristics, website purpose, upload speed, structural st...

  5. A unifying process capability metric

    Directory of Open Access Journals (Sweden)

    John Jay Flaig

    2009-07-01

    Full Text Available A new economic approach to process capability assessment is presented, which differs from the commonly used engineering metrics. The proposed metric consists of two economic capability measures – the expected profit and the variation in profit of the process. This dual economic metric offers a number of significant advantages over other engineering or economic metrics used in process capability analysis. First, it is easy to understand and communicate. Second, it is based on a measure of total system performance. Third, it unifies the fraction nonconforming approach and the expected loss approach. Fourth, it reflects the underlying interest of management in knowing the expected financial performance of a process and its potential variation.
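
    The dual economic metric lends itself to a brief numerical illustration. The sketch below estimates the expected profit and the profit variation of a process by Monte Carlo, assuming a simple profit model (a fixed profit per conforming unit and a fixed cost per nonconforming unit) that is not taken from the paper.

```python
import numpy as np

def economic_capability(samples, lsl, usl, unit_profit, scrap_cost):
    """Expected profit and profit variation for a process, given lower/upper
    spec limits. Hedged sketch; the paper's exact profit model may differ."""
    profit = np.where((samples >= lsl) & (samples <= usl), unit_profit, -scrap_cost)
    return profit.mean(), profit.std()

rng = np.random.default_rng(0)
x = rng.normal(10.0, 0.5, 100_000)   # simulated process output
print(economic_capability(x, lsl=9.0, usl=11.0, unit_profit=5.0, scrap_cost=12.0))
```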

  6. Guidance on Data Quality Assessment for Life Cycle Inventory Data

    Science.gov (United States)

    Data quality within Life Cycle Assessment (LCA) is a significant issue for the future support and development of LCA as a decision support tool and its wider adoption within industry. In response to current data quality standards such as the ISO 14000 series, various entities wit...

  7. Assessing Educational Processes Using Total-Quality-Management Measurement Tools.

    Science.gov (United States)

    Macchia, Peter, Jr.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) assessment tools in educational settings highlights and gives examples of fishbone diagrams, or cause and effect charts; Pareto diagrams; control charts; histograms and check sheets; scatter diagrams; and flowcharts. Variation and quality are discussed in terms of continuous process…

  8. The Wheel of Competency Assessment: Presenting Quality Criteria for Competency Assessment Programs

    NARCIS (Netherlands)

    Baartman, Liesbeth; Bastiaens, Theo; Kirschner, Paul A.; Van der Vleuten, Cees

    2009-01-01

    Baartman, L. K. J., Bastiaens, T. J., Kirschner, P. A., & Van der Vleuten, C. P. M. (2006). The wheel of competency assessment: Presenting quality criteria for Competency Assessment Programmes. Studies in Educational Evaluation, 32, 153-170.

  9. Quality of life assessment in dogs and cats receiving chemotherapy

    DEFF Research Database (Denmark)

    Vøls, Kåre K.; Heden, Martin A.; Kristensen, Annemarie Thuri

    2016-01-01

    This study aimed to review currently reported methods of assessing the effects of chemotherapy on the quality of life (QoL) of canine and feline patients, and to explore novel ways to assess QoL in such patients in the light of the experience to date in human pediatric oncology. A qualitative comparative analysis of published papers on the effects of chemotherapy on QoL in dogs and cats was conducted. This was supplemented with a comparison of the parameters and domains used in veterinary QoL assessments with those used in the Pediatric Quality of Life Inventory (PedsQL™) questionnaire designed to assess QoL in toddlers. Each of the identified publications including QoL assessment in dogs and cats receiving chemotherapy applied a different method of QoL assessment. In addition, the veterinary QoL assessments were mainly focused on physical clinical parameters, whereas the emotional (6/11), social...

  10. A new embedding quality assessment method for manifold learning

    CERN Document Server

    Zhang, Peng; Zhang, Bo

    2011-01-01

    Manifold learning is a hot research topic in the field of computer science. A crucial issue with current manifold learning methods is that they lack a natural quantitative measure to assess the quality of learned embeddings, which greatly limits their applications to real-world problems. In this paper, a new embedding quality assessment method for manifold learning, named Normalization Independent Embedding Quality Assessment (NIEQA), is proposed. Compared with current assessment methods, which are limited to isometric embeddings, the NIEQA method has a much larger application range due to two features. First, it is based on a new measure which can effectively evaluate how well local neighborhood geometry is preserved under normalization; hence it can be applied to both isometric and normalized embeddings. Second, it can provide both local and global evaluations to output an overall assessment. Therefore, NIEQA can serve as a natural tool in model selection and evaluation tasks for manifold learning. Experi...
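
    The abstract does not reproduce NIEQA's measure itself; as a point of reference, the sketch below computes a common local embedding-quality score in the same spirit: the average overlap between each point's k nearest neighbors in the original space and in the embedding (using scikit-learn; illustrative only).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_preservation(X_high, X_low, k=10):
    """Average k-nearest-neighbor overlap between the original data X_high and
    its embedding X_low; 1.0 means local neighborhoods are fully preserved.
    Not the NIEQA measure, just a simple local score in the same spirit."""
    idx_high = NearestNeighbors(n_neighbors=k + 1).fit(X_high) \
        .kneighbors(X_high, return_distance=False)[:, 1:]   # drop self-neighbor
    idx_low = NearestNeighbors(n_neighbors=k + 1).fit(X_low) \
        .kneighbors(X_low, return_distance=False)[:, 1:]
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(idx_high, idx_low)]
    return float(np.mean(overlaps))
```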

  11. An image quality metric based on bionic models

    Institute of Scientific and Technical Information of China (English)

    侯伟龙; 高新波; 何立火; 高飞

    2011-01-01

    Objective image quality assessment is an important branch of image processing: its indices can serve as measures or criteria for calibrating image processing systems, and for optimizing image processing algorithms and selecting their parameters. Since the human eye is the ultimate receptor of images, and visual attention plays a very important role when humans view an image, a new bionic image quality assessment (IQA) algorithm based on the visual attention mechanism is proposed. Following the principles by which visual attention is formed, a Gaussian pyramid decomposition splits the image into different spatial scales, simulating the multi-channel property of the human visual system (HVS). A contrast sensitivity function is applied to each spatial scale as a perceptual filter, and image features are then extracted using the center-surround receptive field and lateral inhibition mechanisms of the HVS; these features capture the perceptual differences caused by image degradation. Experimental results demonstrate that the method reflects subjective human judgments of image quality accurately, has low computational complexity, and outperforms comparable assessment algorithms.
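
    A minimal sketch of the multi-scale, center-surround machinery the abstract describes could look as follows. This is not the authors' model: the difference-of-Gaussians stands in for the center-surround receptive field, and the scales, sigmas and pooling rule are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(img, center_sigma=1.0, surround_sigma=4.0):
    # difference-of-Gaussians as a stand-in for the center-surround operation
    return gaussian_filter(img, center_sigma) - gaussian_filter(img, surround_sigma)

def bionic_quality_score(ref, dist, n_scales=4):
    """Pool mean absolute differences of center-surround feature maps across a
    crude Gaussian-pyramid-style set of scales (lower score = better quality)."""
    score = 0.0
    for _ in range(n_scales):
        score += np.abs(center_surround(ref) - center_surround(dist)).mean()
        ref, dist = ref[::2, ::2], dist[::2, ::2]  # crude dyadic downsampling
    return score / n_scales
```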

  12. Quality assessment of a placental perfusion protocol

    DEFF Research Database (Denmark)

    Mathiesen, Line; Mose, Tina; Mørck, Thit Juul;

    2010-01-01

    the placental perfusion model in Copenhagen, including control substances. The positive control substance antipyrine shows no difference in transport regardless of the perfusion media used or the terms of delivery (n=59, p…). FITC-marked dextran transfers correspond with the leakage criteria (… ml h-1 from the fetal reservoir) when adding 2 (n=7) and 20 mg (n=9) FITC-dextran/100 ml fetal perfusion media. The success rate of the Copenhagen placental perfusions is provided in this study, including considerations and quality control parameters. Three checkpoints are suggested to determine success rate...

  13. Can We Go Beyond Burned Area in the Assessment of Global Remote Sensing Products with Fire Patch Metrics?

    Directory of Open Access Journals (Sweden)

    Joana M. P. Nogueira

    2016-12-01

    Full Text Available Global burned area (BA) datasets from satellite Earth observations provide information for carbon emission and for Dynamic Global Vegetation Model (DGVM) benchmarking. Fire patch identification from pixel-level information recently emerged as an additional way of providing informative features about fire regimes through the analysis of patch size distribution. We evaluated the ability of global BA products to accurately represent morphological features of fire patches in the fire-prone Brazilian savannas. We used the pixel-level burned area from LANDSAT images, as well as two global products: MODIS MCD45A1 and the European Space Agency (ESA) fire Climate Change Initiative (FIRE_CCI) product, for the 2002–2009 time period. Individual fire patches were compared by linear regressions to test the consistency of global products as a source of burned patch shape information. Despite commission and omission errors respectively reaching 0.74 and 0.81 for ESA FIRE_CCI and 0.64 and 0.62 for MCD45A1 when compared to LANDSAT, due to missing small fires, correlations between patch areas showed R2 > 0.6 for all comparisons, with a slope of 0.99 between ESA FIRE_CCI and MCD45A1 but a lower slope (0.6–0.8) when compared to the LANDSAT data. Shape complexity between global products was less correlated (R2 = 0.5), with lower values (R2 = 0.2) between global products and LANDSAT data, due to their coarser resolution. For the morphological features of the ellipse fitted over fire patches, R2 reached 0.6 for the ellipse's eccentricity and varied from 0.4 to 0.8 for its azimuthal directional angle. We conclude that global BA products underestimate total BA as they miss small fires, but they also underestimate burned patch areas. Patch complexity is the least correlated variable, but ellipse features appear to provide information to be further used for quality product assessment, global pyrogeography or DGVM benchmarking.

  14. Image Quality Assessment Based on Local Linear Information and Distortion-Specific Compensation.

    Science.gov (United States)

    Wang, Hanli; Fu, Jie; Lin, Weisi; Hu, Sudeng; Kuo, C-C Jay; Zuo, Lingxuan

    2016-12-14

    Image Quality Assessment (IQA) is a fundamental yet constantly developing task for computer vision and image processing. Most IQA evaluation mechanisms are based on the pertinence of subjective and objective estimation. Each image distortion type has its own property correlated with human perception. However, this intrinsic property may not be fully exploited by existing IQA methods. In this paper, we make two main contributions to the IQA field. First, a novel IQA method is developed based on a local linear model that examines the distortion between the reference and the distorted images for better alignment with human visual experience. Second, a distortion-specific compensation strategy is proposed to offset the negative effect on IQA modeling caused by different image distortion types. These score offsets are learned from several known distortion types. Furthermore, for an image with an unknown distortion type, a Convolutional Neural Network (CNN) based method is proposed to compute the score offset automatically. Finally, an integrated IQA metric is proposed by combining the aforementioned two ideas. Extensive experiments are performed to verify the proposed IQA metric, which demonstrate that the local linear model is useful in human perception modeling, especially for individual image distortion, and the overall IQA method outperforms several state-of-the-art IQA approaches.

  15. A Trustability Metric for Code Search based on Developer Karma

    CERN Document Server

    Gysin, Florian S

    2010-01-01

    The promise of search-driven development is that developers will save time and resources by reusing external code in their local projects. To efficiently integrate this code, users must be able to trust it, thus trustability of code search results is just as important as their relevance. In this paper, we introduce a trustability metric to help users assess the quality of code search results and therefore ease the cost-benefit analysis they undertake trying to find suitable integration candidates. The proposed trustability metric incorporates both user votes and cross-project activity of developers to calculate a "karma" value for each developer. Through the karma value of all its developers a project is ranked on a trustability scale. We present JBender, a proof-of-concept code search engine which implements our trustability metric and we discuss preliminary results from an evaluation of the prototype.
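
    The abstract does not give the karma formula, so the toy sketch below shows one plausible way user votes and cross-project activity could be combined into a per-developer karma and aggregated into a project-level trustability score; the additive combination is an assumption, not JBender's actual formula.

```python
def developer_karma(net_votes, projects_contributed):
    """Hypothetical karma: non-negative vote score plus breadth of
    cross-project activity (not JBender's actual formula)."""
    return max(net_votes, 0) + len(set(projects_contributed))

def project_trustability(developers):
    """Rank a project by the mean karma of its developers."""
    karma = [developer_karma(v, p) for v, p in developers]
    return sum(karma) / len(karma)

# toy usage: (net user votes, projects contributed to) per developer
print(project_trustability([(12, ["a", "b", "c"]), (3, ["a"]), (7, ["a", "d"])]))
```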

  16. Metrics for Evaluating the Accuracy of Solar Power Forecasting: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, J.; Hodge, B. M.; Florita, A.; Lu, S.; Hamann, H. F.; Banunarayanan, V.

    2013-10-01

    Forecasting solar energy generation is a challenging task due to the variety of solar power systems and weather regimes encountered. Forecast inaccuracies can result in substantial economic losses and power system reliability issues. This paper presents a suite of generally applicable and value-based metrics for solar forecasting for a comprehensive set of scenarios (i.e., different time horizons, geographic locations, applications, etc.). In addition, a comprehensive framework is developed to analyze the sensitivity of the proposed metrics to three types of solar forecasting improvements using a design of experiments methodology, in conjunction with response surface and sensitivity analysis methods. The results show that the developed metrics can efficiently evaluate the quality of solar forecasts, and assess the economic and reliability impact of improved solar forecasting.
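
    For context, a few of the generally applicable accuracy metrics such a suite typically includes can be computed as below; this sketch does not reproduce the paper's full metric set, and the normalization by plant capacity is one common convention among several.

```python
import numpy as np

def solar_forecast_metrics(forecast, actual, capacity):
    """Common solar-forecast accuracy metrics (illustrative subset)."""
    err = forecast - actual
    rmse = np.sqrt((err ** 2).mean())
    return {
        "MBE": err.mean(),                 # mean bias error
        "MAE": np.abs(err).mean(),         # mean absolute error
        "RMSE": rmse,                      # root mean square error
        "nRMSE_%": 100 * rmse / capacity,  # RMSE normalized by capacity
    }

forecast = np.array([0.0, 120.0, 480.0, 510.0, 150.0])  # kW, hypothetical
actual = np.array([0.0, 100.0, 500.0, 490.0, 170.0])
print(solar_forecast_metrics(forecast, actual, capacity=600.0))
```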

  17. Random Kaehler metrics

    Energy Technology Data Exchange (ETDEWEB)

    Ferrari, Frank, E-mail: frank.ferrari@ulb.ac.be [Service de Physique Theorique et Mathematique, Universite Libre de Bruxelles and International Solvay Institutes, Campus de la Plaine, CP 231, 1050 Bruxelles (Belgium); Klevtsov, Semyon, E-mail: semyon.klevtsov@ulb.ac.be [Service de Physique Theorique et Mathematique, Universite Libre de Bruxelles and International Solvay Institutes, Campus de la Plaine, CP 231, 1050 Bruxelles (Belgium); ITEP, B. Cheremushkinskaya 25, Moscow 117218 (Russian Federation); Zelditch, Steve, E-mail: zelditch@math.northwestern.edu [Department of Mathematics, Northwestern University, Evanston, IL 60208 (United States)

    2013-04-01

    The purpose of this article is to propose a new method to define and calculate path integrals over metrics on a Kaehler manifold. The main idea is to use finite dimensional spaces of Bergman metrics, as an approximation to the full space of Kaehler metrics. We use the theory of large deviations to decide when a sequence of probability measures on the spaces of Bergman metrics tends to a limit measure on the space of all Kaehler metrics. Several examples are considered.

  18. Quantifying landscape pattern and assessing the land cover changes in Piatra Craiului National Park and Bucegi Natural Park, Romania, using satellite imagery and landscape metrics.

    Science.gov (United States)

    Vorovencii, Iosif

    2015-11-01

    Protected areas of Romania have enjoyed particular importance after 1989, but, at the same time, they were subject to different anthropogenic and natural pressures which resulted in the occurrence of land cover changes. These changes have generally led to landscape degradation inside and at the borders of the protected areas. In this article, 12 landscape metrics were used in order to quantify landscape pattern and assess land cover changes in two protected areas, Piatra Craiului National Park (PCNP) and Bucegi Natural Park (BNP). The landscape metrics were obtained from land cover maps derived from Landsat Thematic Mapper (TM) and Landsat Enhanced Thematic Mapper Plus (ETM+) images from 1987, 1993, 2000, 2009 and 2010. Three land cover classes were analysed in PCNP and five land cover map classes in BNP. The results show a landscape fragmentation trend for both parks, affecting different types of land covers. Between 1987 and 2010, in PCNP fragmentation was, in principle, the result not only of anthropogenic activities such as forest cuttings and illegal logging but also of natural causes. In BNP, between 1987 and 2009, the fragmentation affected the pasture which resulted in the occurrence of bare land and rocky areas because of the erosion on the Bucegi Plateau.

  19. Web metrics for library and information professionals

    CERN Document Server

    Stuart, David

    2014-01-01

    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional.Th...

  20. A new assessment method for image fusion quality

    Science.gov (United States)

    Li, Liu; Jiang, Wanying; Li, Jing; Yuchi, Ming; Ding, Mingyue; Zhang, Xuming

    2013-03-01

    Image fusion quality assessment plays a critically important role in the field of medical imaging. Many assessment methods have been proposed to evaluate image fusion quality effectively, including mutual information (MI), root mean square error (RMSE), and the universal image quality index (UIQI). These methods, however, do not reflect human visual inspection effectively. To address this problem, we propose a novel image fusion assessment method that combines the nonsubsampled contourlet transform (NSCT) with regional mutual information. In the proposed method, the source medical images are first decomposed into different levels by the NSCT. Then the maximum NSCT coefficients of the decomposed directional images at each level are used to compute the regional mutual information (RMI). Finally, multi-channel RMI is computed as the weighted sum of the RMI values obtained at the various levels of the NSCT. The advantage of the proposed method lies in the fact that the NSCT represents image information at multiple directions and scales and therefore conforms to the multi-channel characteristic of the human visual system, leading to outstanding assessment performance. Experimental results using CT and MRI images demonstrate that the proposed method outperforms MI- and UIQI-based measures in evaluating image fusion quality and provides results consistent with human visual assessment.
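
    The mutual-information building block behind the proposed regional measure can be sketched as follows; the NSCT decomposition and the per-level weighting are omitted, and the histogram bin count is an assumption.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two image regions
    (building block only; not the paper's full multi-channel RMI)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                   # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginals
    nz = pxy > 0                                # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())
```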

  1. Assessment of Consumers' Satisfaction with the Automotive Product Quality

    Science.gov (United States)

    Amineh, Hadi; Kosach, Nataliya

    2016-01-01

    The relevance of this article stems from the fact that customer satisfaction currently serves as the mechanism that allows carmakers to remain competitive in the market. The paper describes issues in assessing the quality of products manufactured by automobile companies. The assessment is based on widely applicable complex characteristics of the…

  2. Ultrasonic assessment of oil quality during frying.

    Science.gov (United States)

    Benedito, Jose; Mulet, Antonio; Velasco, J; Dobarganes, M Carmen

    2002-07-31

    In this paper, changes in the ultrasonic properties of virgin olive oil during thermoxidation were studied. Samples of virgin olive oil were heated at 200 degrees C over periods ranging from 2 to 16 h. Oil degradation was characterized by means of physical and chemical changes, i.e., viscosity, color, polar compounds, polymers, and polar fatty acids. Ultrasonic measurements were carried out while the oil sample was cooled from 35 to 25 degrees C. It was found that velocity and attenuation measurements were related to viscosity through a classical equation for viscous liquids. The ultrasonic measurements were also related to the percentages of polar compounds and polymers, which shows the feasibility of using ultrasonic properties to monitor oil quality. Nevertheless, because the ultrasonic measurements are temperature dependent, this variable must be controlled in order to obtain repeatable and reliable measurements.

  3. Assessment of daylight quality in simple rooms

    DEFF Research Database (Denmark)

    Johnsen, Kjeld; Dubois, Marie-Claude; Sørensen, Karl Grau

    the windows). A number of light indicators allowed understanding and describing the geometry of daylight in the space in a very detailed and thorough manner. The inclusion of the daylight factor, horizontal illuminance, luminance distribution, cylindrical illuminance, the Daylight Glare Index, vertical-to-horizontal illuminance ratio, as well as scale of shadow, gave valuable information allowing a detailed description of the three-dimensional geometry of daylight in the space. It should be mentioned, however, that there is no universal definition of light quality. The approach of this study was to analyse differences in daylighting conditions for a number of lighting parameters. The results gave clear indications of, for instance, which room would be the brightest, under which conditions glare might be a problem, and which type of window would yield the greatest luminous variation (or visual interest), etc.

  4. EPIDEMIOLOGY FOR 'QUALITY ASSESSMENT' OF ORAL AND DENTAL HEALTH SERVICES

    Directory of Open Access Journals (Sweden)

    Zaura Anggraeni Matram

    2015-08-01

    Full Text Available The need for quality assessment and assurance in health and oral health has become an issue of major concern in Indonesia, particularly in relation to the significant decrease in available resources due to the persistent economic crisis. Financial and socioeconomic impacts have led to the need for low-cost, high-quality, accessible oral care. Dentists are ultimately responsible for the quality of care performed in Public Health Centers (Puskesmas), especially for school and community dental programmes, which are often performed by various types of health manpower such as dental nurses and cadres (volunteers). In this paper, emphasis is placed on two epidemiological models to assess the quality of service outcomes, as well as management control for quality assessment in the School Dental Programme. The epidemiological models were developed for assessing the effectiveness of oral health education and simple oral prophylaxis carried out in the School Dental Programme (known as UKGS). With these epidemiological approaches, it is hoped that dentists will gain increased appreciation for qualitative assessment of the quality of care, instead of just quantitatively meeting the targets that many health administrations use to indicate success.

  5. Measuring data quality for ongoing improvement a data quality assessment framework

    CERN Document Server

    Sebastian-Coleman, Laura

    2013-01-01

    The Data Quality Assessment Framework shows you how to measure and monitor data quality, ensuring quality over time. You'll start with general concepts of measurement and work your way through a detailed framework of more than three dozen measurement types related to five objective dimensions of quality: completeness, timeliness, consistency, validity, and integrity. Ongoing measurement, rather than one-time activities, will help your organization reach a new level of data quality. This plain-language approach to measuring data can be understood by both business and IT and provides pra

  6. ASSESSMENT OF QUALITY OF LIFE IN CANCER PATIENTS

    Directory of Open Access Journals (Sweden)

    Fereshteh Farzianpour

    2014-01-01

    Full Text Available Standards of the Joint Commission International (JCI) emphasize organizational performance in basic functional domains, including patient rights, patient care, medical safety and infection control. These standards focus on two principles: expectations of actual organizational performance, and assessment of the organization's capability to provide high-quality and safe Health Care Services (HCS). The aim of this study was to analyze a regression model of the Quality of Life (QOL) of cancer patients from Mazandaran province in 2013. This descriptive cross-sectional study was carried out on 185 cases referred to the Rajaee Chemotherapy Center in 2013, within the first three months after a chemotherapy treatment session. Sampling was purposive. General quality of life was assessed using the WHO questionnaire (WHOQOL-BREF) and specific quality of life was assessed using a researcher-developed questionnaire. Data analysis consisted of a multiple regression method; for comparison, the one-sample Kolmogorov-Smirnov test was used. Statistical analysis showed that the averages of general quality of life, specific quality of life and the total score were evaluated as 1<0.96<5, 1<1.13<5 and 1<1.04<5, respectively. Given the low general and specific quality of life, full integration of patient care programs into the primary health care system, easy access, and facilitation of interventions to improve quality of life are recommended. The motivation behind the research and its implications concern improving the QOL of cancer patients.

  7. An information theoretic approach for privacy metrics

    Directory of Open Access Journals (Sweden)

    Michele Bezzi

    2010-12-01

    Full Text Available Organizations often need to release microdata without revealing sensitive information. To this end, data are anonymized and, to assess the quality of the process, various privacy metrics have been proposed, such as k-anonymity, l-diversity, and t-closeness. These metrics are able to capture different aspects of the disclosure risk, imposing minimal requirements on the association of an individual with the sensitive attributes. If we want to combine them in an optimization problem, we need a common framework able to express all these privacy conditions. Previous studies proposed the notion of mutual information to measure the different kinds of disclosure risks and the utility, but, since mutual information is an average quantity, it is not able to completely express these conditions on single records. We introduce here the notion of one-symbol information (i.e., the contribution to mutual information by a single record), which allows us to express and compare the disclosure risk metrics. In addition, we obtain a relation between the risk values t and l, which can be used for parameter setting. We also show, by numerical experiments, how l-diversity and t-closeness can be represented in terms of two different, but equally acceptable, conditions on the information gain.
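
    Assuming the standard information-theoretic definitions, the one-symbol (pointwise) information of a record and its relation to mutual information can be written as below; the paper's exact notation may differ.

```latex
% One-symbol information of a record (x, y): the contribution of that single
% record to mutual information, which is recovered as its expectation.
\[
  i(x;y) \;=\; \log \frac{p(x \mid y)}{p(x)},
  \qquad
  I(X;Y) \;=\; \sum_{x,y} p(x,y)\, i(x;y).
\]
```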

  8. AN ASSESSMENT AND OPTIMIZATION OF QUALITY OF STRATEGY PROCESS

    Directory of Open Access Journals (Sweden)

    Snezana Nestic

    2013-12-01

    Full Text Available In order to improve the quality of their processes, companies usually rely on quality management systems and the requirements of ISO 9001:2008. Small and medium-sized companies face a series of challenges in the objectification, evaluation and assessment of process quality. In this paper, the strategy process is decomposed for a typical medium-sized manufacturing company, and indicators for the defined subprocesses are developed based on the requirements of ISO 9001:2008. The weights of the subprocesses are calculated using a fuzzy set approach. Finally, a solution based on a genetic algorithm is presented and tested on data from 142 manufacturing companies. The presented solution enables assessment of the quality of a strategy process, ranks the indicators and provides a basis for successful improvement of the quality of the strategy process.

  9. An innovative road marking quality assessment mechanism using computer vision

    Directory of Open Access Journals (Sweden)

    Kuo-Liang Lin

    2016-06-01

    Full Text Available Aesthetic quality acceptance of road marking works has relied on subjective visual examination. Due to a lack of quantitative operating procedures, acceptance outcomes can be biased, resulting in great quality variation. To improve the aesthetic quality acceptance procedure for road marking, we develop an innovative road marking quality assessment mechanism utilizing machine vision technologies. Using edge smoothness as a quantitative aesthetic indicator, the proposed prototype system first receives digital images of the finished road marking surface, then processes and analyzes the images to capture the geometric characteristics of the marking. The geometric characteristics are then evaluated to determine the quality level of the finished work. The system is demonstrated through two real cases to show how it works. Finally, a test comparing the assessment results of the proposed system with expert inspection is conducted to strengthen the accountability of the proposed mechanism.
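
    The authors' exact edge-smoothness formulation is not given in the abstract; one plausible indicator, sketched below with OpenCV (assuming OpenCV 4), compares the raw boundary length of the extracted marking with the length of a simplified polygon, since a ragged edge carries excess perimeter.

```python
import cv2
import numpy as np

def edge_smoothness(marking_mask):
    """Hypothetical smoothness score in (0, 1]: ratio of the simplified-polygon
    perimeter to the raw contour perimeter (1.0 = perfectly smooth edge)."""
    contours, _ = cv2.findContours(marking_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    c = max(contours, key=cv2.contourArea)          # largest marking region
    raw = cv2.arcLength(c, True)                    # raw boundary length
    approx = cv2.approxPolyDP(c, 0.01 * raw, True)  # simplified outline
    return cv2.arcLength(approx, True) / raw
```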

  10. A priori discretization error metrics for distributed hydrologic modeling applications

    Science.gov (United States)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under

  11. Reliability of medical audit in quality assessment of medical care

    Directory of Open Access Journals (Sweden)

    Camacho Luiz Antonio Bastos

    1996-01-01

    Full Text Available Medical audit of hospital records has been a major component of quality of care assessment, although physician judgment is known to have low reliability. We estimated interrater agreement of quality assessment in a sample of patients with cardiac conditions admitted to an American teaching hospital. Physician-reviewers used structured review methods designed to improve quality assessment based on judgment. Chance-corrected agreement for the items considered more relevant to process and outcome of care ranged from low to moderate (0.2 to 0.6, depending on the review item and the principal diagnoses and procedures the patients underwent. Results from several studies seem to converge on this point. Comparisons among different settings should be made with caution, given the sensitivity of agreement measurements to prevalence rates. Reliability of review methods in their current stage could be improved by combining the assessment of two or more reviewers, and by emphasizing outcome-oriented events.

  12. Applying Sigma Metrics to Reduce Outliers.

    Science.gov (United States)

    Litten, Joseph

    2017-03-01

    Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods.
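
    The sigma metric referred to here is conventionally computed from the allowable total error, the bias, and the imprecision (CV) of the method, all expressed in percent; a minimal sketch:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Laboratory sigma metric: (allowable total error - |bias|) / CV."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# toy usage: TEa 10%, bias 1.5%, CV 1.6%  ->  sigma ~5.3, i.e. minimal QC rules
print(sigma_metric(10.0, 1.5, 1.6))
```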

  13. Assessment on reliability of water quality in water distribution systems

    Institute of Scientific and Technical Information of China (English)

    伍悦滨; 田海; 王龙岩

    2004-01-01

    Water leaving the treatment works is usually of high quality, but its properties change during transportation. With increasing awareness of the quality of service provided within the water industry today, assessing the reliability of water quality in a distribution system has become of major significance for decisions on system operation based on water quality in distribution networks. Using a water age model, a chlorine decay model and a model of acceptable maximum water age together makes it possible to assess the reliability of water quality in a distribution system. First, the nodal water age values in a complex distribution system are calculated by the water age model. Then, the acceptable maximum water age value in the distribution system is obtained from the chlorine decay model. Nodes at which the water age values are below the maximum are regarded as reliable nodes. Finally, a reliability index, expressed as the percentile weighted by the nodal demands, reflects the reliability of water quality in the distribution system. The approach has been applied to a real water distribution network. A contour plot based on the water age values defines a surface of water quality reliability. At any time, this surface can be used to locate high-water-age, poor-reliability areas, which identify parts of the network that may have poor water quality. As a result, the water age contour provides a valuable aid for straightforward insight into the water quality in the distribution system.
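
    The demand-weighted reliability index described above reduces to a short computation. The sketch below is illustrative, with hypothetical node values, and assumes the acceptable maximum water age has already been derived from the chlorine decay model.

```python
import numpy as np

def water_quality_reliability(node_age, node_demand, max_age):
    """Fraction of total nodal demand served at nodes whose water age is
    at or below the acceptable maximum (demand-weighted reliability)."""
    node_age = np.asarray(node_age, dtype=float)
    node_demand = np.asarray(node_demand, dtype=float)
    return node_demand[node_age <= max_age].sum() / node_demand.sum()

# toy usage: water ages (h) and demands (L/s) at five nodes, 24 h acceptable max
print(water_quality_reliability([6, 12, 30, 22, 40], [10, 8, 5, 12, 3], 24))
```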

  14. Preliminary quality assessment of bovine colostrum

    Directory of Open Access Journals (Sweden)

    Alessandro Taranto

    2013-02-01

    Full Text Available Data on bovine colostrum quality are scarce or absent, although Commission Regulations No 1662/2006 and No 1663/2006 include colostrum in the context of chapters on milk. Thus, the aim of the present work is to study some physical, chemical, hygiene and safety quality parameters of bovine colostrum samples collected from Sicily and Calabria dairy herds. Thirty individual samples were taken 2-3 days after partum. The laboratory tests included: pH, fat (FT), total nitrogen (TN), lactose (LTS) and dry matter (DM) percentage (Lactostar), and somatic cell count (CCS) (DeLaval cell counter DCC). Bacterial counts included: standard plate count (SPC), total psychrophilic aerobic count (PAC), total and fecal coliforms by MPN (Most Probable Number), and sulphite-reducing bacteria (SR). The presence of Salmonella spp. was determined. Bacteriological examinations were performed according to the American Public Health Association (APHA) methods, with some adjustments related to the requirements of the study. Statistical analysis of the data was performed using Spearman's rank correlation coefficient. The results showed low variability of pH values and of FT, TN and DM percentages between samples, whereas the LTS trend was less consistent. A significant negative correlation (P<0.01) was observed between pH and the TN and LTS amounts. The correlation between LTS and TN contents was highly significant (P<0.001). The correlation between DM, TN and LTS content was highly significant and negative (P<0.001). SPC mean values were 7.54x10^6 CFU/mL; PAC mean values were also high (3.3x10^6 CFU/mL). Acceptable values of coagulase-positive staphylococci were found; 3 Staphylococcus aureus and 1 Staphylococcus epidermidis strains were isolated. Coagulase-negative staphylococci counts were low. High variability in the numbers of total coliforms (TC) and fecal coliforms (FC) was observed; bacterial loads were frequently fairly high. Salmonella spp. and SR bacteria were absent. It was assumed that bacteria in the samples had a prevailing environmental origin

  15. DEVELOPMENT OF THE METHOD AND U.S. NORMALIZATION DATABASE FOR LIFE CYCLE IMPACT ASSESSMENT AND SUSTAINABILITY METRICS

    Science.gov (United States)

    Normalization is an optional step within Life Cycle Impact Assessment (LCIA) that may be used to assist in the interpretation of life cycle inventory data as well as, life cycle impact assessment results. Normalization transforms the magnitude of LCI and LCIA results into relati...

  16. Can International Large-Scale Assessments Inform a Global Learning Goal? Insights from the Learning Metrics Task Force

    Science.gov (United States)

    Winthrop, Rebecca; Simons, Kate Anderson

    2013-01-01

    In recent years, the global community has developed a range of initiatives to inform the post-2015 global development agenda. In the education community, International Large-Scale Assessments (ILSAs) have an important role to play in advancing a global shift in focus to access plus learning. However, there are a number of other assessment tools…

  17. Economic Journal in Russia: Quality Assessment Issues

    Directory of Open Access Journals (Sweden)

    Ol’ga Valentinovna Tret’yakova

    2016-05-01

    Full Text Available The paper attempts to assess the economic journals included in the List of peer-reviewed scientific journals and editions authorized to publish the principal research findings of doctoral and candidate's dissertations (the VAK List), established by decision of the Ministry of Education and Science of the Russian Federation and entered into force on December 01, 2016. The general assessment of the journals, which number more than 380 titles, is carried out by analyzing their bibliometric indicators in the Russian Science Citation Index system, in particular the values of their impact factors. The analysis conducted at the Institute of Socio-economic Development of Territories of RAS shows that a relatively small number of economic journals publish a significant proportion of the articles that receive a large share of citations. The author reveals that the new VAK List includes over 50% of journals specializing in economic sciences that have a low level of citation or are virtually not cited at all. This indirectly indicates that such journals are “left behind” the “main stream of science”: their significance is local, their availability low, and their attractiveness to the audience and scientific authority insufficient. The analysis proves that when forming the list of peer-reviewed scientific publications recommended for the publication of dissertation research findings, along with other criteria, it is advisable to use tools that help assess the level of the journal. It is very important that the evaluation have a quantitative expression and serve as a specific measure for ranking the journals. One of these tools may be a criterion value for the two-year impact factor, which helps identify journals with a sufficient citation level. The paper presents the results of the analysis of the RSCI list, which was proposed by the Council for Science under the Ministry of Education and Science of the Russian Federation as an
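
    For reference, the two-year impact factor whose criterion value the author proposes using is conventionally defined as:

```latex
% Two-year impact factor of a journal for year y: citations received in year y
% to items published in the two preceding years, per citable item published
% in those years.
\[
  \mathrm{IF}_y \;=\; \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}}
\]
```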

  18. Quality assessment in competency based physiotherapy education

    DEFF Research Database (Denmark)

    Brandt, Jørgen

    2012-01-01

    Purpose: To ensure a transparent and competency-related assessment of physiotherapy education, in order to accomplish a close relationship between competencies at entry level to the profession and challenges in current and future health practice. Relevance: Perspectives and methods regarding rehabilitation and health promotion change with demographic developments, health politics and patterns of disease. This calls for ever-ongoing improvement and adjustment of the professional competencies achieved during physiotherapy education. At the same time, the education itself is an entity, committed … the relationship between learning outcome and demands for professional competencies in practice. This connection is evaluated through the behavior level. It covers newly graduated students' perceptions of the degree to which they comply with expectations in physiotherapy practice. Furthermore, the effect level...

  19. Assessment of mesh simplification algorithm quality

    Science.gov (United States)

    Roy, Michael; Nicolier, Frederic; Foufou, S.; Truchetet, Frederic; Koschan, Andreas; Abidi, Mongi A.

    2002-03-01

    Traditionally, medical geneticists have employed visual inspection (anthroposcopy) to clinically evaluate dysmorphology. In the last 20 years, there has been an increasing trend towards quantitative assessment to render diagnosis of anomalies more objective and reliable. These methods have focused on direct anthropometry, using a combination of classical physical anthropology tools and new instruments tailor-made to describe craniofacial morphometry. These methods are painstaking and require that the patient remain still for extended periods of time. Most recently, semiautomated techniques (e.g., structured light scanning) have been developed to capture the geometry of the face in a matter of seconds. In this paper, we establish that direct anthropometry and structured light scanning yield reliable measurements, with remarkably high levels of inter-rater and intra-rater reliability, as well as validity (contrasting the two methods).

  20. An assessment of groundwater quality using water quality index in Chennai, Tamil Nadu, India

    Directory of Open Access Journals (Sweden)

    I Nanda Balan

    2012-01-01

    Full Text Available Context: Water, the elixir of life, is a prime natural resource. Due to rapid urbanization in India, the availability and quality of groundwater have been affected. According to the Central Groundwater Board, 80% of Chennai's groundwater has been depleted and any further exploitation could lead to salt water ingression. Hence, this study was done to assess the groundwater quality in Chennai city. Aim: To assess the groundwater quality using a water quality index in Chennai city. Materials and Methods: Chennai city was divided into three zones based on legislative constituencies; from these three zones, three locations were randomly selected and nine groundwater samples were collected and analyzed for physiochemical properties. Results: With the exception of a few parameters, most water quality parameters were within the accepted standard values of the Bureau of Indian Standards (BIS). Except for pH in a single location of zone 1, none of the parameters exceeded the permissible values for water quality assessment as prescribed by the BIS. Conclusion: This study demonstrated that, in general, the groundwater quality status of Chennai city ranged from excellent to good and the groundwater is fit for human consumption based on all nine parameters of the water quality index and fluoride content.
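
    The abstract does not state which WQI formulation was applied; for illustration, the widely used weighted arithmetic index is sketched below, with parameter standards and ideal values as hypothetical inputs.

```python
def weighted_arithmetic_wqi(measured, standard, ideal):
    """Weighted arithmetic water quality index: quality ratings q_i against
    permissible standards S_i, with unit weights inversely proportional to S_i.
    Illustrative sketch; not necessarily the formulation used in the study."""
    k = 1.0 / sum(1.0 / s for s in standard)     # proportionality constant
    w = [k / s for s in standard]                # unit weights (sum to 1)
    q = [100.0 * (v - vo) / (s - vo)             # sub-index per parameter
         for v, vo, s in zip(measured, ideal, standard)]
    return sum(wi * qi for wi, qi in zip(w, q)) / sum(w)

# toy usage: pH, TDS (mg/L), chloride (mg/L) against hypothetical BIS limits
print(weighted_arithmetic_wqi(measured=[7.8, 450.0, 180.0],
                              standard=[8.5, 500.0, 250.0],
                              ideal=[7.0, 0.0, 0.0]))
```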

  1. Learning Receptive Fields and Quality Lookups for Blind Quality Assessment of Stereoscopic Images.

    Science.gov (United States)

    Shao, Feng; Lin, Weisi; Wang, Shanshan; Jiang, Gangyi; Yu, Mei; Dai, Qionghai

    2016-03-01

    Blind quality assessment of 3D images encounters more new challenges than its 2D counterpart. In this paper, we propose a blind quality assessment method for stereoscopic images that learns the characteristics of receptive fields (RFs) from the perspective of dictionary learning and constructs quality lookups to replace human opinion scores without performance loss. The important feature of the proposed method is that we do not need a large set of samples of distorted stereoscopic images and the corresponding human opinion scores to learn a regression model. To be more specific, in the training phase, we learn local RFs (LRFs) and global RFs (GRFs) from the reference and distorted stereoscopic images, respectively, and construct their corresponding local quality lookups (LQLs) and global quality lookups (GQLs). In the testing phase, blind quality pooling can be easily achieved by searching optimal GRF and LRF indexes from the learnt LQLs and GQLs, and the quality score is obtained by combining the LRF and GRF indexes together. Experimental results on three publicly available 3D image quality assessment databases demonstrate that, in comparison with the existing methods, the devised algorithm achieves high consistency with subjective assessment.

  2. Voice and Speech Quality Perception Assessment and Evaluation

    CERN Document Server

    Jekosch, Ute

    2005-01-01

    Foundations of Voice and Speech Quality Perception starts out with the fundamental question of: "How do listeners perceive voice and speech quality and how can these processes be modeled?" Any quantitative answers require measurements. This is natural for physical quantities but harder to imagine for perceptual measurands. This book approaches the problem by actually identifying major perceptual dimensions of voice and speech quality perception, defining units wherever possible and offering paradigms to position these dimensions into a structural skeleton of perceptual speech and voice quality. The emphasis is placed on voice and speech quality assessment of systems in artificial scenarios. Many scientific fields are involved. This book bridges the gap between two quite diverse fields, engineering and humanities, and establishes the new research area of Voice and Speech Quality Perception.

  3. High Resolution Peripheral Quantitative Computed Tomography for Assessment of Bone Quality

    Science.gov (United States)

    Kazakia, Galateia

    2014-03-01

    The study of bone quality is motivated by the high morbidity, mortality, and societal cost of skeletal fractures. Over 10 million people are diagnosed with osteoporosis in the US alone, suffering 1.5 million osteoporotic fractures and costing the health care system over 17 billion annually. Accurate assessment of fracture risk is necessary to ensure that pharmacological and other interventions are appropriately administered. Currently, areal bone mineral density (aBMD) based on 2D dual-energy X-ray absorptiometry (DXA) is used to determine osteoporotic status and predict fracture risk. Though aBMD is a significant predictor of fracture risk, it does not completely explain bone strength or fracture incidence. The major limitation of aBMD is the lack of 3D information, which is necessary to distinguish between cortical and trabecular bone and to quantify bone geometry and microarchitecture. High resolution peripheral quantitative computed tomography (HR-pQCT) enables in vivo assessment of volumetric BMD within specific bone compartments as well as quantification of geometric and microarchitectural measures of bone quality. HR-pQCT studies have documented that trabecular bone microstructure alterations are associated with fracture risk independent of aBMD. Cortical bone microstructure - specifically porosity - is a major determinant of strength, stiffness, and fracture toughness of cortical tissue and may further explain the aBMD-independent effect of age on bone fragility and fracture risk. The application of finite element analysis (FEA) to HR-pQCT data permits estimation of patient-specific bone strength, shown to be associated with fracture incidence independent of aBMD. This talk will describe the HR-pQCT scanner, established metrics of bone quality derived from HR-pQCT data, and novel analyses of bone quality currently in development. Cross-sectional and longitudinal HR-pQCT studies investigating the impact of aging, disease, injury, gender, race, and

  4. Assessment of air quality microsensors versus reference methods: The EuNetAir joint exercise

    Science.gov (United States)

    Borrego, C.; Costa, A. M.; Ginja, J.; Amorim, M.; Coutinho, M.; Karatzas, K.; Sioumis, Th.; Katsifarakis, N.; Konstantinidis, K.; De Vito, S.; Esposito, E.; Smith, P.; André, N.; Gérard, P.; Francis, L. A.; Castell, N.; Schneider, P.; Viana, M.; Minguillón, M. C.; Reimringer, W.; Otjes, R. P.; von Sicard, O.; Pohle, R.; Elen, B.; Suriano, D.; Pfister, V.; Prato, M.; Dipinto, S.; Penza, M.

    2016-12-01

    The 1st EuNetAir Air Quality Joint Intercomparison Exercise, organized in Aveiro (Portugal) from 13-27 October 2014, focused on the evaluation and assessment of environmental gas, particulate matter (PM) and meteorological microsensors versus standard air quality reference methods through an experimental urban air quality monitoring campaign. The IDAD-Institute of Environment and Development Air Quality Mobile Laboratory was placed at an urban traffic location in the city centre of Aveiro to conduct continuous measurements with standard equipment and reference analysers for CO, NOx, O3, SO2, PM10, PM2.5, temperature, humidity, wind speed and direction, solar radiation and precipitation. The comparison of the sensor data generated by different microsensor-systems installed side-by-side with reference analysers contributes to the assessment of the performance and accuracy of microsensor-systems in a real-world context, and supports their calibration and further development. The overall performance of the sensors in terms of their statistical metrics and measurement profile indicates significant differences in the results depending on the platform and on the sensors considered. In terms of pollutants, some promising results were observed for O3 (r2: 0.12-0.77), CO (r2: 0.53-0.87), and NO2 (r2: 0.02-0.89). For PM (r2: 0.07-0.36) and SO2 (r2: 0.09-0.20) the results show a poor performance with low correlation coefficients between the reference and microsensor measurements. These field observations under specific environmental conditions suggest that the relevant microsensor platforms, if supported by proper post-processing and data modelling tools, have enormous potential for new strategies in air quality control.
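
    The r2 values quoted above are coefficients of determination between co-located sensor and reference series. A minimal sketch of how such a value, and a linear calibration, could be computed from made-up readings:

      import numpy as np
      from scipy.stats import linregress

      # Hypothetical co-located half-hourly O3 readings (ppb).
      reference = np.array([22.0, 31.0, 45.0, 38.0, 27.0, 51.0, 43.0, 30.0])
      microsensor = np.array([25.0, 29.0, 49.0, 35.0, 31.0, 55.0, 40.0, 28.0])

      fit = linregress(microsensor, reference)
      print(f"r2 = {fit.rvalue**2:.2f}")   # coefficient of determination
      # A simple linear calibration: apply the fitted line to raw readings.
      calibrated = fit.slope * microsensor + fit.intercept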

  5. Using spatial metrics and surveys for the assessment of trans-boundary deforestation in protected areas of the Maya Mountain Massif: Belize-Guatemala border.

    Science.gov (United States)

    Chicas, S D; Omine, K; Ford, J B; Sugimura, K; Yoshida, K

    2017-02-01

    Understanding the trans-boundary deforestation history and patterns in protected areas along the Belize-Guatemala border is of regional and global importance. To assess deforestation history and patterns in our study area along a section of the Belize-Guatemala border, we incorporated multi-temporal deforestation rate analysis and spatial metrics with survey results. This multi-faceted approach provides spatial analysis with relevant insights from local stakeholders to better understand historic deforestation dynamics, spatial characteristics and human perspectives regarding the underlying causes thereof. During the study period 1991-2014, forest cover declined in Belize's protected areas: Vaca Forest Reserve 97.88%-87.62%, Chiquibul National Park 99.36%-92.12%, Caracol Archeological Reserve 99.47%-78.10% and Colombia River Forest Reserve 89.22%-78.38% respectively. A comparison of deforestation rates and spatial metrics indices indicated that between time periods 1991-1995 and 2012-2014 deforestation and fragmentation increased in protected areas. The major underlying causes, drivers, impacts, and barriers to bi-national collaboration and solutions of deforestation along the Belize-Guatemala border were identified by community leaders and stakeholders. The Mann-Whitney U test identified significant differences between leaders and stakeholders regarding the ranking of challenges faced by management organizations in the Maya Mountain Massif, except for the lack of assessment and quantification of deforestation (LD, SH: 18.67, 23.25, U = 148, p > 0.05). The survey results indicated that failure to integrate buffer communities, coordinate among managing organizations and establish strong bi-national collaboration has resulted in continued ecological and environmental degradation. The information provided by this research should aid managing organizations in their continued aim to implement effective deforestation mitigation strategies.
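
    A small illustration of two of the quantitative tools mentioned above, with invented rankings: an annualized forest-cover change rate (here the compound-rate convention r = 100/(t2-t1) * ln(a2/a1), an assumed form, not necessarily the paper's) and a Mann-Whitney U comparison of leader versus stakeholder rankings:

      import math
      from scipy.stats import mannwhitneyu

      # Annualized rate of forest-cover change for Vaca Forest Reserve,
      # using the cover percentages reported above (assumed rate convention).
      r = 100 / (2014 - 1991) * math.log(87.62 / 97.88)
      print(f"annual change: {r:.2f}% per year")

      # Hypothetical rankings of one challenge by leaders (LD) and stakeholders (SH).
      ld = [15, 18, 20, 17, 22, 19, 21, 16]
      sh = [24, 21, 26, 23, 25, 22, 20, 27]
      u, p = mannwhitneyu(ld, sh, alternative="two-sided")
      print(f"U = {u}, p = {p:.3f}")   # p > 0.05 means no significant difference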

  6. Assessing the quality of a student-generated question repository

    Science.gov (United States)

    Bates, Simon P.; Galloway, Ross K.; Riise, Jonathan; Homer, Danny

    2014-12-01

    We present results from a study that categorizes and assesses the quality of questions and explanations authored by students in question repositories produced as part of the summative assessment in introductory physics courses over two academic sessions. Mapping question quality onto the levels in the cognitive domain of Bloom's taxonomy, we find that students produce questions of high quality. More than three-quarters of questions fall into categories beyond simple recall, in contrast to similar studies of student-authored content in different subject domains. Similarly, the quality of student-authored explanations for questions was also high, with approximately 60% of all explanations classified as being of high or outstanding quality. Overall, 75% of questions met combined quality criteria, which we hypothesize is due in part to the in-class scaffolding activities that we provided for students ahead of requiring them to author questions. This work presents the first systematic investigation into the quality of student produced assessment material in an introductory physics context, and thus complements and extends related studies in other disciplines.

  7. Center to Advance Palliative Care palliative care clinical care and customer satisfaction metrics consensus recommendations.

    Science.gov (United States)

    Weissman, David E; Morrison, R Sean; Meier, Diane E

    2010-02-01

    Data collection and analysis are vital for strategic planning, quality improvement, and demonstration of palliative care program impact to hospital administrators, private funders and policymakers. Since 2000, the Center to Advance Palliative Care (CAPC) has provided technical assistance to hospitals, health systems and hospices working to start, sustain, and grow nonhospice palliative care programs. CAPC convened a consensus panel in 2008 to develop recommendations for specific clinical and customer metrics that programs should track. The panel agreed on four key domains of clinical metrics and two domains of customer metrics. Clinical metrics include: daily assessment of physical/psychological/spiritual symptoms by a symptom assessment tool; establishment of patient-centered goals of care; support to patient/family caregivers; and management of transitions across care sites. For customer metrics, consensus was reached on two domains that should be tracked to assess satisfaction: patient/family satisfaction, and referring clinician satisfaction. In an effort to ensure access to reliably high-quality palliative care data throughout the nation, hospital palliative care programs are encouraged to collect and report outcomes for each of the metric domains described here.

  8. An Approach for Assessing the Signature Quality of Various Chemical Assays when Predicting the Culture Media Used to Grow Microorganisms

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, Aimee E.; Sego, Landon H.; Webb-Robertson, Bobbie-Jo M.; Kreuzer, Helen W.; Anderson, Richard M.; Unwin, Stephen D.; Weimar, Mark R.; Tardiff, Mark F.; Corley, Courtney D.

    2013-02-01

    We demonstrate an approach for assessing the quality of a signature system designed to predict the culture medium used to grow a microorganism. The system was comprised of four chemical assays designed to identify various ingredients that could be used to produce the culture medium. The analytical measurements resulting from any combination of these four assays can be used in a Bayesian network to predict the probabilities that the microorganism was grown using one of eleven culture media. We evaluated combinations of the signature system by removing one or more of the assays from the Bayesian network. We measured and compared the quality of the various Bayesian networks in terms of fidelity, cost, risk, and utility, a method we refer to as Signature Quality Metrics.
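
    A toy sketch of the kind of Bayesian-network prediction described above, reduced to a naive Bayes update over hypothetical media and two binary assays; the paper's actual network, media and probabilities are not reproduced here:

      import numpy as np

      media = ["LB", "blood agar", "minimal"]
      prior = np.array([1/3, 1/3, 1/3])
      # P(assay positive | medium) for each assay (invented values).
      p_pos = np.array([[0.9, 0.2, 0.1],    # assay 1
                        [0.3, 0.8, 0.2]])   # assay 2

      def posterior(observations):
          # observations: dict assay_index -> 0/1; omitted assays model the
          # "removed" assays when evaluating a reduced signature system.
          post = prior.copy()
          for i, obs in observations.items():
              likelihood = p_pos[i] if obs == 1 else 1 - p_pos[i]
              post = post * likelihood
          return post / post.sum()

      print(posterior({0: 1, 1: 0}))   # both assays available
      print(posterior({0: 1}))         # assay 2 removed from the network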

  9. Transcription factor motif quality assessment requires systematic comparative analysis [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Caleb Kipkurui Kibet

    2016-03-01

    Transcription factor (TF) binding site prediction remains a challenge in gene regulatory research due to degeneracy and potential variability in binding sites in the genome. Dozens of algorithms designed to learn binding models (motifs) have generated many motifs available in research papers, with a subset making it to databases like JASPAR, UniPROBE and Transfac. The presence of many versions of motifs from the various databases for a single TF and the lack of a standardized assessment technique make it difficult for biologists to make an appropriate choice of binding model and for algorithm developers to benchmark, test and improve on their models. In this study, we review and evaluate the approaches in use, highlight differences and demonstrate the difficulty of defining a standardized motif assessment approach. We review scoring functions, motif length, test data and the type of performance metrics used in prior studies as some of the factors that influence the outcome of a motif assessment. We show that the scoring functions and statistics used in motif assessment influence ranking of motifs in a TF-specific manner. We also show that TF binding specificity can vary by source of genomic binding data. We also demonstrate that information content of a motif is not in isolation a measure of motif quality but is influenced by TF binding behaviour. We conclude that there is a need for an easy-to-use tool that presents all available evidence for a comparative analysis.
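
    One point above is that information content alone does not determine motif quality. A minimal sketch of how per-column information content of a position weight matrix is computed against a uniform background (toy matrix, not from the study):

      import numpy as np

      # Hypothetical 4-column position weight matrix (rows A, C, G, T).
      pwm = np.array([[0.7, 0.1, 0.2, 0.25],
                      [0.1, 0.1, 0.2, 0.25],
                      [0.1, 0.7, 0.3, 0.25],
                      [0.1, 0.1, 0.3, 0.25]])

      # Information content per column against a uniform background:
      # IC_j = sum_b p_bj * log2(p_bj / 0.25)
      ic = (pwm * np.log2(pwm / 0.25)).sum(axis=0)
      print(ic, ic.sum())   # near-zero columns carry little specificity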

  10. Food quality assessment in parent–child dyads

    DEFF Research Database (Denmark)

    Bech-Larsen, Tino; Jensen, Birger Boutrup

    2011-01-01

    When the buyer and the consumer of a food product are not identical, the risk of discrepancies between food quality expectations and experience is even higher than when the buyer is also the consumer. In such situations the interpersonal aspects of food quality formation become the focus of attention. The purpose of this article is to discuss the interpersonal aspects of food quality formation, and to explore these in the context of parents buying new types of healthier in-between meals for their children. To pursue this we introduce the concept of dyadic quality assessment and apply it to a hall-test of children's and parents' quality formation and to the latter's willingness to pay for such products. The findings show poor congruence between parent and child quality evaluations due to the two parties emphasising different quality aspects. Results also indicate, however, that improved parental knowledge of their children's quality assessments significantly affects the willingness to pay. Accordingly, interaction between parents and children should be promoted when developing, testing and marketing new and healthier food products for children.

  11. Self-Organizing Maps for Fingerprint Image Quality Assessment

    DEFF Research Database (Denmark)

    Olsen, Martin Aastrup; Tabassi, Elham; Makarov, Anton

    2013-01-01

    Fingerprint quality assessment is a crucial task which needs to be conducted accurately in various phases in the biometric enrolment and recognition processes. Neglecting quality measurement will adversely impact accuracy and efficiency of biometric recognition systems (e.g. verification and identification). We propose an approach based on machine learning techniques. We train a self-organizing map (SOM) to cluster blocks of fingerprint images based on their spatial information content. The output of the SOM is a high-level representation of the finger image, which forms the input to a Random Forest trained to learn the relationship between this representation and fingerprint quality…
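
    A hypothetical sketch of the SOM-plus-Random-Forest pipeline described above, using the third-party minisom package and scikit-learn; the block features, image counts and quality labels are all stand-ins, not the authors' data:

      import numpy as np
      from minisom import MiniSom
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(2)
      blocks = rng.uniform(size=(5000, 36))   # stand-in fingerprint block features

      som = MiniSom(8, 8, input_len=36, sigma=1.0, learning_rate=0.5, random_seed=2)
      som.train_random(blocks, 10000)

      def image_descriptor(image_blocks):
          # Histogram of winning SOM nodes over all blocks of one image:
          # a high-level representation of the finger image.
          hist = np.zeros(64)
          for b in image_blocks:
              i, j = som.winner(b)
              hist[8 * i + j] += 1
          return hist / len(image_blocks)

      # Hypothetical training set: descriptors of 50 images with quality labels.
      X = np.array([image_descriptor(rng.uniform(size=(100, 36))) for _ in range(50)])
      y = rng.uniform(size=50)                # stand-in quality/utility scores
      rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)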

  12. Assessment of Electric Power Quality in Ships'Modern Systems

    Institute of Scientific and Technical Information of China (English)

    Janusz Mindykowski; XU Xiao-yan

    2004-01-01

    The paper deals with selected problems of electric power quality in ships' modern systems. The introduction briefly presents the fundamentals of electric power quality assessment: the relations and consequences among power quality phenomena and indices, as well as the methods, tools and appropriate instrumentation. Afterwards, the basic characteristics of power systems on modern ships are given. The main focus of the paper is on the assessment of electric power quality in ships' systems fitted with converter subsystems. The state of the art and current tendencies in the discussed matter are shown. Some chosen experimental results, based on research carried out under the supervision of the author, are presented too. Finally, some concluding issues are briefly commented on.

  13. Assessing Quality of Care of Elderly Patients Using the ACOVE Quality Indicator Set: A Systematic Review

    NARCIS (Netherlands)

    Askari, M.; Wierenga, P.C.; Eslami, S.; Medlock, S.; de Rooij, S.E.; Abu-Hanna, A.

    2011-01-01

    Background: Care of the elderly is recognized as an increasingly important segment of health care. The Assessing Care Of Vulnerable Elderly (ACOVE) quality indicators (QIs) were developed to assess and improve the care of elderly patients. Objectives: The purpose of this review is to summarize studies that have used the ACOVE QIs…

  14. Procedure for assessing visual quality for landscape planning and management

    Science.gov (United States)

    Gimblett, H. Randal; Fitzgibbon, John E.; Bechard, Kevin P.; Wightman, J. A.; Itami, Robert M.

    1987-07-01

    Incorporation of aesthetic considerations in the process of landscape planning and development has frequently met with poor results due to its lack of theoretical basis, public involvement, and failure to deal with spatial implications. This problem has been especially evident when dealing with large areas, for example, the Adirondacks, Scenic Highways, and National Forests and Parks. This study made use of public participation to evaluate scenic quality in a portion of the Niagara Escarpment in Southern Ontario, Canada. The results of this study were analyzed using the visual management model proposed by Brown and Itami (1982) as a means of assessing and evaluating scenic quality. The map analysis package formulated by Tomlin (1980) was then applied to this assessment for the purpose of spatial mapping of visual impact. The results of this study illustrate that it is possible to assess visual quality for landscape planning/management, preservation, and protection using a theoretical basis, public participation, and a systematic spatial mapping process.

  15. Medical education quality assessment. Perspectives in University Policlinic context.

    Directory of Open Access Journals (Sweden)

    Maricel Castellanos González

    2008-08-01

    Quality currently plays a central role within our National Health System, particularly in the formative process of human resources, where we need professionals who are better prepared every day and ready to face complex tasks. We present a bibliographic review related to quality assessment of the educational process in the health system, in order to analyze the perspectives of the new model of University Policlinic, the formative context of Medical Sciences students.

  16. A Methodology for Anatomic Ultrasound Image Diagnostic Quality Assessment

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian; Lange, Theis; Brandt, Andreas Hjelm

    2017-01-01

    This paper discusses methods for assessment of ultrasound image quality based on our experiences with evaluating new methods for anatomic imaging. It presents a methodology to ensure a fair assessment between competing imaging methods using clinically relevant evaluations. The methodology is valuable in the continuing process of method optimization and guided development of new imaging methods. It includes a three-phased study plan covering from initial prototype development to clinical assessment. Recommendations to the clinical assessment protocol, software, and statistical analysis are given to properly reveal the clinical value. The paper exemplifies the methodology using recent studies of Synthetic Aperture Sequential Beamforming tissue harmonic imaging.

  17. Quality-of-life assessment techniques for veterinarians.

    Science.gov (United States)

    Villalobos, Alice E

    2011-05-01

    The revised veterinary oath commits the profession to the prevention and relief of animal suffering. There is a professional obligation to properly assess quality of life (QoL) and confront the issues that ruin it, such as undiagnosed suffering. There are no clinical studies in the arena of QoL assessment at the end of life for pets. This author developed a user-friendly QoL scale to help make proper assessments and decisions along the way to the conclusion of a terminal patient's life. This article discusses decision aids and establishes commonsense techniques to assess a pet's QoL.

  18. The Information Quality Triangle: a methodology to assess clinical information quality.

    Science.gov (United States)

    Choquet, Rémy; Qouiyd, Samiha; Ouagne, David; Pasche, Emilie; Daniel, Christel; Boussaïd, Omar; Jaulent, Marie-Christine

    2010-01-01

    Building qualitative clinical decision support or monitoring based on information stored in clinical information (or EHR) systems cannot be done without assessing and controlling information quality. Numerous works have introduced methods and measures to qualify and enhance the quality of data, information models and terminologies. This paper introduces an approach based on an Information Quality Triangle that aims at providing a generic framework to help characterize quality measures and methods in the context of the integration of EHR data in a clinical data warehouse. We have successfully experimented with the proposed approach at the HEGP hospital in France, as part of the DebugIT EU FP7 project.

  19. Quality of life assessment in dogs and cats receiving chemotherapy

    DEFF Research Database (Denmark)

    Vøls, Kåre K.; Heden, Martin A.; Kristensen, Annemarie Thuri

    2016-01-01

    A comparative analysis of published papers on the effects of chemotherapy on QoL in dogs and cats was conducted. This was supplemented with a comparison of the parameters and domains used in veterinary QoL-assessments with those used in the Pediatric Quality of Life Inventory (PedsQL™) questionnaire designed to assess QoL in toddlers. Each of the identified publications including QoL-assessment in dogs and cats receiving chemotherapy applied a different method of QoL-assessment. In addition, the veterinary QoL-assessments were mainly focused on physical clinical parameters, whereas the emotional (6/11), social (4/11) and role (4/11) domains were less represented. QoL-assessment of cats and dogs receiving chemotherapy is in its infancy. The most commonly reported method to assess QoL was questionnaire based and mostly included physical and clinical parameters. Standardizing and including a complete range of domains in future QoL-assessments would be desirable…

  20. Quality assessment of butter cookies applying multispectral imaging

    DEFF Research Database (Denmark)

    Stenby Andresen, Mette; Dissing, Bjørn Skovlund; Løje, Hanne

    2013-01-01

    A method for characterization of butter cookie quality by assessing the surface browning and water content using multispectral images is presented. Based on evaluations of the browning of butter cookies, the cookies were manually divided into groups. From this categorization, reference values were established for cookies baked in a forced convection electrically heated oven. In addition to the browning score, a model for predicting the average water content based on the same images is presented. This shows how multispectral images of butter cookies may be used for the assessment of different quality parameters. Statistical analysis…
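
    A minimal sketch of the prediction step described above: regressing a quality value (here water content) on mean per-band intensities from multispectral images. The band count, sample sizes and model choice are assumptions, not the authors' exact model:

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(3)
      # Mean intensity per spectral band for 40 cookies (hypothetical, 19 bands).
      band_means = rng.uniform(size=(40, 19))
      water_content = 5 + 2 * band_means[:, 4] + rng.normal(scale=0.1, size=40)

      model = LinearRegression().fit(band_means[:30], water_content[:30])
      print("predicted water content:", model.predict(band_means[30:35]))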

  1. No-Reference Video Quality Assessment using MPEG Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2013-01-01

    We present a method for No-Reference (NR) Video Quality Assessment (VQA) for decoded video without access to the bitstream. This is achieved by extracting and pooling features from a NR image quality assessment method used frame by frame. We also present methods to identify the video codec and estimate the video coding parameters for MPEG-2 and H.264/AVC, which can be used to improve the VQA. The analysis differs from most other video coding analysis methods since it is done without access to the bitstream. The results show that our proposed method is competitive with other recent NR VQA methods.

  2. Model quality assessment using distance constraints from alignments

    DEFF Research Database (Denmark)

    Paluszewski, Martin; Karplus, Kevin

    2008-01-01

    Given a set of alternative models for a specific protein sequence, the model quality assessment (MQA) problem asks for an assignment of scores to each model in the set. A good MQA program assigns these scores such that they correlate well with the real quality of the models, ideally scoring the best models highest. Our method compares well with the best MQA methods that were assessed at CASP7. We also propose a new evaluation measure, Kendall's tau, that is more interpretable than the conventional measures used for evaluating MQA methods (Pearson's r and Spearman's rho). We show clear examples where Kendall's tau agrees much more with our intuition…
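
    A quick illustration of the three evaluation measures discussed above on invented model scores; Kendall's tau counts concordant versus discordant pairs, which is why it is argued to be more interpretable:

      from scipy.stats import kendalltau, pearsonr, spearmanr

      true_quality = [0.81, 0.55, 0.47, 0.30, 0.12]   # e.g. real model quality
      mqa_scores   = [0.90, 0.60, 0.35, 0.40, 0.05]   # scores from an MQA program

      print("Pearson r :", pearsonr(true_quality, mqa_scores)[0])
      print("Spearman  :", spearmanr(true_quality, mqa_scores)[0])
      tau, _ = kendalltau(true_quality, mqa_scores)
      print("Kendall   :", tau)   # concordant minus discordant pair fraction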

  3. Assessing service quality satisfying the expectations of library customers

    CERN Document Server

    Hernon, Peter; Dugan, Robert

    2015-01-01

    Academic and public libraries are continuing to transform as the information landscape changes, expanding their missions into new service roles that call for improved organizational performance and accountability. Since Assessing Service Quality premiered in 1998, receiving the prestigious Highsmith Library Literature Award, scores of library managers and administrators have trusted its guidance for applying a customer-centered approach to service quality and performance evaluation. This extensively revised and updated edition explores even further the ways technology influences both the experiences of library customers and the ways libraries themselves can assess those experiences.

  4. Assessment of quality of life in bronchial asthma patients

    Directory of Open Access Journals (Sweden)

    N Nalina

    2015-01-01

    Introduction: Asthma is a common chronic disease that affects persons of all ages. People with asthma report impacts on the physical, psychological and social domains of quality of life. Health-related quality of life (HRQoL) measures have been developed to complement traditional health measures such as prevalence, mortality and hospitalization as indicators of the impact of disease. Objective and Study Design: The objective of this study was to assess HRQoL in bronchial asthma patients and to relate the severity of asthma to their quality of life. Eighty-five asthma patients were evaluated for HRQoL, and their pulmonary function test values were correlated with HRQoL scores. Results and Conclusion: It was found that asthma patients had poor quality of life. There was greater impairment in quality of life in female, obese and middle-aged patients, indicating that sex, body mass index and age are determinants of HRQoL in asthma patients.

  5. Evaluating MyPlate: An Expanded Framework Using Traditional and Nontraditional Metrics for Assessing Health Communication Campaigns

    Science.gov (United States)

    Levine, Elyse; Abbatangelo-Gray, Jodie; Mobley, Amy R.; McLaughlin, Grant R.; Herzog, Jill

    2012-01-01

    MyPlate, the icon and multimodal communication plan developed for the 2010 Dietary Guidelines for Americans (DGA), provides an opportunity to consider new approaches to evaluating the effectiveness of communication initiatives. A review of indicators used in assessments for previous DGA communication initiatives finds gaps in accounting for…

  6. Examination of the metric characteristics of a Switzerland Competence Assessment Scale as an indicator of school readiness of preschool children

    Directory of Open Access Journals (Sweden)

    Joško Sindik

    2014-12-01

    Competence is "ability at work", an ability that is recognized in a certain activity; the formation of a competent individual begins as early as preschool. The main objective of this research was to determine the psychometric properties of the Competence Assessment Scale (SPK), based on practical experience in the Swiss canton of Glarus, and to describe the readiness of children to attend school. The sample comprised 258 children from four kindergartens in Zagreb, Split and Ivanić Grad, with a mean age of 6.26±0.42 years, of whom 112 were girls and 146 boys. The behavioral characteristics of the children were evaluated using the Competence Assessment Scale by 60 preschool educators from 30 groups across the kindergartens. There were positive, although low to medium-high, correlations between the estimated levels of children's competencies. All items of all SPK subscales loaded satisfactorily on their corresponding principal components. However, the SPK shows somewhat lower discriminability. Preliminary testing showed that the SPK, applied to a sample of preschool children in Croatia, provides valid and reliable results, and as such can help in assessing each child's readiness for school. The scale is more sensitive at lower levels of competencies, which allows for the identification of children with less developed competencies, but not for assessing the most competent children.

  7. From Log Files to Assessment Metrics: Measuring Students' Science Inquiry Skills Using Educational Data Mining

    Science.gov (United States)

    Gobert, Janice D.; Sao Pedro, Michael; Raziuddin, Juelaila; Baker, Ryan S.

    2013-01-01

    We present a method for assessing science inquiry performance, specifically for the inquiry skill of designing and conducting experiments, using educational data mining on students' log data from online microworlds in the Inq-ITS system (Inquiry Intelligent Tutoring System; www.inq-its.org). In our approach, we use a 2-step process: First we…

  8. Assessing the Quality of MT Systems for Hindi to English Translation

    Science.gov (United States)

    Kalyani, Aditi; Kumud, Hemant; Pal Singh, Shashi; Kumar, Ajai

    2014-03-01

    Evaluation plays a vital role in checking the quality of MT output. It is done either manually or automatically. Manual evaluation is very time consuming and subjective, hence automatic metrics are used most of the time. This paper evaluates the translation quality of different MT engines for Hindi-English (Hindi data is provided as input and English is obtained as output) using various automatic metrics like BLEU, METEOR etc. Further, a comparison of the automatic evaluation results with human ranking has also been given.
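
    A minimal example of computing one of the automatic metrics named above (BLEU) with NLTK on a toy sentence pair; smoothing is needed for short segments:

      from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

      reference = [["the", "cabinet", "approved", "the", "proposal"]]
      hypothesis = ["the", "cabinet", "has", "approved", "proposal"]

      score = sentence_bleu(reference, hypothesis,
                            smoothing_function=SmoothingFunction().method1)
      print(f"BLEU = {score:.3f}")   # 0 (no overlap) .. 1 (identical)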

  9. Metrics for phylogenetic networks II: nodal and triplets metrics.

    Science.gov (United States)

    Cardona, Gabriel; Llabrés, Mercè; Rosselló, Francesc; Valiente, Gabriel

    2009-01-01

    The assessment of phylogenetic network reconstruction methods requires the ability to compare phylogenetic networks. This is the second in a series of papers devoted to the analysis and comparison of metrics for tree-child time consistent phylogenetic networks on the same set of taxa. In this paper, we generalize to phylogenetic networks two metrics that have already been introduced in the literature for phylogenetic trees: the nodal distance and the triplets distance. We prove that they are metrics on any class of tree-child time consistent phylogenetic networks on the same set of taxa, as well as some basic properties for them. To prove these results, we introduce a reduction/expansion procedure that can be used not only to establish properties of tree-child time consistent phylogenetic networks by induction, but also to generate all tree-child time consistent phylogenetic networks with a given number of leaves.
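
    A small sketch of the nodal distance in the tree special case: compare the matrices of pairwise leaf-to-leaf path lengths of two trees (given as child-to-parent maps) and take half the L1 norm of their difference. The paper's generalization to tree-child time consistent networks is not attempted here:

      import itertools
      import numpy as np

      def leaf_distances(parent, leaves):
          # Pairwise path lengths between leaves in a tree given as a
          # child -> parent map (the root maps to None).
          def ancestor_depths(v):
              depths, d = {}, 0
              while v is not None:
                  depths[v] = d
                  v = parent[v]
                  d += 1
              return depths
          n = len(leaves)
          D = np.zeros((n, n))
          for a, b in itertools.combinations(range(n), 2):
              da, db = ancestor_depths(leaves[a]), ancestor_depths(leaves[b])
              # The shortest path runs through the lowest common ancestor.
              D[a, b] = D[b, a] = min(da[v] + db[v] for v in da if v in db)
          return D

      leaves = ["x", "y", "z"]
      t1 = {"x": "u", "y": "u", "u": "r", "z": "r", "r": None}
      t2 = {"x": "r", "y": "v", "z": "v", "v": "r", "r": None}
      d = np.abs(leaf_distances(t1, leaves) - leaf_distances(t2, leaves)).sum() / 2
      print("nodal distance:", d)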

  10. Assessing users satisfaction with service quality in Slovenian public library

    Directory of Open Access Journals (Sweden)

    Igor Podbrežnik

    2016-07-01

    Purpose: Research was carried out into user satisfaction with the quality of library services in one of the Slovenian public libraries. The aim was to establish the type of service quality level actually expected by the users, and to determine their satisfaction with the current quality level of available library services. Methodology: The research was performed by means of the SERVQUAL measuring tool, which was used to determine the size and direction of the gap between the perceived and the expected quality of library services among public library users. Results: Different groups of users provide different assessments of specific quality factors, and a library cannot satisfy the expectations of each and every user if most quality factors display discrepancies between perceptions and expectations. The users expect more reliable services and more qualified library staff members who would understand and allocate time for each user's individual needs. The largest discrepancies from expectations are detected among users in the under-35 age group and among the more experienced and skilled library users. The results of factor analysis confirm that a higher number of quality factors can be explained by three common factors affecting the satisfaction of library users. A strong connection between user satisfaction and users' assessment of the overall quality of services and loyalty has been established. Research restrictions: The research results should not be generalised and applied to all Slovenian public libraries since they differ in many important aspects. In addition, a non-random sampling method was used. Research originality/Applicability: The conducted research illustrates the use of a measuring tool that was developed with the aim of determining the satisfaction of users with the quality of library services in Slovenian public libraries. Keywords: public library, user satisfaction, quality of library services, user…
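
    The SERVQUAL gap mentioned above is simply perception minus expectation per quality dimension. A toy computation with invented 7-point Likert means:

      import numpy as np

      dimensions = ["tangibles", "reliability", "responsiveness",
                    "assurance", "empathy"]
      expectation = np.array([6.1, 6.5, 6.3, 6.4, 6.0])   # 1-7 Likert means
      perception  = np.array([5.8, 5.6, 5.9, 6.0, 5.5])

      gaps = perception - expectation   # negative gap = expectations unmet
      for dim, gap in zip(dimensions, gaps):
          print(f"{dim:15s} {gap:+.1f}")
      print(f"overall SERVQUAL score: {gaps.mean():+.2f}")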

  11. Assessing the nutritional stress hypothesis: Relative influence of diet quantity and quality on seabird productivity

    Science.gov (United States)

    Jodice, P.G.R.; Roby, D.D.; Turco, K.R.; Suryan, R.M.; Irons, D.B.; Piatt, J.F.; Shultz, M.T.; Roseneau, D.G.; Kettle, A.B.; Anthony, J.A.

    2006-01-01

    Food availability comprises a complex interaction of factors that integrates abundance, taxonomic composition, accessibility, and quality of the prey base. The relationship between food availability and reproductive performance can be assessed via the nutritional stress (NSH) and junkfood (JFH) hypotheses. With respect to reproductive success, NSH posits that a deficiency in any of the aforementioned metrics can have a deleterious effect on a population via poor reproductive success. JFH, a component of NSH, posits specifically that it is a decline in the quality of food (i.e. energy density and lipid content) that leads to poor reproductive success. We assessed each in relation to reproductive success in a piscivorous seabird, the black-legged kittiwake Rissa tridactyla. We measured productivity, taxonomic composition, frequency, size, and quality of meals delivered to nestlings from 1996 to 1999 at 6 colonies in Alaska, USA, 3 each in Prince William Sound and Lower Cook Inlet. Productivity varied widely among colony-years. Pacific herring Clupea pallasi, sand lance Ammodytes hexapterus, and capelin Mallotus villosus comprised ca. 80% of the diet among colony-years, and each was characterized by relatively high energy density. Diet quality for kittiwakes in this region therefore remained uniformly high during this study. Meal delivery rate and meal size were quite variable among colony-years, however, and best explained the variability in productivity. Parent kittiwakes appeared to select prey that were energy dense and that maximized the biomass provisioned to broods. While these results fail to support JFH, they do provide substantial support for NSH. © Inter-Research 2006.

  12. Sheaves of metric structures

    CERN Document Server

    Daza, Maicol A Ochoa

    2011-01-01

    We introduce and develop the theory of metric sheaves. A metric sheaf $\mathcal{A}$ is defined on a topological space $X$ such that each fiber is a metric model. We describe the construction of the generic model as the quotient space of the sheaf through an appropriate filter. Semantics in this model is completely controlled and understood by the forcing rules in the sheaf.

  13. MO-A-16A-01: QA Procedures and Metrics: In Search of QA Usability

    Energy Technology Data Exchange (ETDEWEB)

    Sathiaseelan, V [Northwestern Memorial Hospital, Chicago, IL (United States); Thomadsen, B [University of Wisconsin, Madison, WI (United States)

    2014-06-15

    Radiation therapy has undergone considerable changes in the past two decades with a surge of new technology and treatment delivery methods. The complexity of radiation therapy treatments has increased and there has been increased awareness and publicity about the associated risks. In response, there has been a proliferation of guidelines for medical physicists to adopt to ensure that treatments are delivered safely. Task Group recommendations are copious, and clinical physicists' hours are longer, stretched to various degrees between site planning and management, IT support, physics QA, and treatment planning responsibilities. Radiation oncology has many quality control practices in place to ensure the delivery of high-quality, safe treatments. Incident reporting systems have been developed to collect statistics about near-miss events at many radiation oncology centers. However, tools are lacking to assess the impact of these various control measures. A recent effort to address this shortcoming is the work of Ford et al (2012), who published a methodology enumerating quality control quantification for measuring the effectiveness of safety barriers. Over 4000 near-miss incidents reported from 2 academic radiation oncology clinics were analyzed using quality control quantification, and a profile of the most effective quality control measures (metrics) was identified. There is a critical need to identify QA metrics that help busy clinical physicists focus their limited time and resources most effectively in order to minimize or eliminate errors in the radiation treatment delivery processes. In this symposium the usefulness of workflows and QA metrics to assure safe and high-quality patient care will be explored. Two presentations will be given: "Quality Metrics and Risk Management with High Risk Radiation Oncology Procedures" and "Strategies and Metrics for Quality Management in the TG-100 Era". Learning Objectives: provide an overview of, and the need for, QA usability…

  14. Using descriptive mark-up to formalize translation quality assessment

    CERN Document Server

    Kutuzov, Andrey

    2008-01-01

    The paper deals with using descriptive mark-up to emphasize translation mistakes. The author postulates the necessity to develop a standard and formal XML-based way of describing translation mistakes. It is considered to be important for achieving impersonal translation quality assessment. Marked-up translations can be used in corpus translation studies; moreover, automatic translation assessment based on marked-up mistakes is possible. The paper concludes with setting up guidelines for further activity within the described field.
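
    The paper itself does not publish a schema, but as a hypothetical illustration of XML mark-up for a translation mistake, generated here with Python's standard library (the element and attribute names are invented):

      import xml.etree.ElementTree as ET

      seg = ET.Element("segment", id="12")
      seg.text = "He go to school."
      err = ET.SubElement(seg, "mistake", type="grammar",
                          subtype="agreement", severity="minor")
      err.text = "go"
      print(ET.tostring(seg, encoding="unicode"))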

  15. No-Reference Video Quality Assessment using Codec Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2015-01-01

    A no-reference video quality assessment (VQA) method is presented for videos distorted by H.264/AVC and MPEG-2. The assessment is performed without access to the bit-stream. Instead we analyze and estimate coefficients based on decoded pixels. The approach involves distinguishing between the two codecs and mapping the features extracted from the estimated coefficients to quality scores using Support Vector Regression (SVR). For validation purposes, the proposed method was tested on two databases. In both cases good performance compared with state-of-the-art full-, reduced-, and no-reference VQA algorithms was achieved.
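
    A hypothetical sketch of the SVR step named above, mapping per-video features estimated from decoded pixels to subjective scores with scikit-learn; the features, sample sizes and hyperparameters are invented:

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(1)
      # Hypothetical per-video features estimated from decoded pixels
      # (e.g. quantization statistics); targets are subjective MOS values.
      features = rng.uniform(size=(120, 6))
      mos = 1 + 4 * features[:, 0] + rng.normal(scale=0.3, size=120)

      model = SVR(kernel="rbf", C=10.0, epsilon=0.2).fit(features[:100], mos[:100])
      predicted = model.predict(features[100:])
      print(predicted[:5])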

  16. Landscape morphology metrics for urban areas: analysis of the role of vegetation in the management of the quality of urban environment

    Directory of Open Access Journals (Sweden)

    Danilo Marques de Magalhães

    2013-05-01

    This study aims to demonstrate the applicability of landscape metric analysis to fragments of urban land use. More specifically, it focuses on low vegetation cover, arboreal and shrubby vegetation, and their distribution across land uses. Differences in vegetation cover in dense urban areas are explained. It also briefly discusses the state of the art of Landscape Ecology and landscape metrics. As an example, it develops a case study in Belo Horizonte, Minas Gerais, Brazil. For this study, it uses area metrics: the relations between area, perimeter, core, and circumscribed circle. From this analysis, the paper proposes the definition of priority areas for conservation, urban parks, free spaces of common land, linear parks and green corridors. It is demonstrated that, in order to design the urban landscape, studies of two-dimensional landscape representations are still useful, but should consider the systemic relations between the different factors of shape and land use.
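
    As a small illustration of an area-perimeter metric of the kind named above, a circularity ratio computed for an invented vegetation fragment (one common shape index; the paper's exact indices may differ):

      import math

      # Hypothetical vegetation fragment measured from a land-use map.
      area, perimeter = 4200.0, 310.0   # m2 and m

      # Circularity ratio: 1.0 for a circle, smaller for elongated fragments.
      circularity = 4 * math.pi * area / perimeter**2
      print(f"circularity = {circularity:.2f}")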

  18. PLÉIADES Project: Assessment of Georeferencing Accuracy, Image Quality, Pansharpening Performance and DSM/DTM Quality

    Science.gov (United States)

    Topan, Hüseyin; Cam, Ali; Özendi, Mustafa; Oruç, Murat; Jacobsen, Karsten; Taşkanat, Talha

    2016-06-01

    Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first satellites of Europe with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs a MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects: one within this program, the second supported by the BEU Scientific Research Project Program, and the third supported by TÜBİTAK. Assessment of georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality is investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y, and ±1.82 m in Z have been reached in spite of the very narrow angle of convergence by bias-corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39-0.46 for the triplet panchromatic images, indicating a satisfying image quality. SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of the Pléiades images. The pansharpened images were generated by various methods, and are validated by the most common statistical…
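
    The SX and SY figures quoted above are standard deviations of ground control point residuals after orientation. A toy computation with invented residuals:

      import numpy as np

      # Hypothetical GCP residuals (metres) after bias-corrected RPC orientation.
      res_x = np.array([ 0.31, -0.52, 0.44, -0.28, 0.61, -0.47])
      res_y = np.array([-0.40,  0.38, -0.55, 0.29, -0.33, 0.51])

      sx, sy = res_x.std(ddof=1), res_y.std(ddof=1)
      print(f"SX = {sx:.2f} m, SY = {sy:.2f} m")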

  19. Forensic mental health assessment in France: recommendations for quality improvement.

    Science.gov (United States)

    Combalbert, Nicolas; Andronikof, Anne; Armand, Marine; Robin, Cécile; Bazex, Hélène

    2014-01-01

    The quality of forensic mental health assessment has been a growing concern in various countries on both sides of the Atlantic, but the legal systems are not always comparable and some aspects of forensic assessment are specific to a given country. This paper describes the legal context of forensic psychological assessment in France (i.e. pre-trial investigation phase entrusted to a judge, with mental health assessment performed by preselected professionals called "experts" in French), its advantages and its pitfalls. Forensic psychiatric or psychological assessment is often an essential and decisive element in criminal cases, but since a judiciary scandal which was made public in 2005 (the Outreau case) there has been increasing criticism from the public and the legal profession regarding the reliability of clinical conclusions. Several academic studies and a parliamentary report have highlighted various faulty aspects in both the judiciary process and the mental health assessments. The heterogeneity of expert practices in France appears to be mainly related to a lack of consensus on several core notions such as mental health diagnosis or assessment methods, poor working conditions, lack of specialized training, and insufficient familiarity with the Code of Ethics. In this article we describe and analyze the French practice of forensic psychologists and psychiatrists in criminal cases and propose steps that could be taken to improve its quality, such as setting up specialized training courses, enforcing the Code of Ethics for psychologists, and calling for consensus on diagnostic and assessment methods.

  20. Theoretical Aspects and Methodological Approaches to Sales Services Quality Assessment

    Directory of Open Access Journals (Sweden)

    Tarasova EE

    2015-11-01

    The article defines trade service quality and proposes an object-oriented approach to interpreting its essence, singling out such components as product offering and goods quality, service forms and goods selling methods, merchandising, services and staff. A model for managing the trading service of retail outlets is worked out; it covers the strategic, tactical and operational levels of management and is aimed at meeting customers' expectations, achieving sustainable competitive positions and increasing customer loyalty. A methodology for estimating trade service quality is developed and tested; it allows a comparative assessment of cooperative retailing both in terms of general indicators and their individual components, helps regulate the factors affecting trade service quality, and supports positive administrative action. The article presents the results of an evaluation of customer service quality in consumer cooperative retailers, including the dynamics of overall and comprehensive indicators of trade service quality for the selected components, and states the main directions and measures for improving trade service quality based on the quantitative values of individual indicators for each of the five selected components (product offering and goods quality, service forms and sale methods, merchandising, services, staff).

  1. Assessment of Soil Quality of Tidal Marshes in Shanghai City

    Institute of Scientific and Technical Information of China (English)

    Qing WANG; Juan TAN; Jianqiang WU; Chenyan SHA; Junjie RUAN; Min WANG; Shenfa HUANG

    2013-01-01

    We take three types of tidal marshes in Shanghai City as the study object: tidal marshes on the mainland, tidal marshes on the rim of islands, and shoals in the Yangtze estuary. On the basis of assessing nutrient quality and environmental quality respectively, we use a soil quality index (SQI) to assess the soil quality of the tidal flats, formulate quality grading standards, and analyze its current situation and characteristics. The results show that, except for the north of Hangzhou Bay, Nanhui and Jiuduansha, which have low soil nutrient quality, there are no obvious differences in soil nutrient quality between the other regions. The heavy metal pollution of tidal marshes on the mainland is more serious than that of tidal marshes on the rim of islands. In terms of the comprehensive soil quality index, the regions are ranked as follows: Jiuduansha wetland > Chongming Dongtan wetland > Nanhui tidal flat > tidal flat on the periphery of Chongming Island > tidal flat on the periphery of Hengsha Island > Pudong tidal flat > Baoshan tidal flat > tidal flat on the periphery of Changxing Island > tidal flat in the north of Hangzhou Bay. Among them, Jiuduansha wetland and Chongming Dongtan wetland have the best soil quality, belonging to class III, followed by Nanhui tidal flat and the tidal flats on the peripheries of Chongming Island and Hengsha Island, belonging to class IV; the tidal flat on the periphery of Changxing Island, Pudong tidal flat, Baoshan tidal flat and the tidal flat in the north of Hangzhou Bay belong to class V.
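
    A toy sketch of a weighted soil quality index of the kind used above; the indicator set, weights and grade boundaries here are invented, not the study's:

      import numpy as np

      # Hypothetical normalized indicator scores (0-1) for one tidal flat,
      # covering nutrient and environmental (heavy-metal) sub-indices.
      indicators = np.array([0.72, 0.55, 0.80, 0.40])   # e.g. OM, TN, TP, metals
      weights = np.array([0.3, 0.2, 0.2, 0.3])          # assumed weights, sum to 1

      sqi = float(np.dot(weights, indicators))
      grade = ["I", "II", "III", "IV", "V"][min(4, int((1 - sqi) * 5))]
      print(f"SQI = {sqi:.2f}, grade {grade}")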

  2. Quality of Information Assurance - Assessment, Management and Use (QIAAMU)

    Science.gov (United States)

    2013-04-01

    A unit uses a mobile platform (laptop plus radio and SATCOM gear) for information operations such as calling in air support or feeding air operation…

  3. Assessing the Learning Path Specification: a Pragmatic Quality Approach

    NARCIS (Netherlands)

    Janssen, José; Berlanga, Adriana; Heyenrath, Stef; Martens, Harrie; Vogten, Hubert; Finders, Anton; Herder, Eelco; Hermans, Henry; Melero, Javier; Schaeps, Leon; Koper, Rob

    2010-01-01

    Janssen, J., Berlanga, A. J., Heyenrath, S., Martens, H., Vogten, H., Finders, A., Herder, E., Hermans, H., Melero Gallardo, J., Schaeps, L., & Koper, R. (2010). Assessing the Learning Path Specification: a Pragmatic Quality Approach. Journal of Universal Computer Science, 16(21), 3191-3209.

  4. Heuristic Model Of The Composite Quality Index Of Environmental Assessment

    Science.gov (United States)

    Khabarov, A. N.; Knyaginin, A. A.; Bondarenko, D. V.; Shepet, I. P.; Korolkova, L. N.

    2017-01-01

    The goal of the paper is to present a heuristic model of the composite environmental quality index based on the integrated application of elements of utility theory, multidimensional scaling, expert evaluation and decision-making. The composite index is synthesized in linear-quadratic form, which provides higher adequacy of the assessment results to the preferences of experts and decision-makers.
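
    A minimal numeric sketch of a linear-quadratic composite index as described above, with invented indicator values, weights and interaction terms:

      import numpy as np

      x = np.array([0.6, 0.8, 0.5])     # normalized environmental indicators
      w = np.array([0.4, 0.3, 0.3])     # linear weights from expert evaluation
      Q = np.diag([0.05, 0.10, 0.05])   # quadratic terms (assumed diagonal)

      index = w @ x + x @ Q @ x         # linear-quadratic composite index
      print(f"composite quality index: {index:.3f}")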

  5. [Assessments during Medical Specialists Training: quantity or quality?]

    Science.gov (United States)

    Hamming, J F

    2017-01-01

    Structured assessments form a mandatory part of Dutch Medical Specialist Training, but create administrative workload for both the staff and supervisors. One could argue that the quality of the narrative feedback is more important than the extensive reporting in learning portfolios, and that the focus should be on continuous on-the-job coaching.

  6. Measure for Measure: Advancement's Role in Assessments of Institutional Quality.

    Science.gov (United States)

    Wedekind, Annie; Pollack, Rachel H.

    2002-01-01

    Explores how accreditation, bond ratings, and magazine rankings--including advancement's role in these assessments--continue to be incomplete and controversial indicators of educational quality. Asserts that advancement officers should work to demonstrate the importance of their efforts, such as increasing endowments and alumni support, within the…

  7. Quality Control Charts in Large-Scale Assessment Programs

    Science.gov (United States)

    Schafer, William D.; Coverdale, Bradley J.; Luxenberg, Harlan; Jin, Ying

    2011-01-01

    There are relatively few examples of quantitative approaches to quality control in educational assessment and accountability contexts. Among the several techniques that are used in other fields, Shewart charts have been found in a few instances to be applicable in educational settings. This paper describes Shewart charts and gives examples of how…

  8. Parameters of Higher School Internationalization and Quality Assessment

    Science.gov (United States)

    Juknyte-Petreikiene, Inga

    2006-01-01

    The article presents the analysis of higher education internationalization, its conceptions and forms of manifestation. It investigates the ways and means of higher education internationalization, the diversity of higher school internationalization motives, the issues of higher education internationalization quality assessment, presenting an…

  9. Feedback Effects of Teaching Quality Assessment: Macro and Micro Evidence

    Science.gov (United States)

    Bianchini, Stefano

    2014-01-01

    This study investigates the feedback effects of teaching quality assessment. Previous literature looked separately at the evolution of individual and aggregate scores to understand whether instructors and university performance depends on its past evaluation. I propose a new quantitative-based methodology, combining statistical distributions and…

  10. Quality assessment of strategic management in organizations - a maturity model

    OpenAIRE

    Balta Corneliu; Rosioru Nicoleta Diana

    2013-01-01

    The paper presents the current main concepts related to the assessment of quality management in organizations. Strategic management is analyzed taking into consideration the most important dimensions, including leadership, culture and values, process improvement, etc. The five levels of the maturity model of strategic management are described, showing the connection with organizational development.

  11. Compensating for Type-I Errors in Video Quality Assessment

    DEFF Research Database (Denmark)

    Brunnström, Kjell; Tavakoli, Samira; Søgaard, Jacob

    2015-01-01

    This paper analyzes the impact of compensating for Type-I errors in video quality assessment. A Type-I error is to incorrectly conclude that there is an effect. The risk increases with the number of comparisons that are performed in statistical tests. Type-I errors are an issue often neglected…
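
    As an illustration of compensating for multiple comparisons, a Bonferroni-style adjustment (one standard family-wise correction; the paper's exact procedure is not reproduced here):

      # Hypothetical p-values from pairwise comparisons of video conditions.
      p_values = [0.003, 0.012, 0.021, 0.048, 0.200]
      alpha, m = 0.05, len(p_values)

      adjusted = [min(1.0, p * m) for p in p_values]     # Bonferroni adjustment
      significant = [p <= alpha / m for p in p_values]   # equivalent threshold test
      print(adjusted)
      print(significant)   # only comparisons surviving the correction remain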

  12. Quality of Feedback Following Performance Assessments: Does Assessor Expertise Matter?

    Science.gov (United States)

    Govaerts, Marjan J. B.; van de Wiel, Margje W. J.; van der Vleuten, Cees P. M.

    2013-01-01

    Purpose: This study aims to investigate quality of feedback as offered by supervisor-assessors with varying levels of assessor expertise following assessment of performance in residency training in a health care setting. It furthermore investigates if and how different levels of assessor expertise influence feedback characteristics.…

  13. Framework for dementia Quality of Life assessment with Assistive Technology

    DEFF Research Database (Denmark)

    Peterson, Carrie Beth; Prasad, Neeli R.; Prasad, Ramjee

    2010-01-01

    This paper proposes a theoretical framework for a Quality of Life (QOL) evaluation tool that is sensitive, flexible, computerized, and specific to assistive technology (AT) for dementia care. Using the appropriate evaluation tool serves to improve methodologies that are used for AT assessment...

  14. Assessment of quality indicators in spanish higher military education

    Directory of Open Access Journals (Sweden)

    Olmos Gómez Maria del Carmen

    2016-01-01

    Quality assessment is subject to multiple interpretations of its content and purpose, and also regarding the methods and techniques used to develop it. Although the purposes of assessment are varied, it usually pursues three goals: improvement, accountability and information. Currently, the concept of quality evaluation has been replaced by the management of educational quality; as Matthew [6] points out, "the new culture of evaluation is no longer oriented to penalty, ranking or selection of people, but to providing reasoned and reasonable information to guide the management of educational improvement". Military Training Centres are externally evaluated by an External Evaluation Unit of experts to identify strengths and weaknesses in their self-evaluation systems and to focus on important aspects related to the organization of the Centre, the development of work plans, teachers' styles and students' learning strategies, the system of evaluation and qualification, and accurate recommendations for improvement. This research focuses on the evaluation of quality indicators for the external evaluation of higher education at Military Education Centres in Spain and is funded by a joint project between the University of Granada and MADOC. The technique used for collecting and analysing information was a content description of several documents provided by these military educational authorities, leading to the identification and extraction of relevant indicators for the evaluation of higher education. This analysis was primarily based on the standards and indicator systems of ANECA (National Agency for Quality Assessment and Accreditation), adapted to Military Higher Education, but also considered other standards from international agencies and evaluative institutions, such as the University of Chile, the University of Paraguay, the Canarias Agency for Quality Assessment and Accreditation and the Agency for Quality of the University System of Castilla y León…

  15. Water Quality Assessment of Ayeyarwady River in Myanmar

    Science.gov (United States)

    Thatoe Nwe Win, Thanda; Bogaard, Thom; van de Giesen, Nick

    2015-04-01

    Myanmar's socio-economic activities, urbanisation, industrial operations and agricultural production have increased rapidly in recent years. With increasing socio-economic development and climate change impacts, there is a growing threat to the quantity and quality of water resources. In Myanmar, part of the drinking water supply still comes from unimproved sources, including rivers. The Ayeyarwady River is the main river in Myanmar, draining most of the country's area. The use of chemical fertilizer in agriculture, mining activities in the catchment area, wastewater effluents from industries and communities, and other development activities generate pollutants of different kinds, so water quality monitoring is of utmost importance. In Myanmar, many government organizations are involved in water quality management, and each monitors water quality for its own purposes. The monitoring is haphazard, short term and based on individual interest and the available equipment; it is not properly coordinated, and a quality assurance programme is not incorporated in most of the work. As a result, comprehensive data on the water quality of rivers in Myanmar are not available. To provide basic information, action is needed at all management levels, and the need for comprehensive and accurate assessments of trends in water quality has been recognized; reliable monitoring data are essential for such assessments. The objective of our work is to set up a multi-objective surface water quality monitoring programme. The need for a scientifically designed network to monitor the water quality of the Ayeyarwady is obvious, as only limited and scattered data are available. The set-up should, however, take into account the current socio-economic situation and be flexible enough to adjust after the first years of monitoring. Additionally, a state-of-the-art baseline river water quality sampling programme is required, which…

  16. 3D Air Quality and the Clean Air Interstate Rule: Lagrangian Sampling of CMAQ Model Results to Aid Regional Accountability Metrics

    Science.gov (United States)

    Fairlie, T. D.; Szykman, Jim; Pierce, Robert B.; Gilliland, A. B.; Engel-Cox, Jill; Weber, Stephanie; Kittaka, Chieko; Al-Saadi, Jassim A.; Scheffe, Rich; Dimmick, Fred; Tikvart, Joe

    2008-01-01

    The Clean Air Interstate Rule (CAIR) is expected to reduce the transport of air pollutants (e.g. fine sulfate particles) into nonattainment areas in the Eastern United States. CAIR highlights the need for an integrated air quality observational and modeling system to understand sulfate as it moves in multiple dimensions, both spatially and temporally. Here, we demonstrate how results from an air quality model can be combined with a 3-D monitoring network to provide decision makers with a tool to help quantify the impact of CAIR reductions in SO2 emissions on regional transport contributions to sulfate concentrations at surface monitors in the Baltimore, MD area, and to help improve decision making for state implementation plans (SIPs). We sample results from the Community Multiscale Air Quality (CMAQ) model using ensemble back trajectories computed with the NASA Langley Research Center trajectory model to provide Lagrangian time series and vertical profile information that can be compared with NASA satellite (MODIS), EPA surface, and lidar measurements. Results are used to assess the regional transport contribution to surface SO4 measurements in the Baltimore MSA, and to characterize the dominant source regions for low, medium, and high SO4 episodes.

  17. Assessing quality of care of elderly patients using the ACOVE quality indicator set: a systematic review.

    Directory of Open Access Journals (Sweden)

    Marjan Askari

    BACKGROUND: Care of the elderly is recognized as an increasingly important segment of health care. The Assessing Care Of Vulnerable Elders (ACOVE) quality indicators (QIs) were developed to assess and improve the care of elderly patients. OBJECTIVES: The purpose of this review is to summarize studies that assess the quality of care using QIs from or based on ACOVE, in order to evaluate the state of quality of care for the reported conditions. METHODS: We systematically searched MEDLINE, EMBASE and CINAHL for English-language studies indexed by February 2010. Articles were included if they used any ACOVE QIs, or adaptations thereof, for assessing the quality of care. Included studies were analyzed and relevant information was extracted. We summarized the results of these studies and, when possible, generated an overall conclusion about the quality of care as measured by ACOVE for each condition, in various settings, and for each QI. RESULTS: Seventeen studies were included, covering 278 QIs (original, adapted or newly developed). The quality scores showed large variation between and within conditions. Only a few conditions showed a stable pass-rate range over multiple studies. Overall, pass rates for dementia (interquartile range (IQR): 11%-35%), depression (IQR: 27%-41%), osteoporosis (IQR: 34%-43%) and osteoarthritis (IQR: 29%-41%) were notably low. Medication management and use (range: 81%-90%), hearing loss (77%-79%) and continuity of care (76%-80%) scored higher than other conditions. Of the 278 QIs, 141 (50%) had mean pass rates below 50% and 121 (44%) had pass rates above 50%. Twenty-three percent of the QIs scored above 75%, and 16% scored below 25%. CONCLUSIONS: Quality of care per condition varies markedly across studies. Although there has been much effort to improve the care of elderly patients in recent years, the reported quality of care according to the ACOVE indicators is still relatively low.

  18. A Method for Assessing Quality of Service in Broadband Networks

    DEFF Research Database (Denmark)

    Bujlow, Tomasz; Riaz, M. Tahir; Pedersen, Jens Myrup

    2012-01-01

    Monitoring of Quality of Service (QoS) in high-speed Internet infrastructure is a challenging task. However, precise assessments must take into account the fact that the requirements for a given quality level are service-dependent. Backbone QoS monitoring and analysis requires processing of large...... taken from the description of system sockets. This paper proposes a new method for measuring the Quality of Service (QoS) level in broadband networks, based on our Volunteer-Based System for collecting the training data, Machine Learning Algorithms for generating the classification rules and application...... and provide C5.0 high-quality training data, divided into groups corresponding to different types of applications. It was found that currently existing means of collecting data (classification by ports, Deep Packet Inspection, statistical classification, public data sources) are not sufficient and they do...
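
    The record names C5.0 as the rule generator; as a hedged sketch of the flow-classification idea, the example below substitutes a plain scikit-learn decision tree for C5.0, and the per-flow features, values and labels are hypothetical:

        # Train a tree on per-flow statistics labelled by application type.
        # DecisionTreeClassifier stands in for the C5.0 algorithm named in the record.
        from sklearn.tree import DecisionTreeClassifier

        # Hypothetical features: [mean packet size (B), mean inter-arrival (ms), duration (s)]
        X_train = [
            [1400, 5, 120], [1350, 7, 300],   # bulk streaming flows
            [90, 20, 2],    [120, 15, 5],     # interactive flows
            [600, 50, 60],  [550, 40, 45],    # web browsing flows
        ]
        y_train = ["streaming", "streaming", "interactive",
                   "interactive", "web", "web"]

        clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
        print(clf.predict([[1380, 6, 200]]))  # expected: ['streaming']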

  19. Evaluating the Role of Content in Subjective Video Quality Assessment

    Directory of Open Access Journals (Sweden)

    Milan Mirkovic

    2014-01-01

    Video quality as perceived by human observers is the ground truth when Video Quality Assessment (VQA) is in question. It depends on many variables, one of them being the content of the video being evaluated. Despite the evidence that content has an impact on the quality score a sequence receives from human evaluators, currently available VQA databases mostly consist of sequences that fail to take this into account. In this paper, we aim to identify and analyze differences between human cognitive, affective, and conative responses to a set of videos commonly used for VQA and a set of videos specifically chosen to include content that might affect evaluators' judgment of perceived video quality. Our findings indicate that considerable differences exist between the two sets on selected factors, leading us to conclude that videos featuring different content from that currently employed might be more appropriate for VQA.

  20. Evaluating the Role of Content in Subjective Video Quality Assessment

    Science.gov (United States)

    Vrgovic, Petar

    2014-01-01

    Video quality as perceived by human observers is the ground truth when Video Quality Assessment (VQA) is in question. It depends on many variables, one of them being the content of the video being evaluated. Despite the evidence that content has an impact on the quality score a sequence receives from human evaluators, currently available VQA databases mostly consist of sequences that fail to take this into account. In this paper, we aim to identify and analyze differences between human cognitive, affective, and conative responses to a set of videos commonly used for VQA and a set of videos specifically chosen to include content that might affect evaluators' judgment of perceived video quality. Our findings indicate that considerable differences exist between the two sets on selected factors, leading us to conclude that videos featuring different content from that currently employed might be more appropriate for VQA. PMID:24523643

  1. Ensuring the quality of occupational safety risk assessment.

    Science.gov (United States)

    Pinto, Abel; Ribeiro, Rita A; Nunes, Isabel L

    2013-03-01

    In work environments, the main aim of occupational safety risk assessment (OSRA) is to improve the safety level of an installation or site by either preventing accidents and injuries or minimizing their consequences. To this end, it is of paramount importance to identify all sources of hazards and assess their potential to cause problems in the respective context. If the OSRA process is inadequate and/or not applied effectively, it results in an ineffective safety prevention program and inefficient use of resources. An appropriate OSRA is an essential component of the occupational safety risk management process in industry. In this article, we performed a survey to elicit the relative importance of identified OSRA tasks, enabling an in-depth evaluation of the quality of risk assessments related to occupational safety aspects of industrial sites. The survey involved defining a questionnaire with the most important elements (tasks) for OSRA quality assessment, which was then presented to safety experts in the mining, electrical power production, transportation, and petrochemical industries. With this work, we expect to contribute to the main question of OSRA in industry: "What constitutes a good occupational safety risk assessment?" The results obtained from the questionnaire showed that experts agree with the proposed decomposition of the OSRA process into steps and tasks (taxonomy) and also with the importance of assigning weights to obtain knowledge about OSRA task relevance. The knowledge gained will enable us, in the near future, to build a framework to evaluate OSRA quality for industrial sites.

  2. Assessing the Quality of Teaching Process Lamerd School of Nursing

    Directory of Open Access Journals (Sweden)

    Hashemi SA

    2015-07-01

     Findings: From the viewpoint of students, the quality of teaching was not good on any of the studied elements. There were significant differences in “technology selection” between the viewpoints of students entering in 2012 and 2014 (p=0.001) and in “assessment and evaluation” between the viewpoints of students entering in 2012 and 2013 (p=0.03). There were also significant differences between boys and girls in classroom management (p=0.008), the dynamics of learning (p=0.02), and assessment and evaluation (p=0.01). Conclusion: From the viewpoint of students of the Lamerd nursing faculty, the quality of teaching is below average in target selection, class management, learning strategy, content regulation, the dynamics of learning, technology selection, and assessment and evaluation.

  3. Quality-of-life assessment in advanced cancer.

    LENUS (Irish Health Repository)

    Donnelly, S

    2000-07-01

    In the past 5 years, quality-of-life (QOL) assessment measures such as the McGill, McMaster, Global Visual Analogue Scale, Assessment of QOL at the End of Life, Life Evaluation Questionnaire, and Hospice QOL Index have been devised specifically for patients with advanced cancer. The developers of these instruments have tried to respond to the changing needs of this specific population, taking into account characteristics including poor performance status, difficulty with longitudinal study, rapidly deteriorating physical condition, and change in relevant issues. Emphasis has been placed on patient report, ease and speed of completion, and the existential domain or meaning of life. Novel techniques in QOL measurement have also been adapted for palliative care, such as judgment analysis in the Schedule for the Evaluation of Individual Quality of Life. It is generally agreed that a single tool will not cover all QOL assessment needs.

  4. Validity of portfolio assessment: which qualities determine ratings?

    Science.gov (United States)

    Driessen, Erik W; Overeem, Karlijn; van Tartwijk, Jan; van der Vleuten, Cees P M; Muijtjens, Arno M M

    2006-09-01

    The portfolio is becoming increasingly accepted as a valuable tool for learning and assessment. The validity of portfolio assessment, however, may suffer from bias due to irrelevant qualities, such as layout and writing style. We examined the possible effects of such qualities in a portfolio programme aimed at stimulating Year 1 medical students to reflect on their professional and personal development. In later curricular years, this portfolio is also used to judge clinical competence. We developed an instrument, the Portfolio Analysis Scoring Inventory, to examine the impact of form and content aspects on portfolio assessment. The Inventory consists of 15 items derived from interviews with experienced mentors, the literature, and the criteria for reflective competence used in the regular portfolio assessment procedure. Forty portfolios, selected from 231 portfolios for which ratings from the regular assessment procedure were available, were rated independently by 2 researchers using the Inventory. Regression analysis was used to estimate the correlation between the ratings from the regular assessment and those resulting from the Inventory items. Inter-rater agreement ranged from 0.46 to 0.87. The strongest predictor of the variance in the regular ratings was 'quality of reflection' (R = 0.80; R² = 66%). No further items accounted for a significant proportion of variance. Irrelevant items, such as writing style and layout, had negligible effects. The absence of an impact of irrelevant criteria appears to support the validity of the portfolio assessment procedure. Further studies should examine the portfolio's validity for the assessment of clinical competence.

  5. Assessment of the quality of sample labelling for clinical research

    Directory of Open Access Journals (Sweden)

    Pablo Pérez-Huertas

    2016-03-01

    Objective: To assess the quality of the labelling of clinical trial samples against current regulations, and to analyze its potential correlation with the specific characteristics of each sample. Method: A cross-sectional multicenter study in which clinical trial samples from two tertiary hospitals were analyzed. The eleven items from Directive 2003/94/EC, as well as the name of the clinical trial and the dose on the label cover, were taken as labelling-quality variables. The influence of the characteristics of each sample on labelling quality was also analyzed. Results: The study included 503 samples from 220 clinical trials. The mean labelling quality, understood as the proportion of items from Annex 13 present, was 91.9%. Of the samples, 6.6% did not include the name of the sample on the outer face of the label, and in 9.7% the dose was missing. Clinical-trial-type samples presented higher quality (p < 0.049), blinding reduced quality (p = 0.017), and identification by kit number or by patient increased it (p < 0.01). The sponsor was the variable that introduced the most variability into the analysis. Conclusions: The mean quality of labelling is adequate in the majority of clinical trial samples. The lack of essential information on some samples, such as the clinical trial code and the period of validity, is alarming and a potential source of dispensing or administration errors.

  6. A Methodology for Anatomic Ultrasound Image Diagnostic Quality Assessment.

    Science.gov (United States)

    Hemmsen, Martin Christian; Lange, Theis; Brandt, Andreas Hjelm; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt

    2017-01-01

    This paper discusses methods for the assessment of ultrasound image quality based on our experience evaluating new methods for anatomic imaging. It presents a methodology to ensure a fair assessment between competing imaging methods using clinically relevant evaluations. The methodology is valuable in the continuing process of method optimization and guided development of new imaging methods. It includes a three-phase study plan covering initial prototype development through clinical assessment. Recommendations for the clinical assessment protocol, software, and statistical analysis are presented. Earlier use of the methodology has shown that it ensures the validity of the assessment, as it separates the influences of developer, investigator, and assessor once a research protocol has been established. This separation reduces confounding influence from the developer and properly reveals the clinical value. This paper exemplifies the methodology using recent studies of synthetic aperture sequential beamforming tissue harmonic imaging.

  7. Quality assessment of private practitioners in rural Wardha, Maharashtra

    Directory of Open Access Journals (Sweden)

    Ganguly Enakshi

    2008-01-01

    Objective: To assess the quality of care provided by private practitioners in rural areas of Wardha district. Methodology: The study was carried out in three primary health centres of Wardha district; 20% of the 44 registered private practitioners were selected randomly for the study. Infrastructure data were collected by direct observation using a checklist. To assess the quality of services delivered, 10 consecutive patients were observed and the medical practitioner was interviewed; supplies and logistics were assessed through observation. Results: All the facilities were sheltered from weather conditions and 90% had adequate waiting space, but drinking water and adequate IEC material were available in only 20% of facilities. Complete history taking and relevant physical examination were done in only 20% of cases. Only 20% of practitioners recorded blood pressure, and 30% recorded temperature in cases with fever. A provisional diagnosis was not written in any of the cases, and only 20% explained the prescription to the patient. Conclusion: There is considerable scope to improve the quality of services of private practitioners. To achieve this, quality assurance programs may be initiated along with the training of private medical practitioners.

  8. [Assessment indicators of soil quality in hilly Loess plateau].

    Science.gov (United States)

    Xu, Mingxiang; Liu, Guobin; Zhao, Yunge

    2005-10-01

    By the methods of sensitivity analysis, principal component analysis and discriminant analysis, this paper screened sensitive indicators from 32 soil attributes to assess the productivity and erosion-resistance of soils in the hilly Loess Plateau. The results showed that soil available phosphorus content, anti-scouring ability, infiltration coefficient, labile organic carbon content, organic matter content and urease activity were the most sensitive indicators for soil quality assessment and the main targets for soil quality management and improvement, while soil biological indicators showed high or medium sensitivity. Five soil quality factors were summed up from 29 soil chemical, physical and biological attributes, i.e., organic matter, texture, phosphorus, porosity and microstructure. Except for the porosity factor, the other four factors differed significantly between land use types. Eight indicators, including soil organic matter content, infiltration coefficient, anti-scouring ability, CEC, invertase activity, mean weight diameter (MWD) of aggregates, available phosphorus, and MWD of microaggregates, were identified as assessment indicators of soil quality in the hilly Loess Plateau, with organic matter content, infiltration coefficient and anti-scouring ability as the key indicators.

  9. Assessing the Quality of Quality Assessment: The Inspection of Teaching and Learning in British Universities.

    Science.gov (United States)

    Underwood, Simeon

    2000-01-01

    Characterizes Subject Review, a new scrutiny process for British higher education, evaluating its effectiveness against the purposes it has set itself in the area of funding policy, enhancement of provision, and public information. The paper offers a case study of factors which come into account when systems for measuring the quality of higher…

  10. Model-based quality assessment and base-calling for second-generation sequencing data.

    Science.gov (United States)

    Bravo, Héctor Corrada; Irizarry, Rafael A

    2010-09-01

    Second-generation sequencing (sec-gen) technology can sequence millions of short fragments of DNA in parallel, making it capable of assembling complex genomes for a small fraction of the price and time of previous technologies. In fact, a recently formed international consortium, the 1000 Genomes Project, plans to fully sequence the genomes of approximately 1200 people. The prospect of comparative analysis at the sequence level of a large number of samples across multiple populations may be achieved within the next five years. These data present unprecedented challenges in statistical analysis. For instance, analysis operates on millions of short nucleotide sequences, or reads (strings of A, C, G, or T between 30 and 100 characters long), which are the result of complex processing of noisy continuous fluorescence intensity measurements known as base-calling. The complexity of the base-calling discretization process results in reads of widely varying quality within and across sequence samples. This variation in processing quality results in infrequent but systematic errors that we have found to mislead downstream analysis of the discretized sequence read data. For instance, a central goal of the 1000 Genomes Project is to quantify across-sample variation at the single nucleotide level. At this resolution, small error rates in sequencing prove significant, especially for rare variants. Sec-gen sequencing is a relatively new technology for which potential biases and sources of obscuring variation are not yet fully understood. Therefore, modeling and quantifying the uncertainty inherent in the generation of sequence reads is of utmost importance. In this article, we present a simple model to capture uncertainty arising in the base-calling procedure of the Illumina/Solexa GA platform. Model parameters have a straightforward interpretation in terms of the chemistry of base-calling, allowing for informative and easily interpretable metrics that capture the variability in…
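
    The record does not give the model's equations; for background only, base-call uncertainty on such platforms is conventionally reported on the Phred scale, sketched below:

        # Phred-scaled base-call quality: Q = -10 * log10(P_error).
        import math

        def phred_quality(p_error: float) -> float:
            """Convert a base-call error probability to a Phred score."""
            return -10.0 * math.log10(p_error)

        def error_probability(q: float) -> float:
            """Inverse mapping: P_error = 10**(-Q/10)."""
            return 10.0 ** (-q / 10.0)

        print(round(phred_quality(0.001), 1))  # 30.0 (1 miscall in 1000)
        print(error_probability(20))           # 0.01 (1 miscall in 100)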

  11. Assessment of anxiety and quality of life in fibromyalgia patients

    Directory of Open Access Journals (Sweden)

    Tathiana Pagano

    CONTEXT: Fibromyalgia is a syndrome characterized by chronic, diffuse musculoskeletal pain and by a low pain threshold at specific anatomical points. The syndrome is associated with other symptoms such as fatigue, sleep disturbance, morning stiffness and anxiety. Because of its chronic nature, it often has a negative impact on patients' quality of life. OBJECTIVE: To assess the quality of life and anxiety level of patients with fibromyalgia. TYPE OF STUDY: Cross-sectional. SETTING: Rheumatology outpatient service of Hospital das Clínicas (Medical School, Universidade de São Paulo). METHODS: This study evaluated 80 individuals, divided between test and control groups. The test group included 40 women with a confirmed diagnosis of fibromyalgia; the control group was composed of 40 healthy women. Three questionnaires were used: two to assess quality of life (FIQ and SF-36) and one to assess anxiety (STAI). They were applied to the individuals in both groups in a single face-to-face interview. The statistical analysis used Student's t test and Pearson's correlation test (r), with a significance level of 95%. The Pearson chi-squared test for homogeneity, with Yates correction, was used for comparing schooling between the test and control groups. RESULTS: There was a statistically significant difference between the groups (p = 0.000), indicating that fibromyalgia patients have a worse quality of life and higher levels of anxiety. The correlations between the three questionnaires were high (r = 0.9). DISCUSSION: This study confirmed the efficacy of the FIQ for evaluating the impact of fibromyalgia on quality of life. The SF-36 is less specific than the FIQ, although statistically significant values were obtained when it was analyzed separately; the STAI showed lower efficacy in discriminating the test group from the control group. The test group showed worse quality of life than the control group, as demonstrated by both the FIQ and the SF-36…

  12. Assessing quality in European educational research indicators and approaches

    CERN Document Server

    Åström, Fredrik; Hansen, Antje

    2014-01-01

    Competition-based models for research policy and management have an increasing influence throughout the research process, from attracting funding to publishing results. The introduction of quality control methods utilizing various forms of performance indicators is part of this development. The authors presented in this volume deal with the following questions: What counts as ‘quality’ and how can this be assessed? What are the possible side effects of current quality control systems on research conducted in the European Research Area, especially in the social sciences and the humanities?

  13. Assessment of Quality of Service of Virtual Knowledge Communities

    Institute of Scientific and Technical Information of China (English)

    LA Juan-juan; JIANG Ge-fu; YIN Liang-kui

    2008-01-01

    An assessment method for the quality of service (QoS) of virtual knowledge communities (VKCs) is proposed based on fuzzy theory and the analytic hierarchy process (AHP). QoS is evaluated in terms of website design, reliability, responsiveness, trust, personalization, and information quality. The cognitive QoS and the QoS evaluated by assessors are compared to analyze which aspects of a VKC's QoS most urgently need improvement and which indicators hold leading positions, and to assist VKC administrators in measuring and understanding the current status and implementation effect of QoS.

  14. Web Service for Positional Quality Assessment: the Wps Tier

    Science.gov (United States)

    Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.

    2015-08-01

    In the field of spatial data every day we have more and more information available, but we still have little or very little information about the quality of spatial data. We consider that the automation of the spatial data quality assessment is a true need for the geomatic sector, and that automation is possible by means of web processing services (WPS), and the application of specific assessment procedures. In this paper we propose and develop a WPS tier centered on the automation of the positional quality assessment. An experiment using the NSSDA positional accuracy method is presented. The experiment involves the uploading by the client of two datasets (reference and evaluation data). The processing is to determine homologous pairs of points (by distance) and calculate the value of positional accuracy under the NSSDA standard. The process generates a small report that is sent to the client. From our experiment, we reached some conclusions on the advantages and disadvantages of WPSs when applied to the automation of spatial data accuracy assessments.

  15. Multidisciplinary life cycle metrics and tools for green buildings.

    Science.gov (United States)

    Helgeson, Jennifer F; Lippiatt, Barbara C

    2009-07-01

    Building sector stakeholders need compelling metrics, tools, data, and case studies to support major investments in sustainable technologies. Proponents of green building widely claim that buildings integrating sustainable technologies are cost-effective, but often these claims are based on incomplete, anecdotal evidence that is difficult to reproduce and defend. The claims suffer from 2 main weaknesses: 1) buildings on which claims are based are not necessarily "green" in a science-based, life cycle assessment (LCA) sense, and 2) measures of cost-effectiveness often are not based on standard methods for measuring economic worth. Yet the building industry demands compelling metrics to justify sustainable building designs. The problem is hard to solve because, until now, neither methods nor robust data supporting defensible business cases were available. The US National Institute of Standards and Technology (NIST) Building and Fire Research Laboratory is beginning to address these needs by developing metrics and tools for assessing the life cycle economic and environmental performance of buildings. Economic performance is measured with the use of standard life cycle costing methods. Environmental performance is measured by LCA methods that assess the "carbon footprint" of buildings, as well as 11 other sustainability metrics, including fossil fuel depletion, smog formation, water use, habitat alteration, indoor air quality, and effects on human health. Carbon efficiency ratios and other eco-efficiency metrics are established to yield science-based measures of the relative worth, or "business cases," for green buildings. Here, the approach is illustrated through a realistic building case study focused on the energy efficiency of different heating, ventilation, and air-conditioning (HVAC) technologies. Additionally, the evolution of the Building for Environmental and Economic Sustainability multidisciplinary team and future plans in this area are described.

  16. Metric modular spaces

    CERN Document Server

    Chistyakov, Vyacheslav

    2015-01-01

    Aimed toward researchers and graduate students familiar with elements of functional analysis, linear algebra, and general topology; this book contains a general study of modulars, modular spaces, and metric modular spaces. Modulars may be thought of as generalized velocity fields and serve two important purposes: generate metric spaces in a unified manner and provide a weaker convergence, the modular convergence, whose topology is non-metrizable in general. Metric modular spaces are extensions of metric spaces, metric linear spaces, and classical modular linear spaces. The topics covered include the classification of modulars, metrizability of modular spaces, modular transforms and duality between modular spaces, metric  and modular topologies. Applications illustrated in this book include: the description of superposition operators acting in modular spaces, the existence of regular selections of set-valued mappings, new interpretations of spaces of Lipschitzian and absolutely continuous mappings, the existe...

  17. -Metric Space: A Generalization

    Directory of Open Access Journals (Sweden)

    Farshid Khojasteh

    2013-01-01

    We introduce the notion of a -metric as a generalization of a metric, obtained by replacing the triangle inequality with a more general inequality. We investigate the topology of the spaces induced by a -metric and present some of its essential properties. Further, we give characterizations of well-known fixed point theorems, such as those of Banach and Caristi type, in the context of such spaces.
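
    The symbol in the title and abstract was lost in extraction and is left as a gap. For context only, one well-known relaxation of this kind is the b-metric inequality, in which the triangle inequality is weakened by a fixed constant $s \ge 1$:

        \[
          d(x, z) \le s \bigl[ d(x, y) + d(y, z) \bigr], \qquad s \ge 1,
        \]

    which reduces to the ordinary triangle inequality when $s = 1$; the generalization introduced in this paper replaces the triangle inequality in a similar spirit.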

  18. Electronic Quality of Life Assessment Using Computer-Adaptive Testing

    Science.gov (United States)

    2016-01-01

    Background Quality of life (QoL) questionnaires are desirable for clinical practice but can be time-consuming to administer and interpret, making their widespread adoption difficult. Objective Our aim was to assess the performance of the World Health Organization Quality of Life (WHOQOL)-100 questionnaire as four item banks to facilitate adaptive testing, using simulated computer adaptive tests (CATs) for physical, psychological, social, and environmental QoL. Methods We used data from the UK WHOQOL-100 questionnaire (N=320) to calibrate item banks using item response theory, which included psychometric assessments of differential item functioning, local dependency, unidimensionality, and reliability. We simulated CATs to assess the number of items administered before prespecified levels of reliability were met. Results The item banks (40 items) all displayed good model fit (P>.01) and were unidimensional (fewer than 5% of t tests significant), reliable (Person Separation Index > .70), and free from differential item functioning (no significant analysis of variance interaction) or local dependency (residual correlations). Simulated CATs were between 45% and 75% shorter than the paper-based WHOQOL measures. Across the four domains, a high standard of reliability (alpha > .90) could be attained with a median of 9 items. Conclusions Using CAT, simulated assessments were as reliable as paper-based forms of the WHOQOL with a fraction of the number of items. These properties suggest that these item banks are suitable for computerized adaptive assessment. These item banks have the potential for international development using existing alternative-language versions of the WHOQOL items. PMID:27694100
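
    A hedged sketch of the adaptive-testing loop such simulations run: under a Rasch (one-parameter IRT) model, repeatedly administer the unused item with maximum information at the current ability estimate and stop once the standard error falls below an illustrative threshold. The item difficulties, responses, and stopping rule below are hypothetical, not the WHOQOL calibration:

        import math

        def p_correct(theta, b):
            """Rasch model probability of a correct (endorsed) response."""
            return 1.0 / (1.0 + math.exp(-(theta - b)))

        def estimate_theta(responses):
            """Maximum-likelihood ability estimate over a coarse grid."""
            grid = [g / 10.0 for g in range(-40, 41)]
            def loglik(t):
                return sum(math.log(p_correct(t, b)) if u
                           else math.log(1.0 - p_correct(t, b))
                           for b, u in responses)
            return max(grid, key=loglik)

        def run_cat(item_bank, answer, target_se=0.5):
            theta, responses, remaining = 0.0, [], list(item_bank)
            while remaining:
                # administer the unused item with maximum information I(theta) = P(1 - P)
                b = max(remaining,
                        key=lambda b: p_correct(theta, b) * (1 - p_correct(theta, b)))
                remaining.remove(b)
                responses.append((b, answer(b)))
                theta = estimate_theta(responses)
                info = sum(p_correct(theta, b) * (1 - p_correct(theta, b))
                           for b, _ in responses)
                if info > 0 and 1.0 / math.sqrt(info) < target_se:
                    break  # ability known precisely enough; stop early
            return theta, len(responses)

        bank = [round(-3.0 + 0.2 * i, 1) for i in range(31)]  # hypothetical difficulties
        true_theta = 0.7
        theta, n = run_cat(bank, answer=lambda b: p_correct(true_theta, b) > 0.5)
        print(theta, n)  # final ability estimate and number of items administered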

  19. Assessing land-use effects on water quality, in-stream habitat, riparian ecosystems and biodiversity in Patagonian northwest streams.

    Science.gov (United States)

    Miserendino, María Laura; Casaux, Ricardo; Archangelsky, Miguel; Di Prinzio, Cecilia Yanina; Brand, Cecilia; Kutschker, Adriana Mabel

    2011-01-01

    Changes in land-use practices have affected the integrity and quality of water resources worldwide. In Patagonia there is strong concern about the ecological status of surface waters because these changes are occurring rapidly in the region. To test the hypothesis that greater intensity of land use would have negative effects on water quality, stream habitat and biodiversity, we assessed benthic macroinvertebrates, riparian/littoral invertebrates, fish and birds of the riparian corridor, and environmental variables of 15 rivers in Patagonia subjected to a gradient of land-use practices (non-managed native forest, managed native forest, pine plantations, pasture, urbanization). A total of 158 macroinvertebrate taxa, 105 riparian/littoral invertebrate taxa, 5 fish species, 34 bird species, and 15 aquatic plant species were recorded across all sites. Urban land use produced the most significant changes in streams, including physical features, conductivity, nutrients, habitat condition, riparian quality and invertebrate metrics. Pasture and managed native forest sites occupied an intermediate position. The highest values of fish and bird abundance and diversity were observed at disturbed sites; this might be explained by the opportunistic behavior of these communities, which lets them take advantage of increased trophic resources in such environments. As expected, non-managed native forest sites showed the highest integrity of ecological conditions and also great biodiversity of benthic communities. Macroinvertebrate metrics that reflected good water quality were positively related to forest land cover and negatively related to urban and pasture land cover. However, by offering stream-edge areas, pasture sites still supported rich communities of riparian/littoral invertebrates, increasing overall biodiversity. Macroinvertebrates were good indicators of land-use impact and water quality conditions and proved to be useful tools for early warning of…

  20. Topics in Metric Approximation

    Science.gov (United States)

    Leeb, William Edward

    This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity, such as the Lipschitz norm, and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space rather than the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.
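
    The snowflake construction mentioned in the abstract is easy to state concretely: raising a metric to a power 0 < alpha < 1 yields another metric, since t -> t**alpha is subadditive. A small sketch with an arbitrary point set and alpha = 0.5:

        import itertools

        def snowflake(d, alpha):
            """Return the snowflaked version of metric d."""
            return lambda x, y: d(x, y) ** alpha

        euclidean = lambda x, y: abs(x - y)
        d_half = snowflake(euclidean, 0.5)

        points = [0.0, 1.0, 2.5, 7.0]
        # verify the triangle inequality for every triple under the snowflaked metric
        ok = all(d_half(x, z) <= d_half(x, y) + d_half(y, z) + 1e-12
                 for x, y, z in itertools.product(points, repeat=3))
        print(ok)  # True

        # the snowflake inflates small distances relative to large ones
        print(euclidean(0.0, 0.01), d_half(0.0, 0.01))  # 0.01 0.1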

  1. Prognostic Performance Metrics

    Data.gov (United States)

    National Aeronautics and Space Administration — This chapter presents several performance metrics for offline evaluation of prognostics algorithms. A brief overview of different methods employed for performance...

  2. Quality Assessment of Library Website of Iranian State Universities:

    Directory of Open Access Journals (Sweden)

    Farideh Osareh

    2008-07-01

    The present study carries out a quality assessment of the library websites of Iranian state universities in order to rank them accordingly. The evaluation tool used is the normalized Web Quality Evaluation Tool (WQET). 41 active library websites were studied and assessed qualitatively over two time periods (February 2006 and May 2006) using the WQET. Data were collected by direct observation of the websites. The evaluation was based on user characteristics, website purpose, upload speed, structural stability, ease of searching, graphic design, availability of authors' CVs, clarity of objectives, updating, and internal links. The ranking showed that the library websites of the Iran University of Science and Technology and Mazandaran University ranked first, obtaining 82 out of 82 points. These were followed by the library websites of the University of Tehran, Imam Sadegh University, Gilan University and Tarbiyat Moddaress University.

  3. Water Quality Assessment in the Tsunami Areas of Banda Aceh

    Directory of Open Access Journals (Sweden)

    Suhendrayatna Suhendrayatna

    2009-06-01

    Water quality assessment in tsunami-affected areas was conducted in the Meuraxa and Kutaradja sub-districts of Banda Aceh City. Water samples were collected in October 2006 from dug wells in tsunami-affected areas and characterized for various physical and chemical parameters. The water quality in the selected areas showed that the surface water was contaminated by the tsunami: total dissolved solids, total suspended solids, acidity, and salinity were high in the affected areas, indicating saline water intrusion into surface water tables. Dug wells in highly affected locations showed higher concentrations of heavy metal ions such as Mn, Pb, Cu, Fe and Zn than the reference points. No Hg ions were found in any sample. Keywords: Banda Aceh, heavy metals, tsunami, water quality

  4. Quality control for exposure assessment in epidemiological studies

    DEFF Research Database (Denmark)

    Bornkessel, C; Blettner, M; Breckenkamp, J

    2010-01-01

    In the framework of an epidemiological study, dosemeters were used for the assessment of radio frequency electromagnetic field exposure. To check correct dosemeter performance, in terms of the consistency of recorded field values over the entire study period, a quality control strategy...... was developed. In this paper, the concept of quality control and its results are described. Of the 20 dosemeters used, 19 were very stable and reproducible, with deviations of at most +/-1 dB from their initial state. One device was found to be faulty and its measurement data had to be excluded...... from the analysis. As a result of continuous quality control procedures, confidence in the measurements obtained during the field work was strengthened significantly....

  5. A NEW IMAGE QUALITY ASSESSMENT BASED ON HVS

    Institute of Scientific and Technical Information of China (English)

    Du Juan; Yu Yinglin; Xie Shengli

    2005-01-01

    This letter proposes a new image quality philosophy: Modulated Quality based on Fixation Points (MQFP), built on a Human Visual System (HVS) model. Unlike earlier HVS-based quality assessment, the new measure emphasizes modeling the jumping behavior of human sight rather than modeling human visual perception; in other words, it models the HVS using fixation points and stay-frequency instead of the Contrast Sensitivity Function (CSF) and similar models of visual perception. Experiments on images with various frequency distortions indicate that the new measure correlates with subjective judgment better than earlier HVS-based measures and is robust.

  6. A QUALITY ASSESSMENT METHOD FOR 3D ROAD POLYGON OBJECTS

    Directory of Open Access Journals (Sweden)

    L. Gao

    2015-08-01

    With the development of the economy, the fast and accurate extraction of city roads is significant for GIS data collection and updating, remote sensing image interpretation, mapping, and spatial database updating. 3D GIS has attracted more and more attention from academia, industry and government with the increasing requirements for interoperability and integration of data from different sources. The quality of 3D geographic objects is very important for spatial analysis and decision-making. This paper presents a method for the quality assessment of 3D road polygon objects created by integrating 2D road polygon data with LiDAR point clouds and other height information, such as spot height data, on Hong Kong Island. The quality of the created 3D road polygon data set is evaluated by vertical accuracy, geometric and attribute accuracy, connectivity error, undulation error and completeness error, and the final results are presented.

  7. TECHNOLOGY ASSESSMENT OF TIRE MOULD CLEANING SYSTEMS AND QUALITY FINISHING

    Directory of Open Access Journals (Sweden)

    Cristiano Fragassa

    2016-09-01

    A modern tire merges up to 300 different chemical components, organic and inorganic, natural and synthetic. During manufacturing, various processes such as mixing, calendering and extrusion form dozens of individual parts; moulding and vulcanization inside special moulds then give the tire its final shape. Since the surface quality of moulds strongly affects the quality of tires, mould cleaning is a fundamental aspect of tire production, and cleaning techniques are in continuous development. This investigation proposes a global technology assessment of tire mould cleaning systems, including uncommon solutions such as multi-axis robots for on-board cleaning by laser or dry ice, and ultrasonic cleaning, which uses cavitation. Specific attention is also given to the industry's adoption of spring vents in moulds and how they influence the quality of final products.

  8. Air quality monitoring in NIS (SERBIA) and health impact assessment.

    Science.gov (United States)

    Nikic, Dragana; Bogdanovic, Dragan; Nikolic, Maja; Stankovic, Aleksandra; Zivkovic, Nenad; Djordjevic, Amelija

    2009-11-01

    The aim of this study is to indicate the significance of air quality monitoring and to determine air quality fields for the assessment of air pollution health effects, with special attention to risk populations. A radial basis function network was used for air quality index mapping. Between 1991 and 2005, several epidemiological studies were performed on risk groups (pre-school children, school children, pregnant women and persons older than 65) in the territory of Nis; the total number of subjects was 5837. The exposed group comprised individuals living in areas with unhealthy AQI, while the control group comprised individuals living in city areas with good or moderate AQI. It was determined that even relatively low levels of air pollution had an impact on the respiratory system and on the occurrence of anaemia, allergy and skin symptoms.

  9. First Steps Toward a Quality of Climate Finance Scorecard (QUODA-CF): Creating a Comparative Index to Assess International Climate Finance Contributions

    Energy Technology Data Exchange (ETDEWEB)

    Sierra, Katherine; Roberts, Timmons; de Nevers, Michele; Langley, Claire; Smith, Cory

    2013-06-15

    Are climate finance contributor countries, multilateral aid agencies and specialized funds using widely accepted best practices in foreign assistance? How can international climate finance contributions be measured and compared when there are as yet no established metrics or agreed definitions of the quality of climate finance? As a subjective notion, quality can mean different things to different stakeholders, and donor countries, recipients and institutional actors may place quality across a broad spectrum of objectives. This subjectivity makes the assessment of the quality of climate finance contributions a useful and necessary exercise, but one with many challenges. This work seeks to support the development of common definitions and metrics of the quality of climate finance, to understand what we can in those areas where climate finance information is available, and to shine a light on the areas where there is a severe dearth of data. Allowing comparisons of the use of best practices across funding institutions in the climate sector could begin a process of benchmarking performance, fostering learning across institutions and driving improvements when incorporated in the internal evaluation protocols of those institutions. In the medium term, this kind of benchmarking and transparency could support fundraising in contributor countries and help build trust with recipient countries. As a feasibility study, this paper outlines the importance of assessing international climate finance contributions while describing the difficulties of arriving at universally agreed measurements and indicators for assessment. In many cases, data are neither readily available nor complete, and there is no consensus on what should be included. A number of indicators are proposed in this study as a starting point for analyzing voluntary contributions, but in some cases their methodologies are not complete, and further research is required for a…

  10. Evaluation of Chinese Journal H Indexes Based on Google Scholar Metrics

    Institute of Scientific and Technical Information of China (English)

    杨毓丽; 陈陶; 张苏

    2013-01-01

    Google announced a new feature of its Scholar service, Google Scholar Metrics, on April 1, 2012. Metrics covers the H indexes of the top 100 publications in each of ten languages, Chinese among them, for papers published between 2007 and 2011; users can search journal titles to retrieve a journal's H index. In this study, we take the top 100 Chinese journals in Google Scholar Metrics as the source set and compare their H indexes as given by Google Scholar Metrics and as computed from the Chinese journal citation database CNKI. The statistical results show that the Google H index and the CNKI H index are correlated; for the same publication, 90% of the Google journal H indexes are lower than the corresponding CNKI journal H indexes.
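
    The H index being compared is simple to compute: a journal has index h if h of its papers have at least h citations each. A minimal sketch with hypothetical citation counts:

        def h_index(citations):
            """Largest h such that h papers have at least h citations each."""
            counts = sorted(citations, reverse=True)
            h = 0
            for rank, c in enumerate(counts, start=1):
                if c >= rank:
                    h = rank
                else:
                    break
            return h

        print(h_index([25, 8, 5, 3, 3, 1]))  # 3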

  11. Quality assessment of clinical guidelines in China: 1993-2010

    Institute of Scientific and Technical Information of China (English)

    CHEN Yao-long; XIE Chang-chun; YANG Ke-hu; YAO Liang; XIAO Xiao-juan; WANG Qi; WANG Ze-hao; LIANG Fu-xiang; LIANG Hui; WANG Xin; SHEN Xi-ping

    2012-01-01

    Background Clinical practice guidelines (CPGs) play an important role in healthcare in China as well as worldwide. However, the current status and trends of Chinese CPGs are unknown. The aim of this study was to systematically review the present situation and the quality of Chinese CPGs published in the peer-reviewed medical literature. Methods To identify Chinese CPGs, a systematic search of relevant literature databases (CBM, WANFANG, VIP, and CNKI) was performed for the period January 1978 to December 2010. We used the AGREE II instrument to assess the quality of the included guidelines. Results We evaluated 269 guidelines published in 115 medical journals from 1993 to 2010 and produced by 256 different developers. Only four guidelines (1%) described systematic methods for searching and selecting the evidence, 14 (5%) indicated an explicit link between the supporting evidence and the recommendations, and only one guideline used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system. Thirty-one guidelines (12%) mentioned updates, with an average update frequency of 5.5 years; none described a procedure for updating the guideline. In the assessment with the Appraisal of Guidelines for Research and Evaluation II (AGREE II), mean scores were low for the domains "scope and purpose" (19%) and "clarity of presentation" (26%) and very low for the other domains ("rigour of development" 7%, "stakeholder involvement" 8%, "applicability" 6% and "editorial independence" 2%). Conclusions Compared with studies assessing guideline quality with the AGREE instrument in other countries, Chinese CPGs received lower scores, indicating relatively poor quality; however, there was some improvement over time.

  12. Collembase: a repository for springtail genomics and soil quality assessment

    Directory of Open Access Journals (Sweden)

    Klein-Lankhorst Rene M

    2007-09-01

    Abstract Background Environmental quality assessment is traditionally based on responses of reproduction and survival of indicator organisms. For soil assessment, the springtail Folsomia candida (Collembola) is an accepted standard test organism. We argue that environmental quality assessment using gene expression profiles of indicator organisms exposed to test substrates is more sensitive, more toxicant-specific and significantly faster than current risk assessment methods. To apply this species as a genomic model for soil quality testing we conducted an EST sequencing project and developed an online database. Description Collembase is a web-accessible database comprising springtail (F. candida) genomic data. Presently, the database contains information on 8686 ESTs that are assembled into 5952 unique gene objects. Of those gene objects, ~40% showed homology to other protein sequences available in GenBank (blastx analysis; non-redundant (nr) database; expect-value < 1e-5). Software was applied to infer protein sequences. The putative peptides, which had an average length of 115 amino acids (ranging between 23 and 440), were annotated with Gene Ontology (GO) terms. In total 1025 peptides (~17% of the gene objects) were assigned at least one GO term (expect-value < 1e-25). Within Collembase, searches can be conducted based on BLAST and GO annotation or cluster name, or using a BLAST server. The system furthermore enables easy sequence retrieval for functional genomics and Quantitative-PCR experiments. Sequences were submitted to GenBank (Accession numbers: EV473060 – EV481745). Conclusion Collembase http://www.collembase.org is a resource of sequence data on the springtail F. candida. The information within the database will be linked to a custom-made microarray, based on the Agilent platform, which can be applied to soil quality testing. In addition, Collembase supplies information that is valuable for related scientific disciplines such as molecular ecology…

  13. Health impact assessment of quality wine production in Hungary.

    Science.gov (United States)

    Adám, Balázs; Molnár, Agnes; Bárdos, Helga; Adány, Róza

    2009-12-01

    Alcohol-related health outcomes show strikingly high incidence in Hungary. The effects of alcohol consumption are influenced not only by the quantity, but also the quality of drinks; therefore, wine production can have an important effect on public health outcomes. Nevertheless, the Hungarian wine sector faces several vital problems and challenges influenced by the country's accession to the European Union and by the need for restructuring. A comprehensive health impact assessment (HIA) based on the evaluation of the Hungarian legislation related to the wine sector has been carried out, aiming to assess the impact of the production of quality wine versus that of table wine, using a range of public health and epidemiological research methods and data as well as HIA guidelines. The study finds that the toxic effects of alcohol can be reduced with an increased supply of quality wine and with decreased overall consumption due to higher cost, although this might drive some people to seek illegal sources. Quality wine production allows for improved use of land, creates employment opportunities and increases the incomes of producers and local communities; however, capital-scarce producers unable to manage restructuring may lose their source of subsistence. The supply of quality wine can promote social relations, contribute to a healthy lifestyle and reduce criminality related to alcohol's influence and adulteration. In general, the production and supply of quality wine can have an overall positive impact on health. Nevertheless, because of the several possible negative effects expected without purposeful restructuring, recommendations for the maximization of favourable outcomes and suggestions for monitoring the success of the analysis have been provided.

  14. Assessing immunization data quality from routine reports in Mozambique

    Directory of Open Access Journals (Sweden)

    Mavimbe João C

    2005-10-01

    Abstract Background Worldwide immunization coverage has increased in past years, but the validity of official reports for measuring change over time has been questioned. Facing this problem, donor-supported initiatives such as the Global Alliance for Vaccines and Immunization have put considerable effort into assessing the quality of the data used, since accurate immunization information is essential for Expanded Program on Immunization managers to track and improve program performance. The present article discusses record-keeping and reporting practices and the support mechanisms for ensuring data quality in Mozambique. Methods A process evaluation study was carried out in one district of Mozambique (Cuamba, in Niassa Province) between January and March 2003. The study was based on semi-structured interviews, participant observation and a review of the data collection materials. Results Differences were found for all vaccine types when comparing facility reports with the tally sheets, and likewise when comparing facility reports with district reports. The study also showed that data quality assessment was a routine practice during supervision visits for outpatient services, but there was none related to the consistency between the tally sheets and the facility report. For the Expanded Program on Immunization, supervisors concentrated more on consistency checks between the data in the facility reports and the number of vaccines received during the same period. Meetings were based on criticism, for example of why health workers did not reach targets; neither data quality nor validation rules were addressed. Conclusion In this paper we have argued that the quality of data, and consequently of the information system, must be seen in a broader perspective, focusing not only on technicalities (data collection tools and the reporting system) but also on support mechanisms. The implications of a poor data quality system will be…

  15. A novel, fuzzy-based air quality index (FAQI) for air quality assessment

    Science.gov (United States)

    Sowlat, Mohammad Hossein; Gharibi, Hamed; Yunesian, Masud; Tayefeh Mahmoudi, Maryam; Lotfi, Saeedeh

    2011-04-01

    The ever-increasing level of air pollution in most areas of the world has led to the development of a variety of air quality indices for estimating the health effects of air pollution, though these indices have limitations of their own, such as a high level of subjectivity. The present study therefore aimed at developing a novel, fuzzy-based air quality index (FAQI) to address such limitations. The index is based on fuzzy logic, one of the most common computational methods of artificial intelligence. In addition to the criteria air pollutants (i.e. CO, SO2, PM10, O3, NO2), benzene, toluene, ethylbenzene, xylene, and 1,3-butadiene were also taken into account because of their considerable health effects. Different weighting factors were assigned to each pollutant according to its priority. Trapezoidal membership functions were employed for the classifications, and the final index consisted of 72 inference rules. To assess the performance of the index, a case study was carried out using air quality data from five sampling stations in Tehran, Iran, from January 2008 to December 2009; the results were then compared to those obtained from the USEPA air quality index (AQI). According to the results, the fuzzy-based air quality index is a comprehensive tool for the classification of air quality and tends to produce accurate results. It can therefore be considered useful, reliable, and suitable for adoption by local authorities in air quality assessment and management schemes.
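
    As a rough illustration of the building blocks named above, the sketch below implements a trapezoidal membership function and a weighted aggregation of pollutant memberships in plain NumPy. The breakpoints and weights are invented placeholders, not the values of the published FAQI.

```python
import numpy as np

def trapmf(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

# Hypothetical "moderate" class breakpoints (ug/m3) and priority weights.
breakpoints = {"PM10": (40, 60, 100, 140), "O3": (80, 100, 160, 200)}
weights = {"PM10": 0.6, "O3": 0.4}

readings = {"PM10": 85.0, "O3": 110.0}

# Degree to which the current readings belong to the "moderate" class,
# aggregated as a weighted mean across pollutants.
membership = sum(
    weights[p] * trapmf(np.asarray(readings[p]), *breakpoints[p])
    for p in readings
)
print(f"'Moderate' membership degree: {float(membership):.2f}")
```

    In the full index, one such membership evaluation per class and pollutant would feed the rule base (72 inference rules in the study) before defuzzification into a single index value.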

  16. A cloud model-based approach for water quality assessment.

    Science.gov (United States)

    Wang, Dong; Liu, Dengfeng; Ding, Hao; Singh, Vijay P; Wang, Yuankun; Zeng, Xiankui; Wu, Jichun; Wang, Lachun

    2016-07-01

    Water quality assessment is essentially a multi-criteria decision-making process that must account for qualitative and quantitative uncertainties and their transformation. Considering the uncertainties of randomness and fuzziness in water quality evaluation, a cloud model-based assessment approach is proposed. The cognitive cloud model, derived from information science, can realize the transformation between qualitative concepts and quantitative data, drawing on probability and statistics as well as fuzzy set theory. In applying the cloud model to practical assessment, three technical issues are addressed in developing a complete cloud model-based approach: (1) a bilateral boundary formula with nonlinear boundary regression for parameter estimation, (2) a hybrid entropy-analytic hierarchy process technique for the calculation of weights, and (3) the mean of repeated simulations for determining the final degree of certainty. The cloud model-based approach is tested by evaluating the eutrophication status of 12 typical lakes and reservoirs in China and by comparing it with four other methods: the Scoring Index method, the Variable Fuzzy Sets method, the Hybrid Fuzzy and Optimal model, and the Neural Networks method. The proposed approach yields membership information for each water quality status, which leads to the final status, and its results are found to be consistent with those of the alternative methods while remaining accurate.
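
    The qualitative-to-quantitative transformation at the heart of the cloud model is commonly realized with a forward normal cloud generator driven by three numerical characteristics: expectation Ex, entropy En, and hyper-entropy He. A minimal sketch of that generator follows; the parameter values in the usage lines are arbitrary examples, not taken from the paper's lake data.

```python
import numpy as np

def forward_normal_cloud(ex, en, he, n=1000, seed=None):
    """Generate n cloud drops (x, mu) from characteristics (Ex, En, He).

    Each drop perturbs the entropy with the hyper-entropy, samples a
    position around Ex, and assigns it a certainty degree mu.
    """
    rng = np.random.default_rng(seed)
    en_prime = rng.normal(en, he, n)          # entropy blurred by hyper-entropy
    x = rng.normal(ex, np.abs(en_prime))      # drop positions around Ex
    mu = np.exp(-((x - ex) ** 2) / (2 * en_prime ** 2 + 1e-12))  # certainty degree
    return x, mu

# Example: a water quality grade modeled as Ex=30, En=5, He=0.5 (arbitrary).
x, mu = forward_normal_cloud(30.0, 5.0, 0.5, n=5, seed=42)
print(np.round(x, 2), np.round(mu, 2))
```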

  17. GEOMETRIC QUALITY ASSESSMENT OF LIDAR DATA BASED ON SWATH OVERLAP

    Directory of Open Access Journals (Sweden)

    A. Sampath

    2016-06-01

    It is suggested that 4000-5000 points, depending on the surface roughness, be uniformly sampled in the overlapping regions of the point cloud to measure the discrepancy between swaths. Care must be taken to sample only areas of single-return points. Point-to-plane distance based data quality measures are determined for each sample point, and these measurements are used to determine the above-mentioned parameters. This paper details the measurements, and the analysis of those measurements, required to determine these metrics, i.e. the discrepancy angle, the mean and RMSD of errors in flat regions, and the horizontal errors obtained from measurements extracted from sloping regions (slope greater than 10 degrees). The research is the result of an ad-hoc joint working group of the US Geological Survey and the American Society for Photogrammetry and Remote Sensing (ASPRS) Airborne Lidar Committee.

  18. Geometric Quality Assessment of LIDAR Data Based on Swath Overlap

    Science.gov (United States)

    Sampath, A.; Heidemann, H. K.; Stensaas, G. L.

    2016-06-01

    It is suggested that 4000-5000 points, depending on the surface roughness, be uniformly sampled in the overlapping regions of the point cloud to measure the discrepancy between swaths. Care must be taken to sample only areas of single-return points. Point-to-plane distance based data quality measures are determined for each sample point, and these measurements are used to determine the above-mentioned parameters. This paper details the measurements, and the analysis of those measurements, required to determine these metrics, i.e. the discrepancy angle, the mean and RMSD of errors in flat regions, and the horizontal errors obtained from measurements extracted from sloping regions (slope greater than 10 degrees). The research is the result of an ad-hoc joint working group of the US Geological Survey and the American Society for Photogrammetry and Remote Sensing (ASPRS) Airborne Lidar Committee.
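
    Both records above rest on the same primitive: the point-to-plane distance from a sampled point in one swath to a locally fitted plane in the overlapping swath. A minimal sketch under that reading is shown below; the plane is fit by SVD to the point's neighbors from the other swath, and the neighbor search itself is omitted.

```python
import numpy as np

def point_to_plane_distance(p, neighbors):
    """Signed distance from point p (3,) to the least-squares plane
    fitted through `neighbors` (k x 3) from the overlapping swath."""
    centroid = neighbors.mean(axis=0)
    # The singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(neighbors - centroid)
    normal = vt[-1]
    return float(np.dot(p - centroid, normal))

# Example with synthetic data: a roughly flat patch and a point 0.1 m above it.
rng = np.random.default_rng(0)
patch = np.column_stack([rng.uniform(0, 1, 20),
                         rng.uniform(0, 1, 20),
                         rng.normal(0, 0.005, 20)])
print(f"discrepancy: {point_to_plane_distance(np.array([0.5, 0.5, 0.1]), patch):+.3f} m")
```

    Aggregating such distances over the suggested 4000-5000 samples in flat areas would yield the mean and RMSD quality measures the records describe.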

  19. Analysis and Comparison of Objective Methods for Image Quality Assessment

    Directory of Open Access Journals (Sweden)

    P. S. Babkin

    2014-01-01

    The purpose of this work is the research and modification of reference objective methods for image quality assessment. The ultimate goal is to obtain a modification of the formal assessments that corresponds more closely to subjective expert estimates (MOS). In reviewing the formal reference objective methods for image quality assessment, we used the results of other authors, who offer comparative analyses of the most effective algorithms. Based on these investigations we chose two of the most successful algorithms, PQS and MS-SSIM, for further analysis in MATLAB 7.8 (R2009a). The publication focuses on features of the algorithms that have great importance in practical implementation but are insufficiently covered in publications by other authors. In the implemented modification of the PQS algorithm, the Kirsch edge detector was replaced by the Canny edge detector. Further experiments were carried out according to the method of ITU-R BT.500-13 (01/2012) using monochrome images treated with different types of filters (it should be emphasized that the PQS objective image quality assessment is applicable only to monochrome images). The images were obtained with a thermal imaging surveillance system. The experimental results proved the effectiveness of this modification. In the specialized literature on formal image quality evaluation methods, this type of modification has not been mentioned. The method described in the publication can be applied in various practical implementations of digital image processing. The advisability and effectiveness of using the modified PQS method to assess structural differences between images are shown in the article; this will be used in solving problems of identification and automatic control.
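
    The swap the authors describe, replacing the Kirsch compass detector inside PQS with a Canny detector, is easy to picture with a small sketch. Below, the eight Kirsch kernels are built by rotating the border ring of the base mask and compared against scikit-image's Canny; this is an illustrative stand-in, not the authors' MATLAB code.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.feature import canny

def kirsch_edges(img):
    """Max response over the 8 Kirsch compass kernels."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    base = [5, 5, 5, -3, -3, -3, -3, -3]
    responses = []
    for k in range(8):                      # rotate the border ring in 45-degree steps
        kern = np.zeros((3, 3))
        vals = base[k:] + base[:k]
        for (r, c), v in zip(ring, vals):
            kern[r, c] = v
        responses.append(convolve(img.astype(float), kern))
    return np.max(np.abs(responses), axis=0)

img = np.random.default_rng(1).random((64, 64))   # stand-in for a thermal frame
edges_kirsch = kirsch_edges(img)                   # gradient-magnitude-like map
edges_canny = canny(img, sigma=1.5)                # boolean edge map
```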

  20. Quality assessment of adaptive 3D video streaming

    Science.gov (United States)

    Tavakoli, Samira; Gutiérrez, Jesús; García, Narciso

    2013-03-01

    The streaming of 3D video content is now a reality that expands the user experience. However, because of the variable bandwidth of the networks used to deliver multimedia content, a smooth, high-quality playback experience cannot always be guaranteed. By using segments in multiple video qualities, HTTP adaptive streaming (HAS) of video content is a relevant advancement with respect to classic progressive-download streaming; it largely resolves these issues by offering significant advantages in terms of both user-perceived Quality of Experience (QoE) and resource utilization for content and network service providers. In this paper we discuss the impact on the end user of possible HAS client behaviors while adapting to the network capacity. This has been done through an experiment testing the end-user response to quality variation during the adaptation procedure. The evaluation was carried out through a subjective test of the end-user response to various possible client behaviors for increasing, decreasing, and oscillating quality in 3D video. In addition, some of the typical HAS impairments during adaptation were simulated and their effects on end-user perception assessed. The experimental conclusions give good insight into the user's response to different adaptation scenarios and to the visual impairments causing visual discomfort, which can be used to develop adaptive streaming algorithms that improve the end-user experience.
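
    To make the client behaviors under test concrete, here is a minimal sketch of a throughput-driven quality selector with a damping rule that steps up one rung at a time, the kind of policy a HAS client might use to avoid the abrupt quality oscillations the study examines. The bitrate ladder and the safety margin are invented placeholders.

```python
# Hypothetical bitrate ladder for the 3D stream, in kbit/s.
LADDER = [500, 1200, 2500, 5000]

def pick_quality(measured_kbps: float, current: int, safety: float = 0.8) -> int:
    """Return the ladder index for the next segment.

    Downshift immediately to whatever fits the measured throughput
    (with a safety margin); upshift at most one rung per segment to
    damp oscillations.
    """
    budget = measured_kbps * safety
    fitting = [i for i, rate in enumerate(LADDER) if rate <= budget]
    target = fitting[-1] if fitting else 0
    return min(target, current + 1)  # never jump more than one rung up

# Example: recovering bandwidth after a dip ramps quality up gradually.
idx = 0
for throughput in (900, 4000, 8000, 8000):
    idx = pick_quality(throughput, idx)
    print(f"throughput={throughput} kbps -> {LADDER[idx]} kbps")
```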