WorldWideScience

Sample records for assessments quality metrics

  1. Assessing Software Quality Through Visualised Cohesion Metrics

    Directory of Open Access Journals (Sweden)

    Timothy Shih

    2001-05-01

    Full Text Available Cohesion is one of the most important factors for software quality, as well as for maintainability, reliability and reusability. Module cohesion is defined as a quality attribute that seeks to measure the singleness of purpose of a module. A module of poor quality can be a serious obstacle to overall system quality. In order to design software of good quality, software managers and engineers need to introduce cohesion metrics to measure and produce desirable software. Highly cohesive software is considered a desirable construction. In this paper, we propose a function-oriented cohesion metric based on the analysis of live variables, live span and the visualization of a processing-element dependency graph. We give six typical cohesion examples to be measured as our experiments and justification. The result is a well-defined, well-normalized, well-visualized and well-experimented cohesion metric that indicates, and thus helps enhance, software cohesion strength. Furthermore, this cohesion metric can easily be incorporated into software CASE tools to help software engineers improve software quality.
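
    A minimal sketch of the idea behind a live-variable cohesion measure, in Python. This is a hypothetical simplification for illustration only: the paper's actual metric also uses live spans and the visualized processing-element dependency graph, neither of which is modeled here.

```python
# Toy live-variable cohesion: for each statement we are given the set of
# variables live there; cohesion is the mean fraction of all module
# variables that are live per statement. (Illustrative simplification.)

def live_variable_cohesion(live_sets):
    """live_sets: list of sets, one per statement, of variables live there."""
    all_vars = set().union(*live_sets)
    if not all_vars:
        return 0.0
    per_stmt = [len(s) / len(all_vars) for s in live_sets]
    return sum(per_stmt) / len(per_stmt)

# A module whose variables are live nearly everywhere scores higher (more
# single-purpose) than one whose variables live in disjoint regions.
cohesive   = [{"a", "b"}, {"a", "b"}, {"a", "b"}]
fragmented = [{"a"}, {"b"}, {"c"}]
print(live_variable_cohesion(cohesive))    # 1.0
print(live_variable_cohesion(fragmented))  # ~0.33
```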

  2. Software Quality Metrics for Geant4: An Initial Assessment

    CERN Document Server

    Ronchieri, Elisabetta; Giacomini, Francesco

    2016-01-01

    In the context of critical applications, such as shielding and radiation protection, ensuring the quality of simulation software they depend on is of utmost importance. The assessment of simulation software quality is important not only to determine its adoption in experimental applications, but also to guarantee reproducibility of outcome over time. In this study, we present initial results from an ongoing analysis of Geant4 code based on established software metrics. The analysis evaluates the current status of the code to quantify its characteristics with respect to documented quality standards; further assessments concern evolutions over a series of release distributions. We describe the selected metrics that quantify software attributes ranging from code complexity to maintainability, and highlight what metrics are most effective at evaluating radiation transport software quality. The quantitative assessment of the software is initially focused on a set of Geant4 packages, which play a key role in a wide...
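
    The study applies established code metrics to C++ with dedicated tools; as a language-neutral illustration of one such metric, the sketch below counts McCabe cyclomatic complexity for Python functions using only the standard library. The decision-node list is a deliberate simplification (e.g., a boolean-operator chain counts once).

```python
# Minimal cyclomatic-complexity counter: 1 + number of decision points.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> dict:
    tree = ast.parse(source)
    results = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            decisions = sum(isinstance(n, DECISION_NODES)
                            for n in ast.walk(node))
            results[node.name] = 1 + decisions
    return results

code = """
def classify(x):
    if x < 0:
        return "neg"
    for i in range(x):
        if i % 2 and i % 3:
            return "odd-ish"
    return "other"
"""
print(cyclomatic_complexity(code))  # {'classify': 5}
```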

  3. Supporting analysis and assessments quality metrics: Utility market sector

    Energy Technology Data Exchange (ETDEWEB)

    Ohi, J. [National Renewable Energy Lab., Golden, CO (United States)

    1996-10-01

    In FY96, NREL was asked to coordinate all analysis tasks so that in FY97 these tasks will be part of an integrated analysis agenda that will begin to define a 5-15 year R&D roadmap and portfolio for the DOE Hydrogen Program. The purpose of the Supporting Analysis and Assessments task at NREL is to provide this coordination and conduct specific analysis tasks. One of these tasks is to prepare the Quality Metrics (QM) for the Program as part of the overall QM effort at DOE/EERE. The Hydrogen Program is one of 39 program planning units conducting QM, a process begun in FY94 to assess the benefits and costs of DOE/EERE programs. The purpose of QM is to inform decision-making during the budget formulation process by describing the expected outcomes of programs in the budget request. QM is expected to establish a first step toward merit-based budget formulation and to allow DOE/EERE to get the "most bang for its (R&D) buck." In FY96, NREL coordinated a QM team that prepared a preliminary QM for the utility market sector. In the electricity supply sector, the QM analysis shows hydrogen fuel cells capturing 5% (or 22 GW) of the total market of 390 GW of new capacity additions through 2020. Hydrogen consumption in the utility sector increases from 0.009 Quads in 2005 to 0.4 Quads in 2020. Hydrogen fuel cells are projected to displace over 0.6 Quads of primary energy in 2020. In future work, NREL will assess the market for decentralized, on-site generation; develop cost credits for distributed generation benefits (such as deferral of transmission and distribution investments and uninterruptible power service), for by-products such as heat and potable water, and for environmental benefits (reduction of criteria air pollutants and greenhouse gas emissions); compete different fuel cell technologies against each other for market share; and begin to address economic benefits, especially employment.

  4. Quality Assessment for CRT and LCD Color Reproduction Using a Blind Metric

    OpenAIRE

    Bringier, B.; Quintard, L.; Larabi, M.-C.

    2008-01-01

    This paper deals with image quality assessment, a topic that is capturing the focus of several research teams from academia and industry. This field has an important role in various applications related to images, from acquisition to projection. A large number of objective image quality metrics have been developed during the last decade. These metrics are more or less correlated to end-user feedback and can be separated in three categories: 1) Full Reference (FR) trying to evaluate the impairme...

  5. Multi-resolution Structural Degradation Metrics for Perceptual Image Quality Assessment

    OpenAIRE

    Engelke, Ulrich; Zepernick, Hans-Jürgen

    2007-01-01

    In this paper, a multi-resolution analysis is proposed for image quality assessment. Structural features are extracted from each level of a pyramid decomposition that accurately represents the multiple scales of processing in the human visual system. To obtain an overall quality measure, the individual level metrics are accumulated over the considered pyramid levels. Two different metric design approaches are introduced and evaluated. It turns out that one of them outperforms our previous work...
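
    A rough sketch of the multi-resolution recipe, under stated assumptions: a simple mean pyramid stands in for the paper's decomposition, and a gradient-energy similarity stands in for its structural features. The level weights and feature names are illustrative, not the paper's.

```python
import numpy as np

def downsample(img):
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

def gradient_energy(img):
    gy, gx = np.gradient(img.astype(float))
    return np.sqrt(gx ** 2 + gy ** 2).mean()

def multires_quality(ref, dist, levels=4, weights=(0.1, 0.2, 0.3, 0.4)):
    score = 0.0
    for k in range(levels):
        er, ed = gradient_energy(ref), gradient_energy(dist)
        score += weights[k] * (1.0 - abs(er - ed) / max(er, ed, 1e-12))
        ref, dist = downsample(ref), downsample(dist)  # next pyramid level
    return score  # 1.0 = structurally identical

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
print(multires_quality(ref, ref))                            # 1.0
print(multires_quality(ref, ref + 0.2 * rng.random((64, 64))))
```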

  6. On the Performance of Video Quality Assessment Metrics under Different Compression and Packet Loss Scenarios

    OpenAIRE

    Martínez-Rach, Miguel O.; Pablo Piñol; López, Otoniel M.; Manuel Perez Malumbres; José Oliver; Carlos Tavares Calafate

    2014-01-01

    When comparing the performance of video coding approaches, evaluating different commercial video encoders, or measuring the perceived video quality in a wireless environment, rate/distortion analysis is commonly used, where distortion is usually measured in terms of PSNR values. However, PSNR does not always capture the distortion perceived by a human being. As a consequence, significant efforts have focused on defining an objective video quality metric that is able to assess quality in the s...
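
    For reference, the PSNR computation this record critiques is nothing more than a log-scaled mean squared error; a small numpy version:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """PSNR in dB for 8-bit frames; higher is (nominally) better."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 128, dtype=np.uint8)
test = ref.copy()
test[0, 0] = 138                 # one locally visible defect
print(round(psnr(ref, test), 2)) # ~40 dB: high score despite the defect
```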

  7. Data Quality Metrics

    OpenAIRE

    Sýkorová, Veronika

    2008-01-01

    The aim of the thesis is to prove the measurability of data quality, which is a relatively subjective notion and thus difficult to measure. To this end, various aspects of measuring the quality of data are analyzed, and a Complex Data Quality Monitoring System is introduced with the aim of providing a concept for measuring and monitoring the overall data quality in an organization. The system is built on a metrics hierarchy decomposed into particular detailed metrics, dimensions enabling multidi...

  8. Quality Metrics in Inpatient Neurology.

    Science.gov (United States)

    Dhand, Amar

    2015-12-01

    Quality of care in the context of inpatient neurology is the standard of performance by neurologists and the hospital system as measured against ideal models of care. There are growing regulatory pressures to define health care value through concrete quantifiable metrics linked to reimbursement. Theoretical models of quality acknowledge its multimodal character with quantitative and qualitative dimensions. For example, the Donabedian model distils quality as a phenomenon of three interconnected domains, structure-process-outcome, with each domain mutually influential. The actual measurement of quality may be implicit, as in peer review in morbidity and mortality rounds, or explicit, in which criteria are prespecified and systemized before assessment. As a practical contribution, in this article a set of candidate quality indicators for inpatient neurology based on an updated review of treatment guidelines is proposed. These quality indicators may serve as an initial blueprint for explicit quality metrics long overdue for inpatient neurology.

  9. Laser beam quality metrics

    CERN Document Server

    Ross, T Sean

    2013-01-01

    This book is geared toward engineers and laser physicists involved in the development of laser-based systems, especially laser systems for directed energy applications. It begins with a review of basic laser properties and moves to definitions and implications of the various standard beam quality metrics such as M², power in the bucket, brightness, beam parameter product, and Strehl ratio. The practical aspects of beam metrology, which have not been sufficiently addressed in the literature, are amply covered here.

  10. Chromosome microarray proficiency testing and analysis of quality metric data trends through an external quality assessment program for Australasian laboratories.

    Science.gov (United States)

    Wright, D C; Adayapalam, N; Bain, N; Bain, S M; Brown, A; Buzzacott, N; Carey, L; Cross, J; Dun, K; Joy, C; McCarthy, C; Moore, S; Murch, A R; O'Malley, F; Parker, E; Watt, J; Wilkin, H; Fagan, K; Pertile, M D; Peters, G B

    2016-10-01

    Chromosome microarrays are an essential tool for investigation of copy number changes in children with congenital anomalies and intellectual deficit. Attempts to standardise microarray testing have focused on establishing technical and clinical quality criteria; however, external quality assessment programs are still needed. We report on a microarray proficiency testing program for Australasian laboratories. Quality metrics evaluated included analytical accuracy, result interpretation, report completeness, and laboratory performance data: sample numbers, success and abnormality rate, and reporting times. Between 2009 and 2014 nine samples were dispatched, with variable results for analytical accuracy (30-100%), correct interpretation (32-96%), and report completeness (30-92%). Laboratory performance data (2007-2014) showed an overall mean success rate of 99.2% and an abnormality rate of 23.6%. Reporting times decreased, falling from in excess of 90-102 days to less than 35 days for abnormal results. Data trends showed improvement for all these quality metrics; however, only report completeness and reporting times reached statistical significance. Whether the overall improvement in laboratory performance was due to participation in this program, or from accumulated laboratory experience over time, is not clear. Either way, the outcome is likely to assist referring clinicians and improve patient care. PMID: 27575971

  12. A proposed metric for assessing the measurement quality of individual microarrays

    Directory of Open Access Journals (Sweden)

    Scheirer Katherine E

    2006-01-01

    Full Text Available Background: High-density microarray technology is increasingly applied to study gene expression levels on a large scale. Microarray experiments rely on several critical steps that may introduce error and uncertainty in analyses. These steps include mRNA sample extraction, amplification and labeling, hybridization, and scanning. In some cases this may be manifested as systematic spatial variation on the surface of the microarray, in which expression measurements within an individual array may vary as a function of geographic position on the array surface. Results: We hypothesized that an index of the degree of spatiality of gene expression measurements associated with their physical geographic locations on an array could summarize the physical reliability of the microarray. We introduced a novel way to formulate this index using a statistical analysis tool. Our approach regressed gene expression intensity measurements on a polynomial response surface of the microarray's Cartesian coordinates. We demonstrated this method using a fixed model and presented results from real and simulated datasets. Conclusion: We demonstrated the potential of such a quantitative metric for assessing the reliability of individual arrays. Moreover, we showed that this procedure can be incorporated into laboratory practice as a means to set quality control specifications and as a tool to determine whether an array has sufficient quality to be retained in terms of spatial correlation of gene expression measurements.
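
    A hedged sketch of the regression idea: fit a polynomial response surface in the array's Cartesian coordinates and read the model R² as an index of systematic spatial variation. The degree, terms and the use of R² as the index are illustrative; the paper fixes its own model.

```python
import numpy as np

def spatial_bias_index(x, y, intensity, degree=2):
    """R^2 of intensity ~ polynomial(x, y); high value = spatial artifact."""
    cols = [np.ones_like(x, dtype=float)]
    for d in range(1, degree + 1):
        for i in range(d + 1):
            cols.append((x ** (d - i)) * (y ** i))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, intensity, rcond=None)
    resid = intensity - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((intensity - intensity.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(1)
x, y = np.meshgrid(np.arange(32), np.arange(32))
x, y = x.ravel().astype(float), y.ravel().astype(float)
clean = rng.normal(size=x.size)
biased = clean + 0.2 * x                           # left-to-right drift
print(round(spatial_bias_index(x, y, clean), 3))   # near 0
print(round(spatial_bias_index(x, y, biased), 3))  # markedly higher
```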

  13. Quality Assessment of Adaptive Bitrate Videos using Image Metrics and Machine Learning

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Brunnström, Kjell

    2015-01-01

    Adaptive bitrate (ABR) streaming is widely used for distribution of videos over the internet. In this work, we investigate how well we can predict the quality of such videos using well-known image metrics, information about the bitrate levels, and a relatively simple machine learning method...

  14. Does the lentic-lotic character of rivers affect invertebrate metrics used in the assessment of ecological quality?

    Directory of Open Access Journals (Sweden)

    Stefania ERBA

    2009-02-01

    Full Text Available The importance of local hydraulic conditions in structuring freshwater biotic communities is widely recognized by the scientific community. In spite of this, most current invertebrate-based methods do not take this factor into account in their assessment of ecological quality. The aim of this paper is to investigate the influence of local hydraulic conditions on invertebrate community metrics and to estimate their potential weight in the evaluation of river water quality. The dataset used consisted of 130 stream sites located in four broad European geographical contexts: Alps, Central mountains, Mediterranean mountains and Lowland streams. Using River Habitat Survey data, river hydromorphology was evaluated by means of the Lentic-lotic River Descriptor and the Habitat Modification Score. To quantify the level of water pollution, a synoptic Organic Pollution Descriptor was calculated. Because of their established, wide applicability, the STAR Intercalibration Common Metrics and index were selected as biological quality indices. Significant relationships between selected environmental variables and biological metrics devoted to the evaluation of ecological quality were obtained by means of Partial Least Squares regression analysis. The lentic-lotic character was the most significant factor affecting invertebrate communities in the Mediterranean mountains, and it was a relevant factor for most quality metrics in the Alpine and Central mountain rivers as well. This character should therefore be taken into account when assessing the ecological quality of rivers, because it can greatly affect the assignment of ecological status.
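
    As a sketch of the analysis style only (not the study's data), the snippet below fits a two-component Partial Least Squares model relating hypothetical environmental descriptors to a synthetic biological quality metric; scikit-learn's PLSRegression is assumed available, and all variable names are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(42)
n = 130                                    # sites, as in the study's dataset
lentic_lotic = rng.uniform(-1, 1, n)       # hypothetical descriptor scores
habitat_mod = rng.uniform(0, 1, n)
org_pollution = rng.uniform(0, 1, n)
X = np.column_stack([lentic_lotic, habitat_mod, org_pollution])

# Hypothetical quality metric responding mostly to pollution and flow type
icm_index = (0.8 - 0.5 * org_pollution + 0.2 * lentic_lotic
             + rng.normal(0, 0.05, n))

pls = PLSRegression(n_components=2)
pls.fit(X, icm_index)
print(pls.score(X, icm_index))   # R^2 of the two-component model
print(pls.x_weights_[:, 0])      # first-component loadings hint at drivers
```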

  16. SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, T; Ruan, D [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric and even a standard to compare amongst candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics’ quality, based on a novel perspective of the metric as surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates’ ability in prognosticating the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates’ quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with surrogate metric exemplified by several widely-used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates’ behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with eCNR of 0.12 compared to MSD with eCNR of 0.10 resulted in statistically better segmentation with mean DSC of about 0.85 and the first and third quartiles of (0.83, 0.89), compared to MSD with mean DSC of 0.84 and the first and third quartiles of (0.81, 0.89). Conclusion: The designed eCNR is capable of characterizing surrogate metrics’ quality in prognosticating the oracle relevance value. It has been demonstrated to be

  17. A Medical Image Watermarking Technique for Embedding EPR and Its Quality Assessment Using No-Reference Metrics

    Directory of Open Access Journals (Sweden)

    Rupinder Kaur

    2013-01-01

    Full Text Available Digital watermarking can be used as an important tool for the security and copyright protection of digital multimedia content. The present paper explores its applications as a quality indicator of a watermarked medical image when subjected to intentional (noise, cropping, alteration) or unintentional (compression, transmission or filtering) operations. The watermark also carries EPR data along with a binary mark (used for quality assessment). The binary mark is used as a No-Reference (NR) quality metric that blindly estimates the quality of an image without the need of the original image. It is a semi-fragile watermark which degrades at around the same rate as the original image and thus gives an indication of the quality degradation of the host image at the receiving end. In the proposed method, the original image is divided into two parts: ROI and non-ROI. The ROI is an area that contains diagnostically important information and must be processed without any distortion. The binary mark and EPR are embedded into the DCT domain of the non-ROI. Embedding EPR within a medical image reduces storage and transmission overheads, and no additional file has to be sent along with the image. The watermark (binary mark and EPR) is extracted from the non-ROI part at the receiving end, and a measure of degradation of the binary mark is used to estimate the quality of the original image. The performance of the proposed method is evaluated by calculating the MSE and PSNR of the original and extracted mark.
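
    A toy version of the embedding step: carry one watermark bit in a mid-frequency DCT coefficient of an 8x8 block assumed to lie outside the ROI. The paper's scheme (EPR payload, semi-fragile mark, degradation-based quality estimate) is richer; this only illustrates the DCT mechanics, and the coefficient position and strength are arbitrary choices.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

D = dct_matrix()

def embed_bit(block, bit, coeff=(3, 2), strength=8.0):
    c = D @ block @ D.T                        # forward 2-D DCT
    c[coeff] = strength if bit else -strength  # sign carries the bit
    return D.T @ c @ D                         # inverse DCT

def extract_bit(block, coeff=(3, 2)):
    return int((D @ block @ D.T)[coeff] > 0)

rng = np.random.default_rng(7)
non_roi_block = rng.uniform(0, 255, (8, 8))    # a block outside the ROI
marked = embed_bit(non_roi_block, 1)
print(extract_bit(marked))                     # -> 1
```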

  18. Application of sigma metrics for the assessment of quality control in clinical chemistry laboratory in Ghana: A pilot study

    Directory of Open Access Journals (Sweden)

    Justice Afrifa

    2015-01-01

    Full Text Available Background: Sigma metrics provide a uniquely defined scale with which we can assess the performance of a laboratory. The objective of this study was to assess the internal quality control (QC) in the clinical chemistry laboratory of the University of Cape Coast Hospital (UCC) using the six sigma metrics application. Materials and Methods: We used commercial control serum [normal (L1) and pathological (L2)] for validation of quality control. Metabolites (glucose, urea, and creatinine), lipids [triglycerides (TG), total cholesterol, high-density lipoprotein cholesterol (HDL-C)], enzymes [alkaline phosphatase (ALP), alanine aminotransferase (AST)], electrolytes (sodium, potassium, chloride) and total protein were assessed. Between-day imprecision (CV), inaccuracy (bias) and sigma values were calculated for each control level. Results: Apart from sodium (2.40%, 3.83%) and chloride (2.52% and 2.51%) for L1 and L2 respectively, and glucose (4.82%) and cholesterol (4.86%) for L2, CVs for all other parameters (both L1 and L2) were >5%. Four parameters (HDL-C, urea, creatinine and potassium) achieved sigma levels >1 for both controls. Chloride and sodium achieved sigma levels >1 for L1 but <1 for L2. Glucose and ALP achieved a sigma level >1 for both control levels, whereas TG achieved a sigma level >2 for both control levels. Conclusion: Unsatisfactory sigma levels (<3) were achieved for all parameters using both control levels, which shows instability and low consistency of results. There is the need for detailed assessment of the analytical procedures and the strengthening of the laboratory control systems in order to achieve effective six sigma levels for the laboratory.
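
    The sigma metric used in assessments like this one is commonly computed as sigma = (TEa − |bias|) / CV, with total allowable error (TEa), bias and CV all expressed in percent. A one-function version with illustrative numbers:

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma = (allowable error - |bias|) / imprecision, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# e.g., glucose with TEa = 10%, bias = 2%, between-day CV = 4.8%:
print(round(sigma_metric(10.0, 2.0, 4.8), 2))  # ~1.67, far below six sigma
# a tighter assay: TEa = 10%, bias = 1%, CV = 1.4%:
print(round(sigma_metric(10.0, 1.0, 1.4), 2))  # ~6.43, world-class
```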

  19. A management-oriented framework for selecting metrics used to assess habitat- and path-specific quality in spatially structured populations

    Science.gov (United States)

    Sam Nicol,; Ruscena Wiederholt,; Diffendorfer, James E.; Brady Mattsson,; Thogmartin, Wayne E.; Semmens, Darius J.; Laura Lopez-Hoffman,; Ryan Norris,

    2016-01-01

    Mobile species with complex spatial dynamics can be difficult to manage because their population distributions vary across space and time, and because the consequences of managing particular habitats are uncertain when evaluated at the level of the entire population. Metrics to assess the importance of habitats and pathways connecting habitats in a network are necessary to guide a variety of management decisions. Given the many metrics developed for spatially structured models, it can be challenging to select the most appropriate one for a particular decision. To guide the management of spatially structured populations, we define three classes of metrics describing habitat and pathway quality based on their data requirements (graph-based, occupancy-based, and demographic-based metrics) and synopsize the ecological literature relating to these classes. Applying the first steps of a formal decision-making approach (problem framing, objectives, and management actions), we assess the utility of metrics for particular types of management decisions. Our framework can help managers with problem framing, choosing metrics of habitat and pathway quality, and to elucidate the data needs for a particular metric. Our goal is to help managers to narrow the range of suitable metrics for a management project, and aid in decision-making to make the best use of limited resources.

  20. Assessments of habitat preferences and quality depend on spatial scale and metrics of fitness

    Science.gov (United States)

    Chalfoun, A.D.; Martin, T.E.

    2007-01-01

    1. Identifying the habitat features that influence habitat selection and enhance fitness is critical for effective management. Ecological theory predicts that habitat choices should be adaptive, such that fitness is enhanced in preferred habitats. However, studies often report mismatches between habitat preferences and fitness consequences across a wide variety of taxa based on a single spatial scale and/or a single fitness component. 2. We examined whether habitat preferences of a declining shrub steppe songbird, the Brewer's sparrow Spizella breweri, were adaptive when multiple reproductive fitness components and spatial scales (landscape, territory and nest patch) were considered. 3. We found that birds settled earlier and in higher densities, together suggesting preference, in landscapes with greater shrub cover and height. Yet nest success was not higher in these landscapes; nest success was primarily determined by nest predation rates. Thus landscape preferences did not match nest predation risk. Instead, nestling mass and the number of nesting attempts per pair increased in preferred landscapes, raising the possibility that landscapes were chosen on the basis of food availability rather than safe nest sites. 4. At smaller spatial scales (territory and nest patch), birds preferred different habitat features (i.e. density of potential nest shrubs) that reduced nest predation risk and allowed greater season-long reproductive success. 5. Synthesis and applications. Habitat preferences reflect the integration of multiple environmental factors across multiple spatial scales, and individuals may have more than one option for optimizing fitness via habitat selection strategies. Assessments of habitat quality for management prescriptions should ideally include analysis of diverse fitness consequences across multiple ecologically relevant spatial scales. © 2007 The Authors.

  1. Application of Sigma Metrics for the Assessment of Quality Assurance in Clinical Biochemistry Laboratory in India: A Pilot Study

    OpenAIRE

    Singh, Bhawna; Goswami, Binita; Gupta, Vinod Kumar; Chawla, Ranjna; Mallika, Venkatesan

    2010-01-01

    Ensuring quality of laboratory services is the need of the hour in the field of health care. Keeping in mind the revolution ushered by six sigma concept in corporate world, health care sector may reap the benefits of the same. Six sigma provides a general methodology to describe performance on sigma scale. We aimed to gauge our laboratory performance by sigma metrics. Internal quality control (QC) data was analyzed retrospectively over a period of 6 months from July 2009 to December 2009. Lab...

  3. Identification of Suited Quality Metrics for Natural and Medical Images

    Directory of Open Access Journals (Sweden)

    Kirti V. Thakur

    2016-06-01

    Full Text Available Assessing the quality of a denoised image is one of the important tasks in image denoising applications. Numerous quality metrics, each with particular characteristics, have been proposed by researchers to date. In practice, the image acquisition systems for natural and medical images differ, and hence the noise introduced in these images also differs in nature. Considering this fact, the authors of this paper identify the best-suited quality metrics for Gaussian-, speckle- and Poisson-corrupted natural, ultrasound and X-ray images, respectively. Sixteen different full-reference quality metrics are evaluated with respect to noise variance, and the best-suited quality metric for each type of noise is identified. A strong need to develop noise-dependent quality metrics is also identified in this work.

  4. THE QUALITY METRICS OF INFORMATION SYSTEMS

    Directory of Open Access Journals (Sweden)

    Zora Arsovski

    2008-06-01

    Full Text Available An information system is a special kind of product that depends on a great number of variables related to its nature, the conditions during implementation, and organizational climate and culture. Because of that, quality metrics of information systems (QMIS) have to reflect all of these aspects of information systems. In this paper we present the basic elements of QMIS, characteristics of implementation and operation metrics for IS, team-management quality metrics for IS, and organizational aspects of quality metrics. In the second part of this paper we present the results of a study of QMIS in the area of MIS (Management IS).

  5. Using quality metrics with laser range scanners

    Science.gov (United States)

    MacKinnon, David K.; Aitken, Victor; Blais, Francois

    2008-02-01

    We have developed a series of new quality metrics that are generalizable to a variety of laser range scanning systems, including those acquiring measurements in the mid-field. Moreover, these metrics can be integrated into either an automated scanning system, or a system that guides a minimally trained operator through the scanning process. In particular, we represent the quality of measurements with regard to aliasing and sampling density for mid-field measurements, two issues that have not been well addressed in contemporary literature. We also present a quality metric that addresses the issue of laser spot motion during sample acquisition. Finally, we take into account the interaction between measurement resolution and measurement uncertainty where necessary. These metrics are presented within the context of an adaptive scanning system in which quality metrics are used to minimize the number of measurements obtained during the acquisition of a single range image.

  6. An Underwater Color Image Quality Evaluation Metric.

    Science.gov (United States)

    Yang, Miao; Sowmya, Arcot

    2015-12-01

    Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics especially to different underwater environments. In this paper, subjective testing for underwater image quality has been organized. The statistical distribution of the underwater image pixels in the CIELab color space related to subjective evaluation indicates the sharpness and colorful factors correlate well with subjective image quality perception. Based on these, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low-contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure the underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation with similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show better correlation between the UCIQE and the subjective mean opinion score. PMID:26513783
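
    A sketch of the UCIQE form described above, assuming the input is already converted to CIELab. The weights are the ones reported for the metric but should be treated as indicative here, and the percentile-based contrast and chroma-over-lightness saturation below are common choices that may differ in detail from the paper's exact definitions.

```python
import numpy as np

def uciqe(L, a, b, w=(0.4680, 0.2745, 0.2576)):
    """L, a, b: CIELab channel arrays; returns the linear-combination score."""
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()                            # chroma spread
    con_l = np.percentile(L, 99) - np.percentile(L, 1)  # luminance contrast
    mu_s = np.mean(chroma / (L + 1e-12))              # mean saturation
    return w[0] * sigma_c + w[1] * con_l + w[2] * mu_s

rng = np.random.default_rng(3)
L = rng.uniform(20.0, 80.0, (64, 64))
a = rng.uniform(-30.0, 30.0, (64, 64))
b = rng.uniform(-30.0, 30.0, (64, 64))
print(round(uciqe(L, a, b), 3))  # higher = more colourful, contrasted image
```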

  7. Application of sigma metrics for the assessment of quality assurance in clinical biochemistry laboratory in India: a pilot study.

    Science.gov (United States)

    Singh, Bhawna; Goswami, Binita; Gupta, Vinod Kumar; Chawla, Ranjna; Mallika, Venkatesan

    2011-04-01

    Ensuring quality of laboratory services is the need of the hour in the field of health care. Keeping in mind the revolution ushered in by the six sigma concept in the corporate world, the health care sector may reap the benefits of the same. Six sigma provides a general methodology to describe performance on the sigma scale. We aimed to gauge our laboratory performance by sigma metrics. Internal quality control (QC) data was analyzed retrospectively over a period of 6 months from July 2009 to December 2009. Laboratory mean, standard deviation and coefficient of variation were calculated for all the parameters. Sigma was calculated for both levels of internal QC. Satisfactory sigma values (>6) were elicited for creatinine, triglycerides, SGOT, CPK-Total and amylase. Blood urea performed poorly on the sigma scale; efforts should therefore be directed at achieving six sigma standards for all the analytical processes. PMID: 22468038

  8. Design Metrics Which Predict Source Code Quality

    OpenAIRE

    Hartson, H.Rex; Smith, Eric C.; Henry, Sallie M.; Selig, Calvin

    1987-01-01

    Since the inception of software engineering, the major goal has been to control the development and maintenance of reliable software. To this end, many different design methodologies have been presented as a means to improve software quality through semantic clarity and syntactic accuracy during the specification and design phases of the software life cycle. On the other end of the life cycle, software quality metrics have been proposed to supply quantitative measures of the resultant softwar...

  9. How to evaluate objective video quality metrics reliably

    DEFF Research Database (Denmark)

    Korhonen, Jari; Burini, Nino; You, Junyong;

    2012-01-01

    The typical procedure for evaluating the performance of different objective quality metrics and indices involves comparisons between subjective quality ratings and the quality indices obtained using the objective metrics in question on the known video sequences. Several correlation indicators can...
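
    The procedure this record refers to is typically: fit a monotonic (logistic) mapping from objective scores to subjective ratings, then report Pearson correlation, Spearman rank correlation and RMSE. A synthetic-data sketch using scipy; the data and the four-parameter logistic are illustrative stand-ins for real subjective tests.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic(x, a, b, c, d):
    return a / (1.0 + np.exp(-c * (x - d))) + b

rng = np.random.default_rng(0)
metric = rng.uniform(20, 50, 40)   # e.g., PSNR values for 40 sequences
mos = logistic(metric, 4.0, 1.0, 0.3, 35.0) + rng.normal(0, 0.2, 40)

p0 = [mos.max() - mos.min(), mos.min(), 0.1, metric.mean()]
params, _ = curve_fit(logistic, metric, mos, p0=p0, maxfev=10000)
pred = logistic(metric, *params)

print("PLCC :", round(pearsonr(pred, mos)[0], 3))   # linearity after fitting
print("SROCC:", round(spearmanr(metric, mos)[0], 3))  # rank agreement
print("RMSE :", round(float(np.sqrt(np.mean((pred - mos) ** 2))), 3))
```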

  10. Golden Horn Estuary: Description of the ecosystem and an attempt to assess its ecological quality status using various classification metrics

    Directory of Open Access Journals (Sweden)

    S. ALBAYRAK

    2012-12-01

    Full Text Available In this paper, we describe the pelagic and benthic ecosystem of the Golden Horn estuary, which opens into the Marmara Sea. To improve the water quality of the estuary, which had long been subject to severe anthropogenic pollution (industrial, chemical, shipping), industrial facilities were moved away from the estuary in the 1980s, followed by a rehabilitation plan in the 1990s. Our results, based on chemical parameters and phytoplankton, showed some signs of improvement of water conditions in the upper layer. However, the macrozoobenthic findings of this study did not reflect such a recovery in bottom life. An approach to the Ecological Quality Status (EQS) assessment was performed by applying the biotic indices BENTIX, AMBI, BOPA and BO2A. Our final assessment was based on expert judgement and revealed a very disturbed overall ecosystem, with 'bad' EQS for the station at the head of the estuary, 'poor' in the rest of the estuary, and 'moderate' EQS only in the middle station.

  11. A Game Assessment Metric for the Online Gamer

    OpenAIRE

    DENIEFFE, D.; CARRIG, B.; D. Marshall; PICOVICI, D.

    2007-01-01

    This paper describes a new game assessment metric for the online gamer. The metric is based on a mathematical model currently used for network planning assessment. Besides the traditional network-based parameters such as delay, jitter and packet loss, new parameters based on online players' game experience/knowledge are introduced. The metric aims to estimate game quality as perceived by an online player. Measurements can be achieved in real-time or near real-time and could be useful to both o...

  12. New Quality Metrics for Web Search Results

    Science.gov (United States)

    Metaxas, Panagiotis Takis; Ivanova, Lilia; Mustafaraj, Eni

    Web search results enjoy an increasing importance in our daily lives. But what can be said about their quality, especially when querying a controversial issue? The traditional information retrieval metrics of precision and recall do not provide much insight in the case of web information retrieval. In this paper we examine new ways of evaluating quality in search results: coverage and independence. We give examples on how these new metrics can be calculated and what their values reveal regarding the two major search engines, Google and Yahoo. We have found evidence of low coverage for commercial and medical controversial queries, and high coverage for a political query that is highly contested. Given the fact that search engines are unwilling to tune their search results manually, except in a few cases that have become the source of bad publicity, low coverage and independence reveal the efforts of dedicated groups to manipulate the search results.

  13. Towards Video Quality Metrics Based on Colour Fractal Geometry

    Directory of Open Access Journals (Sweden)

    Richard Noël

    2010-01-01

    Full Text Available Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology and colour. Unfortunately, so far all these elements have been taken into consideration independently in the development of image and video quality metrics, so we propose an approach that blends them together. Our approach allows for the analysis of the complexity of colour images in the RGB colour space, based on the probabilistic algorithm for calculating the fractal dimension and lacunarity. Given that all the existing fractal approaches are defined only for gray-scale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the modification of the fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal, and that they can be used as metrics for user-perceived video quality degradation; we validated them through experimental results obtained for an MPEG-4 video streaming application. Finally, the results are compared against those given by widely accepted metrics and against subjective tests.
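
    A much-simplified box-counting sketch of the colour fractal idea: treat each pixel as a point in the 5-D (x, y, R, G, B) cube, count occupied boxes at several scales, and fit the log-log slope. The paper uses a probabilistic algorithm and also computes lacunarity; neither is reproduced here, and the scale choices are illustrative.

```python
import numpy as np

def color_box_dimension(img, scales=(2, 4, 8)):
    """img: (h, w, 3) integer array. Returns a box-counting slope."""
    h, w, _ = img.shape
    # normalize pixel coordinates to the same 0..255 range as the colours
    xs = np.repeat(np.arange(h), w) * 255 // max(h - 1, 1)
    ys = np.tile(np.arange(w), h) * 255 // max(w - 1, 1)
    rgb = img.reshape(-1, 3).astype(int)
    points = np.column_stack([xs, ys, rgb])       # one 5-D point per pixel
    counts = []
    for s in scales:                              # s boxes per axis
        boxes = points // (256 // s)
        counts.append(len({tuple(p) for p in boxes}))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

rng = np.random.default_rng(5)
noisy = rng.integers(0, 256, (64, 64, 3))
flat = np.full((64, 64, 3), 128)
print(round(color_box_dimension(noisy), 2))  # ~3.5: complex colour texture
print(round(color_box_dimension(flat), 2))   # 2.0: only spatial extent
```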

  14. Metrics for the Evaluation the Utility of Air Quality Forecasting

    Science.gov (United States)

    Sumo, T. M.; Stockwell, W. R.

    2013-12-01

    Global warming is expected to lead to higher levels of air pollution, and therefore the forecasting of both long-term and daily air quality is an important component in assessing the costs of climate change and its impact on human health. The risks associated with poor air quality days (where the Air Pollution Index is greater than 100) include hospital visits and mortality. Accurate air quality forecasting has the potential to allow sensitive groups to take appropriate precautions. This research builds metrics for evaluating the utility of air quality forecasting in terms of its potential impacts. Our analysis of air quality models focuses on the Washington, DC/Baltimore, MD region over the summertime ozone seasons between 2010 and 2012. The metrics that are relevant to our analysis include: (1) the number of times that a high ozone or particulate matter (PM) episode is correctly forecast, (2) the number of times that a high ozone or PM episode is forecast when it does not occur, and (3) the number of times when the forecast predicts a cleaner air episode but the air is observed to have high ozone or PM. Our evaluation of the performance of air quality forecasts includes forecasts of ozone and particulate matter and data available from the U.S. Environmental Protection Agency (EPA)'s AIRNOW. We also examined observational ozone and particulate matter data available from Clean Air Partners. Overall, the forecast models perform well for our region and time interval.
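
    The three counts listed above form a standard forecast contingency table, from which skill scores such as the probability of detection (POD) and false alarm ratio (FAR) follow directly; a minimal tally over made-up daily flags:

```python
# hits / false alarms / misses from paired forecast-vs-observation flags.
def forecast_skill(forecast_unhealthy, observed_unhealthy):
    hits = false_alarms = misses = 0
    for f, o in zip(forecast_unhealthy, observed_unhealthy):
        if f and o:
            hits += 1            # episode forecast and it occurred
        elif f and not o:
            false_alarms += 1    # episode forecast, air stayed clean
        elif o and not f:
            misses += 1          # clean-air forecast, episode occurred
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = (false_alarms / (hits + false_alarms)
           if hits + false_alarms else float("nan"))
    return {"hits": hits, "false_alarms": false_alarms, "misses": misses,
            "POD": round(pod, 2), "FAR": round(far, 2)}

# AQI > 100 flags, day by day (hypothetical season extract)
forecast = [1, 0, 1, 1, 0, 0, 1]
observed = [1, 0, 0, 1, 0, 1, 1]
print(forecast_skill(forecast, observed))
# {'hits': 3, 'false_alarms': 1, 'misses': 1, 'POD': 0.75, 'FAR': 0.25}
```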

  15. [Clinical trial data management and quality metrics system].

    Science.gov (United States)

    Chen, Zhao-hua; Huang, Qin; Deng, Ya-zhong; Zhang, Yue; Xu, Yu; Yu, Hao; Liu, Zong-fan

    2015-11-01

    A data quality management system is essential to ensure accurate, complete, consistent, and reliable data collection in clinical research. This paper is devoted to various choices of data quality metrics. They are categorized by study status, e.g. study start-up, conduct, and close-out. In each category, metrics for different purposes are listed according to ALCOA+ principles such as completeness, accuracy, timeliness, traceability, etc. Some general quality metrics in frequent use are also introduced. This paper gives as much detailed information as possible for each metric, providing definition, purpose, evaluation, referenced benchmark, and recommended targets in favor of real practice. It is important that sponsors and data management service providers establish a robust, integrated clinical trial data quality management system to ensure sustainably high quality of clinical trial deliverables. It will also support enterprise-level data evaluation and benchmarking of data quality across projects, sponsors, and data management service providers by using objective metrics from real clinical trials. We hope this will be a significant input to accelerate the improvement of clinical trial data quality in the industry.

  16. A universal color image quality metric

    NARCIS (Netherlands)

    Toet, A.; Lucassen, M.P.

    2003-01-01

    We extend a recently introduced universal grayscale image quality index to a newly developed perceptually decorrelated color space. The resulting color image quality index quantifies the distortion of a processed color image relative to its original version. We evaluated the new color image quality

  17. Development of soil quality metrics using mycorrhizal fungi

    Energy Technology Data Exchange (ETDEWEB)

    Baar, J.

    2010-07-01

    Based on the Treaty on Biological Diversity of Rio de Janeiro in 1992 for maintaining and increasing biodiversity, several countries have started programmes monitoring soil quality and above- and below-ground biodiversity. Within the European Union, policy makers are working on legislation for soil protection and management. Therefore, indicators are needed to monitor the status of soils, and these indicators, reflecting soil quality, can be integrated into working standards or soil quality metrics. Soil micro-organisms, particularly arbuscular mycorrhizal fungi (AMF), are indicative of soil changes. These soil fungi live in symbiosis with the great majority of plants and are sensitive to changes in the physico-chemical conditions of the soil. The aim of this study was to investigate whether AMF are reliable and sensitive indicators of disturbances in soils and can be used for the development of soil quality metrics. It was also studied whether soil quality metrics based on AMF meet the requirements for applicability by users and policy makers. Ecological criteria were set for the development of soil quality metrics for different soils. Multiple root samples containing AMF from various locations in The Netherlands were analyzed. The results of the analyses were related to the defined criteria. This resulted in two soil quality metrics, one for sandy soils and a second one for clay soils, with six different categories ranging from very bad to very good. These soil quality metrics meet the majority of requirements for applicability and are potentially useful for the development of legislation for the protection of soil quality. (Author) 23 refs.

  18. Intersection of quality metrics and Medicare policy.

    Science.gov (United States)

    Nau, David P

    2011-12-01

    The federal government is increasing its push for a high-value health care system by increasing transparency and accountability related to quality. The Medicare program has begun to publicly rate the quality of Medicare plans, including prescription drug plans, and is transforming its payment policies to reward plans that deliver the highest levels of quality. These policies will have a cascade effect on pharmacies and pharmacists as the Medicare plans look for assistance in improving the quality of medication use. This commentary describes the Medicare policies directed toward improvement of quality and their effect on pharmacy payment and opportunities for pharmacists to affirm their role in a high-quality medication use system. PMID:22045907

  19. Quality metrics can help the expert during neurological clinical trials

    Science.gov (United States)

    Mahé, L.; Autrusseau, F.; Desal, H.; Guédon, J.; Der Sarkissian, H.; Le Teurnier, Y.; Davila, S.

    2016-03-01

    Carotid surgery is a frequent procedure, accounting for 15 to 20 thousand operations per year in France. Cerebral perfusion has to be tracked before and after carotid surgery. In this paper, a diagnosis support using quality metrics is proposed to detect vascular lesions on MR images. Our key aim is to provide a detection tool that mimics the behavior of the human visual system during visual inspection. Relevant Human Visual System (HVS) properties should be integrated in our lesion detection method, which must be robust to common distortions in medical images. Our goal is twofold: to help the neuroradiologist perform the task better and faster, but also to provide a way to reduce the risk of bias in image analysis. Objective quality metrics (OQM) are methods whose goal is to predict perceived quality. In this work, we use objective quality metrics to detect perceivable differences between pairs of images.

  20. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark;

    2011-01-01

    and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use...... of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed...

  1. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark;

    2012-01-01

    and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use...... of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed...

  2. A priori discretization quality metrics for distributed hydrologic modeling applications

    Science.gov (United States)

    Liu, Hongli; Tolson, Bryan; Craig, James; Shafii, Mahyar; Basu, Nandita

    2016-04-01

    In distributed hydrologic modelling, a watershed is treated as a set of small homogeneous units that address the spatial heterogeneity of the watershed being simulated. The ability of models to reproduce observed spatial patterns firstly depends on the spatial discretization, which is the process of defining homogeneous units in the form of grid cells, subwatersheds, or hydrologic response units etc. It is common for hydrologic modelling studies to simply adopt a nominal or default discretization strategy without formally assessing alternative discretization levels. This approach lacks formal justifications and is thus problematic. More formalized discretization strategies are either a priori or a posteriori with respect to building and running a hydrologic simulation model. A posteriori approaches tend to be ad-hoc and compare model calibration and/or validation performance under various watershed discretizations. The construction and calibration of multiple versions of a distributed model can become a seriously limiting computational burden. Current a priori approaches are more formalized and compare overall heterogeneity statistics of dominant variables between candidate discretization schemes and input data or reference zones. While a priori approaches are efficient and do not require running a hydrologic model, they do not fully investigate the internal spatial pattern changes of variables of interest. Furthermore, the existing a priori approaches focus on landscape and soil data and do not assess impacts of discretization on stream channel definition even though its significance has been noted by numerous studies. The primary goals of this study are to (1) introduce new a priori discretization quality metrics considering the spatial pattern changes of model input data; (2) introduce a two-step discretization decision-making approach to compress extreme errors and meet user-specified discretization expectations through non-uniform discretization threshold

  3. Quality Assessment in Oncology

    International Nuclear Information System (INIS)

    The movement to improve healthcare quality has led to a need for carefully designed quality indicators that accurately reflect the quality of care. Many different measures have been proposed and continue to be developed by governmental agencies and accrediting bodies. However, given the inherent differences in the delivery of care among medical specialties, the same indicators will not be valid across all of them. Specifically, oncology is a field in which it can be difficult to develop quality indicators, because the effectiveness of an oncologic intervention is often not immediately apparent, and the multidisciplinary nature of the field necessarily involves many different specialties. Existing and emerging comparative effectiveness data are helping to guide evidence-based practice, and the increasing availability of these data provides the opportunity to identify key structure and process measures that predict for quality outcomes. The increasing emphasis on quality and efficiency will continue to compel the medical profession to identify appropriate quality measures to facilitate quality improvement efforts and to guide accreditation, credentialing, and reimbursement. Given the wide-reaching implications of quality metrics, it is essential that they be developed and implemented with scientific rigor. The aims of the present report were to review the current state of quality assessment in oncology, identify existing indicators with the best evidence to support their implementation, and propose a framework for identifying and refining measures most indicative of true quality in oncologic care.

  4. PSQM-based RR and NR video quality metrics

    Science.gov (United States)

    Lu, Zhongkang; Lin, Weisi; Ong, Eeping; Yang, Xiaokang; Yao, Susu

    2003-06-01

    This paper presents a new and general concept, the PQSM (Perceptual Quality Significance Map), to be used in measuring visual distortion. It makes use of the selectivity characteristic of the HVS (Human Visual System), which pays more attention to certain areas/regions of a visual signal due to one or more of the following factors: salient features in the image/video, cues from domain knowledge, and association of other media (e.g., speech or audio). The PQSM is an array whose elements represent the relative perceptual-quality significance levels of the corresponding areas/regions of an image or video. Due to its generality, the PQSM can be incorporated into any visual distortion metric: to improve the effectiveness or/and efficiency of perceptual metrics, or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed in this paper, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show the scheme can improve the performance of current image/video distortion metrics.
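
    The pooling idea reduces to weighting a per-pixel distortion map by the PQSM before averaging. In this sketch both the significance map and the distortion are synthetic stand-ins for the paper's motion/texture/luminance/skin-colour/face analysis:

```python
import numpy as np

def pqsm_weighted_mse(ref, dist, pqsm):
    """MSE pooled with per-pixel perceptual-significance weights."""
    err = (ref.astype(float) - dist.astype(float)) ** 2
    return float((pqsm * err).sum() / pqsm.sum())

rng = np.random.default_rng(9)
ref = rng.uniform(0, 255, (32, 32))
dist = ref + rng.normal(0, 5, (32, 32))
pqsm = np.ones((32, 32))
pqsm[8:24, 8:24] = 4.0   # hypothetical salient region, e.g. a face
print(round(pqsm_weighted_mse(ref, dist, pqsm), 2))
```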

  5. Spread spectrum image watermarking based on perceptual quality metric.

    Science.gov (United States)

    Zhang, Fan; Liu, Wenyu; Lin, Weisi; Ngan, King Ngi

    2011-11-01

    Efficient image watermarking calls for full exploitation of the perceptual distortion constraint. Second-order statistics of visual stimuli are regarded as critical features for perception. This paper proposes a second-order statistics (SOS)-based image quality metric, which considers the texture masking effect and the contrast sensitivity in Karhunen-Loève transform domain. Compared with the state-of-the-art metrics, the quality prediction by SOS better correlates with several subjectively rated image databases, in which the images are impaired by the typical coding and watermarking artifacts. With the explicit metric definition, spread spectrum watermarking is posed as an optimization problem: we search for a watermark to minimize the distortion of the watermarked image and to maximize the correlation between the watermark pattern and the spread spectrum carrier. The simple metric guarantees the optimal watermark a closed-form solution and a fast implementation. The experiments show that the proposed watermarking scheme can take full advantage of the distortion constraint and improve the robustness in return.

  6. Design For Six Sigma with Critical-To-Quality Metrics for Research Investments

    Energy Technology Data Exchange (ETDEWEB)

    Logan, R W

    2005-06-22

    Design for Six Sigma (DFSS) has evolved as a worthy predecessor to the application of Six-Sigma principles to production, process control, and quality. At Livermore National Laboratory (LLNL), we are exploring the interrelation of our current research, development, and design safety standards as they would relate to the principles of DFSS and Six-Sigma. We have had success in prioritization of research and design using a quantitative scalar metric for value, so we further explore the use of scalar metrics to represent the outcome of our use of the DFSS process. We use the design of an automotive component as an example of combining DFSS metrics into a scalar decision quantity. We then extend this concept to a high-priority, personnel safety example representing work that is toward the mature end of DFSS, and begins the transition into Six-Sigma for safety assessments in a production process. This latter example and objective involves the balance of research investment, quality control, and system operation and maintenance of high explosive handling at LLNL and related production facilities. Assuring a sufficiently low probability of failure (reaction of a high explosive given an accidental impact) is a Critical-To-Quality (CTQ) component of our weapons and stockpile stewardship operation and cost. Our use of DFSS principles, with quantification and merging of CTQ metrics, provides ways to quantify clear (preliminary) paths forward for both the automotive example and the explosive safety example. The presentation of simple, scalar metrics to quantify the path forward then provides a focal point for qualitative caveats and discussion for inclusion of other metrics besides a single, provocative scalar. In this way, carrying a scalar decision metric along with the DFSS process motivates further discussion and ideas for process improvement from the DFSS into the Six-Sigma phase of the product. We end with an example of how our DFSS-generated scalar metric could be...

  7. On the Efficiency of Image Metrics for Evaluating the Visual Quality of 3D Models.

    Science.gov (United States)

    Lavoue, Guillaume; Larabi, Mohamed Chaker; Vasa, Libor

    2016-08-01

    3D meshes are deployed in a wide range of application processes (e.g., transmission, compression, simplification, watermarking and so on) which inevitably introduce geometric distortions that may alter the visual quality of the rendered data. Hence, efficient model-based perceptual metrics, operating on the geometry of the meshes being compared, have recently been introduced to control and predict these visual artifacts. However, since the 3D models are ultimately visualized on 2D screens, it seems legitimate to use images of the models (i.e., snapshots from different viewpoints) to evaluate their visual fidelity. In this work we investigate the use of image metrics to assess the visual quality of 3D models. For this goal, we conduct a wide-ranging study involving several 2D metrics, rendering algorithms, lighting conditions and pooling algorithms, as well as several mean opinion score databases. The collected data allow us (1) to determine the best set of parameters to use for this image-based quality assessment approach and (2) to compare this approach to the best performing model-based metrics and determine for which use-cases each is respectively adapted. We conclude by exploring several applications that illustrate the benefits of image-based quality assessment. PMID:26394428
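
    A minimal sketch of the image-based approach, assuming the snapshots of the reference and distorted model have already been rendered from matching viewpoints: score each pair with a 2D metric (SSIM here) and pool the per-view scores; mean pooling is just one of the pooling choices the study compares.

```python
import numpy as np
from skimage.metrics import structural_similarity

def view_based_quality(ref_views, dist_views):
    """Score a distorted 3D model from per-viewpoint snapshots.

    ref_views, dist_views : lists of 2D grayscale float arrays in [0, 1],
    one pair per viewpoint (assumed pre-rendered under fixed lighting).
    """
    scores = [
        structural_similarity(r, d, data_range=1.0)
        for r, d in zip(ref_views, dist_views)
    ]
    return np.mean(scores)  # mean pooling; min or percentile pooling also common

# Toy usage with random arrays standing in for real renders.
rng = np.random.default_rng(1)
refs = [rng.random((128, 128)) for _ in range(6)]
dists = [np.clip(v + rng.normal(0, 0.02, v.shape), 0, 1) for v in refs]
print(view_based_quality(refs, dists))
```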

  8. A Validation of Object-Oriented Design Metrics as Quality Indicators

    Science.gov (United States)

    Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio

    1997-01-01

    This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in earlier work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to previous work in which the same suite of metrics was used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development process.
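
    For flavor, here is a small illustration of class-level design metrics in the Chidamber-Kemerer spirit, a weighted-methods-per-class proxy (raw method count) and number of children, computed for classes in a single Python module via the ast module; the study itself used C++ systems and the full metric suite, so this is only a toy analogue.

```python
import ast

def class_metrics(source: str):
    """Per-class method count (WMC proxy) and number of children (NOC),
    computed over one module's source; cross-module inheritance is ignored."""
    tree = ast.parse(source)
    classes = [n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]
    metrics = {}
    for cls in classes:
        n_methods = sum(isinstance(n, ast.FunctionDef) for n in cls.body)
        metrics[cls.name] = {"wmc": n_methods, "noc": 0}
    for cls in classes:  # count direct subclasses defined in this module
        for base in cls.bases:
            if isinstance(base, ast.Name) and base.id in metrics:
                metrics[base.id]["noc"] += 1
    return metrics

src = """
class Shape:
    def area(self): ...
    def perimeter(self): ...

class Circle(Shape):
    def area(self): ...
"""
print(class_metrics(src))
# {'Shape': {'wmc': 2, 'noc': 1}, 'Circle': {'wmc': 1, 'noc': 0}}
```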

  9. Using Qualitative and Quantitative Methods to Choose a Habitat Quality Metric for Air Pollution Policy Evaluation.

    Science.gov (United States)

    Rowe, Edwin C; Ford, Adriana E S; Smart, Simon M; Henrys, Peter A; Ashmore, Mike R

    2016-01-01

    Atmospheric nitrogen (N) deposition has had detrimental effects on species composition in a range of sensitive habitats, although N deposition can also increase agricultural productivity and carbon storage, and favours a few species considered of importance for conservation. Conservation targets are multiple, and increasingly incorporate services derived from nature as well as concepts of intrinsic value. Priorities vary. How then should changes in a set of species caused by drivers such as N deposition be assessed? We used a novel combination of qualitative semi-structured interviews and quantitative ranking to elucidate the views of conservation professionals specialising in grasslands, heathlands and mires. Although conservation management goals are varied, terrestrial habitat quality is mainly assessed by these specialists on the basis of plant species, since these are readily observed. The presence and abundance of plant species that are scarce, or have important functional roles, emerged as important criteria for judging overall habitat quality. However, species defined as 'positive indicator-species' (not particularly scarce, but distinctive for the habitat) were considered particularly important. Scarce species are by definition not always found, and the presence of functionally important species is not a sufficient indicator of site quality. Habitat quality as assessed by the key informants was rank-correlated with the number of positive indicator-species present at a site for seven of the nine habitat classes assessed. Other metrics such as species-richness or a metric of scarcity were inconsistently or not correlated with the specialists' assessments. We recommend that metrics of habitat quality used to assess N pollution impacts are based on the occurrence of, or habitat-suitability for, distinctive species. Metrics of this type are likely to be widely applicable for assessing habitat change in response to different drivers. The novel combined...

  11. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark;

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed...

  13. Modeling quality attributes and metrics for web service selection

    Science.gov (United States)

    Oskooei, Meysam Ahmadi; Daud, Salwani binti Mohd; Chua, Fang-Fang

    2014-06-01

    Since service-oriented architecture (SOA) is designed to build systems as distributed applications, service selection has become a vital aspect of service-oriented computing (SOC). Selecting the appropriate web service with respect to quality of service (QoS), by casting the choice as a mathematical optimization problem, has become a common concern for service users. Nowadays, the number of web services providing the same functionality has increased, and selecting a service from a set of alternatives that differ in their quality parameters can be difficult for service consumers. In this paper, a new model for QoS attributes and metrics is proposed to provide a suitable solution for optimizing web service selection and composition with low complexity.
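
    A common baseline for this kind of QoS-driven selection is simple additive weighting: normalize each attribute so that larger is better, then rank candidates by a weighted sum. The attributes, weights and values below are illustrative, not taken from the paper.

```python
import numpy as np

# Candidate services x attributes: latency (ms), availability (%), cost ($)
services = ["svc_a", "svc_b", "svc_c"]
qos = np.array([
    [120.0, 99.90, 0.10],
    [ 80.0, 99.00, 0.25],
    [200.0, 99.99, 0.05],
])
benefit = np.array([False, True, False])  # availability: higher is better
weights = np.array([0.5, 0.3, 0.2])       # user preference weights, sum to 1

lo, hi = qos.min(axis=0), qos.max(axis=0)
span = np.where(hi > lo, hi - lo, 1.0)        # avoid division by zero
norm = np.where(benefit, (qos - lo) / span,   # benefit attributes
                (hi - qos) / span)            # cost attributes
scores = norm @ weights
print(services[int(np.argmax(scores))], scores)
```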

  14. Detection of image quality metamers based on the metric for unified image quality

    Science.gov (United States)

    Miyata, Kimiyoshi; Tsumura, Norimichi

    2012-01-01

    In this paper, we introduce the concept of image quality metamerism as an expanded version of metamerism as defined in color science. The concept is used to unify different image quality attributes and is applied to introduce a metric expressing the degree of image quality metamerism, used here to analyze a cultural property. Our long-term goal is to build a metric that evaluates the total quality of images acquired by different imaging systems and observed under different viewing conditions. As a first step toward that goal, the metric in this research consists of color, spectral and texture information, and it is applied to detect image quality metamers in the investigation of the cultural property. The property investigated is the oldest extant version of the folding screen paintings that depict the thriving city of Kyoto, designated as a nationally important cultural property in Japan. Gold-colored areas, painted with colorants of higher granularity than those of the other color areas, are evaluated with the metric, which is then visualized as a map showing, for each pixel, the possibility that it is an image quality metamer of the reference pixel.

  15. "Assessment of different bioequivalent metrics in Rifampin bioequivalence study "

    Directory of Open Access Journals (Sweden)

    "Rouini MR

    2002-08-01

    The use of secondary metrics has become of special interest in bioequivalence studies. The applicability of the partial-area method, truncated AUC and Cmax/AUC has been argued by many authors. This study aims to evaluate the possible superiority of these metrics over the primary metrics (i.e., AUCinf, Cmax and Tmax). The suitability of truncated AUC for assessing the extent of absorption, and of Cmax/AUC and partial AUC for evaluating the rate of absorption in bioequivalence determination, was investigated following administration of the same product as both test and reference to 7 healthy volunteers. Among the pharmacokinetic parameters obtained, Cmax/AUCinf was a better indicator of absorption rate, and AUCinf was more sensitive than truncated AUC in evaluating the extent of absorption.
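
    For reference, the primary metrics named here follow directly from a concentration-time profile: Cmax and Tmax are read off the curve, AUC comes from trapezoidal integration, and Cmax/AUC is their ratio. The profile below is synthetic.

```python
import numpy as np

t = np.array([0.0, 0.5, 1, 2, 4, 6, 8, 12, 24])              # time (h)
c = np.array([0.0, 3.1, 5.8, 7.2, 6.0, 4.1, 2.7, 1.2, 0.2])  # conc. (mg/L)

cmax = c.max()
tmax = t[c.argmax()]
auc_t = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2.0)  # linear trapezoidal AUC(0-t)
# AUCinf would add c[-1]/lambda_z, with lambda_z from a terminal log-linear fit.

print(f"Cmax = {cmax} mg/L, Tmax = {tmax} h")
print(f"AUC(0-t) = {auc_t:.2f} mg*h/L, Cmax/AUC = {cmax / auc_t:.3f} 1/h")
```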

  16. Operator-based metric for nuclear operations automation assessment

    Energy Technology Data Exchange (ETDEWEB)

    Zacharias, G.L.; Miao, A.X.; Kalkan, A. [Charles River Analytics Inc., Cambridge, MA (United States)] [and others]

    1995-04-01

    Continuing advances in real-time computational capabilities will support enhanced levels of smart automation and AI-based decision-aiding systems in the nuclear power plant (NPP) control room of the future. To support development of these aids, we describe in this paper a research tool, and more specifically, a quantitative metric, to assess the impact of proposed automation/aiding concepts in a manner that can account for a number of interlinked factors in the control room environment. In particular, we describe a cognitive operator/plant model that serves as a framework for integrating the operator's information-processing capabilities with his procedural knowledge, to provide insight as to how situations are assessed by the operator, decisions made, procedures executed, and communications conducted. Our focus is on the situation assessment (SA) behavior of the operator, the development of a quantitative metric reflecting overall operator awareness, and the use of this metric in evaluating automation/aiding options. We describe the results of a model-based simulation of a selected emergency scenario, and metric-based evaluation of a range of contemplated NPP control room automation/aiding options. The results demonstrate the feasibility of model-based analysis of contemplated control room enhancements, and highlight the need for empirical validation.

  17. Applicability of Existing Objective Metrics of Perceptual Quality for Adaptive Video Streaming

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Krasula, Lukás; Shahid, Muhammad;

    2016-01-01

    Objective video quality metrics are designed to estimate the quality of experience of the end user. However, these objective metrics are usually validated with video streams degraded under common distortion types. In the presented work, we analyze the performance of published and known full-reference and no-reference quality metrics in estimating the perceived quality of adaptive bit-rate video streams, which are knowingly out of scope. Experimental results indicate, not surprisingly, that state-of-the-art objective quality metrics overlook the perceived degradations in adaptive video streams and perform poorly...

  18. Enhancing the quality metric of protein microarray image

    Institute of Scientific and Technical Information of China (English)

    王立强; 倪旭翔; 陆祖康; 郑旭峰; 李映笙

    2004-01-01

    The novel method presented in this paper for improving the quality metric of protein microarray images reduces impulse noise by using an adaptive median filter that employs a switching scheme based on local statistical characteristics; impulse detection uses the difference between the standard deviation of the pixels within the filter window and the current pixel of concern. The method also uses a top-hat filter to correct the background variation. In order to decrease time consumption, the top-hat filter kernel has a cross structure. The experimental results showed that, for a protein microarray image contaminated by impulse noise and with slow background variation, the new method can significantly increase the signal-to-noise ratio, correct the trends in the background, and enhance the flatness of the background and the consistency of the signal intensity.
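
    A compact sketch of the two stages described, a switching median filter that replaces only pixels flagged as impulses, followed by a white top-hat with a cross-shaped structuring element for background correction; window sizes and the threshold factor are illustrative, not the paper's tuned values.

```python
import numpy as np
from scipy import ndimage

def cross_footprint(size):
    """Cross-shaped structuring element (cheaper than a full square kernel)."""
    fp = np.zeros((size, size), dtype=bool)
    fp[size // 2, :] = True
    fp[:, size // 2] = True
    return fp

def denoise_and_flatten(img, win=3, k=2.0, bg_size=15):
    # Switching median: replace a pixel only if it deviates from the local
    # median by more than k times the local standard deviation.
    med = ndimage.median_filter(img, size=win)
    mean = ndimage.uniform_filter(img, size=win)
    sq_mean = ndimage.uniform_filter(img * img, size=win)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    cleaned = np.where(np.abs(img - med) > k * std, med, img)
    # White top-hat (image minus morphological opening) with a cross kernel
    # removes slowly varying background while keeping small bright spots.
    return ndimage.white_tophat(cleaned, footprint=cross_footprint(bg_size))

spots = np.random.rand(64, 64)
print(denoise_and_flatten(spots).shape)
```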

  19. Macroinvertebrate and diatom metrics as indicators of water-quality conditions in connected depression wetlands in the Mississippi Alluvial Plain

    Science.gov (United States)

    Justus, Billy; Burge, David; Cobb, Jennifer; Marsico, Travis; Bouldin, Jennifer

    2016-01-01

    Methods for assessing wetland conditions must be established so wetlands can be monitored and ecological services can be protected. We evaluated biological indices compiled from macroinvertebrate and diatom metrics developed primarily for streams to assess their ability to indicate water quality in connected depression wetlands. We collected water-quality and biological samples at 24 connected depressions dominated by water tupelo (Nyssa aquatica) or bald cypress (Taxodium distichum) (water depths = 0.5–1.0 m). Water quality of the least-disturbed connected depressions was characteristic of swamps in the southeastern USA, which tend to have low specific conductance, nutrient concentrations, and pH. We compared 162 macroinvertebrate metrics and 123 diatom metrics with a water-quality disturbance gradient. For most metrics, we evaluated richness, % richness, abundance, and % relative abundance values. Three of the 4 macroinvertebrate metrics that were most beneficial for identifying disturbance in connected depressions decreased along the disturbance gradient even though they normally increase relative to stream disturbance. The negative relationship to disturbance of some taxa (e.g., dipterans, mollusks, and crustaceans) that are considered tolerant in streams suggests that the tolerance scale for some macroinvertebrates can differ markedly between streams and wetlands. Three of the 4 metrics chosen for the diatom index reflected published tolerances or fit the usual perception of metric response to disturbance. Both biological indices may be useful in connected depressions elsewhere in the Mississippi Alluvial Plain Ecoregion and could have application in other wetland types. Given the paradoxical relationship of some macroinvertebrate metrics to dissolved O2 (DO), we suggest that the diatom metrics may be easier to interpret and defend for wetlands with low DO concentrations in least-disturbed conditions.

  20. Digitization and metric conversion for image quality test targets: Part II

    Science.gov (United States)

    Kress, William C.

    2003-12-01

    A common need of the INCITS W1.1 Macro Uniformity, Color Rendition and Micro Uniformity ad hoc efforts is to digitize image quality test targets and derive parameters that correlate with image quality assessments. The digitized data should be in a colorimetric color space such as CIELAB, and the process of digitizing should introduce no spatial artifacts that reduce the accuracy of image quality parameters. Input digitizers come in many forms, including inexpensive scanners used in the home, a range of sophisticated scanners used for graphic arts, and scanners used for scientific and industrial measurements (e.g., microdensitometers). Some of these are capable of digitizing hard-copy output for objective image quality metrics, and this report focuses on the assessment of high-quality flatbed scanners for that role. Digitization using flatbed scanners is attractive because they are relatively inexpensive, easy to use, and mostly available with document feeders, permitting analysis of a stack of documents with little user interaction. Other authors have addressed using scanners for image quality measurements. This paper focuses on (1) color transformations from RGB to CIELAB and (2) sampling issues, and demonstrates that flatbed scanners can achieve a high level of accuracy in generating accurate, stable images in the CIELAB metric. Previous discussion and experimental results focusing on color conversions had been presented at PICS 2003. This paper reviews that discussion with some refinement based on recent experiments and extends the analysis into color accuracy verification and sampling issues.
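
    If the scanner output can be treated as sRGB (a calibrated workflow would instead apply the scanner's ICC profile), the RGB-to-CIELAB step looks like this with scikit-image; the arrays here are placeholders for real scans.

```python
import numpy as np
from skimage import color

# Placeholder for an sRGB scan normalized to [0, 1]; a calibrated workflow
# would first map scanner RGB through the device's ICC profile.
scan = np.random.rand(32, 32, 3)
reference = np.clip(scan + 0.01, 0.0, 1.0)  # stand-in for a reference target

lab_scan = color.rgb2lab(scan)              # sRGB -> CIELAB, D65/2deg default
lab_ref = color.rgb2lab(reference)

# Colorimetric error between scan and reference, reported as CIE76 delta-E.
delta_e = color.deltaE_cie76(lab_ref, lab_scan)
print("mean dE76:", delta_e.mean())
```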

  1. Sigma metrics in clinical chemistry laboratory – A guide to quality control

    Directory of Open Access Journals (Sweden)

    Usha S. Adiga

    2015-10-01

    Background: Six Sigma is a quality measurement and improvement program used in industry. Sigma methodology can be applied wherever an outcome of a process is to be measured; a poor outcome is counted as an error or defect, quantified as defects per million (DPM). Six Sigma provides a quantitative framework for evaluating process performance, with evidence for process improvement, and describes how many sigmas fit within the tolerance limits. Sigma metrics can be used effectively in laboratory services. The present study was undertaken to evaluate the quality of the analytical performance of a clinical chemistry laboratory by calculating sigma metrics. Methodology: The study was conducted in the clinical biochemistry laboratory of Karwar Institute of Medical Sciences, Karwar. Sigma metrics of 15 parameters were analyzed on an automated chemistry analyzer (Transasia XL 640). The analytes assessed were glucose, urea, creatinine, uric acid, total bilirubin (BT), direct bilirubin (BD), total protein, albumin, SGOT, SGPT, ALP, total cholesterol, triglycerides, HDL and calcium. Results: Sigma values were <3 for urea, ALT, BD, BT, calcium and creatinine (L1) and for urea, AST and BD (L2). Sigma was between 3 and 6 for glucose, AST, cholesterol, uric acid and total protein (L1) and for ALT, cholesterol, BT, calcium, creatinine and glucose (L2). Sigma was more than 6 for triglycerides, ALP, HDL and albumin (L1) and for triglycerides, uric acid, ALP, HDL, albumin and total protein (L2). Conclusion: Sigma metrics help to assess analytical methodologies and augment laboratory performance. They act as a guide for planning quality control strategy and can be a self-assessment tool regarding the functioning of the clinical laboratory.
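
    The sigma metric behind these results is conventionally computed from the allowable total error (TEa), the observed bias, and the observed imprecision (CV), all in percent: sigma = (TEa - |bias|) / CV. A minimal helper, with illustrative numbers:

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma = (TEa - |bias|) / CV, all expressed in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Example: glucose with TEa = 10%, bias = 1.5%, CV = 2.0%  ->  4.25 sigma
print(sigma_metric(10.0, 1.5, 2.0))
```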

  2. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale

    CERN Document Server

    Emmons, Scott; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie network clustering. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics has been lacking. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms -- Blondel, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 o...
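
    Both families of measures compared in the paper are available in standard tooling, e.g. stand-alone modularity via NetworkX and information recovery scores via scikit-learn; the toy graph and ground truth below are for illustration only.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

G = nx.karate_club_graph()
found = list(greedy_modularity_communities(G))   # clustering under test
print("modularity:", modularity(G, found))       # stand-alone quality metric

# Information recovery against ground-truth labels (the two club factions).
truth = [G.nodes[n]["club"] for n in G]
pred = [next(i for i, c in enumerate(found) if n in c) for n in G]
print("ARI:", adjusted_rand_score(truth, pred))
print("NMI:", normalized_mutual_info_score(truth, pred))
```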

  3. SU-E-J-155: Automatic Quantitative Decision Making Metric for 4DCT Image Quality

    Energy Technology Data Exchange (ETDEWEB)

    Kiely, J Blanco; Olszanski, A; Both, S; White, B [University of Pennsylvania, Philadelphia, PA (United States); Low, D [Deparment of Radiation Oncology, University of California Los Angeles, Los Angeles, CA (United States)

    2015-06-15

    Purpose: To develop a quantitative decision making metric for automatically detecting irregular breathing using a large patient population that received phase-sorted 4DCT. Methods: This study employed two patient cohorts. Cohort#1 contained 256 patients who received a phase-sorted 4DCT. Cohort#2 contained 86 patients who received three weekly phase-sorted 4DCT scans. A previously published technique used a single abdominal surrogate to calculate the ratio of extreme inhalation tidal volume to normal inhalation tidal volume, referred to as the κ metric. Since a single surrogate is standard for phase-sorted 4DCT in radiation oncology clinical practice, tidal volume was not quantified. Without tidal volume, the absolute κ metric could not be determined, so a relative κ (κrel) metric was defined based on the measured surrogate amplitude instead of tidal volume. Receiver operator characteristic (ROC) curves were used to quantitatively determine the optimal cutoff value (jκ) and efficiency cutoff value (τκ) of κrel to automatically identify irregular breathing that would reduce the image quality of phase-sorted 4DCT. Discriminatory accuracy (area under the ROC curve) of κrel was calculated by a trapezoidal numeric integration technique. Results: The discriminatory accuracy of κrel was found to be 0.746. The key values of jκ and τκ were calculated to be 1.45 and 1.72 respectively. For values of κrel such that jκ ≤ κrel ≤ τκ, the decision to reacquire the 4DCT would be at the discretion of the physician. This accounted for only 11.9% of the patients in this study. The magnitude of κrel held consistent over 3 weeks for 73% of the patients in cohort#2. Conclusion: The decision making metric, κrel, was shown to be an accurate classifier of irregular breathing patients in a large patient population. This work provided an automatic quantitative decision making metric to quickly and accurately assess the extent to which irregular breathing is occurring during phase...
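
    The cutoff-selection step described here is a standard ROC computation. A sketch with simulated κrel scores and labels, using Youden's J as one common cutoff criterion (the abstract does not state which criterion the authors used):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
# Simulated kappa_rel scores: irregular breathers score higher on average.
scores = np.concatenate([rng.normal(1.2, 0.2, 200), rng.normal(1.8, 0.3, 56)])
labels = np.concatenate([np.zeros(200), np.ones(56)])  # 1 = irregular

fpr, tpr, thresholds = roc_curve(labels, scores)
print("discriminatory accuracy (AUC):", auc(fpr, tpr))

j = tpr - fpr                           # Youden's J statistic
print("optimal cutoff:", thresholds[np.argmax(j)])
```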

  5. Assessment and improvement of radiation oncology trainee contouring ability utilizing consensus-based penalty metrics

    International Nuclear Information System (INIS)

    The objective of this study was to develop and assess the feasibility of utilizing consensus-based penalty metrics for the purpose of critical structure and organ at risk (OAR) contouring quality assurance and improvement. A Delphi study was conducted to obtain consensus on contouring penalty metrics to assess trainee-generated OAR contours. Voxel-based penalty metric equations were used to score regions of discordance between trainee and expert contour sets. The utility of these penalty metric scores for objective feedback on contouring quality was assessed by using cases prepared for weekly radiation oncology trainee treatment planning rounds. In two Delphi rounds, six radiation oncology specialists reached agreement on clinical importance/impact and organ radiosensitivity as the two primary criteria for the creation of the Critical Structure Inter-comparison of Segmentation (CriSIS) penalty functions. Linear/quadratic penalty scoring functions (for over- and under-contouring) with one of four levels of severity (none, low, moderate and high) were assigned for each of 20 OARs in order to generate a CriSIS score when new OAR contours are compared with reference/expert standards. Six cases (central nervous system, head and neck, gastrointestinal, genitourinary, gynaecological and thoracic) were then used to validate 18 OAR metrics through comparison of trainee and expert contour sets using the consensus-derived CriSIS functions. For 14 OARs, there was an improvement in CriSIS score post-educational intervention. The use of consensus-based contouring penalty metrics to provide quantitative information for contouring improvement is feasible.
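
    The scoring idea can be sketched as follows: compare trainee and expert binary masks voxel by voxel, normalize the over- and under-contoured volumes, and apply severity weights with a linear or quadratic form. The actual CriSIS functions are not given in the abstract, so the weights and functional form below are purely illustrative.

```python
import numpy as np

def contour_penalty(trainee, expert, w_over=1.0, w_under=2.0, quadratic=False):
    """Severity-weighted penalty for discordant voxels between two masks."""
    trainee, expert = trainee.astype(bool), expert.astype(bool)
    over = np.count_nonzero(trainee & ~expert)   # contoured beyond expert
    under = np.count_nonzero(~trainee & expert)  # missed expert volume
    total = max(np.count_nonzero(expert), 1)
    o, u = over / total, under / total           # normalize by organ volume
    if quadratic:
        o, u = o * o, u * u
    return w_over * o + w_under * u

expert = np.zeros((20, 20, 20), bool); expert[5:15, 5:15, 5:15] = True
trainee = np.zeros_like(expert);       trainee[6:16, 5:15, 5:15] = True
print(contour_penalty(trainee, expert, quadratic=True))
```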

  6. Setting Maintenance Quality Objectives and Prioritizing Maintenance Work by Using Quality Metrics

    OpenAIRE

    Schneidewind, Norman F.

    1991-01-01

    We show how metrics that are collected and validated during development can be used during maintenance to control quality and prioritize maintenance work. Our approach is to capitalize on knowledge acquired and experience gained with the software during development through measurement. The motivation for this research stems from the need to provide maintenance management with the following: 1) quantitative basis for establishing quality objectives during ...

  7. Design and Implementation of Performance Metrics for Evaluation of Assessments Data

    CERN Document Server

    Ahmed, Irfan

    2015-01-01

    The objective of this paper is to design performance metrics and respective formulas to quantitatively evaluate the achievement of set objectives and expected outcomes at both the course and program levels. Evaluation is defined as one or more processes for interpreting the data acquired through the assessment processes in order to determine how well the set objectives and outcomes are being attained. Although assessment processes for accreditation are well documented, the existence of an evaluation process is typically assumed rather than specified. This paper focuses on the evaluation process, providing insights and techniques for data interpretation. It presents a complete evaluation process, from data collection through various assessment methods and performance metrics to presentation in the form of tables and graphs. The authors hope that the articulated description of the evaluation formulas will help convergence toward a high-quality standard in the evaluation process.
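
    One common formula of this kind computes, for each outcome, the percentage of students meeting a threshold score and then rolls course-level attainment up from the per-outcome values; the threshold, roll-up rule and data below are illustrative assumptions, not the paper's formulas.

```python
import numpy as np

# Rows = students, columns = course outcomes (scores out of 100).
scores = np.array([
    [72, 85, 60],
    [90, 55, 78],
    [65, 70, 82],
    [40, 88, 91],
])
threshold = 60.0  # a student "attains" an outcome at or above this score

# Per-outcome attainment: fraction of students meeting the threshold.
attainment = (scores >= threshold).mean(axis=0) * 100
for i, a in enumerate(attainment, 1):
    print(f"Outcome {i}: {a:.0f}% of students attained")

# Course-level attainment as the mean over outcomes (one simple roll-up).
print(f"Course attainment: {attainment.mean():.0f}%")
```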

  8. Economic Benefits: Metrics and Methods for Landscape Performance Assessment

    Directory of Open Access Journals (Sweden)

    Zhen Wang

    2016-04-01

    This paper introduces an expanding research frontier in the landscape architecture discipline, landscape performance research, which embraces the scientific dimension of landscape architecture through evidence-based designs anchored in quantitative performance assessment. Specifically, this paper summarizes metrics and methods for determining landscape-derived economic benefits that have been utilized in the Landscape Performance Series (LPS) initiated by the Landscape Architecture Foundation. This paper identifies 24 metrics and 32 associated methods for the assessment of economic benefits found in 82 published case studies. Common issues arising through research in quantifying economic benefits for the LPS are discussed, and the various approaches taken by researchers are clarified. The paper also provides an analysis of three case studies from the LPS that are representative of common research methods used to quantify economic benefits. The paper suggests that high(er) levels of sustainability in the built environment require the integration of economic benefits into landscape performance assessment portfolios in order to forecast project success and reduce uncertainties. Evidence-based design approaches therefore increase the scientific rigor of landscape architecture education and research, and elevate the status of the profession.

  9. Metrics-based assessments of research: incentives for 'institutional plagiarism'?

    Science.gov (United States)

    Berry, Colin

    2013-06-01

    The issue of plagiarism, claiming credit for work that is not one's own, rightly continues to cause concern in the academic community. An analysis is presented that shows the effects that may arise from metrics-based assessments of research when credit for an author's outputs (chiefly publications) is given to an institution that did not support the research but subsequently employs the author. The incentives for what is termed here "institutional plagiarism" are demonstrated with reference to the UK Research Assessment Exercise, in which submitting units of assessment are shown in some instances to derive around twice the credit for papers produced elsewhere by new recruits, compared to papers produced 'in-house'. PMID:22371031

  11. Video Object Relevance Metrics for Overall Segmentation Quality Evaluation

    OpenAIRE

    Correia Paulo; Pereira Fernando

    2006-01-01

    Video object segmentation is a task that humans perform efficiently and effectively, but which remains difficult for a computer. Since video segmentation plays an important role in many emerging applications, such as those enabled by the MPEG-4 and MPEG-7 standards, the ability to assess segmentation quality in view of the application targets is a relevant task for which a standard, or even a consensual, solution is not available. This paper considers the evaluation of overall segmentation...

  12. Effective dose efficiency: an application-specific metric of quality and dose for digital radiography

    Energy Technology Data Exchange (ETDEWEB)

    Samei, Ehsan; Ranger, Nicole T; Dobbins, James T III; Ravin, Carl E, E-mail: samei@duke.edu [Carl E Ravin Advanced Imaging Laboratories, Department of Radiology (United States)

    2011-08-21

    The detective quantum efficiency (DQE) and the effective DQE (eDQE) are relevant metrics of image quality for digital radiography detectors and systems, respectively. The current study further extends the eDQE methodology to technique optimization using a new metric, the effective dose efficiency (eDE), reflecting both the image quality and the effective dose (ED) attributes of the imaging system. Using phantoms representing pediatric, adult and large adult body habitus, image quality measurements were made at 80, 100, 120 and 140 kVp using the standard eDQE protocol and exposures. ED was computed using Monte Carlo methods. The eDE was then computed as the ratio of image quality to ED for each of the phantom/spectral conditions. The eDQE and eDE results showed the same trends across tube potential, with 80 kVp yielding the highest values and 120 kVp the lowest. The eDE results for the pediatric phantom were markedly lower than the results for the adult phantom at spatial frequencies below 1.2-1.7 mm⁻¹, primarily due to a correspondingly higher value of ED per entrance exposure. The relative performance for the adult and large adult phantoms was generally comparable but affected by kVp. The eDE results for the large adult configuration were lower than those for the adult phantom across all spatial frequencies (120 and 140 kVp) and at spatial frequencies greater than 1.0 mm⁻¹ (80 and 100 kVp). Demonstrated for chest radiography, the eDE shows promise as an application-specific metric of imaging performance, reflective of body habitus and radiographic technique, with utility for radiography protocol assessment and optimization.

  13. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged, and there are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become a more and more important performance feature. This work has several tasks. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.

  14. Large-scale seismic waveform quality metric calculation using Hadoop

    Science.gov (United States)

    Magana-Zook, S.; Gaylord, J. M.; Knapp, D. R.; Dodge, D. A.; Ruppert, S. D.

    2016-09-01

    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data of which 5.1 TB of data were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of ~0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. These experiments were conducted multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely...
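
    A minimal PySpark pattern for this kind of job: read the waveform files into an RDD, compute per-file quality metrics in parallel, and collect or aggregate the results. The path is a placeholder and compute_metrics is a hypothetical stand-in for the real per-waveform analysis (e.g., miniSEED parsing and gap/RMS statistics).

```python
from pyspark.sql import SparkSession

def compute_metrics(path, payload):
    """Hypothetical per-waveform analysis; a real job would parse the
    binary payload and return gap counts, RMS amplitude, etc."""
    return {"file": path, "nbytes": len(payload)}

spark = SparkSession.builder.appName("waveform-qc").getOrCreate()
sc = spark.sparkContext

# Each element is (path, bytes); Spark distributes files across executors.
files = sc.binaryFiles("hdfs:///data/waveforms/*")  # illustrative path
metrics = files.map(lambda kv: compute_metrics(*kv))

print(metrics.take(3))
spark.stop()
```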

  15. Research on a Layered Quality Metrics Model for Java Programs (层次型Java软件质量度量模型研究)

    Institute of Scientific and Technical Information of China (English)

    黄璜; 周欣; 孙家骕

    2003-01-01

    A metrics model is in fact a cluster of criteria for assessing software; it can show the characteristics of different software systems or modules and thus serve different user demands. Research on software metrics tries to give characteristic evaluations of software components during component extraction, and thereby supports users in selecting high-quality reusable components. Java has become one of today's main languages. In consideration of the characteristics of Java, and after research on several general metrics models, our model, the Quality Metrics Model for Java, was created. Following the "Factor-Criterion-Metrics" principle, detailed descriptions of the factors, criteria and metrics of our model are given. The metrics model offers a structured way of thinking; through it, we hope to normalize users' points of view. In JavaSQMM, four activities organize software quality evaluation: understanding, function implementing, maintaining and reusing; four corresponding quality factors arise, each composed of criteria and metrics. When designing our Java metrics model, the development of the Object Oriented Metrics Model Tool for Java (OOMTJava) provided support for a semi-automatic metrics process.

  16. Building a Reduced Reference Video Quality Metric with Very Low Overhead Using Multivariate Data Analysis

    Directory of Open Access Journals (Sweden)

    Tobias Oelbaum

    2008-10-01

    In this contribution, a reduced reference video quality metric for AVC/H.264 is proposed that needs only a very low overhead (not more than two bytes per sequence). This reduced reference metric uses well-established algorithms to measure objective features of the video such as 'blur' or 'blocking'. Those measurements are then combined into a single measurement of the overall video quality. The weights of the single features, and their combination, are determined using methods provided by multivariate data analysis. The proposed metric is verified using a data set of AVC/H.264-encoded videos and the corresponding results of a carefully designed and conducted subjective evaluation. Results show that the proposed reduced reference metric not only outperforms standard PSNR but also two well-known full reference metrics.

  17. Quality Metrics and Reliability Analysis of Laser Communication System

    Directory of Open Access Journals (Sweden)

    A. Arockia Bazil Raj

    2016-03-01

    Beam wandering is the main cause of major power loss in laser communication. To analyse this prerequisite in our environment, a 155 Mbps data transmission experimental setup was built with the necessary optoelectronic components for a link range of 0.5 km at an altitude of 15.25 m. A neuro-controller was developed inside an FPGA and used to stabilise the received beam at the centre of the detector plane. The Q-factor and bit error rate variation profiles are calculated using the signal statistics obtained from the eye diagram. The performance improvements of the laser communication system due to the incorporation of beam-wandering mitigation control are investigated and discussed in terms of various key communication quality assessment parameters. Defence Science Journal, Vol. 66, No. 2, March 2016, pp. 175-185, DOI: http://dx.doi.org/10.14429/dsj.66.9707
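
    The Q-factor and BER figures mentioned follow from standard eye-diagram statistics: with means and standard deviations of the sampled one and zero levels, Q = (mu1 - mu0) / (sigma1 + sigma0) and, under a Gaussian-noise approximation, BER is about 0.5 * erfc(Q / sqrt(2)). A direct translation with synthetic eye statistics:

```python
import numpy as np
from scipy.special import erfc

def q_factor(mu1, mu0, sigma1, sigma0):
    """Q-factor from eye-diagram level statistics."""
    return (mu1 - mu0) / (sigma1 + sigma0)

def ber_from_q(q):
    """Gaussian-noise approximation of BER for OOK signalling."""
    return 0.5 * erfc(q / np.sqrt(2.0))

q = q_factor(mu1=1.0, mu0=0.1, sigma1=0.07, sigma0=0.05)  # synthetic stats
print(f"Q = {q:.2f}, BER ~ {ber_from_q(q):.2e}")
```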

  18. Quality metrics in high-dimensional data visualization: an overview and systematization.

    Science.gov (United States)

    Bertini, Enrico; Tatu, Andrada; Keim, Daniel

    2011-12-01

    In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research.

  19. Metrics for Assessment of Smart Grid Data Integrity Attacks

    Energy Technology Data Exchange (ETDEWEB)

    Annarita Giani; Miles McQueen; Russell Bent; Kameshwar Poolla; Mark Hinrichs

    2012-07-01

    There is an emerging consensus that the nation's electricity grid is vulnerable to cyber attacks. This vulnerability arises from the increasing reliance on using remote measurements, transmitting them over legacy data networks to system operators who make critical decisions based on available data. Data integrity attacks are a class of cyber attacks that involve a compromise of information that is processed by the grid operator. This information can include meter readings of injected power at remote generators, power flows on transmission lines, and relay states. These data integrity attacks have consequences only when the system operator responds to compromised data by redispatching generation under normal or contingency protocols. These consequences include (a) financial losses from sub-optimal economic dispatch to service loads, (b) robustness/resiliency losses from placing the grid at operating points that are at greater risk from contingencies, and (c) systemic losses resulting from cascading failures induced by poor operational choices. This paper is focused on understanding the connections between grid operational procedures and cyber attacks. We first offer two examples to illustrate how data integrity attacks can cause economic and physical damage by misleading operators into taking inappropriate decisions. We then focus on unobservable data integrity attacks involving power meter data. These are coordinated attacks where the compromised data are consistent with the physics of power flow, and are therefore passed by any bad data detection algorithm. We develop metrics to assess the economic impact of these attacks under re-dispatch decisions using optimal power flow methods. These metrics can be used to prioritize the adoption of appropriate countermeasures including PMU placement, encryption, hardware upgrades, and advance attack detection algorithms.
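
    The "unobservable" attacks discussed here exploit the structure of linear state estimation: if measurements satisfy z = Hx + e, then adding a = Hc for any c shifts the estimated state by c while leaving the bad-data residual unchanged. A small demonstration with a random measurement matrix (a real study would use the grid's actual H and weighted least squares):

```python
import numpy as np

rng = np.random.default_rng(42)
H = rng.normal(size=(8, 3))                 # measurement matrix: 8 meters, 3 states
x = rng.normal(size=3)                      # true state
z = H @ x + rng.normal(scale=0.01, size=8)  # noisy measurements

def residual_norm(z):
    """Bad-data residual of a least-squares state estimate."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

c = np.array([0.5, -0.2, 0.0])  # attacker's desired shift of the state estimate
a = H @ c                       # unobservable attack vector

print("residual, clean:   ", residual_norm(z))
print("residual, attacked:", residual_norm(z + a))  # identical: attack passes
```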

  20. Design of video quality metrics with multi-way data analysis a data driven approach

    CERN Document Server

    Keimel, Christian

    2016-01-01

    This book proposes a data-driven methodology using multi-way data analysis for the design of video-quality metrics. It also enables video-quality metrics to be created using arbitrary features. This data-driven design approach not only requires no detailed knowledge of the human visual system, but also allows a proper consideration of the temporal nature of video using a three-way prediction model, corresponding to the three-way structure of video. Using two simple example metrics, the author demonstrates not only that this purely data-driven approach outperforms state-of-the-art video-quality metrics, which are often optimized for specific properties of the human visual system, but also that multi-way data analysis methods outperform the combination of two-way data analysis methods and temporal pooling.

  1. A Metric Tool for Predicting Source Code Quality from a PDL Design

    OpenAIRE

    Henry, Sallie M.; Selig, Calvin

    1987-01-01

    The software crisis has increased the demand for automated tools to assist software developers in the production of quality software. Quality metrics have given software developers a means to measure software quality. These measurements, however, are available only after the software has been produced. Due to high cost, software managers are reluctant to redesign and reimplement low-quality software. Ideally, a life cycle which allows early measurement of software quality is a necessary ingredient...

  2. Using business intelligence to monitor clinical quality metrics.

    Science.gov (United States)

    Resetar, Ervina; Noirot, Laura A; Reichley, Richard M; Storey, Patricia; Skiles, Ann M; Traynor, Patrick; Dunagan, W Claiborne; Bailey, Thomas C

    2007-10-11

    BJC HealthCare (BJC) uses a number of industry standard indicators to monitor the quality of services provided by each of its hospitals. By establishing an enterprise data warehouse as a central repository of clinical quality information, BJC is able to monitor clinical quality performance in a timely manner and improve clinical outcomes.

  3. A quality metric for homology modeling: the H-factor

    Directory of Open Access Journals (Sweden)

    di Luccio Eric

    2011-02-01

    Background: The analysis of protein structures provides fundamental insight into most biochemical functions and consequently into the cause and possible treatment of diseases. As the structures of most known proteins cannot be solved experimentally for technical or sometimes simply for time constraints, in silico protein structure prediction is expected to step in and generate a more complete picture of the protein structure universe. Molecular modeling of protein structures is a fast-growing field and tremendous work has been done since the publication of the very first model. The growth of modeling techniques, and more specifically of those that rely on existing experimental knowledge of protein structures, is intimately linked to developments in high-resolution experimental techniques such as NMR, X-ray crystallography and electron microscopy. This strong connection between experimental and in silico methods is, however, not devoid of criticisms and concerns among modelers as well as among experimentalists. Results: In this paper, we focus on homology modeling and, more specifically, we review how it is perceived by the structural biology community and what can be done to impress on experimentalists that it can be a valuable resource to them. We review common practices and provide a set of guidelines for building better models. For that purpose, we introduce the H-factor, a new indicator for assessing the quality of homology models, mimicking the R-factor in X-ray crystallography. The method for computing the H-factor is fully described and validated on a series of test cases. Conclusions: We have developed a web service for computing the H-factor for models of a protein structure. This service is freely accessible at http://koehllab.genomecenter.ucdavis.edu/toolkit/h-factor.

  4. Visual signal quality assessment quality of experience (QOE)

    CERN Document Server

    Ma, Lin; Lin, Weisi; Ngan, King

    2015-01-01

    This book provides comprehensive coverage of the latest trends/advances in subjective and objective quality evaluation for traditional visual signals, such as 2D images and video, as well as the most recent challenges for the field of multimedia quality assessment and processing, such as mobile video and social media. Readers will learn how to ensure the highest storage/delivery/transmission quality of visual content (including image, video, graphics, animation, etc.) from the server to the consumer, under resource constraints such as computation, bandwidth, storage space, battery life, etc. The book: provides an overview of quality assessment for traditional visual signals; covers newly emerged visual signals such as social media, 3D image/video, mobile video, high dynamic range (HDR) images, graphics/animation, etc., which demand better quality of experience (QoE); helps readers to develop better quality metrics and processing methods for newly emerged visual signals; enables testing, optimizing, benchmarking...

  5. Effective Implementation of Agile Practices - Object Oriented Metrics Tool to Improve Software Quality

    Directory of Open Access Journals (Sweden)

    K. Nageswara Rao

    2012-08-01

    Maintaining the quality of the software is the major challenge in the process of software development. Software inspections, which use methods like structured walkthroughs and formal code reviews, involve careful examination of each and every aspect/stage of software development. In Agile software development, refactoring helps to improve software quality; refactoring is a technique to improve the internal structure of software without changing its behaviour. After much study regarding ways to improve software quality, our research proposes an object-oriented software metric tool called "MetricAnalyzer". The tool is tested on different codebases and is shown to be very useful.

  6. Diet quality assessment indexes

    OpenAIRE

    Kênia Mara Baiocchi de Carvalho; Eliane Said Dutra; Nathalia Pizato; Nádia Dias Gruezo; Marina Kiyomi Ito

    2014-01-01

    Various indices and scores based on admittedly healthy dietary patterns or food guides for the general population, or aiming at the prevention of diet-related diseases have been developed to assess diet quality. The four indices preferred by most studies are: the Diet Quality Index; the Healthy Eating Index; the Mediterranean Diet Score; and the Overall Nutritional Quality Index. Other instruments based on these indices have been developed and the words 'adapted', 'revised', or 'new version I...

  7. Development of Metrics to Assess Effectiveness of Stream Restoration in Second-Growth Forests

    Science.gov (United States)

    Stockwell, E.; Johnson, A. C.; Edwards, R.

    2010-12-01

    This project was designed to develop and test metrics to assess whether stream restoration work in second growth riparian areas produces measurable changes in ecosystem function. Proposed metrics evaluate anadromous fish and the trophic basis for their production. These metrics consist of: measuring benthic chlorophyll a and photosynthetically active radiation (PAR) to detect changes in primary production, computing invertebrate litterfall as allochthonous food inputs into the streams, evaluating transient storage, counting the number and depth of pools, counting the number of pieces of large wood, and determining the substrate size of the stream to detect changes in channel retentiveness and habitat availability. Data were collected prior to restoration treatment. Restoration work is expected to increase growth and survival of anadromous fish by increasing the availability of high quality habitat and food. Although there is large variation in the data collected, preliminary results show a positive correlation between the number of invertebrates, especially of the subclass Collembola, collected in the litterfall and the percent of PAR reaching the stream. There is also a correlation between an increase in the percent of PAR reaching the stream and an increase in the amount of alder in the riparian area. This suggests that riparian areas with a majority of alder trees could provide more invertebrate food for fish in the streams.

  8. Fovea based image quality assessment

    Science.gov (United States)

    Guo, Anan; Zhao, Debin; Liu, Shaohui; Cao, Guangyao

    2010-07-01

    Humans are the ultimate receivers of the visual information contained in an image, so a reasonable method of image quality assessment (IQA) should follow the properties of the human visual system (HVS). In recent years, IQA methods based on HVS models have slowly been replacing classical schemes such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR). Structural similarity (SSIM), regarded as one of the most popular HVS-based methods of full reference IQA, clearly improves on traditional metrics; however, it does not perform well when the image structure is severely destroyed or masked by noise. In this paper, a new efficient fovea based structural similarity image quality assessment (FSSIM) is proposed. It adaptively enlarges the distortions at positions of visual interest and adjusts the relative importance of the three components in SSIM. FSSIM predicts the quality of an image through three steps. First, it computes the luminance, contrast and structure comparison terms; second, it computes a saliency map by extracting fovea information from the reference image using features of the HVS; third, it pools the three terms according to the processed saliency map. Finally, the widely used LIVE IQA database is employed to evaluate the performance of FSSIM. Experimental results indicate that the consistency and relevance between FSSIM and the mean opinion score (MOS) are both clearly better than for SSIM and PSNR.
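
    A minimal sketch of the three-step pooling idea in this record, assuming a precomputed saliency map in place of the paper's fovea model; the Gaussian window and SSIM constants below are common defaults, not the authors' exact settings:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fssim_like(ref, dist, saliency, c1=6.5025, c2=58.5225):
        """Saliency-weighted SSIM pooling (illustrative, not the exact FSSIM).
        ref/dist: float grayscale images in [0, 255]; saliency: nonnegative map."""
        g = lambda x: gaussian_filter(x, sigma=1.5)
        mu_r, mu_d = g(ref), g(dist)
        var_r = np.clip(g(ref**2) - mu_r**2, 0, None)
        var_d = np.clip(g(dist**2) - mu_d**2, 0, None)
        cov = g(ref * dist) - mu_r * mu_d
        c3 = c2 / 2.0
        lum = (2*mu_r*mu_d + c1) / (mu_r**2 + mu_d**2 + c1)         # luminance term
        con = (2*np.sqrt(var_r*var_d) + c2) / (var_r + var_d + c2)  # contrast term
        struc = (cov + c3) / (np.sqrt(var_r*var_d) + c3)            # structure term
        qmap = lum * con * struc
        w = saliency / (saliency.sum() + 1e-12)   # fovea-driven pooling weights
        return float((w * qmap).sum())
    ```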

  9. Modeling quality video metrics of video streaming over optical network

    OpenAIRE

    Blanco Fernández, Sara

    2009-01-01

    Digital video data, stored in video databases and distributed through communication networks, is subject to various kinds of distortions during acquisition, compression, processing, transmission, and reproduction. Video quality is a characteristic of a video passed through a video transmission/processing system, a formal or informal measure of perceived video degradation (typically, compared to the original video). The impact of encoding and transmission impairments on the perceptual quality ...

  10. Power quality assessment

    International Nuclear Information System (INIS)

    The electrical power systems are exposed to different types of power quality disturbance problems. Assessment of power quality is necessary for maintaining accurate operation of sensitive equipment, especially in nuclear installations; it also ensures that unnecessary energy losses in a power system are kept at a minimum, which leads to more profits. With advances in technology, industrial and commercial facilities are growing in many regions, and power quality problems have been a major concern among engineers, particularly in industrial environments with much large-scale equipment. Thus, it is useful to investigate and mitigate power quality problems. Assessment of power quality requires the identification of any anomalous behavior on a power system which adversely affects the normal operation of electrical or electronic equipment. The choice of monitoring equipment in a survey is also important to ascertain a solution to these power quality problems. A power quality assessment involves gathering data resources; analyzing the data (with reference to power quality standards); and then, if problems exist, recommending mitigation techniques. The main objective of the present work is to investigate and mitigate power quality problems in nuclear installations. Normally, electrical power is supplied to the installations via two sources to keep good reliability, each designed to carry the full load. The assessment of power quality was performed at the nuclear installations for both sources under different operating conditions. The thesis begins with a discussion of power quality definitions and the results of previous studies in power quality monitoring. The assessment determines that one source of electricity was deemed to have relatively good power quality; there were several disturbances which exceeded the thresholds, among them the fifth harmonic, voltage swell, overvoltage and flicker. While the second

  11. Quality of Service Metrics in Wireless Sensor Networks: A Survey

    Science.gov (United States)

    Snigdh, Itu; Gupta, Nisha

    2016-03-01

    Wireless ad hoc networks are characterized by autonomous nodes communicating with each other by forming a multi-hop radio network and maintaining connectivity in a decentralized manner. This paper presents a systematic approach to the interdependencies and analogies among the various factors that affect and constrain a wireless sensor network. The article elaborates the quality of service parameters, in terms of methods of deployment, coverage and connectivity, which affect the lifetime of the network and have been addressed to date in the literature. It also discusses the indispensable elements that largely determine the quality of service achieved, yet have not been duly focused upon.

  12. A Novel Spatial Pooling Strategy for Image Quality Assessment

    Institute of Scientific and Technical Information of China (English)

    Qiaohong Li; Yu-Ming Fang; Jing-Tao Xu

    2016-01-01

    A variety of existing image quality assessment (IQA) metrics share a similar two-stage framework: at the first stage, a quality map is constructed by comparison between local regions of the reference and distorted images; at the second stage, spatial pooling is adopted to obtain the overall quality score. In this work, we propose a novel spatial pooling strategy for image quality assessment through statistical analysis of the quality map. Our in-depth analysis indicates that the overall image quality is sensitive to the quality distribution. Based on this analysis, the quality histogram and statistical descriptors extracted from the quality map are used as input to support vector regression to obtain the final objective quality score. Experimental results on three large public IQA databases demonstrate that the proposed spatial pooling strategy can greatly improve the quality prediction performance of the original IQA metrics in terms of correlation with human subjective ratings.
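
    The second-stage pooling described above can be sketched as follows; the feature set (a 10-bin histogram plus a few moments) and the SVR hyperparameters are illustrative assumptions, not the authors' exact configuration:

    ```python
    import numpy as np
    from sklearn.svm import SVR

    def quality_map_features(qmap, bins=10):
        # Normalized histogram of the local quality map plus simple statistics
        hist, _ = np.histogram(qmap, bins=bins, range=(0.0, 1.0), density=True)
        stats = [qmap.mean(), qmap.std(), np.percentile(qmap, 10), qmap.min()]
        return np.concatenate([hist, stats])

    def train_pooling_model(quality_maps, mos_scores):
        # Regress subjective scores on quality-distribution features
        X = np.vstack([quality_map_features(q) for q in quality_maps])
        return SVR(kernel="rbf", C=10.0, gamma="scale").fit(X, mos_scores)
    ```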

  13. Diet quality assessment indexes

    Directory of Open Access Journals (Sweden)

    Kênia Mara Baiocchi de Carvalho

    2014-10-01

    Full Text Available Various indices and scores based on admittedly healthy dietary patterns or food guides for the general population, or aiming at the prevention of diet-related diseases have been developed to assess diet quality. The four indices preferred by most studies are: the Diet Quality Index; the Healthy Eating Index; the Mediterranean Diet Score; and the Overall Nutritional Quality Index. Other instruments based on these indices have been developed and the words 'adapted', 'revised', or 'new version I, II or III' added to their names. Even validated indices usually find only modest associations between diet and risk of disease or death, raising questions about their limitations and the complexity associated with measuring the causal relationship between diet and health parameters. The objective of this review is to describe the main instruments used for assessing diet quality, and the applications and limitations related to their use and interpretation.

  14. Using full-reference image quality metrics for automatic image sharpening

    Science.gov (United States)

    Krasula, Lukas; Fliegel, Karel; Le Callet, Patrick; Klíma, Miloš

    2014-05-01

    Image sharpening is a post-processing technique employed for the artificial enhancement of perceived sharpness by shortening the transitions between luminance levels or increasing the contrast on the edges. The greatest challenge in this area is to determine the level of perceived sharpness which is optimal for human observers. This task is complex because the enhancement is gained only up to a certain threshold; beyond it, the quality of the resulting image drops due to the presence of annoying artifacts. Despite the effort dedicated to automatic sharpness estimation, none of the existing metrics is designed to localize this threshold. Nevertheless, it is a very important step towards automatic image sharpening. In this work, the possible use of full-reference image quality metrics for finding the optimal amount of sharpening is proposed and investigated. An intentionally over-sharpened "anchor image" was included in the calculation as an "anti-reference", and the final metric score was computed from the differences between the reference, processed, and anchor versions of the scene. Quality scores obtained from a subjective experiment were used to determine the optimal combination of partial metric values. Five popular fidelity metrics - SSIM, MS-SSIM, IW-SSIM, VIF, and FSIM - were tested. The performance of the proposed approach was then verified in the subjective experiment.
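
    A sketch of the anchor-image idea, using unsharp masking for the sharpening and SSIM as a single stand-in for the five fidelity metrics tested; the weighting and parameter values are assumptions:

    ```python
    import numpy as np
    from skimage.filters import unsharp_mask
    from skimage.metrics import structural_similarity as ssim

    def pick_sharpening_amount(ref, amounts=np.linspace(0.5, 4.0, 8), w=0.5):
        """ref: float grayscale image in [0, 1]. Returns the sharpening amount
        that balances fidelity to the reference against similarity to an
        intentionally over-sharpened anchor (the 'anti-reference')."""
        anchor = unsharp_mask(ref, radius=2, amount=10.0)  # deliberately over-sharpened
        best_amount, best_score = None, -np.inf
        for amount in amounts:
            proc = unsharp_mask(ref, radius=2, amount=amount)
            score = ssim(ref, proc, data_range=1.0) - w * ssim(anchor, proc, data_range=1.0)
            if score > best_score:
                best_amount, best_score = amount, score
        return best_amount
    ```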

  15. An Approach towards Software Quality Assessment

    Science.gov (United States)

    Srivastava, Praveen Ranjan; Kumar, Krishan

    The software engineer needs to determine the real purpose of the software, which is a prime point to keep in mind: the customer's needs come first, and they include particular levels of quality, not just functionality. Thus, the software engineer has a responsibility to elicit quality requirements that may not even be explicit at the outset and to discuss their importance and the difficulty of attaining them. All processes associated with software quality (e.g. building, checking, improving quality) will be designed with these in mind and carry costs based on the design. Therefore, it is important to have in mind some of the possible attributes of quality. We start by identifying the metrics and measurement approaches that can be used to assess the quality of a software product. Most of these attributes can only be measured subjectively because there are no solid statistics regarding them. In this paper we propose an approach to measure software quality statistically.

  16. Patent Assessment Quality

    DEFF Research Database (Denmark)

    Burke, Paul F.; Reitzig, Markus

    2006-01-01

    of the European Patent Office's (EPO's) granting and opposition decisions for individual patents. We use the historical example of biotech patents filed between 1978 and 1986, the early stage of the industry. Our results indicate that the EPO shows systematically different assessments of technological quality...

  17. The impact of climate-induced distributional changes on the validity of biological water quality metrics.

    Science.gov (United States)

    Hassall, Christopher; Thompson, David J; Harvey, Ian F

    2010-01-01

    We present data on the distributional changes within an order of macroinvertebrates used in biological water quality monitoring. The British Odonata (dragonflies and damselflies) have been shown to be expanding their range northwards and this could potentially affect the use of water quality metrics. The results show that the families of Odonata that are used in monitoring are shifting their ranges poleward and that species richness is increasing through time at most UK latitudes. These past distributional shifts have had negligible effects on water quality indicators. However, variation in Odonata species richness (particularly in species-poor regions) has a significant effect on water quality metrics. We conclude with a brief review of current and predicted responses of aquatic macroinvertebrates to environmental warming and maintain that caution is warranted in the use of such dynamic biological indicators. PMID:19101810

  18. QESTRAL (Part 4): Test signals, combining metrics and the prediction of overall spatial quality

    OpenAIRE

    Dewhirst, M; Conetta, R; Rumsey, F; Jackson, PJB; Zielinski, S.; George, S.; Bech, S; Meares, D

    2008-01-01

    The QESTRAL project has developed an artificial listener that compares the perceived quality of a spatial audio reproduction to a reference reproduction. Test signals designed to identify distortions in both the foreground and background audio streams are created for both the reference and the impaired reproduction systems. Metrics are calculated from these test signals and are then combined using a regression model to give a measure of the overall perceived spatial quality of the impaired re...

  19. Quality assessment of healthcare systems

    OpenAIRE

    Koubeková, Eva

    2007-01-01

    Quality assessment of healthcare systems is considered to be the basic tool for developing strategic concepts in healthcare quality improvement and has a great impact on quality of life. The thesis focuses mainly on possibilities for quality assessment at the level of international quality models and their transformation into national structures. It includes theoretical points of quality and the economic evaluation of quality in healthcare. The objective is to assess the participation of Czech hospitals in he...

  20. A Code Level Based Programmer Assessment and Selection Criterion Using Metric Tools

    OpenAIRE

    Ezekiel U. Okike

    2014-01-01

    This study presents a code level measurement of computer programs developed by computer programmers using a Chidamber and Kemerer Java metric (CKJM) tool and the Myers Briggs Type Indicator (MBTI) tool. The identification of potential computer programmers using personality trait factors does not seem to be the best approach without a code level measurement of the quality of programs. Hence the need to evolve a metric tool which measures both personality traits of programmers and code level qu...

  1. Translation Quality Assessment

    OpenAIRE

    Malcolm Williams

    2009-01-01

    The relevance of, and justification for, translation quality assessment (TQA) is stronger than ever: professional translators, their clients, translatological researchers and trainee translators all rely on TQA for different reasons. Yet whereas there is general agreement about the need for a translation to be "good," "satisfactory" or "acceptable," the definition of acceptability and of the means of determining it are matters of ongoing debate. National and international translation standard...

  2. Compensating for Type-I Errors in Video Quality Assessment

    DEFF Research Database (Denmark)

    Brunnström, Kjell; Tavakoli, Samira; Søgaard, Jacob

    2015-01-01

    This paper analyzes the impact of compensating for Type-I errors in video quality assessment. A Type-I error is to incorrectly conclude that there is an effect. The risk increases with the number of comparisons that are performed in statistical tests. Type-I errors are an issue often neglected... in Quality of Experience and video quality assessment analysis. Examples are given for the analysis of subjective experiments and the evaluation of objective metrics by correlation...

  3. Analysis of image quality metric for ROI using image restoration techniques

    Directory of Open Access Journals (Sweden)

    Syed Amma Sheik

    2014-03-01

    Full Text Available Analysis of an image plays a vital role in the image processing field and has led to applications in telemedicine, remote sensing via satellites and other spacecraft, radar, sonar, acoustic image processing, etc. A common form of image analysis is the use of a Region of Interest (ROI), which is an effortless way of analyzing images. This paper proposes a method to analyze the Image Quality Metric (IQM) for a ROI-based color image. The IQM is computed using three image restoration algorithms: the blind deconvolution algorithm, the Wiener filtering algorithm and the Lucy-Richardson algorithm. Keywords: Blind Deconvolution, Lucy-Richardson, Point Spread Function, Region of Interest, Image Quality Metric, Wiener Filter.
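
    Two of the three cited restoration algorithms are available in scikit-image, so the pipeline can be sketched as below (blind deconvolution would need another implementation); using PSNR against the ROI as the quality figure is an assumption, since the record does not spell out the IQM formula:

    ```python
    import numpy as np
    from skimage import color, restoration
    from skimage.metrics import peak_signal_noise_ratio as psnr

    def restore_and_score_roi(roi_rgb, psf, n_iter=30):
        """Restore a color ROI with Wiener and Richardson-Lucy deconvolution
        and report PSNR for each restored result against the input ROI."""
        roi = color.rgb2gray(roi_rgb)                       # work on luminance
        wiener = restoration.wiener(roi, psf, balance=0.1)
        # note: older scikit-image versions name this argument `iterations`
        lucy = restoration.richardson_lucy(roi, psf, num_iter=n_iter)
        return {
            "wiener_psnr": psnr(roi, np.clip(wiener, 0, 1), data_range=1.0),
            "lucy_psnr": psnr(roi, np.clip(lucy, 0, 1), data_range=1.0),
        }
    ```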

  4. Sigma metrics in clinical chemistry laboratory – A guide to quality control

    OpenAIRE

    Usha S. Adiga; A. Preethika; K.Swathi

    2015-01-01

    Background: Six sigma is a quality measurement and improvement program used in industry. Sigma methodology can be applied wherever an outcome of a process is to be measured. A poor outcome is counted as an error or defect, quantified as defects per million (DPM). Six sigma provides a more quantitative framework for evaluating process performance, with evidence for process improvement, and describes how many sigmas fit within the tolerance limits. Sigma metrics can be used e...
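
    The sigma metric itself is a one-line calculation combining the quality requirement with the laboratory's observed bias and imprecision:

    ```python
    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Sigma metric used in laboratory quality control:
        sigma = (TEa - |bias|) / CV, with all terms as percentages."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Example: an assay with TEa = 10%, bias = 2% and CV = 1.5%
    # performs at sigma_metric(10, 2, 1.5) = 5.33, i.e. better than 5 sigma.
    ```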

  5. Program analysis methodology Office of Transportation Technologies: Quality Metrics final report

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2002-03-01

    "Quality Metrics" is the analytical process for measuring and estimating future energy, environmental and economic benefits of US DOE Office of Energy Efficiency and Renewable Energy (EE/RE) programs. This report focuses on the projected benefits of the programs currently supported by the Office of Transportation Technologies (OTT) within EE/RE. For analytical purposes, these various benefits are subdivided in terms of Planning Units which are related to the OTT program structure.

  6. Total quality management: an analysis and evaluation of the effectiveness of performance metrics for ACAT III programs of record

    OpenAIRE

    Higginbotham, Jayne Marie

    2014-01-01

    This project studies the metrics of a sample United States Army Aviation Acquisition Category (ACAT) III program. This program reports weekly metrics across the functional areas of logistics, business, and technology (software development and risk management), which are reviewed in functional-management staff calls. This project investigates whether these metrics align with total quality management (TQM) best-practice standards. The framework for the study is the National Institute of Standar...

  7. A Multi-Component Model for Assessing Learning Objects: The Learning Object Evaluation Metric (LOEM)

    Science.gov (United States)

    Kay, Robin H.; Knaack, Liesel

    2008-01-01

    While discussion of the criteria needed to assess learning objects has been extensive, a formal, systematic model for evaluation has yet to be thoroughly tested. The purpose of the following study was to develop and assess a multi-component model for evaluating learning objects. The Learning Object Evaluation Metric (LOEM) was developed from a…

  8. Integrated Metrics for Improving the Life Cycle Approach to Assessing Product System Sustainability

    Directory of Open Access Journals (Sweden)

    Wesley Ingwersen

    2014-03-01

    Full Text Available Life cycle approaches are critical for identifying and reducing environmental burdens of products. While these methods can indicate potential environmental impacts of a product, current Life Cycle Assessment (LCA) methods fail to integrate the multiple impacts of a system into unified measures of social, economic or environmental performance related to sustainability. Integrated metrics that combine multiple aspects of system performance based on a common scientific or economic principle have proven to be valuable for sustainability evaluation. In this work, we propose methods of adapting four integrated metrics for use with LCAs of product systems: ecological footprint, emergy, green net value added, and Fisher information. These metrics provide information on the full product system in land, energy, monetary equivalents, and as a unitless information index; each bundled with one or more indicators for reporting. When used together and for relative comparison, integrated metrics provide a broader coverage of sustainability aspects from multiple theoretical perspectives that is more likely to illuminate potential issues than individual impact indicators. These integrated metrics are recommended for use in combination with traditional indicators used in LCA. Future work will test and demonstrate the value of using these integrated metrics and combinations to assess product system sustainability.

  9. Assessing group-level participation in fluid teams: testing a new metric.

    Science.gov (United States)

    Paletz, Susannah B F; Schunn, Christian D

    2011-06-01

    Participation is an important factor in team success. We propose a new metric of participation equality that provides an unbiased estimate across groups of different sizes and across those that change size over time. Using 11 h of transcribed utterances from informal, fluid, colocated workgroup meetings, we compared the associations of this metric with coded equality of participation and standard deviation. While coded participation and our metric had similar patterns of findings, standard deviation had a somewhat different pattern, suggesting that it might lead to incorrect assessments with fluid teams. Exploratory analyses suggest that, as compared with mixed-age/status groups, groups of younger faculty had more equal participation and that the presence of negative affect words was associated with more dominated participation. Future research can take advantage of this new metric to further theory on team processes in both face-to-face and distributed settings.

  10. Reduced-reference image quality assessment using moment method

    Science.gov (United States)

    Yang, Diwei; Shen, Yuantong; Shen, Yongluo; Li, Hongwei

    2016-10-01

    Reduced-reference image quality assessment (RR IQA) aims to evaluate the perceptual quality of a distorted image through partial information about the corresponding reference image. In this paper, a novel RR IQA metric is proposed based on the moment method. We claim that the first and second moments of the wavelet coefficients of natural images follow an approximately regular pattern, that this pattern is disturbed by different types of distortions, and that the disturbance is relevant to human perception of quality. We measure the difference in these statistical parameters between the reference and distorted images to predict the visual quality degradation. The introduced IQA metric is easy to implement and has relatively low computational complexity. Experimental results on the Laboratory for Image and Video Engineering (LIVE) and Tampere Image Database (TID) image databases indicate that the proposed metric has good predictive performance.
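
    A sketch of the moment-based reduced-reference idea, assuming PyWavelets for the decomposition; the wavelet, level, and Euclidean comparison are illustrative choices, not the paper's exact design:

    ```python
    import numpy as np
    import pywt

    def moment_signature(img, wavelet="db2", level=3):
        # First and second moments of each wavelet detail subband
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        feats = []
        for detail_level in coeffs[1:]:        # skip the approximation band
            for band in detail_level:          # horizontal, vertical, diagonal
                feats.extend([band.mean(), (band ** 2).mean()])
        return np.asarray(feats)

    def rr_degradation(ref_signature, distorted_img, **kwargs):
        # Only the compact signature of the reference needs to be transmitted
        return float(np.linalg.norm(ref_signature - moment_signature(distorted_img, **kwargs)))
    ```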

  11. MUSTANG: A Community-Facing Web Service to Improve Seismic Data Quality Awareness Through Metrics

    Science.gov (United States)

    Templeton, M. E.; Ahern, T. K.; Casey, R. E.; Sharer, G.; Weertman, B.; Ashmore, S.

    2014-12-01

    IRIS DMC is engaged in a new effort to provide broad and deep visibility into the quality of data and metadata found in its terabyte-scale geophysical data archive. Taking advantage of large and fast disk capacity, modern advances in open database technologies, and nimble provisioning of virtual machine resources, we are creating an openly accessible treasure trove of data measurements for scientists and the general public to utilize in providing new insights into the quality of this data. We have branded this statistical gathering system MUSTANG, and have constructed it as a component of the web services suite that IRIS DMC offers. MUSTANG measures over forty data metrics addressing issues with archive status, data statistics and continuity, signal anomalies, noise analysis, metadata checks, and station state of health. These metrics could potentially be used both by network operators to diagnose station problems and by data users to sort suitable data from unreliable or unusable data. Our poster details what MUSTANG is, how users can access it, what measurements they can find, and how MUSTANG fits into the IRIS DMC's data access ecosystem. Progress in data processing, approaches to data visualization, and case studies of MUSTANG's use for quality assurance will be presented. We want to illustrate what is possible with data quality assurance, the need for data quality assurance, and how the seismic community will benefit from this freely available analytics service.
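
    Access is through plain HTTP queries; a request along the following lines retrieves measurements, although the parameter names shown are assumptions to be checked against the current IRIS service documentation:

    ```python
    import requests

    # Hypothetical MUSTANG measurements query -- verify parameter names
    # against the IRIS web services documentation before use.
    BASE_URL = "http://service.iris.edu/mustang/measurements/1/query"
    params = {
        "metric": "sample_rms",            # one of the ~40 metrics mentioned above
        "net": "IU", "sta": "ANMO", "cha": "BHZ",
        "format": "text",
    }
    response = requests.get(BASE_URL, params=params, timeout=30)
    print(response.text[:500])             # first rows of the measurement table
    ```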

  12. Video quality assessment for web content mirroring

    Science.gov (United States)

    He, Ye; Fei, Kevin; Fernandez, Gustavo A.; Delp, Edward J.

    2014-03-01

    Due to increasing user expectations for the watching experience, moving high quality web video streaming content from the small screens of mobile devices to the larger TV screen has become popular. It is crucial to develop video quality metrics to measure the quality change across devices and network conditions. In this paper, we propose an automated scoring system to quantify user satisfaction. We compare the quality of local videos with the videos transmitted to a TV. Four video quality metrics, namely Image Quality, Rendering Quality, Freeze Time Ratio and Rate of Freeze Events, are used to measure video quality change during web content mirroring. To measure image quality and rendering quality, we compare the matched frames between the source video and the destination video using barcode tools. Freeze time ratio and rate of freeze events are measured after extracting video timestamps. Several user studies were conducted to evaluate the impact of each objective video quality metric on the subjective user watching experience.
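
    The two timestamp-driven metrics can be sketched directly; the freeze-detection threshold below is an assumption, since the record does not state one:

    ```python
    def freeze_metrics(timestamps, fps=30.0, tol=1.5):
        """Freeze Time Ratio and Rate of Freeze Events from frame display
        timestamps (in seconds). A gap longer than tol times the nominal
        frame interval is counted as a freeze."""
        nominal = 1.0 / fps
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        freezes = [g for g in gaps if g > tol * nominal]
        duration = timestamps[-1] - timestamps[0]
        freeze_time_ratio = sum(g - nominal for g in freezes) / duration
        rate_of_freeze_events = len(freezes) / duration     # events per second
        return freeze_time_ratio, rate_of_freeze_events
    ```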

  13. Information System Quality Assessment Methods

    OpenAIRE

    Korn, Alexandra

    2014-01-01

    This thesis explores the challenging topic of information system quality assessment, mainly process assessment. In this work the term Information System Quality is defined, and different approaches to defining quality for different domains of information systems are outlined. The main methods of process assessment are reviewed and their relationships described. Process assessment methods are divided into two categories: ISO standards and best practices. The main objective of this w...

  14. Revision and extension of Eco-LCA metrics for sustainability assessment of the energy and chemical processes.

    Science.gov (United States)

    Yang, Shiying; Yang, Siyu; Kraslawski, Andrzej; Qian, Yu

    2013-12-17

    Ecologically based life cycle assessment (Eco-LCA) is an appealing approach for evaluating the resource utilization and environmental impacts of the process industries at an ecological scale. However, the aggregated metrics of Eco-LCA suffer from some drawbacks: the environmental impact metric has limited applicability; the resource utilization metric ignores indirect consumption; the renewability metric fails to address the quantitative distinction of resource availability; the productivity metric seems self-contradictory. In this paper, the existing Eco-LCA metrics are revised and extended for sustainability assessment of energy and chemical processes. A new Eco-LCA metrics system is proposed, including four independent dimensions: environmental impact, resource utilization, resource availability, and economic effectiveness. An illustrative example comparing the assessment of a gas boiler and a solar boiler process provides insight into the features of the proposed approach. PMID:24228888

  15. Assessment of data quality in ATLAS

    CERN Document Server

    Wilson, M G

    2008-01-01

    Assessing the quality of data recorded with the ATLAS detector is crucial for commissioning and operating the detector to achieve sound physics measurements. In particular, the fast assessment of complex quantities obtained during event reconstruction and the ability to easily track them over time are especially important given the large data throughput and the distributed nature of the analysis environment. The data are processed once on a computer farm comprising O(1,000) nodes before being distributed on the Grid, and reliable, centralized methods must be used to organize, merge, present, and archive data-quality metrics for performance experts and analysts. A review of the tools and approaches employed by the detector and physics groups in this environment and a summary of their performances during commissioning are presented.

  16. Optimal Rate Control in H.264 Video Coding Based on Video Quality Metric

    Directory of Open Access Journals (Sweden)

    R. Karthikeyan

    2014-05-01

    Full Text Available The aim of this research is to find a method for providing better visual quality across the complete video sequence in the H.264 video coding standard. The H.264 video coding standard, with its significantly improved coding efficiency, finds important applications in digital video streaming, storage and broadcast. To achieve comparable quality across the complete video sequence under constraints on bandwidth availability and buffer fullness, it is important to allocate more bits to frames with high complexity or a scene change and fewer bits to other, less complex frames. A frame layer bit allocation scheme is proposed based on a perceptual quality metric as an indicator of frame complexity. The proposed model computes the Quality Index ratio (QIr) of the predicted quality index of the current frame to the average quality index of all previous frames in the group of pictures; this ratio is used, along with bits computed from buffer availability, to allocate bits to the current frame. The standard deviation of the perceptual quality indicator MOS computed for the proposed model is significantly lower, which means the quality is nearly uniform throughout the full video sequence. The experimental results thus show that the proposed model effectively handles scene changes and scenes with high motion for better visual quality.
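
    The QIr-driven allocation reduces to a few lines; the buffer cap below is a simplification of the scheme described, not the paper's exact formula:

    ```python
    def allocate_frame_bits(predicted_qi, previous_qis, base_bits, buffer_bits):
        """Allocate bits to the current frame from its Quality Index ratio:
        QIr = predicted QI of the current frame / mean QI of previous frames
        in the GOP. A low QIr flags a complex frame (or scene change), which
        then receives proportionally more bits, capped by buffer availability."""
        qir = predicted_qi / (sum(previous_qis) / len(previous_qis))
        target = base_bits / max(qir, 1e-6)   # harder frame => more bits
        return min(target, buffer_bits)       # respect buffer fullness
    ```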

  17. Survey of Quantitative Research Metrics to Assess Pilot Performance in Upset Recovery

    Science.gov (United States)

    Le Vie, Lisa R.

    2016-01-01

    Accidents attributable to in-flight loss of control are the primary cause of fatal commercial jet accidents worldwide. The National Aeronautics and Space Administration (NASA) conducted a literature review to determine and identify the quantitative standards for assessing upset recovery performance. The review covers current recovery procedures for both military and commercial aviation and includes the metrics researchers use to assess aircraft recovery performance. Metrics include time to first input, recognition time, and recovery time, as well as whether that input was correct or incorrect. Other metrics include the state of the autopilot and autothrottle, control wheel/sidestick movement resulting in pitch and roll, and inputs to the throttle and rudder. In addition, airplane state measures such as roll reversals, altitude loss/gain, maximum vertical speed, maximum/minimum air speed, maximum bank angle and maximum g loading are reviewed.

  18. Advancing Efforts to Achieve Health Equity: Equity Metrics for Health Impact Assessment Practice

    Directory of Open Access Journals (Sweden)

    Jonathan Heller

    2014-10-01

    Full Text Available Equity is a core value of Health Impact Assessment (HIA). Many compelling moral, economic, and health arguments exist for prioritizing and incorporating equity considerations in HIA practice. Decision-makers, stakeholders, and HIA practitioners see the value of HIAs in uncovering the impacts of policy and planning decisions on various population subgroups, developing and prioritizing specific actions that promote or protect health equity, and using the process to empower marginalized communities. Several HIA frameworks have been developed to guide the inclusion of equity considerations. However, the field lacks clear indicators for measuring whether an HIA advanced equity. This article describes the development of a set of equity metrics that aim to guide and evaluate progress toward equity in HIA practice. These metrics also intend to push the field further to deepen its practice and commitment to equity in each phase of an HIA. Over the course of a year, the Society of Practitioners of Health Impact Assessment (SOPHIA) Equity Working Group took part in a consensus process to develop these process and outcome metrics. The metrics were piloted, reviewed, and refined based on feedback from reviewers. The Equity Metrics comprise 23 measures of equity organized into four outcomes: (1) the HIA process and products focused on equity; (2) the HIA process built the capacity and ability of communities facing health inequities to engage in future HIAs and in decision-making more generally; (3) the HIA resulted in a shift in power benefiting communities facing inequities; and (4) the HIA contributed to changes that reduced health inequities and inequities in the social and environmental determinants of health. Each metric comprises a measurement scale, examples of high scoring activities, potential data sources, and example interview questions to gather data and guide evaluators in scoring.

  19. Formal analysis of security metrics and risk

    OpenAIRE

    Krautsevich L.; Martinelli F.; Yautsiukhin A.

    2011-01-01

    Security metrics are usually defined informally and, therefore, rigorous analysis of these metrics is a hard task. This analysis is required to identify the existing relations between the security metrics, which try to quantify the same quality: security. Risk, computed as Annualised Loss Expectancy, is often used in order to give an overall assessment of security as a whole. Risk and security metrics are usually defined separately, and the relation between these indicators has not been...

  20. A Stochastic Quality Metric for Optimal Control of Active Camera Network Configurations for 3D Computer Vision Tasks

    OpenAIRE

    Ilie, Adrian; Welch, Greg; Macenko, Marc

    2008-01-01

    We present a stochastic state-space quality metric for use in controlling active camera networks aimed at 3D vision tasks such as surveillance, motion tracking, and 3D shape/appearance reconstruction. Specifically, the metric provides an estimate of the aggregate steady-state uncertainty of the 3D resolution of the objects of interest, as a function of camera parameters such as pan, tilt, and zoom. The use of stochastic state-space models for the quality metric resul...

  1. Impact of artifact removal on ChIP quality metrics in ChIP-seq and ChIP-exo data.

    Science.gov (United States)

    Carroll, Thomas S; Liang, Ziwei; Salama, Rafik; Stark, Rory; de Santiago, Ines

    2014-01-01

    With the advent of ChIP-seq multiplexing technologies and the subsequent increase in ChIP-seq throughput, the development of working standards for the quality assessment of ChIP-seq studies has received significant attention. The ENCODE consortium's large scale analysis of transcription factor binding and epigenetic marks as well as concordant work on ChIP-seq by other laboratories has established a new generation of ChIP-seq quality control measures. The use of these metrics alongside common processing steps has however not been evaluated. In this study, we investigate the effects of blacklisting and removal of duplicated reads on established metrics of ChIP-seq quality and show that the interpretation of these metrics is highly dependent on the ChIP-seq preprocessing steps applied. Further to this we perform the first investigation of the use of these metrics for ChIP-exo data and make recommendations for the adaptation of the NSC statistic to allow for the assessment of ChIP-exo efficiency.

  2. Impact of artefact removal on ChIP quality metrics in ChIP-seq and ChIP-exo data.

    Directory of Open Access Journals (Sweden)

    Thomas Samuel Carroll

    2014-04-01

    Full Text Available With the advent of ChIP-seq multiplexing technologies and the subsequent increase in ChIP-seq throughput, the development of working standards for the quality assessment of ChIP-seq studies has received significant attention. The ENCODE consortium’s large scale analysis of transcription factor binding and epigenetic marks as well as concordant work on ChIP-seq by other laboratories has established a new generation of ChIP-seq quality control measures. The use of these metrics alongside common processing steps has however not been evaluated. In this study, we investigate the effects of blacklisting and removal of duplicated reads on established metrics of ChIP-seq quality and show that the interpretation of these metrics is highly dependent on the ChIP-seq preprocessing steps applied. Further to this we perform the first investigation of the use of these metrics for ChIP-exo data and make recommendations for the adaptation of the NSC statistic to allow for the assessment of ChIP-exo efficiency.
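
    For reference, the NSC statistic discussed in both records comes from strand cross-correlation; a simplified stand-in for the usual (e.g. phantompeakqualtools) computation, with binning and background handling reduced to their essentials, might look like:

    ```python
    import numpy as np

    def nsc(plus_positions, minus_positions, genome_length, shifts=range(0, 400, 5)):
        """Normalized strand coefficient: peak strand cross-correlation divided
        by the background cross-correlation. Inputs are integer 5' read-start
        positions on the + and - strands of one chromosome."""
        cov_plus = np.bincount(plus_positions, minlength=genome_length).astype(float)
        cov_minus = np.bincount(minus_positions, minlength=genome_length).astype(float)
        cc = [np.corrcoef(cov_plus[:genome_length - s], cov_minus[s:])[0, 1]
              for s in shifts]
        return max(cc) / min(cc)   # min(cc) approximates the background level
    ```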

  3. Portfolio Assessment and Quality Teaching

    Science.gov (United States)

    Kim, Youb; Yazdian, Lisa Sensale

    2014-01-01

    Our article focuses on using portfolio assessment to craft quality teaching. Extant research literature on portfolio assessment suggests that the primary purpose of assessment is to serve learning, and portfolio assessments facilitate the process of making linkages among assessment, curriculum, and student learning (Asp, 2000; Bergeron, Wermuth,…

  4. Image Signature Based Mean Square Error for Image Quality Assessment

    Institute of Scientific and Technical Information of China (English)

    CUI Ziguan; GAN Zongliang; TANG Guijin; LIU Feng; ZHU Xiuchang

    2015-01-01

    Motivated by the importance of the Human visual system (HVS) in image processing, we propose a novel Image signature based mean square error (ISMSE) metric for full reference Image quality assessment (IQA). An efficient image signature based descriptor is used to predict the visual saliency map of the reference image. The saliency map is incorporated into the luminance difference between the reference and distorted images to obtain the image quality score. The effect of luminance differences at locations with larger saliency values, which usually correspond to foreground objects, is highlighted. Experimental results on LIVE database release 2 show that by integrating the effects of image signature based saliency on luminance difference, the proposed ISMSE metric outperforms several state-of-the-art HVS-based IQA metrics but with lower complexity.
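
    The image-signature saliency named in the abstract has a compact closed form (reconstruction from the sign of the DCT spectrum); a sketch of a saliency-weighted MSE in the spirit of ISMSE follows, where the weighting scheme is an assumption rather than the authors' exact formulation:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn
    from scipy.ndimage import gaussian_filter

    def image_signature_saliency(img, sigma=5.0):
        # Image signature: reconstruct from the sign of the DCT, then smooth
        recon = idctn(np.sign(dctn(img, norm="ortho")), norm="ortho")
        return gaussian_filter(recon ** 2, sigma)

    def ismse_like(ref, dist, boost=2.0):
        # Weight squared luminance differences by normalized saliency
        sal = image_signature_saliency(ref)
        w = 1.0 + boost * sal / (sal.max() + 1e-12)
        return float(np.sum(w * (ref - dist) ** 2) / np.sum(w))
    ```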

  5. Retinal image quality assessment using generic features

    Science.gov (United States)

    Fasih, Mahnaz; Langlois, J. M. Pierre; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Retinal image quality assessment is an important step in automated eye disease diagnosis. Diagnosis accuracy is highly dependent on the quality of retinal images, because poor image quality might prevent the observation of significant eye features and disease manifestations. A robust algorithm is therefore required in order to evaluate the quality of images in a large database. We developed an algorithm for retinal image quality assessment based on generic features that is independent from segmentation methods. It exploits the local sharpness and texture features by applying the cumulative probability of blur detection metric and run-length encoding algorithm, respectively. The quality features are combined to evaluate the image's suitability for diagnosis purposes. Based on the recommendations of medical experts and our experience, we compared a global and a local approach. A support vector machine with radial basis functions was used as a nonlinear classifier in order to classify images to gradable and ungradable groups. We applied our methodology to 65 images of size 2592×1944 pixels that had been graded by a medical expert. The expert evaluated 38 images as gradable and 27 as ungradable. The results indicate very good agreement between the proposed algorithm's predictions and the medical expert's judgment: the sensitivity and specificity for the local approach are respectively 92% and 94%. The algorithm demonstrates sufficient robustness to identify relevant images for automated diagnosis.

  6. Holistic Metrics for Assessment of the Greenness of Chemical Reactions in the Context of Chemical Education

    Science.gov (United States)

    Ribeiro, M. Gabriela T. C.; Machado, Adelio A. S. C.

    2013-01-01

    Two new semiquantitative green chemistry metrics, the green circle and the green matrix, have been developed for quick assessment of the greenness of a chemical reaction or process, even without performing the experiment from a protocol if enough detail is provided in it. The evaluation is based on the 12 principles of green chemistry. The…

  7. Assessing Metrics for Estimating Fire Induced Change in the Forest Understorey Structure Using Terrestrial Laser Scanning

    Directory of Open Access Journals (Sweden)

    Vaibhav Gupta

    2015-06-01

    Full Text Available Quantifying post-fire effects in a forested landscape is important for ascertaining burn severity and ecosystem recovery and for post-fire hazard assessment and mitigation planning. Reporting of such post-fire effects assumes significance in fire-prone countries such as the USA, Australia, Spain, Greece and Portugal, where prescribed burns are routinely carried out. This paper describes the use of Terrestrial Laser Scanning (TLS) to estimate and map change in the forest understorey following a prescribed burn. Eighteen descriptive metrics derived from bi-temporal TLS are used to analyse and visualise change in a control and a fire-altered plot. The metrics derived are Above Ground Height-based (AGH) percentiles and heights, point count and mean intensity. Metrics such as AGH50change, mean AGHchange and point countchange are sensitive enough to detect subtle fire-induced change (28%–52%) whilst observing little or no change in the control plot (0–4%). A qualitative comparison with field measurements of the spatial distribution of burnt areas and percentage area burnt also shows similar patterns. This study is novel in that it examines the behaviour of TLS metrics for estimating and mapping fire-induced change in understorey structure in single-scan mode with a minimal fixed reference system. Further, the TLS-derived metrics can be used to produce high resolution maps of change in the understorey landscape.
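
    The AGH-change metrics reduce to percentile comparisons between the pre- and post-burn point clouds; a minimal sketch (assuming arrays of height-normalized points) is:

    ```python
    import numpy as np

    def agh_change(heights_pre, heights_post, percentile=50):
        """Percent change in an above-ground-height percentile between two
        TLS scans, e.g. AGH50change for percentile=50. Inputs are arrays of
        understorey point heights (m) already normalized to ground level."""
        pre = np.percentile(heights_pre, percentile)
        post = np.percentile(heights_post, percentile)
        return 100.0 * (post - pre) / pre
    ```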

  8. What are we assessing when we measure food security? A compendium and review of current metrics.

    Science.gov (United States)

    Jones, Andrew D; Ngure, Francis M; Pelto, Gretel; Young, Sera L

    2013-09-01

    The appropriate measurement of food security is critical for targeting food and economic aid; supporting early famine warning and global monitoring systems; evaluating nutrition, health, and development programs; and informing government policy across many sectors. This important work is complicated by the multiple approaches and tools for assessing food security. In response, we have prepared a compendium and review of food security assessment tools that examines issues of terminology, measurement, and validation. We begin by describing the evolving definition of food security and use this discussion to frame a review of the current landscape of measurement tools available for assessing food security. We critically assess the purpose(s) of these tools, the domains of food security assessed by each, the conceptualizations of food security that underpin each metric, and the approaches that have been used to validate these metrics. Specifically, we describe measurement tools that 1) provide national-level estimates of food security, 2) inform global monitoring and early warning systems, 3) assess household food access and acquisition, and 4) measure food consumption and utilization. After describing a number of outstanding measurement challenges that might be addressed in future research, we conclude by offering suggestions to guide the selection of appropriate food security metrics.

  9. The role of metrics and measurements in a software intensive total quality management environment

    Science.gov (United States)

    Daniels, Charles B.

    1992-01-01

    Paramax Space Systems began its mission as a member of the Rockwell Space Operations Company (RSOC) team which was the successful bidder on a massive operations consolidation contract for the Mission Operations Directorate (MOD) at JSC. The contract awarded to the team was the Space Transportation System Operations Contract (STSOC). Our initial challenge was to accept responsibility for a very large, highly complex and fragmented collection of software from eleven different contractors and transform it into a coherent, operational baseline. Concurrently, we had to integrate a diverse group of people from eleven different companies into a single, cohesive team. Paramax executives recognized the absolute necessity to develop a business culture based on the concept of employee involvement to execute and improve the complex process of our new environment. Our executives clearly understood that management needed to set the example and lead the way to quality improvement. The total quality management policy and the metrics used in this endeavor are presented.

  10. Definition of Metric Dependencies for Monitoring the Impact of Quality of Services on Quality of Processes

    OpenAIRE

    2007-01-01

    Service providers have to monitor the quality of offered services and to ensure the compliance of service levels provider and requester agreed on. Thereby, a service provider should notify a service requester about violations of service level agreements (SLAs). Furthermore, the provider should point to impacts on affected processes in which services are invoked. For that purpose, a model is needed to define dependencies between quality of processes and quality of invoked services. In order to...

  11. Area of Concern: A new paradigm in life cycle assessment for the development of footprint metrics

    DEFF Research Database (Denmark)

    Ridoutt, Bradley G.; Pfister, Stephan; Manzardo, Alessandro;

    2016-01-01

    As a class of environmental metrics, footprints have been poorly defined, have shared an unclear relationship to life cycle assessment (LCA), and the variety of approaches to quantification have sometimes resulted in confusing and contradictory messages in the marketplace. In response, a task force operating under the auspices of the UNEP/SETAC Life Cycle Initiative project on environmental life cycle impact assessment (LCIA) has been working to develop generic guidance for developers of footprint metrics. The purpose of this paper is to introduce a universal footprint definition and related... are defined by the interests of stakeholders in society rather than the LCA community. In addition, areas of concern are stand-alone and not necessarily part of a framework intended for comprehensive environmental performance assessment. The area of concern paradigm is needed to support the development...

  12. A convolutional neural network approach for objective video quality assessment.

    Science.gov (United States)

    Le Callet, Patrick; Viard-Gaudin, Christian; Barba, Dominique

    2006-09-01

    This paper describes an application of neural networks in the field of objective measurement methods designed to automatically assess the perceived quality of digital videos. This challenging issue aims to emulate human judgment and to replace very complex and time consuming subjective quality assessment. Several metrics have been proposed in the literature to tackle this issue. They are based on a general framework that combines different stages, each of them addressing complex problems. The ambition of this paper is not to present a globally perfect quality metric but rather to focus on an original way to use neural networks in such a framework in the context of a reduced reference (RR) quality metric. In particular, we point out the interest of such a tool for combining features and pooling them in order to compute quality scores. The proposed approach solves some problems inherent to objective metrics that should predict the subjective quality score obtained using the single stimulus continuous quality evaluation (SSCQE) method. The latter has been adopted by the video quality experts group (VQEG) in its recently finalized reduced reference and no reference (RRNR-TV) test plan. The originality of this approach, compared to previous attempts to use neural networks for quality assessment, relies on the use of a convolutional neural network (CNN) that allows continuous-time scoring of the video. Objective features are extracted on a frame-by-frame basis on both the reference and the distorted sequences; they are derived from a perceptual-based representation and integrated along the temporal axis using a time-delay neural network (TDNN). Experiments conducted on different MPEG-2 videos, with bit rates ranging from 2 to 6 Mb/s, show the effectiveness of the proposed approach in obtaining a plausible model of temporal pooling from the human vision system (HVS) point of view. More specifically, a linear correlation criterion between objective and subjective scoring of up to 0.92 has been obtained on
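
    The temporal integration step can be sketched as a small 1D convolutional network over per-frame feature vectors; the layer sizes and receptive field below are illustrative assumptions, not the paper's architecture:

    ```python
    import torch
    import torch.nn as nn

    class TemporalPooler(nn.Module):
        """Toy TDNN-style pooler: per-frame objective features in, a
        continuous-time quality trace out."""
        def __init__(self, n_features=8, hidden=16, delay=9):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(n_features, hidden, kernel_size=delay, padding=delay // 2),
                nn.ReLU(),
                nn.Conv1d(hidden, 1, kernel_size=1),
            )

        def forward(self, x):                  # x: (batch, n_features, n_frames)
            return self.net(x).squeeze(1)      # (batch, n_frames) quality over time
    ```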

  13. A Code Level Based Programmer Assessment and Selection Criterion Using Metric Tools

    Directory of Open Access Journals (Sweden)

    Ezekiel U. Okike

    2014-11-01

    Full Text Available This study presents a code level measurement of computer programs developed by computer programmers using a Chidamber and Kemerer Java metric (CKJM) tool and the Myers Briggs Type Indicator (MBTI) tool. The identification of potential computer programmers using personality trait factors does not seem to be the best approach without a code level measurement of the quality of programs. Hence the need to evolve a metric tool which measures both the personality traits of programmers and the code level quality of programs developed by programmers. This is the focus of this study. In this experiment, a set of Java based programming tasks was given to 33 student programmers who could confidently use the Java programming language. The code developed by these students was analyzed for quality using a CKJM tool. Cohesion, coupling and number of public methods (NPM) metrics were used in the study. These three metrics were chosen from the CKJM suite because they are useful in measuring well designed code. Examining the cohesion values of classes, high cohesion in the range [0,1] and low coupling imply well designed code. Also, the number of public methods (NPM) in a well-designed class is always less than 5 when cohesion is in the range [0,1]. Results from this study show that 19 of the 33 programmers developed good, cohesive programs while 14 did not. Further analysis revealed the personality traits of programmers and the number of good programs written by them. Programmers with Introverted Sensing Thinking Judging (ISTJ) traits produced the highest number of good programs, followed by Introverted iNtuitive Thinking Perceiving (INTP), Introverted iNtuitive Feeling Perceiving (INFP), and Extroverted Sensing Thinking Judging (ESTJ).

  14. Application of allowable total error in sigma metrics for assessing the analytical quality of clinical chemistry determination

    Institute of Scientific and Technical Information of China (English)

    张路; 王薇; 王治国

    2015-01-01

    Objective To investigate the importance of the allowable total error (TEa) source in sigma (σ) metrics for assessing the analytical quality of clinical chemistry determination. Methods In this study, the data were collected from the second internal quality control exercise and the first external quality assessment of routine chemistry in 2014, organized by the National Center for Clinical Laboratories. One laboratory was selected, and the coefficient of variation (CV) and bias of 19 clinical chemistry items were obtained from its data. σ values for 2 control runs were calculated with 5 different TEa sources, and the performance of the σ metrics for assessing analytical quality was analyzed comparatively. Results σ metrics varied with the changes of TEa and imprecision. Under the National Health Industry Standard, most σ values for control 1 (68.4%) ranged from 2 to 4, and for control 2 (58%) from 3 to 6. Under RiliBÄK, except for triglyceride (negative) and alanine aminotransferase (ALT), σ values of control 1 were above 3 (up to 7.69), and 84.2% of control 2 showed a σ value above 3 (up to 10.43). Under biological variability, the σ value of control 1 ranged from 1 to 5, mostly (63%) below 3; that of control 2 ranged from 1 to 6, with 9 of the 19 items below 3. Under the Australian TEa, the σ value of control 1 was below 3, as was that of 79% of control 2. The σ value of control 2 was generally higher than that of control 1. Conclusions The 6σ approach is an efficient way to control quality, but the lack of TEa for many analytes and inconsistent TEa from different sources are important variables for the interpretation of σ metrics in a routine clinical laboratory.

  15. Irrigation water quality assessments

    Science.gov (United States)

    Increasing demands on fresh water supplies by municipal and industrial users mean decreased fresh water availability for irrigated agriculture in semi-arid and arid regions. There is potential for agricultural use of treated wastewaters and low quality waters for irrigation, but this will require co...

  16. Assessing spelling in kindergarten: further comparison of scoring metrics and their relation to reading skills.

    Science.gov (United States)

    Clemens, Nathan H; Oslund, Eric L; Simmons, Leslie E; Simmons, Deborah

    2014-02-01

    Early reading and spelling development share foundational skills, yet spelling assessment is underutilized in evaluating early reading. This study extended research comparing the degree to which methods for scoring spelling skills at the end of kindergarten were associated with reading skills measured at the same time as well as at the end of first grade. Five strategies for scoring spelling responses were compared: totaling the number of words spelled correctly, totaling the number of correct letter sounds, totaling the number of correct letter sequences, using a rubric for scoring invented spellings, and calculating the Spelling Sensitivity Score (Masterson & Apel, 2010b). Students (N=287) who were identified at kindergarten entry as at risk for reading difficulty and who had received supplemental reading intervention were administered a standardized spelling assessment in the spring of kindergarten, and measures of phonological awareness, decoding, word recognition, and reading fluency were administered concurrently and at the end of first grade. The five spelling scoring metrics were similar in their strong relations with factors summarizing reading subskills (phonological awareness, decoding, and word reading) on a concurrent basis. Furthermore, when predicting first-grade reading skills based on spring-of-kindergarten performance, spelling scores from all five metrics explained unique variance over the autoregressive effects of kindergarten word identification. The practical advantages of using a brief spelling assessment for early reading evaluation and the relative tradeoffs of each scoring metric are discussed.
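
    Of the five metrics, correct letter sequences is the least obvious to score; a simplified positional version (real CBM scoring aligns sequences to handle insertions and omissions) is:

    ```python
    def correct_letter_sequences(response, target):
        """Count adjacent letter pairs, including the implicit word-initial
        and word-final boundaries, that match the target in position."""
        t, r = f"^{target}$", f"^{response}$"   # sentinel boundary characters
        return sum(1 for i in range(min(len(t), len(r)) - 1)
                   if t[i] == r[i] and t[i + 1] == r[i + 1])

    # e.g. correct_letter_sequences("ship", "ship") -> 5 (all sequences),
    #      correct_letter_sequences("shlp", "ship") -> 3 (^s, sh, p$)
    ```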

  17. Perceptual Weights Based On Local Energy For Image Quality Assessment

    Directory of Open Access Journals (Sweden)

    Sudhakar Nagalla

    2014-12-01

    Full Text Available This paper proposes an image quality metric that can effectively measure the quality of an image and correlates well with human judgment on the appearance of the image. The present work adds a new dimension to the structural approach to full-reference image quality assessment for gray scale images. The proposed method assigns more weight to the distortions present in the visual regions of interest of the reference (original) image than to the distortions present in the other regions of the image, referred to as perceptual weights. The perceptual features and their weights are computed based on local energy modeling of the original image. The proposed model is validated using the image database provided by LIVE (Laboratory for Image & Video Engineering), The University of Texas at Austin, based on the evaluation metrics suggested in the Video Quality Experts Group (VQEG) Phase I FR-TV test.

  18. Audiovisual quality assessment in communications applications: Current status, trends and challenges

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2010-01-01

    Audiovisual quality assessment is one of the major challenges in multimedia communications. Traditionally, algorithm-based (objective) assessment methods have focused primarily on compression artifacts. However, compression is only one of the numerous factors influencing the perception... One aspect rarely addressed in practical quality metrics is the co-impact of audio and video qualities. This paper provides an overview of the current trends and challenges in objective audiovisual quality assessment, with emphasis on communication applications.

  19. On using Multiple Quality Link Metrics with Destination Sequenced Distance Vector Protocol for Wireless Multi-Hop Networks

    CERN Document Server

    Javaid, N; Khan, Z A; Djouani, K

    2012-01-01

    In this paper, we compare and analyze the performance of five quality link metrics for Wireless Multi-hop Networks (WMhNs). The metrics, all based on loss probability measurements, are ETX, ETT, InvETX, ML and MD, evaluated in a distance vector routing protocol, DSDV. Among these selected metrics, we have implemented ML, MD, InvETX and ETT in DSDV; these were previously implemented with different protocols: ML, MD and InvETX with OLSR, and ETT in MR-LQSR. For our comparison, we have selected Throughput, Normalized Routing Load (NRL) and End-to-End Delay (E2ED) as performance parameters. Finally, we deduce that InvETX outperforms the other metrics due to its low computational burden and its measurement of link asymmetry.

  20. Quality assessment of urban environment

    Science.gov (United States)

    Ovsiannikova, T. Y.; Nikolaenko, M. N.

    2015-01-01

    This paper is dedicated to research into the applicability of quality management to construction products. It is proposed to expand the borders of quality management in construction, transferring its principles to urban systems as economic systems of a higher level, whose qualitative characteristics are substantially defined by the quality of construction products. Buildings and structures form the spatial-material basis of cities and the most important component of the life sphere - the urban environment. The authors justify the need for the assessment of urban environment quality as an important factor of social welfare and life quality in urban areas, and suggest a definition of the term "urban environment". The methodology of quality assessment of the urban environment is based on an integrated approach which includes the system analysis of all factors and the application of both quantitative assessment methods (calculation of particular and integrated indicators) and qualitative methods (expert estimates and surveys). The authors propose a system of indicators characterizing the quality of the urban environment; these indicators fall into four classes, and the methodology for their definition is shown. The paper presents results of quality assessment of the urban environment for several Siberian regions and a comparative analysis of these results.

  1. Assessing the Quality of Bioforensic Signatures

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Holmes, Aimee E.; Gosink, Luke J.; Webb-Robertson, Bobbie-Jo M.; Kreuzer, Helen W.; Anderson, Richard M.; Brothers, Alan J.; Corley, Courtney D.; Tardiff, Mark F.

    2013-06-04

    We present a mathematical framework for assessing the quality of signature systems in terms of fidelity, cost, risk, and utility—a method we refer to as Signature Quality Metrics (SQM). We demonstrate the SQM approach by assessing the quality of a signature system designed to predict the culture medium used to grow a microorganism. The system consists of four chemical assays designed to identify various ingredients that could be used to produce the culture medium. The analytical measurements resulting from any combination of these four assays can be used in a Bayesian network to predict the probabilities that the microorganism was grown using one of eleven culture media. We evaluated fifteen combinations of the signature system by removing one or more of the assays from the Bayes network. We demonstrated that SQM can be used to distinguish between the various combinations in terms of attributes of interest. The approach assisted in clearly identifying the assays that were least informative, largely because they could discriminate between only a few culture media, and in particular, culture media that are rarely used. There are limitations associated with the data that were used to train and test the signature system. Consequently, our intent is not to draw formal conclusions regarding this particular bioforensic system, but rather to illustrate an analytical approach that could be useful in comparing one signature system to another.

  2. The Northeast Stream Quality Assessment

    Science.gov (United States)

    Van Metre, Peter C.; Riva-Murray, Karen; Coles, James F.

    2016-04-22

    In 2016, the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) is assessing stream quality in the northeastern United States. The goal of the Northeast Stream Quality Assessment (NESQA) is to assess the quality of streams in the region by characterizing multiple water-quality factors that are stressors to aquatic life and evaluating the relation between these stressors and biological communities. The focus of NESQA in 2016 will be on the effects of urbanization and agriculture on stream quality in all or parts of eight states: Connecticut, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont. Findings will provide the public and policymakers with information about the most critical factors affecting stream quality, thus providing insights about possible approaches to protect the health of streams in the region. The NESQA study will be the fourth regional study conducted as part of NAWQA and will be of similar design and scope to the first three, in the Midwest in 2013, the Southeast in 2014, and the Pacific Northwest in 2015 (http://txpub.usgs.gov/RSQA/).

  3. Attention modeling for video quality assessment: balancing global quality and local quality

    OpenAIRE

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2010-01-01

    This paper proposes to evaluate video quality by balancing two quality components: global quality and local quality. The global quality is a result from subjects allocating their attention equally to all regions in a frame and all frames in a video. It is evaluated by image quality metrics (IQM) with averaged spatiotemporal pooling. The local quality is derived from visual attention modeling and quality variations over frames. Saliency, motion, and contrast information are taken into account in ...

  4. The effect of assessment scale and metric selection on the greenhouse gas benefits of woody biomass

    International Nuclear Information System (INIS)

    Recent attention has focused on the net greenhouse gas (GHG) implications of using woody biomass to produce energy. In particular, a great deal of controversy has erupted over the appropriate manner and scale at which to evaluate these GHG effects. Here, we conduct a comparative assessment of six different assessment scales and four different metric calculation techniques against the backdrop of a common biomass demand scenario. We evaluate the net GHG balance of woody biomass co-firing in existing coal-fired facilities in the state of Virginia, finding that assessment scale and metric calculation technique do in fact strongly influence the net GHG balance yielded by this common scenario. Those assessment scales that do not include possible market effects attributable to increased biomass demand, including changes in forest area, forest management intensity, and traditional industry production, generally produce less-favorable GHG balances than those that do. Given the potential difficulty small operators may have generating or accessing information on the extent of these market effects, however, it is likely that stakeholders and policy makers will need to balance accuracy and comprehensiveness with reporting and administrative simplicity. -- Highlights: ► Greenhouse gas (GHG) effects of co-firing forest biomass with coal are assessed. ► GHG effect of replacing coal with forest biomass linked to scale, analytic approach. ► Not accounting for indirect market effects yields poorer relative GHG balances. ► Accounting systems must balance comprehensiveness with administrative simplicity.

  5. Sugar concentration in nectar: a quantitative metric of crop attractiveness for refined pollinator risk assessments.

    Science.gov (United States)

    Knopper, Loren D; Dan, Tereza; Reisig, Dominic D; Johnson, Josephine D; Bowers, Lisa M

    2016-10-01

    Those involved with pollinator risk assessment know that agricultural crops vary in attractiveness to bees. Intuitively, this means that exposure to agricultural pesticides is likely greatest for attractive plants and lowest for unattractive plants. While crop attractiveness in the risk assessment process has been qualitatively remarked on by some authorities, absent is direction on how to refine the process with quantitative metrics of attractiveness. At a high level, attractiveness of crops to bees appears to depend on several key variables, including but not limited to: floral, olfactory, visual and tactile cues; seasonal availability; physical and behavioral characteristics of the bee; plant and nectar rewards. Notwithstanding the complexities and interactions among these variables, sugar content in nectar stands out as a suitable quantitative metric by which to refine pollinator risk assessments for attractiveness. Provided herein is a proposed way to use sugar nectar concentration to adjust the exposure parameter (with what is called a crop attractiveness factor) in the calculation of risk quotients in order to derive crop-specific tier I assessments. This Perspective is meant to invite discussion on incorporating such changes in the risk assessment process. © 2016 The Authors. Pest Management Science published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry. PMID:27197566
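
    The refinement proposed above amounts to scaling the exposure term of a tier I risk quotient by a crop attractiveness factor. A sketch under assumed names and numbers follows; the paper's exact functional form and values are not given in this record.

```python
def risk_quotient(exposure: float, toxicity_endpoint: float,
                  attractiveness_factor: float = 1.0) -> float:
    """Tier I risk quotient with the exposure term scaled by a crop
    attractiveness factor in [0, 1]; names and scaling are illustrative
    assumptions, not the authors' exact formulation."""
    return (exposure * attractiveness_factor) / toxicity_endpoint

# Hypothetical numbers: same pesticide on an attractive vs. unattractive crop.
print(risk_quotient(exposure=0.5, toxicity_endpoint=1.0, attractiveness_factor=1.0))
print(risk_quotient(exposure=0.5, toxicity_endpoint=1.0, attractiveness_factor=0.2))
```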

  6. A New Normalizing Algorithm for BAC CGH Arrays with Quality Control Metrics

    Directory of Open Access Journals (Sweden)

    Jeffrey C. Miecznikowski

    2011-01-01

    Full Text Available The main focus in pin-tip (or print-tip) microarray analysis is determining which probes, genes, or oligonucleotides are differentially expressed. Specifically in array comparative genomic hybridization (aCGH) experiments, researchers search for chromosomal imbalances in the genome. To model this data, scientists apply statistical methods to the structure of the experiment and assume that the data consist of the signal plus random noise. In this paper we propose “SmoothArray”, a new method to preprocess comparative genomic hybridization (CGH) bacterial artificial chromosome (BAC) arrays, and we show the effects on a cancer dataset. As part of our R software package “aCGHplus,” this freely available algorithm removes the variation due to the intensity effects, pin/print-tip, the spatial location on the microarray chip, and the relative location from the well plate. Removal of this variation improves the downstream analysis and subsequent inferences made on the data. Further, we present measures to evaluate the quality of the dataset according to the arrayer pins, 384-well plates, plate rows, and plate columns. We compare our method against competing methods using several metrics to measure the biological signal. With this novel normalization algorithm and quality control measures, the user can improve their inferences on datasets and pinpoint problems that may arise in their BAC aCGH technology.

  7. Quality assessment of images displayed on LCD screen with local backlight dimming

    DEFF Research Database (Denmark)

    Mantel, Claire; Burini, Nino; Korhonen, Jari;

    2013-01-01

    This paper presents a subjective experiment collecting quality assessment of images displayed on an LCD with local backlight dimming using two methodologies: absolute category ratings and paired-comparison. Some well-known objective quality metrics are then applied to the stimuli and their respective...

  8. Blind image quality assessment via deep learning.

    Science.gov (United States)

    Hou, Weilong; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2015-06-01

    This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, many learning-based IQA models have been proposed by analyzing the mapping from images to numerical ratings. However, the learnt mapping can hardly be accurate enough because some information is lost in such an irreversible conversion from the linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model, which learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades, corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than the regression-based models, but also robust to the small sample size problem. Thorough experiments are conducted on popular databases to verify the model's effectiveness, efficiency, and robustness.
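
    The quality pooling step described above, converting five qualitative grades into one score, can be sketched as a probability-weighted mean over grade anchors. The anchor values below are an assumption for illustration, not the authors' exact pooling.

```python
# MOS-style anchors for the five mental concepts; assumed for illustration.
GRADE_SCORES = {"bad": 1.0, "poor": 2.0, "fair": 3.0, "good": 4.0, "excellent": 5.0}

def pool_quality(grade_probs: dict) -> float:
    """Convert classifier posteriors over qualitative grades into one score
    by taking the probability-weighted mean of the grade anchors."""
    return sum(GRADE_SCORES[g] * p for g, p in grade_probs.items())

# Example posterior from a five-way classifier.
posterior = {"bad": 0.05, "poor": 0.10, "fair": 0.25, "good": 0.45, "excellent": 0.15}
print(f"pooled quality score: {pool_quality(posterior):.2f}")  # 3.55
```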

  9. Application Comparison of Two Sources of Allowable Total Errors in σ Metrics for Assessing the Analytical Quality and Selecting Quality Control Procedures for Automated Clinical Chemistry

    Institute of Scientific and Technical Information of China (English)

    张路; 王薇; 王治国

    2015-01-01

    Objective To evaluate the difference between two sources of allowable total errors, provided by the National Health Industry Standard (WS/T 403-2012, analytical quality specifications for routine analytes in clinical biochemistry) and the National Standard (GB/T 20470-2006, requirements of external quality assessment for clinical laboratories), in assessing analytical quality by σ metrics and in selecting quality control procedures using operational process specifications graphs. Methods One of the laboratories participating in the internal quality control activity of routine chemistry of February 2014 and the first external quality assessment activity of routine chemistry in 2014, both organized by the National Center for Clinical Laboratories, was selected for the coefficient of variation and the bias of nineteen clinical chemistry tests. With the CV% and bias%, σ metrics of controls at two analyte concentrations were calculated using the two different allowable total error targets. Operational process specifications graphs, from which quality control procedures can be selected, were obtained using the quality control computer simulation software developed by the National Center for Clinical Laboratories and the company Zhongchuangyida. Results The σ metrics under the National Health Industry Standard (WS/T 403-2012) were from 0 to 7. Most of the values (86% and 76.2%) under the National Standard (GB/T 20470-2006) were from 3 to 15. On the normalized method decision chart, the assay quality using the allowable total error targets of the National Standard (GB/T 20470-2006) was at least one hierarchy above that using the National Health Industry Standard (WS/T 403-2012). The quality control rules under the National Health Industry Standard (WS/T 403-2012) were obviously more strict than those under the National Standard (GB/T 20470-2006). Among the control procedures using the National Health Industry Standard (WS/T 403-2012), multirule (n=4): ALB, ALP, Ca, Cl, TC, Crea, Glu, LDH, K, Na

  10. Quality Metrics of Semi Automatic DTM from Large Format Digital Camera

    Science.gov (United States)

    Narendran, J.; Srinivas, P.; Udayalakshmi, M.; Muralikrishnan, S.

    2014-11-01

    The high resolution digital images from the Ultracam-D Large Format Digital Camera (LFDC) were used for near-automatic DTM generation. In the past, manual methods for DTM generation were used, which are time consuming and labour intensive. In this study, the LFDC was used in synergy with an accurate position and orientation system and processes like image matching algorithms, distributed processing and filtering techniques for near-automatic DTM generation. Traditionally, DTM accuracy is reported using check points collected from the field, which are limited in number, time consuming and costly. This paper discusses the reliability of the near-automatic DTM generated from Ultracam-D for an operational project covering an area of nearly 600 sq. km, using 21,000 check points captured stereoscopically by experienced operators. The reliability of the DTM for the three study areas with different morphology is presented using a large number of stereo check points and parameters related to the statistical distribution of residuals, such as skewness, kurtosis, standard deviation and linear error at 90% confidence interval. The residuals obtained for the three areas follow a normal distribution, in agreement with the majority of standards on positional accuracy. The quality metrics in terms of reliability were computed for the generated DTMs; the tables and graphs show the potential of Ultracam-D for a semiautomatic DTM generation process for different terrain types.

  11. A new quality assessment and improvement system for print media

    Science.gov (United States)

    Liu, Mohan; Konya, Iuliu; Nandzik, Jan; Flores-Herr, Nicolas; Eickeler, Stefan; Ndjiki-Nya, Patrick

    2012-12-01

    Print media collections of considerable size are held by cultural heritage organizations and will soon be subject to digitization activities. However, technical content quality management in digitization workflows strongly relies on human monitoring. This heavy human intervention is cost intensive and time consuming, which makes automation mandatory. In this article, a new automatic quality assessment and improvement system is proposed. The digitized source image and color reference target are extracted from the raw digitized images by an automatic segmentation process. The target is evaluated by a reference-based algorithm. No-reference quality metrics are applied to the source image. Experimental results are provided to illustrate the performance of the proposed system. We show that it features good performance in the extraction as well as in the quality assessment step compared to the state-of-the-art. The impact of efficient and dedicated quality assessors on the optimization step is extensively documented.

  12. Applying Undertaker to quality assessment

    DEFF Research Database (Denmark)

    Archie, John G.; Paluszewski, Martin; Karplus, Kevin

    2009-01-01

    Our group tested three quality assessment functions in CASP8: a function which used only distance constraints derived from alignments (SAM-T08-MQAO), a function which added other single-model terms to the distance constraints (SAM-T08-MQAU), and a function which used both single-model and consens...

  13. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    Science.gov (United States)

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    The objective approaches of 3D image quality assessment play a key role for the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterparts. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs), when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors-the binocular combination and the binocular frequency integration, are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency could be reached between the measured MOS and the proposed metrics, in which the correlation coefficient between them can go up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of the synthesized color-plus-depth 3D images well. Therefore, it is our belief that the binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.

  14. Color image quality assessment with biologically inspired feature and machine learning

    Science.gov (United States)

    Deng, Cheng; Tao, Dacheng

    2010-07-01

    In this paper, we present a new no-reference quality assessment metric for color images by using biologically inspired features (BIFs) and machine learning. In this metric, we first adopt a biologically inspired model to mimic the visual cortex and represent a color image based on BIFs, unifying color units, intensity units and C1 units. Then, in order to reduce the complexity and benefit the classification, the high dimensional features are projected to a low dimensional representation with manifold learning. Finally, a multiclass classification process is performed on this new low dimensional representation of the image, and the quality assessment is based on the learned classification result in order to match that of human observers. Instead of computing a final score, our method classifies the quality according to the quality scale recommended by the ITU. The preliminary results show that the developed metric can achieve good quality evaluation performance.

  15. Timeliness “at a glance”: assessing the turnaround time through the six sigma metrics.

    Science.gov (United States)

    Ialongo, Cristiano; Bernardini, Sergio

    2016-01-01

    Almost thirty years of systematic analysis have proven turnaround time to be a fundamental dimension of the clinical laboratory. Several indicators are available to date to assess and report quality with respect to timeliness, but they sometimes lack communicative immediacy and accuracy. Six sigma is a paradigm developed within the industrial domain for assessing quality and addressing goals and issues. The sigma level computed through the Z-score method is a simple and straightforward tool which expresses quality on a universal dimensionless scale and can handle non-normal data. Herein we report our preliminary experience in using the sigma level to assess the change in urgent (STAT) test turnaround time due to the implementation of total automation. We found that the Z-score method is a valuable and easy-to-use method for assessing and communicating the quality level of laboratory timeliness, providing a good correspondence with the actual change in efficiency that was retrospectively observed.
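
    The Z-score route from TAT data to a sigma level is direct: treat the share of STAT results missing the TAT goal as the defect rate and convert it with the normal quantile plus the conventional 1.5 long-term shift. A sketch with hypothetical timings and an assumed 60-minute goal:

```python
from statistics import NormalDist

def tat_sigma_level(tat_minutes, goal_minutes=60.0, long_term_shift=1.5):
    """Sigma level of turnaround time: the defect rate is the share of
    results exceeding the goal; the 1.5 shift is the six sigma convention."""
    defects = sum(t > goal_minutes for t in tat_minutes) / len(tat_minutes)
    if defects == 0:
        return float("inf")
    return NormalDist().inv_cdf(1.0 - defects) + long_term_shift

# Hypothetical STAT TATs (minutes) before and after total automation.
before = [45, 52, 71, 58, 66, 49, 80, 55, 63, 47]
after = [38, 42, 51, 44, 57, 40, 61, 46, 43, 39]
print(f"before: {tat_sigma_level(before):.2f} sigma")
print(f"after:  {tat_sigma_level(after):.2f} sigma")
```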

  16. Determine metrics and set targets for soil quality on agriculture residue and energy crop pathways

    Energy Technology Data Exchange (ETDEWEB)

    Ian Bonner; David Muth

    2013-09-01

    There are three objectives for this project: 1) support OBP in meeting MYPP stated performance goals for the Sustainability Platform, 2) develop integrated feedstock production system designs that increase total productivity of the land, decrease delivered feedstock cost to the conversion facilities, and increase environmental performance of the production system, and 3) deliver to the bioenergy community robust datasets and flexible analysis tools for establishing sustainable and viable use of agricultural residues and dedicated energy crops. The key project outcome to date has been the development and deployment of a sustainable agricultural residue removal decision support framework. The modeling framework has been used to produce a revised national assessment of sustainable residue removal potential. The national assessment datasets are being used to update national resource assessment supply curves using POLYSIS. The residue removal modeling framework has also been enhanced to support high fidelity sub-field scale sustainable removal analyses. The framework has been deployed through a web application and a mobile application. The mobile application is being used extensively in the field with industry, research, and USDA NRCS partners to support and validate sustainable residue removal decisions. The results detailed in this report have set targets for increasing soil sustainability by focusing on primary soil quality indicators (total organic carbon and erosion) in two agricultural residue management pathways and a dedicated energy crop pathway. The two residue pathway targets were set to 1) increase residue removal by 50% while maintaining soil quality, and 2) increase soil quality by 5% as measured by Soil Management Assessment Framework indicators. The energy crop pathway target was set to increase soil quality by 10% using these same indicators. To demonstrate the feasibility and impact of each of these targets, seven case studies spanning the US are presented.

  17. NEW VISUAL PERCEPTUAL POOLING STRATEGY FOR IMAGE QUALITY ASSESSMENT

    Institute of Scientific and Technical Information of China (English)

    Zhou Wujie; Jiang Gangyi; Yu Mei

    2012-01-01

    Most Image Quality Assessment (IQA) metrics consist of two processes. In the first process, a quality map of the image is measured locally. In the second process, the final quality score is converted from the quality map by using a pooling strategy. The first process has made effective and significant progress, while the second process has always been done in simple ways. In the pooling strategy of the second process, the optimal perceptual pooling weights should be determined and computed according to the Human Visual System (HVS). Thus, a reliable spatial pooling mathematical model based on the HVS is an important issue worthy of study. In this paper, a new Visual Perceptual Pooling Strategy (VPPS) for IQA is presented based on the contrast sensitivity and luminance sensitivity of the HVS. Experimental results with the LIVE database show that the visual perceptual weights obtained by the proposed pooling strategy can effectively and significantly improve the performance of IQA metrics with Mean Structural SIMilarity (MSSIM) or Phase Quantization Code (PQC). It is confirmed that the proposed VPPS demonstrates promising results for improving the performance of existing IQA metrics.
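
    The two-process structure described above is easy to express in code: a local quality map followed by a weighted spatial pooling, where VPPS would supply HVS-derived weights. The sketch below uses random stand-ins for both the map and the weights, since the record does not specify the actual weight computation.

```python
import numpy as np

def pooled_score(quality_map: np.ndarray, weights: np.ndarray) -> float:
    """Second-process pooling: weighted average of a local quality map.
    Uniform weights reduce this to plain mean pooling (as in MSSIM)."""
    return float((quality_map * weights).sum() / weights.sum())

rng = np.random.default_rng(0)
qmap = rng.uniform(0.6, 1.0, size=(32, 32))        # stand-in local quality map
uniform = np.ones_like(qmap)                       # equal-weight baseline
percept = rng.uniform(0.0, 1.0, size=qmap.shape)   # stand-in perceptual weights
print(pooled_score(qmap, uniform), pooled_score(qmap, percept))
```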

  18. Aerial Image Series Quality Assessment

    International Nuclear Information System (INIS)

    With the growing demand for geospatial data, aerial imagery with high spatial, spectral, and temporal resolution has seen great development. It is imperative to evaluate whether acquired images are of sufficient quality, since subsequent image mosaicking requires strict time consistency and a re-flight involves considerable resources. In this paper, we address the problem of quick aerial image series quality assessment. An image series quality analysis system is proposed, which includes single image quality assessment, image series quality assessment based on image matching, and a visual matching result offered in real time for human validation when the computer produces dubious results. For two images, the affine matrix differs for different parts of the images, especially for images with a wide field of view. Therefore we calculate transfer matrices using evenly distributed control points from different image parts with RANSAC, and use the image rotation angle of the image mosaic for human validation. Extensive experiments conducted on aerial images show that the proposed method can obtain results similar to those of experts.

  19. Critical Assessment of the Foundations of Power Transmission and Distribution Reliability Metrics and Standards.

    Science.gov (United States)

    Nateghi, Roshanak; Guikema, Seth D; Wu, Yue Grace; Bruss, C Bayan

    2016-01-01

    The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events.
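
    The distribution reliability indices at issue here are typically the IEEE Std 1366 family, which are simple ratios over outage records; reporting decisions such as excluding major event days change the inputs, not the formulas. A minimal sketch with hypothetical outages:

```python
# Each outage record: (customers_interrupted, duration_minutes).
outages = [(1200, 90), (300, 45), (5000, 240), (60, 30)]
customers_served = 50_000

saifi = sum(c for c, _ in outages) / customers_served      # interruptions per customer
saidi = sum(c * d for c, d in outages) / customers_served  # minutes per customer
caidi = saidi / saifi                                      # minutes per interruption
print(f"SAIFI={saifi:.3f}, SAIDI={saidi:.1f} min, CAIDI={caidi:.1f} min")
```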

  20. Enforcing Quality Metrics over Equipment Utilization Rates as Means to Reduce Centers for Medicare and Medicaid Services Imaging Costs and Improve Quality of Care

    Directory of Open Access Journals (Sweden)

    Amit Sura

    2011-01-01

    Examining quality metrics such as appropriateness criteria and pre-authorization has yielded promising results. The development and enforcement of appropriateness criteria lowers overutilization of studies without requiring unattainable fixed rates. Pre-authorization educates ordering physicians as to when imaging is indicated.

  1. SOME METRIC CHARACTERISTICS OF TESTS TO ASSESS BALL SPEED DURING OVERARM THROW PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Ante Prižmić

    2010-12-01

    Full Text Available The aim of the study was to determine the metric characteristics of 2 tests for evaluating handball ball speed during the overarm throw of a handball ball. Research was conducted on a sample of 50 students of the Faculty of Kinesiology, average age 20.4 years. Besides measurements of body height and body weight, the speed of ball flight after an overarm throw from a sitting position (distance 4 meters) was assessed with a radar gun. The tests of the overarm throw were performed with a blocked and with a free hand which does not perform a throw. Results show satisfactory reliability, sensitivity and validity of all tests. The homogeneity of the tests was not good, considering that a positive trend of results was observed; this is a consequence of respondents adapting to the technique of overarm throw performance. Factor analysis extracted a latent dimension that may be called a factor of ball speed during overarm throw performance. Respondents achieved significantly better results in the test RS because of the biomechanically freer movement, which also confirmed the pragmatic validity of the tests. The tests are best used in sports like handball, water polo, tennis, volleyball, baseball or the throwing disciplines in athletics because of the similarity of the overarm performance to technical elements of the chosen sport. The advantages of the tests are fast performance, easy execution and good metric characteristics; the defects are poor homogeneity and the necessity for a radar gun.

  2. Data quality assessment from provenance graphs

    OpenAIRE

    Huynh, Trung Dong; Ebden, Mark; Ramchurn, Sarvapali; Roberts, Stephen; Moreau, Luc

    2014-01-01

    Provenance is a domain-independent means to represent what happened in an application, which can help verify data and infer data quality. Provenance patterns can manifest real-world phenomena such as a significant interest in a piece of content, providing an indication of its quality, or even issues such as undesirable interactions within a group of contributors. This paper presents an application-independent methodology for analyzing data based on the network metrics of provenance graphs to ...

  3. Software Architecture Coupling Metric for Assessing Operational Responsiveness of Trading Systems

    Directory of Open Access Journals (Sweden)

    Claudiu VINTE

    2012-01-01

    Full Text Available The empirical observation that motivates our research relies on the difficulty of assessing the performance of a trading architecture beyond a few synthetic indicators like response time, system latency, availability or volume capacity. Trading systems involve complex software architectures of distributed resources. However, in the context of a large brokerage firm, which offers global coverage from both market and client perspectives, the term distributed gains a critical significance indeed. Offering a low latency ordering system by today's standards is relatively easily achievable, but integrating it in a flexible manner within the broader information system architecture of a broker/dealer requires operational aspects to be factored in. We propose a metric for measuring the coupling level within software architecture, and employ it to identify architectural designs that can offer a higher level of operational responsiveness, which ultimately would raise the overall real-world performance of a trading system.

  4. A City and National Metric measuring Isolation from the Global Market for Food Security Assessment

    Science.gov (United States)

    Brown, Molly E.; Silver, Kirk Coleman; Rajagopalan, Krishnan

    2013-01-01

    The World Bank has invested in infrastructure in developing countries for decades. This investment aims to reduce the isolation of markets, reducing both seasonality and variability in food availability and food prices. Here we combine city market price data, global distance to port, and country infrastructure data to create a new Isolation Index for countries and cities around the world. Our index quantifies the isolation of a city from the global market. We demonstrate that an index built at the country level can be applied at a sub-national level to quantify city isolation. In doing so, we offer policy makers an alternative metric to assess food insecurity. We compare our Isolation Index with other indices and economic data found in the literature. We show that our index measures economic isolation regardless of economic stability, using correlation and analysis

  5. Assessing water quality trends in catchments with contrasting hydrological regimes

    Science.gov (United States)

    Sherriff, Sophie C.; Shore, Mairead; Mellander, Per-Erik

    2016-04-01

    Environmental resources are under increasing pressure to simultaneously achieve social, economic and ecological aims. Increasing demand for food production, for example, has expanded and intensified agricultural systems globally. In turn, greater risks of diffuse pollutant delivery (suspended sediment (SS) and phosphorus (P)) from land to water, due to higher stocking densities, fertilisation rates and soil erodibility, have been linked to deterioration of the chemical and ecological quality of aquatic ecosystems. Development of sustainable and resilient management strategies for agro-ecosystems must detect and consider the impact of land use disturbance on water quality over time. However, assessment of multiple monitoring sites over a region is challenged by hydro-climatic fluctuations and the propagation of events through catchments with contrasting hydrological regimes. Simple water quality metrics, for example flow-weighted pollutant exports, have the potential to normalise the impact of catchment hydrology and better identify water quality fluctuations due to land use and short-term climate fluctuations. This paper assesses the utility of flow-weighted water quality metrics to evaluate periods and causes of critical pollutant transfer. Sub-hourly water quality (SS and P) and discharge data were collected from hydrometric monitoring stations at the outlets of five small (~10 km2) agricultural catchments in Ireland. Catchments possess contrasting land uses (predominantly grassland or arable) and soil drainage (poorly, moderately or well drained) characteristics. Flow-weighted water quality metrics were calculated and evaluated according to fluctuations in source pressure and rainfall. Flow-weighted water quality metrics successfully identified fluctuations in pollutant export which could be attributed to land use changes through the agricultural calendar, i.e., groundcover fluctuations. In particular, catchments with predominantly poor or moderate soil drainage
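
    The flow-weighted metric used in this study is, at its core, the flow-weighted mean concentration: total load divided by total discharge over the period. A sketch with hypothetical paired samples (the actual study uses sub-hourly records):

```python
def flow_weighted_mean_concentration(concs, flows):
    """FWMC = sum(c_i * q_i) / sum(q_i) for paired concentration (mg/L)
    and discharge (L/s) samples on a regular time step."""
    load = sum(c * q for c, q in zip(concs, flows))
    return load / sum(flows)

ss_mg_per_l = [12.0, 35.0, 180.0, 90.0, 20.0]   # suspended sediment samples
q_l_per_s = [50.0, 80.0, 400.0, 220.0, 70.0]    # discharge at the same times
print(f"FWMC = {flow_weighted_mean_concentration(ss_mg_per_l, q_l_per_s):.1f} mg/L")
```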

  6. Landscape Metric Modeling - a Technique for Forest Disturbance Assessment in Shendurney Wildlife Sanctuary

    Directory of Open Access Journals (Sweden)

    Subin Jose

    2011-12-01

    Full Text Available Deforestation and forest degradation are associated, progressive processes that result from anthropogenic stress and climate change, converting forest area into a mosaic of mature forest fragments, pasture, and degraded habitat. The present study addresses forest degradation assessment of a landscape using landscape metrics. Geospatial techniques, including GIS, remote sensing and FRAGSTATS methods, are powerful tools in the assessment of forest degradation. The present study is carried out in Shendurney wildlife sanctuary, located in the mega biodiversity hotspot of the Western Ghats, Kerala. A large extent of forest is affected by degradation in this region, leading to depletion of forest biodiversity. For the conservation of forest biodiversity and the implementation of conservation strategies, degradation assessment of areas of habitat destruction is important. Two types of data are used in the study, i.e. spatial and non-spatial data; non-spatial data include both anthropogenic stress and climate data. The study shows that the disturbance index value ranges from 2.5 to 7.5, which has been reclassified into four disturbance zones: low, medium, high and very high disturbance. The analysis would play a key role in the formulation and implementation of forest conservation and management strategies.

  7. Multiscale Metrics to Assess Flood Resilience: Feedbacks from SMARTesT

    Science.gov (United States)

    Schertzer, D.; Tchiguirinskaia, I.; Lovejoy, S.

    2012-04-01

    The goal of the FP7 SMARTesT project is to greatly improve flood resilient technologies and systems. A major difficulty is that hydrological basins, in particular urban basins, are systems that are not only complicated, due to their large number of components with multiple functions, but also complex. This explains many failures in flood management, as well as the difficulty of assessing, including with the help of numerical simulations, the resilience of a flood management system and therefore of optimizing strategies. The term resilience has become extremely fashionable, although corresponding operational and mathematical definitions have remained rather elusive. The latter are required to analyse flood scenarios and simulations, and should be based on some conceptual definition, e.g. the definition of "ecological resilience" (Hollings 1973). The first attempt to define resilience metrics was based on the dynamical system approach. In spite of its mathematical elegance and apparent rigor, this approach suffers from a series of limitations. A limitation shared with viability theory is the emergence of spatial scales in systems that are complex in time and space. As recently discussed (Folke et al., 2010), "multiscale resilience is fundamental for understanding the interplay between persistence and change, adaptability and transformability". An operational definition of multiscale resilience can be obtained as soon as scale symmetries are considered. The latter considerably reduce the space-time complexity by defining scale independent variables, called singularities. A scale independent resilience metric should rely on singularities, e.g. to measure qualitative changes of their distribution. Incidentally, singularities are more and more used to analyse urban floods, as is done for climate scenario analysis. A radical point of view would correspond to defining the scale independent analogues of the viability constraint set, viability kernel and resilient basin for

  8. MULTICRITERIA APPROACH FOR ASSESSMENT OF ENVIRONMENTAL QUALITY

    OpenAIRE

    Boris Agarski; Igor Budak; Janko Hodolič; Đorđe Vukelić

    2010-01-01

    The environment is an important and inevitable element that has a direct impact on quality of life. Furthermore, environmental protection represents a prerequisite for a healthy and sustainable way of life. Environmental quality can be represented through specific indicators that can be identified, measured, analyzed, and assessed with adequate methods for the assessment of environmental quality. The problem of insight into total environmental quality, caused by different, mutually incomparable, indicators of environm...

  9. A multi-model multi-objective study to evaluate the role of metric choice on sensitivity assessment

    Science.gov (United States)

    Haghnegahdar, Amin; Razavi, Saman; Wheater, Howard; Gupta, Hoshin

    2016-04-01

    Sensitivity analysis (SA) is an essential tool for providing insight into model behavior, calibration, and uncertainty assessment. It is often overlooked that the metric choice can significantly change the assessment of model sensitivity. In order to identify important hydrological processes across various case studies, we conducted a multi-model multi-criteria sensitivity analysis using a novel and efficient technique, Variogram Analysis of Response Surfaces (VARS). The analysis was conducted using three physically-based hydrological models, applied at various scales ranging from small (hillslope) to large (watershed) scales. In each case, the sensitivity of simulated streamflow to model processes (represented through parameters) was measured using different metrics selected based on various hydrograph characteristics, including high flows, low flows, and volume. It is demonstrated that metric choice has a significant influence on SA results and must be aligned with study objectives. Guidelines for identifying important model parameters from a multi-objective SA perspective are discussed as part of this study.

  10. Total Probability of Collision as a Metric for Finite Conjunction Assessment and Collision Risk Management

    Science.gov (United States)

    Frigm, R.; Johnson, L.

    The Probability of Collision (Pc) has become a universal metric and statement of on-orbit collision risk. Although several flavors of the computation exist and are well-documented in the literature, the basic calculation requires the same input: estimates for the position, position uncertainty, and sizes of the two objects involved. The Pc is used operationally to make decisions on whether a given conjunction poses significant collision risk to the primary object (or space asset of concern). It is also used to determine the necessity and degree of mitigative action (typically in the form of an orbital maneuver) to be performed. The predicted post-maneuver Pc also informs the maneuver planning process regarding the timing, direction, and magnitude of the maneuver needed to mitigate the collision risk. Although the data sources, techniques, decision calculus, and workflows vary for different agencies and organizations, they all have a common thread. The standard conjunction assessment and collision risk concept of operations (CONOPS) predicts conjunctions, assesses the collision risk (typically via the Pc), and plans and executes avoidance activities for conjunctions as discrete events. As the space debris environment continues to grow and improvements are made to remote sensing capabilities and sensitivities to detect, track, and predict smaller debris objects, the number of conjunctions will in turn continue to increase. The expected order-of-magnitude increase in the number of predicted conjunctions will challenge the paradigm of treating each conjunction as a discrete event. The challenge will not be limited to workload issues, such as manpower and computing performance, but will also affect the ability of satellite owner/operators to successfully execute their mission while also managing on-orbit collision risk. Executing a propulsive maneuver occasionally can easily be absorbed into the mission planning and operations tempo; whereas, continuously planning evasive

  11. Total Probability of Collision as a Metric for Finite Conjunction Assessment and Collision Risk Management

    Science.gov (United States)

    Frigm, Ryan C.; Hejduk, Matthew D.; Johnson, Lauren C.; Plakalovic, Dragan

    2015-01-01

    On-orbit collision risk is becoming an increasing mission risk to all operational satellites in Earth orbit. Managing this risk can be disruptive to mission and operations, presents challenges for decision-makers, and is time-consuming for all parties involved. With the planned capability improvements in detecting and tracking smaller orbital debris and capacity improvements to routinely predict on-orbit conjunctions, this mission risk will continue to grow in terms of likelihood and effort. It is a very real possibility that the future space environment will not allow collision risk management and mission operations to be conducted in the same manner as they are today. This paper presents the concept of a finite conjunction assessment: one where each discrete conjunction is not treated separately but, rather, as a continuous event that must be managed concurrently. The paper also introduces the Total Probability of Collision as an analogous metric for finite conjunction assessment operations and provides several options for its usage in a Concept of Operations.
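
    One natural construction of a total probability of collision over a set of conjunctions is the complement of surviving every event, assuming independence; whether this matches the authors' exact formulation is not stated in the record, so treat the sketch as an assumption.

```python
import math

def total_pc(event_pcs):
    """Total probability of collision across n conjunctions, assuming
    independent events: 1 - prod(1 - Pc_i)."""
    return 1.0 - math.prod(1.0 - p for p in event_pcs)

# Hypothetical Pc values for one asset's upcoming conjunctions.
pcs = [1e-5, 4e-6, 2.5e-4, 7e-5]
print(f"total Pc over {len(pcs)} events: {total_pc(pcs):.2e}")
```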

  12. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Shiraishi, Satomi; Moore, Kevin L., E-mail: kevinmoore@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California 92093 (United States); Tan, Jun [Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas 75490 (United States); Olsen, Lindsey A. [Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri 63110 (United States)

    2015-02-15

    Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V10Gy (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through the initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QMclin − QMpred, and a coefficient of determination, R². For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are

  13. Performance metrics

    CERN Document Server

    Pijpers, F P

    2006-01-01

    Scientific output varies between research fields and between disciplines within a field such as astrophysics. Even in fields where publication is the primary output, there is considerable variation in publication and hence in citation rates. Data from the Smithsonian/NASA Astrophysics Data System is used to illustrate this problem and argue against a "one size fits all" approach to performance metrics, especially over the short time-span covered by the Research Assessment Exercise (soon underway in the UK).

  14. Monitoring cognitive function and need with the automated neuropsychological assessment metrics in Decompression Sickness (DCS) research

    Science.gov (United States)

    Nesthus, Thomas E.; Schiflett, Sammuel G.

    1993-01-01

    Hypobaric decompression sickness (DCS) research presents the medical monitor with the difficult task of assessing the onset and progression of DCS largely on the basis of subjective symptoms. Even with the introduction of precordial Doppler ultrasound techniques for the detection of venous gas emboli (VGE), correct prediction of DCS can be made only about 65 percent of the time according to data from the Armstrong Laboratory's (AL's) hypobaric DCS database. An AL research protocol concerned with exercise and its effects on denitrogenation efficiency includes implementation of a performance assessment test battery to evaluate cognitive functioning during a 4-h simulated 30,000 ft (9144 m) exposure. Information gained from such a test battery may assist the medical monitor in identifying early signs of DCS and subtle neurologic dysfunction related to cases of asymptomatic, but advanced, DCS. This presentation concerns the selection and integration of a test battery and the timely graphic display of subject test results for the principal investigator and medical monitor. A subset of the Automated Neuropsychological Assessment Metrics (ANAM) developed through the Office of Military Performance Assessment Technology (OMPAT) was selected. The ANAM software provides a library of simple tests designed for precise measurement of processing efficiency in a variety of cognitive domains. For our application and time constraints, two tests requiring high levels of cognitive processing and memory were chosen along with one test requiring fine psychomotor performance. Accuracy, speed, and processing throughput variables as well as RMS error were collected. An automated mood survey provided 'state' information on six scales including anger, happiness, fear, depression, activity, and fatigue. An integrated and interactive LOTUS 1-2-3 macro was developed to import and display past and present task performance and mood-change information.

  15. Quality assessment of digital annotated ECG data from clinical trials by the FDA ECG Warehouse.

    Science.gov (United States)

    Sarapa, Nenad

    2007-09-01

    The FDA mandates that digital electrocardiograms (ECGs) from 'thorough' QTc trials be submitted into the ECG Warehouse in Health Level 7 extensible markup language (XML) format with annotated onset and offset points of waveforms. The FDA has not disclosed the exact Warehouse metrics and minimal acceptable quality standards. The author describes the Warehouse scoring algorithms and metrics used by the FDA, points out ways to improve FDA review, and suggests Warehouse benefits for pharmaceutical sponsors. The Warehouse ranks individual ECGs according to their score for each quality metric and produces histogram distributions with Warehouse-specific thresholds that identify ECGs of questionable quality. Automatic Warehouse algorithms assess the quality of QT annotation and duration of manual QT measurement by the central ECG laboratory.

  16. Convective Weather Forecast Quality Metrics for Air Traffic Management Decision-Making

    Science.gov (United States)

    Chatterji, Gano B.; Gyarfas, Brett; Chan, William N.; Meyn, Larry A.

    2006-01-01

    Characterizing convective weather forecasts by the process described in Refs. 5 through 7, in terms of percentage coverage or confidence level, is notionally sound compared to characterizing in terms of probabilities, because the probability of the forecast being correct can only be determined using actual observations. References 5 through 7 only use the forecast data and not the observations. The method for computing the probability of detection, false alarm ratio and several forecast quality metrics (Skill Scores) using both the forecast and observation data is given in Ref. 2. This paper extends the statistical verification method in Ref. 2 to determine co-occurrence probabilities. The method consists of computing the probability that a severe weather cell (grid location) is detected in the observation data in the neighborhood of the severe weather cell in the forecast data. Probabilities of occurrence at the grid location and in its neighborhood with higher severity, and with lower severity, in the observation data compared to the forecast data are examined. The method proposed in Refs. 5 through 7 is used for computing the probability that a certain number of cells in the neighborhood of severe weather cells in the forecast data are seen as severe weather cells in the observation data. Finally, the probability of existence of gaps in the observation data in the neighborhood of severe weather cells in forecast data is computed. Gaps are defined as openings between severe weather cells through which an aircraft can safely fly to its intended destination. The rest of the paper is organized as follows. Section II summarizes the statistical verification method described in Ref. 2. The extension of this method for computing the co-occurrence probabilities is discussed in Section III. Numerical examples using NCWF forecast data and NCWD observation data are presented in Section III to elucidate the characteristics of the co-occurrence probabilities. This section also discusses the procedure for computing
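
    The probability of detection and false alarm ratio mentioned above are contingency-table ratios over forecast/observation pairs. A minimal sketch with hypothetical counts:

```python
def pod_far(hits: int, misses: int, false_alarms: int):
    """Probability of detection and false alarm ratio from a
    forecast/observation contingency table."""
    pod = hits / (hits + misses)                 # observed cells that were forecast
    far = false_alarms / (hits + false_alarms)   # forecast cells that did not verify
    return pod, far

pod, far = pod_far(hits=420, misses=180, false_alarms=260)
print(f"POD = {pod:.2f}, FAR = {far:.2f}")
```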

  17. Color Image Quality Assessment Based on CIEDE2000

    Directory of Open Access Journals (Sweden)

    Yang Yang

    2012-01-01

    Full Text Available Combining the color difference formula of CIEDE2000 and the printing industry standard for visual verification, we present an objective color image quality assessment method correlated with subjective vision perception. An objective score conformed to subjective perception (OSCSP Q) was proposed to directly reflect the subjective visual perception. In addition, we present a general method to calibrate correction factors of the color difference formula under real experimental conditions. Our experimental results show that the present DE2000-based metric can be consistent with the human visual system in general application environments.
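
    For readers who want to experiment, the snippet below computes the mean CIEDE2000 difference between a reference and a test image using scikit-image; it reproduces only the underlying ΔE00 computation, not the paper's OSCSP Q score or its calibrated correction factors.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_de2000(reference_rgb, test_rgb):
    """Average CIEDE2000 color difference between two images given as
    float RGB arrays in [0, 1]; lower means closer to the reference."""
    delta_e = deltaE_ciede2000(rgb2lab(reference_rgb), rgb2lab(test_rgb))
    return float(np.mean(delta_e))
```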

  18. Comparing apples and oranges: assessment of the relative video quality in the presence of different types of distortions

    DEFF Research Database (Denmark)

    Reiter, Ulrich; Korhonen, Jari; You, Junyong

    2011-01-01

    Video quality assessment is essential for the performance analysis of visual communication applications. Objective metrics can be used for estimating the relative quality differences, but they typically give reliable results only if the compared videos contain similar types of quality distortion....

  19. Quality Assessment of Imputations in Administrative Data

    OpenAIRE

    Schnetzer, Matthias; Astleithner, Franz; Cetkovic, Predrag; Humer, Stefan; Lenk, Manuela; Moser, Mathias

    2015-01-01

    This article contributes a framework for the quality assessment of imputations within a broader structure to evaluate the quality of register-based data. Four quality-related hyperdimensions examine the data processing from the raw-data level to the final statistics. Our focus lies on the quality assessment of different imputation steps and their influence on overall data quality. We suggest classification rates as a measure of accuracy of imputation and derive several computat...

  20. The palmar metric: A novel radiographic assessment of the equine distal phalanx

    Directory of Open Access Journals (Sweden)

    M.A. Burd

    2014-08-01

    Full Text Available Digital radiographs are often used to subjectively assess the equine digit. Recently, quantitative and objective radiographic measurements have been reported that give new insight into the form and function of the equine digit. We investigated a radio-dense curvilinear profile along the distal phalanx on lateral radiographs that we term the Palmar Curve (PC), which we believe provides a measurement of the concavity of the distal phalanx of the horse. A second quantitative measurement, the Palmar Metric (PM), was defined as the percent area under the PC. We correlated the PM and age from 544 radiographs of the distal phalanx from the left and right front feet of horses of various breeds and known age, and 278 radiographs of the front feet of Quarter Horses. The PM was negatively correlated with age and decreased at a rate of 0.28 % per year for horses of various breeds and 0.33 % per year for Quarter Horses. Therefore, veterinarians should be aware of age-related change in the concave, parietal solar aspect of the distal phalanx in the horse.

  1. Effects of display rendering on HDR image quality assessment

    Science.gov (United States)

    Zerman, Emin; Valenzise, Giuseppe; De Simone, Francesca; Banterle, Francesco; Dufaux, Frederic

    2015-09-01

    High dynamic range (HDR) displays use local backlight modulation to produce both high brightness levels and large contrast ratios. Thus, the display rendering algorithm and its parameters may greatly affect HDR visual experience. In this paper, we analyze the impact of display rendering on perceived quality for a specific display (SIM2 HDR47) and for a popular application scenario, i.e., HDR image compression. To this end, we assess whether significant differences exist between subjective quality of compressed images, when these are displayed using either the built-in rendering of the display, or a rendering algorithm developed by ourselves. As a second contribution of this paper, we investigate whether the possibility to estimate the true pixel-wise luminance emitted by the display, offered by our rendering approach, can improve the performance of HDR objective quality metrics that require true pixel-wise luminance as input.

  2. Assessing the Greenness of Chemical Reactions in the Laboratory Using Updated Holistic Graphic Metrics Based on the Globally Harmonized System of Classification and Labeling of Chemicals

    Science.gov (United States)

    Ribeiro, M. Gabriela T. C.; Yunes, Santiago F.; Machado, Adelio A. S. C.

    2014-01-01

    Two graphic holistic metrics for assessing the greenness of synthesis, the "green star" and the "green circle", have been presented previously. These metrics assess the greenness by the degree of accomplishment of each of the 12 principles of green chemistry that apply to the case under evaluation. The criteria for assessment…

  3. Visual Perception Based Objective Stereo Image Quality Assessment for 3D Video Communication

    Directory of Open Access Journals (Sweden)

    Gangyi Jiang

    2014-04-01

    Full Text Available Stereo image quality assessment is a crucial and challenging issue in 3D video communication. One of the major difficulties is how to weigh the binocular masking effect. In order to establish an assessment model more in line with the human visual system, the Watson model is adopted in this study; it defines the visibility threshold under no distortion as composed of contrast sensitivity, masking effect and error. As a result, we propose an Objective Stereo Image Quality Assessment method (OSIQA), organically combining a new Left-Right view Image Quality Assessment (LR-IQA) metric and a Depth Perception Image Quality Assessment (DP-IQA) metric. The new LR-IQA metric is first given to calculate the changes of perception coefficients in each sub-band utilizing the Watson model and the human visual system after wavelet decomposition of the left and right images in a stereo image pair, respectively. Then, a concept of absolute difference map is defined to describe the abstract differential value between the left and right view images, and the DP-IQA metric is presented to measure structure distortion of the original and distorted abstract difference maps through a luminance function, error sensitivity and a contrast function. Finally, the OSIQA metric is generated by multiplicative fitting of the LR-IQA and DP-IQA metrics based on weighting. Experimental results show that the proposed method is highly correlated with human visual judgments (Mean Opinion Score) and the correlation coefficient and monotony are more than 0.92 under five types of distortions such as Gaussian blur, Gaussian noise, JP2K compression, JPEG compression and H.264 compression.

  4. Towards Perceptually Driven Segmentation Evaluation Metrics

    OpenAIRE

    Drelie Gelasca, E.; Ebrahimi, T.; Farias, M; Carli, M; Mitra, S.

    2004-01-01

    To be reliable, an automatic segmentation evaluation metric has to be validated by subjective tests. In this paper, a formal protocol for subjective tests for segmentation quality assessment is presented. The most common artifacts produced by segmentation algorithms are identified and an extensive analysis of their effects on the perceived quality is performed. A psychophysical experiment was performed to assess the quality of video with segmentation errors. The results show how an objective ...

  5. PRAGMATIC MODEL OF TRANSLATION QUALITY ASSESSMENT

    OpenAIRE

    Vorobjeva, S.; Podrezenko, V.

    2006-01-01

    The study analyses various approaches to translation quality assessment. Functional and pragmatic translation quality evaluation model which is based on target text function being equivalent to source text function has been proposed.

  6. Algorithm for automatic forced spirometry quality assessment: technological developments.

    Science.gov (United States)

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.
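
    The reported 95% specificity and 96% sensitivity reduce to standard contingency counts. A minimal sketch, assuming FS curves have been labeled acceptable (1) or unacceptable (0) by both the algorithm and the expert:

```python
import numpy as np

def sensitivity_specificity(expert_labels, algorithm_labels):
    """Sensitivity and specificity of automatic FS-quality classification
    against expert opinion (1 = acceptable curve, 0 = unacceptable)."""
    y = np.asarray(expert_labels, dtype=bool)
    p = np.asarray(algorithm_labels, dtype=bool)
    tp, tn = np.sum(y & p), np.sum(~y & ~p)
    fn, fp = np.sum(y & ~p), np.sum(~y & p)
    return tp / (tp + fn), tn / (tn + fp)
```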

  7. Healthcare quality maturity assessment model based on quality drivers.

    Science.gov (United States)

    Ramadan, Nadia; Arafeh, Mazen

    2016-04-18

    Purpose - Healthcare providers differ in their readiness and maturity levels regarding quality and quality management systems applications. The purpose of this paper is to serve as a useful quantitative quality maturity-level assessment tool for healthcare organizations. Design/methodology/approach - The model proposes five quality maturity levels (chaotic, primitive, structured, mature and proficient) based on six quality drivers: top management, people, operations, culture, quality focus and accreditation. Findings - Healthcare managers can apply the model to identify the status quo, quality shortcomings and evaluating ongoing progress. Practical implications - The model has been incorporated in an interactive Excel worksheet that visually displays the quality maturity-level risk meter. The tool has been applied successfully to local hospitals. Originality/value - The proposed six quality driver scales appear to measure healthcare provider maturity levels on a single quality meter. PMID:27120510

  8. Comparison of a Graphical and a Textual Design Language Using Software Quality Metrics

    OpenAIRE

    Henry, Sallie M.; Goff, Roger

    1988-01-01

    For many years the software engineering community has been attacking the software reliability problem on two fronts. First via design methodologies, languages and tools as a precheck on quality and second by measuring the quality of produced software as a postcheck. This research attempts to unify the approach to creating reliable software by providing the ability to measure the quality of a design prior to its implementation. A comparison of a graphical and a textual design language is pres...

  9. Selection of metrics based on the seagrass Cymodocea nodosa and development of a biotic index (CYMOX) for assessing ecological status of coastal and transitional waters

    Science.gov (United States)

    Oliva, Silvia; Mascaró, Oriol; Llagostera, Izaskun; Pérez, Marta; Romero, Javier

    2012-12-01

    Bioindicators, based on a large variety of organisms, have been increasingly used in the assessment of the status of aquatic systems. In marine coastal waters, seagrasses have shown a great potential as bioindicator organisms, probably due to both their environmental sensitivity and the large amount of knowledge available. However, and as far as we are aware, only little attention has been paid to euryhaline species suitable for biomonitoring both transitional and marine waters. With the aim to contribute to this expanding field, and provide new and useful tools for managers, we develop here a multi-bioindicator index based on the seagrass Cymodocea nodosa. We first compiled from the literature a suite of 54 candidate metrics, i. e. measurable attribute of the concerned organism or community that adequately reflects properties of the environment, obtained from C. nodosa and its associated ecosystem, putatively responding to environmental deterioration. We then evaluated them empirically, obtaining a complete dataset on these metrics along a gradient of anthropogenic disturbance. Using this dataset, we selected the metrics to construct the index, using, successively: (i) ANOVA, to assess their capacity to discriminate among sites of different environmental conditions; (ii) PCA, to check the existence of a common pattern among the metrics reflecting the environmental gradient; and (iii) feasibility and cost-effectiveness criteria. Finally, 10 metrics (out of the 54 tested) encompassing from the physiological (δ15N, δ34S, % N, % P content of rhizomes), through the individual (shoot size) and the population (root weight ratio), to the community (epiphytes load) organisation levels, and some metallic pollution descriptors (Cd, Cu and Zn content of rhizomes) were retained and integrated into a single index (CYMOX) using the scores of the sites on the first axis of a PCA. These scores were reduced to a 0-1 (Ecological Quality Ratio) scale by referring the values to the

  10. qcML : an exchange format for quality control metrics from mass spectrometry experiments

    NARCIS (Netherlands)

    Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W P; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A; Kelstrup, Christian D; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S; Olsen, Jesper V; Heck, Albert J R; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart

    2014-01-01

    Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exc

  11. Fuzzy Multiple Metrics Link Assessment for Routing in Mobile Ad-Hoc Network

    Science.gov (United States)

    Soo, Ai Luang; Tan, Chong Eng; Tay, Kai Meng

    2011-06-01

    In this work, we investigate the use of a Sugeno fuzzy inference system (FIS) in route selection for mobile ad-hoc networks (MANETs). The Sugeno FIS is introduced into the Ad-Hoc On Demand Multipath Distance Vector (AOMDV) routing protocol, which is derived from its predecessor, Ad-Hoc On Demand Distance Vector (AODV). Instead of the conventional approach of considering only a single metric to choose the best route, our proposed fuzzy decision-making model considers up to three metrics. In the model, the crisp inputs of the three parameters are fed into an FIS and processed in stages, i.e., fuzzification, inference, and defuzzification. Finally, after all the stages, a single-value score is generated from the combined metrics, which is used to measure the credibility of all discovered routes. Results obtained from simulations show a promising improvement as compared to AOMDV and AODV.
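
    A zero-order Sugeno FIS of the kind described can be written compactly: each input metric is fuzzified, rule strengths are combined with min, and the crisp route score is the firing-strength-weighted average of constant rule outputs. The membership shapes, the rule consequents, and the assumption that all three metrics are normalized to [0, 1] are illustrative choices, not the paper's exact design.

```python
from itertools import product

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

POOR = lambda x: tri(x, -0.5, 0.0, 0.6)   # fuzzy set "poor" on [0, 1]
GOOD = lambda x: tri(x, 0.4, 1.0, 1.5)    # fuzzy set "good" on [0, 1]

def route_score(m1, m2, m3):
    """Zero-order Sugeno FIS over three normalized route metrics: rule
    consequent = fraction of 'good' antecedents, AND = min, output =
    firing-strength-weighted average of the consequents."""
    num = den = 0.0
    for sets in product((POOR, GOOD), repeat=3):
        strength = min(sets[0](m1), sets[1](m2), sets[2](m3))
        consequent = sum(s is GOOD for s in sets) / 3.0
        num += strength * consequent
        den += strength
    return num / den if den else 0.0

print(route_score(0.9, 0.7, 0.8))  # single-value route credibility score
```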

  12. Metrics to assess the mitigation of global warming by carbon capture and storage in the ocean and in geological reservoirs

    OpenAIRE

    Haugan, Peter Mosby; Joos, Fortunat

    2004-01-01

    Different metrics to assess mitigation of global warming by carbon capture and storage are discussed. The climatic impact of capturing 30% of the anthropogenic carbon emission and its storage in the ocean or in geological reservoir are evaluated for different stabilization scenarios using a reduced-form carbon cycle-climate model. The accumulated Global Warming Avoided (GWA) remains, after a ramp-up during the first ~50 years, in the range of 15 to 30% over the next millennium for de...

  13. Quality assurance in performance assessments

    Energy Technology Data Exchange (ETDEWEB)

    Maul, P.R.; Watkins, B.M.; Salter, P.; McLeod, R. [QuantiSci Ltd, Henley-on-Thames (United Kingdom)]

    1999-01-01

    Following publication of the Site-94 report, SKI wishes to review how Quality Assurance (QA) issues could be treated in future work both in undertaking their own Performance Assessment (PA) calculations and in scrutinising documents supplied by SKB (on planning a repository for spent fuels in Sweden). The aim of this report is to identify the key QA issues and to outline the nature and content of a QA plan which would be suitable for SKI, bearing in mind the requirements and recommendations of relevant standards. Emphasis is on issues which are specific to Performance Assessments for deep repositories for radioactive wastes, but consideration is also given to issues which need to be addressed in all large projects. Given the long time over which the performance of a deep repository system must be evaluated, the demonstration that a repository is likely to perform satisfactorily relies on the use of computer-generated model predictions of system performance. This raises particular QA issues which are generally not encountered in other technical areas (for instance, power station operations). The traceability of the arguments used is a key QA issue, as are conceptual model uncertainty, and code verification and validation; these were all included in the consideration of overall uncertainties in the Site-94 project. Additionally, issues which are particularly relevant to SKI include: How QA in a PA fits in with the general QA procedures of the organisation undertaking the work. The relationship between QA as applied by the regulator and the implementor of a repository development programme. Section 2 introduces the discussion of these issues by reviewing the standards and guidance which are available from national and international organisations. This is followed in Section 3 by a review of specific issues which arise from the Site-94 exercise. An outline procedure for managing QA issues in SKI is put forward as a basis for discussion in Section 4. It is hoped that

  14. Adding A Spending Metric To Medicare's Value-Based Purchasing Program Rewarded Low-Quality Hospitals.

    Science.gov (United States)

    Das, Anup; Norton, Edward C; Miller, David C; Ryan, Andrew M; Birkmeyer, John D; Chen, Lena M

    2016-05-01

    In fiscal year 2015 the Centers for Medicare and Medicaid Services expanded its Hospital Value-Based Purchasing program by rewarding or penalizing hospitals for their performance on both spending and quality. This represented a sharp departure from the program's original efforts to incentivize hospitals for quality alone. How this change redistributed hospital bonuses and penalties was unknown. Using data from 2,679 US hospitals that participated in the program in fiscal years 2014 and 2015, we found that the new emphasis on spending rewarded not only low-spending hospitals but some low-quality hospitals as well. Thirty-eight percent of low-spending hospitals received bonuses in fiscal year 2014, compared to 100 percent in fiscal year 2015. However, low-quality hospitals also began to receive bonuses (0 percent in fiscal year 2014 compared to 17 percent in 2015). All high-quality hospitals received bonuses in both years. The Centers for Medicare and Medicaid Services should consider incorporating a minimum quality threshold into the Hospital Value-Based Purchasing program to avoid rewarding low-quality, low-spending hospitals. PMID:27140997

  16. How to assess the quality of your analytical method?

    Science.gov (United States)

    Topic, Elizabeta; Nikolac, Nora; Panteghini, Mauro; Theodorsson, Elvar; Salvagno, Gian Luca; Miler, Marijana; Simundic, Ana-Maria; Infusino, Ilenia; Nordin, Gunnar; Westgard, Sten

    2015-10-01

    Laboratory medicine is amongst the fastest growing fields in medicine, crucial in diagnosis, support of prevention and in the monitoring of disease for individual patients and for the evaluation of treatment for populations of patients. Therefore, high quality and safety in laboratory testing has a prominent role in high-quality healthcare. Applied knowledge and competencies of professionals in laboratory medicine increase the clinical value of laboratory results by decreasing laboratory errors, increasing appropriate utilization of tests, and increasing cost effectiveness. This collective paper provides insights into how to validate laboratory assays and assess the quality of methods. It is a synopsis of the lectures at the 15th European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Continuing Postgraduate Course in Clinical Chemistry and Laboratory Medicine entitled "How to assess the quality of your method?" (Zagreb, Croatia, 24-25 October 2015). The leading topics discussed include who should perform validation/verification of methods, and what and when to validate; verification of imprecision and bias; verification of reference intervals; verification of qualitative test procedures; verification of blood collection systems; comparability of results among methods and analytical systems; limit of detection, limit of quantification and limit of decision; how to assess the measurement uncertainty; the optimal use of Internal Quality Control and External Quality Assessment data; Six Sigma metrics; performance specifications; as well as biological variation. This article, which continues the annual tradition of collective papers from the EFLM continuing postgraduate courses in clinical chemistry and laboratory medicine, aims to provide further contributions by discussing the quality of laboratory methods and measurements and, at the same time, to offer continuing professional development to the attendees.

  17. Elliptical local vessel density: a fast and robust quality metric for retinal images

    OpenAIRE

    Giancardo, L.; Abramoff, M.D.; Chaum, E.; Karnowski, T.P.; Meriaudeau, F.; Tobin, K.W.

    2008-01-01

    A great effort of the research community is geared towards the creation of an automatic screening system able to promptly detect diabetic retinopathy with the use of fundus cameras. In addition, there are some documented approaches for automatically judging the image quality. We propose a new set of features independent of field of view or resolution to describe the morphology of the patient's vessels. Our initial results suggest that these features can be used to estimate the image quality i...

  18. Cost of Quality (CoQ) metrics for telescope operations and project management

    Science.gov (United States)

    Radziwill, Nicole M.

    2006-06-01

    This study describes the goals, foundational work, and early returns associated with establishing a pilot quality cost program at the Robert C. Byrd Green Bank Telescope (GBT). Quality costs provide a means to communicate the results of process improvement efforts in the universal language of project management: money. This scheme stratifies prevention, appraisal, internal failure and external failure costs, and seeks to quantify and compare the up-front investment in planning and risk management versus the cost of rework. An activity-based Cost of Quality (CoQ) model was blended with the Cost of Software Quality (CoSQ) model that has been successfully deployed at Raytheon Electronic Systems (RES) for this pilot program, analyzing the efforts of the GBT Software Development Division. Using this model, questions that can now be answered include: What is an appropriate length for our development cycle? Are some observing modes more reliable than others? Are we testing too much, or not enough? How good is our software quality, not in terms of defects reported and fixed, but in terms of its impact on the user? The ultimate goal is to provide a higher quality of service to customers of the telescope.

  19. Compression-based classification of biological sequences and structures via the Universal Similarity Metric: experimental assessment

    Directory of Open Access Journals (Sweden)

    Manzini Giovanni

    2007-07-01

    Full Text Available Abstract Background Similarity of sequences is a key mathematical notion for Classification and Phylogenetic studies in Biology. It is currently primarily handled using alignments. However, the alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined only to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov Complexity and universality is its most novel striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness are tested on various data sets yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM and mostly at a qualitative level, no comparison among UCD, NCD and CD is available and no comparison of USM with existing methods, both based on alignments and not, seems to be available. Results We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to Molecular Biology. This offers the first systematic and quantitative experimental assessment of this methodology, that naturally complements the many theoretical and the preliminary experimental results available. Moreover, we compare the USM methodology both with methods based on alignments and not. We may group our experiments into two sets. The first one, performed via ROC
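
    Of the three approximations, NCD is the easiest to demonstrate. A minimal sketch using zlib as the compressor (the paper evaluates 25 compressors; zlib is just one convenient choice, and the toy strings are placeholders):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Dissimilarity: smaller values suggest the
    two sequences share more compressible structure."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

print(ncd(b"ACGTACGTACGT", b"ACGTACGAACGT"))  # two toy DNA strings
```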

  20. Use of Frequency Response Metrics to Assess the Planning and Operating Requirements for Reliable Integration of Variable Renewable Generation

    Energy Technology Data Exchange (ETDEWEB)

    Eto, Joseph H.; Undrill, John; Mackin, Peter; Daschmans, Ron; Williams, Ben; Haney, Brian; Hunt, Randall; Ellis, Jeff; Illian, Howard; Martinez, Carlos; O' Malley, Mark; Coughlin, Katie; LaCommare, Kristina Hamachi

    2010-12-20

    An interconnected electric power system is a complex system that must be operated within a safe frequency range in order to reliably maintain the instantaneous balance between generation and load. This is accomplished by ensuring that adequate resources are available to respond to expected and unexpected imbalances and restoring frequency to its scheduled value in order to ensure uninterrupted electric service to customers. Electrical systems must be flexible enough to reliably operate under a variety of "change" scenarios. System planners and operators must understand how other parts of the system change in response to the initial change, and need tools to manage such changes to ensure reliable operation within the scheduled frequency range. This report presents a systematic approach to identifying metrics that are useful for operating and planning a reliable system with increased amounts of variable renewable generation, which builds on existing industry practices for frequency control after unexpected loss of a large amount of generation. The report introduces a set of metrics or tools for measuring the adequacy of frequency response within an interconnection. Based on the concept of the frequency nadir, these metrics take advantage of new information gathering and processing capabilities that system operators are developing for wide-area situational awareness. Primary frequency response is the leading metric that will be used by this report to assess the adequacy of primary frequency control reserves necessary to ensure reliable operation. It measures what is needed to arrest frequency decline (i.e., to establish the frequency nadir) at a frequency higher than the highest set point for under-frequency load shedding within an interconnection. These metrics can be used to guide the reliable operation of an interconnection under changing circumstances.
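
    In a highly simplified form, the nadir-based metrics described above can be computed directly from a post-event frequency trace. The trace, the scheduled frequency, the UFLS set point, and the MW-per-0.1-Hz convention below are illustrative assumptions, not the report's full definitions.

```python
import numpy as np

def frequency_response_metrics(t, f, lost_mw, f_sched=60.0, ufls_hz=59.5):
    """Simplified metrics for an interconnection frequency trace f (Hz)
    sampled at times t (s) after losing lost_mw of generation."""
    nadir = float(np.min(f))            # lowest frequency reached
    t_nadir = float(t[np.argmin(f)])    # time at which the nadir occurs
    margin = nadir - ufls_hz            # must stay > 0 to avoid load shedding
    f_settled = float(f[-1])            # post-arrest settling frequency
    # Primary frequency response expressed in MW per 0.1 Hz of decline.
    pfr = lost_mw / ((f_sched - f_settled) / 0.1)
    return nadir, t_nadir, margin, pfr
```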

  1. Bringing Public Engagement into an Academic Plan and Its Assessment Metrics

    Science.gov (United States)

    Britner, Preston A.

    2012-01-01

    This article describes how public engagement was incorporated into a research university's current Academic Plan, how the public engagement metrics were selected and adopted, and how those processes led to subsequent strategic planning. Some recognition of the importance of civic engagement has followed, although there are many areas in which…

  2. EMF exposure assessment in the Finnish garment industry: evaluation of proposed EMF exposure metrics.

    Science.gov (United States)

    Hansen, N H; Sobel, E; Davanipour, Z; Gillette, L M; Niiranen, J; Wilson, B W

    2000-01-01

    Recently published studies indicate that having worked in occupations that involve moderate to high electromagnetic field (EMF) exposure is a risk factor for neurodegenerative diseases, including Alzheimer's disease. In these studies, the occupational groups most over-represented for EMF exposure comprised seamstresses, dressmakers, and tailors. Future epidemiologic studies designed to evaluate the possibility of a causal relationship between exposure to EMF and a neurodegenerative disease endpoint such as incidence of Alzheimer's disease will benefit from the measurement of electromagnetic field metrics with potential biological relevance. Data collection methodology in such studies would be highly dependent upon how the metrics are defined. In this research the authors developed and demonstrated (1) protocols for collecting EMF exposure data suitable for estimating a variety of exposure metrics that may have biological relevance, and (2) analytical methods for calculation of these metrics. The authors show how exposure might be estimated under each of the three prominent EMF health-effects mechanism theories and evaluate the assertion that relative exposure ranking is dependent on which mechanism is assumed. The authors also performed AC RMS magnetic flux density measurements, confirming previously reported findings. The results indicate that seamstresses, as an occupational group, should be considered for study of the possible health effects of long-term EMF exposure.

  3. How the choice of flood damage metrics influences urban flood risk assessment

    NARCIS (Netherlands)

    Ten Veldhuis, J.A.E.

    2011-01-01

    This study presents a first attempt to quantify tangible and intangible flood damage according to two different damage metrics: monetary values and number of people affected by flooding. Tangible damage includes material damage to buildings and infrastructure; intangible damage includes damages that

  4. Using Landscape Metrics Analysis and Analytic Hierarchy Process to Assess Water Harvesting Potential Sites in Jordan

    Directory of Open Access Journals (Sweden)

    Abeer Albalawneh

    2015-09-01

    Full Text Available Jordan is characterized as a “water scarce” country. Therefore, conserving ecosystem services such as water regulation and soil retention is challenging. In Jordan, rainwater harvesting has been adapted to meet those challenges. However, the spatial composition and configuration features of a target landscape are rarely considered when selecting a rainwater-harvesting site. This study aimed to introduce landscape spatial features into the schemes for selecting a proper water-harvesting site. Landscape metrics analysis was used to quantify 10 metrics for three potential landscapes (i.e., Watershed 104 (WS 104), Watershed 59 (WS 59), and Watershed 108 (WS 108)) located in the Jordanian Badia region. Results of the metrics analysis showed that the three non-vegetative land cover types in the three landscapes were highly suitable for serving as rainwater harvesting sites. Furthermore, the Analytic Hierarchy Process (AHP) was used to prioritize the fitness of the three target sites by comparing their landscape metrics. Results of AHP indicate that the non-vegetative land cover in the WS 104 landscape was the most suitable site for rainwater harvesting intervention, based on its dominance, connectivity, shape, and low degree of fragmentation. Our study advances the water harvesting network design by considering its landscape spatial pattern.
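
    The AHP step above rests on a standard computation: priority weights come from the principal eigenvector of a reciprocal pairwise-comparison matrix, checked with a consistency ratio. A generic sketch (the comparison values are placeholders, not those of the study):

```python
import numpy as np

# Saaty's random consistency indices for matrix sizes 1..9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """Priority weights and consistency ratio from a reciprocal
    pairwise-comparison matrix on Saaty's 1-9 scale."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # normalized priority weights
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)      # consistency index
    cr = ci / RI[n] if RI[n] else 0.0         # CR < 0.1 is acceptable
    return w, cr

# Placeholder comparison of the three watersheds on one criterion.
weights, cr = ahp_weights([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
```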

  5. Elliptical Local Vessel Density: a Fast and Robust Quality Metric for Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL]; Chaum, Edward [ORNL]; Karnowski, Thomas Paul [ORNL]; Meriaudeau, Fabrice [ORNL]; Tobin Jr, Kenneth William [ORNL]; Abramoff, M.D. [University of Iowa]

    2008-01-01

    A great effort of the research community is geared towards the creation of an automatic screening system able to promptly detect diabetic retinopathy with the use of fundus cameras. In addition, there are some documented approaches to the problem of automatically judging the image quality. We propose a new set of features, independent of field of view or resolution, to describe the morphology of the patient's vessels. Our initial results suggest that they can be used to estimate the image quality in a time one order of magnitude shorter than previous techniques.

  6. Parasitology: United Kingdom National Quality Assessment Scheme.

    OpenAIRE

    Hawthorne, M; Chiodini, P L; Snell, J J; Moody, A H; Ramsay, A

    1992-01-01

    AIMS: To assess the results from parasitology laboratories taking part in a quality assessment scheme between 1986 and 1991; and to compare performance with repeat specimens. METHODS: Quality assessment of blood parasitology, including tissue parasites (n = 444; 358 UK, 86 overseas), and faecal parasitology, including extra-intestinal parasites (n = 205; 141 UK, 64 overseas), was performed. RESULTS: Overall, the standard of performance was poor. A questionnaire distributed to participants sho...

  7. A metrics-based comparison of secondary user quality between iOS and Android

    NARCIS (Netherlands)

    Amman, T.

    2014-01-01

    Native mobile applications gain popularity in the commercial market. There is no other economic sector that grows as fast. A lot of economic research is done in this sector, but there is very little research that deals with qualities for mobile application developers. This paper compares the q

  8. [Establishing IAQ Metrics and Baseline Measures.] "Indoor Air Quality Tools for Schools" Update #20

    Science.gov (United States)

    US Environmental Protection Agency, 2009

    2009-01-01

    This issue of "Indoor Air Quality Tools for Schools" Update ("IAQ TfS" Update) contains the following items: (1) News and Events; (2) IAQ Profile: Establishing Your Baseline for Long-Term Success (Feature Article); (3) Insight into Excellence: Belleville Township High School District #201, 2009 Leadership Award Winner; and (4) Have Your Questions…

  9. Welfare Quality assessment protocol for laying hens = Welfare Quality assessment protocol voor leghennen

    OpenAIRE

    Niekerk, van, M.; H. Gunnink; Reenen, van, A Alexander

    2012-01-01

    Results of a study on the Welfare Quality® assessment protocol for laying hens. It reports the development of the integration of welfare assessment into scores per criterion, as well as a simplification of the Welfare Quality® assessment protocol. Results are given from the assessment of 122 farms.

  10. Metric for the measurement of the quality of complex beams: a theoretical study.

    Science.gov (United States)

    Kaim, Sergiy; Lumeau, Julien; Smirnov, Vadim; Zeldovich, Boris; Glebov, Leonid

    2015-04-01

    We present a theoretical study of various definitions of laser beam width in a given cross section. Quality of the beam is characterized by dimensionless beam propagation products (BPPs) Δx·Δθ(x)/λ, which are different for the 21 definitions presented, but are close to 1. Six particular beams are studied in detail. In the process, we had to review the properties of various modifications of the Fourier transform and the relationships between them: the physical Fourier transform (PFT), the mathematical Fourier transform (MFT), and the discrete Fourier transform (DFT). We found an axially symmetric self-MFT function, which may be useful for descriptions of diffraction-quality beams. In the appendices, we illustrate the thesis "the Fourier transform lives on the singularities of the original." PMID:26366763

  11. Landscape Classifications for Landscape Metrics-based Assessment of Urban Heat Island: A Comparative Study

    International Nuclear Information System (INIS)

    In recent years, some studies have been carried out on the landscape analysis of urban thermal patterns. With the prevalence of thermal landscape analysis, a key problem has come forth: how to classify the thermal landscape into thermal patches. Current research uses different methods of thermal landscape classification, such as the standard deviation method (SD) and the R method. To find out the differences, a comparative study was carried out in Xiamen using 20 years of winter time-series Landsat images. After the retrieval of land surface temperature (LST), the thermal landscape was classified using the two methods separately. Then landscape metrics, 6 at class level and 14 at landscape level, were calculated and analyzed using Fragstats 3.3. We found that: (1) at the class level, all the metrics derived with the SD method were evened out and did not show an obvious trend along with the process of urbanization, while those from the R method did. (2) At the landscape level, 6 of the 14 metrics retained similar trends, 5 differed at local turning points of the curve, and 3 of them differed completely in the shape of their curves. (3) When examined with visual interpretation, the SD method tended to exaggerate urban heat island effects more than the R method did.

  12. Algal Attributes: An Autecological Classification of Algal Taxa Collected by the National Water-Quality Assessment Program

    Science.gov (United States)

    Porter, Stephen D.

    2008-01-01

    Algae are excellent indicators of water-quality conditions, notably nutrient and organic enrichment, and also are indicators of major ion, dissolved oxygen, and pH concentrations and stream microhabitat conditions. The autecology, or physiological optima and tolerance, of algal species for various water-quality contaminants and conditions is relatively well understood for certain groups of freshwater algae, notably diatoms. However, applications of autecological information for water-quality assessments have been limited because of challenges associated with compiling autecological literature from disparate sources, tracking name changes for a large number of algal species, and creating an autecological data base from which algal-indicator metrics can be calculated. A comprehensive summary of algal autecological attributes for North American streams and rivers does not exist. This report describes a large, digital data file containing 28,182 records for 5,939 algal taxa, generally species or variety, collected by the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program. The data file includes 37 algal attributes classified by over 100 algal-indicator codes or metrics that can be calculated easily with readily available software. Algal attributes include qualitative classifications based on European and North American autecological literature, and semi-quantitative, weighted-average regression approaches for estimating optima using regional and national NAWQA data. Applications of algal metrics in water-quality assessments are discussed and national quartile distributions of metric scores are shown for selected indicator metrics.
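
    The weighted-average approach mentioned above has a compact form: a taxon's optimum is the abundance-weighted mean of an environmental variable across sites, and its tolerance is the corresponding weighted standard deviation. A generic sketch (variable names are illustrative):

```python
import numpy as np

def weighted_average_optimum(abundance, env_values):
    """Weighted-average estimate of a taxon's environmental optimum and
    tolerance, e.g. for total phosphorus across NAWQA-style sites."""
    a = np.asarray(abundance, dtype=float)
    x = np.asarray(env_values, dtype=float)
    optimum = np.sum(a * x) / np.sum(a)
    tolerance = np.sqrt(np.sum(a * (x - optimum) ** 2) / np.sum(a))
    return optimum, tolerance
```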

  13. Metrics, Dose, and Dose Concept: The Need for a Proper Dose Concept in the Risk Assessment of Nanoparticles

    Directory of Open Access Journals (Sweden)

    Myrtill Simkó

    2014-04-01

    Full Text Available In order to calculate the dose for nanoparticles (NP), (i) relevant information about the dose metrics and (ii) a proper dose concept are crucial. Since the appropriate metrics for NP toxicity are yet to be elaborated, a general dose calculation model for nanomaterials is not available. Here we propose how to develop a dose assessment model for NP in analogy to the radiation protection dose calculation, introducing the so-called “deposited dose” and “equivalent dose”. As a dose metric we propose the total deposited NP surface area (SA), which has been shown frequently to determine toxicological responses, e.g. of lung tissue. The deposited NP dose is proportional to the total surface area of deposited NP per tissue mass, and takes into account primary and agglomerated NP. By using several weighting factors the equivalent dose additionally takes into account various physico-chemical properties of the NP which influence the biological responses. These weighting factors consider the specific surface area, the surface textures, the zeta-potential as a measure of surface charge, the particle morphology such as the shape and the length-to-diameter ratio (aspect ratio), the band gap energy levels of metal and metal oxide NP, and the particle dissolution rate. Furthermore, we discuss how these weighting factors influence the equivalent dose of the deposited NP.
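
    In analogy to radiation protection, one plausible reading of the proposal is an equivalent dose obtained by scaling the deposited surface-area dose with the physico-chemical weighting factors; whether the factors combine multiplicatively is our assumption here, and all values below are placeholders.

```python
import math

def equivalent_np_dose(deposited_sa_per_mass, weighting_factors):
    """Deposited dose = total deposited NP surface area per tissue mass;
    equivalent dose = deposited dose scaled by weighting factors
    (multiplicative combination assumed for illustration)."""
    w_total = math.prod(weighting_factors.values())
    return deposited_sa_per_mass * w_total

dose_eq = equivalent_np_dose(
    deposited_sa_per_mass=2.5e-4,   # m^2 per g tissue (hypothetical)
    weighting_factors={"surface_texture": 1.2, "zeta_potential": 1.0,
                       "aspect_ratio": 1.5, "band_gap": 1.0,
                       "dissolution_rate": 0.8},  # illustrative values
)
```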

  14. Privacy Metrics and Boundaries

    NARCIS (Netherlands)

    L-F. Pau (Louis-François)

    2005-01-01

    This paper aims at defining a set of privacy metrics (quantitative and qualitative) in the case of the relation between a privacy protector, and an information gatherer. The aims with such metrics are: - to allow assessing and comparing different user scenarios and their differences; for ex

  15. Packaged water quality and its assessment

    OpenAIRE

    Hromádko, Tomáš

    2011-01-01

    The thesis deals with the quality of bottled water and its evaluation criteria. The first chapter of the literature review presents the types of bottled water, including the requirements placed on them, and other variants of drinking water. The next section describes the assessment of water by its mineral and microbial composition and its individual components. The next chapter deals with non-traditional criteria for the assessment of water quality, which are described in detail with their connection...

  16. MICROWAVE REMOTE SENSING IN SOIL QUALITY ASSESSMENT

    OpenAIRE

    Saha, S K

    2012-01-01

    Information on spatial and temporal variations of soil quality (soil properties) is required for various purposes of sustainable agriculture development and management. Traditionally, soil quality characterization is done by in situ point soil sampling and subsequent laboratory analysis. Such methodology has limitations for assessing the spatial variability of soil quality. Various researchers in the recent past have shown the potential utility of hyperspectral remote sensing techniques for spatial est...

  17. Quality Assessment in the Blog Space

    Science.gov (United States)

    Schaal, Markus; Fidan, Guven; Muller, Roland M.; Dagli, Orhan

    2010-01-01

    Purpose: The purpose of this paper is the presentation of a new method for blog quality assessment. The method uses the temporal sequence of link creation events between blogs as an implicit source for the collective tacit knowledge of blog authors about blog quality. Design/methodology/approach: The blog data are processed by the novel method for…

  18. Quality Assessment for a University Curriculum.

    Science.gov (United States)

    Hjalmered, Jan-Olof; Lumsden, Kenth

    1994-01-01

    In 1992, a national quality assessment report covering courses in all the Swedish schools of mechanical engineering was presented. This article comments on the general ideas and specific proposals presented, and offers an analysis of the consequences. Presents overall considerations regarding quality issues, the philosophy behind the new…

  19. Data Matching, Integration, and Interoperability for a Metric Assessment of Monographs

    DEFF Research Database (Denmark)

    Zuccala, Alesia Ann; Cornacchia, Roberto

    2016-01-01

    This paper details a unique data experiment carried out at the University of Amsterdam, Center for Digital Humanities. Data pertaining to monographs were collected from three autonomous resources, the Scopus Journal Index, WorldCat.org and Goodreads, and linked according to unique identifiers...... in a new Microsoft SQL database. The purpose of the experiment was to investigate co-varied metrics for a list of book titles based on their citation impact (from Scopus), presence in international libraries (WorldCat.org) and visibility as publically reviewed items (Goodreads). The results of our data...

  20. SIMPLE QUALITY ASSESSMENT FOR BINARY IMAGES

    Institute of Scientific and Technical Information of China (English)

    Zhang Chun'e; Qiu Zhengding

    2007-01-01

    Usually image assessment methods can be classified into two categories: subjective assessments and objective ones. The latter are judged by the correlation coefficient with the subjective quality measurement MOS (Mean Opinion Score). This paper presents an objective quality assessment algorithm designed specifically for binary images. In the algorithm, noise energy is measured by the Euclidean distance between noises and signals, and the structural effects caused by noise are described by the change in Euler number. The assessment of image quality is calculated quantitatively in terms of PSNR (Peak Signal to Noise Ratio). Our experiments show that the results of the algorithm are highly correlated with subjective MOS, and that the algorithm is simpler and more computationally efficient than traditional objective assessment methods.
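
    A minimal sketch of the two ingredients named above: PSNR with a peak signal of 1 for binary images, and the change in Euler number as a simple structural term (the paper's exact combination of the two is not reproduced; euler_number comes from scikit-image):

```python
import numpy as np
from skimage.measure import euler_number

def binary_image_quality(reference, distorted):
    """PSNR (peak = 1 for 0/1 images) plus the Euler-number change used
    as a simple descriptor of structural damage caused by noise."""
    ref = np.asarray(reference, dtype=bool)
    dist = np.asarray(distorted, dtype=bool)
    mse = np.mean((ref.astype(float) - dist.astype(float)) ** 2)
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(1.0 / mse)
    structure_change = abs(euler_number(ref) - euler_number(dist))
    return psnr, structure_change
```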

  1. Quality Assessment in the Primary care

    Directory of Open Access Journals (Sweden)

    Muharrem Ak

    2013-04-01

    Full Text Available Dear Editor; I have read the article titled “Implementation of Rogi Kalyan Samiti (RKS) at Primary Health Centre Durvesh” with great interest. Shrivastava et al. concluded that the assessment mechanism for the achievement of objectives for the suggested RKS model was not successful (1). Hereby I would like to emphasize the importance of quality assessment (QA), especially in the era of newly established primary care implementations in our country. Promotion of quality has been a fundamental part of primary care health services. Nevertheless, variations in quality of care exist even in the developed countries. Accomplishment of quality in primary care faces barriers such as administrative and directorial factors, absence of evidence-based medicine practice, and lack of continuous medical education. Quality of health care is no doubt a multifaceted model that covers all components of health structures and processes of care. Quality in the primary care setup includes the patient-physician relationship, immunization, maternal, adolescent, adult and geriatric health care, referral, non-communicable disease management and prescribing (2). Most countries are only recently beginning the implementation of quality assessments in all walks of healthcare. Organizations like the European Society for Quality and Safety in Family Practice (EQuiP) endeavor to accomplish quality by collaboration. There are reported developments and experiments related to the methodology, processes and outcomes of quality assessments of health care. Quality assessments will not only contribute to the accomplishment of the program/project but also detect the areas where obstacles exist. In order to speed up the adoption of QA and to circumvent the occurrence of mistakes, health policy makers and family physicians from different parts of the world should share their experiences. Consensus on quality in preventive medicine implementations can help to yield

  2. No-reference quality assessment based on visual perception

    Science.gov (United States)

    Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao

    2014-11-01

    The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and a NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, which reflect two main features of the HVS. A typical HVS captures scenes by sparsity coding and uses experienced knowledge to apperceive objects. In this paper, we propose a novel IQA approach based on visual perception. Firstly, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is accomplished with the model; then, the mapping correlation between sparse codes and subjective quality scores is trained with the regression technique of the least squares support vector machine (LS-SVM), which yields a regressor that can predict image quality; finally, the visual quality metric of an image is predicted with the trained regressor. We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database; the specific contents of the types of distortions present in the database are: 227 images of JPEG2000, 233
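
    The learning step can be sketched with kernel ridge regression, a readily available close relative of the LS-SVM regressor named in the abstract (same squared loss, here without the bias term); the feature dimension, hyperparameters, and random data below are placeholders.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# X: one sparse-code feature vector per training image; y: subjective
# quality scores (e.g., DMOS). Random placeholders stand in for real data.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))
y_train = rng.uniform(0, 100, size=200)

model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.05)
model.fit(X_train, y_train)

# Predict the quality of an unseen image from its sparse-code features.
predicted_score = model.predict(rng.normal(size=(1, 64)))
```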

  3. Structural similarity analysis for brain MR image quality assessment

    Science.gov (United States)

    Punga, Mirela Visan; Moldovanu, Simona; Moraru, Luminita

    2014-11-01

    Brain MR images are affected and distorted by various artifacts such as noise, blur, blotching, downsampling or compression, as well as by inhomogeneity. Usually, the performance of a pre-processing operation is quantified by using quality metrics such as the mean squared error and its related metrics: peak signal to noise ratio, root mean squared error and signal to noise ratio. The main drawback of these metrics is that they fail to take the structural fidelity of the image into account. For this reason, we investigated the structural changes related to luminance and contrast variation (as non-structural distortions) and to the denoising process (as structural distortion) through an alternative metric based on structural changes, in order to obtain the best image quality.
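
    SSIM is the canonical example of such a structure-aware alternative, and scikit-image provides it directly. A minimal sketch for two grayscale slices (array names are illustrative):

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_score(reference_slice, test_slice):
    """Mean SSIM and the per-pixel SSIM map for two float grayscale
    MR slices; unlike MSE/PSNR, SSIM tracks structural fidelity."""
    data_range = float(reference_slice.max() - reference_slice.min())
    score, ssim_map = structural_similarity(
        reference_slice, test_slice, data_range=data_range, full=True)
    return score, ssim_map
```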

  4. Measuring Research Quality Using the Journal Impact Factor, Citations and "Ranked Journals": Blunt Instruments or Inspired Metrics?

    Science.gov (United States)

    Jarwal, Som D.; Brion, Andrew M.; King, Maxwell L.

    2009-01-01

    This paper examines whether three bibliometric indicators--the journal impact factor, citations per paper and the Excellence in Research for Australia (ERA) initiative's list of "ranked journals"--can predict the quality of individual research articles as assessed by international experts, both overall and within broad disciplinary groupings. The…

  5. Comment: Assessment of scientific quality is complicated

    NARCIS (Netherlands)

    T. Opthof; A.A.M. Wilde

    2009-01-01

    In their letter 'Assessing scientific quality in a multidisciplinary academic medical centre', Van Kammen, Van Lier and Gunning-Schepers respond to our paper on the assessment of the H-index amongst 28 professors in clinical cardiology appointed at the eight university medical centres in the Netherl

  6. Retinal image quality assessment through a visual similarity index

    OpenAIRE

    Pérez Rodríguez, Jorge; Espinosa Tomás, Julián; Vázquez Ferri, Carmen; Mas Candela, David

    2013-01-01

    Retinal image quality is commonly analyzed through parameters inherited from instrumental optics. These parameters are defined for ‘good optics’ so they are hard to translate into visual quality metrics. Instead of using point or artificial functions, we propose a quality index that takes into account properties of natural images. These images usually show strong local correlations that help to interpret the image. Our aim is to derive an objective index that quantifies the quality of vision ...

  7. ANSS Backbone Station Quality Assessment

    Science.gov (United States)

    Leeds, A.; McNamara, D.; Benz, H.; Gee, L.

    2006-12-01

    In this study we assess the ambient noise levels of the broadband seismic stations within the United States Geological Survey's (USGS) Advanced National Seismic System (ANSS) backbone network. The backbone consists of stations operated by the USGS as well as several regional network stations operated by universities. We also assess the improved detection capability of the network due to the installation of 13 additional backbone stations and the upgrade of 26 existing stations funded by the Earthscope initiative. This assessment makes use of probability density functions (PDF) of power spectral densities (PSD) (after McNamara and Buland, 2004) computed by a continuous noise monitoring system developed by the USGS-ANSS and the Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC). We compute the median and mode of the PDF distribution and rank the stations relative to the Peterson Low Noise Model (LNM) (Peterson, 1993) for 11 different period bands. The power of the method lies in the fact that there is no need to screen the data for system transients, earthquakes or general data artifacts since they map into a background probability level. Previous studies have shown that most regional stations, instrumented with short-period or extended short-period instruments, have a higher noise level in all period bands, while stations in the US network have lower noise levels at short periods (0.0625-8.0 seconds), high frequencies (8.0-0.125 Hz). The overall network is evaluated with respect to accomplishing the design goals set for the USArray/ANSS backbone project, which were intended to increase broadband performance for the national monitoring network.

  8. Association of Landscape Metrics to Surface Water Biology in the Savannah River Basin

    OpenAIRE

    Nash, Maliha S.; Deborah J. Chaloud; Susan E. Franson

    2005-01-01

    Surface water quality for the Savannah River basin was assessed using water biology and landscape metrics. Two multivariate analyses, partial least square and canonical correlation, were used to describe how the structural variation in landscape metrics may affect surface water biology and to define the key landscape variable(s) that contribute the most to variation in surface water quality. The results showed that the key landscape metrics in this study area were: percent...

  9. Health outcomes in diabetics measured with Minnesota Community Measurement quality metrics

    Directory of Open Access Journals (Sweden)

    Takahashi PY

    2014-12-01

    Full Text Available Paul Y Takahashi,1 Jennifer L St Sauver,2 Lila J Finney Rutten,2 Robert M Jacobson,3 Debra J Jacobson,2 Michaela E McGree,2 Jon O Ebbert1 1Department of Internal Medicine, Division of Primary Care Internal Medicine, 2Department of Health Sciences Research, Mayo Clinic Robert D and Patricia E Kern Center for the Science of Health Care Delivery, 3Department of Pediatric and Adolescent Medicine, Division of Community Pediatrics, Mayo Clinic, Rochester, MN, USA Objective: Our objective was to understand the relationship between optimal diabetes control, as defined by Minnesota Community Measurement (MCM), and adverse health outcomes including emergency department (ED) visits, hospitalizations, 30-day rehospitalization, intensive care unit (ICU) stay, and mortality. Patients and methods: In 2009, we conducted a retrospective cohort study of empaneled Employee and Community Health patients with diabetes mellitus. We followed patients from 1 September 2009 until 30 June 2011 for hospitalization and until 5 January 2014 for mortality. Optimal control of diabetes mellitus was defined as achieving the following three measures: low-density lipoprotein (LDL) cholesterol <100 mg/dL, blood pressure <140/90 mmHg, and hemoglobin A1c <8%. Using the electronic medical record, we assessed hospitalizations, ED visits, ICU stays, 30-day rehospitalizations, and mortality. The chi-square or Wilcoxon rank-sum tests were used to compare those with and without optimal control. We used Cox proportional hazard models to estimate the associations between optimal diabetes mellitus status and each outcome. Results: We identified 5,731 empaneled patients with diabetes mellitus; 2,842 (49.6%) were in the optimal control category. After adjustment, we observed that non-optimally controlled patients had higher risks for hospitalization (hazard ratio [HR] 1.11; 95% confidence interval [CI] 1.00–1.23), ED visits (HR 1.15; 95% CI 1.06–1.25), and mortality (HR 1.29; 95% CI 1.09–1
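
    Because the study's "optimal control" status is a simple conjunction of three targets, it reduces to a few comparisons. A minimal sketch with hypothetical argument names (the study's data model is not given):

    ```python
    # Minimal sketch of the MCM optimal-control definition used above.
    def optimal_diabetes_control(ldl_mg_dl, systolic, diastolic, hba1c_pct):
        """True when all three Minnesota Community Measurement targets are met."""
        return (ldl_mg_dl < 100                        # LDL cholesterol < 100 mg/dL
                and systolic < 140 and diastolic < 90  # blood pressure < 140/90 mmHg
                and hba1c_pct < 8.0)                   # hemoglobin A1c < 8%

    # Example: LDL 95 mg/dL, BP 130/85 mmHg, A1c 7.2% -> optimal
    assert optimal_diabetes_control(95, 130, 85, 7.2)
    ```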

  10. SOFTWARE METRICS VALIDATION METHODOLOGIES IN SOFTWARE ENGINEERING

    Directory of Open Access Journals (Sweden)

    K.P. Srinivasan

    2014-12-01

    Full Text Available In software measurement validation, assessing the validity of software metrics in software engineering is a very difficult task due to the lack of both theoretical and empirical methodologies [41, 44, 45]. In recent years, a number of researchers have addressed the issue of validating software metrics. At present, software metrics are validated theoretically using properties of measures. Software measurement plays an important role in understanding and controlling software development practices and products. The major requirement in software measurement is that the measures must accurately represent the attributes they purport to quantify, and validation is critical to the success of software measurement. Validation is a collection of analysis and testing activities across the full life cycle that complements the efforts of other quality engineering functions; it is a critical task in any engineering project. The objective of validation is to discover defects in a system and to assess whether or not the system is useful and usable in an operational situation. In software engineering, validation is one of the disciplines that help build quality into software. The major objective of the software validation process is to determine that the software performs its intended functions correctly and to provide information about its quality and reliability. This paper discusses the validation methodologies, techniques and the different properties of measures that are used for software metrics validation. In most cases, theoretical and empirical validations are conducted for software metrics validation in software engineering [1-50].

  11. Dental metric assessment of the omo fossils: implications for the phylogenetic position of Australopithecus africanus.

    Science.gov (United States)

    Hunt, K; Vitzthum, V J

    1986-10-01

    The discovery of Australopithecus afarensis has led to new interpretations of hominid phylogeny, some of which reject A. africanus as an ancestor of Homo. Analysis of buccolingual tooth crown dimensions in australopithecines and Homo species by Johanson and White (Science 202:321-330, 1979) revealed that the South African gracile australopithecines are intermediate in size between Laetoli/Hadar hominids and South African robust hominids. Homo, on the other hand, displays dimensions similar to those of A. afarensis and smaller than those of other australopithecines. These authors conclude, therefore, that A. africanus is derived in the direction of A. robustus and is not an ancestor of the Homo clade. However, there is a considerable time gap (ca. 800,000 years) between the Laetoli/Hadar specimens and the earliest Homo specimens; "gracile" hominids from Omo fit into this chronological gap and are from the same geographic area. Because the early specimens at Omo have been designated A. afarensis and the later specimens classified as Homo habilis, Omo offers a unique opportunity to test hypotheses concerning hominid evolution, especially regarding the phylogenetic status of A. africanus. Comparisons of mean cheek teeth breadths disclosed significant (P ≤ 0.05) differences between the Omo sample and the Laetoli/Hadar fossils (P4, M2, and M3), the Homo fossils (P3, P4, M1, M2, and M1), and A. africanus (M3). Of the several possible interpretations of these data, it appears that the high degree of similarity between the Omo sample and the South African gracile australopithecine material warrants considering the two as geographical variants of A. africanus. The geographic, chronologic, and metric attributes of the Omo sample argue for its lineal affinity with A. afarensis and Homo. In conclusion, a consideration of hominid postcanine dental metrics provides no basis for removing A. africanus from the ancestry of the Homo lineage. PMID:3099582

  12. MICROWAVE REMOTE SENSING IN SOIL QUALITY ASSESSMENT

    Directory of Open Access Journals (Sweden)

    S. K. Saha

    2012-08-01

    Full Text Available Information on spatial and temporal variations of soil quality (soil properties) is required for various purposes of sustainable agriculture development and management. Traditionally, soil quality characterization is done by in situ point soil sampling and subsequent laboratory analysis. Such methodology has limitations for assessing the spatial variability of soil quality. Various researchers in the recent past showed the potential utility of hyperspectral remote sensing techniques for spatial estimation of soil properties. However, limited research studies have been carried out showing the potential of microwave remote sensing data for spatial estimation of various soil properties except soil moisture. This paper reviews the status of microwave remote sensing techniques (active and passive) for spatial assessment of soil quality parameters such as soil salinity, soil erosion, soil physical properties (soil texture and hydraulic properties), drainage condition and soil surface roughness. Past and recent research studies showed that both active and passive microwave remote sensing techniques have great potential for assessment of these soil qualities (soil properties). However, more research studies on the use of multi-frequency and fully polarimetric microwave remote sensing data, and on modelling the interaction of such data with soil, are very much needed for operational use of satellite microwave remote sensing data in soil quality assessment.

  13. Assessing product image quality for online shopping

    Science.gov (United States)

    Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq

    2012-01-01

    Assessing product-image quality is important in the context of online shopping. A high-quality image that conveys more information about a product can boost the buyer's confidence and can get more attention. However, the notion of image quality for product-images is not the same as that in other domains. The perception of quality of product-images depends not only on various photographic quality features but also on various high-level features such as the clarity of the foreground or the goodness of the background. In this paper, we define a notion of product-image quality based on various such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay's images. We formulate a multi-class classification problem for modeling image quality by classifying images into good, fair and poor quality based on the guided perceptual notions from the judges. We also conduct experiments with regression using average crowd-sourced human judgments as the target. We compute a pseudo-regression score with the expected average of predicted classes and also compute a score from the regression technique. We design many experiments with various sampling and voting schemes with crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracies (greater than or equal to 70%) on the test data set. We observe that our computed image quality score has a high (0.66) rank correlation with average votes from the crowd-sourced human judgments.
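
    A minimal sketch of the label-aggregation step such an experiment implies, assuming majority voting over ordinal judgments (0 = poor, 1 = fair, 2 = good) and a Spearman rank correlation between model scores and mean votes; the data and names are hypothetical, not eBay's:

    ```python
    # Aggregate crowd votes into class labels and rank-correlate model scores.
    import numpy as np
    from scipy.stats import spearmanr

    votes = {"img1": [2, 2, 1], "img2": [0, 1, 0], "img3": [2, 1, 2]}
    labels = {k: int(np.bincount(v, minlength=3).argmax()) for k, v in votes.items()}
    mean_votes = np.array([np.mean(v) for v in votes.values()])
    model_scores = np.array([0.9, 0.2, 0.7])   # e.g., expected class under predicted probabilities
    rho, _ = spearmanr(model_scores, mean_votes)
    print(labels, round(rho, 2))
    ```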

  14. Quality assessment in meta-analysis

    Directory of Open Access Journals (Sweden)

    Giuseppe La Torre

    2006-06-01

    Full Text Available

    Background: An important characteristic of meta-analysis is that the results are determined both by the management of the meta-analysis process and by the features of the studies included. The scientific rigor of potential primary studies varies considerably, and the common objection to meta-analytic summaries is that they combine results from studies of different quality. Researchers began by developing quality scales for experimental studies; now their interest is also focusing on observational studies. Since 1980, when Chalmers developed the first quality scale to assess primary studies included in meta-analysis, more than 100 scales have been developed, which vary dramatically in the quality and quantity of the items included. No standard lists of items exist, and the quality scales in use lack empirically supported components.

    Methods: Two of the most important and diffuse quality scales for experimental studies, Jadad system and Chalmers’ scale, and a quality scale used for observational studies, developed by Angelillo et al., are described and compared.

    Conclusion: The fallibility of meta-analysis is not surprising, considering the various biases that may be introduced by the processes of locating and selecting studies, including publication bias, language bias and citation bias. Quality assessment of the studies offers an estimate of the likelihood that their results will express the truth.

  15. Can we go beyond burned area assessment with fire patch metrics from global remote sensing?

    Science.gov (United States)

    Nogueira Pereira Messias, Joana; Ruffault, Julien; Chuvieco, Emilio; Mouillot, Florent

    2016-04-01

    Fire is a major event influencing global biogeochemical cycles and contributes to the emissions of CO2 and other greenhouse gases to the atmosphere. Global burned area (BA) datasets from remote sensing have provided fruitful information for quantifying carbon emissions in global biogeochemical models and for benchmarking DGVMs. Patch-level analysis from pixel-level information recently emerged as an informative additional feature of the fire regime, such as the fire size distribution. The aim of this study is to evaluate the ability of global BA products to accurately represent characteristics of fire patches (size, shape complexity and spatial orientation). We selected a site in the Brazilian savannas (Cerrado), one of the most fire-prone biomes and one of the validation test sites for the ESA fire-Cci project. We used the pixel-level burned area detected by Landsat, MCD45A1 and the newly delivered MERIS ESA fire-Cci product for the period 2002-2009. A flood-fill algorithm adapted from Archibald and Roy (2009) was used to identify the individual fire patches (patch ID) according to the burned date (BD). For each patch ID, we calculated a panel of patch metrics: area, perimeter and core area; shape complexity (shape index and fractal dimension); and the features of the ellipse fitted over the spatial distribution of pixels composing the patch (eccentricity and direction of the main axis). Paired fire patches overlapping between each pair of BA products were compared. The correlations between patch metrics were evaluated by linear regression models for each inter-product comparison according to fire size classes. Our results showed significant patch overlaps (>30%) between products for patches with areas larger than 270 ha, with more than 90% of patches overlapping between MERIS and MCD45A1. Fire patch metric correlations showed R2>0.6 for all comparisons of patch area and core area, with a slope of 0.99 between MERIS and MCD45A1, illustrating the agreement between the two global products. The
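
    Connected-component labelling captures the core of the patch-identification step. The sketch below uses scipy's labelling as a stand-in for the flood-fill of Archibald and Roy (2009), which additionally constrains pixels to nearby burn dates; the tiny burn_date raster is hypothetical:

    ```python
    # Label spatially connected burned pixels and report per-patch metrics.
    import numpy as np
    from scipy import ndimage

    burn_date = np.array([[0, 10, 10, 0],      # day-of-burn, 0 = unburned
                          [0, 10,  0, 0],
                          [0,  0, 45, 45]])
    patches, n = ndimage.label(burn_date > 0)  # 4-connected components
    for pid in range(1, n + 1):
        mask = patches == pid
        print(pid, mask.sum(), np.unique(burn_date[mask]))  # id, area (px), dates
    ```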

  16. Visualization and quality assessment of the contrast transfer function estimation.

    Science.gov (United States)

    Sheth, Lisa K; Piotrowski, Angela L; Voss, Neil R

    2015-11-01

    The contrast transfer function (CTF) describes an undesirable distortion of image data from a transmission electron microscope. Many users of full-featured processing packages are new to electron microscopy and are unfamiliar with the CTF concept. Here we present a common graphical output to clearly demonstrate the CTF fit quality independent of the estimation software. Separately, many software programs exist to estimate the four CTF parameters, but their results are difficult to compare across multiple runs and it is all but impossible to select the best parameters to use for further processing. A new measurement, called the CTF resolution, is presented based on the correlation falloff of the calculated CTF oscillations against the normalized oscillating signal of the data. It was devised to provide a robust numerical quality metric of every CTF estimation for high-throughput screening of micrographs and to select the best parameters for each micrograph. These new CTF visualizations and quantitative measures will help users better assess the quality of their CTF parameters and provide a mechanism to choose the best CTF tool for their data. PMID:26080023

  17. User-Perceived Quality Assessment for VoIP Applications

    CERN Document Server

    Beuran, R; CERN. Geneva

    2004-01-01

    We designed and implemented a system that permits the measurement of network Quality of Service (QoS) parameters. This system allows us to objectively evaluate the requirements of network applications for delivering user-acceptable quality. To do this we accurately compute the network QoS parameters: one-way delay, jitter, packet loss and throughput. The measurement system makes use of a global clock to synchronise the time measurements at different points of the network. To study the behaviour of real network applications, specific metrics must be defined in order to assess the user-perceived quality (UPQ) for each application. Since we simultaneously measure network QoS and application UPQ, we are able to correlate them. Determining application requirements has two main uses: (i) to predict the expected UPQ for an application running over a given network (based on the corresponding measured QoS parameters) and understand the causes of application failure; (ii) to design/configure networks that provide the ne...
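
    The jitter metric named above is commonly estimated with the RFC 3550 smoothing formula; the paper's own formulas are not reproduced here, so the sketch below is one plausible implementation, assuming send and receive timestamps from synchronised clocks (which the system's global clock provides):

    ```python
    # RFC 3550-style interarrival jitter from matched timestamp pairs.
    def jitter(send_times, recv_times):
        j, prev_transit = 0.0, None
        for s, r in zip(send_times, recv_times):
            transit = r - s                  # one-way delay
            if prev_transit is not None:
                j += (abs(transit - prev_transit) - j) / 16.0  # exponential smoothing
            prev_transit = transit
        return j

    print(jitter([0.00, 0.02, 0.04], [0.100, 0.125, 0.142]))
    ```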

  18. Metrical Quantization

    CERN Document Server

    Klauder, J R

    1998-01-01

    Canonical quantization may be approached from several different starting points. The usual approaches involve promotion of c-numbers to q-numbers, or path integral constructs, each of which generally succeeds only in Cartesian coordinates. All quantization schemes that lead to Hilbert space vectors and Weyl operators (even those that eschew Cartesian coordinates) implicitly contain a metric on a flat phase space. This feature is demonstrated by studying the classical and quantum "aggregations", namely, the set of all facts and properties resident in all classical and quantum theories, respectively. Metrical quantization is an approach that elevates the flat phase space metric inherent in any canonical quantization to the level of a postulate. Far from being an unwanted structure, the flat phase space metric carries essential physical information. It is shown how the metric, when employed within a continuous-time regularization scheme, gives rise to an unambiguous quantization procedure that automatically ...

  19. Assessing the performance of macroinvertebrate metrics in the Challhuaco-Ñireco System (Northern Patagonia, Argentina

    Directory of Open Access Journals (Sweden)

    Melina Mauad

    2015-09-01

    Full Text Available ABSTRACT Seven sites were examined in the Challhuaco-Ñireco system, located in the reserve of the Nahuel Huapi National Park; however, part of the catchment is urbanized, with San Carlos de Bariloche (150,000 inhabitants) located in the lower part of the basin. Physico-chemical variables were measured and benthic macroinvertebrates were collected during three consecutive years at seven sites from the headwater to the river outlet. Sites near the source of the river were characterised by Plecoptera, Ephemeroptera, Trichoptera and Diptera, whereas sites close to the river mouth were dominated by Diptera, Oligochaeta and Mollusca. Regarding functional feeding groups, collector-gatherers were dominant at all sites and this pattern was consistent among years. Ordination analysis (RDA) revealed that the distribution of species assemblages responded to the climatic and topographic gradient (temperature and elevation), but was also associated with variables related to human impact (conductivity, nitrate and phosphate contents). Species assemblages at headwaters were mostly represented by sensitive insects, whereas tolerant taxa such as Tubificidae, Lumbriculidae, Chironomidae and the crustacean Aegla sp. were dominant at urbanised sites. Regarding the macroinvertebrate metrics employed, total richness, EPT taxa, the Shannon diversity index and the Biotic Monitoring Patagonian Stream index proved fairly consistent and evidenced different levels of disturbance along the stream, meaning that these measures are suitable for evaluating the status of Patagonian mountain streams.
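
    Two of the metrics named above, the Shannon diversity index and EPT richness, follow directly from abundance counts. A minimal sketch with hypothetical counts (natural-log convention for H'):

    ```python
    import math

    counts = {"Baetis": 40, "Notoperla": 12, "Smicridea": 8, "Chironomidae": 60}
    orders = {"Baetis": "Ephemeroptera", "Notoperla": "Plecoptera",
              "Smicridea": "Trichoptera", "Chironomidae": "Diptera"}

    n = sum(counts.values())
    shannon = -sum((c / n) * math.log(c / n) for c in counts.values())
    ept = sum(1 for t in counts
              if orders[t] in ("Ephemeroptera", "Plecoptera", "Trichoptera"))
    print(round(shannon, 3), ept)   # H' and EPT taxa richness
    ```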

  20. Objective assessment of MPEG-2 video quality

    Science.gov (United States)

    Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano

    2002-07-01

    The increasing use of video compression standards in broadcasting television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams by using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption on the complexity of the model. The neural network processes an instantaneous set of input values, and yields an associated estimate of perceived quality. Therefore, the neural-network approach turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, since avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations for actual scoring curves concerning real test videos.

  1. Quality assessment of aluminized steel tubes

    OpenAIRE

    K. Żaba

    2010-01-01

    The results of assessments of welded steel tubes with an Al-Si coating intended for automotive needs are presented in the paper. The measurement of mechanical properties, tube diameters and thicknesses, and internal flash heights, as well as an alternative assessment of the weld quality, was performed. The obtained results are presented by means of tools available in the Statistica program and by macroscopic observations.

  2. Water quality issues and energy assessments

    Energy Technology Data Exchange (ETDEWEB)

    Davis, M.J.; Chiu, S.

    1980-11-01

    This report identifies and evaluates the significant water quality issues related to regional and national energy development. In addition, it recommends improvements in the Office assessment capability. Handbook-style formatting, which includes a system of cross-references and prioritization, is designed to help the reader use the material.

  3. Urban Air Quality Assessment Model UAQAM

    NARCIS (Netherlands)

    van Pul WAJ; van Zantvoort EDG; de Leeuw FAAM; Sluyter RJCF; LLO

    1996-01-01

    The Urban Air Quality Assessment Model (UAQAM) calculates the concentration of air pollution in urban areas caused by emissions from the city itself. In a working version of this model, three descriptions of the dispersion were studied: a Box model, the Gifford-Hanna (GH) model and a

  4. Urban Air Quality Assessment Model UAQAM

    NARCIS (Netherlands)

    Pul WAJ van; Zantvoort EDG van; Leeuw FAAM de; Sluyter RJCF; LLO

    1996-01-01

    The Urban Air Quality Assessment Model (UAQAM) calculates the city concentration caused by city emissions themselves, the so-called city background concentration. Three versions of the model for describing the dispersion were studied: Box, Gifford Hanna (GH) and a combined form of these two (the Box

  5. Assessing uncertainty in stormwater quality modelling.

    Science.gov (United States)

    Wijesiri, Buddhi; Egodawatta, Prasanna; McGree, James; Goonetilleke, Ashantha

    2016-10-15

    Designing effective stormwater pollution mitigation strategies is a challenge in urban stormwater management. This is primarily due to the limited reliability of catchment-scale stormwater quality modelling tools. As such, assessing the uncertainty associated with the information generated by stormwater quality models is important for informed decision making. Quantitative assessment of build-up and wash-off process uncertainty, which arises from the variability associated with these processes, is a major concern, as typical uncertainty assessment approaches do not adequately account for process uncertainty. The research study undertaken found that the variability of build-up and wash-off processes for different particle size ranges leads to process uncertainty. After variability and the resulting process uncertainties are accurately characterised, they can be incorporated into catchment stormwater quality predictions. Accounting for process uncertainty influences the uncertainty limits associated with predicted stormwater quality. The impact of build-up process uncertainty on stormwater quality predictions is greater than that of wash-off process uncertainty. Accordingly, decision making should facilitate the design of mitigation strategies which specifically address variations in the load and composition of pollutants accumulated during dry weather periods. Moreover, the study outcomes found that the influence of process uncertainty differs among stormwater quality predictions corresponding to storm events with different intensity, duration and runoff volume generated. These storm events were also found to be significantly different in terms of the Runoff-Catchment Area ratio. As such, the selection of storm events in the context of designing stormwater pollution mitigation strategies needs to take into consideration not only the storm event characteristics, but also the influence of process uncertainty on stormwater quality predictions. PMID:27423532
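
    Build-up and wash-off are commonly represented with exponential formulations; the study's exact equations and parameter values are not given in the abstract, so the sketch below is a generic illustration with assumed parameters (b_max, k_b, k_w):

    ```python
    import math

    def buildup(days_dry, b_max=120.0, k_b=0.4):
        """Pollutant load (e.g., kg/ha) accumulated over a dry period."""
        return b_max * (1.0 - math.exp(-k_b * days_dry))

    def washoff(b0, intensity_mm_h, duration_h, k_w=0.05):
        """Portion of the built-up load removed by a storm."""
        return b0 * (1.0 - math.exp(-k_w * intensity_mm_h * duration_h))

    b = buildup(days_dry=7)
    print(round(b, 1), round(washoff(b, intensity_mm_h=12.0, duration_h=2.0), 1))
    ```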

  6. Surface water quality assessment by environmetric methods.

    Science.gov (United States)

    Boyacioglu, Hülya; Boyacioglu, Hayal

    2007-08-01

    This environmetric study deals with the interpretation of river water monitoring data from the basin of the Buyuk Menderes River and its tributaries in Turkey. Eleven variables were measured to estimate water quality at 17 sampling sites. Factor analysis was applied to explain the correlations between the observations in terms of underlying factors. Results revealed that water quality was strongly affected by agricultural uses. Cluster analysis was used to classify stations with similar properties, and the results distinguished three groups of stations. Water quality downstream of the river was quite different from that of the other parts. It is recommended that environmetric data treatment be adopted as a substantial procedure in the assessment of water quality data.
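
    A minimal sketch of the environmetric workflow described, dimension reduction followed by grouping of stations, using scikit-learn's FactorAnalysis and KMeans as stand-ins for the study's exact factor and cluster procedures; the data are random placeholders for the 17-site by 11-variable matrix:

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import FactorAnalysis
    from sklearn.cluster import KMeans

    X = np.random.rand(17, 11)                     # 17 sampling sites x 11 variables
    scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(
        StandardScaler().fit_transform(X))
    groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
    print(groups)                                  # cluster membership per station
    ```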

  7. Quality Assessment of Urinary Stone Analysis

    DEFF Research Database (Denmark)

    Siener, Roswitha; Buchholz, Noor; Daudon, Michel;

    2016-01-01

    The aim of the present study was to assess the quality of urinary stone analysis of laboratories in Europe. Nine laboratories from eight European countries participated in six quality control surveys for urinary calculi analyses of the Reference Institute for Bioanalytics, Bonn, Germany, between 2010 and 2014. Each participant received the same blinded test samples for stone analysis. A total of 24 samples, comprising pure substances and mixtures of two or three components, were analysed. The evaluation of the quality of the laboratories in the present study was based on the attainment … fulfilled the quality requirements. According to the current standard, chemical analysis is considered to be insufficient for stone analysis, whereas infrared spectroscopy or X-ray diffraction is mandatory. However, the poor results of infrared spectroscopy highlight the importance of equipment, reference … and chemical analysis.

  8. A multi-scale metrics approach to forest fragmentation for Strategic Environmental Impact Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eunyoung, E-mail: eykim@kei.re.kr [Korea Environment Institute, 215 Jinheungno, Eunpyeong-gu, Seoul 122-706 (Korea, Republic of); Song, Wonkyong, E-mail: wksong79@gmail.com [Suwon Research Institute, 145 Gwanggyo-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do 443-270 (Korea, Republic of); Lee, Dongkun, E-mail: dklee7@snu.ac.kr [Department of Landscape Architecture and Rural System Engineering, Seoul National University, 599 Gwanakro, Gwanak-gu, Seoul 151-921 (Korea, Republic of); Research Institute for Agriculture and Life Sciences, Seoul National University, Seoul 151-921 (Korea, Republic of)

    2013-09-15

    Forests are becoming severely fragmented as a result of land development. South Korea has responded to changing community concerns about environmental issues. The nation has developed and is extending a broad range of tools for use in environmental management. Although legally mandated environmental compliance requirements in South Korea have been implemented to predict and evaluate the impacts of land-development projects, these legal instruments are often insufficient to assess the subsequent impact of development on the surrounding forests. It is especially difficult to examine impacts on multiple (e.g., regional and local) scales in detail. Forest configuration and size, including forest fragmentation by land development, are considered on a regional scale. Moreover, forest structure and composition, including biodiversity, are considered on a local scale in the Environmental Impact Assessment process. Recently, the government amended the Environmental Impact Assessment Act, including the SEA, EIA, and small-scale EIA, to require an integrated approach. Therefore, the purpose of this study was to establish an impact assessment system that minimizes the impacts of land development using an approach that is integrated across multiple scales. This study focused on forest fragmentation due to residential development and road construction sites in selected Congestion Restraint Zones (CRZs) in the Greater Seoul Area of South Korea. Based on a review of multiple-scale impacts, this paper integrates models that assess the impacts of land development on forest ecosystems. The applicability of the integrated model for assessing impacts on forest ecosystems through the SEIA process is considered. On a regional scale, it is possible to evaluate the location and size of a land-development project by considering aspects of forest fragmentation, such as the stability of the forest structure and the degree of fragmentation. On a local scale, land-development projects should

  9. Mass Customization Measurements Metrics

    DEFF Research Database (Denmark)

    Nielsen, Kjeld; Brunø, Thomas Ditlev; Jørgensen, Kaj Asbjørn;

    2014-01-01

    A recent survey has indicated that 17% of companies have ceased mass customizing less than 1 year after initiating the effort. This paper presents measurements for a company's mass customization performance, utilizing metrics within the three fundamental capabilities: robust process design, choice navigation, and solution space development. When assessing performance with these metrics, a mass customizer can identify the areas in which improvement would increase competitiveness the most, enabling a more efficient transition to mass customization.

  10. OBJECTIVE QUALITY ASSESSMENT OF IMAGE ENHANCEMENT METHODS IN DIGITAL MAMMOGRAPHY-A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Sheba K.U

    2016-08-01

    Full Text Available Mammography is the primary and most reliable technique for detection of breast cancer. Mammograms are examined for the presence of malignant masses and indirect signs of malignancy such as microcalcifications, architectural distortion and bilateral asymmetry. However, mammograms are X-ray images taken with low radiation dosage, which results in low-contrast, noisy images. Also, malignancies in dense breasts are difficult to detect due to the opaque uniform background in mammograms. Hence, techniques for improving visual screening of mammograms are essential. Image enhancement techniques are used to improve the visual quality of the images. This paper presents a comparative study of different preprocessing techniques used for enhancement of mammograms in the mini-MIAS database. Performance of the image enhancement techniques is evaluated using objective image quality assessment methods. These include simple statistical error metrics like PSNR and human visual system (HVS) feature-based metrics such as SSIM, NCC, UIQI and discrete entropy.
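
    Two of the metrics named above are compact enough to sketch directly: PSNR is a full-reference measure against an undistorted image, while discrete entropy needs only the grey-level histogram of the enhanced image. The images below are random placeholders:

    ```python
    import numpy as np

    def psnr(ref, test):
        mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

    def discrete_entropy(img):
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        p = hist[hist > 0] / hist.sum()
        return -np.sum(p * np.log2(p))

    ref = np.random.randint(0, 256, (64, 64))
    test = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255)
    print(round(psnr(ref, test), 1), round(discrete_entropy(test), 2))
    ```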

  11. Air Quality Assessment Using Interpolation Technique

    Directory of Open Access Journals (Sweden)

    Awkash Kumar

    2016-07-01

    Full Text Available Air pollution is increasing rapidly in almost all cities around the world due to the increase in population. Mumbai city in India is one of the mega cities where air quality is deteriorating at a very rapid rate. Air quality monitoring stations have been installed in the city to regulate air pollution control strategies to reduce the air pollution level. In this paper, air quality assessment has been carried out over the sample region using interpolation techniques. The Inverse Distance Weighting (IDW) technique of Geographical Information Systems (GIS) has been used to perform interpolation with the help of concentration data on air quality at three locations in Mumbai for the year 2008. The classification was done for the spatial and temporal variation in air quality levels for the Mumbai region. The seasonal and annual variations of air quality levels for SO2, NOx and SPM (Suspended Particulate Matter) are the focus of this study. Results show that SPM concentration always exceeded the permissible limit of the National Ambient Air Quality Standard. Also, the seasonal SPM level was low in the monsoon due to rainfall. The findings of this study will help to formulate control strategies for rational management of air pollution and can be used for many other regions.
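
    IDW itself is a short formula: each station's value is weighted by the inverse of its distance to the target point raised to a power (2 is a common choice, assumed here). A minimal sketch with hypothetical station coordinates and concentrations:

    ```python
    import numpy as np

    def idw(stations, values, point, power=2.0):
        d = np.linalg.norm(stations - point, axis=1)
        if np.any(d == 0):                 # target coincides with a station
            return values[np.argmin(d)]
        w = 1.0 / d ** power
        return np.sum(w * values) / np.sum(w)

    stations = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
    so2 = np.array([18.0, 25.0, 31.0])     # e.g., SO2 in ug/m3
    print(round(idw(stations, so2, np.array([1.0, 1.0])), 1))
    ```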

  12. Assessing Quality of Data Standards: Framework and Illustration Using XBRL GAAP Taxonomy

    Science.gov (United States)

    Zhu, Hongwei; Wu, Harris

    The primary purpose of data standards or metadata schemas is to improve the interoperability of data created by multiple standard users. Given the high cost of developing data standards, it is desirable to assess the quality of data standards. We develop a set of metrics and a framework for assessing data standard quality. The metrics include completeness and relevancy. Standard quality can also be indirectly measured by assessing interoperability of data instances. We evaluate the framework using data from the financial sector: the XBRL (eXtensible Business Reporting Language) GAAP (Generally Accepted Accounting Principles) taxonomy and US Securities and Exchange Commission (SEC) filings produced using the taxonomy by approximately 500 companies. The results show that the framework is useful and effective. Our analysis also reveals quality issues of the GAAP taxonomy and provides useful feedback to taxonomy users. The SEC has mandated that all publicly listed companies must submit their filings using XBRL. Our findings are timely and have practical implications that will ultimately help improve the quality of financial data.

  13. Water Quality Assessment using Satellite Remote Sensing

    Science.gov (United States)

    Haque, Saad Ul

    2016-07-01

    The two main global issues related to water are its declining quality and quantity. Population growth, industrialization, expansion of agricultural land and urbanization are the main causes by which inland water bodies are confronted with increasing water demand. The quality of surface water has also been degraded in many countries over the past few decades due to the inputs of nutrients and sediments, especially in lakes and reservoirs. Since water is essential not only for meeting human needs but also for maintaining natural ecosystem health and integrity, there are efforts worldwide to assess and restore the quality of surface waters. Remote sensing techniques provide a tool for continuous water quality information in order to identify and minimize sources of pollutants that are harmful for human and aquatic life. The proposed methodology is focused on assessing the quality of water at selected lakes in Pakistan (Sindh), namely HUBDAM, KEENJHAR LAKE, HALEEJI and HADEERO. These lakes are drinking water sources for several major cities of Pakistan, including Karachi. Satellite imagery of Landsat 7 (ETM+) is used to identify the variation in water quality of these lakes in terms of their optical properties. All bands of the Landsat 7 (ETM+) image are analyzed to select only those that may be correlated with some water quality parameters (e.g. suspended solids, chlorophyll a). The Optimum Index Factor (OIF) developed by Chavez et al. (1982) is used for selecting the optimum combination of bands. The OIF is calculated by dividing the sum of the standard deviations of any three bands by the sum of their respective correlation coefficients (absolute values). It is assumed that a band with a higher standard deviation contains a higher amount of 'information' than other bands. Therefore, OIF values are ranked and the three bands with the highest OIF are selected for visual interpretation. A color composite image is created using these three bands. The water quality
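
    The OIF computation follows directly from the description above: for each candidate band triplet, divide the sum of the three standard deviations by the sum of the absolute pairwise correlation coefficients, then keep the highest-scoring triplet for the colour composite. A minimal sketch with placeholder arrays standing in for the ETM+ bands:

    ```python
    import numpy as np
    from itertools import combinations

    bands = {i: np.random.rand(100, 100) for i in range(1, 6)}  # stand-in rasters

    def oif(b1, b2, b3):
        stds = sum(b.std() for b in (b1, b2, b3))
        corrs = sum(abs(np.corrcoef(x.ravel(), y.ravel())[0, 1])
                    for x, y in combinations((b1, b2, b3), 2))
        return stds / corrs

    best = max(combinations(bands, 3), key=lambda t: oif(*(bands[i] for i in t)))
    print(best)   # band triplet with the highest OIF
    ```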

  14. QoS Metrics for Cloud Computing Services Evaluation

    Directory of Open Access Journals (Sweden)

    Amid Khatibi Bardsiri

    2014-11-01

    Full Text Available Cloud systems are transforming the information technology industry by enabling companies to provide access to their infrastructure and software products on a subscription basis. Because of the vast range of delivered cloud solutions, it has become difficult from the customer's perspective to decide which provider to use and on what basis to make that choice. In particular, employing suitable metrics is vital in assessing practices. Nevertheless, to the best of our knowledge, there is no systematic description of metrics for evaluating cloud products and services. QoS (Quality of Service) metrics play an important role in selecting cloud providers and in optimizing resource utilization efficiency. While many reports are devoted to exploiting QoS metrics, relatively few tools support the observation and analysis of the QoS metrics of cloud applications. To guarantee that a specialized product is published, describing metrics for assessing QoS is an essential requirement. This article therefore suggests various QoS metrics for service vendors, especially considering the consumer's concerns, and provides a list of metrics that may support future study and assessment in the field of cloud service evaluation.

  15. Collembase: a repository for springtail genomics and soil quality assessment

    NARCIS (Netherlands)

    Timmermans, M.J.T.N.; Boer, de M.E.; Nota, B.; Marien, J.; Klein Lankhorst, R.M.; Straalen, van N.M.; Roelofs, D.

    2007-01-01

    Environmental quality assessment is traditionally based on responses of reproduction and survival of indicator organisms. For soil assessment the springtail Folsomia candida (Collembola) is an accepted standard test organism. We argue that environmental quality assessment using gene expression profi

  16. Assessing the Quality of Diabetic Patients Care

    Directory of Open Access Journals (Sweden)

    Belkis Vicente Sánchez

    2012-12-01

    Full Text Available Background: improving the efficiency and effectiveness of the actions of family doctors and nurses in this area is an indispensable requisite in order to achieve comprehensive health care. Objective: to assess the quality of health care provided to diabetic patients by the family doctor in the Abreus health area. Methods: a descriptive and observational study based on the application of tools to assess the performance of family doctors in the treatment of diabetes mellitus in the five family doctor consultations in the Abreus health area from January to July 2011 was conducted. The five doctors working in these consultations, as well as 172 diabetic patients, were included in the study. At the same time, 172 randomly selected medical records were also reviewed. Through observation, the existence of some necessary material resources, the quality of the doctors' performance and the quality of the medical records were evaluated. Patient criteria served to assess the quality of the health care provided. Results: scientific and technical training on diabetes mellitus has been insufficient; the necessary equipment for the appropriate care and monitoring of patients with diabetes is available; in 2.9% of the medical records reviewed, the interrogation appears in its complete form, including the complete physical examination in 12 of them and the complete medical indications in 26. Conclusions: the quality of comprehensive medical care to diabetic patients included in the study is compromised. The doctors interviewed recognized the need to be trained in the diagnosis and treatment of diabetes in order to improve their professional performance and enhance the quality of the health care provided to these patients.

  17. Automated Data Quality Assessment of Marine Sensors

    OpenAIRE

    Smith, Daniel V; Leon Reznik; Paulo A. Souza; Timms, Greg P.

    2011-01-01

    The automated collection of data (e.g., through sensor networks) has led to a massive increase in the quantity of environmental and other data available. The sheer quantity of data and growing need for real-time ingestion of sensor data (e.g., alerts and forecasts from physical models) means that automated Quality Assurance/Quality Control (QA/QC) is necessary to ensure that the data collected is fit for purpose. Current automated QA/QC approaches provide assessments based upon hard classific...

  18. Drinking Water Quality Assessment in Tetova Region

    OpenAIRE

    B. H. Durmishi; Ismaili, M.; Shabani, A.; Sh. Abduli

    2012-01-01

    Problem statement: The quality of drinking water is a crucial factor for human health. The objective of this study was the assessment of physical, chemical and bacteriological quality of the drinking water in the city of Tetova and several surrounding villages in the Republic of Macedonia for the period May 2007-2008. The sampling and analysis are conducted in accordance with State Regulation No. 57/2004, which is in compliance with EU and WHO standards. A total of 415 samples were taken for ...

  19. Ecological Status of a Patagonian Mountain River: Usefulness of Environmental and Biotic Metrics for Rehabilitation Assessment.

    Science.gov (United States)

    Laura, Miserendino M; Adriana, M Kutschker; Cecilia, Brand; La Ludmila, Manna; Cecilia, Prinzio Y Di; Gabriela, Papazian; José, Bava

    2016-06-01

    This work evaluates the consequences of anthropogenic pressures at different sections of a Patagonian mountain river using a set of environmental and biological measures. A map of risk of soil erosion at a basin scale was also produced. The study was conducted at 12 sites along the Percy River system, where physicochemical parameters, riparian ecosystem quality, habitat condition, plants, and macroinvertebrates were investigated. While livestock and wood collection, the dominant activities at upper and middle basin sites, resulted in an important loss of forest cover, the riparian ecosystem remains in a relatively good state of conservation, as do the in-stream habitat conditions and physicochemical features. Besides, most indicators based on macroinvertebrates revealed that both upper and middle basin sections supported similar assemblages, richness, density, and most functional feeding group attributes. Instead, the lower, urbanized basin showed increases in conductivity and nutrient values and poor quality in the riparian ecosystem and habitat condition. According to the multivariate analysis, ammonia level, elevation, current velocity, and habitat conditions had explanatory power on benthos assemblages. Discharge, naturalness of the river channel, flood plain morphology, conservation status, and percent of urban areas were important moderators of plant composition. Finally, although the present land use in the basin would not produce a significant risk of soil erosion, unsustainable practices that promote the substitution of the forest for shrubs would lead to severe consequences. Mitigation efforts should be directed to protecting headwater forests, restoring the altered riparian ecosystem, and controlling the incipient eutrophication process. PMID:26961305

  1. Retinal image quality assessment through a visual similarity index

    Science.gov (United States)

    Pérez, Jorge; Espinosa, Julián; Vázquez, Carmen; Mas, David

    2013-04-01

    Retinal image quality is commonly analyzed through parameters inherited from instrumental optics. These parameters are defined for 'good optics' so they are hard to translate into visual quality metrics. Instead of using point or artificial functions, we propose a quality index that takes into account properties of natural images. These images usually show strong local correlations that help to interpret the image. Our aim is to derive an objective index that quantifies the quality of vision by taking into account the local structure of the scene, instead of focusing on a particular aberration. As we show, this index highly correlates with visual acuity and allows inter-comparison of natural images around the retina. The usefulness of the index is proven through the analysis of real eyes before and after undergoing corneal surgery, which usually are hard to analyze with standard metrics.

  2. Toward assessing subjective quality of service of conversational mobile multimedia applications delivered over the internet: a methodology study

    OpenAIRE

    Dugénie, P; Munro, ATD; Barton, MH

    2002-01-01

    Some recent publications have proposed methodologies to assess the performance of multimedia services by introducing subjective estimates of the end-to-end quality of various applications. As a general statement, in order to obtain meaningful subjective results, the experiments must be repeatable and the elements of the whole chain of transmission between users must be restricted to a minimum number of objective quality metrics. This paper presents the approach to specifying the minimum qualit

  3. Comparing concentration-based (AOT40) and stomatal uptake (PODY) metrics for ozone risk assessment to European forests.

    Science.gov (United States)

    Anav, Alessandro; De Marco, Alessandra; Proietti, Chiara; Alessandri, Andrea; Dell'Aquila, Alessandro; Cionni, Irene; Friedlingstein, Pierre; Khvorostyanov, Dmitry; Menut, Laurent; Paoletti, Elena; Sicard, Pierre; Sitch, Stephen; Vitale, Marcello

    2016-04-01

    Tropospheric ozone (O3) produces harmful effects on forests and crops, leading to a reduction of land carbon assimilation that, consequently, influences the land sink and crop yield production. To assess the potential negative O3 impacts on vegetation, the European Union uses the Accumulated Ozone over Threshold of 40 ppb (AOT40). This index has been chosen for its simplicity and flexibility in handling different ecosystems as well as for its linear relationships with yield or biomass loss. However, AOT40 does not give any information on the physiological O3 uptake into the leaves, since it does not include any environmental constraints on O3 uptake through stomata. Therefore, an index based on stomatal O3 uptake (i.e. PODY), which describes the amount of O3 entering the leaves, would be more appropriate. Specifically, the PODY metric considers the effects of multiple climatic factors, vegetation characteristics and local and phenological inputs rather than atmospheric O3 concentration alone. For this reason, the use of PODY in O3 risk assessment for vegetation is becoming recommended. We compare different potential O3 risk assessments based on two methodologies (i.e. AOT40 and stomatal O3 uptake) using a framework of mesoscale models that produces hourly meteorological and O3 data at high spatial resolution (12 km) over Europe for the period 2000-2005. Results indicate a remarkable spatial and temporal inconsistency between the two indices, suggesting that a new definition of the European legislative standard is needed in the near future. Besides, our risk assessment based on AOT40 shows good consistency with both in-situ data and other model-based datasets. Conversely, the risk assessment based on stomatal O3 uptake shows different spatial patterns compared to other model-based datasets. This strong inconsistency can likely be related to a different vegetation cover and its associated parameterizations.
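
    AOT40 itself reduces to a short accumulation over hourly daylight values. A minimal sketch, assuming ozone in ppb and a fixed 08:00-19:59 daylight window (the regulatory daylight definition and growing-season window vary and are not specified in the abstract):

    ```python
    def aot40(hourly_ppb, hours_local):
        """Accumulated Ozone over a Threshold of 40 ppb, in ppb.h."""
        return sum(max(c - 40.0, 0.0)
                   for c, h in zip(hourly_ppb, hours_local)
                   if 8 <= h < 20)          # daylight hours only

    print(aot40([35, 48, 61, 44], [7, 9, 13, 21]))   # (48-40) + (61-40) = 29.0
    ```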

  4. Website Quality Assessment Model (WQAM) for Developing an Efficient E-Learning Framework - A Novel Approach

    Directory of Open Access Journals (Sweden)

    R.Jayakumar

    2013-10-01

    Full Text Available The prodigious growth of the internet as an environment for learning has led to the development of numerous sites that offer knowledge to novices in an efficient manner. However, evaluating the quality of those sites is a substantial task. With that concern, this paper attempts to evaluate the quality measures for enhancing the site design and contents of an e-learning framework, as they relate to information retrieval over the internet. Moreover, the proposal explores two main processes: firstly, evaluating website quality with the defined high-level quality metrics such as accuracy, feasibility, utility and propriety using the Website Quality Assessment Model (WQAM); and secondly, developing an e-learning framework with improved quality. Specifically, the quality metrics are analyzed with the feedback compliance obtained through a Questionnaire Sample (QS). By this, the areas of the website that require improvement can be identified, and a new e-learning framework has been developed incorporating those enhancements.

  5. Visual quality assessment by machine learning

    CERN Document Server

    Xu, Long; Kuo, C -C Jay

    2015-01-01

    The book encompasses the state of the art in visual quality assessment (VQA) and learning-based visual quality assessment (LB-VQA) by providing a comprehensive overview of the existing relevant methods. It gives readers basic knowledge, a systematic overview and the new developments of VQA. It also covers the preliminary knowledge needed to apply Machine Learning (ML) to VQA tasks and newly developed ML techniques for the purpose. Hence, it is first of all particularly helpful to beginning readers (including research students) entering the VQA field in general and the LB-VQA one in particular. Secondly, new developments in VQA, and LB-VQA in particular, are detailed in this book, which will give peer researchers and engineers new insights into VQA.

  6. Validation of no-reference image quality index for the assessment of digital mammographic images

    Science.gov (United States)

    de Oliveira, Helder C. R.; Barufaldi, Bruno; Borges, Lucas R.; Gabarda, Salvador; Bakic, Predrag R.; Maidment, Andrew D. A.; Schiabel, Homero; Vieira, Marcelo A. C.

    2016-03-01

    To ensure optimal clinical performance of digital mammography, it is necessary to obtain images with high spatial resolution and low noise, keeping radiation exposure as low as possible. These requirements directly affect the interpretation of radiologists. The quality of a digital image should be assessed using objective measurements. In general, these methods measure the similarity between a degraded image and an ideal image without degradation (ground truth), used as a reference. These methods are called Full-Reference Image Quality Assessment (FR-IQA). However, for digital mammography, an image without degradation is not available in clinical practice; thus, an objective method to assess the quality of mammograms must be performed without reference. The purpose of this study is to present a Normalized Anisotropic Quality Index (NAQI), based on the Rényi entropy in the pseudo-Wigner domain, to assess mammography images in terms of spatial resolution and noise without any reference. The method was validated using synthetic images acquired through an anthropomorphic breast software phantom, and clinical exposures of anthropomorphic physical breast phantoms and patients' mammograms. The results reported by this no-reference index follow the same behavior as other well-established full-reference metrics, e.g., the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Reductions of 50% in the radiation dose in phantom images were translated as a decrease of 4 dB in the PSNR, 25% in the SSIM and 33% in the NAQI, evidencing that the proposed metric is sensitive to the noise resulting from dose reduction. The clinical results showed that images reduced to 53% and 30% of the standard radiation dose reported reductions of 15% and 25% in the NAQI, respectively. Thus, this index may be used in clinical practice as an image quality indicator to improve the quality assurance programs in mammography; hence, the proposed method reduces the subjectivity

  8. Quality assessment of aluminized steel tubes

    Directory of Open Access Journals (Sweden)

    K. Żaba

    2010-07-01

    Full Text Available The results of assessments of welded steel tubes with an Al-Si coating intended for automotive applications are presented in the paper. Measurements of mechanical properties, tube diameters and thickness, and internal flash heights, as well as an alternative assessment of weld quality, were performed. The obtained results are presented by means of tools available in the Statistica program and macroscopic observations.

  9. Quality Markers in Cardiology. Main Markers to Measure Quality of Results (Outcomes) and Quality Measures Related to Better Results in Clinical Practice (Performance Metrics). INCARDIO (Indicadores de Calidad en Unidades Asistenciales del Área del Corazón): A SEC/SECTCV Consensus Position Paper.

    Science.gov (United States)

    López-Sendón, José; González-Juanatey, José Ramón; Pinto, Fausto; Cuenca Castillo, José; Badimón, Lina; Dalmau, Regina; González Torrecilla, Esteban; López-Mínguez, José Ramón; Maceira, Alicia M; Pascual-Figal, Domingo; Pomar Moya-Prats, José Luis; Sionis, Alessandro; Zamorano, José Luis

    2015-11-01

    Cardiology practice requires complex organization that impacts overall outcomes and may differ substantially among hospitals and communities. The aim of this consensus document is to define quality markers in cardiology, including markers to measure the quality of results (outcomes metrics) and quality measures related to better results in clinical practice (performance metrics). The document is mainly intended for the Spanish health care system and may serve as a basis for similar documents in other countries.

  10. Assessing anthropogenic pressures on estuarine fish nurseries along the Portuguese coast: a multi-metric index and conceptual approach.

    Science.gov (United States)

    Vasconcelos, R P; Reis-Santos, P; Fonseca, V; Maia, A; Ruano, M; França, S; Vinagre, C; Costa, M J; Cabral, H

    2007-03-15

    Estuaries are among the most productive ecosystems and simultaneously among the most threatened by conflicting human activities which damage their ecological functions, namely their nursery role for many fish species. A thorough assessment of the anthropogenic pressures in Portuguese estuarine systems (Douro, Ria de Aveiro, Mondego, Tejo, Sado, Mira, Ria Formosa and Guadiana) was made applying an aggregating multi-metric index, which quantitatively evaluates influences from key components: dams, population and industry, port activities and resource exploitation. Estuaries were ranked from most (Tejo) to least pressured (Mira), and the most influential types of pressure identified. In most estuaries overall pressure was generated by a dominant group of pressure components, with several systems being afflicted by similar problematic sources. An evaluation of the influence of anthropogenic pressures on the most important Sparidae, Soleidae, Pleuronectidae, Moronidae and Clupeidae species that use these estuaries as nurseries was also performed. To consolidate information and promote management, an ecological conceptual model was built to identify potential problems for the nursery function played by these estuaries, identifying pressure agents, ecological impacts and endpoints for the anthropogenic sources quantified in the assessment. This will be important baseline information to safeguard these vital areas, articulating information and forecasting the potential efficacy of future management options.

  11. Metrics for Success: Strategies for Enabling Core Facility Performance and Assessing Outcomes.

    Science.gov (United States)

    Turpen, Paula B; Hockberger, Philip E; Meyn, Susan M; Nicklin, Connie; Tabarini, Diane; Auger, Julie A

    2016-04-01

    Core Facilities are key elements in the research portfolio of academic and private research institutions. Administrators overseeing core facilities (core administrators) require assessment tools for evaluating the need and effectiveness of these facilities at their institutions. This article discusses ways to promote best practices in core facilities as well as ways to evaluate their performance across 8 of the following categories: general management, research and technical staff, financial management, customer base and satisfaction, resource management, communications, institutional impact, and strategic planning. For each category, we provide lessons learned that we believe contribute to the effective and efficient overall management of core facilities. If done well, we believe that encouraging best practices and evaluating performance in core facilities will demonstrate and reinforce the importance of core facilities in the research and educational mission of institutions. It will also increase job satisfaction of those working in core facilities and improve the likelihood of sustainability of both facilities and personnel.

  13. Quality of assessments within reach: Review study of research and results of the quality of assessments

    NARCIS (Netherlands)

    Maassen, N.A.M.; Otter, den D.; Wools, S.; Hemker, B.T.; Straetmans, G.J.J.M.; Eggen, T.J.H.M.

    2015-01-01

    Educational tests and assessments are important instruments to measure a student’s knowledge and skills. The question that is addressed in this review study is: “which aspects are currently considered as important to the quality of educational assessments?” Furthermore, it is explored how this information…

  14. Metric Properties of the Neighborhood Inventory for Environmental Typology (NIfETy): An Environmental Assessment Tool for Measuring Indicators of Violence, Alcohol, Tobacco, and Other Drug Exposures

    Science.gov (United States)

    Furr-Holden, C. D. M.; Campbell, K. D. M.; Milam, A. J.; Smart, M. J.; Ialongo, N. A.; Leaf, P. J.

    2010-01-01

    Objectives: Establish metric properties of the Neighborhood Inventory for Environmental Typology (NIfETy). Method: A total of 919 residential block faces were assessed by paired raters using the NIfETy. Reliability was evaluated via interrater and internal consistency reliability; validity by comparing NIfETy data with youth self-reported…

  15. Quantitative Metrics and Risk Assessment: The Three Tenets Model of Cybersecurity

    Directory of Open Access Journals (Sweden)

    Jeff Hughes

    2013-08-01

    Full Text Available Progress in operational cybersecurity has been difficult to demonstrate. In spite of the considerable research and development investments made for more than 30 years, many government, industrial, financial, and consumer information systems continue to be successfully attacked and exploited on a routine basis. One of the main reasons that progress has been so meagre is that most technical cybersecurity solutions proposed to date have been point solutions that fail to address operational tradeoffs, implementation costs, and consequent adversary adaptations across the full spectrum of vulnerabilities. Furthermore, sound prescriptive security principles previously established, such as the Orange Book, have been difficult to apply given current system complexity and acquisition approaches. To address these issues, the authors have developed threat-based descriptive methodologies to more completely identify system vulnerabilities, to quantify the effectiveness of possible protections against those vulnerabilities, and to evaluate operational consequences and tradeoffs of possible protections. This article begins with a discussion of the tradeoffs among seemingly different system security properties such as confidentiality, integrity, and availability. We develop a quantitative framework for understanding these tradeoffs and the issues that arise when those security properties are all in play within an organization. Once security goals and candidate protections are identified, risk/benefit assessments can be performed using a novel multidisciplinary approach, called “QuERIES.” The article ends with a threat-driven quantitative methodology, called “The Three Tenets”, for identifying vulnerabilities and countermeasures in networked cyber-physical systems. The goal of this article is to offer operational guidance, based on the techniques presented here, for informed decision making about cyber-physical system security.

  16. Assessing natural resource use by forest-reliant communities in Madagascar using functional diversity and functional redundancy metrics.

    Directory of Open Access Journals (Sweden)

    Kerry A Brown

    Full Text Available Biodiversity plays an integral role in the livelihoods of subsistence-based forest-dwelling communities and as a consequence it is increasingly important to develop quantitative approaches that capture not only changes in taxonomic diversity, but also variation in natural resources and provisioning services. We apply a functional diversity metric originally developed for addressing questions in community ecology to assess the utilitarian diversity of 56 forest plots in Madagascar. The use categories for utilitarian plants were determined using expert knowledge and household questionnaires. We used a null model approach to examine the utilitarian (functional) diversity and utilitarian redundancy present within ecological communities. Additionally, variables that might influence fluctuations in utilitarian diversity and redundancy (specifically the number of felled trees, number of trails, basal area, canopy height, elevation, and distance from village) were analyzed using Generalized Linear Models (GLMs). Eighteen of the 56 plots showed utilitarian diversity values significantly higher than expected. This result indicates that these habitats exhibited a low degree of utilitarian redundancy and were therefore comprised of plants with relatively distinct utilitarian properties. One implication of this finding is that minor losses in species richness may result in reductions in utilitarian diversity and redundancy, which may limit local residents' ability to switch between alternative choices. The GLM analysis showed that the most predictive model included basal area, canopy height and distance from village, which suggests that variation in utilitarian redundancy may be a result of local residents harvesting resources from the protected area. Our approach permits an assessment of the diversity of provisioning services available to local communities, offering unique insights that would not be possible using traditional taxonomic diversity measures. These analyses

  17. Contribution to a quantitative assessment model for reliability-based metrics of electronic and programmable safety-related functions

    International Nuclear Information System (INIS)

    The use of fault-tolerant EP architectures has induced growing constraints, whose influence on reliability-based performance metrics is no longer negligible. To face the growing influence of simultaneous failures, this thesis proposes, for safety-related functions, a new assessment method for reliability, based on better accounting for temporal behavior. This report introduces the concept of information and uses it to interpret the failure modes of a safety-related function as the direct result of the initiation and propagation of erroneous information until the actuator level. The main idea is to distinguish the appearance and disappearance of erroneous states, which can be defined as intrinsically dependent on hardware characteristics and maintenance policies, from their possible activation, constrained through architectural choices, leading to the failure of the safety-related function. This approach is based at a low level on deterministic SED models of the architecture and uses non-homogeneous Markov chains to depict the time evolution of error probabilities. (author)

  18. A metric-based assessment of flood risk and vulnerability of rural communities in the Lower Shire Valley, Malawi

    Science.gov (United States)

    Adeloye, A. J.; Mwale, F. D.; Dulanya, Z.

    2015-06-01

    In response to the increasing frequency and economic damages of natural disasters globally, disaster risk management has evolved to incorporate risk assessments that are multi-dimensional, integrated and metric-based. This is to support knowledge-based decision making and hence sustainable risk reduction. In Malawi and most of Sub-Saharan Africa (SSA), however, flood risk studies remain focussed on understanding causation, impacts, perceptions and coping and adaptation measures. Using the IPCC Framework, this study has quantified and profiled risk to flooding of rural, subsistent communities in the Lower Shire Valley, Malawi. Flood risk was obtained by integrating hazard and vulnerability. Flood hazard was characterised in terms of flood depth and inundation area obtained through hydraulic modelling in the valley with Lisflood-FP, while the vulnerability was indexed through analysis of exposure, susceptibility and capacity that were linked to social, economic, environmental and physical perspectives. Data on these were collected through structured interviews of the communities. The implementation of the entire analysis within GIS enabled the visualisation of spatial variability in flood risk in the valley. The results show predominantly medium levels in hazardousness, vulnerability and risk. The vulnerability is dominated by a high to very high susceptibility. Economic and physical capacities tend to be predominantly low but social capacity is significantly high, resulting in overall medium levels of capacity-induced vulnerability. Exposure manifests as medium. The vulnerability and risk showed marginal spatial variability. The paper concludes with recommendations on how these outcomes could inform policy interventions in the Valley.
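
    The IPCC-style aggregation described above can be sketched as hazard multiplied by vulnerability, with vulnerability combining exposure, susceptibility and lack of capacity. The equal weights and linear aggregation below are illustrative assumptions, not the study's exact scheme.

    ```python
    # Illustrative only: all indices are assumed normalized to [0, 1].
    def vulnerability(exposure, susceptibility, capacity, weights=(1/3, 1/3, 1/3)):
        """Higher capacity lowers vulnerability (assumed linear aggregation)."""
        w_e, w_s, w_c = weights
        return w_e * exposure + w_s * susceptibility + w_c * (1.0 - capacity)

    def flood_risk(hazard, exposure, susceptibility, capacity):
        """Risk index in [0, 1]: hazard integrated with vulnerability."""
        return hazard * vulnerability(exposure, susceptibility, capacity)

    # Example: a community with medium hazard, high susceptibility, low capacity.
    print(round(flood_risk(0.5, 0.6, 0.8, 0.3), 3))
    ```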

  19. Image quality assessment and human visual system

    Science.gov (United States)

    Gao, Xinbo; Lu, Wen; Tao, Dacheng; Li, Xuelong

    2010-07-01

    This paper summarizes the state of the art of image quality assessment (IQA) and the human visual system (HVS). IQA provides an objective index or real value to measure the quality of a specified image. Since human beings are the ultimate receivers of visual information in practical applications, the most reliable IQA is to build a computational model to mimic the HVS. According to the properties and cognitive mechanisms of the HVS, the available HVS-based IQA methods can be divided into two categories, i.e., bionics methods and engineering methods. This paper briefly introduces the basic theories and development histories of these two kinds of HVS-based IQA methods. Finally, some promising research issues are pointed out at the end of the paper.

  20. Fingerprint Quality Assessment Combining Blind Image Quality, Texture and Minutiae Features

    OpenAIRE

    Z. Yao; Le Bars, Jean-Marie; Charrier, Christophe; Rosenberger, Christophe

    2015-01-01

    Biometric sample quality assessment approaches are generally designed in terms of the utility property, due to the potential difference between human perception of quality and the biometric quality requirements of a recognition system. This study proposes a utility-based quality assessment method for fingerprints by considering several complementary aspects: 1) image quality assessment without any reference, which is consistent with the human conception of inspecting quality, ...

  1. Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...

  2. Peer Review and Quality Assessment in Complete Denture Education.

    Science.gov (United States)

    Novetsky, Marvin; Razzoog, Michael E.

    1981-01-01

    A program in peer review and quality assessment at the University of Michigan denture department is described. The program exposes students to peer review in order to assess the quality of their treatment. (Author/MLW)

  3. Trajectory-Oriented Approach to Managing Traffic Complexity: Trajectory Flexibility Metrics and Algorithms and Preliminary Complexity Impact Assessment

    Science.gov (United States)

    Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek

    2009-01-01

    This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.

  4. Cyber threat metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Frye, Jason Neal; Veitch, Cynthia K.; Mateski, Mark Elliot; Michalski, John T.; Harris, James Mark; Trevino, Cassandra M.; Maruoka, Scott

    2012-03-01

    Threats are generally much easier to list than to describe, and much easier to describe than to measure. As a result, many organizations list threats. Fewer describe them in useful terms, and still fewer measure them in meaningful ways. This is particularly true in the dynamic and nebulous domain of cyber threats - a domain that tends to resist easy measurement and, in some cases, appears to defy any measurement. We believe the problem is tractable. In this report we describe threat metrics and models for characterizing threats consistently and unambiguously. The purpose of this report is to support the Operational Threat Assessment (OTA) phase of risk and vulnerability assessment. To this end, we focus on the task of characterizing cyber threats using consistent threat metrics and models. In particular, we address threat metrics and models for describing malicious cyber threats to US FCEB agencies and systems.

  5. Service Quality and Process Maturity Assessment

    Directory of Open Access Journals (Sweden)

    Serek Radomir

    2013-12-01

    Full Text Available This article deals with service quality and the methods for its measurement and improvement to reach so-called service excellence. Besides older methods such as SERVQUAL and SERVPERF, capability maturity models are briefly described, on the basis of which our own methodology was developed and used for process maturity assessment in organizations providing technical services. This method is likewise described and accompanied by examples in figures. The functionality of the method is verified by finding a correlation between service employee satisfaction and average process maturity in a service organization. The results seem quite promising and open an arena for further studies.

  6. Quality Assessment of Landsat Surface Reflectance Products Using MODIS Data

    Science.gov (United States)

    Feng, Min; Huang, Chengquan; Channan, Saurabh; Vermote, Eric; Masek, Jeffrey G.; Townshend, John R.

    2012-01-01

    Surface reflectance adjusted for atmospheric effects is a primary input for land cover change detection and for developing many higher level surface geophysical parameters. With the development of automated atmospheric correction algorithms, it is now feasible to produce large quantities of surface reflectance products using Landsat images. Validation of these products requires in situ measurements, which either do not exist or are difficult to obtain for most Landsat images. The surface reflectance products derived using data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), however, have been validated more comprehensively. Because the MODIS on the Terra platform and the Landsat 7 are only half an hour apart following the same orbit, and each of the 6 Landsat spectral bands overlaps with a MODIS band, good agreements between MODIS and Landsat surface reflectance values can be considered indicators of the reliability of the Landsat products, while disagreements may suggest potential quality problems that need to be further investigated. Here we develop a system called Landsat-MODIS Consistency Checking System (LMCCS). This system automatically matches Landsat data with MODIS observations acquired on the same date over the same locations and uses them to calculate a set of agreement metrics. To maximize its portability, Java and open-source libraries were used in developing this system, and object-oriented programming (OOP) principles were followed to make it more flexible for future expansion. As a highly automated system designed to run as a stand-alone package or as a component of other Landsat data processing systems, this system can be used to assess the quality of essentially every Landsat surface reflectance image where spatially and temporally matching MODIS data are available. The effectiveness of this system was demonstrated using it to assess preliminary surface reflectance products derived using the Global Land Survey (GLS) Landsat

  8. Video quality assessment using content-weighted spatial and temporal pooling method

    Science.gov (United States)

    Li, Chaofeng; Pan, Feng; Wu, Xiaojun; Ju, Yiwen; Yuan, Yun-Hao; Fang, Wei

    2015-09-01

    Video quality assessment plays an important role in video processing and communication applications. We propose a full-reference video quality metric combining a content-weighted spatial pooling strategy with a temporal pooling strategy. All pixels in a frame are classified into edge, texture, and smooth regions, and their structural similarity (SSIM) index maps are divided into increasing and saturated regions by the curve of their SSIM values; a content-weighting method is then applied to the increasing regions to obtain the score of an image frame. Finally, a temporal pooling method is used to obtain the overall video quality. Experimental results on the LIVE and IVP video quality databases show that our proposed method works well in matching subjective scores.
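
    A minimal sketch of the pooling idea under stated assumptions: per-region weighting of an SSIM map followed by temporal pooling that emphasizes the worst frames. The region weights and the percentile strategy are invented for illustration; the paper's increasing/saturated split of the SSIM curve is not reproduced.

    ```python
    import numpy as np

    def pooled_frame_score(ssim_map, region_map, weights=None):
        """region_map labels each pixel 'edge', 'texture' or 'smooth'."""
        weights = weights or {"edge": 0.5, "texture": 0.3, "smooth": 0.2}
        score, total_w = 0.0, 0.0
        for region, w in weights.items():
            mask = region_map == region
            if mask.any():
                score += w * ssim_map[mask].mean()
                total_w += w
        return score / total_w

    def video_score(frame_scores, percentile=10):
        """Temporal pooling: blend the mean with the worst-percentile score."""
        worst = np.percentile(frame_scores, percentile)
        return 0.5 * np.mean(frame_scores) + 0.5 * worst
    ```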

  9. Towards Web Documents Quality Assessment for Digital Humanities Scholars

    NARCIS (Netherlands)

    D. Ceolin; J. Noordegraaf; L. Aroyo; C. van Son

    2016-01-01

    We present a framework for assessing the quality of Web documents, and a baseline of three quality dimensions: trustworthiness, objectivity and basic scholarly quality. Assessing Web document quality is a "deep data" problem necessitating approaches to handle both data size and complexity.

  10. Institutional Quality Assessment of Higher Education: Dimensions, Criteria and Indicators

    Science.gov (United States)

    Savickiene, Izabela; Pukelis, Kestutis

    2004-01-01

    The article discusses dimensions and criteria, which are used to assess the quality of higher education in different countries. The paper presents dimensions and criteria that could be appropriate for assessment of the quality of higher education at Lithuanian universities. Quality dimensions, assessment criteria and indicators are defined and…

  11. Quadrupolar metrics

    CERN Document Server

    Quevedo, Hernando

    2016-01-01

    We review the problem of describing the gravitational field of compact stars in general relativity. We focus on the deviations from spherical symmetry which are expected to be due to rotation and to the natural deformations of mass distributions. We assume that the relativistic quadrupole moment takes into account these deviations, and consider the class of axisymmetric static and stationary quadrupolar metrics which satisfy Einstein's equations in empty space and in the presence of matter represented by a perfect fluid. We formulate the physical conditions that must be satisfied for a particular spacetime metric to describe the gravitational field of compact stars. We present a brief review of the main static and axisymmetric exact solutions of Einstein's vacuum equations, satisfying all the physical conditions. We discuss how to derive particular stationary and axisymmetric solutions with quadrupolar properties by using the solution generating techniques which correspond either to Lie symmetries or to Bäcklund…

  12. Metrication manual

    International Nuclear Information System (INIS)

    In April 1978 a meeting of senior metrication officers convened by the Commonwealth Science Council of the Commonwealth Secretariat, was held in London. The participants were drawn from Australia, Bangladesh, Britain, Canada, Ghana, Guyana, India, Jamaica, Papua New Guinea, Solomon Islands and Trinidad and Tobago. Among other things, the meeting resolved to develop a set of guidelines to assist countries to change to SI and to compile such guidelines in the form of a working manual

  13. Metrical Quantization

    OpenAIRE

    Klauder, John R.

    1998-01-01

    Canonical quantization may be approached from several different starting points. The usual approaches involve promotion of c-numbers to q-numbers, or path integral constructs, each of which generally succeeds only in Cartesian coordinates. All quantization schemes that lead to Hilbert space vectors and Weyl operators, even those that eschew Cartesian coordinates, implicitly contain a metric on a flat phase space. This feature is demonstrated by studying the classical and quantum “aggregati…”

  14. Learnometrics: Metrics for Learning Objects (Learnometrics: metrieken voor leerobjecten)

    OpenAIRE

    Ochoa, Xavier

    2008-01-01

    - Introduction - Quantitative Analysis of the Publication of Learning Objects - Quantitative Analysis of the Reuse of Learning Objects - Metadata Quality Metrics for Learning Objects - Relevance Ranking Metrics for Learning Objects - Metrics Service Architecture and Use Cases - Conclusions

  15. Efficient neural-network-based no-reference approach to an overall quality metric for JPEG and JPEG2000 compressed images

    OpenAIRE

    H. Liu; Redi, J.A.; Alers, H.; R. Zunino; Heynderickx, I.E.J.R.

    2011-01-01

    Reliably assessing overall quality of JPEG/JPEG2000 coded images without having the original image as a reference is still challenging, mainly due to our limited understanding of how humans combine the various perceived artifacts to an overall quality judgment. A known approach to avoid the explicit simulation of human assessment of overall quality is the use of a neural network. Neural network approaches usually start by selecting active features from a set of generic image characteristics, ...

  16. Assessing quality and total quality in economic higher education

    OpenAIRE

    Catalina Sitnikov

    2008-01-01

    Nowadays, there are countries, systems and cultures where the issue of quality management and all the items implied are firmly on the agenda for higher education institutions. Whether a result of a growing climate of increasing accountability or an expansion in the size and diversity of student populations, both quality assurance and quality enhancement are now considered essential components of any quality management programme.

  17. Survey and Assessment of Land Ecological Quality in Cixi City

    OpenAIRE

    LIU, JUNBAO; Chen, Zhiyuan; Pan, Weifeng; Xie, Shaojuan

    2013-01-01

    Soil, atmosphere, water and agricultural product quality together constitute land ecological quality. Through a pilot survey project of basic farmland quality, Cixi City carried out a high-precision soil geochemical survey and surveys of agricultural products, irrigation water and air quality, and established a land ecological quality evaluation model. Based on the evaluation of soil geochemical quality, we conducted a comprehensive quality assessment of atmosphere, water, agricultural pr...

  18. Quality assessment metrics for whole genome gene expression profiling of paraffin embedded samples

    OpenAIRE

    Mahoney, Douglas W.; Terry M. Therneau; Anderson, S. Keith; Jen, Jin; Kocher, Jean-Pierre A.; Reinholz, Monica M; Perez, Edith A.; Eckel-Passow, Jeanette E

    2013-01-01

    Background Formalin-fixed, paraffin-embedded tissues are most commonly used for routine pathology analysis and for long-term tissue preservation in the clinical setting. Many institutions have large archives of formalin-fixed, paraffin-embedded tissues that provide a unique opportunity for understanding genomic signatures of disease. However, genome-wide expression profiling of formalin-fixed, paraffin-embedded samples has been challenging due to RNA degradation. Because of the significant h...

  19. 2003 SNL ASCI applications software quality engineering assessment report.

    Energy Technology Data Exchange (ETDEWEB)

    Schofield, Joseph Richard, Jr.; Ellis, Molly A.; Williamson, Charles Michael; Bonano, Lora A.

    2004-02-01

    This document describes the 2003 SNL ASCI Software Quality Engineering (SQE) assessment of twenty ASCI application code teams and the results of that assessment. The purpose of this assessment was to determine code team compliance with the Sandia National Laboratories ASCI Applications Software Quality Engineering Practices, Version 2.0 as part of an overall program assessment.

  20. Toward a No-Reference Image Quality Assessment Using Statistics of Perceptual Color Descriptors.

    Science.gov (United States)

    Lee, Dohyoung; Plataniotis, Konstantinos N

    2016-08-01

    Analysis of the statistical properties of natural images has played a vital role in the design of no-reference (NR) image quality assessment (IQA) techniques. In this paper, we propose parametric models describing the general characteristics of chromatic data in natural images. They provide informative cues for quantifying visual discomfort caused by the presence of chromatic image distortions. The established models capture the correlation of chromatic data between spatially adjacent pixels by means of color invariance descriptors. The use of color invariance descriptors is inspired by their relevance to visual perception, since they provide less sensitive descriptions of image scenes against viewing geometry and illumination variations than luminances. In order to approximate the visual quality perception of chromatic distortions, we devise four parametric models derived from invariance descriptors representing independent aspects of color perception: 1) hue; 2) saturation; 3) opponent angle; and 4) spherical angle. The practical utility of the proposed models is examined by deploying them in our new general-purpose NR IQA metric. The metric initially estimates the parameters of the proposed chromatic models from an input image to constitute a collection of quality-aware features (QAF). Thereafter, a machine learning technique is applied to predict visual quality given a set of extracted QAFs. Experimentation performed on large-scale image databases demonstrates that the proposed metric correlates well with the provided subjective ratings of image quality over commonly encountered achromatic and chromatic distortions, indicating that it can be deployed on a wide variety of color image processing problems as a generalized IQA solution. PMID:27305678

  1. Engineering performance metrics

    Science.gov (United States)

    Delozier, R.; Snyder, N.

    1993-03-01

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved the normal systems design phases: conceptual design, detailed design, implementation, and integration. The lessons learned from this effort are explored in this paper, and may provide a starting point for other large engineering organizations seeking to institute a performance measurement system. To facilitate this effort, a team consisting of customers and Engineering staff members was chartered to assist in the development of the metrics system and to ensure that the needs and views of the customers were considered in the development of performance measurements. The development of a system of metrics is no different from the development of any other type of system: it includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  2. Performance assessment of geospatial simulation models of land-use change--a landscape metric-based approach.

    Science.gov (United States)

    Sakieh, Yousef; Salmanmahiny, Abdolrassoul

    2016-03-01

    Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index were employed. The model takes advantage of three decision rules including neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting indices demonstrated insignificant differences between spatial pattern of ground truth and simulated layers, there was a considerable inconsistency between simulation results and real dataset in terms of the remaining metrics. Specifically, simulation outputs were simplistic and the model tended to underestimate number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics employ the maximum potential of observed and simulated layers for a performance evaluation procedure, provide a basis for more robust interpretation of a calibration process, and also deepen modeler insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.
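
    Two of the configuration metrics named above can be sketched directly on a binary land-cover layer. The 4-connectivity and internal-edge conventions below are assumptions; dedicated tools such as FRAGSTATS define several variants of each metric.

    ```python
    import numpy as np
    from scipy import ndimage

    def number_of_patches(binary_layer):
        """Count 4-connected patches of the focal class (1 = developed)."""
        _, n_patches = ndimage.label(binary_layer)
        return n_patches

    def edge_density(binary_layer, cell_size=30.0):
        """Internal class-boundary length per unit landscape area (m per m^2)."""
        horiz = np.sum(binary_layer[:, 1:] != binary_layer[:, :-1])
        vert = np.sum(binary_layer[1:, :] != binary_layer[:-1, :])
        edge_length = (horiz + vert) * cell_size
        return edge_length / (binary_layer.size * cell_size ** 2)
    ```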

  3. Data Complexity Metrics for XML Web Services

    Directory of Open Access Journals (Sweden)

    MISRA, S.

    2009-06-01

    Full Text Available Web services based on eXtensible Markup Language (XML) technologies enable the integration of diverse IT processes and systems and have been gaining extraordinary acceptance, from basic to the most complicated business and scientific processes. Maintainability is one of the important factors that affect the quality of Web services, which can be seen as a kind of software project. The effective management of any type of software project requires modelling, measurement, and quantification. This study presents a metric for the assessment of the quality of Web services in terms of their maintainability. For this purpose we propose a data complexity metric that can be evaluated by analyzing the WSDL (Web Service Description Language) documents used for describing Web services.
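
    A hypothetical sketch of such a WSDL-based measure: counting messages, operations and declared parts in a service description as a simple complexity proxy. The weighting is an invented placeholder, not the metric defined in the paper.

    ```python
    import xml.etree.ElementTree as ET

    WSDL_NS = "{http://schemas.xmlsoap.org/wsdl/}"

    def wsdl_complexity(path):
        """Count structural elements of a WSDL 1.1 document."""
        root = ET.parse(path).getroot()
        messages = root.findall(f"{WSDL_NS}message")
        operations = root.findall(f".//{WSDL_NS}operation")
        parts = root.findall(f".//{WSDL_NS}part")
        # Assumed weighting: data-carrying parts dominate maintenance effort.
        return {"messages": len(messages), "operations": len(operations),
                "parts": len(parts),
                "score": len(messages) + len(operations) + 2 * len(parts)}
    ```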

  4. Metric dynamics

    CERN Document Server

    Siparov, S V

    2015-01-01

    The suggested approach makes it possible to produce a consistent description of motions of a physical system. It is shown that the concept of force fields defining the systems dynamics is equivalent to the choice of the corresponding metric of an anisotropic space, which is used for the modeling of physical reality and the processes that take place. The examples from hydrodynamics, electrodynamics, quantum mechanics and theory of gravitation are discussed. This approach makes it possible to get rid of some known paradoxes. It can be also used for the further development of the theory.

  5. Columbia River system operations - water quality assessment

    International Nuclear Information System (INIS)

    In mid-1990, the U.S. Army Corps of Engineers, U.S. Bureau of Reclamation, and Bonneville Power Administration embarked on a Columbia River system operation review (SOR). The goal of the SOR is to establish an updated operation strategy which best recognizes the various river uses as identified through community input. Ninety alternative operations of the Columbia and Snake River systems were proposed by various users. These users included the general public, irrigation and utility districts, as well as local, state and various Federal government agencies involved with specific water resource interests in the Columbia River basin. Ten technical work groups were formed to cover the spectrum of interest and to evaluate the alternative operations. Using simplified tools and risk-based analysis, each work group analyzed and then ranked the alternatives according to the effect on the work group's specific interest. The focus of the water quality technical work group is the impact assessment, on water quality and dissolved gas saturation, of the various operations proposed by special interests (i.e., hydropower, navigation, flood control, irrigation, recreation, cultural resources, wildlife, and anadromous and resident fisheries)

  6. Groundwater quality data from the National Water-Quality Assessment Project, May 2012 through December 2013

    Science.gov (United States)

    Arnold, Terri L.; DeSimone, Leslie A.; Bexfield, Laura M.; Lindsey, Bruce D.; Barlow, Jeannie R.; Kulongoski, Justin T.; Musgrove, Marylynn; Kingsbury, James A.; Belitz, Kenneth

    2016-06-20

    Groundwater-quality data were collected from 748 wells as part of the National Water-Quality Assessment Project of the U.S. Geological Survey National Water-Quality Program from May 2012 through December 2013. The data were collected from four types of well networks: principal aquifer study networks, which assess the quality of groundwater used for public water supply; land-use study networks, which assess land-use effects on shallow groundwater quality; major aquifer study networks, which assess the quality of groundwater used for domestic supply; and enhanced trends networks, which evaluate the time scales during which groundwater quality changes. Groundwater samples were analyzed for a large number of water-quality indicators and constituents, including major ions, nutrients, trace elements, volatile organic compounds, pesticides, and radionuclides. These groundwater quality data are tabulated in this report. Quality-control samples also were collected; data from blank and replicate quality-control samples are included in this report.

  7. Analysis of Temporal Effects in Quality Assessment of High Definition Video

    Directory of Open Access Journals (Sweden)

    M. Slanina

    2012-04-01

    Full Text Available The paper deals with the temporal properties of a scoring session when assessing the subjective quality of full HD video sequences using continuous video quality tests. The performed experiment uses a modification of the standard test methodology described in ITU-R Rec. BT.500. It focuses on reaction times and the time needed for user ratings to stabilize at the beginning of a video sequence. In order to compare the subjective scores with objective quality measures, we also provide an analysis of PSNR and VQM for the considered sequences, finding that the correlation of the objective metric results with user scores recorded during playback and after playback differs significantly.
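
    Agreement between an objective metric (such as PSNR or VQM) and subjective scores is typically quantified with linear and rank-order correlation, as sketched below; the sample values are invented placeholders, not data from the experiment.

    ```python
    import numpy as np
    from scipy import stats

    mos = np.array([4.2, 3.8, 3.1, 2.5, 2.0, 4.5])         # subjective scores
    psnr = np.array([38.1, 36.0, 33.2, 30.5, 28.9, 40.2])  # objective scores, dB

    pearson_r, _ = stats.pearsonr(psnr, mos)    # linear correlation
    spearman_r, _ = stats.spearmanr(psnr, mos)  # rank-order correlation
    print(f"Pearson: {pearson_r:.3f}, Spearman: {spearman_r:.3f}")
    ```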

  8. 42 CFR 493.1249 - Standard: Preanalytic systems quality assessment.

    Science.gov (United States)

    2010-10-01

    42 CFR, Public Health (2010-10-01), Department of Health and Human Services (continued), Standards and Certification, Laboratory Requirements, Quality System for Nonwaived Testing, Preanalytic Systems, § 493.1249 Standard: Preanalytic systems quality assessment. (a)...

  9. 42 CFR 493.1299 - Standard: Postanalytic systems quality assessment.

    Science.gov (United States)

    2010-10-01

    42 CFR, Public Health (2010-10-01), Department of Health and Human Services (continued), Standards and Certification, Laboratory Requirements, Quality System for Nonwaived Testing, Postanalytic Systems, § 493.1299 Standard: Postanalytic systems quality assessment. (a)...

  10. 42 CFR 493.1289 - Standard: Analytic systems quality assessment.

    Science.gov (United States)

    2010-10-01

    42 CFR, Public Health (2010-10-01), Department of Health and Human Services (continued), Standards and Certification, Laboratory Requirements, Quality System for Nonwaived Testing, Analytic Systems, § 493.1289 Standard: Analytic systems quality assessment. (a)...

  11. Quality Assessment of Compressed Video for Automatic License Plate Recognition

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Støttrup-Andersen, Jesper; Forchhammer, Søren;

    2014-01-01

    Definition of video quality requirements for video surveillance poses new questions in the area of quality assessment. This paper presents a quality assessment experiment for an automatic license plate recognition scenario. We explore the influence of the compression by H.264/AVC and H.265/HEVC...

  13. Measuring and Assessing the Quality and Usefulness of Accounting Information

    OpenAIRE

    Gergana Tsoncheva

    2014-01-01

    High quality accounting information is of key importance for a large number of users, as it influences the quality of the decisions made. Providing high quality and useful accounting information is a prerequisite for the efficiency of the enterprise. Usefulness is determined by the quality of accounting information. Measuring and assessing the quality and usefulness of accounting information are of particular importance, as these activities will not only enhance the quality of economic decisi...

  14. Beef quality assessed at European research centres.

    Science.gov (United States)

    Dransfield, E; Nute, G R; Roberts, T A; Boccard, R; Touraille, C; Buchter, L; Casteels, M; Cosentino, E; Hood, D E; Joseph, R L; Schon, I; Paardekooper, E J

    1984-01-01

    Loin steaks and cubes of M. semimembranosus from eight (12 month old) Galloway steers and eight (16-18 month old) Charolais cross steers raised in England and from which the meat was conditioned for 2 or 10 days, were assessed in research centres in Belgium, Denmark, England, France, the Federal Republic of Germany, Ireland, Italy and the Netherlands. Laboratory panels assessed meat by grilling the steaks and cooking the cubes in casseroles according to local custom using scales developed locally and by scales used frequently at other research centres. The meat was mostly of good quality but with sufficient variation to obtain meaningful comparisons. Tenderness and juiciness were assessed most, and flavour least, consistently. Over the 32 meats, acceptability of steaks and casseroles was in general compounded from tenderness, juiciness and flavour. However, when the meat was tough, it dominated the overall judgement; but when tender, flavour played an important rôle. Irish and English panels tended to weight more on flavour and Italian panels on tenderness and juiciness. Juciness and tenderness were well correlated among all panels except in Italy and Germany. With flavour, however, Belgian, Irish, German and Dutch panels ranked the meats similarly and formed a group distinct from the others which did not. The panels showed a similar grouping for judgements of acceptability. French and Belgian panels judged the steaks from the older Charolais cross steers to have more flavour and be more juicy than average and tended to prefer them. Casseroles from younger steers were invariably preferred although the French and Belgian panels judged aged meat from older animals equally acceptable. These regional biases were thought to be derived mainly from differences in cooking, but variations in experience and perception of assessors also contributed. PMID:22055992

  15. Audiovisual quality assessment and prediction for videotelephony

    CERN Document Server

    Belmudez, Benjamin

    2015-01-01

    The work presented in this book focuses on modeling audiovisual quality as perceived by the users of IP-based solutions for video communication like videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects related to the quality perception of entire video calls, namely, the quality estimation of the single audio and video modalities in an interactive context, the audiovisual quality integration of these modalities and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.

  16. Lupus anticoagulant : case-based external quality assessment

    NARCIS (Netherlands)

    van den Besselaar, A. M. H. P.; Devreese, K. M. J.; de Groot, P. G.; Castel, A.

    2009-01-01

    Aims: A model for presenting case histories with quality assessment material is to be developed for the Dutch external quality assessment (EQA) scheme for blood coagulation testing. The purpose of the present study was to assess the performance of clinical laboratories in case-based EQA using the cas…

  17. Comparing subjective and objective quality assessment of HDR images compressed with JPEG-XT

    DEFF Research Database (Denmark)

    Mantel, Claire; Ferchiu, Stefan Catalin; Forchhammer, Søren

    2014-01-01

    In this paper a subjective test in which participants evaluate the quality of JPEG-XT compressed HDR images is presented. Results show that, for the selected test images and display, the subjective quality reached its saturation point starting around 3 bpp. Objective evaluations are obtained … the best performance, with the limitation that it does not capture the quality saturation. The usage of gamma correction prior to applying metrics depends on the characteristics of each objective metric.

  19. Food quality assessment by NIR hyperspectral imaging

    Science.gov (United States)

    Whitworth, Martin B.; Millar, Samuel J.; Chau, Astor

    2010-04-01

    Near infrared reflectance (NIR) spectroscopy is well established in the food industry for rapid compositional analysis of bulk samples. NIR hyperspectral imaging provides new opportunities to measure the spatial distribution of components such as moisture and fat, and to identify and measure specific regions of composite samples. An NIR hyperspectral imaging system has been constructed for food research applications, incorporating a SWIR camera with a cooled 14 bit HgCdTe detector and N25E spectrograph (Specim Ltd, Finland). Samples are scanned in a pushbroom mode using a motorised stage. The system has a spectral resolution of 256 pixels covering a range of 970-2500 nm and a spatial resolution of 320 pixels covering a swathe adjustable from 8 to 300 mm. Images are acquired at a rate of up to 100 lines s-1, enabling samples to be scanned within a few seconds. Data are captured using SpectralCube software (Specim) and analysed using ENVI and IDL (ITT Visual Information Solutions). Several food applications are presented. The strength of individual absorbance bands enables the distribution of particular components to be assessed. Examples are shown for detection of added gluten in wheat flour and to study the effect of processing conditions on fat distribution in chips/French fries. More detailed quantitative calibrations have been developed to study evolution of the moisture distribution in baguettes during storage at different humidities, to assess freshness of fish using measurements of whole cod and fillets, and for prediction of beef quality by identification and separate measurement of lean and fat regions.
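
    A minimal sketch of working with such a hypercube: selecting the band nearest a target wavelength and converting reflectance to apparent absorbance as a moisture proxy. The wavelength grid and the random stand-in cube are assumptions; a real workflow would read calibrated wavelengths from the instrument's header files.

    ```python
    import numpy as np

    wavelengths = np.linspace(970, 2500, 256)   # nm, assumed uniform grid
    cube = np.random.rand(320, 320, 256)        # stand-in reflectance cube

    def band_image(cube, wavelengths, target_nm):
        """Extract the spatial image of the band closest to target_nm."""
        idx = int(np.argmin(np.abs(wavelengths - target_nm)))
        return cube[:, :, idx]

    # Water absorbs strongly near 1450 nm, so apparent absorbance there is
    # a common proxy for moisture distribution.
    reflectance = np.clip(band_image(cube, wavelengths, 1450), 1e-6, None)
    moisture_proxy = -np.log10(reflectance)
    ```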

  20. Assessing the effects of sampling design on water quality status classification

    Science.gov (United States)

    Lloyd, Charlotte; Freer, Jim; Johnes, Penny; Collins, Adrian

    2013-04-01

    The Water Framework Directive (WFD) requires continued reporting of the water quality status of all European waterbodies, with this status partly determined by the time a waterbody exceeds different pollution concentration thresholds. Routine water quality monitoring most commonly takes place at weekly to monthly time steps, meaning that potentially important pollution events can be missed. This has the potential to result in the misclassification of water quality status. Against this context, this paper investigates the implications of sampling design for a range of existing water quality status metrics routinely applied in WFD compliance assessments. Previous research has investigated the effect of sampling design on the calculation of annual nutrient and sediment loads using a variety of different interpolation and extrapolation models. This work builds on that foundation, extending the analysis to include the effects of sampling regime on flow- and concentration-duration curves as well as threshold-exceedance statistics, which form an essential part of WFD reporting. The effects of sampling regime on both the magnitude of the summary metrics and their corresponding uncertainties are investigated. This analysis is being undertaken on data collected as part of the Hampshire Avon Demonstration Test Catchment (DTC) project, a DEFRA-funded initiative investigating cost-effective solutions for reducing diffuse pollution from agriculture. The DTC monitoring platform is collecting water quality data at a variety of temporal resolutions and using differing collection methods, including weekly grab samples, daily ISCO autosamples and high-resolution samples (15-30 min time step) using analysers in situ on the river bank. Datasets collected during 2011-2013 were used to construct flow- and concentration-duration curves. A bootstrapping methodology was employed to resample randomly the individual datasets and produce distributions of the curves in order to quantify the
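
    The bootstrapping idea can be sketched as resampling a sparse concentration record to place uncertainty bounds on a threshold-exceedance statistic; the data and threshold below are placeholders, not DTC measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    concentrations = rng.lognormal(mean=0.0, sigma=0.8, size=52)  # weekly grabs
    threshold = 2.0                                               # e.g. mg/l

    def exceedance_fraction(sample, threshold):
        """Fraction of samples above the pollution threshold."""
        return np.mean(sample > threshold)

    boot = [exceedance_fraction(
                rng.choice(concentrations, size=concentrations.size, replace=True),
                threshold)
            for _ in range(2000)]
    low, high = np.percentile(boot, [2.5, 97.5])
    print(f"exceedance {exceedance_fraction(concentrations, threshold):.1%} "
          f"(95% CI {low:.1%}-{high:.1%})")
    ```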

  1. The quality of assessment visits in community nursing.

    OpenAIRE

    Kerkstra, A.; Beemster, F.

    1994-01-01

    The aim of this study was the measurement of the quality of assessment visits of community nurses in The Netherlands. Process criteria were derived for the quality of the assessment visits from the quality standards of community nursing care established by Appelman et al. Over a period of 8 weeks, a representative sample of 108 community nurses and 49 community nursing auxiliaries at 47 different locations paid a total number of 433 assessment visits. The nursing activities were recorded for ...

  2. Blind image quality assessment using statistical independence in the divisive normalization transform domain

    Science.gov (United States)

    Chu, Ying; Mou, Xuanqin; Fu, Hong; Ji, Zhen

    2015-11-01

    We present a general purpose blind image quality assessment (IQA) method using the statistical independence hidden in the joint distributions of divisive normalization transform (DNT) representations for natural images. The DNT simulates the redundancy reduction process of the human visual system and has good statistical independence for natural undistorted images; meanwhile, this statistical independence changes as the images suffer from distortion. Inspired by this, we investigate the changes in statistical independence between neighboring DNT outputs across the space and scale for distorted images and propose an independence uncertainty index as a blind IQA (BIQA) feature to measure the image changes. The extracted features are then fed into a regression model to predict the image quality. The proposed BIQA metric is called statistical independence (STAIND). We evaluated STAIND on five public databases: LIVE, CSIQ, TID2013, IRCCyN/IVC Art IQA, and intentionally blurred background images. The performances are relatively high for both single- and cross-database experiments. When compared with the state-of-the-art BIQA algorithms, as well as representative full-reference IQA metrics, such as SSIM, STAIND shows fairly good performance in terms of quality prediction accuracy, stability, robustness, and computational costs.
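
    The divisive normalization step underlying the method can be sketched as each coefficient divided by the pooled energy of its neighborhood. The window size and stabilizing constant are assumed values, and STAIND's joint-statistics features built on top of the DNT are not shown.

    ```python
    import numpy as np
    from scipy import ndimage

    def divisive_normalization(coeffs, window=3, c=1e-3):
        """Normalize 2-D subband coefficients by local pooled energy."""
        local_energy = ndimage.uniform_filter(coeffs ** 2, size=window)
        return coeffs / np.sqrt(c + local_energy)
    ```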

  3. Objective assessment of the impact of frame rate on video quality

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Korhonen, Jari; Forchhammer, Søren

    2012-01-01

    In this paper, we present a novel objective quality metric that takes the impact of frame rate into account. The proposed metric uses PSNR, frame rate and a content dependent parameter that can easily be obtained from spatial and temporal activity indices. The results have been validated on data ...

  4. Attention modeling for video quality assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2010-01-01

    averaged spatiotemporal pooling. The local quality is derived from visual attention modeling and quality variations over frames. Saliency, motion, and contrast information are taken into account in modeling visual attention, which is then integrated into IQMs to calculate the local quality of a video frame...

  5. Water depletion: An improved metric for incorporating seasonal and dry-year water scarcity into water risk assessments

    OpenAIRE

    Kate A. Brauman; Brian D. Richter; Sandra Postel; Marcus Malsy; Martina Flörke

    2016-01-01

    We present an improved water-scarcity metric we call water depletion, calculated as the fraction of renewable water consumptively used for human activities. We employ new data from the WaterGAP3 integrated global water resources model to illustrate water depletion for 15,091 watersheds worldwide, constituting 90% of total land area. Our analysis illustrates that moderate water depletion at an annual time scale is better characterized as high depletion at a monthly time scale and we a...

  6. Water depletion: An improved metric for incorporating seasonal and dry-year water scarcity into water risk assessments

    Directory of Open Access Journals (Sweden)

    Kate A. Brauman

    2016-01-01

    We present an improved water-scarcity metric we call water depletion, calculated as the fraction of renewable water consumptively used for human activities. We employ new data from the WaterGAP3 integrated global water resources model to illustrate water depletion for 15,091 watersheds worldwide, constituting 90% of total land area. Our analysis illustrates that moderate water depletion at an annual time scale is better characterized as high depletion at a monthly time scale and we are thus able to integrate seasonal and dry-year depletion into the water depletion metric, providing a more accurate depiction of water shortage that could affect irrigated agriculture, urban water supply, and freshwater ecosystems. Applying the metric, we find that the 2% of watersheds that are more than 75% depleted on an average annual basis are home to 15% of global irrigated area and 4% of large cities. An additional 30% of watersheds are depleted by more than 75% seasonally or in dry years. In total, 71% of world irrigated area and 47% of large cities are characterized as experiencing at least periodic water shortage.
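
    The depletion calculation itself is a simple ratio, which the sketch below applies at annual and monthly scales. A minimal sketch: the function and all volumes are invented for illustration and are not taken from the WaterGAP3 model.

```python
def water_depletion(consumptive_use, renewable_supply):
    """Fraction of renewable water consumptively used, capped at 1."""
    if renewable_supply <= 0:
        raise ValueError("renewable supply must be positive")
    return min(consumptive_use / renewable_supply, 1.0)

# Illustrative watershed (volumes in km3): moderate annual depletion can
# hide high depletion in the driest month
annual = water_depletion(7.5, 20.0)   # ~38% depleted on an annual basis
july = water_depletion(1.5, 1.8)      # ~83% depleted in July
print(f"annual: {annual:.0%}, July: {july:.0%}")
```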

  7. Integration of MODIS-derived metrics to assess interannual variability in snowpack, lake ice, and NDVI in southwest Alaska

    Science.gov (United States)

    Reed, B.; Budde, M.; Spencer, P.; Miller, A.E.

    2009-01-01

    Impacts of global climate change are expected to result in greater variation in the seasonality of snowpack, lake ice, and vegetation dynamics in southwest Alaska. All have wide-reaching physical and biological ecosystem effects in the region. We used Moderate Resolution Imaging Spectroradiometer (MODIS) calibrated radiance, snow cover extent, and vegetation index products for interpreting interannual variation in the duration and extent of snowpack, lake ice, and vegetation dynamics for southwest Alaska. The approach integrates multiple seasonal metrics across large ecological regions. Throughout the observation period (2001-2007), snow cover duration was stable within ecoregions, with variable start and end dates. The start of the lake ice season lagged the snow season by 2 to 3 months. Within a given lake, freeze-up dates varied in timing and duration, while break-up dates were more consistent. Vegetation phenology varied less than snow and ice metrics, with start-of-season dates comparatively consistent across years. The start of growing season and snow melt were related to one another as they are both temperature dependent. Higher than average temperatures during the El Niño winter of 2002-2003 were expressed in anomalous ice and snow season patterns. We are developing a consistent, MODIS-based dataset that will be used to monitor temporal trends of each of these seasonal metrics and to map areas of change for the study area.

  8. MODERN PRINCIPLES OF QUALITY ASSESSMENT OF CARDIOVASCULAR DISEASES TREATMENT

    Directory of Open Access Journals (Sweden)

    A. Yu. Suvorov

    2015-09-01

    The most common ways of assessing the treatment of cardiovascular diseases abroad, and approaches to creating such assessment methods, are considered, as well as data on the principles of treatment assessment in Russia. Some foreign registries of acute myocardial infarction, whose aim was therapy quality assessment, are given as examples. The problem of high-quality treatment based on data from evidence-based medicine and some legal aspects related to clinical guidelines in Russia are considered, as well as various ways of assessing treatment quality.

  9. Ljubljana quality selection (LQS) - innovative case of restaurant assessment system

    OpenAIRE

    Maja Uran Maravić; Daniela Gračan; Zrinka Zadel

    2014-01-01

    Purpose – This paper briefly presents the most well-known restaurant assessment systems in which restaurants are assessed by experts. The aim is to highlight the strengths and weaknesses of each system. Design – The special focus is on answering these questions: how are the restaurants assessed by experts, which are the elements and standards of assessment, and are they consistent with the quality dimensions advocated in the theory of service quality. Methodology ...

  10. Quality Index of Subtidal Macroalgae (QISubMac): A suitable tool for ecological quality status assessment under the scope of the European Water Framework Directive.

    Science.gov (United States)

    Le Gal, A; Derrien-Courtel, S

    2015-12-15

    Despite their representativeness and importance in coastal waters, subtidal rocky bottom habitats have been under-studied. This has resulted in a lack of available indicators for subtidal hard substrate communities. However, a few indicators using subtidal macroalgae have been developed in recent years for the purpose of being implemented into the Water Framework Directive (WFD). Accordingly, a quality index of subtidal macroalgae has been defined as a French assessment tool for subtidal rocky bottom habitats in coastal waters. This approach is based on 14 metrics that consider the depth penetration, composition (sensitive, characteristic and opportunistic) and biodiversity of macroalgae assemblages and complies with WFD requirements. Three ecoregions have been defined to fit with the geographical distribution of macroalgae along the French coastline. As a test, QISubMac was used to assess the water quality of 20 water bodies. The results show that QISubMac may discriminate among different quality classes of water bodies. PMID:26555795

  11. Survey and Assessment of Land Ecological Quality in Cixi City

    Institute of Scientific and Technical Information of China (English)

    Junbao LIU; Zhiyuan CHEN; Weifeng PAN; Shaojuan XIE

    2013-01-01

    Soil, atmosphere, water and the quality of agricultural products constitute land ecological quality. Through a survey pilot project on basic farmland quality, Cixi City carried out a high-precision soil geochemical survey and surveys of agricultural products, irrigation water and air quality, and established an ecological quality evaluation model for land. Based on the evaluation of soil geochemical quality, we conducted a comprehensive quality assessment of the atmosphere, water and agricultural products, and assessed the ecological quality of agricultural land in Cixi City. The evaluation results show that the ecological quality of most agricultural land in Cixi City is excellent, with ecological risk only in some local areas such as the urban periphery. The results provide a demonstration and basis for the fine management of basic farmland and ecological protection.

  12. Quality Assessment and Economic Sustainability of Translation

    OpenAIRE

    Muzii, Luigi

    2006-01-01

    The concept of quality is mature and widespread. However, its associated attributes can only be measured against a set of specifications, since quality itself is a relative concept. Today, the concept of quality broadly corresponds to product suitability – meaning that the product meets the user’s requirements. But then, how does one know when a translation is good? No answer can be given to this very simple question without recourse to translation criticism and the theory of t...

  13. Water Quality Assessment of Porsuk River, Turkey

    OpenAIRE

    Suheyla Yerel

    2010-01-01

    The surface water quality of Porsuk River in Turkey was evaluated using multivariate statistical techniques including principal component analysis, factor analysis and cluster analysis. When principal component analysis and factor analysis were applied to the surface water quality data obtained from the eleven different observation stations, three factors were determined, which were responsible for 66.88% of the total variance of the surface water quality in Porsuk River. Cluster analysis...

  14. Fingerprint Quality Assessment With Multiple Segmentation

    OpenAIRE

    Z. Yao; Le Bars, Jean-Marie; Charrier, C.; Rosenberger, Christophe

    2015-01-01

    Image quality is an important factor for automated fingerprint identification systems (AFIS) because matching performance can be significantly affected by poor quality samples. Most existing studies have focused on calculating a quality index represented by either a single feature or a combination of multiple features, and some others achieve this purpose with learning approaches which may depend on prior knowledge of matching performance. In this pa...

  15. A Review of Quality Measures for Assessing the Impact of Antimicrobial Stewardship Programs in Hospitals.

    Science.gov (United States)

    Akpan, Mary Richard; Ahmad, Raheelah; Shebl, Nada Atef; Ashiru-Oredope, Diane

    2016-01-01

    The growing problem of antimicrobial resistance (AMR) has led to calls for antimicrobial stewardship programs (ASP) to control antibiotic use in healthcare settings. Key strategies include prospective audit with feedback and intervention, and formulary restriction and preauthorization. Education, guidelines, clinical pathways, de-escalation, and intravenous to oral conversion are also part of some programs. Impact and quality of ASP can be assessed using process or outcome measures. Outcome measures are categorized as microbiological, patient or financial outcomes. The objective of this review was to provide an overview of quality measures for assessing ASP and the reported impact of ASP in peer-reviewed studies, focusing particularly on patient outcomes. A literature search of papers published in English between 1990 and June 2015 was conducted in five databases using a combination of search terms. Primary studies of any design were included. A total of 63 studies were included in this review. Four studies defined quality metrics for evaluating ASP. Twenty-one studies assessed the impact of ASP on antimicrobial utilization and cost, 25 studies evaluated impact on resistance patterns and/or rate of Clostridium difficile infection (CDI). Thirteen studies assessed impact on patient outcomes including mortality, length of stay (LOS) and readmission rates. Six of these 13 studies reported non-significant difference in mortality between pre- and post-ASP intervention, and five reported reductions in mortality rate. On LOS, six studies reported shorter LOS post intervention; a significant reduction was reported in one of these studies. Of note, this latter study reported significantly (p < 0.001) higher unplanned readmissions related to infections post-ASP. Patient outcomes need to be a key component of ASP evaluation. The choice of metrics is influenced by data and resource availability. Controlling for confounders must be considered in the design of evaluation studies

  16. A Review of Quality Measures for Assessing the Impact of Antimicrobial Stewardship Programs in Hospitals

    Directory of Open Access Journals (Sweden)

    Mary Richard Akpan

    2016-01-01

    The growing problem of antimicrobial resistance (AMR) has led to calls for antimicrobial stewardship programs (ASP) to control antibiotic use in healthcare settings. Key strategies include prospective audit with feedback and intervention, and formulary restriction and preauthorization. Education, guidelines, clinical pathways, de-escalation, and intravenous to oral conversion are also part of some programs. Impact and quality of ASP can be assessed using process or outcome measures. Outcome measures are categorized as microbiological, patient or financial outcomes. The objective of this review was to provide an overview of quality measures for assessing ASP and the reported impact of ASP in peer-reviewed studies, focusing particularly on patient outcomes. A literature search of papers published in English between 1990 and June 2015 was conducted in five databases using a combination of search terms. Primary studies of any design were included. A total of 63 studies were included in this review. Four studies defined quality metrics for evaluating ASP. Twenty-one studies assessed the impact of ASP on antimicrobial utilization and cost, 25 studies evaluated impact on resistance patterns and/or rate of Clostridium difficile infection (CDI). Thirteen studies assessed impact on patient outcomes including mortality, length of stay (LOS) and readmission rates. Six of these 13 studies reported non-significant difference in mortality between pre- and post-ASP intervention, and five reported reductions in mortality rate. On LOS, six studies reported shorter LOS post intervention; a significant reduction was reported in one of these studies. Of note, this latter study reported significantly (p < 0.001) higher unplanned readmissions related to infections post-ASP. Patient outcomes need to be a key component of ASP evaluation. The choice of metrics is influenced by data and resource availability. Controlling for confounders must be considered in the design of

  17. The use of the kurtosis metric in the evaluation of occupational hearing loss in workers in China: Implications for hearing risk assessment

    Directory of Open Access Journals (Sweden)

    Robert I Davis

    2012-01-01

    This study examined: (1) the value of using the statistical metric kurtosis [β(t)], along with an energy metric, to determine the hazard to hearing from high-level industrial noise environments, and (2) the accuracy of the International Standard Organization (ISO 1999:1990) model for median noise-induced permanent threshold shift (NIPTS) estimates with actual recent epidemiological data obtained on 240 highly screened workers exposed to high-level industrial noise in China. A cross-sectional approach was used in this study. Shift-long temporal waveforms of the noise that workers were exposed to, for evaluation of noise exposures, and audiometric threshold measures were obtained for all selected subjects. The subjects were exposed to only one occupational noise exposure without the use of hearing protection devices. The results suggest that: (1) the kurtosis metric is an important variable in determining the hazards to hearing posed by a high-level industrial noise environment for hearing conservation purposes, i.e., the kurtosis differentiated between the hazardous effects produced by Gaussian and non-Gaussian noise environments; (2) the ISO-1999 predictive model does not accurately estimate the degree of median NIPTS incurred from high-level, high-kurtosis industrial noise; and (3) the inherent large variability in NIPTS among subjects emphasizes the need to develop and analyze a larger database of workers with well-documented exposures to better understand the effect of kurtosis on NIPTS incurred from high-level industrial noise exposures. A better understanding of the role of the kurtosis metric may lead to its incorporation into a new generation of more predictive hearing risk assessments for occupational noise exposure.
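
    The kurtosis statistic β(t) referred to here is the standard (Pearson) kurtosis of the noise pressure waveform, which equals about 3 for a Gaussian signal and rises for impulsive noise. A minimal sketch, assuming synthetic signals in place of the shift-long recordings used in the study:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
fs = 48_000                      # sampling rate (Hz)
n = fs * 60                      # one minute of signal

# Gaussian noise vs. the same noise with superimposed impulsive bursts
gaussian = rng.normal(0.0, 1.0, n)
impulsive = gaussian.copy()
for start in rng.choice(n - 100, size=200, replace=False):
    impulsive[start:start + 100] += rng.normal(0.0, 8.0, 100)

# Pearson kurtosis beta(t): about 3 for Gaussian, larger for impulsive noise
print("beta, Gaussian noise:  %.2f" % kurtosis(gaussian, fisher=False))
print("beta, impulsive noise: %.2f" % kurtosis(impulsive, fisher=False))
```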

  18. Crowdsourcing based subjective quality assessment of adaptive video streaming

    DEFF Research Database (Denmark)

    Shahid, M.; Søgaard, Jacob; Pokhrel, J.;

    2014-01-01

    ... humans are considered to be the most valid method of the assessment of QoE. Besides lab-based subjective experiments, crowdsourcing-based subjective assessment of video quality is gaining popularity as an alternative method. This paper presents insights into a study that investigates perceptual preferences of various adaptive video streaming scenarios through crowdsourcing-based subjective quality assessment.

  19. Quality assessments for cancer centers in the European Union

    NARCIS (Netherlands)

    Wind, A.; Rajan, A.; Harten, van W.H.

    2016-01-01

    Background Cancer centers are pressured to deliver high-quality services that can be measured and improved, which has led to an increase of assessments in many countries. A critical area of quality improvement is to improve patient outcome. An overview of existing assessments can help stakeholders

  20. Development and Validation of Assessing Quality Teaching Rubrics

    Science.gov (United States)

    Chen, Weiyun; Mason, Steve; Hammond-Bennett, Austin; Zlamout, Sandy

    2014-01-01

    Purpose: This study aimed at examining the psychometric properties of the Assessing Quality Teaching Rubric (AQTR) that was designed to assess in-service teachers' quality levels of teaching practices in daily lessons. Methods: 45 physical education lessons taught by nine physical education teachers to students in grades K-5 were videotaped. They…

  1. Quality Assessment of Internationalised Studies: Theory and Practice

    Science.gov (United States)

    Juknyte-Petreikiene, Inga

    2013-01-01

    The article reviews forms of higher education internationalisation at an institutional level. The relevance of theoretical background of internationalised study quality assessment is highlighted and definitions of internationalised studies quality are presented. Existing methods of assessment of higher education internationalisation are criticised…

  2. Academics' Perceptions on the Purposes of Quality Assessment

    Science.gov (United States)

    Rosa, Maria J.; Sarrico, Claudia S.; Amaral, Alberto

    2012-01-01

    The accountability versus improvement debate is an old one. Although being traditionally considered dichotomous purposes of higher education quality assessment, some authors defend the need of balancing both in quality assessment systems. This article goes a step further and contends that not only they should be balanced but also that other…

  3. Higher Education Quality Assessment in China: An Impact Study

    Science.gov (United States)

    Liu, Shuiyun

    2015-01-01

    This research analyses an external higher education quality assessment scheme in China, namely, the Quality Assessment of Undergraduate Education (QAUE) scheme. Case studies were conducted in three Chinese universities with different statuses. Analysis shows that the evaluated institutions responded to the external requirements of the QAUE…

  4. Computing and Interpreting Fisher Information as a Metric of Sustainability: Regime Changes in the United States Air Quality

    Science.gov (United States)

    As a key tool in information theory, Fisher Information has been used to explore the observable behavior of a variety of systems. In particular, recent work has demonstrated its ability to assess the dynamic order of real and model systems. However, in order to solidify the use o...

  5. Real Time Face Quality Assessment for Face Log Generation

    DEFF Research Database (Denmark)

    Kamal, Nasrollahi; Moeslund, Thomas B.

    2009-01-01

    Summarizing a long surveillance video to just a few best quality face images of each subject, a face log, is of great importance in surveillance systems. Face quality assessment is the backbone for face log generation, and improving the quality assessment makes the face logs more reliable. Developing a real-time face quality assessment system using the most important facial features and employing it for face log generation are the concerns of this paper. Extensive tests using four databases are carried out to validate the usability of the system.

  6. Service Quality and Customer Satisfaction: An Assessment and Future Directions.

    Science.gov (United States)

    Hernon, Peter; Nitecki, Danuta A.; Altman, Ellen

    1999-01-01

    Reviews the literature of library and information science to examine issues related to service quality and customer satisfaction in academic libraries. Discusses assessment, the application of a business model to higher education, a multiple constituency approach, decision areas regarding service quality, resistance to service quality, and future…

  7. Quality Assurance of Assessment and Moderation Discourses Involving Sessional Staff

    Science.gov (United States)

    Grainger, Peter; Adie, Lenore; Weir, Katie

    2016-01-01

    Quality assurance is a major agenda in tertiary education. The casualisation of academic work, especially in teaching, is also a quality assurance issue. Casual or sessional staff members teach and assess more than 50% of all university courses in Australia, and yet the research in relation to the role sessional staff play in quality assurance of…

  8. Assessment of the Quality Management Models in Higher Education

    Science.gov (United States)

    Basar, Gulsun; Altinay, Zehra; Dagli, Gokmen; Altinay, Fahriye

    2016-01-01

    This study involves the assessment of the quality management models in Higher Education by explaining the importance of quality in higher education and by examining the higher education quality assurance system practices in other countries. The qualitative study was carried out with the members of the Higher Education Planning, Evaluation,…

  9. Food quality assessment in parent-offspring dyads

    DEFF Research Database (Denmark)

    Bech-Larsen, Tino; Jensen, Birger Boutrup

    When the buyer and the consumer of a food product are not identical, the risk of discrepancies between food quality expectations and experiences is even higher. We introduce the concept of dyadic quality assessment and apply it to an exploration of parents' willingness to pay for new and healthier in-between meals for their children. Results show poor congruence between parent and child quality assessment due to the two parties emphasising quite different quality aspects. Improved parental knowledge of their children's quality experience, however, has a significant effect on parents' willingness to pay.

  10. SOIL QUALITY ASSESSMENT USING FUZZY MODELING

    Science.gov (United States)

    Maintaining soil productivity is essential if agriculture production systems are to be sustainable, thus soil quality is an essential issue. However, there is a paucity of tools for measuring changes in soil quality. Here the possibility of using fuzzy modeling t...

  11. Assessing water quality in Lake Naivasha

    NARCIS (Netherlands)

    Ndungu, Jane Njeri

    2014-01-01

    Water quality in aquatic systems is important because it maintains the ecological processes that support biodiversity. However, declining water quality due to environmental perturbations threatens the stability of the biotic integrity and therefore hinders the ecosystem services and functions of aqu

  12. MEASURING OBJECT-ORIENTED SYSTEMS BASED ON THE EXPERIMENTAL ANALYSIS OF THE COMPLEXITY METRICS

    Directory of Open Access Journals (Sweden)

    J.S.V.R.S. SASTRY

    2011-05-01

    Metrics help a software engineer in quantitative analysis to assess the quality of a design before a system is built. The focus of object-oriented metrics is on the class, which is the fundamental building block of the object-oriented architecture. These metrics address internal object structure and external object structure. Internal object structure reflects the complexity of each individual entity, such as methods and classes. External complexity measures the interaction among entities, such as coupling and inheritance. This paper focuses on a set of object-oriented metrics that can be used to measure the quality of an object-oriented design, covering two families of complexity metrics in the object-oriented paradigm: the MOOD metrics and the Lorenz & Kidd metrics. The MOOD metrics consist of method inheritance factor (MIF), coupling factor (CF), attribute inheritance factor (AIF), method hiding factor (MHF), attribute hiding factor (AHF), and polymorphism factor (PF). The Lorenz & Kidd metrics consist of number of operations overridden (NOO), number of operations added (NOA), and specialization index (SI). MOOD and Lorenz & Kidd measurements are used mainly by designers and testers. Designers use these metrics to assess the software early in the process, making changes that will reduce complexity and improve the continuing capability of the design. Testers use them to test the software for complexity, system performance, and software quality. This paper reviews how the MOOD and Lorenz & Kidd metrics are validated theoretically and empirically. In this paper, work has been done to explore the quality of design of software components using the object-oriented paradigm. A number of object-oriented metrics have been proposed in the literature for measuring design attributes such as inheritance, coupling and polymorphism. Here, these metrics have been used to analyze various features of software components. Complexity of methods
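
    As a hedged illustration of two of the MOOD metrics named above, the sketch below computes MIF and CF from hypothetical method and coupling counts; the counting conventions are simplified relative to the formal definitions.

```python
def method_inheritance_factor(classes):
    """MIF: inherited methods as a fraction of all methods available.

    `classes` maps class name -> (methods_inherited, methods_defined).
    """
    inherited = sum(mi for mi, _ in classes.values())
    total = sum(mi + md for mi, md in classes.values())
    return inherited / total if total else 0.0

def coupling_factor(actual_couplings, n_classes):
    """CF: actual client-supplier couplings over the maximum possible."""
    max_couplings = n_classes * (n_classes - 1)
    return actual_couplings / max_couplings if max_couplings else 0.0

# Hypothetical counts for a three-class design
design = {"Shape": (0, 4), "Circle": (3, 2), "Square": (3, 1)}
print(f"MIF = {method_inheritance_factor(design):.2f}")
print(f"CF  = {coupling_factor(actual_couplings=2, n_classes=3):.2f}")
```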

  13. Assessing the Quality of MT Systems for Hindi to English Translation

    OpenAIRE

    Kalyani, Aditi; Kumud, Hemant; Singh, Shashi Pal; Kumar, Ajai

    2014-01-01

    Evaluation plays a vital role in checking the quality of MT output. It is done either manually or automatically. Manual evaluation is very time consuming and subjective, hence automatic metrics are used most of the time. This paper evaluates the translation quality of different MT engines for Hindi-English (Hindi data is provided as input and English is obtained as output) using various automatic metrics like BLEU, METEOR, etc. Further, the comparison of automatic evaluation results with Hu...

  14. Statistical quality assessment of a fingerprint

    Science.gov (United States)

    Hwang, Kyungtae

    2004-08-01

    The quality of a fingerprint is essential to the performance of AFIS (Automatic Fingerprint Identification System). Such quality may be classified by the clarity and regularity of ridge-valley structures.1,2 One may calculate the thickness of ridges and valleys to measure clarity and regularity. However, calculating thickness is not feasible in a poor quality image, especially severely damaged images that contain broken ridges (or valleys). To overcome this difficulty, the proposed approach employs statistical properties of a local block, namely the mean and spread of the thickness of both ridge and valley. The mean value is used to determine whether a fingerprint is wet or dry. For example, black pixels are dominant if a fingerprint is wet, and the average thickness of ridges is larger than that of valleys, and vice versa for a dry fingerprint. In addition, the standard deviation is used to determine the severity of damage. In this study, quality is divided into three categories based on the two statistical properties mentioned above: wet, good, and dry. The number of low quality blocks is used to measure the global quality of a fingerprint. In addition, the distribution of poor blocks is also measured using Euclidean distances between groups of poor blocks. With this scheme, locally condensed poor blocks decrease the overall quality of an image. Experimental results on fingerprint images captured by optical devices as well as by a rolling method show the wet and dry parts of images were successfully captured. Enhancing an image by employing morphology techniques that modify the detected poor quality blocks is illustrated in section 3. However, more work needs to be done on designing a scheme to incorporate the number of poor blocks and their distributions into a global quality measure.
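
    A minimal sketch of the block-level statistic described above, using ridge/valley run lengths in a binary block as a stand-in for thickness; the 1.5 ratio threshold and all helper names are assumptions for illustration, not values from the paper:

```python
import numpy as np

def run_lengths(row, value):
    """Lengths of consecutive runs of `value` in a 1-D binary array."""
    flags = np.concatenate(([0], (row == value).astype(int), [0]))
    edges = np.diff(flags)
    return np.where(edges == -1)[0] - np.where(edges == 1)[0]

def classify_block(block, ratio_limit=1.5):
    """Label a binary block (1 = ridge) 'wet', 'dry' or 'good' from the
    mean ridge thickness relative to the mean valley thickness."""
    ridge = np.concatenate([run_lengths(r, 1) for r in block])
    valley = np.concatenate([run_lengths(r, 0) for r in block])
    if ridge.size == 0 or valley.size == 0:
        return "dry" if ridge.size == 0 else "wet"
    ratio = ridge.mean() / valley.mean()
    if ratio > ratio_limit:
        return "wet"   # ink-heavy: ridges much thicker than valleys
    if ratio < 1 / ratio_limit:
        return "dry"   # faint: valleys dominate
    return "good"

block = np.zeros((16, 16), dtype=int)
block[:, ::4] = 1                  # thin ridges, wide valleys
print(classify_block(block))       # -> "dry"
```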

  15. Assessing Requirements Quality through Requirements Coverage

    Science.gov (United States)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. The shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, that of determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software

  16. Assessment of Total Quality Management Practices in a Public Organization

    OpenAIRE

    Özçakar, Necdet

    2010-01-01

    Total quality management has been very popular in the business world in recent decades. Various studies have shown that employees' assessments of total quality management practices have a great bearing on the success of its implementation. However, the implementation of total quality management in public organizations is quite different from its implementation in the private sector because of the different nature of each sector. The aim of this research is to analyze the assessm...

  17. On Special Berwald Metrics

    Directory of Open Access Journals (Sweden)

    Akbar Tayebi

    2010-01-01

    In this paper, we study a class of Finsler metrics which contains the class of Berwald metrics as a special case. We prove that every Finsler metric in this class is a generalized Douglas-Weyl metric. Then we study Finsler metrics of isotropic flag curvature in this class. Finally, we show that on this class of Finsler metrics, the notions of Landsberg and weakly Landsberg curvature are equivalent.

  18. QUALITY ASSESSMENT OF BISCUITS USING COMPUTER VISION

    Directory of Open Access Journals (Sweden)

    Archana A. Bade

    2016-08-01

    As customer expectations for high-quality foods increase day by day, it becomes essential for food industries to maintain product quality. It is therefore necessary to have a quality inspection system for the product before packaging. Automation in industry gives better inspection speed than human vision, and automation based on computer vision is cost effective, flexible and one of the best alternatives for a more accurate, fast inspection system. Image processing and image analysis are vital parts of a computer vision system. In this paper, we discuss real-time quality inspection of premium-class biscuits using computer vision. This covers the design, implementation and verification of the system, and the installation of the complete system at a biscuit factory. The overall system comprises image acquisition, preprocessing, feature extraction using segmentation, color variation analysis, and interpretation, together with the system hardware.

  19. National Water Quality Assessment (NAWQA) Program

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — National scope of NAWQA water-quality sample- and laboratory-result data and other supporting information obtained from NWIS systems hosted by individual Water...

  20. Assessing quality in software development: An agile methodology approach

    Directory of Open Access Journals (Sweden)

    V. Rodríguez-Hernández

    2015-06-01

    A novel methodology, the result of 10 years of in-field testing, which makes possible the convergence of different types of models and quality standards for Engineering and Computer Science faculties, is presented. Since most software-developing companies are small and medium sized, the projects developed must focus on SCRUM and Extreme Programming (XP), as opposed to RUP, which is quite heavy, as well as on Personal Software Process (PSP) and Team Software Process (TSP), which provide students with competences and a structured framework. The ISO 90003:2004 standard is employed to define the processes by means of a quality system without adding new requirements or changing existing ones. Also, the model is based on ISO/IEC 25000 (ISO/IEC 9126 – ISO/IEC 14598) to allow comparing software built using different metrics.

  1. Quality Assessment of Family Medicine Teams Based on Accreditation Standards

    OpenAIRE

    Valjevac, Salih; Ridjanovic, Zoran; Masic, Izet

    2009-01-01

    In order to speed up and simplify the self-assessment and external assessment process, provide better overview of and access to the Accreditation Standards for Family Medicine Teams, and improve the archiving of assessment documents, the Agency for Healthcare Quality and Accreditation in the Federation of Bosnia and Herzegovina (AKAZ) has developed self-assessment and external assessment software for family medicine teams. This article presents the development of standardized sof...

  2. Doctors or technicians: assessing quality of medical education

    Directory of Open Access Journals (Sweden)

    Tayyab Hasan

    2010-09-01

    Tayyab Hasan, PAPRSB Institute of Health Sciences, University Brunei Darussalam, Bandar Seri Begawan, Brunei. Abstract: Medical education institutions usually adapt industrial quality management models that measure the quality of the process of a program but not the quality of the product. The purpose of this paper is to analyze the impact of industrial quality management models on medical education and students, and to highlight the importance of introducing a proper educational quality management model. Industrial quality management models can measure the training component in terms of competencies, but they lack measurement of the educational component. These models use performance indicators to assess their process improvement efforts. Researchers suggest that the performance indicators used in educational institutions may only measure fiscal efficiency without measuring the quality of the educational experience of the students. In most institutions where industrial models are used for quality assurance, students are considered customers and are provided with the maximum services and facilities possible. Institutions are required to fulfill a list of recommendations from the quality control agencies in order to enhance student satisfaction and to guarantee standard services. The quality of medical education should be assessed by measuring the impact of the educational program and quality improvement procedures in terms of knowledge base development, behavioral change, and patient care. Industrial quality models may focus on academic support services and processes, but educational quality models should be introduced in parallel to focus on educational standards and products. Keywords: educational quality, medical education, quality control, quality assessment, quality management models

  3. Key Elements for Judging the Quality of a Risk Assessment

    Science.gov (United States)

    Fenner-Crisp, Penelope A.; Dellarco, Vicki L.

    2016-01-01

    Background: Many reports have been published that contain recommendations for improving the quality, transparency, and usefulness of decision making for risk assessments prepared by agencies of the U.S. federal government. A substantial measure of consensus has emerged regarding the characteristics that high-quality assessments should possess. Objective: The goal was to summarize the key characteristics of a high-quality assessment as identified in the consensus-building process and to integrate them into a guide for use by decision makers, risk assessors, peer reviewers and other interested stakeholders to determine if an assessment meets the criteria for high quality. Discussion: Most of the features cited in the guide are applicable to any type of assessment, whether it encompasses one, two, or all four phases of the risk-assessment paradigm; whether it is qualitative or quantitative; and whether it is screening level or highly sophisticated and complex. Other features are tailored to specific elements of an assessment. Just as agencies at all levels of government are responsible for determining the effectiveness of their programs, so too should they determine the effectiveness of their assessments used in support of their regulatory decisions. Furthermore, if a nongovernmental entity wishes to have its assessments considered in the governmental regulatory decision-making process, then these assessments should be judged in the same rigorous manner and be held to similar standards. Conclusions: The key characteristics of a high-quality assessment can be summarized and integrated into a guide for judging whether an assessment possesses the desired features of high quality, transparency, and usefulness. Citation: Fenner-Crisp PA, Dellarco VL. 2016. Key elements for judging the quality of a risk assessment. Environ Health Perspect 124:1127–1135; http://dx.doi.org/10.1289/ehp.1510483 PMID:26862984

  4. Metrics for Radiologists in the Era of Value-based Health Care Delivery.

    Science.gov (United States)

    Sarwar, Ammar; Boland, Giles; Monks, Annamarie; Kruskal, Jonathan B

    2015-01-01

    Accelerated by the Patient Protection and Affordable Care Act of 2010, health care delivery in the United States is poised to move from a model that rewards the volume of services provided to one that rewards the value provided by such services. Radiology department operations are currently managed by an array of metrics that assess various departmental missions, but many of these metrics do not measure value. Regulators and other stakeholders also influence what metrics are used to assess medical imaging. Metrics such as the Physician Quality Reporting System are increasingly being linked to financial penalties. In addition, metrics assessing radiology's contribution to cost or outcomes are currently lacking. In fact, radiology is widely viewed as a contributor to health care costs without an adequate understanding of its contribution to downstream cost savings or improvement in patient outcomes. The new value-based system of health care delivery and reimbursement will measure a provider's contribution to reducing costs and improving patient outcomes with the intention of making reimbursement commensurate with adherence to these metrics. The authors describe existing metrics and their application to the practice of radiology, discuss the so-called value equation, and suggest possible metrics that will be useful for demonstrating the value of radiologists' services to their patients.

  5. Coastal Water Quality Assessment by Self-Organizing Map

    Institute of Scientific and Technical Information of China (English)

    NIU Zhiguang; ZHANG Hongwei; ZHANG Ying

    2005-01-01

    A new approach to coastal water quality assessment was put forward through a study on the self-organizing map (SOM). Firstly, water quality data for Bohai Bay from 1999 to 2002 were prepared. Then, a set of software for coastal water quality assessment was developed based on the batch version of the SOM algorithm and the SOM toolbox in the MATLAB environment. Furthermore, the training results of the SOM could be analyzed with single water quality indexes, the N:P value (atomic ratio) and the eutrophication index E, so that the data were clustered into five different pollution types using the k-means clustering method. Finally, the serial trajectory of the monitoring data could be tracked, and new data could be classified and assessed automatically. Through application it is found that this study helps to analyze and assess coastal water quality with several kinds of graphics, which offers easy decision support for recognizing pollution status and taking corresponding measures.
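
    A rough sketch of the SOM-plus-k-means pipeline described above, assuming the third-party minisom package and scikit-learn rather than the authors' MATLAB SOM toolbox; the grid size, the five-cluster choice (matching the five pollution types) and the synthetic data are illustrative:

```python
import numpy as np
from minisom import MiniSom            # third-party: pip install minisom
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical monitoring matrix: rows = samples, columns = indicators
# (e.g. DIN, PO4-P, COD, chlorophyll-a), min-max normalised to [0, 1]
data = rng.random((200, 4))

som = MiniSom(6, 6, input_len=4, sigma=1.0, learning_rate=0.5, random_seed=1)
som.train_batch(data, num_iteration=2000)

# Cluster the trained codebook vectors into five pollution types
weights = som.get_weights().reshape(-1, 4)
types = KMeans(n_clusters=5, n_init=10, random_state=1).fit(weights)

# Assess a new sample: find its best-matching unit, then that unit's type
i, j = som.winner(data[0])
print("pollution type:", types.labels_[i * 6 + j])
```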

  6. The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing

    Science.gov (United States)

    DeLoach, Richard

    2016-01-01

    Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.

  7. Factors influencing assessment quality in higher vocational education

    NARCIS (Netherlands)

    Baartman, L.; Gulikers, J.T.M.; Dijkstra, A.

    2013-01-01

    The development of assessments that are fit to assess professional competence in higher vocational education requires a reconsideration of assessment methods, quality criteria and (self)evaluation. This article examines the self-evaluations of nine courses of a large higher vocational education inst

  8. Assessment time of the Welfare Quality protocol for dairy cattle

    NARCIS (Netherlands)

    Vries, de M.; Engel, B.; Uijl, I.; Schaik, van G.; Dijkstra, T.; Boer, de I.J.M.; Bokkers, E.A.M.

    2013-01-01

    The Welfare Quality® (WQ) protocols are increasingly used for assessing welfare of farm animals. These protocols are time consuming (about one day per farm) and, therefore, costly. Our aim was to assess the scope for reduction of on-farm assessment time of the WQ protocol for dairy cattle. Seven tra

  9. Assessment report for Hanford analytical services quality assurance plan

    International Nuclear Information System (INIS)

    This report documents the assessment results of DOE/RL-94-55, Hanford Analytical Services Quality Assurance Plan. The assessment was conducted using the Requirement and Self-Assessment Database (RSAD), which contains mandatory and nonmandatory DOE Order statements for the relevant DOE orders

  10. Factors Influencing Assessment Quality in Higher Vocational Education

    Science.gov (United States)

    Baartman, Liesbeth; Gulikers, Judith; Dijkstra, Asha

    2013-01-01

    The development of assessments that are fit to assess professional competence in higher vocational education requires a reconsideration of assessment methods, quality criteria and (self)evaluation. This article examines the self-evaluations of nine courses of a large higher vocational education institute. Per course, 4-11 teachers and 3-10…

  11. Arbuscular mycorrhiza in soil quality assessment

    DEFF Research Database (Denmark)

    Kling, M.; Jakobsen, I.

    1998-01-01

    quantitative and qualitative measurements of this important biological resource. Various methods for the assessment of the potential for mycorrhiza formation and function are presented. Examples are given of the application of these methods to assess the impact of pesticides on the mycorrhiza....

  12. Virginia Star Quality Initiative: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    Science.gov (United States)

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Virginia's Star Quality Initiative prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators…

  13. Exploring the Notion of Quality in Quality Higher Education Assessment in a Collaborative Future

    Science.gov (United States)

    Maguire, Kate; Gibbs, Paul

    2013-01-01

    The purpose of this article is to contribute to the debate on the notion of quality in higher education with particular focus on "objectifying through articulation" the assessment of quality by professional experts. The article gives an overview of the differentiations of quality as used in higher education. It explores a substantial piece of…

  14. Preliminary quality assessment of bovine colostrum

    OpenAIRE

    Alessandro Taranto; Francesca Conte; Rosario Fruci

    2013-01-01

    Data on bovine colostrum quality are scarce or absent, although Commission Regulations No 1662/2006 and No 1663/2006 include colostrum in the context of chapters on milk. Thus the aim of the present work is to study some physical, chemical, hygiene and safety quality parameters of bovine colostrum samples collected from Sicily and Calabria dairy herds. Thirty individual samples were collected 2-3 days after parturition. The laboratory tests included: pH, fat (FT), total nitrogen (TN), lactose (...

  15. Groundwater Quality Assessment Based on Improved Water Quality Index in Pengyang County, Ningxia, Northwest China

    OpenAIRE

    Li Pei-Yue; Qian Hui; Wu Jian-Hua

    2010-01-01

    The aim of this work is to assess the groundwater quality in Pengyang County based on an improved water quality index (WQI). An information entropy method was introduced to assign a weight to each parameter. To calculate the WQI and assess the groundwater quality, a total of 74 groundwater samples were collected and subjected to comprehensive physicochemical analysis. Each of the groundwater samples was analyzed for 26 parameters, and 14 parameters were chosen for computing the WQI, including c...
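
    A minimal sketch of an entropy-weighted WQI of the kind described, with invented concentrations and permissible limits; the exact rating-scale and weighting formulas in the paper may differ:

```python
import numpy as np

def entropy_weights(X):
    """Information-entropy weights for a samples x parameters matrix."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                 # column-wise proportions
    P = np.where(P == 0, 1e-12, P)        # avoid log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - e                           # degree of diversification
    return d / d.sum()

def wqi(concentrations, limits, weights):
    """Weighted sum of quality ratings q_i = 100 * C_i / S_i."""
    q = 100.0 * np.asarray(concentrations) / np.asarray(limits)
    return float(np.sum(weights * q))

# Invented data: 5 samples x 3 parameters (TDS, Cl-, SO4, all mg/l)
X = np.array([[820, 140, 95], [610, 90, 60], [1300, 260, 180],
              [700, 120, 80], [950, 180, 120]])
limits = np.array([1000, 250, 250])       # permissible limits
w = entropy_weights(X)
print("weights:", np.round(w, 3), "WQI of sample 0: %.1f" % wqi(X[0], limits, w))
```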

  16. Presentation: Visual analytics for automatic quality assessment of user-generated content on the English Wikipedia

    OpenAIRE

    David Strohmaier

    2015-01-01

    Related work has shown that it is possible to automatically measure the quality of Wikipedia articles. Yet, despite all these quality measures, it is difficult to identify what would improve an article. This master thesis therefore presents an interactive graphic tool for ranking and editing Wikipedia articles with support from quality measures. The contribution of this work is twofold: i) the Quality Analyzer, which allows for creating new quality metrics and co...

  17. Soil quality assessment in rice production systems

    NARCIS (Netherlands)

    Rodrigues de Lima, A.C.

    2007-01-01

    In the state of Rio Grande do Sul, Brazil, rice production is one of the most important regional activities. Farmers are concerned that the land use practices for rice production in the Camaquã region may not be sustainable because of detrimental effects on soil quality. The study presented in this

  18. Objective and Subjective Assessment of Digital Pathology Image Quality

    Directory of Open Access Journals (Sweden)

    Prarthana Shrestha

    2015-03-01

    The quality of an image produced by Whole Slide Imaging (WSI) scanners is of critical importance for using the image in clinical diagnosis. Therefore, it is very important to monitor and ensure the quality of images. Since subjective image quality assessments by pathologists are very time-consuming, expensive and difficult to reproduce, we propose a method for objective assessment based on clinically relevant and perceptual image parameters: sharpness, contrast, brightness, uniform illumination and color separation, derived from a survey of pathologists. We developed techniques to quantify the parameters based on content-dependent absolute pixel performance and to manipulate the parameters in a predefined range, resulting in images with content-independent relative quality measures. The method does not require a prior reference model. A subjective assessment of image quality was performed involving 69 pathologists and 372 images (including 12 optimal quality images and their distorted versions per parameter at 6 different levels). To address inter-reader variability, a representative rating is determined as a one-tailed 95% confidence interval of the mean rating. The results of the subjective assessment support the validity of the proposed objective image quality assessment method to model the readers' perception of image quality. The subjective assessment also provides thresholds for determining the acceptable level of objective quality per parameter. The images for both the subjective and objective quality assessment are based on HercepTest™ slides scanned by the Philips Ultra Fast Scanners, developed at Philips Digital Pathology Solutions. However, the method is applicable also to other types of slides and scanners.
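
    Three of the surveyed parameters (brightness, contrast, sharpness) can be given simple global estimators, as in the sketch below; these are generic textbook definitions (mean intensity, RMS contrast, Laplacian variance), not the content-dependent measures developed in the paper:

```python
import numpy as np

def quality_parameters(gray):
    """Global estimates of brightness, contrast and sharpness."""
    g = gray.astype(float)
    brightness = g.mean()                      # mean intensity
    contrast = g.std()                         # RMS contrast
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4.0 * g)
    sharpness = lap.var()                      # variance of a Laplacian
    return brightness, contrast, sharpness

img = np.random.default_rng(5).integers(0, 256, (512, 512))
print("brightness %.1f, contrast %.1f, sharpness %.1f" % quality_parameters(img))
```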

  19. Image and Video Quality Assessment Using Neural Network and SVM

    Institute of Scientific and Technical Information of China (English)

    DING Wenrui; TONG Yubing; ZHANG Qishan; YANG Dongkai

    2008-01-01

    An image and video quality assessment method was developed using a neural network and support vector machines (SVM), with the peak signal to noise ratio (PSNR) and structural similarity indexes used to describe image quality. The neural network was used to obtain the mapping functions between the objective quality assessment indexes and subjective quality assessment. The SVM was used to classify the images into different types, which were assessed using different mapping functions. Video quality was assessed based on the quality of each frame in the video sequence, with various weights to describe motion and scene changes in the video. The number of isolated points in the correlations of the image and video subjective and objective quality assessments was reduced by this method. Simulation results show that the method accurately assesses image quality. The monotonicity of the method for images is 6.94% higher than with the PSNR method, and the root mean square error is at least 35.90% lower than with the PSNR.
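
    A toy version of the mapping step described above: a small neural-network regressor learns a mapping from an objective index (PSNR) to subjective scores. The training data are synthetic and the network configuration is an assumption; the paper's SVM classification stage is omitted:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a test image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Synthetic training data: objective index (PSNR) -> subjective scores
rng = np.random.default_rng(3)
objective = rng.uniform(20.0, 45.0, size=(200, 1))
mos = 1.0 + 4.0 / (1.0 + np.exp(-(objective[:, 0] - 32.0) / 4.0))
mos += rng.normal(0.0, 0.2, mos.size)          # rating noise

# A small network learns the objective-to-subjective mapping function
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=3)
net.fit(objective, mos)
print("predicted MOS at 35 dB: %.2f" % net.predict([[35.0]])[0])
```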

  20. E-Services quality assessment framework for collaborative networks

    Science.gov (United States)

    Stegaru, Georgiana; Danila, Cristian; Sacala, Ioan Stefan; Moisescu, Mihnea; Mihai Stanescu, Aurelian

    2015-08-01

    In a globalised networked economy, collaborative networks (CNs) are formed to take advantage of new business opportunities. Collaboration involves shared resources and capabilities, such as e-Services that can be dynamically composed to automate CN participants' business processes. Quality is essential for the success of business process automation. Current approaches mostly focus on quality of service (QoS)-based service selection and ranking algorithms, overlooking the process of service composition which requires interoperable, adaptable and secure e-Services to ensure seamless collaboration, data confidentiality and integrity. Lack of assessment of these quality attributes can result in e-Service composition failure. The quality of e-Service composition relies on the quality of each e-Service and on the quality of the composition process. Therefore, there is the need for a framework that addresses quality from both views: product and process. We propose a quality of e-Service composition (QoESC) framework for quality assessment of e-Service composition for CNs which comprises of a quality model for e-Service evaluation and guidelines for quality of e-Service composition process. We implemented a prototype considering a simplified telemedicine use case which involves a CN in e-Healthcare domain. To validate the proposed quality-driven framework, we analysed service composition reliability with and without using the proposed framework.

  1. FABASOFT BEST PRACTICES AND TEST METRICS MODEL

    Directory of Open Access Journals (Sweden)

    Nadica Hrgarek

    2007-06-01

    Software companies have to face serious problems about how to measure the progress of test activities and the quality of software products in order to estimate test completion criteria and whether the shipment milestone will be reached on time. Measurement is a key activity in the testing life cycle and requires an established, managed and well documented test process, defined software quality attributes, quantitative measures, and the use of test management and bug tracking tools. Test metrics are a subset of software metrics (product metrics, process metrics) and enable the measurement and quality improvement of the test process and/or software product. The goal of this paper is to briefly present Fabasoft best practices and lessons learned during functional and system testing of big complex software products, and to describe a simple test metrics model applied to the software test process with the purpose of better controlling software projects, and measuring and increasing software quality.

  2. Assessment of Quality Management Practices Within the Healthcare Industry

    Directory of Open Access Journals (Sweden)

    William J. Miller

    2009-01-01

    Problem Statement: Considerable effort has been devoted over the years by many organizations to adopt quality management practices, but few studies have assessed critical factors that affect quality practices in healthcare organizations. The problem addressed in this study was to assess the critical factors influencing quality management practices in a single important industry (i.e., healthcare). Approach: A survey instrument was adapted from the business quality literature and sent to all hospitals in a large US Southeastern state. Valid responses were received from 147 of 189 hospitals, yielding a 75.6% response rate. Factor analysis using principal component analysis with an orthogonal rotation was performed to assess 58 survey items designed to measure ten dimensions of hospital quality management practices. Results: Eight factors were shown to have a statistically significant effect on quality management practices and were classified into two groups: (1) four strategic factors (role of management leadership, role of the physician, customer focus, training resources investment) and (2) four operational factors (role of quality department, quality data/reporting, process management/training, and employee relations). The results of this study showed that a valid and reliable instrument was developed and used to assess quality management practices in hospitals throughout a large US state. Conclusion: The implications of this study provide an understanding that management of quality requires both a focus on longer-term strategic leadership and day-to-day operational management. It is recommended that healthcare researchers and practitioners focus on the critical factors identified and employ this survey instrument to manage and better understand the nature of hospital quality management practices across wider geographical regions and over longer time periods. Furthermore, this study extends the scope of existing quality management

  3. STUDY OF POND WATER QUALITY BY THE ASSESSMENT OF PHYSICOCHEMICAL PARAMETERS AND WATER QUALITY INDEX

    OpenAIRE

    Vinod Jena; Satish Dixit; Ravi Shrivastava; Sapana Gupta

    2013-01-01

    Water quality index (WQI) is a dimensionless number that combines multiple water quality factors into a single number by normalizing values to subjective rating curves. Conventionally it has been used for evaluating the quality of water for water resources such as rivers, streams and lakes, etc. The present work is aimed at assessing the Water Quality Index (W.Q.I) of pond water and the impact of human activities on it. Physicochemical parameters were monitored for the calculation of W.Q.I for ...
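
    The abstract does not give the authors' exact WQI formulation, but a common variant is the weighted arithmetic index, WQI = Σ wᵢqᵢ / Σ wᵢ, with sub-index ratings qᵢ and weights wᵢ inversely proportional to the permissible standards. A sketch with hypothetical parameters:

```python
# Weighted-arithmetic water quality index, a common WQI formulation
# (illustrative; not necessarily the variant used in the paper).
# q_i are sub-index ratings scaled to ~0-100, w_i ~ 1/standard.

parameters = {
    # name: (measured, ideal value, permissible standard) - hypothetical
    "pH":  (7.8, 7.0, 8.5),
    "TDS": (640.0, 0.0, 500.0),
    "DO":  (5.2, 14.6, 5.0),
}

weights = {name: 1.0 / std for name, (_, _, std) in parameters.items()}

def sub_index(measured, ideal, standard):
    # rating of a single parameter relative to its standard
    return 100.0 * (measured - ideal) / (standard - ideal)

wqi = sum(weights[n] * sub_index(*parameters[n]) for n in parameters) \
      / sum(weights.values())
print(f"WQI = {wqi:.1f}")
```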

  4. A unifying process capability metric

    Directory of Open Access Journals (Sweden)

    John Jay Flaig

    2009-07-01

    Full Text Available A new economic approach to process capability assessment is presented, which differs from the commonly used engineering metrics. The proposed metric consists of two economic capability measures – the expected profit and the variation in profit of the process. This dual economic metric offers a number of significant advantages over other engineering or economic metrics used in process capability analysis. First, it is easy to understand and communicate. Second, it is based on a measure of total system performance. Third, it unifies the fraction nonconforming approach and the expected loss approach. Fourth, it reflects the underlying interest of management in knowing the expected financial performance of a process and its potential variation.
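
    A minimal sketch of the dual economic metric described, estimating expected profit and its variation by Monte Carlo for a normally distributed process; the spec limits, margins and scrap costs are hypothetical.

```python
# Sketch of the dual economic capability metric: expected profit per unit
# and its variation, estimated by simulation. All figures are hypothetical.

import random
import statistics

LSL, USL = 9.0, 11.0          # specification limits
mu, sigma = 10.1, 0.4         # process mean and standard deviation
price, unit_cost, scrap_loss = 5.0, 3.0, 2.0

def profit(x):
    # conforming unit earns the margin; nonconforming unit loses cost + scrap
    return (price - unit_cost) if LSL <= x <= USL else -(unit_cost + scrap_loss)

random.seed(1)
profits = [profit(random.gauss(mu, sigma)) for _ in range(100_000)]
print(f"expected profit/unit: {statistics.mean(profits):.3f}")
print(f"profit std dev:       {statistics.stdev(profits):.3f}")
```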

  5. Self-Organizing Maps for Fingerprint Image Quality Assessment

    DEFF Research Database (Denmark)

    Olsen, Martin Aastrup; Tabassi, Elham; Makarov, Anton;

    2013-01-01

    Fingerprint quality assessment is a crucial task which needs to be conducted accurately in various phases of the biometric enrolment and recognition processes. Neglecting quality measurement will adversely impact the accuracy and efficiency of biometric recognition systems (e.g. verification and identification). ... the SOM output and biometric performance. The quantitative evaluation performed demonstrates that our proposed quality assessment algorithm is a reasonable predictor of performance. The open source code of our algorithm will be posted at the NIST NFIQ 2.0 website.

  6. Acoustical Quality Assessment of the Classroom Environment

    CERN Document Server

    George, Marian

    2012-01-01

    Teaching is one of the most important factors affecting any education system. Many research efforts have been conducted to facilitate the presentation modes used by instructors in classrooms as well as provide means for students to review lectures through web browsers. Other studies have been made to provide acoustical design recommendations for classrooms like room size and reverberation times. However, using acoustical features of classrooms as a way to provide education systems with feedback about the learning process was not thoroughly investigated in any of these studies. We propose a system that extracts different sound features of students and instructors, and then uses machine learning techniques to evaluate the acoustical quality of any learning environment. We infer conclusions about the students' satisfaction with the quality of lectures. Using classifiers instead of surveys and other subjective ways of measures can facilitate and speed such experiments which enables us to perform them continuously...

  7. Assessment of Groundwater Quality by Chemometrics.

    Science.gov (United States)

    Papaioannou, Agelos; Rigas, George; Kella, Sotiria; Lokkas, Filotheos; Dinouli, Dimitra; Papakonstantinou, Argiris; Spiliotis, Xenofon; Plageras, Panagiotis

    2016-07-01

    Chemometric methods were used to analyze large data sets of groundwater quality from 18 wells supplying the central drinking water system of Larissa city (Greece) during the period 2001 to 2007 (8,064 observations) to determine temporal and spatial variations in groundwater quality and to identify pollution sources. Cluster analysis grouped each year into three temporal periods: January-April (first), May-August (second) and September-December (third). Furthermore, spatial cluster analysis was conducted for each period and for all samples, and grouped the 28 monitoring units HJI (where HJI represents the observations of monitoring site H, year J and period I) into three groups (A, B and C). Discriminant Analysis used only 16 of the 24 parameters to correctly assign 97.3% of the cases. In addition, Factor Analysis identified 7, 9 and 8 latent factors for groups A, B and C, respectively. PMID:27329059
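
    For readers unfamiliar with the workflow, the sketch below strings together the same chemometric steps (standardisation, cluster analysis, latent-factor extraction) on placeholder data using scikit-learn; it is illustrative only, not the authors' pipeline.

```python
# Illustrative chemometric workflow: standardise observations, cluster them,
# then inspect latent factors. Requires scikit-learn; data are random
# placeholders standing in for water quality measurements.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 24))        # 200 samples x 24 quality parameters

Xs = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xs)

pca = PCA(n_components=7).fit(Xs)     # latent-factor extraction
print("cluster sizes:", np.bincount(labels))
print("variance explained by 7 factors:", pca.explained_variance_ratio_.sum())
```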

  8. Quality assessment of forest cutting with chainsaw

    Directory of Open Access Journals (Sweden)

    Octávio Barbosa Plaster

    2012-06-01

    Full Text Available This research evaluated the quality of forest harvesting with chainsaws on farms in the south of Espírito Santo state, Brazil, considering cut quality and the loss of wood left in the stumps. Plots totalling 250 m² were established to collect data on forest cutting with chainsaws, evaluating cut quality with respect to: presence of spikes; crack damage; stumps cut outside the standard height range; stumps without a directional notch; and remaining stump height, in order to measure the wood lost in the stumps. The main results were: spikes were present in 21.9% of the stumps, cracks in 17.2%, non-standard stumps in 44.6% and stumps without a directional notch in 34.5% of the evaluations. To check the influence of the directional notch on stump height, a t-test at 5% probability showed that stump height was greater where the cut was made without the directional notch. The amount of wood left in the stumps above the recommended maximum was, on average, 2.43 m³ ha⁻¹, representing a loss of R$ 172.53 ha⁻¹. It was verified that the loss of timber remaining in eucalyptus stumps was higher in places where logging was carried out without the directional notch. The items evaluated showed uneven quality, indicating the need to improve chainsaw cutting.

  9. Assessment of Water Quality Conditions: Agassiz National Wildlife Refuge, 2012

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This is an assessment of water quality data collected from source water, discharge and within Agassiz Pool. In the summer of 2012, the U.S. Fish and Wildlife...

  10. National Water-Quality Assessment (NAWQA) Area-Characterization Toolbox

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This is release 1.0 of the National Water-Quality Assessment (NAWQA) Area-Characterization Toolbox. These tools are designed to be accessed using ArcGIS Desktop...

  11. National Impact Assessment of CMS Quality Measures Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The National Impact Assessment of the Centers for Medicare and Medicaid Services (CMS) Quality Measures Reports (Impact Reports) are mandated by section 3014(b), as...

  12. Quality Assessment and Improvement Methods in Statistics – what Works?

    Directory of Open Access Journals (Sweden)

    Hans Viggo Sæbø

    2014-12-01

    Full Text Available Several methods for quality assessment and assurance in statistics have been developed in a European context. Data Quality Assessment Methods (DatQAM) were considered in a Eurostat handbook in 2007. These methods comprise quality reports and indicators, measurement of process variables, user surveys, self-assessments, audits, labelling and certification. The entry point for the paper is the development of systematic quality work in European statistics with regard to good practices such as those described in the DatQAM handbook. Assessment is one issue, following up recommendations and implementation of improvement actions another. This leads to a discussion on the effect of approaches and tools: Which work well, which have turned out to be more of a challenge, and why? Examples are mainly from Statistics Norway, but these are believed to be representative for several statistical institutes.

  13. Water quality assessment of razorback sucker grow-out ponds

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — Water quality parameters had never been assessed in these grow-out ponds. Historically growth, condition, and survival of razorback suckers have been variable...

  14. Assessing the quality of a student-generated question repository

    CERN Document Server

    Bates, Simon P; Homer, Danny; Riise, Jonathan

    2013-01-01

    We present results from a study that categorizes and assesses the quality of questions and explanations authored by students, in question repositories produced as part of the summative assessment in introductory physics courses over the past two years. Mapping question quality onto the levels in the cognitive domain of Bloom's taxonomy, we find that students produce questions of high quality. More than three-quarters of questions fall into categories beyond simple recall, in contrast to similar studies of student-authored content in different subject domains. Similarly, the quality of student-authored explanations for questions was also high, with approximately 60% of all explanations classified as being of high or outstanding quality. Overall, 75% of questions met combined quality criteria, which we hypothesize is due in part to the in-class scaffolding activities that we provided for students ahead of requiring them to author questions.

  15. Using big data for quality assessment in oncology.

    Science.gov (United States)

    Broughman, James R; Chen, Ronald C

    2016-05-01

    There is increasing attention in the US healthcare system on the delivery of high-quality care, an issue central to oncology. In the report 'Crossing the Quality Chasm', the Institute of Medicine identified six aims for improving healthcare quality: safe, effective, patient-centered, timely, efficient and equitable. This article describes how current big data resources can be used to assess these six dimensions, and provides examples of published studies in oncology. Strengths and limitations of current big data resources for the evaluation of quality of care are also discussed. Finally, this article outlines a vision where big data can be used not only to retrospectively assess the quality of oncologic care, but help physicians deliver high-quality care in real time.

  16. Impact Factor and other metrics for evaluating science: essentials for public health practitioners.

    Directory of Open Access Journals (Sweden)

    Angelo G. Solimini

    2011-03-01

    Full Text Available The quality of scientific evidence is closely tied to the quality of all the research activities that generate it (including the "value" of the scientists involved) and is usually, but not always, reflected in the reporting quality of the scientific publication(s). Public health practitioners, whether at research, academic or management level, should be aware of the current metrics used to assess the quality of journals, single publications, research projects, research scientists or entire research groups. However, this task is complicated by a vast variety of different metrics and assessment methods. Here we briefly review the most widely used metrics, highlighting the pros and cons of each. The rigid application of quantitative metrics to judge the quality of a journal, a single publication or a researcher suffers from many negative issues and is prone to many reasonable criticisms. A reasonable way forward could probably be the use of qualitative assessment founded on the indications coming from a few robust quantitative metrics.

  17. A Literature Review of Fingerprint Quality Assessment and Its Evaluation

    OpenAIRE

    Yao, Zhigang; Le Bars, Jean-Marie; Charrier, Christophe; Rosenberger, Christophe

    2016-01-01

    Fingerprint quality assessment (FQA) has been a challenging issue due to a variety of noisy information contained in the samples, such as physical defects and distortions caused by sensing devices. Existing studies have made efforts to find out more suitable techniques for assessing fingerprint quality but it is difficult to achieve a common solution because of, for example, different image settings. This paper gives a twofold study related to FQA, including a literat...

  18. Quality Assessment of TPB-Based Questionnaires: A Systematic Review

    OpenAIRE

    Obiageli Crystal Oluka; Shaofa Nie; Yi Sun

    2014-01-01

    OBJECTIVE: This review is aimed at assessing the quality of questionnaires and their development process based on the theory of planned behavior (TPB) change model. METHODS: A systematic literature search for studies with the primary aim of TPB-based questionnaire development was conducted in relevant databases between 2002 and 2012 using selected search terms. Ten of 1,034 screened abstracts met the inclusion criteria and were assessed for methodological quality using two different appraisal...

  19. A ranking index for quality assessment of forensic DNA profiles

    OpenAIRE

    Ansell Ricky; Hedman Johannes; Nordgaard Anders

    2010-01-01

    Background: Assessment of DNA profile quality is vital in forensic DNA analysis, both in order to determine the evidentiary value of DNA results and to compare the performance of different DNA analysis protocols. Generally the quality assessment is performed through manual examination of the DNA profiles based on empirical knowledge, or by comparing the intensities (allelic peak heights) of the capillary electrophoresis electropherograms. Results: We recently developed a ranking index ...

  20. Groundwater Dynamics and Quality Assessment in an Agricultural Area

    OpenAIRE

    Stefano L. Russo; Adriano Fiorucci; Bartolomeo Vigna

    2011-01-01

    Problem statement: The analysis of the relationships among the different hydrogeological units and the assessment of groundwater quality are fundamental for adopting suitable territorial planning measures aimed at reducing potential groundwater pollution, especially in agricultural regions. In this study, the characteristics of groundwater dynamics and the assessment of its quality in the Cuneo Plain (NW Italy) were examined. Approach: In order to define the geological setting an intense bibliog...

  1. Quality assessment for spectral domain optical coherence tomography (OCT) images

    OpenAIRE

    LIU, SHUANG; Paranjape, Amit S.; Elmaanaoui, Badr; Dewelle, Jordan; Rylander, H. Grady; Markey, Mia K.; Milner, Thomas E.

    2009-01-01

    Retinal nerve fiber layer (RNFL) thickness, a measure of glaucoma progression, can be measured in images acquired by spectral domain optical coherence tomography (OCT). The accuracy of RNFL thickness estimation, however, is affected by the quality of the OCT images. In this paper, a new parameter, signal deviation (SD), which is based on the standard deviation of the intensities in OCT images, is introduced for objective assessment of OCT image quality. Two other objective assessment paramete...

  2. Quality Assessment of Library Website of Iranian State Universities:

    OpenAIRE

    Farideh Osareh; Zeinab Papi

    2008-01-01

    The present study carries out a quality assessment of the library websites of Iranian state universities in order to rank them accordingly. The evaluation tool used is the normalized Web Quality Evaluation Tools (WQET). 41 active library websites were studied and assessed qualitatively over two time periods (Feb 2006 and May 2006) using WQET. Data were collected by direct observation of the websites. The evaluation was based on user characteristics, website purpose, upload speed, structural st...

  3. Assessing translation quality for cross language image retrieval

    OpenAIRE

    Clough, P.; Sanderson, M.

    2004-01-01

    Like other cross language tasks, we show that the quality of the translation resource, among other factors, has an effect on retrieval performance. Using data from the ImageCLEF test collection, we investigate the relationship between translation quality and retrieval performance when using Systran, a machine translation (MT) system, as a translation resource. The quality of translation is assessed manually by comparing the original ImageCLEF topics with the output from Systran and rated by a...

  4. QUALITY ASSESSMENT OF EGGS PACKED UNDER MODIFIED ATMOSPHERE

    OpenAIRE

    Aline Giampietro-Ganeco; Hirasilva Borba; Aline Mary Scatolini-Silva; Marcel Manente Boiago; Pedro Alves de Souza; Juliana Lolli Malagoli de Mello

    2015-01-01

    Eggs are perishable foods and lose quality quickly if not stored properly. From the moment of lay to the marketing of the egg, quality is lost through the exchange of gases and water with the external environment through the pores of the shell; thus, studies involving modified atmosphere packaging are extremely important. The aim of the present study is to assess the internal quality of eggs packed under modified atmosphere and stored at room temperature. Six hundred and twelve fresh commercial...

  5. Assessing Educational Processes Using Total-Quality-Management Measurement Tools.

    Science.gov (United States)

    Macchia, Peter, Jr.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) assessment tools in educational settings highlights and gives examples of fishbone diagrams, or cause and effect charts; Pareto diagrams; control charts; histograms and check sheets; scatter diagrams; and flowcharts. Variation and quality are discussed in terms of continuous process…

  6. Guidance on Data Quality Assessment for Life Cycle Inventory Data

    Science.gov (United States)

    Data quality within Life Cycle Assessment (LCA) is a significant issue for the future support and development of LCA as a decision support tool and its wider adoption within industry. In response to current data quality standards such as the ISO 14000 series, various entities wit...

  7. River Pollution: Part II. Biological Methods for Assessing Water Quality.

    Science.gov (United States)

    Openshaw, Peter

    1984-01-01

    Discusses methods used in the biological assessment of river quality and such indicators of clean and polluted waters as the Trent Biotic Index, Chandler Score System, and species diversity indexes. Includes a summary of a river classification scheme based on quality criteria related to water use. (JN)

  8. Food quality assessment in parent–child dyads

    DEFF Research Database (Denmark)

    Bech-Larsen, Tino; Jensen, Birger Boutrup

    2011-01-01

    of attention. The purpose of this article is to discuss the interpersonal aspects of food quality formation, and to explore these in the context of parents buying new types of healthier in-between meals for their children. To pursue this we introduce the concept of dyadic quality assessment and apply...

  9. Assessing the link between coastal urbanization and the quality of nekton habitat in mangrove tidal tributaries

    Science.gov (United States)

    Krebs, Justin M.; Bell, Susan S.; McIvor, Carole C.

    2014-01-01

    To assess the potential influence of coastal development on habitat quality for estuarine nekton, we characterized body condition and reproduction for common nekton from tidal tributaries classified as undeveloped, industrial, urban or man-made (i.e., mosquito-control ditches). We then evaluated these metrics of nekton performance, along with several abundance-based metrics and community structure from a companion paper (Krebs et al. 2013) to determine which metrics best reflected variation in land-use and in-stream habitat among tributaries. Body condition was not significantly different among undeveloped, industrial, and man-made tidal tributaries for six of nine taxa; however, three of those taxa were in significantly better condition in urban compared to undeveloped tributaries. Palaemonetes shrimp were the only taxon in significantly poorer condition in urban tributaries. For Poecilia latipinna, there was no difference in body condition (length–weight) between undeveloped and urban tributaries, but energetic condition was significantly better in urban tributaries. Reproductive output was reduced for both P. latipinna (i.e., fecundity) and grass shrimp (i.e., very low densities, few ovigerous females) in urban tributaries; however a tradeoff between fecundity and offspring size confounded meaningful interpretation of reproduction among land-use classes for P. latipinna. Reproductive allotment by P. latipinna did not differ significantly among land-use classes. Canonical correspondence analysis differentiated urban and non-urban tributaries based on greater impervious surface, less natural mangrove shoreline, higher frequency of hypoxia and lower, more variable salinities in urban tributaries. These characteristics explained 36 % of the variation in nekton performance, including high densities of poeciliid fishes, greater energetic condition of sailfin mollies, and low densities of several common nekton and economically important taxa from urban tributaries

  10. Quality evaluation of extra high quality images based on key assessment word

    Science.gov (United States)

    Kameda, Masashi; Hayashi, Hidehiko; Akamatsu, Shigeru; Miyahara, Makoto M.

    2001-06-01

    An all-encompassing goal of our research is to develop an extra high quality imaging system which is able to convey a high-level artistic impression faithfully. We have defined such a high-level artistic impression as a high order sensation, and we suppose that the high order sensation is expressed by a combination of psychological factors that can be described by plural assessment words. In order to pursue the quality factors that are important for the reproduction of the high order sensation, we have focused on the image quality evaluation of extra high quality images using assessment words that take the high order sensation into account. In this paper, we have obtained the hierarchical structure between the collected assessment words and the principles of European painting based on the conveyance model of the high order sensation, and we have determined a key assessment word, 'plasticity', which is able to evaluate the reproduction of the high order sensation more accurately. The results of subjective assessment experiments using the prototype of the developed extra high quality imaging system have shown that the key assessment word 'plasticity' is the most appropriate assessment word for evaluating the image quality of extra high quality images quasi-quantitatively.

  11. Quality of life assessment in dogs and cats receiving chemotherapy

    DEFF Research Database (Denmark)

    Vøls, Kåre K.; Heden, Martin A.; Kristensen, Annemarie Thuri;

    2016-01-01

    This study aimed to review currently reported methods of assessing the effects of chemotherapy on the quality of life (QoL) of canine and feline patients and to explore novel ways to assess QoL in such patients in the light of the experience to date in human pediatric oncology. A qualitative comparative analysis of published papers on the effects of chemotherapy on QoL in dogs and cats was conducted. This was supplemented with a comparison of the parameters and domains used in veterinary QoL-assessments with those used in the Pediatric Quality of Life Inventory (PedsQL™) questionnaire designed to assess QoL in toddlers. Each of the identified publications including QoL-assessment in dogs and cats receiving chemotherapy applied a different method of QoL-assessment. In addition, the veterinary QoL-assessments were mainly focused on physical clinical parameters, whereas the emotional (6/11), social...

  12. Quantifying landscape pattern and assessing the land cover changes in Piatra Craiului National Park and Bucegi Natural Park, Romania, using satellite imagery and landscape metrics.

    Science.gov (United States)

    Vorovencii, Iosif

    2015-11-01

    Protected areas of Romania have enjoyed particular importance after 1989, but, at the same time, they were subject to different anthropogenic and natural pressures which resulted in the occurrence of land cover changes. These changes have generally led to landscape degradation inside and at the borders of the protected areas. In this article, 12 landscape metrics were used in order to quantify landscape pattern and assess land cover changes in two protected areas, Piatra Craiului National Park (PCNP) and Bucegi Natural Park (BNP). The landscape metrics were obtained from land cover maps derived from Landsat Thematic Mapper (TM) and Landsat Enhanced Thematic Mapper Plus (ETM+) images from 1987, 1993, 2000, 2009 and 2010. Three land cover classes were analysed in PCNP and five land cover map classes in BNP. The results show a landscape fragmentation trend for both parks, affecting different types of land covers. Between 1987 and 2010, in PCNP fragmentation was, in principle, the result not only of anthropogenic activities such as forest cuttings and illegal logging but also of natural causes. In BNP, between 1987 and 2009, the fragmentation affected the pasture which resulted in the occurrence of bare land and rocky areas because of the erosion on the Bucegi Plateau.

  13. Assessing the colour quality of LED sources

    DEFF Research Database (Denmark)

    Jost-Boissard, S.; Avouac, P.; Fontoynont, Marc

    2015-01-01

    sources and especially some LEDs. In this paper, several aspects of perceived colour quality are investigated using a side-by-side paired comparison method and the following criteria: naturalness of fruits and vegetables, colourfulness of the Macbeth Color Checker chart, visual appreciation (attractiveness/preference) and colour difference estimations for both visual scenes. Forty-five observers with normal colour vision evaluated nine light sources at 3000 K, and 36 observers evaluated eight light sources at 4000 K. Our results indicate that perceived colour differences are better dealt...

  14. ASSESSING THE COST OF BEEF QUALITY

    OpenAIRE

    Forristall, Cody; May, Gary J.; Lawrence, John D.

    2002-01-01

    The number of U.S. fed cattle marketed through a value based or grid marketing system is increasing dramatically. Most grids reward Choice or better quality grades and some pay premiums for red meat yield. The Choice-Select (C-S) price spread increased 55 percent, over $3/cwt between 1989-91 and 1999-01. However, there is a cost associated with pursuing these carcass premiums. This paper examines these tradeoffs both in the feedlot and in a retained ownership scenario. Correlations between ca...

  15. Quality assessment of plant transpiration water

    Science.gov (United States)

    Macler, Bruce A.; Janik, Daniel S.; Benson, Brian L.

    1990-01-01

    It has been proposed to use plants as elements of biologically-based life support systems for long-term space missions. Three roles have been brought forth for plants in this application: recycling of water, regeneration of air and production of food. This report discusses recycling of water and presents data from investigations of plant transpiration water quality. Aqueous nutrient solution was applied to several plant species and transpired water collected. The findings indicated that this water typically contained 0.3-6 ppm of total organic carbon, which meets hygiene water standards for NASA's space applications. It suggests that this method could be developed to achieve potable water standards.

  16. Metrics for Evaluating the Accuracy of Solar Power Forecasting: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, J.; Hodge, B. M.; Florita, A.; Lu, S.; Hamann, H. F.; Banunarayanan, V.

    2013-10-01

    Forecasting solar energy generation is a challenging task due to the variety of solar power systems and weather regimes encountered. Forecast inaccuracies can result in substantial economic losses and power system reliability issues. This paper presents a suite of generally applicable and value-based metrics for solar forecasting for a comprehensive set of scenarios (i.e., different time horizons, geographic locations, applications, etc.). In addition, a comprehensive framework is developed to analyze the sensitivity of the proposed metrics to three types of solar forecasting improvements using a design of experiments methodology, in conjunction with response surface and sensitivity analysis methods. The results show that the developed metrics can efficiently evaluate the quality of solar forecasts, and assess the economic and reliability impact of improved solar forecasting.
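
    The paper's metric suite is considerably broader, but three of the standard statistical accuracy measures it builds on (RMSE, MAE and mean bias error) can be sketched as follows, with hypothetical observation/forecast pairs:

```python
# Standard forecast accuracy measures (a small subset of the paper's suite);
# observation and forecast values below are hypothetical.

import math

def rmse(obs, pred):
    return math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    return sum(abs(p - o) for o, p in zip(obs, pred)) / len(obs)

def mbe(obs, pred):
    # mean bias error: positive = systematic over-forecast
    return sum(p - o for o, p in zip(obs, pred)) / len(obs)

observed = [0.0, 120.5, 310.2, 450.0, 390.8]   # MW, hypothetical
forecast = [5.0, 110.0, 330.0, 430.0, 400.0]

print(f"RMSE={rmse(observed, forecast):.1f}  "
      f"MAE={mae(observed, forecast):.1f}  MBE={mbe(observed, forecast):.1f}")
```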

  17. A Trustability Metric for Code Search based on Developer Karma

    CERN Document Server

    Gysin, Florian S

    2010-01-01

    The promise of search-driven development is that developers will save time and resources by reusing external code in their local projects. To efficiently integrate this code, users must be able to trust it; thus the trustability of code search results is just as important as their relevance. In this paper, we introduce a trustability metric to help users assess the quality of code search results and therefore ease the cost-benefit analysis they undertake when trying to find suitable integration candidates. The proposed trustability metric incorporates both user votes and cross-project activity of developers to calculate a "karma" value for each developer. Through the karma value of all its developers a project is ranked on a trustability scale. We present JBender, a proof-of-concept code search engine which implements our trustability metric, and we discuss preliminary results from an evaluation of the prototype.
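
    The abstract does not give JBender's exact weighting, so the sketch below shows one plausible reading: karma as a weighted combination of user votes and cross-project activity, aggregated into a project-level trustability score. All weights and field names are hypothetical.

```python
# Sketch of a karma-style trustability score: a developer's karma combines
# user votes with cross-project activity, and a project is ranked by the
# average karma of its developers. Weights/fields are hypothetical, not JBender's.

developers = {
    "alice": {"votes": 42, "projects": 5},
    "bob":   {"votes": 10, "projects": 2},
}

def karma(dev, w_votes=1.0, w_activity=3.0):
    return w_votes * dev["votes"] + w_activity * dev["projects"]

def project_trustability(member_names):
    return sum(karma(developers[m]) for m in member_names) / len(member_names)

print(f"trustability: {project_trustability(['alice', 'bob']):.1f}")
```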

  18. Web metrics for library and information professionals

    CERN Document Server

    Stuart, David

    2014-01-01

    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional.Th...

  19. Thermal analysis in quality assessment of rapeseed oils

    Energy Technology Data Exchange (ETDEWEB)

    Wesolowski, Marek; Erecinska, Joanna [Department of Analytical Chemistry, Medical University of Gdansk, Al. Gen. J. Hallera 107, PL 80-416 Gdansk (Poland)

    1998-12-07

    The evaluation of the applicability of thermoanalytical methods to the assessment of the quality of refined rapeseed oils was performed. Density, refractive index, and saponification, iodine and acid numbers of rapeseed oils were determined as part of the study. By correlating the data obtained with the temperatures of initial, final and successive mass losses determined from the thermogravimetric curves, strong relations were observed. The possibility of a practical utilization of regression equations for the assessment of the quality of refined rapeseed oils was indicated. The results of principal component analysis indicate that thermogravimetric techniques are very useful in defining the quality of rapeseed oils compared with chemical analyses

  20. An information theoretic approach for privacy metrics

    Directory of Open Access Journals (Sweden)

    Michele Bezzi

    2010-12-01

    Full Text Available Organizations often need to release microdata without revealing sensitive information. To this end, data are anonymized and, to assess the quality of the process, various privacy metrics have been proposed, such as k-anonymity, l-diversity, and t-closeness. These metrics are able to capture different aspects of the disclosure risk, imposing minimal requirements on the association of an individual with the sensitive attributes. If we want to combine them in an optimization problem, we need a common framework able to express all these privacy conditions. Previous studies proposed the notion of mutual information to measure the different kinds of disclosure risks and the utility, but, since mutual information is an average quantity, it is not able to completely express these conditions on single records. We introduce here the notion of one-symbol information (i.e., the contribution to mutual information by a single record), which allows one to express and compare the disclosure risk metrics. In addition, we obtain a relation between the risk values t and l, which can be used for parameter setting. We also show, by numerical experiments, how l-diversity and t-closeness can be represented in terms of two different, but equally acceptable, conditions on the information gain.
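
    A minimal worked example of the one-symbol information of a single record, i(x; y) = log₂(p(x|y)/p(x)), under hypothetical marginal and conditional probabilities of the sensitive value:

```python
# One-symbol (pointwise) information for a single record: the contribution
# i(x; y) = log2( p(x|y) / p(x) ) of one equivalence-class/sensitive-value
# pair to the mutual information. Probabilities below are hypothetical.

import math

p_x = 0.10          # marginal probability of sensitive value x
p_x_given_y = 0.60  # probability of x within the record's equivalence class y

i_xy = math.log2(p_x_given_y / p_x)
print(f"one-symbol information: {i_xy:.2f} bits")  # high value = high risk
```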

  1. Can International Large-Scale Assessments Inform a Global Learning Goal? Insights from the Learning Metrics Task Force

    Science.gov (United States)

    Winthrop, Rebecca; Simons, Kate Anderson

    2013-01-01

    In recent years, the global community has developed a range of initiatives to inform the post-2015 global development agenda. In the education community, International Large-Scale Assessments (ILSAs) have an important role to play in advancing a global shift in focus to access plus learning. However, there are a number of other assessment tools…

  2. A new assessment method for image fusion quality

    Science.gov (United States)

    Li, Liu; Jiang, Wanying; Li, Jing; Yuchi, Ming; Ding, Mingyue; Zhang, Xuming

    2013-03-01

    Image fusion quality assessment plays a critically important role in the field of medical imaging. To evaluate image fusion quality effectively, many assessment methods have been proposed. Examples include mutual information (MI), root mean square error (RMSE), and the universal image quality index (UIQI). These image fusion assessment methods cannot reflect human visual perception effectively. To address this problem, we have proposed a novel image fusion assessment method which combines the nonsubsampled contourlet transform (NSCT) with regional mutual information in this paper. In this proposed method, the source medical images are firstly decomposed into different levels by the NSCT. Then the maximum NSCT coefficients of the decomposed directional images at each level are obtained to compute the regional mutual information (RMI). Finally, multi-channel RMI is computed by the weighted sum of the obtained RMI values at the various levels of the NSCT. The advantage of the proposed method lies in the fact that the NSCT can represent image information at multiple directions and scales and therefore conforms to the multi-channel characteristic of the human visual system, leading to its outstanding image assessment performance. The experimental results using CT and MRI images demonstrate that the proposed assessment method outperforms such assessment methods as MI and the UIQI-based measure in evaluating image fusion quality, and it can provide results consistent with human visual assessment.
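
    The NSCT decomposition itself is involved, but the mutual-information building block that the method aggregates over levels can be sketched with a joint histogram. The arrays below are random placeholders and the code is illustrative, not the authors' implementation.

```python
# Histogram-based mutual information between two image regions (the NSCT
# decomposition step is omitted here). Requires numpy; arrays are random
# placeholders standing in for source and fused image regions.

import numpy as np

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability estimate
    px, py = pxy.sum(axis=1), pxy.sum(axis=0) # marginals
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] *
                  np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

rng = np.random.default_rng(0)
region_src = rng.integers(0, 256, size=(64, 64)).astype(float)
region_fused = region_src + rng.normal(0, 10, size=(64, 64))
print(f"regional MI ~ {mutual_information(region_src, region_fused):.3f} bits")
```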

  3. External Quality Assessments for Microbiologic Diagnosis of Diphtheria in Europe

    OpenAIRE

    Both, Leonard; Neal, Shona; De Zoysa, Aruni; Mann, Ginder; Czumbel, Ida; Efstratiou, Androulla

    2014-01-01

    The European Diphtheria Surveillance Network (EDSN) ensures the reliable epidemiological and microbiologic assessment of disease prevalence in the European Union. Here, we describe a survey of current diagnostic techniques for diphtheria surveillance conducted across the European Union and report the results from three external quality assessment (EQA) schemes performed between 2010 and 2014.

  4. Development of a dementia assessment quality database

    DEFF Research Database (Denmark)

    Johannsen, P.; Jørgensen, Kasper; Korner, A.;

    2011-01-01

    database for dementia evaluation in the secondary health system. One volume and seven process quality indicators on dementia evaluations are monitored. Indicators include frequency of demented patients, percentage of patients evaluated within three months, whether the work-up included blood tests, Mini Mental State Examination (MMSE), brain scan and activities of daily living, and percentage of patients treated with anti-dementia drugs. Indicators can be followed over time in an individual clinic. Up to 20 variables are entered to calculate the indicators and to provide risk factor variables for the data analyses. RESULTS: The database was constructed in 2005 and covers 30% of the Danish population. Data from all consecutive cases evaluated for dementia in the secondary health system in the Capital Region of Denmark are entered. The database has shown that the basic diagnostic work-up programme

  5. Quality assessment on FBTR reactor vessel

    International Nuclear Information System (INIS)

    Fast Breeder Test Reactor (FBTR) is a 40 MWt/13MWe, mixed carbide fueled, sodium cooled, loop type reactor built at Indira Gandhi Centre for Atomic Research (IGCAR), Kalpakkam. The Reactor Vessel (RV) is manufactured using modified AISI 316 austenitic stainless steel material as per FBTR specification. The acceptance criteria for non-destructive examination, quality of weld, test requirement, tolerances on various dimensions etc. specified in FBTR specification are very stringent compared to ASME Section III, Div. I, Class I components and other international codes applicable to pressure vessels and nuclear power plant components. During the manufacture and inspection of the Reactor Vessel, a systematic approach has been adopted towards the improvement of various procedures to achieve very high reliability of the Reactor Vessel. This paper explains the details of results achieved on fabrication tolerances, destructive and non-destructive testing on materials and welds and final tests on the reactor vessel. (author)

  6. Quality assessment on FBTR reactor vessel

    Energy Technology Data Exchange (ETDEWEB)

    Shanmugam, K.; Chandramohan, R.; Ramamurthy, M.K. [Indira Gandhi Centre for Atomic Research (IGCAR), Technical Coordination and Quality Assurance Group, Kalpakkam (India)

    1997-08-01

    Fast Breeder Test Reactor (FBTR) is a 40 MWt/13MWe, mixed carbide fueled, sodium cooled, loop type reactor built at Indira Gandhi Centre for Atomic Research (IGCAR), Kalpakkam. The Reactor Vessel (RV) is manufactured using modified AISI 316 austenitic stainless steel material as per FBTR specification. The acceptance criteria for non-destructive examination, quality of weld, test requirement, tolerances on various dimensions etc. specified in FBTR specification are very stringent compared to ASME Section III, Div. I, Class I components and other international codes applicable to pressure vessels and nuclear power plant components. During the manufacture and inspection of the Reactor Vessel, a systematic approach has been adopted towards the improvement of various procedures to achieve very high reliability of the Reactor Vessel. This paper explains the details of results achieved on fabrication tolerances, destructive and non-destructive testing on materials and welds and final tests on the reactor vessel. (author).

  7. EPIDEMIOLOGY FOR QUALITY ASSESSMENT OF ORAL HEALTH SERVICES

    Directory of Open Access Journals (Sweden)

    Zaura Anggraeni Matram

    2015-08-01

    Full Text Available The need for quality assessment and assurance in health and oral health has become an issue of major concern in Indonesia, particularly in relation to the significant decrease in available resources caused by the persistent economic crisis. Financial and socioeconomic pressures have led to the need for low-cost, high-quality, accessible oral care. Dentists are ultimately responsible for the quality of care performed in Public Health Centers (Puskesmas), especially for school and community dental programmes, which are often carried out by various types of health manpower such as dental nurses and cadres (volunteers). In this paper, emphasis is placed on two epidemiological models to assess the quality of service outcomes as well as management control for quality assessment in a school dental programme. The epidemiological models were developed for assessing the effectiveness of oral health education and simple oral prophylaxis carried out in the School Dental Programme (known as UKGS). With these epidemiological approaches, it is hoped that dentists will gain increased appreciation for qualitative assessment of the quality of care instead of just quantitatively meeting the targets that many health administrations use to indicate success.

  8. Quality assessment of TPB-based questionnaires: a systematic review.

    Directory of Open Access Journals (Sweden)

    Obiageli Crystal Oluka

    Full Text Available OBJECTIVE: This review is aimed at assessing the quality of questionnaires and their development process based on the theory of planned behavior (TPB change model. METHODS: A systematic literature search for studies with the primary aim of TPB-based questionnaire development was conducted in relevant databases between 2002 and 2012 using selected search terms. Ten of 1,034 screened abstracts met the inclusion criteria and were assessed for methodological quality using two different appraisal tools: one for the overall methodological quality of each study and the other developed for the appraisal of the questionnaire content and development process. Both appraisal tools consisted of items regarding the likelihood of bias in each study and were eventually combined to give the overall quality score for each included study. RESULTS: 8 of the 10 included studies showed low risk of bias in the overall quality assessment of each study, while 9 of the studies were of high quality based on the quality appraisal of questionnaire content and development process. CONCLUSION: Quality appraisal of the questionnaires in the 10 reviewed studies was successfully conducted, highlighting the top problem areas (including: sample size estimation; inclusion of direct and indirect measures; and inclusion of questions on demographics in the development of TPB-based questionnaires and the need for researchers to provide a more detailed account of their development process.

  9. ASSESSMENT OF QUALITY OF LIFE IN CANCER PATIENTS

    Directory of Open Access Journals (Sweden)

    Fereshteh Farzianpour

    2014-01-01

    Full Text Available Standards of the Joint Commission International (JCI) emphasize organizational performance in basic functional domains including patient rights, patient care, medical safety and infection control. These standards are focused on two principles: expectations of actual organizational performance, and assessment of organizational capabilities to provide high-quality and safe Health Care Services (HCS). The aim of this study was to analyze a regression model of the Quality of Life (QOL) of cancer patients from Mazandaran province in 2013. This descriptive cross-sectional study was carried out on 185 patients referred to the Rajaee Chemotherapy Center in 2013, within the first three months after a chemotherapy treatment session. Sampling was purposive. General quality of life was assessed using the WHO questionnaire (WHOQOL-BREF) and disease-specific quality of life was assessed using a researcher-developed questionnaire. Data analysis consisted of a multiple regression method, and the one-sample Kolmogorov-Smirnov test was used for comparison. Statistical analysis showed that the mean general, disease-specific and overall quality-of-life scores were 0.96, 1.13 and 1.04, respectively, on a scale of 1 to 5. Given the low general and disease-specific quality of life, full integration of patient care programs into the primary health care system, easy access, and facilitated interventions to improve quality of life are recommended. The motivation behind the research, and its implication, is the improvement of the QOL of cancer patients.

  10. Measuring data quality for ongoing improvement a data quality assessment framework

    CERN Document Server

    Sebastian-Coleman, Laura

    2013-01-01

    The Data Quality Assessment Framework shows you how to measure and monitor data quality, ensuring quality over time. You'll start with general concepts of measurement and work your way through a detailed framework of more than three dozen measurement types related to five objective dimensions of quality: completeness, timeliness, consistency, validity, and integrity. Ongoing measurement, rather than one-time activities, will help your organization reach a new level of data quality. This plain-language approach to measuring data can be understood by both business and IT and provides pra

  11. AN ASSESSMENT AND OPTIMIZATION OF QUALITY OF STRATEGY PROCESS

    Directory of Open Access Journals (Sweden)

    Snezana Nestic

    2013-12-01

    Full Text Available In order to improve the quality of their processes, companies usually rely on quality management systems and the requirements of ISO 9001:2008. Small and medium-sized companies face a series of challenges in objectifying, evaluating and assessing process quality. In this paper, the strategy process of a typical medium-sized manufacturing company is decomposed, and indicators for the defined subprocesses are developed based on the requirements of ISO 9001:2008. The weights of the subprocesses are calculated using a fuzzy set approach. Finally, the developed solution based on a genetic algorithm approach is presented and tested on data from 142 manufacturing companies. The presented solution enables assessment of the quality of a strategy process, ranks the indicators and provides a basis for successful improvement of the quality of the strategy process.

  12. Balancing Attended and Global Stimuli in Perceived Video Quality Assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew;

    2011-01-01

    The visual attention mechanism plays a key role in the human perception system and has a significant impact on our assessment of perceived video quality. In spite of receiving less attention from the viewers, unattended stimuli can still contribute to the understanding of the visual content. ... tuned by the attention map considers the degradations on the significantly attended stimuli. To generate the overall video quality score, global and local quality features are combined by a content-adaptive linear fusion method and pooled over time, taking the temporal quality variation...

  13. Reliability of medical audit in quality assessment of medical care

    Directory of Open Access Journals (Sweden)

    Camacho Luiz Antonio Bastos

    1996-01-01

    Full Text Available Medical audit of hospital records has been a major component of quality of care assessment, although physician judgment is known to have low reliability. We estimated interrater agreement of quality assessment in a sample of patients with cardiac conditions admitted to an American teaching hospital. Physician-reviewers used structured review methods designed to improve quality assessment based on judgment. Chance-corrected agreement for the items considered more relevant to process and outcome of care ranged from low to moderate (0.2 to 0.6), depending on the review item and the principal diagnoses and procedures the patients underwent. Results from several studies seem to converge on this point. Comparisons among different settings should be made with caution, given the sensitivity of agreement measurements to prevalence rates. Reliability of review methods in their current stage could be improved by combining the assessment of two or more reviewers, and by emphasizing outcome-oriented events.

  14. Preliminary quality assessment of bovine colostrum

    Directory of Open Access Journals (Sweden)

    Alessandro Taranto

    2013-02-01

    Full Text Available Data on bovine colostrum quality are scarce or absent, although Commission Regulations No 1662/2006 and No 1663/2006 include colostrum in the context of chapters on milk. Thus the aim of the present work is to study some physical, chemical, hygiene and safety quality parameters of bovine colostrum samples collected from Sicily and Calabria dairy herds. Thirty individual samples were collected 2-3 days after partum. The laboratory tests included: pH, fat (FT), total nitrogen (TN), lactose (LTS) and dry matter (DM) percentage (Lactostar), and somatic cell count (CCS) (DeLaval cell counter DCC). Bacterial counts included: standard plate count (SPC), total psychrophilic aerobic count (PAC), total and fecal coliforms by MPN (Most Probable Number), and sulphite-reducing bacteria (SR). The presence of Salmonella spp. was determined. Bacteriological examinations were performed according to the American Public Health Association (APHA) methods, with some adjustments related to the requirements of the study. Statistical analysis of the data was performed using Spearman's rank correlation coefficient. The results showed low variability of pH values and of FT, TN and DM percentages between samples, whereas the LTS trend was less consistent. A significant negative correlation (P<0.01) was observed between pH and the TN and LTS amounts. The correlation between LTS and TN contents was highly significant (P<0.001). The correlation between DM, TN and LTS content was highly significant and negative (P<0.001). SPC mean values were 7.54×10⁶ CFU/mL; PAC mean values were also high (3.3×10⁶ CFU/mL). Acceptable values of coagulase-positive staphylococci were found; three Staphylococcus aureus strains and one Staphylococcus epidermidis strain were isolated. Coagulase-negative staphylococci counts were low. High variability in the numbers of total coliforms (TC) and fecal coliforms (FC) was observed; bacterial loads were frequently fairly high. Salmonella spp. and SR bacteria were absent. It was assumed that bacteria from the samples had a prevailing environmental origin.

  15. Assessment on reliability of water quality in water distribution systems

    Institute of Scientific and Technical Information of China (English)

    伍悦滨; 田海; 王龙岩

    2004-01-01

    Water leaving the treatment works is usually of high quality, but its properties change during transportation. There is increasing awareness of the quality of service provided within the water industry today, and assessing the reliability of water quality in a distribution system has become of major significance for decisions on system operation based on water quality in distribution networks. Used together, a water age model, a chlorine decay model and a model of acceptable maximum water age can assess the reliability of the water quality in a distribution system. First, the nodal water age values in a complex distribution system are calculated by the water age model. Then, the acceptable maximum water age value in the distribution system is obtained from the chlorine decay model. The nodes at which the water age values are below the maximum value are regarded as reliable nodes. Finally, the reliability index, the percentage of reliable nodes weighted by the nodal demands, reflects the reliability of the water quality in the distribution system. The approach has been applied to a real water distribution network. A contour plot based on the water age values defines a surface of water quality reliability. At any time, this surface can be used to locate areas of high water age and poor reliability, which identify parts of the network that may have poor water quality. As a result, the water age contour provides a valuable aid for direct insight into the water quality in the distribution system.
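
    A minimal sketch of the demand-weighted reliability index as described: the share of nodal demand served at water ages below the acceptable maximum. The node data are hypothetical; in practice the ages would come from the water age model and the threshold from the chlorine decay model.

```python
# Demand-weighted water quality reliability index: fraction of total demand
# at nodes whose water age is within the acceptable maximum. All node data
# and the threshold below are hypothetical placeholders.

nodes = [
    {"id": "n1", "demand_lps": 12.0, "water_age_h": 18.0},
    {"id": "n2", "demand_lps": 30.0, "water_age_h": 36.0},
    {"id": "n3", "demand_lps": 8.0,  "water_age_h": 50.0},
]
max_acceptable_age_h = 40.0   # would be derived from the chlorine decay model

served = sum(n["demand_lps"] for n in nodes
             if n["water_age_h"] <= max_acceptable_age_h)
total = sum(n["demand_lps"] for n in nodes)
print(f"water quality reliability: {served / total:.1%}")
```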

  16. Identification of Nominated Classes for Software Refactoring Using Object-Oriented Cohesion Metrics

    Directory of Open Access Journals (Sweden)

    Safwat M. Ibrahim

    2012-03-01

    Full Text Available The production of well-developed software reduces the cost of software maintenance. Therefore, many software metrics have been developed to measure the quality of software design. Measuring class cohesion is considered one of the most important software quality measurements. Unfortunately, most of the cohesion metrics that have been proposed do not consider inherited attributes and methods when measuring class cohesion. This paper provides a novel assessment criterion for measuring the quality of a software design in which inherited attributes and methods are considered in the assessment. This offers a guideline for choosing the proper Depth of Inheritance Tree (DIT) that identifies the classes nominated for refactoring. Experiments are carried out on more than 35K classes from more than 16 open-source projects using the most widely used cohesion metrics.
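
    The abstract does not spell out the proposed metric's formula, so the sketch below shows one simple cohesion measure in the same spirit: the mean fraction of class attributes, inherited ones included, that each method uses. Names and data are hypothetical.

```python
# Illustrative cohesion measure that counts inherited attributes alongside
# locally defined ones (not the paper's exact metric). Cohesion is the mean
# fraction of the class's attributes that each method references.

class_attrs = {"a", "b", "c_inherited"}      # includes an inherited attribute
method_uses = {
    "m1": {"a", "b"},
    "m2": {"b", "c_inherited"},
    "m3": {"a"},
}

cohesion = sum(len(used & class_attrs) / len(class_attrs)
               for used in method_uses.values()) / len(method_uses)
print(f"class cohesion = {cohesion:.2f}")    # 1.0 = fully cohesive
```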

  17. Quality assessment of a placental perfusion protocol

    DEFF Research Database (Denmark)

    Mathiesen, Line; Mose, Tina; Mørck, Thit Juul;

    2010-01-01

    Validation of in vitro test systems using the modular approach, with steps addressing reliability and relevance, is an important aim when developing in vitro tests in e.g. reproductive toxicology. The ex vivo human placental perfusion system may be used for such validation; here we present the placental perfusion model in Copenhagen, including control substances. The positive control substance antipyrine shows no difference in transport regardless of the perfusion media used or of terms of delivery (n=59, p... ml h(-1) from the fetal reservoir) when adding 2 (n=7) and 20 mg (n=9) FITC-dextran/100 ml fetal perfusion media. The success rate of the Copenhagen placental perfusions is provided in this study, including considerations and quality control parameters. Three checkpoints suggested to determine success...

  18. Assessment of mesh simplification algorithm quality

    Science.gov (United States)

    Roy, Michael; Nicolier, Frederic; Foufou, S.; Truchetet, Frederic; Koschan, Andreas; Abidi, Mongi A.

    2002-03-01

    Traditionally, medical geneticists have employed visual inspection (anthroposcopy) to clinically evaluate dysmorphology. In the last 20 years, there has been an increasing trend towards quantitative assessment to render diagnosis of anomalies more objective and reliable. These methods have focused on direct anthropometry, using a combination of classical physical anthropology tools and new instruments tailor-made to describe craniofacial morphometry. These methods are painstaking and require that the patient remain still for extended periods of time. Most recently, semiautomated techniques (e.g., structured light scanning) have been developed to capture the geometry of the face in a matter of seconds. In this paper, we establish that direct anthropometry and structured light scanning yield reliable measurements, with remarkably high levels of inter-rater and intra-rater reliability, as well as validity (contrasting the two methods).

  19. Assessing wine quality using isotopic methods

    International Nuclear Information System (INIS)

    Full text: The analytical methods used to determine the isotope ratios of deuterium, carbon-13 and oxygen-18 in wines have gained official recognition from the Office International de la Vigne et du Vin (OIV), now the International Organisation of Vine and Wine. The amounts of stable isotopes in water and carbon dioxide from plant organic materials, and their distribution in sugar and ethanol molecules, are influenced by the geo-climatic conditions of the region, the grape varieties and the year of harvest. For wine characterization, to prove the botanical and geographical origin of the raw material, isotopic analysis by continuous-flow isotope ratio mass spectrometry (CF-IRMS) has made a significant contribution. This paper presents the results of a study on the detection of water-adulterated wines and of non-grape alcohol and sugar additions at different concentration levels, using the CF-IRMS analytical technique. (authors)

  20. An assessment of groundwater quality using water quality index in Chennai, Tamil Nadu, India

    Directory of Open Access Journals (Sweden)

    I Nanda Balan

    2012-01-01

    Full Text Available Context: Water, the elixir of life, is a prime natural resource. Due to rapid urbanization in India, the availability and quality of groundwater have been affected. According to the Central Groundwater Board, 80% of Chennai's groundwater has been depleted and any further exploration could lead to salt water ingression. Hence, this study was done to assess the groundwater quality in Chennai city. Aim: To assess the groundwater quality using the water quality index in Chennai city. Materials and Methods: Chennai city was divided into three zones based on the legislative constituency; from these three zones, three locations were randomly selected and nine groundwater samples were collected and analyzed for physicochemical properties. Results: With the exception of a few parameters, most of the water quality parameters were within the accepted standard values of the Bureau of Indian Standards (BIS). Except for pH in a single location of zone 1, none of the parameters exceeded the permissible values for water quality assessment as prescribed by the BIS. Conclusion: This study demonstrated that in general the groundwater quality status of Chennai city ranged from excellent to good and the groundwater is fit for human consumption based on all nine parameters of the water quality index and fluoride content.

  1. Constructing Assessment Model of Primary and Secondary Educational Quality with Talent Quality as the Core Standard

    Science.gov (United States)

    Chen, Benyou

    2014-01-01

    Quality is the core of education and is important to the standardization of primary and secondary education in urban (U) and rural (R) areas. The ultimate goal of the integration of urban and rural education is the pursuit of quality urban and rural education. Based on analysing the related policy basis and the existing assessment models…

  2. Assessing the Quality of M-Learning Systems using ISO/IEC 25010

    Directory of Open Access Journals (Sweden)

    Anal Acharya

    2013-09-01

    Full Text Available Mobile learning offers several advantages over other forms of learning, such as ubiquity and idle-time utilization. However, for these advantages to be realized there should be a check on system quality; poor-quality systems will invalidate these benefits. Quality estimation in M-learning systems can be broadly classified into two categories: software system quality and learning characteristics quality. In this work, an M-Learning framework is first developed. Software system quality is then evaluated following the ISO/IEC 25010 software quality model by proposing a set of metrics that measure the characteristics of M-Learning systems. The application of these metrics is then illustrated numerically.
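    As an illustration of how per-characteristic metrics might be rolled up numerically, the sketch below computes a weighted quality index over a subset of the ISO/IEC 25010 characteristics. The weights and scores are invented for the example and are not the paper's values.

```python
# Weights over a subset of ISO/IEC 25010 characteristics (invented values).
ISO_25010_WEIGHTS = {
    "functional_suitability": 0.20,
    "performance_efficiency": 0.15,
    "usability":              0.20,
    "reliability":            0.15,
    "portability":            0.10,
    "security":               0.10,
    "maintainability":        0.10,
}

def quality_index(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean of per-characteristic scores in [0, 1]."""
    return sum(scores[c] * w for c, w in weights.items()) / sum(weights.values())

measured = {c: 0.8 for c in ISO_25010_WEIGHTS}  # placeholder measurements
print(f"overall quality: {quality_index(measured, ISO_25010_WEIGHTS):.2f}")
```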

  3. Voice and Speech Quality Perception Assessment and Evaluation

    CERN Document Server

    Jekosch, Ute

    2005-01-01

    Foundations of Voice and Speech Quality Perception starts out with the fundamental question of: "How do listeners perceive voice and speech quality and how can these processes be modeled?" Any quantitative answers require measurements. This is natural for physical quantities but harder to imagine for perceptual measurands. This book approaches the problem by actually identifying major perceptual dimensions of voice and speech quality perception, defining units wherever possible and offering paradigms to position these dimensions into a structural skeleton of perceptual speech and voice quality. The emphasis is placed on voice and speech quality assessment of systems in artificial scenarios. Many scientific fields are involved. This book bridges the gap between two quite diverse fields, engineering and humanities, and establishes the new research area of Voice and Speech Quality Perception.

  4. NOVEL IMAGE-DEPENDENT QUALITY ASSESSMENT MEASURES

    Directory of Open Access Journals (Sweden)

    Asaad Noori Hashim

    2014-01-01

    Full Text Available The image is a 2D signal whose pixels are highly correlated in a 2D manner. Hence, using a pixel-by-pixel error such as the Mean-Square Error (MSE) is not an efficient way to compare two similar images (e.g., an original image and a compressed version of it). Due to this correlation, image comparison needs a correlative quality measure. It is clear that the correlation between two signals gives an idea about the relation between the samples of the two signals; generally speaking, correlation is a measure of similarity between the two signals. An important step in image similarity was introduced by Wang and Bovik, who designed a structural similarity measure called SSIM. SSIM has been widely used; it is based on statistical similarity between the two images. However, SSIM can produce confusing results in some cases, giving a non-trivial amount of similarity while the two images are quite different. This study proposes methods to determine a reliable similarity between any two images, similar or dissimilar, in the sense that dissimilar images have a near-zero similarity measure, while similar images give near-one (maximum) similarity. The proposed methods are based on image-dependent properties, specifically the outcomes of edge detection and segmentation, in addition to the statistical properties. The proposed methods are tested under Gaussian noise, impulse noise and blur, where good results have been obtained even at low Peak Signal-to-Noise Ratios (PSNRs).
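    The contrast the abstract draws can be demonstrated in a few lines. Normalized cross-correlation is used here as a simple stand-in for a correlative measure (it is not the paper's edge- and segmentation-based method): it stays near 1 for a structurally identical but brightened image even though the MSE is large.

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation in [-1, 1]; near 1 for similar images."""
    a = a.astype(float).ravel(); b = b.astype(float).ravel()
    a -= a.mean(); b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
brighter = np.clip(img + 40, 0, 255)    # same structure, large pixelwise error
noise = rng.integers(0, 256, (64, 64))  # unrelated content

print(mse(img, brighter), ncc(img, brighter))  # high MSE, correlation near 1
print(mse(img, noise), ncc(img, noise))        # high MSE, correlation near 0
```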

  5. Assessing the quality of a student-generated question repository

    Science.gov (United States)

    Bates, Simon P.; Galloway, Ross K.; Riise, Jonathan; Homer, Danny

    2014-12-01

    We present results from a study that categorizes and assesses the quality of questions and explanations authored by students in question repositories produced as part of the summative assessment in introductory physics courses over two academic sessions. Mapping question quality onto the levels in the cognitive domain of Bloom's taxonomy, we find that students produce questions of high quality. More than three-quarters of questions fall into categories beyond simple recall, in contrast to similar studies of student-authored content in different subject domains. Similarly, the quality of student-authored explanations for questions was also high, with approximately 60% of all explanations classified as being of high or outstanding quality. Overall, 75% of questions met combined quality criteria, which we hypothesize is due in part to the in-class scaffolding activities that we provided for students ahead of requiring them to author questions. This work presents the first systematic investigation into the quality of student produced assessment material in an introductory physics context, and thus complements and extends related studies in other disciplines.

  6. Semantic Metrics for Object Oriented Design

    Science.gov (United States)

    Etzkorn, Lethe

    2003-01-01

    The purpose of this proposal is to research a new suite of object-oriented (OO) software metrics, called semantic metrics, that have the potential to help software engineers identify fragile, low-quality code sections much earlier in the development cycle than is possible with traditional OO metrics. With earlier and better fault detection, software maintenance will be less time consuming and expensive, and software reusability will be improved. Because it is less costly to correct faults found earlier than to correct faults found later in the software lifecycle, the overall cost of software development will be reduced. Semantic metrics can be derived from the knowledge base of a program understanding system. A program understanding system is designed to understand a software module; once understanding is complete, the knowledge base contains digested information about the software module. Various semantic metrics can be collected on the knowledge base. This new kind of metric measures domain complexity, or the relationship of the software to its application domain, rather than implementation complexity, which is what traditional software metrics measure. A semantic metric will thus map much more closely to qualities humans are interested in, such as cohesion and maintainability, than is possible using traditional metrics, which are calculated using only syntactic aspects of software.

  7. A Brief Overview Of Software Testing Metrics

    Directory of Open Access Journals (Sweden)

    Premal B. Nirpal,

    2011-01-01

    Full Text Available Metrics are gaining importance and acceptance in corporate sectors as organizations grow, mature and strive to improve enterprise qualities. Measurement of a test process is a required competence for an effective software test manager designing and evaluating a cost-effective test strategy. Effective management of any process requires quantification, measurement and modeling. Software metrics provide a quantitative approach to the development and validation of software process models, and help an organization to obtain the information it needs to continue to improve its productivity, reduce errors, improve acceptance of processes, products and services, and achieve its desired goals. This paper discusses the metrics lifecycle, various software testing metrics, the need for having metrics, and the evaluation process for arriving at a sound conclusion.
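    A minimal sketch of three of the standard testing metrics such overviews typically cover; the formulas are the common textbook ones, not specific to this paper, and the numbers are invented.

```python
def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code."""
    return defects_found / size_kloc

def test_case_effectiveness(defects_by_tests: int, total_defects: int) -> float:
    """Share of all known defects that the test suite caught."""
    return defects_by_tests / total_defects

def test_coverage(executed: int, total: int) -> float:
    """Fraction of planned test cases actually executed."""
    return executed / total

print(defect_density(46, 12.5))          # 3.68 defects/KLOC
print(test_case_effectiveness(40, 46))   # ~0.87
print(test_coverage(180, 200))           # 0.9
```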

  8. Non-human biota dose assessment. Sensitivity analysis and knowledge quality assessment

    International Nuclear Information System (INIS)

    This report provides a summary of a programme of work, commissioned within the BIOPROTA collaborative forum, to assess the quantitative and qualitative elements of uncertainty associated with biota dose assessment of potential impacts of long-term releases from geological disposal facilities (GDF). Quantitative and qualitative aspects of uncertainty were determined through sensitivity and knowledge quality assessments, respectively. Both assessments focused on default assessment parameters within the ERICA assessment approach. The sensitivity analysis was conducted within the EIKOS sensitivity analysis software tool and was run in both generic and test case modes. The knowledge quality assessment involved development of a questionnaire around the ERICA assessment approach, which was distributed to a range of experts in the fields of non-human biota dose assessment and radioactive waste disposal assessments. Combined, these assessments enabled critical model features and parameters that are both sensitive (i.e. have a large influence on model output) and of low knowledge quality to be identified for each of the three test cases. The output of this project is intended to provide information on those parameters that may need to be considered in more detail for prospective site-specific biota dose assessments for GDFs. Such information should help users to enhance the quality of their assessments and build greater confidence in the results. (orig.)

  9. An Approach for Assessing the Signature Quality of Various Chemical Assays when Predicting the Culture Media Used to Grow Microorganisms

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, Aimee E.; Sego, Landon H.; Webb-Robertson, Bobbie-Jo M.; Kreuzer, Helen W.; Anderson, Richard M.; Unwin, Stephen D.; Weimar, Mark R.; Tardiff, Mark F.; Corley, Courtney D.

    2013-02-01

    We demonstrate an approach for assessing the quality of a signature system designed to predict the culture medium used to grow a microorganism. The system comprised four chemical assays designed to identify various ingredients that could be used to produce the culture medium. The analytical measurements resulting from any combination of these four assays can be used in a Bayesian network to predict the probabilities that the microorganism was grown using one of eleven culture media. We evaluated combinations of the signature system by removing one or more of the assays from the Bayes network. We measured and compared the quality of the various Bayes nets in terms of fidelity, cost, risk, and utility, a method we refer to as Signature Quality Metrics.
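    A toy sketch of the inference step such a network performs: a posterior over candidate media given binary assay outcomes, assuming conditionally independent assays. The media, assays and likelihoods below are invented and far simpler than the paper's eleven-media network.

```python
def posterior(prior: dict, likelihood: dict, observed: dict) -> dict:
    """P(medium | assays) via Bayes' rule with conditionally independent assays."""
    unnorm = {}
    for medium, p in prior.items():
        for assay, result in observed.items():
            p_pos = likelihood[medium][assay]
            p *= p_pos if result else (1.0 - p_pos)
        unnorm[medium] = p
    z = sum(unnorm.values())
    return {m: v / z for m, v in unnorm.items()}

prior = {"LB": 0.5, "TSB": 0.5}
likelihood = {  # P(assay positive | medium); invented numbers
    "LB":  {"yeast_extract": 0.95, "glucose": 0.10},
    "TSB": {"yeast_extract": 0.40, "glucose": 0.90},
}
print(posterior(prior, likelihood, {"yeast_extract": True, "glucose": False}))
# -> LB dominates (~0.96), since the observed pattern fits its assay profile
```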

  10. From Log Files to Assessment Metrics: Measuring Students' Science Inquiry Skills Using Educational Data Mining

    Science.gov (United States)

    Gobert, Janice D.; Sao Pedro, Michael; Raziuddin, Juelaila; Baker, Ryan S.

    2013-01-01

    We present a method for assessing science inquiry performance, specifically for the inquiry skill of designing and conducting experiments, using educational data mining on students' log data from online microworlds in the Inq-ITS system (Inquiry Intelligent Tutoring System; www.inq-its.org). In our approach, we use a 2-step process: First we…

  11. Evaluating MyPlate: An Expanded Framework Using Traditional and Nontraditional Metrics for Assessing Health Communication Campaigns

    Science.gov (United States)

    Levine, Elyse; Abbatangelo-Gray, Jodie; Mobley, Amy R.; McLaughlin, Grant R.; Herzog, Jill

    2012-01-01

    MyPlate, the icon and multimodal communication plan developed for the 2010 Dietary Guidelines for Americans (DGA), provides an opportunity to consider new approaches to evaluating the effectiveness of communication initiatives. A review of indicators used in assessments for previous DGA communication initiatives finds gaps in accounting for…

  12. Assessment of Electric Power Quality in Ships' Modern Systems

    Institute of Scientific and Technical Information of China (English)

    Janusz Mindykowski; XU Xiao-yan

    2004-01-01

    The paper deals with selected problems of electric power quality in ships' modern systems. In the introduction, the fundamentals of electric power quality assessment, such as the relations and consequences among power quality phenomena and indices, and the methods, tools and appropriate instrumentation, are briefly presented. Afterwards, the basic characteristics of power systems on modern ships are given. The main focus of the paper is on the assessment of electric power quality in ships' systems fitted with converter subsystems. The state of the art and current tendencies in the discussed matter are shown. Some chosen experimental results, based on research carried out under the supervision of the author, are also presented. Finally, some concluding issues are briefly commented on.

  13. Quality assessment of malaria laboratory diagnosis in South Africa.

    Science.gov (United States)

    Dini, Leigh; Frean, John

    2003-01-01

    To assess the quality of malaria diagnosis in 115 South African laboratories participating in the National Health Laboratory Service Parasitology External Quality Assessment Programme, we reviewed the results from 7 surveys from January 2000 to August 2002. The mean percentage incorrect result rate was 13.8% (95% CI 11.3-16.9%), which is alarmingly high, with about 1 in 7 blood films being incorrectly interpreted. Most participants with incorrect blood film interpretations had acceptable Giemsa staining quality, indicating that the problem lies less with staining technique than with blood film interpretation. Laboratories in provinces in which malaria is endemic did not necessarily perform better than those in non-endemic areas. The results clearly suggest that malaria laboratory diagnosis throughout South Africa needs strengthening by improving laboratory standardization and auditing, training, quality assurance and referral resources. PMID:16117961

  14. Sediment quality and ecorisk assessment factors for a major river system

    International Nuclear Information System (INIS)

    Sediment-related water quality and risk assessment parameters for the Columbia River were developed using heavy metal loading and concentration data from Lake Roosevelt (river km 1120) to the mouth and adjacent coastal zone. Correlation of Pb, Zn, Hg, and Cd concentrations in downstream sediments with refinery operations in British Columbia suggests that solutes with Kd's > 10^5 reach about 1 to 5 μg/g per metric ton/year of input. A low suspended load (upriver avg. <10 mg/L) and high particle-surface reactivity account for the high clay-fraction contaminant concentrations. In addition, a sediment exposure path was demonstrated based on analysis of post-shutdown biodynamics of a heavy metal radiotracer. The slow decline in sediment was attributed to resuspension, bioturbation, and anthropogenic disturbances. The above findings suggest that conservative sediment quality criteria should be used to restrict additional contaminant loading in the upper drainage basin. The issuance of an advisory for Lake Roosevelt, due in part to Hg accumulation in large sport fish, suggests more restrictive controls are needed. A monitoring strategy for assessing human exposure potential and the ecological health of the river is proposed

  15. Sediment quality and ecorisk assessment factors for a major river system

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, V.G. [Westinghouse Hanford Co., Richland, WA (United States); Wagner, J.J. [Pacific Northwest Lab., Richland, WA (United States); Cutshall, N.H. [Oak Ridge National Lab., TN (United States)

    1993-08-01

    Sediment-related water quality and risk assessment parameters for the Columbia River were developed using heavy metal loading and concentration data from Lake Roosevelt (river km 1120) to the mouth and adjacent coastal zone. Correlation of Pb, Zn, Hg, and Cd concentrations in downstream sediments with refinery operations in British Columbia suggests that solutes with Kd's > 10^5 reach about 1 to 5 μg/g per metric ton/year of input. A low suspended load (upriver avg. <10 mg/L) and high particle-surface reactivity account for the high clay-fraction contaminant concentrations. In addition, a sediment exposure path was demonstrated based on analysis of post-shutdown biodynamics of a heavy metal radiotracer. The slow decline in sediment was attributed to resuspension, bioturbation, and anthropogenic disturbances. The above findings suggest that conservative sediment quality criteria should be used to restrict additional contaminant loading in the upper drainage basin. The issuance of an advisory for Lake Roosevelt, due in part to Hg accumulation in large sport fish, suggests more restrictive controls are needed. A monitoring strategy for assessing human exposure potential and the ecological health of the river is proposed.

  16. Quality assessment of user-generated video using camera motion

    OpenAIRE

    Guo, Jinlin; Gurrin, Cathal; Hopfgartner, Frank; Zhang, ZhenXing; Lao, Songyang

    2013-01-01

    With user-generated video (UGV) becoming so popular on the Web, a reliable quality assessment (QA) measure for UGV is necessary for improving users' quality of experience in video-based applications. In this paper, we explore QA of UGV based on how much irregular camera motion it contains, in a low-cost manner. A block-match based optical flow approach has been employed to extract camera motion features in UGV, based on which irregular camera motion is calculated and ...
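    A rough numpy sketch of the underlying idea: estimating inter-frame camera motion by block matching. This is a simplified stand-in for the block-match optical flow approach the abstract describes, not the authors' implementation.

```python
import numpy as np

def block_motion(prev: np.ndarray, curr: np.ndarray, search: int = 4):
    """Best global (dy, dx) shift matching `curr` to `prev` by sum of
    absolute differences over the central block."""
    h, w = prev.shape
    core = prev[search:h - search, search:w - search].astype(float)
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[search + dy:h - search + dy,
                        search + dx:w - search + dx].astype(float)
            err = np.abs(core - cand).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(1)
f0 = rng.integers(0, 256, (64, 64))
f1 = np.roll(f0, shift=(2, -1), axis=(0, 1))  # camera moved down 2, left 1
print(block_motion(f0, f1))                   # -> (2, -1)
# Over a whole clip, low variance of successive motion vectors suggests a
# steady pan; high variance suggests the irregular motion (shake) of interest.
```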

  17. Total Variation Based Perceptual Image Quality Assessment Modeling

    OpenAIRE

    Yadong Wu; Hongying Zhang; Ran Duan

    2014-01-01

    Visual quality measurement is one of the fundamental and important issues in numerous applications of image and video processing. In this paper, based on the assumption that the human visual system is sensitive to image structures (edges) and image local luminance (light stimulation), we propose a new perceptual image quality assessment (PIQA) measure based on the total variation (TV) model (TVPIQA) in the spatial domain. The proposed measure compares TVs between a distorted image and its reference image to ...
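    For reference, the total-variation quantity such a measure builds on is a one-liner. The sketch below computes the standard anisotropic TV, just the building block, not the paper's full TVPIQA measure.

```python
import numpy as np

def total_variation(img: np.ndarray) -> float:
    """Anisotropic TV: sum of absolute horizontal and vertical differences."""
    img = img.astype(float)
    dh = np.abs(np.diff(img, axis=1)).sum()
    dv = np.abs(np.diff(img, axis=0)).sum()
    return float(dh + dv)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))   # smooth gradient
noisy = clean + rng.normal(0, 20, clean.shape)      # noise raises TV sharply
print(total_variation(clean), total_variation(noisy))
# Comparing TV statistics of a distorted image against its reference is the
# core idea: distortions that disrupt edges change the local TV.
```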

  18. Medical education quality assessment. Perspectives in University Policlinic context.

    Directory of Open Access Journals (Sweden)

    Maricel Castellanos González

    2008-08-01

    Full Text Available Quality currently has a central role within our National Health System, particularly in the training of human resources, where professionals must be ever better prepared and ready to face complex tasks. We conducted a bibliographic review of quality assessment of the educational process in the health system to analyze the perspectives of the new University Policlinic model, the formative context for Medical Sciences students.

  19. Assessment of Air Quality Status in Wuhan, China

    OpenAIRE

    Jiabei Song; Wu Guang; Linjun Li; Rongbiao Xiang

    2016-01-01

    In this study, air quality characteristics in Wuhan were assessed through descriptive statistics and Hierarchical Cluster Analysis (HCA). Results show that air quality has improved slightly over recent years. While the NO2 concentration is still increasing, the PM10 concentration shows a clear downward trend with some small fluctuations. In addition, the SO2 concentration has steadily decreased since 2008. Nevertheless, the current level of air pollutants is still quite high, with the P...

  20. Assessment of spatial audio quality based on sound attributes

    OpenAIRE

    LE BAGOUSSE, Sarah; Paquier, Mathieu; Colomes, Catherine

    2012-01-01

    Spatial audio technologies are becoming very important in audio broadcast services, but there is a lack of methods for evaluating spatial audio quality. Standards do not take into account the spatial dimension of sound, and assessments are limited to overall quality, particularly in the context of audio coding. Through different elicitation methods, a long list of attributes has been established to characterize sound, but it is difficult to include them all in a listening test. ...

  1. Modeling the Color Image and Video Quality on Liquid Crystal Displays with Backlight Dimming

    OpenAIRE

    Korhonen, Jari; Mantel, Claire; Burini, Nino; Forchhammer, Søren

    2013-01-01

    Objective image and video quality metrics focus mostly on the digital representation of the signal. However, the display characteristics are also essential for the overall Quality of Experience (QoE). In this paper, we use a model of a backlight dimming system for Liquid Crystal Display (LCD) and show how the modeled image can be used as an input to quality assessment algorithms. For quality assessment, we propose an image quality metric, based on Peak Signal-to-Noise Ratio (PSNR) computation...
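    The PSNR computation the proposed metric starts from is standard; a minimal sketch follows (feeding it the modeled, backlight-dimmed image rather than the plain digital signal is the paper's idea, which this toy `dimmed` stand-in does not reproduce).

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64))
dimmed = np.clip(ref * 0.8, 0, 255)   # crude stand-in for backlight dimming
print(f"{psnr(ref, dimmed):.1f} dB")
```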

  2. Quality of life assessment in dogs and cats receiving chemotherapy

    DEFF Research Database (Denmark)

    Vøls, Kåre K.; Heden, Martin A.; Kristensen, Annemarie Thuri;

    2016-01-01

    A comparative analysis of published papers on the effects of chemotherapy on QoL in dogs and cats was conducted. This was supplemented with a comparison of the parameters and domains used in veterinary QoL assessments with those used in the Pediatric Quality of Life Inventory (PedsQL™) questionnaire designed to assess QoL in toddlers. Each of the identified publications including QoL assessment in dogs and cats receiving chemotherapy applied a different method of QoL assessment. In addition, the veterinary QoL assessments were mainly focused on physical clinical parameters, whereas the emotional (6/11), social (4/11) and role (4/11) domains were less represented. QoL assessment of cats and dogs receiving chemotherapy is in its infancy. The most commonly reported method to assess QoL was questionnaire based and mostly included physical and clinical parameters. Standardizing and including a complete range...

  3. The Information Quality Triangle: a methodology to assess clinical information quality.

    Science.gov (United States)

    Choquet, Rémy; Qouiyd, Samiha; Ouagne, David; Pasche, Emilie; Daniel, Christel; Boussaïd, Omar; Jaulent, Marie-Christine

    2010-01-01

    Building qualitative clinical decision support or monitoring based on information stored in clinical information (or EHR) systems cannot be done without assessing and controlling information quality. Numerous works have introduced methods and measures to qualify and enhance the quality of data, information models and terminologies. This paper introduces an approach based on an Information Quality Triangle that aims at providing a generic framework to help in characterizing quality measures and methods in the context of the integration of EHR data into a clinical data warehouse. We have successfully experimented with the proposed approach at the HEGP hospital in France, as part of the DebugIT EU FP7 project.

  4. Quantifying subjective assessment of sleep quality, quality of life and depressed mood in children with enuresis

    OpenAIRE

    Üçer, Oktay; Gümüş, Bilal

    2013-01-01

    Aim: The aim of this study was to compare a group of children who have monosymptomatic nocturnal enuresis (MNE) with a healthy control group by assessing their depression scales, quality of life and sleep quality. Methods: One hundred and one children with MNE and 38 healthy controls, aged between 8 and 16 years, were included in the study. All participants completed the Pediatric Quality of Life Inventory (PedsQL 4.0), the Depression Scale for Children (CES-DC) and the Pittsburgh Sleep Quality I...

  5. Assessing service quality satisfying the expectations of library customers

    CERN Document Server

    Hernon, Peter; Dugan, Robert

    2015-01-01

    Academic and public libraries are continuing to transform as the information landscape changes, expanding their missions into new service roles that call for improved organizational performance and accountability. Since Assessing Service Quality premiered in 1998, receiving the prestigious Highsmith Library Literature Award, scores of library managers and administrators have trusted its guidance for applying a customer-centered approach to service quality and performance evaluation. This extensively revised and updated edition explores even further the ways technology influences both the experiences of library customers and the ways libraries themselves can assess those experiences.

  6. Assessment of foodservice quality and identification of improvement strategies using hospital foodservice quality model

    Science.gov (United States)

    Kim, Kyungjoo; Kim, Minyoung

    2010-01-01

    The purposes of this study were to assess hospital foodservice quality and to identify causes of quality problems and improvement strategies. Based on the review of literature, hospital foodservice quality was defined and the Hospital Foodservice Quality model was presented. The study was conducted in two steps. In Step 1, nutritional standards specified on diet manuals and nutrients of planned menus, served meals, and consumed meals for regular, diabetic, and low-sodium diets were assessed in three general hospitals. Quality problems were found in all three hospitals since patients consumed less than their nutritional requirements. Considering the effects of four gaps in the Hospital Foodservice Quality model, Gaps 3 and 4 were selected as critical control points (CCPs) for hospital foodservice quality management. In Step 2, the causes of the gaps and improvement strategies at CCPs were labeled as "quality hazards" and "corrective actions", respectively and were identified using a case study. At Gap 3, inaccurate forecasting and a lack of control during production were identified as quality hazards and corrective actions proposed were establishing an accurate forecasting system, improving standardized recipes, emphasizing the use of standardized recipes, and conducting employee training. At Gap 4, quality hazards were menus of low preferences, inconsistency of menu quality, a lack of menu variety, improper food temperatures, and patients' lack of understanding of their nutritional requirements. To reduce Gap 4, the dietary departments should conduct patient surveys on menu preferences on a regular basis, develop new menus, especially for therapeutic diets, maintain food temperatures during distribution, provide more choices, conduct meal rounds, and provide nutrition education and counseling. The Hospital Foodservice Quality Model was a useful tool for identifying causes of the foodservice quality problems and improvement strategies from a holistic point of view

  7. Assessment of quality of life in bronchial asthma patients

    Directory of Open Access Journals (Sweden)

    N Nalina

    2015-01-01

    Full Text Available Introduction: Asthma is a common chronic disease that affects persons of all ages. People with asthma report impacts on the physical, psychological and social domains of quality of life. Health-related quality of life (HRQoL) measures have been developed to complement traditional health measures such as prevalence, mortality and hospitalization as indicators of the impact of disease. Objective and Study Design: The objective of this study was to assess HRQoL in bronchial asthma patients and to relate the severity of asthma to their quality of life. About 85 asthma patients were evaluated for HRQoL, and their pulmonary function test values were correlated with HRQoL scores. Results and Conclusion: It was found that asthma patients had poor quality of life. There was greater impairment in quality of life in female, obese and middle-aged patients, indicating that sex, body mass index and age are determinants of HRQoL in asthma patients.

  8. Assessing users satisfaction with service quality in Slovenian public library

    Directory of Open Access Journals (Sweden)

    Igor Podbrežnik

    2016-07-01

    Full Text Available Purpose: Research was conducted into user satisfaction with the quality of library services in one of the Slovenian public libraries. The aim was to establish the level of service quality actually expected by users, and to determine their satisfaction with the current quality of available library services. Methodology: The research was performed by means of the SERVQUAL measuring tool, which determines the size and direction of the gap between the perceived and the expected quality of library services among public library users. Results: Different groups of users provide different assessments of specific quality factors, and a library cannot satisfy the expectations of each and every user if most quality factors display discrepancies between perception and expectation. Users expect more reliable services and more qualified library staff members who understand and allocate time for each user's individual needs. The largest discrepancies from expectations are detected among users in the under-35 age group and among the more experienced and skilled library users. The results of factor analysis confirm that a large number of quality factors can be explained by three common factors affecting the satisfaction of library users. A strong connection between user satisfaction and their assessment of the overall quality of services and loyalty has been established. Research restrictions: The research results should not be generalised and applied to all Slovenian public libraries since they differ in many important aspects. In addition, a non-random sampling method was used. Research originality/Applicability: The conducted research illustrates the use of a measuring tool developed to determine user satisfaction with the quality of library services in Slovenian public libraries. Keywords: public library, user satisfaction, quality of library services, user
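    The SERVQUAL gap computation itself is simple; a minimal sketch follows. The dimension names are the standard SERVQUAL five; the scores are invented, not the study's data.

```python
# Gap = perception - expectation per dimension; negative gaps are shortfalls.
expectations = {"tangibles": 5.9, "reliability": 6.4, "responsiveness": 6.1,
                "assurance": 6.2, "empathy": 6.0}
perceptions  = {"tangibles": 5.7, "reliability": 5.6, "responsiveness": 5.8,
                "assurance": 6.0, "empathy": 5.4}

gaps = {dim: round(perceptions[dim] - expectations[dim], 2) for dim in expectations}
for dim, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{dim:15s} {gap:+.2f}")   # most negative gap = biggest shortfall
```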

  9. Landscape morphology metrics for urban areas: analysis of the role of vegetation in the management of the quality of urban environment

    Directory of Open Access Journals (Sweden)

    Danilo Marques de Magalhães

    2013-05-01

    Full Text Available This study aims to demonstrate the applicability of landscape metric analysis to fragments of urban land use. More specifically, it focuses on low vegetation cover, arboreal and shrub vegetation, and their distribution across land uses. Differences in vegetation cover in dense urban areas are explained. The paper also briefly discusses the state of the art in Landscape Ecology and landscape metrics. As an example, it develops a case study in Belo Horizonte, Minas Gerais, Brazil, selecting area metrics: the relations between area, perimeter, core, and circumscribed circle. From this analysis, the paper proposes the definition of priority areas for conservation, urban parks, free spaces of common land, linear parks and green corridors. It is demonstrated that, for urban landscape design, studies of two-dimensional landscape representations are still valuable, but should consider the systemic relations between the different factors related to shape and land use.
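    Two classic patch-shape metrics built from area and perimeter can be sketched as follows; these are the standard landscape-ecology formulas with invented example values, not figures from the study.

```python
import math

def perimeter_area_ratio(perimeter_m: float, area_m2: float) -> float:
    return perimeter_m / area_m2

def shape_index(perimeter_m: float, area_m2: float) -> float:
    """Patch perimeter relative to a circle of equal area (1.0 = circular)."""
    return perimeter_m / (2.0 * math.sqrt(math.pi * area_m2))

area = math.pi * 100**2            # a patch with the area of a 100 m circle
print(shape_index(628, area))      # ~1.0: compact, near-circular park
print(shape_index(2500, area))     # ~4.0: elongated corridor, same area
```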

  10. Assessing the Quality of MT Systems for Hindi to English Translation

    Science.gov (United States)

    Kalyani, Aditi; Kumud, Hemant; Pal Singh, Shashi; Kumar, Ajai

    2014-03-01

    Evaluation plays a vital role in checking the quality of MT output. It is done either manually or automatically. Manual evaluation is very time consuming and subjective, hence automatic metrics are used most of the time. This paper evaluates the translation quality of different MT engines for Hindi-to-English translation (Hindi data is provided as input and English is obtained as output) using various automatic metrics like BLEU, METEOR etc. Further, a comparison of the automatic evaluation results with human rankings is also given.
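    A tiny sketch of the idea behind BLEU: modified n-gram precision of a candidate translation against a reference. Real BLEU adds clipping over multiple references, a geometric mean over n, and a brevity penalty; the sentences below are invented.

```python
from collections import Counter
import math

def ngram_precision(cand: list[str], ref: list[str], n: int) -> float:
    """Clipped n-gram precision of candidate tokens against one reference."""
    c_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    r_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    clipped = sum(min(c, r_ngrams[g]) for g, c in c_ngrams.items())
    return clipped / max(sum(c_ngrams.values()), 1)

cand = "the cat sat on the mat".split()
ref  = "the cat is on the mat".split()
p1, p2 = ngram_precision(cand, ref, 1), ngram_precision(cand, ref, 2)
print(p1, p2, math.sqrt(p1 * p2))   # unigram, bigram, and their geometric mean
```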

  11. Non-indigenous macroinvertebrate species in Lithuanian fresh waters, Part 2: Macroinvertebrate assemblage deviation from naturalness in lotic systems and the consequent potential impacts on ecological quality assessment

    Directory of Open Access Journals (Sweden)

    Arbačiauskas K.

    2011-12-01

    Full Text Available The biological pressure represented by non-indigenous macroinvertebrate species (NIMS) should be addressed in the implementation of the EU Water Framework Directive, as it can have a direct impact on the 'naturalness' of the invaded macroinvertebrate assemblage. The biocontamination concept allows assessment of this deviation from naturalness through evaluation of the abundance and disparity contamination of an assemblage. This study aimed to assess the biocontamination of macroinvertebrate assemblages in Lithuanian rivers, thereby revealing the most high-impact non-indigenous species, and to explore the relationship between biocontamination and conventional metrics of ecological quality. Most of the studied rivers appeared to be impacted by NIMS. The amphipods Pontogammarus robustoides and Chelicorophium curvispinum and the snail Litoglyphus naticoides were revealed as high-impact NIMS for Lithuanian lotic systems. Metrics of ecological quality that largely depend upon the richness of indicator taxa, such as the biological monitoring working party (BMWP) score and Ephemeroptera/Plecoptera/Trichoptera (EPT) taxa number, were negatively correlated with biocontamination, implying they could provide unreliable ecological quality estimates when NIMS are present. Routine macroinvertebrate water quality monitoring data are sufficient for generating the biocontamination assessment and can thus provide supplementary information with minimal extra expense or effort. We therefore recommend that biocontamination assessment be included alongside established methods for gauging biological and chemical water quality.

  12. MO-A-16A-01: QA Procedures and Metrics: In Search of QA Usability

    Energy Technology Data Exchange (ETDEWEB)

    Sathiaseelan, V [Northwestern Memorial Hospital, Chicago, IL (United States); Thomadsen, B [University of Wisconsin, Madison, WI (United States)

    2014-06-15

    Radiation therapy has undergone considerable changes in the past two decades with a surge of new technology and treatment delivery methods. The complexity of radiation therapy treatments has increased and there has been increased awareness and publicity about the associated risks. In response, there has been a proliferation of guidelines for medical physicists to adopt to ensure that treatments are delivered safely. Task Group recommendations are copious, and clinical physicists' hours are longer, stretched to various degrees between site planning and management, IT support, physics QA, and treatment planning responsibilities. Radiation oncology has many quality control practices in place to ensure the delivery of high-quality, safe treatments. Incident reporting systems have been developed to collect statistics about near-miss events at many radiation oncology centers. However, tools are lacking to assess the impact of these various control measures. A recent effort to address this shortcoming is the work of Ford et al (2012), who recently published a methodology enumerating quality control quantification for measuring the effectiveness of safety barriers. Over 4000 near-miss incidents reported from 2 academic radiation oncology clinics were analyzed using quality control quantification, and a profile of the most effective quality control measures (metrics) was identified. There is a critical need to identify QA metrics that help busy clinical physicists focus their limited time and resources most effectively in order to minimize or eliminate errors in the radiation treatment delivery processes. In this symposium the usefulness of workflows and QA metrics to assure safe and high-quality patient care will be explored. Two presentations will be given: (1) Quality Metrics and Risk Management with High Risk Radiation Oncology Procedures; (2) Strategies and Metrics for Quality Management in the TG-100 Era. Learning Objectives: Provide an overview and the need for QA usability

  13. MO-A-16A-01: QA Procedures and Metrics: In Search of QA Usability

    International Nuclear Information System (INIS)

    Radiation therapy has undergone considerable changes in the past two decades with a surge of new technology and treatment delivery methods. The complexity of radiation therapy treatments has increased and there has been increased awareness and publicity about the associated risks. In response, there has been a proliferation of guidelines for medical physicists to adopt to ensure that treatments are delivered safely. Task Group recommendations are copious, and clinical physicists' hours are longer, stretched to various degrees between site planning and management, IT support, physics QA, and treatment planning responsibilities. Radiation oncology has many quality control practices in place to ensure the delivery of high-quality, safe treatments. Incident reporting systems have been developed to collect statistics about near-miss events at many radiation oncology centers. However, tools are lacking to assess the impact of these various control measures. A recent effort to address this shortcoming is the work of Ford et al (2012), who recently published a methodology enumerating quality control quantification for measuring the effectiveness of safety barriers. Over 4000 near-miss incidents reported from 2 academic radiation oncology clinics were analyzed using quality control quantification, and a profile of the most effective quality control measures (metrics) was identified. There is a critical need to identify QA metrics that help busy clinical physicists focus their limited time and resources most effectively in order to minimize or eliminate errors in the radiation treatment delivery processes. In this symposium the usefulness of workflows and QA metrics to assure safe and high-quality patient care will be explored. Two presentations will be given: (1) Quality Metrics and Risk Management with High Risk Radiation Oncology Procedures; (2) Strategies and Metrics for Quality Management in the TG-100 Era. Learning Objectives: Provide an overview and the need for QA usability

  14. On Einstein Kropina metrics

    OpenAIRE

    Zhang, Xiaoling; Shen, Yi-Bing

    2012-01-01

    In this paper, a characteristic condition of Einstein Kropina metrics is given. By the characteristic condition, we prove that a non-Riemannian Kropina metric $F=\\frac{\\alpha^2}{\\beta}$ with constant Killing form $\\beta$ on an n-dimensional manifold $M$, $n\\geq 2$, is an Einstein metric if and only if $\\alpha$ is also an Einstein metric. By using the navigation data $(h,W)$, it is proved that an n-dimensional ($n\\geq2$) Kropina metric $F=\\frac{\\alpha^2}{\\beta}$ is Einstein if and only if the ...

  15. Metric Clifford Algebra

    OpenAIRE

    Fernández, V. V.; Moya, A. M.; Rodrigues Jr., W. A.

    2002-01-01

    In this paper we introduce the concept of metric Clifford algebra $\\mathcal{C\\ell}(V,g)$ for a $n$-dimensional real vector space $V$ endowed with a metric extensor $g$ whose signature is $(p,q)$, with $p+q=n$. The metric Clifford product on $\\mathcal{C\\ell}(V,g)$ appears as a well-defined \\emph{deformation}(induced by $g$) of an euclidean Clifford product on $\\mathcal{C\\ell}(V)$. Associated with the metric extensor $g,$ there is a gauge metric extensor $h$ which codifies all the geometric inf...

  16. Assessing the nutritional stress hypothesis: Relative influence of diet quantity and quality on seabird productivity

    Science.gov (United States)

    Jodice, P.G.R.; Roby, D.D.; Turco, K.R.; Suryan, R.M.; Irons, D.B.; Piatt, J.F.; Shultz, M.T.; Roseneau, D.G.; Kettle, A.B.; Anthony, J.A.

    2006-01-01

    Food availability comprises a complex interaction of factors that integrates abundance, taxonomic composition, accessibility, and quality of the prey base. The relationship between food availability and reproductive performance can be assessed via the nutritional stress (NSH) and junkfood (JFH) hypotheses. With respect to reproductive success, NSH posits that a deficiency in any of the aforementioned metrics can have a deleterious effect on a population via poor reproductive success. JFH, a component of NSH, posits specifically that it is a decline in the quality of food (i.e. energy density and lipid content) that leads to poor reproductive success. We assessed each in relation to reproductive success in a piscivorous seabird, the black-legged kittiwake Rissa tridactyla. We measured productivity, taxonomic composition, frequency, size, and quality of meals delivered to nestlings from 1996 to 1999 at 6 colonies in Alaska, USA, 3 each in Prince William Sound and Lower Cook Inlet. Productivity varied widely among colony-years. Pacific herring Clupea pallasi, sand lance Ammodytes hexapterus, and capelin Mallotus villosus comprised ca. 80% of the diet among colony-years, and each was characterized by relatively high energy density. Diet quality for kittiwakes in this region therefore remained uniformly high during this study. Meal delivery rate and meal size were quite variable among colony-years, however, and best explained the variability in productivity. Parent kittiwakes appeared to select prey that were energy dense and that maximized the biomass provisioned to broods. While these results fail to support JFH, they do provide substantial support for NSH. ?? Inter-Research 2006.

  17. Using descriptive mark-up to formalize translation quality assessment

    OpenAIRE

    Kutuzov, Andrey

    2008-01-01

    The paper deals with using descriptive mark-up to emphasize translation mistakes. The author postulates the necessity to develop a standard and formal XML-based way of describing translation mistakes. It is considered to be important for achieving impersonal translation quality assessment. Marked-up translations can be used in corpus translation studies; moreover, automatic translation assessment based on marked-up mistakes is possible. The paper concludes with setting up guidelines for further activity within the described field.

  18. Soil bioassays as tools for sludge compost quality assessment

    OpenAIRE

    Domene, X.; Solà i Sau, Laura; Ramírez Hernández, Wilson Ariel; Alcañiz, Josep M.; Andrés Pastor, Pilar

    2011-01-01

    Composting is a waste management technology that is becoming more widespread as a response to the increasing production of sewage sludge and the pressure for its reuse in soil. In this study, different bioassays (plant germination; earthworm survival, biomass and reproduction; and collembolan survival and reproduction) were assessed for their usefulness in compost quality assessment. Compost samples from two different composting plants were taken along the composting process, which were...

  19. Using descriptive mark-up to formalize translation quality assessment

    CERN Document Server

    Kutuzov, Andrey

    2008-01-01

    The paper deals with using descriptive mark-up to emphasize translation mistakes. The author postulates the necessity to develop a standard and formal XML-based way of describing translation mistakes. It is considered to be important for achieving impersonal translation quality assessment. Marked-up translations can be used in corpus translation studies; moreover, automatic translation assessment based on marked-up mistakes is possible. The paper concludes with setting up guidelines for further activity within the described field.

  20. Entropy-Based Assessment of Water Quality Monitoring Networks

    OpenAIRE

    ÖZKUL, Sevinç

    2001-01-01

    Assessment of water quality monitoring networks requires methods that can delineate the efficiency and cost-effectiveness of current monitoring programs. To this end, the concept of entropy has been considered a promising method in previous studies, as it quantitatively measures the information produced by a network. This paper introduces a new approach for the assessment of combined spatial/temporal frequencies of monitoring networks. The results are demonstrated in the case ...
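    A sketch of the entropy quantity such assessments rest on: the Shannon entropy of a discretized water-quality series. Stations or sampling frequencies that add little information beyond what others already carry are candidates for pruning. The signal, units and near-duplicate-station setup below are invented for illustration.

```python
import numpy as np

def shannon_entropy(values: np.ndarray, bins: int = 10) -> float:
    """Entropy (bits) of a continuous series after histogram discretization."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
station_a = rng.normal(8.0, 1.5, 365)              # e.g. daily dissolved O2, mg/L
station_b = station_a + rng.normal(0, 0.05, 365)   # near-duplicate station
print(shannon_entropy(station_a), shannon_entropy(station_b))
# Similar marginal entropies but almost no *new* information at station b,
# which is what joint/transinformation analysis in such studies quantifies.
```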

  1. Strategic environmental assessment quality assurance: evaluating and improving the consistency of judgments in assessment panels

    International Nuclear Information System (INIS)

    Assessment panels and expert judgment are playing increasing roles in the practice of strategic environmental assessment (SEA). Thus, the quality of an SEA decision rests considerably on the quality of the judgments of the assessment panel. However, there exists very little guidance in the SEA literature for practitioners concerning the treatment and integration of expert judgment into SEA decision-making processes. Subsequently, the performance of SEAs based on expert judgment is often less than satisfactory, and quality improvements are required in the SEA process. Based on the lessons learned from strategic- and project-level impact assessment practices, this paper outlines a number of principles concerning the use of assessment panels in SEA decision-making, and attempts to provide some guidance for SEA practitioners in this regard. Particular attention is given to the notion and value of consistency in assessment panel judgments

  2. Towards Perceptual Quality Evaluation of Dynamic Meshes

    OpenAIRE

    Torkhani, Fakhri; Wang, Kai; Montanvert, Annick

    2011-01-01

    In practical applications, it is common for a 3D mesh to undergo lossy operations. Since the end users of 3D meshes are often human beings, it is important to derive metrics that can faithfully assess the perceptual distortions induced by these operations. As in the case of image quality assessment, metrics based on mesh geometric distances (e.g. Hausdorff distance and root mean squared error) cannot correctly predict the visual quality degradation. Recently, several perceptually-...
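    A sketch of the geometric baseline the paper argues against: the symmetric Hausdorff distance between two vertex sets, here via SciPy. The random "mesh" stands in for real geometry.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Max of the two directed Hausdorff distances between point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

rng = np.random.default_rng(0)
mesh = rng.random((500, 3))
jittered = mesh + rng.normal(0, 0.01, mesh.shape)  # stand-in for a lossy operation
print(hausdorff(mesh, jittered))
# The paper's point: this number can be small while the *visual* distortion
# is large (or vice versa), which motivates perceptual metrics.
```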

  3. Forensic mental health assessment in France: recommendations for quality improvement.

    Science.gov (United States)

    Combalbert, Nicolas; Andronikof, Anne; Armand, Marine; Robin, Cécile; Bazex, Hélène

    2014-01-01

    The quality of forensic mental health assessment has been a growing concern in various countries on both sides of the Atlantic, but the legal systems are not always comparable and some aspects of forensic assessment are specific to a given country. This paper describes the legal context of forensic psychological assessment in France (i.e. pre-trial investigation phase entrusted to a judge, with mental health assessment performed by preselected professionals called "experts" in French), its advantages and its pitfalls. Forensic psychiatric or psychological assessment is often an essential and decisive element in criminal cases, but since a judiciary scandal which was made public in 2005 (the Outreau case) there has been increasing criticism from the public and the legal profession regarding the reliability of clinical conclusions. Several academic studies and a parliamentary report have highlighted various faulty aspects in both the judiciary process and the mental health assessments. The heterogeneity of expert practices in France appears to be mainly related to a lack of consensus on several core notions such as mental health diagnosis or assessment methods, poor working conditions, lack of specialized training, and insufficient familiarity with the Code of Ethics. In this article we describe and analyze the French practice of forensic psychologists and psychiatrists in criminal cases and propose steps that could be taken to improve its quality, such as setting up specialized training courses, enforcing the Code of Ethics for psychologists, and calling for consensus on diagnostic and assessment methods.

  4. Theoretical Aspects and Methodological Approaches to Sales Services Quality Assessment

    Directory of Open Access Journals (Sweden)

    Tarasova EE

    2015-11-01

    Full Text Available The article defines trade service quality and proposes an object-oriented approach to interpreting its essence, singling out such components as product offering and goods quality, service forms and goods selling methods, merchandising, services, and staff. A model for managing the trade service of retail outlets is worked out; it covers the strategic, tactical and operational levels of management and is aimed at meeting customers' expectations, achieving sustainable competitive positions and increasing customer loyalty. A methodology for estimating trade service quality is developed and tested; it allows a comparative assessment of cooperative retailing both in terms of general indicators and their individual components, regulates the factors affecting trade service quality, and supports positive administrative action. The results of an evaluation of customer service quality in consumer cooperative retailers are given, along with the dynamics of the overall and comprehensive indicators of trade service quality for the selected components. The main directions and measures for improving trade service quality are stated, based on the quantitative values of individual indicators for each of the five selected components (product offering and goods quality, service forms and sale methods, merchandising, services, staff).

  5. PLÉIADES Project: Assessment of Georeferencing Accuracy, Image Quality, Pansharpening Performance and DSM/DTM Quality

    Science.gov (United States)

    Topan, Hüseyin; Cam, Ali; Özendi, Mustafa; Oruç, Murat; Jacobsen, Karsten; Taşkanat, Talha

    2016-06-01

    Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first European satellites with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs a MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects: one within this program, the second supported by the BEU Scientific Research Project Program, and the third supported by TÜBİTAK. Assessment of georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality is investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y, and ±1.82 m in Z have been reached in spite of the very narrow angle of convergence by bias-corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39-0.46 for the triplet panchromatic images, indicating satisfying image quality. SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of Pléiades images. The pansharpened images were generated by various methods, and are validated by most common statistical

  6. PLÉIADES PROJECT: ASSESSMENT OF GEOREFERENCING ACCURACY, IMAGE QUALITY, PANSHARPENING PERFORMANCE AND DSM/DTM QUALITY

    Directory of Open Access Journals (Sweden)

    H. Topan

    2016-06-01

    Full Text Available Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first European satellites with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs a MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects: one within this program, the second supported by the BEU Scientific Research Project Program, and the third supported by TÜBİTAK. Assessment of georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality is investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y, and ±1.82 m in Z have been reached in spite of the very narrow angle of convergence by bias-corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39-0.46 for the triplet panchromatic images, indicating satisfying image quality. SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of Pléiades images. The pansharpened images were generated by various methods, and are validated by most common

  7. The application of nonlinear metrics to assess organization differences in short recordings of paroxysmal and persistent atrial fibrillation

    International Nuclear Information System (INIS)

    Atrial fibrillation (AF) is the most common arrhythmia in clinical practice. In the first stages of the disease, AF may terminate spontaneously, and it is then referred to as paroxysmal AF. The arrhythmia is called persistent AF when external intervention is required for its termination. In the present work, a method to non-invasively assess AF organization has been applied to discern between paroxysmal and persistent AF episodes at any time. Previous works have suggested that the probability of AF termination is inversely related to the number of reentries wandering throughout the atrial tissue. Given that it has also been hypothesized that the number of reentries is directly correlated with AF organization, a fast and robust method able to assess organization differences in AF could be of great interest. In fact, distinguishing between paroxysmal and persistent episodes in patients without previously known AF history, using short ECG recordings, could contribute to earlier decisions on AF management in daily clinical practice, without the need for 24 h or 48 h Holter recordings. The method was based on a nonlinear regularity index, sample entropy (SampEn), and proved to be a significant discriminator of the AF type. Its diagnostic accuracy of 91.80% was demonstrated to be superior to previously proposed parameters, such as the dominant atrial frequency (DAF) and fibrillatory wave amplitude, and to others analyzed for the first time in this context, such as atrial activity mean power, the 3 dB bandwidth around the DAF, the first harmonic frequency, the harmonic exponential decay, etc. Additionally, in agreement with previous invasive works, paroxysmal AF episodes (0.0716 ± 0.0143) presented lower SampEn values, and consequently more organized activity, than persistent episodes (0.1080 ± 0.0145)
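    For readers unfamiliar with the index, here is a direct sketch of SampEn as commonly defined (template length m, tolerance r times the standard deviation). The parameters and test signals are illustrative, not those of the study.

```python
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r: float = 0.2) -> float:
    """SampEn = -ln(A/B): A, B = matching template pairs of length m+1 and m."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(length: int) -> int:
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance from template i to all later templates
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int((dist <= tol).sum())
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return float("inf") if a == 0 or b == 0 else -np.log(a / b)

rng = np.random.default_rng(0)
organized = np.sin(np.linspace(0, 60, 600))         # regular -> low SampEn
disorganized = organized + rng.normal(0, 0.5, 600)  # irregular -> higher SampEn
print(sample_entropy(organized), sample_entropy(disorganized))
```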

  8. Assessment of Soil Quality of Tidal Marshes in Shanghai City

    Institute of Scientific and Technical Information of China (English)

    Qing WANG; Juan TAN; Jianqiang WU; Chenyan SHA; Junjie RUAN; Min WANG; Shenfa HUANG

    2013-01-01

    We take three types of tidal marshes in Shanghai City as the study object: tidal marshes on the mainland, tidal marshes on the rim of islands, and shoals in the Yangtze estuary. On the basis of assessing nutrient quality and environmental quality respectively, we use a soil quality index (SQI) to assess the soil quality of the tidal flats, formulate quality grading standards, and analyze their current situation and characteristics. The results show that, except for the north of Hangzhou Bay, Nanhui and Jiuduansha, which have low soil nutrient quality, there are no obvious differences in soil nutrient quality between the other regions; heavy metal pollution of the mainland tidal marshes is more serious than that of the tidal marshes on the rim of islands. In terms of the comprehensive soil quality index, the regions are ranked as follows: Jiuduansha wetland > Chongming Dongtan wetland > Nanhui tidal flat > tidal flat on the periphery of Chongming Island > tidal flat on the periphery of Hengsha Island > Pudong tidal flat > Baoshan tidal flat > tidal flat on the periphery of Changxing Island > tidal flat in the north of Hangzhou Bay. Among them, Jiuduansha wetland and Chongming Dongtan wetland have the best soil quality, belonging to class III, followed by Nanhui tidal flat, tidal flat on the periphery of Chongming Island and tidal flat on the periphery of Hengsha Island, belonging to class IV; tidal flat on the periphery of Changxing Island, Pudong tidal flat, Baoshan tidal flat and tidal flat in the north of Hangzhou Bay belong to class V.
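
As an illustration of how a soil quality index of this kind can be composed, the Python sketch below scores each indicator against a reference range and combines the scores as a weighted sum. The indicator names, ranges and weights are hypothetical placeholders; the paper's actual grading standards are not reproduced here.

```python
# Illustrative weighted soil quality index (SQI): each indicator is
# normalized to [0, 1] against a reference range, then combined by weight.
def score(value, lo, hi, more_is_better=True):
    s = (value - lo) / (hi - lo)
    s = min(max(s, 0.0), 1.0)                 # clamp to [0, 1]
    return s if more_is_better else 1.0 - s

def soil_quality_index(sample, indicators):
    """sample: dict of indicator -> measured value
    indicators: dict of indicator -> (lo, hi, more_is_better, weight)"""
    total_w = sum(w for *_, w in indicators.values())
    return sum(
        w * score(sample[name], lo, hi, better)
        for name, (lo, hi, better, w) in indicators.items()
    ) / total_w

# Hypothetical indicators: nutrients score up, pollution scores down.
indicators = {
    "organic_matter_pct":  (0.5, 4.0, True, 0.4),
    "total_nitrogen_gkg":  (0.2, 2.0, True, 0.3),
    "heavy_metal_pb_mgkg": (20, 300, False, 0.3),
}
sqi = soil_quality_index(
    {"organic_matter_pct": 2.1, "total_nitrogen_gkg": 1.1,
     "heavy_metal_pb_mgkg": 80},
    indicators,
)
```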

  9. Quality assessment of weather radar wind profiles during bird migration

    NARCIS (Netherlands)

    I. Holleman; H. van Gasteren; W. Bouten

    2008-01-01

    Wind profiles from an operational C-band Doppler radar have been combined with data from a bird tracking radar to assess the wind profile quality during bird migration. The weather radar wind profiles (WRWPs) are retrieved using the well-known volume velocity processing (VVP) technique. The X-band b…
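
For readers unfamiliar with VVP, the sketch below shows its core idea under the simplest possible assumption of a uniform wind: radial velocities sampled at many azimuths and elevations within a height layer are fit to (u, v, w) by linear least squares. Operational VVP retrievals fit a richer linear wind-field model with quality weighting; this is only the skeleton.

```python
# Minimal VVP-style retrieval: fit a uniform wind (u east, v north, w up)
# to observed radial velocities by linear least squares, using
# Vr = u*sin(az)*cos(el) + v*cos(az)*cos(el) + w*sin(el).
import numpy as np

def vvp_uniform_wind(azimuth_deg, elevation_deg, v_radial):
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    A = np.column_stack([np.sin(az) * np.cos(el),
                         np.cos(az) * np.cos(el),
                         np.sin(el)])
    (u, v, w), *_ = np.linalg.lstsq(A, v_radial, rcond=None)
    speed = np.hypot(u, v)
    # Meteorological convention: direction the wind blows FROM.
    direction = (np.degrees(np.arctan2(u, v)) + 180.0) % 360.0
    return u, v, w, speed, direction
```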

  10. Quality Control Charts in Large-Scale Assessment Programs

    Science.gov (United States)

    Schafer, William D.; Coverdale, Bradley J.; Luxenberg, Harlan; Jin, Ying

    2011-01-01

    There are relatively few examples of quantitative approaches to quality control in educational assessment and accountability contexts. Among the several techniques that are used in other fields, Shewhart charts have been found in a few instances to be applicable in educational settings. This paper describes Shewhart charts and gives examples of how…
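
A Shewhart individuals chart of the kind the paper discusses can be computed in a few lines. The sketch below is illustrative rather than taken from the paper: the center line sits at the historical mean of the monitored statistic (e.g., a yearly mean scale score), and the control limits at ±3σ, with σ estimated from the mean moving range.

```python
# Minimal Shewhart individuals chart: center line, 3-sigma control limits
# estimated from the moving range, and out-of-control flags.
import numpy as np

def shewhart_individuals(values):
    x = np.asarray(values, dtype=float)
    center = x.mean()
    moving_range = np.abs(np.diff(x))
    sigma_hat = moving_range.mean() / 1.128   # d2 constant for subgroup size 2
    ucl = center + 3 * sigma_hat              # upper control limit
    lcl = center - 3 * sigma_hat              # lower control limit
    flags = (x > ucl) | (x < lcl)             # points signaling special cause
    return center, lcl, ucl, flags
```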

  11. Assessing the Quality of Education in Bulgaria using PISA 2009

    OpenAIRE

    World Bank

    2010-01-01

    This report reflects and analyzes the recently released survey results from the Program for International Student Assessment (PISA) conducted in 2009 by the Organization for Economic Co-operation and Development (OECD). The report examines the change in scores from 2006 to 2009 and looks at how much of the change can be attributed to improvements in the quality of the education system, how...

  12. Quality assessment of strategic management in organizations - a maturity model

    OpenAIRE

    Balta Corneliu; Rosioru Nicoleta Diana

    2013-01-01

    The paper presents the main current concepts related to the assessment of quality management in organizations. Strategic management is analyzed taking into consideration the most important dimensions, including leadership, culture and values, process improvement, etc. The five levels of the strategic management maturity model are described, showing the connection with organizational development.

  13. Quality Assessment Parameters for Student Support at Higher Education Institutions

    Science.gov (United States)

    Sajiene, Laima; Tamuliene, Rasa

    2012-01-01

    The research presented in this article aims to validate quality assessment parameters for student support at higher education institutions. Student support is discussed as the system of services provided by a higher education institution which helps to develop student-centred curriculum and fulfils students' emotional, academic, social needs, and…

  14. Parameters of Higher School Internationalization and Quality Assessment

    Science.gov (United States)

    Juknyte-Petreikiene, Inga

    2006-01-01

    The article presents the analysis of higher education internationalization, its conceptions and forms of manifestation. It investigates the ways and means of higher education internationalization, the diversity of higher school internationalization motives, the issues of higher education internationalization quality assessment, presenting an…

  15. Feedback Effects of Teaching Quality Assessment: Macro and Micro Evidence

    Science.gov (United States)

    Bianchini, Stefano

    2014-01-01

    This study investigates the feedback effects of teaching quality assessment. Previous literature looked separately at the evolution of individual and aggregate scores to understand whether instructors and university performance depends on its past evaluation. I propose a new quantitative-based methodology, combining statistical distributions and…

  16. Quantitative study designs used in quality improvement and assessment.

    Science.gov (United States)

    Ormes, W S; Brim, M B; Coggan, P

    2001-01-01

    This article describes common quantitative design techniques that can be used to collect and analyze quality data. An understanding of the differences between these design techniques can help healthcare quality professionals make the most efficient use of their time, energies, and resources. To evaluate the advantages and disadvantages of these various study designs, it is necessary to assess factors that threaten the degree to which quality professionals may infer a cause-and-effect relationship from the data collected. Processes, the conduits of organizational function, can often be assessed by methods that do not take into account confounding and compromising circumstances that affect the outcomes of their analyses. An assumption that the implementation of process improvements may cause real change is incomplete without a consideration of other factors that might also have caused the same result. It is only through the identification, assessment, and exclusion of these alternative factors that administrators and healthcare quality professionals can assess the degree to which true process improvement or compliance has occurred. This article describes the advantages and disadvantages of common quantitative design techniques and reviews the corresponding threats to the interpretability of data obtained from their use. PMID:11378972

  17. Quality of Feedback Following Performance Assessments: Does Assessor Expertise Matter?

    Science.gov (United States)

    Govaerts, Marjan J. B.; van de Wiel, Margje W. J.; van der Vleuten, Cees P. M.

    2013-01-01

    Purpose: This study aims to investigate quality of feedback as offered by supervisor-assessors with varying levels of assessor expertise following assessment of performance in residency training in a health care setting. It furthermore investigates if and how different levels of assessor expertise influence feedback characteristics.…

  18. Video Quality Assessment and Machine Learning: Performance and Interpretability

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2015-01-01

    In this work we compare a simple and a complex Machine Learning (ML) method used for the purpose of Video Quality Assessment (VQA). The simple ML method chosen is the Elastic Net (EN), a regularized linear regression model that is easier to interpret. The more complex method chosen is Support...
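
A minimal sketch of the Elastic Net approach described above, using scikit-learn with placeholder features and scores: objective per-video features are regressed onto subjective scores (e.g., MOS), and the sparse coefficient vector is what makes the model easy to interpret. The feature matrix and scores below are synthetic stand-ins, not data from the paper.

```python
# Elastic Net regression for VQA: map objective video features to
# subjective scores; near-zero coefficients mark irrelevant features.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))                     # stand-in per-video features
mos = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=120)  # stand-in MOS

model = make_pipeline(StandardScaler(),
                      ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5))
model.fit(X, mos)

# Inspect which features the regularized model actually uses.
print(model.named_steps["elasticnetcv"].coef_)
```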

  19. Framework for dementia Quality of Life assessment with Assistive Technology

    DEFF Research Database (Denmark)

    Peterson, Carrie Beth; Prasad, Neeli R.; Prasad, Ramjee

    2010-01-01

    This paper proposes a theoretical framework for a Quality of Life (QOL) evaluation tool that is sensitive, flexible, computerized, and specific to assistive technology (AT) for dementia care. Using the appropriate evaluation tool serves to improve methodologies that are used for AT assessment...

  20. Display device-adapted video quality-of-experience assessment

    Science.gov (United States)

    Rehman, Abdul; Zeng, Kai; Wang, Zhou

    2015-03-01

    Today's viewers consume video content from a variety of connected devices, including smart phones, tablets, notebooks, TVs, and PCs. This imposes significant challenges for managing video traffic efficiently to ensure an acceptable quality-of-experience (QoE) for the end users, as the perceptual quality of video content strongly depends on the properties of the display device and the viewing conditions. State-of-the-art full-reference objective video quality assessment algorithms do not take into account the combined impact of display device properties, viewing conditions, and video resolution while performing video quality assessment. We performed a subjective study in order to understand the impact of the aforementioned factors on perceptual video QoE. We also propose a full-reference video QoE measure, named SSIMplus, that provides real-time prediction of the perceptual quality of a video based on human visual system behaviors, video content characteristics (such as spatial and temporal complexity, and video resolution), display device properties (such as screen size, resolution, and brightness), and viewing conditions (such as viewing distance and angle). Experimental results have shown that the proposed algorithm outperforms state-of-the-art video quality measures in terms of accuracy and speed.
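
SSIMplus itself is not publicly specified, so the sketch below only illustrates the general idea of device- and viewing-condition-adapted quality assessment: a base frame metric (plain SSIM from scikit-image) is reweighted by a visibility factor derived from viewing distance expressed in picture heights. The weighting function is an assumption made for illustration, not the paper's model.

```python
# Illustrative device-adapted quality score: distortions become less
# visible as viewing distance (in picture heights) grows, so the penalty
# applied to the base SSIM score is scaled down accordingly.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def device_adapted_quality(ref, dist, screen_height_m, viewing_distance_m):
    """ref, dist: 2-D grayscale frames as NumPy arrays."""
    base = ssim(ref, dist, data_range=dist.max() - dist.min())
    picture_heights = viewing_distance_m / screen_height_m
    # Hypothetical weighting, normalized around a nominal 3H viewing distance.
    visibility = min(1.0, 3.0 / picture_heights)
    return 1.0 - visibility * (1.0 - base)
```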