WorldWideScience

Sample records for assessments quality metrics

  1. Software Quality Metrics for Geant4: An Initial Assessment

    CERN Document Server

    Ronchieri, Elisabetta; Giacomini, Francesco

    2016-01-01

    In the context of critical applications, such as shielding and radiation protection, ensuring the quality of the simulation software they depend on is of the utmost importance. The assessment of simulation software quality is important not only to determine its adoption in experimental applications, but also to guarantee reproducibility of outcomes over time. In this study, we present initial results from an ongoing analysis of Geant4 code based on established software metrics. The analysis evaluates the current status of the code to quantify its characteristics with respect to documented quality standards; further assessments concern its evolution over a series of release distributions. We describe the selected metrics that quantify software attributes ranging from code complexity to maintainability, and highlight which metrics are most effective at evaluating radiation transport software quality. The quantitative assessment of the software is initially focused on a set of Geant4 packages, which play a key role in a wide...

  2. A software quality model and metrics for risk assessment

    Science.gov (United States)

    Hyatt, L.; Rosenberg, L.

    1996-01-01

    A software quality model and its associated attributes are defined and used as the basis for a discussion of risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric, along with their usability and applicability, are discussed.

  3. Supporting analysis and assessments quality metrics: Utility market sector

    Energy Technology Data Exchange (ETDEWEB)

    Ohi, J. [National Renewable Energy Lab., Golden, CO (United States)

    1996-10-01

    In FY96, NREL was asked to coordinate all analysis tasks so that in FY97 these tasks would be part of an integrated analysis agenda that would begin to define a 5-15 year R&D roadmap and portfolio for the DOE Hydrogen Program. The purpose of the Supporting Analysis and Assessments task at NREL is to provide this coordination and conduct specific analysis tasks. One of these tasks is to prepare the Quality Metrics (QM) for the Program as part of the overall QM effort at DOE/EERE. The Hydrogen Program is one of 39 program planning units conducting QM, a process begun in FY94 to assess the benefits and costs of DOE/EERE programs. The purpose of QM is to inform decision making during budget formulation by describing the expected outcomes of programs during the budget request process. QM is expected to establish a first step toward merit-based budget formulation and allow DOE/EERE to get the "most bang for its (R&D) buck." In FY96, NREL coordinated a QM team that prepared a preliminary QM for the utility market sector. In the electricity supply sector, the QM analysis shows hydrogen fuel cells capturing 5% (or 22 GW) of the total market of 390 GW of new capacity additions through 2020. Hydrogen consumption in the utility sector increases from 0.009 Quads in 2005 to 0.4 Quads in 2020. Hydrogen fuel cells are projected to displace over 0.6 Quads of primary energy in 2020. In future work, NREL will assess the market for decentralized, on-site generation; develop cost credits for distributed generation benefits (such as deferral of transmission and distribution investments and uninterruptible power service), for by-products such as heat and potable water, and for environmental benefits (reduction of criteria air pollutants and greenhouse gas emissions); compete different fuel cell technologies against each other for market share; and begin to address economic benefits, especially employment.

  4. The Assessment of Research Quality: Peer Review or Metrics?

    OpenAIRE

    Taylor, J.

    2009-01-01

    This paper investigates the extent to which the outcomes of the 2008 Research Assessment Exercise, determined by peer review, can be explained by a set of quantitative indicators, some of which were made available to the review panels. Three cognate units of assessment are examined in detail: business & management, economics & econometrics, and accounting & finance. The paper focuses on the extent to which the quality of research output, as determined by the RAE panel, can be explained by the...

  5. Quality Assessment for CRT and LCD Color Reproduction Using a Blind Metric

    OpenAIRE

    Bringier, B.; Quintard, L.; Larabi, M.-C.

    2008-01-01

    This paper deals with image quality assessment, a topic capturing the focus of several research teams in academia and industry. This field has an important role in various applications related to images, from acquisition to projection. A large number of objective image quality metrics have been developed during the last decade. These metrics are more or less correlated to end-user feedback and can be separated into three categories: 1) Full Reference (FR), trying to evaluate the impairme...

  6. Multi-resolution Structural Degradation Metrics for Perceptual Image Quality Assessment

    OpenAIRE

    Engelke, Ulrich; Zepernick, Hans-Jürgen

    2007-01-01

    In this paper, a multi-resolution analysis is proposed for image quality assessment. Structural features are extracted from each level of a pyramid decomposition that accurately represents the multiple scales of processing in the human visual system. To obtain an overall quality measure, the individual level metrics are accumulated over the considered pyramid levels. Two different metric design approaches are introduced and evaluated. It turns out that one of them outperforms our previous work...

  7. On the Performance of Video Quality Assessment Metrics under Different Compression and Packet Loss Scenarios

    OpenAIRE

    Martínez-Rach, Miguel O.; Pablo Piñol; López, Otoniel M.; Manuel Perez Malumbres; José Oliver; Carlos Tavares Calafate

    2014-01-01

    When comparing the performance of video coding approaches, evaluating different commercial video encoders, or measuring the perceived video quality in a wireless environment, rate/distortion analysis is commonly used, where distortion is usually measured in terms of PSNR values. However, PSNR does not always capture the distortion perceived by a human being. As a consequence, significant efforts have focused on defining an objective video quality metric that is able to assess quality in the s...
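
    For reference, the PSNR figure used in such rate/distortion analyses is 10·log10(MAX²/MSE) computed between a reference and a distorted frame; a minimal sketch, assuming 8-bit frames held as NumPy arrays:

    ```python
    import numpy as np

    def psnr(reference: np.ndarray, distorted: np.ndarray, max_value: float = 255.0) -> float:
        """Peak signal-to-noise ratio in dB between two same-sized frames."""
        mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical frames
        return 10.0 * np.log10(max_value ** 2 / mse)
    ```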

  8. Algebraic mesh quality metrics

    Energy Technology Data Exchange (ETDEWEB)

    KNUPP,PATRICK

    2000-04-24

    Quality metrics for structured and unstructured mesh generation are placed within an algebraic framework to form a mathematical theory of mesh quality metrics. The theory, based on the Jacobian and related matrices, provides a means of constructing, classifying, and evaluating mesh quality metrics. The Jacobian matrix is factored into geometrically meaningful parts. A nodally-invariant Jacobian matrix can be defined for simplicial elements using a weight matrix derived from the Jacobian matrix of an ideal reference element. Scale- and orientation-invariant algebraic mesh quality metrics are defined. The singular value decomposition is used to study relationships between metrics. Equivalence of the element condition number and mean ratio metrics is proved. Condition number is shown to measure the distance of an element to the set of degenerate elements. Algebraic measures for skew, length ratio, shape, volume, and orientation are defined abstractly, with specific examples given. Combined metrics for shape-volume and shape-volume-orientation are algebraically defined, and examples of such metrics are given. Algebraic mesh quality metrics are extended to non-simplicial elements. A series of numerical tests verify the theoretical properties of the metrics defined.
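
    As a concrete, simplified instance of such metrics, the mean ratio of a triangle (1.0 for an equilateral element, tending to 0 as the element degenerates) can be computed from edge lengths and signed area; a minimal sketch of this planar special case, not Knupp's full Jacobian formulation:

    ```python
    import math

    def mean_ratio_triangle(p0, p1, p2):
        """Mean ratio quality of a triangle: 1.0 for equilateral, -> 0 as it degenerates."""
        def d2(a, b):  # squared edge length
            return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
        s = d2(p0, p1) + d2(p1, p2) + d2(p2, p0)
        # Signed area via the cross product (positive for counter-clockwise vertices)
        area = 0.5 * ((p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1]))
        return 4.0 * math.sqrt(3.0) * area / s if s > 0 else 0.0

    print(mean_ratio_triangle((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)))  # ~1.0 (equilateral)
    ```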

  9. Data Quality Metrics

    OpenAIRE

    Sýkorová, Veronika

    2008-01-01

    The aim of the thesis is to demonstrate the measurability of Data Quality, which is a relatively subjective measure and thus difficult to measure. Various aspects of measuring the quality of data are analyzed, and a Complex Data Quality Monitoring System is introduced with the aim of providing a concept for measuring/monitoring the overall Data Quality in an organization. The system is built on a metrics hierarchy decomposed into particular detailed metrics, dimensions enabling multidi...

  10. Metric qualities of the cognitive behavioral assessment for outcome evaluation to estimate psychological treatment effects

    OpenAIRE

    Bertolotti, Giorgio; Michielin, Paolo; Vidotto, Giulio; Sanavio, Ezio; Bottesi, Gioia; Bettinardi, Ornella; Zotti, Anna Maria

    2015-01-01

    Background: Cognitive behavioral assessment for outcome evaluation was developed to evaluate psychological treatment interventions, especially for counseling and psychotherapy. It is made up of 80 items and five scales: anxiety, well-being, perception of positive change, depression, and psychological distress. The aim of the study was to present the metric qualities and to show validity and reliability of the five constructs of the questionnaire both in nonclinical and clinical subjects. Metho...

  11. Quality Assessment of Adaptive Bitrate Videos using Image Metrics and Machine Learning

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Brunnström, Kjell

    2015-01-01

    Adaptive bitrate (ABR) streaming is widely used for distribution of videos over the internet. In this work, we investigate how well we can predict the quality of such videos using well-known image metrics, information about the bitrate levels, and a relatively simple machine learning method...

  12. Does the lentic-lotic character of rivers affect invertebrate metrics used in the assessment of ecological quality?

    Directory of Open Access Journals (Sweden)

    Stefania ERBA

    2009-02-01

    The importance of local hydraulic conditions in structuring freshwater biotic communities is widely recognized by the scientific community. In spite of this, most current methods based upon invertebrates do not take this factor into account in their assessment of ecological quality. The aim of this paper is to investigate the influence of local hydraulic conditions on invertebrate community metrics and to estimate their potential weight in the evaluation of river water quality. The dataset used consisted of 130 stream sites located in four broad European geographical contexts: Alps, Central mountains, Mediterranean mountains and Lowland streams. Using River Habitat Survey data, the river hydromorphology was evaluated by means of the Lentic-lotic River Descriptor and the Habitat Modification Score. To quantify the level of water pollution, a synoptic Organic Pollution Descriptor was calculated. For their established, wide applicability, the STAR Intercalibration Common Metrics and index were selected as biological quality indices. Significant relationships between selected environmental variables and biological metrics devoted to the evaluation of ecological quality were obtained by means of Partial Least Squares regression analysis. The lentic-lotic character was the most significant factor affecting invertebrate communities in the Mediterranean mountains, and it was also a relevant factor for most quality metrics in the Alpine and Central mountain rivers. This character should therefore be taken into account when assessing the ecological quality of rivers, because it can greatly affect the assignment of ecological status.

  13. SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, T; Ruan, D [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric, and even a standard to compare amongst candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics' quality, based on a novel perspective of the metric as a surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates' ability in prognosticating the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates' quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with the surrogate metric exemplified by several widely used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates' behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with eCNR of 0.12 resulted in statistically better segmentation than MSD with eCNR of 0.10, with mean DSC of about 0.85 and first and third quartiles of (0.83, 0.89), versus mean DSC of 0.84 and quartiles of (0.81, 0.89) for MSD. Conclusion: The designed eCNR is capable of characterizing surrogate metrics' quality in prognosticating the oracle relevance value. It has been demonstrated to be
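
    As context for the surrogate-based selection the abstract evaluates, here is a minimal sketch of ranking candidate atlases by one of the named surrogates, normalized cross-correlation (NCC); the eCNR computation itself is not reproduced, and function names are illustrative:

    ```python
    import numpy as np

    def ncc(a: np.ndarray, b: np.ndarray) -> float:
        """Normalized cross-correlation between two same-sized images, in [-1, 1]."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    def rank_atlases(target: np.ndarray, atlases: list) -> list:
        """Return atlas indices ordered from most to least similar to the target."""
        scores = [ncc(target, atlas) for atlas in atlases]
        return sorted(range(len(atlases)), key=lambda i: scores[i], reverse=True)
    ```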

  14. SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation

    International Nuclear Information System (INIS)

    Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric, and even a standard to compare amongst candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics' quality, based on a novel perspective of the metric as a surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates' ability in prognosticating the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates' quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with the surrogate metric exemplified by several widely used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates' behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with eCNR of 0.12 resulted in statistically better segmentation than MSD with eCNR of 0.10, with mean DSC of about 0.85 and first and third quartiles of (0.83, 0.89), versus mean DSC of 0.84 and quartiles of (0.81, 0.89) for MSD. Conclusion: The designed eCNR is capable of characterizing surrogate metrics' quality in prognosticating the oracle relevance value. It has been demonstrated to be

  15. High definition video quality assessment metric built upon full reference ratios

    OpenAIRE

    Jiménez Bermejo, David

    2012-01-01

    High definition video quality assessment metric built upon full reference ratios. Visual Quality Assessment (VQA) is one of the major challenges still to be solved in the multimedia environment. Video quality has a very high impact on the end user's (consumer's) perception of services based on the provision of multimedia content, and it is therefore a key factor in the assessment of the new paradigm known as Quality of ...

  16. Developing a quality assurance metric

    OpenAIRE

    Love, Steve; Scoble, Rosa

    2006-01-01

    There are a variety of techniques that lecturers can use to get feedback on their teaching - for example, module feedback and coursework results. However, a question arises about how reliable and valid the content that goes into these quality assurance metrics is. The aim of this article is to present a new approach for collecting and analysing qualitative feedback from students that could be used...

  17. A management-oriented framework for selecting metrics used to assess habitat- and path-specific quality in spatially structured populations

    Science.gov (United States)

    Sam Nicol; Ruscena Wiederholt; Diffendorfer, James E.; Brady Mattsson; Thogmartin, Wayne E.; Semmens, Darius J.; Laura Lopez-Hoffman; Ryan Norris

    2016-01-01

    Mobile species with complex spatial dynamics can be difficult to manage because their population distributions vary across space and time, and because the consequences of managing particular habitats are uncertain when evaluated at the level of the entire population. Metrics to assess the importance of habitats and pathways connecting habitats in a network are necessary to guide a variety of management decisions. Given the many metrics developed for spatially structured models, it can be challenging to select the most appropriate one for a particular decision. To guide the management of spatially structured populations, we define three classes of metrics describing habitat and pathway quality based on their data requirements (graph-based, occupancy-based, and demographic-based metrics) and synopsize the ecological literature relating to these classes. Applying the first steps of a formal decision-making approach (problem framing, objectives, and management actions), we assess the utility of metrics for particular types of management decisions. Our framework can help managers with problem framing, choosing metrics of habitat and pathway quality, and elucidating the data needs for a particular metric. Our goal is to help managers narrow the range of suitable metrics for a management project, and to aid in decision-making to make the best use of limited resources.

  18. Assessments of habitat preferences and quality depend on spatial scale and metrics of fitness

    Science.gov (United States)

    Chalfoun, A.D.; Martin, T.E.

    2007-01-01

    1. Identifying the habitat features that influence habitat selection and enhance fitness is critical for effective management. Ecological theory predicts that habitat choices should be adaptive, such that fitness is enhanced in preferred habitats. However, studies often report mismatches between habitat preferences and fitness consequences across a wide variety of taxa based on a single spatial scale and/or a single fitness component. 2. We examined whether habitat preferences of a declining shrub steppe songbird, the Brewer's sparrow Spizella breweri, were adaptive when multiple reproductive fitness components and spatial scales (landscape, territory and nest patch) were considered. 3. We found that birds settled earlier and in higher densities, together suggesting preference, in landscapes with greater shrub cover and height. Yet nest success was not higher in these landscapes; nest success was primarily determined by nest predation rates. Thus landscape preferences did not match nest predation risk. Instead, nestling mass and the number of nesting attempts per pair increased in preferred landscapes, raising the possibility that landscapes were chosen on the basis of food availability rather than safe nest sites. 4. At smaller spatial scales (territory and nest patch), birds preferred different habitat features (i.e. density of potential nest shrubs) that reduced nest predation risk and allowed greater season-long reproductive success. 5. Synthesis and applications. Habitat preferences reflect the integration of multiple environmental factors across multiple spatial scales, and individuals may have more than one option for optimizing fitness via habitat selection strategies. Assessments of habitat quality for management prescriptions should ideally include analysis of diverse fitness consequences across multiple ecologically relevant spatial scales. © 2007 The Authors.

  19. Application of Sigma Metrics for the Assessment of Quality Assurance in Clinical Biochemistry Laboratory in India: A Pilot Study

    OpenAIRE

    Singh, Bhawna; Goswami, Binita; Gupta, Vinod Kumar; Chawla, Ranjna; Mallika, Venkatesan

    2010-01-01

    Ensuring the quality of laboratory services is the need of the hour in the field of health care. Keeping in mind the revolution ushered in by the six sigma concept in the corporate world, the health care sector may reap the benefits of the same. Six sigma provides a general methodology to describe performance on the sigma scale. We aimed to gauge our laboratory performance by sigma metrics. Internal quality control (QC) data were analyzed retrospectively over a period of 6 months from July 2009 to December 2009. Lab...

  20. THE QUALITY METRICS OF INFORMATION SYSTEMS

    OpenAIRE

    Zora Arsovski; Slavko Arsovski

    2008-01-01

    An information system is a special kind of product that depends on a great number of variables related to its nature, the conditions during implementation, and the organizational climate and culture. Quality metrics for information systems (QMIS) therefore have to reflect all of these aspects. This paper presents the basic elements of QMIS, characteristics of implementation and operation metrics for IS, team-management quality metrics for IS, and organizational aspects of quality m...

  1. Identification of Suited Quality Metrics for Natural and Medical Images

    Directory of Open Access Journals (Sweden)

    Kirti V. Thakur

    2016-06-01

    Assessing the quality of a denoised image is one of the important tasks in image denoising applications. Numerous quality metrics, each with particular characteristics, have been proposed by researchers to date. In practice, the image acquisition system differs between natural and medical images, and hence the noise introduced in these images also differs in nature. Considering this fact, the authors of this paper identify the best-suited quality metrics for Gaussian-, speckle- and Poisson-corrupted natural, ultrasound and X-ray images, respectively. Sixteen different quality metrics from the full-reference category are evaluated with respect to noise variance, and the suited quality metric for each particular type of noise is identified. A strong need to develop noise-dependent quality metrics is also identified in this work.

  2. Reliable Software Development with Proposed Quality Oriented Software Testing Metrics

    OpenAIRE

    Latika Kharb; Dr. Vijay Singh Rathore

    2011-01-01

    For effective test measurement, a software tester requires testing metrics that can measure the quality and productivity of the software development process while increasing its reusability, correctness and maintainability. Until now, the understanding of measuring software quality has not been sophisticated enough and is still far from being standardized; in order to assess software quality, an appropriate set of software metrics needs to be identified that can express th...

  3. Using quality metrics with laser range scanners

    Science.gov (United States)

    MacKinnon, David K.; Aitken, Victor; Blais, Francois

    2008-02-01

    We have developed a series of new quality metrics that are generalizable to a variety of laser range scanning systems, including those acquiring measurements in the mid-field. Moreover, these metrics can be integrated into either an automated scanning system or a system that guides a minimally trained operator through the scanning process. In particular, we represent the quality of measurements with regard to aliasing and sampling density for mid-field measurements, two issues that have not been well addressed in contemporary literature. We also present a quality metric that addresses the issue of laser spot motion during sample acquisition. Finally, we take into account the interaction between measurement resolution and measurement uncertainty where necessary. These metrics are presented within the context of an adaptive scanning system in which quality metrics are used to minimize the number of measurements obtained during the acquisition of a single range image.

  4. An Underwater Color Image Quality Evaluation Metric.

    Science.gov (United States)

    Yang, Miao; Sowmya, Arcot

    2015-12-01

    Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially across different underwater environments. In this paper, subjective testing for underwater image quality was organized. The statistical distribution of underwater image pixels in the CIELab color space related to subjective evaluation indicates that the sharpness and colorfulness factors correlate well with subjective image quality perception. Based on these, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation with similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show a good correlation between UCIQE and the subjective mean opinion score. PMID:26513783
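
    A hedged sketch of the linear combination described above, using the weighting coefficients commonly cited for UCIQE and one common per-pixel saturation definition (both should be checked against the paper before use; the scikit-image conversion is an assumption about tooling):

    ```python
    import numpy as np
    from skimage import color  # pip install scikit-image

    def uciqe(rgb: np.ndarray, c1=0.4680, c2=0.2745, c3=0.2576) -> float:
        """UCIQE = c1*std(chroma) + c2*luminance_contrast + c3*mean(saturation)."""
        lab = color.rgb2lab(rgb)                  # rgb: HxWx3, uint8 or float in [0, 1]
        L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
        chroma = np.sqrt(a ** 2 + b ** 2)
        sigma_c = chroma.std()                    # colour variability
        # Luminance contrast: spread between the top and bottom 1% of L values
        con_l = np.percentile(L, 99) - np.percentile(L, 1)
        saturation = chroma / (np.sqrt(chroma ** 2 + L ** 2) + 1e-12)
        return c1 * sigma_c + c2 * con_l + c3 * saturation.mean()
    ```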

  5. Reliable Software Development with Proposed Quality Oriented Software Testing Metrics

    Directory of Open Access Journals (Sweden)

    Latika Kharb

    2011-07-01

    For effective test measurement, a software tester requires testing metrics that can measure the quality and productivity of the software development process while increasing its reusability, correctness and maintainability. Until now, the understanding of measuring software quality has not been sophisticated enough and is still far from being standardized; in order to assess software quality, an appropriate set of software metrics needs to be identified that can express these quality attributes. Our research objective in this paper is to construct and define a set of easy-to-measure software testing metrics to be used as early indicators of external measures of quality. We have emphasized the fact that reliable software development with respect to quality can be well achieved by using our set of testing metrics, and for that we have given the practical results of the evaluation.

  6. Application of sigma metrics for the assessment of quality assurance in clinical biochemistry laboratory in India: a pilot study.

    Science.gov (United States)

    Singh, Bhawna; Goswami, Binita; Gupta, Vinod Kumar; Chawla, Ranjna; Mallika, Venkatesan

    2011-04-01

    Ensuring the quality of laboratory services is the need of the hour in the field of health care. Keeping in mind the revolution ushered in by the six sigma concept in the corporate world, the health care sector may reap the benefits of the same. Six sigma provides a general methodology to describe performance on the sigma scale. We aimed to gauge our laboratory performance by sigma metrics. Internal quality control (QC) data were analyzed retrospectively over a period of 6 months from July 2009 to December 2009. Laboratory mean, standard deviation and coefficient of variation were calculated for all the parameters. Sigma was calculated for both levels of internal QC. Satisfactory sigma values (>6) were elicited for creatinine, triglycerides, SGOT, CPK-Total and Amylase. Blood urea performed poorly on the sigma scale, falling short of the six sigma standard that should be strived for across all the analytical processes. PMID:22468038
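
    The sigma value behind such assessments is conventionally computed per analyte as sigma = (TEa - |bias|) / CV, with allowable total error (TEa), bias, and coefficient of variation (CV) all in percent; a minimal sketch with illustrative numbers, not values from this study:

    ```python
    def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
        """Six-sigma performance of an assay: (allowable total error - |bias|) / CV."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Illustrative values only: TEa 10%, bias 1.5%, CV 1.2% -> sigma ~7.1 (world class)
    print(round(sigma_metric(10.0, 1.5, 1.2), 1))
    ```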

  7. SAPHIRE 8 Quality Assurance Software Metrics Report

    Energy Technology Data Exchange (ETDEWEB)

    Kurt G. Vedros

    2011-08-01

    The purpose of this review of software metrics is to examine the quality of the metrics gathered in the 2010 IV&V and to set an outline for results of updated metrics runs to be performed. We find from the review that the maintenance of accepted quality standards presented in the SAPHIRE 8 initial Independent Verification and Validation (IV&V) of April, 2010 is most easily achieved by continuing to utilize the tools used in that effort while adding a metric for bug tracking and resolution. Recommendations from the final IV&V were to continue periodic measurable metrics, such as McCabe's complexity measure, to ensure quality is maintained. The software tools used to measure quality in the IV&V were CodeHealer, Coverage Validator, Memory Validator, Performance Validator, and Thread Validator. These are evaluated based on their capabilities. We attempted to run their latest revisions with the newer Delphi 2010 based SAPHIRE 8 code that has been developed and were successful with all of the Validator series of tools on small tests. Another recommendation from the IV&V was to incorporate a bug tracking and resolution metric. To improve our capability of producing this metric, we integrated our current web reporting system with the SpiraTest test management software purchased earlier this year to track requirements traceability.
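
    McCabe's complexity measure mentioned above reduces to M = E - N + 2P for a control-flow graph with E edges, N nodes, and P connected components; a minimal illustrative sketch (the example graph counts are generic, not taken from SAPHIRE):

    ```python
    def cyclomatic_complexity(num_edges: int, num_nodes: int, num_components: int = 1) -> int:
        """McCabe's measure M = E - N + 2P for a control-flow graph."""
        return num_edges - num_nodes + 2 * num_components

    # A single if/else: condition, then, else, and merge nodes; four edges -> M = 2
    print(cyclomatic_complexity(num_edges=4, num_nodes=4))  # 2
    ```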

  8. Validation of a Quality Management Metric

    OpenAIRE

    Grossman, Mary Alice.

    2000-01-01

    The quality of software management in a development program is a major factor in determining the success of a program. The four main areas where a software program manager can affect the outcome of a program are requirements management, estimation/planning management, people management, and risk management. In this thesis a quality management metric (QMM) was used to measure the performance of ten software managers on Department of Defense (DoD) software development programs. Informal verific...

  9. Design Metrics Which Predict Source Code Quality

    OpenAIRE

    Hartson, H.Rex; Smith, Eric C.; Henry, Sallie M.; Selig, Calvin

    1987-01-01

    Since the inception of software engineering, the major goal has been to control the development and maintenance of reliable software. To this end, many different design methodologies have been presented as a means to improve software quality through semantic clarity and syntactic accuracy during the specification and design phases of the software life cycle. On the other end of the life cycle, software quality metrics have been proposed to supply quantitative measures of the resultant softwar...

  10. How to evaluate objective video quality metrics reliably

    DEFF Research Database (Denmark)

    Korhonen, Jari; Burini, Nino; You, Junyong;

    2012-01-01

    The typical procedure for evaluating the performance of different objective quality metrics and indices involves comparisons between subjective quality ratings and the quality indices obtained using the objective metrics in question on the known video sequences. Several correlation indicators can...

  11. Golden Horn Estuary: Description of the ecosystem and an attempt to assess its ecological quality status using various classification metrics

    Directory of Open Access Journals (Sweden)

    S. ALBAYRAK

    2012-12-01

    In this paper, we describe the pelagic and benthic ecosystem of the Golden Horn estuary, which opens into the Marmara Sea. To improve the water quality of the estuary, which had long been subject to severe anthropogenic pollution (industrial, chemical, shipping), industrial facilities were moved out of the estuary in the 1980s, followed by a rehabilitation plan in the 1990s. Our results, based on chemical parameters and phytoplankton, showed some signs of improvement of water conditions in the upper layer. However, the macrozoobenthic findings of this study did not reflect such a recovery in bottom life. An approach to Ecological Quality Status (EQS) assessment was performed by applying the biotic indices BENTIX, AMBI, BOPA and BO2A. Our final assessment was based on expert judgement and revealed a very disturbed overall ecosystem, with 'bad' EQS for the station at the head of the estuary, 'poor' in the rest of the estuary and 'moderate' EQS only in the middle station.

  12. A Game Assessment Metric for the Online Gamer

    OpenAIRE

    DENIEFFE, D.; CARRIG, B.; D. Marshall; PICOVICI, D.

    2007-01-01

    This paper describes a new game assessment metric for the online gamer. The metric is based on a mathematical model currently used for network planning assessment. Besides the traditional network-based parameters such as delay, jitter and packet loss, new parameters based on online players' game experience/knowledge are introduced. The metric aims to estimate game quality as perceived by an online player. Measurements can be achieved in real-time or near real-time and could be useful to both o...

  13. New Quality Metrics for Web Search Results

    Science.gov (United States)

    Metaxas, Panagiotis Takis; Ivanova, Lilia; Mustafaraj, Eni

    Web search results enjoy an increasing importance in our daily lives. But what can be said about their quality, especially when querying a controversial issue? The traditional information retrieval metrics of precision and recall do not provide much insight in the case of web information retrieval. In this paper we examine new ways of evaluating quality in search results: coverage and independence. We give examples on how these new metrics can be calculated and what their values reveal regarding the two major search engines, Google and Yahoo. We have found evidence of low coverage for commercial and medical controversial queries, and high coverage for a political query that is highly contested. Given the fact that search engines are unwilling to tune their search results manually, except in a few cases that have become the source of bad publicity, low coverage and independence reveal the efforts of dedicated groups to manipulate the search results.
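
    For contrast with the proposed coverage and independence measures, the traditional precision and recall the authors set aside can be computed directly from the retrieved and relevant document sets; a minimal sketch with illustrative set contents:

    ```python
    def precision_recall(retrieved: set, relevant: set) -> tuple:
        """Classic IR metrics: precision = hits/retrieved, recall = hits/relevant."""
        hits = len(retrieved & relevant)
        precision = hits / len(retrieved) if retrieved else 0.0
        recall = hits / len(relevant) if relevant else 0.0
        return precision, recall

    # Toy example: 3 of 5 retrieved results are among the 6 relevant documents
    print(precision_recall({"d1", "d2", "d3", "d4", "d5"},
                           {"d1", "d2", "d3", "d7", "d8", "d9"}))  # (0.6, 0.5)
    ```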

  14. Metric qualities of the cognitive behavioral assessment for outcome evaluation to estimate psychological treatment effects

    OpenAIRE

    Bertolotti G; Michielin P; Vidotto G; Sanavio E; Bottesi G; Bettinardi O; Zotti AM

    2015-01-01

    Background: Cognitive behavioral assessment for outcome evalu...

  15. Towards Video Quality Metrics Based on Colour Fractal Geometry

    Directory of Open Access Journals (Sweden)

    Richard Noël

    2010-01-01

    Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology and colour. Unfortunately, so far all these elements have been taken into consideration independently in the development of image and video quality metrics; we therefore propose an approach that blends all of them together. Our approach allows for the analysis of the complexity of colour images in the RGB colour space, based on a probabilistic algorithm for calculating the fractal dimension and lacunarity. Given that all the existing fractal approaches are defined only for gray-scale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the modification of the fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal and can be used as metrics for user-perceived video quality degradation. We validated them through experimental results obtained for an MPEG-4 video streaming application; finally, the results are compared against those given by widely accepted metrics and against subjective tests.
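
    A hedged sketch of the grayscale box-counting baseline that the paper's probabilistic colour variant extends (binary mask input; the colour and lacunarity extensions are not reproduced here):

    ```python
    import numpy as np

    def box_counting_dimension(mask: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
        """Estimate fractal dimension as the slope of log N(s) versus log(1/s)."""
        counts = []
        for s in sizes:
            h, w = mask.shape[0] // s * s, mask.shape[1] // s * s
            blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
            occupied = blocks.any(axis=(1, 3)).sum()  # boxes containing structure
            counts.append(max(int(occupied), 1))
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return float(slope)

    mask = np.zeros((64, 64), dtype=bool)
    mask[:, 32] = True                               # a straight line has dimension ~1
    print(round(box_counting_dimension(mask), 2))    # ~1.0
    ```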

  16. A Metric to Assess the Performance of MLIR Services

    Directory of Open Access Journals (Sweden)

    N. Moganarangan

    2014-03-01

    Information retrieval plays a vital role in the extraction of relevant information. Many researchers have been working on satisfying user needs, though problems arise when accessing multilingual information. A multilingual environment provides a platform where a query can be formed in one language and the results can be in the same language and/or different languages. Performance evaluation of information retrieval for monolingual environments, especially for English, has been developed and standardized since the field's inception. There is no specialized evaluation model available for evaluating the performance of services related to multilingual environments or systems, and the unavailability of MLIR domain-specific standards makes evaluation a challenging task. This paper presents an enhanced metric to assess the performance of MLIR systems over its counterpart IR metric. The analysis shows that the performance of the enhanced metric is better than that of the conventional metric, and that it can help researchers and developers improve the quality of MLIR systems in present and future scenarios.

  17. Experiences with Software Quality Metrics in the EMI Middleware

    OpenAIRE

    Alandes, Maria

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality standard to identify a set of characteristics that n...

  18. [Clinical trial data management and quality metrics system].

    Science.gov (United States)

    Chen, Zhao-hua; Huang, Qin; Deng, Ya-zhong; Zhang, Yue; Xu, Yu; Yu, Hao; Liu, Zong-fan

    2015-11-01

    A data quality management system is essential to ensure accurate, complete, consistent, and reliable data collection in clinical research. This paper is devoted to various choices of data quality metrics. They are categorized by study status, e.g. study start-up, conduct, and close-out. In each category, metrics for different purposes are listed according to ALCOA+ principles such as completeness, accuracy, timeliness, traceability, etc. Some frequently used general quality metrics are also introduced. This paper provides as much detailed information as possible for each metric: definition, purpose, evaluation, referenced benchmark, and recommended targets in favor of real practice. It is important that sponsors and data management service providers establish a robust integrated clinical trial data quality management system to ensure sustainable high quality of clinical trial deliverables. It will also support enterprise-level data evaluation and benchmarking of data quality across projects, sponsors, and data management service providers by using objective metrics from real clinical trials. We hope this will be a significant input to accelerate the improvement of clinical trial data quality in the industry. PMID:26911027
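
    As an illustration of what one such metric could look like in practice, here is a hedged sketch of a completeness measure in the ALCOA+ sense, counting populated required fields across records (field names, records, and the scoring rule are illustrative assumptions, not from the paper):

    ```python
    def completeness(records: list, required_fields: tuple) -> float:
        """Fraction of required field slots that are populated across all records."""
        total = len(records) * len(required_fields)
        filled = sum(
            1 for rec in records for f in required_fields
            if rec.get(f) not in (None, "")
        )
        return filled / total if total else 1.0

    # Illustrative CRF-like records: 5 of 6 required slots filled -> 0.83
    recs = [{"subject_id": "001", "visit_date": "2015-10-01", "dose": 50},
            {"subject_id": "002", "visit_date": "", "dose": 25}]
    print(round(completeness(recs, ("subject_id", "visit_date", "dose")), 2))  # 0.83
    ```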

  19. Experiences with Software Quality Metrics in the EMI middleware

    International Nuclear Information System (INIS)

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project to extract “code metrics” on the status of the software products and “process metrics” related to the quality of the development and support process such as reaction time to critical bugs, requirements tracking and delays in product releases.

  20. A universal color image quality metric

    NARCIS (Netherlands)

    Toet, A.; Lucassen, M.P.

    2003-01-01

    We extend a recently introduced universal grayscale image quality index to a newly developed perceptually decorrelated color space. The resulting color image quality index quantifies the distortion of a processed color image relative to its original version. We evaluated the new color image quality

  1. Efficacy of algal metrics for assessing nutrient and organic enrichment in flowing waters

    Science.gov (United States)

    Porter, S.D.; Mueller, D.K.; Spahr, N.E.; Munn, M.D.; Dubrovsky, N.M.

    2008-01-01

    1. Algal-community metrics were calculated for periphyton samples collected from 976 streams and rivers by the U.S. Geological Survey’s National Water-Quality Assessment (NAWQA) Programme during 1993–2001 to evaluate national and regional relations with water chemistry and to compare whether algal-metric values differ significantly among undeveloped and developed land-use classifications.

  2. First statistical analysis of Geant4 quality software metrics

    Science.gov (United States)

    Ronchieri, Elisabetta; Grazia Pia, Maria; Giacomini, Francesco

    2015-12-01

    Geant4 is a simulation system of particle transport through matter, widely used in several experimental areas from high energy physics and nuclear experiments to medical studies. Some of its applications may involve critical use cases; therefore they would benefit from an objective assessment of the software quality of Geant4. In this paper, we provide a first statistical evaluation of software metrics data related to a set of Geant4 physics packages. The analysis aims at identifying risks for Geant4 maintainability, which would benefit from being addressed at an early stage. The findings of this pilot study set the grounds for further extensions of the analysis to the whole of Geant4 and to other high energy physics software systems.

  3. Experiences with Software Quality Metrics in the EMI Middleware

    CERN Document Server

    CERN. Geneva

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project t...

  4. Experiences with Software Quality Metrics in the EMI middleware

    CERN Document Server

    Alandes, M; Meneses, D; Pucciani, G

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project to ...

  5. Quality assessment of protein NMR structures

    OpenAIRE

    Rosato A.; Montelione G.T.; Tejero R.

    2013-01-01

    Biomolecular NMR structures are now routinely used in biology, chemistry, and bioinformatics. Methods and metrics for assessing the accuracy and precision of protein NMR structures are beginning to be standardized across the biological NMR community. These include both knowledge-based assessment metrics, parameterized from the database of protein structures, and model-versus-data assessment metrics. Online servers are available that provide comprehensive protein structure quality assessment ...

  6. Development of soil quality metrics using mycorrhizal fungi

    Energy Technology Data Exchange (ETDEWEB)

    Baar, J.

    2010-07-01

    Based on the 1992 Rio de Janeiro Convention on Biological Diversity and its goal of maintaining and increasing biodiversity, several countries have started programmes monitoring soil quality and above- and belowground biodiversity. Within the European Union, policy makers are working on legislation for soil protection and management. Indicators are therefore needed to monitor the status of soils, and these indicators, reflecting soil quality, can be integrated into working standards or soil quality metrics. Soil micro-organisms, particularly arbuscular mycorrhizal fungi (AMF), are indicative of soil changes. These soil fungi live in symbiosis with the great majority of plants and are sensitive to changes in the physico-chemical conditions of the soil. The aim of this study was to investigate whether AMF are reliable and sensitive indicators of disturbance in soils and can be used for the development of soil quality metrics. It was also studied whether soil quality metrics based on AMF meet users' and policy makers' requirements for applicability. Ecological criteria were set for the development of soil quality metrics for different soils. Multiple root samples containing AMF from various locations in The Netherlands were analyzed. The results of the analyses were related to the defined criteria. This resulted in two soil quality metrics, one for sandy soils and a second for clay soils, with six categories ranging from very bad to very good. These soil quality metrics meet the majority of requirements for applicability and are potentially useful for the development of legislation for the protection of soil quality. (Author) 23 refs.

  7. Intersection of quality metrics and Medicare policy.

    Science.gov (United States)

    Nau, David P

    2011-12-01

    The federal government is increasing its push for a high-value health care system by increasing transparency and accountability related to quality. The Medicare program has begun to publicly rate the quality of Medicare plans, including prescription drug plans, and is transforming its payment policies to reward plans that deliver the highest levels of quality. These policies will have a cascade effect on pharmacies and pharmacists as the Medicare plans look for assistance in improving the quality of medication use. This commentary describes the Medicare policies directed toward improvement of quality and their effect on pharmacy payment and opportunities for pharmacists to affirm their role in a high-quality medication use system. PMID:22045907

  8. Quality metrics can help the expert during neurological clinical trials

    Science.gov (United States)

    Mahé, L.; Autrusseau, F.; Desal, H.; Guédon, J.; Der Sarkissian, H.; Le Teurnier, Y.; Davila, S.

    2016-03-01

    Carotid surgery is a frequent procedure, accounting for 15 to 20 thousand operations per year in France. Cerebral perfusion has to be tracked before and after carotid surgery. In this paper, a diagnostic support tool using quality metrics is proposed to detect vascular lesions in MR images. Our key aim is to provide a detection tool that mimics the behavior of the human visual system during visual inspection. Relevant Human Visual System (HVS) properties should be integrated into our lesion detection method, which must be robust to common distortions in medical images. Our goal is twofold: to help the neuroradiologist perform the task better and faster, and to provide a way to reduce the risk of bias in image analysis. Objective quality metrics (OQM) are methods whose goal is to predict perceived quality. In this work, we use objective quality metrics to detect perceivable differences between pairs of images.

  9. Compressed image quality metric based on perceptually weighted distortion.

    Science.gov (United States)

    Hu, Sudeng; Jin, Lina; Wang, Hanli; Zhang, Yun; Kwong, Sam; Kuo, C-C Jay

    2015-12-01

    Objective quality assessment for compressed images is critical to various image compression systems that are essential for image delivery and storage. Although the mean squared error (MSE) is computationally simple, it may not accurately reflect the perceptual quality of compressed images, which is also affected dramatically by characteristics of the human visual system (HVS), such as the masking effect. In this paper, an image quality metric (IQM) is proposed based on perceptually weighted distortion in terms of the MSE. To capture the characteristics of the HVS, a randomness map is proposed to measure the masking effect, and a preprocessing scheme is proposed to simulate the processing that occurs in the initial part of the HVS. Since the masking effect depends highly on structural randomness, the prediction error from the neighborhood under a statistical model is used to measure the significance of masking. Meanwhile, imperceptible high-frequency signal components can be removed by preprocessing with low-pass filters. The relation between the distortions before and after the masking effect is investigated, and a masking modulation model is proposed to simulate the masking effect after preprocessing. The performance of the proposed IQM is validated on six image databases with various compression distortions. The experimental results show that the proposed algorithm outperforms other benchmark IQMs. PMID:26415170
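
    To convey the general shape of the approach, here is a hedged sketch contrasting plain MSE with an MSE weighted by a per-pixel perceptual map; the weight map is a placeholder input, since the paper derives its weights from the randomness/masking model, which is not reproduced here:

    ```python
    import numpy as np

    def weighted_mse(reference: np.ndarray, distorted: np.ndarray,
                     weights: np.ndarray) -> float:
        """MSE with per-pixel perceptual weights (higher weight = more visible error)."""
        err = (reference.astype(np.float64) - distorted.astype(np.float64)) ** 2
        # With uniform weights this reduces to the plain MSE.
        return float(np.sum(weights * err) / np.sum(weights))
    ```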

  10. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark;

    2012-01-01

    ...deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed...

  11. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark;

    2011-01-01

    ...deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed...

  12. A priori discretization quality metrics for distributed hydrologic modeling applications

    Science.gov (United States)

    Liu, Hongli; Tolson, Bryan; Craig, James; Shafii, Mahyar; Basu, Nandita

    2016-04-01

    In distributed hydrologic modelling, a watershed is treated as a set of small homogeneous units that address the spatial heterogeneity of the watershed being simulated. The ability of models to reproduce observed spatial patterns depends first on the spatial discretization, which is the process of defining homogeneous units in the form of grid cells, subwatersheds, hydrologic response units, etc. It is common for hydrologic modelling studies to simply adopt a nominal or default discretization strategy without formally assessing alternative discretization levels. This approach lacks formal justification and is thus problematic. More formalized discretization strategies are either a priori or a posteriori with respect to building and running a hydrologic simulation model. A posteriori approaches tend to be ad hoc and compare model calibration and/or validation performance under various watershed discretizations. The construction and calibration of multiple versions of a distributed model can become a seriously limiting computational burden. Current a priori approaches are more formalized and compare overall heterogeneity statistics of dominant variables between candidate discretization schemes and input data or reference zones. While a priori approaches are efficient and do not require running a hydrologic model, they do not fully investigate the internal spatial pattern changes of variables of interest. Furthermore, the existing a priori approaches focus on landscape and soil data and do not assess impacts of discretization on stream channel definition, even though its significance has been noted by numerous studies. The primary goals of this study are to (1) introduce new a priori discretization quality metrics considering the spatial pattern changes of model input data; (2) introduce a two-step discretization decision-making approach to compress extreme errors and meet user-specified discretization expectations through non-uniform discretization threshold

  13. Quality Assessment in Oncology

    International Nuclear Information System (INIS)

    The movement to improve healthcare quality has led to a need for carefully designed quality indicators that accurately reflect the quality of care. Many different measures have been proposed and continue to be developed by governmental agencies and accrediting bodies. However, given the inherent differences in the delivery of care among medical specialties, the same indicators will not be valid across all of them. Specifically, oncology is a field in which it can be difficult to develop quality indicators, because the effectiveness of an oncologic intervention is often not immediately apparent, and the multidisciplinary nature of the field necessarily involves many different specialties. Existing and emerging comparative effectiveness data are helping to guide evidence-based practice, and the increasing availability of these data provides the opportunity to identify key structure and process measures that predict for quality outcomes. The increasing emphasis on quality and efficiency will continue to compel the medical profession to identify appropriate quality measures to facilitate quality improvement efforts and to guide accreditation, credentialing, and reimbursement. Given the wide-reaching implications of quality metrics, it is essential that they be developed and implemented with scientific rigor. The aims of the present report were to review the current state of quality assessment in oncology, identify existing indicators with the best evidence to support their implementation, and propose a framework for identifying and refining measures most indicative of true quality in oncologic care.

  14. Quality Assessment in Oncology

    Energy Technology Data Exchange (ETDEWEB)

    Albert, Jeffrey M. [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Das, Prajnan, E-mail: prajdas@mdanderson.org [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)

    2012-07-01

    The movement to improve healthcare quality has led to a need for carefully designed quality indicators that accurately reflect the quality of care. Many different measures have been proposed and continue to be developed by governmental agencies and accrediting bodies. However, given the inherent differences in the delivery of care among medical specialties, the same indicators will not be valid across all of them. Specifically, oncology is a field in which it can be difficult to develop quality indicators, because the effectiveness of an oncologic intervention is often not immediately apparent, and the multidisciplinary nature of the field necessarily involves many different specialties. Existing and emerging comparative effectiveness data are helping to guide evidence-based practice, and the increasing availability of these data provides the opportunity to identify key structure and process measures that predict for quality outcomes. The increasing emphasis on quality and efficiency will continue to compel the medical profession to identify appropriate quality measures to facilitate quality improvement efforts and to guide accreditation, credentialing, and reimbursement. Given the wide-reaching implications of quality metrics, it is essential that they be developed and implemented with scientific rigor. The aims of the present report were to review the current state of quality assessment in oncology, identify existing indicators with the best evidence to support their implementation, and propose a framework for identifying and refining measures most indicative of true quality in oncologic care.

  15. Pragmatic quality metrics for evolutionary software development models

    Science.gov (United States)

    Royce, Walker

    1990-01-01

    Due to the large number of product, project, and people parameters which impact large custom software development efforts, measurement of software product quality is a complex undertaking. Furthermore, the absolute perspective from which quality is measured (customer satisfaction) is intangible. While we probably can't say what the absolute quality of a software product is, we can determine the relative quality, the adequacy of this quality with respect to pragmatic considerations, and identify good and bad trends during development. While no two software engineers will ever agree on an optimum definition of software quality, they will agree that the most important perspective of software quality is its ease of change. We can call this flexibility, adaptability, or some other vague term, but the critical characteristic of software is that it is soft. The easier the product is to modify, the easier it is to achieve any other software quality perspective. This paper presents objective quality metrics derived from consistent lifecycle perspectives of rework which, when used in concert with an evolutionary development approach, can provide useful insight to produce better quality per unit cost/schedule or to achieve adequate quality more efficiently. The usefulness of these metrics is evaluated by applying them to a large, real world, Ada project.

  16. On the Efficiency of Image Metrics for Evaluating the Visual Quality of 3D Models.

    Science.gov (United States)

    Lavoue, Guillaume; Larabi, Mohamed Chaker; Vasa, Libor

    2016-08-01

    3D meshes are deployed in a wide range of application processes (e.g., transmission, compression, simplification, watermarking and so on) which inevitably introduce geometric distortions that may alter the visual quality of the rendered data. Hence, efficient model-based perceptual metrics, operating on the geometry of the meshes being compared, have recently been introduced to control and predict these visual artifacts. However, since the 3D models are ultimately visualized on 2D screens, it seems legitimate to use images of the models (i.e., snapshots from different viewpoints) to evaluate their visual fidelity. In this work we investigate the use of image metrics to assess the visual quality of 3D models. For this goal, we conduct a wide-ranging study involving several 2D metrics, rendering algorithms, lighting conditions and pooling algorithms, as well as several mean opinion score databases. The collected data allow us (1) to determine the best set of parameters to use for this image-based quality assessment approach and (2) to compare this approach to the best performing model-based metrics and determine the use cases for which each is best adapted. We conclude by exploring several applications that illustrate the benefits of image-based quality assessment. PMID:26394428

  17. A Validation of Object-Oriented Design Metrics as Quality Indicators

    Science.gov (United States)

    Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio

    1997-01-01

    This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described elsewhere, where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development process.
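
    As a hedged illustration of this kind of validation, the sketch below fits a logistic regression that predicts fault-proneness from the six Chidamber-Kemerer metrics (WMC, DIT, NOC, CBO, RFC, LCOM); the metric values and fault labels are invented for demonstration and are not the study's data.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      # Columns: WMC, DIT, NOC, CBO, RFC, LCOM (one row per class); synthetic values.
      X = np.array([[12, 2, 0, 5, 20, 3], [45, 4, 2, 14, 60, 30],
                    [8, 1, 0, 3, 15, 1], [50, 5, 3, 18, 75, 42],
                    [20, 2, 1, 7, 25, 8], [38, 3, 2, 12, 55, 25],
                    [10, 1, 0, 4, 12, 2], [42, 4, 1, 15, 68, 35]])
      y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = fault-prone class

      model = LogisticRegression(max_iter=1000)
      print(cross_val_score(model, X, y, cv=2).mean())  # cross-validated accuracy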

  18. Using Qualitative and Quantitative Methods to Choose a Habitat Quality Metric for Air Pollution Policy Evaluation.

    Science.gov (United States)

    Rowe, Edwin C; Ford, Adriana E S; Smart, Simon M; Henrys, Peter A; Ashmore, Mike R

    2016-01-01

    Atmospheric nitrogen (N) deposition has had detrimental effects on species composition in a range of sensitive habitats, although N deposition can also increase agricultural productivity and carbon storage, and favours a few species considered of importance for conservation. Conservation targets are multiple, and increasingly incorporate services derived from nature as well as concepts of intrinsic value. Priorities vary. How then should changes in a set of species caused by drivers such as N deposition be assessed? We used a novel combination of qualitative semi-structured interviews and quantitative ranking to elucidate the views of conservation professionals specialising in grasslands, heathlands and mires. Although conservation management goals are varied, terrestrial habitat quality is mainly assessed by these specialists on the basis of plant species, since these are readily observed. The presence and abundance of plant species that are scarce, or have important functional roles, emerged as important criteria for judging overall habitat quality. However, species defined as 'positive indicator-species' (not particularly scarce, but distinctive for the habitat) were considered particularly important. Scarce species are by definition not always found, and the presence of functionally important species is not a sufficient indicator of site quality. Habitat quality as assessed by the key informants was rank-correlated with the number of positive indicator-species present at a site for seven of the nine habitat classes assessed. Other metrics such as species-richness or a metric of scarcity were inconsistently or not correlated with the specialists' assessments. We recommend that metrics of habitat quality used to assess N pollution impacts are based on the occurrence of, or habitat-suitability for, distinctive species. Metrics of this type are likely to be widely applicable for assessing habitat change in response to different drivers. The novel combined…

  19. Using Qualitative and Quantitative Methods to Choose a Habitat Quality Metric for Air Pollution Policy Evaluation

    Science.gov (United States)

    Ford, Adriana E. S.; Smart, Simon M.; Henrys, Peter A.; Ashmore, Mike R.

    2016-01-01

    Atmospheric nitrogen (N) deposition has had detrimental effects on species composition in a range of sensitive habitats, although N deposition can also increase agricultural productivity and carbon storage, and favours a few species considered of importance for conservation. Conservation targets are multiple, and increasingly incorporate services derived from nature as well as concepts of intrinsic value. Priorities vary. How then should changes in a set of species caused by drivers such as N deposition be assessed? We used a novel combination of qualitative semi-structured interviews and quantitative ranking to elucidate the views of conservation professionals specialising in grasslands, heathlands and mires. Although conservation management goals are varied, terrestrial habitat quality is mainly assessed by these specialists on the basis of plant species, since these are readily observed. The presence and abundance of plant species that are scarce, or have important functional roles, emerged as important criteria for judging overall habitat quality. However, species defined as ‘positive indicator-species’ (not particularly scarce, but distinctive for the habitat) were considered particularly important. Scarce species are by definition not always found, and the presence of functionally important species is not a sufficient indicator of site quality. Habitat quality as assessed by the key informants was rank-correlated with the number of positive indicator-species present at a site for seven of the nine habitat classes assessed. Other metrics such as species-richness or a metric of scarcity were inconsistently or not correlated with the specialists’ assessments. We recommend that metrics of habitat quality used to assess N pollution impacts are based on the occurrence of, or habitat-suitability for, distinctive species. Metrics of this type are likely to be widely applicable for assessing habitat change in response to different drivers. The novel combined…

  20. Beyond metrics? Utilizing 'soft intelligence' for healthcare quality and safety.

    Science.gov (United States)

    Martin, Graham P; McKee, Lorna; Dixon-Woods, Mary

    2015-10-01

    Formal metrics for monitoring the quality and safety of healthcare have a valuable role, but may not, by themselves, yield full insight into the range of fallibilities in organizations. 'Soft intelligence' is usefully understood as the processes and behaviours associated with seeking and interpreting soft data (of the kind that evades easy capture, straightforward classification and simple quantification) to produce forms of knowledge that can provide the basis for intervention. With the aim of examining current and potential practice in relation to soft intelligence, we conducted and analysed 107 in-depth qualitative interviews with senior leaders, including managers and clinicians, involved in healthcare quality and safety in the English National Health Service. We found that participants were in little doubt about the value of softer forms of data, especially for their role in revealing troubling issues that might be obscured by conventional metrics. Their struggles lay in how to access softer data and turn them into a useful form of knowing. Some of the dominant approaches they used risked replicating the limitations of hard, quantitative data. They relied on processes of aggregation and triangulation that prioritised reliability, or on instrumental use of soft data to animate the metrics. The unpredictable, untameable, spontaneous quality of soft data could be lost in efforts to systematize their collection and interpretation to render them more tractable. A more challenging but potentially rewarding approach involved processes and behaviours aimed at disrupting taken-for-granted assumptions about quality, safety, and organizational performance. This approach, which explicitly values the seeking out and the hearing of multiple voices, is consistent with conceptual frameworks of organizational sensemaking and dialogical understandings of knowledge. Using soft intelligence this way can be challenging and discomfiting, but may offer a critical defence against the…

  1. Modeling quality attributes and metrics for web service selection

    Science.gov (United States)

    Oskooei, Meysam Ahmadi; Daud, Salwani binti Mohd; Chua, Fang-Fang

    2014-06-01

    Since service-oriented architecture (SOA) was designed to develop systems as distributed applications, service selection has become a vital aspect of service-oriented computing (SOC). Selecting the appropriate web service with respect to quality of service (QoS), typically by formulating the choice as a mathematical optimization problem, has become a common concern for service users. Nowadays, the number of web services that provide the same functionality has increased, and selecting a service from a set of alternatives that differ in their quality parameters can be difficult for service consumers. In this paper, a new model for QoS attributes and metrics is proposed to provide a suitable solution for optimizing web service selection and composition with low complexity.
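
    A minimal sketch of the weighted-sum style of QoS-based selection discussed above; the attribute names, values, and weights are illustrative assumptions, not the paper's model.

      # Score candidate services on normalized QoS attributes and pick the best.
      services = {
          "S1": {"response_ms": 120, "availability": 0.990, "cost": 0.05},
          "S2": {"response_ms": 80,  "availability": 0.970, "cost": 0.09},
          "S3": {"response_ms": 200, "availability": 0.995, "cost": 0.02},
      }
      benefit = {"availability"}            # higher is better; the rest are costs
      weights = {"response_ms": 0.4, "availability": 0.4, "cost": 0.2}

      def normalize(attr):
          vals = [s[attr] for s in services.values()]
          lo, hi = min(vals), max(vals)
          return {name: (s[attr] - lo) / (hi - lo) if attr in benefit
                        else (hi - s[attr]) / (hi - lo)
                  for name, s in services.items()}

      norm = {attr: normalize(attr) for attr in weights}
      scores = {name: sum(w * norm[a][name] for a, w in weights.items())
                for name in services}
      print(max(scores, key=scores.get), scores)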

  2. FCE: A QUALITY METRIC FOR COTS BASED SOFTWARE DESIGN

    Directory of Open Access Journals (Sweden)

    M.V. Vijaya Saradhi

    2010-05-01

    Component-based software development aims at building large software systems by combining existing software components. Before integrating different components, one first needs to determine whether their functional and non-functional properties are feasible and appropriate for integration into the new system. Deriving a quality measure for reusable components has proven to be a challenging task. This paper proposes a quality metric that provides benefits at both the project and process levels, namely Fault Clearance Effectiveness (FCE). The paper identifies the characteristics a component should have so that it can be reused repeatedly. Component qualification is a process of determining the fitness for use of existing components that will be employed to develop a new system.

  3. A task-based quality control metric for digital mammography

    International Nuclear Information System (INIS)

    A reader study was conducted to tune the parameters of an observer model used to predict the detectability index (d′) of test objects as a task-based quality control (QC) metric for digital mammography. A simple test phantom was imaged to measure the model parameters, namely, the noise power spectrum, the modulation transfer function and the test-object contrast. These are then used in a non-prewhitening observer model, incorporating an eye filter and internal noise, to predict d′. The model was tuned by measuring d′ of discs in a four-alternative forced-choice reader study. For each disc diameter, d′ was used to estimate the threshold thickness for detectability. Data were obtained for six types of digital mammography systems using varying detector technologies and x-ray spectra. A strong correlation was found between measured and modeled values of d′, with a Pearson correlation coefficient of 0.96. Repeated measurements from separate images of the test phantom show an average coefficient of variation in d′ for different systems between 0.07 and 0.10. Standard deviations in the threshold thickness ranged between 0.001 and 0.017 mm. The model is robust and the results are relatively system independent, suggesting that the observer-model d′ shows promise as a cross-platform QC metric for digital mammography.
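
    The structure of such a non-prewhitening-with-eye-filter (NPWE) calculation can be sketched as below; the MTF, NPS, eye-filter shape and disc parameters are assumed illustrative curves, not the measured data of the study, and internal noise is omitted for brevity.

      import numpy as np
      from scipy.special import j1
      from scipy.integrate import trapezoid

      f = np.linspace(0.05, 5.0, 500)            # radial frequency, cycles/mm
      mtf = np.exp(-f / 2.0)                     # assumed system MTF
      nps = 1e-5 * (1.0 + 1.0 / f)               # assumed noise power spectrum
      eye = f * np.exp(-0.6 * f)                 # simple band-pass eye filter

      C, r = 0.02, 0.125                         # disc contrast and radius (mm)
      task = C * (r / f) * j1(2 * np.pi * f * r) # Fourier transform of the disc

      w = task * mtf                             # detected signal template
      num = 2 * np.pi * trapezoid(f * (w * eye) ** 2, f)
      den = 2 * np.pi * trapezoid(f * w ** 2 * eye ** 4 * nps, f)
      print("d' =", num / np.sqrt(den))          # NPWE detectability index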

  4. "Assessment of different bioequivalent metrics in Rifampin bioequivalence study "

    Directory of Open Access Journals (Sweden)

    "Rouini MR

    2002-08-01

    The use of secondary metrics has become of special interest in bioequivalence studies. The applicability of the partial-area method, truncated AUC and Cmax/AUC has been debated by many authors. This study aims to evaluate the possible superiority of these metrics over the primary metrics (i.e., AUCinf, Cmax and Tmax). The suitability of truncated AUC for assessing the extent of absorption, and of Cmax/AUC and partial AUC for evaluating the rate of absorption in bioequivalence determination, was investigated following administration of the same product as both test and reference to 7 healthy volunteers. Among the pharmacokinetic parameters obtained, Cmax/AUCinf was a better indicator of absorption rate, and AUCinf was more sensitive than truncated AUC in evaluating the extent of absorption.
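
    For concreteness, the primary and secondary metrics named above can be computed from a concentration-time profile as in this sketch; the sampling times and concentrations are synthetic, not the study's rifampin data.

      import numpy as np
      from scipy.integrate import trapezoid

      t = np.array([0, 0.5, 1, 2, 4, 8, 12, 24.0])           # time, h
      c = np.array([0, 3.1, 6.8, 9.5, 7.2, 3.9, 2.1, 0.4])   # concentration, mg/L

      cmax, tmax = c.max(), t[c.argmax()]
      auc_t = trapezoid(c, t)                                 # AUC to last sample
      kel = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]         # terminal rate constant
      auc_inf = auc_t + c[-1] / kel                           # extrapolated AUCinf
      auc_trunc = trapezoid(c[t <= 4], t[t <= 4])             # truncated AUC (to 4 h)

      print(cmax, tmax, auc_inf, cmax / auc_inf, auc_trunc)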

  5. Operator-based metric for nuclear operations automation assessment

    Energy Technology Data Exchange (ETDEWEB)

    Zacharias, G.L.; Miao, A.X.; Kalkan, A. [Charles River Analytics Inc., Cambridge, MA (United States)] [and others]

    1995-04-01

    Continuing advances in real-time computational capabilities will support enhanced levels of smart automation and AI-based decision-aiding systems in the nuclear power plant (NPP) control room of the future. To support development of these aids, we describe in this paper a research tool, and more specifically, a quantitative metric, to assess the impact of proposed automation/aiding concepts in a manner that can account for a number of interlinked factors in the control room environment. In particular, we describe a cognitive operator/plant model that serves as a framework for integrating the operator's information-processing capabilities with his procedural knowledge, to provide insight as to how situations are assessed by the operator, decisions made, procedures executed, and communications conducted. Our focus is on the situation assessment (SA) behavior of the operator, the development of a quantitative metric reflecting overall operator awareness, and the use of this metric in evaluating automation/aiding options. We describe the results of a model-based simulation of a selected emergency scenario, and metric-based evaluation of a range of contemplated NPP control room automation/aiding options. The results demonstrate the feasibility of model-based analysis of contemplated control room enhancements, and highlight the need for empirical validation.

  6. Operator-based metric for nuclear operations automation assessment

    International Nuclear Information System (INIS)

    Continuing advances in real-time computational capabilities will support enhanced levels of smart automation and AI-based decision-aiding systems in the nuclear power plant (NPP) control room of the future. To support development of these aids, we describe in this paper a research tool, and more specifically, a quantitative metric, to assess the impact of proposed automation/aiding concepts in a manner that can account for a number of interlinked factors in the control room environment. In particular, we describe a cognitive operator/plant model that serves as a framework for integrating the operator's information-processing capabilities with his procedural knowledge, to provide insight as to how situations are assessed by the operator, decisions made, procedures executed, and communications conducted. Our focus is on the situation assessment (SA) behavior of the operator, the development of a quantitative metric reflecting overall operator awareness, and the use of this metric in evaluating automation/aiding options. We describe the results of a model-based simulation of a selected emergency scenario, and metric-based evaluation of a range of contemplated NPP control room automation/aiding options. The results demonstrate the feasibility of model-based analysis of contemplated control room enhancements, and highlight the need for empirical validation

  7. Enhancing the quality metric of protein microarray image

    Institute of Scientific and Technical Information of China (English)

    王立强; 倪旭翔; 陆祖康; 郑旭峰; 李映笙

    2004-01-01

    The novel method presented in this paper for improving the quality metric of protein microarray images reduces impulse noise by using an adaptive median filter that employs a switching scheme based on local statistical characteristics, detecting impulses from the difference between the standard deviation of the pixels within the filter window and the pixel of concern. It also uses a top-hat filter to correct background variation. To decrease time consumption, the top-hat filter kernel has a cross structure. The experimental results showed that, for a protein microarray image contaminated by impulse noise and with slow background variation, the new method can significantly increase the signal-to-noise ratio, correct trends in the background, and enhance the flatness of the background and the consistency of the signal intensity.
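
    A rough sketch of the two filtering stages described above, using SciPy; the window size, detection threshold and structuring element are assumptions, not the authors' exact parameters.

      import numpy as np
      from scipy import ndimage

      def switching_median(img, size=3, k=2.0):
          # Replace a pixel with the local median only when it deviates
          # from the median by more than k local standard deviations.
          med = ndimage.median_filter(img, size=size)
          std = ndimage.generic_filter(img, np.std, size=size)
          out = img.copy()
          impulses = np.abs(img - med) > k * (std + 1e-9)
          out[impulses] = med[impulses]
          return out

      def tophat_background(img):
          # White top-hat with a cross-shaped footprint: subtracts the
          # morphological opening, flattening slow background variation.
          cross = ndimage.generate_binary_structure(2, 1)   # 3x3 cross
          return ndimage.white_tophat(img, footprint=cross)

      img = np.random.default_rng(0).random((64, 64))
      print(tophat_background(switching_median(img)).shape)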

  8. Digitization and metric conversion for image quality test targets: Part II

    Science.gov (United States)

    Kress, William C.

    2003-12-01

    A common need of the INCITS W1.1 Macro Uniformity, Color Rendition and Micro Uniformity ad hoc efforts is to digitize image quality test targets and derive parameters that correlate with image quality assessments. The digitized data should be in a colorimetric color space such as CIELAB, and the process of digitizing should introduce no spatial artifacts that reduce the accuracy of image quality parameters. Input digitizers come in many forms, including inexpensive scanners used in the home, a range of sophisticated scanners used for graphic arts, and scanners used for scientific and industrial measurements (e.g., microdensitometers). Some of these are capable of digitizing hard copy output for objective image quality metrics, and this report focuses on assessing high-quality flatbed scanners for that role. Digitization using flatbed scanners is attractive because they are relatively inexpensive, easy to use, and most are available with document feeders permitting analysis of a stack of documents with little user interaction. Other authors have addressed using scanners for image quality measurements. This paper focuses on (1) color transformations from RGB to CIELAB and (2) sampling issues, and demonstrates that flatbed scanners can achieve a high level of accuracy in generating stable images in the CIELAB metric. Previous discussion and experimental results focusing on color conversions were presented at PICS 2003. This paper reviews that discussion with some refinement based on recent experiments and extends the analysis into color accuracy verification and sampling issues.
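
    The standard sRGB-to-CIELAB conversion underlying such a workflow can be written compactly; this is a generic D65 implementation for a single pixel, not the scanner-specific characterization the paper builds.

      import numpy as np

      M = np.array([[0.4124, 0.3576, 0.1805],     # sRGB (linear) -> XYZ, D65
                    [0.2126, 0.7152, 0.0722],
                    [0.0193, 0.1192, 0.9505]])
      WHITE = np.array([0.95047, 1.0, 1.08883])   # D65 reference white

      def srgb_to_lab(rgb):                       # rgb components in [0, 1]
          rgb = np.asarray(rgb, float)
          lin = np.where(rgb <= 0.04045, rgb / 12.92,
                         ((rgb + 0.055) / 1.055) ** 2.4)   # undo sRGB gamma
          xyz = M @ lin / WHITE
          fx = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                        xyz / (3 * (6 / 29) ** 2) + 4 / 29)
          return np.array([116 * fx[1] - 16,        # L*
                           500 * (fx[0] - fx[1]),   # a*
                           200 * (fx[1] - fx[2])])  # b*

      print(srgb_to_lab([0.5, 0.5, 0.5]))  # mid gray -> L* ~ 53.4, a*, b* ~ 0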

  9. Macroinvertebrate and diatom metrics as indicators of water-quality conditions in connected depression wetlands in the Mississippi Alluvial Plain

    Science.gov (United States)

    Justus, Billy; Burge, David; Cobb, Jennifer; Marsico, Travis; Bouldin, Jennifer

    2016-01-01

    Methods for assessing wetland conditions must be established so wetlands can be monitored and ecological services can be protected. We evaluated biological indices compiled from macroinvertebrate and diatom metrics developed primarily for streams to assess their ability to indicate water quality in connected depression wetlands. We collected water-quality and biological samples at 24 connected depressions dominated by water tupelo (Nyssa aquatica) or bald cypress (Taxodium distichum) (water depths = 0.5–1.0 m). Water quality of the least-disturbed connected depressions was characteristic of swamps in the southeastern USA, which tend to have low specific conductance, nutrient concentrations, and pH. We compared 162 macroinvertebrate metrics and 123 diatom metrics with a water-quality disturbance gradient. For most metrics, we evaluated richness, % richness, abundance, and % relative abundance values. Three of the 4 macroinvertebrate metrics that were most beneficial for identifying disturbance in connected depressions decreased along the disturbance gradient even though they normally increase relative to stream disturbance. The negative relationship to disturbance of some taxa (e.g., dipterans, mollusks, and crustaceans) that are considered tolerant in streams suggests that the tolerance scale for some macroinvertebrates can differ markedly between streams and wetlands. Three of the 4 metrics chosen for the diatom index reflected published tolerances or fit the usual perception of metric response to disturbance. Both biological indices may be useful in connected depressions elsewhere in the Mississippi Alluvial Plain Ecoregion and could have application in other wetland types. Given the paradoxical relationship of some macroinvertebrate metrics to dissolved O2 (DO), we suggest that the diatom metrics may be easier to interpret and defend for wetlands with low DO concentrations in least-disturbed conditions.

  10. Sigma metrics in clinical chemistry laboratory – A guide to quality control

    Directory of Open Access Journals (Sweden)

    Usha S. Adiga

    2015-10-01

    Background: Six Sigma is a quality measurement and improvement program used in industry. Sigma methodology can be applied wherever an outcome of a process is to be measured; a poor outcome is counted as an error or defect, quantified as defects per million (DPM). Six Sigma provides a quantitative framework for evaluating process performance, with evidence for process improvement, and describes how many sigmas fit within the tolerance limits. Sigma metrics can be used effectively in laboratory services. The present study was undertaken to evaluate the quality of the analytical performance of a clinical chemistry laboratory by calculating sigma metrics. Methodology: The study was conducted in the clinical biochemistry laboratory of Karwar Institute of Medical Sciences, Karwar. Sigma metrics of 15 parameters were analyzed on an automated chemistry analyzer, Transasia XL 640. The analytes assessed were glucose, urea, creatinine, uric acid, total bilirubin (BT), direct bilirubin (BD), total protein, albumin, SGOT, SGPT, ALP, total cholesterol, triglycerides, HDL and calcium. Results: Sigma values were <3 for urea, ALT, BD, BT, Ca and creatinine (L1) and for urea, AST and BD (L2). Sigma was between 3 and 6 for glucose, AST, cholesterol, uric acid and total protein (L1) and for ALT, cholesterol, BT, calcium, creatinine and glucose (L2). Sigma was more than 6 for triglycerides, ALP, HDL and albumin (L1) and for TG, uric acid, ALP, HDL, albumin and total protein (L2). Conclusion: Sigma metrics help to assess analytical methodologies and augment laboratory performance. They act as a guide for planning quality control strategy, and can be a self-assessment tool regarding the functioning of a clinical laboratory.
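
    The sigma metric itself is a one-line calculation once the allowable total error (TEa), bias and coefficient of variation (CV) are known; the glucose numbers below are illustrative, not the study's measurements.

      def sigma_metric(tea_pct, bias_pct, cv_pct):
          # Sigma = (TEa - |bias|) / CV, with all inputs in percent.
          return (tea_pct - abs(bias_pct)) / cv_pct

      # e.g. allowable total error 10%, observed bias 2%, CV 1.8%
      print(round(sigma_metric(10, 2, 1.8), 2))   # -> 4.44, i.e. the 3-6 sigma band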

  11. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale

    CERN Document Server

    Emmons, Scott; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie network clustering. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is lacking. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms -- Blondel, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 o...
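
    The stand-alone versus information-recovery distinction can be reproduced in a few lines with NetworkX and scikit-learn; label propagation and the karate-club graph stand in for the paper's larger algorithms and datasets.

      import networkx as nx
      from networkx.algorithms.community import label_propagation_communities, modularity
      from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

      G = nx.karate_club_graph()
      comms = list(label_propagation_communities(G))

      # Stand-alone quality metric: modularity of the found clustering.
      print("modularity:", modularity(G, comms))

      # Information-recovery metrics against the known two-faction split.
      truth = [G.nodes[n]["club"] for n in G]
      pred = [next(i for i, c in enumerate(comms) if n in c) for n in G]
      print("ARI:", adjusted_rand_score(truth, pred))
      print("NMI:", normalized_mutual_info_score(truth, pred))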

  12. SU-E-J-155: Automatic Quantitative Decision Making Metric for 4DCT Image Quality

    International Nuclear Information System (INIS)

    Purpose: To develop a quantitative decision-making metric for automatically detecting irregular breathing, using a large patient population that received phase-sorted 4DCT. Methods: This study employed two patient cohorts. Cohort#1 contained 256 patients who received a phase-sorted 4DCT. Cohort#2 contained 86 patients who received three weekly phase-sorted 4DCT scans. A previously published technique used a single abdominal surrogate to calculate the ratio of extreme inhalation tidal volume to normal inhalation tidal volume, referred to as the κ metric. Since a single surrogate is standard for phase-sorted 4DCT in radiation oncology clinical practice, tidal volume was not quantified. Without tidal volume, the absolute κ metric could not be determined, so a relative κ (κrel) metric was defined based on the measured surrogate amplitude instead of tidal volume. Receiver operating characteristic (ROC) curves were used to quantitatively determine the optimal cutoff value (jk) and efficiency cutoff value (τk) of κrel to automatically identify irregular breathing that would reduce the image quality of phase-sorted 4DCT. Discriminatory accuracy (area under the ROC curve) of κrel was calculated by a trapezoidal numeric integration technique. Results: The discriminatory accuracy of κrel was found to be 0.746. The key values of jk and τk were calculated to be 1.45 and 1.72, respectively. For values of κrel such that jk≤κrel≤τk, the decision to reacquire the 4DCT would be at the discretion of the physician; this accounted for only 11.9% of the patients in this study. The magnitude of κrel held consistent over 3 weeks for 73% of the patients in cohort#2. Conclusion: The decision-making metric κrel was shown to be an accurate classifier of irregular-breathing patients in a large patient population. This work provides an automatic quantitative decision-making metric to quickly and accurately assess the extent to which irregular breathing is occurring during phase…

  13. SU-E-J-155: Automatic Quantitative Decision Making Metric for 4DCT Image Quality

    Energy Technology Data Exchange (ETDEWEB)

    Kiely, J Blanco; Olszanski, A; Both, S; White, B [University of Pennsylvania, Philadelphia, PA (United States); Low, D [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA (United States)

    2015-06-15

    Purpose: To develop a quantitative decision-making metric for automatically detecting irregular breathing, using a large patient population that received phase-sorted 4DCT. Methods: This study employed two patient cohorts. Cohort#1 contained 256 patients who received a phase-sorted 4DCT. Cohort#2 contained 86 patients who received three weekly phase-sorted 4DCT scans. A previously published technique used a single abdominal surrogate to calculate the ratio of extreme inhalation tidal volume to normal inhalation tidal volume, referred to as the κ metric. Since a single surrogate is standard for phase-sorted 4DCT in radiation oncology clinical practice, tidal volume was not quantified. Without tidal volume, the absolute κ metric could not be determined, so a relative κ (κrel) metric was defined based on the measured surrogate amplitude instead of tidal volume. Receiver operating characteristic (ROC) curves were used to quantitatively determine the optimal cutoff value (jk) and efficiency cutoff value (τk) of κrel to automatically identify irregular breathing that would reduce the image quality of phase-sorted 4DCT. Discriminatory accuracy (area under the ROC curve) of κrel was calculated by a trapezoidal numeric integration technique. Results: The discriminatory accuracy of κrel was found to be 0.746. The key values of jk and τk were calculated to be 1.45 and 1.72, respectively. For values of κrel such that jk≤κrel≤τk, the decision to reacquire the 4DCT would be at the discretion of the physician; this accounted for only 11.9% of the patients in this study. The magnitude of κrel held consistent over 3 weeks for 73% of the patients in cohort#2. Conclusion: The decision-making metric κrel was shown to be an accurate classifier of irregular-breathing patients in a large patient population. This work provides an automatic quantitative decision-making metric to quickly and accurately assess the extent to which irregular breathing is occurring during phase…
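
    The cutoff-selection step described in both records above reduces to a standard ROC analysis over per-patient κrel values; the sketch below uses synthetic values and labels, and picks a single cutoff by Youden's J rather than the paper's two-threshold scheme.

      import numpy as np
      from sklearn.metrics import roc_curve, auc

      # Synthetic per-patient kappa_rel values and irregular-breathing labels.
      kappa_rel = np.array([1.1, 1.3, 1.4, 1.5, 1.6, 1.8, 2.0, 2.4])
      irregular = np.array([0,   0,   0,   1,   0,   1,   1,   1])

      fpr, tpr, thresholds = roc_curve(irregular, kappa_rel)
      print("AUC:", auc(fpr, tpr))
      best = np.argmax(tpr - fpr)                 # Youden's J statistic
      print("cutoff:", thresholds[best])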

  14. Setting Maintenance Quality Objectives and Prioritizing Maintenance Work by Using Quality Metrics

    OpenAIRE

    Schneidewind, Norman F.

    1991-01-01

    We show how metrics that are collected and validated during development can be used during maintenance to control quality and prioritize maintenance work. Our approach is to capitalize on knowledge acquired and experience gained with the software during development through measurement. The motivation for this research stems from the need to provide maintenance management with the following: 1) quantitative basis for establishing quality objectives during ...

  15. Assessment and improvement of radiation oncology trainee contouring ability utilizing consensus-based penalty metrics

    International Nuclear Information System (INIS)

    The objective of this study was to develop and assess the feasibility of utilizing consensus-based penalty metrics for the purpose of critical structure and organ-at-risk (OAR) contouring quality assurance and improvement. A Delphi study was conducted to obtain consensus on contouring penalty metrics to assess trainee-generated OAR contours. Voxel-based penalty metric equations were used to score regions of discordance between trainee and expert contour sets. The utility of these penalty metric scores for objective feedback on contouring quality was assessed using cases prepared for weekly radiation oncology trainee treatment planning rounds. In two Delphi rounds, six radiation oncology specialists reached agreement on clinical importance/impact and organ radiosensitivity as the two primary criteria for the creation of the Critical Structure Inter-comparison of Segmentation (CriSIS) penalty functions. Linear/quadratic penalty scoring functions (for over- and under-contouring) with one of four levels of severity (none, low, moderate and high) were assigned for each of 20 OARs in order to generate a CriSIS score when new OAR contours are compared with reference/expert standards. Six cases (central nervous system, head and neck, gastrointestinal, genitourinary, gynaecological and thoracic) were then used to validate 18 OAR metrics through comparison of trainee and expert contour sets using the consensus-derived CriSIS functions. For 14 OARs, there was an improvement in CriSIS score post-educational intervention. The use of consensus-based contouring penalty metrics to provide quantitative information for contouring improvement is feasible.
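
    A voxel-based over/under-contouring penalty of the general shape described above can be sketched with boolean masks; the weights, severity levels and normalization here are placeholders, not the published CriSIS functions.

      import numpy as np

      SEVERITY = {"none": 0.0, "low": 1.0, "moderate": 2.0, "high": 4.0}

      def penalty_score(trainee, expert, over="low", under="high", quadratic=False):
          # Score discordant voxels between two binary masks, weighting
          # under-contouring (missed organ) differently from over-contouring.
          over_vox = np.logical_and(trainee, ~expert).sum()
          under_vox = np.logical_and(~trainee, expert).sum()
          raw = SEVERITY[over] * over_vox + SEVERITY[under] * under_vox
          raw /= max(expert.sum(), 1)              # normalize by organ volume
          return raw ** 2 if quadratic else raw

      expert = np.zeros((32, 32, 32), bool); expert[8:24, 8:24, 8:24] = True
      trainee = np.zeros_like(expert); trainee[10:26, 8:24, 8:24] = True
      print(penalty_score(trainee, expert))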

  16. Benchmarking of quality metrics on ultra-high definition video sequences

    OpenAIRE

    Hanhart, Philippe; Korshunov, Pavel; Ebrahimi, Touradj

    2013-01-01

    The performance of objective quality metrics for high-definition (HD) video sequences is well studied, but little is known about their performance for ultra-high definition (UHD) video sequences. This paper analyzes the performance of several common objective quality metrics (PSNR, VSNR, SSIM, MS-SSIM, VIF, and VQM) on three different 4K UHD video sequences using subjective scores as ground truth. The findings confirm the content-dependent nature of most metrics (with VIF being the only exce...
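
    Benchmarking of this kind usually comes down to correlating objective metric scores with subjective ground truth; a sketch with synthetic numbers standing in for the paper's UHD measurements:

      import numpy as np
      from scipy.stats import pearsonr, spearmanr

      metric_scores = np.array([32.1, 35.4, 38.2, 40.9, 29.5, 42.3])  # e.g. PSNR, dB
      mos = np.array([2.1, 3.0, 3.6, 4.2, 1.8, 4.5])                  # subjective scores

      print("PLCC:", pearsonr(metric_scores, mos)[0])    # prediction linearity
      print("SROCC:", spearmanr(metric_scores, mos)[0])  # prediction monotonicity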

  17. Video Object Relevance Metrics for Overall Segmentation Quality Evaluation

    OpenAIRE

    Correia Paulo; Pereira Fernando

    2006-01-01

    Video object segmentation is a task that humans perform efficiently and effectively, but which is difficult for a computer to perform. Since video segmentation plays an important role for many emerging applications, as those enabled by the MPEG-4 and MPEG-7 standards, the ability to assess the segmentation quality in view of the application targets is a relevant task for which a standard, or even a consensual, solution is not available. This paper considers the evaluation of overall segmenta...

  18. Design and Implementation of Performance Metrics for Evaluation of Assessments Data

    CERN Document Server

    Ahmed, Irfan

    2015-01-01

    The objective of this paper is to design performance metrics and respective formulas to quantitatively evaluate the achievement of set objectives and expected outcomes at both the course and program levels. Evaluation is defined as one or more processes for interpreting the data acquired through the assessment processes in order to determine how well the set objectives and outcomes are being attained. Although assessment processes for accreditation are well documented, the existence of an evaluation process is usually assumed rather than described. This paper focuses on the evaluation process to provide insights and techniques for data interpretation. It presents a complete evaluation process, from data collection through various assessment methods and performance metrics to presentation in the form of tables and graphs. The authors hope that the articulated description of the evaluation formulas will help convergence toward a high-quality standard in the evaluation process.
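
    One common formula of this kind, the attainment of a course outcome as the fraction of students clearing a threshold, can be stated directly; the threshold and target values below are illustrative assumptions.

      def outcome_attainment(scores, threshold=60.0, target=0.70):
          # Return the attainment fraction and whether the target is met.
          attained = sum(s >= threshold for s in scores) / len(scores)
          return attained, attained >= target

      print(outcome_attainment([55, 72, 80, 64, 48, 91, 69, 75]))  # (0.75, True)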

  19. Economic Benefits: Metrics and Methods for Landscape Performance Assessment

    Directory of Open Access Journals (Sweden)

    Zhen Wang

    2016-04-01

    This paper introduces an expanding research frontier in the landscape architecture discipline, landscape performance research, which embraces the scientific dimension of landscape architecture through evidence-based designs that are anchored in quantitative performance assessment. Specifically, this paper summarizes metrics and methods for determining landscape-derived economic benefits that have been utilized in the Landscape Performance Series (LPS) initiated by the Landscape Architecture Foundation. This paper identifies 24 metrics and 32 associated methods for the assessment of economic benefits found in 82 published case studies. Common issues arising through research in quantifying economic benefits for the LPS are discussed, and the various approaches taken by researchers are clarified. The paper also provides an analysis of three case studies from the LPS that are representative of common research methods used to quantify economic benefits. The paper suggests that high(er) levels of sustainability in the built environment require the integration of economic benefits into landscape performance assessment portfolios in order to forecast project success and reduce uncertainties. Evidence-based design approaches therefore increase the scientific rigor of landscape architecture education and research, and elevate the status of the profession.

  20. Metrics-based assessments of research: incentives for 'institutional plagiarism'?

    Science.gov (United States)

    Berry, Colin

    2013-06-01

    The issue of plagiarism, claiming credit for work that is not one's own, rightly continues to cause concern in the academic community. An analysis is presented that shows the effects that may arise from metrics-based assessments of research, when credit for an author's outputs (chiefly publications) is given to an institution that did not support the research but which subsequently employs the author. The incentives for what is termed here "institutional plagiarism" are demonstrated with reference to the UK Research Assessment Exercise, in which submitting units of assessment are shown in some instances to derive around twice the credit for papers produced elsewhere by new recruits, compared to papers produced in-house. PMID:22371031

  1. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
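
    One plausible way to fold quality and speed metrics into a single score, as the paper proposes, is min-max normalization followed by a weighted sum; the weights and direction conventions here are assumptions, not the paper's formula.

      import numpy as np

      def combined_score(quality, speed, w_quality=0.5, w_speed=0.5):
          # Min-max normalize each metric across devices (higher = better),
          # then combine with fixed weights into a single benchmark score.
          def minmax(v):
              v = np.asarray(v, float)
              return (v - v.min()) / (v.max() - v.min())
          return w_quality * minmax(quality) + w_speed * minmax(speed)

      quality = [71, 65, 80, 58]      # e.g. an image quality index per phone
      speed = [3.1, 4.5, 2.2, 5.0]    # e.g. shots per second per phone
      print(combined_score(quality, speed))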

  2. Large-scale seismic waveform quality metric calculation using Hadoop

    Science.gov (United States)

    Magana-Zook, S.; Gaylord, J. M.; Knapp, D. R.; Dodge, D. A.; Ruppert, S. D.

    2016-09-01

    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data, of which 5.1 TB were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of ~0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. These experiments were conducted multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely…
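
    The map-style parallelism described above can be illustrated with a tiny PySpark job; the metric computed here (RMS amplitude and a dead-channel flag) and the in-memory segments are stand-ins for the real waveform files and metrics of the study.

      import numpy as np
      from pyspark import SparkContext

      sc = SparkContext(appName="waveform-qc")

      def qc(segment):
          # Compute simple quality metrics for one waveform segment.
          station, samples = segment
          x = np.asarray(samples, float)
          return station, {"rms": float(np.sqrt((x ** 2).mean())),
                           "dead": bool(np.allclose(x, x[0]))}

      segments = [("STA1", [0.1, -0.2, 0.3, 0.05]),   # stand-in waveform data
                  ("STA2", [0.0, 0.0, 0.0, 0.0])]
      print(sc.parallelize(segments).map(qc).collect())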

  3. Quality Metrics and Reliability Analysis of Laser Communication System

    Directory of Open Access Journals (Sweden)

    A. Arockia Bazil Raj

    2016-03-01

    Beam wandering is the main cause of power loss in laser communication. To analyse this effect in our environment, a 155 Mbps data transmission experimental setup was built with the necessary optoelectronic components for a link range of 0.5 km at an altitude of 15.25 m. A neuro-controller was developed inside the FPGA and used to stabilise the received beam at the centre of the detector plane. The Q-factor and bit error rate variation profiles are calculated using the signal statistics obtained from the eye diagram. The performance improvements of the laser communication system due to the incorporation of beam-wandering mitigation control are investigated and discussed in terms of various key communication quality assessment parameters. Defence Science Journal, Vol. 66, No. 2, March 2016, pp. 175-185, DOI: http://dx.doi.org/10.14429/dsj.66.9707
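
    The Q-factor and the corresponding bit error rate follow directly from the eye-diagram level statistics; the formula below is the standard Gaussian approximation for on-off keying, with illustrative numbers rather than the paper's measurements.

      import numpy as np
      from scipy.special import erfc

      def q_factor(mu1, sigma1, mu0, sigma0):
          # Q from the means and standard deviations of the 1 and 0 rails.
          return (mu1 - mu0) / (sigma1 + sigma0)

      def ber(q):
          return 0.5 * erfc(q / np.sqrt(2.0))

      q = q_factor(mu1=1.0, sigma1=0.08, mu0=0.1, sigma0=0.06)
      print(q, ber(q))   # Q ~ 6.4 -> BER on the order of 1e-10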

  4. Trend Analysis of Key Cellular Network Quality Performance Metrics

    Directory of Open Access Journals (Sweden)

    Patrick O. Olabisi

    2014-07-01

    Assessment and analysis of the key quality performance indicators of a cellular network are better done over a period of days or months, as in this work, than over hourly periods of the day or in an isolated manner, in order to gain a proper perspective on the reliability of the network or of its base stations (BSs). This makes it possible to investigate the social and environmental factors that may affect the functionality, reliability, and capacity of the network systems. An effect on one key performance indicator proved likely to affect all other performance indicators of the network or its base stations, as was observed chiefly on the fourth day of our measurements: with the highest total traffic occurring on that day, the other indicators also worsened, degrading the service quality experienced by users. The KPIs considered were Total Traffic, CSSR, CDR, HoSR, SDCCH Cong, SDR, TCH Cong and TCHA BH.

  5. Fostering software quality assessment

    OpenAIRE

    Brandtner, Martin

    2013-01-01

    Software quality assessment shall monitor and guide the evolution of a system based on quality measurements. This continuous process should ideally involve multiple stakeholders and provide adequate information for each of them to use. We want to support an effective selection of quality measurements based on the type of software and individual information needs of the involved stakeholders. We propose an approach that brings together quality measurements and individual information needs for ...

  6. Assessing exposure metrics for PM and birth weight models.

    Science.gov (United States)

    Gray, Simone C; Edwards, Sharon E; Miranda, Marie Lynn

    2010-07-01

    The link between air pollution exposure and adverse birth outcomes is of public health concern due to the relationship between poor pregnancy outcomes and the onset of childhood and adult diseases. As personal exposure measurements are difficult and expensive to obtain, proximate measures of air pollution exposure are traditionally used. We explored how different air pollution exposure metrics affect birth weight regression models. We examined the effect of maternal exposure to ambient levels of particulate matter (PM10 and PM2.5) during pregnancy on birth weight in North Carolina for 2000-2002 (n=350,754). County-level averages of air pollution concentrations were estimated for the entire pregnancy and for each trimester. For a more finely spatially resolved metric, we calculated exposure averages for women living within 20, 10, and 5 km of a monitor. Multiple linear regression was used to determine the association between exposure and birth weight, adjusting for standard covariates. In the county-level model, an interquartile increase in PM(10) and PM(2.5) during the entire gestational period reduced birth weight by 5.3 g (95% CI: 3.3-7.4) and 4.6 g (95% CI: 2.3-6.8), respectively. This model also showed a reduction in birth weight for PM(10) (7.1 g, 95% CI: 1.0-13.2) and PM(2.5) (10.4 g, 95% CI: 6.4-14.4) during the third trimester. Proximity models for 20, 10, and 5 km distances showed results similar to the county-level models. County-level models assume that exposure is spatially homogeneous over a larger surface area than proximity models. Sensitivity analysis showed that at varying spatial resolutions there is still a stable, negative association between air pollution and birth weight, despite North Carolina's consistent attainment of federal air quality standards. PMID:19773814

  7. Pragmatic guidelines and quality metrics in business process modeling: a case study

    Directory of Open Access Journals (Sweden)

    Isel Moreno-Montes-de-Oca

    2014-04-01

    Business process modeling is one of the first steps towards achieving organizational goals, which is why business process modeling quality is an essential aspect of the development and technical support of any company. This work focuses on the quality of business process models at the conceptual level (design and evaluation). In the literature, some works propose practical guidelines for modeling, while others focus on quality metrics that allow the evaluation of the models. In this paper we use practical guidelines during the modeling phase of a business process for postgraduate studies. We applied a set of quality metrics and compared the results with those obtained from a similar model built without guidelines. The results provide support for the use of guidelines as a way to improve business process modeling quality, and for the practical utility of quality metrics in model evaluation.

  8. Design of video quality metrics with multi-way data analysis a data driven approach

    CERN Document Server

    Keimel, Christian

    2016-01-01

    This book proposes a data-driven methodology using multi-way data analysis for the design of video-quality metrics. It also enables video-quality metrics to be created using arbitrary features. This data-driven design approach not only requires no detailed knowledge of the human visual system, but also allows a proper consideration of the temporal nature of video using a three-way prediction model, corresponding to the three-way structure of video. Using two simple example metrics, the author demonstrates not only that this purely data-driven approach outperforms state-of-the-art video-quality metrics, which are often optimized for specific properties of the human visual system, but also that multi-way data analysis methods outperform the combination of two-way data analysis methods and temporal pooling.

  9. A Metric Tool for Predicting Source Code Quality from a PDL Design

    OpenAIRE

    Henry, Sallie M.; Selig, Calvin

    1987-01-01

    The software crisis has increased the demand for automated tools to assist software developers in the production of quality software. Quality metrics have given software developers a tool to measure software quality. These measurements, however, are available only after the software has been produced. Due to high cost, software managers are reluctant to redesign and reimplement low quality software. Ideally, a life cycle which allows early measurement of software quality is a necessary ingre...

  10. Metrics for Assessment of Smart Grid Data Integrity Attacks

    Energy Technology Data Exchange (ETDEWEB)

    Annarita Giani; Miles McQueen; Russell Bent; Kameshwar Poolla; Mark Hinrichs

    2012-07-01

    There is an emerging consensus that the nation’s electricity grid is vulnerable to cyber attacks. This vulnerability arises from the increasing reliance on using remote measurements, transmitting them over legacy data networks to system operators who make critical decisions based on available data. Data integrity attacks are a class of cyber attacks that involve a compromise of information that is processed by the grid operator. This information can include meter readings of injected power at remote generators, power flows on transmission lines, and relay states. These data integrity attacks have consequences only when the system operator responds to compromised data by redispatching generation under normal or contingency protocols. These consequences include (a) financial losses from sub-optimal economic dispatch to service loads, (b) robustness/resiliency losses from placing the grid at operating points that are at greater risk from contingencies, and (c) systemic losses resulting from cascading failures induced by poor operational choices. This paper is focused on understanding the connections between grid operational procedures and cyber attacks. We first offer two examples to illustrate how data integrity attacks can cause economic and physical damage by misleading operators into taking inappropriate decisions. We then focus on unobservable data integrity attacks involving power meter data. These are coordinated attacks where the compromised data are consistent with the physics of power flow, and are therefore passed by any bad data detection algorithm. We develop metrics to assess the economic impact of these attacks under re-dispatch decisions using optimal power flow methods. These metrics can be used to prioritize the adoption of appropriate countermeasures including PMU placement, encryption, hardware upgrades, and advanced attack detection algorithms.
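
    The unobservability argument for such attacks can be demonstrated numerically: any attack vector lying in the column space of the measurement matrix leaves the state-estimation residual, and hence residual-based bad data detection, unchanged. A minimal DC state-estimation sketch with random data, not the paper's grid models:

      import numpy as np

      rng = np.random.default_rng(0)
      H = rng.normal(size=(8, 3))                              # measurement matrix
      z = H @ rng.normal(size=3) + 0.01 * rng.normal(size=8)   # noisy measurements

      def residual_norm(z, H):
          # Least-squares state estimate and the residual a detector would test.
          x_hat = np.linalg.lstsq(H, z, rcond=None)[0]
          return np.linalg.norm(z - H @ x_hat)

      c = np.array([0.5, -1.0, 0.2])   # attacker's desired shift of the state
      a = H @ c                        # unobservable data integrity attack
      print(residual_norm(z, H), residual_norm(z + a, H))  # identical residuals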

  11. Visual signal quality assessment quality of experience (QOE)

    CERN Document Server

    Ma, Lin; Lin, Weisi; Ngan, King

    2015-01-01

    This book provides comprehensive coverage of the latest trends/advances in subjective and objective quality evaluation for traditional visual signals, such as 2D images and video, as well as the most recent challenges for the field of multimedia quality assessment and processing, such as mobile video and social media. Readers will learn how to ensure the highest storage/delivery/transmission quality of visual content (including image, video, graphics, animation, etc.) from the server to the consumer, under resource constraints, such as computation, bandwidth, storage space, battery life, etc. Provides an overview of quality assessment for traditional visual signals; Covers newly emerged visual signals such as social media, 3D image/video, mobile video, high dynamic range (HDR) images, graphics/animation, etc., which demand better quality of experience (QoE); Helps readers to develop better quality metrics and processing methods for newly emerged visual signals; Enables testing, optimizing, benchmarking...

  12. Diet quality assessment indexes

    OpenAIRE

    Kênia Mara Baiocchi de Carvalho; Eliane Said Dutra; Nathalia Pizato; Nádia Dias Gruezo; Marina Kiyomi Ito

    2014-01-01

    Various indices and scores based on admittedly healthy dietary patterns or food guides for the general population, or aiming at the prevention of diet-related diseases have been developed to assess diet quality. The four indices preferred by most studies are: the Diet Quality Index; the Healthy Eating Index; the Mediterranean Diet Score; and the Overall Nutritional Quality Index. Other instruments based on these indices have been developed and the words 'adapted', 'revised', or 'new version I...

  13. A quality metric for homology modeling: the H-factor

    Directory of Open Access Journals (Sweden)

    di Luccio Eric

    2011-02-01

    Full Text Available Abstract Background The analysis of protein structures provides fundamental insight into most biochemical functions and consequently into the cause and possible treatment of diseases. As the structures of most known proteins cannot be solved experimentally, for technical or simply time constraints, in silico protein structure prediction is expected to step in and generate a more complete picture of the protein structure universe. Molecular modeling of protein structures is a fast growing field and a tremendous amount of work has been done since the publication of the very first model. The growth of modeling techniques, and more specifically of those that rely on the existing experimental knowledge of protein structures, is intimately linked to developments in high resolution experimental techniques such as NMR, X-ray crystallography and electron microscopy. This strong connection between experimental and in silico methods is, however, not devoid of criticisms and concerns among modelers as well as among experimentalists. Results In this paper, we focus on homology modeling and, more specifically, we review how it is perceived by the structural biology community and what can be done to impress on the experimentalists that it can be a valuable resource to them. We review the common practices and provide a set of guidelines for building better models. For that purpose, we introduce the H-factor, a new indicator for assessing the quality of homology models, mimicking the R-factor in X-ray crystallography. The method for computing the H-factor is fully described and validated on a series of test cases. Conclusions We have developed a web service for computing the H-factor for models of a protein structure. This service is freely accessible at http://koehllab.genomecenter.ucdavis.edu/toolkit/h-factor.

  14. Fovea based image quality assessment

    Science.gov (United States)

    Guo, Anan; Zhao, Debin; Liu, Shaohui; Cao, Guangyao

    2010-07-01

    Humans are the ultimate receivers of the visual information contained in an image, so a reasonable method of image quality assessment (IQA) should follow the properties of the human visual system (HVS). In recent years, IQA methods based on HVS models have been slowly replacing classical schemes, such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR). Structural similarity (SSIM), regarded as one of the most popular HVS-based methods of full reference IQA, clearly improves on traditional metrics, but it does not perform well when the image structure is seriously destroyed or masked by noise. In this paper, a new efficient fovea based structure similarity image quality assessment (FSSIM) is proposed. It adaptively enlarges the distortions in the concerned positions and changes the importance of the three components in SSIM. FSSIM predicts the quality of an image through three steps. First, it computes the luminance, contrast and structure comparison terms; second, it computes the saliency map by extracting fovea information from the reference image using features of the HVS; third, it pools the above three terms according to the processed saliency map. Finally, the widely used LIVE IQA database is employed to evaluate the performance of FSSIM. Experimental results indicate that the consistency and relevance between FSSIM and mean opinion score (MOS) are both clearly better than those of SSIM and PSNR.
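
    As a concrete illustration of the pooling step, the sketch below computes a per-pixel SSIM map and averages it with saliency weights rather than a plain mean. It is a simplified stand-in, assuming a box-filter SSIM map and a synthetic center-biased saliency map; the paper derives its saliency from fovea/HVS features and reweights the three SSIM components individually.

```python
# Sketch of saliency-weighted SSIM pooling. The center-bias saliency map and
# the box-filter SSIM approximation are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_map(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2, win=7):
    """Per-pixel SSIM map from box-filtered local moments (simplified)."""
    mx, my = uniform_filter(x, win), uniform_filter(y, win)
    vx = uniform_filter(x * x, win) - mx * mx
    vy = uniform_filter(y * y, win) - my * my
    cxy = uniform_filter(x * y, win) - mx * my
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def fovea_weighted_score(ref, dist, saliency):
    """Pool the SSIM map with saliency weights instead of a plain mean."""
    q = ssim_map(ref.astype(float), dist.astype(float))
    w = saliency / saliency.sum()
    return float((q * w).sum())

rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (64, 64))
dist = ref + rng.normal(0, 8, ref.shape)              # distorted copy
yx = np.indices(ref.shape)
saliency = np.exp(-((yx - 32) ** 2).sum(axis=0) / (2 * 12 ** 2))  # center bias
print(fovea_weighted_score(ref, dist, saliency))
```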

  15. Effective Implementation of Agile Practices - Object Oriented Metrics Tool to Improve Software Quality

    Directory of Open Access Journals (Sweden)

    K. Nageswara Rao

    2012-08-01

    Full Text Available Maintaining the quality of the software is the major challenge in the process of software development. Software inspections, which use methods like structured walkthroughs and formal code reviews, involve careful examination of each and every aspect/stage of software development. In Agile software development, refactoring helps to improve software quality. Refactoring is a technique to improve the internal structure of software without changing its behaviour. After much study regarding the ways to improve software quality, our research proposes an object oriented software metric tool called “MetricAnalyzer”. This tool is tested on different codebases and is proven to be very useful.

  16. Evaluating the efficiency of using the Autonomy Ratio Metric for assessing ArgoUML architecture

    OpenAIRE

    Niculescu, Mihnea; Dugerdil, Philippe

    2014-01-01

    Metrics in software engineering are used to evaluate quantitatively and qualitatively various attributes of (usually large) systems. These figures help synthesize information such as the size, quality or complexity of various elements of the analyzed software. In the past few years, Professor Philippe Dugerdil has developed, at the Geneva School of Business Administration, a new metric, called the Autonomy Ratio, along with an analysis method and related software tools. The AR metric helps measu...

  17. Power quality assessment

    International Nuclear Information System (INIS)

    Electrical power systems are exposed to different types of power quality disturbances. Assessment of power quality is necessary for maintaining the accurate operation of sensitive equipment, especially in nuclear installations; it also ensures that unnecessary energy losses in a power system are kept to a minimum, which leads to greater profits. With advances in technology and the growth of industrial and commercial facilities in many regions, power quality problems have become a major concern among engineers, particularly in industrial environments with much large-scale equipment. Thus, it would be useful to investigate and mitigate these power quality problems. Assessment of power quality requires the identification of any anomalous behavior on a power system which adversely affects the normal operation of electrical or electronic equipment. The choice of monitoring equipment in a survey is also important for arriving at a solution to these power quality problems. A power quality assessment involves gathering data resources; analyzing the data (with reference to power quality standards); and then, if problems exist, recommending mitigation techniques. The main objective of the present work is to investigate and mitigate power quality problems in nuclear installations. Normally, electrical power is supplied to the installations via two sources for good reliability. Each source is designed to carry the full load. The assessment of power quality was performed at the nuclear installations for both sources at different operating conditions. The thesis begins with a discussion of power quality definitions and the results of previous studies in power quality monitoring. The assessment found that one source of electricity had relatively good power quality, although several disturbances exceeded the thresholds. Among them are the fifth harmonic, voltage swell, overvoltage and flicker. While the second
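
    One standard measurement in such a survey is total harmonic distortion, which quantifies disturbances like the fifth harmonic mentioned above. The sketch below estimates THD from a sampled waveform with an FFT; the waveform, sampling rate, and harmonic amplitude are illustrative, not measurements from this work.

```python
# Sketch: estimating total harmonic distortion (THD) of a sampled voltage
# waveform via FFT. Signal parameters are illustrative assumptions.
import numpy as np

fs, f0 = 10_000, 50                       # sampling rate (Hz), fundamental (Hz)
t = np.arange(0, 0.2, 1 / fs)             # 0.2 s window -> 5 Hz bin resolution
v = 230 * np.sqrt(2) * np.sin(2 * np.pi * f0 * t)   # fundamental
v += 12 * np.sin(2 * np.pi * 5 * f0 * t)            # injected 5th harmonic

spectrum = np.abs(np.fft.rfft(v)) / len(v)
freqs = np.fft.rfftfreq(len(v), 1 / fs)

def magnitude_at(f):
    """Spectral magnitude at the bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

fundamental = magnitude_at(f0)
harmonics = np.array([magnitude_at(k * f0) for k in range(2, 41)])
thd = np.sqrt((harmonics ** 2).sum()) / fundamental
print(f"THD = {100 * thd:.2f}%")          # ~3.7% for this synthetic waveform
```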

  18. A Novel Spatial Pooling Strategy for Image Quality Assessment

    Institute of Scientific and Technical Information of China (English)

    Qiaohong Li; Yu-Ming Fang; Jing-Tao Xu

    2016-01-01

    A variety of existing image quality assessment (IQA) metrics share a similar two-stage framework: at the first stage, a quality map is constructed by comparison between local regions of the reference and distorted images; at the second stage, spatial pooling is applied to obtain the overall quality score. In this work, we propose a novel spatial pooling strategy for image quality assessment through statistical analysis of the quality map. Our in-depth analysis indicates that the overall image quality is sensitive to the quality distribution. Based on the analysis, the quality histogram and statistical descriptors extracted from the quality map are used as input to support vector regression to obtain the final objective quality score. Experimental results on three large public IQA databases demonstrate that the proposed spatial pooling strategy can greatly improve the quality prediction performance of the original IQA metrics in terms of correlation with human subjective ratings.
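
    A minimal sketch of this pipeline, under the assumption of synthetic quality maps and stand-in subjective scores: histogram and statistical descriptors are extracted from each quality map and regressed to a score with support vector regression (scikit-learn's SVR).

```python
# Sketch of the two-stage pooling with stand-in data: histogram + statistics
# of each quality map -> SVR -> objective score. Map sizes, feature choices
# and scores are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR

def pooling_features(quality_map, bins=10):
    """Histogram plus simple statistics of a per-pixel quality map in [0, 1]."""
    hist, _ = np.histogram(quality_map, bins=bins, range=(0.0, 1.0), density=True)
    stats = [quality_map.mean(), quality_map.std(),
             np.percentile(quality_map, 10), np.percentile(quality_map, 90)]
    return np.concatenate([hist, stats])

rng = np.random.default_rng(1)
maps = [np.clip(rng.normal(m, 0.1, (64, 64)), 0, 1)
        for m in rng.uniform(0.3, 0.9, 200)]          # synthetic quality maps
mos = np.array([q.mean() * 100 for q in maps])        # stand-in subjective scores

X = np.stack([pooling_features(q) for q in maps])
model = SVR(kernel="rbf", C=10.0).fit(X, mos)
print(model.predict(X[:3]), mos[:3])                  # sanity check on training data
```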

  19. Modeling quality video metrics of video streaming over optical network

    OpenAIRE

    Blanco Fernández, Sara

    2009-01-01

    Digital video data, stored in video databases and distributed through communication networks, is subject to various kinds of distortions during acquisition, compression, processing, transmission, and reproduction. Video quality is a characteristic of a video passed through a video transmission/processing system, a formal or informal measure of perceived video degradation (typically, compared to the original video). The impact of encoding and transmission impairments on the perceptual quality ...

  20. Image Quality Assessment

    Czech Academy of Sciences Publication Activity Database

    Kudělka, Miloš

    Vol. I. Praha : MATFYZPRESS, 2012 - (Šafránková, J.; Pavlů, J.), s. 94-99 ISBN 978-80-7378-224-5. [WDS'12. Praha (CZ), 29.05.2012-01.06.2012] Institutional support: RVO:67985556 Keywords : image quality * texture Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2013/RO/kudelka-image quality assessment.pdf

  1. Quality of Service Metrics in Wireless Sensor Networks: A Survey

    Science.gov (United States)

    Snigdh, Itu; Gupta, Nisha

    2016-03-01

    A wireless ad hoc network is characterized by autonomous nodes communicating with each other by forming a multi-hop radio network and maintaining connectivity in a decentralized manner. This paper presents a systematic approach to the interdependencies and the analogy of the various factors that affect and constrain the wireless sensor network. This article elaborates on the quality of service parameters (methods of deployment, coverage and connectivity) that affect the lifetime of the network, as addressed to date in the literature. It also discusses essential factors that determine the quality of service achieved but have not yet received due attention.

  2. Mapping model validation metrics to subject matter expert scores for model adequacy assessment

    International Nuclear Information System (INIS)

    This paper develops a novel approach to incorporate the contributions of both quantitative validation metrics and qualitative subject matter expert (SME) evaluation criteria in model validation assessment. The relationship between validation metrics (input) and SME scores (output) is formulated as a classification problem, and a probabilistic neural network (PNN) is constructed to execute this mapping. Establishing PNN classifiers for a wide variety of combinations of validation metrics allows for a quantitative comparison of validation metric performance in representing SME judgment. An advantage to this approach is that it semi-automates the model validation process and subsequently is capable of incorporating the contributions of large data sets of disparate response quantities of interest in model validation assessment. The effectiveness of this approach is demonstrated on a complex real-world problem involving the shock qualification testing of a floating shock platform. A data set of experimental and simulated pairs of time history comparisons along with associated SME scores and computed validation metrics is obtained and utilized to construct the PNN classifiers through K-fold cross validation. A wide range of validation metrics for time history comparisons is considered including feature-specific metrics (phase and magnitude error), a frequency metric (shock response spectra), a time-frequency metric (wavelet decomposition), and a global metric (index of agreement). The PNN classifiers constructed using a Parzen kernel for the class conditional probability density function, whose smoothing parameter is optimized using a genetic algorithm, perform well in representing SME judgment. - Highlights: • A general framework to semi-automate adequacy assessment of a model is developed. • Validation metrics and expert opinion mapping are framed as a classification problem. • A framework to quantitatively evaluate metric performance is introduced. • The
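
    The classifier at the heart of this mapping can be sketched compactly: a probabilistic neural network in this sense is a Parzen-window (Gaussian kernel) density estimate per SME score class over validation-metric vectors. The toy data, two-metric feature space, and fixed smoothing parameter below are illustrative; the paper tunes the smoothing parameter with a genetic algorithm.

```python
# Sketch of a Parzen-window PNN classifier over validation-metric vectors.
# Data, class labels and sigma are illustrative assumptions.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Classify each test point by the class with the largest Parzen density."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = ((X_train - x) ** 2).sum(axis=1)       # squared distances
        k = np.exp(-d2 / (2 * sigma ** 2))          # Gaussian kernel values
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

rng = np.random.default_rng(2)
# Two validation metrics (e.g. magnitude and phase error) -> SME score class
X = np.vstack([rng.normal(0.2, 0.1, (30, 2)), rng.normal(0.7, 0.1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)                   # 0 = adequate, 1 = inadequate
print(pnn_predict(X, y, np.array([[0.25, 0.2], [0.8, 0.7]])))
```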

  3. Patent Assessment Quality

    DEFF Research Database (Denmark)

    Burke, Paul F.; Reitzig, Markus

    2006-01-01

    The increasing number of patent applications worldwide and the extension of patenting to the areas of software and business methods have triggered a debate on "patent quality". While patent quality may have various dimensions, this paper argues that consistency in the decision making on the side of the patent office is one important dimension, particularly in new patenting areas (emerging technologies). In order to understand whether patent offices appear capable of providing consistent assessments of a patent's technological quality in such novel industries from the beginning, we study the concordance of the European Patent Office's (EPO's) granting and opposition decisions for individual patents. We use the historical example of biotech patents filed between 1978 and 1986, the early stage of the industry. Our results indicate that the EPO shows systematically different assessments of...

  4. Quality assessment of healthcare systems

    OpenAIRE

    Koubeková, Eva

    2007-01-01

    Quality assessment of healthcare systems is considered the basic tool for developing strategic concepts in healthcare quality improvement and has a great impact on quality of life. The main focus of the thesis is on the possibilities of quality assessment at the level of international quality models and their transformation into national structures. It includes theoretical aspects of quality and the economic evaluation of quality in healthcare. The objective is to assess the participation of Czech hospitals in he...

  5. The impact of climate-induced distributional changes on the validity of biological water quality metrics.

    Science.gov (United States)

    Hassall, Christopher; Thompson, David J; Harvey, Ian F

    2010-01-01

    We present data on the distributional changes within an order of macroinvertebrates used in biological water quality monitoring. The British Odonata (dragonflies and damselflies) have been shown to be expanding their range northwards and this could potentially affect the use of water quality metrics. The results show that the families of Odonata that are used in monitoring are shifting their ranges poleward and that species richness is increasing through time at most UK latitudes. These past distributional shifts have had negligible effects on water quality indicators. However, variation in Odonata species richness (particularly in species-poor regions) has a significant effect on water quality metrics. We conclude with a brief review of current and predicted responses of aquatic macroinvertebrates to environmental warming and maintain that caution is warranted in the use of such dynamic biological indicators. PMID:19101810

  6. QESTRAL (Part 4): Test signals, combining metrics and the prediction of overall spatial quality

    OpenAIRE

    Dewhirst, M; Conetta, R; Rumsey, F; Jackson, PJB; Zielinski, S.; George, S.; Bech, S; Meares, D

    2008-01-01

    The QESTRAL project has developed an artificial listener that compares the perceived quality of a spatial audio reproduction to a reference reproduction. Test signals designed to identify distortions in both the foreground and background audio streams are created for both the reference and the impaired reproduction systems. Metrics are calculated from these test signals and are then combined using a regression model to give a measure of the overall perceived spatial quality of the impaired re...

  7. Translation Quality Assessment

    OpenAIRE

    Malcolm Williams

    2009-01-01

    The relevance of, and justification for, translation quality assessment (TQA) is stronger than ever: professional translators, their clients, translatological researchers and trainee translators all rely on TQA for different reasons. Yet whereas there is general agreement about the need for a translation to be "good," "satisfactory" or "acceptable," the definition of acceptability and of the means of determining it are matters of ongoing debate. National and international translation standard...

  8. ENHANCED ENSEMBLE PREDICTION ALGORITHMS FOR DETECTING FAULTY MODULES IN OBJECT ORIENTED SYSTEMS USING QUALITY METRICS

    Directory of Open Access Journals (Sweden)

    M. Punithavalli

    2012-01-01

    Full Text Available The heavy usage of software systems places high quality demands on developers, which results in increased software complexity. To address these complexities, software quality engineering (SQE) methods should be updated accordingly and their quality assurance methods enhanced. Fault prediction, a sub-task of SQE, is designed to solve this issue and provide a strategy to identify faulty parts of a program, so that the testing process can concentrate only on those regions. This will improve the testing process and indirectly help to reduce the development life cycle, project risks, and resource and infrastructure costs. Measuring quality using software metrics for fault identification is gaining wide interest in the software industry as it helps to reduce time and cost. Existing systems use either traditional simple metrics or object oriented metrics for fault detection, combined with a single-classifier prediction system. This study combines the use of simple and object oriented metrics and uses a multiple classifier prediction system to identify module faults. In this study, a total of 20 metrics combining both traditional and OO metrics are used for fault detection. To analyze the performance of these metrics on fault module detection, the study proposes the use of ensembles of three frequently used classifiers: Back Propagation Neural Network (BPNN), Support Vector Machine (SVM) and K-Nearest Neighbour (KNN). A novel classifier aggregation method is proposed to combine the classification results. Four methods, Sequential Selection, Random Selection with No Replacement, Selection with Bagging and Selection with Boosting, are used to generate different variants of the input dataset. The three classifiers were grouped together as 2-classifier and 3-classifier prediction ensemble models. A total of 16 ensemble models were proposed for fault prediction. The performance of the proposed prediction models was analyzed using accuracy, precision, recall and F
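
    A rough sketch of the ensemble idea, using scikit-learn stand-ins for the three classifiers and plain majority voting over synthetic metric vectors; the paper's own aggregation method and dataset-selection variants (bagging, boosting, etc.) are not reproduced here.

```python
# Sketch: BPNN-, SVM- and KNN-style classifiers over 20 software metrics,
# combined by majority vote. Data and hyperparameters are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 20))                 # 20 traditional + OO metrics
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=300) > 0).astype(int)

ensemble = VotingClassifier([
    ("bpnn", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
    ("svm", SVC(kernel="rbf", random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
], voting="hard")

print(cross_val_score(ensemble, X, y, cv=5, scoring="f1").mean())
```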

  9. Assessing Exposure Metrics for PM and Birthweight Models

    OpenAIRE

    Gray, Simone C.; Edwards, Sharon E.; Miranda, Marie Lynn

    2009-01-01

    The link between air pollution exposure and adverse birth outcomes is of public health concern due to the relationship between poor pregnancy outcomes and the onset of childhood and adult diseases. As personal exposure measurements are difficult and expensive to obtain, proximate measures of air pollution exposure are traditionally used. We explored how different air pollution exposure metrics affect birthweight regression models. We examined the effect of maternal exposure to ambient levels ...

  10. An overview of metrics-based approaches to support software components reusability assessment

    CERN Document Server

    Goulão, Miguel

    2011-01-01

    Objective: To present an overview of the current state of the art concerning metrics-based quality evaluation of software components and component assemblies. Method: Comparison of several approaches available in the literature, using a framework comprising several aspects, such as scope, intent, definition technique, and maturity. Results: The identification of common shortcomings of current approaches, such as ambiguity in definition, lack of adequacy of the specifying formalisms and insufficient validation of current quality models and metrics for software components. Conclusions: Quality evaluation of components and component-based infrastructures presents new challenges to the Experimental Software Engineering community.

  11. Program analysis methodology Office of Transportation Technologies: Quality Metrics final report

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2002-03-01

    "Quality Metrics" is the analytical process for measuring and estimating future energy, environmental and economic benefits of US DOE Office of Energy Efficiency and Renewable Energy (EE/RE) programs. This report focuses on the projected benefits of the programs currently supported by the Office of Transportation Technologies (OTT) within EE/RE. For analytical purposes, these various benefits are subdivided in terms of Planning Units which are related to the OTT program structure.

  12. Sigma metrics in clinical chemistry laboratory – A guide to quality control

    OpenAIRE

    Usha S. Adiga; A. Preethika; K.Swathi

    2015-01-01

    Background: Six sigma is a quality measurement and improvement program used in industry. Sigma methodology can be applied wherever an outcome of a process is to be measured. A poor outcome is counted as an error or defect, quantified as defects per million (DPM). Six sigma provides a more quantitative framework for evaluating process performance, with evidence for process improvement, and describes how many sigmas fit within the tolerance limits. Sigma metrics can be used e...
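
    For reference, the sigma metric commonly used in this context is computed as sigma = (TEa − |bias|) / CV, with all terms in percent; the sketch below applies it to illustrative figures.

```python
# The conventional laboratory sigma-metric calculation; numbers illustrative.
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """sigma = (allowable total error - |bias|) / CV, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# e.g. an analyte with 10% allowable total error, 2% bias and 1.5% CV
print(sigma_metric(10.0, 2.0, 1.5))   # ~5.3 sigma: comfortably within tolerance
```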

  13. Total quality management: an analysis and evaluation of the effectiveness of performance metrics for ACAT III programs of record

    OpenAIRE

    Higginbotham, Jayne Marie

    2014-01-01

    This project studies the metrics of a sample United States Army Aviation Acquisition Category (ACAT) III program. This program reports weekly metrics across the functional areas of logistics, business, and technology (software development and risk management), which are reviewed in functional-management staff calls. This project investigates whether these metrics align with total quality management (TQM) best-practice standards. The framework for the study is the National Institute of Standar...

  14. Video quality assessment for web content mirroring

    Science.gov (United States)

    He, Ye; Fei, Kevin; Fernandez, Gustavo A.; Delp, Edward J.

    2014-03-01

    Due to increasing user expectations for the watching experience, moving high quality web video streaming content from the small screens of mobile devices to the larger TV screen has become popular. It is crucial to develop video quality metrics to measure the quality change under various devices or network conditions. In this paper, we propose an automated scoring system to quantify user satisfaction. We compare the quality of local videos with the videos transmitted to a TV. Four video quality metrics, namely Image Quality, Rendering Quality, Freeze Time Ratio and Rate of Freeze Events, are used to measure video quality change during web content mirroring. To measure image quality and rendering quality, we compare the matched frames between the source video and the destination video using barcode tools. Freeze time ratio and rate of freeze events are measured after extracting video timestamps. Several user studies are conducted to evaluate the impact of each objective video quality metric on the subjective user watching experience.
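
    The two temporal metrics can be sketched directly from extracted frame timestamps, treating any inter-frame gap well beyond the nominal frame period as a freeze; the gap threshold used here is an assumption, not the paper's exact rule.

```python
# Sketch of Freeze Time Ratio and Rate of Freeze Events from frame timestamps.
# The gap_factor threshold is an illustrative assumption.
import numpy as np

def freeze_metrics(timestamps, fps=30.0, gap_factor=3.0):
    """Return (freeze time ratio, freeze events per second)."""
    ts = np.asarray(timestamps, dtype=float)
    gaps = np.diff(ts)
    threshold = gap_factor / fps                # e.g. three missed frame periods
    frozen = gaps[gaps > threshold]
    duration = ts[-1] - ts[0]
    return frozen.sum() / duration, len(frozen) / duration

# 10 s of 30 fps video with one 0.5 s freeze inserted halfway through
ts = np.cumsum([1 / 30.0] * 300)
ts[150:] += 0.5
print(freeze_metrics(ts))   # ratio ~0.05, ~0.1 freeze events per second
```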

  15. Information System Quality Assessment Methods

    OpenAIRE

    Korn, Alexandra

    2014-01-01

    This thesis explores the challenging topic of information system quality assessment, mainly process assessment. In this work the term Information System Quality is defined, and different approaches to defining quality for different domains of information systems are outlined. The main methods of process assessment are reviewed and their relationships described. Process assessment methods are divided into two categories: ISO standards and best practices. The main objective of this w...

  16. MUSTANG: A Community-Facing Web Service to Improve Seismic Data Quality Awareness Through Metrics

    Science.gov (United States)

    Templeton, M. E.; Ahern, T. K.; Casey, R. E.; Sharer, G.; Weertman, B.; Ashmore, S.

    2014-12-01

    IRIS DMC is engaged in a new effort to provide broad and deep visibility into the quality of data and metadata found in its terabyte-scale geophysical data archive. Taking advantage of large and fast disk capacity, modern advances in open database technologies, and nimble provisioning of virtual machine resources, we are creating an openly accessible treasure trove of data measurements for scientists and the general public to utilize in providing new insights into the quality of this data. We have branded this statistical gathering system MUSTANG, and have constructed it as a component of the web services suite that IRIS DMC offers. MUSTANG measures over forty data metrics addressing issues with archive status, data statistics and continuity, signal anomalies, noise analysis, metadata checks, and station state of health. These metrics could potentially be used both by network operators to diagnose station problems and by data users to sort suitable data from unreliable or unusable data. Our poster details what MUSTANG is, how users can access it, what measurements they can find, and how MUSTANG fits into the IRIS DMC's data access ecosystem. Progress in data processing, approaches to data visualization, and case studies of MUSTANG's use for quality assurance will be presented. We want to illustrate what is possible with data quality assurance, the need for data quality assurance, and how the seismic community will benefit from this freely available analytics service.

  17. A Multi-Component Model for Assessing Learning Objects: The Learning Object Evaluation Metric (LOEM)

    Science.gov (United States)

    Kay, Robin H.; Knaack, Liesel

    2008-01-01

    While discussion of the criteria needed to assess learning objects has been extensive, a formal, systematic model for evaluation has yet to be thoroughly tested. The purpose of the following study was to develop and assess a multi-component model for evaluating learning objects. The Learning Object Evaluation Metric (LOEM) was developed from a…

  18. Integrated Metrics for Improving the Life Cycle Approach to Assessing Product System Sustainability

    Directory of Open Access Journals (Sweden)

    Wesley Ingwersen

    2014-03-01

    Full Text Available Life cycle approaches are critical for identifying and reducing environmental burdens of products. While these methods can indicate potential environmental impacts of a product, current Life Cycle Assessment (LCA) methods fail to integrate the multiple impacts of a system into unified measures of social, economic or environmental performance related to sustainability. Integrated metrics that combine multiple aspects of system performance based on a common scientific or economic principle have proven to be valuable for sustainability evaluation. In this work, we propose methods of adapting four integrated metrics for use with LCAs of product systems: ecological footprint, emergy, green net value added, and Fisher information. These metrics provide information on the full product system in land, energy, monetary equivalents, and as a unitless information index; each bundled with one or more indicators for reporting. When used together and for relative comparison, integrated metrics provide a broader coverage of sustainability aspects from multiple theoretical perspectives that is more likely to illuminate potential issues than individual impact indicators. These integrated metrics are recommended for use in combination with traditional indicators used in LCA. Future work will test and demonstrate the value of using these integrated metrics and combinations to assess product system sustainability.

  19. Quality Assessment of Libraries

    OpenAIRE

    N.K. Dash; P. Padhi

    2010-01-01

    The concept of quality is not a new phenomenon for library and information science professionals, as it is entrenched in library philosophy and practice. Service quality is viewed as a comparison of what the customer expected prior to the use of services and the perceived level of services received. Quality of service and user satisfaction are two significant facets of effective service management. Although the concept of quality is not new, measuring service quality as a management technique h...

  20. Portfolio Assessment and Quality Teaching

    Science.gov (United States)

    Kim, Youb; Yazdian, Lisa Sensale

    2014-01-01

    Our article focuses on using portfolio assessment to craft quality teaching. Extant research literature on portfolio assessment suggests that the primary purpose of assessment is to serve learning, and portfolio assessments facilitate the process of making linkages among assessment, curriculum, and student learning (Asp, 2000; Bergeron, Wermuth,…

  1. Optimal Rate Control in H.264 Video Coding Based on Video Quality Metric

    Directory of Open Access Journals (Sweden)

    R. Karthikeyan

    2014-05-01

    Full Text Available The aim of this research is to find a method for providing better visual quality across the complete video sequence in the H.264 video coding standard. The H.264 standard, with its significantly improved coding efficiency, finds important applications in digital video streaming, storage and broadcast. To achieve comparable quality across the complete video sequence under constraints on bandwidth availability and buffer fullness, it is important to allocate more bits to frames with high complexity or a scene change and fewer bits to other, less complex frames. A frame layer bit allocation scheme is proposed based on the perceptual quality metric as an indicator of frame complexity. The proposed model computes the Quality Index ratio (QIr) of the predicted quality index of the current frame to the average quality index of all the previous frames in the group of pictures, which is used for bit allocation to the current frame along with bits computed based on buffer availability. The standard deviation of the perceptual quality indicator MOS computed for the proposed model is significantly lower, which means the quality is nearly uniform throughout the full video sequence. Thus the experimental results show that the proposed model effectively handles scene changes and scenes with high motion for better visual quality.
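
    A schematic of the frame-level allocation, with illustrative numbers: the Quality Index ratio (QIr) is computed as described, and the frame budget blends a complexity-driven term with the buffer-based allocation. The specific mapping from QIr to bits (here, inversely proportional, so a frame predicted to score below the running average gets more bits) and the blending weight are assumptions for illustration, not the paper's exact formulas.

```python
# Schematic frame-level allocation (assumptions: inverse QIr mapping, 50/50
# blend with the buffer-based budget; the paper defines its own mapping).
def allocate_bits(pred_qi, prev_qis, buffer_bits, gop_avg_bits, w=0.5):
    """Blend a complexity-driven budget with a buffer-driven budget."""
    qir = pred_qi / (sum(prev_qis) / len(prev_qis))   # Quality Index ratio
    complexity_bits = gop_avg_bits / qir   # QIr < 1 (harder frame) -> more bits
    return w * complexity_bits + (1 - w) * buffer_bits

# A scene-change frame predicted at half the running-average quality index
print(allocate_bits(pred_qi=0.4, prev_qis=[0.8, 0.82, 0.78],
                    buffer_bits=40_000, gop_avg_bits=45_000))
```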

  2. Image Signature Based Mean Square Error for Image Quality Assessment

    Institute of Scientific and Technical Information of China (English)

    CUI Ziguan; GAN Zongliang; TANG Guijin; LIU Feng; ZHU Xiuchang

    2015-01-01

    Motivated by the importance of the Human visual system (HVS) in image processing, we propose a novel Image signature based mean square error (ISMSE) metric for full reference image quality assessment (IQA). An efficient image signature based descriptor is used to predict the visual saliency map of the reference image. The saliency map is incorporated into the luminance difference between the reference and distorted images to obtain the image quality score. The effect of luminance differences in regions with larger saliency values, which usually correspond to foreground objects, is highlighted. Experimental results on LIVE database release 2 show that by integrating the effects of image signature based saliency on luminance difference, the proposed ISMSE metric outperforms several state-of-the-art HVS-based IQA metrics while having lower complexity.
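
    The core of the metric reduces to a saliency-weighted squared error. The sketch below uses a synthetic center-biased map as a placeholder for the image-signature saliency the paper predicts from the reference image.

```python
# Sketch: weight squared luminance differences by a saliency map before
# averaging, so errors on salient regions count more. The center-bias
# saliency map is an illustrative stand-in.
import numpy as np

def weighted_mse(ref, dist, saliency):
    w = saliency / saliency.sum()                    # normalized weights
    return float((w * (ref.astype(float) - dist.astype(float)) ** 2).sum())

rng = np.random.default_rng(4)
ref = rng.uniform(0, 255, (64, 64))
dist = ref + rng.normal(0, 5, ref.shape)             # uniform distortion
yx = np.indices(ref.shape)
saliency = np.exp(-((yx - 32) ** 2).sum(axis=0) / (2 * 12 ** 2))
print(weighted_mse(ref, dist, saliency))             # saliency-weighted error
print(((ref - dist) ** 2).mean())                    # plain MSE, for comparison
```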

  3. Image quality assessment for an Intel digital imaging chip set prototype

    Science.gov (United States)

    Booth, Lawrence A., Jr.; Austin, Phillip G.; Firsty, Caren; Metz, Werner A.

    1998-12-01

    Image quality assessments for Intel's digital imaging chip set prototype are made using objective and subjective image quality criteria. Objective criteria such as signal to noise ratio, linearity, color error, dynamic range, and resolution provide quantitative metrics for engineering development. Subjective criteria, such as mean observer scores derived from single stimulus and paired comparison adjectival ratings, provide overall product image quality assessments that are used to determine product acceptability for marketing analysis. These metrics, along with the subjective assessments, serve as development tools that allow the product development team to focus on the critical areas that improve the image quality of the product.

  4. Retinal image quality assessment using generic features

    Science.gov (United States)

    Fasih, Mahnaz; Langlois, J. M. Pierre; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Retinal image quality assessment is an important step in automated eye disease diagnosis. Diagnosis accuracy is highly dependent on the quality of retinal images, because poor image quality might prevent the observation of significant eye features and disease manifestations. A robust algorithm is therefore required in order to evaluate the quality of images in a large database. We developed an algorithm for retinal image quality assessment based on generic features that is independent from segmentation methods. It exploits the local sharpness and texture features by applying the cumulative probability of blur detection metric and run-length encoding algorithm, respectively. The quality features are combined to evaluate the image's suitability for diagnosis purposes. Based on the recommendations of medical experts and our experience, we compared a global and a local approach. A support vector machine with radial basis functions was used as a nonlinear classifier in order to classify images to gradable and ungradable groups. We applied our methodology to 65 images of size 2592×1944 pixels that had been graded by a medical expert. The expert evaluated 38 images as gradable and 27 as ungradable. The results indicate very good agreement between the proposed algorithm's predictions and the medical expert's judgment: the sensitivity and specificity for the local approach are respectively 92% and 94%. The algorithm demonstrates sufficient robustness to identify relevant images for automated diagnosis.

  5. Revision and extension of Eco-LCA metrics for sustainability assessment of the energy and chemical processes.

    Science.gov (United States)

    Yang, Shiying; Yang, Siyu; Kraslawski, Andrzej; Qian, Yu

    2013-12-17

    Ecologically based life cycle assessment (Eco-LCA) is an appealing approach for evaluating the resource utilization and environmental impacts of the process industries on an ecological scale. However, the aggregated metrics of Eco-LCA suffer from some drawbacks: the environmental impact metric has limited applicability; the resource utilization metric ignores indirect consumption; the renewability metric fails to address the quantitative distinction of resource availability; the productivity metric seems self-contradictory. In this paper, the existing Eco-LCA metrics are revised and extended for sustainability assessment of energy and chemical processes. A new Eco-LCA metrics system is proposed, including four independent dimensions: environmental impact, resource utilization, resource availability, and economic effectiveness. An illustrative example comparing a gas boiler and a solar boiler process provides insight into the features of the proposed approach. PMID:24228888

  6. A Stochastic Quality Metric for Optimal Control of Active Camera Network Configurations for 3D Computer Vision Tasks

    OpenAIRE

    Ilie, Adrian; Welch, Greg; Macenko, Marc

    2008-01-01

    We present a stochastic state-space quality metric for use in controlling active camera networks aimed at 3D vision tasks such as surveillance, motion tracking, and 3D shape/appearance reconstruction. Specifically, the metric provides an estimate of the aggregate steady-state uncertainty of the 3D resolution of the objects of interest, as a function of camera parameters such as pan, tilt, and zoom. The use of stochastic state-space models for the quality metric resul...

  7. Formal analysis of security metrics and risk

    OpenAIRE

    Krautsevich L.; Martinelli F.; Yautsiukhin A.

    2011-01-01

    Security metrics are usually defined informally and, therefore, the rigorous analysis of these metrics is a hard task. This analysis is required to identify the existing relations between the security metrics, which try to quantify the same quality: security. Risk, computed as Annualised Loss Expectancy, is often used to give an overall assessment of security as a whole. Risk and security metrics are usually defined separately, and the relation between these indicators has not been...
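
    For concreteness, Annualised Loss Expectancy is conventionally the product of the single loss expectancy and the annual rate of occurrence; the figures below are illustrative.

```python
# Annualised Loss Expectancy as conventionally defined; numbers illustrative.
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """ALE = SLE * ARO."""
    return single_loss_expectancy * annual_rate_of_occurrence

# A breach costing $200,000, expected once every four years
print(ale(200_000, 0.25))   # $50,000 per year
```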

  8. A NO-REFERENCE QUALITY METRIC FOR EVALUATING BLUR IMAGE IN WAVELET DOMAIN

    Directory of Open Access Journals (Sweden)

    F. Kerouh

    2011-01-01

    Full Text Available In this paper, a no reference blur image quality metric based on the wavelet transform is presented. As blur especially affects edges and fine image details, most blur estimation algorithms are based primarily on adequate edge detection methods. Here we propose a new approach by analyzing edges through a multi-resolution decomposition. The ability of wavelets to extract the high frequency component of an image has made them useful for edge analysis through different resolutions. Moreover, the multi-resolution analysis is performed on reduced image sizes, which could improve execution time. In addition, edge persistence across resolutions may improve the accuracy of the blur quality estimate. To prove the validity of the proposed method, blurred images from the LIVE database have been considered. Results show that the proposed method provides an accurate quality measure.
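
    The underlying intuition, that blur suppresses high-frequency wavelet detail energy, can be sketched with a multi-level decomposition; this stand-in uses the PyWavelets package and a simple detail-energy ratio rather than the paper's edge-persistence analysis.

```python
# Sketch: blur lowers the share of wavelet detail energy, a crude
# no-reference sharpness proxy. Wavelet, level and images are assumptions.
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def detail_energy_ratio(img, wavelet="haar", level=3):
    """Fraction of decomposition energy in the detail (high-frequency) bands."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    detail = sum((np.abs(c) ** 2).sum() for band in coeffs[1:] for c in band)
    approx = (np.abs(coeffs[0]) ** 2).sum()
    return detail / (detail + approx)

rng = np.random.default_rng(5)
sharp = rng.uniform(0, 255, (128, 128))
blurred = gaussian_filter(sharp, sigma=2.0)
print(detail_energy_ratio(sharp), detail_energy_ratio(blurred))  # sharp > blurred
```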

  9. Survey of Quantitative Research Metrics to Assess Pilot Performance in Upset Recovery

    Science.gov (United States)

    Le Vie, Lisa R.

    2016-01-01

    Accidents attributable to in-flight loss of control are the primary cause of fatal commercial jet accidents worldwide. The National Aeronautics and Space Administration (NASA) conducted a literature review to determine and identify the quantitative standards for assessing upset recovery performance. This review contains current recovery procedures for both military and commercial aviation and includes the metrics researchers use to assess aircraft recovery performance. Metrics include time to first input, recognition time, and recovery time, and whether the input was correct or incorrect. Other metrics include the state of the autopilot and autothrottle, control wheel/sidestick movement resulting in pitch and roll, and inputs to the throttle and rudder. In addition, airplane state measures, such as roll reversals, altitude loss/gain, maximum vertical speed, maximum/minimum air speed, maximum bank angle and maximum g loading, are reviewed as well.

  10. Advancing efforts to achieve health equity: equity metrics for health impact assessment practice.

    Science.gov (United States)

    Heller, Jonathan; Givens, Marjory L; Yuen, Tina K; Gould, Solange; Jandu, Maria Benkhalti; Bourcier, Emily; Choi, Tim

    2014-11-01

    Equity is a core value of Health Impact Assessment (HIA). Many compelling moral, economic, and health arguments exist for prioritizing and incorporating equity considerations in HIA practice. Decision-makers, stakeholders, and HIA practitioners see the value of HIAs in uncovering the impacts of policy and planning decisions on various population subgroups, developing and prioritizing specific actions that promote or protect health equity, and using the process to empower marginalized communities. There have been several HIA frameworks developed to guide the inclusion of equity considerations. However, the field lacks clear indicators for measuring whether an HIA advanced equity. This article describes the development of a set of equity metrics that aim to guide and evaluate progress toward equity in HIA practice. These metrics also intend to further push the field to deepen its practice and commitment to equity in each phase of an HIA. Over the course of a year, the Society of Practitioners of Health Impact Assessment (SOPHIA) Equity Working Group took part in a consensus process to develop these process and outcome metrics. The metrics were piloted, reviewed, and refined based on feedback from reviewers. The Equity Metrics are comprised of 23 measures of equity organized into four outcomes: (1) the HIA process and products focused on equity; (2) the HIA process built the capacity and ability of communities facing health inequities to engage in future HIAs and in decision-making more generally; (3) the HIA resulted in a shift in power benefiting communities facing inequities; and (4) the HIA contributed to changes that reduced health inequities and inequities in the social and environmental determinants of health. The metrics are comprised of a measurement scale, examples of high scoring activities, potential data sources, and example interview questions to gather data and guide evaluators on scoring each metric. PMID:25347193

  11. Advancing Efforts to Achieve Health Equity: Equity Metrics for Health Impact Assessment Practice

    Directory of Open Access Journals (Sweden)

    Jonathan Heller

    2014-10-01

    Full Text Available Equity is a core value of Health Impact Assessment (HIA). Many compelling moral, economic, and health arguments exist for prioritizing and incorporating equity considerations in HIA practice. Decision-makers, stakeholders, and HIA practitioners see the value of HIAs in uncovering the impacts of policy and planning decisions on various population subgroups, developing and prioritizing specific actions that promote or protect health equity, and using the process to empower marginalized communities. There have been several HIA frameworks developed to guide the inclusion of equity considerations. However, the field lacks clear indicators for measuring whether an HIA advanced equity. This article describes the development of a set of equity metrics that aim to guide and evaluate progress toward equity in HIA practice. These metrics also intend to further push the field to deepen its practice and commitment to equity in each phase of an HIA. Over the course of a year, the Society of Practitioners of Health Impact Assessment (SOPHIA) Equity Working Group took part in a consensus process to develop these process and outcome metrics. The metrics were piloted, reviewed, and refined based on feedback from reviewers. The Equity Metrics are comprised of 23 measures of equity organized into four outcomes: (1) the HIA process and products focused on equity; (2) the HIA process built the capacity and ability of communities facing health inequities to engage in future HIAs and in decision-making more generally; (3) the HIA resulted in a shift in power benefiting communities facing inequities; and (4) the HIA contributed to changes that reduced health inequities and inequities in the social and environmental determinants of health. The metrics are comprised of a measurement scale, examples of high scoring activities, potential data sources, and example interview questions to gather data and guide evaluators on scoring each metric.

  12. A Methodology for Software Design Quality Assessment of Design Enhancements

    Directory of Open Access Journals (Sweden)

    Sahar Reda

    2012-12-01

    Full Text Available The most important measure that must be considered in any software product is its design quality. Measuring design quality in the early stages of software development is the key to developing and enhancing quality software. Research on object oriented design metrics has produced a large number of metrics that can be measured to identify design problems and assess design quality attributes. However, the use of these design metrics is limited in practice due to the difficulty of measuring and using a large number of metrics. This paper presents a methodology for software design quality assessment. This methodology helps the designer to measure and assess the changes in design due to design enhancements. The goal of this paper is to illustrate the methodology using practical software design examples and analyze its utility in industrial projects. Finally, we present a case study to illustrate the methodology.

  13. Holistic Metrics for Assessment of the Greenness of Chemical Reactions in the Context of Chemical Education

    Science.gov (United States)

    Ribeiro, M. Gabriela T. C.; Machado, Adelio A. S. C.

    2013-01-01

    Two new semiquantitative green chemistry metrics, the green circle and the green matrix, have been developed for quick assessment of the greenness of a chemical reaction or process, even without performing the experiment from a protocol if enough detail is provided in it. The evaluation is based on the 12 principles of green chemistry. The…

  14. Using research metrics to evaluate the International Atomic Energy Agency guidelines on quality assurance for R&D

    Energy Technology Data Exchange (ETDEWEB)

    Bodnarczuk, M.

    1994-06-01

    The objective of the International Atomic Energy Agency (IAEA) Guidelines on Quality Assurance for R&D is to provide guidance for developing quality assurance (QA) programs for R&D work on items, services, and processes important to safety, and to support the siting, design, construction, commissioning, operation, and decommissioning of nuclear facilities. The standard approach to writing papers describing new quality guidelines documents is to present a descriptive overview of the contents of the document. I will depart from this approach. Instead, I will first discuss a conceptual framework of metrics for evaluating and improving basic and applied experimental science as well as the associated role that quality management should play in understanding and implementing these metrics. I will conclude by evaluating how well the IAEA document addresses the metrics from this conceptual framework and the broader principles of quality management.

  15. Testing Quality and Metrics for the LHC Magnet Powering System throughout Past and Future Commissioning

    CERN Document Server

    Anderson, D; Charifoulline, Z; Dragu, M; Fuchsberger, K; Garnier, JC; Gorzawski, AA; Koza, M; Krol, K; Rowan, S; Stamos, K; Zerlauth, M

    2014-01-01

    The LHC magnet powering system is composed of thousands of individual components that assure safe operation with stored energies as high as 10 GJ in the superconducting LHC magnets. Each of these components has to be thoroughly commissioned following interventions and machine shutdown periods to assure its protection function in case of powering failures. As well as dependable tracking of test executions, it is vital that the executed commissioning steps and applied analysis criteria adequately represent the operational state of each component. The Accelerator Testing (AccTesting) framework in combination with a domain specific analysis language provides the means to quantify and improve the quality of analysis for future campaigns. Dedicated tools were developed to analyse in detail the reasons for failures and success of commissioning steps in past campaigns and to compare the results with newly developed quality metrics. Observed shortcomings and discrepancies are used to propose addi...

  16. The role of metrics and measurements in a software intensive total quality management environment

    Science.gov (United States)

    Daniels, Charles B.

    1992-01-01

    Paramax Space Systems began its mission as a member of the Rockwell Space Operations Company (RSOC) team which was the successful bidder on a massive operations consolidation contract for the Mission Operations Directorate (MOD) at JSC. The contract awarded to the team was the Space Transportation System Operations Contract (STSOC). Our initial challenge was to accept responsibility for a very large, highly complex and fragmented collection of software from eleven different contractors and transform it into a coherent, operational baseline. Concurrently, we had to integrate a diverse group of people from eleven different companies into a single, cohesive team. Paramax executives recognized the absolute necessity to develop a business culture based on the concept of employee involvement to execute and improve the complex process of our new environment. Our executives clearly understood that management needed to set the example and lead the way to quality improvement. The total quality management policy and the metrics used in this endeavor are presented.

  17. Definition of Metric Dependencies for Monitoring the Impact of Quality of Services on Quality of Processes

    OpenAIRE

    Mayerl, Christian; Hüner, Kai Moritz; Gaspar, Jens-Uwe; Momm, Christof; Abeck, Sebastian

    2007-01-01

    Service providers have to monitor the quality of offered services and ensure compliance with the service levels that provider and requester agreed on. In doing so, a service provider should notify a service requester about violations of service level agreements (SLAs). Furthermore, the provider should point to impacts on affected processes in which the services are invoked. For that purpose, a model is needed to define dependencies between the quality of processes and the quality of invoked services. In order to...

  18. Assessing Metrics for Estimating Fire Induced Change in the Forest Understorey Structure Using Terrestrial Laser Scanning

    Directory of Open Access Journals (Sweden)

    Vaibhav Gupta

    2015-06-01

    Full Text Available Quantifying post-fire effects in a forested landscape is important to ascertain burn severity, ecosystem recovery and post-fire hazard assessments and mitigation planning. Reporting of such post-fire effects assumes significance in fire-prone countries such as the USA, Australia, Spain, Greece and Portugal, where prescribed burns are routinely carried out. This paper describes the use of Terrestrial Laser Scanning (TLS) to estimate and map change in the forest understorey following a prescribed burn. Eighteen descriptive metrics derived from bi-temporal TLS are used to analyse and visualise change in a control and a fire-altered plot. The metrics derived are Above Ground Height-based (AGH) percentiles and heights, point count and mean intensity. Metrics such as AGH50change, mean AGHchange and point countchange are sensitive enough to detect subtle fire-induced change (28%–52%) whilst observing little or no change in the control plot (0–4%). A qualitative examination with field measurements of the spatial distribution of burnt areas and percentage area burnt also shows similar patterns. This study is novel in that it examines the behaviour of TLS metrics for estimating and mapping fire-induced change in understorey structure in a single-scan mode with a minimal fixed reference system. Further, the TLS-derived metrics can be used to produce high resolution maps of change in the understorey landscape.

  19. Irrigation water quality assessments

    Science.gov (United States)

    Increasing demands on fresh water supplies by municipal and industrial users mean decreased fresh water availability for irrigated agriculture in semi-arid and arid regions. There is potential for agricultural use of treated wastewaters and low quality waters for irrigation, but this will require co...

  20. The software product assurance metrics study: JPL's software systems quality and productivity

    Science.gov (United States)

    Bush, Marilyn W.

    1989-01-01

    The findings are reported of the Jet Propulsion Laboratory (JPL)/Software Product Assurance (SPA) Metrics Study, conducted as part of a larger JPL effort to improve software quality and productivity. Until recently, no comprehensive data had been assembled on how JPL manages and develops software-intensive systems. The first objective was to collect data on software development from as many projects and for as many years as possible. Results from five projects are discussed. These results reflect 15 years of JPL software development, representing over 100 data points (systems and subsystems), over a third of a billion dollars, over four million lines of code and 28,000 person months. Analysis of this data provides a benchmark for gauging the effectiveness of past, present and future software development work. In addition, the study is meant to encourage projects to record existing metrics data and to gather future data. The SPA long term goal is to integrate the collection of historical data and ongoing project data with future project estimations.

  1. Air quality assessment for Portugal

    OpenAIRE

    Monteiro, A; Miranda, A. I.; C. Borrego; R. Vautard

    2007-01-01

    According to the Air Quality Framework Directive, air pollutant concentration levels have to be assessed and reported annually by each European Union member state, taking into consideration European air quality standards. Plans and programmes should be implemented in zones and agglomerations where pollutant concentrations exceed the limit and target values. The main objective of this study is to perform a long-term air quality simulation for Portugal, using the CHIMERE chemistry-transport mod...

  2. A Code Level Based Programmer Assessment and Selection Criterion Using Metric Tools

    Directory of Open Access Journals (Sweden)

    Ezekiel U. Okike

    2014-11-01

    Full Text Available This study presents a code level measurement of computer programs developed by computer programmers using a Chidamber and Kemerer Java metrics (CKJM) tool and the Myers Briggs Type Indicator (MBTI) tool. The identification of potential computer programmers using personality trait factors does not seem to be the best approach without a code level measurement of the quality of programs. Hence the need to evolve a metric tool which measures both the personality traits of programmers and the code level quality of the programs they develop. This is the focus of this study. In this experiment, a set of Java based programming tasks was given to 33 student programmers who could confidently use the Java programming language. The code developed by these students was analyzed for quality using the CKJM tool. Cohesion, coupling and number of public methods (NPM) metrics were used in the study. The choice of these three metrics from the CKJM suite was because they are useful in measuring well designed code. By examining the cohesion values of classes, high cohesion in the range [0,1] and low coupling imply well designed code. Also, the number of public methods (NPM) in a well-designed class is always less than 5 when the cohesion range is [0,1]. Results from this study show that 19 of the 33 programmers developed good and cohesive programs while 14 did not. Further analysis revealed the personality traits of programmers and the number of good programs written by them. Programmers with Introverted Sensing Thinking Judging (ISTJ) traits produced the highest number of good programs, followed by Introverted iNtuitive Thinking Perceiving (INTP), Introverted iNtuitive Feeling Perceiving (INFP), and Extroverted Sensing Thinking Judging (ESTJ).

  3. Area of Concern: A new paradigm in life cycle assessment for the development of footprint metrics

    DEFF Research Database (Denmark)

    Ridoutt, Bradley G.; Pfister, Stephan; Manzardo, Alessandro;

    2016-01-01

    operating under the auspices of the UNEP/SETAC Life Cycle Initiative project on environmental life cycle impact assessment (LCIA) has been working to develop generic guidance for developers of footprint metrics. The purpose of this paper is to introduce a universal footprint definition and related...... terminology as well as to discuss modelling implications. The task force has worked from the perspective that footprints should be based on LCA methodology, underpinned by the same data systems and models as used in LCA. However, there are important differences in purpose and orientation relative to LCA...... area of concern as the basis for a universal footprint definition. In the same way that LCA uses impact category indicators to assess impacts that follow a common cause-effect pathway toward areas of protection, footprint metrics address areas of concern. The critical difference is that areas of concern

  4. Attention modeling for video quality assessment: balancing global quality and local quality

    OpenAIRE

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2010-01-01

    This paper proposes to evaluate video quality by balancing two quality components: global quality and local quality. The global quality is a result from subjects allocating their attention equally to all regions in a frame and all frames in a video. It is evaluated by image quality metrics (IQM) with averaged spatiotemporal pooling. The local quality is derived from visual attention modeling and quality variations over frames. Saliency, motion, and contrast information are taken into account in ...

  5. On using Multiple Quality Link Metrics with Destination Sequenced Distance Vector Protocol for Wireless Multi-Hop Networks

    CERN Document Server

    Javaid, N; Khan, Z A; Djouani, K

    2012-01-01

    In this paper, we compare and analyze the performance of five quality link metrics for Wireless Multi-hop Networks (WMhNs). The metrics are based on loss probability measurements: ETX, ETT, InvETX, ML and MD, evaluated in a distance vector routing protocol, DSDV. Among these selected metrics, we have implemented ML, MD, InvETX and ETT in DSDV; these were previously implemented with different protocols: ML, MD and InvETX with OLSR, while ETT was implemented in MR-LQSR. For our comparison, we have selected Throughput, Normalized Routing Load (NRL) and End-to-End Delay (E2ED) as performance parameters. Finally, we deduce that InvETX, due to its low computational burden and link asymmetry measurement, outperforms all the other metrics.
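
    For reference, the textbook definitions behind two of the compared metrics: ETX is the expected number of transmissions given the forward and reverse delivery ratios, and ETT scales ETX by the time to send one packet. Treating InvETX as the reciprocal of ETX is an assumption here; the abstract does not spell out the formulas.

    ```python
    def etx(d_f: float, d_r: float) -> float:
        """Expected Transmission Count: d_f/d_r are forward/reverse delivery ratios in (0, 1]."""
        return 1.0 / (d_f * d_r)

    def ett(d_f: float, d_r: float, packet_bits: float, bandwidth_bps: float) -> float:
        """Expected Transmission Time: ETX scaled by the time to send one packet."""
        return etx(d_f, d_r) * (packet_bits / bandwidth_bps)

    def inv_etx(d_f: float, d_r: float) -> float:
        """InvETX taken as the reciprocal of ETX (assumption; higher is better)."""
        return d_f * d_r

    # A link with 90% forward and 80% reverse delivery:
    print(etx(0.9, 0.8))             # ~1.39 expected transmissions per delivered packet
    print(ett(0.9, 0.8, 8192, 1e6))  # ~0.0114 s to deliver one 1 KB packet at 1 Mbps
    ```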

  6. Evaluating research assessment: metrics-based analysis exposes implicit bias in REF2014 results

    OpenAIRE

    Dix, Alan

    2016-01-01

    The recent UK research assessment exercise, REF2014, attempted to be as fair and transparent as possible. However, Alan Dix, a member of the computing sub-panel, reports how a post-hoc analysis of public domain REF data reveals substantial implicit and emergent bias in terms of discipline sub-areas (theoretical vs applied), institutions (Russell Group vs post-1992), and gender. While metrics are generally recognised as flawed, our human processes may be uniformly worse.

  7. Workshop summary: 'Integrating air quality and climate mitigation - is there a need for new metrics to support decision making?'

    Science.gov (United States)

    von Schneidemesser, E.; Schmale, J.; Van Aardenne, J.

    2013-12-01

    Air pollution and climate change are often treated at national and international level as separate problems under different regulatory or thematic frameworks and different policy departments. With air pollution and climate change being strongly linked with regard to their causes, effects and mitigation options, the integration of policies that steer air pollutant and greenhouse gas emission reductions might result in cost-efficient, more effective and thus more sustainable tackling of the two problems. To support informed decision making and to work towards an integrated air quality and climate change mitigation policy requires the identification, quantification and communication of present-day and potential future co-benefits and trade-offs. The identification of co-benefits and trade-offs requires the application of appropriate metrics that are well rooted in science, easy to understand and reflect the needs of policy, industry and the public for informed decision making. For the purpose of this workshop, metrics were loosely defined as a quantified measure of effect or impact used to inform decision-making and to evaluate mitigation measures. The workshop, held on October 9 and 10 and co-organized by the European Environment Agency and the Institute for Advanced Sustainability Studies, brought together representatives from science, policy, NGOs, and industry to discuss whether currently available metrics are 'fit for purpose' or whether there is a need to develop alternative metrics or reassess the way current metrics are used and communicated. Based on the workshop outcome, the presentation will (a) summarize the informational needs and current application of metrics by the end-users, who, depending on their field and area of operation might require health, policy, and/or economically relevant parameters at different scales, (b) provide an overview of the state of the science of currently used and newly developed metrics, and the scientific validity of these

  8. Attention modeling for video quality assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2010-01-01

    This paper proposes to evaluate video quality by balancing two quality components: global quality and local quality. The global quality is a result from subjects allocating their attention equally to all regions in a frame and all frames in a video. It is evaluated by image quality metrics (IQM) with averaged spatiotemporal pooling. The local quality is derived from visual attention modeling and quality variations over frames. Saliency, motion, and contrast information are taken into account in modeling visual attention, which is then integrated into IQMs to calculate the local quality of a video frame. The local quality of a video sequence is calculated by pooling local quality values over all frames with a temporal pooling scheme derived from the known relationship between perceived video quality and the frequency of temporal quality variations. The overall quality of a distorted video is a......

  9. A New Normalizing Algorithm for BAC CGH Arrays with Quality Control Metrics

    Directory of Open Access Journals (Sweden)

    Jeffrey C. Miecznikowski

    2011-01-01

    Full Text Available The main focus in pin-tip (or print-tip) microarray analysis is determining which probes, genes, or oligonucleotides are differentially expressed. Specifically in array comparative genomic hybridization (aCGH) experiments, researchers search for chromosomal imbalances in the genome. To model this data, scientists apply statistical methods to the structure of the experiment and assume that the data consist of the signal plus random noise. In this paper we propose “SmoothArray”, a new method to preprocess comparative genomic hybridization (CGH) bacterial artificial chromosome (BAC) arrays, and we show the effects on a cancer dataset. As part of our R software package “aCGHplus,” this freely available algorithm removes the variation due to the intensity effects, pin/print-tip, the spatial location on the microarray chip, and the relative location from the well plate. Removal of this variation improves the downstream analysis and subsequent inferences made on the data. Further, we present measures to evaluate the quality of the dataset according to the arrayer pins, 384-well plates, plate rows, and plate columns. We compare our method against competing methods using several metrics to measure the biological signal. With this novel normalization algorithm and quality control measures, the user can improve their inferences on datasets and pinpoint problems that may arise in their BAC aCGH technology.
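
    A minimal sketch of the general idea of removing print-tip variation, using simple per-pin median centering on synthetic data; SmoothArray itself additionally models intensity, spatial, and well-plate effects, so this is illustrative only and not the authors' algorithm.

    ```python
    import numpy as np

    # Synthetic log2 ratios for 1000 probes printed by 16 pins; each pin group is
    # centered at zero to remove a pin-specific offset (the simplest print-tip fix).
    rng = np.random.default_rng(0)
    log_ratios = rng.normal(0.0, 1.0, size=1000)
    print_tip = rng.integers(0, 16, size=1000)

    normalized = log_ratios.copy()
    for tip in np.unique(print_tip):
        mask = print_tip == tip
        normalized[mask] -= np.median(log_ratios[mask])

    print(np.median(normalized))  # overall median ~0 after centering
    ```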

  10. The effect of assessment scale and metric selection on the greenhouse gas benefits of woody biomass

    International Nuclear Information System (INIS)

    Recent attention has focused on the net greenhouse gas (GHG) implications of using woody biomass to produce energy. In particular, a great deal of controversy has erupted over the appropriate manner and scale at which to evaluate these GHG effects. Here, we conduct a comparative assessment of six different assessment scales and four different metric calculation techniques against the backdrop of a common biomass demand scenario. We evaluate the net GHG balance of woody biomass co-firing in existing coal-fired facilities in the state of Virginia, finding that assessment scale and metric calculation technique do in fact strongly influence the net GHG balance yielded by this common scenario. Those assessment scales that do not include possible market effects attributable to increased biomass demand, including changes in forest area, forest management intensity, and traditional industry production, generally produce less-favorable GHG balances than those that do. Given the potential difficulty small operators may have generating or accessing information on the extent of these market effects, however, it is likely that stakeholders and policy makers will need to balance accuracy and comprehensiveness with reporting and administrative simplicity. -- Highlights: ► Greenhouse gas (GHG) effects of co-firing forest biomass with coal are assessed. ► GHG effect of replacing coal with forest biomass linked to scale, analytic approach. ► Not accounting for indirect market effects yields poorer relative GHG balances. ► Accounting systems must balance comprehensiveness with administrative simplicity.

  11. Sugar concentration in nectar: a quantitative metric of crop attractiveness for refined pollinator risk assessments.

    Science.gov (United States)

    Knopper, Loren D; Dan, Tereza; Reisig, Dominic D; Johnson, Josephine D; Bowers, Lisa M

    2016-10-01

    Those involved with pollinator risk assessment know that agricultural crops vary in attractiveness to bees. Intuitively, this means that exposure to agricultural pesticides is likely greatest for attractive plants and lowest for unattractive plants. While crop attractiveness in the risk assessment process has been qualitatively remarked on by some authorities, absent is direction on how to refine the process with quantitative metrics of attractiveness. At a high level, attractiveness of crops to bees appears to depend on several key variables, including but not limited to: floral, olfactory, visual and tactile cues; seasonal availability; physical and behavioral characteristics of the bee; plant and nectar rewards. Notwithstanding the complexities and interactions among these variables, sugar content in nectar stands out as a suitable quantitative metric by which to refine pollinator risk assessments for attractiveness. Provided herein is a proposed way to use nectar sugar concentration to adjust the exposure parameter (with what is called a crop attractiveness factor) in the calculation of risk quotients in order to derive crop-specific tier I assessments. This Perspective is meant to invite discussion on incorporating such changes in the risk assessment process. © 2016 The Authors. Pest Management Science published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry. PMID:27197566
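
    A sketch of the proposed adjustment under stated assumptions: the exposure term of a tier I risk quotient is scaled by a crop attractiveness factor. The functional form, parameter names, and numbers are all hypothetical; the paper proposes the general idea rather than these specific values.

    ```python
    # Hypothetical attractiveness-adjusted risk quotient: RQ = exposure * AF / toxicity.
    # All numbers are placeholders for illustration, not values from the paper.
    def risk_quotient(exposure: float, toxicity: float, attractiveness: float = 1.0) -> float:
        """RQ = (attractiveness-adjusted exposure) / toxicity endpoint."""
        return (exposure * attractiveness) / toxicity

    baseline = risk_quotient(exposure=0.5, toxicity=2.0)             # 0.25
    unattractive_crop = risk_quotient(0.5, 2.0, attractiveness=0.4)  # 0.10
    print(baseline, unattractive_crop)
    ```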

  12. Assessing software quality by micro patterns detection

    OpenAIRE

    Destefanis, Giuseppe

    2013-01-01

    One of the goals of Software Engineering is to reduce, or at least to try to control, the defectiveness of software systems during the development phase. Software engineers need to have empirical evidence that software metrics are related to software quality. Unfortunately, software quality is quite an elusive concept, software being an immaterial entity that cannot be physically measured in traditional ways. In general, software quality means many things. In software, the narrowest s...

  13. A new quality assessment and improvement system for print media

    Science.gov (United States)

    Liu, Mohan; Konya, Iuliu; Nandzik, Jan; Flores-Herr, Nicolas; Eickeler, Stefan; Ndjiki-Nya, Patrick

    2012-12-01

    Print media collections of considerable size are held by cultural heritage organizations and will soon be subject to digitization activities. However, technical content quality management in digitization workflows strongly relies on human monitoring. This heavy human intervention is cost intensive and time consuming, which makes automation mandatory. In this article, a new automatic quality assessment and improvement system is proposed. The digitized source image and color reference target are extracted from the raw digitized images by an automatic segmentation process. The target is evaluated by a reference-based algorithm. No-reference quality metrics are applied to the source image. Experimental results are provided to illustrate the performance of the proposed system. We show that it performs well in both the extraction and the quality assessment steps compared to the state of the art. The impact of efficient and dedicated quality assessors on the optimization step is extensively documented.

  14. Applying Undertaker to quality assessment

    DEFF Research Database (Denmark)

    Archie, John G.; Paluszewski, Martin; Karplus, Kevin

    2009-01-01

    Our group tested three quality assessment functions in CASP8: a function which used only distance constraints derived from alignments (SAM-T08-MQAO), a function which added other single-model terms to the distance constraints (SAM-T08-MQAU), and a function which used both single-model and consens...

  15. Subjective and Objective Quality Assessment of Image: A Survey

    Directory of Open Access Journals (Sweden)

    Pedram Mohammadi

    2015-03-01

    Full Text Available With the increasing demand for image-based applications, the efficient and reliable evaluation of image quality has increased in importance. Measuring the image quality is of fundamental importance for numerous image processing applications, where the goal of image quality assessment (IQA) methods is to automatically evaluate the quality of images in agreement with human quality judgments. Numerous IQA methods have been proposed over the past years to fulfill this goal. In this paper, a survey of the quality assessment methods for conventional image signals, as well as the newly emerged ones, which includes the high dynamic range (HDR) images, is presented. A thorough explanation of the subjective and objective IQA, and their classification is provided. Six widely used subjective quality datasets, and performance measures are overviewed. Emphasis is given to the full-reference image quality assessment (FR-IQA) methods, and 9 often-used quality measures (including mean squared error (MSE), structural similarity index (SSIM), multi-scale structural similarity index (MS-SSIM), visual information fidelity (VIF), most apparent distortion (MAD), feature similarity measure (FSIM), feature similarity measure for color images (FSIMC), dynamic range independent measure (DRIM), and tone-mapped images quality index (TMQI)) are thoroughly described. Moreover, the performance and computation time of these metrics on four subjective quality datasets are evaluated.
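
    To make the simplest of the surveyed measures concrete, a small sketch computing MSE and the PSNR derived from it on synthetic 8-bit images; the perceptual metrics in the survey (SSIM, VIF, MAD, and the rest) require considerably more machinery.

    ```python
    import numpy as np

    def mse(ref: np.ndarray, dist: np.ndarray) -> float:
        """Mean squared error between reference and distorted images."""
        return float(np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2))

    def psnr(ref: np.ndarray, dist: np.ndarray, peak: float = 255.0) -> float:
        """Peak signal-to-noise ratio in dB, derived from MSE."""
        m = mse(ref, dist)
        return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

    rng = np.random.default_rng(1)
    reference = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    distorted = np.clip(reference + rng.normal(0, 5, size=(64, 64)), 0, 255).astype(np.uint8)
    print(mse(reference, distorted), psnr(reference, distorted))
    ```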

  16. Quality Metrics of Semi Automatic DTM from Large Format Digital Camera

    Science.gov (United States)

    Narendran, J.; Srinivas, P.; Udayalakshmi, M.; Muralikrishnan, S.

    2014-11-01

    The high resolution digital images from the Ultracam-D Large Format Digital Camera (LFDC) were used for near-automatic DTM generation. In the past, manual methods for DTM generation were used, which are time consuming and labour intensive. In this study, the LFDC was used in synergy with an accurate position and orientation system and processes like image matching algorithms, distributed processing and filtering techniques for near-automatic DTM generation. Traditionally, DTM accuracy is reported using check points collected from the field, which are limited in number, time consuming and costly to obtain. This paper discusses the reliability of the near-automatic DTM generated from Ultracam-D imagery for an operational project covering an area of nearly 600 sq. km, using 21,000 check points captured stereoscopically by experienced operators. The reliability of the DTM for the three study areas with different morphology is presented using a large number of stereo check points and parameters related to the statistical distribution of residuals, such as skewness, kurtosis, standard deviation and linear error at the 90% confidence interval. The residuals obtained for the three areas follow a normal distribution, in agreement with the majority of standards on positional accuracy. The quality metrics in terms of reliability were computed for the DTMs generated, and the tables and graphs show the potential of the Ultracam-D for a semiautomatic DTM generation process for different terrain types.
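
    A small sketch of the residual statistics the paper reports, on synthetic stand-in residuals; taking LE90 as the 90th percentile of absolute elevation residuals is one common convention and is an assumption here.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic stand-ins for DTM-minus-checkpoint elevation residuals (metres).
    rng = np.random.default_rng(5)
    residuals = rng.normal(0.0, 0.25, size=21_000)

    print("skewness:", stats.skew(residuals))
    print("kurtosis:", stats.kurtosis(residuals))
    print("std dev :", residuals.std(ddof=1))
    print("LE90    :", np.percentile(np.abs(residuals), 90.0))  # linear error at 90%
    ```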

  17. Inconsistencies in air quality metrics: 'Blue Sky' days and PM10 concentrations in Beijing

    International Nuclear Information System (INIS)

    International attention is focused on Beijing's efforts to improve air quality. The number of days reported as attaining the daily Chinese National Ambient Air Quality Standard for cities, called 'Blue Sky' days, has increased yearly from 100 in 1998 to 246 in 2007. However, analysis of publicly reported daily air pollution index (API) values for particulate matter (diameter≤10 μm, PM10) indicates a discrepancy between the reported 'Blue Sky' days (defined as API≤100, PM10≤150 μg m-3) and published monitoring station data. Here I show that reported improvements in air quality for 2006-2007 over 2002 levels can be attributed to (a) a shift in reported daily PM10 concentrations from just above to just below the national standard, and (b) a shift of monitoring stations in 2006 to less polluted areas. I found that calculating the daily Beijing API for 2006 and 2007 using data from the original monitoring stations eliminates a bias in reported PM10 concentrations near the 'Blue Sky' boundary, and results in a number of 'Blue Sky' days and an annual PM10 concentration near 2002 levels in 2006 and 2007 (203 days and ∼167 μg m-3 calculated for 2006, i.e. 38 days fewer and a PM10 concentration ∼6 μg m-3 higher than reported; 191 'Blue Sky' days and ∼161 μg m-3 calculated for 2007, i.e. 55 days fewer and ∼12 μg m-3 higher than reported; 203 days and 166 μg m-3 were reported in 2002). Furthermore, although different pollutants were monitored before daily reporting began and less stringent standards were implemented in June 2000, reported annual average concentrations of particulate matter (diameter≤100 μm, TSP) and nitrogen dioxide (NO2) indicate no improvement between 1998 and 2002. This analysis highlights the sensitivity of monitoring data in the evaluation of air quality trends, and the potential for the misinterpretation or manipulation of these trends on the basis of inconsistent metrics.
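
    The 'Blue Sky' criterion reduces to a threshold test on the daily PM10 series (API≤100 corresponding to PM10≤150 μg m-3), so both the day count and the near-boundary bias check can be sketched in a few lines; the data below are synthetic.

    ```python
    import numpy as np

    # Synthetic daily PM10 concentrations (ug/m3) for one year.
    rng = np.random.default_rng(2)
    daily_pm10 = rng.lognormal(mean=5.0, sigma=0.35, size=365)

    blue_sky_days = int(np.sum(daily_pm10 <= 150.0))   # API <= 100 criterion
    annual_mean = float(daily_pm10.mean())
    # The reporting bias described in the paper would appear as a pile-up just below 150:
    near_boundary = int(np.sum((daily_pm10 > 145.0) & (daily_pm10 <= 150.0)))
    print(blue_sky_days, round(annual_mean, 1), near_boundary)
    ```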

  18. Aerial Image Series Quality Assessment

    International Nuclear Information System (INIS)

    With the growing demand for geospatial data, aerial imagery with high spatial, spectral, and temporal resolution has seen great development. It is imperative to evaluate whether the acquired images are of sufficient quality, since subsequent image mosaicking demands strict time consistency and a re-flight involves considerable resources. In this paper, we address the problem of quick aerial image series quality assessment. An image series quality analysis system is proposed, which includes single image quality assessment and image series quality assessment based on image matching, and offers a visual matching result in real time for human validation when the computer produces dubious results. For two images, the affine matrix differs across different parts of the images, especially for wide-field images. We therefore calculate transfer matrices using evenly distributed control points from different image parts with the RANSAC technique, and use the image rotation angle of the mosaic for human validation. Extensive experiments conducted on aerial images show that the proposed method can obtain results similar to those of experts

  19. NEW VISUAL PERCEPTUAL POOLING STRATEGY FOR IMAGE QUALITY ASSESSMENT

    Institute of Scientific and Technical Information of China (English)

    Zhou Wujie; Jiang Gangyi; Yu Mei

    2012-01-01

    Most Image Quality Assessment (IQA) metrics consist of two processes. In the first process, a quality map of the image is measured locally. In the second process, the final quality score is converted from the quality map by using a pooling strategy. The first process has seen effective and significant progress, while the second process has always been done in simple ways. In the second process, the optimal perceptual pooling weights should be determined and computed according to the Human Visual System (HVS). Thus, a reliable spatial pooling mathematical model based on the HVS is an important issue worthy of study. In this paper, a new Visual Perceptual Pooling Strategy (VPPS) for IQA is presented based on the contrast sensitivity and luminance sensitivity of the HVS. Experimental results with the LIVE database show that the visual perceptual weights obtained by the proposed pooling strategy can effectively and significantly improve the performance of IQA metrics such as Mean Structural SIMilarity (MSSIM) or Phase Quantization Code (PQC). It is confirmed that the proposed VPPS demonstrates promising results for improving the performance of existing IQA metrics.
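
    The second (pooling) stage has a simple generic form: a weighted average of the local quality map. The sketch below shows that form only; VPPS derives its weights from the contrast and luminance sensitivity of the HVS, which is not modeled here, so the uniform weights are placeholders.

    ```python
    import numpy as np

    # Generic weighted spatial pooling: score = sum(w * quality_map) with normalized w.
    def pooled_score(quality_map: np.ndarray, weights: np.ndarray) -> float:
        w = weights / weights.sum()
        return float(np.sum(w * quality_map))

    quality_map = np.random.default_rng(3).uniform(0.6, 1.0, size=(32, 32))
    uniform_weights = np.ones_like(quality_map)   # placeholder, not the VPPS weights
    print(pooled_score(quality_map, uniform_weights))  # equals quality_map.mean() here
    ```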

  20. Assessment of radiobiological metrics applied to patient-specific QA process of VMAT prostate treatments.

    Science.gov (United States)

    Clemente-Gutiérrez, Francisco; Pérez-Vara, Consuelo; Clavo-Herranz, María H; López-Carrizosa, Concepción; Pérez-Regadera, José; Ibáñez-Villoslada, Carmen

    2016-01-01

    VMAT is a powerful technique for delivering hypofractionated prostate treatments. The lack of correlation between the usual 2D pretreatment QA results and the clinical impact of possible mistakes has motivated the development of 3D verification systems. Dose determination on the patient anatomy has provided clinical predictive capability to the patient-specific QA process. Dose-volume metrics, as evaluation criteria, should be replaced or complemented by radiobiological indices. These metrics can be incorporated into individualized QA by extracting the information for response parameters (gEUD, TCP, NTCP) from DVHs. The aim of this study is to assess the role of two 3D verification systems dealing with radiobiological metrics applied to a prostate VMAT QA program. Radiobiological calculations were performed for the AAPM TG-166 test cases. Maximum differences were 9.3% for gEUD, -1.3% for TCP, and 5.3% for NTCP calculations. Gamma tests and DVH-based comparisons were carried out for both systems in order to assess their performance in 3D dose determination for prostate treatments (high-, intermediate-, and low-risk, as well as prostate bed patients). Mean gamma passing rates for all structures were better than 92.0% and 99.1% for the 2%/2 mm and 3%/3 mm criteria, respectively. Maximum discrepancies were (2.4% ± 0.8%) and (6.2% ± 1.3%) for targets and normal tissues, respectively. Values for gEUD, TCP, and NTCP were extracted from the TPS and compared to the results obtained with the two systems. Three models were used for TCP calculations (Poisson, sigmoidal, and Niemierko) and two models for NTCP determinations (LKB and Niemierko). The maximum mean difference for gEUD calculations was (4.7% ± 1.3%); for TCP, the maximum discrepancy was (-2.4% ± 1.1%); and NTCP comparisons led to a maximum deviation of (1.5% ± 0.5%). The potential usefulness of biological metrics in patient-specific QA has been explored. Both systems have been successfully assessed as potential tools for evaluating the clinical
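
    For orientation, the standard formulas behind the indices named above: gEUD computed from a differential DVH, gEUD = (Σᵢ vᵢ dᵢᵃ)^(1/a), and the Niemierko EUD-based TCP. The DVH bins and radiobiological parameters below are placeholders, not values from the study.

    ```python
    import numpy as np

    def geud(doses_gy: np.ndarray, volume_fractions: np.ndarray, a: float) -> float:
        """Generalized equivalent uniform dose from a differential DVH."""
        v = volume_fractions / volume_fractions.sum()
        return float(np.sum(v * doses_gy ** a) ** (1.0 / a))

    def tcp_niemierko(eud: float, tcd50: float = 70.0, gamma50: float = 2.0) -> float:
        """EUD-based TCP: 1 / (1 + (TCD50/EUD)^(4*gamma50)); parameters are placeholders."""
        return 1.0 / (1.0 + (tcd50 / eud) ** (4.0 * gamma50))

    doses = np.array([60.0, 70.0, 74.0, 78.0])    # dose bins (Gy), illustrative DVH
    volumes = np.array([0.05, 0.15, 0.50, 0.30])  # fractional volume per bin
    eud = geud(doses, volumes, a=-10.0)           # negative 'a' emphasizes target cold spots
    print(round(eud, 1), round(tcp_niemierko(eud), 3))
    ```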

  1. Determine metrics and set targets for soil quality on agriculture residue and energy crop pathways

    Energy Technology Data Exchange (ETDEWEB)

    Ian Bonner; David Muth

    2013-09-01

    There are three objectives for this project: 1) support OBP in meeting MYPP stated performance goals for the Sustainability Platform, 2) develop integrated feedstock production system designs that increase total productivity of the land, decrease delivered feedstock cost to the conversion facilities, and increase environmental performance of the production system, and 3) deliver to the bioenergy community robust datasets and flexible analysis tools for establishing sustainable and viable use of agricultural residues and dedicated energy crops. The key project outcome to date has been the development and deployment of a sustainable agricultural residue removal decision support framework. The modeling framework has been used to produce a revised national assessment of sustainable residue removal potential. The national assessment datasets are being used to update national resource assessment supply curves using POLYSIS. The residue removal modeling framework has also been enhanced to support high fidelity sub-field scale sustainable removal analyses. The framework has been deployed through a web application and a mobile application. The mobile application is being used extensively in the field with industry, research, and USDA NRCS partners to support and validate sustainable residue removal decisions. The results detailed in this report have set targets for increasing soil sustainability by focusing on primary soil quality indicators (total organic carbon and erosion) in two agricultural residue management pathways and a dedicated energy crop pathway. The two residue pathway targets were set to, 1) increase residue removal by 50% while maintaining soil quality, and 2) increase soil quality by 5% as measured by Soil Management Assessment Framework indicators. The energy crop pathway was set to increase soil quality by 10% using these same indicators. To demonstrate the feasibility and impact of each of these targets, seven case studies spanning the US are presented

  2. Data quality assessment from provenance graphs

    OpenAIRE

    Huynh, Trung Dong; Ebden, Mark; Ramchurn, Sarvapali; Roberts, Stephen; Moreau, Luc

    2014-01-01

    Provenance is a domain-independent means to represent what happened in an application, which can help verify data and infer data quality. Provenance patterns can manifest real-world phenomena such as a significant interest in a piece of content, providing an indication of its quality, or even issues such as undesirable interactions within a group of contributors. This paper presents an application-independent methodology for analyzing data based on the network metrics of provenance graphs to ...

  3. Assessing the colour quality of LED sources

    DEFF Research Database (Denmark)

    Jost-Boissard, S.; Avouac, P.; Fontoynont, Marc

    2015-01-01

    sources and especially some LEDs. In this paper, several aspects of perceived colour quality are investigated using a side-by-side paired comparison method, and the following criteria: naturalness of fruits and vegetables, colourfulness of the Macbeth Color Checker chart, visual appreciation...... but also with a preference index or a memory index calculated without blue and purple hues. A very low correlation was found between appreciation and naturalness indicating that colour quality needs more than one metric to describe subjective aspects....

  4. Quality assessment of images displayed on LCD screen with local backlight dimming

    DEFF Research Database (Denmark)

    Mantel, Claire; Burini, Nino; Korhonen, Jari;

    2013-01-01

    This paper presents a subjective experiment collecting quality assessment of images displayed on a LCD with local backlight dimming using two methodologies: absolute category ratings and paired-comparison. Some well-known objective quality metrics are then applied to the stimuli and their...

  5. Quality assessment of schistosomiasis literature

    OpenAIRE

    Pao, Miranda Lee; Goffman, William

    1990-01-01

    Average impact per paper, a refinement of the use of impact factor, was used to assess the quality of publications produced by a small group of sponsored researchers. The average impact per paper associated with half of the literature published by grantees has been shown to exceed those taken from the total literature at large. Moreover, this indicator appears to be stable over the five years tested. Compared with the schistosomiasis literature as a whole, the subset ...

  6. QPLOT: A Quality Assessment Tool for Next Generation Sequencing Data

    Directory of Open Access Journals (Sweden)

    Bingshan Li

    2013-01-01

    Full Text Available Background. Next generation sequencing (NGS) is being widely used to identify genetic variants associated with human disease. Although the approach is cost effective, the underlying data are susceptible to many types of error. Importantly, since NGS technologies and protocols are rapidly evolving, with constantly changing steps ranging from sample preparation to data processing software updates, it is important to enable researchers to routinely assess the quality of sequencing and alignment data prior to downstream analyses. Results. Here we describe QPLOT, an automated tool that can facilitate the quality assessment of sequencing run performance. Taking standard sequence alignments as input, QPLOT generates a series of diagnostic metrics summarizing run quality and produces convenient graphical summaries for these metrics. QPLOT is computationally efficient, generates webpages for interactive exploration of detailed results, and can handle the joint output of many sequencing runs. Conclusion. QPLOT is an automated tool that facilitates assessment of sequence run quality. We routinely apply QPLOT to ensure quick detection of sequencing run problems. We hope that QPLOT will be useful to the community as well.

  7. Orion Entry Handling Qualities Assessments

    Science.gov (United States)

    Bihari, B.; Tiggers, M.; Strahan, A.; Gonzalez, R.; Sullivan, K.; Stephens, J. P.; Hart, J.; Law, H., III; Bilimoria, K.; Bailey, R.

    2011-01-01

    The Orion Command Module (CM) is a capsule designed to bring crew back from the International Space Station (ISS), the moon and beyond. The atmospheric entry portion of the flight is designed to be flown in autopilot mode for nominal situations. However, there exists the possibility for the crew to take over manual control in off-nominal situations. In these instances, the spacecraft must meet specific handling qualities criteria. To address these criteria, two separate assessments of the Orion CM's entry Handling Qualities (HQ) were conducted at NASA's Johnson Space Center (JSC) using the Cooper-Harper scale (Cooper & Harper, 1969). These assessments were conducted in the summers of 2008 and 2010 using the Advanced NASA Technology Architecture for Exploration Studies (ANTARES) six-degree-of-freedom, high-fidelity Guidance, Navigation, and Control (GN&C) simulation. This paper will address the specifics of the handling qualities criteria, the vehicle configuration, the scenarios flown, the simulation background and setup, crew interfaces and displays, piloting techniques, ratings and crew comments, pre- and post-flight briefings, lessons learned and changes made to improve the overall system performance. The data collection tools, methods, data reduction and output reports will also be discussed. The objective of the 2008 entry HQ assessment was to evaluate the handling qualities of the CM during a lunar skip return. A lunar skip entry case was selected because it was considered the most demanding of all bank control scenarios. Even though skip entry is not planned to be flown manually, it was hypothesized that if a pilot could fly the harder skip entry case, then they could also fly a simpler loads managed or ballistic (constant bank rate command) entry scenario. In addition, with the evaluation set-up of multiple tasks within the entry case, handling qualities ratings collected in the evaluation could be used to assess other scenarios such as the constant bank angle

  8. Quality Assessment for Clinical Proteomics

    OpenAIRE

    Tabb, David L.

    2012-01-01

    Proteomics has emerged from the labs of technologists to enter widespread application in clinical contexts. This transition, however, has been hindered by overstated early claims of accuracy, concerns about reproducibility, and the challenges of handling batch effects properly. New efforts have produced sets of performance metrics and measurements of variability that establish sound expectations for experiments in clinical proteomics. As researchers begin incorporating these metrics in a qual...

  9. Critical Assessment of the Foundations of Power Transmission and Distribution Reliability Metrics and Standards.

    Science.gov (United States)

    Nateghi, Roshanak; Guikema, Seth D; Wu, Yue Grace; Bruss, C Bayan

    2016-01-01

    The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events. PMID:25976848
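
    The abstract does not name the distribution indices it critiques, but the customary ones are SAIFI and SAIDI from IEEE Std 1366; a minimal sketch of their computation on synthetic outage records follows, as an assumption about which indices are meant.

    ```python
    # Synthetic outage records for one utility over a reporting period.
    outages = [
        {"customers_interrupted": 1200, "duration_min": 90},
        {"customers_interrupted": 300,  "duration_min": 45},
    ]
    customers_served = 50_000

    # SAIFI: average interruptions per customer served.
    saifi = sum(o["customers_interrupted"] for o in outages) / customers_served
    # SAIDI: average interruption duration per customer served.
    saidi = sum(o["customers_interrupted"] * o["duration_min"] for o in outages) / customers_served
    print(f"SAIFI = {saifi:.3f} interruptions/customer, SAIDI = {saidi:.1f} min/customer")
    ```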

  10. Spatial Metrics based Landscape Structure and Dynamics Assessment for an emerging Indian Megalopolis

    Directory of Open Access Journals (Sweden)

    Ramachandra T V

    2012-04-01

    Full Text Available Human-induced land use changes are considered the prime agents of global environmental change. Urbanisation and associated growth patterns (urban sprawl) are characteristic of the spatial and temporal changes that take place at regional levels. Unplanned urbanisation and its consequent impacts on natural resources, including basic amenities, have necessitated the investigation of spatial patterns of urbanisation. A comprehensive assessment using quantitative and rigorous methods is required to understand the patterns of change that occur as human processes transform the landscape, to help regional land use planners easily identify and understand the necessary requirements. Tier II cities in India are undergoing rapid changes in recent times and need to be planned to minimize the impacts of unplanned urbanisation. Mysore is one of the rapidly urbanizing traditional regions of Karnataka, India. In this study, an integrated approach of remote sensing and spatial metrics with gradient analysis was used to identify the trends of urban land changes. The spatial and temporal dynamics of the urbanization process of the megalopolis region were studied using spatial data for five decades, with a 3 km buffer from the city boundary, which helps in the implementation of location-specific mitigation measures. The time series of gradient analysis through landscape metrics helped in describing, quantifying and monitoring the spatial configuration of urbanization at landscape levels. Results indicated a significant increase in urban built-up area during the last four decades. Landscape metrics indicate that coalescence of urban areas occurred during the rapid urban growth from 2000 to 2009, indicating clumped growth at the center with simple shapes and dispersed growth in the boundary region with convoluted shapes.

  11. Assessing water quality trends in catchments with contrasting hydrological regimes

    Science.gov (United States)

    Sherriff, Sophie C.; Shore, Mairead; Mellander, Per-Erik

    2016-04-01

    Environmental resources are under increasing pressure to simultaneously achieve social, economic and ecological aims. Increasing demand for food production, for example, has expanded and intensified agricultural systems globally. In turn, the greater risk of diffuse pollutant delivery (suspended sediment (SS) and phosphorus (P)) from land to water, due to higher stocking densities, fertilisation rates and soil erodibility, has contributed to the deterioration of the chemical and ecological quality of aquatic ecosystems. Development of sustainable and resilient management strategies for agro-ecosystems must detect and consider the impact of land use disturbance on water quality over time. However, assessment of multiple monitoring sites over a region is challenged by hydro-climatic fluctuations and the propagation of events through catchments with contrasting hydrological regimes. Simple water quality metrics, for example flow-weighted pollutant exports, have the potential to normalise the impact of catchment hydrology and better identify water quality fluctuations due to land use and short-term climate fluctuations. This paper assesses the utility of flow-weighted water quality metrics to evaluate periods and causes of critical pollutant transfer. Sub-hourly water quality (SS and P) and discharge data were collected from hydrometric monitoring stations at the outlets of five small (~10 km2) agricultural catchments in Ireland. Catchments possess contrasting land uses (predominantly grassland or arable) and soil drainage (poorly, moderately or well drained) characteristics. Flow-weighted water quality metrics were calculated and evaluated according to fluctuations in source pressure and rainfall. Flow-weighted water quality metrics successfully identified fluctuations in pollutant export which could be attributed to land use changes through the agricultural calendar, i.e., groundcover fluctuations. In particular, catchments with predominantly poor or moderate soil drainage
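
    The flow-weighted metric at the heart of this approach has a compact definition: the flow-weighted mean concentration is total load over total flow, FWMC = Σ(cᵢ·qᵢ) / Σ(qᵢ), which normalises concentration records for differences in discharge between catchments. A small sketch on synthetic sub-hourly data:

    ```python
    import numpy as np

    def fwmc(concentration: np.ndarray, discharge: np.ndarray) -> float:
        """Flow-weighted mean concentration: total load divided by total flow."""
        return float(np.sum(concentration * discharge) / np.sum(discharge))

    q = np.array([0.2, 0.5, 2.0, 1.2, 0.4])   # discharge (m3/s), synthetic
    c = np.array([10., 15., 80., 40., 12.])   # suspended sediment (mg/L), synthetic
    print(fwmc(c, q))   # weighted toward high-flow samples...
    print(c.mean())     # ...unlike the plain arithmetic mean
    ```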

  12. SOME METRIC CHARACTERISTICS OF TESTS TO ASSESS BALL SPEED DURING OVERARM THROW PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Ante Prižmić

    2010-12-01

    Full Text Available The aim of the study was to determine the metric characteristics of two tests for the evaluation of handball ball speed during an overarm throw. Research was conducted on a sample of 50 students of the Faculty of Kinesiology, average age of 20.4 years. Besides measurements of body height and body weight, the speed of ball flight after an overarm throw from a sitting position (distance 4 meters) was assessed with a radar gun. The tests of the overarm throw were performed with a blocked and with a free hand which does not perform the throw. Results show satisfactory reliability, sensitivity and validity of all tests. The homogeneity of the tests was not good, considering that a positive trend of results was observed. This is a consequence of respondents' adaptation to the technique of overarm throw performance. Factor analysis extracted a latent dimension that may be called a factor of ball speed during overarm throw performance. Respondents achieved significantly better results in the test RS because of the biomechanically freer movement. This also confirmed the pragmatic validity of the tests. The tests are best for use in sports like handball, water polo, tennis, volleyball, baseball or throwing disciplines in athletics because of the similarity of the overarm performance and technical elements of the chosen sport. The advantages of the tests are fast performance, easy execution and good metric characteristics, and the drawbacks poor homogeneity and the necessity for a radar gun.

  13. [Transcript assembly and quality assessment].

    Science.gov (United States)

    Deng, Feilong; Jia, Xianbo; Lai, Songjia; Liu, Yiping; Chen, Shiyi

    2015-09-01

    The transcript assembly is essential for transcriptome studies from next-generation sequencing data. However, there are still many algorithmic faults in the present assemblers, which should be largely improved in the future. According to whether a reference genome is required or not, transcript assembly can be classified into genome-guided and de novo methods. The two methods have different algorithms and implementation processes. The quality of assembled transcripts depends on a large number of factors, such as PCR amplification, sequencing techniques, the assembly algorithm and genome character. Here, we reviewed the present tools of transcript assembly and various indexes for assessing the quality of assembled transcripts, which would help biologists to determine which assembler should be used in their studies. PMID:26955705

  14. MULTICRITERIA APPROACH FOR ASSESSMENT OF ENVIRONMENTAL QUALITY

    OpenAIRE

    Boris Agarski; Igor Budak; Janko Hodolič; Đorđe Vukelić

    2010-01-01

    Environment is an important and inevitable element that has a direct impact on quality of life. Furthermore, environmental protection is a prerequisite for a healthy and sustainable way of life. Environmental quality can be represented through specific indicators that can be identified, measured, analyzed, and assessed with adequate methods for the assessment of environmental quality. The problem of gaining insight into total environmental quality, caused by different, mutually incomparable, indicators of environm...

  15. Software Architecture Coupling Metric for Assessing Operational Responsiveness of Trading Systems

    Directory of Open Access Journals (Sweden)

    Claudiu VINTE

    2012-01-01

    Full Text Available The empirical observation that motivates our research is the difficulty of assessing the performance of a trading architecture beyond a few synthetic indicators like response time, system latency, availability or volume capacity. Trading systems involve complex software architectures of distributed resources. However, in the context of a large brokerage firm, which offers global coverage from both market and client perspectives, the term distributed gains a critical significance indeed. Offering a low latency ordering system by today's standards is relatively easily achievable, but integrating it in a flexible manner within the broader information system architecture of a broker/dealer requires operational aspects to be factored in. We propose a metric for measuring the coupling level within a software architecture, and employ it to identify architectural designs that can offer a higher level of operational responsiveness, which ultimately would raise the overall real-world performance of a trading system.
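
    A generic illustration of quantifying coupling in a component graph, as the ratio of actual inter-component dependencies to possible ones; this is a sketch of the idea only, with hypothetical component names, not the specific metric proposed in the paper.

    ```python
    # Hypothetical component dependency graph: component -> components it calls.
    dependencies = {
        "order_entry": {"risk_check", "market_gateway"},
        "risk_check": {"positions"},
        "market_gateway": set(),
        "positions": set(),
    }

    n = len(dependencies)
    edges = sum(len(targets) for targets in dependencies.values())
    coupling_ratio = edges / (n * (n - 1))   # 0 = fully decoupled, 1 = fully coupled
    print(f"{edges} edges among {n} components -> coupling ratio {coupling_ratio:.2f}")
    ```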

  16. A City and National Metric measuring Isolation from the Global Market for Food Security Assessment

    Science.gov (United States)

    Brown, Molly E.; Silver, Kirk Coleman; Rajagopalan, Krishnan

    2013-01-01

    The World Bank has invested in infrastructure in developing countries for decades. This investment aims to reduce the isolation of markets, reducing both seasonality and variability in food availability and food prices. Here we combine city market price data, global distance to port, and country infrastructure data to create a new Isolation Index for countries and cities around the world. Our index quantifies the isolation of a city from the global market. We demonstrate that an index built at the country level can be applied at a sub-national level to quantify city isolation. In doing so, we offer policy makers an alternative metric to assess food insecurity. We compare our isolation index with other indices and economic data found in the literature. We show that our Index measures economic isolation regardless of economic stability using correlation and analysis

  17. Landscape Metric Modeling - a Technique for Forest Disturbance Assessment in Shendurney Wildlife Sanctuary

    Directory of Open Access Journals (Sweden)

    Subin Jose

    2011-12-01

    Full Text Available Deforestation and forest degradation are associated, progressive processes driven by anthropogenic stress and climate change, resulting in the conversion of forest area into a mosaic of mature forest fragments, pasture, and degraded habitat. The present study addresses forest degradation assessment of a landscape using landscape metrics. Geospatial techniques including GIS, remote sensing and Fragstats methods are powerful tools in the assessment of forest degradation. The present study was carried out in Shendurney Wildlife Sanctuary, located in the mega-biodiversity hotspot of the Western Ghats, Kerala. A large extent of forest is affected by degradation in this region, leading to depletion of forest biodiversity. For the conservation of forest biodiversity and the implementation of conservation strategies, degradation assessment of habitat destruction areas is important. Two types of data are used in the study, i.e. spatial and non-spatial data. Non-spatial data include both anthropogenic stress and climate data. The study shows that the disturbance index value ranges from 2.5 to 7.5, which has been reclassified into four disturbance zones: low disturbed, medium disturbed, high disturbed and very high disturbed. The analysis would play a key role in the formulation and implementation of forest conservation and management strategies.

  18. Modelling Saliency Awareness for Objective Video Quality Assessment

    OpenAIRE

    Engelke, Ulrich; Barkowsky, Marcus; Callet, Patrick Le; Zepernick, Hans-Jürgen

    2010-01-01

    Existing video quality metrics do not usually take into consideration that spatial regions in video frames are of varying saliency and thus attract the viewer's attention differently. This paper proposes a model of saliency awareness to complement existing video quality metrics, with the aim of improving the agreement of objectively predicted quality with subjectively rated quality. For this purpose, we conducted a subjective experiment in which human observers rated the annoyance of vide...

  19. Comparing apples and oranges: assessment of the relative video quality in the presence of different types of distortions

    DEFF Research Database (Denmark)

    Reiter, Ulrich; Korhonen, Jari; You, Junyong

    2011-01-01

    Video quality assessment is essential for the performance analysis of visual communication applications. Objective metrics can be used for estimating the relative quality differences, but they typically give reliable results only if the compared videos contain similar types of quality distortion....

  20. Color Image Quality Assessment Based on CIEDE2000

    Directory of Open Access Journals (Sweden)

    Yang Yang

    2012-01-01

    Full Text Available Combining the color difference formula of CIEDE2000 and the printing industry standard for visual verification, we present an objective color image quality assessment method correlated with subjective visual perception. An objective score conformed to subjective perception (OSCSP) Q is proposed to directly reflect the subjective visual perception. In addition, we present a general method to calibrate correction factors of the color difference formula under real experimental conditions. Our experimental results show that the presented DE2000-based metric can be consistent with the human visual system in general application environments.
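
    A sketch of the CIEDE2000 stage of such a metric, assuming scikit-image is available: per-pixel colour differences between a reference and a distorted image, averaged into a single score. The mapping from mean ΔE to the paper's OSCSP Q is not reproduced here.

    ```python
    import numpy as np
    from skimage.color import rgb2lab, deltaE_ciede2000

    # Synthetic float RGB images in [0, 1]; the distorted copy adds slight colour noise.
    rng = np.random.default_rng(4)
    reference = rng.random((32, 32, 3))
    distorted = np.clip(reference + rng.normal(0, 0.02, reference.shape), 0, 1)

    # Per-pixel CIEDE2000 differences in Lab space, pooled by a plain mean.
    delta_e = deltaE_ciede2000(rgb2lab(reference), rgb2lab(distorted))
    print(float(delta_e.mean()))   # lower mean delta-E = smaller perceived difference
    ```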

  1. Multiscale Metrics to Assess Flood Resilience: Feedbacks from SMARTesT

    Science.gov (United States)

    Schertzer, D.; Tchiguirinskaia, I.; Lovejoy, S.

    2012-04-01

    The goal of the FP7 SMARTesT project is to greatly improve flood resilient technologies and systems. A major difficulty is that hydrological basins, in particular urban basins, are systems that are not only complicated due to their large number of components with multiple functions, but also complex. This explains many failures in flood management, as well as the difficulty of assessing, including with the help of numerical simulations, the resilience of a flood management system and therefore of optimizing strategies. The term resilience has become extremely fashionable, although corresponding operational and mathematical definitions have remained rather elusive. The latter are required to analyse flood scenarios and simulations. They should be based on some conceptual definition, e.g. the definition of "ecological resilience" (Holling 1973). The first attempt to define resilience metrics was based on the dynamical system approach. In spite of its mathematical elegance and apparent rigor, this approach suffers from a series of limitations. A common limitation, shared with viability theory, is the emergence of spatial scales in systems that are complex in time and space. As recently discussed (Folke et al., 2010), "multiscale resilience is fundamental for understanding the interplay between persistence and change, adaptability and transformability". An operational definition of multiscale resilience can be obtained as soon as scale symmetries are considered. The latter considerably reduce the space-time complexity by defining scale independent variables, called singularities. A scale independent resilience metric should rely on singularities, e.g. to measure qualitative changes of their distribution. Incidentally, singularities are more and more used to analyse urban floods, e.g. for climate scenario analysis. A radical point of view would be to define the scale independent analogues of the viability constraint set, viability kernel and resilient basin for

  2. Quality Assessment of Imputations in Administrative Data

    OpenAIRE

    Schnetzer, Matthias; Astleithner, Franz; Cetkovic, Predrag; Humer, Stefan; Lenk, Manuela; Moser, Mathias

    2015-01-01

    This article contributes a framework for the quality assessment of imputations within a broader structure to evaluate the quality of register-based data. Four quality-related hyperdimensions examine the data processing from the raw-data level to the final statistics. Our focus lies on the quality assessment of different imputation steps and their influence on overall data quality. We suggest classification rates as a measure of accuracy of imputation and derive several computat...

  3. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Shiraishi, Satomi; Moore, Kevin L., E-mail: kevinmoore@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California 92093 (United States); Tan, Jun [Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas 75490 (United States); Olsen, Lindsey A. [Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri 63110 (United States)

    2015-02-15

    Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V10Gy (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined through the initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QMclin − QMpred, and a coefficient of determination, R². For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are
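
    The accuracy measures named above are straightforward to compute once clinical and predicted QMs are paired; the values below are synthetic stand-ins, and defining R² as 1 − SSres/SStot against the clinical mean is an assumption about the exact convention used.

    ```python
    import numpy as np

    # Synthetic paired quality metrics: clinical values vs. knowledge-based predictions.
    qm_clin = np.array([3.2, 4.1, 2.8, 5.0, 3.7])   # e.g., gradient measure (cm)
    qm_pred = np.array([3.0, 4.3, 2.9, 4.6, 3.8])

    delta_qm = qm_clin - qm_pred                     # deltaQM = QMclin - QMpred
    ss_res = np.sum(delta_qm ** 2)
    ss_tot = np.sum((qm_clin - qm_clin.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    print(delta_qm.mean(), delta_qm.std(ddof=1), round(r_squared, 3))
    ```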

  4. Challenges, Solutions, and Quality Metrics of Personal Genome Assembly in Advancing Precision Medicine

    Directory of Open Access Journals (Sweden)

    Wenming Xiao

    2016-04-01

    Full Text Available Even though each of us shares more than 99% of the DNA sequences in our genome, there are millions of sequence codes or structures in small regions that differ between individuals, giving us different characteristics of appearance or responsiveness to medical treatments. Currently, genetic variants in diseased tissues, such as tumors, are uncovered by exploring the differences between the reference genome and the sequences detected in the diseased tissue. However, the public reference genome was derived from the DNA of multiple individuals. As a result, the reference genome is incomplete and may misrepresent the sequence variants of the general population. The more reliable solution is to compare sequences of diseased tissue with the individual's own genome sequence derived from tissue in a normal state. As the price of sequencing the human genome has dropped dramatically to around $1000, there is a promising future for documenting the personal genome of every individual. However, de novo assembly of individual genomes at an affordable cost is still challenging. Thus, until now, only a few human genomes have been fully assembled. In this review, we introduce the history of human genome sequencing and the evolution of sequencing platforms, from Sanger sequencing to emerging “third generation sequencing” technologies. We present the currently available de novo assembly and post-assembly software packages for human genome assembly and their requirements for computational infrastructures. We recommend that a combined hybrid assembly with long and short reads would be a promising way to generate good quality human genome assemblies, and specify parameters for the quality assessment of assembly outcomes. We provide a perspective view of the benefit of using personal genomes as references and suggestions for obtaining a quality personal genome. Finally, we discuss the usage of the personal genome in aiding vaccine design and development, monitoring host

  5. Total Probability of Collision as a Metric for Finite Conjunction Assessment and Collision Risk Management

    Science.gov (United States)

    Frigm, R.; Johnson, L.

    The Probability of Collision (Pc) has become a universal metric and statement of on-orbit collision risk. Although several flavors of the computation exist and are well-documented in the literature, the basic calculation requires the same input: estimates for the position, position uncertainty, and sizes of the two objects involved. The Pc is used operationally to make decisions on whether a given conjunction poses significant collision risk to the primary object (or space asset of concern). It is also used to determine the necessity and degree of mitigative action (typically in the form of an orbital maneuver) to be performed. The predicted post-maneuver Pc also informs the maneuver planning process regarding the timing, direction, and magnitude of the maneuver needed to mitigate the collision risk. Although the data sources, techniques, decision calculus, and workflows vary for different agencies and organizations, they all have a common thread. The standard conjunction assessment and collision risk concept of operations (CONOPS) predicts conjunctions, assesses the collision risk (typically via the Pc), and plans and executes avoidance activities for conjunctions as discrete events. As the space debris environment continues to grow and improvements are made to remote sensing capabilities and sensitivities to detect, track, and predict smaller debris objects, the number of conjunctions will in turn continue to increase. The expected order-of-magnitude increase in the number of predicted conjunctions will challenge the paradigm of treating each conjunction as a discrete event. The challenge will not be limited to workload issues, such as manpower and computing performance, but also the ability for satellite owner/operators to successfully execute their mission while also managing on-orbit collision risk. Executing a propulsive maneuver occasionally can easily be absorbed into the mission planning and operations tempo; whereas, continuously planning evasive

  6. Quality Research by Using Performance Evaluation Metrics for Software Systems and Components

    OpenAIRE

    Ion BULIGIU; Georgeta SOAVA

    2006-01-01

    Software performance and evaluation have four basic needs: (1) a well-defined performance testing strategy, requirements, and focuses; (2) correct and effective performance evaluation models; (3) well-defined performance metrics; and (4) cost-effective performance testing and evaluation tools and techniques. This chapter first introduces a performance test process and discusses the performance testing objectives and focus areas. Then, it summarizes the basic challenges and issues on performance...

  7. Total Probability of Collision as a Metric for Finite Conjunction Assessment and Collision Risk Management

    Science.gov (United States)

    Frigm, Ryan C.; Hejduk, Matthew D.; Johnson, Lauren C.; Plakalovic, Dragan

    2015-01-01

    On-orbit collision risk is becoming an increasing mission risk to all operational satellites in Earth orbit. Managing this risk can be disruptive to mission and operations, present challenges for decision-makers, and is time-consuming for all parties involved. With the planned capability improvements to detecting and tracking smaller orbital debris and capacity improvements to routinely predict on-orbit conjunctions, this mission risk will continue to grow in terms of likelihood and effort. It is a very real possibility that the future space environment will not allow collision risk management and mission operations to be conducted in the same manner as they are today. This paper presents the concept of a finite conjunction assessment, one in which each discrete conjunction is not treated separately but, rather, as part of a continuous event that must be managed concurrently. The paper also introduces the Total Probability of Collision as an analogous metric for finite conjunction assessment operations and provides several options for its usage in a Concept of Operations.
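
    The Total Probability of Collision is introduced here as an aggregate metric over many concurrent conjunctions. One simple way such an aggregate can be formed, assuming independent conjunction events, is sketched below; the independence assumption and the formula are our illustration, not necessarily the paper's exact construction:

```python
def total_pc(individual_pcs):
    """Probability of at least one collision across many conjunctions,
    assuming independent events: 1 - prod(1 - Pc_i)."""
    p_no_collision = 1.0
    for pc in individual_pcs:
        p_no_collision *= (1.0 - pc)
    return 1.0 - p_no_collision

# For small, rare events the total is close to the sum of the parts.
print(total_pc([1e-4, 5e-5, 2e-4]))  # ~3.5e-4
```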

  8. Effects of display rendering on HDR image quality assessment

    Science.gov (United States)

    Zerman, Emin; Valenzise, Giuseppe; De Simone, Francesca; Banterle, Francesco; Dufaux, Frederic

    2015-09-01

    High dynamic range (HDR) displays use local backlight modulation to produce both high brightness levels and large contrast ratios. Thus, the display rendering algorithm and its parameters may greatly affect the HDR visual experience. In this paper, we analyze the impact of display rendering on perceived quality for a specific display (SIM2 HDR47) and for a popular application scenario, i.e., HDR image compression. To this end, we assess whether significant differences exist between the subjective quality of compressed images when these are displayed using either the built-in rendering of the display or a rendering algorithm that we developed. As a second contribution of this paper, we investigate whether the possibility to estimate the true pixel-wise luminance emitted by the display, offered by our rendering approach, can improve the performance of HDR objective quality metrics that require true pixel-wise luminance as input.

  9. Monitoring cognitive function and need with the automated neuropsychological assessment metrics in Decompression Sickness (DCS) research

    Science.gov (United States)

    Nesthus, Thomas E.; Schiflett, Sammuel G.

    1993-01-01

    Hypobaric decompression sickness (DCS) research presents the medical monitor with the difficult task of assessing the onset and progression of DCS largely on the basis of subjective symptoms. Even with the introduction of precordial Doppler ultrasound techniques for the detection of venous gas emboli (VGE), correct prediction of DCS can be made only about 65 percent of the time according to data from the Armstrong Laboratory's (AL's) hypobaric DCS database. An AL research protocol concerned with exercise and its effects on denitrogenation efficiency includes implementation of a performance assessment test battery to evaluate cognitive functioning during a 4-h simulated 30,000 ft (9144 m) exposure. Information gained from such a test battery may assist the medical monitor in identifying early signs of DCS and subtle neurologic dysfunction related to cases of asymptomatic, but advanced, DCS. This presentation concerns the selection and integration of a test battery and the timely graphic display of subject test results for the principal investigator and medical monitor. A subset of the Automated Neuropsychological Assessment Metrics (ANAM) developed through the Office of Military Performance Assessment Technology (OMPAT) was selected. The ANAM software provides a library of simple tests designed for precise measurement of processing efficiency in a variety of cognitive domains. For our application and time constraints, two tests requiring high levels of cognitive processing and memory were chosen along with one test requiring fine psychomotor performance. Accuracy, speed, and processing throughput variables, as well as RMS error, were collected. An automated mood survey provided 'state' information on six scales including anger, happiness, fear, depression, activity, and fatigue. An integrated and interactive LOTUS 1-2-3 macro was developed to import and display past and present task performance and mood-change information.

  10. Performance metrics

    CERN Document Server

    Pijpers, F P

    2006-01-01

    Scientific output varies between research fields and between disciplines within a field such as astrophysics. Even in fields where publication is the primary output, there is considerable variation in publication and hence in citation rates. Data from the Smithsonian/NASA Astrophysics Data System are used to illustrate this problem and to argue against a "one size fits all" approach to performance metrics, especially over the short time-span covered by the Research Assessment Exercise (soon underway in the UK).

  11. Visual Perception Based Objective Stereo Image Quality Assessment for 3D Video Communication

    Directory of Open Access Journals (Sweden)

    Gangyi Jiang

    2014-04-01

    Full Text Available Stereo image quality assessment is a crucial and challenging issue in 3D video communication. One of the major difficulties is how to weight the binocular masking effect. In order to establish an assessment model more in line with the human visual system, the Watson model is adopted in this study; it defines the visibility threshold under no distortion as composed of contrast sensitivity, masking effect, and error. On this basis, we propose an Objective Stereo Image Quality Assessment method (OSIQA), organically combining a new Left-Right view Image Quality Assessment (LR-IQA) metric and a Depth Perception Image Quality Assessment (DP-IQA) metric. The new LR-IQA metric is first given to calculate the changes of perception coefficients in each sub-band, utilizing the Watson model and the human visual system, after wavelet decomposition of the left and right images in a stereo image pair, respectively. Then, a concept of absolute difference map is defined to describe the abstract differential value between the left and right view images, and the DP-IQA metric is presented to measure structure distortion of the original and distorted abstract difference maps through a luminance function, error sensitivity, and a contrast function. Finally, the OSIQA metric is generated by multiplicative fitting of the LR-IQA and DP-IQA metrics based on weighting. Experimental results show that the proposed method is highly correlated with human visual judgments (Mean Opinion Score); the correlation coefficient and monotonicity exceed 0.92 under five types of distortion: Gaussian blur, Gaussian noise, JP2K compression, JPEG compression, and H.264 compression.
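
    The final combination step is described as a weighted multiplicative fit of the two component metrics. A minimal sketch of that idea follows; the exponents are hypothetical stand-ins for the paper's fitted weights:

```python
def osiqa(q_lr, q_dp, alpha=0.7, beta=0.3):
    """Weighted multiplicative combination of the left-right (LR-IQA)
    and depth-perception (DP-IQA) quality scores; the exponents are
    hypothetical weights, not the paper's fitted values."""
    return (q_lr ** alpha) * (q_dp ** beta)

# Toy scores in [0, 1]: good left-right fidelity, weaker depth perception.
print(osiqa(0.85, 0.72))  # ~0.81
```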

  12. Metrics, Dose, and Dose Concept: The Need for a Proper Dose Concept in the Risk Assessment of Nanoparticles

    OpenAIRE

    Myrtill Simkó; Dietmar Nosske; Kreyling, Wolfgang G

    2014-01-01

    In: International Journal of Environmental Research and Public Health, Vol. 11 (2014), No. 4, 4026-4048; DOI: 10.3390/ijerph110404026. In order to calculate the dose for nanoparticles (NP), (i) relevant information about the dose metrics and (ii) a proper dose concept are crucial. Since the appropriate metrics for NP toxicity are yet to be elaborated, a general dose calculation model for nanomaterials is not available. Here we propose how to develop a dose assessment model for NP in analogy to ...

  13. PRAGMATIC MODEL OF TRANSLATION QUALITY ASSESSMENT

    OpenAIRE

    Vorobjeva, S.; Podrezenko, V.

    2006-01-01

    The study analyses various approaches to translation quality assessment. A functional and pragmatic translation quality evaluation model, based on the target text function being equivalent to the source text function, has been proposed.

  14. Healthcare quality maturity assessment model based on quality drivers.

    Science.gov (United States)

    Ramadan, Nadia; Arafeh, Mazen

    2016-04-18

    Purpose - Healthcare providers differ in their readiness and maturity levels regarding quality and quality management systems applications. The purpose of this paper is to serve as a useful quantitative quality maturity-level assessment tool for healthcare organizations. Design/methodology/approach - The model proposes five quality maturity levels (chaotic, primitive, structured, mature and proficient) based on six quality drivers: top management, people, operations, culture, quality focus and accreditation. Findings - Healthcare managers can apply the model to identify the status quo and quality shortcomings, and to evaluate ongoing progress. Practical implications - The model has been incorporated in an interactive Excel worksheet that visually displays the quality maturity-level risk meter. The tool has been applied successfully to local hospitals. Originality/value - The proposed six quality driver scales appear to measure healthcare provider maturity levels on a single quality meter. PMID:27120510

  15. The palmar metric: A novel radiographic assessment of the equine distal phalanx

    Directory of Open Access Journals (Sweden)

    M.A. Burd

    2014-08-01

    Full Text Available Digital radiographs are often used to subjectively assess the equine digit. Recently, quantitative and objective radiographic measurements have been reported that give new insight into the form and function of the equine digit. We investigated a radio-dense curvilinear profile along the distal phalanx on lateral radiographs that we term the Palmar Curve (PC), which we believe provides a measurement of the concavity of the distal phalanx of the horse. A second quantitative measurement, the Palmar Metric (PM), was defined as the percent area under the PC. We correlated the PM with age using 544 radiographs of the distal phalanx from the left and right front feet of horses of various breeds and known age, and 278 radiographs of the front feet of Quarter Horses. The PM was negatively correlated with age and decreased at a rate of 0.28% per year for horses of various breeds and 0.33% per year for Quarter Horses. Therefore, veterinarians should be aware of age-related change in the concave, parietal solar aspect of the distal phalanx in the horse.

  16. Convective Weather Forecast Quality Metrics for Air Traffic Management Decision-Making

    Science.gov (United States)

    Chatterji, Gano B.; Gyarfas, Brett; Chan, William N.; Meyn, Larry A.

    2006-01-01

    the process described in Refs. 5 through 7, in terms of percentage coverage or confidence level is notionally sound compared to characterizing in terms of probabilities, because the probability of the forecast being correct can only be determined using actual observations. References 5 through 7 use only the forecast data and not the observations. The method for computing the probability of detection, false alarm ratio, and several forecast quality metrics (Skill Scores) using both the forecast and observation data is given in Ref. 2. This paper extends the statistical verification method in Ref. 2 to determine co-occurrence probabilities. The method consists of computing the probability that a severe weather cell (grid location) is detected in the observation data in the neighborhood of the severe weather cell in the forecast data. Probabilities of occurrence at the grid location and in its neighborhood with higher severity, and with lower severity, in the observation data compared to that in the forecast data are examined. The method proposed in Refs. 5 through 7 is used for computing the probability that a certain number of cells in the neighborhood of severe weather cells in the forecast data are seen as severe weather cells in the observation data. Finally, the probability of existence of gaps in the observation data in the neighborhood of severe weather cells in forecast data is computed. Gaps are defined as openings between severe weather cells through which an aircraft can safely fly to its intended destination. The rest of the paper is organized as follows. Section II summarizes the statistical verification method described in Ref. 2. The extension of this method for computing the co-occurrence probabilities is discussed in Section III. Numerical examples using NCWF forecast data and NCWD observation data are presented in Section III to elucidate the characteristics of the co-occurrence probabilities. This section also discusses the procedure for computing
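
    The contingency-table scores mentioned here (probability of detection and false alarm ratio) have standard definitions that can be sketched directly from boolean forecast and observation grids; the arrays below are toy data for illustration only:

```python
import numpy as np

def pod_far(forecast, observed):
    """Probability of detection and false alarm ratio computed from
    boolean severe-weather grids of the same shape."""
    hits = np.logical_and(forecast, observed).sum()
    misses = np.logical_and(~forecast, observed).sum()
    false_alarms = np.logical_and(forecast, ~observed).sum()
    pod = hits / (hits + misses)                 # fraction of observed cells forecast
    far = false_alarms / (hits + false_alarms)   # fraction of forecasts that failed
    return pod, far

forecast = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
observed = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(pod_far(forecast, observed))  # POD = 2/3, FAR = 1/3
```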

  17. Assessing the Greenness of Chemical Reactions in the Laboratory Using Updated Holistic Graphic Metrics Based on the Globally Harmonized System of Classification and Labeling of Chemicals

    Science.gov (United States)

    Ribeiro, M. Gabriela T. C.; Yunes, Santiago F.; Machado, Adelio A. S. C.

    2014-01-01

    Two graphic holistic metrics for assessing the greenness of synthesis, the "green star" and the "green circle", have been presented previously. These metrics assess the greenness by the degree of accomplishment of each of the 12 principles of green chemistry that apply to the case under evaluation. The criteria for assessment…

  18. Fifty shades of grey: Variability in metric-based assessment of surface waters using macroinvertebrates

    OpenAIRE

    Keizer-Vlek, H.E.

    2014-01-01

    Since the introduction of the European Water Framework Directive (WFD) in 2000, every member state is obligated to assess the effects of human activities on the ecological quality status of all water bodies and to indicate the level of confidence and precision of the results provided by the monitoring programs in their river basin management plans (European Commission, 2000). Currently, the statistical properties associated with aquatic monitoring programs are often unknown. Therefore, the ov...

  19. Towards Perceptually Driven Segmentation Evaluation Metrics

    OpenAIRE

    Drelie Gelasca, E.; Ebrahimi, T.; Farias, M; Carli, M; Mitra, S.

    2004-01-01

    To be reliable, an automatic segmentation evaluation metric has to be validated by subjective tests. In this paper, a formal protocol for subjective tests for segmentation quality assessment is presented. The most common artifacts produced by segmentation algorithms are identified and an extensive analysis of their effects on the perceived quality is performed. A psychophysical experiment was performed to assess the quality of video with segmentation errors. The results show how an objective ...

  20. IT PROJECT METRICS

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2007-09-01

    Full Text Available The objectives of IT projects are presented. The quality requirements that these projects must fulfill are established. Quality and evaluation indicators for running IT projects are built and verified. Project quality characteristics are presented and discussed. Model refinement for IT project metrics is treated and a software structure is proposed. For an IT project designed for software development, metrics for quality evaluation and project implementation mode are used.

  1. Comparison of a Graphical and a Textual Design Language Using Software Quality Metrics

    OpenAIRE

    Henry, Sallie M.; Goff, Roger

    1988-01-01

    For many years the software engineering community has been attacking the software reliability problem on two fronts. First via design methodologies, languages and tools as a precheck on quality and second by measuring the quality of produced software as a postcheck. This research attempts to unify the approach to creating reliable software by providing the ability to measure the quality of a design prior to its implementation. A comparison of a graphical and a textual design language is pres...

  2. Quality assurance in performance assessments

    International Nuclear Information System (INIS)

    Following publication of the Site-94 report, SKI wishes to review how Quality Assurance (QA) issues could be treated in future work, both in undertaking their own Performance Assessment (PA) calculations and in scrutinising documents supplied by SKB (on planning a repository for spent fuels in Sweden). The aim of this report is to identify the key QA issues and to outline the nature and content of a QA plan which would be suitable for SKI, bearing in mind the requirements and recommendations of relevant standards. Emphasis is on issues which are specific to Performance Assessments for deep repositories for radioactive wastes, but consideration is also given to issues which need to be addressed in all large projects. Given the long time over which the performance of a deep repository system must be evaluated, the demonstration that a repository is likely to perform satisfactorily relies on the use of computer-generated model predictions of system performance. This raises particular QA issues which are generally not encountered in other technical areas (for instance, power station operations). The traceability of the arguments used is a key QA issue, as are conceptual model uncertainty and code verification and validation; these were all included in the consideration of overall uncertainties in the Site-94 project. Additionally, issues which are particularly relevant to SKI include how QA in a PA fits in with the general QA procedures of the organisation undertaking the work, and the relationship between QA as applied by the regulator and by the implementor of a repository development programme. Section 2 introduces the discussion of these issues by reviewing the standards and guidance which are available from national and international organisations. This is followed in Section 3 by a review of specific issues which arise from the Site-94 exercise. An outline procedure for managing QA issues in SKI is put forward as a basis for discussion in Section 4. It is hoped that

  3. How to assess the quality of your analytical method?

    Science.gov (United States)

    Topic, Elizabeta; Nikolac, Nora; Panteghini, Mauro; Theodorsson, Elvar; Salvagno, Gian Luca; Miler, Marijana; Simundic, Ana-Maria; Infusino, Ilenia; Nordin, Gunnar; Westgard, Sten

    2015-10-01

    Laboratory medicine is amongst the fastest growing fields in medicine, crucial in diagnosis, support of prevention and in the monitoring of disease for individual patients and for the evaluation of treatment for populations of patients. Therefore, high quality and safety in laboratory testing has a prominent role in high-quality healthcare. Applied knowledge and competencies of professionals in laboratory medicine increase the clinical value of laboratory results by decreasing laboratory errors, increasing appropriate utilization of tests, and increasing cost effectiveness. This collective paper provides insights into how to validate laboratory assays and assess the quality of methods. It is a synopsis of the lectures at the 15th European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Continuing Postgraduate Course in Clinical Chemistry and Laboratory Medicine entitled "How to assess the quality of your method?" (Zagreb, Croatia, 24-25 October 2015). The leading topics discussed include who should perform validation/verification of methods, and what and when to validate; verification of imprecision and bias; verification of reference intervals; verification of qualitative test procedures; verification of blood collection systems; comparability of results among methods and analytical systems; limit of detection, limit of quantification and limit of decision; how to assess measurement uncertainty; the optimal use of Internal Quality Control and External Quality Assessment data; Six Sigma metrics; performance specifications; as well as biological variation. This article, which continues the annual tradition of collective papers from the EFLM continuing postgraduate courses in clinical chemistry and laboratory medicine, aims to provide further contributions by discussing the quality of laboratory methods and measurements and, at the same time, to offer continuing professional development to the attendees. PMID:26408611
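
    Among the listed topics, the Six Sigma metric has a particularly compact form, commonly quoted as sigma = (TEa - |bias|) / CV with all quantities in percent. A minimal sketch with illustrative numbers, not taken from the course material:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Six Sigma metric for a laboratory assay:
    (allowable total error - |bias|) / imprecision, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Illustrative values: TEa 10%, bias 1.5%, CV 2% -> sigma = 4.25.
print(sigma_metric(10.0, 1.5, 2.0))
```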

  4. An information theoretic approach for privacy metrics

    OpenAIRE

    Michele Bezzi

    2010-01-01

    Organizations often need to release microdata without revealing sensitive information. To this scope, data are anonymized and, to assess the quality of the process, various privacy metrics have been proposed, such as k-anonymity, l-diversity, and t-closeness. These metrics are able to capture different aspects of the disclosure risk, imposing minimal requirements on the association of an individual with the sensitive attributes. If we want to combine them in an optimization problem, we need a ...
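
    Of the metrics named, k-anonymity is the simplest to state: every combination of quasi-identifier values must be shared by at least k records. A minimal sketch with toy data (column names and values are invented):

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest equivalence-class size over the quasi-identifier
    columns; the table is k-anonymous for this value of k."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

records = [
    {"zip": "130**", "age": "30-39", "disease": "flu"},
    {"zip": "130**", "age": "30-39", "disease": "cold"},
    {"zip": "148**", "age": "20-29", "disease": "flu"},
    {"zip": "148**", "age": "20-29", "disease": "asthma"},
]
print(k_anonymity(records, ["zip", "age"]))  # -> 2
```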

  5. qcML : an exchange format for quality control metrics from mass spectrometry experiments

    NARCIS (Netherlands)

    Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W P; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A; Kelstrup, Christian D; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S; Olsen, Jesper V; Heck, Albert J R; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart

    2014-01-01

    Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exc

  6. Selection of metrics based on the seagrass Cymodocea nodosa and development of a biotic index (CYMOX) for assessing ecological status of coastal and transitional waters

    Science.gov (United States)

    Oliva, Silvia; Mascaró, Oriol; Llagostera, Izaskun; Pérez, Marta; Romero, Javier

    2012-12-01

    Bioindicators, based on a large variety of organisms, have been increasingly used in the assessment of the status of aquatic systems. In marine coastal waters, seagrasses have shown a great potential as bioindicator organisms, probably due to both their environmental sensitivity and the large amount of knowledge available. However, and as far as we are aware, only little attention has been paid to euryhaline species suitable for biomonitoring both transitional and marine waters. With the aim to contribute to this expanding field, and provide new and useful tools for managers, we develop here a multi-bioindicator index based on the seagrass Cymodocea nodosa. We first compiled from the literature a suite of 54 candidate metrics, i.e., measurable attributes of the organism or community concerned that adequately reflect properties of the environment, obtained from C. nodosa and its associated ecosystem, putatively responding to environmental deterioration. We then evaluated them empirically, obtaining a complete dataset on these metrics along a gradient of anthropogenic disturbance. Using this dataset, we selected the metrics to construct the index, using, successively: (i) ANOVA, to assess their capacity to discriminate among sites of different environmental conditions; (ii) PCA, to check the existence of a common pattern among the metrics reflecting the environmental gradient; and (iii) feasibility and cost-effectiveness criteria. Finally, 10 metrics (out of the 54 tested) encompassing from the physiological (δ15N, δ34S, % N, % P content of rhizomes), through the individual (shoot size) and the population (root weight ratio), to the community (epiphyte load) organisation levels, and some metallic pollution descriptors (Cd, Cu and Zn content of rhizomes) were retained and integrated into a single index (CYMOX) using the scores of the sites on the first axis of a PCA. These scores were reduced to a 0-1 (Ecological Quality Ratio) scale by referring the values to the
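
    The last step described, reducing first-axis PCA scores to a 0-1 Ecological Quality Ratio, amounts to rescaling the scores between anchor values; a minimal sketch, with reference and worst-case anchors assumed for illustration:

```python
import numpy as np

def eqr_from_pca_scores(scores, ref_value, worst_value):
    """Rescale first-axis PCA site scores to a 0-1 Ecological Quality
    Ratio using best/worst condition anchors (assumed values here)."""
    eqr = (np.asarray(scores, float) - worst_value) / (ref_value - worst_value)
    return np.clip(eqr, 0.0, 1.0)

# Toy site scores with hypothetical anchors at -2.0 (worst) and 2.5 (reference).
print(eqr_from_pca_scores([2.1, -0.3, -1.8], ref_value=2.5, worst_value=-2.0))
```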

  7. Adding A Spending Metric To Medicare's Value-Based Purchasing Program Rewarded Low-Quality Hospitals.

    Science.gov (United States)

    Das, Anup; Norton, Edward C; Miller, David C; Ryan, Andrew M; Birkmeyer, John D; Chen, Lena M

    2016-05-01

    In fiscal year 2015 the Centers for Medicare and Medicaid Services expanded its Hospital Value-Based Purchasing program by rewarding or penalizing hospitals for their performance on both spending and quality. This represented a sharp departure from the program's original efforts to incentivize hospitals for quality alone. How this change redistributed hospital bonuses and penalties was unknown. Using data from 2,679 US hospitals that participated in the program in fiscal years 2014 and 2015, we found that the new emphasis on spending rewarded not only low-spending hospitals but some low-quality hospitals as well. Thirty-eight percent of low-spending hospitals received bonuses in fiscal year 2014, compared to 100 percent in fiscal year 2015. However, low-quality hospitals also began to receive bonuses (0 percent in fiscal year 2014 compared to 17 percent in 2015). All high-quality hospitals received bonuses in both years. The Centers for Medicare and Medicaid Services should consider incorporating a minimum quality threshold into the Hospital Value-Based Purchasing program to avoid rewarding low-quality, low-spending hospitals. PMID:27140997

  8. Assessing Quality in Mental Health Care

    OpenAIRE

    Ian Shaw

    1997-01-01

    Quality assessment in mental health services is undergoing change in the United Kingdom following the introduction of market reforms. Traditionally, service quality was monitored by professional practitioners with reference to user satisfaction. This became formalized, and the two main forms of quality assurance currently used are outlined. However, the government is concerned that this may be inadequate for the monitoring of quality standards, specified in contracts between service purchaser...

  9. Elliptical local vessel density: a fast and robust quality metric for retinal images

    OpenAIRE

    Giancardo, L.; Abramoff, M.D.; Chaum, E.; Karnowski, T.P.; Meriaudeau, F.; Tobin, K.W.

    2008-01-01

    A great effort of the research community is geared towards the creation of an automatic screening system able to promptly detect diabetic retinopathy with the use of fundus cameras. In addition, there are some documented approaches for automatically judging the image quality. We propose a new set of features independent of field of view or resolution to describe the morphology of the patient's vessels. Our initial results suggest that these features can be used to estimate the image quality i...

  10. Parasitology: United Kingdom National Quality Assessment Scheme.

    OpenAIRE

    Hawthorne, M; Chiodini, P L; Snell, J J; Moody, A H; Ramsay, A

    1992-01-01

    AIMS: To assess the results from parasitology laboratories taking part in a quality assessment scheme between 1986 and 1991; and to compare performance with repeat specimens. METHODS: Quality assessment of blood parasitology, including tissue parasites (n = 444; 358 UK, 86 overseas), and faecal parasitology, including extra-intestinal parasites (n = 205; 141 UK, 64 overseas), was performed. RESULTS: Overall, the standard of performance was poor. A questionnaire distributed to participants sho...

  11. Quality Assessment in the Primary care

    OpenAIRE

    Muharrem Ak

    2013-01-01

    -Quality Assessment in the Primary care Dear Editor; I have read the article titled “Implementation of Rogi Kalyan Samiti (RKS) at Primary Health Centre Durvesh” with great interest. Shrivastava et al. concluded that the assessment mechanism for the achievement of objectives for the suggested RKS model was not successful (1). Hereby I would like to emphasize the importance of quality assessment (QA), especially in the era of newly established primary care implementations in our coun...

  12. Metrics to assess the mitigation of global warming by carbon capture and storage in the ocean and in geological reservoirs

    OpenAIRE

    Haugan, Peter Mosby; Joos, Fortunat

    2004-01-01

    Different metrics to assess mitigation of global warming by carbon capture and storage are discussed. The climatic impact of capturing 30% of the anthropogenic carbon emission and of its storage in the ocean or in geological reservoirs is evaluated for different stabilization scenarios using a reduced-form carbon cycle-climate model. The accumulated Global Warming Avoided (GWA) remains, after a ramp-up during the first ~50 years, in the range of 15 to 30% over the next millennium for de...

  13. Welfare Quality assessment protocol for laying hens = Welfare Quality assessment protocol voor leghennen

    OpenAIRE

    Niekerk, van, M.; H. Gunnink; Reenen, van, A Alexander

    2012-01-01

    Results of a study on the Welfare Quality® assessment protocol for laying hens. The report describes the development of the integration of welfare assessment into scores per criterion, as well as a simplification of the Welfare Quality® assessment protocol. Results are given from the assessment of 122 farms.

  14. Weighted-MSE based on Saliency map for assessing video quality of H.264 video streams

    OpenAIRE

    Boujut, Hugo; Benois-Pineau, Jenny; Hadar, Ofer; Ahmed, Toufik; Bonnet, Patrick

    2011-01-01

    The human vision system is very complex and has been studied for many years, specifically for the purpose of efficient encoding of visual content, e.g. video content from digital TV. There is physiological and psychological evidence indicating that viewers do not pay equal attention to all exposed visual information, but only focus on certain areas known as focus of attention (FOA) or saliency regions. In this work, we propose a novel saliency-based objective quality assessment metric for assessing the p
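
    The core of a saliency-weighted MSE is a per-pixel squared error weighted by a saliency map and normalized by the total saliency. The sketch below shows that core idea only; the paper's FOA model and pooling are more elaborate, and the data here are synthetic:

```python
import numpy as np

def weighted_mse(reference, distorted, saliency):
    """MSE where each pixel's squared error is weighted by a
    non-negative saliency map, normalized by the total saliency."""
    err = (np.asarray(reference, float) - np.asarray(distorted, float)) ** 2
    w = np.asarray(saliency, float)
    return float((w * err).sum() / w.sum())

ref = np.random.rand(64, 64)
dist = ref + 0.05 * np.random.randn(64, 64)       # mild synthetic distortion
sal = np.ones((64, 64)); sal[16:48, 16:48] = 4.0  # FOA region weighted 4x
print(weighted_mse(ref, dist, sal))
```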

  15. A priori mesh quality metrics for three-dimensional hybrid grids

    International Nuclear Information System (INIS)

    Use of general hybrid grids to attain complex-geometry field simulations poses a challenge on estimation of their quality. Apart from the typical problems of non-uniformity and non-orthogonality, the change in element topology is an extra issue to address. The present work derives and evaluates an a priori mesh quality indicator for structured, unstructured, as well as hybrid grids consisting of hexahedra, prisms, tetrahedra, and pyramids. Emphasis is placed on deriving a direct relation between the quality measure and mesh distortion. The work is based on use of the Finite Volume discretization for evaluation of first order spatial derivatives. The analytic form of the truncation error is derived and applied to elementary types of mesh distortion including typical hybrid grid interfaces. The corresponding analytic expressions provide relations between the truncation error and the degree of stretching, skewness, shearing, torsion, expansion, as well as the type of grid interface

  16. A priori mesh quality metrics for three-dimensional hybrid grids

    Energy Technology Data Exchange (ETDEWEB)

    Kallinderis, Y., E-mail: kallind@otenet.gr; Fotia, S., E-mail: soph.fotia@gmail.com

    2015-01-01

    Use of general hybrid grids to attain complex-geometry field simulations poses a challenge on estimation of their quality. Apart from the typical problems of non-uniformity and non-orthogonality, the change in element topology is an extra issue to address. The present work derives and evaluates an a priori mesh quality indicator for structured, unstructured, as well as hybrid grids consisting of hexahedra, prisms, tetrahedra, and pyramids. Emphasis is placed on deriving a direct relation between the quality measure and mesh distortion. The work is based on use of the Finite Volume discretization for evaluation of first order spatial derivatives. The analytic form of the truncation error is derived and applied to elementary types of mesh distortion including typical hybrid grid interfaces. The corresponding analytic expressions provide relations between the truncation error and the degree of stretching, skewness, shearing, torsion, expansion, as well as the type of grid interface.

  17. Elliptical Local Vessel Density: a Fast and Robust Quality Metric for Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Chaum, Edward [ORNL; Karnowski, Thomas Paul [ORNL; Meriaudeau, Fabrice [ORNL; Tobin Jr, Kenneth William [ORNL; Abramoff, M.D. [University of Iowa

    2008-01-01

    A great effort of the research community is geared towards the creation of an automatic screening system able to promptly detect diabetic retinopathy with the use of fundus cameras. In addition, there are some documented approaches to the problem of automatically judging the image quality. We propose a new set of features, independent of Field of View or resolution, to describe the morphology of the patient's vessels. Our initial results suggest that they can be used to estimate the image quality in a time one order of magnitude shorter with respect to previous techniques.

  18. Compression-based classification of biological sequences and structures via the Universal Similarity Metric: experimental assessment

    Directory of Open Access Journals (Sweden)

    Manzini Giovanni

    2007-07-01

    Full Text Available Abstract Background Similarity of sequences is a key mathematical notion for Classification and Phylogenetic studies in Biology. It is currently primarily handled using alignments. However, the alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined only to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov Complexity, and universality is its most novel striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness are tested on various data sets, yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM, and mostly at a qualitative level; no comparison among UCD, NCD and CD is available; and no comparison of USM with existing methods, either based on alignments or not, seems to be available. Results We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to Molecular Biology. This offers the first systematic and quantitative experimental assessment of this methodology, which naturally complements the many theoretical and preliminary experimental results available. Moreover, we compare the USM methodology with methods both based on alignments and not. We may group our experiments into two sets. The first one, performed via ROC
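
    Of the three approximations, NCD has the most widely quoted closed form, (C(xy) - min(C(x), C(y))) / max(C(x), C(y)) for a compressor C. A minimal sketch using zlib as the compressor; the study itself evaluates 25 different compressors, so this single choice is illustrative only:

```python
import zlib

def compressed_size(data: bytes) -> int:
    """Compressed size in bytes, used as a Kolmogorov-complexity proxy."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Dissimilarity:
    (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy sequences: similar strings give values closer to 0.
a = b"ACGTACGTACGTACGT" * 50
b = b"ACGTACGAACGTACGT" * 50
print(ncd(a, a), ncd(a, b))
```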

  19. Soil quality assessment in rice production systems

    OpenAIRE

    Rodrigues de Lima, A.C.

    2007-01-01

    In the state of Rio Grande do Sul, Brazil, rice production is one of the most important regional activities. Farmers are concerned that the land use practices for rice production in the Camaquã region may not be sustainable because of detrimental effects on soil quality. The study presented in this thesis aimed (a) to describe and understand how rice farmers assess soil quality; (b) to propose a minimum data set (MDS) to assess soil quality; (c) to establish which soil quality indicator(s) ca...

  20. Use of Frequency Response Metrics to Assess the Planning and Operating Requirements for Reliable Integration of Variable Renewable Generation

    Energy Technology Data Exchange (ETDEWEB)

    Eto, Joseph H.; Undrill, John; Mackin, Peter; Daschmans, Ron; Williams, Ben; Haney, Brian; Hunt, Randall; Ellis, Jeff; Illian, Howard; Martinez, Carlos; O' Malley, Mark; Coughlin, Katie; LaCommare, Kristina Hamachi

    2010-12-20

    An interconnected electric power system is a complex system that must be operated within a safe frequency range in order to reliably maintain the instantaneous balance between generation and load. This is accomplished by ensuring that adequate resources are available to respond to expected and unexpected imbalances and restoring frequency to its scheduled value in order to ensure uninterrupted electric service to customers. Electrical systems must be flexible enough to reliably operate under a variety of "change" scenarios. System planners and operators must understand how other parts of the system change in response to the initial change, and need tools to manage such changes to ensure reliable operation within the scheduled frequency range. This report presents a systematic approach to identifying metrics that are useful for operating and planning a reliable system with increased amounts of variable renewable generation, which builds on existing industry practices for frequency control after unexpected loss of a large amount of generation. The report introduces a set of metrics or tools for measuring the adequacy of frequency response within an interconnection. Based on the concept of the frequency nadir, these metrics take advantage of new information gathering and processing capabilities that system operators are developing for wide-area situational awareness. Primary frequency response is the leading metric that will be used by this report to assess the adequacy of primary frequency control reserves necessary to ensure reliable operation. It measures what is needed to arrest frequency decline (i.e., to establish frequency nadir) at a frequency higher than the highest set point for under-frequency load shedding within an interconnection. These metrics can be used to guide the reliable operation of an interconnection under changing circumstances.
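
    The nadir-based idea can be sketched compactly: from a post-event frequency trace, compute the nadir, a MW-per-0.1-Hz response ratio, and the margin to the under-frequency load-shedding set point. The definitions and numbers below are illustrative simplifications, not the report's exact metric definitions:

```python
import numpy as np

def frequency_metrics(freq_trace, mw_lost, f_sched=60.0, ufls_setpoint=59.5):
    """Frequency nadir, a simple MW-per-0.1-Hz primary frequency
    response ratio, and the margin between the nadir and the highest
    under-frequency load-shedding set point (all values assumed)."""
    nadir = float(np.min(freq_trace))
    response = mw_lost / ((f_sched - nadir) / 0.1)  # MW per 0.1 Hz of decline
    margin = nadir - ufls_setpoint                  # must stay positive
    return nadir, response, margin

t = np.linspace(0.0, 30.0, 301)
trace = 60.0 - 0.25 * np.exp(-((t - 8.0) / 6.0) ** 2)  # synthetic event, nadir 59.75 Hz
print(frequency_metrics(trace, mw_lost=1000.0))  # (59.75, 400.0, 0.25)
```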

  1. [Establishing IAQ Metrics and Baseline Measures.] "Indoor Air Quality Tools for Schools" Update #20

    Science.gov (United States)

    US Environmental Protection Agency, 2009

    2009-01-01

    This issue of "Indoor Air Quality Tools for Schools" Update ("IAQ TfS" Update) contains the following items: (1) News and Events; (2) IAQ Profile: Establishing Your Baseline for Long-Term Success (Feature Article); (3) Insight into Excellence: Belleville Township High School District #201, 2009 Leadership Award Winner; and (4) Have Your Questions…

  2. A metrics-based comparison of secondary user quality between iOS and Android

    NARCIS (Netherlands)

    Amman, T.

    2014-01-01

    Native mobile applications gain popularity in the commercial market. There is no other economic sector that grows as fast. A lot of economic research is done in this sector, but there is very little research that deals with qualities for mobile application developers. This paper compares the q

  3. Bringing Public Engagement into an Academic Plan and Its Assessment Metrics

    Science.gov (United States)

    Britner, Preston A.

    2012-01-01

    This article describes how public engagement was incorporated into a research university's current Academic Plan, how the public engagement metrics were selected and adopted, and how those processes led to subsequent strategic planning. Some recognition of the importance of civic engagement has followed, although there are many areas in which…

  4. Using Landscape Metrics Analysis and Analytic Hierarchy Process to Assess Water Harvesting Potential Sites in Jordan

    Directory of Open Access Journals (Sweden)

    Abeer Albalawneh

    2015-09-01

    Full Text Available Jordan is characterized as a “water scarce” country. Therefore, conserving ecosystem services such as water regulation and soil retention is challenging. In Jordan, rainwater harvesting has been adapted to meet those challenges. However, the spatial composition and configuration features of a target landscape are rarely considered when selecting a rainwater-harvesting site. This study aimed to introduce landscape spatial features into the schemes for selecting a proper water-harvesting site. Landscape metrics analysis was used to quantify 10 metrics for three potential landscapes (i.e., Watershed 104 (WS 104), Watershed 59 (WS 59), and Watershed 108 (WS 108)) located in the Jordanian Badia region. Results of the metrics analysis showed that the three non-vegetative land cover types in the three landscapes were highly suitable for serving as rainwater harvesting sites. Furthermore, Analytic Hierarchy Process (AHP) was used to prioritize the fitness of the three target sites by comparing their landscape metrics. Results of AHP indicate that the non-vegetative land cover in the WS 104 landscape was the most suitable site for rainwater harvesting intervention, based on its dominance, connectivity, shape, and low degree of fragmentation. Our study advances the water harvesting network design by considering its landscape spatial pattern.
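
    The AHP step used to prioritize the three watersheds can be sketched as extracting the principal eigenvector of a reciprocal pairwise-comparison matrix; the comparison values below are invented for illustration, not taken from the study:

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix:
    the principal eigenvector, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    weights = np.abs(principal)
    return weights / weights.sum()

# Hypothetical comparisons among WS 104, WS 59, and WS 108.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])
print(ahp_priorities(A))  # WS 104 receives the largest weight
```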

  5. Algal Attributes: An Autecological Classification of Algal Taxa Collected by the National Water-Quality Assessment Program

    Science.gov (United States)

    Porter, Stephen D.

    2008-01-01

    Algae are excellent indicators of water-quality conditions, notably nutrient and organic enrichment, and also are indicators of major ion, dissolved oxygen, and pH concentrations and stream microhabitat conditions. The autecology, or physiological optima and tolerance, of algal species for various water-quality contaminants and conditions is relatively well understood for certain groups of freshwater algae, notably diatoms. However, applications of autecological information for water-quality assessments have been limited because of challenges associated with compiling autecological literature from disparate sources, tracking name changes for a large number of algal species, and creating an autecological database from which algal-indicator metrics can be calculated. A comprehensive summary of algal autecological attributes for North American streams and rivers does not exist. This report describes a large, digital data file containing 28,182 records for 5,939 algal taxa, generally species or variety, collected by the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program. The data file includes 37 algal attributes classified by over 100 algal-indicator codes or metrics that can be calculated easily with readily available software. Algal attributes include qualitative classifications based on European and North American autecological literature, and semi-quantitative, weighted-average regression approaches for estimating optima using regional and national NAWQA data. Applications of algal metrics in water-quality assessments are discussed and national quartile distributions of metric scores are shown for selected indicator metrics.
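
    The weighted-average regression approach mentioned for estimating optima has a simple core: a taxon's optimum is the abundance-weighted mean of the environmental variable across sites. A minimal sketch with made-up abundances and nutrient values:

```python
import numpy as np

def weighted_average_optimum(abundance, env):
    """Abundance-weighted mean of an environmental variable across
    sites: a common estimator of a taxon's environmental optimum."""
    abundance = np.asarray(abundance, dtype=float)
    env = np.asarray(env, dtype=float)
    return float((abundance * env).sum() / abundance.sum())

# Toy example: one diatom's abundance and total phosphorus at 5 sites.
print(weighted_average_optimum([2, 10, 30, 8, 1], [5, 20, 60, 90, 150]))  # ~56.5
```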

  6. Metrics to assess ecological condition, change, and impacts in sandy beach ecosystems.

    Science.gov (United States)

    Schlacher, Thomas A; Schoeman, David S; Jones, Alan R; Dugan, Jenifer E; Hubbard, David M; Defeo, Omar; Peterson, Charles H; Weston, Michael A; Maslo, Brooke; Olds, Andrew D; Scapini, Felicita; Nel, Ronel; Harris, Linda R; Lucrezi, Serena; Lastra, Mariano; Huijbers, Chantal M; Connolly, Rod M

    2014-11-01

    Complexity is increasingly the hallmark in environmental management practices of sandy shorelines. This arises primarily from meeting growing public demands (e.g., real estate, recreation) whilst reconciling economic demands with expectations of coastal users who have modern conservation ethics. Ideally, shoreline management is underpinned by empirical data, but selecting ecologically-meaningful metrics to accurately measure the condition of systems, and the ecological effects of human activities, is a complex task. Here we construct a framework for metric selection, considering six categories of issues that authorities commonly address: erosion; habitat loss; recreation; fishing; pollution (litter and chemical contaminants); and wildlife conservation. Possible metrics were scored in terms of their ability to reflect environmental change, and against criteria that are widely used for judging the performance of ecological indicators (i.e., sensitivity, practicability, costs, and public appeal). From this analysis, four types of broadly applicable metrics that also performed very well against the indicator criteria emerged: 1.) traits of bird populations and assemblages (e.g., abundance, diversity, distributions, habitat use); 2.) breeding/reproductive performance sensu lato (especially relevant for birds and turtles nesting on beaches and in dunes, but equally applicable to invertebrates and plants); 3.) population parameters and distributions of vertebrates associated primarily with dunes and the supralittoral beach zone (traditionally focused on birds and turtles, but expandable to mammals); 4.) compound measurements of the abundance/cover/biomass of biota (plants, invertebrates, vertebrates) at both the population and assemblage level. Local constraints (i.e., the absence of birds in highly degraded urban settings or lack of dunes on bluff-backed beaches) and particular issues may require alternatives. Metrics - if selected and applied correctly - provide

  7. Metric for the measurement of the quality of complex beams: a theoretical study.

    Science.gov (United States)

    Kaim, Sergiy; Lumeau, Julien; Smirnov, Vadim; Zeldovich, Boris; Glebov, Leonid

    2015-04-01

    We present a theoretical study of various definitions of laser beam width in a given cross section. Quality of the beam is characterized by dimensionless beam propagation products (BPPs) Δx·Δθx/λ, which differ among the 21 definitions presented but are close to 1. Six particular beams are studied in detail. In the process, we had to review the properties of various modifications of the Fourier transform and the relationships between them: the physical Fourier transform (PFT), the mathematical Fourier transform (MFT), and the discrete Fourier transform (DFT). We found an axially symmetric self-MFT function, which may be useful for descriptions of diffraction-quality beams. In the appendices, we illustrate the thesis "the Fourier transform lives on the singularities of the original." PMID:26366763
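
    One width definition of the kind the paper surveys is the second-moment (RMS) width; a minimal sketch computing Δx for a sampled 1-D Gaussian profile and the resulting dimensionless BPP. The divergence Δθx here is set to the diffraction-limited value for this particular convention, so the numerical result (1/(4π), not 1) only illustrates the paper's point that the BPP value depends on the chosen definitions:

```python
import numpy as np

def second_moment_width(x, intensity):
    """RMS (second-moment) half-width of a 1-D intensity profile."""
    w = intensity / intensity.sum()
    mean = (x * w).sum()
    return float(np.sqrt(((x - mean) ** 2 * w).sum()))

lam = 1.064e-6                                # wavelength, m (assumed)
x = np.linspace(-5e-3, 5e-3, 2001)            # transverse coordinate, m
sigma = 0.5e-3
intensity = np.exp(-x**2 / (2 * sigma**2))    # Gaussian near-field profile
dx = second_moment_width(x, intensity)        # ~0.5 mm by construction
dtheta = lam / (4 * np.pi * dx)               # diffraction-limited RMS divergence
print(dx, dx * dtheta / lam)                  # BPP = 1/(4*pi) ~ 0.08 for this convention
```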

  8. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark;

    2011-01-01

    This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals.

  9. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark;

    2012-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment ... This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals.

  10. Improving quality and consistency of dissertation assessment

    OpenAIRE

    Pathirage, C. P.; Amaratunga, Dilanthi; Haigh, Richard

    2005-01-01

    During the last decade, there have been increasing calls for Higher Education to improve standards, increase the quality of assessment, and for greater accountability of lecturers. It is recognised that consistency in assessment is even more important where assessment is through one large piece of work, such as a dissertation, and where the assessment outcome will have a significant impact on the final grade of students. In this context, this paper outlines the initial literatur

  11. Assessing quality across healthcare subsystems in Mexico.

    Science.gov (United States)

    Puig, Andrea; Pagán, José A; Wong, Rebeca

    2009-01-01

    Recent healthcare reform efforts in Mexico have focused on the need to improve the efficiency and equity of a fragmented healthcare system. In light of these reform initiatives, there is a need to assess whether healthcare subsystems are effective at providing high-quality healthcare to all Mexicans. Nationally representative household survey data from the 2006 Encuesta Nacional de Salud y Nutrición (National Health and Nutrition Survey) were used to assess perceived healthcare quality across different subsystems. Using a sample of 7234 survey respondents, we found evidence of substantial heterogeneity in healthcare quality assessments across healthcare subsystems favoring private providers over social security institutions. These differences across subsystems remained even after adjusting for socioeconomic, demographic, and health factors. Our analysis suggests that improvements in efficiency and equity can be achieved by assessing the factors that contribute to heterogeneity in quality across subsystems. PMID:19305224

  12. Packaged water quality and its assessment

    OpenAIRE

    Hromádko, Tomáš

    2011-01-01

    The thesis deals with the quality of bottled water and the criteria for its evaluation. The first chapter, a literature review, presents the types of bottled water, including the requirements on them, and other variants of drinking water. The next section describes the assessment of water in terms of its mineral and microbial composition and its individual components. The next chapter deals with non-traditional criteria for the assessment of water quality, which are described in detail with their connection

  13. Image quality measurements and metrics in full field digital mammography: An overview

    International Nuclear Information System (INIS)

    This paper gives an overview of test procedures developed to assess the performance of full field digital mammography systems. We make a distinction between tests of the individual components of the imaging chain and global system tests. Most tests are not yet fully standardised. Where possible, we illustrate the test methodologies on a selenium flat-panel system. (authors)

  14. MICROWAVE REMOTE SENSING IN SOIL QUALITY ASSESSMENT

    OpenAIRE

    Saha, S K

    2012-01-01

    Information on spatial and temporal variations of soil quality (soil properties) is required for various purposes of sustainable agriculture development and management. Traditionally, soil quality characterization is done by in situ point soil sampling and subsequent laboratory analysis. Such methodology has limitations for assessing the spatial variability of soil quality. Various researchers in the recent past have shown the potential utility of hyperspectral remote sensing techniques for spatial est

  15. Arbuscular mycorrhiza in soil quality assessment

    DEFF Research Database (Denmark)

    Kling, M.; Jakobsen, I.

    1998-01-01

    aggregates and to the protection of plants against drought and root pathogens. Assessment of soil quality, defined as the capacity of a soil to function within ecosystem boundaries to sustain biological productivity, maintain environmental quality, and promote plant health, should therefore include both...

  16. ON SOIL QUALITY AND ITS ASSESSMENT

    Directory of Open Access Journals (Sweden)

    N. Florea

    2007-10-01

    Full Text Available The term “soil quality” has been used up to the present with different connotations; its meaning has nowadays become more comprehensive. The most adequate definition of “soil quality” is: “the capacity of a specific kind of soil to function, within natural or managed ecosystem boundaries, to sustain plant and animal productivity, maintain or enhance water and air quality and support human health and habitation” (Karlen et al., 1998). One distinguishes a native soil quality, under natural conditions, and a meta-native soil quality, under managed conditions. Also, one can distinguish a stable side and a variable side of soil quality. It is useful to consider also the term “soilscape quality”, defined as the weighted average of the soil qualities of all the soils entering the soil cover, together with their arrangement (expressed by the pedogeographical assemblage). Soil quality can be assessed indirectly by a set of indicators. The kind and number of quality indicators depend on the evaluation scale and the objective of the assessment. New research is necessary to define soil quality more accurately and to develop its evaluation. Assessing and monitoring soil quality have global implications for the environment and society.

  17. SIMPLE QUALITY ASSESSMENT FOR BINARY IMAGES

    Institute of Scientific and Technical Information of China (English)

    Zhang Chun'e; Qiu Zhengding

    2007-01-01

    Usually image assessment methods can be classified into two categories: subjective assessments and objective ones. The latter are judged by their correlation coefficient with the subjective quality measurement MOS (Mean Opinion Score). This paper presents an objective quality assessment algorithm designed specifically for binary images. In the algorithm, noise energy is measured by the Euclidean distance between noise and signals, and the structural effects caused by noise are described by the change in Euler number. The assessment of image quality is calculated quantitatively in terms of PSNR (Peak Signal to Noise Ratio). Our experiments show that the results of the algorithm are highly correlated with subjective MOS, and that the algorithm is simpler and computationally cheaper than traditional objective assessment methods.
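
    The PSNR part of such an algorithm is straightforward for binary images, where MSE reduces to the fraction of flipped pixels; a minimal sketch of that part alone (the Euclidean-distance noise-energy and Euler-number terms of the published method are omitted):

```python
import numpy as np

def binary_psnr(reference, degraded):
    """PSNR for binary images: MSE is simply the fraction of
    mismatched pixels, and the peak signal is 1."""
    ref = np.asarray(reference, dtype=float)
    deg = np.asarray(degraded, dtype=float)
    mse = np.mean((ref - deg) ** 2)   # equals the proportion of flipped pixels
    if mse == 0:
        return float('inf')
    return float(10 * np.log10(1.0 / mse))

ref = np.zeros((8, 8), dtype=int); ref[2:6, 2:6] = 1  # toy binary image
deg = ref.copy(); deg[0, 0] = 1; deg[3, 3] = 0        # two flipped pixels
print(binary_psnr(ref, deg))  # 64 pixels, 2 flips -> MSE = 1/32 -> ~15.05 dB
```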

  18. Quality Assessment in the Primary care

    Directory of Open Access Journals (Sweden)

    Muharrem Ak

    2013-04-01

    Full Text Available -Quality Assessment in the Primary care Dear Editor; I have read the article titled “Implementation of Rogi Kalyan Samiti (RKS) at Primary Health Centre Durvesh” with great interest. Shrivastava et al. concluded that the assessment mechanism for the achievement of objectives for the suggested RKS model was not successful (1). Hereby I would like to emphasize the importance of quality assessment (QA), especially in the era of newly established primary care implementations in our country. Promotion of quality has been a fundamental part of primary care health services. Nevertheless, variations in the quality of care exist even in the developed countries. Accomplishment of quality in primary care faces barriers such as administrative and directorial factors, absence of evidence-based medicine practice, and lack of continuous medical education. Quality of health care is no doubt a multifaceted model that covers all components of health structures and processes of care. Quality in the primary care setup includes the patient-physician relationship, immunization, maternal, adolescent, adult and geriatric health care, referral, non-communicable disease management and prescribing (2). Most countries are only recently beginning the implementation of quality assessments in all walks of healthcare. Organizations like the European society for quality and safety in family practice (EQuiP) endeavor to accomplish quality by collaboration. There are reported developments and experiments related to the methodology, processes and outcomes of quality assessments of health care. Quality assessments will not only contribute to the accomplishment of the program / project but also detect the areas where obstacles exist. In order to speed up the adoption of QA and to circumvent the occurrence of mistakes, health policy makers and family physicians from different parts of the world should share their experiences. Consensus on quality in preventive medicine implementations can help to yield

  19. No-reference quality assessment based on visual perception

    Science.gov (United States)

    Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao

    2014-11-01

The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and a NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, which are two main features of the HVS: a typical HVS captures scenes by sparse coding and uses experienced knowledge to apperceive objects. In this paper, we propose a novel IQA approach based on visual perception. First, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is computed with the model; then, the mapping correlation between sparse codes and subjective quality scores is trained with the regression technique of least squares support vector machine (LS-SVM), which yields a regressor that can predict image quality; finally, the visual quality of an image is predicted with the trained regressor. We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database; the specific distortion types present in the database are: 227 images of JPEG2000, 233…
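
The pipeline described (sparse features regressed onto subjective scores) can be outlined as follows. This is a hedged sketch using scikit-learn, with kernel ridge regression standing in for LS-SVM (the two are closely related), generic dictionary learning standing in for the paper's HVS-based sparse coding, and hypothetical training arrays.

```python
# Illustrative NR-IQA pipeline: sparse-code image patches, then regress the
# pooled codes onto subjective quality scores. Kernel ridge regression is a
# stand-in for LS-SVM; train_images/train_mos are hypothetical arrays.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.kernel_ridge import KernelRidge

def sparse_features(image, dico, n_patches=200, patch_size=(8, 8), seed=0):
    patches = extract_patches_2d(image, patch_size, max_patches=n_patches,
                                 random_state=seed)
    codes = dico.transform(patches.reshape(n_patches, -1))
    return codes.mean(axis=0)            # pool patch codes into one feature vector

def train(train_images, train_mos):
    patches = np.vstack([extract_patches_2d(im, (8, 8), max_patches=200,
                                            random_state=0).reshape(200, -1)
                         for im in train_images])
    dico = DictionaryLearning(n_components=64, alpha=1.0,
                              transform_algorithm='lasso_lars').fit(patches)
    X = np.array([sparse_features(im, dico) for im in train_images])
    regressor = KernelRidge(kernel='rbf', alpha=1.0, gamma=0.1).fit(X, train_mos)
    return dico, regressor

def predict_quality(image, dico, regressor):
    return float(regressor.predict(sparse_features(image, dico)[None, :]))
```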

  20. Landscape Classifications for Landscape Metrics-based Assessment of Urban Heat Island: A Comparative Study

    International Nuclear Information System (INIS)

In recent years, some studies have been carried out on the landscape analysis of urban thermal patterns. With the prevalence of thermal landscape analysis, a key problem has come forth: how to classify the thermal landscape into thermal patches. Current research uses different methods of thermal landscape classification, such as the standard deviation method (SD) and the R method. To find out the differences, a comparative study was carried out in Xiamen using a 20-year winter time series of Landsat images. After the retrieval of land surface temperature (LST), the thermal landscape was classified using the two methods separately. Then landscape metrics, 6 at class level and 14 at landscape level, were calculated and analyzed using Fragstats 3.3. We found that: (1) at the class level, all the metrics under the SD method were evened out and did not show an obvious trend along with the process of urbanization, while the R method did; (2) at the landscape level, 6 of the 14 metrics retained similar trends, 5 differed at local turning points of the curve, and 3 differed completely in the shape of their curves; (3) when examined with visual interpretation, the SD method tended to exaggerate urban heat island effects compared to the R method.
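
The SD method referenced here is commonly implemented by slicing the LST image at multiples of its standard deviation around the mean. A minimal sketch follows; the cut points at ±0.5 and ±1.5 SD are one common convention and only an assumption, not necessarily the ones used in this study.

```python
# Classify a land-surface-temperature (LST) array into thermal patches with the
# standard-deviation (SD) slicing method. The +/-0.5 and +/-1.5 SD cut points
# are a common convention and only an assumption here.
import numpy as np

def sd_classify(lst):
    mu, sd = np.nanmean(lst), np.nanstd(lst)
    bins = [mu - 1.5 * sd, mu - 0.5 * sd, mu + 0.5 * sd, mu + 1.5 * sd]
    # Classes 0..4: cold, cool, medium, warm, hot.
    return np.digitize(lst, bins)
```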

  1. Structural similarity analysis for brain MR image quality assessment

    Science.gov (United States)

    Punga, Mirela Visan; Moldovanu, Simona; Moraru, Luminita

    2014-11-01

Brain MR images are affected and distorted by various artifacts such as noise, blur, blotching, downsampling or compression, as well as by inhomogeneity. Usually, the performance of a pre-processing operation is quantified by using quality metrics such as the mean squared error and its related metrics: peak signal to noise ratio, root mean squared error and signal to noise ratio. The main drawback of these metrics is that they fail to take the structural fidelity of the image into account. For this reason, we investigated the structural changes related to luminance and contrast variation (as non-structural distortions) and to the denoising process (as structural distortion) through an alternative metric based on structural changes, in order to obtain the best image quality.
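
The contrast between MSE-family metrics and a structure-aware metric is easy to demonstrate. A short sketch with scikit-image follows; the input arrays are placeholders for a reference slice and its distorted version.

```python
# Compare an MSE-family score (PSNR) with a structure-aware score (SSIM)
# for a reference brain MR slice and a distorted version of it. The input
# arrays are hypothetical placeholders.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare_metrics(reference, distorted):
    rng = reference.max() - reference.min()
    psnr = peak_signal_noise_ratio(reference, distorted, data_range=rng)
    ssim = structural_similarity(reference, distorted, data_range=rng)
    # Two images with similar PSNR can have very different SSIM values:
    # a global luminance shift barely changes SSIM, denoising blur lowers it.
    return psnr, ssim
```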

  2. Quantitative statistical methods for image quality assessment.

    Science.gov (United States)

    Dutta, Joyita; Ahn, Sangtae; Li, Quanzheng

    2013-01-01

    Quantitative measures of image quality and reliability are critical for both qualitative interpretation and quantitative analysis of medical images. While, in theory, it is possible to analyze reconstructed images by means of Monte Carlo simulations using a large number of noise realizations, the associated computational burden makes this approach impractical. Additionally, this approach is less meaningful in clinical scenarios, where multiple noise realizations are generally unavailable. The practical alternative is to compute closed-form analytical expressions for image quality measures. The objective of this paper is to review statistical analysis techniques that enable us to compute two key metrics: resolution (determined from the local impulse response) and covariance. The underlying methods include fixed-point approaches, which compute these metrics at a fixed point (the unique and stable solution) independent of the iterative algorithm employed, and iteration-based approaches, which yield results that are dependent on the algorithm, initialization, and number of iterations. We also explore extensions of some of these methods to a range of special contexts, including dynamic and motion-compensated image reconstruction. While most of the discussed techniques were developed for emission tomography, the general methods are extensible to other imaging modalities as well. In addition to enabling image characterization, these analysis techniques allow us to control and enhance imaging system performance. We review practical applications where performance improvement is achieved by applying these ideas to the contexts of both hardware (optimizing scanner design) and image reconstruction (designing regularization functions that produce uniform resolution or maximize task-specific figures of merit). PMID:24312148

  3. Comment: Assessment of scientific quality is complicated

    NARCIS (Netherlands)

    T. Opthof; A.A.M. Wilde

    2009-01-01

In their letter 'Assessing scientific quality in a multidisciplinary academic medical centre', Van Kammen, Van Lier and Gunning-Schepers respond to our paper on the assessment of the H-index amongst 28 professors in clinical cardiology appointed at the eight university medical centres in the Netherlands.

  4. Metrics, Dose, and Dose Concept: The Need for a Proper Dose Concept in the Risk Assessment of Nanoparticles

    Directory of Open Access Journals (Sweden)

    Myrtill Simkó

    2014-04-01

In order to calculate the dose for nanoparticles (NP), (i) relevant information about the dose metrics and (ii) a proper dose concept are crucial. Since the appropriate metrics for NP toxicity are yet to be elaborated, a general dose calculation model for nanomaterials is not available. Here we propose how to develop a dose assessment model for NP in analogy to the radiation protection dose calculation, introducing the so-called “deposited dose” and “equivalent dose”. As a dose metric we propose the total deposited NP surface area (SA), which has frequently been shown to determine toxicological responses, e.g. of lung tissue. The deposited NP dose is proportional to the total surface area of deposited NP per tissue mass, and takes into account primary and agglomerated NP. By using several weighting factors, the equivalent dose additionally takes into account various physico-chemical properties of the NP which influence the biological responses. These weighting factors consider the specific surface area, the surface texture, the zeta-potential as a measure of surface charge, the particle morphology such as the shape and the length-to-diameter ratio (aspect ratio), the band gap energy levels of metal and metal oxide NP, and the particle dissolution rate. Furthermore, we discuss how these weighting factors influence the equivalent dose of the deposited NP.
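
The proposed scheme maps onto a small calculation: deposited dose as total deposited particle surface area per tissue mass, and equivalent dose as the deposited dose modified by the weighting factors. In the sketch below the multiplicative combination of the factors and all numeric values are assumptions for illustration; only the structure of the calculation comes from the text.

```python
# Deposited and equivalent dose for nanoparticles, following the analogy with
# radiation protection described above. The multiplicative combination and all
# numeric weighting-factor values are hypothetical illustrations.
def deposited_dose(total_particle_surface_area_cm2, tissue_mass_g):
    """Deposited dose in cm^2 of NP surface area per gram of tissue."""
    return total_particle_surface_area_cm2 / tissue_mass_g

def equivalent_dose(dep_dose, weights):
    """Equivalent dose = deposited dose scaled by the weighting factors."""
    w = 1.0
    for factor in weights.values():
        w *= factor
    return dep_dose * w

weights = {                      # hypothetical values for one NP type
    "specific_surface_area": 1.2,
    "surface_texture": 1.0,
    "zeta_potential": 1.1,
    "morphology_aspect_ratio": 1.5,
    "band_gap": 1.0,
    "dissolution_rate": 0.8,
}
print(equivalent_dose(deposited_dose(2.0e-2, 0.5), weights))
```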

  5. Retinal image quality assessment through a visual similarity index

    OpenAIRE

    Pérez Rodríguez, Jorge; Espinosa Tomás, Julián; Vázquez Ferri, Carmen; Mas Candela, David

    2013-01-01

    Retinal image quality is commonly analyzed through parameters inherited from instrumental optics. These parameters are defined for ‘good optics’ so they are hard to translate into visual quality metrics. Instead of using point or artificial functions, we propose a quality index that takes into account properties of natural images. These images usually show strong local correlations that help to interpret the image. Our aim is to derive an objective index that quantifies the quality of vision ...

  6. When can we measure stress noninvasively? Postdeposition effects on a fecal stress metric confound a multiregional assessment.

    Science.gov (United States)

    Wilkening, Jennifer L; Ray, Chris; Varner, Johanna

    2016-01-01

    Measurement of stress hormone metabolites in fecal samples has become a common method to assess physiological stress in wildlife populations. Glucocorticoid metabolite (GCM) measurements can be collected noninvasively, and studies relating this stress metric to anthropogenic disturbance are increasing. However, environmental characteristics (e.g., temperature) can alter measured GCM concentration when fecal samples cannot be collected immediately after defecation. This effect can confound efforts to separate environmental factors causing predeposition physiological stress in an individual from those acting on a fecal sample postdeposition. We used fecal samples from American pikas (Ochotona princeps) to examine the influence of environmental conditions on GCM concentration by (1) comparing GCM concentration measured in freshly collected control samples to those placed in natural habitats for timed exposure, and (2) relating GCM concentration in samples collected noninvasively throughout the western United States to local environmental characteristics measured before and after deposition. Our timed-exposure trials clarified the spatial scale at which exposure to environmental factors postdeposition influences GCM concentration in pika feces. Also, fecal samples collected from occupied pika habitats throughout the species' range revealed significant relationships between GCM and metrics of climate during the postdeposition period (maximum temperature, minimum temperature, and precipitation during the month of sample collection). Conversely, we found no such relationships between GCM and metrics of climate during the predeposition period (prior to the month of sample collection). Together, these results indicate that noninvasive measurement of physiological stress in pikas across the western US may be confounded by climatic conditions in the postdeposition environment when samples cannot be collected immediately after defecation. Our results reiterate the importance

  7. Measuring Research Quality Using the Journal Impact Factor, Citations and "Ranked Journals": Blunt Instruments or Inspired Metrics?

    Science.gov (United States)

    Jarwal, Som D.; Brion, Andrew M.; King, Maxwell L.

    2009-01-01

    This paper examines whether three bibliometric indicators--the journal impact factor, citations per paper and the Excellence in Research for Australia (ERA) initiative's list of "ranked journals"--can predict the quality of individual research articles as assessed by international experts, both overall and within broad disciplinary groupings. The…

  8. [Radiological assessment of bone quality].

    Science.gov (United States)

    Ito, Masako

    2016-01-01

Structural properties of bone include the micro- or nano-structural properties of trabecular and cortical bone, and macroscopic geometry. Radiological techniques are useful for analyzing bone structural properties: micro-CT or synchrotron-CT is available to analyze micro- or nano-structural properties of bone samples ex vivo, and multi-detector row CT (MDCT) or high-resolution peripheral QCT (HR-pQCT) is available to analyze human bone in vivo. For the analysis of hip geometry, CT-based hip structure analysis (HSA) is available, as well as radiography and DXA-based HSA. These structural parameters are related to biomechanical properties, and these assessment tools provide information on pathological changes or the effects of anti-osteoporotic agents on bone. PMID:26728530

  9. MICROWAVE REMOTE SENSING IN SOIL QUALITY ASSESSMENT

    Directory of Open Access Journals (Sweden)

    S. K. Saha

    2012-08-01

Information on the spatial and temporal variations of soil quality (soil properties) is required for various purposes in sustainable agriculture development and management. Traditionally, soil quality characterization is done by in situ point soil sampling and subsequent laboratory analysis. Such methodology has limitations for assessing the spatial variability of soil quality. Various researchers in the recent past have shown the potential utility of hyperspectral remote sensing techniques for spatial estimation of soil properties. However, limited research has been carried out showing the potential of microwave remote sensing data for spatial estimation of soil properties other than soil moisture. This paper reviews the status of microwave remote sensing techniques (active and passive) for spatial assessment of soil quality parameters such as soil salinity, soil erosion, soil physical properties (soil texture and hydraulic properties), drainage condition, and soil surface roughness. Past and recent research studies show that both active and passive microwave remote sensing techniques have great potential for the assessment of these soil qualities (soil properties). However, more research on the use of multi-frequency and fully polarimetric microwave remote sensing data, and on modelling the interaction of such data with soil, is much needed for the operational use of satellite microwave remote sensing in soil quality assessment.

  10. Assessing product image quality for online shopping

    Science.gov (United States)

    Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq

    2012-01-01

Assessing product-image quality is important in the context of online shopping. A high quality image that conveys more information about a product can boost the buyer's confidence and get more attention. However, the notion of image quality for product images is not the same as in other domains. The perceived quality of product images depends not only on various photographic quality features but also on various high-level features such as clarity of the foreground or goodness of the background. In this paper, we define a notion of product-image quality based on such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay's images. We formulate a multi-class classification problem for modeling image quality by classifying images into good, fair and poor quality based on the guided perceptual notions from the judges. We also conduct experiments with regression using average crowd-sourced human judgments as the target. We compute a pseudo-regression score as the expected average of predicted classes and also compute a score from the regression technique. We design many experiments with various sampling and voting schemes on the crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracies (greater than or equal to 70%) on the test data set. We observe that our computed image quality score has a high (0.66) rank correlation with average votes from the crowd-sourced human judgments.
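
The "pseudo-regression score", the expected average of predicted classes, can be computed from a classifier's class probabilities. A sketch with scikit-learn follows; the class values 1/2/3 for poor/fair/good and the feature matrix are assumptions.

```python
# Pseudo-regression quality score: expectation of the predicted class value
# under the classifier's probability estimates. Class values (poor=1, fair=2,
# good=3) and the feature matrix X are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CLASS_VALUES = {"poor": 1.0, "fair": 2.0, "good": 3.0}

def fit_quality_model(X, labels):
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)

def pseudo_regression_score(model, X):
    proba = model.predict_proba(X)                 # shape (n_images, n_classes)
    values = np.array([CLASS_VALUES[c] for c in model.classes_])
    return proba @ values                          # expected class value per image
```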

11. Quality assessment in meta-analysis

    Directory of Open Access Journals (Sweden)

    Giuseppe La Torre

    2006-06-01

Background: An important characteristic of meta-analysis is that the results are determined both by the management of the meta-analysis process and by the features of the studies included. The scientific rigor of potential primary studies varies considerably, and a common objection to meta-analytic summaries is that they combine results from studies of different quality. Researchers began by developing quality scales for experimental studies; however, the interest of researchers is now also focusing on observational studies. Since 1980, when Chalmers developed the first quality scale to assess primary studies included in meta-analysis, more than 100 scales have been developed, which vary dramatically in the quality and quantity of the items included. No standard lists of items exist, and the quality scales in use lack empirically supported components.

    Methods: Two of the most important and diffuse quality scales for experimental studies, Jadad system and Chalmers’ scale, and a quality scale used for observational studies, developed by Angelillo et al., are described and compared.

Conclusion: The fallibility of meta-analysis is not surprising, considering the various biases that may be introduced by the processes of locating and selecting studies, including publication bias, language bias and citation bias. Quality assessment of the studies offers an estimate of the likelihood that their results will express the truth.

  12. Association of Landscape Metrics to Surface Water Biology in the Savannah River Basin

    OpenAIRE

    Nash, Maliha S.; Deborah J. Chaloud; Susan E. Franson

    2005-01-01

Surface water quality for the Savannah River basin was assessed using water biology and landscape metrics. Two multivariate analyses, partial least squares and canonical correlation, were used to describe how structural variation in landscape metrics may affect surface water biology and to define the key landscape variable(s) that contribute the most to variation in surface water quality. The results showed that the key landscape metrics in this study area were: percent...

  13. Health outcomes in diabetics measured with Minnesota Community Measurement quality metrics

    Directory of Open Access Journals (Sweden)

    Takahashi PY

    2014-12-01

Paul Y Takahashi,1 Jennifer L St Sauver,2 Lila J Finney Rutten,2 Robert M Jacobson,3 Debra J Jacobson,2 Michaela E McGree,2 Jon O Ebbert1 1Department of Internal Medicine, Division of Primary Care Internal Medicine, 2Department of Health Sciences Research, Mayo Clinic Robert D and Patricia E Kern Center for the Science of Health Care Delivery, 3Department of Pediatric and Adolescent Medicine, Division of Community Pediatrics, Mayo Clinic, Rochester, MN, USA. Objective: Our objective was to understand the relationship between optimal diabetes control, as defined by Minnesota Community Measurement (MCM), and adverse health outcomes including emergency department (ED) visits, hospitalizations, 30-day rehospitalization, intensive care unit (ICU) stay, and mortality. Patients and methods: In 2009, we conducted a retrospective cohort study of empaneled Employee and Community Health patients with diabetes mellitus. We followed patients from 1 September 2009 until 30 June 2011 for hospitalization and until 5 January 2014 for mortality. Optimal control of diabetes mellitus was defined as achieving the following three measures: low-density lipoprotein (LDL) cholesterol <100 mg/mL, blood pressure <140/90 mmHg, and hemoglobin A1c <8%. Using the electronic medical record, we assessed hospitalizations, ED visits, ICU stays, 30-day rehospitalizations, and mortality. The chi-square or Wilcoxon rank-sum tests were used to compare those with and without optimal control. We used Cox proportional hazard models to estimate the associations between optimal diabetes mellitus status and each outcome. Results: We identified 5,731 empaneled patients with diabetes mellitus; 2,842 (49.6%) were in the optimal control category. After adjustment, we observed that non-optimally controlled patients had higher risks for hospitalization (hazard ratio [HR] 1.11; 95% confidence interval [CI] 1.00–1.23), ED visits (HR 1.15; 95% CI 1.06–1.25), and mortality (HR 1.29; 95% CI 1.09–1…
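
The modelling step described, Cox proportional hazards for time to hospitalization, ED visit, or death by control status, would look roughly like this with the lifelines package. The dataframe and its column names are hypothetical; this is a sketch of the method named in the abstract, not the authors' code.

```python
# Cox proportional hazards model relating optimal diabetes control to a
# time-to-event outcome, as described above. Column names are hypothetical;
# `df` holds one row per patient. Requires the lifelines package.
import numpy as np
from lifelines import CoxPHFitter

def hazard_ratio_for_control(df, duration_col="days_to_hospitalization",
                             event_col="hospitalized"):
    cph = CoxPHFitter()
    # Covariates: non_optimal (1 = LDL/BP/HbA1c targets not all met), age, sex.
    cph.fit(df[["non_optimal", "age", "sex", duration_col, event_col]],
            duration_col=duration_col, event_col=event_col)
    return np.exp(cph.params_["non_optimal"])   # hazard ratio for non-optimal control
```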

  14. SOFTWARE METRICS VALIDATION METHODOLOGIES IN SOFTWARE ENGINEERING

    Directory of Open Access Journals (Sweden)

    K.P. Srinivasan

    2014-12-01

In software measurement validation, assessing the validity of software metrics in software engineering is a very difficult task due to the lack of theoretical and empirical methodologies [41, 44, 45]. During recent years, a number of researchers have addressed the issue of validating software metrics. At present, software metrics are validated theoretically using properties of measures. Further, software measurement plays an important role in understanding and controlling software development practices and products. The major requirement in software measurement is that the measures must accurately represent those attributes they purport to quantify, and validation is critical to the success of software measurement. Normally, validation is a collection of analysis and testing activities across the full life cycle that complements the efforts of other quality engineering functions; it is a critical task in any engineering project. Its objective is to discover defects in a system and to assess whether or not the system is useful and usable in an operational situation. In the case of software engineering, validation is one of the disciplines that help build quality into software. The major objective of the software validation process is to determine that the software performs its intended functions correctly and to provide information about its quality and reliability. This paper discusses the validation methodologies, techniques and different properties of measures that are used for software metrics validation. In most cases, theoretical and empirical validations are conducted for software metrics validation in software engineering [1-50].

  15. User-Perceived Quality Assessment for VoIP Applications

    CERN Document Server

    Beuran, R; CERN. Geneva

    2004-01-01

We designed and implemented a system that permits the measurement of network Quality of Service (QoS) parameters. This system allows us to objectively evaluate the requirements of network applications for delivering user-acceptable quality. To do this we accurately compute the network QoS parameters: one-way delay, jitter, packet loss and throughput. The measurement system makes use of a global clock to synchronise the time measurements at different points in the network. To study the behaviour of real network applications, specific metrics must be defined in order to assess the user-perceived quality (UPQ) of each application. Since we measure network QoS and application UPQ simultaneously, we are able to correlate them. Determining application requirements has two main uses: (i) to predict the expected UPQ for an application running over a given network (based on the corresponding measured QoS parameters) and to understand the causes of application failure; (ii) to design/configure networks that provide the ne...
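
Of the QoS parameters listed, jitter is the one most often miscomputed. Below is a sketch of the standard RFC 3550 running-jitter estimator alongside simple loss and one-way delay calculations; synchronized send/receive timestamps are assumed, as the global clock mentioned above would provide.

```python
# One-way delay, packet loss and RFC 3550 interarrival jitter from paired
# send/receive timestamps (seconds). Assumes synchronized clocks, as with the
# global clock described in the text; lost packets have receive time None.
def qos_parameters(send_times, recv_times):
    received = [(s, r) for s, r in zip(send_times, recv_times) if r is not None]
    loss = 1.0 - len(received) / len(send_times)
    delays = [r - s for s, r in received]
    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        d = abs(cur - prev)              # transit-time difference |D(i-1, i)|
        jitter += (d - jitter) / 16.0    # RFC 3550 smoothed estimator
    return {"mean_delay": sum(delays) / len(delays), "loss": loss, "jitter": jitter}
```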

  16. Visualization and quality assessment of the contrast transfer function estimation.

    Science.gov (United States)

    Sheth, Lisa K; Piotrowski, Angela L; Voss, Neil R

    2015-11-01

The contrast transfer function (CTF) describes an undesirable distortion of image data from a transmission electron microscope. Many users of full-featured processing packages are new to electron microscopy and are unfamiliar with the CTF concept. Here we present a common graphical output to clearly demonstrate the CTF fit quality independent of the estimation software. Separately, many software programs exist to estimate the four CTF parameters, but their results are difficult to compare across multiple runs and it is all but impossible to select the best parameters to use for further processing. A new measurement is presented, based on the correlation falloff of the calculated CTF oscillations against the normalized oscillating signal of the data, called the CTF resolution. It was devised to provide a robust numerical quality metric for every CTF estimation, both for high-throughput screening of micrographs and for selecting the best parameters for each micrograph. These new CTF visualizations and quantitative measures will help users better assess the quality of their CTF parameters and provide a mechanism for choosing the best CTF tool for their data. PMID:26080023
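
The "CTF resolution" idea, the spatial frequency at which the fitted CTF oscillations stop tracking the data, can be illustrated with a windowed correlation. This is a hedged reconstruction of the concept, not the authors' code; the window size and the 0.5 cutoff are assumptions.

```python
# Illustration of a CTF-resolution-style measure: slide a window over spatial
# frequency, correlate the fitted CTF oscillations with the normalized signal,
# and report the highest frequency before the correlation falls below a cutoff.
# Window length and the 0.5 cutoff are assumptions, not the published values.
import numpy as np

def ctf_resolution(freqs, fitted_ctf, normalized_signal, window=32, cutoff=0.5):
    last_good = freqs[0]
    for start in range(0, len(freqs) - window, window // 2):
        sl = slice(start, start + window)
        r = np.corrcoef(fitted_ctf[sl], normalized_signal[sl])[0, 1]
        if r < cutoff:
            break
        last_good = freqs[start + window - 1]
    return last_good     # highest spatial frequency with an acceptable CTF fit
```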

  17. Data Matching, Integration, and Interoperability for a Metric Assessment of Monographs

    DEFF Research Database (Denmark)

    Zuccala, Alesia Ann; Cornacchia, Roberto

    2016-01-01

… new Microsoft SQL database. The purpose of the experiment was to investigate co-varied metrics for a list of book titles based on their citation impact (from Scopus), presence in international libraries (WorldCat.org) and visibility as publicly reviewed items (Goodreads). The results of our data experiment highlighted current problems related to citation indices and the way that books are recorded by different citing authors. Our research further demonstrates the primary problem of matching book titles as ‘cited objects’ with book titles held in a union library catalog, given that books are always recorded distinctly in libraries if published as separate editions with different International Standard Book Numbers (ISBNs). Due to various ‘matching’ problems related to the ISBN, we suggest a new type of identifier, a ‘Book Object Identifier’, which would allow bibliometricians to recognize a book…

  18. A Methodology for Software Design Quality Assessment of Design Enhancements

    OpenAIRE

    Sahar Reda; Hany Ammar; Osman Hegazy

    2012-01-01

The most important measure that must be considered in any software product is its design quality. Measuring the design quality in the early stages of software development is the key to developing and enhancing quality software. Research on object-oriented design metrics has produced a large number of metrics that can be measured to identify design problems and assess design quality attributes. However, the use of these design metrics is limited in practice due to the difficulty of measuring and using a ...

  19. Dental metric assessment of the omo fossils: implications for the phylogenetic position of Australopithecus africanus.

    Science.gov (United States)

    Hunt, K; Vitzthum, V J

    1986-10-01

The discovery of Australopithecus afarensis has led to new interpretations of hominid phylogeny, some of which reject A. africanus as an ancestor of Homo. Analysis of buccolingual tooth crown dimensions in australopithecines and Homo species by Johanson and White (Science 202:321-330, 1979) revealed that the South African gracile australopithecines are intermediate in size between the Laetoli/Hadar hominids and the South African robust hominids. Homo, on the other hand, displays dimensions similar to those of A. afarensis and smaller than those of other australopithecines. These authors conclude, therefore, that A. africanus is derived in the direction of A. robustus and is not an ancestor of the Homo clade. However, there is a considerable time gap (ca. 800,000 years) between the Laetoli/Hadar specimens and the earliest Homo specimens; "gracile" hominids from Omo fit into this chronological gap and are from the same geographic area. Because the early specimens at Omo have been designated A. afarensis and the later specimens classified as Homo habilis, Omo offers a unique opportunity to test hypotheses concerning hominid evolution, especially regarding the phylogenetic status of A. africanus. Comparisons of mean cheek tooth breadths disclosed significant (P ≤ 0.05) differences between the Omo sample and the Laetoli/Hadar fossils (P4, M2, and M3), the Homo fossils (P3, P4, M1, M2, and M1), and A. africanus (M3). Of the several possible interpretations of these data, it appears that the high degree of similarity between the Omo sample and the South African gracile australopithecine material warrants considering the two as geographical variants of A. africanus. The geographic, chronologic, and metric attributes of the Omo sample argue for its lineal affinity with A. afarensis and Homo. In conclusion, a consideration of hominid postcanine dental metrics provides no basis for removing A. africanus from the ancestry of the Homo lineage. PMID:3099582

  20. Objective assessment of MPEG-2 video quality

    Science.gov (United States)

    Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano

    2002-07-01

The increasing use of video compression standards in broadcast television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption about the complexity of the model. The neural network processes an instantaneous set of input values and yields an associated estimate of perceived quality. Therefore, the neural-network approach turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, since avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations of actual scoring curves for real test videos.
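
The regression step, mapping objective features extracted from the compressed stream to a perceived-quality score, looks like this in outline. Scikit-learn's ordinary MLP is used as a stand-in for the paper's circular back-propagation network, and the feature matrix and MOS targets are hypothetical.

```python
# Learn a mapping from objective stream features (e.g., bitrate, blockiness,
# motion statistics) to subjective quality scores. An ordinary MLP stands in
# for the circular back-propagation network of the paper; X and mos are
# hypothetical training data.
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_quality_estimator(X, mos):
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    )
    return model.fit(X, mos)   # model.predict(features) -> estimated quality
```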

  1. Soil quality assessment under emerging regulatory requirements.

    Science.gov (United States)

    Bone, James; Head, Martin; Barraclough, Declan; Archer, Michael; Scheib, Catherine; Flight, Dee; Voulvoulis, Nikolaos

    2010-08-01

New and emerging policies that aim to set standards for the protection and sustainable use of soil are likely to require identification of geographical risk/priority areas. Soil degradation can be seen as the change or disturbance in soil quality, and it is therefore crucial that soil and soil quality are well understood in order to protect soils and to meet legislative requirements. To increase this understanding, a review of the soil quality definition evaluated its development, with a formal scientific approach to assessment beginning in the 1970s, followed by a period of discussion and refinement. A number of reservations about soil quality assessment expressed in the literature are summarised. Taking these concerns into account, a definition of soil quality incorporating soil's ability to meet multifunctional requirements, to provide ecosystem services, and the potential for soils to affect other environmental media is described. Assessment using this definition requires a large number of soil-function-dependent indicators that can be expensive, laborious, prone to error, and problematic to compare. The findings demonstrate the need for a method that is not function dependent but uses a number of cross-functional indicators instead. Such a method, which systematically prioritises areas where detailed investigation is required using a ranking against a desired level of action, could be relatively quick, easy and cost effective. As such, it has the potential to fill gaps in and complement existing monitoring programs and to assist in the development and implementation of current and future soil protection legislation. PMID:20483160

  2. An assessment model for quality management

    Science.gov (United States)

    Völcker, Chr.; Cass, A.; Dorling, A.; Zilioli, P.; Secchi, P.

    2002-07-01

SYNSPACE, together with InterSPICE and Alenia Spazio, is developing an assessment method to determine the capability of an organisation in the area of quality management. The method, sponsored by the European Space Agency (ESA), is called S9kS (SPiCE-9000 for SPACE). S9kS is based on ISO 9001:2000 with additions from the quality standards issued by the European Committee for Space Standardization (ECSS) and ISO 15504 - Process Assessments. The result is a reference model that supports the expansion of the generic process assessment framework provided by ISO 15504 to non-software areas. In order to be compliant with ISO 15504, requirements from ISO 9001 and ECSS-Q-20 and Q-20-09 have been turned into process definitions in terms of Purpose and Outcomes, supported by a list of detailed indicators such as Practices, Work Products and Work Product Characteristics. In coordination with this project, the capability dimension of ISO 15504 has been revised to be consistent with ISO 9001. As the contributions from ISO 9001 and the space quality assurance standards are separable, the stripped-down version S9k offers organisations in all industries an assessment model based solely on ISO 9001, and is therefore of interest to all organisations that intend to improve their quality management system based on ISO 9001.

  3. The biological basis for environmental quality assessments

    International Nuclear Information System (INIS)

A systematic approach to environmental quality assessments of the Baltic regions is required in order to address the problem of pollution abatement. The proposed systematization of adaptive states stems from the general theory of adaptation. The various types of adaptation are described. (AB)

  4. Water quality issues and energy assessments

    Energy Technology Data Exchange (ETDEWEB)

    Davis, M.J.; Chiu, S.

    1980-11-01

This report identifies and evaluates the significant water quality issues related to regional and national energy development. In addition, it recommends improvements in the Office's assessment capability. The handbook-style formatting, which includes a system of cross-references and prioritization, is designed to help the reader use the material.

  5. Quality assessment of aluminized steel tubes

    OpenAIRE

    K. Żaba

    2010-01-01

The results of quality assessments of welded steel tubes with an Al-Si coating, intended for automotive applications, are presented in the paper. Measurements of mechanical properties, tube diameters and thickness, and internal flash heights, as well as an alternative assessment of the weld quality, were performed. The obtained results are presented by means of tools available in the Statistica program and macroscopic observations.

  6. Quality assessment: A performance-based approach to assessments

    International Nuclear Information System (INIS)

    Revision C to US Department of Energy (DOE) Order 5700.6 (6C) ''Quality Assurance'' (QA) brings significant changes to the conduct of QA. The Westinghouse government-owned, contractor-operated (GOCO) sites have updated their quality assurance programs to the requirements and guidance of 6C, and are currently implementing necessary changes. In late 1992, a Westinghouse GOCO team led by the Waste Isolation Division (WID) conducted what is believed to be the first assessment of implementation of a quality assurance program founded on 6C

  7. Assessing uncertainty in stormwater quality modelling.

    Science.gov (United States)

    Wijesiri, Buddhi; Egodawatta, Prasanna; McGree, James; Goonetilleke, Ashantha

    2016-10-15

    Designing effective stormwater pollution mitigation strategies is a challenge in urban stormwater management. This is primarily due to the limited reliability of catchment scale stormwater quality modelling tools. As such, assessing the uncertainty associated with the information generated by stormwater quality models is important for informed decision making. Quantitative assessment of build-up and wash-off process uncertainty, which arises from the variability associated with these processes, is a major concern as typical uncertainty assessment approaches do not adequately account for process uncertainty. The research study undertaken found that the variability of build-up and wash-off processes for different particle size ranges leads to processes uncertainty. After variability and resulting process uncertainties are accurately characterised, they can be incorporated into catchment stormwater quality predictions. Accounting of process uncertainty influences the uncertainty limits associated with predicted stormwater quality. The impact of build-up process uncertainty on stormwater quality predictions is greater than that of wash-off process uncertainty. Accordingly, decision making should facilitate the designing of mitigation strategies which specifically addresses variations in load and composition of pollutants accumulated during dry weather periods. Moreover, the study outcomes found that the influence of process uncertainty is different for stormwater quality predictions corresponding to storm events with different intensity, duration and runoff volume generated. These storm events were also found to be significantly different in terms of the Runoff-Catchment Area ratio. As such, the selection of storm events in the context of designing stormwater pollution mitigation strategies needs to take into consideration not only the storm event characteristics, but also the influence of process uncertainty on stormwater quality predictions. PMID:27423532

8. Can we go beyond burned area assessment with fire patch metrics from global remote sensing?

    Science.gov (United States)

    Nogueira Pereira Messias, Joana; Ruffault, Julien; Chuvieco, Emilio; Mouillot, Florent

    2016-04-01

Fire is a major event influencing global biogeochemical cycles and contributes to the emissions of CO2 and other greenhouse gases to the atmosphere. Global burned area (BA) datasets from remote sensing have provided fruitful information for quantifying carbon emissions in global biogeochemical models and for DGVM benchmarking. Patch-level analysis of pixel-level information recently emerged as an informative additional feature of the fire regime, such as the fire size distribution. The aim of this study is to evaluate the ability of global BA products to accurately represent characteristics of fire patches (size, shape complexity and spatial orientation). We selected a site in the Brazilian savannas (Cerrado), one of the most fire-prone biomes and one of the validation test sites for the ESA fire-Cci project. We used the pixel-level burned area detected by Landsat, MCD45A1 and the newly delivered MERIS ESA fire-Cci product for the period 2002-2009. A flood-fill algorithm adapted from Archibald and Roy (2009) was used to identify individual fire patches (patch ID) according to the burned date (BD). For each patch ID, we calculated a panel of patch metrics: area, perimeter and core area; shape complexity (shape index and fractal dimension); and the features of the ellipse fitted over the spatial distribution of pixels composing the patch (eccentricity and direction of the main axis). Paired fire patches overlapping between BA products were compared. The correlations between patch metrics were evaluated by linear regression models for each inter-product comparison according to fire size class. Our results showed significant patch overlaps (>30%) between products for patches with areas larger than 270 ha, with more than 90% of patches overlapping between MERIS and MCD45A1. Fire patch metric correlations showed R2>0.6 for all comparisons of patch area and core area, with a slope of 0.99 between MERIS and MCD45A1, illustrating the agreement between the two global products. The…
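
The patch-identification step is essentially connected-component labeling of burned pixels, optionally constrained by burn date. A minimal sketch with SciPy follows; grouping by date proximity (the Archibald and Roy rules) is simplified here to plain 8-connectivity, and the 0.09 ha pixel area assumes 30 m Landsat pixels.

```python
# Identify fire patches from a burned-date (BD) raster: pixels are burned where
# BD > 0, and 8-connected burned pixels are grouped into patches. The date-
# tolerance rules of Archibald & Roy are simplified to spatial connectivity;
# per-patch areas follow.
import numpy as np
from scipy import ndimage

def fire_patches(burn_date, pixel_area_ha=0.09):   # 30 m Landsat pixel ~ 0.09 ha
    burned = burn_date > 0
    labels, n_patches = ndimage.label(burned, structure=np.ones((3, 3)))
    areas = ndimage.sum(burned, labels, index=range(1, n_patches + 1))
    return labels, np.asarray(areas) * pixel_area_ha   # patch raster, areas in ha

# Example: patch IDs larger than 270 ha, the overlap threshold reported above.
# labels, areas_ha = fire_patches(bd_raster)
# large_ids = np.flatnonzero(areas_ha > 270) + 1
```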

  9. Quality Assessment of Urinary Stone Analysis

    DEFF Research Database (Denmark)

    Siener, Roswitha; Buchholz, Noor; Daudon, Michel;

    2016-01-01

The aim of the present study was to assess the quality of urinary stone analysis of laboratories in Europe. Nine laboratories from eight European countries participated in six quality control surveys for urinary calculi analyses of the Reference Institute for Bioanalytics, Bonn, Germany, between 2010 and 2014. Each participant received the same blinded test samples for stone analysis. A total of 24 samples, comprising pure substances and mixtures of two or three components, were analysed. The evaluation of the quality of the laboratory in the present study was based on the … fulfilled the quality requirements. According to the current standard, chemical analysis is considered to be insufficient for stone analysis, whereas infrared spectroscopy or X-ray diffraction is mandatory. However, the poor results of infrared spectroscopy highlight the importance of equipment, reference … and chemical analysis.

  10. Metrical Quantization

    CERN Document Server

    Klauder, J R

    1998-01-01

    Canonical quantization may be approached from several different starting points. The usual approaches involve promotion of c-numbers to q-numbers, or path integral constructs, each of which generally succeeds only in Cartesian coordinates. All quantization schemes that lead to Hilbert space vectors and Weyl operators---even those that eschew Cartesian coordinates---implicitly contain a metric on a flat phase space. This feature is demonstrated by studying the classical and quantum ``aggregations'', namely, the set of all facts and properties resident in all classical and quantum theories, respectively. Metrical quantization is an approach that elevates the flat phase space metric inherent in any canonical quantization to the level of a postulate. Far from being an unwanted structure, the flat phase space metric carries essential physical information. It is shown how the metric, when employed within a continuous-time regularization scheme, gives rise to an unambiguous quantization procedure that automatically ...

  11. Air Quality Assessment Using Interpolation Technique

    Directory of Open Access Journals (Sweden)

    Awkash Kumar

    2016-07-01

Air pollution is increasing rapidly in almost all cities around the world due to the increase in population. Mumbai in India is one of the mega cities where air quality is deteriorating at a very rapid rate. Air quality monitoring stations have been installed in the city to support air pollution control strategies that reduce pollution levels. In this paper, an air quality assessment has been carried out over the sample region using interpolation techniques. The Inverse Distance Weighting (IDW) technique of Geographical Information Systems (GIS) has been used to perform interpolation with the help of air quality concentration data at three locations in Mumbai for the year 2008. The classification was done for the spatial and temporal variation in air quality levels for the Mumbai region. The seasonal and annual variations of air quality levels for SO2, NOx and SPM (Suspended Particulate Matter) are the focus of this study. Results show that SPM concentration always exceeded the permissible limit of the National Ambient Air Quality Standard. Also, the seasonal level of the pollutant SPM was low in the monsoon due to rainfall. The findings of this study will help formulate control strategies for the rational management of air pollution and can be used for many other regions.
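
IDW itself is a few lines: each unsampled point gets a weighted average of station values, with weights proportional to inverse distance raised to a power (commonly 2). A sketch follows; the power and the three station coordinates/values are assumptions for illustration.

```python
# Inverse Distance Weighting (IDW): interpolate a pollutant concentration at
# query points from monitoring-station values. Power p=2 is the common choice;
# the station coordinates and values below are hypothetical.
import numpy as np

def idw(stations_xy, values, query_xy, power=2.0, eps=1e-12):
    d = np.linalg.norm(query_xy[:, None, :] - stations_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power        # exact hits get ~all the weight
    return (w * values).sum(axis=1) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])   # hypothetical
so2 = np.array([18.0, 25.0, 31.0])                           # ug/m3, hypothetical
grid = np.array([[4.0, 3.0], [8.0, 6.0]])
print(idw(stations, so2, grid))
```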

  12. Assessment of multi-version NPP I and C systems safety. Metric-based approach, technique and tool

    International Nuclear Information System (INIS)

The challenges related to the problem of assessing the actual diversity level and evaluating the safety of diversity-oriented NPP I and C systems are analyzed. There are risks of inaccurate assessment and problems of insufficiently decreasing the probability of common cause failures (CCFs). The CCF probability of safety-critical systems may be essentially decreased through the application of several different types of diversity (multi-diversity). Different diversity types of FPGA-based NPP I and C systems, the general approach, and the stages of diversity and safety assessment as a whole are described. The objectives of the report are: (a) analysis of the challenges caused by use of the diversity approach in NPP I and C systems in the context of FPGA and other modern technologies; (b) development of a technique and tool for multi-version NPP I and C systems assessment based on a check-list and metric-oriented approach; (c) a case study of the technique: assessment of a multi-version FPGA-based NPP I and C system developed using the RadiyTM Platform. (author)
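
A check-list and metric-oriented assessment of this kind typically reduces to scoring each diversity type against a checklist and combining the scores with weights. The sketch below is purely illustrative: the diversity-type names echo common classifications, and the weights and scores are invented, not taken from the report.

```python
# Illustrative check-list/metric aggregation for multi-version system diversity:
# each diversity type gets a checklist score in [0, 1]; a weighted sum yields
# an overall diversity metric. Weights and scores are invented for illustration.
def diversity_metric(scores, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * scores[k] for k in weights)

weights = {"design": 0.3, "equipment": 0.25, "software": 0.25, "human": 0.2}
scores = {"design": 0.8, "equipment": 0.6, "software": 0.7, "human": 0.5}
print(diversity_metric(scores, weights))   # 0.665
```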

13. Assessing the performance of macroinvertebrate metrics in the Challhuaco-Ñireco System (Northern Patagonia, Argentina)

    Directory of Open Access Journals (Sweden)

    Melina Mauad

    2015-09-01

Seven sites were examined in the Challhuaco-Ñireco system, located in the reserve of Nahuel Huapi National Park; part of the catchment is, however, urbanized, with San Carlos de Bariloche (150,000 inhabitants) located in the lower part of the basin. Physico-chemical variables were measured and benthic macroinvertebrates were collected during three consecutive years at seven sites from the headwater to the river outlet. Sites near the source of the river were characterised by Plecoptera, Ephemeroptera, Trichoptera and Diptera, whereas sites close to the river mouth were dominated by Diptera, Oligochaeta and Mollusca. Regarding functional feeding groups, collector-gatherers were dominant at all sites and this pattern was consistent among years. Ordination analysis (RDA) revealed that the distribution of species assemblages responded to the climatic and topographic gradient (temperature and elevation), but was also associated with variables related to human impact (conductivity, nitrate and phosphate contents). Species assemblages at headwaters were mostly represented by sensitive insects, whereas tolerant taxa such as Tubificidae, Lumbriculidae, Chironomidae and the crustacean Aegla sp. were dominant at urbanised sites. Regarding the macroinvertebrate metrics employed, total richness, EPT taxa, the Shannon diversity index and the Biotic Monitoring Patagonian Stream index were fairly consistent and evidenced different levels of disturbance along the stream, meaning that these measures are suitable for evaluating the status of Patagonian mountain streams.
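
Two of the metrics named here are one-liners from a taxon-abundance table. The sketch below computes EPT richness and the Shannon index (natural log); the taxa, orders and counts are hypothetical sample data.

```python
# EPT richness (number of Ephemeroptera/Plecoptera/Trichoptera taxa present)
# and Shannon diversity H' = -sum(p_i ln p_i) from taxon abundances.
# Orders and counts below are hypothetical sample data.
import math

def ept_richness(abundance, order_of_taxon):
    ept = {"Ephemeroptera", "Plecoptera", "Trichoptera"}
    return sum(1 for taxon, n in abundance.items()
               if n > 0 and order_of_taxon[taxon] in ept)

def shannon_index(abundance):
    total = sum(abundance.values())
    return -sum((n / total) * math.log(n / total)
                for n in abundance.values() if n > 0)

sample = {"Baetis": 35, "Klapopteryx": 5, "Smicridea": 12, "Tubificidae": 60}
orders = {"Baetis": "Ephemeroptera", "Klapopteryx": "Plecoptera",
          "Smicridea": "Trichoptera", "Tubificidae": "Oligochaeta"}
print(ept_richness(sample, orders), round(shannon_index(sample), 3))
```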

  14. Water Quality Assessment using Satellite Remote Sensing

    Science.gov (United States)

    Haque, Saad Ul

    2016-07-01

The two main global issues related to water are its declining quality and quantity. Population growth, industrialization, increases in agricultural land and urbanization are the main causes through which inland water bodies are confronted with increasing water demand. The quality of surface water has also been degraded in many countries over the past few decades due to inputs of nutrients and sediments, especially in lakes and reservoirs. Since water is essential not only for meeting human needs but also for maintaining natural ecosystem health and integrity, there are efforts worldwide to assess and restore the quality of surface waters. Remote sensing techniques provide a tool for continuous water quality information in order to identify and minimize sources of pollutants that are harmful for human and aquatic life. The proposed methodology is focused on assessing the quality of water at selected lakes in Pakistan (Sindh), namely HUBDAM, KEENJHAR LAKE, HALEEJI and HADEERO. These lakes are drinking water sources for several major cities of Pakistan, including Karachi. Satellite imagery from Landsat 7 (ETM+) is used to identify the variation in water quality of these lakes in terms of their optical properties. All bands of the Landsat 7 (ETM+) image are analyzed to select only those that may be correlated with water quality parameters (e.g. suspended solids, chlorophyll a). The Optimum Index Factor (OIF) developed by Chavez et al. (1982) is used to select the optimum combination of bands. The OIF is calculated by dividing the sum of the standard deviations of any three bands by the sum of their respective correlation coefficients (absolute values). It is assumed that the band with the higher standard deviation contains a higher amount of 'information' than other bands. Therefore, OIF values are ranked and the three bands with the highest OIF are selected for visual interpretation. A color composite image is created using these three bands. The water quality…
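
The OIF calculation described translates directly to code. Below is a sketch that scores every three-band combination and returns the best one; the band-stack array shape is an assumption.

```python
# Optimum Index Factor (OIF) for every 3-band combination of a multispectral
# image: OIF = sum of the three band standard deviations divided by the sum of
# the absolute correlation coefficients between the band pairs.
from itertools import combinations
import numpy as np

def best_oif_triplet(bands):                     # bands: array (n_bands, rows, cols)
    flat = bands.reshape(bands.shape[0], -1)
    std = flat.std(axis=1)
    corr = np.corrcoef(flat)
    best = None
    for i, j, k in combinations(range(bands.shape[0]), 3):
        oif = (std[i] + std[j] + std[k]) / (
            abs(corr[i, j]) + abs(corr[i, k]) + abs(corr[j, k]))
        if best is None or oif > best[0]:
            best = (oif, (i, j, k))
    return best                                  # (highest OIF, band indices)
```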

  15. OBJECTIVE QUALITY ASSESSMENT OF IMAGE ENHANCEMENT METHODS IN DIGITAL MAMMOGRAPHY-A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Sheba K.U

    2016-08-01

Mammography is the primary and most reliable technique for the detection of breast cancer. Mammograms are examined for the presence of malignant masses and indirect signs of malignancy such as microcalcifications, architectural distortion and bilateral asymmetry. However, mammograms are X-ray images taken with a low radiation dosage, which results in low-contrast, noisy images. Also, malignancies in dense breasts are difficult to detect due to the opaque uniform background in mammograms. Hence, techniques for improving the visual screening of mammograms are essential. Image enhancement techniques are used to improve the visual quality of the images. This paper presents a comparative study of different preprocessing techniques used for the enhancement of mammograms in the mini-MIAS database. The performance of the image enhancement techniques is evaluated using objective image quality assessment techniques. These include simple statistical error metrics like PSNR and human visual system (HVS) feature based metrics such as SSIM, NCC, UIQI and discrete entropy.
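
Two of the listed metrics are compact enough to show inline. The sketch below gives the global form of UIQI and discrete entropy for an 8-bit image; note that UIQI is normally computed over sliding windows and averaged, so the single-window version here is a simplification.

```python
# Universal Image Quality Index (UIQI), global form, and discrete entropy of an
# 8-bit image. UIQI is usually computed in sliding windows and averaged; the
# single-window global version below is a simplification.
import numpy as np

def uiqi_global(x, y):
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

def discrete_entropy(img):
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()               # bits per pixel
```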

  16. Surveillance Metrics Sensitivity Study

    Energy Technology Data Exchange (ETDEWEB)

    Bierbaum, R; Hamada, M; Robertson, A

    2011-11-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to develop a more quantitative and/or qualitative metric(s) describing the results of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intending to answer level-of-confidence type questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of four metrics types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.
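
The "power calculation" flavor of such metrics often reduces to simple binomial statements: the chance that a sample of size n catches at least one defect if the true defect rate is p, and the n needed for a target confidence. The sketch below is a generic form of this idea, not necessarily the Tri-Lab team's exact formulation.

```python
# Generic binomial versions of surveillance power metrics: probability that n
# sampled units reveal at least one defect at true defect rate p, and the
# sample size needed for a target confidence. A generic illustration, not the
# report's exact formulation.
import math

def detection_power(n, p):
    """P(at least one defect found in n samples | defect rate p)."""
    return 1.0 - (1.0 - p) ** n

def required_sample_size(p, confidence=0.90):
    """Smallest n with detection power >= confidence (e.g. a 90/90 criterion)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

print(detection_power(22, 0.10))         # ~0.90
print(required_sample_size(0.10, 0.90))  # 22
```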

  18. Assessing Quality of Data Standards: Framework and Illustration Using XBRL GAAP Taxonomy

    Science.gov (United States)

    Zhu, Hongwei; Wu, Harris

    The primary purpose of data standards or metadata schemas is to improve the interoperability of data created by multiple standard users. Given the high cost of developing data standards, it is desirable to assess the quality of data standards. We develop a set of metrics and a framework for assessing data standard quality. The metrics include completeness and relevancy. Standard quality can also be indirectly measured by assessing interoperability of data instances. We evaluate the framework using data from the financial sector: the XBRL (eXtensible Business Reporting Language) GAAP (Generally Accepted Accounting Principles) taxonomy and US Securities and Exchange Commission (SEC) filings produced using the taxonomy by approximately 500 companies. The results show that the framework is useful and effective. Our analysis also reveals quality issues of the GAAP taxonomy and provides useful feedback to taxonomy users. The SEC has mandated that all publicly listed companies must submit their filings using XBRL. Our findings are timely and have practical implications that will ultimately help improve the quality of financial data.
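
One simple operationalization of the completeness and relevancy metrics: completeness asks how much of what users need the standard covers, while relevancy asks how much of the standard users actually use. The set-based definitions below are an illustrative reading, not necessarily the authors' exact formulas, and the element names are invented.

```python
# Set-based illustration of data-standard quality metrics: completeness as the
# share of needed concepts the standard covers, relevancy as the share of the
# standard users actually use. An illustrative reading of the metrics, not
# necessarily the paper's exact definitions; element names are invented.
def completeness(standard_elements, needed_elements):
    return len(standard_elements & needed_elements) / len(needed_elements)

def relevancy(standard_elements, used_elements):
    return len(standard_elements & used_elements) / len(standard_elements)

gaap = {"Assets", "Liabilities", "Revenues", "NetIncome", "Goodwill"}
needed = {"Assets", "Liabilities", "Revenues", "NetIncome", "R&DExpense"}
used = {"Assets", "Liabilities", "Revenues"}
print(completeness(gaap, needed), relevancy(gaap, used))   # 0.8 0.6
```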

  19. Assessing the Quality of Diabetic Patients Care

    Directory of Open Access Journals (Sweden)

    Belkis Vicente Sánchez

    2012-12-01

Background: improving the efficiency and effectiveness of the actions of family doctors and nurses in this area is an indispensable requisite for achieving comprehensive health care. Objective: to assess the quality of health care provided to diabetic patients by family doctors in the Abreus health area. Methods: a descriptive, observational study based on the application of tools to assess the performance of family doctors in the treatment of diabetes mellitus in the five family doctor consultations of the Abreus health area from January to July 2011 was conducted. The five doctors working in these consultations, as well as 172 diabetic patients, were included in the study. At the same time, 172 randomly selected medical records were also reviewed. Through observation, the existence of some necessary material resources and the quality of the doctors' performance, as well as the quality of the medical records, were evaluated. Patient criteria served to assess the quality of the health care provided. Results: scientific and technical training on diabetes mellitus has been insufficient; the necessary equipment for the appropriate care and monitoring of patients with diabetes is available; in only 2.9% of the medical records reviewed did the interrogation appear in complete form, with the complete physical examination in 12 of them and complete medical indications in 26. Conclusions: the quality of comprehensive medical care to the diabetic patients included in the study is compromised. The doctors interviewed recognized the need to be trained in the diagnosis and treatment of diabetes in order to improve their professional performance and enhance the quality of the health care provided to these patients.

  20. A structural difference based image clutter metric with brain cognitive model constraints

    Science.gov (United States)

    Xu, Dejiang; Shi, Zelin; Luo, Haibo

    2013-03-01

    Previous clutter metrics have less than the desired accuracy in predicting targeting performance. In this paper, a structural-difference-based image clutter metric is proposed, starting from a given definition of an image clutter metric. Motivated by the sensitivity of human visual perception to image structural information, a structural similarity measure between the target and clutter images is first established. Because previous clutter metrics do not consider brain cognitive characteristics, we define an information content weight measure by introducing the widely accepted brain cognitive information-extracting model from the field of image quality assessment (IQA), and then pool the structural similarity measure into a clutter metric, termed the BSD metric. Comparative field tests show that the BSD metric yields a more significant improvement than previously proposed metrics in predicting target acquisition performance, including detection probability and search time.
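
    The following is a minimal sketch of the kind of structure-comparison pooling the abstract describes, using the structure term of SSIM and externally supplied information-content weights; it is an assumption-laden illustration, not the published BSD formulation.

        import numpy as np

        def structure_term(target, patch, c=1e-3):
            # Structure component of SSIM between a target template and
            # an equally sized background patch.
            t = target - target.mean()
            p = patch - patch.mean()
            cov = (t * p).mean()
            return (cov + c) / (t.std() * p.std() + c)

        def clutter_metric(target, background_patches, weights):
            # Pool per-patch structural similarities with information-content
            # weights: high similarity in salient regions means more clutter.
            sims = np.array([structure_term(target, b) for b in background_patches])
            w = np.asarray(weights, dtype=float)
            return float((w * sims).sum() / w.sum())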

  1. Drinking Water Quality Assessment in Tetova Region

    OpenAIRE

    B. H. Durmishi; Ismaili, M.; Shabani, A.; Sh. Abduli

    2012-01-01

    Problem statement: The quality of drinking water is a crucial factor for human health. The objective of this study was the assessment of physical, chemical and bacteriological quality of the drinking water in the city of Tetova and several surrounding villages in the Republic of Macedonia for the period May 2007-2008. The sampling and analysis are conducted in accordance with State Regulation No. 57/2004, which is in compliance with EU and WHO standards. A total of 415 samples were taken for ...

  2. Automated Data Quality Assessment of Marine Sensors

    OpenAIRE

    Smith, Daniel V; Leon Reznik; Paulo A. Souza; Timms, Greg P.

    2011-01-01

    The automated collection of data (e.g., through sensor networks) has led to a massive increase in the quantity of environmental and other data available. The sheer quantity of data and growing need for real-time ingestion of sensor data (e.g., alerts and forecasts from physical models) means that automated Quality Assurance/Quality Control (QA/QC) is necessary to ensure that the data collected is fit for purpose. Current automated QA/QC approaches provide assessments based upon hard classific...

  3. Perceived interest versus overt visual attention in image quality assessment

    Science.gov (United States)

    Engelke, Ulrich; Zhang, Wei; Le Callet, Patrick; Liu, Hantao

    2015-03-01

    We investigate the impact of overt visual attention and perceived interest on the prediction performance of image quality metrics. Towards this end we performed two respective experiments to capture these mechanisms: an eye gaze tracking experiment and a region-of-interest selection experiment. Perceptual relevance maps were created from both experiments and integrated into the design of the image quality metrics. Correlation analysis shows that indeed there is an added value of integrating these perceptual relevance maps. We reveal that the improvement in prediction accuracy is not statistically different between fixation density maps from eye gaze tracking data and region-of-interest maps, thus, indicating the robustness of different perceptual relevance maps for the performance gain of image quality metrics. Interestingly, however, we found that thresholding of region-of-interest maps into binary maps significantly deteriorates prediction performance gain for image quality metrics. We provide a detailed analysis and discussion of the results as well as the conceptual and methodological differences between capturing overt visual attention and perceived interest.
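
    A minimal sketch of relevance-weighted pooling, assuming a per-pixel quality map and a perceptual relevance map (e.g., fixation density or region-of-interest selection frequency) of the same shape; the names are illustrative.

        import numpy as np

        def relevance_weighted_score(quality_map, relevance_map):
            # Weighted average of a local quality/distortion map; with a
            # uniform relevance map this reduces to the plain mean.
            w = relevance_map / relevance_map.sum()
            return float((w * quality_map).sum())

    Keeping the relevance map continuous matters here: as the study reports, thresholding it into a binary map significantly deteriorates the performance gain.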

  4. Quality assurance in diagnostic radiology - assessing the fluoroscopic image quality

    International Nuclear Information System (INIS)

    The X-ray fluoroscopic image has a considerably lower resolution than the radiographic one. This requires a careful quality control aiming at optimal use of the fluoroscopic equipment. The basic procedures for image quality assessment of Image Intensifier/TV image are described. Test objects from Leeds University (UK) are used as prototypes. The results from examining 50 various fluoroscopic devices are shown. Their limiting spatial resolution varies between 0.8 lp/mm (at maximum II field size) and 2.24 lp/mm (at minimum field size). The mean value of the limiting spatial resolution for a 23 cm Image Intensifier is about 1.24 lp/mm. The mean limits of variation of the contrast/detail diagram for various fluoroscopic equipment are graphically expressed. 14 refs., 1 fig. (author)

  5. Retinal image quality assessment through a visual similarity index

    Science.gov (United States)

    Pérez, Jorge; Espinosa, Julián; Vázquez, Carmen; Mas, David

    2013-04-01

    Retinal image quality is commonly analyzed through parameters inherited from instrumental optics. These parameters are defined for 'good optics' so they are hard to translate into visual quality metrics. Instead of using point or artificial functions, we propose a quality index that takes into account properties of natural images. These images usually show strong local correlations that help to interpret the image. Our aim is to derive an objective index that quantifies the quality of vision by taking into account the local structure of the scene, instead of focusing on a particular aberration. As we show, this index highly correlates with visual acuity and allows inter-comparison of natural images around the retina. The usefulness of the index is proven through the analysis of real eyes before and after undergoing corneal surgery, which usually are hard to analyze with standard metrics.

  6. Toward assessing subjective quality of service of conversational mobile multimedia applications delivered over the internet: a methodology study

    OpenAIRE

    Dugénie, P; Munro, ATD; Barton, MH

    2002-01-01

    Some recent publications have proposed methodologies to assess the performance of multimedia services by introducing subjective estimates of the end-to-end quality of various applications. As a general statement, in order to obtain meaningful subjective results, the experiments must be repeatable and the elements of the whole chain of transmission between users must be restricted to a minimum number of objective quality metrics. This paper presents the approach to specifying the minimum qualit...

  7. Visual quality assessment by machine learning

    CERN Document Server

    Xu, Long; Kuo, C -C Jay

    2015-01-01

    The book encompasses the state of the art in visual quality assessment (VQA) and learning-based visual quality assessment (LB-VQA) by providing a comprehensive overview of the existing relevant methods. It gives readers the basic knowledge, a systematic overview, and the new developments of VQA. It also covers the preliminary knowledge of machine learning (ML) needed for VQA tasks and newly developed ML techniques for the purpose. Hence, it is first of all helpful to beginning readers (including research students) entering the VQA field in general and LB-VQA in particular. Secondly, new developments in VQA, and in LB-VQA particularly, are detailed in this book, which will give peer researchers and engineers new insights into VQA.

  18. Website Quality Assessment Model (WQAM) for Developing an Efficient E-Learning Framework - A Novel Approach

    Directory of Open Access Journals (Sweden)

    R.Jayakumar

    2013-10-01

    The prodigious growth of the internet as an environment for learning has led to the development of numerous sites offering knowledge to novices in an efficient manner. However, evaluating the quality of those sites is a substantial task. With that concern, this paper attempts to evaluate quality measures for enhancing the site design and contents of an e-learning framework, as it relates to information retrieval over the internet. The proposal explores two main processes: firstly, evaluating website quality with defined high-level quality metrics such as accuracy, feasibility, utility and propriety using the Website Quality Assessment Model (WQAM), and secondly, developing an e-learning framework with improved quality. Specifically, the quality metrics are analyzed with the feedback obtained through a Questionnaire Sample (QS), by which the areas of the website that require improvement can be identified; a new e-learning framework is then developed incorporating those enhancements.

  9. QUAST: quality assessment tool for genome assemblies

    OpenAIRE

    Gurevich, Alexey; Saveliev, Vladislav; Vyahhi, Nikolay; Tesler, Glenn

    2013-01-01

    Summary: Limitations of genome sequencing techniques have led to dozens of assembly algorithms, none of which is perfect. A number of methods for comparing assemblers have been developed, but none is yet a recognized benchmark. Further, most existing methods for comparing assemblies are only applicable to new assemblies of finished genomes; the problem of evaluating assemblies of previously unsequenced species has not been adequately considered. Here, we present QUAST—a quality assessment too...

  10. Quality assessment of aluminized steel tubes

    Directory of Open Access Journals (Sweden)

    K. Żaba

    2010-07-01

    The results of quality assessments of welded steel tubes with an Al-Si coating, intended for the automotive industry, are presented in the paper. Measurements of mechanical properties, tube diameters and thickness, and internal flash heights, as well as an alternative assessment of the weld quality, were performed. The obtained results are presented by means of tools available in the Statistica program and macroscopic observations.

  11. Validation of no-reference image quality index for the assessment of digital mammographic images

    Science.gov (United States)

    de Oliveira, Helder C. R.; Barufaldi, Bruno; Borges, Lucas R.; Gabarda, Salvador; Bakic, Predrag R.; Maidment, Andrew D. A.; Schiabel, Homero; Vieira, Marcelo A. C.

    2016-03-01

    To ensure optimal clinical performance of digital mammography, it is necessary to obtain images with high spatial resolution and low noise, keeping radiation exposure as low as possible. These requirements directly affect the interpretation of radiologists. The quality of a digital image should be assessed using objective measurements. In general, these methods measure the similarity between a degraded image and an ideal image without degradation (ground truth), used as a reference. These methods are called Full-Reference Image Quality Assessment (FR-IQA). However, for digital mammography, an image without degradation is not available in clinical practice; thus, an objective method to assess the quality of mammograms must operate without a reference. The purpose of this study is to present a Normalized Anisotropic Quality Index (NAQI), based on the Rényi entropy in the pseudo-Wigner domain, to assess mammography images in terms of spatial resolution and noise without any reference. The method was validated using synthetic images acquired through an anthropomorphic breast software phantom, and clinical exposures on anthropomorphic breast physical phantoms and patients' mammograms. The results reported by this no-reference index follow the same behavior as other well-established full-reference metrics, e.g., the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Reductions of 50% in the radiation dose in phantom images translated into a decrease of 4 dB in PSNR, 25% in SSIM and 33% in NAQI, evidencing that the proposed metric is sensitive to the noise resulting from dose reduction. The clinical results showed that images reduced to 53% and 30% of the standard radiation dose exhibited reductions of 15% and 25% in NAQI, respectively. Thus, this index may be used in clinical practice as an image quality indicator to improve the quality assurance programs in mammography; hence, the proposed method reduces the subjectivity
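
    For reference, the two full-reference baselines cited above can be sketched as follows for grayscale images scaled to [0, 1]; the NAQI itself (Rényi entropy in the pseudo-Wigner domain) is more involved and is not reproduced here.

        import numpy as np

        def psnr(reference, test, peak=1.0):
            # Peak signal-to-noise ratio in dB.
            mse = np.mean((reference - test) ** 2)
            return 10.0 * np.log10(peak ** 2 / mse)

        def ssim_global(x, y, c1=1e-4, c2=9e-4):
            # Single-window SSIM; production implementations use local
            # sliding windows and average the resulting SSIM map.
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()
            cov = ((x - mx) * (y - my)).mean()
            return (((2 * mx * my + c1) * (2 * cov + c2))
                    / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))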

  12. A multi-scale metrics approach to forest fragmentation for Strategic Environmental Impact Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eunyoung, E-mail: eykim@kei.re.kr [Korea Environment Institute, 215 Jinheungno, Eunpyeong-gu, Seoul 122-706 (Korea, Republic of); Song, Wonkyong, E-mail: wksong79@gmail.com [Suwon Research Institute, 145 Gwanggyo-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do 443-270 (Korea, Republic of); Lee, Dongkun, E-mail: dklee7@snu.ac.kr [Department of Landscape Architecture and Rural System Engineering, Seoul National University, 599 Gwanakro, Gwanak-gu, Seoul 151-921 (Korea, Republic of); Research Institute for Agriculture and Life Sciences, Seoul National University, Seoul 151-921 (Korea, Republic of)

    2013-09-15

    Forests are becoming severely fragmented as a result of land development. South Korea has responded to changing community concerns about environmental issues. The nation has developed and is extending a broad range of tools for use in environmental management. Although legally mandated environmental compliance requirements in South Korea have been implemented to predict and evaluate the impacts of land-development projects, these legal instruments are often insufficient to assess the subsequent impact of development on the surrounding forests. It is especially difficult to examine impacts on multiple (e.g., regional and local) scales in detail. Forest configuration and size, including forest fragmentation by land development, are considered on a regional scale. Moreover, forest structure and composition, including biodiversity, are considered on a local scale in the Environmental Impact Assessment process. Recently, the government amended the Environmental Impact Assessment Act, including the SEA, EIA, and small-scale EIA, to require an integrated approach. Therefore, the purpose of this study was to establish an impact assessment system that minimizes the impacts of land development using an approach that is integrated across multiple scales. This study focused on forest fragmentation due to residential development and road construction sites in selected Congestion Restraint Zones (CRZs) in the Greater Seoul Area of South Korea. Based on a review of multiple-scale impacts, this paper integrates models that assess the impacts of land development on forest ecosystems. The applicability of the integrated model for assessing impacts on forest ecosystems through the SEIA process is considered. On a regional scale, it is possible to evaluate the location and size of a land-development project by considering aspects of forest fragmentation, such as the stability of the forest structure and the degree of fragmentation. On a local scale, land-development projects should

  13. A multi-scale metrics approach to forest fragmentation for Strategic Environmental Impact Assessment

    International Nuclear Information System (INIS)

    Forests are becoming severely fragmented as a result of land development. South Korea has responded to changing community concerns about environmental issues. The nation has developed and is extending a broad range of tools for use in environmental management. Although legally mandated environmental compliance requirements in South Korea have been implemented to predict and evaluate the impacts of land-development projects, these legal instruments are often insufficient to assess the subsequent impact of development on the surrounding forests. It is especially difficult to examine impacts on multiple (e.g., regional and local) scales in detail. Forest configuration and size, including forest fragmentation by land development, are considered on a regional scale. Moreover, forest structure and composition, including biodiversity, are considered on a local scale in the Environmental Impact Assessment process. Recently, the government amended the Environmental Impact Assessment Act, including the SEA, EIA, and small-scale EIA, to require an integrated approach. Therefore, the purpose of this study was to establish an impact assessment system that minimizes the impacts of land development using an approach that is integrated across multiple scales. This study focused on forest fragmentation due to residential development and road construction sites in selected Congestion Restraint Zones (CRZs) in the Greater Seoul Area of South Korea. Based on a review of multiple-scale impacts, this paper integrates models that assess the impacts of land development on forest ecosystems. The applicability of the integrated model for assessing impacts on forest ecosystems through the SEIA process is considered. On a regional scale, it is possible to evaluate the location and size of a land-development project by considering aspects of forest fragmentation, such as the stability of the forest structure and the degree of fragmentation. On a local scale, land-development projects should

  14. Mass Customization Measurements Metrics

    DEFF Research Database (Denmark)

    Nielsen, Kjeld; Brunø, Thomas Ditlev; Jørgensen, Kaj Asbjørn;

    2014-01-01

    A recent survey has indicated that 17% of companies have ceased mass customizing less than 1 year after initiating the effort. This paper presents measurements for a company's mass customization performance, utilizing metrics within the three fundamental capabilities: robust process design, choice navigation, and solution space development. A mass customizer assessing performance with these metrics can identify within which areas improvement would increase competitiveness the most and enable a more efficient transition to mass customization.

  15. QoS Metrics for Cloud Computing Services Evaluation

    Directory of Open Access Journals (Sweden)

    Amid Khatibi Bardsiri

    2014-11-01

    Cloud systems are transforming the information technology industry by enabling companies to provide access to their infrastructure and software products on a subscription basis. Because of the vast range of delivered Cloud solutions, it has become difficult for customers to decide which provider to use and on what basis to make that choice. In particular, employing suitable metrics is vital in assessing practices. Nevertheless, to the best of our knowledge, there is no systematic description of metrics for evaluating Cloud products and services. QoS (Quality of Service) metrics play an important role in selecting Cloud providers and in optimizing resource utilization efficiency. While many reports are devoted to exploiting QoS metrics, relatively little tooling supports the observation and investigation of the QoS metrics of Cloud programs. To guarantee that a specialized product is published, describing metrics for assessing QoS is an essential necessity. This article therefore suggests various QoS metrics for service vendors, with particular attention to the consumer's concerns, and provides a list of metrics that may support future study and assessment in the field of Cloud service evaluation.

  16. Scene reduction for subjective image quality assessment

    Science.gov (United States)

    Lewandowska (Tomaszewska), Anna

    2016-01-01

    Evaluation of image quality is important for many image processing systems, such as those used for acquisition, compression, restoration, enhancement, or reproduction. Its measurement is often accompanied by user studies, in which a group of observers rank or rate results of several algorithms. Such user studies, known as subjective image quality assessment experiments, can be very time consuming and do not guarantee conclusive results. This paper is intended to help design an efficient and rigorous quality assessment experiment. We propose a method of limiting the number of scenes that need to be tested, which can significantly reduce the experimental effort and still capture relevant scene-dependent effects. To achieve this, we employ a clustering technique and evaluate it on the basis of compactness and separation criteria. The correlation between the results obtained from the full initial image database and those obtained from the reduced experiment is analyzed. Finally, we propose a procedure for reducing the initial number of scenes. Four different assessment techniques were tested: single stimulus, double stimulus, forced choice, and similarity judgments. We conclude that in most cases, 9 to 12 judgments per evaluated algorithm for a large scene collection is sufficient to reduce the initial set of images.
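
    A minimal sketch of the scene-reduction idea: cluster scene feature vectors, check the clustering against a compactness/separation criterion (here the silhouette score), and keep the scene nearest each centroid. The feature choice and cluster count are assumptions, not the paper's exact procedure.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        def reduce_scenes(features, n_clusters):
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
            quality = silhouette_score(features, km.labels_)
            # Representative scene = the one nearest each cluster centroid.
            reps = [int(np.argmin(np.linalg.norm(features - c, axis=1)))
                    for c in km.cluster_centers_]
            return reps, quality

        features = np.random.rand(40, 5)  # e.g., colourfulness, contrast, ...
        print(reduce_scenes(features, n_clusters=8))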

  17. Quality of assessments within reach: Review study of research and results of the quality of assessments

    NARCIS (Netherlands)

    Maassen, N.A.M.; Otter, den D.; Wools, S.; Hemker, B.T.; Straetmans, G.J.J.M.; Eggen, T.J.H.M.

    2015-01-01

    Educational tests and assessments are important instruments to measure a student’s knowledge and skills. The question that is addressed in this review study is: “which aspects are currently considered as important to the quality of educational assessments?” Furthermore, it is explored how this infor

  18. Ecological Status of a Patagonian Mountain River: Usefulness of Environmental and Biotic Metrics for Rehabilitation Assessment

    Science.gov (United States)

    Laura, Miserendino M.; Adriana, M. Kutschker; Cecilia, Brand; La Ludmila, Manna; Cecilia, Prinzio Y. Di; Gabriela, Papazian; José, Bava

    2016-06-01

    This work evaluates the consequences of anthropogenic pressures at different sections of a Patagonian mountain river using a set of environmental and biological measures. A map of risk of soil erosion at the basin scale was also produced. The study was conducted at 12 sites along the Percy River system, where physicochemical parameters, riparian ecosystem quality, habitat condition, plants, and macroinvertebrates were investigated. While livestock raising and wood collection, the dominant activities at upper- and middle-basin sites, resulted in an important loss of forest cover, the riparian ecosystem remains in a relatively good state of conservation, as do the in-stream habitat conditions and physicochemical features. Moreover, most indicators based on macroinvertebrates revealed that both the upper and middle basin sections supported similar assemblages, richness, density, and most functional feeding group attributes. By contrast, the lower, urbanized basin showed increases in conductivity and nutrient values and poor quality of the riparian ecosystem and habitat condition. According to the multivariate analysis, ammonia level, elevation, current velocity, and habitat conditions had explanatory power for benthos assemblages. Discharge, naturalness of the river channel, flood plain morphology, conservation status, and percentage of urban area were important moderators of plant composition. Finally, although present land use in the basin would not produce a significant risk of soil erosion, unsustainable practices that promote the substitution of forest by shrubs would lead to severe consequences. Mitigation efforts should be directed at protecting headwater forests, restoring the altered riparian ecosystem, and controlling the incipient eutrophication process.

  19. Ecological Status of a Patagonian Mountain River: Usefulness of Environmental and Biotic Metrics for Rehabilitation Assessment.

    Science.gov (United States)

    Laura, Miserendino M; Adriana, M Kutschker; Cecilia, Brand; La Ludmila, Manna; Cecilia, Prinzio Y Di; Gabriela, Papazian; José, Bava

    2016-06-01

    This work evaluates the consequences of anthropogenic pressures at different sections of a Patagonian mountain river using a set of environmental and biological measures. A map of risk of soil erosion at the basin scale was also produced. The study was conducted at 12 sites along the Percy River system, where physicochemical parameters, riparian ecosystem quality, habitat condition, plants, and macroinvertebrates were investigated. While livestock raising and wood collection, the dominant activities at upper- and middle-basin sites, resulted in an important loss of forest cover, the riparian ecosystem remains in a relatively good state of conservation, as do the in-stream habitat conditions and physicochemical features. Moreover, most indicators based on macroinvertebrates revealed that both the upper and middle basin sections supported similar assemblages, richness, density, and most functional feeding group attributes. By contrast, the lower, urbanized basin showed increases in conductivity and nutrient values and poor quality of the riparian ecosystem and habitat condition. According to the multivariate analysis, ammonia level, elevation, current velocity, and habitat conditions had explanatory power for benthos assemblages. Discharge, naturalness of the river channel, flood plain morphology, conservation status, and percentage of urban area were important moderators of plant composition. Finally, although present land use in the basin would not produce a significant risk of soil erosion, unsustainable practices that promote the substitution of forest by shrubs would lead to severe consequences. Mitigation efforts should be directed at protecting headwater forests, restoring the altered riparian ecosystem, and controlling the incipient eutrophication process. PMID:26961305

  20. Comparing concentration-based (AOT40) and stomatal uptake (PODY) metrics for ozone risk assessment to European forests.

    Science.gov (United States)

    Anav, Alessandro; De Marco, Alessandra; Proietti, Chiara; Alessandri, Andrea; Dell'Aquila, Alessandro; Cionni, Irene; Friedlingstein, Pierre; Khvorostyanov, Dmitry; Menut, Laurent; Paoletti, Elena; Sicard, Pierre; Sitch, Stephen; Vitale, Marcello

    2016-04-01

    Tropospheric ozone (O3) produces harmful effects on forests and crops, leading to a reduction of land carbon assimilation that, consequently, influences the land sink and crop yield production. To assess the potential negative O3 impacts on vegetation, the European Union uses the Accumulated Ozone over Threshold of 40 ppb (AOT40). This index has been chosen for its simplicity and flexibility in handling different ecosystems as well as for its linear relationships with yield or biomass loss. However, AOT40 does not give any information on the physiological O3 uptake into the leaves, since it does not include any environmental constraints to O3 uptake through stomata. Therefore, an index based on stomatal O3 uptake (i.e. PODY), which describes the amount of O3 entering into the leaves, would be more appropriate. Specifically, the PODY metric considers the effects of multiple climatic factors, vegetation characteristics and local and phenological inputs, rather than only the atmospheric O3 concentration. For this reason, the use of PODY in O3 risk assessment for vegetation is increasingly recommended. We compare different potential O3 risk assessments based on two methodologies (i.e. AOT40 and stomatal O3 uptake) using a framework of mesoscale models that produces hourly meteorological and O3 data at high spatial resolution (12 km) over Europe for the time period 2000-2005. Results indicate a remarkable spatial and temporal inconsistency between the two indices, suggesting that a new definition of the European legislative standard is needed in the near future. Besides, our risk assessment based on AOT40 shows good consistency with both in-situ data and other model-based datasets. Conversely, the risk assessment based on stomatal O3 uptake shows different spatial patterns compared to other model-based datasets. This strong inconsistency is likely related to differences in vegetation cover and its associated parameterizations. PMID:26492093
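
    For concreteness, AOT40 can be computed from hourly data as below (a minimal sketch, in ppb·h, accumulated over daylight hours of the growing season). A PODY computation is not shown, since it additionally requires a stomatal conductance model driven by climate, vegetation and phenology inputs.

        import numpy as np

        def aot40(hourly_o3_ppb, is_daylight):
            # hourly_o3_ppb: hourly mean O3 concentrations over the season;
            # is_daylight: boolean mask selecting daylight hours.
            # AOT40 sums only the exceedance above the 40 ppb threshold.
            exceedance = np.clip(hourly_o3_ppb - 40.0, 0.0, None)
            return float(exceedance[is_daylight].sum())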

  1. Advancing Efforts to Achieve Health Equity: Equity Metrics for Health Impact Assessment Practice

    OpenAIRE

    Jonathan Heller; Givens, Marjory L.; Yuen, Tina K.; Solange Gould; Maria Benkhalti Jandu; Emily Bourcier; Tim Choi

    2014-01-01

    Equity is a core value of Health Impact Assessment (HIA). Many compelling moral, economic, and health arguments exist for prioritizing and incorporating equity considerations in HIA practice. Decision-makers, stakeholders, and HIA practitioners see the value of HIAs in uncovering the impacts of policy and planning decisions on various population subgroups, developing and prioritizing specific actions that promote or protect health equity, and using the process to empower marginalized communit...

  2. Fingerprint Quality Assessment Combining Blind Image Quality, Texture and Minutiae Features

    OpenAIRE

    Z. Yao; Le Bars, Jean-Marie; Charrier, Christophe; Rosenberger, Christophe; Rosenberger, C

    2015-01-01

    Biometric sample quality assessment approaches are generally designed in terms of utility property due to the potential difference between human perception of quality and the biometric quality requirements for a recognition system. This study proposes a utility based quality assessment method of fingerprints by considering several complementary aspects: 1) Image quality assessment without any reference which is consistent with human conception of inspecting quality,...

  3. Image quality assessment and human visual system

    Science.gov (United States)

    Gao, Xinbo; Lu, Wen; Tao, Dacheng; Li, Xuelong

    2010-07-01

    This paper summarizes the state of the art of image quality assessment (IQA) and the human visual system (HVS). IQA provides an objective index or real value to measure the quality of a specified image. Since human beings are the ultimate receivers of visual information in practical applications, the most reliable IQA is to build a computational model to mimic the HVS. According to the properties and cognitive mechanism of the HVS, the available HVS-based IQA methods can be divided into two categories, i.e., bionics methods and engineering methods. This paper briefly introduces the basic theories and development histories of these two kinds of HVS-based IQA methods. Finally, some promising research issues are pointed out at the end of the paper.

  4. Peer Review and Quality Assessment in Complete Denture Education.

    Science.gov (United States)

    Novetsky, Marvin; Razzoog, Michael E.

    1981-01-01

    A program in peer review and quality assessment at the University of Michigan denture department is described. The program exposes students to peer review in order to assess the quality of their treatment. (Author/MLW)

  5. Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...

  6. Metrics for Success: Strategies for Enabling Core Facility Performance and Assessing Outcomes.

    Science.gov (United States)

    Turpen, Paula B; Hockberger, Philip E; Meyn, Susan M; Nicklin, Connie; Tabarini, Diane; Auger, Julie A

    2016-04-01

    Core Facilities are key elements in the research portfolio of academic and private research institutions. Administrators overseeing core facilities (core administrators) require assessment tools for evaluating the need and effectiveness of these facilities at their institutions. This article discusses ways to promote best practices in core facilities as well as ways to evaluate their performance across the following eight categories: general management, research and technical staff, financial management, customer base and satisfaction, resource management, communications, institutional impact, and strategic planning. For each category, we provide lessons learned that we believe contribute to the effective and efficient overall management of core facilities. If done well, we believe that encouraging best practices and evaluating performance in core facilities will demonstrate and reinforce the importance of core facilities in the research and educational mission of institutions. It will also increase job satisfaction of those working in core facilities and improve the likelihood of sustainability of both facilities and personnel. PMID:26848284

  7. Metric Properties of the Neighborhood Inventory for Environmental Typology (NIfETy): An Environmental Assessment Tool for Measuring Indicators of Violence, Alcohol, Tobacco, and Other Drug Exposures

    Science.gov (United States)

    Furr-Holden, C. D. M.; Campbell, K. D. M.; Milam, A. J.; Smart, M. J.; Ialongo, N. A.; Leaf, P. J.

    2010-01-01

    Objectives: Establish metric properties of the Neighborhood Inventory for Environmental Typology (NIfETy). Method: A total of 919 residential block faces were assessed by paired raters using the NIfETy. Reliability was evaluated via interrater and internal consistency reliability; validity by comparing NIfETy data with youth self-reported…

  8. Service Quality and Process Maturity Assessment

    Directory of Open Access Journals (Sweden)

    Serek Radomir

    2013-12-01

    This article deals with service quality and the methods for its measurement and improvement to reach so-called service excellence. Besides older methods such as SERVQUAL and SERVPERF, capability maturity models are briefly described, on the basis of which our own methodology is developed and used for process maturity assessment in organizations providing technical services. The method is likewise described and accompanied by illustrative examples. The functionality of the method is verified by exploring the correlation between service employee satisfaction and average process maturity in a service organization. The results seem quite promising and open an arena for further studies.

  9. Assessment of sleep quality in powernapping

    DEFF Research Database (Denmark)

    Kooravand Takht Sabzy, Bashaer; Thomsen, Carsten E

    2011-01-01

    The purpose of this study is to assess the Sleep Quality (SQ) in powernapping. The contributing factors for SQ assessment are time of Sleep Onset (SO), Sleep Length (SL), Sleep Depth (SD), and detection of sleep events (K-complex (KC) and Sleep Spindle (SS)). Data from daytime naps for 10 subjects, 2 days each, including EEG and ECG, were recorded. The SD and sleep events were analyzed by applying spectral analysis. The SO time was detected by a combination of signal spectral analysis, Slow Rolling Eye Movement (SREM) detection, Heart Rate Variability (HRV) analysis and EEG segmentation using both Autocorrelation Function (ACF) and Crosscorrelation Function (CCF) methods. The EEG derivation FP1-FP2 was filtered in a narrow band and used as an alternative to EOG for SREM detection. The ACF and CCF segmentation methods were also applied for the detection of sleep events. The ACF method detects segment boundaries

  10. Quality Assessment of Landsat Surface Reflectance Products Using MODIS Data

    Science.gov (United States)

    Feng, Min; Huang, Chengquan; Channan, Saurabh; Vermote, Eric; Masek, Jeffrey G.; Townshend, John R.

    2012-01-01

    Surface reflectance adjusted for atmospheric effects is a primary input for land cover change detection and for developing many higher level surface geophysical parameters. With the development of automated atmospheric correction algorithms, it is now feasible to produce large quantities of surface reflectance products using Landsat images. Validation of these products requires in situ measurements, which either do not exist or are difficult to obtain for most Landsat images. The surface reflectance products derived using data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), however, have been validated more comprehensively. Because the MODIS on the Terra platform and the Landsat 7 are only half an hour apart following the same orbit, and each of the 6 Landsat spectral bands overlaps with a MODIS band, good agreements between MODIS and Landsat surface reflectance values can be considered indicators of the reliability of the Landsat products, while disagreements may suggest potential quality problems that need to be further investigated. Here we develop a system called Landsat-MODIS Consistency Checking System (LMCCS). This system automatically matches Landsat data with MODIS observations acquired on the same date over the same locations and uses them to calculate a set of agreement metrics. To maximize its portability, Java and open-source libraries were used in developing this system, and object-oriented programming (OOP) principles were followed to make it more flexible for future expansion. As a highly automated system designed to run as a stand-alone package or as a component of other Landsat data processing systems, this system can be used to assess the quality of essentially every Landsat surface reflectance image where spatially and temporally matching MODIS data are available. The effectiveness of this system was demonstrated using it to assess preliminary surface reflectance products derived using the Global Land Survey (GLS) Landsat

  11. Quality assessment of Landsat surface reflectance products using MODIS data

    Science.gov (United States)

    Feng, Min; Huang, Chengquan; Channan, Saurabh; Vermote, Eric F.; Masek, Jeffrey G.; Townshend, John R.

    2012-01-01

    Surface reflectance adjusted for atmospheric effects is a primary input for land cover change detection and for developing many higher level surface geophysical parameters. With the development of automated atmospheric correction algorithms, it is now feasible to produce large quantities of surface reflectance products using Landsat images. Validation of these products requires in situ measurements, which either do not exist or are difficult to obtain for most Landsat images. The surface reflectance products derived using data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), however, have been validated more comprehensively. Because the MODIS on the Terra platform and the Landsat 7 are only half an hour apart following the same orbit, and each of the 6 Landsat spectral bands overlaps with a MODIS band, good agreements between MODIS and Landsat surface reflectance values can be considered indicators of the reliability of the Landsat products, while disagreements may suggest potential quality problems that need to be further investigated. Here we develop a system called Landsat-MODIS Consistency Checking System (LMCCS). This system automatically matches Landsat data with MODIS observations acquired on the same date over the same locations and uses them to calculate a set of agreement metrics. To maximize its portability, Java and open-source libraries were used in developing this system, and object-oriented programming (OOP) principles were followed to make it more flexible for future expansion. As a highly automated system designed to run as a stand-alone package or as a component of other Landsat data processing systems, this system can be used to assess the quality of essentially every Landsat surface reflectance image where spatially and temporally matching MODIS data are available. The effectiveness of this system was demonstrated using it to assess preliminary surface reflectance products derived using the Global Land Survey (GLS) Landsat

  12. Quality Assessment Dimensions of Distance Teaching/Learning Curriculum Designing

    Science.gov (United States)

    Volungeviciene, Airina; Tereseviciene, Margarita

    2008-01-01

    The paper presents scientific literature analysis in the area of distance teaching/learning curriculum designing and quality assessment. The aim of the paper is to identify quality assessment dimensions of distance teaching/learning curriculum designing. The authors of the paper agree that quality assessment should be considered during the…

  13. FunFOLDQA: a quality assessment tool for protein-ligand binding site residue predictions.

    Directory of Open Access Journals (Sweden)

    Daniel B Roche

    The estimation of prediction quality is important because without quality measures, it is difficult to determine the usefulness of a prediction. Currently, methods for ligand binding site residue predictions are assessed in the function prediction category of the biennial Critical Assessment of Techniques for Protein Structure Prediction (CASP) experiment, utilizing the Matthews Correlation Coefficient (MCC) and Binding-site Distance Test (BDT) metrics. However, the assessment of ligand binding site predictions using such metrics requires the availability of solved structures with bound ligands. Thus, we have developed a ligand binding site quality assessment tool, FunFOLDQA, which utilizes protein feature analysis to predict ligand binding site quality prior to the experimental solution of the protein structures and their ligand interactions. The FunFOLDQA feature scores were combined using: simple linear combinations, multiple linear regression and a neural network. The neural network produced significantly better results for correlations to both the MCC and BDT scores, according to Kendall's τ, Spearman's ρ and Pearson's r correlation coefficients, when tested on both the CASP8 and CASP9 datasets. The neural network also produced the largest Area Under the Curve (AUC) score when Receiver Operating Characteristic (ROC) analysis was undertaken for the CASP8 dataset. Furthermore, the FunFOLDQA algorithm incorporating the neural network is shown to add value to FunFOLD, when both methods are employed in combination. This results in a statistically significant improvement over all of the best server methods, the FunFOLD method (6.43%), and one of the top manual groups (FN293) tested on the CASP8 dataset. The FunFOLDQA method was also found to be competitive with the top server methods when tested on the CASP9 dataset. To the best of our knowledge, FunFOLDQA is the first attempt to develop a method that can be used to assess ligand binding site
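
    For reference, the MCC used in this assessment reduces, for binary binding-site residue predictions, to the following standard formula (a standalone sketch; the FunFOLDQA feature combination itself is not reproduced).

        import math

        def mcc(tp, tn, fp, fn):
            # Matthews Correlation Coefficient from the confusion-matrix
            # counts of predicted vs. observed binding-site residues.
            denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
            return (tp * tn - fp * fn) / denom if denom else 0.0

        print(mcc(tp=12, tn=180, fp=5, fn=3))  # ~0.73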

  14. Novel approach for assessing uncertainty propagation via information-theoretic divergence metrics and multivariate Gaussian Copula modeling

    Science.gov (United States)

    Thelen, Brian J.; Rickerd, Chris J.; Burns, Joseph W.

    2014-06-01

    With all of the new remote sensing modalities available, with ever increasing capabilities, there is a constant desire to extend the current state of the art in physics-based feature extraction and to introduce new and innovative techniques that enable the exploitation within and across modalities, i.e., fusion. A key component of this process is finding the associated features from the various imaging modalities that provide key information in terms of exploitative fusion. Further, it is desired to have an automatic methodology for assessing the information in the features from the various imaging modalities, in the presence of uncertainty. In this paper we propose a novel approach for assessing, quantifying, and isolating the information in the features via a joint statistical modeling of the features with the Gaussian Copula framework. This framework allows for a very general modeling of distributions on each of the features while still modeling the conditional dependence between the features, and the final output is a relatively accurate estimate of the information-theoretic J-divergence metric, which is directly related to discriminability. A very useful aspect of this approach is that it can be used to assess which features are most informative, and what is the information content as a function of key uncertainties (e.g., geometry) and collection parameters (e.g., SNR and resolution). We show some results of applying the Gaussian Copula framework and estimating the J-Divergence on HRR data as generated from the AFRL public release data set known as the Backhoe Data Dome.
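
    Once features are mapped into the Gaussian copula domain, the J-divergence between two classes takes the closed form of the symmetrized Kullback-Leibler divergence between two multivariate Gaussians. The sketch below computes that closed form; it is an illustration, not the authors' estimator.

        import numpy as np

        def kl_gauss(mu0, S0, mu1, S1):
            # KL(N(mu0, S0) || N(mu1, S1)) for multivariate Gaussians.
            k = len(mu0)
            S1_inv = np.linalg.inv(S1)
            d = mu1 - mu0
            _, logdet0 = np.linalg.slogdet(S0)
            _, logdet1 = np.linalg.slogdet(S1)
            return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d
                          - k + logdet1 - logdet0)

        def j_divergence(mu0, S0, mu1, S1):
            # Symmetrized KL; larger values imply greater discriminability.
            return kl_gauss(mu0, S0, mu1, S1) + kl_gauss(mu1, S1, mu0, S0)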

  15. Quantitative Metrics and Risk Assessment: The Three Tenets Model of Cybersecurity

    Directory of Open Access Journals (Sweden)

    Jeff Hughes

    2013-08-01

    Progress in operational cybersecurity has been difficult to demonstrate. In spite of the considerable research and development investments made for more than 30 years, many government, industrial, financial, and consumer information systems continue to be successfully attacked and exploited on a routine basis. One of the main reasons that progress has been so meagre is that most technical cybersecurity solutions that have been proposed to date have been point solutions that fail to address operational tradeoffs, implementation costs, and consequent adversary adaptations across the full spectrum of vulnerabilities. Furthermore, sound prescriptive security principles previously established, such as the Orange Book, have been difficult to apply given current system complexity and acquisition approaches. To address these issues, the authors have developed threat-based descriptive methodologies to more completely identify system vulnerabilities, to quantify the effectiveness of possible protections against those vulnerabilities, and to evaluate operational consequences and tradeoffs of possible protections. This article begins with a discussion of the tradeoffs among seemingly different system security properties such as confidentiality, integrity, and availability. We develop a quantitative framework for understanding these tradeoffs and the issues that arise when those security properties are all in play within an organization. Once security goals and candidate protections are identified, risk/benefit assessments can be performed using a novel multidisciplinary approach, called “QuERIES.” The article ends with a threat-driven quantitative methodology, called “The Three Tenets”, for identifying vulnerabilities and countermeasures in networked cyber-physical systems. The goal of this article is to offer operational guidance, based on the techniques presented here, for informed decision making about cyber-physical system security.

  16. Content-aware objective video quality assessment

    Science.gov (United States)

    Ortiz-Jaramillo, Benhur; Niño-Castañeda, Jorge; Platiša, Ljiljana; Philips, Wilfried

    2016-01-01

    Since the end-user of video-based systems is often a human observer, prediction of user-perceived video quality (PVQ) is an important task for increasing user satisfaction. Despite the large variety of objective video quality measures (VQMs), their lack of generalizability remains a problem. This is mainly due to the strong dependency between PVQ and video content. Although this problem is well known, few existing VQMs directly account for the influence of video content on PVQ. Recently, we proposed a method to predict PVQ by introducing relevant video content features into the computation of video distortion measures. The method is based on analyzing the level of spatiotemporal activity in the video and using these activity levels as parameters of the anthropomorphic video distortion models. We focus on the experimental evaluation of the proposed methodology based on a total of five public databases, four different objective VQMs, and 105 content-related indexes. Additionally, relying on the proposed method, we introduce an approach for selecting the levels of video distortions for the purpose of subjective quality assessment studies. Our results suggest that when adequately combined with content-related indexes, even very simple distortion measures (e.g., peak signal to noise ratio) are able to achieve high performance, i.e., high correlation between the VQM and the PVQ. In particular, we have found that by incorporating video content features, it is possible to increase the performance of the VQM by up to 20% relative to its non-content-aware baseline.
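
    The spatiotemporal activity analysis can be sketched with indexes in the spirit of the classic SI/TI measures of ITU-T P.910, which are one plausible choice of content feature (the paper's own 105 content-related indexes are not reproduced here).

        import numpy as np
        from scipy import ndimage

        def spatial_information(frame):
            # SI-like index: standard deviation of the Sobel gradient magnitude.
            gx = ndimage.sobel(frame, axis=0)
            gy = ndimage.sobel(frame, axis=1)
            return float(np.std(np.hypot(gx, gy)))

        def temporal_information(frames):
            # TI-like index: standard deviation of frame-to-frame differences;
            # frames is a (time, height, width) array.
            return float(np.std(np.diff(frames, axis=0)))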

  17. Water-quality impact assessment for hydropower

    International Nuclear Information System (INIS)

    A methodology to assess the impact of a hydropower facility on downstream water quality is described. Negative impacts can result from the substitution of discharges aerated over a spillway with minimally aerated turbine discharges that are often withdrawn from lower reservoir levels, where dissolved oxygen (DO) is typically low. Three case studies illustrate the proposed method and problems that can be encountered. Historic data are used to establish the probability of low-dissolved-oxygen occurrences. Synoptic surveys, combined with downstream monitoring, give an overall picture of the water-quality dynamics in the river and the reservoir. Spillway aeration is determined through measurements and adjusted for temperature. Theoretical computations of selective withdrawal are sensitive to boundary conditions, such as the location of the outlet-relative to the reservoir bottom, but withdrawal from the different layers is estimated from measured upstream and downstream temperatures and dissolved-oxygen profiles. Based on field measurements, the downstream water quality under hydropower operation is predicted. Improving selective withdrawal characteristics or diverting part of the flow over the spillway provided cost-effective mitigation solutions for small hydropower facilities (less than 15 MW) because of the low capital investment required

  18. Assessing quality and total quality in economic higher education

    OpenAIRE

    Catalina Sitnikov

    2008-01-01

    Nowadays, there are countries, systems and cultures where the issue of quality management, and all the items it implies, is firmly on the agenda for higher education institutions. Whether as a result of a growing climate of increasing accountability or an expansion in the size and diversity of student populations, both quality assurance and quality enhancement are now considered essential components of any quality management programme.

  19. Survey and Assessment of Land Ecological Quality in Cixi City

    OpenAIRE

    LIU, JUNBAO; Chen, Zhiyuan; Pan, Weifeng; Xie, Shaojuan

    2013-01-01

    Soil, atmosphere, water and agricultural product quality together constitute land ecological quality. Through a survey pilot project on basic farmland quality, Cixi City carried out a high-precision soil geochemical survey and a survey of agricultural products, irrigation water and air quality, and established an ecological quality evaluation model of the land. Based on the evaluation of soil geochemical quality, we conducted a comprehensive quality assessment of atmosphere, water, agricultural pr...

  20. Considerations of the Software Metric-based Methodology for Software Reliability Assessment in Digital I and C Systems

    International Nuclear Information System (INIS)

    Analog I and C systems have been replaced by digital I and C systems because digital systems offer many potential benefits to nuclear power plants in terms of operational and safety performance. For example, digital systems are essentially free of drift, have higher data handling and storage capabilities, and provide improved performance in accuracy and computational capability. In addition, analog replacement parts have become more difficult to obtain since they are obsolete and discontinued. There are, however, challenges to the introduction of digital technology into nuclear power plants, because digital systems are more complex than analog systems and their operation and failure modes are different. In particular, software, which can be the core of functionality in digital systems, does not wear out physically like hardware, and its failure modes are not yet clearly defined. Thus, research to develop methodologies for software reliability assessment is still proceeding in safety-critical areas such as nuclear systems, aerospace and medical devices. Among these, a software metric-based methodology has been considered for the digital I and C systems of Korean nuclear power plants. Advantages and limitations of that methodology are identified, and requirements for its application to digital I and C systems are considered in this study

  1. Contribution to a quantitative assessment model for reliability-based metrics of electronic and programmable safety-related functions

    International Nuclear Information System (INIS)

    The use of fault-tolerant EP architectures has induced growing constraints, whose influence on reliability-based performance metrics is no longer negligible. To face the growing influence of simultaneous failures, this thesis proposes, for safety-related functions, a new assessment method for reliability, based on a better accounting of temporal aspects. This report introduces the concept of information and uses it to interpret the failure modes of a safety-related function as the direct result of the initiation and propagation of erroneous information until the actuator level. The main idea is to distinguish between the appearance and disappearance of erroneous states, which can be defined as intrinsically dependent on hardware characteristics and maintenance policies, and their possible activation, constrained by architectural choices, leading to the failure of the safety-related function. The approach is based, at a low level, on deterministic SED models of the architecture, and uses non-homogeneous Markov chains to depict the time evolution of the probabilities of errors. (author)

  2. A metric-based assessment of flood risk and vulnerability of rural communities in the Lower Shire Valley, Malawi

    Science.gov (United States)

    Adeloye, A. J.; Mwale, F. D.; Dulanya, Z.

    2015-06-01

    In response to the increasing frequency and economic damages of natural disasters globally, disaster risk management has evolved to incorporate risk assessments that are multi-dimensional, integrated and metric-based. This is to support knowledge-based decision making and hence sustainable risk reduction. In Malawi and most of Sub-Saharan Africa (SSA), however, flood risk studies remain focussed on understanding causation, impacts, perceptions and coping and adaptation measures. Using the IPCC Framework, this study has quantified and profiled risk to flooding of rural, subsistent communities in the Lower Shire Valley, Malawi. Flood risk was obtained by integrating hazard and vulnerability. Flood hazard was characterised in terms of flood depth and inundation area obtained through hydraulic modelling in the valley with Lisflood-FP, while the vulnerability was indexed through analysis of exposure, susceptibility and capacity that were linked to social, economic, environmental and physical perspectives. Data on these were collected through structured interviews of the communities. The implementation of the entire analysis within GIS enabled the visualisation of spatial variability in flood risk in the valley. The results show predominantly medium levels in hazardousness, vulnerability and risk. The vulnerability is dominated by a high to very high susceptibility. Economic and physical capacities tend to be predominantly low but social capacity is significantly high, resulting in overall medium levels of capacity-induced vulnerability. Exposure manifests as medium. The vulnerability and risk showed marginal spatial variability. The paper concludes with recommendations on how these outcomes could inform policy interventions in the Valley.
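
    In spirit, the integration can be reduced to the sketch below: risk as the product of a hazard index and a vulnerability index, the latter aggregating exposure, susceptibility and (inverse) capacity sub-indices normalised to [0, 1]. The equal weights are an illustrative assumption, not the study's calibration.

        def vulnerability(exposure, susceptibility, capacity,
                          w=(1/3, 1/3, 1/3)):
            # Higher capacity lowers vulnerability, hence the (1 - capacity) term.
            return w[0] * exposure + w[1] * susceptibility + w[2] * (1.0 - capacity)

        def flood_risk(hazard, exposure, susceptibility, capacity):
            # Risk = hazard x vulnerability, both indexed on [0, 1].
            return hazard * vulnerability(exposure, susceptibility, capacity)

        print(flood_risk(hazard=0.6, exposure=0.5, susceptibility=0.8, capacity=0.3))  # 0.4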

  3. Assessment of every day extremely low frequency (Elf) electromagnetic fields (50-60 Hz) exposure: which metrics?

    International Nuclear Information System (INIS)

    Because electricity is encountered at every moment of the day, at home with household appliances or in every type of transportation, people are exposed most of the time to extremely low frequency (ELF) electromagnetic fields (50-60 Hz) in various ways. Due to a lack of knowledge about the biological mechanisms of 50 Hz magnetic fields, studies seeking to identify health effects of exposure use central tendency metrics. The objective of our study is to provide better information about these exposure measurements using three categories of metrics. We calculated exposure metrics from data series (79 everyday-exposed subjects) made up of approximately 20,000 recordings of magnetic fields, measured every 30 seconds for 7 days with an EMDEX II dosimeter. These indicators were divided into three categories: central tendency metrics, dispersion metrics and variability metrics. We used Principal Component Analysis (PCA), a multidimensional technique, to examine the relations between the different exposure metrics for a group of subjects. PCA captured 71.7% of the variance in the first two components. The first component (42.7%) was characterized by central tendency; the second (29.0%) was composed of dispersion characteristics. The third component (17.2%) was composed of variability characteristics. This study confirms the need to improve exposure measurements by using at least two dimensions: intensity and dispersion. (authors)
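
    The PCA step itself is standard and can be sketched as follows, with synthetic data standing in for the 79 subjects' indicator matrix (the indicator definitions are assumptions for illustration).

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        # Rows: subjects; columns: exposure indicators (central tendency,
        # dispersion, variability). Log-normal values mimic field dosimetry.
        X = rng.lognormal(mean=0.0, sigma=1.0, size=(79, 9))
        X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardise before PCA

        pca = PCA(n_components=3).fit(X)
        print(pca.explained_variance_ratio_)  # variance captured per component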

  4. 2003 SNL ASCI applications software quality engineering assessment report.

    Energy Technology Data Exchange (ETDEWEB)

    Schofield, Joseph Richard, Jr.; Ellis, Molly A.; Williamson, Charles Michael; Bonano, Lora A.

    2004-02-01

    This document describes the 2003 SNL ASCI Software Quality Engineering (SQE) assessment of twenty ASCI application code teams and the results of that assessment. The purpose of this assessment was to determine code team compliance with the Sandia National Laboratories ASCI Applications Software Quality Engineering Practices, Version 2.0 as part of an overall program assessment.

  5. Cyber threat metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Frye, Jason Neal; Veitch, Cynthia K.; Mateski, Mark Elliot; Michalski, John T.; Harris, James Mark; Trevino, Cassandra M.; Maruoka, Scott

    2012-03-01

    Threats are generally much easier to list than to describe, and much easier to describe than to measure. As a result, many organizations list threats. Fewer describe them in useful terms, and still fewer measure them in meaningful ways. This is particularly true in the dynamic and nebulous domain of cyber threats - a domain that tends to resist easy measurement and, in some cases, appears to defy any measurement. We believe the problem is tractable. In this report we describe threat metrics and models for characterizing threats consistently and unambiguously. The purpose of this report is to support the Operational Threat Assessment (OTA) phase of risk and vulnerability assessment. To this end, we focus on the task of characterizing cyber threats using consistent threat metrics and models. In particular, we address threat metrics and models for describing malicious cyber threats to US FCEB agencies and systems.

  6. Trajectory-Oriented Approach to Managing Traffic Complexity: Trajectory Flexibility Metrics and Algorithms and Preliminary Complexity Impact Assessment

    Science.gov (United States)

    Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek

    2009-01-01

    This document describes exploratory research on a distributed, trajectory-oriented approach to traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.

  7. Efficient neural-network-based no-reference approach to an overall quality metric for JPEG and JPEG2000 compressed images

    OpenAIRE

    Liu, H.; Redi, J.A.; Alers, H.; Zunino, R.; Heynderickx, I.E.J.R.

    2011-01-01

    Reliably assessing overall quality of JPEG/JPEG2000 coded images without having the original image as a reference is still challenging, mainly due to our limited understanding of how humans combine the various perceived artifacts to an overall quality judgment. A known approach to avoid the explicit simulation of human assessment of overall quality is the use of a neural network. Neural network approaches usually start by selecting active features from a set of generic image characteristics, ...

  8. Toward a No-Reference Image Quality Assessment Using Statistics of Perceptual Color Descriptors.

    Science.gov (United States)

    Lee, Dohyoung; Plataniotis, Konstantinos N

    2016-08-01

    Analysis of the statistical properties of natural images has played a vital role in the design of no-reference (NR) image quality assessment (IQA) techniques. In this paper, we propose parametric models describing the general characteristics of chromatic data in natural images. They provide informative cues for quantifying visual discomfort caused by the presence of chromatic image distortions. The established models capture the correlation of chromatic data between spatially adjacent pixels by means of color invariance descriptors. The use of color invariance descriptors is inspired by their relevance to visual perception, since they provide less sensitive descriptions of image scenes against viewing geometry and illumination variations than luminances. In order to approximate the visual quality perception of chromatic distortions, we devise four parametric models derived from invariance descriptors representing independent aspects of color perception: 1) hue; 2) saturation; 3) opponent angle; and 4) spherical angle. The practical utility of the proposed models is examined by deploying them in our new general-purpose NR IQA metric. The metric initially estimates the parameters of the proposed chromatic models from an input image to constitute a collection of quality-aware features (QAF). Thereafter, a machine learning technique is applied to predict visual quality given a set of extracted QAFs. Experimentation performed on large-scale image databases demonstrates that the proposed metric correlates well with the provided subjective ratings of image quality over commonly encountered achromatic and chromatic distortions, indicating that it can be deployed on a wide variety of color image processing problems as a generalized IQA solution. PMID:27305678
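
    The two-stage pipeline described here, chromatic feature extraction followed by a learned regressor, can be sketched in simplified form. The opponent-colour hue/saturation statistics below are crude stand-ins for the paper's four parametric models, and the SVR choice and training data are assumptions:

    ```python
    import numpy as np
    from sklearn.svm import SVR

    def chromatic_features(rgb):
        """Toy quality-aware features: summary statistics of opponent-colour
        hue and saturation, standing in for the paper's fitted model params."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        o1 = (r - g) / np.sqrt(2.0)            # red-green opponent channel
        o2 = (r + g - 2.0 * b) / np.sqrt(6.0)  # yellow-blue opponent channel
        hue = np.arctan2(o1, o2)
        sat = np.sqrt(o1 ** 2 + o2 ** 2)
        return [hue.mean(), hue.std(), sat.mean(), sat.std()]

    # Hypothetical training set: images paired with subjective scores (MOS).
    rng = np.random.default_rng(1)
    images = rng.random((50, 64, 64, 3))
    mos = rng.uniform(1.0, 5.0, size=50)

    X = np.array([chromatic_features(im) for im in images])
    model = SVR().fit(X, mos)  # regression from features to quality
    print("predicted quality:", model.predict(X[:3]))
    ```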

  9. Validation of an image-based technique to assess the perceptual quality of clinical chest radiographs with an observer study

    Science.gov (United States)

    Lin, Yuan; Choudhury, Kingshuk R.; McAdams, H. Page; Foos, David H.; Samei, Ehsan

    2014-03-01

    We previously proposed a novel image-based quality assessment technique to assess the perceptual quality of clinical chest radiographs. In this paper, an observer study was designed and conducted to systematically validate this technique. Ten metrics were involved in the observer study, i.e., lung grey level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. For each metric, three tasks were successively presented to the observers. In each task, six ROI images were randomly presented in a row and observers were asked to rank the images based only on a designated quality, disregarding the other qualities. A range slider on top of the images was used by observers to indicate the acceptable range based on the corresponding perceptual attribute. Five board-certified radiologists from Duke participated in this observer study on a DICOM-calibrated diagnostic display workstation under low ambient lighting conditions. The observer data were analyzed in terms of the correlations between the observer ranking orders and the algorithmic ranking orders. Based on the collected acceptable ranges, quality consistency ranges were statistically derived. The observer study showed that, for each metric, the averaged ranking orders of the participating observers were strongly correlated with the algorithmic orders. For the lung grey level, the observer ranking orders completely accorded with the algorithmic ranking orders. The quality consistency ranges derived from this observer study were close to those derived from our previous study. The observer study indicates that the proposed image-based quality assessment technique provides a robust reflection of the perceptual image quality of clinical chest radiographs. The derived quality consistency ranges can be used to automatically predict the acceptability of a clinical chest radiograph.
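
    The reported analysis step, correlating observer ranking orders with algorithmic ranking orders, is commonly done with a rank correlation such as Spearman's rho; the six-image ranks below are invented for illustration:

    ```python
    from scipy.stats import spearmanr

    # Hypothetical orderings of six ROI images for one metric: the
    # algorithm's order versus the averaged observer order.
    algorithmic_order = [1, 2, 3, 4, 5, 6]
    observer_order = [1, 2, 4, 3, 5, 6]

    rho, p = spearmanr(algorithmic_order, observer_order)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
    ```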

  10. Toward an efficient objective metric based on perceptual criteria

    Science.gov (United States)

    Quintard, Ludovic; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2008-01-01

    Quality assessment is a very challenging problem and will remain so, since it is difficult to define universal quality tools. Subjective assessment is one suitable approach, but it is tedious, time consuming and requires a normalized viewing room. Objective metrics can be full-reference, reduced-reference or no-reference. This paper presents a study carried out for the development of a no-reference objective metric dedicated to the quality evaluation of display devices. Initially, a subjective study was devoted to this problem by asking a representative panel (15 male and 15 female; 10 young adults, 10 adults and 10 seniors) to answer questions regarding their perception of several criteria for quality assessment. These quality factors were hue, saturation, contrast and texture. The aim was to define the importance of perceptual criteria in human judgments of quality. Following the study, the factors that impact the quality evaluation of display devices were identified. The no-reference objective metric was then developed using statistical tools that separate the important axes. This no-reference metric, based on perceptual criteria and integrating some specificities of the human visual system (HVS), has a high correlation with the subjective data.

  11. Data Quality Assessment for Maritime Situation Awareness

    Science.gov (United States)

    Iphar, C.; Napoli, A.; Ray, C.

    2015-08-01

    The Automatic Identification System (AIS) initially designed to ensure maritime security through continuous position reports has been progressively used for many extended objectives. In particular it supports a global monitoring of the maritime domain for various purposes like safety and security but also traffic management, logistics or protection of strategic areas, etc. In this monitoring, data errors, misuse, irregular behaviours at sea, malfeasance mechanisms and bad navigation practices have inevitably emerged either by inattentiveness or voluntary actions in order to circumvent, alter or exploit such a system in the interests of offenders. This paper introduces the AIS system and presents vulnerabilities and data quality assessment for decision making in maritime situational awareness cases. The principles of a novel methodological approach for modelling, analysing and detecting these data errors and falsification are introduced.

  12. Quality assessment of clinical computed tomography

    Science.gov (United States)

    Berndt, Dorothea; Luckow, Marlen; Lambrecht, J. Thomas; Beckmann, Felix; Müller, Bert

    2008-08-01

    Three-dimensional images are vital for diagnosis in dentistry and cranio-maxillofacial surgery. Artifacts caused by highly absorbing components such as metallic implants, however, limit the value of the tomograms. The dominant artifacts observed are blowout and streaks. By investigating the artifacts generated by metallic implants in a pig jaw, the data acquisition for dental patients can be optimized in a quantitative manner. A freshly explanted pig jaw including related soft tissues served as a model system. Images were recorded varying the accelerating voltage and the beam current. Comparison with multi-slice and micro computed tomography (CT) helps to validate the approach with the dental CT system (3D-Accuitomo, Morita, Japan). The data are rigidly registered to comparatively quantify their quality. The micro CT data provide a reasonable standard for quantitative data assessment of clinical CT.

  13. Assessment of daylight quality in simple rooms

    DEFF Research Database (Denmark)

    Johnsen, Kjeld; Dubois, Marie-Claude; Sørensen, Karl Grau

    The present report documents the results of a study on daylight conditions in simple rooms of residential buildings. The overall objective of the study was to develop a basis for a method for the assessment of daylight quality in a room with simple geometry and window configurations. As a tool for the analyses the Radiance Lighting Simulation System was used. A large number of simulations were performed for 3 rooms (window configurations) under overcast, intermediate, and 40-50 sunny sky conditions for each window (7 months, three orientations and for every other hour with direct sun penetration through the windows). A number of light indicators allowed understanding and describing the geometry of daylight in the space in a very detailed and thorough manner. The inclusion of the daylight factor, horizontal illuminance, luminance distribution, cylindrical illuminance, the Daylight Glare Index, vertical

  14. Quadrupolar metrics

    CERN Document Server

    Quevedo, Hernando

    2016-01-01

    We review the problem of describing the gravitational field of compact stars in general relativity. We focus on the deviations from spherical symmetry which are expected to be due to rotation and to the natural deformations of mass distributions. We assume that the relativistic quadrupole moment takes into account these deviations, and consider the class of axisymmetric static and stationary quadrupolar metrics which satisfy Einstein's equations in empty space and in the presence of matter represented by a perfect fluid. We formulate the physical conditions that must be satisfied for a particular spacetime metric to describe the gravitational field of compact stars. We present a brief review of the main static and axisymmetric exact solutions of Einstein's vacuum equations, satisfying all the physical conditions. We discuss how to derive particular stationary and axisymmetric solutions with quadrupolar properties by using the solution generating techniques which correspond either to Lie symmetries and Bäcku...

  15. Columbia River system operations - water quality assessment

    International Nuclear Information System (INIS)

    In mid-1990, the U.S. Army Corps of Engineers, U.S. Bureau of Reclamation, and Bonneville Power Administration embarked on a Columbia River system operation review (SOR). The goal of the SOR is to establish an updated operation strategy which best recognizes the various river uses as identified through community input. Ninety alternative operations of the Columbia and Snake River systems were proposed by various users. These users included the general public, irrigation and utility districts, as well as local, state and various Federal government agencies involved with specific water resource interests in the Columbia River basin. Ten technical work groups were formed to cover the spectrum of interest and to evaluate the alternative operations. Using simplified tools and risk-based analysis, each work group analyzed and then ranked the alternatives according to the effect on the work group's specific interest. The focus of the water quality technical work group is the impact assessment, on water quality and dissolved gas saturation, of the various operations proposed by special interests (i.e., hydropower, navigation, flood control, irrigation, recreation, cultural resources, wildlife, and anadromous and resident fisheries)

  16. Metrical Quantization

    OpenAIRE

    Klauder, John R.

    1998-01-01

    Canonical quantization may be approached from several different starting points. The usual approaches involve promotion of c-numbers to q-numbers, or path integral constructs, each of which generally succeeds only in Cartesian coordinates. All quantization schemes that lead to Hilbert space vectors and Weyl operators---even those that eschew Cartesian coordinates---implicitly contain a metric on a flat phase space. This feature is demonstrated by studying the classical and quantum "aggregati...

  17. Metrication manual

    International Nuclear Information System (INIS)

    In April 1978 a meeting of senior metrication officers convened by the Commonwealth Science Council of the Commonwealth Secretariat, was held in London. The participants were drawn from Australia, Bangladesh, Britain, Canada, Ghana, Guyana, India, Jamaica, Papua New Guinea, Solomon Islands and Trinidad and Tobago. Among other things, the meeting resolved to develop a set of guidelines to assist countries to change to SI and to compile such guidelines in the form of a working manual

  18. Quality assessment metrics for whole genome gene expression profiling of paraffin embedded samples

    OpenAIRE

    Mahoney, Douglas W.; Terry M. Therneau; Anderson, S. Keith; Jen, Jin; Kocher, Jean-Pierre A.; Reinholz, Monica M; Perez, Edith A.; Eckel-Passow, Jeanette E

    2013-01-01

    Background Formalin-fixed, paraffin-embedded tissues are most commonly used for routine pathology analysis and for long-term tissue preservation in the clinical setting. Many institutions have large archives of formalin-fixed, paraffin-embedded tissues that provide a unique opportunity for understanding genomic signatures of disease. However, genome-wide expression profiling of formalin-fixed, paraffin-embedded samples has been challenging due to RNA degradation. Because of the significant h...

  19. 42 CFR 493.1249 - Standard: Preanalytic systems quality assessment.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Preanalytic systems quality assessment... AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY REQUIREMENTS Quality System for Nonwaived Testing Preanalytic Systems § 493.1249 Standard: Preanalytic systems quality assessment. (a)...

  20. 42 CFR 493.1299 - Standard: Postanalytic systems quality assessment.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Postanalytic systems quality assessment... AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY REQUIREMENTS Quality System for Nonwaived Testing Postanalytic Systems § 493.1299 Standard: Postanalytic systems quality assessment. (a)...

  1. 42 CFR 493.1289 - Standard: Analytic systems quality assessment.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Analytic systems quality assessment. 493... HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY REQUIREMENTS Quality System for Nonwaived Testing Analytic Systems § 493.1289 Standard: Analytic systems quality assessment. (a)...

  2. Quality Assessment of Compressed Video for Automatic License Plate Recognition

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Støttrup-Andersen, Jesper; Forchhammer, Søren; Madsen, John

    2014-01-01

    Definition of video quality requirements for video surveillance poses new questions in the area of quality assessment. This paper presents a quality assessment experiment for an automatic license plate recognition scenario. We explore the influence of the compression by H.264/AVC and H.265/HEVC...

  3. The quality of assessment visits in community nursing.

    NARCIS (Netherlands)

    Kerkstra, A.; Beemster, F.

    1994-01-01

    The aim of this study was the measurement of the quality of assessment visits of community nurses in The Netherlands. Process criteria were derived for the quality of the assessment visits from the quality standards of community nursing care established by Appelman et al. Over a period of 8 weeks, a

  4. Analysis of Temporal Effects in Quality Assessment of High Definition Video

    Directory of Open Access Journals (Sweden)

    M. Slanina

    2012-04-01

    Full Text Available The paper deals with the temporal properties of a scoring session when assessing the subjective quality of full HD video sequences using continuous video quality tests. The performed experiment uses a modification of the standard test methodology described in ITU-R Rec. BT.500. It focuses on the reactive times and the time needed for the user ratings to stabilize at the beginning of a video sequence. In order to compare the subjective scores with objective quality measures, we also provide an analysis of PSNR and VQM for the considered sequences, finding that the correlation of the objective metric results with user scores recorded during playback differs significantly from the correlation with scores recorded after playback.
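
    PSNR, one of the two objective metrics analysed here, is straightforward to compute per frame; the synthetic frames below are placeholders (VQM, by contrast, would require a full implementation of its perceptual model):

    ```python
    import numpy as np

    def psnr(reference, distorted, max_value=255.0):
        """Peak signal-to-noise ratio (dB) between two frames."""
        diff = reference.astype(np.float64) - distorted.astype(np.float64)
        mse = np.mean(diff ** 2)
        return np.inf if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)

    rng = np.random.default_rng(2)
    ref = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)  # placeholder frame
    noisy = np.clip(ref + rng.normal(0.0, 5.0, ref.shape), 0, 255).astype(np.uint8)
    print(f"PSNR = {psnr(ref, noisy):.1f} dB")
    ```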

  5. Measuring and Assessing the Quality and Usefulness of Accounting Information

    OpenAIRE

    Gergana Tsoncheva

    2014-01-01

    High quality accounting information is of key importance for a large number of users, as it influences the quality of the decisions made. Providing high quality and useful accounting information is a prerequisite for the efficiency of the enterprise. Usefulness is determined by the quality of accounting information. Measuring and assessing the quality and usefulness of accounting information are of particular importance, as these activities will not only enhance the quality of economic decisi...

  6. Beef quality assessed at European research centres.

    Science.gov (United States)

    Dransfield, E; Nute, G R; Roberts, T A; Boccard, R; Touraille, C; Buchter, L; Casteels, M; Cosentino, E; Hood, D E; Joseph, R L; Schon, I; Paardekooper, E J

    1984-01-01

    Loin steaks and cubes of M. semimembranosus from eight (12 month old) Galloway steers and eight (16-18 month old) Charolais cross steers raised in England and from which the meat was conditioned for 2 or 10 days, were assessed in research centres in Belgium, Denmark, England, France, the Federal Republic of Germany, Ireland, Italy and the Netherlands. Laboratory panels assessed meat by grilling the steaks and cooking the cubes in casseroles according to local custom, using scales developed locally and scales used frequently at other research centres. The meat was mostly of good quality but with sufficient variation to obtain meaningful comparisons. Tenderness and juiciness were assessed most, and flavour least, consistently. Over the 32 meats, acceptability of steaks and casseroles was in general compounded from tenderness, juiciness and flavour. However, when the meat was tough, it dominated the overall judgement; but when tender, flavour played an important rôle. Irish and English panels tended to weight more on flavour and Italian panels on tenderness and juiciness. Juiciness and tenderness were well correlated among all panels except in Italy and Germany. With flavour, however, Belgian, Irish, German and Dutch panels ranked the meats similarly and formed a group distinct from the others which did not. The panels showed a similar grouping for judgements of acceptability. French and Belgian panels judged the steaks from the older Charolais cross steers to have more flavour and be more juicy than average and tended to prefer them. Casseroles from younger steers were invariably preferred although the French and Belgian panels judged aged meat from older animals equally acceptable. These regional biases were thought to be derived mainly from differences in cooking, but variations in experience and perception of assessors also contributed. PMID:22055992

  7. Data Complexity Metrics for XML Web Services

    Directory of Open Access Journals (Sweden)

    MISRA, S.

    2009-06-01

    Full Text Available Web services that are based on eXtensible Markup Language (XML) technologies enable integration of diverse IT processes and systems and have been gaining extraordinary acceptance, from the most basic to the most complicated business and scientific processes. Maintainability is one of the important factors that affect the quality of Web services, which can be seen as a kind of software project. The effective management of any type of software project requires modelling, measurement, and quantification. This study presents a metric for the assessment of the quality of Web services in terms of their maintainability. For this purpose we propose a data complexity metric that can be evaluated by analyzing the WSDL (Web Service Description Language) documents used for describing Web services.
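
    The proposal, a complexity metric evaluated directly from WSDL documents, can be sketched with a toy count of message and schema constructs. The specific counting rule below is an assumption for illustration, not the metric defined in the paper:

    ```python
    import xml.etree.ElementTree as ET

    WSDL = "{http://schemas.xmlsoap.org/wsdl/}"
    XSD = "{http://www.w3.org/2001/XMLSchema}"

    def data_complexity(wsdl_path):
        """Toy complexity score for a WSDL document: counts messages,
        message parts and XML Schema element declarations."""
        root = ET.parse(wsdl_path).getroot()
        messages = root.findall(f"{WSDL}message")
        parts = root.findall(f"{WSDL}message/{WSDL}part")
        elements = sum(1 for _ in root.iter(f"{XSD}element"))
        return len(messages) + len(parts) + elements

    # print(data_complexity("service.wsdl"))  # hypothetical input file
    ```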

  8. Audiovisual quality assessment and prediction for videotelephony

    CERN Document Server

    Belmudez, Benjamin

    2015-01-01

    The work presented in this book focuses on modeling audiovisual quality as perceived by the users of IP-based solutions for video communication like videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects related to the quality perception of entire video calls, namely, the quality estimation of the single audio and video modalities in an interactive context, the audiovisual quality integration of these modalities and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.

  9. Lupus anticoagulant : case-based external quality assessment

    NARCIS (Netherlands)

    van den Besselaar, A. M. H. P.; Devreese, K. M. J.; de Groot, P. G.; Castel, A.

    2009-01-01

    Aims: A model for presenting case histories with quality assessment material is to be developed for the Dutch external quality assessment (EQA) scheme for blood coagulation testing. The purpose of the present study was to assess the performance of clinical laboratories in case-based EQA using the cas

  10. Metric dynamics

    CERN Document Server

    Siparov, S V

    2015-01-01

    The suggested approach makes it possible to produce a consistent description of the motions of a physical system. It is shown that the concept of force fields defining the system's dynamics is equivalent to the choice of a corresponding metric of an anisotropic space, which is used for the modeling of physical reality and the processes that take place. Examples from hydrodynamics, electrodynamics, quantum mechanics and the theory of gravitation are discussed. This approach makes it possible to get rid of some known paradoxes; it can also be used for the further development of the theory.

  11. A new air quality perception scale for global assessment of air pollution health effects.

    Science.gov (United States)

    Deguen, Séverine; Ségala, Claire; Pédrono, Gaëlle; Mesbah, Mounir

    2012-12-01

    Despite improvements in air quality in developed countries, air pollution remains a major public health issue. To fully assess the health impact, we must consider that air pollution exposure has both physical and psychological effects; this latter dimension, less documented, is more difficult to measure and subjective indicators constitute an appropriate alternative. In this context, this work presents the methodological development of a new scale to measure the perception of air quality, useful as an exposure or risk appraisal metric in public health contexts. On the basis of the responses from 2,522 subjects in eight French cities, psychometric methods are used to construct the scale from 22 items that assess risk perception (anxiety about health and quality of life) and the extent to which air pollution is a nuisance (sensorial perception and symptoms). The scale is robust, reproducible, and discriminates between subpopulations more susceptible to poor air pollution perception. The individual risk factors of poor air pollution perception are coherent with those findings in the risk perception literature. Perception of air pollution by the general public is a key issue in the development of comprehensive risk assessment studies as well as in air pollution risk management and policy. This study offers a useful new tool to measure such efforts and to help set priorities for air quality improvements in combination with air quality measurements. PMID:22852801

  12. Assessment of Quality Management Practices Within the Healthcare Industry

    OpenAIRE

    Miller, William J.; Sumner, Andrew T.; Richard H. Deane

    2009-01-01

    Problem Statement: Considerable effort has been devoted over the years by many organizations to adopt quality management practices, but few studies have assessed critical factors that affect quality practices in healthcare organizations. The problem addressed in this study was to assess the critical factors influencing the quality management practices in a single important industry (i.e., healthcare). Approach: A survey instrument was adapted from business quality literature and was sent to a...

  13. Food quality assessment by NIR hyperspectral imaging

    Science.gov (United States)

    Whitworth, Martin B.; Millar, Samuel J.; Chau, Astor

    2010-04-01

    Near infrared reflectance (NIR) spectroscopy is well established in the food industry for rapid compositional analysis of bulk samples. NIR hyperspectral imaging provides new opportunities to measure the spatial distribution of components such as moisture and fat, and to identify and measure specific regions of composite samples. An NIR hyperspectral imaging system has been constructed for food research applications, incorporating a SWIR camera with a cooled 14 bit HgCdTe detector and N25E spectrograph (Specim Ltd, Finland). Samples are scanned in a pushbroom mode using a motorised stage. The system has a spectral resolution of 256 pixels covering a range of 970-2500 nm and a spatial resolution of 320 pixels covering a swathe adjustable from 8 to 300 mm. Images are acquired at a rate of up to 100 lines per second, enabling samples to be scanned within a few seconds. Data are captured using SpectralCube software (Specim) and analysed using ENVI and IDL (ITT Visual Information Solutions). Several food applications are presented. The strength of individual absorbance bands enables the distribution of particular components to be assessed. Examples are shown for detection of added gluten in wheat flour and to study the effect of processing conditions on fat distribution in chips/French fries. More detailed quantitative calibrations have been developed to study evolution of the moisture distribution in baguettes during storage at different humidities, to assess freshness of fish using measurements of whole cod and fillets, and for prediction of beef quality by identification and separate measurement of lean and fat regions.

  14. A Framework for Rapid and Systematic Software Quality Assessment

    OpenAIRE

    Brandtner, Martin

    2015-01-01

    Software quality assessment monitors and guides the evolution of a software system based on quality measurements. Continuous Integration (CI) environments can provide measurement data to feed such continuous assessments. However, in modern CI environments, data is scattered across multiple CI tools (e.g., build tool, version control system). Even small quality assessments can become extremely time-consuming, because each stakeholder has to search for the data she needs. In this thesis, we int...

  15. The quality of assessment visits in community nursing.

    OpenAIRE

    Kerkstra, A.; Beemster, F.

    1994-01-01

    The aim of this study was the measurement of the quality of assessment visits of community nurses in The Netherlands. Process criteria were derived for the quality of the assessment visits from the quality standards of community nursing care established by Appelman et al. Over a period of 8 weeks, a representative sample of 108 community nurses and 49 community nursing auxiliaries at 47 different locations paid a total number of 433 assessment visits. The nursing activities were recorded for ...

  16. MOBILE PHONE ACOUSTICS : PERFORMANCE EVALUATION OF SPECTRAL SUBTRACTION AND ELKO’S ALGORITHM USING SPEECH QUALITY METRICS

    OpenAIRE

    Chittajallu, Sai Kiran

    2013-01-01

    In recent years a great deal of effort has been expended to develop methods that determine the quality of speech through the use of comparative algorithms. These methods are designed to calculate an index value of quality that correlates to a mean opinion score given by human subjects in evaluation sessions. In this work we use PESQ (ITU-T Recommendation P.862), the ITU-T benchmark for objective measurement of speech quality. In mobile phone acoustics, the presence of noise and room r...

  17. Assessing the effects of sampling design on water quality status classification

    Science.gov (United States)

    Lloyd, Charlotte; Freer, Jim; Johnes, Penny; Collins, Adrian

    2013-04-01

    The Water Framework Directive (WFD) requires continued reporting of the water quality status of all European waterbodies, with this status partly determined by the time a waterbody exceeds different pollution concentration thresholds. Routine water quality monitoring most commonly takes place at weekly to monthly time steps, meaning that potentially important pollution events can be missed. This has the potential to result in the misclassification of water quality status. Against this context, this paper investigates the implications of sampling design for a range of existing water quality status metrics routinely applied in WFD compliance assessments. Previous research has investigated the effect of sampling design on the calculation of annual nutrient and sediment loads using a variety of different interpolation and extrapolation models. This work builds on that foundation, extending the analysis to include the effects of sampling regime on flow- and concentration-duration curves as well as threshold-exceedance statistics, which form an essential part of WFD reporting. The effects of sampling regime on both the magnitude of the summary metrics and their corresponding uncertainties are investigated. This analysis is being undertaken on data collected as part of the Hampshire Avon Demonstration Test Catchment (DTC) project, a DEFRA-funded initiative investigating cost-effective solutions for reducing diffuse pollution from agriculture. The DTC monitoring platform is collecting water quality data at a variety of temporal resolutions and using differing collection methods, including weekly grab samples, daily ISCO autosamples and high-resolution samples (15-30 min time step) from analysers in situ on the river bank. Datasets collected during 2011-2013 were used to construct flow- and concentration-duration curves. A bootstrapping methodology was employed to randomly resample the individual datasets and produce distributions of the curves in order to quantify the
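
    The bootstrapping step, resampling a dataset to obtain a distribution for a threshold-exceedance statistic, can be sketched as follows; the synthetic concentration series, the 0.1 mg/l threshold and the weekly decimation factor are all illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Synthetic high-frequency phosphorus concentrations (mg/l), one value
    # every 15 minutes for roughly 52 days.
    conc = rng.lognormal(mean=-3.0, sigma=0.6, size=5000)
    threshold = 0.1       # illustrative concentration threshold
    weekly = conc[::672]  # weekly grab-sample regime (672 = 7 d of 15-min steps)

    # Bootstrap the threshold-exceedance fraction under the sparse regime.
    exceedance = []
    for _ in range(1000):
        sample = rng.choice(weekly, size=weekly.size, replace=True)
        exceedance.append(np.mean(sample > threshold))

    lo, hi = np.percentile(exceedance, [2.5, 97.5])
    print(f"high-frequency exceedance: {np.mean(conc > threshold):.3f}")
    print(f"weekly-sampling 95% interval: [{lo:.3f}, {hi:.3f}]")
    ```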

  18. Efficient neural-network-based no-reference approach to an overall quality metric for JPEG and JPEG2000 compressed images

    NARCIS (Netherlands)

    Liu, H.; Redi, J.A.; Alers, H.; Zunino, R.; Heynderickx, I.E.J.R.

    2011-01-01

    Reliably assessing overall quality of JPEG/JPEG2000 coded images without having the original image as a reference is still challenging, mainly due to our limited understanding of how humans combine the various perceived artifacts to an overall quality judgment. A known approach to avoid the explicit

  19. Assessment of irritant quality of detergents

    Directory of Open Access Journals (Sweden)

    Singh Sanjay

    1991-01-01

    Full Text Available Irritant quality of six commonly used detergents was tested by Kligman and Wooding's technique. The detergents in increasing order of irritant quality were Surf, Sunlight, Nirma, Ekta, Fena and Wheel.

  20. Assessment of irritant quality of detergents

    OpenAIRE

    Singh Sanjay; Pandey S.; Singh Gurmohan

    1991-01-01

    Irritant quality of six commonly used detergents was tested by Kligman and Wooding's technique. The detergents in increasing order of irritant quality were Surf, Sunlight, Nirma, Ekta, Fena and Wheel.

  1. Blind image quality assessment using statistical independence in the divisive normalization transform domain

    Science.gov (United States)

    Chu, Ying; Mou, Xuanqin; Fu, Hong; Ji, Zhen

    2015-11-01

    We present a general purpose blind image quality assessment (IQA) method using the statistical independence hidden in the joint distributions of divisive normalization transform (DNT) representations for natural images. The DNT simulates the redundancy reduction process of the human visual system and has good statistical independence for natural undistorted images; meanwhile, this statistical independence changes as the images suffer from distortion. Inspired by this, we investigate the changes in statistical independence between neighboring DNT outputs across the space and scale for distorted images and propose an independence uncertainty index as a blind IQA (BIQA) feature to measure the image changes. The extracted features are then fed into a regression model to predict the image quality. The proposed BIQA metric is called statistical independence (STAIND). We evaluated STAIND on five public databases: LIVE, CSIQ, TID2013, IRCCyN/IVC Art IQA, and intentionally blurred background images. The performances are relatively high for both single- and cross-database experiments. When compared with the state-of-the-art BIQA algorithms, as well as representative full-reference IQA metrics, such as SSIM, STAIND shows fairly good performance in terms of quality prediction accuracy, stability, robustness, and computational costs.

  2. Objective assessment of the impact of frame rate on video quality

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Korhonen, Jari; Forchhammer, Søren

    2012-01-01

    In this paper, we present a novel objective quality metric that takes the impact of frame rate into account. The proposed metric uses PSNR, frame rate and a content dependent parameter that can easily be obtained from spatial and temporal activity indices. The results have been validated on data ...
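
    A sketch of the general idea, scaling PSNR by a saturating frame-rate term whose exponent depends on content activity, is given below. The functional form and the alpha parameter are assumptions for illustration; the paper's actual model is defined by its spatial and temporal activity indices:

    ```python
    def frame_rate_quality(psnr_db, frame_rate, max_rate=30.0, alpha=0.5):
        """Illustrative frame-rate-aware score: PSNR scaled by a saturating
        frame-rate term. alpha stands in for the content-dependent parameter
        derived from spatial/temporal activity (higher = more motion)."""
        return psnr_db * (frame_rate / max_rate) ** alpha

    print(frame_rate_quality(38.0, 15.0, alpha=0.8))  # high-motion content
    print(frame_rate_quality(38.0, 15.0, alpha=0.2))  # nearly static content
    ```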

  3. MODERN PRINCIPLES OF QUALITY ASSESSMENT OF CARDIOVASCULAR DISEASES TREATMENT

    Directory of Open Access Journals (Sweden)

    A. Yu. Suvorov

    2015-09-01

    Full Text Available The most common ways of assessing the treatment of cardiovascular diseases abroad, and approaches to creating such assessment methods, are considered, along with data on the principles of treatment assessment in Russia. Some foreign registers of acute myocardial infarction whose aim was therapy quality assessment are given as examples. The problem of high-quality treatment based on data from evidence-based medicine and some legal aspects related to clinical guidelines in Russia are considered, as well as various ways of assessing treatment quality.

  4. Ljubljana quality selection (LQS) - innovative case of restaurant assessment system

    OpenAIRE

    Maja Uran Maravić; Daniela Gračan; Zrinka Zadel

    2014-01-01

    The purpose – The purpose of this paper is to briefly present the most well-known restaurant assessment systems in which restaurants are assessed by experts. The aim is to highlight the strengths and weaknesses of each system. Design – The special focus is to answer the questions: how are restaurants assessed by experts, what are the elements and standards of assessment, and are they consistent with the quality dimensions advocated in the theory of service quality. Methodology ...

  5. Quality Index of Subtidal Macroalgae (QISubMac): A suitable tool for ecological quality status assessment under the scope of the European Water Framework Directive.

    Science.gov (United States)

    Le Gal, A; Derrien-Courtel, S

    2015-12-15

    Despite their representativeness and importance in coastal waters, subtidal rocky bottom habitats have been under-studied. This has resulted in a lack of available indicators for subtidal hard substrate communities. However, a few indicators using subtidal macroalgae have been developed in recent years for the purpose of being implemented into the Water Framework Directive (WFD). Accordingly, a quality index of subtidal macroalgae has been defined as a French assessment tool for subtidal rocky bottom habitats in coastal waters. This approach is based on 14 metrics that consider the depth penetration, composition (sensitive, characteristic and opportunistic) and biodiversity of macroalgae assemblages and complies with WFD requirements. Three ecoregions have been defined to fit with the geographical distribution of macroalgae along the French coastline. As a test, QISubMac was used to assess the water quality of 20 water bodies. The results show that QISubMac may discriminate among different quality classes of water bodies. PMID:26555795

  6. Survey and Assessment of Land Ecological Quality in Cixi City

    Institute of Scientific and Technical Information of China (English)

    Junbao LIU; Zhiyuan CHEN; Weifeng PAN; Shaojuan XIE

    2013-01-01

    Soil, atmosphere, water and the quality of agricultural products constitute the content of land ecological quality. Cixi City, through a pilot survey project of basic farmland quality, carried out a high-precision soil geochemical survey and a survey of agricultural products, irrigation water and air quality, and established an ecological quality evaluation model of land. Based on the evaluation of soil geochemical quality, we conducted a comprehensive quality assessment of atmosphere, water and agricultural products, and assessed the ecological quality of agricultural land in Cixi City. The evaluation results show that the ecological quality of most agricultural land in Cixi City is excellent, and there is ecological risk only in some local areas such as the urban periphery. The experimental results provide a demonstration and basis for the fine management of basic farmland and ecological protection.

  7. Quality Assessment and Economic Sustainability of Translation

    OpenAIRE

    Muzii, Luigi

    2006-01-01

    The concept of quality is mature and widespread. However, its associated attributes can only be measured against a set of specifications, since quality itself is a relative concept. Today, the concept of quality broadly corresponds to product suitability – meaning that the product meets the user’s requirements. But then, how does one know when a translation is good? No answer can be given to this very simple question without recourse to translation criticism and the theory of t...

  8. Assessing the quality of e-courses

    OpenAIRE

    SCHREURS, Jeanne; Moreau, Rachel

    2007-01-01

    The EFQM model of quality management is a universal model and is applied in this paper in the school context for the organisation of e-courses. We identified some quality criteria in this EFQM school quality model. We defined a simplified e-learning EFQM model supporting the evaluation by the learner. Based on it a questionnaire has been structured that can be used for the evaluation by the learner.

  9. Fingerprint Quality Assessment With Multiple Segmentation

    OpenAIRE

    Z. Yao; Le Bars, Jean-Marie; Charrier, C.; Rosenberger, Christophe

    2015-01-01

    Image quality is an important factor for automated fingerprint identification systems (AFIS) because matching performance can be significantly affected by poor quality samples. Most existing studies have focused on calculating a quality index represented by either a single feature or a combination of multiple features, while some others achieve this purpose with learning approaches that may depend on prior knowledge of matching performance. In this pa...

  10. Water Quality Assessment of Porsuk River, Turkey

    OpenAIRE

    Suheyla Yerel

    2010-01-01

    The surface water quality of the Porsuk River in Turkey was evaluated using multivariate statistical techniques including principal component analysis, factor analysis and cluster analysis. When principal component analysis and factor analysis were applied to the surface water quality data obtained from the eleven different observation stations, three factors were determined, which were responsible for 66.88% of the total variance of the surface water quality in the Porsuk River. Cluster analysis...

  11. Soil quality assessment under emerging regulatory requirements

    OpenAIRE

    Bone, James; Head, Martin; Barraclough, Declan; Archer, Michael; Scheib, Catherine; Flight, Dee; Voulvoulis, Nikolaos

    2010-01-01

    New and emerging policies that aim to set standards for protection and sustainable use of soil are likely to require identification of geographical risk/priority areas. Soil degradation can be seen as the change or disturbance in soil quality and it is therefore crucial that soil and soil quality are well understood to protect soils and to meet legislative requirements. To increase this understanding a review of the soil quality definition evaluated its development, with a formal scientific a...

  12. Solar thermal drying of apricots: Effect of spectrally-selective cabinet materials on drying rate and quality metrics (abstract)

    Science.gov (United States)

    Solar thermal (ST) drying is currently not in widespread commercial use due to concerns about slow drying rates and poor product quality. ST dryer cabinets could be constructed from spectrally-selective materials (materials which transmit only certain sunlight wavelength bands), but these types of ...

  13. A Review of Quality Measures for Assessing the Impact of Antimicrobial Stewardship Programs in Hospitals.

    Science.gov (United States)

    Akpan, Mary Richard; Ahmad, Raheelah; Shebl, Nada Atef; Ashiru-Oredope, Diane

    2016-01-01

    The growing problem of antimicrobial resistance (AMR) has led to calls for antimicrobial stewardship programs (ASP) to control antibiotic use in healthcare settings. Key strategies include prospective audit with feedback and intervention, and formulary restriction and preauthorization. Education, guidelines, clinical pathways, de-escalation, and intravenous to oral conversion are also part of some programs. Impact and quality of ASP can be assessed using process or outcome measures. Outcome measures are categorized as microbiological, patient or financial outcomes. The objective of this review was to provide an overview of quality measures for assessing ASP and the reported impact of ASP in peer-reviewed studies, focusing particularly on patient outcomes. A literature search of papers published in English between 1990 and June 2015 was conducted in five databases using a combination of search terms. Primary studies of any design were included. A total of 63 studies were included in this review. Four studies defined quality metrics for evaluating ASP. Twenty-one studies assessed the impact of ASP on antimicrobial utilization and cost, 25 studies evaluated impact on resistance patterns and/or rate of Clostridium difficile infection (CDI). Thirteen studies assessed impact on patient outcomes including mortality, length of stay (LOS) and readmission rates. Six of these 13 studies reported non-significant difference in mortality between pre- and post-ASP intervention, and five reported reductions in mortality rate. On LOS, six studies reported shorter LOS post intervention; a significant reduction was reported in one of these studies. Of note, this latter study reported significantly (p < 0.001) higher unplanned readmissions related to infections post-ASP. Patient outcomes need to be a key component of ASP evaluation. The choice of metrics is influenced by data and resource availability. Controlling for confounders must be considered in the design of evaluation studies

  14. A Review of Quality Measures for Assessing the Impact of Antimicrobial Stewardship Programs in Hospitals

    Directory of Open Access Journals (Sweden)

    Mary Richard Akpan

    2016-01-01

    Full Text Available The growing problem of antimicrobial resistance (AMR) has led to calls for antimicrobial stewardship programs (ASP) to control antibiotic use in healthcare settings. Key strategies include prospective audit with feedback and intervention, and formulary restriction and preauthorization. Education, guidelines, clinical pathways, de-escalation, and intravenous to oral conversion are also part of some programs. Impact and quality of ASP can be assessed using process or outcome measures. Outcome measures are categorized as microbiological, patient or financial outcomes. The objective of this review was to provide an overview of quality measures for assessing ASP and the reported impact of ASP in peer-reviewed studies, focusing particularly on patient outcomes. A literature search of papers published in English between 1990 and June 2015 was conducted in five databases using a combination of search terms. Primary studies of any design were included. A total of 63 studies were included in this review. Four studies defined quality metrics for evaluating ASP. Twenty-one studies assessed the impact of ASP on antimicrobial utilization and cost, 25 studies evaluated impact on resistance patterns and/or rate of Clostridium difficile infection (CDI). Thirteen studies assessed impact on patient outcomes including mortality, length of stay (LOS) and readmission rates. Six of these 13 studies reported non-significant difference in mortality between pre- and post-ASP intervention, and five reported reductions in mortality rate. On LOS, six studies reported shorter LOS post intervention; a significant reduction was reported in one of these studies. Of note, this latter study reported significantly (p < 0.001) higher unplanned readmissions related to infections post-ASP. Patient outcomes need to be a key component of ASP evaluation. The choice of metrics is influenced by data and resource availability. Controlling for confounders must be considered in the design of

  15. Development and Validation of Assessing Quality Teaching Rubrics

    Science.gov (United States)

    Chen, Weiyun; Mason, Steve; Hammond-Bennett, Austin; Zlamout, Sandy

    2014-01-01

    Purpose: This study aimed at examining the psychometric properties of the Assessing Quality Teaching Rubric (AQTR) that was designed to assess in-service teachers' quality levels of teaching practices in daily lessons. Methods: 45 physical education lessons taught by nine physical education teachers to students in grades K-5 were videotaped. They…

  16. Higher Education Quality Assessment in China: An Impact Study

    Science.gov (United States)

    Liu, Shuiyun

    2015-01-01

    This research analyses an external higher education quality assessment scheme in China, namely, the Quality Assessment of Undergraduate Education (QAUE) scheme. Case studies were conducted in three Chinese universities with different statuses. Analysis shows that the evaluated institutions responded to the external requirements of the QAUE…

  17. Quality Assessment of Internationalised Studies: Theory and Practice

    Science.gov (United States)

    Juknyte-Petreikiene, Inga

    2013-01-01

    The article reviews forms of higher education internationalisation at an institutional level. The relevance of theoretical background of internationalised study quality assessment is highlighted and definitions of internationalised studies quality are presented. Existing methods of assessment of higher education internationalisation are criticised…

  18. Assessing Pre-Service Teachers' Quality Teaching Practices

    Science.gov (United States)

    Chen, Weiyun; Hendricks, Kristin; Archibald, Kelsi

    2011-01-01

    The purpose of this study was to design and validate the Assessing Quality Teaching Rubrics (AQTR) that assesses the pre-service teachers' quality teaching practices in a live lesson or a videotaped lesson. Twenty-one lessons taught by 13 Physical Education Teacher Education (PETE) students were videotaped. The videotaped lessons were evaluated…

  19. Different Academics' Characteristics, Different Perceptions on Quality Assessment?

    Science.gov (United States)

    Cardoso, Sonia; Rosa, Maria Joao; Santos, Cristina S.

    2013-01-01

    Purpose: The purpose of this paper is to explore Portuguese academics' perceptions on higher education quality assessment objectives and purposes, in general, and on the recently implemented system for higher education quality assessment and accreditation, in particular. It aims to discuss the differences of those perceptions dependent on some…

  20. Real Time Face Quality Assessment for Face Log Generation

    DEFF Research Database (Denmark)

    Kamal, Nasrollahi; Moeslund, Thomas B.

    2009-01-01

    Summarizing a long surveillance video with just a few best-quality face images of each subject, a face log, is of great importance in surveillance systems. Face quality assessment is the backbone for face log generation, and improving the quality assessment makes the face logs more reliable. Developing a real-time face quality assessment system using the most important facial features and employing it for face log generation are the concerns of this paper. Extensive tests using four databases are carried out to validate the usability of the system.

  1. Quality Assurance of Assessment and Moderation Discourses Involving Sessional Staff

    Science.gov (United States)

    Grainger, Peter; Adie, Lenore; Weir, Katie

    2016-01-01

    Quality assurance is a major agenda in tertiary education. The casualisation of academic work, especially in teaching, is also a quality assurance issue. Casual or sessional staff members teach and assess more than 50% of all university courses in Australia, and yet the research in relation to the role sessional staff play in quality assurance of…

  2. Service Quality and Customer Satisfaction: An Assessment and Future Directions.

    Science.gov (United States)

    Hernon, Peter; Nitecki, Danuta A.; Altman, Ellen

    1999-01-01

    Reviews the literature of library and information science to examine issues related to service quality and customer satisfaction in academic libraries. Discusses assessment, the application of a business model to higher education, a multiple constituency approach, decision areas regarding service quality, resistance to service quality, and future…

  3. Food quality assessment in parent-offspring dyads

    DEFF Research Database (Denmark)

    Bech-Larsen, Tino; Jensen, Birger Boutrup

    When the buyer and the consumer of a food product are not identical, the risk of discrepancies between food quality expectations and experiences is even higher. We introduce the concept of dyadic quality assessment and apply it to an exploration of parents' willingness to pay for new and healthier in-between meals for their children. Results show poor congruence between parent and child quality assessment due to the two parties emphasising quite different quality aspects. Improved parental knowledge of their children's quality experience however has a significant effect on parents' willingness to pay. Accordingly, both parents and children should be involved when developing and testing healthy in-between meals.

  4. SOIL QUALITY ASSESSMENT USING FUZZY MODELING

    Science.gov (United States)

    Maintaining soil productivity is essential if agricultural production systems are to be sustainable; thus soil quality is an essential issue. However, there is a paucity of measurement tools for understanding changes in soil quality. Here the possibility of using fuzzy modeling t...

  5. Assessing water quality in Lake Naivasha

    NARCIS (Netherlands)

    Ndungu, Jane Njeri

    2014-01-01

    Water quality in aquatic systems is important because it maintains the ecological processes that support biodiversity. However, declining water quality due to environmental perturbations threatens the stability of the biotic integrity and therefore hinders the ecosystem services and functions of aqu

  6. Water depletion: An improved metric for incorporating seasonal and dry-year water scarcity into water risk assessments

    OpenAIRE

    Kate A. Brauman; Brian D. Richter; Sandra Postel; Marcus Malsy; Martina Flörke

    2016-01-01

    Abstract We present an improved water-scarcity metric we call water depletion, calculated as the fraction of renewable water consumptively used for human activities. We employ new data from the WaterGAP3 integrated global water resources model to illustrate water depletion for 15,091 watersheds worldwide, constituting 90% of total land area. Our analysis illustrates that moderate water depletion at an annual time scale is better characterized as high depletion at a monthly time scale and we a...
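
    The metric itself is a simple fraction, consumptive use divided by renewable supply, and the abstract's key point is that annual averages hide monthly stress. A toy example with invented monthly figures:

    ```python
    # Hypothetical monthly renewable supply and consumptive use for one
    # watershed (million m^3); annual depletion can mask seasonal stress.
    supply = [90, 80, 70, 50, 30, 15, 10, 12, 25, 50, 70, 85]
    use = [20, 20, 22, 24, 26, 12, 8, 9, 20, 22, 20, 20]

    annual = sum(use) / sum(supply)
    monthly = [u / s for u, s in zip(use, supply)]
    print(f"annual depletion: {annual:.0%}")      # moderate on average
    print(f"worst month: {max(monthly):.0%}")     # high at a monthly scale
    ```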

  7. Assessment of contaminant exposure, diet, and population metrics of river otters (Lontra canadensis) along the coast of southern Vancouver Island

    OpenAIRE

    Guertin, Daniel

    2009-01-01

    North American river otters (Lontra canadensis) are useful indicators of aquatic ecosystem health, but obtaining information on populations is difficult and expensive. By combining non-invasive faecal sampling with DNA genotyping techniques, I investigated: (i) environmental contaminant exposure, (ii) diet, and (iii) population metrics of river otters along the urban coast of southern Vancouver Island, British Columbia, Canada. In Victoria Harbour, mean faecal concentrations of polychlorinate...

  8. Integration of MODIS-derived metrics to assess interannual variability in snowpack, lake ice, and NDVI in southwest Alaska

    Science.gov (United States)

    Reed, B.; Budde, M.; Spencer, P.; Miller, A.E.

    2009-01-01

    Impacts of global climate change are expected to result in greater variation in the seasonality of snowpack, lake ice, and vegetation dynamics in southwest Alaska. All have wide-reaching physical and biological ecosystem effects in the region. We used Moderate Resolution Imaging Spectroradiometer (MODIS) calibrated radiance, snow cover extent, and vegetation index products for interpreting interannual variation in the duration and extent of snowpack, lake ice, and vegetation dynamics for southwest Alaska. The approach integrates multiple seasonal metrics across large ecological regions. Throughout the observation period (2001-2007), snow cover duration was stable within ecoregions, with variable start and end dates. The start of the lake ice season lagged the snow season by 2 to 3 months. Within a given lake, freeze-up dates varied in timing and duration, while break-up dates were more consistent. Vegetation phenology varied less than snow and ice metrics, with start-of-season dates comparatively consistent across years. The start of growing season and snow melt were related to one another as they are both temperature dependent. Higher than average temperatures during the El Niño winter of 2002-2003 were expressed in anomalous ice and snow season patterns. We are developing a consistent, MODIS-based dataset that will be used to monitor temporal trends of each of these seasonal metrics and to map areas of change for the study area.

  9. Assessing the Quality of MT Systems for Hindi to English Translation

    OpenAIRE

    Kalyani, Aditi; Kumud, Hemant; Singh, Shashi Pal; Kumar, Ajai

    2014-01-01

    Evaluation plays a vital role in checking the quality of MT output. It is done either manually or automatically. Manual evaluation is very time-consuming and subjective, so automatic metrics are used most of the time. This paper evaluates the translation quality of different MT engines for Hindi-English (Hindi data is provided as input and English is obtained as output) using various automatic metrics such as BLEU, METEOR etc. Further, the comparison of automatic evaluation results with Hu...

  10. Statistical quality assessment of a fingerprint

    Science.gov (United States)

    Hwang, Kyungtae

    2004-08-01

    The quality of a fingerprint is essential to the performance of AFIS (Automatic Fingerprint Identification System). Such quality may be classified by the clarity and regularity of ridge-valley structures. One may calculate the thickness of ridges and valleys to measure clarity and regularity. However, calculating thickness is not feasible in a poor-quality image, especially in severely damaged images that contain broken ridges (or valleys). To overcome this difficulty, the proposed approach employs statistical properties of a local block: the mean and spread of the thickness of both ridge and valley. The mean value is used to determine whether a fingerprint is wet or dry. For example, black pixels are dominant if a fingerprint is wet, so the average thickness of the ridges is larger than that of the valleys, and vice versa in a dry fingerprint. In addition, the standard deviation is used to determine the severity of damage. In this study, quality is divided into three categories based on the two statistical properties mentioned above: wet, good, and dry. The number of low-quality blocks is used to measure the global quality of a fingerprint. In addition, the distribution of poor blocks is measured using Euclidean distances between groups of poor blocks; with this scheme, locally condensed poor blocks decrease the overall quality of an image. Experimental results on fingerprint images captured by optical devices as well as by a rolling method show that the wet and dry parts of images were successfully captured. Enhancing an image by employing morphology techniques that modify the detected poor-quality blocks is illustrated in section 3. However, more work needs to be done on designing a scheme to incorporate the number of poor blocks and their distributions into a global quality measure.
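
    As an illustration of the block statistics described above, the sketch below labels a binarized fingerprint block from the mean and spread of ridge/valley run lengths. The thresholds (T_WET, T_DRY, T_DAMAGE) and the helper names are hypothetical; the record does not give exact values.

        import numpy as np

        T_WET, T_DRY, T_DAMAGE = 1.4, 0.7, 2.0  # hypothetical thresholds

        def run_lengths(row, value):
            """Lengths of consecutive runs of `value` (0=ridge, 1=valley) in a binary row."""
            runs, count = [], 0
            for px in row:
                if px == value:
                    count += 1
                elif count:
                    runs.append(count)
                    count = 0
            if count:
                runs.append(count)
            return runs

        def classify_block(block):
            """Label a binarized block as 'wet', 'dry', 'damaged' or 'good'."""
            ridge = [r for row in block for r in run_lengths(row, 0)]
            valley = [r for row in block for r in run_lengths(row, 1)]
            if not ridge or not valley:                 # degenerate block
                return "dry" if not ridge else "wet"
            ratio = np.mean(ridge) / np.mean(valley)    # mean thickness ratio
            spread = np.std(ridge + valley)             # damage indicator
            if spread > T_DAMAGE * np.mean(ridge + valley):
                return "damaged"
            if ratio > T_WET:
                return "wet"                            # ridges dominate: too much moisture
            if ratio < T_DRY:
                return "dry"                            # valleys dominate
            return "good"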

  11. Assessment of Total Quality Management Practices in a Public Organization

    OpenAIRE

    Özçakar, Necdet

    2010-01-01

    Total quality management has been very popular in the business world in recent decades. Various studies have shown that employees' assessments of total quality management practices have a great bearing on the success of total quality management implementation. However, the implementation of total quality management in public organizations is quite different from its implementation in the private sector because of the different natures of the two sectors. The aim of this research is to analyze the assessm...

  12. Computing and Interpreting Fisher Information as a Metric of Sustainability: Regime Changes in the United States Air Quality

    Science.gov (United States)

    As a key tool in information theory, Fisher Information has been used to explore the observable behavior of a variety of systems. In particular, recent work has demonstrated its ability to assess the dynamic order of real and model systems. However, in order to solidify the use o...

  13. Assessing Requirements Quality through Requirements Coverage

    Science.gov (United States)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. The shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, that is, determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of validation activities has largely been determined through ad hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software

  14. WATER QUALITY ASSESSMENT OF AMERICAN FALLS RESERVOIR

    Science.gov (United States)

    A water quality model was developed to support a TMDL for phosphorus related to phytoplankton growth in the reservoir. This report documents the conceptual model, available data, model evaluation, and simulation results.

  15. National Water Quality Assessment (NAWQA) Program

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — National scope of NAWQA water-quality sample- and laboratory-result data and other supporting information obtained from NWIS systems hosted by individual Water...

  16. Quality Assessment of Family Medicine Teams Based on Accreditation Standards

    OpenAIRE

    Valjevac, Salih; Ridjanovic, Zoran; Masic, Izet

    2009-01-01

    In order to speed up and simplify the self-assessment and external assessment process, provide a better overview of and access to the Accreditation Standards for Family Medicine Teams, and improve assessment document archiving, the Agency for Healthcare Quality and Accreditation in the Federation of Bosnia and Herzegovina (AKAZ) has developed self-assessment and external assessment software for family medicine teams. This article presents the development of standardized sof...

  17. ON SOIL QUALITY AND ITS ASSESSING

    OpenAIRE

    N. Florea

    2007-01-01

    The term “soil quality” has been used until now with different connotations; its meaning has become more comprehensive nowadays. The most adequate definition of “soil quality” is: “the capacity of a specific kind of soil to function, within natural or managed ecosystem boundaries, to sustain plant and animal productivity, maintain or enhance water and air quality and support human health and habitation” (Karlen et al., 1998). One distinguishes a native soil quality, in natural conditions, ...

  18. Doctors or technicians: assessing quality of medical education

    Directory of Open Access Journals (Sweden)

    Tayyab Hasan

    2010-09-01

    Medical education institutions usually adopt industrial quality management models that measure the quality of the process of a program but not the quality of the product. The purpose of this paper is to analyze the impact of industrial quality management models on medical education and students, and to highlight the importance of introducing a proper educational quality management model. Industrial quality management models can measure the training component in terms of competencies, but they lack measurement of the educational component. These models use performance indicators to assess their process improvement efforts. Researchers suggest that the performance indicators used in educational institutions may only measure their fiscal efficiency without measuring the quality of the educational experience of the students. In most of the institutions where industrial models are used for quality assurance, students are considered as customers and are provided with the maximum services and facilities possible. Institutions are required to fulfill a list of recommendations from the quality control agencies in order to enhance student satisfaction and to guarantee standard services. Quality of medical education should be assessed by measuring the impact of the educational program and quality improvement procedures in terms of knowledge base development, behavioral change, and patient care. Industrial quality models may focus on academic support services and processes, but educational quality models should be introduced in parallel to focus on educational standards and products. Keywords: educational quality, medical education, quality control, quality assessment, quality management models

  19. Assessing quality in software development: An agile methodology approach

    Directory of Open Access Journals (Sweden)

    V. Rodríguez-Hernández

    2015-06-01

    A novel methodology, the result of 10 years of in-field testing, which makes possible the convergence of different types of models and quality standards for Engineering and Computer Science Faculties, is presented. Since most software-developing companies are small and medium sized, the projects developed must focus on SCRUM and Extreme Programming (XP), as opposed to RUP, which is quite heavy, as well as on the Personal Software Process (PSP) and Team Software Process (TSP), which provide students with competences and a structured framework. The ISO 90003:2004 norm is employed to define the processes by means of a quality system without adding new requirements or changing the existing ones. Also, the model is based on ISO/IEC 25000 (ISO/IEC 9126 – ISO/IEC 14598) to allow comparing software using different metrics.

  20. Key Elements for Judging the Quality of a Risk Assessment

    Science.gov (United States)

    Fenner-Crisp, Penelope A.; Dellarco, Vicki L.

    2016-01-01

    Background: Many reports have been published that contain recommendations for improving the quality, transparency, and usefulness of decision making for risk assessments prepared by agencies of the U.S. federal government. A substantial measure of consensus has emerged regarding the characteristics that high-quality assessments should possess. Objective: The goal was to summarize the key characteristics of a high-quality assessment as identified in the consensus-building process and to integrate them into a guide for use by decision makers, risk assessors, peer reviewers and other interested stakeholders to determine if an assessment meets the criteria for high quality. Discussion: Most of the features cited in the guide are applicable to any type of assessment, whether it encompasses one, two, or all four phases of the risk-assessment paradigm; whether it is qualitative or quantitative; and whether it is screening level or highly sophisticated and complex. Other features are tailored to specific elements of an assessment. Just as agencies at all levels of government are responsible for determining the effectiveness of their programs, so too should they determine the effectiveness of their assessments used in support of their regulatory decisions. Furthermore, if a nongovernmental entity wishes to have its assessments considered in the governmental regulatory decision-making process, then these assessments should be judged in the same rigorous manner and be held to similar standards. Conclusions: The key characteristics of a high-quality assessment can be summarized and integrated into a guide for judging whether an assessment possesses the desired features of high quality, transparency, and usefulness. Citation: Fenner-Crisp PA, Dellarco VL. 2016. Key elements for judging the quality of a risk assessment. Environ Health Perspect 124:1127–1135; http://dx.doi.org/10.1289/ehp.1510483 PMID:26862984

  1. Coastal Water Quality Assessment by Self-Organizing Map

    Institute of Scientific and Technical Information of China (English)

    NIU Zhiguang; ZHANG Hongwei; ZHANG Ying

    2005-01-01

    A new approach to coastal water quality assessment was put forward through a study of the self-organizing map (SOM). First, water quality data for Bohai Bay from 1999 to 2002 were prepared. Then, a set of software for coastal water quality assessment was developed, based on the batch-version SOM algorithm and the SOM Toolbox, in the MATLAB environment. Furthermore, the training results of the SOM could be analyzed against single water quality indexes, the N:P (atomic) ratio and the eutrophication index E, so that the data were clustered into five different pollution types using the k-means clustering method. Finally, the serial trajectory of the monitoring data could be tracked, and new data classified and assessed automatically. Through application it was found that this study helps to analyze and assess coastal water quality by means of several kinds of graphics, which offers easy decision support for recognizing pollution status and taking corresponding measures.
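
    A minimal sketch of the SOM-plus-k-means workflow described above, assuming the third-party minisom package and scikit-learn as stand-ins for the MATLAB SOM Toolbox used in the study; the data here are synthetic placeholders for the Bohai Bay water quality indexes.

        import numpy as np
        from minisom import MiniSom
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        samples = rng.random((200, 6))           # 200 samples x 6 water quality indexes

        som = MiniSom(8, 8, 6, sigma=1.5, learning_rate=0.5, random_seed=0)
        som.train_batch(samples, 5000)           # batch training, as in the paper

        # Cluster the trained prototype vectors into five pollution types.
        prototypes = som.get_weights().reshape(-1, 6)
        types = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(prototypes)

        # Assess a new observation: map it to its best-matching unit, read off its type.
        bmu = som.winner(samples[0])             # (row, col) of best-matching unit
        print("pollution type:", types[bmu[0] * 8 + bmu[1]])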

  2. MEASURING OBJECT-ORIENTED SYSTEMS BASED ON THE EXPERIMENTAL ANALYSIS OF THE COMPLEXITY METRICS

    Directory of Open Access Journals (Sweden)

    J.S.V.R.S.SASTRY,

    2011-05-01

    Metrics help a software engineer perform quantitative analysis to assess the quality of a design before a system is built. The focus of object-oriented metrics is on the class, which is the fundamental building block of object-oriented architecture. These metrics address internal object structure and external object structure. Internal object structure reflects the complexity of each individual entity, such as methods and classes. External complexity measures the interaction among entities, such as coupling and inheritance. This paper focuses on a set of object-oriented metrics that can be used to measure the quality of an object-oriented design, covering two families of complexity metrics in the object-oriented paradigm: the MOOD metrics and the Lorenz & Kidd metrics. The MOOD metrics consist of the method inheritance factor (MIF), coupling factor (CF), attribute inheritance factor (AIF), method hiding factor (MHF), attribute hiding factor (AHF), and polymorphism factor (PF). The Lorenz & Kidd metrics consist of number of operations overridden (NOO), number of operations added (NOA), and specialization index (SI). MOOD and Lorenz & Kidd measurements are used mainly by designers and testers. Designers use these metrics to assess the software early in the process, making changes that will reduce complexity and improve the continuing capability of the design. Testers use them to determine the complexity, the performance of the system, and the quality of the software. This paper reviews how the MOOD and Lorenz & Kidd metrics have been validated theoretically and empirically. In this paper, work has been done to explore the quality of design of software components using the object-oriented paradigm. A number of object-oriented metrics have been proposed in the literature for measuring design attributes such as inheritance, coupling and polymorphism. Here, these metrics have been used to analyze various features of software components. Complexity of methods
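
    As one concrete example, the method inheritance factor (MIF) named above is the ratio of inherited methods to all methods available across the classes of a system. A small sketch, computed by Python introspection over a toy hierarchy (the toy classes are, of course, not from the paper):

        def mif(classes):
            """Method Inheritance Factor: inherited methods / all available methods."""
            inherited = available = 0
            for cls in classes:
                methods = {name for name in dir(cls) if not name.startswith("__")}
                own = set(vars(cls))                  # methods defined in the class itself
                inherited += len(methods - own)
                available += len(methods)
            return inherited / available if available else 0.0

        class Shape:
            def area(self): ...
            def perimeter(self): ...

        class Circle(Shape):
            def area(self): ...                       # overridden, counts as defined

        print(f"MIF = {mif([Shape, Circle]):.2f}")    # Circle inherits perimeter only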

  3. Virginia Star Quality Initiative: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    Science.gov (United States)

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Virginia's Star Quality Initiative prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators…

  4. Palm Beach Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    Science.gov (United States)

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Palm Beach's Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…

  5. Factors influencing assessment quality in higher vocational education

    NARCIS (Netherlands)

    Baartman, L.; Gulikers, J.T.M.; Dijkstra, A.

    2013-01-01

    The development of assessments that are fit to assess professional competence in higher vocational education requires a reconsideration of assessment methods, quality criteria and (self)evaluation. This article examines the self-evaluations of nine courses of a large higher vocational education inst

  6. Assessment report for Hanford analytical services quality assurance plan

    International Nuclear Information System (INIS)

    This report documents the assessment results of DOE/RL-94-55, Hanford Analytical Services Quality Assurance Plan. The assessment was conducted using the Requirement and Self-Assessment Database (RSAD), which contains mandatory and nonmandatory DOE Order statements for the relevant DOE orders

  7. A new reduced-reference metric for measuring spatial resolution enhanced images

    Science.gov (United States)

    Qian, Shen-En; Chen, Guangyi

    2012-10-01

    Assessment of image quality is critical for many image processing algorithms, such as image acquisition, compression, restoration, enhancement, and reproduction. In general, image quality assessment algorithms are classified into three categories: full-reference (FR), reduced-reference (RR), and no-reference (NR) algorithms. The design of NR metrics is extremely difficult and little progress has been made. FR metrics are easier to design, and the majority of image quality assessment algorithms are of this type. A FR metric requires the reference image and the test image to have the same size. This may not be the case in real-life image processing. In spatial resolution enhancement of hyperspectral images, such as pan-sharpening, the size of the enhanced images is larger than that of the original image. Thus, the FR metric cannot be used. A common approach in practice is to first down-sample an original image to a low-resolution image, then to spatially enhance the down-sampled low-resolution image using the enhancement technique under study. In this way, the original image and the enhanced image have the same size, and the FR metric can be applied to them. However, this common approach can never directly assess the image quality of the spatially enhanced image that is produced directly from the original image. In this paper, a new RR metric is proposed for measuring the visual fidelity of an image with higher spatial resolution. It does not require the sizes of the reference image and the test image to be the same. The iterative back projection (IBP) technique was chosen to enhance the spatial resolution of an image. Experimental results showed that the proposed RR metric works well for measuring the visual quality of spatially enhanced hyperspectral images and is consistent with the corresponding FR metrics.
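
    A minimal sketch of iterative back projection, the enhancement technique the record uses as its test case: repeatedly downsample the current estimate, compare with the observed low-resolution image, and back-project the error. scipy's zoom stands in for the paper's (unspecified) down/upsampling operators, and the step size and iteration count are illustrative.

        import numpy as np
        from scipy.ndimage import zoom

        def ibp(low_res, scale=2, iters=20, step=1.0):
            high = zoom(low_res, scale, order=3)              # initial high-res guess
            for _ in range(iters):
                simulated = zoom(high, 1.0 / scale, order=3)  # project estimate down
                error = low_res - simulated                   # mismatch at low resolution
                high += step * zoom(error, scale, order=3)    # back-project the error up
            return high

        rng = np.random.default_rng(4)
        y = rng.random((32, 32))
        print(ibp(y).shape)  # (64, 64)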

  8. The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing

    Science.gov (United States)

    DeLoach, Richard

    2016-01-01

    Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.

  9. The Application of Visual Saliency Models in Objective Image Quality Assessment: A Statistical Evaluation.

    Science.gov (United States)

    Zhang, Wei; Borji, Ali; Wang, Zhou; Le Callet, Patrick; Liu, Hantao

    2016-06-01

    Advances in image quality assessment have shown the potential added value of including visual attention aspects in its objective assessment. Numerous models of visual saliency are implemented and integrated in different image quality metrics (IQMs), but the gain in reliability of the resulting IQMs varies to a large extent. Knowledge of the causes and trends of this variation would be highly beneficial for the further improvement of IQMs, but they are not fully understood. In this paper, an exhaustive statistical evaluation is conducted to justify the added value of computational saliency in objective image quality assessment, using 20 state-of-the-art saliency models and 12 best-known IQMs. Quantitative results show that the difference in predicting human fixations between saliency models is sufficient to yield a significant difference in performance gain when adding these saliency models to IQMs. However, surprisingly, the extent to which an IQM can profit from adding a saliency model does not appear to have direct relevance to how well this saliency model can predict human fixations. Our statistical analysis provides useful guidance for applying saliency models in IQMs, in terms of the effect of saliency model dependence, IQM dependence, and image distortion dependence. The testbed and software are made publicly available to the research community. PMID:26277009
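
    The basic integration strategy being evaluated can be sketched as saliency-weighted pooling of a local quality map: instead of averaging the map uniformly, each location is weighted by its saliency. Both maps below are random stand-ins; the study itself combines 20 published saliency models with 12 IQMs.

        import numpy as np

        def saliency_weighted_score(quality_map, saliency_map, eps=1e-12):
            """Pool a local quality map with saliency weights instead of a plain mean."""
            w = saliency_map / (saliency_map.sum() + eps)
            return float((w * quality_map).sum())

        rng = np.random.default_rng(1)
        qmap = rng.uniform(0.6, 1.0, size=(64, 64))   # stand-in local quality map
        smap = rng.random((64, 64)) ** 2              # stand-in saliency map (peaked)

        print("plain mean:", qmap.mean())
        print("saliency-weighted:", saliency_weighted_score(qmap, smap))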

  10. Groundwater Quality Assessment Based on Improved Water Quality Index in Pengyang County, Ningxia, Northwest China

    OpenAIRE

    Li Pei-Yue; Qian Hui; Wu Jian-Hua

    2010-01-01

    The aim of this work is to assess the groundwater quality in Pengyang County based on an improved water quality index (WQI). An information entropy method was introduced to assign a weight to each parameter. To calculate the WQI and assess the groundwater quality, a total of 74 groundwater samples were collected, and all these samples were subjected to comprehensive physicochemical analysis. Each of the groundwater samples was analyzed for 26 parameters, and for computing the WQI 14 parameters were chosen, including c...
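
    A sketch of the entropy-weighting idea named above, under the usual formulation: weights come from the information entropy of each parameter's normalized values, and the WQI is the weighted sum of concentration-to-standard ratings. The parameters, standards and data are illustrative, not the Pengyang values.

        import numpy as np

        conc = np.array([[0.8, 45.0, 210.0],      # samples x parameters (e.g. F, NO3, TDS)
                         [1.2, 60.0, 480.0],
                         [0.4, 20.0, 150.0]])
        standard = np.array([1.0, 50.0, 500.0])   # permissible limits (illustrative)

        # Entropy weights: normalize each column, compute its entropy, and give
        # low-entropy (more informative) parameters a larger weight.
        p = conc / conc.sum(axis=0)
        entropy = -(p * np.log(p)).sum(axis=0) / np.log(len(conc))
        weights = (1 - entropy) / (1 - entropy).sum()

        rating = conc / standard * 100            # quality rating per parameter
        wqi = (weights * rating).sum(axis=1)      # one WQI per sample
        print(np.round(wqi, 1))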

  11. Objective and Subjective Assessment of Digital Pathology Image Quality

    Directory of Open Access Journals (Sweden)

    Prarthana Shrestha

    2015-03-01

    The quality of an image produced by Whole Slide Imaging (WSI) scanners is of critical importance for using the image in clinical diagnosis. Therefore, it is very important to monitor and ensure the quality of images. Since subjective image quality assessment by pathologists is very time-consuming, expensive and difficult to reproduce, we propose a method for objective assessment based on clinically relevant and perceptual image parameters: sharpness, contrast, brightness, uniform illumination and color separation, derived from a survey of pathologists. We developed techniques to quantify the parameters based on content-dependent absolute pixel performance, and to manipulate the parameters in a predefined range, resulting in images with content-independent relative quality measures. The method does not require a prior reference model. A subjective assessment of image quality was performed involving 69 pathologists and 372 images (including 12 optimal-quality images and their distorted versions per parameter at 6 different levels). To address inter-reader variability, a representative rating is determined as a one-tailed 95% confidence interval of the mean rating. The results of the subjective assessment support the validity of the proposed objective image quality assessment method as a model of the readers' perception of image quality. The subjective assessment also provides thresholds for determining the acceptable level of objective quality per parameter. The images for both the subjective and objective quality assessments are based on HercepTest™ slides scanned by the Philips Ultra Fast Scanners, developed at Philips Digital Pathology Solutions. However, the method is also applicable to other types of slides and scanners.

  12. Presentation: Visual analytics for automatic quality assessment of user-generated content on the English Wikipedia

    OpenAIRE

    David Strohmaier

    2015-01-01

    Related work has shown that it is possible to automatically measure the quality of Wikipedia articles. Yet, despite all these quality measures, it is difficult to identify what would improve an article. Therefore this master thesis is about an interactive graphic tool made for ranking and editing Wikipedia articles with support from quality measures. The contribution of this work is twofold: i) The Quality Analyzer that allows for creating new quality metrics and co...

  13. E-Services quality assessment framework for collaborative networks

    Science.gov (United States)

    Stegaru, Georgiana; Danila, Cristian; Sacala, Ioan Stefan; Moisescu, Mihnea; Mihai Stanescu, Aurelian

    2015-08-01

    In a globalised networked economy, collaborative networks (CNs) are formed to take advantage of new business opportunities. Collaboration involves shared resources and capabilities, such as e-Services that can be dynamically composed to automate CN participants' business processes. Quality is essential for the success of business process automation. Current approaches mostly focus on quality of service (QoS)-based service selection and ranking algorithms, overlooking the process of service composition, which requires interoperable, adaptable and secure e-Services to ensure seamless collaboration, data confidentiality and integrity. Lack of assessment of these quality attributes can result in e-Service composition failure. The quality of e-Service composition relies on the quality of each e-Service and on the quality of the composition process. Therefore, there is a need for a framework that addresses quality from both views: product and process. We propose a quality of e-Service composition (QoESC) framework for quality assessment of e-Service composition for CNs, which comprises a quality model for e-Service evaluation and guidelines for the quality of the e-Service composition process. We implemented a prototype considering a simplified telemedicine use case which involves a CN in the e-Healthcare domain. To validate the proposed quality-driven framework, we analysed service composition reliability with and without using the proposed framework.

  14. An experimental evaluation of the Sternberg task as a workload metric for helicopter Flight Handling Qualities (FHQ) research

    Science.gov (United States)

    Hemingway, J. C.

    1984-01-01

    The objective was to determine whether the Sternberg item-recognition task, employed as a secondary-task measure of spare mental capacity for flight handling qualities (FHQ) simulation research, could help to differentiate between different flight-control conditions. FHQ evaluations were conducted on the Vertical Motion Simulator at Ames Research Center to investigate different primary flight-control configurations, and selected stability and control augmentation levels, for helicopters engaged in low-level flight regimes. The Sternberg task was superimposed on the primary flight-control task in a balanced experimental design. The results of parametric statistical analysis of the Sternberg secondary-task data failed to support the continued use of this task as a measure of pilot workload. In addition to the secondary task, subjects provided Cooper-Harper pilot ratings (CHPR) and responded to a workload questionnaire. The CHPR data also failed to provide reliable statistical discrimination between FHQ treatment conditions; some insight into the behavior of the secondary task was gained from the workload questionnaire data.

  15. STUDY OF POND WATER QUALITY BY THE ASSESSMENT OF PHYSICOCHEMICAL PARAMETERS AND WATER QUALITY INDEX

    OpenAIRE

    Vinod Jena; Satish Dixit; Ravi ShrivastavaSapana Gupta; Sapana Gupta

    2013-01-01

    Water quality index (WQI) is a dimensionless number that combines multiple water quality factors into a single number by normalizing values to subjective rating curves. Conventionally it has been used for evaluating the quality of water resources such as rivers, streams and lakes. The present work is aimed at assessing the water quality index (WQI) of pond water and the impact of human activities on it. Physicochemical parameters were monitored for the calculation of the WQI for ...

  16. Assessment of Quality Management Practices Within the Healthcare Industry

    Directory of Open Access Journals (Sweden)

    William J. Miller

    2009-01-01

    Problem Statement: Considerable effort has been devoted over the years by many organizations to adopting quality management practices, but few studies have assessed the critical factors that affect quality practices in healthcare organizations. The problem addressed in this study was to assess the critical factors influencing quality management practices in a single important industry (i.e., healthcare). Approach: A survey instrument was adapted from the business quality literature and sent to all hospitals in a large US Southeastern state. Valid responses were received from 147 of 189 hospitals, yielding a 75.6% response rate. Factor analysis using principal component analysis with an orthogonal rotation was performed to assess 58 survey items designed to measure ten dimensions of hospital quality management practices. Results: Eight factors were shown to have a statistically significant effect on quality management practices and were classified into two groups: (1) four strategic factors (role of management leadership, role of the physician, customer focus, training resources investment) and (2) four operational factors (role of the quality department, quality data/reporting, process management/training, employee relations). The results of this study showed that a valid and reliable instrument was developed and used to assess quality management practices in hospitals throughout a large US state. Conclusion: The implications of this study provide an understanding that management of quality requires both a focus on longer-term strategic leadership and day-to-day operational management. It is recommended that healthcare researchers and practitioners focus on the critical factors identified and employ this survey instrument to manage and better understand the nature of hospital quality management practices across wider geographical regions and over longer time periods. Furthermore, this study extended the scope of existing quality management

  17. Functional and symptom impact of trametinib versus chemotherapy in BRAF V600E advanced or metastatic melanoma: quality-of-life analyses of the METRIC study

    OpenAIRE

    Schadendorf, D.; Amonkar, M. M.; Milhem, M.; Grotzinger, K.; L. V. Demidov; Rutkowski, P; Garbe, C.; Dummer, R.; Hassel, J C; Wolter, P; Mohr, P; Trefzer, U.; Lefeuvre-Plesse, C.; Rutten, A.; Steven, N.

    2014-01-01

    We report the first quality-of-life assessment of a MEK inhibitor in metastatic melanoma from a phase III study. Trametinib prolonged progression-free survival and improved overall survival versus chemotherapy in patients with BRAF V600 mutation-positive melanoma. Less functional impairment, smaller declines in health status, and less exacerbation of symptoms were observed with trametinib.

  18. Assessment of the assessment: Evaluation of the model quality estimates in CASP10

    OpenAIRE

    Kryshtafovych, Andriy; Barbato, Alessandro; Fidelis, Krzysztof; Monastyrskyy, Bohdan; Schwede, Torsten; Tramontano, Anna

    2013-01-01

    The article presents an assessment of the ability of the thirty-seven model quality assessment (MQA) methods participating in CASP10 to provide an a priori estimation of the quality of structural models, and of the 67 tertiary structure prediction groups to provide confidence estimates for their predicted coordinates. The assessment of MQA predictors is based on the methods used in previous CASPs, such as correlation between the predicted and observed quality of the models (both at the global...

  19. Quality Assessment and Improvement Methods in Statistics – what Works?

    Directory of Open Access Journals (Sweden)

    Hans Viggo Sæbø

    2014-12-01

    Several methods for quality assessment and assurance in statistics have been developed in a European context. Data Quality Assessment Methods (DatQAM) were considered in a Eurostat handbook in 2007. These methods comprise quality reports and indicators, measurement of process variables, user surveys, self-assessments, audits, labelling and certification. The entry point for the paper is the development of systematic quality work in European statistics with regard to good practices such as those described in the DatQAM handbook. Assessment is one issue; following up recommendations and implementing improvement actions is another. This leads to a discussion on the effect of approaches and tools: which work well, which have turned out to be more of a challenge, and why? Examples are mainly from Statistics Norway, but these are believed to be representative of several statistical institutes.

  20. National Water-Quality Assessment (NAWQA) Area-Characterization Toolbox

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This is release 1.0 of the National Water-Quality Assessment (NAWQA) Area-Characterization Toolbox. These tools are designed to be accessed using ArcGIS Desktop...

  1. Water quality assessment of razorback sucker grow-out ponds

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — Water quality parameters had never been assessed in these grow-out ponds. Historically growth, condition, and survival of razorback suckers have been variable...

  2. National Impact Assessment of CMS Quality Measures Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The National Impact Assessment of the Centers for Medicare and Medicaid Services (CMS) Quality Measures Reports (Impact Reports) are mandated by section 3014(b), as...

  3. Assessment of Water Quality Conditions: Agassiz National Wildlife Refuge, 2012

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This is an assessment of water quality data collected from source water, discharge and within Agassiz Pool. In the summer of 2012, the U.S. Fish and Wildlife...

  4. Assessing the quality of a student-generated question repository

    CERN Document Server

    Bates, Simon P; Homer, Danny; Riise, Jonathan

    2013-01-01

    We present results from a study that categorizes and assesses the quality of questions and explanations authored by students, in question repositories produced as part of the summative assessment in introductory physics courses over the past two years. Mapping question quality onto the levels in the cognitive domain of Bloom's taxonomy, we find that students produce questions of high quality. More than three-quarters of questions fall into categories beyond simple recall, in contrast to similar studies of student-authored content in different subject domains. Similarly, the quality of student-authored explanations for questions was also high, with approximately 60% of all explanations classified as being of high or outstanding quality. Overall, 75% of questions met combined quality criteria, which we hypothesize is due in part to the in-class scaffolding activities that we provided for students ahead of requiring them to author questions.

  5. Using big data for quality assessment in oncology.

    Science.gov (United States)

    Broughman, James R; Chen, Ronald C

    2016-05-01

    There is increasing attention in the US healthcare system on the delivery of high-quality care, an issue central to oncology. In the report 'Crossing the Quality Chasm', the Institute of Medicine identified six aims for improving healthcare quality: safe, effective, patient-centered, timely, efficient and equitable. This article describes how current big data resources can be used to assess these six dimensions, and provides examples of published studies in oncology. Strengths and limitations of current big data resources for the evaluation of quality of care are also discussed. Finally, this article outlines a vision where big data can be used not only to retrospectively assess the quality of oncologic care, but help physicians deliver high-quality care in real time. PMID:27090300

  6. Assessing future trends in indoor air quality

    International Nuclear Information System (INIS)

    Several national and international health organizations have derived concentration levels below which adverse effects on humans are not expected, or levels below which the excess risk for individuals is less than a specified value. For every priority pollutant, indoor concentrations below this limit are considered healthy. The percentage of Dutch homes exceeding such a limit is taken as a measure of indoor air quality for that component. The present and future indoor air quality of the Dutch housing stock is described for fourteen air pollutants. The highest percentages are scored by radon, environmental tobacco smoke, nitrogen dioxide from unvented combustion, and the potential presence of house dust mite and mould allergens in damp houses. Although the trend for all priority pollutants is downward, the most serious ones will remain high in the coming decades if no additional measures are instituted.

  7. Assessment of Groundwater Quality by Chemometrics.

    Science.gov (United States)

    Papaioannou, Agelos; Rigas, George; Kella, Sotiria; Lokkas, Filotheos; Dinouli, Dimitra; Papakonstantinou, Argiris; Spiliotis, Xenofon; Plageras, Panagiotis

    2016-07-01

    Chemometric methods were used to analyze large data sets of groundwater quality from 18 wells supplying the central drinking water system of Larissa city (Greece) during the period 2001 to 2007 (8,064 observations) to determine temporal and spatial variations in groundwater quality and to identify pollution sources. Cluster analysis grouped each year into three temporal periods: January-April (first), May-August (second) and September-December (third). Furthermore, spatial cluster analysis was conducted for each period and for all samples, and grouped the 28 monitoring units HJI (where HJI represents the observations of monitoring site H, year J and period I) into three groups (A, B and C). Discriminant analysis used only 16 of the 24 parameters to correctly assign 97.3% of the cases. In addition, factor analysis identified 7, 9 and 8 latent factors for groups A, B and C, respectively. PMID:27329059
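
    A compact sketch of the same chemometric toolchain (cluster analysis, discriminant analysis, factor analysis) using scikit-learn on synthetic stand-in data; the grouping into temporal periods and monitoring units is omitted, and the cluster/factor counts simply echo the record.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(6)
        X = rng.random((500, 24))                       # observations x 24 parameters

        # Cluster analysis: group observations.
        groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

        # Discriminant analysis: how well can the groups be re-assigned?
        lda = LinearDiscriminantAnalysis().fit(X, groups)
        print("correctly assigned:", round(100 * lda.score(X, groups), 1), "%")

        # Factor analysis: extract latent factors from the parameters.
        fa = FactorAnalysis(n_components=7, random_state=0).fit(X)
        print("factor loadings shape:", fa.components_.shape)   # (7, 24)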

  8. Acoustical Quality Assessment of the Classroom Environment

    CERN Document Server

    George, Marian

    2012-01-01

    Teaching is one of the most important factors affecting any education system. Many research efforts have been conducted to facilitate the presentation modes used by instructors in classrooms as well as provide means for students to review lectures through web browsers. Other studies have been made to provide acoustical design recommendations for classrooms like room size and reverberation times. However, using acoustical features of classrooms as a way to provide education systems with feedback about the learning process was not thoroughly investigated in any of these studies. We propose a system that extracts different sound features of students and instructors, and then uses machine learning techniques to evaluate the acoustical quality of any learning environment. We infer conclusions about the students' satisfaction with the quality of lectures. Using classifiers instead of surveys and other subjective ways of measures can facilitate and speed such experiments which enables us to perform them continuously...

  9. QRS detection based ECG quality assessment

    International Nuclear Information System (INIS)

    Although immediate feedback on ECG signal quality during recording is useful, until now little literature describing quality measures has been available. We have implemented and evaluated four ECG quality measures. The empty-lead criterion (A), spike-detection criterion (B) and lead-crossing-point criterion (C) were calculated from basic signal properties. Measure D quantified the robustness of QRS detection when applied to the signal. An advanced Matlab-based algorithm combining all four measures, and a simplified algorithm for Android platforms excluding measure D, were developed. Both algorithms were evaluated by taking part in the Computing in Cardiology Challenge 2011. Each measure's accuracy and computing time were evaluated separately. During the challenge, the advanced algorithm correctly classified 93.3% of the ECGs in the training set and 91.6% in the test set. Scores for the simplified algorithm were 0.834 in event 2 and 0.873 in event 3. Computing time for measure D was almost five times higher than for the other measures. Required accuracy levels depend on the application and are related to computing time. While our simplified algorithm may be accurate enough for real-time feedback during ECG self-recordings, QRS-detection-based measures can further increase performance if sufficient computing power is available.
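
    Two of the basic-signal-property measures can be sketched directly; the thresholds below are hypothetical, since the record does not restate them.

        import numpy as np

        def empty_lead(sig, flat_tol=0.01):
            """Criterion A: a lead is 'empty' if it is essentially flat (mV)."""
            return np.ptp(sig) < flat_tol            # peak-to-peak amplitude

        def has_spikes(sig, jump_tol=2.0):
            """Criterion B: flag non-physiological jumps between adjacent samples."""
            return bool((np.abs(np.diff(sig)) > jump_tol).any())

        rng = np.random.default_rng(2)
        t = np.arange(5000) / 500                    # 10 s at 500 Hz
        lead = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * rng.standard_normal(t.size)
        print(empty_lead(lead), has_spikes(lead))    # False, False for a clean lead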

  10. A ranking index for quality assessment of forensic DNA profiles

    OpenAIRE

    Ansell Ricky; Hedman Johannes; Nordgaard Anders

    2010-01-01

    Background: Assessment of DNA profile quality is vital in forensic DNA analysis, both in order to determine the evidentiary value of DNA results and to compare the performance of different DNA analysis protocols. Generally the quality assessment is performed through manual examination of the DNA profiles based on empirical knowledge, or by comparing the intensities (allelic peak heights) of the capillary electrophoresis electropherograms. Results: We recently developed a ranking index ...

  11. A Literature Review of Fingerprint Quality Assessment and Its Evaluation

    OpenAIRE

    Yao, Zhigang; Le Bars, Jean-Marie; Charrier, Christophe; Rosenberger, Christophe

    2016-01-01

    Fingerprint quality assessment (FQA) has been a challenging issue due to the variety of noisy information contained in the samples, such as physical defects and distortions caused by sensing devices. Existing studies have made efforts to find more suitable techniques for assessing fingerprint quality, but it is difficult to achieve a common solution because of, for example, different image settings. This paper gives a twofold study related to FQA, including a literat...

  12. Quality Assessment of TPB-Based Questionnaires: A Systematic Review

    OpenAIRE

    Obiageli Crystal Oluka; Shaofa Nie; Yi Sun

    2014-01-01

    OBJECTIVE: This review is aimed at assessing the quality of questionnaires and their development process based on the theory of planned behavior (TPB) change model. METHODS: A systematic literature search for studies with the primary aim of TPB-based questionnaire development was conducted in relevant databases between 2002 and 2012 using selected search terms. Ten of 1,034 screened abstracts met the inclusion criteria and were assessed for methodological quality using two different appraisal...

  13. Groundwater Dynamics and Quality Assessment in an Agricultural Area

    OpenAIRE

    Stefano L. Russo; Adriano Fiorucci; Bartolomeo Vigna

    2011-01-01

    Problem statement: The analysis of the relationships among the different hydrogeological units and the assessment of groundwater quality are fundamental to adopting suitable territorial planning measures aimed at reducing potential groundwater pollution, especially in agricultural regions. In this study, the characteristics of groundwater dynamics and the assessment of its quality in the Cuneo Plain (NW Italy) were examined. Approach: In order to define the geological setting, an intense bibliog...

  14. Quality assessment for spectral domain optical coherence tomography (OCT) images

    OpenAIRE

    LIU, SHUANG; Paranjape, Amit S.; Elmaanaoui, Badr; Dewelle, Jordan; Rylander, H. Grady; Markey, Mia K.; Milner, Thomas E.

    2009-01-01

    Retinal nerve fiber layer (RNFL) thickness, a measure of glaucoma progression, can be measured in images acquired by spectral domain optical coherence tomography (OCT). The accuracy of RNFL thickness estimation, however, is affected by the quality of the OCT images. In this paper, a new parameter, signal deviation (SD), which is based on the standard deviation of the intensities in OCT images, is introduced for objective assessment of OCT image quality. Two other objective assessment paramete...

  15. Development of ambient air quality population-weighted metrics for use in time-series health studies.

    Science.gov (United States)

    Ivy, Diane; Mulholland, James A; Russell, Armistead G

    2008-05-01

    A robust methodology was developed to compute population-weighted daily measures of ambient air pollution for use in time-series studies of acute health effects. Ambient data, including criteria pollutants and four fine particulate matter (PM) components, from monitors located in the 20-county metropolitan Atlanta area over the time period of 1999-2004 were normalized, spatially resolved using inverse distance-square weighting to Census tracts, denormalized using descriptive spatial models, and population-weighted. Error associated with applying this procedure with fewer than the maximum number of observations was also calculated. In addition to providing more representative measures of ambient air pollution for the health study population than provided by a central monitor alone and dampening effects of measurement error and local source impacts, results were used to evaluate spatial variability and to identify air pollutants for which ambient concentrations are poorly characterized. The decrease in correlation of daily monitor observations with daily population-weighted average values with increasing distance of the monitor from the urban center was much greater for primary pollutants than for secondary pollutants. Of the criteria pollutant gases, sulfur dioxide observations were least representative because of the failure of ambient networks to capture the spatial variability of this pollutant for which concentrations are dominated by point source impacts. Daily fluctuations in PM of particles less than 10 μm in aerodynamic diameter (PM10) mass were less well characterized than PM of particles less than 2.5 μm in aerodynamic diameter (PM2.5) mass because of a smaller number of PM10 monitors with daily observations. Of the PM2.5 components, the carbon fractions were less well spatially characterized than sulfate and nitrate both because of primary emissions of elemental and organic carbon and because of differences in measurement techniques used to assess
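
    The spatial-resolution step described above can be sketched as inverse distance-square interpolation of monitor values to tract centroids, followed by a population-weighted average. The normalization/denormalization against descriptive spatial models is omitted, and all coordinates, values and populations below are synthetic.

        import numpy as np

        def population_weighted(monitor_xy, monitor_vals, tract_xy, tract_pop):
            # Squared distances from every tract centroid to every monitor.
            d2 = ((tract_xy[:, None, :] - monitor_xy[None, :, :]) ** 2).sum(-1)
            w = 1.0 / np.maximum(d2, 1e-9)            # inverse distance-square weights
            tract_vals = (w * monitor_vals).sum(1) / w.sum(1)
            return (tract_pop * tract_vals).sum() / tract_pop.sum()

        rng = np.random.default_rng(3)
        monitors = rng.random((5, 2)) * 50            # 5 monitors in a 50 km domain
        values = rng.uniform(10, 30, 5)               # one day's concentrations
        tracts = rng.random((400, 2)) * 50            # 400 tract centroids
        pop = rng.integers(500, 5000, 400).astype(float)

        print(round(population_weighted(monitors, values, tracts, pop), 2))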

  16. Assessing translation quality for cross language image retrieval

    OpenAIRE

    Clough, P.; Sanderson, M.

    2004-01-01

    Like other cross language tasks, we show that the quality of the translation resource, among other factors, has an effect on retrieval performance. Using data from the ImageCLEF test collection, we investigate the relationship between translation quality and retrieval performance when using Systran, a machine translation (MT) system, as a translation resource. The quality of translation is assessed manually by comparing the original ImageCLEF topics with the output from Systran and rated by a...

  17. QUALITY ASSESSMENT OF EGGS PACKED UNDER MODIFIED ATMOSPHERE

    OpenAIRE

    Aline Giampietro-Ganeco; Hirasilva Borba; Aline Mary Scatolini-Silva; Marcel Manente Boiago; Pedro Alves de Souza; Juliana Lolli Malagoli de Mello

    2015-01-01

    Eggs are perishable foods and lose quality quickly if not stored properly. From the moment of lay to the marketing of the egg, quality loss occurs through the exchange of gases and water with the external environment through the pores of the shell; thus, studies involving modified atmosphere packaging are extremely important. The aim of the present study is to assess the internal quality of eggs packed under a modified atmosphere and stored at room temperature. Six hundred and twelve fresh commercial...

  18. Quality evaluation of extra high quality images based on key assessment word

    Science.gov (United States)

    Kameda, Masashi; Hayashi, Hidehiko; Akamatsu, Shigeru; Miyahara, Makoto M.

    2001-06-01

    An all-encompassing goal of our research is to develop an extra high quality imaging system that is able to convey a high-level artistic impression faithfully. We have defined such a high-level artistic impression as a high order sensation, and we suppose that the high order sensation is expressed by a combination of psychological factors that can be described by plural assessment words. In order to pursue the quality factors that are important for the reproduction of the high order sensation, we have focused on the image quality evaluation of extra high quality images using assessment words that take the high order sensation into account. In this paper, we have obtained the hierarchical structure between the collected assessment words and the principles of European painting, based on the conveyance model of the high order sensation, and we have determined a key assessment word, 'plasticity', which is able to evaluate the reproduction of the high order sensation more accurately. The results of subjective assessment experiments using the prototype of the developed extra high quality imaging system have shown that the key assessment word 'plasticity' is the most appropriate assessment word to evaluate the image quality of extra high quality images quasi-quantitatively.

  19. No-Reference Video Quality Assessment by HEVC Codec Analysis

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2015-01-01

    … transform coefficients, estimates the distortion, and assesses the video quality. The proposed scheme generates VQA features based on Intra coded frames, and then maps features using an Elastic Net to predict subjective video quality. A set of HEVC coded 4K UHD sequences are tested. Results show that the …

  20. Gaia: automated quality assessment of protein structure models

    OpenAIRE

    Kota, Pradeep; Ding, Feng; Ramachandran, Srinivas; Dokholyan, Nikolay V.

    2011-01-01

    Motivation: Increasing use of structural modeling for understanding structure–function relationships in proteins has led to the need to ensure that the protein models being used are of acceptable quality. Quality of a given protein structure can be assessed by comparing various intrinsic structural properties of the protein to those observed in high-resolution protein structures.

  1. Guidance on Data Quality Assessment for Life Cycle Inventory Data

    Science.gov (United States)

    Data quality within Life Cycle Assessment (LCA) is a significant issue for the future support and development of LCA as a decision support tool and its wider adoption within industry. In response to current data quality standards such as the ISO 14000 series, various entities wit...

  2. River Pollution: Part II. Biological Methods for Assessing Water Quality.

    Science.gov (United States)

    Openshaw, Peter

    1984-01-01

    Discusses methods used in the biological assessment of river quality and such indicators of clean and polluted waters as the Trent Biotic Index, Chandler Score System, and species diversity indexes. Includes a summary of a river classification scheme based on quality criteria related to water use. (JN)

  3. A new embedding quality assessment method for manifold learning

    CERN Document Server

    Zhang, Peng; Zhang, Bo

    2011-01-01

    Manifold learning is a hot research topic in the field of computer science. A crucial issue with current manifold learning methods is that they lack a natural quantitative measure to assess the quality of learned embeddings, which greatly limits their applications to real-world problems. In this paper, a new embedding quality assessment method for manifold learning, named as Normalization Independent Embedding Quality Assessment (NIEQA), is proposed. Compared with current assessment methods which are limited to isometric embeddings, the NIEQA method has a much larger application range due to two features. First, it is based on a new measure which can effectively evaluate how well local neighborhood geometry is preserved under normalization, hence it can be applied to both isometric and normalized embeddings. Second, it can provide both local and global evaluations to output an overall assessment. Therefore, NIEQA can serve as a natural tool in model selection and evaluation tasks for manifold learning. Experi...
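
    For flavor, a generic embedding-quality measure of the kind NIEQA builds on is the average overlap between each point's k nearest neighbors before and after embedding; this is an illustrative stand-in, not the NIEQA formula itself.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def neighborhood_overlap(X_high, X_low, k=10):
            """Mean overlap of k-NN sets in the original space and the embedding."""
            nn_h = NearestNeighbors(n_neighbors=k + 1).fit(X_high)
            nn_l = NearestNeighbors(n_neighbors=k + 1).fit(X_low)
            idx_h = nn_h.kneighbors(X_high, return_distance=False)[:, 1:]  # drop self
            idx_l = nn_l.kneighbors(X_low, return_distance=False)[:, 1:]
            overlap = [len(set(a) & set(b)) / k for a, b in zip(idx_h, idx_l)]
            return float(np.mean(overlap))   # 1.0 = perfectly preserved neighborhoods

        rng = np.random.default_rng(5)
        X = rng.random((300, 10))
        print(neighborhood_overlap(X, X[:, :2]))   # a crude 2-D 'embedding'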

  4. Assessing the link between coastal urbanization and the quality of nekton habitat in mangrove tidal tributaries

    Science.gov (United States)

    Krebs, Justin M.; Bell, Susan S.; McIvor, Carole C.

    2014-01-01

    To assess the potential influence of coastal development on habitat quality for estuarine nekton, we characterized body condition and reproduction for common nekton from tidal tributaries classified as undeveloped, industrial, urban or man-made (i.e., mosquito-control ditches). We then evaluated these metrics of nekton performance, along with several abundance-based metrics and community structure from a companion paper (Krebs et al. 2013) to determine which metrics best reflected variation in land-use and in-stream habitat among tributaries. Body condition was not significantly different among undeveloped, industrial, and man-made tidal tributaries for six of nine taxa; however, three of those taxa were in significantly better condition in urban compared to undeveloped tributaries. Palaemonetes shrimp were the only taxon in significantly poorer condition in urban tributaries. For Poecilia latipinna, there was no difference in body condition (length–weight) between undeveloped and urban tributaries, but energetic condition was significantly better in urban tributaries. Reproductive output was reduced for both P. latipinna (i.e., fecundity) and grass shrimp (i.e., very low densities, few ovigerous females) in urban tributaries; however a tradeoff between fecundity and offspring size confounded meaningful interpretation of reproduction among land-use classes for P. latipinna. Reproductive allotment by P. latipinna did not differ significantly among land-use classes. Canonical correspondence analysis differentiated urban and non-urban tributaries based on greater impervious surface, less natural mangrove shoreline, higher frequency of hypoxia and lower, more variable salinities in urban tributaries. These characteristics explained 36 % of the variation in nekton performance, including high densities of poeciliid fishes, greater energetic condition of sailfin mollies, and low densities of several common nekton and economically important taxa from urban tributaries

  5. Quality assessment of a placental perfusion protocol

    DEFF Research Database (Denmark)

    Mathiesen, Line; Mose, Tina; Mørck, Thit Juul;

    2010-01-01

    placental perfusion model in Copenhagen including control substances. The positive control substance antipyrine shows no difference in transport regardless of perfusion media used or of terms of delivery (n=59, p<0.05). Negative control studies with FITC marked dextran correspond with leakage criteria (<3...... ml h(-1) from the fetal reservoir) when adding 2 (n=7) and 20mg (n=9) FITC-dextran/100 ml fetal perfusion media. Success rate of the Copenhagen placental perfusions is provided in this study, including considerations and quality control parameters. Three checkpoints suggested to determine success...

  6. Quality assessment of plant transpiration water

    Science.gov (United States)

    Macler, Bruce A.; Janik, Daniel S.; Benson, Brian L.

    1990-01-01

    It has been proposed to use plants as elements of biologically-based life support systems for long-term space missions. Three roles have been brought forth for plants in this application: recycling of water, regeneration of air and production of food. This report discusses recycling of water and presents data from investigations of plant transpiration water quality. Aqueous nutrient solution was applied to several plant species and transpired water collected. The findings indicated that this water typically contained 0.3-6 ppm of total organic carbon, which meets hygiene water standards for NASA's space applications. This suggests that the method could be developed to achieve potable water standards.

  7. ASSESSING THE COST OF BEEF QUALITY

    OpenAIRE

    Forristall, Cody; May, Gary J.; Lawrence, John D.

    2002-01-01

    The number of U.S. fed cattle marketed through a value-based or grid marketing system is increasing dramatically. Most grids reward Choice or better quality grades and some pay premiums for red meat yield. The Choice-Select (C-S) price spread increased 55 percent, over $3/cwt, between 1989-91 and 1999-01. However, there is a cost associated with pursuing these carcass premiums. This paper examines these tradeoffs both in the feedlot and in a retained ownership scenario. Correlations between ca...

  8. FABASOFT BEST PRACTICES AND TEST METRICS MODEL

    Directory of Open Access Journals (Sweden)

    Nadica Hrgarek

    2007-06-01

    Full Text Available Software companies have to face serious problems about how to measure the progress of test activities and the quality of software products in order to estimate test completion criteria, and whether the shipment milestone will be reached on time. Measurement is a key activity in the testing life cycle and requires an established, managed and well documented test process, defined software quality attributes, quantitative measures, and the use of test management and bug tracking tools. Test metrics are a subset of software metrics (product metrics, process metrics) and enable the measurement and quality improvement of the test process and/or the software product. The goal of this paper is to briefly present Fabasoft best practices and lessons learned during functional and system testing of big complex software products, and to describe a simple test metrics model applied to the software test process with the purpose of better controlling software projects, and measuring and increasing software quality.
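
    The record does not reproduce Fabasoft's actual model; the sketch below merely illustrates the kind of simple test metrics such a model builds on (execution progress, pass rate, defect density), with invented numbers:

```python
# Illustrative test metrics of the kind discussed above; the names and
# figures are examples, not Fabasoft's actual model.
planned_tests, executed_tests, passed_tests = 420, 389, 352
open_defects, kloc = 47, 118.5  # defects and thousand lines of code

progress = executed_tests / planned_tests   # test execution progress
pass_rate = passed_tests / executed_tests   # quality of what was run
defect_density = open_defects / kloc        # defects per KLOC

print(f"progress {progress:.1%}, pass rate {pass_rate:.1%}, "
      f"defect density {defect_density:.2f}/KLOC")
```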

  9. Entropic assessment of the management quality

    OpenAIRE

    Mlodetskyi, V. R.

    2015-01-01

    Problem statement. Management of an organization has traditionally been viewed as a system of basic management functions: planning, organization, motivation and control, each of which has its own characteristics and is implemented through certain elements of the organizational management structure. But the decomposition of the control process into separate areas results in a loss of integrity in assessing management effectiveness, both of separate functions and of the organization as a whole. The situation i...

  10. Video quality assessment via gradient magnitude similarity deviation of spatial and spatiotemporal slices

    Science.gov (United States)

    Yan, Peng; Mou, Xuanqin; Xue, Wufeng

    2015-03-01

    Video quality assessment (VQA) has been a hot topic due to the rapidly increasing demands of related video applications. The existing state-of-the-art full reference (FR) VQA metric ViS3 adapts the Most Apparent Distortion (MAD) algorithm to capture spatial distortion first, and then quantifies the spatiotemporal distortion by spatiotemporal correlation and an HVS-based model applied to the spatiotemporal slice (STS) images. In this paper we argue that the STS images alone can provide enough information for measuring video distortion. Taking advantage of an effective and easily applied FR image quality model, GMSD, we propose to measure video quality by analysing the structural changes between the STS images of the reference videos and their distorted counterparts. This new VQA model is denoted STS-GMSD. To further investigate the influence of spatial dissimilarity, we also combine the frame-by-frame spatial GMSD factor with STS-GMSD and propose another VQA model, named SSTS-GMSD. Extensive experimental evaluations on two benchmark video quality databases demonstrate that the proposed STS-GMSD outperforms existing state-of-the-art FR-VQA methods, while performing on par with SSTS-GMSD, which validates that STS images contain enough information for FR-VQA model design.
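
    GMSD itself is published, so its core computation can be sketched; the Prewitt kernel scaling and the constant c below follow common choices for 8-bit images and may not match this paper's exact settings. Applying it to STS images amounts to feeding x-t or y-t slices of the video volume instead of frames:

```python
# A minimal GMSD (gradient magnitude similarity deviation) sketch for
# grayscale images, following the published formula.
import numpy as np
from scipy.signal import convolve2d

def gmsd(ref, dist, c=170.0):
    hx = np.array([[1, 0, -1]] * 3) / 3.0   # Prewitt kernel, horizontal
    hy = hx.T                                # Prewitt kernel, vertical
    def grad_mag(img):
        gx = convolve2d(img, hx, mode='same', boundary='symm')
        gy = convolve2d(img, hy, mode='same', boundary='symm')
        return np.sqrt(gx**2 + gy**2)
    mr, md = grad_mag(ref.astype(float)), grad_mag(dist.astype(float))
    gms = (2 * mr * md + c) / (mr**2 + md**2 + c)  # similarity map
    return float(gms.std())  # 0 = identical; larger = more distortion

rng = np.random.default_rng(1)
frame = rng.uniform(0, 255, (64, 64))
print(gmsd(frame, frame))                                        # -> 0.0
print(gmsd(frame, frame + rng.normal(0, 10, frame.shape)) > 0)   # -> True
```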

  11. Measuring data quality for ongoing improvement a data quality assessment framework

    CERN Document Server

    Sebastian-Coleman, Laura

    2013-01-01

    The Data Quality Assessment Framework shows you how to measure and monitor data quality, ensuring quality over time. You'll start with general concepts of measurement and work your way through a detailed framework of more than three dozen measurement types related to five objective dimensions of quality: completeness, timeliness, consistency, validity, and integrity. Ongoing measurement, rather than one-time activities, will help your organization reach a new level of data quality. This plain-language approach to measuring data can be understood by both business and IT and provides pra...
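
    As a toy illustration of column-level measurements along two of the five dimensions named above (completeness and validity); the table and the validity rule below are invented, not the book's:

```python
# Hedged sketch: completeness = share of non-missing values,
# validity = share of values passing a domain rule. Data is invented.
rows = [
    {"id": 1, "email": "a@x.org", "age": 34},
    {"id": 2, "email": None,      "age": 151},
    {"id": 3, "email": "c@x.org", "age": 28},
]

completeness = sum(r["email"] is not None for r in rows) / len(rows)
validity = sum(r["age"] is not None and 0 <= r["age"] <= 120
               for r in rows) / len(rows)

print(f"email completeness {completeness:.0%}, age validity {validity:.0%}")
```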

  12. Implementation of a channelized Hotelling observer model to assess image quality of x-ray angiography systems.

    Science.gov (United States)

    Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A

    2015-01-01

    Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data to calculate Fourier-domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) of disks of different sizes and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks. PMID:26158086
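
    A compact sketch of the channelized Hotelling observer computation described above: images are projected onto a small channel set, and the detectability index is the Mahalanobis distance between signal-present and signal-absent channel outputs. The Gaussian channels and toy disk signal below are illustrative stand-ins for the study's channels and test objects:

```python
import numpy as np

def gaussian_channels(n, widths):
    # radially symmetric Gaussian channels (a simple illustrative choice)
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    r2 = x**2 + y**2
    U = np.stack([np.exp(-r2 / (2 * w**2)).ravel() for w in widths], axis=1)
    return U / np.linalg.norm(U, axis=0)        # (n*n, n_channels)

def cho_detectability(signal_imgs, noise_imgs, U):
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ U
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ U
    dv = vs.mean(0) - vn.mean(0)
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))     # pooled channel covariance
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

rng = np.random.default_rng(2)
n, trials = 32, 400
y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
disk = (x**2 + y**2 <= 4**2) * 0.5              # faint disk signal
noise = rng.normal(0, 1, (2 * trials, n, n))
U = gaussian_channels(n, widths=[1, 2, 4, 8])
di = cho_detectability(disk + noise[:trials], noise[trials:], U)
print(f"detectability index ~ {di:.2f}")
```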

  13. School Indoor Air Quality Assessment and Program Implementation.

    Science.gov (United States)

    Prill, R.; Blake, D.; Hales, D.

    This paper describes the effectiveness of a three-step indoor air quality (IAQ) program implemented by 156 schools in the states of Washington and Idaho during the 2000-2001 school year. An experienced IAQ/building science specialist conducted walk-through assessments at each school. These assessments documented deficiencies and served as an…

  14. A new assessment method for image fusion quality

    Science.gov (United States)

    Li, Liu; Jiang, Wanying; Li, Jing; Yuchi, Ming; Ding, Mingyue; Zhang, Xuming

    2013-03-01

    Image fusion quality assessment plays a critically important role in the field of medical imaging. To evaluate image fusion quality effectively, many assessment methods have been proposed. Examples include mutual information (MI), root mean square error (RMSE), and the universal image quality index (UIQI). These image fusion assessment methods do not reflect human visual inspection effectively. To address this problem, we propose in this paper a novel image fusion assessment method which combines the nonsubsampled contourlet transform (NSCT) with regional mutual information. In the proposed method, the source medical images are first decomposed into different levels by the NSCT. Then the maximum NSCT coefficients of the decomposed directional images at each level are used to compute the regional mutual information (RMI). Finally, multi-channel RMI is computed as the weighted sum of the RMI values obtained at the various levels of the NSCT. The advantage of the proposed method lies in the fact that the NSCT can represent image information at multiple directions and scales and therefore conforms to the multi-channel characteristic of the human visual system, leading to its outstanding image assessment performance. The experimental results using CT and MRI images demonstrate that the proposed assessment method outperforms such assessment methods as MI and UIQI based measures in evaluating image fusion quality, and that it can provide results consistent with human visual assessment.
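
    The NSCT decomposition is beyond a short sketch, but the regional mutual information ingredient can be illustrated; plain image blocks stand in below for the NSCT subbands, so this shows only the RMI idea, not the full method:

```python
# Hedged sketch: histogram-based mutual information computed per region
# and averaged; images and block size are invented.
import numpy as np

def mutual_information(a, b, bins=32):
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def regional_mi(img1, img2, block=32):
    vals = []
    for i in range(0, img1.shape[0] - block + 1, block):
        for j in range(0, img1.shape[1] - block + 1, block):
            vals.append(mutual_information(img1[i:i+block, j:j+block],
                                           img2[i:i+block, j:j+block]))
    return float(np.mean(vals))

rng = np.random.default_rng(3)
src = rng.uniform(0, 255, (128, 128))
fused_good = src + rng.normal(0, 5, src.shape)           # close to source
fused_bad = rng.permutation(src.ravel()).reshape(src.shape)
print(regional_mi(src, fused_good) > regional_mi(src, fused_bad))  # True
```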

  15. External Quality Assessments for Microbiologic Diagnosis of Diphtheria in Europe

    OpenAIRE

    Both, Leonard; Neal, Shona; De Zoysa, Aruni; Mann, Ginder; Czumbel, Ida; Efstratiou, Androulla

    2014-01-01

    The European Diphtheria Surveillance Network (EDSN) ensures the reliable epidemiological and microbiologic assessment of disease prevalence in the European Union. Here, we describe a survey of current diagnostic techniques for diphtheria surveillance conducted across the European Union and report the results from three external quality assessment (EQA) schemes performed between 2010 and 2014.

  16. EPIDEMIOLOGY FOR 'QUALITY ASSESSMENT' OF ORAL AND DENTAL HEALTH SERVICES

    Directory of Open Access Journals (Sweden)

    Zaura Anggraeni Matram

    2015-08-01

    Full Text Available The need for quality assessment and assurance in health and oral health care has become an issue of major concern in Indonesia, particularly in relation to the significant decrease in available resources due to the persistent economic crisis. Financial and socioeconomic impacts have led to the need for low-cost, high-quality, accessible oral care. Dentists are ultimately responsible for the quality of care performed in Public Health Centers (Puskesmas), especially for school and community dental programmes often performed by various types of health manpower such as dental nurses and cadres (volunteers). In this paper, emphasis has been placed on two epidemiological models to assess the quality of outcomes of service as well as management control for quality assessment in the School Dental Programme. Respectively, epidemiological models were developed for assessing the effectiveness of oral health education and of simple oral prophylaxis carried out in the School Dental Programme (known as UKGS). With these epidemiological approaches, it is hoped that dentists will gain increased appreciation for qualitative assessment of the quality of care instead of just quantitatively meeting the targets that many health administrations use to indicate success.

  17. AVLIS Production Plant Preliminary Quality Assurance Plan and Assessment

    International Nuclear Information System (INIS)

    This preliminary Quality Assurance Plan and Assessment establishes the Quality Assurance requirements for the AVLIS Production Plant Project. The Quality Assurance Plan defines the management approach, organization, interfaces, and controls that will be used in order to provide adequate confidence that the AVLIS Production Plant design, procurement, construction, fabrication, installation, start-up, and operation are accomplished within established goals and objectives. The Quality Assurance Program defined in this document includes a system for assessing those elements of the project whose failure would have a significant impact on safety, environment, schedule, cost, or overall plant objectives. As elements of the project are assessed, classifications are provided to establish and assure that special actions are defined which will eliminate or reduce the probability of occurrence or control the consequences of failure. 8 figures, 18 tables

  18. Development of a dementia assessment quality database

    DEFF Research Database (Denmark)

    Johannsen, P.; Jørgensen, Kasper; Korner, A.;

    2011-01-01

    database for dementia evaluation in the secondary health system. One volume and seven process quality indicators on dementia evaluations are monitored. Indicators include frequency of demented patients, percentage of patients evaluated within three months, whether the work-up included blood tests, Mini Mental State Examination (MMSE), brain scan and activities of daily living and percentage of patients treated with anti-dementia drugs. Indicators can be followed over time in an individual clinic. Up to 20 variables are entered to calculate the indicators and to provide risk factor variables for the...... data analyses. RESULTS: The database was constructed in 2005 and covers 30% of the Danish population. Data from all consecutive cases evaluated for dementia in the secondary health system in the Capital Region of Denmark are entered. The database has shown that the basic diagnostic work-up programme

  19. Quality assessment on FBTR reactor vessel

    International Nuclear Information System (INIS)

    Fast Breeder Test Reactor (FBTR) is a 40 MWt/13MWe, mixed carbide fueled, sodium cooled, loop type reactor built at Indira Gandhi Centre for Atomic Research (IGCAR), Kalpakkam. The Reactor Vessel (RV) is manufactured using modified AISI 316 austenitic stainless steel material as per FBTR specification. The acceptance criteria for non-destructive examination, quality of weld, test requirement, tolerances on various dimensions etc. specified in FBTR specification are very stringent compared to ASME Section III, Div. I, Class I components and other international codes applicable to pressure vessels and nuclear power plant components. During the manufacture and inspection of the Reactor Vessel, a systematic approach has been adopted towards the improvement of various procedures to achieve very high reliability of the Reactor Vessel. This paper explains the details of results achieved on fabrication tolerances, destructive and non-destructive testing on materials and welds and final tests on the reactor vessel. (author)

  20. Quality assessment on FBTR reactor vessel

    Energy Technology Data Exchange (ETDEWEB)

    Shanmugam, K.; Chandramohan, R.; Ramamurthy, M.K. [Indira Gandhi Centre for Atomic Research (IGCAR), Technical Coordination and Quality Assurance Group, Kalpakkam (India)

    1997-08-01

    Fast Breeder Test Reactor (FBTR) is a 40 MWt/13MWe, mixed carbide fueled, sodium cooled, loop type reactor built at Indira Gandhi Centre for Atomic Research (IGCAR), Kalpakkam. The Reactor Vessel (RV) is manufactured using modified AISI 316 austenitic stainless steel material as per FBTR specification. The acceptance criteria for non-destructive examination, quality of weld, test requirement, tolerances on various dimensions etc. specified in FBTR specification are very stringent compared to ASME Section III, Div. I, Class I components and other international codes applicable to pressure vessels and nuclear power plant components. During the manufacture and inspection of the Reactor Vessel, a systematic approach has been adopted towards the improvement of various procedures to achieve very high reliability of the Reactor Vessel. This paper explains the details of results achieved on fabrication tolerances, destructive and non-destructive testing on materials and welds and final tests on the reactor vessel. (author).

  1. Quality assessment of TPB-based questionnaires: a systematic review.

    Directory of Open Access Journals (Sweden)

    Obiageli Crystal Oluka

    Full Text Available OBJECTIVE: This review is aimed at assessing the quality of questionnaires and their development process based on the theory of planned behavior (TPB) change model. METHODS: A systematic literature search for studies with the primary aim of TPB-based questionnaire development was conducted in relevant databases between 2002 and 2012 using selected search terms. Ten of 1,034 screened abstracts met the inclusion criteria and were assessed for methodological quality using two different appraisal tools: one for the overall methodological quality of each study and the other developed for the appraisal of the questionnaire content and development process. Both appraisal tools consisted of items regarding the likelihood of bias in each study and were eventually combined to give the overall quality score for each included study. RESULTS: Eight of the 10 included studies showed low risk of bias in the overall quality assessment of each study, while 9 of the studies were of high quality based on the quality appraisal of questionnaire content and development process. CONCLUSION: Quality appraisal of the questionnaires in the 10 reviewed studies was successfully conducted, highlighting the top problem areas (including sample size estimation, inclusion of direct and indirect measures, and inclusion of questions on demographics) in the development of TPB-based questionnaires and the need for researchers to provide a more detailed account of their development process.

  2. Assessing quality management in an R and D environment

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, B.D.

    1998-02-01

    Los Alamos National Laboratory (LANL) is a premier research and development institution operated by the University of California for the US Department of Energy. Since 1991, LANL has pursued a heightened commitment to developing world-class quality in management and operations. In 1994 LANL adopted the Malcolm Baldrige National Quality Award criteria as a framework for all activities and initiated more formalized customer focus and quality management. Five measurement systems drive the current integration of quality efforts: an annual Baldrige-based assessment, a customer focus program, customer-driven performance measurement, an employee performance management system and annual employee surveys, and integrated planning processes with associated goals and measures.

  3. Food quality assessment in parent–child dyads

    DEFF Research Database (Denmark)

    Bech-Larsen, Tino; Jensen, Birger Boutrup

    2011-01-01

    hall-test of children’s and parents’ quality formation and to the latter’s willingness to pay for such products. The findings show poor congruence between parent and child quality evaluations due to the two parties emphasising different quality aspects. Results also indicate, however, that improved...... parental knowledge of their children’s quality assessments significantly affect the willingness to pay. Accordingly, interaction between parents and children should be promoted when developing, testing and marketing new and healthier food products for children....

  4. Reliability of medical audit in quality assessment of medical care

    Directory of Open Access Journals (Sweden)

    Camacho Luiz Antonio Bastos

    1996-01-01

    Full Text Available Medical audit of hospital records has been a major component of quality of care assessment, although physician judgment is known to have low reliability. We estimated interrater agreement of quality assessment in a sample of patients with cardiac conditions admitted to an American teaching hospital. Physician-reviewers used structured review methods designed to improve quality assessment based on judgment. Chance-corrected agreement for the items considered more relevant to process and outcome of care ranged from low to moderate (0.2 to 0.6), depending on the review item and the principal diagnoses and procedures the patients underwent. Results from several studies seem to converge on this point. Comparisons among different settings should be made with caution, given the sensitivity of agreement measurements to prevalence rates. Reliability of review methods in their current stage could be improved by combining the assessment of two or more reviewers, and by emphasizing outcome-oriented events.

  5. Assessment on reliability of water quality in water distribution systems

    Institute of Scientific and Technical Information of China (English)

    伍悦滨; 田海; 王龙岩

    2004-01-01

    Water leaving the treatment works is usually of a high quality, but its properties change during the transportation stage. With increasing awareness of the quality of the service provided within the water industry today, assessing the reliability of water quality in a distribution system has become of major significance for decisions on system operation based on water quality in distribution networks. Using a water age model, a chlorine decay model and a model of acceptable maximum water age together, one can assess the reliability of the water quality in a distribution system. First, the nodal water age values in a complex distribution system are calculated by the water age model. Then, the acceptable maximum water age value in the distribution system is obtained from the chlorine decay model. The nodes at which the water age values are below the maximum value are regarded as reliable nodes. Finally, a reliability index, the percentage of reliable nodes weighted by the nodal demands, reflects the reliability of the water quality in the distribution system. The approach has been applied to a real water distribution network. A contour plot based on the water age values determines a surface of the reliability of the water quality. At any time, this surface can be used to locate high-water-age, poor-reliability areas, which identify parts of the network that may be of poor water quality. As a result, the water age contour provides a valuable aid for straightforward insight into the water quality in the distribution system.
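
    A minimal sketch of the demand-weighted reliability index described above: the fraction of nodal demand served at nodes whose water age stays below the acceptable maximum derived from chlorine decay. The node data and the 24 h threshold below are invented for illustration:

```python
# Node data: (water age in hours, demand in L/s); values are invented.
nodes = {
    "A": (6.0, 40.0),
    "B": (18.0, 25.0),
    "C": (30.0, 10.0),  # stale water, fails the age criterion
    "D": (12.0, 55.0),
}
max_age_h = 24.0        # acceptable maximum water age (from a decay model)

served = sum(d for age, d in nodes.values() if age <= max_age_h)
total = sum(d for _, d in nodes.values())
print(f"water-quality reliability: {served / total:.1%}")  # -> 92.3%
```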

  6. Preliminary quality assessment of bovine colostrum

    Directory of Open Access Journals (Sweden)

    Alessandro Taranto

    2013-02-01

    Full Text Available Data on bovine colostrum quality are scarce or absent, although Commission Regulations No 1662/2006 and No 1663/2006 include colostrum in the context of chapters on milk. Thus the aim of the present work is to study some physical, chemical, hygiene and safety quality parameters of bovine colostrum samples collected from Sicily and Calabria dairy herds. Thirty individual samples were collected 2-3 days after partum. The laboratory tests included: pH, fat (FT), total nitrogen (TN), lactose (LTS) and dry matter (DM) percentage (Lactostar) and somatic cell count (SCC) (DeLaval cell counter DCC). Bacterial counts included: standard plate count (SPC), total psychrophilic aerobic count (PAC), total and fecal coliforms by MPN (Most Probable Number), and sulphite-reducing bacteria (SR). Salmonella spp. was determined. Bacteriological examinations were performed according to the American Public Health Association (APHA) methods, with some adjustments related to the requirements of the study. Statistical analysis of data was performed by Spearman's rank correlation coefficient. The results showed a low variability of pH values and FT, TN and DM percentages between samples, whereas the LTS trend was less noticeable. A significant negative correlation (P<0.01) was observed between pH, TN and LTS amount. The correlation between LTS and TN contents was highly significant (P<0.001). Highly significant and negative was the correlation (P<0.001) between DM, TN and LTS content. SPC mean values were 7.54 × 10^6 CFU/mL; PAC mean values were also high (3.3 × 10^6 CFU/mL). Acceptable values of coagulase-positive staphylococci were found; 3 Staphylococcus aureus and 1 Staphylococcus epidermidis strains were isolated. Coagulase-negative staphylococci counts were low. A high variability in the number of TC, as for FC, was observed; bacterial loads were frequently fairly high. Salmonella spp. and SR bacteria were absent. It was assumed that bacteria from the samples had a prevailing environmental origin.

  7. No Reference Video-Quality-Assessment Model for Monitoring Video Quality of IPTV Services

    Science.gov (United States)

    Yamagishi, Kazuhisa; Okamoto, Jun; Hayashi, Takanori; Takahashi, Akira

    Service providers should monitor the quality of experience of a communication service in real time to confirm its status. To do this, we previously proposed a packet-layer model that can be used for monitoring the average video quality of typical Internet protocol television content using parameters derived from transmitted packet headers. However, it is difficult to monitor the video quality per user using the average video quality because video quality depends on the video content. To accurately monitor the video quality per user, a model that can be used for estimating the video quality per video content, rather than the average video quality, should be developed. Therefore, to take into account the impact of video content on video quality, we propose a model that calculates the difference between the video quality of the estimation-target video and the average video quality estimated using a packet-layer model. We first conducted extensive subjective quality assessments for different codecs and video sequences. We then modeled their characteristics based on parameters related to compression and packet loss. Finally, we verified the performance of the proposed model by applying it to unknown data sets different from the training data sets used for developing the model.

  8. An assessment of groundwater quality using water quality index in Chennai, Tamil Nadu, India

    Directory of Open Access Journals (Sweden)

    I Nanda Balan

    2012-01-01

    Full Text Available Context: Water, the elixir of life, is a prime natural resource. Due to rapid urbanization in India, the availability and quality of groundwater have been affected. According to the Central Groundwater Board, 80% of Chennai's groundwater has been depleted and any further exploration could lead to salt water ingression. Hence, this study was done to assess the groundwater quality in Chennai city. Aim: To assess the groundwater quality using a water quality index in Chennai city. Materials and Methods: Chennai city was divided into three zones based on the legislative constituency; from these three zones three locations were randomly selected, and nine groundwater samples were collected and analyzed for physiochemical properties. Results: With the exception of a few parameters, most of the water quality parameters were within the accepted standard values of the Bureau of Indian Standards (BIS). Except for pH in a single location of zone 1, none of the parameters exceeded the permissible values for water quality assessment as prescribed by the BIS. Conclusion: This study demonstrated that in general the groundwater quality status of Chennai city ranged from excellent to good and the groundwater is fit for human consumption based on all nine parameters of the water quality index and fluoride content.
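
    The record does not state which WQI formulation was used; the sketch below uses one common weighted arithmetic variant (weights proportional to 1/standard, sub-index 100·C_i/S_i, ignoring ideal values for simplicity), with invented parameters and measurements:

```python
# Hedged weighted arithmetic WQI sketch; data and standards are examples.
params = {          # parameter: (measured C_i, standard S_i)
    "pH":       (7.4, 8.5),
    "TDS mg/L": (420, 500),
    "Cl mg/L":  (180, 250),
    "F mg/L":   (0.8, 1.0),
}

weights = {k: 1.0 / s for k, (_, s) in params.items()}   # W_i proportional to 1/S_i
wqi = (sum(weights[k] * 100 * c / s for k, (c, s) in params.items())
       / sum(weights.values()))
print(f"WQI = {wqi:.1f} (lower is better on this scale)")
```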

  9. Constructing Assessment Model of Primary and Secondary Educational Quality with Talent Quality as the Core Standard

    Science.gov (United States)

    Chen, Benyou

    2014-01-01

    Quality is the core of education and is important to the standardization of primary and secondary education in urban (U) and rural (R) areas. The ultimate goal of the integration of urban and rural education is the pursuit of quality urban and rural education. Based on analysing the related policy basis and the existing assessment models…

  10. Metrics for Evaluating the Accuracy of Solar Power Forecasting: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, J.; Hodge, B. M.; Florita, A.; Lu, S.; Hamann, H. F.; Banunarayanan, V.

    2013-10-01

    Forecasting solar energy generation is a challenging task due to the variety of solar power systems and weather regimes encountered. Forecast inaccuracies can result in substantial economic losses and power system reliability issues. This paper presents a suite of generally applicable and value-based metrics for solar forecasting for a comprehensive set of scenarios (i.e., different time horizons, geographic locations, applications, etc.). In addition, a comprehensive framework is developed to analyze the sensitivity of the proposed metrics to three types of solar forecasting improvements using a design of experiments methodology, in conjunction with response surface and sensitivity analysis methods. The results show that the developed metrics can efficiently evaluate the quality of solar forecasts, and assess the economic and reliability impact of improved solar forecasting.
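
    The paper's full metric suite is not reproduced here; the sketch below computes a few of the standard errors such suites build on (RMSE, MAE, MBE, normalized by plant capacity), with an invented forecast series:

```python
import numpy as np

# Invented hourly generation series for illustration only.
actual = np.array([0.0, 1.2, 3.5, 5.1, 4.8, 2.2, 0.1])    # MW
forecast = np.array([0.0, 1.0, 3.9, 4.6, 5.2, 2.6, 0.0])  # MW
capacity = 6.0                                             # MW

err = forecast - actual
rmse = np.sqrt(np.mean(err**2))
mae = np.mean(np.abs(err))
mbe = np.mean(err)                  # sign shows over-/under-forecasting
print(f"RMSE {rmse:.2f} MW ({rmse / capacity:.1%} of capacity), "
      f"MAE {mae:.2f} MW, MBE {mbe:+.2f} MW")
```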

  11. Quantifying landscape pattern and assessing the land cover changes in Piatra Craiului National Park and Bucegi Natural Park, Romania, using satellite imagery and landscape metrics.

    Science.gov (United States)

    Vorovencii, Iosif

    2015-11-01

    Protected areas of Romania have enjoyed particular importance after 1989, but, at the same time, they were subject to different anthropogenic and natural pressures which resulted in the occurrence of land cover changes. These changes have generally led to landscape degradation inside and at the borders of the protected areas. In this article, 12 landscape metrics were used in order to quantify landscape pattern and assess land cover changes in two protected areas, Piatra Craiului National Park (PCNP) and Bucegi Natural Park (BNP). The landscape metrics were obtained from land cover maps derived from Landsat Thematic Mapper (TM) and Landsat Enhanced Thematic Mapper Plus (ETM+) images from 1987, 1993, 2000, 2009 and 2010. Three land cover classes were analysed in PCNP and five land cover map classes in BNP. The results show a landscape fragmentation trend for both parks, affecting different types of land covers. Between 1987 and 2010, in PCNP fragmentation was, in principle, the result not only of anthropogenic activities such as forest cuttings and illegal logging but also of natural causes. In BNP, between 1987 and 2009, the fragmentation affected the pasture which resulted in the occurrence of bare land and rocky areas because of the erosion on the Bucegi Plateau. PMID:26476552

  12. Quality assessment in competency based physiotherapy education

    DEFF Research Database (Denmark)

    Brandt, Jørgen

    2012-01-01

    monitor and improve didactics and teaching methods in alignment with these competencies. Description: This competence based assessment model for education is built on a combination of three curriculum types (Glatthorn, 1987), 4 levels of evaluation (Kirkpatrick, 1998) and single and double loop learning...... connection is evaluated according to the learning level. One perspective of the learning level concerns tests and exams, where the student is being evaluated by teachers in formal settings. Another perspective is covered through a process, where the student evaluates herself, by marking her own judgement of......, managers, course coordinators and teachers with information at the level of premises, in relation to the development of the written curriculum and the institutional framework supporting the education. In this way the three curriculum types are interconnected through 4 levels of evaluation and single and...

  13. Assessing wine quality using isotopic methods

    International Nuclear Information System (INIS)

    Full text: The analytical methods used to determine the isotope ratios of deuterium, carbon-13 and oxygen-18 in wines have gained official recognition from the Office International de la Vigne et du Vin (OIV), now the International Organisation of Vine and Wine. The amounts of stable isotopes in water and carbon dioxide from plant organic materials and their distribution in sugar and ethanol molecules are influenced by the geo-climatic conditions of the region, the grape variety and the year of harvest. For wine characterization, to prove the botanical and geographical origin of the raw material, isotopic analysis by continuous flow mass spectrometry (CF-IRMS) has made a significant contribution. This paper emphasizes the results of a study concerning the assessment of water-adulterated wines and of non-grape alcohol and sugar additions at different concentration levels, using the CF-IRMS analytical technique. (authors)

  14. Voice and Speech Quality Perception Assessment and Evaluation

    CERN Document Server

    Jekosch, Ute

    2005-01-01

    Foundations of Voice and Speech Quality Perception starts out with the fundamental question of: "How do listeners perceive voice and speech quality and how can these processes be modeled?" Any quantitative answers require measurements. This is natural for physical quantities but harder to imagine for perceptual measurands. This book approaches the problem by actually identifying major perceptual dimensions of voice and speech quality perception, defining units wherever possible and offering paradigms to position these dimensions into a structural skeleton of perceptual speech and voice quality. The emphasis is placed on voice and speech quality assessment of systems in artificial scenarios. Many scientific fields are involved. This book bridges the gap between two quite diverse fields, engineering and humanities, and establishes the new research area of Voice and Speech Quality Perception.

  15. Assessing the Quality of M-Learning Systems using ISO/IEC 25010

    Directory of Open Access Journals (Sweden)

    Anal Acharya

    2013-09-01

    Full Text Available Mobile learning offers several advantages over other forms of learning, such as ubiquity and idle time utilization. However, for these advantages to be properly realized there should be a check on system quality. Poor-quality systems will invalidate these benefits. Quality estimation in M-learning systems can be broadly classified into two categories: software system quality and learning characteristics quality. In this work, an M-Learning framework is first developed. Software system quality is then evaluated following the ISO/IEC 25010 software quality model by proposing a set of metrics which measure the characteristics of M-Learning systems. The application of these metrics is then illustrated numerically.

  16. Assessing the quality of a student-generated question repository

    Science.gov (United States)

    Bates, Simon P.; Galloway, Ross K.; Riise, Jonathan; Homer, Danny

    2014-12-01

    We present results from a study that categorizes and assesses the quality of questions and explanations authored by students in question repositories produced as part of the summative assessment in introductory physics courses over two academic sessions. Mapping question quality onto the levels in the cognitive domain of Bloom's taxonomy, we find that students produce questions of high quality. More than three-quarters of questions fall into categories beyond simple recall, in contrast to similar studies of student-authored content in different subject domains. Similarly, the quality of student-authored explanations for questions was also high, with approximately 60% of all explanations classified as being of high or outstanding quality. Overall, 75% of questions met combined quality criteria, which we hypothesize is due in part to the in-class scaffolding activities that we provided for students ahead of requiring them to author questions. This work presents the first systematic investigation into the quality of student produced assessment material in an introductory physics context, and thus complements and extends related studies in other disciplines.

  17. Germination tests for assessing biochar quality.

    Science.gov (United States)

    Rogovska, N; Laird, D; Cruse, R M; Trabue, S; Heaton, E

    2012-01-01

    Definition, analysis, and certification of biochar quality are crucial to the agronomic acceptance of biochar. While most biochars have a positive impact on plant growth, some may have adverse effects due to the presence of phytotoxic compounds. Conversely, some biochars may have the ability to adsorb and neutralize natural phytotoxic compounds found in soil. We evaluated the effects of biochars on seedling growth and absorption of allelochemicals present in corn (Zea mays L.) residues. Corn seeds were germinated in aqueous extracts of six biochars produced from varied feedstocks, thermochemical processes, and temperatures. Percent germination and shoot and radicle lengths were evaluated at the end of the germination period. Extracts from the six biochars had no effect on percent germination; however, extracts from three biochars produced at high conversion temperatures significantly inhibited shoot growth by an average of 16% relative to deionized (DI) water. Polycyclic aromatic hydrocarbons detected in the aqueous extracts are believed to be at least partly responsible for the reduction in seedling growth. Repeated leaching of biochars before extract preparation eliminated the negative effects on seedling growth. Biochars differ significantly in their capacity to adsorb allelochemicals present in corn residues. Germination of corn seeds in extracts of corn residue showed 94% suppression of radicle growth compared to those exposed to DI water; however, incubation of corn residue extracts with leached biochar for 24 h before initiating the germination test increased radicle length 6 to 12 times compared to the corn residue extract treatments. Germination tests appear to be a reliable procedure to differentiate between effects of different types of biochar on corn seedling growth. PMID:22751043

  18. Web metrics for library and information professionals

    CERN Document Server

    Stuart, David

    2014-01-01

    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional.Th...

  19. An information theoretic approach for privacy metrics

    Directory of Open Access Journals (Sweden)

    Michele Bezzi

    2010-12-01

    Full Text Available Organizations often need to release microdata without revealing sensitive information. To this scope, data are anonymized and, to assess the quality of the process, various privacy metrics have been proposed, such as k-anonymity, l-diversity, and t-closeness. These metrics are able to capture different aspects of the disclosure risk, imposing minimal requirements on the association of an individual with the sensitive attributes. If we want to combine them in an optimization problem, we need a common framework able to express all these privacy conditions. Previous studies proposed the notion of mutual information to measure the different kinds of disclosure risks and the utility, but, since mutual information is an average quantity, it is not able to completely express these conditions on single records. We introduce here the notion of one-symbol information (i.e., the contribution to mutual information by a single record), which allows one to express and compare the disclosure risk metrics. In addition, we obtain a relation between the risk values t and l, which can be used for parameter setting. We also show, by numerical experiments, how l-diversity and t-closeness can be represented in terms of two different, but equally acceptable, conditions on the information gain.
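
    A toy illustration of the one-symbol information idea: for each quasi-identifier value x, the contribution i(x) is the Kullback-Leibler divergence between the sensitive-attribute distribution inside x's group and the overall distribution, and mutual information is the average of i(x) over x. The microdata table below is invented:

```python
import math
from collections import Counter

# (quasi-identifier, sensitive attribute) pairs; invented example data
records = [("30s", "flu"), ("30s", "flu"), ("30s", "cold"),
           ("40s", "cancer"), ("40s", "cancer"), ("40s", "cancer")]

py = Counter(y for _, y in records)     # overall sensitive distribution
n = len(records)
groups = {}
for x, y in records:
    groups.setdefault(x, []).append(y)

mi = 0.0
for x, ys in groups.items():
    pyx = Counter(ys)
    i_x = sum(c / len(ys) * math.log2((c / len(ys)) / (py[y] / n))
              for y, c in pyx.items())
    print(f"i({x!r}) = {i_x:.3f} bits")  # per-group disclosure risk
    mi += len(ys) / n * i_x
print(f"mutual information = {mi:.3f} bits")
```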

  20. Human Variome Project Quality Assessment Criteria for Variation Databases.

    Science.gov (United States)

    Vihinen, Mauno; Hancock, John M; Maglott, Donna R; Landrum, Melissa J; Schaafsma, Gerard C P; Taschner, Peter

    2016-06-01

    Numerous databases containing information about DNA, RNA, and protein variations are available. Gene-specific variant databases (locus-specific variation databases, LSDBs) are typically curated and maintained for single genes or groups of genes for a certain disease(s). These databases are widely considered as the most reliable information source for a particular gene/protein/disease, but it should also be made clear they may have widely varying contents, infrastructure, and quality. Quality is very important to evaluate because these databases may affect health decision-making, research, and clinical practice. The Human Variome Project (HVP) established a Working Group for Variant Database Quality Assessment. The basic principle was to develop a simple system that nevertheless provides a good overview of the quality of a database. The HVP quality evaluation criteria that resulted are divided into four main components: data quality, technical quality, accessibility, and timeliness. This report elaborates on the developed quality criteria and how implementation of the quality scheme can be achieved. Examples are provided for the current status of the quality items in two different databases, BTKbase, an LSDB, and ClinVar, a central archive of submissions about variants and their clinical significance. PMID:26919176

  1. Perceptual metrics and visualization tools for evaluation of page uniformity

    Science.gov (United States)

    Nguyen, Minh Q.; Jessome, Renee; Astling, Steve; Maggard, Eric; Nelson, Terry; Shaw, Mark; Allebach, Jan P.

    2014-01-01

    Uniformity is one of the issues of most critical concern for laser electrophotographic (EP) printers. Typically, full-coverage constant-tint test pages are printed to assess uniformity. Exemplary nonuniformity defects include mottle, grain, pinholes, and "finger prints". It is a real challenge to make an overall Print Quality (PQ) assessment due to the large coverage of a letter-size, constant-tint printed test page and the variety of possible nonuniformity defects. In this paper, we propose a novel method that uses a block-based technique to analyze the page both visually and metrically. We use a grid of 150 pixels × 150 pixels (¼ inch × ¼ inch at 600-dpi resolution) square blocks throughout the scanned page. For each block, we examine two aspects: the behavior of its pixels within the block (metrics of graininess) and the behavior of the blocks within the printed page (metrics of nonuniformity). Both ΔE (CIE 1976) and the L* lightness channel are employed. For an input scanned page, we create eight visual outputs, each displaying a different aspect of nonuniformity. To apply machine learning, we train scanned pages of different 100% solid colors separately with the support vector machine (SVM) algorithm. We use two metrics as features for the SVM: average dispersion of page lightness and standard deviation in dispersion of page lightness. Our results show that we can predict, with 83% to 90% accuracy, the assignment by a print quality expert of one of two grades of uniformity in the print.
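
    A sketch of the two SVM features described above: the scan is tiled into 150 × 150-pixel blocks, a lightness dispersion is computed per block, and the page-level features are the mean and standard deviation of the block dispersions. The synthetic "scan" below is invented, and a real pipeline would feed these features to an SVM (e.g., scikit-learn's sklearn.svm.SVC):

```python
import numpy as np

def page_uniformity_features(lightness, block=150):
    disp = []
    for i in range(0, lightness.shape[0] - block + 1, block):
        for j in range(0, lightness.shape[1] - block + 1, block):
            disp.append(lightness[i:i+block, j:j+block].std())
    disp = np.array(disp)           # per-block graininess
    return disp.mean(), disp.std()  # page-level nonuniformity features

rng = np.random.default_rng(4)
page = 50 + rng.normal(0, 1.5, (1200, 900))  # fairly uniform tint
page[600:, :] += np.linspace(0, 4, 900)      # mottle-like gradient
mean_disp, std_disp = page_uniformity_features(page)
print(f"features: mean dispersion {mean_disp:.2f}, std {std_disp:.2f}")
```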

  2. Identification of Nominated Classes for Software Refactoring Using Object-Oriented Cohesion Metrics

    Directory of Open Access Journals (Sweden)

    Safwat M. Ibrahim

    2012-03-01

    Full Text Available The production of well-developed software reduces the cost of software maintenance. Therefore, many software metrics have been developed to measure the quality of software design. Measuring class cohesion is considered one of the most important software quality measurements. Unfortunately, most of the approaches that have been proposed for cohesion metrics do not consider inherited attributes and methods in measuring class cohesion. This paper provides a novel assessment criterion for measuring the quality of a software design. In this context, inherited attributes and methods are considered in the assessment. This offers a guideline for choosing the proper Depth of Inheritance Tree (DIT) that identifies the classes nominated for refactoring. Experiments are carried out on more than 35K classes from more than 16 open-source projects using the most widely used cohesion metrics.
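
    The paper's exact metric is not given in this record; the sketch below shows one plausible way a cohesion measure can account for inherited attributes, by merging ancestor attribute sets before counting method pairs that share state. The class model is invented:

```python
from itertools import combinations

# Invented class model: attributes declared per class, parent links,
# and the attributes each method of Derived actually uses.
own_attrs = {"Base": {"x"}, "Derived": {"y"}}
parents = {"Derived": "Base", "Base": None}
methods = {
    "area": {"x", "y"},
    "scale": {"x"},
    "name": set(),
}

def all_attrs(cls):
    attrs = set()
    while cls is not None:          # walk up the inheritance chain
        attrs |= own_attrs[cls]
        cls = parents[cls]
    return attrs

visible = all_attrs("Derived")      # {'x', 'y'}, inherited 'x' included
pairs = list(combinations(methods, 2))
shared = sum(bool(methods[a] & methods[b] & visible) for a, b in pairs)
print(f"cohesion = {shared}/{len(pairs)} method pairs share state")
```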

  3. Non-human biota dose assessment. Sensitivity analysis and knowledge quality assessment

    International Nuclear Information System (INIS)

    This report provides a summary of a programme of work, commissioned within the BIOPROTA collaborative forum, to assess the quantitative and qualitative elements of uncertainty associated with biota dose assessment of potential impacts of long-term releases from geological disposal facilities (GDF). Quantitative and qualitative aspects of uncertainty were determined through sensitivity and knowledge quality assessments, respectively. Both assessments focused on default assessment parameters within the ERICA assessment approach. The sensitivity analysis was conducted within the EIKOS sensitivity analysis software tool and was run in both generic and test case modes. The knowledge quality assessment involved development of a questionnaire around the ERICA assessment approach, which was distributed to a range of experts in the fields of non-human biota dose assessment and radioactive waste disposal assessments. Combined, these assessments enabled critical model features and parameters that are both sensitive (i.e. have a large influence on model output) and of low knowledge quality to be identified for each of the three test cases. The output of this project is intended to provide information on those parameters that may need to be considered in more detail for prospective site-specific biota dose assessments for GDFs. Such information should help users to enhance the quality of their assessments and build greater confidence in the results. (orig.)

  4. An Approach for Assessing the Signature Quality of Various Chemical Assays when Predicting the Culture Media Used to Grow Microorganisms

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, Aimee E.; Sego, Landon H.; Webb-Robertson, Bobbie-Jo M.; Kreuzer, Helen W.; Anderson, Richard M.; Unwin, Stephen D.; Weimar, Mark R.; Tardiff, Mark F.; Corley, Courtney D.

    2013-02-01

    We demonstrate an approach for assessing the quality of a signature system designed to predict the culture medium used to grow a microorganism. The system was comprised of four chemical assays designed to identify various ingredients that could be used to produce the culture medium. The analytical measurements resulting from any combination of these four assays can be used in a Bayesian network to predict the probabilities that the microorganism was grown using one of eleven culture media. We evaluated combinations of the signature system by removing one or more of the assays from the Bayes network. We measured and compared the quality of the various Bayes nets in terms of fidelity, cost, risk, and utility, a method we refer to as Signature Quality Metrics.
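
    A toy Bayesian update of the kind such a signature system performs: given binary assay outcomes, compute the posterior over candidate culture media. The media, assays, and likelihoods below are invented (the real system uses four assays over eleven media); dropping a key from `observed` mimics the paper's removal of an assay from the network:

```python
# Invented priors and likelihoods for illustration only.
media_prior = {"LB": 1/3, "TSB": 1/3, "BHI": 1/3}
likelihood = {  # P(assay positive | medium)
    "yeast_extract": {"LB": 0.95, "TSB": 0.20, "BHI": 0.60},
    "glucose":       {"LB": 0.10, "TSB": 0.90, "BHI": 0.80},
}
observed = {"yeast_extract": True, "glucose": False}

post = dict(media_prior)
for assay, result in observed.items():
    for m in post:
        p = likelihood[assay][m]
        post[m] *= p if result else (1 - p)
z = sum(post.values())  # normalize
for m, v in sorted(post.items(), key=lambda kv: -kv[1]):
    print(f"P({m} | assays) = {v / z:.2f}")
```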

  5. Self-Organizing Maps for Fingerprint Image Quality Assessment

    DEFF Research Database (Denmark)

    Olsen, Martin Aastrup; Tabassi, Elham; Makarov, Anton;

    2013-01-01

    for a quality assessment algorithm is to meet the low computational complexity requirement of mobile platforms used in national biometric systems, by military and police forces. We propose a computationally efficient means of predicting biometric performance based on a combination of unsupervised and......Fingerprint quality assessment is a crucial task which needs to be conducted accurately in various phases in the biometric enrolment and recognition processes. Neglecting quality measurement will adversely impact accuracy and efficiency of biometric recognition systems (e.g. verification and...... identification of individuals). Measuring and reporting quality allows processing enhancements to increase probability of detection and track accuracy while decreasing probability of false alarms. Aside from predictive capabilities with respect to the recognition performance, another important design criteria...

  6. Quantifying subjective assessment of sleep quality, quality of life and depressed mood in children with enuresis

    OpenAIRE

    Üçer, Oktay; Gümüş, Bilal

    2013-01-01

    Aim: The aim of this study was to compare a group of children who have monosymptomatic nocturnal enuresis (MNE) with a healthy control group by assessing their depression scales, quality of life and sleep quality. Methods: One hundred and one children with MNE and 38 healthy controls, aged between 8 and 16 years, were included in the study. All participants completed the Pediatric Quality of Life Inventory (PedsQL 4.0), the Depression Scale for Children (CES-DC) and The Pittsburgh Sleep Quality I...

  7. No-Reference Video Quality Assessment using Codec Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2015-01-01

    A no-reference video quality assessment (VQA) method is presented for videos distorted by H.264/AVC and MPEG-2. The assessment is performed without access to the bit-stream. Instead we analyze and estimate coefficients based on decoded pixels. The approach involves distinguishing between the two...... types of videos, estimating the level of quantization used in the I-frames, and exploiting this information to assess the video quality. In order to do this for H.264/AVC, the distribution of the DCT-coefficients after intra-prediction and deblocking are modeled. To obtain VQA features for H.264/AVC, we...... propose a novel estimation method of the quantization in H.264/AVC videos without bitstream access, which can also be used for Peak Signalto-Noise Ratio (PSNR) estimation. The results from the MPEG-2 and H.264/AVC analysis are mapped to a perceptual measure of video quality by Support Vector Regression...

  8. Medical education quality assessment. Perspectives in University Policlinic context.

    Directory of Open Access Journals (Sweden)

    Maricel Castellanos González

    2008-08-01

    Full Text Available Quality has currently a central role within our National Health System, particularly in the formative process of human resources where we need professionals more prepared every day and ready to face complex tasks. We make a bibliographic review related to quality assessment of educational process in health system to analyze the perspectives of the new model of University Policlinic, formative context of Medical Sciences students.

  9. Quality assessment of user-generated video using camera motion

    OpenAIRE

    Guo, Jinlin; Gurrin, Cathal; Hopfgartner, Frank; Zhang, ZhenXing; Lao, Songyang

    2013-01-01

    With user-generated video (UGV) becoming so popular on the Web, the availability of a reliable quality assessment (QA) measure of UGV is necessary for improving users' quality of experience in video-based applications. In this paper, we explore QA of UGV based on how much irregular camera motion it contains, in a low-cost manner. A block-match-based optical flow approach has been employed to extract camera motion features in UGV, based on which irregular camera motion is calculated and ...

  10. System Change: Quality Assessment and Improvement for Medicaid Managed Care

    OpenAIRE

    Smith, Wally R.; Cotter, J. James; Louis F Rossiter

    1996-01-01

    Rising Medicaid health expenditures have hastened the development of State managed care programs. Methods to monitor and improve health care under Medicaid are changing. Under fee-for-service (FFS), the primary concern was to avoid overutilization. Under managed care, it is to avoid underutilization. Quality enhancement thus moves from addressing inefficiency to addressing insufficiency of care. This article presents a case study of Virginia's redesign of Quality Assessment and Improvement (Q...

  11. Fish welfare and quality assessment by conventional and innovative methods.

    OpenAIRE

    Anna Concollato

    2015-01-01

    The overall aim of my research was, on one side, to investigate the possibility of using rapid and nondestructive methods for the determination of fish fillet quality and their classification, and on the other side, to find out the stunning/slaughtering method able to guarantee minimal stress, or to avoid stress conditions completely, at the moment immediately prior to the slaughtering process, by assessing the effects on fillet quality by conventional and innovative methods, from two ...

  12. Medical education quality assessment. Perspectives in University Policlinic context.

    OpenAIRE

    Maricel Castellanos González; Jorge Cañellas Granda; Iraldo Mir Ocampo; Miguel Aguila Toledo

    2008-01-01

    Quality currently has a central role within our National Health System, particularly in the formative process of human resources, where we need professionals who are better prepared every day and ready to face complex tasks. We present a bibliographic review related to quality assessment of the educational process in the health system, to analyze the perspectives of the new model of the University Policlinic, the formative context of Medical Sciences students.

  13. Assessment of Air Quality Status in Wuhan, China

    OpenAIRE

    Jiabei Song; Wu Guang; Linjun Li; Rongbiao Xiang

    2016-01-01

    In this study, air quality characteristics in Wuhan were assessed through descriptive statistics and Hierarchical Cluster Analysis (HCA). Results show that air quality has slightly improved over recent years. While the NO2 concentration is still increasing, the PM10 concentration shows a clear downward trend with some small fluctuations. In addition, the SO2 concentration has steadily decreased since 2008. Nevertheless, the current level of air pollutants is still quite high, with the P...

  14. Assessment of spatial audio quality based on sound attributes

    OpenAIRE

    LE BAGOUSSE, Sarah; Paquier, Mathieu; Colomes, Catherine

    2012-01-01

    International audience Spatial audio technologies have become very important in audio broadcast services. But there is a lack of methods for evaluating spatial audio quality. Standards do not take into account the spatial dimension of sound, and assessments are limited to overall quality, particularly in the context of audio coding. Through different elicitation methods, a long list of attributes has been established to characterize sound, but it is difficult to include them in a listening test. ...

  15. Management assessments of Quality Assurance Program implementation effectiveness

    International Nuclear Information System (INIS)

    This paper describes a method currently being used by UNC Nuclear Industries, Richland, Washington, to help assure the effectiveness of Quality Assurance (QA) Program implementation. Assessments are conducted annually by management in each department, and the results summarized to the president and his staff. The purpose of these assessments is to review the adequacy of the department's implementing procedures, training/instruction on implementing procedures, and procedure implementation effectiveness. The primary purpose is to assess effectiveness and take improvement action where the need is indicated. The QA organization provides only general guidance in conducting the assessments

  16. Modeling the Color Image and Video Quality on Liquid Crystal Displays with Backlight Dimming

    OpenAIRE

    Korhonen, Jari; Mantel, Claire; Burini, Nino; Forchhammer, Søren

    2013-01-01

    Objective image and video quality metrics focus mostly on the digital representation of the signal. However, display characteristics are also essential for the overall Quality of Experience (QoE). In this paper, we use a model of a backlight dimming system for a Liquid Crystal Display (LCD) and show how the modeled image can be used as input to quality assessment algorithms. For quality assessment, we propose an image quality metric based on Peak Signal-to-Noise Ratio (PSNR) computation...
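    The paper's metric builds on PSNR with a display model; as a minimal illustration of the underlying PSNR computation only (the standard formula, not the authors' dimming-aware variant), assuming 8-bit images:

      import numpy as np

      def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
          """Peak Signal-to-Noise Ratio in dB between two same-shaped images."""
          mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
          if mse == 0.0:
              return float("inf")  # identical images
          return 10.0 * np.log10(peak ** 2 / mse)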

  17. Sediment quality and ecorisk assessment factors for a major river system

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, V.G. [Westinghouse Hanford Co., Richland, WA (United States); Wagner, J.J. [Pacific Northwest Lab., Richland, WA (United States); Cutshall, N.H. [Oak Ridge National Lab., TN (United States)

    1993-08-01

    Sediment-related water quality and risk assessment parameters for the Columbia River were developed using heavy metal loading and concentration data from Lake Roosevelt (river km 1120) to the mouth and adjacent coastal zone. Correlation of Pb, Zn, Hg, and Cd concentrations in downstream sediments with refinery operations in British Columbia suggests that solutes with Kd's > 10^5 reach about 1 to 5 μg/g per metric ton/year of input. A low suspended load (upriver avg. <10 mg/L) and high particle-surface reactivity account for the high clay-fraction contaminant concentrations. In addition, a sediment exposure path was demonstrated based on analysis of post-shutdown biodynamics of a heavy metal radiotracer. The slow decline in sediment was attributed to resuspension, bioturbation, and anthropogenic disturbances. These findings suggest that conservative sediment quality criteria should be used to restrict additional contaminant loading in the upper drainage basin. The issuance of an advisory for Lake Roosevelt, due in part to Hg accumulation in large sport fish, suggests more restrictive controls are needed. A monitoring strategy for assessing human exposure potential and the ecological health of the river is proposed.
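    Read as a back-of-the-envelope rule (a hypothetical calculation based on the reported ratio, not taken from the paper itself), the loading relationship converts an annual input to an expected clay-fraction concentration range:

      def clay_fraction_conc_range(load_t_per_yr: float) -> tuple[float, float]:
          """Expected clay-fraction concentration (ug/g) for a strongly sorbed
          solute (Kd > 1e5), using the reported 1-5 ug/g per (t/yr) of input."""
          return (1.0 * load_t_per_yr, 5.0 * load_t_per_yr)

      print(clay_fraction_conc_range(10.0))  # 10 t/yr of input -> (10.0, 50.0) ug/g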

  18. Sediment quality and ecorisk assessment factors for a major river system

    International Nuclear Information System (INIS)

    Sediment-related water quality and risk assessment parameters for the Columbia River were developed using heavy metal loading and concentration data from Lake Roosevelt (river km 1120) to the mouth and adjacent coastal zone. Correlation of Pb, Zn, Hg, and Cd concentrations in downstream sediments with refinery operations in British Columbia suggests that solutes with Kd's > 10^5 reach about 1 to 5 μg/g per metric ton/year of input. A low suspended load (upriver avg. <10 mg/L) and high particle-surface reactivity account for the high clay-fraction contaminant concentrations. In addition, a sediment exposure path was demonstrated based on analysis of post-shutdown biodynamics of a heavy metal radiotracer. The slow decline in sediment was attributed to resuspension, bioturbation, and anthropogenic disturbances. These findings suggest that conservative sediment quality criteria should be used to restrict additional contaminant loading in the upper drainage basin. The issuance of an advisory for Lake Roosevelt, due in part to Hg accumulation in large sport fish, suggests more restrictive controls are needed. A monitoring strategy for assessing human exposure potential and the ecological health of the river is proposed.

  19. Quality of life assessment in dogs and cats receiving chemotherapy

    DEFF Research Database (Denmark)

    Vøls, Kåre K.; Heden, Martin A.; Kristensen, Annemarie Thuri;

    2016-01-01

    A comparative analysis of published papers on the effects of chemotherapy on QoL in dogs and cats was conducted. This was supplemented with a comparison of the parameters and domains used in veterinary QoL-assessments with those used in the Pediatric Quality of Life Inventory (PedsQL™) questionnaire designed to assess QoL in toddlers. Each of the identified publications including QoL-assessment in dogs and cats receiving chemotherapy applied a different method of QoL-assessment. In addition, the veterinary QoL-assessments were mainly focused on physical clinical parameters, whereas the emotional (6/11), social (4/11) and role (4/11) domains were less represented. QoL-assessment of cats and dogs receiving chemotherapy is in its infancy. The most commonly reported method to assess QoL was questionnaire based and mostly included physical and clinical parameters. Standardizing and including a complete range...

  20. Recreational stream assessment using Malaysia water quality index

    Science.gov (United States)

    Ibrahim, Hanisah; Kutty, Ahmad Abas

    2013-11-01

    River water quality assessment is crucial for quantifying and monitoring water quality spatially and temporally. Malaysia uses the WQI and NWQS indices to evaluate river water quality; however, studies of recreational river water quality are still scarce. A study was conducted to assess water quality in selected recreational rivers and to determine the impact of recreation on those streams. Three recreational streams, namely Sungai Benus, Sungai Cemperuh and Sungai Luruh in Janda Baik, Pahang, were selected. Five sampling stations were chosen on each river at 200-400 m intervals. Six water quality parameters (BOD5, COD, TSS, pH, ammoniacal nitrogen and dissolved oxygen) were measured. Sampling and analysis were conducted following standard methods prepared by the USEPA. These parameters were used to calculate the water quality sub-indices and, finally, an indicative WQI value using the Malaysia water quality index formula. Results indicate that all three recreational streams have excellent water quality, with WQI values ranging from 89 to 94. Most water quality parameters were homogeneous between sampling sites and between streams. A one-way ANOVA test indicated no significant difference between sub-index values (p > 0.05, α = 0.05). Only BOD and COD exhibited slight variation between stations, likely due to organic domestic waste from visitors. The study demonstrated that the impact of visitors on these recreational streams is minimal and that the streams are suitable for direct-contact recreation.
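    For illustration, the final aggregation step of the Malaysia DOE WQI can be sketched as a weighted sum. The weights below are the commonly published DOE coefficients and should be verified against official documentation; the piecewise best-fit equations that convert raw measurements into the six 0-100 sub-indices are omitted, so sub-index values are taken as inputs:

      def malaysia_wqi(si_do: float, si_bod: float, si_cod: float,
                       si_an: float, si_ss: float, si_ph: float) -> float:
          """Aggregate six sub-index values (each 0-100) into the DOE-Malaysia WQI."""
          return (0.22 * si_do + 0.19 * si_bod + 0.16 * si_cod
                  + 0.15 * si_an + 0.16 * si_ss + 0.12 * si_ph)

      # Hypothetical sub-indices consistent with the "excellent" class reported here
      print(round(malaysia_wqi(95, 92, 90, 94, 96, 98), 1))  # -> 94.0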