WorldWideScience

Sample records for quality metrics utility

  1. Supporting analysis and assessments quality metrics: Utility market sector

    Energy Technology Data Exchange (ETDEWEB)

    Ohi, J. [National Renewable Energy Lab., Golden, CO (United States)

    1996-10-01

    In FY96, NREL was asked to coordinate all analysis tasks so that in FY97 these tasks will be part of an integrated analysis agenda that will begin to define a 5-15 year R&D roadmap and portfolio for the DOE Hydrogen Program. The purpose of the Supporting Analysis and Assessments task at NREL is to provide this coordination and to conduct specific analysis tasks. One of these tasks is to prepare the Quality Metrics (QM) for the Program as part of the overall QM effort at DOE/EERE. The Hydrogen Program is one of 39 program planning units conducting QM, a process begun in FY94 to assess the benefits and costs of DOE/EERE programs. The purpose of QM is to inform decision making during the budget formulation process by describing the expected outcomes of programs during the budget request process. QM is expected to establish a first step toward merit-based budget formulation and to allow DOE/EERE to get the "most bang for its (R&D) buck." In FY96, NREL coordinated a QM team that prepared a preliminary QM for the utility market sector. In the electricity supply sector, the QM analysis shows hydrogen fuel cells capturing 5% (or 22 GW) of the total market of 390 GW of new capacity additions through 2020. Hydrogen consumption in the utility sector increases from 0.009 Quads in 2005 to 0.4 Quads in 2020. Hydrogen fuel cells are projected to displace over 0.6 Quads of primary energy in 2020. In future work, NREL will assess the market for decentralized, on-site generation; develop cost credits for distributed generation benefits (such as deferral of transmission and distribution investments and uninterruptible power service), for by-products such as heat and potable water, and for environmental benefits (reduction of criteria air pollutants and greenhouse gas emissions); compete different fuel cell technologies against each other for market share; and begin to address economic benefits, especially employment.

  2. Beyond metrics? Utilizing 'soft intelligence' for healthcare quality and safety.

    Science.gov (United States)

    Martin, Graham P; McKee, Lorna; Dixon-Woods, Mary

    2015-10-01

    Formal metrics for monitoring the quality and safety of healthcare have a valuable role, but may not, by themselves, yield full insight into the range of fallibilities in organizations. 'Soft intelligence' is usefully understood as the processes and behaviours associated with seeking and interpreting soft data-of the kind that evade easy capture, straightforward classification and simple quantification-to produce forms of knowledge that can provide the basis for intervention. With the aim of examining current and potential practice in relation to soft intelligence, we conducted and analysed 107 in-depth qualitative interviews with senior leaders, including managers and clinicians, involved in healthcare quality and safety in the English National Health Service. We found that participants were in little doubt about the value of softer forms of data, especially for their role in revealing troubling issues that might be obscured by conventional metrics. Their struggles lay in how to access softer data and turn them into a useful form of knowing. Some of the dominant approaches they used risked replicating the limitations of hard, quantitative data. They relied on processes of aggregation and triangulation that prioritised reliability, or on instrumental use of soft data to animate the metrics. The unpredictable, untameable, spontaneous quality of soft data could be lost in efforts to systematize their collection and interpretation to render them more tractable. A more challenging but potentially rewarding approach involved processes and behaviours aimed at disrupting taken-for-granted assumptions about quality, safety, and organizational performance. This approach, which explicitly values the seeking out and the hearing of multiple voices, is consistent with conceptual frameworks of organizational sensemaking and dialogical understandings of knowledge. Using soft intelligence this way can be challenging and discomfiting, but may offer a critical defence against the

  3. Quality Metric Development Framework (qMDF)

    Directory of Open Access Journals (Sweden)

    K. Mustafa

    2005-01-01

    Several object-oriented metrics have been developed and used in conjunction with quality models to predict the overall quality of software. However, it is not enough simply to propose metrics: the fundamental questions are those of their validity, utility and reliability. It is far more significant to be sure that these metrics are really useful, and for that their construct validity must be assured. Good quality metrics must therefore be developed using a sound, foolproof framework or model. A critical review of the literature on attempts in this regard reveals that no standard framework or model is available for such an important activity. This study presents a framework for quality metric development called the quality Metric Development Framework (qMDF), which is prescriptive in nature. qMDF is a general framework, but it has been established especially with object-oriented metrics in mind. qMDF has been implemented to develop a good quality design metric, as a validation of the proposed framework. Finally, it is argued that adoption of qMDF by metric developers would yield good quality metrics, while ensuring their construct validity, utility and reliability, at reduced development effort.

  4. Quality through metrics.

    Science.gov (United States)

    Frederick, L; Kallal, T; Krook, H

    1999-01-01

    The Quality Assurance Unit analyzed 18 months of departmental data regarding the report-audit cycle. Process mapping was utilized to identify milestones in the cycle for measurement. Five milestones were identified in the audit cycle, as follows: (1) time from report receipt in quality assurance to start of audit, (2) total calendar days to audit a report, (3) actual person-hours to perform a report audit, (4) time from completion of audit to issuance of report, and (5) total time a report is in quality assurance. An interrelationship digraph is a quality tool that is used to identify what activities impact the overall report-auditing process. Once the data collection procedure is defined, a spreadsheet is constructed that captures the data. The resulting information is presented in time charts and bar graphs to visually aid in interpretation and analysis. Using these quality tools and statistical analyses, the Quality Assurance Unit identified areas needing improvement and confirmed or dispelled previous assumptions regarding the report-auditing process. By mapping, measuring, analyzing, and displaying the data, the overall process was examined critically. This resulted in the identification of areas needing improvement and a greater understanding of the report-audit cycle. A further benefit from our increased knowledge was the ability to explain our findings objectively to our client groups. This sharing of information gave impetus to our clients to examine their report-generation process and to make improvements.

  5. Software Quality Metrics

    Science.gov (United States)

    1991-07-01


  6. Quality Metrics in Inpatient Neurology.

    Science.gov (United States)

    Dhand, Amar

    2015-12-01

    Quality of care in the context of inpatient neurology is the standard of performance by neurologists and the hospital system as measured against ideal models of care. There are growing regulatory pressures to define health care value through concrete quantifiable metrics linked to reimbursement. Theoretical models of quality acknowledge its multimodal character with quantitative and qualitative dimensions. For example, the Donabedian model distils quality as a phenomenon of three interconnected domains, structure-process-outcome, with each domain mutually influential. The actual measurement of quality may be implicit, as in peer review in morbidity and mortality rounds, or explicit, in which criteria are prespecified and systemized before assessment. As a practical contribution, in this article a set of candidate quality indicators for inpatient neurology based on an updated review of treatment guidelines is proposed. These quality indicators may serve as an initial blueprint for explicit quality metrics long overdue for inpatient neurology.

  7. Enforcing Quality Metrics over Equipment Utilization Rates as Means to Reduce Centers for Medicare and Medicaid Services Imaging Costs and Improve Quality of Care

    Directory of Open Access Journals (Sweden)

    Amit Sura

    2011-01-01

    On examining quality metrics, such as appropriateness criteria and pre-authorization, promising results have ensued. The development and enforcement of appropriateness criteria lowers overutilization of studies without requiring unattainable fixed rates. Pre-authorization educates ordering physicians as to when imaging is indicated.

  8. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    Science.gov (United States)

    Post, J. V.

    1981-01-01

    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  9. THE QUALITY METRICS OF INFORMATION SYSTEMS

    Directory of Open Access Journals (Sweden)

    Zora Arsovski

    2008-06-01

    An information system is a special kind of product that depends upon a great number of variables related to its nature, the conditions during implementation, and organizational climate and culture. Quality metrics of information systems (QMIS) therefore have to reflect all of these aspects. In this paper, the basic elements of QMIS are presented: characteristics of implementation and operation metrics for IS, team-management quality metrics for IS, and organizational aspects of quality metrics. The second part of this paper presents the results of a study of QMIS in the area of MIS (Management IS).

  10. Program for implementing software quality metrics

    Energy Technology Data Exchange (ETDEWEB)

    Yule, H.P.; Riemer, C.A.

    1992-04-01

    This report describes a program by which the Veterans Benefit Administration (VBA) can implement metrics to measure the performance of automated data systems and demonstrate that they are improving over time. It provides a definition of quality, particularly with regard to software. Requirements for management and staff to achieve a successful metrics program are discussed. It lists the attributes of high-quality software, then describes the metrics or calculations that can be used to measure these attributes in a particular system. Case studies of some successful metrics programs used by business are presented. The report ends with suggestions on which metrics the VBA should use and the order in which they should be implemented.

  11. Assessing Software Quality Through Visualised Cohesion Metrics

    Directory of Open Access Journals (Sweden)

    Timothy Shih

    2001-05-01

    Cohesion is one of the most important factors for software quality, as well as for maintainability, reliability and reusability. Module cohesion is defined as a quality attribute that seeks to measure the singleness of purpose of a module. Modules of poor quality can be a serious obstacle to system quality. In order to design software of good quality, software managers and engineers need to introduce cohesion metrics to measure and produce desirable software. Highly cohesive software is thought to be a desirable construction. In this paper, we propose function-oriented cohesion metrics based on the analysis of live variables, live span and the visualization of a processing-element dependency graph. We give six typical cohesion examples as our experiments and justification. The result is a well-defined, well-normalized, well-visualized and well-experimented set of cohesion metrics that indicates, and thus helps enhance, software cohesion strength. Furthermore, these cohesion metrics can easily be incorporated into software CASE tools to help software engineers improve software quality.

  12. How to evaluate objective video quality metrics reliably

    DEFF Research Database (Denmark)

    Korhonen, Jari; Burini, Nino; You, Junyong

    2012-01-01

    The typical procedure for evaluating the performance of different objective quality metrics and indices involves comparisons between subjective quality ratings and the quality indices obtained using the objective metrics in question on the known video sequences. Several correlation indicators can...
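    At its core, the evaluation procedure described above reduces to computing correlation indicators between subjective ratings and metric outputs. A minimal sketch of that computation, with made-up scores and using SciPy, might look like this:

        import numpy as np
        from scipy import stats

        # Hypothetical data: mean opinion scores (MOS) from a subjective test
        # and the indices produced by an objective quality metric on the same
        # video sequences.
        mos = np.array([4.5, 3.8, 2.1, 1.5, 3.2, 4.0, 2.7])
        metric = np.array([0.92, 0.85, 0.40, 0.31, 0.66, 0.88, 0.55])

        # Pearson's r measures linear agreement; Spearman's rho measures
        # monotonic (rank-order) agreement, which tolerates a nonlinear
        # mapping between the metric scale and the MOS scale.
        plcc, _ = stats.pearsonr(mos, metric)
        srocc, _ = stats.spearmanr(mos, metric)
        print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")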

  13. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking of laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing. © 2013.

  14. An equivalent relative utility metric for evaluating screening mammography.

    Science.gov (United States)

    Abbey, Craig K; Eckstein, Miguel P; Boone, John M

    2010-01-01

    Comparative studies of performance in screening mammography are often ambiguous. A new method will frequently show a higher sensitivity or detection rate than an existing standard with a concomitant increase in false positives or recalls. The authors propose an equivalent relative utility (ERU) metric based on signal detection theory to quantify screening performance in such comparisons. The metric is defined as the relative utility, as defined in classical signal detection theory, needed to make 2 systems equivalent. ERU avoids the problem of requiring a predefined putative relative utility, which has limited application of utility theory in receiver operating characteristic analysis. The metric can be readily estimated from recall and detection rates commonly reported in comparative clinical studies. An important practical advantage of ERU is that in prevalence matched populations, the measure can be estimated without an independent estimate of disease prevalence. Thus estimating ERU does not require a study with long-term follow-up to find cases of missed disease. The approach is applicable to any comparative screening study that reports results in terms of recall and detection rates, although the authors focus exclusively on screening mammography in this work. They derive the ERU from the definition of utility given in classical treatments of signal detection theory. They also investigate reasonable values of relative utility in screening mammography for use in interpreting ERU using data from a large clinical study. As examples of application of ERU, they reanalyze 2 recently published reports using recall and detection rates in screening mammography.
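    The abstract's claim that ERU can be estimated from recall and detection rates alone, without knowing prevalence, can be made concrete with a little expected-utility algebra. The sketch below is our reconstruction from the definitions given above, not code from the paper: equating the expected utilities of two systems gives p*dTPF*(U_TP - U_FN) = (1-p)*dFPF*(U_TN - U_FP); substituting detection rate D = p*TPF and recall rate R = p*TPF + (1-p)*FPF makes the prevalence p cancel.

        def equivalent_relative_utility(recall_1, detect_1, recall_2, detect_2):
            """ERU = (dR - dD) / dD, where dR and dD are the changes in recall
            rate and detection rate between the two screening systems.
            Illustrative reconstruction; see the assumptions stated above."""
            d_detect = detect_2 - detect_1
            d_recall = recall_2 - recall_1
            return (d_recall - d_detect) / d_detect

        # Hypothetical rates per woman screened: the second system finds one
        # extra cancer per 1000 screens at a cost of nine extra recalls
        # (eight of them false positives).
        print(equivalent_relative_utility(0.080, 0.004, 0.089, 0.005))  # -> 8.0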

  15. Quality metrics currently used in academic radiology departments: results of the QUALMET survey.

    Science.gov (United States)

    Walker, Eric A; Petscavage-Thomas, Jonelle M; Fotos, Joseph S; Bruno, Michael A

    2017-03-01

    We present the results of the 2015 quality metrics (QUALMET) survey, which was designed to assess the commonalities and variability of selected quality and productivity metrics currently employed by a large sample of academic radiology departments representing all regions of the USA. The survey of key radiology metrics was distributed in March-April of 2015 via personal e-mail to 112 academic radiology departments. There was a 34.8% institutional response rate. We found that most academic departments of radiology commonly utilize metrics of hand hygiene, report turnaround time (RTAT), relative value unit (RVU) productivity, patient satisfaction and participation in peer review. RTAT targets were found to vary widely. The implementation of radiology peer review, the variety of ways in which peer review results are used within academic radiology departments, the use of clinical decision support tools, and requirements for radiologist participation in Maintenance of Certification also varied. Policies for hand hygiene and critical results communication were very similar across all institutions reporting, and most departments utilized some form of missed case/difficult case conference as part of their quality and safety programme, as well as some form of periodic radiologist performance review. Results of the QUALMET survey suggest many similarities in the tracking and utilization of the selected quality and productivity metrics. Use of quality indicators is not a fully standardized process among academic radiology departments. Advances in knowledge: This article examines current quality and productivity metrics in academic radiology.

  16. A psychovisual quality metric in free-energy principle.

    Science.gov (United States)

    Zhai, Guangtao; Wu, Xiaolin; Yang, Xiaokang; Lin, Weisi; Zhang, Wenjun

    2012-01-01

    In this paper, we propose a new psychovisual quality metric of images based on recent developments in brain theory and neuroscience, particularly the free-energy principle. The perception and understanding of an image is modeled as an active inference process, in which the brain tries to explain the scene using an internal generative model. The psychovisual quality is thus closely related to how accurately visual sensory data can be explained by the generative model, and the upper bound of the discrepancy between the image signal and its best internal description is given by the free energy of the cognition process. Therefore, the perceptual quality of an image can be quantified using the free energy. Constructively, we develop a reduced-reference free-energy-based distortion metric (FEDM) and a no-reference free-energy-based quality metric (NFEQM). The FEDM and the NFEQM are nearly invariant to many global systematic deviations in geometry and illumination that hardly affect visual quality, for which existing image quality metrics wrongly predict severe quality degradation. Although they use very limited or even no information about the reference image, the FEDM and the NFEQM are highly competitive compared with the full-reference SSIM image quality metric on images in the popular LIVE database. Moreover, FEDM and NFEQM can correctly measure the visual quality of some model-based image processing algorithms, for which the competing metrics often contradict viewers' opinions.

  17. Utility of different glycemic control metrics for optimizing management of diabetes

    Institute of Scientific and Technical Information of China (English)

    Klaus-Dieter Kohnert; Peter Heinke; Lutz Vogt; Eckhard Salzsieder

    2015-01-01

    The benchmark for assessing quality of long-term glycemic control and adjustment of therapy is currently glycated hemoglobin (HbA1c). Despite its importance as an indicator for the development of diabetic complications, recent studies have revealed that this metric has some limitations; it conveys a rather complex message, which has to be taken into consideration for diabetes screening and treatment. On the basis of recent clinical trials, the relationship between HbA1c and cardiovascular outcomes in long-standing diabetes has been called into question. It becomes obvious that other surrogate markers and biomarkers are needed to better predict cardiovascular diabetes complications and assess efficiency of therapy. Glycated albumin, fructosamine, and 1,5-anhydroglucitol have received growing interest as alternative markers of glycemic control. In addition to measures of hyperglycemia, advanced glucose monitoring methods have become available. An indispensable adjunct to HbA1c in routine diabetes care is self-monitoring of blood glucose. This monitoring method is now widely used, as it provides immediate feedback to patients on short-term changes, involving fasting, preprandial, and postprandial glucose levels. Beyond the traditional metrics, glycemic variability has been identified as a predictor of hypoglycemia, and it might also be implicated in the pathogenesis of vascular diabetes complications. Assessment of glycemic variability is thus important, but exact quantification requires frequently sampled glucose measurements. In order to optimize diabetes treatment, there is a need both for key metrics of glycemic control on a day-to-day basis and for more advanced, user-friendly monitoring methods. In addition to traditional discontinuous glucose testing, continuous glucose sensing has become a useful tool to reveal insufficient glycemic management. This new technology is particularly effective in patients with complicated diabetes and provides the opportunity to characterize glucose dynamics. Several
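    As a toy illustration of the "frequently sampled glucose measurements" point, the simplest variability summaries can be computed directly from a sampled glucose trace. These particular statistics (SD and coefficient of variation) are standard in the CGM literature generally, not taken from this paper:

        import numpy as np

        # Hypothetical continuous glucose monitoring samples in mg/dL.
        glucose = np.array([110, 145, 180, 150, 95, 70, 120, 160, 140, 105])

        mean_g = glucose.mean()
        sd = glucose.std(ddof=1)        # standard deviation of glucose
        cv = 100.0 * sd / mean_g        # coefficient of variation, in percent

        print(f"mean = {mean_g:.0f} mg/dL, SD = {sd:.0f} mg/dL, CV = {cv:.0f}%")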

  18. Pragmatic guidelines and quality metrics in business process modeling: a case study

    Directory of Open Access Journals (Sweden)

    Isel Moreno-Montes-de-Oca

    2014-04-01

    Business process modeling is one of the first steps towards achieving organizational goals, which is why business process modeling quality is an essential aspect of the development and technical support of any company. This work focuses on the quality of business process models at the conceptual level (design and evaluation). In the literature, some works propose practical guidelines for modeling, while others focus on quality metrics that allow the evaluation of the models. In this paper we use practical guidelines during the modeling phase of a business process for postgraduate studies. We applied a set of quality metrics and compared the results with those obtained from a similar model that did not use the guidelines. The results provide support for the use of guidelines as a way to improve business process modeling quality, and for the practical utility of quality metrics in their evaluation.

  19. [Clinical trial data management and quality metrics system].

    Science.gov (United States)

    Chen, Zhao-hua; Huang, Qin; Deng, Ya-zhong; Zhang, Yue; Xu, Yu; Yu, Hao; Liu, Zong-fan

    2015-11-01

    A data quality management system is essential to ensure accurate, complete, consistent, and reliable data collection in clinical research. This paper is devoted to various choices of data quality metrics. They are categorized by study status, e.g. study start-up, conduct, and close-out. In each category, metrics for different purposes are listed according to ALCOA+ principles such as completeness, accuracy, timeliness, and traceability. Some frequently used general quality metrics are also introduced. The paper provides as much detail as possible for each metric, including its definition, purpose, evaluation, referenced benchmark, and recommended targets, in favor of real practice. It is important that sponsors and data management service providers establish a robust, integrated clinical trial data quality management system to ensure sustainably high quality of clinical trial deliverables. It will also support enterprise-level data evaluation and benchmarking of data quality across projects, sponsors, and data management service providers, using objective metrics from real clinical trials. We hope this will be a significant input to accelerate the improvement of clinical trial data quality in the industry.
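    Completeness and timeliness metrics of the kind listed above boil down to simple proportions over the collected records. A minimal sketch with invented field names and an assumed 7-day entry window (both are illustrative choices, not the paper's benchmarks):

        # Hypothetical case report form records: one captured field plus the
        # lag in days between the patient visit and data entry.
        records = [
            {"subject": "001", "hba1c": 7.2, "entry_lag_days": 3},
            {"subject": "002", "hba1c": None, "entry_lag_days": 12},
            {"subject": "003", "hba1c": 6.8, "entry_lag_days": 5},
        ]

        completeness = sum(r["hba1c"] is not None for r in records) / len(records)
        timeliness = sum(r["entry_lag_days"] <= 7 for r in records) / len(records)

        print(f"completeness = {completeness:.0%}, "
              f"entered within 7 days = {timeliness:.0%}")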

  20. Experiences with Software Quality Metrics in the EMI middleware

    Science.gov (United States)

    Alandes, M.; Kenny, E. M.; Meneses, D.; Pucciani, G.

    2012-12-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering - Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project to extract “code metrics” on the status of the software products and “process metrics” related to the quality of the development and support process such as reaction time to critical bugs, requirements tracking and delays in product releases.
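    One of the process metrics named above, reaction time to critical bugs, is easy to picture as a computation over issue-tracker timestamps. A sketch with invented data; the EMI project's actual tooling is not described in the abstract:

        from datetime import datetime

        # Hypothetical critical bugs: (reported, first developer response).
        bugs = [
            (datetime(2012, 3, 1, 9, 0), datetime(2012, 3, 1, 15, 30)),
            (datetime(2012, 3, 4, 11, 0), datetime(2012, 3, 5, 10, 0)),
        ]

        # KPI: mean hours from bug report to first response.
        hours = [(resp - rep).total_seconds() / 3600.0 for rep, resp in bugs]
        print(f"mean reaction time = {sum(hours) / len(hours):.1f} h")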

  1. A universal color image quality metric

    NARCIS (Netherlands)

    Toet, A.; Lucassen, M.P.

    2003-01-01

    We extend a recently introduced universal grayscale image quality index to a newly developed perceptually decorrelated color space. The resulting color image quality index quantifies the distortion of a processed color image relative to its original version. We evaluated the new color image quality
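    The grayscale index being extended here is the Wang-Bovik universal quality index, whose formula is public. The sketch below applies it per channel and averages the results, using plain RGB rather than the perceptually decorrelated color space developed in the paper:

        import numpy as np

        def universal_quality_index(x, y):
            """Wang-Bovik universal image quality index for one channel:
            Q = 4*cov(x,y)*mx*my / ((var(x)+var(y)) * (mx^2 + my^2)).
            Q combines loss of correlation, luminance distortion and contrast
            distortion, and equals 1 only for identical images."""
            x, y = x.astype(float).ravel(), y.astype(float).ravel()
            mx, my = x.mean(), y.mean()
            cxy = ((x - mx) * (y - my)).mean()
            return 4 * cxy * mx * my / ((x.var() + y.var()) * (mx**2 + my**2))

        ref = np.random.rand(64, 64, 3)
        dist = ref + 0.05 * np.random.randn(64, 64, 3)
        print(np.mean([universal_quality_index(ref[..., c], dist[..., c])
                       for c in range(3)]))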

  2. Experiences with Software Quality Metrics in the EMI middleware

    CERN Document Server

    Alandes, M; Meneses, D; Pucciani, G

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project to ...

  3. Experiences with Software Quality Metrics in the EMI Middleware

    CERN Document Server

    CERN. Geneva

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project t...

  4. Development of soil quality metrics using mycorrhizal fungi

    Energy Technology Data Exchange (ETDEWEB)

    Baar, J.

    2010-07-01

    Based on the Convention on Biological Diversity, agreed in Rio de Janeiro in 1992 to maintain and increase biodiversity, several countries have started programmes monitoring soil quality and above- and below-ground biodiversity. Within the European Union, policy makers are working on legislation for soil protection and management. Indicators are therefore needed to monitor the status of soils, and these indicators, reflecting soil quality, can be integrated in working standards or soil quality metrics. Soil micro-organisms, particularly arbuscular mycorrhizal fungi (AMF), are indicative of soil changes. These soil fungi live in symbiosis with the great majority of plants and are sensitive to changes in the physico-chemical conditions of the soil. The aim of this study was to investigate whether AMF are reliable and sensitive indicators of disturbances in soils and can be used for the development of soil quality metrics. It was also studied whether soil quality metrics based on AMF meet the requirements of applicability by users and policy makers. Ecological criteria were set for the development of soil quality metrics for different soils. Multiple root samples containing AMF from various locations in The Netherlands were analyzed. The results of the analyses were related to the defined criteria. This resulted in two soil quality metrics, one for sandy soils and a second for clay soils, with six categories ranging from very bad to very good. These soil quality metrics meet the majority of the requirements for applicability and are potentially useful for the development of legislation for the protection of soil quality. (Author) 23 refs.

  5. Software Quality Metrics for Geant4: An Initial Assessment

    CERN Document Server

    Ronchieri, Elisabetta; Giacomini, Francesco

    2016-01-01

    In the context of critical applications, such as shielding and radiation protection, ensuring the quality of simulation software they depend on is of utmost importance. The assessment of simulation software quality is important not only to determine its adoption in experimental applications, but also to guarantee reproducibility of outcome over time. In this study, we present initial results from an ongoing analysis of Geant4 code based on established software metrics. The analysis evaluates the current status of the code to quantify its characteristics with respect to documented quality standards; further assessments concern evolutions over a series of release distributions. We describe the selected metrics that quantify software attributes ranging from code complexity to maintainability, and highlight what metrics are most effective at evaluating radiation transport software quality. The quantitative assessment of the software is initially focused on a set of Geant4 packages, which play a key role in a wide...

  6. Quality metrics can help the expert during neurological clinical trials

    Science.gov (United States)

    Mahé, L.; Autrusseau, F.; Desal, H.; Guédon, J.; Der Sarkissian, H.; Le Teurnier, Y.; Davila, S.

    2016-03-01

    Carotid surgery is a frequent procedure, accounting for some 15,000 to 20,000 operations per year in France. Cerebral perfusion has to be tracked before and after carotid surgery. In this paper, a diagnosis support using quality metrics is proposed to detect vascular lesions on MR images. Our key aim is to provide a detection tool that mimics the behavior of the human visual system during visual inspection. Relevant Human Visual System (HVS) properties should be integrated in our lesion detection method, which must be robust to common distortions in medical images. Our goal is twofold: to help the neuroradiologist perform the task better and faster, but also to provide a way to reduce the risk of bias in image analysis. Objective quality metrics (OQM) are methods whose goal is to predict perceived quality. In this work, we use objective quality metrics to detect perceivable differences between pairs of images.

  7. Image quality assessment metrics by using directional projection

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Objective image quality measurement, which is a fundamental and challenging job in image processing, evaluates image quality automatically and consistently with human perception. On the assumption that any image distortion can be modeled as the difference between the directional projection-based maps of the reference and distorted images, we propose a new objective quality assessment method based on directional projection for the full-reference model. Experimental results show that the proposed metrics are well consistent with the subjective quality score.
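    The abstract does not give the exact projection maps, so the following is only a schematic reading of the idea: project both images along a few directions, then score the distortion as the disagreement between the projection profiles.

        import numpy as np

        def directional_projections(img):
            # Row, column and diagonal sums as simple directional projections.
            return {
                "horizontal": img.sum(axis=1),
                "vertical": img.sum(axis=0),
                "diagonal": np.array([np.trace(img, offset=k)
                                      for k in range(-img.shape[0] + 1,
                                                     img.shape[1])]),
            }

        def projection_distortion(ref, dist):
            pr, pd = directional_projections(ref), directional_projections(dist)
            # Mean squared disagreement of the profiles (smaller = better).
            return sum(np.mean((pr[k] - pd[k]) ** 2) for k in pr)

        ref = np.random.rand(32, 32)
        print(projection_distortion(ref, ref + 0.1 * np.random.randn(32, 32)))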

  8. The compressed average image intensity metric for stereoscopic video quality assessment

    Science.gov (United States)

    Wilczewski, Grzegorz

    2016-09-01

    The following article presents insights into the design, creation and testing of a novel metric for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis, with its core feature and functionality set to serve as a versatile tool for effective 3DTV service quality assessment. Being an objective quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under a provider's evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines on a selected set of samples of stereoscopic video content. The designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.

  9. Quality metric for spherical panoramic video

    Science.gov (United States)

    Zakharchenko, Vladyslav; Choi, Kwang Pyo; Park, Jeong Hoon

    2016-09-01

    Virtual reality (VR) / augmented reality (AR) applications allow users to view artificial content of a surrounding space, simulating a presence effect with the help of special applications or devices. Synthetic content production is a well-known process in the computer graphics domain, and its pipeline is already established in the industry. However, emerging multimedia formats for immersive entertainment applications, such as free-viewpoint television (FTV) or spherical panoramic video, require different approaches to content management and quality assessment. The international standardization of FTV has been promoted by MPEG. This paper is dedicated to a discussion of the immersive media distribution format and the quality estimation process. The accuracy and reliability of the proposed objective quality estimation method have been verified with spherical panoramic images, demonstrating good correlation with subjective quality estimation held by a group of experts.
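    The abstract does not spell out the estimator, so as a generic point of reference here is a latitude-weighted PSNR of the kind commonly used for equirectangular panoramas (not necessarily this paper's method): pixels near the poles are over-represented by the projection, so each row is weighted by the cosine of its latitude.

        import numpy as np

        def ws_psnr(ref, dist):
            h, w = ref.shape
            lat = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2
            weights = np.repeat(np.cos(lat)[:, None], w, axis=1)
            wmse = np.sum(weights * (ref - dist) ** 2) / np.sum(weights)
            return 10 * np.log10(255.0 ** 2 / wmse)

        ref = np.random.randint(0, 256, (180, 360)).astype(float)
        noisy = np.clip(ref + 3 * np.random.randn(180, 360), 0, 255)
        print(f"WS-PSNR = {ws_psnr(ref, noisy):.1f} dB")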

  10. National Quality Forum Metrics for Thoracic Surgery.

    Science.gov (United States)

    Cipriano, Anthony; Burfeind, William R

    2017-08-01

    The National Quality Forum (NQF) is a multistakeholder, nonprofit, membership-based organization improving health care through preferential use of valid performance measures. NQF-endorsed measures are considered the gold standard for health care measurement in the United States. The Society of Thoracic Surgeons is the steward of the only six NQF-endorsed general thoracic surgery measures. These measures include one structure measure (participation in a national general thoracic surgery database), two process measures (recording of clinical stage and recording performance status before lung and esophageal resections), and three outcome measures (risk-adjusted morbidity and mortality after lung and esophageal resections and risk-adjusted length of stay greater than 14 days after lobectomy). Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Pragmatic quality metrics for evolutionary software development models

    Science.gov (United States)

    Royce, Walker

    1990-01-01

    Due to the large number of product, project, and people parameters which impact large custom software development efforts, measurement of software product quality is a complex undertaking. Furthermore, the absolute perspective from which quality is measured (customer satisfaction) is intangible. While we probably can't say what the absolute quality of a software product is, we can determine the relative quality, the adequacy of this quality with respect to pragmatic considerations, and identify good and bad trends during development. While no two software engineers will ever agree on an optimum definition of software quality, they will agree that the most important perspective of software quality is its ease of change. We can call this flexibility, adaptability, or some other vague term, but the critical characteristic of software is that it is soft. The easier the product is to modify, the easier it is to achieve any other software quality perspective. This paper presents objective quality metrics derived from consistent lifecycle perspectives of rework which, when used in concert with an evolutionary development approach, can provide useful insight to produce better quality per unit cost/schedule or to achieve adequate quality more efficiently. The usefulness of these metrics is evaluated by applying them to a large, real world, Ada project.

  12. Patient Safety and Quality Metrics in Pediatric Hospital Medicine.

    Science.gov (United States)

    Kumar, Bhanumathy

    2016-04-01

    Quality-based regulations, performance-based payouts, and open reporting have contributed to a growing focus on quality and safety metrics in health care. Medical errors are a well-known catastrophe in the field. Especially disturbing are estimates of pediatric safety events, which have a greater capacity to cause serious harm than those found in adults. This article presents information collected over the past 2 decades pertaining to the issue of quality, and describes a preliminary list of potential solutions and methods of implementation. The beginning stages of a reconstructive journey of safety and quality in a Michigan pediatric hospital are introduced and discussed. Published by Elsevier Inc.

  13. Development of Quality Metrics in Ambulatory Pediatric Cardiology.

    Science.gov (United States)

    Chowdhury, Devyani; Gurvitz, Michelle; Marelli, Ariane; Anderson, Jeffrey; Baker-Smith, Carissa; Diab, Karim A; Edwards, Thomas C; Hougen, Tom; Jedeikin, Roy; Johnson, Jonathan N; Karpawich, Peter; Lai, Wyman; Lu, Jimmy C; Mitchell, Stephanie; Newburger, Jane W; Penny, Daniel J; Portman, Michael A; Satou, Gary; Teitel, David; Villafane, Juan; Williams, Roberta; Jenkins, Kathy

    2017-02-07

    The American College of Cardiology Adult Congenital and Pediatric Cardiology (ACPC) Section had attempted to create quality metrics (QM) for ambulatory pediatric practice, but limited evidence made the process difficult. The ACPC sought to develop QMs for ambulatory pediatric cardiology practice. Five areas of interest were identified, and QMs were developed in a 2-step review process. In the first step, an expert panel, using the modified RAND-UCLA methodology, rated each QM for feasibility and validity. The second step sought input from ACPC Section members; final approval was by a vote of the ACPC Council. Work groups proposed a total of 44 QMs. Thirty-one metrics passed the RAND process and, after the open comment period, the ACPC council approved 18 metrics. The project resulted in successful development of QMs in ambulatory pediatric cardiology for a range of ambulatory domains. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  14. Spread spectrum image watermarking based on perceptual quality metric.

    Science.gov (United States)

    Zhang, Fan; Liu, Wenyu; Lin, Weisi; Ngan, King Ngi

    2011-11-01

    Efficient image watermarking calls for full exploitation of the perceptual distortion constraint. Second-order statistics of visual stimuli are regarded as critical features for perception. This paper proposes a second-order statistics (SOS)-based image quality metric, which considers the texture masking effect and the contrast sensitivity in Karhunen-Loève transform domain. Compared with the state-of-the-art metrics, the quality prediction by SOS better correlates with several subjectively rated image databases, in which the images are impaired by the typical coding and watermarking artifacts. With the explicit metric definition, spread spectrum watermarking is posed as an optimization problem: we search for a watermark to minimize the distortion of the watermarked image and to maximize the correlation between the watermark pattern and the spread spectrum carrier. The simple metric guarantees the optimal watermark a closed-form solution and a fast implementation. The experiments show that the proposed watermarking scheme can take full advantage of the distortion constraint and improve the robustness in return.

  15. Towards Video Quality Metrics Based on Colour Fractal Geometry

    Directory of Open Access Journals (Sweden)

    Richard Noël

    2010-01-01

    Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology and colour. Unfortunately, so far these elements have been taken into consideration independently in the development of image and video quality metrics; we therefore propose an approach that blends all of them together. Our approach allows for the analysis of the complexity of colour images in the RGB colour space, based on the probabilistic algorithm for calculating the fractal dimension and on lacunarity. Given that the existing fractal approaches are defined only for gray-scale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the modification of the fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal and can be used as metrics for user-perceived video quality degradation. We validated them through experimental results obtained for an MPEG-4 video streaming application; finally, the results are compared against those given by unanimously accepted metrics and subjective tests.
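    Of the two fractal features named above, lacunarity is the easier to sketch. The gliding-box definition Lambda(r) = E[M^2] / E[M]^2, with M the pixel "mass" inside each box position, is standard; applying it per colour channel, as below, is a simplification of the paper's full RGB-space treatment:

        import numpy as np
        from numpy.lib.stride_tricks import sliding_window_view

        def gliding_box_lacunarity(channel, box):
            # Mass of every box position, via a sliding window over the channel.
            masses = sliding_window_view(channel, (box, box)).sum(axis=(-2, -1))
            return (masses ** 2).mean() / masses.mean() ** 2

        rgb = np.random.rand(128, 128, 3)
        # One value per colour channel and box size, e.g. 4, 8 and 16 pixels.
        for box in (4, 8, 16):
            print(box, [round(gliding_box_lacunarity(rgb[..., c], box), 4)
                        for c in range(3)])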

  16. SU-E-T-222: How to Define and Manage Quality Metrics in Radiation Oncology.

    Science.gov (United States)

    Harrison, A; Cooper, K; DeGregorio, N; Doyle, L; Yu, Y

    2012-06-01

    Since the 2001 IOM Report Crossing the Quality Chasm: A New Health System for the 21st Century, the need to provide quality metrics in health care has increased. Quality metrics have yet to be defined for the field of radiation oncology. This study represents one institute's initial efforts at defining and measuring quality metrics using our electronic medical record and verify system (EMR) as the primary data collection tool. The effort began by selecting meaningful quality metrics rooted in the IOM definition of quality (safe, timely, efficient, effective, equitable and patient-centered care) that were also measurable targets given current data input and workflow. Elekta MOSAIQ 2.30.04D1 was used to generate reports on the number of Special Physics Consults (SPC) charged, as a surrogate for treatment complexity; daily patient time in department (DTP), as a measure of efficiency and timeliness; and time from CT simulation to first LINAC appointment (STL). The number of IMRT QAs delivered in the department was also analyzed to assess complexity. Although initial MOSAIQ reports were easily generated, the data needed to be assessed and adjusted for outliers. Patients with delays outside of radiation oncology, such as chemotherapy or surgery, were excluded from the STL data. We found an average STL of six days for all CT-simulated patients and an average DTP of 52 minutes total time, with 23 minutes in the LINAC vault. Annually, 7.3% of all patients require additional physics support, indicated by SPC. Utilizing our EMR, an entire year's worth of useful data characterizing our clinical experience was analyzed in less than one day. Having baseline quality metrics is necessary to improve patient care. Future plans include dissecting these data into more specific categories such as IMRT DTP, workflow timing following CT simulation, beam-on hours, chart review outcomes, and dosimetric quality indicators. © 2012 American Association of Physicists in Medicine.
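    The STL computation described above, with patients delayed outside radiation oncology excluded, is essentially a filtered date difference. A sketch with invented column names (the actual MOSAIQ report schema will differ):

        import pandas as pd

        df = pd.DataFrame({
            "patient": ["A", "B", "C", "D"],
            "ct_sim": pd.to_datetime(["2012-01-02", "2012-01-03",
                                      "2012-01-05", "2012-01-09"]),
            "first_tx": pd.to_datetime(["2012-01-06", "2012-01-10",
                                        "2012-01-11", "2012-01-16"]),
            # Delay outside radiation oncology, e.g. chemo or surgery first.
            "external_delay": [False, True, False, False],
        })

        # Simulation-to-LINAC (STL) in days, excluding external delays.
        kept = df[~df.external_delay]
        stl = (kept.first_tx - kept.ct_sim).dt.days
        print(f"mean STL = {stl.mean():.1f} days")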

  17. Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling

    Science.gov (United States)

    Mog, Robert A.

    1997-01-01

    Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.

  18. Editorial: On the Quality of Quality Metrics: Rethinking What Defines a Good Colonoscopy.

    Science.gov (United States)

    Dominitz, Jason A; Spiegel, Brennan

    2016-05-01

    The colonoscopy quality assurance movement has focused on a variety of process metrics, including the adenoma detection rate (ADR). However, the ADR only ascertains whether or not at least one adenoma is identified. Supplemental measures that quantify all neoplasia have been proposed. In this issue of the American Journal of Gastroenterology, Aniwan and colleagues performed tandem screening colonoscopies to determine the adenoma miss rate among high-ADR endoscopists. This permitted validation of supplemental colonoscopy quality metrics. This study highlights potential limitations of ADR and the need for further refinement of colonoscopy quality metrics, although logistic challenges abound.

  19. Toward an impairment metric for stereoscopic video: a full-reference video quality metric to assess compressed stereoscopic video.

    Science.gov (United States)

    De Silva, Varuna; Arachchi, Hemantha Kodikara; Ekmekcioglu, Erhan; Kondoz, Ahmet

    2013-09-01

    The quality assessment of impaired stereoscopic video is a key element in designing and deploying advanced immersive media distribution platforms. A widely accepted quality metric to measure impairments of stereoscopic video is, however, still to be developed. As a step toward finding a solution to this problem, this paper proposes a full reference stereoscopic video quality metric to measure the perceptual quality of compressed stereoscopic video. A comprehensive set of subjective experiments is performed with 14 different stereoscopic video sequences, which are encoded using both the H.264 and high efficiency video coding compliant video codecs, to develop a subjective test results database of 116 test stimuli. The subjective results are analyzed using statistical techniques to uncover different patterns of subjective scoring for symmetrically and asymmetrically encoded stereoscopic video. The subjective result database is subsequently used for training and validating a simple but effective stereoscopic video quality metric considering heuristics of binocular vision. The proposed metric performs significantly better than state-of-the-art stereoscopic image and video quality metrics in predicting the subjective scores. The proposed metric and the subjective result database will be made publicly available, and it is expected that the proposed metric and the subjective assessments will have important uses in advanced 3D media delivery systems.

  20. Detection of image quality metamers based on the metric for unified image quality

    Science.gov (United States)

    Miyata, Kimiyoshi; Tsumura, Norimichi

    2012-01-01

    In this paper, we introduce the concept of image quality metamerism as an expanded version of the metamerism defined in color science. The concept is used to unify different image quality attributes and is applied to introduce a metric showing the degree of image quality metamerism, used here to analyze a cultural property. Our global goal is to build a metric to evaluate the total quality of images acquired by different imaging systems and observed under different viewing conditions. As a basic step toward that goal, the metric in this research consists of color, spectral and texture information, and is applied to detect image quality metamers in the investigation of the cultural property. The property investigated is the oldest extant version of the folding screen paintings depicting the thriving city of Kyoto, designated as a nationally important cultural property in Japan. Gold-colored areas, painted using colorants of higher granularity than the other color areas of the property, are evaluated with the metric, which is then visualized as a map showing the possibility of an image quality metamer relative to the reference pixel.

  1. A task-based quality control metric for digital mammography

    Science.gov (United States)

    Maki Bloomquist, A. K.; Mainprize, J. G.; Mawdsley, G. E.; Yaffe, M. J.

    2014-11-01

    A reader study was conducted to tune the parameters of an observer model used to predict the detectability index (dʹ ) of test objects as a task-based quality control (QC) metric for digital mammography. A simple test phantom was imaged to measure the model parameters, namely, noise power spectrum, modulation transfer function and test-object contrast. These are then used in a non-prewhitening observer model, incorporating an eye-filter and internal noise, to predict dʹ. The model was tuned by measuring dʹ of discs in a four-alternative forced choice reader study. For each disc diameter, dʹ was used to estimate the threshold thicknesses for detectability. Data were obtained for six types of digital mammography systems using varying detector technologies and x-ray spectra. A strong correlation was found between measured and modeled values of dʹ, with Pearson correlation coefficient of 0.96. Repeated measurements from separate images of the test phantom show an average coefficient of variation in dʹ for different systems between 0.07 and 0.10. Standard deviations in the threshold thickness ranged between 0.001 and 0.017 mm. The model is robust and the results are relatively system independent, suggesting that observer model dʹ shows promise as a cross platform QC metric for digital mammography.
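    The observer model named above (non-prewhitening with eye filter and internal noise) has a well-known closed form: d'^2 = [int W^2 MTF^2 E^2 2*pi*f df]^2 / int W^2 MTF^2 E^4 (NPS + N_int) 2*pi*f df, with W the task (test-object) function. The sketch below evaluates it on a radial frequency grid; every shape and parameter value is a generic placeholder, not the calibrated values of this study:

        import numpy as np
        from scipy.special import j1

        f = np.linspace(1e-3, 10.0, 2000)            # spatial frequency, cycles/mm
        df = f[1] - f[0]

        diameter, contrast = 0.5, 0.1                # toy disc: 0.5 mm, 10% contrast
        task = contrast * diameter * j1(np.pi * diameter * f) / (2 * f)  # disc FT

        mtf = np.exp(-f / 3.0)                       # toy detector MTF
        nps = 1e-6 * np.exp(-f / 5.0)                # toy noise power spectrum
        eye = f ** 1.3 * np.exp(-0.35 * f ** 2)      # Burgess-style eye filter
        n_int = 0.2 * nps.max()                      # crude internal-noise floor

        num = (2 * np.pi * np.sum(task**2 * mtf**2 * eye**2 * f) * df) ** 2
        den = 2 * np.pi * np.sum(task**2 * mtf**2 * eye**4 * (nps + n_int) * f) * df
        print(f"d' = {np.sqrt(num / den):.2f}")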

  2. Applicability of Existing Objective Metrics of Perceptual Quality for Adaptive Video Streaming

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Krasula, Lukás; Shahid, Muhammad

    2016-01-01

    Objective video quality metrics are designed to estimate the quality of experience of the end user. However, these objective metrics are usually validated with video streams degraded under common distortion types. In the presented work, we analyze the performance of published and known full-reference and no-reference quality metrics in estimating the perceived quality of adaptive bit-rate video streams, which are knowingly out of their scope. Experimental results indicate, not surprisingly, that state-of-the-art objective quality metrics overlook the perceived degradations in adaptive video streams and perform poorly...

  3. Enhancing the quality metric of protein microarray image

    Institute of Scientific and Technical Information of China (English)

    王立强; 倪旭翔; 陆祖康; 郑旭峰; 李映笙

    2004-01-01

    The novel method presented in this paper for improving the quality of protein microarray images reduces impulse noise by using an adaptive median filter that employs a switching scheme based on local statistics; impulse detection uses the difference between the standard deviation of the pixels within the filter window and the current pixel of concern. A top-hat filter is used to correct the background variation. In order to decrease time consumption, the top-hat filter core is a cross structure. The experimental results showed that, for a protein microarray image contaminated by impulse noise and with slow background variation, the new method can significantly increase the signal-to-noise ratio, correct trends in the background, and enhance the flatness of the background and the consistency of the signal intensity.
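    Both steps described above map onto standard scipy.ndimage operations. The sketch below is an approximation of the described scheme with illustrative parameter values (threshold, window size), not the authors' exact filter:

        import numpy as np
        from scipy import ndimage

        def denoise_microarray(img, k=3, t=2.0):
            # Switching median: replace a pixel only when it deviates from the
            # local median by more than t times the local standard deviation.
            med = ndimage.median_filter(img, size=k)
            std = ndimage.generic_filter(img, np.std, size=k)
            out = np.where(np.abs(img - med) > t * std, med, img)
            # White top-hat with a cross-shaped footprint (a cheap 3x3 core)
            # to flatten slow background variation under bright spots.
            cross = ndimage.generate_binary_structure(2, 1)
            return ndimage.white_tophat(out, footprint=cross)

        noisy = np.random.rand(64, 64)
        print(denoise_microarray(noisy).shape)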

  4. Metrics and the effective computational scientist: process, quality and communication.

    Science.gov (United States)

    Baldwin, Eric T

    2012-09-01

    Recent treatments of computational knowledge worker productivity have focused upon the value the discipline brings to drug discovery using positive anecdotes. While this big picture approach provides important validation of the contributions of these knowledge workers, the impact accounts do not provide the granular detail that can help individuals and teams perform better. I suggest balancing the impact-focus with quantitative measures that can inform the development of scientists. Measuring the quality of work, analyzing and improving processes, and the critical evaluation of communication can provide immediate performance feedback. The introduction of quantitative measures can complement the longer term reporting of impacts on drug discovery. These metric data can document effectiveness trends and can provide a stronger foundation for the impact dialogue.

  5. Software Metrics to Estimate Software Quality using Software Component Reusability

    Directory of Open Access Journals (Sweden)

    Prakriti Trivedi

    2012-03-01

    Today most applications are developed using existing libraries, code, open sources, etc. Code that is accessed from within a program is represented as a software component: Java beans and .NET ActiveX controls, for example, are software components. These components are ready-to-use programming code or controls that accelerate code development. A component-based software system embodies the concept of software reusability. When using such components, the main question that arises is whether using a given component is beneficial or not. In this work we try to answer that question by presenting a set of software metrics that check the interconnection between a software component and the application; the strength of this relation indicates the software quality after the component is used. The overall metrics return a final result in terms of how strongly the component is bound to the application.

  6. A priori discretization quality metrics for distributed hydrologic modeling applications

    Science.gov (United States)

    Liu, Hongli; Tolson, Bryan; Craig, James; Shafii, Mahyar; Basu, Nandita

    2016-04-01

    In distributed hydrologic modelling, a watershed is treated as a set of small homogeneous units that address the spatial heterogeneity of the watershed being simulated. The ability of models to reproduce observed spatial patterns depends first on the spatial discretization, which is the process of defining homogeneous units in the form of grid cells, subwatersheds, hydrologic response units, etc. It is common for hydrologic modelling studies to simply adopt a nominal or default discretization strategy without formally assessing alternative discretization levels. This approach lacks formal justification and is thus problematic. More formalized discretization strategies are either a priori or a posteriori with respect to building and running a hydrologic simulation model. A posteriori approaches tend to be ad hoc and compare model calibration and/or validation performance under various watershed discretizations. The construction and calibration of multiple versions of a distributed model can become a seriously limiting computational burden. Current a priori approaches are more formalized and compare overall heterogeneity statistics of dominant variables between candidate discretization schemes and input data or reference zones. While a priori approaches are efficient and do not require running a hydrologic model, they do not fully investigate the internal spatial pattern changes of variables of interest. Furthermore, the existing a priori approaches focus on landscape and soil data and do not assess the impacts of discretization on stream channel definition, even though its significance has been noted by numerous studies. The primary goals of this study are to (1) introduce new a priori discretization quality metrics considering the spatial pattern changes of model input data; (2) introduce a two-step discretization decision-making approach to compress extreme errors and meet user-specified discretization expectations through non-uniform discretization threshold

  7. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale

    CERN Document Server

    Emmons, Scott; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie network clustering. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms -- Blondel, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 o...

  8. Evaluation of developmental metrics for utilization in a pediatric advanced automatic crash notification algorithm.

    Science.gov (United States)

    Doud, Andrea N; Weaver, Ashley A; Talton, Jennifer W; Barnard, Ryan T; Petty, John; Stitzel, Joel D

    2016-01-01

    Appropriate treatment at designated trauma centers (TCs) improves outcomes among injured children after motor vehicle crashes (MVCs). Advanced Automatic Crash Notification (AACN) has shown promise in improving triage to appropriate TCs. Pediatric-specific AACN algorithms have not yet been created. To create such an algorithm, it will be necessary to include some metric of development (age, height, or weight) as a covariate in the injury risk algorithm. This study sought to determine which marker of development should serve as a covariate in such an algorithm and to quantify injury risk at different levels of this metric. A retrospective review of pediatric occupants was conducted to inform the pediatric AACN algorithm. Clinical judgment, literature review, and chi-square analysis were used to create groupings of the chosen metric that would discriminate injury patterns. Adjusted odds of particular injury types at the different levels of this metric were calculated from logistic regression while controlling for gender, vehicle velocity change (delta V), belted status (optimal, suboptimal, or unrestrained), and crash mode (rollover, rear, frontal, near-side, or far-side). NASS-CDS analysis produced 11,541 pediatric occupants; age groupings were chosen over height, weight, and body mass index (BMI) classifications. Adjusted odds of key injury types with respect to these age categorizations revealed that younger children were at increased odds of sustaining Abbreviated Injury Scale (AIS) 2+ and 3+ head injuries and AIS 3+ spinal injuries, whereas older children were at increased odds of sustaining thoracic fractures, AIS 3+ abdominal injuries, and AIS 2+ upper and lower extremity injuries. The injury patterns observed across developmental metrics in this study mirror those previously described among children with blunt trauma. This study identifies age as the metric best suited for use in a pediatric AACN algorithm and utilizes 12 years of data to provide quantifiable risks of particular injuries at different levels of this metric. This risk quantification will
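
    The adjusted odds described above can be reproduced in outline with a standard logistic regression; the sketch below uses synthetic data in place of NASS-CDS records, and the outcome and covariate names are illustrative:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "head_ais2plus": rng.integers(0, 2, n),      # outcome (synthetic)
        "age_group": rng.choice(["0-4", "5-9", "10-14", "15-18"], n),
        "gender": rng.choice(["M", "F"], n),
        "delta_v": rng.uniform(5, 60, n),            # vehicle velocity change
        "belted": rng.choice(["optimal", "suboptimal", "unrestrained"], n),
        "crash_mode": rng.choice(["frontal", "rear", "near_side", "far_side",
                                  "rollover"], n),
    })
    fit = smf.logit("head_ais2plus ~ C(age_group) + C(gender) + delta_v"
                    " + C(belted) + C(crash_mode)", data=df).fit(disp=0)
    adjusted_odds = np.exp(fit.params)               # adjusted odds ratios
    ```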

  9. Software metrics: The key to quality software on the NCC project

    Science.gov (United States)

    Burns, Patricia J.

    1993-01-01

    Network Control Center (NCC) Project metrics are captured during the implementation and testing phases of the NCCDS software development lifecycle. The metrics data collection and reporting function has interfaces with all elements of the NCC project. Close collaboration with all project elements has resulted in the development of a defined and repeatable set of metrics processes. The resulting data are used to plan and monitor release activities on a weekly basis. The use of graphical outputs facilitates the interpretation of progress and status. The successful application of metrics throughout the NCC project has been instrumental in the delivery of quality software. The use of metrics on the NCC Project supports the needs of the technical and managerial staff. This paper describes the project, the functions supported by metrics, the data that are collected and reported, how the data are used, and the improvements in the quality of deliverable software since the metrics processes and products have been in use.

  10. Fruit dry matter concentration: a new quality metric for apples.

    Science.gov (United States)

    Palmer, John W; Harker, F Roger; Tustin, D Stuart; Johnston, Jason

    2010-12-01

    In the fresh apple market fruit must be crisp and juicy to attract buyers to purchase again. However, recent studies have shown that consumer acceptability could be further enhanced by improving taste. This study evaluates the use of fruit dry matter concentration (DMC) as a new fruit quality metric for apple. Fruit samples collected at harvest, in the two main fruit growing regions of New Zealand, showed a variation in mean fruit DMC from 130 to 156 g kg(-1) with 'Royal Gala' and with 'Scifresh' from 152 to 176 g kg(-1). Individual fruit DMC showed a larger range, from 108 to 189 g kg(-1) with 'Royal Gala' and from 125 to 201 g kg(-1) with 'Scifresh'. Fruit DMC proved a more reliable predictor of total soluble solids after 12 weeks of air storage at 0.5 °C than TSS at harvest for both 'Royal Gala' and 'Scifresh'. Fruit DMC was also positively related to flesh firmness, although this relationship was not as strong as that seen with soluble solids and was more dependent on cultivar. Consumer studies showed that consumer preference was positively related to fruit DMC of 'Royal Gala' apples. Fruit DMC can therefore be measured before or at harvest, and be used to predict the sensory potential for the fruit after storage. Copyright © 2010 Society of Chemical Industry.

  11. Fly ash quality and utilization

    Energy Technology Data Exchange (ETDEWEB)

    Barta, L.E.; Lachner, L.; Wenzel, G.B. [Inst. for Energy, Budapest (Hungary); Beer, M.J. [Massachusetts Inst. of Technology, Cambridge, MA (United States)

    1995-12-01

    The quality of fly ash is of considerable importance to fly ash utilizers. The pozzolanic activity of fly ash is one of the most important properties determining its role as a binding agent in the cementing process. The pozzolanic activity, however, is a function of fly ash particle size and chemical composition. These parameters are closely related to the process of fly ash formation in pulverized-coal-fired furnaces. In turn, it is essential to understand the transformation of mineral matter during coal combustion. Due to the particle-to-particle variation of coal properties and the random coalescence of mineral particles, the properties of fly ash particles (e.g., size, SiO{sub 2} content, viscosity) can change considerably from particle to particle. These variations can be described by the use of probability theory. Since the mean values of these randomly changing parameters are not sufficient to describe the behavior of individual fly ash particles during the formation of concrete, it is necessary to investigate the distributions of these variables. Examples of these variations were examined by Computer Controlled Scanning Electron Microscopy (CCSEM) for particle size and chemical composition for Texas lignite and Eagle Butte mineral matter and fly ash. The effect of combustion on the variations of these properties for both the fly ash and mineral matter was studied by using a laminar flow reactor. It is shown in our paper that there are significant variations (about 40-50% around the mean values) of the above-listed properties for both coal samples. By comparing the particle size and chemical composition distributions of the mineral matter and fly ash, it was possible to conclude that for the Texas lignite mineral matter the combustion did not significantly affect the distribution of these properties; however, for the Eagle Butte coal the combustion had a major impact on these mineral matter parameters.

  12. Reliability, Validity, Comparability and Practical Utility of Cybercrime-Related Data, Metrics, and Information

    Directory of Open Access Journals (Sweden)

    Nir Kshetri

    2013-02-01

    Full Text Available With an increasing pervasiveness, prevalence and severity of cybercrimes, various metrics, measures and statistics have been developed and used to measure various aspects of this phenomenon. Cybercrime-related data, metrics, and information, however, pose important and difficult dilemmas regarding the issues of reliability, validity, comparability and practical utility. While many of the issues of the cybercrime economy are similar to other underground and underworld industries, this economy also has various unique aspects. For one thing, this industry also suffers from a problem partly rooted in the incredibly broad definition of the term “cybercrime”. This article seeks to provide insights and analysis into this phenomenon, which is expected to advance our understanding into cybercrime-related information.

  13. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become a more and more important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution to combine different image quality and speed metrics into a single benchmarking score. A proposal of the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
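
    One simple way to fold heterogeneous quality and speed metrics into a single benchmarking score, as the paper proposes, is to normalize each metric across the tested devices and take a weighted sum; the weights and metric names below are illustrative assumptions, not the paper's values:

    ```python
    import numpy as np

    def benchmark_score(metrics, weights):
        """Z-normalize each metric across phones, then take a weighted sum."""
        names = list(metrics)
        matrix = np.array([metrics[n] for n in names], dtype=float)  # rows: metrics
        z = (matrix - matrix.mean(axis=1, keepdims=True)) / \
            matrix.std(axis=1, keepdims=True)
        w = np.array([weights[n] for n in names])
        return w @ z  # one combined score per phone

    # Speed and noise values are negated so that "higher is better" throughout.
    metrics = {"sharpness": [0.8, 0.7, 0.9],
               "visual_noise": [-2.1, -1.5, -2.8],
               "shot_to_shot_s": [-0.9, -1.4, -0.6]}
    scores = benchmark_score(metrics, {"sharpness": 0.5, "visual_noise": 0.3,
                                       "shot_to_shot_s": 0.2})
    ```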

  14. Biotic, water-quality, and hydrologic metrics calculated for the analysis of temporal trends in National Water Quality Assessment Program Data in the Western United States

    Science.gov (United States)

    Wiele, Stephen M.; Brasher, Anne M.D.; Miller, Matthew P.; May, Jason T.; Carpenter, Kurt D.

    2012-01-01

    The U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program was established by Congress in 1991 to collect long-term, nationally consistent information on the quality of the Nation's streams and groundwater. The NAWQA Program utilizes interdisciplinary and dynamic studies that link the chemical and physical conditions of streams (such as flow and habitat) with ecosystem health and the biologic condition of algae, aquatic invertebrates, and fish communities. This report presents metrics derived from NAWQA data and the U.S. Geological Survey streamgaging network for sampling sites in the Western United States, as well as associated chemical, habitat, and streamflow properties. The metrics characterize the conditions of algae, aquatic invertebrates, and fish. In addition, we have compiled climate records and basin characteristics related to the NAWQA sampling sites. The calculated metrics and compiled data can be used to analyze ecohydrologic trends over time.

  15. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2012-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development...... and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use...... and agreed upon two primary needs for the wide use of quality metrics: 1) an evolving list of comprehensive quality metrics and 2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics...

  16. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development...... and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use...... and agreed upon two primary needs for the wide use of quality metrics: 1) an evolving list of comprehensive quality metrics and 2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics...

  17. Aquatic Acoustic Metrics Interface Utility for Underwater Sound Monitoring and Analysis

    Directory of Open Access Journals (Sweden)

    Thomas J. Carlson

    2012-05-01

    Full Text Available Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software is indispensable for processing and analyzing the audio files recorded by these USRs. In this paper, we provide a detailed description of a new software package, the Aquatic Acoustic Metrics Interface (AAMI), specifically designed for the analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the recorded sound sample metrics with biological measures such as audiograms of the sensitivity of aquatic animals to the sound, integrating various components into a single analytical framework. The features of the AAMI software are discussed, and several case studies are presented to illustrate its functionality.
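
    The conversion from recorder counts to physical units via calibration data, as the AAMI performs, can be sketched as follows; the sensitivity and ADC scale values are illustrative and not tied to any particular USR:

    ```python
    import numpy as np

    def counts_to_spl(samples, sensitivity_db_re_1v_per_upa, adc_volts_per_count):
        """RMS sound pressure level (dB re 1 uPa) from raw recorder counts,
        using the hydrophone sensitivity and the ADC scale factor."""
        volts = samples * adc_volts_per_count
        pressure_upa = volts / 10 ** (sensitivity_db_re_1v_per_upa / 20.0)
        rms = np.sqrt(np.mean(pressure_upa ** 2))
        return 20 * np.log10(rms)

    spl = counts_to_spl(np.random.default_rng(0).normal(0, 2000, 48000),
                        sensitivity_db_re_1v_per_upa=-180.0,
                        adc_volts_per_count=5.0 / 2**23)
    ```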

  18. Large-scale seismic waveform quality metric calculation using Hadoop

    Science.gov (United States)

    Magana-Zook, S.; Gaylord, J. M.; Knapp, D. R.; Dodge, D. A.; Ruppert, S. D.

    2016-09-01

    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data of which 5.1 TB of data were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of 0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. These experiments were conducted multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely
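
    A minimal PySpark sketch of the pattern described above: distribute a list of waveform files across the cluster and compute per-file quality metrics in parallel. The file list path, reader, and metrics are placeholders, not the study's actual pipeline:

    ```python
    from pyspark.sql import SparkSession
    import numpy as np

    def waveform_metrics(path):
        # Placeholder reader: assumes worker-visible files of raw int32 samples,
        # not the seismic formats used in the study.
        data = np.fromfile(path, dtype=np.int32).astype(float)
        return path, {"rms": float(np.sqrt(np.mean(data ** 2))),
                      "dropout_frac": float(np.mean(data == 0))}

    spark = SparkSession.builder.appName("waveform-qc").getOrCreate()
    paths = spark.sparkContext.textFile("file:///data/waveforms/file_list.txt")
    metrics = paths.map(waveform_metrics).collectAsMap()  # {path: metrics}
    ```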

  19. SU-E-T-776: Use of Quality Metrics for a New Hypo-Fractionated Pre-Surgical Mesothelioma Protocol

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, S; Mehta, V [Swedish Cancer Institute, Seattle, WA (United States)

    2015-06-15

    Purpose: The "SMART" (Surgery for Mesothelioma After Radiation Therapy) approach involves hypo-fractionated radiotherapy of the lung pleura to 25Gy over 5 days followed by surgical resection within 7 days. Early clinical results suggest that this approach is very promising, but also logistically challenging due to the multidisciplinary involvement. Due to the compressed schedule, high dose, and shortened planning time, the delivery of the planned doses was monitored for safety with quality metric software. Methods: Hypo-fractionated IMRT treatment plans were developed for all patients and exported to Quality Reports™ software. Plan quality metrics or PQMs™ were created to calculate an objective scoring function for each plan. This allows for an objective assessment of the quality of the plan and a benchmark for plan improvement for subsequent patients. The priorities of various components were incorporated based on similar hypo-fractionated protocols such as lung SBRT treatments. Results: Five patients have been treated at our institution using this approach. The plans were developed, QA performed, and ready within 5 days of simulation. Plan quality metrics utilized in scoring included doses to OAR and target coverage. All patients tolerated treatment well and proceeded to surgery as scheduled. Reported toxicity included grade 1 nausea (n=1), grade 1 esophagitis (n=1), and grade 2 fatigue (n=3). One patient had recurrent fluid accumulation following surgery. No patients experienced any pulmonary toxicity prior to surgery. Conclusion: An accelerated course of pre-operative high-dose radiation for mesothelioma is an innovative and promising new protocol. Without historical data, one must proceed cautiously and monitor the data carefully. The development of quality metrics and scoring functions for these treatments allows us to benchmark our plans and monitor improvement. If subsequent toxicities occur, these will be easy to investigate and incorporate into the

  20. Metrical Segmentation in Dutch: Vowel Quality or Stress?

    Science.gov (United States)

    Quene, Hugo; Koster, Mariette L.

    1998-01-01

    Examines metrical segmentation strategy in Dutch. The first experiment shows that stress strongly affects Dutch listeners' ability and speed in spotting Dutch monosyllabic words in disyllabic nonwords. The second experiment finds the same stress effect when only the target words are presented without a subsequent syllable triggering segmentation.

  1. Introduction to Investigation And Utilizing Lean Test Metrics In Agile Software Testing Methodologies

    Directory of Open Access Journals (Sweden)

    Padmaraj Nidagundi

    2016-04-01

    Full Text Available The growth of the software development industry has brought new development methodologies for delivering error-free software to end users while fulfilling a product's business value. The growth of tools and technology has brought automation to the development and software testing process, and it has also increased the demand for fast testing and delivery of software to end customers. The move from traditional software development methodologies to the trending agile approach has brought new philosophies, dimensions, and processes, and has introduced new tools to make the process easy. Agile development processes (Scrum, XP, FDD, BDD, ATDD, ASD, DSDM, Kanban, Crystal, and Lean) also face a software testing model crisis because of fast development life cycles and fast delivery to end users without appropriate test metrics, which makes the software testing process slow and increases risk. Analyzing the testing metrics in the software testing process and setting the right lean test metrics helps to improve software quality effectively in an agile process.

  2. Building a Reduced Reference Video Quality Metric with Very Low Overhead Using Multivariate Data Analysis

    Directory of Open Access Journals (Sweden)

    Tobias Oelbaum

    2008-10-01

    Full Text Available In this contribution a reduced-reference video quality metric for AVC/H.264 is proposed that needs only a very low overhead (not more than two bytes per sequence). This reduced-reference metric uses well-established algorithms to measure objective features of the video such as 'blur' or 'blocking'. Those measurements are then combined into a single measurement of the overall video quality. The weights of the single features and their combination are determined using methods provided by multivariate data analysis. The proposed metric is verified using a data set of AVC/H.264-encoded videos and the corresponding results of a carefully designed and conducted subjective evaluation. Results show that the proposed reduced-reference metric outperforms not only standard PSNR but also two well-known full-reference metrics.
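
    The feature-combination step can be sketched with ordinary least squares standing in for the paper's multivariate data analysis; the feature values and subjective scores below are synthetic:

    ```python
    import numpy as np

    # Learn weights mapping objective features (e.g., blur, blocking) to
    # subjective mean opinion scores; OLS is the simplest stand-in here.
    rng = np.random.default_rng(0)
    features = rng.random((40, 2))        # columns: blur, blocking per sequence
    mos = 5 - 2.0 * features[:, 0] - 1.5 * features[:, 1] \
          + 0.1 * rng.normal(size=40)
    X = np.column_stack([np.ones(40), features])
    weights, *_ = np.linalg.lstsq(X, mos, rcond=None)
    predicted_mos = X @ weights           # single combined quality estimate
    ```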

  3. Extracting Patterns from Educational Traces via Clustering and Associated Quality Metrics

    NARCIS (Netherlands)

    Mihaescu, Marian; Tanasie, Alexandru; Dascalu, Mihai; Trausan-Matu, Stefan

    2016-01-01

    Clustering algorithms, pattern mining techniques and associated quality metrics emerged as reliable methods for modeling learners’ performance, comprehension and interaction in given educational scenarios. The specificity of available data such as missing values, extreme values or outliers, creates

  4. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    Science.gov (United States)

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
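
    The stand-alone versus information recovery comparison can be reproduced in miniature with NetworkX (>= 2.8 for Louvain) and scikit-learn on a small planted-partition graph; this toy setup is not the paper's benchmark:

    ```python
    import networkx as nx
    from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

    # Graph with known ground-truth communities (node attribute "block").
    G = nx.planted_partition_graph(4, 25, 0.3, 0.01, seed=1)
    truth = [G.nodes[n]["block"] for n in G]

    communities = nx.community.louvain_communities(G, seed=1)
    labels = [0] * G.number_of_nodes()
    for cid, members in enumerate(communities):
        for n in members:
            labels[n] = cid

    print("modularity:", nx.community.modularity(G, communities))  # stand-alone
    print("ARI:       ", adjusted_rand_score(truth, labels))       # recovery
    print("NMI:       ", normalized_mutual_info_score(truth, labels))
    ```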

  5. Improved structural similarity metric for the visible quality measurement of images

    Science.gov (United States)

    Lee, Daeho; Lim, Sungsoo

    2016-11-01

    The visible quality assessment of images is important to evaluate the performance of image processing methods such as image correction, compression, and enhancement. Structural similarity is widely used to determine visible quality; however, existing structural similarity metrics cannot correctly assess the perceived quality of images that have been slightly geometrically transformed or that have undergone significant regional distortion. We propose an improved structural similarity metric that is closer to human visual evaluation. Compared with the existing metrics, the proposed method can more correctly evaluate the similarity between an original image and various distorted images.
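
    For reference, the baseline structural similarity computation that such improved metrics build on is readily available in scikit-image; the noisy test image below is illustrative:

    ```python
    import numpy as np
    from skimage import data, img_as_float
    from skimage.metrics import structural_similarity

    ref = img_as_float(data.camera())
    noisy = np.clip(ref + 0.05 * np.random.default_rng(0).normal(size=ref.shape),
                    0, 1)
    score, ssim_map = structural_similarity(noisy, ref, data_range=1.0, full=True)
    print(f"SSIM = {score:.3f}")  # 1.0 means identical images
    ```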

  6. Better Metrics to Automatically Predict the Quality of a Text Summary

    Directory of Open Access Journals (Sweden)

    Judith D. Schlesinger

    2012-09-01

    Full Text Available In this paper we demonstrate a family of metrics for estimating the quality of a text summary relative to one or more human-generated summaries. The improved metrics are based on features automatically computed from the summaries to measure content and linguistic quality. The features are combined using one of three methods—robust regression, non-negative least squares, or canonical correlation, an eigenvalue method. The new metrics significantly outperform the previous standard for automatic text summarization evaluation, ROUGE.
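
    Of the three combination methods, non-negative least squares is the simplest to sketch; the feature matrix and human scores below are synthetic stand-ins for the automatically computed features:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Rows are summaries, columns are content and linguistic-quality features;
    # y holds human quality judgments (all values synthetic).
    rng = np.random.default_rng(0)
    X = rng.random((50, 6))
    y = X @ np.array([0.5, 0.2, 0.0, 0.8, 0.1, 0.3]) + 0.05 * rng.normal(size=50)

    weights, rnorm = nnls(X, y)     # non-negative least squares combination
    predicted = X @ weights         # automatic estimate of summary quality
    ```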

  7. Quality metrics in high-dimensional data visualization: an overview and systematization.

    Science.gov (United States)

    Bertini, Enrico; Tatu, Andrada; Keim, Daniel

    2011-12-01

    In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research.

  8. The use of Software Quality Metrics in Software Maintenance

    OpenAIRE

    Kafura, Dennis G.; Reddy, Geereddy R.

    1985-01-01

    This paper reports on a modest study which relates seven different software complexity metrics to the experience of maintenance activities performed on a medium-size software system. Three different versions of the system that evolved over a period of three years were analyzed in this study. A major revision of the system, while still in its design phase, was also analyzed. The results of this study indicate: (1) that the growth in system complexity as determined by the software...

  9. Design of video quality metrics with multi-way data analysis a data driven approach

    CERN Document Server

    Keimel, Christian

    2016-01-01

    This book proposes a data-driven methodology using multi-way data analysis for the design of video-quality metrics. It also enables video-quality metrics to be created using arbitrary features. This data-driven design approach not only requires no detailed knowledge of the human visual system, but also allows a proper consideration of the temporal nature of video using a three-way prediction model, corresponding to the three-way structure of video. Using two simple example metrics, the author demonstrates not only that this purely data-driven approach outperforms state-of-the-art video-quality metrics, which are often optimized for specific properties of the human visual system, but also that multi-way data analysis methods outperform the combination of two-way data analysis methods and temporal pooling.

  10. Study on the quality evaluation metrics for compressed spaceborne hyperspectral data

    Institute of Scientific and Technical Information of China (English)

    LI Xiaohui; ZHANG Jing; LI Chuanrong; LIU Yi; LI Ziyang; ZHU Jiajia; ZENG Xiangzhao

    2015-01-01

    Based on the raw data of spaceborne dispersive and interferometric imaging spectrometers, a set of quality evaluation metrics for compressed hyperspectral data is initially established in this paper. These quality evaluation metrics, which consist of four aspects including compression statistical distortion, sensor performance evaluation, data application performance, and image quality, are suited to the comprehensive and systematic analysis of the impact of lossy compression on spaceborne hyperspectral remote sensing data quality. Furthermore, the evaluation results would be helpful in the selection and optimization of satellite data compression schemes.

  11. A management-oriented framework for selecting metrics used to assess habitat- and path-specific quality in spatially structured populations

    Science.gov (United States)

    Sam Nicol,; Ruscena Wiederholt,; Diffendorfer, James E.; Brady Mattsson,; Thogmartin, Wayne E.; Semmens, Darius J.; Laura Lopez-Hoffman,; Ryan Norris,

    2016-01-01

    Mobile species with complex spatial dynamics can be difficult to manage because their population distributions vary across space and time, and because the consequences of managing particular habitats are uncertain when evaluated at the level of the entire population. Metrics to assess the importance of habitats and pathways connecting habitats in a network are necessary to guide a variety of management decisions. Given the many metrics developed for spatially structured models, it can be challenging to select the most appropriate one for a particular decision. To guide the management of spatially structured populations, we define three classes of metrics describing habitat and pathway quality based on their data requirements (graph-based, occupancy-based, and demographic-based metrics) and synopsize the ecological literature relating to these classes. Applying the first steps of a formal decision-making approach (problem framing, objectives, and management actions), we assess the utility of metrics for particular types of management decisions. Our framework can help managers with problem framing, choosing metrics of habitat and pathway quality, and to elucidate the data needs for a particular metric. Our goal is to help managers to narrow the range of suitable metrics for a management project, and aid in decision-making to make the best use of limited resources.

  12. Using business intelligence to monitor clinical quality metrics.

    Science.gov (United States)

    Resetar, Ervina; Noirot, Laura A; Reichley, Richard M; Storey, Patricia; Skiles, Ann M; Traynor, Patrick; Dunagan, W Claiborne; Bailey, Thomas C

    2007-10-11

    BJC HealthCare (BJC) uses a number of industry standard indicators to monitor the quality of services provided by each of its hospitals. By establishing an enterprise data warehouse as a central repository of clinical quality information, BJC is able to monitor clinical quality performance in a timely manner and improve clinical outcomes.

  13. A no-reference video quality assessment metric based on ROI

    Science.gov (United States)

    Jia, Lixiu; Zhong, Xuefei; Tu, Yan; Niu, Wenjuan

    2015-01-01

    A no-reference video quality assessment metric based on the region of interest (ROI) is proposed in this paper. In the metric, objective video quality is evaluated by integrating the quality of two compression artifacts, i.e., blurring distortion and blocking distortion. A Gaussian kernel function was used to extract human density maps for the H.264-coded videos from subjective eye-tracking data. An objective bottom-up ROI extraction model was built from the magnitude discrepancy of the discrete wavelet transform between two consecutive frames, a center-weighted color opponent model, a luminance contrast model, and a frequency saliency model based on spectral residual. Then only the objective saliency maps were used to compute the objective blurring and blocking quality. The results indicate that the objective ROI extraction metric has a higher area under the curve (AUC) value. Compared with conventional video quality assessment metrics, which measure all the video frames, the metric proposed in this paper not only decreases the computational complexity but also improves the correlation between the subjective mean opinion score (MOS) and objective scores.
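
    The core pooling idea, weighting a local quality map by the saliency (ROI) map so that degradations in attended regions dominate the score, can be sketched as follows; the maps below are toy placeholders:

    ```python
    import numpy as np

    def saliency_pooled_quality(local_quality, saliency):
        """Pool a per-pixel quality map using a saliency (ROI) map as weights."""
        w = saliency / (saliency.sum() + 1e-12)
        return float((w * local_quality).sum())

    q_map = np.random.default_rng(0).random((480, 640))    # e.g., local blur quality
    roi = np.zeros((480, 640))
    roi[180:300, 240:400] = 1.0                             # toy saliency map
    print(saliency_pooled_quality(q_map, roi))
    ```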

  14. Utility of different glycemic control metrics for optimizing management of diabetes.

    Science.gov (United States)

    Kohnert, Klaus-Dieter; Heinke, Peter; Vogt, Lutz; Salzsieder, Eckhard

    2015-02-15

    The benchmark for assessing quality of long-term glycemic control and adjustment of therapy is currently glycated hemoglobin (HbA1c). Despite its importance as an indicator for the development of diabetic complications, recent studies have revealed that this metric has some limitations; it conveys a rather complex message, which has to be taken into consideration for diabetes screening and treatment. On the basis of recent clinical trials, the relationship between HbA1c and cardiovascular outcomes in long-standing diabetes has been called into question. It becomes obvious that other surrogate markers and biomarkers are needed to better predict cardiovascular diabetes complications and assess the efficacy of therapy. Glycated albumin, fructosamine, and 1,5-anhydroglucitol have received growing interest as alternative markers of glycemic control. In addition to measures of hyperglycemia, advanced glucose monitoring methods have become available. An indispensable adjunct to HbA1c in routine diabetes care is self-monitoring of blood glucose. This monitoring method is now widely used, as it provides immediate feedback to patients on short-term changes, involving fasting, preprandial, and postprandial glucose levels. Beyond the traditional metrics, glycemic variability has been identified as a predictor of hypoglycemia, and it might also be implicated in the pathogenesis of vascular diabetes complications. Assessment of glycemic variability is thus important, but exact quantification requires frequently sampled glucose measurements. In order to optimize diabetes treatment, there is a need both for key metrics of glycemic control on a day-to-day basis and for more advanced, user-friendly monitoring methods. In addition to traditional discontinuous glucose testing, continuous glucose sensing has become a useful tool to reveal insufficient glycemic management. This new technology is particularly effective in patients with complicated diabetes and provides the opportunity to characterize

  15. A NEW OBJECT BASED QUALITY METRIC BASED ON SIFT AND SSIM

    OpenAIRE

    Decombas, Marc; Dufaux, Frederic; Renan, Erwann; Pesquet-Popescu, Beatrice; Capman, Francois

    2012-01-01

    ICIP2012; We propose a full reference visual quality metric to evaluate a semantic coding system which may not preserve exactly the position and/or the shape of objects. The metric is based on Scale-Invariant Feature Transform (SIFT) points. More specifically, Structural SIMilarity (SSIM) on windows around the SIFT points measures the compression artifacts (SSIM_SIFT). Conversely, the standard deviation of the matching distance between the SIFT points measures the geometric distortion (GEOMET...
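
    A rough sketch of the SSIM_SIFT component under stated assumptions (OpenCV >= 4.4 for SIFT, grayscale uint8 images, and an assumed window size); the geometric-distortion term is not reproduced:

    ```python
    import cv2
    import numpy as np
    from skimage.metrics import structural_similarity

    def ssim_sift(ref, dist, win=16):
        """Mean SSIM over windows centered on SIFT keypoints of the reference."""
        keypoints = cv2.SIFT_create().detect(ref, None)
        h, w = ref.shape
        scores = []
        for kp in keypoints:
            x, y = int(kp.pt[0]), int(kp.pt[1])
            if win <= x < w - win and win <= y < h - win:  # skip border keypoints
                scores.append(structural_similarity(
                    ref[y - win:y + win, x - win:x + win],
                    dist[y - win:y + win, x - win:x + win],
                    data_range=255))
        return float(np.mean(scores)) if scores else 0.0
    ```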

  16. SU-E-T-572: A Plan Quality Metric for Evaluating Knowledge-Based Treatment Plans.

    Science.gov (United States)

    Chanyavanich, V; Lo, J; Das, S

    2012-06-01

    In prostate IMRT treatment planning, the variation in patient anatomy makes it difficult to estimate a priori the achievable extent of dose reduction to the rectum and bladder. We developed a mutual information-based framework to estimate the achievable plan quality for a new patient, prior to any treatment planning or optimization. The knowledge base consists of 250 retrospective prostate IMRT plans. Using these prior plans, twenty query cases were each matched with five cases from the database. We propose a simple DVH plan quality metric (PQ) based on the weighted sum of the areas under the curve (AUC) of the PTV, rectum, and bladder. We evaluated the plan quality of knowledge-based generated plans and established a correlation between plan quality and case similarity. The introduced plan quality metric correlates well (r2 = 0.8) with the mutual similarity between cases. A matched case with high anatomical similarity can be used to produce a new high-quality plan. Not surprisingly, a poorly matched case with a low degree of anatomical similarity tends to produce a low-quality plan, since the adapted fluences from a dissimilar case cannot be modified sufficiently to yield acceptable PTV coverage. The plan quality metric is well correlated to the degree of anatomical similarity between a new query case and matched cases. Further work will investigate how to apply this metric to further stratify and select cases for knowledge-based planning. © 2012 American Association of Physicists in Medicine.
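
    A minimal sketch of a weighted-AUC plan quality score of this kind; the DVH curves, weights, and sign convention below are illustrative assumptions, not the authors' values:

    ```python
    import numpy as np

    def plan_quality(dvhs, weights):
        """Weighted sum of the areas under the dose-volume histogram curves."""
        score = 0.0
        for organ, (dose, volume) in dvhs.items():
            auc = np.trapz(volume, dose)  # area under the DVH curve
            score += weights[organ] * auc
        return score

    dose = np.linspace(0, 80, 81)  # Gy
    dvhs = {
        "PTV": (dose, np.clip(1.2 - dose / 80, 0, 1)),
        "rectum": (dose, np.exp(-dose / 25)),
        "bladder": (dose, np.exp(-dose / 30)),
    }
    # Higher PTV coverage is rewarded; organ-at-risk dose is penalized.
    pq = plan_quality(dvhs, {"PTV": +1.0, "rectum": -0.5, "bladder": -0.5})
    ```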

  17. Design For Six Sigma with Critical-To-Quality Metrics for Research Investments

    Energy Technology Data Exchange (ETDEWEB)

    Logan, R W

    2005-06-22

    Design for Six Sigma (DFSS) has evolved as a worthy predecessor to the application of Six-Sigma principles to production, process control, and quality. At Livermore National Laboratory (LLNL), we are exploring the interrelation of our current research, development, and design safety standards as they would relate to the principles of DFSS and Six-Sigma. We have had success in prioritization of research and design using a quantitative scalar metric for value, so we further explore the use of scalar metrics to represent the outcome of our use of the DFSS process. We use the design of an automotive component as an example of combining DFSS metrics into a scalar decision quantity. We then extend this concept to a high-priority, personnel safety example representing work that is toward the mature end of DFSS, and begins the transition into Six-Sigma for safety assessments in a production process. This latter example and objective involves the balance of research investment, quality control, and system operation and maintenance of high explosive handling at LLNL and related production facilities. Assuring a sufficiently low probability of failure (reaction of a high explosive given an accidental impact) is a Critical-To-Quality (CTQ) component of our weapons and stockpile stewardship operation and cost. Our use of DFSS principles, with quantification and merging of CTQ metrics, provides ways to quantify clear (preliminary) paths forward for both the automotive example and the explosive safety example. The presentation of simple, scalar metrics to quantify the path forward then provides a focal point for qualitative caveats and discussion for inclusion of other metrics besides a single, provocative scalar. In this way, carrying a scalar decision metric along with the DFSS process motivates further discussion and ideas for process improvement from the DFSS into the Six-Sigma phase of the product. We end with an example of how our DFSS-generated scalar metric could be

  18. Recommendations for mass spectrometry data quality metrics for open access data (corollary to the Amsterdam Principles).

    Science.gov (United States)

    Kinsinger, Christopher R; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W; Deutsch, Eric W; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L; Omenn, Gilbert S; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L; Simpson, Richard J; Slotta, Douglas; Smith, Richard D; Stein, Stephen E; Tabb, David L; Tagle, Danilo; Yates, John R; Rodriguez, Henry

    2012-02-03

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the U.S. National Cancer Institute (NCI) convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: (1) an evolving list of comprehensive quality metrics and (2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals.

  19. Content based no-reference image quality metrics

    OpenAIRE

    Marini, A.C.

    2012-01-01

    Images are playing a more and more important role in sharing, expressing, mining and exchanging information in our daily lives. Now we can all easily capture and share images anywhere and anytime. Since digital images are subject to a wide variety of distortions during acquisition, processing, compression, storage, transmission and reproduction; it becomes necessary to assess the Image Quality. In this thesis, starting from an organized overview of available Image Quality Assessment methods, ...

  20. Integrated concurrent utilization quality review, Part one.

    Science.gov (United States)

    Caterinicchio, R P

    1987-01-01

    This article is the first of a two-part series which argues for the concurrent management of the appropriateness, necessity, and quality of patient care. Intensifying scrutiny by the credentialing groups, the PROs and all third-party payors underscores the vital need to implement cost-effective information systems which integrate the departmentalized functions of patient-physician profiling, DRG case-mix analyses, length of stay monitoring, pre-admission/admission and continued stay review, discharge planning, risk management, incident reporting and quality review. In the domain of physician performance regarding admitting and practice patterns, the ability to exercise concurrent utilization-quality review means early detection and prevention of events which would otherwise result in denials of payment and/or compromised patient care. Concurrent utilization-quality review must, by definition, be managerially invasive and focused; hence, it is integral to maintaining the integrity of the services and product lines offered by the provider. In fact, if PPO status is a marketing agenda, then the institutional objectives of cost-effectiveness, productivity, value, and competitiveness can only be achieved through concurrent utilization-quality review.

  1. Comparison of surface-based and image-based quality metrics for the analysis of dimensional computed tomography data

    Directory of Open Access Journals (Sweden)

    Francisco A. Arenhart

    2016-11-01

    Full Text Available This paper presents a comparison of surface-based and image-based quality metrics for dimensional X-ray computed tomography (CT) data. The chosen metrics are used to characterize two key aspects of acquiring signals with CT systems: the loss of information (blurring) and the adding of unwanted information (noise). A set of structured experiments was designed to test the response of the metrics to different influencing factors. It is demonstrated that, under certain circumstances, the results of the two types of metrics become conflicting, emphasizing the importance of using surface information for evaluating the quality of dimensional CT data. Specific findings using both types of metrics are also discussed.

  2. Using full-reference image quality metrics for automatic image sharpening

    Science.gov (United States)

    Krasula, Lukas; Fliegel, Karel; Le Callet, Patrick; Klíma, Miloš

    2014-05-01

    Image sharpening is a post-processing technique employed for the artificial enhancement of perceived sharpness by shortening the transitions between luminance levels or increasing the contrast on edges. The greatest challenge in this area is to determine the level of perceived sharpness which is optimal for human observers. This task is complex because enhancement is gained only up to a certain threshold; beyond it, the quality of the resulting image drops due to the presence of annoying artifacts. Despite the effort dedicated to automatic sharpness estimation, none of the existing metrics is designed for localizing this threshold. Nevertheless, it is a very important step towards automatic image sharpening. In this work, the possible usage of full-reference image quality metrics for finding the optimal amount of sharpening is proposed and investigated. An intentionally over-sharpened "anchor image" was included in the calculation as the "anti-reference", and the final metric score was computed from the differences between the reference, processed, and anchor versions of the scene. Quality scores obtained from the subjective experiment were used to determine the optimal combination of partial metric values. Five popular fidelity metrics - SSIM, MS-SSIM, IW-SSIM, VIF, and FSIM - were tested. The performance of the proposed approach was then verified in the subjective experiment.
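
    One plausible reading of the anchor-based scoring, sketched with SSIM standing in for the five fidelity metrics; the combination rule below is an assumption, not the paper's fitted one:

    ```python
    from skimage.metrics import structural_similarity as ssim

    def sharpening_score(processed, reference, anchor, data_range=1.0):
        """Similarity to the reference minus similarity to the deliberately
        over-sharpened anchor; higher means close to the reference and far
        from over-sharpening artifacts (float images assumed)."""
        to_ref = ssim(processed, reference, data_range=data_range)
        to_anchor = ssim(processed, anchor, data_range=data_range)
        return to_ref - to_anchor
    ```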

  3. Quality Assessment of Adaptive Bitrate Videos using Image Metrics and Machine Learning

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Brunnström, Kjell

    2015-01-01

    Adaptive bitrate (ABR) streaming is widely used for distribution of videos over the internet. In this work, we investigate how well we can predict the quality of such videos using well-known image metrics, information about the bitrate levels, and a relatively simple machine learning method...

  4. Quality Metrics and Reliability Analysis of Laser Communication System

    Directory of Open Access Journals (Sweden)

    A. Arockia Bazil Raj

    2016-03-01

    Full Text Available Beam wandering is the main cause of major power loss in laser communication. To analyse this effect in our environment, a 155 Mbps data transmission experimental setup was built with the necessary optoelectronic components for a link range of 0.5 km at an altitude of 15.25 m. A neuro-controller was developed inside the FPGA and used to stabilise the received beam at the centre of the detector plane. The Q-factor and bit error rate variation profiles are calculated using the signal statistics obtained from the eye diagram. The performance improvements to the laser communication system due to the incorporation of beam wandering mitigation control are investigated and discussed in terms of various communication quality assessment key parameters. Defence Science Journal, Vol. 66, No. 2, March 2016, pp. 175-185, DOI: http://dx.doi.org/10.14429/dsj.66.9707
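
    The Q-factor and bit error rate relations used with eye-diagram statistics are standard; a small sketch with synthetic mark/space samples:

    ```python
    import numpy as np
    from scipy.special import erfc

    def q_factor_ber(ones, zeros):
        """Textbook eye-diagram statistics: Q-factor and the BER it implies."""
        q = (np.mean(ones) - np.mean(zeros)) / (np.std(ones) + np.std(zeros))
        ber = 0.5 * erfc(q / np.sqrt(2))
        return q, ber

    rng = np.random.default_rng(0)
    q, ber = q_factor_ber(rng.normal(1.0, 0.12, 10_000),   # logical-one samples
                          rng.normal(0.0, 0.10, 10_000))   # logical-zero samples
    ```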

  5. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)*

    Science.gov (United States)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H.; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W.; Deutsch, Eric W.; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L.; Omenn, Gilbert S.; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L.; Simpson, Richard J.; Slotta, Douglas; Smith, Richard D.; Stein, Stephen E.; Tabb, David L.; Tagle, Danilo; Yates, John R.; Rodriguez, Henry

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: 1) an evolving list of comprehensive quality metrics and 2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals. PMID:22052993

  6. Improved color metrics in solid-state lighting via utilization of on-chip quantum dots

    Science.gov (United States)

    Mangum, Benjamin D.; Landes, Tiemo S.; Theobald, Brian R.; Kurtin, Juanita N.

    2017-02-01

    While Quantum Dots (QDs) have found commercial success in display applications, there are currently no widely available solid state lighting products making use of QD nanotechnology. In order to have real-world success in today's lighting market, QDs must be capable of being placed in on-chip configurations, as remote phosphor configurations are typically much more expensive. Here we demonstrate solid-state lighting devices made with on-chip QDs. These devices show robust reliability under both dry and wet high stress conditions. High color quality lighting metrics can easily be achieved using these narrow, tunable QD downconverters: CRI values of Ra > 90 as well as R9 values > 80 are readily available when combining QDs with green phosphors. Furthermore, we show that QDs afford a 15% increase in overall efficiency compared to traditional phosphor downconverted SSL devices. The fundamental limit of QD linewidth is examined through single particle QD emission studies. Using standard Cd-based QD synthesis, it is found that single particle linewidths of 20 nm FWHM represent a lower limit to the narrowness of QD emission in the near term.

  7. Quality metric in matched Laplacian of Gaussian response domain for blind adaptive optics image deconvolution

    Science.gov (United States)

    Guo, Shiping; Zhang, Rongzhi; Yang, Yikang; Xu, Rong; Liu, Changhai; Li, Jisheng

    2016-04-01

    Adaptive optics (AO) in conjunction with subsequent postprocessing techniques has markedly improved the resolution of turbulence-degraded images in ground-based astronomical observations or artificial space object detection and identification. However, important tasks involved in AO image postprocessing, such as frame selection, stopping iterative deconvolution, and algorithm comparison, commonly need manual intervention and cannot be performed automatically due to a lack of widely agreed-upon image quality metrics. In this work, based on the Laplacian of Gaussian (LoG) local contrast feature detection operator, we propose a LoG domain matching operation to perceive effective and universal image quality statistics. Further, we extract two no-reference quality assessment indices in the matched LoG domain that can be used for a variety of postprocessing tasks. Three typical space object images with distinct structural features are tested to verify the consistency of the proposed metric with perceptual image quality through subjective evaluation.
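
    The quantity underlying such metrics is easy to compute. The Python sketch below convolves an image with a Laplacian-of-Gaussian kernel and uses the response energy as a crude no-reference sharpness proxy; the function name, the choice of sigma, and the energy statistic are illustrative assumptions, not the authors' matched-LoG implementation:

        import numpy as np
        from scipy.ndimage import gaussian_filter, gaussian_laplace

        def log_quality_stat(image, sigma=2.0):
            """Summarize the Laplacian-of-Gaussian response of an image.

            Higher LoG response energy loosely tracks local contrast and
            sharpness, the intuition behind LoG-domain no-reference indices.
            """
            img = image.astype(np.float64)
            img = (img - img.mean()) / (img.std() + 1e-12)  # contrast-normalize
            response = gaussian_laplace(img, sigma=sigma)   # LoG filtering
            return float(np.mean(response ** 2))            # response energy

        # A blurred frame should score lower than the sharp original
        rng = np.random.default_rng(0)
        sharp = rng.random((128, 128))
        blurred = gaussian_filter(sharp, sigma=3.0)
        print(log_quality_stat(sharp), log_quality_stat(blurred))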

  8. Quality metric for accurate overlay control in <20nm nodes

    Science.gov (United States)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the Overlay (OVL) budget is getting tighter at these advanced nodes, accuracy in each nanometer of OVL error becomes critical. When process owners select OVL targets and methods for their process, they must do so wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going toward the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named `Qmerit' for its imaging based OVL (IBO) targets, which is obtained on-the-fly for each OVL measurement point in X & Y. This Qmerit score will enable process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  9. Comparison of macroinvertebrate-derived stream quality metrics between snag and riffle habitats

    Science.gov (United States)

    Stepenuck, K.F.; Crunkilton, R.L.; Bozek, Michael A.; Wang, L.

    2008-01-01

    We compared benthic macroinvertebrate assemblage structure at snag and riffle habitats in 43 Wisconsin streams across a range of watershed urbanization using a variety of stream quality metrics. Discriminant analysis indicated that dominant taxa at riffles and snags differed; Hydropsychid caddisflies (Hydropsyche betteni and Cheumatopsyche spp.) and elmid beetles (Optioservus spp. and Stenelmis spp.) typified riffles, whereas isopods (Asellus intermedius) and amphipods (Hyalella azteca and Gammarus pseudolimnaeus) predominated in snags. Analysis of covariance indicated that samples from snag and riffle habitats differed significantly in their response to the urbanization gradient for the Hilsenhoff biotic index (BI), Shannon's diversity index, and percent of filterers, shredders, and pollution intolerant Ephemeroptera, Plecoptera, and Trichoptera (EPT) at each stream site (p ≤ 0.10). These differences suggest that although macroinvertebrate assemblages present in either habitat type are sensitive to detecting the effects of urbanization, metrics derived from different habitats should not be intermixed when assessing stream quality through biomonitoring. This can be a limitation to resource managers who wish to compare water quality among streams where the same habitat type is not available at all stream locations, or where a specific habitat type (i.e., a riffle) is required to determine a metric value (i.e., BI). To account for differences in stream quality at sites lacking riffle habitat, snag-derived metric values can be adjusted based on those obtained from riffles that have been exposed to the same level of urbanization. Comparison of nonlinear regression equations that related stream quality metric values from the two habitat types to percent watershed urbanization indicated that snag habitats had on average 30.2 fewer percent EPT individuals, a lower diversity index value than riffles, and a BI value of 0.29 greater than riffles. © 2008 American Water Resources Association.
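
    The Hilsenhoff biotic index referenced above is an abundance-weighted mean of taxon tolerance values, BI = Σ(n_i × t_i)/N, with higher values indicating poorer water quality. A minimal Python sketch with hypothetical counts and tolerance values (not the study's data):

        def hilsenhoff_bi(counts, tolerance):
            """Hilsenhoff Biotic Index: BI = sum(n_i * t_i) / N, where n_i is
            the abundance of taxon i, t_i its pollution-tolerance value (0-10),
            and N the total count. Higher BI means poorer water quality."""
            n_total = sum(counts.values())
            return sum(n * tolerance[t] for t, n in counts.items()) / n_total

        # Hypothetical snag sample; tolerance values are illustrative only
        counts = {"Hydropsyche": 40, "Optioservus": 25, "Asellus": 60, "Hyalella": 30}
        tolerance = {"Hydropsyche": 4, "Optioservus": 4, "Asellus": 8, "Hyalella": 8}
        print(round(hilsenhoff_bi(counts, tolerance), 2))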

  10. ENHANCED ENSEMBLE PREDICTION ALGORITHMS FOR DETECTING FAULTY MODULES IN OBJECT ORIENTED SYSTEMS USING QUALITY METRICS

    Directory of Open Access Journals (Sweden)

    M. Punithavalli

    2012-01-01

    The high usage of software systems poses high quality demands from users, which results in increased software complexity. To address these complexities, software quality engineering (SQE) methods should be updated accordingly to enhance their quality assurance methods. Fault prediction, a sub-task of SQE, is designed to solve this issue and provide a strategy to identify faulty parts of a program, so that the testing process can concentrate only on those regions. This improves the testing process and indirectly helps to reduce the development life cycle, project risks, and resource and infrastructure costs. Measuring quality using software metrics for fault identification is gaining wide interest in the software industry as it helps to reduce time and cost. Existing systems use either traditional simple metrics or object oriented metrics during fault detection, combined with a single-classifier prediction system. This study combines the use of simple and object oriented metrics and uses a multiple-classifier prediction system to identify module faults. In this study, a total of 20 metrics combining both traditional and OO metrics are used for fault detection. To analyze the performance of these metrics on fault module detection, the study proposes the use of ensemble classifiers built from three frequently used classifiers: Back Propagation Neural Network (BPNN), Support Vector Machine (SVM) and K-Nearest Neighbour (KNN). A novel classifier aggregation method is proposed to combine the classification results. Four methods, Sequential Selection, Random Selection with No Replacement, Selection with Bagging and Selection with Boosting, are used to generate different variants of the input dataset. The three classifiers were grouped together as 2-classifier and 3-classifier prediction ensemble models. A total of 16 ensemble models were proposed for fault prediction. The performance of the proposed prediction models was analyzed using accuracy, precision, recall and F-measure.
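
    The base-learner combination described above maps naturally onto a standard voting ensemble. A minimal scikit-learn sketch, with a synthetic 20-feature dataset standing in for the metric data and soft voting standing in for the paper's novel aggregation method:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import VotingClassifier
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC

        # Stand-in for a 20-metric fault dataset (traditional + OO metrics per module)
        X, y = make_classification(n_samples=500, n_features=20,
                                   weights=[0.8, 0.2], random_state=1)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

        # 3-classifier ensemble over BPNN, SVM, and KNN base learners
        ensemble = VotingClassifier(
            estimators=[
                ("bpnn", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                       random_state=1)),
                ("svm", SVC(probability=True, random_state=1)),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
            ],
            voting="soft",  # aggregate predicted class probabilities
        )
        ensemble.fit(X_tr, y_tr)
        print(classification_report(y_te, ensemble.predict(X_te)))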

  11. A quality metric for homology modeling: the H-factor

    Science.gov (United States)

    2011-01-01

    Background: The analysis of protein structures provides fundamental insight into most biochemical functions and consequently into the cause and possible treatment of diseases. As the structures of most known proteins cannot be solved experimentally, owing to technical or sometimes simply time constraints, in silico protein structure prediction is expected to step in and generate a more complete picture of the protein structure universe. Molecular modeling of protein structures is a fast growing field and tremendous work has been done since the publication of the very first model. The growth of modeling techniques, and more specifically of those that rely on the existing experimental knowledge of protein structures, is intimately linked to the development of high resolution experimental techniques such as NMR, X-ray crystallography and electron microscopy. This strong connection between experimental and in silico methods is, however, not devoid of criticisms and concerns among modelers as well as among experimentalists. Results: In this paper, we focus on homology modeling and, more specifically, we review how it is perceived by the structural biology community and what can be done to impress on the experimentalists that it can be a valuable resource to them. We review the common practices and provide a set of guidelines for building better models. For that purpose, we introduce the H-factor, a new indicator for assessing the quality of homology models, mimicking the R-factor in X-ray crystallography. The method for computing the H-factor is fully described and validated on a series of test cases. Conclusions: We have developed a web service for computing the H-factor for models of a protein structure. This service is freely accessible at http://koehllab.genomecenter.ucdavis.edu/toolkit/h-factor. PMID:21291572

  12. Environmental Quality and Aquatic Invertebrate Metrics Relationships at Patagonian Wetlands Subjected to Livestock Grazing Pressures.

    Directory of Open Access Journals (Sweden)

    Luis Beltrán Epele

    Livestock grazing can compromise the biotic integrity and health of wetlands, especially in remote areas like Patagonia, which provide habitat for several endemic terrestrial and aquatic species. Understanding the effects of these land use practices on invertebrate communities can help prevent the deterioration of wetlands and provide insights for restoration. In this contribution, we assessed the responses of 36 metrics based on the structural and functional attributes of invertebrates (130 taxa) at 30 Patagonian wetlands that were subject to different levels of livestock grazing intensity. These levels were categorized as low, medium and high based on eight features (livestock stock densities plus seven wetland measurements). Significant changes in environmental features were detected across the gradient of wetlands, mainly related to pH, conductivity, and nutrient values. Regardless of rainfall gradient, symptoms of eutrophication were remarkable at some highly disturbed sites. Seven invertebrate metrics consistently and accurately responded to livestock grazing on wetlands. All of them were negatively related to increased levels of grazing disturbance, with the number of insect families appearing as the most robust measure. A multivariate approach (RDA) revealed that invertebrate metrics were significantly affected by environmental variables related to water quality: in particular, pH, conductivity, dissolved oxygen, nutrient concentrations, and the richness and coverage of aquatic plants. Our results suggest that the seven aforementioned metrics could be used to assess ecological quality in the arid and semi-arid wetlands of Patagonia, helping to ensure the creation of protected areas and their associated ecological services.

  13. Environmental Quality and Aquatic Invertebrate Metrics Relationships at Patagonian Wetlands Subjected to Livestock Grazing Pressures

    Science.gov (United States)

    2015-01-01

    Livestock grazing can compromise the biotic integrity and health of wetlands, especially in remote areas like Patagonia, which provide habitat for several endemic terrestrial and aquatic species. Understanding the effects of these land use practices on invertebrate communities can help prevent the deterioration of wetlands and provide insights for restoration. In this contribution, we assessed the responses of 36 metrics based on the structural and functional attributes of invertebrates (130 taxa) at 30 Patagonian wetlands that were subject to different levels of livestock grazing intensity. These levels were categorized as low, medium and high based on eight features (livestock stock densities plus seven wetland measurements). Significant changes in environmental features were detected across the gradient of wetlands, mainly related to pH, conductivity, and nutrient values. Regardless of rainfall gradient, symptoms of eutrophication were remarkable at some highly disturbed sites. Seven invertebrate metrics consistently and accurately responded to livestock grazing on wetlands. All of them were negatively related to increased levels of grazing disturbance, with the number of insect families appearing as the most robust measure. A multivariate approach (RDA) revealed that invertebrate metrics were significantly affected by environmental variables related to water quality: in particular, pH, conductivity, dissolved oxygen, nutrient concentrations, and the richness and coverage of aquatic plants. Our results suggest that the seven aforementioned metrics could be used to assess ecological quality in the arid and semi-arid wetlands of Patagonia, helping to ensure the creation of protected areas and their associated ecological services. PMID:26448652

  14. Simulation of devices mobility to estimate wireless channel quality metrics in 5G networks

    Science.gov (United States)

    Orlov, Yu.; Fedorov, S.; Samuylov, A.; Gaidamaka, Yu.; Molchanov, D.

    2017-07-01

    The problem of channel quality estimation for devices in a wireless 5G network is formulated. As the performance metric of interest we choose the signal-to-interference-plus-noise ratio (SINR), which depends essentially on the distance between the communicating devices. A model with a plurality of moving devices in a bounded three-dimensional space and a simulation algorithm to determine the distances between the devices for a given motion model are devised.
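
    A minimal Python sketch of such a simulation: devices take bounded random steps in a 100 m cube, and the SINR of a chosen link is recomputed from the inter-device distances after each step. The path-loss model, transmit power, and noise floor are illustrative assumptions, not the paper's parameters:

        import numpy as np

        rng = np.random.default_rng(42)

        def sinr_db(positions, link, tx_power=1.0, alpha=3.5, noise=1e-9):
            """SINR of one link under power-law path loss P_rx = P_tx * d**(-alpha).
            Transmit power, path-loss exponent, and noise floor are made up."""
            tx, rx = link
            signal = tx_power * np.linalg.norm(positions[tx] - positions[rx]) ** (-alpha)
            interference = sum(
                tx_power * np.linalg.norm(positions[k] - positions[rx]) ** (-alpha)
                for k in range(len(positions)) if k not in link
            )
            return 10 * np.log10(signal / (interference + noise))

        # Devices take small random steps inside a bounded 100 m cube; the SINR
        # of the link between devices 0 and 1 is re-evaluated after every step.
        positions = rng.uniform(0, 100, size=(10, 3))
        for step in range(5):
            positions = np.clip(positions + rng.normal(0, 1.0, positions.shape), 0, 100)
            print(f"step {step}: SINR = {sinr_db(positions, (0, 1)):.1f} dB")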

  15. Program analysis methodology Office of Transportation Technologies: Quality Metrics final report

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2002-03-01

    "Quality Metrics" is the analytical process for measuring and estimating future energy, environmental and economic benefits of US DOE Office of Energy Efficiency and Renewable Energy (EE/RE) programs. This report focuses on the projected benefits of the programs currently supported by the Office of Transportation Technologies (OTT) within EE/RE. For analytical purposes, these various benefits are subdivided in terms of Planning Units which are related to the OTT program structure.

  16. Developing a composite weighted quality metric to reflect the total benefit conferred by a health plan.

    Science.gov (United States)

    Taskler, Glen B; Braithwaite, R Scott

    2015-03-01

    Individual health quality measures are associated with varying degrees of health benefit, and composite quality metrics weight individual measures identically; we sought to improve on both. We developed a health-weighted composite quality measure reflecting the total health benefit conferred by a health plan annually, using preventive care as a test case. Using national disease prevalence, we simulated a hypothetical insurance panel of individuals aged 25 to 84 years. For each individual, we estimated the gain in life expectancy associated with 1 year of health system exposure to encourage adherence to major preventive care guidelines, controlling for patient characteristics (age, race, gender, comorbidity) and variation in individual adherence rates. This personalized gain in life expectancy was used to proxy for the amount of health benefit conferred by a health plan annually to its members, and formed weights in our health-weighted composite quality measure. We aggregated health benefits across the health insurance membership panel to analyze total health system performance. Our composite quality metric gave the highest weights to health plans that succeeded in implementing tobacco cessation and weight loss. One year of compliance with these goals was associated with 2 to 10 times as much health benefit as compliance with easier-to-follow preventive care services, such as mammography, aspirin, and antihypertensives. For example, for women aged 55 to 64 years, successful interventions to encourage weight loss were associated with 2.1 times the health benefit of blood pressure reduction and 3.9 times the health benefit of increasing adherence with screening mammography. A single health-weighted quality metric may inform measurement of total health system performance.

  17. Sigma metrics in clinical chemistry laboratory – A guide to quality control

    Directory of Open Access Journals (Sweden)

    Usha S. Adiga

    2015-10-01

    Background: Six Sigma is a quality measurement and improvement program used in industry. Sigma methodology can be applied wherever an outcome of a process is to be measured. A poor outcome is counted as an error or defect, quantified as defects per million (DPM). Six Sigma provides a more quantitative framework for evaluating process performance, with evidence for process improvement, and describes how many sigmas fit within the tolerance limits. Sigma metrics can be used effectively in laboratory services. The present study was undertaken to evaluate the quality of the analytical performance of a clinical chemistry laboratory by calculating sigma metrics. Methodology: The study was conducted in the clinical biochemistry laboratory of Karwar Institute of Medical Sciences, Karwar. Sigma metrics of 15 parameters run on an automated chemistry analyzer, Transasia XL 640, were analyzed. The analytes assessed were glucose, urea, creatinine, uric acid, total bilirubin (BT), direct bilirubin (BD), total protein, albumin, SGOT, SGPT, ALP, total cholesterol, triglycerides, HDL and calcium. Results: We have sigma values <3 for urea, ALT, BD, BT, Ca, creatinine (L1) and urea, AST, BD (L2). Sigma lies between 3 and 6 for glucose, AST, cholesterol, uric acid, total protein (L1) and ALT, cholesterol, BT, calcium, creatinine and glucose (L2). Sigma was more than 6 for triglycerides, ALP, HDL, albumin (L1) and TG, uric acid, ALP, HDL, albumin, total protein (L2). Conclusion: Sigma metrics help to assess analytical methodologies and augment laboratory performance. They act as a guide for planning quality control strategy and can be a self-assessment tool regarding the functioning of a clinical laboratory.
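
    The sigma metric itself is computed from the total allowable error, the observed bias, and the observed imprecision: Sigma = (TEa − |bias|)/CV. A one-function Python sketch with illustrative numbers (not the study's data):

        def sigma_metric(tea, bias, cv):
            """Sigma = (TEa - |bias|) / CV, with all three expressed in percent."""
            return (tea - abs(bias)) / cv

        # Illustrative values only: TEa from an EQA scheme, bias and CV from QC data
        print(sigma_metric(tea=10.0, bias=2.0, cv=2.5))  # -> 3.2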

  18. qcML: An Exchange Format for Quality Control Metrics from Mass Spectrometry Experiments*

    Science.gov (United States)

    Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W. P.; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A.; Kelstrup, Christian D.; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S.; Olsen, Jesper V.; Heck, Albert J. R.; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart

    2014-01-01

    Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exchange format for reporting these performance metrics. We therefore developed the qcML format, an XML-based standard that follows the design principles of the related mzML, mzIdentML, mzQuantML, and TraML standards from the HUPO-PSI (Proteomics Standards Initiative). In addition to the XML format, we also provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. We here describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml. PMID:24760958

  19. qcML: an exchange format for quality control metrics from mass spectrometry experiments.

    Science.gov (United States)

    Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W P; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A; Kelstrup, Christian D; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S; Olsen, Jesper V; Heck, Albert J R; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart

    2014-08-01

    Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exchange format for reporting these performance metrics. We therefore developed the qcML format, an XML-based standard that follows the design principles of the related mzML, mzIdentML, mzQuantML, and TraML standards from the HUPO-PSI (Proteomics Standards Initiative). In addition to the XML format, we also provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. We here describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.

  20. Optimal Rate Control in H.264 Video Coding Based on Video Quality Metric

    Directory of Open Access Journals (Sweden)

    R. Karthikeyan

    2014-05-01

    The aim of this research is to find a method for providing better visual quality across the complete video sequence in the H.264 video coding standard. The H.264 video coding standard, with its significantly improved coding efficiency, finds important applications in digital video streaming, storage and broadcast. To achieve comparable quality across the complete video sequence under constraints on bandwidth availability and buffer fullness, it is important to allocate more bits to frames with high complexity or a scene change and fewer bits to other, less complex frames. A frame-layer bit allocation scheme is proposed based on the perceptual quality metric as an indicator of frame complexity. The proposed model computes the Quality Index ratio (QIr) of the predicted quality index of the current frame to the average quality index of all the previous frames in the group of pictures, which is used for bit allocation to the current frame along with bits computed based on buffer availability. The standard deviation of the perceptual quality indicator MOS computed for the proposed model is significantly lower, which means the quality is nearly uniform throughout the full video sequence. The experimental results thus show that the proposed model effectively handles scene changes and scenes with high motion for better visual quality.
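
    The QIr-driven allocation can be sketched in a few lines: compute QIr from the predicted and historical quality indices, then scale the buffer-derived bit budget. The linear blend below is an assumption for illustration; the paper's exact mapping from QIr to bits may differ:

        def allocate_bits(prev_qi, predicted_qi, budget_bits, weight=0.5):
            """Scale a buffer-derived bit budget by the Quality Index ratio (QIr).

            QIr is the predicted quality index of the current frame divided by
            the average index of the previous frames in the GOP. The blend
            between the QIr-driven share and the plain buffer share is a
            stand-in, not the paper's exact model.
            """
            qir = predicted_qi / (sum(prev_qi) / len(prev_qi))
            return budget_bits * (weight * qir + (1.0 - weight))

        history = [0.82, 0.85, 0.80, 0.84]  # quality indices of earlier frames
        print(round(allocate_bits(history, 0.60, 48000)))  # QIr < 1
        print(round(allocate_bits(history, 0.90, 48000)))  # QIr > 1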

  1. The utility metric: a novel method to assess the overall performance of discrete brain-computer interfaces.

    Science.gov (United States)

    Dal Seno, Bernardo; Matteucci, Matteo; Mainardi, Luca T

    2010-02-01

    A relevant issue in a brain-computer interface (BCI) is the capability to efficiently convert user intentions into correct actions, and how to properly measure this efficiency. Usually, the evaluation of a BCI system is approached through the quantification of the classifier performance, which is often measured by means of the information transfer rate (ITR). A shortcoming of this approach is that the control interface design is neglected, and hence a poor description of the overall performance is obtained for real systems. To overcome this limitation, we propose a novel metric based on the computation of BCI Utility. The new metric can accurately predict the overall performance of a BCI system, as it takes into account both the classifier and the control interface characteristics. It is therefore suitable for design purposes, where we have to select the best options among different components and different parameter setups. In the paper, we compute Utility in two scenarios, a P300 speller and a P300 speller with an error correction system (ECS), for different values of accuracy of the classifier and recall of the ECS. Monte Carlo simulations confirm that Utility predicts the performance of a BCI better than ITR.
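
    For context, the ITR baseline is usually Wolpaw's formula, and a speller-style utility can be sketched by crediting each selection with its expected net progress when one target acts as an error-correcting backspace. The utility form below is a simplified reading of that idea, not the paper's exact derivation:

        import math

        def wolpaw_itr(n, p, t):
            """Wolpaw ITR in bits/min for an n-target BCI with selection
            accuracy p and selection time t (seconds)."""
            if p <= 1.0 / n:
                return 0.0
            bits = math.log2(n) + p * math.log2(p)
            if p < 1.0:
                bits += (1 - p) * math.log2((1 - p) / (n - 1))
            return 60.0 * bits / t

        def speller_utility(n, p, t):
            """Utility-style rate for a speller where one target is a backspace:
            net correct symbols carry log2(n-1) bits each, and the expected net
            advance per selection is (2p - 1); zero for p <= 0.5."""
            return max(0.0, 2 * p - 1) * math.log2(n - 1) * 60.0 / t

        for p in (0.80, 0.90, 0.99):
            print(f"p={p}: ITR={wolpaw_itr(36, p, 10):.1f} b/min, "
                  f"utility={speller_utility(36, p, 10):.1f} b/min")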

  2. The Nutrient Balance Concept: A New Quality Metric for Composite Meals and Diets.

    Directory of Open Access Journals (Sweden)

    Edward B Fern

    Combinations of foods that provide suitable levels of nutrients and energy are required for optimum health. Currently, however, it is difficult to define numerically what are 'suitable levels'. To develop new metrics based on energy considerations-the Nutrient Balance Concept (NBC)-for assessing overall nutrition quality when combining foods and meals. The NBC was developed using the USDA Food Composition Database (Release 27) and illustrated with their MyPlate 7-day sample menus for a 2000 calorie food pattern. The NBC concept is centered on three specific metrics for a given food, meal or diet-a Qualifying Index (QI), a Disqualifying Index (DI) and a Nutrient Balance (NB). The QI and DI were determined, respectively, from the content of 27 essential nutrients and 6 nutrients associated with negative health outcomes. The third metric, the Nutrient Balance (NB), was derived from the Qualifying Index (QI) and provided key information on the relative content of qualifying nutrients in the food. Because the Qualifying and Disqualifying Indices (QI and DI) were standardized to energy content, both become constants for a given food/meal/diet and a particular consumer age group, making it possible to develop algorithms for predicting nutrition quality when combining different foods. Combining different foods into composite meals and daily diets led to improved nutrition quality as seen by QI values closer to unity (indicating nutrient density was better equilibrated with energy density), DI values below 1.0 (denoting an acceptable level of consumption of disqualifying nutrients) and increased NB values (signifying complementarity of foods and better provision of qualifying nutrients). The Nutrient Balance Concept (NBC) represents a new approach to nutrient profiling and the first step in the progression from the nutrient evaluation of individual foods to that of multiple foods in the context of meals and total diets.

  3. The Nutrient Balance Concept: A New Quality Metric for Composite Meals and Diets.

    Science.gov (United States)

    Fern, Edward B; Watzke, Heribert; Barclay, Denis V; Roulin, Anne; Drewnowski, Adam

    2015-01-01

    Combinations of foods that provide suitable levels of nutrients and energy are required for optimum health. Currently, however, it is difficult to define numerically what are 'suitable levels'. To develop new metrics based on energy considerations-the Nutrient Balance Concept (NBC)-for assessing overall nutrition quality when combining foods and meals. The NBC was developed using the USDA Food Composition Database (Release 27) and illustrated with their MyPlate 7-day sample menus for a 2000 calorie food pattern. The NBC concept is centered on three specific metrics for a given food, meal or diet-a Qualifying Index (QI), a Disqualifying Index (DI) and a Nutrient Balance (NB). The QI and DI were determined, respectively, from the content of 27 essential nutrients and 6 nutrients associated with negative health outcomes. The third metric, the Nutrient Balance (NB), was derived from the Qualifying Index (QI) and provided key information on the relative content of qualifying nutrients in the food. Because the Qualifying and Disqualifying Indices (QI and DI) were standardized to energy content, both become constants for a given food/meal/diet and a particular consumer age group, making it possible to develop algorithms for predicting nutrition quality when combining different foods. Combining different foods into composite meals and daily diets led to improved nutrition quality as seen by QI values closer to unity (indicating nutrient density was better equilibrated with energy density), DI values below 1.0 (denoting an acceptable level of consumption of disqualifying nutrients) and increased NB values (signifying complementarity of foods and better provision of qualifying nutrients). The Nutrient Balance Concept (NBC) represents a new approach to nutrient profiling and the first step in the progression from the nutrient evaluation of individual foods to that of multiple foods in the context of meals and total diets.
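
    A schematic Python reading of the three indices follows: nutrient contents are standardized to a reference energy intake, then averaged as ratios to reference values for qualifying and disqualifying nutrients. The nutrient lists, reference values, and the capping rule used for NB are illustrative assumptions, not the published algorithm:

        def nutrient_indices(food, energy_kcal, refs, limits, e_ref=2000.0):
            """Energy-standardized Qualifying (QI) and Disqualifying (DI)
            indices and a Nutrient Balance (NB) score; a simplified reading of
            the NBC. `refs` holds daily reference amounts of qualifying
            nutrients, `limits` maximal amounts of disqualifying nutrients."""
            scale = e_ref / energy_kcal                  # scale food to e_ref kcal
            q = [food[n] * scale / refs[n] for n in refs]
            d = [food[n] * scale / limits[n] for n in limits]
            qi = sum(q) / len(q)
            di = sum(d) / len(d)
            nb = 100.0 * sum(min(r, 1.0) for r in q) / len(q)  # capped ratios
            return qi, di, nb

        # Toy food with three qualifying and two disqualifying nutrients
        food = {"protein_g": 8, "fiber_g": 3, "vit_c_mg": 15,
                "sodium_mg": 300, "sugar_g": 12}
        refs = {"protein_g": 56, "fiber_g": 28, "vit_c_mg": 90}
        limits = {"sodium_mg": 2300, "sugar_g": 50}
        print(nutrient_indices(food, energy_kcal=250, refs=refs, limits=limits))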

  4. Macroinvertebrate and diatom metrics as indicators of water-quality conditions in connected depression wetlands in the Mississippi Alluvial Plain

    Science.gov (United States)

    Justus, Billy; Burge, David; Cobb, Jennifer; Marsico, Travis; Bouldin, Jennifer

    2016-01-01

    Methods for assessing wetland conditions must be established so wetlands can be monitored and ecological services can be protected. We evaluated biological indices compiled from macroinvertebrate and diatom metrics developed primarily for streams to assess their ability to indicate water quality in connected depression wetlands. We collected water-quality and biological samples at 24 connected depressions dominated by water tupelo (Nyssa aquatica) or bald cypress (Taxodium distichum) (water depths = 0.5–1.0 m). Water quality of the least-disturbed connected depressions was characteristic of swamps in the southeastern USA, which tend to have low specific conductance, nutrient concentrations, and pH. We compared 162 macroinvertebrate metrics and 123 diatom metrics with a water-quality disturbance gradient. For most metrics, we evaluated richness, % richness, abundance, and % relative abundance values. Three of the 4 macroinvertebrate metrics that were most beneficial for identifying disturbance in connected depressions decreased along the disturbance gradient even though they normally increase relative to stream disturbance. The negative relationship to disturbance of some taxa (e.g., dipterans, mollusks, and crustaceans) that are considered tolerant in streams suggests that the tolerance scale for some macroinvertebrates can differ markedly between streams and wetlands. Three of the 4 metrics chosen for the diatom index reflected published tolerances or fit the usual perception of metric response to disturbance. Both biological indices may be useful in connected depressions elsewhere in the Mississippi Alluvial Plain Ecoregion and could have application in other wetland types. Given the paradoxical relationship of some macroinvertebrate metrics to dissolved O2 (DO), we suggest that the diatom metrics may be easier to interpret and defend for wetlands with low DO concentrations in least-disturbed conditions.

  5. Using research metrics to evaluate the International Atomic Energy Agency guidelines on quality assurance for R&D

    Energy Technology Data Exchange (ETDEWEB)

    Bodnarczuk, M.

    1994-06-01

    The objective of the International Atomic Energy Agency (IAEA) Guidelines on Quality Assurance for R&D is to provide guidance for developing quality assurance (QA) programs for R&D work on items, services, and processes important to safety, and to support the siting, design, construction, commissioning, operation, and decommissioning of nuclear facilities. The standard approach to writing papers describing new quality guidelines documents is to present a descriptive overview of the contents of the document. I will depart from this approach. Instead, I will first discuss a conceptual framework of metrics for evaluating and improving basic and applied experimental science as well as the associated role that quality management should play in understanding and implementing these metrics. I will conclude by evaluating how well the IAEA document addresses the metrics from this conceptual framework and the broader principles of quality management.

  6. The role of metrics and measurements in a software intensive total quality management environment

    Science.gov (United States)

    Daniels, Charles B.

    1992-01-01

    Paramax Space Systems began its mission as a member of the Rockwell Space Operations Company (RSOC) team which was the successful bidder on a massive operations consolidation contract for the Mission Operations Directorate (MOD) at JSC. The contract awarded to the team was the Space Transportation System Operations Contract (STSOC). Our initial challenge was to accept responsibility for a very large, highly complex and fragmented collection of software from eleven different contractors and transform it into a coherent, operational baseline. Concurrently, we had to integrate a diverse group of people from eleven different companies into a single, cohesive team. Paramax executives recognized the absolute necessity to develop a business culture based on the concept of employee involvement to execute and improve the complex process of our new environment. Our executives clearly understood that management needed to set the example and lead the way to quality improvement. The total quality management policy and the metrics used in this endeavor are presented.

  7. Using image quality metrics to identify adversarial imagery for deep learning networks

    Science.gov (United States)

    Harguess, Josh; Miclat, Jeremy; Raheema, Julian

    2017-05-01

    Deep learning has continued to gain momentum in applications across many critical areas of research in computer vision and machine learning. In particular, deep learning networks have had much success in image classification, especially when training data are abundantly available, as is the case with the ImageNet project. However, several researchers have exposed potential vulnerabilities of these networks to carefully crafted adversarial imagery. Additionally, researchers have shown the sensitivity of these networks to some types of noise and distortion. In this paper, we investigate the use of no-reference image quality metrics to identify adversarial imagery and images of poor quality that could potentially fool a deep learning network or dramatically reduce its accuracy. Results are shown on several adversarial image databases with comparisons to popular image classification databases.

  8. Definition of Metric Dependencies for Monitoring the Impact of Quality of Services on Quality of Processes

    OpenAIRE

    2007-01-01

    Service providers have to monitor the quality of offered services and to ensure the compliance of service levels provider and requester agreed on. Thereby, a service provider should notify a service requester about violations of service level agreements (SLAs). Furthermore, the provider should point to impacts on affected processes in which services are invoked. For that purpose, a model is needed to define dependencies between quality of processes and quality of invoked services. In order to...

  9. The software product assurance metrics study: JPL's software systems quality and productivity

    Science.gov (United States)

    Bush, Marilyn W.

    1989-01-01

    The findings are reported of the Jet Propulsion Laboratory (JPL)/Software Product Assurance (SPA) Metrics Study, conducted as part of a larger JPL effort to improve software quality and productivity. Until recently, no comprehensive data had been assembled on how JPL manages and develops software-intensive systems. The first objective was to collect data on software development from as many projects and for as many years as possible. Results from five projects are discussed. These results reflect 15 years of JPL software development, representing over 100 data points (systems and subsystems), over a third of a billion dollars, over four million lines of code and 28,000 person months. Analysis of this data provides a benchmark for gauging the effectiveness of past, present and future software development work. In addition, the study is meant to encourage projects to record existing metrics data and to gather future data. The SPA long term goal is to integrate the collection of historical data and ongoing project data with future project estimations.

  10. Quality electric motor repair: A guidebook for electric utilities

    Energy Technology Data Exchange (ETDEWEB)

    Schueler, V.; Douglass, J.

    1995-08-01

    This guidebook provides utilities with a resource for better understanding and developing their roles in relation to electric motor repair shops and the industrial and commercial utility customers that use them. The guidebook includes information and tools that utilities can use to raise the quality of electric motor repair practices in their service territories.

  11. The Development and Demonstration of The Metric Assessment Tool

    Science.gov (United States)

    1993-09-01

    motivate continuous improvement and likewise quality. Attributes of Meaningful Metrics Section Overview. The importance of metrics cannot be overstated...some of the attributes of meaningful measures discussed earlier in this chapter. The Metrics Handbook. This guide is utilized by a variety of Air...Metric Assessment Tool. Metric Selection. The metric assessment tool was designed to apply to any type of metric. Two criteria were established for

  12. SVD-based quality metric for image and video using machine learning.

    Science.gov (United States)

    Narwaria, Manish; Lin, Weisi

    2012-04-01

    We study the use of machine learning for visual quality evaluation with comprehensive singular value decomposition (SVD)-based visual features. In this paper, the two-stage process and the relevant work in the existing visual quality metrics are first introduced followed by an in-depth analysis of SVD for visual quality assessment. Singular values and vectors form the selected features for visual quality assessment. Machine learning is then used for the feature pooling process and demonstrated to be effective. This is to address the limitations of the existing pooling techniques, like simple summation, averaging, Minkowski summation, etc., which tend to be ad hoc. We advocate machine learning for feature pooling because it is more systematic and data driven. The experiments show that the proposed method outperforms the eight existing relevant schemes. Extensive analysis and cross validation are performed with ten publicly available databases (eight for images with a total of 4042 test images and two for video with a total of 228 videos). We use all publicly accessible software and databases in this study, as well as making our own software public, to facilitate comparison in future research.
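
    The feature-extraction stage can be sketched compactly: decompose each image block with SVD and keep the normalized singular-value spectrum as a local feature vector, which a learned regressor would then pool into a single quality score. The block size and normalization below are illustrative choices, not the paper's exact pipeline:

        import numpy as np

        def svd_block_features(image, block=8):
            """Per-block singular-value features for learned quality pooling.

            Each block is decomposed with SVD; its singular-value spectrum is a
            local feature vector. A regressor trained on subjective scores
            would pool these features into one quality score (the machine
            learning stage of the paper); only the features are sketched here.
            """
            h, w = image.shape
            feats = []
            for r in range(0, h - block + 1, block):
                for c in range(0, w - block + 1, block):
                    s = np.linalg.svd(image[r:r + block, c:c + block],
                                      compute_uv=False)
                    feats.append(s / (s.sum() + 1e-12))  # normalized spectrum
            return np.asarray(feats)

        rng = np.random.default_rng(0)
        feats = svd_block_features(rng.random((64, 64)))
        print(feats.shape)  # (64, 8): 64 blocks, 8 singular values each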

  13. Recommendations for mass spectrometry data quality metrics for open access data(corollary to the Amsterdam principles)

    Energy Technology Data Exchange (ETDEWEB)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark S.; Bian, Xiaopeng; Borchers, Christoph H.; Bradshaw, Ralph A.; Brusniak, Mi-Youn; Chan, Daniel W.; Deutsch, Eric W.; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William S.; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L.; Omenn, Gilbert S.; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L.; Simpson, Richard J.; Slotta, Douglas; Smith, Richard D.; Stein, Stephen E.; Tabb, David L.; Tagle, Danilo; Yates, John R.; Rodriguez, Henry

    2011-12-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the U.S. National Cancer Institute (NCI) convened the 'International Workshop on Proteomic Data Quality Metrics' in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: (i) an evolving list of comprehensive quality metrics and (ii) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in Proteomics, Proteomics Clinical Applications, Journal of Proteome Research, and Molecular and Cellular Proteomics, as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals.

  14. On using Multiple Quality Link Metrics with Destination Sequenced Distance Vector Protocol for Wireless Multi-Hop Networks

    CERN Document Server

    Javaid, N; Khan, Z A; Djouani, K

    2012-01-01

    In this paper, we compare and analyze the performance of five quality link metrics for Wireless Multi-hop Networks (WMhNs). The metrics are based on loss probability measurements (ETX, ETT, InvETX, ML and MD) in a distance vector routing protocol, DSDV. Among these selected metrics, we have implemented ML, MD, InvETX and ETT in DSDV; these were previously implemented with different protocols: ML, MD and InvETX with OLSR, while ETT was implemented in MR-LQSR. For our comparison, we have selected Throughput, Normalized Routing Load (NRL) and End-to-End Delay (E2ED) as performance parameters. Finally, we deduce that InvETX outperforms all the other metrics owing to its low computational burden and its measurement of link asymmetry.
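
    ETX, the prototype of these loss-based metrics, is the expected number of transmissions needed for a successful delivery over a link: ETX = 1/(df × dr), where df and dr are the forward and reverse probe delivery ratios; a route's metric is the sum over its links. A quick Python sketch with made-up delivery ratios:

        def etx(df, dr):
            """Expected Transmission Count of a link: ETX = 1 / (df * dr),
            where df and dr are the measured forward and reverse delivery
            ratios of probe packets."""
            return 1.0 / (df * dr)

        # Path metric is the sum of link ETX values; DSDV prefers the lower sum.
        path_a = [(0.9, 0.8), (0.95, 0.9)]   # two good links
        path_b = [(0.6, 0.7)]                # one lossy link
        print(sum(etx(f, r) for f, r in path_a))  # ~2.56
        print(sum(etx(f, r) for f, r in path_b))  # ~2.38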

  15. Workshop summary: 'Integrating air quality and climate mitigation - is there a need for new metrics to support decision making?'

    Science.gov (United States)

    von Schneidemesser, E.; Schmale, J.; Van Aardenne, J.

    2013-12-01

    Air pollution and climate change are often treated at national and international level as separate problems under different regulatory or thematic frameworks and different policy departments. With air pollution and climate change being strongly linked with regard to their causes, effects and mitigation options, the integration of policies that steer air pollutant and greenhouse gas emission reductions might result in cost-efficient, more effective and thus more sustainable tackling of the two problems. To support informed decision making and to work towards an integrated air quality and climate change mitigation policy requires the identification, quantification and communication of present-day and potential future co-benefits and trade-offs. The identification of co-benefits and trade-offs requires the application of appropriate metrics that are well rooted in science, easy to understand and reflect the needs of policy, industry and the public for informed decision making. For the purpose of this workshop, metrics were loosely defined as a quantified measure of effect or impact used to inform decision-making and to evaluate mitigation measures. The workshop, held on October 9 and 10 and co-organized by the European Environment Agency and the Institute for Advanced Sustainability Studies, brought together representatives from science, policy, NGOs, and industry to discuss whether currently available metrics are 'fit for purpose' or whether there is a need to develop alternative metrics or reassess the way current metrics are used and communicated. Based on the workshop outcome the presentation will (a) summarize the informational needs and current application of metrics by the end-users, who, depending on their field and area of operation might require health, policy, and/or economically relevant parameters at different scales, (b) provide an overview of the state of the science of currently used and newly developed metrics, and the scientific validity of these

  16. Evaluation of cassette-based digital radiography detectors using standardized image quality metrics: AAPM TG-150 Draft Image Detector Tests.

    Science.gov (United States)

    Li, Guang; Greene, Travis C; Nishino, Thomas K; Willis, Charles E

    2016-09-08

    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG-150 tests can be
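
    The ROI-based tests lend themselves to a compact sketch: tile a flat-field image, compute per-ROI means and SNRs, and report a global nonuniformity figure and the minimum ROI SNR. The ROI size and the nonuniformity definition below are illustrative choices rather than the TG-150 draft's exact prescriptions:

        import numpy as np

        def roi_uniformity_metrics(img, roi=64):
            """ROI-based flat-field metrics in the spirit of the TG-150 draft
            tests: tile the detector image, compute per-ROI mean signal and
            SNR, and report global nonuniformity and the minimum ROI SNR."""
            h, w = img.shape
            means, snrs = [], []
            for r in range(0, h - roi + 1, roi):
                for c in range(0, w - roi + 1, roi):
                    tile = img[r:r + roi, c:c + roi].astype(np.float64)
                    means.append(tile.mean())
                    snrs.append(tile.mean() / (tile.std() + 1e-12))
            means = np.asarray(means)
            nonuniformity = 100.0 * (means.max() - means.min()) / means.mean()
            return nonuniformity, float(min(snrs))

        rng = np.random.default_rng(3)
        flat = rng.normal(1000, 10, size=(512, 512))  # simulated flat field
        print(roi_uniformity_metrics(flat))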

  17. Water Quality Response to Forest Biomass Utilization

    Science.gov (United States)

    Benjamin Rau; Augustine Muwamba; Carl Trettin; Sudhanshu Panda; Devendra Amatya; Ernest Tollner

    2017-01-01

    Forested watersheds provide approximately 80% of freshwater drinking resources in the United States (Fox et al. 2007). The water originating from forested watersheds is typically of high quality when compared to agricultural watersheds, and concentrations of nitrogen and phosphorus are nine times higher, on average, in agricultural watersheds when compared to...

  18. A New Normalizing Algorithm for BAC CGH Arrays with Quality Control Metrics

    Directory of Open Access Journals (Sweden)

    Jeffrey C. Miecznikowski

    2011-01-01

    The main focus in pin-tip (or print-tip) microarray analysis is determining which probes, genes, or oligonucleotides are differentially expressed. Specifically, in array comparative genomic hybridization (aCGH) experiments, researchers search for chromosomal imbalances in the genome. To model this data, scientists apply statistical methods to the structure of the experiment and assume that the data consist of the signal plus random noise. In this paper we propose “SmoothArray”, a new method to preprocess comparative genomic hybridization (CGH) bacterial artificial chromosome (BAC) arrays, and we show the effects on a cancer dataset. As part of our R software package “aCGHplus,” this freely available algorithm removes the variation due to the intensity effects, pin/print-tip, the spatial location on the microarray chip, and the relative location from the well plate. Removal of this variation improves the downstream analysis and subsequent inferences made on the data. Further, we present measures to evaluate the quality of the dataset according to the arrayer pins, 384-well plates, plate rows, and plate columns. We compare our method against competing methods using several metrics to measure the biological signal. With this novel normalization algorithm and quality control measures, the user can improve their inferences on datasets and pinpoint problems that may arise in their BAC aCGH technology.

  19. Automating Quality Metrics in the Era of Electronic Medical Records: Digital Signatures for Ventilator Bundle Compliance.

    Science.gov (United States)

    Lan, Haitao; Thongprayoon, Charat; Ahmed, Adil; Herasevich, Vitaly; Sampathkumar, Priya; Gajic, Ognjen; O'Horo, John C

    2015-01-01

    Ventilator-associated events (VAEs) are associated with increased risk of poor outcomes, including death. Bundle practices including thromboembolism prophylaxis, stress ulcer prophylaxis, oral care, and daily sedation breaks and spontaneous breathing trials aim to reduce rates of VAEs and are endorsed as quality metrics in the intensive care units. We sought to create electronic search algorithms (digital signatures) to evaluate compliance with ventilator bundle components as the first step in a larger project evaluating the ventilator bundle effect on VAE. We developed digital signatures of bundle compliance using a retrospective cohort of 542 ICU patients from 2010 for derivation and validation, and tested signature accuracy on a random cohort of 100 patients from 2012. Accuracy was evaluated against manual chart review. Overall, digital signatures performed well, with median sensitivity of 100% (range, 94.4%-100%) and median specificity of 100% (range, 99.8%-100%). Automated ascertainment from electronic medical records accurately assesses ventilator bundle compliance and can be used for quality reporting and research in VAE.

  20. Evaluation of the performance of a micromethod for measuring urinary iodine by using six sigma quality metrics.

    Science.gov (United States)

    Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud

    2013-09-01

    The urinary iodine micromethod (UIMM) is a modification of the conventional method and its performance needs evaluation. UIMM performance was evaluated using the method validation and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM satisfied various method acceptability test criteria, with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed that the Sigma metrics were at 2.75, 1.80, and 3.80 for 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc) below the TEa at all concentrations. Only one laboratory had TEcalc below the TEa.

  1. Implementing Composite Quality Metrics for Bipolar Disorder: Towards a More Comprehensive Approach to Quality Measurement

    Science.gov (United States)

    Kilbourne, Amy M.; Farmer, Carrie; Welsh, Deborah; Pincus, Harold Alan; Lasky, Elaine; Perron, Brian; Bauer, Mark S.

    2011-01-01

    Objective: We implemented a set of processes of care measures for bipolar disorder that reflect psychosocial, patient preference, and continuum of care approaches to mental health, and examined whether veterans with bipolar disorder receive care concordant with these practices. Method: Data from medical record reviews were used to assess key processes of care for 433 VA mental health outpatients with bipolar disorder. Both composite and individual processes of care measures were operationalized. Results: Based on composite measures, 17% had documented assessment of psychiatric symptoms (e.g., psychotic, hallucinatory), 28% had documented patient treatment preferences (e.g., reasons for treatment discontinuation), 56% had documented substance abuse and psychiatric comorbidity assessment, and 62% had documentation of adequate cardiometabolic assessment. No-show visits were followed up 20% of the time and monitoring of weight gain was noted in only 54% of the patient charts. In multivariate analyses, history of homelessness (OR=1.61; 95% CI=1.05-2.46) and nonwhite race (OR=1.74; 95% CI=1.02-2.98) were associated with documentation of psychiatric symptoms and comorbidities, respectively. Conclusions: Only half of patients diagnosed with bipolar disorder received care in accordance with clinical practice guidelines. High quality treatment of bipolar disorder includes not only adherence to treatment guidelines but also patient-centered care processes. PMID:21112457

  2. Quality Metrics and Systems Transformation: Are We Advancing Alcohol and Drug Screening in Primary Care?

    Science.gov (United States)

    Rieckmann, Traci; Renfro, Stephanie; McCarty, Dennis; Baker, Robin; McConnell, K John

    2017-05-31

    To examine the influence of Oregon's coordinated care organizations (CCOs) and pay-for-performance incentive model on completion of screening and brief intervention (SBI) and utilization of substance use disorder (SUD) treatment services. Secondary analysis of Medicaid encounter data from 2012 to 2015 and semiannual qualitative interviews with stakeholders in CCOs. Longitudinal mixed-methods design with simultaneous data collection with equal importance. Qualitative interviews were recorded, transcribed, and coded in ATLAS.ti. Quantitative data included Medicaid encounters 30 months prior to CCO implementation, a 6-month transition period, and 30 months following CCO implementation. Data were aggregated by half-year with analyses restricted to Medicaid recipients 18-64 years of age enrolled in a CCO, not eligible for Medicare coverage or Medicaid expansion. Quantitative analysis documented a significant increase in SBI rates coinciding with CCO implementation (0.1 to 4.6 percent). Completed SBI was not associated with increased initiation in treatment for SUD diagnoses. Qualitative analysis highlighted importance of aligning incentives, workflow redesign, and leadership to facilitate statewide SBI. Results provide modest support for use of a performance metric to expand SBI in primary care. Future research should examine health reform efforts that increase initiation and engagement in SUD treatment. © Health Research and Educational Trust.

  3. Calculating Air Quality and Climate Co-Benefits Metrics from Adjoint Elasticities in Chemistry-Climate Models

    Science.gov (United States)

    Spak, S.; Henze, D. K.; Carmichael, G. R.

    2013-12-01

    The science and policy communities both need common metrics that clearly, comprehensively, and intuitively communicate the relative sensitivities of air quality and climate to emissions control strategies, include emissions and process uncertainties, and minimize the range of error that is transferred to the metric. This is particularly important because most emissions control policies impact multiple short-lived climate forcing agents, and non-linear climate and health responses in space and time limit the accuracy and policy value of simple emissions-based calculations. Here we describe and apply new second-order elasticity metrics to support the direct comparison of emissions control policies for air quality and health co-benefits analyses using adjoint chemical transport and chemistry-climate models. Borrowing an econometric concept, the simplest elasticities in the atmospheric system are the percentage changes in concentrations due to a percentage change in the emissions. We propose a second-order elasticity metric, the Emissions Reduction Efficiency, which supports comparison across compounds, to long-lived climate forcing agents like CO2, and to other air quality impacts, at any temporal or spatial scale. These adjoint-based metrics (1) possess a single uncertainty range; (2) allow for the inclusion of related health and other impacts effects within the same framework; (3) take advantage of adjoint and forward sensitivity models; and (4) are easily understood. Using global simulations with the adjoint of GEOS-Chem, we apply these metrics to identify spatial and sectoral variability in the climate and health co-benefits of sectoral emissions controls on black carbon, sulfur dioxide, and PM2.5. We find spatial gradients in optimal control strategies on every continent, along with differences among megacities.
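
    At its core, the first-order elasticity these metrics extend is just the ratio of fractional responses, epsilon = (dC/C)/(dE/E); second-order metrics such as the proposed Emissions Reduction Efficiency then compare these elasticities across sectors, species, and endpoints. A one-function Python sketch with made-up numbers:

        def elasticity(base_conc, perturbed_conc, emis_change_pct):
            """First-order elasticity: percent change in concentration per
            percent change in emissions, epsilon = (dC/C) / (dE/E)."""
            return ((perturbed_conc - base_conc) / base_conc) / (emis_change_pct / 100.0)

        # Illustrative only: a 10% cut in sector emissions lowers PM2.5 from
        # 12.0 to 11.4 ug/m3, so concentration responds half as fast as emissions.
        print(elasticity(12.0, 11.4, -10.0))  # -> 0.5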

  4. Research on Quality Metrics of Mobile Applications

    Institute of Scientific and Technical Information of China (English)

    刘莉芳

    2016-01-01

    This paper analyses the characteristics of mobile applications and the quality attributes that merit attention, and proposes corresponding metric indicators and measurement methods for each quality attribute.

  5. Engineering performance metrics

    Science.gov (United States)

    Delozier, R.; Snyder, N.

    1993-03-01

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics that would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team was chartered with the goal of developing a system of engineering performance metrics to measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved the normal phases of systems design: conceptual design, detailed design, implementation, and integration. The lessons learned from this effort are explored in this paper; they may provide a starting point for other large engineering organizations seeking to institute a performance measurement system. The team, consisting of customers and Engineering staff members, ensured that the needs and views of the customers were considered in the development of the performance measurements. The development of a system of metrics is no different from the development of any other system: it includes defining performance measurement requirements, conceptual design of the measurement process, detailed design of the measurement and reporting system, and system implementation and integration.

  6. Evaluating water quality investments using cost utility analysis.

    Science.gov (United States)

    Hajkowicz, Stefan; Spencer, Rachel; Higgins, Andrew; Marinoni, Oswald

    2008-09-01

    This study borrows concepts from healthcare economics and uses cost utility analysis (CUA) to select an optimum portfolio of water quality enhancement projects in Perth, Western Australia. In CUA, costs are handled via standard discounted cash flow analysis, but the benefits, being intangible, are measured with a utility score. Our novel methodology combines CUA with a binary combinatorial optimisation solver, known as a 'knapsack algorithm', to identify the optimum portfolio of projects. We show how water quality projects can be selected to maximise an aggregate utility score while not exceeding a budget constraint. Our CUA model applies compromise programming (CP) to measure utility over multiple attributes in different units. CUA is shown to provide a transparent and analytically robust method to maximise benefits from water quality remediation investments under a constrained budget.
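
    The portfolio-selection step described here is a textbook 0/1 knapsack; a minimal sketch in Python, using invented data (the costs, utility scores, and budget below are placeholders, not figures from the Perth study):

        # 0/1 knapsack via dynamic programming: choose the subset of water
        # quality projects that maximises total utility within the budget.
        def select_portfolio(costs, utilities, budget):
            best = [(0.0, [])] * (budget + 1)   # best[b] = (utility, projects) at budget b
            for i, cost in enumerate(costs):
                for b in range(budget, cost - 1, -1):
                    utility = best[b - cost][0] + utilities[i]
                    if utility > best[b][0]:
                        best[b] = (utility, best[b - cost][1] + [i])
            return best[budget]

        # Hypothetical inputs: costs in $'000, compromise-programming utility scores.
        costs = [120, 300, 80, 150, 210]
        utilities = [0.42, 0.95, 0.30, 0.55, 0.70]
        print(select_portfolio(costs, utilities, budget=500))  # best utility and chosen projects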

  7. The impact of interhospital transfers on surgical quality metrics for academic medical centers.

    Science.gov (United States)

    Crippen, Cristina J; Hughes, Steven J; Chen, Sugong; Behrns, Kevin E

    2014-07-01

    The emergence of pay-for-performance systems poses a risk to an academic medical center's (AMC) mission to provide care for interhospital surgical transfer patients. This study examines quality metrics and resource consumption for a sample of these patients from the University Health System Consortium (UHC) and our Department of Surgery (DOS). Standard benchmarks, including mortality rate, length of stay (LOS), and cost, were used to evaluate the impact of interhospital surgical transfers versus direct admission (DA) patients from January 2010 to December 2012. Across 1,423,893 patients, the case mix index for transfer patients was 38 per cent (UHC) and 21 per cent (DOS) greater than that of DA patients. Mortality rates were 5.70 per cent (UHC) and 6.93 per cent (DOS) in transferred patients compared with 1.79 per cent (UHC) and 2.93 per cent (DOS) for DA patients. Mean LOS for DA patients was 4 days shorter. Mean total costs were greater for transferred patients by $13,613 (UHC) and $13,356 (DOS). Transfer patients have poorer outcomes and consume more resources than DA patients. Early recognition and transfer of complex surgical patients may improve patient rescue and decrease resource consumption. Surgeons at AMCs and in the community should develop collaborative programs that permit collective assessment and decision-making for complicated surgical patients.

  8. Research on a Layered Quality Metrics Model for Java Programs

    Institute of Scientific and Technical Information of China (English)

    黄璜; 周欣; 孙家骕

    2003-01-01

    A metrics model is in essence a cluster of criteria for assessing software; it can expose the characteristics of different software systems or modules and thereby serve different user demands. Research on software metrics seeks to characterise software components during component extraction and thus help users select high-quality reusable components. Java being one of today's principal languages, a Quality Metrics Model for Java (JavaSQMM) is proposed that takes the characteristics of Java into account and draws on several general metrics models. Following the "Factor-Criterion-Metrics" principle, detailed descriptions of the model's factors, criteria, and metrics are given; the model offers a framework through which the viewpoints of users can be normalised. In JavaSQMM, software quality evaluation is organised around four activities: understanding, function implementation, maintenance, and reuse, giving rise to four corresponding quality factors composed of criteria and metrics. An Object Oriented Metrics Model Tool for Java (OOMTJava), developed alongside the model, supports the metrics process semi-automatically.

  9. TerrorCat: a translation error categorization-based MT quality metric

    OpenAIRE

    2012-01-01

    We present TerrorCat, a submission to the WMT'12 metrics shared task. TerrorCat uses frequencies of automatically obtained translation error categories as the basis for pairwise comparison of translation hypotheses, which is in turn used to generate a score for every translation. The metric shows high overall correlation with human judgements at the system level and more modest results at the level of individual sentences.

  10. SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, T; Ruan, D [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric, and even a standard for comparing candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics' quality, based on a novel perspective of the metric as a surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model relating surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, using a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate the model parameters to reveal the key factors contributing to a surrogate's ability to prognosticate the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogate quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with the surrogate metric exemplified by several widely used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates' behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with an eCNR of 0.12 yielded statistically better segmentation (mean DSC of about 0.85, first and third quartiles of 0.83 and 0.89) than MSD with an eCNR of 0.10 (mean DSC of 0.84, first and third quartiles of 0.81 and 0.89). Conclusion: The designed eCNR is capable of characterizing surrogate metrics' quality in prognosticating the oracle relevance value. It has been demonstrated to be

  11. UTILIZATION OF QUALITY TOOLS: DOES SECTOR AND SIZE MATTER?

    Directory of Open Access Journals (Sweden)

    Luis Fonseca

    2015-12-01

    This research focuses on the influence of company sector and size on the level of utilization of Basic and Advanced Quality Tools. The paper starts with a literature review and then presents the methodology used for the survey. Based on the responses from 202 managers of Portuguese ISO 9001:2008 Quality Management System certified organizations, statistical tests were performed. Results show, at the 95% confidence level, that industry and services have a similar proportion of use of Basic and Advanced Quality Tools. Concerning size, bigger companies show a stronger tendency to use Advanced Quality Tools than smaller ones. For Basic Quality Tools, there was no statistically significant difference at the 95% confidence level across company sizes. The three Basic Quality Tools with the highest utilization were Check Sheets, Flow Charts, and Histograms (for services) or Control Charts (for industry); however, 22% of the surveyed organizations reported not using Basic Quality Tools, which highlights a major improvement opportunity for these companies. Additional studies addressing motivations, benefits, and barriers to Quality Tools application should be undertaken for further validation and understanding of these results.

  12. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery.

    Science.gov (United States)

    Shiraishi, Satomi; Tan, Jun; Olsen, Lindsey A; Moore, Kevin L

    2015-02-01

    The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose-volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Correlating observed dose with the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V10Gy (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution's VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for training were built for each category, with a stratification scheme based on target and OAR characteristics determined through the initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QMclin - QMpred, and by the coefficient of determination, R². For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. The most accurate predictions are obtained when plans are stratified based on proximity to OARs and their PTV
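
    The accuracy bookkeeping described here (mean and SD of δQM, plus R²) is straightforward to reproduce; a minimal sketch in which the clinical and predicted quality-metric values are invented placeholders, not data from the study:

        import numpy as np

        # Hypothetical clinical vs. model-predicted quality metrics for five plans.
        qm_clin = np.array([1.12, 1.25, 1.08, 1.31, 1.18])
        qm_pred = np.array([1.10, 1.28, 1.05, 1.27, 1.20])

        delta_qm = qm_clin - qm_pred                     # per-plan prediction error
        ss_res = np.sum(delta_qm ** 2)
        ss_tot = np.sum((qm_clin - qm_clin.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination

        print(f"mean dQM = {delta_qm.mean():+.3f}, SD = {delta_qm.std(ddof=1):.3f}, R2 = {r2:.3f}")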

  13. Cost and quality of fuels for electric utility plants, 1994

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-14

    This document presents an annual summary of statistics at the national, Census division, State, electric utility, and plant levels regarding the quantity, quality, and cost of fossil fuels used to produce electricity. The purpose of this publication is to provide energy decision-makers with accurate and timely information that may be used in forming various perspectives on issues regarding electric power.

  14. Cost and quality of fuels for electric utility plants, 1992

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-02

    This publication presents an annual summary of statistics at the national, Census division, State, electric utility, and plant levels regarding the quantity, quality, and cost of fossil fuels used to produce electricity. The purpose of this publication is to provide energy decision-makers with accurate and timely information that may be used in forming various perspectives on issues regarding electric power.

  15. Analysis of reliability metrics and quality enhancement measures in current density imaging.

    Science.gov (United States)

    Foomany, F H; Beheshti, M; Magtibay, K; Masse, S; Foltz, W; Sevaptsidis, E; Lai, P; Jaffray, D A; Krishnan, S; Nanthakumar, K; Umapathy, K

    2013-01-01

    Low frequency current density imaging (LFCDI) is a magnetic resonance imaging (MRI) technique which enables calculation of current pathways within the medium of study. The induced current produces a magnetic flux which presents itself in phase images obtained through MRI scanning. A class of LFCDI challenges arises from the subject rotation requirement, which calls for reliability analysis metrics and specific image registration techniques. In this study these challenges are formulated, and in light of the proposed discussions, a reliability analysis of the calculation of current pathways in a designed phantom and a pig heart is presented. Using the CDI method, the current passed through the phantom was measured with less than 5% error. It is shown that Gauss's law for magnetism can be treated as a reliability metric for matching the images in two orientations. For both the phantom and the pig heart, the usefulness of image registration for mitigation of rotation errors is demonstrated. The reliability metric provides a good representation of the degree of correspondence between images in the two orientations. In our CDI experiments this metric produced values of 95% and 26% for the phantom, and 88% and 75% for the pig heart, for mismatch rotations of 0 and 20 degrees respectively.
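
    The physical constraint behind that reliability metric is the divergence-free character of the magnetic flux density; stated in LaTeX (standard electromagnetism, not a formula quoted from the paper):

        \[ \nabla \cdot \mathbf{B} = 0 \]

    Any apparent divergence in the flux density reconstructed from the two scan orientations therefore signals rotation or registration error, which is what makes the law usable as a consistency check.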

  16. Convective Weather Forecast Quality Metrics for Air Traffic Management Decision-Making

    Science.gov (United States)

    Chatterji, Gano B.; Gyarfas, Brett; Chan, William N.; Meyn, Larry A.

    2006-01-01

    the process described in Refs. 5 through 7, in terms of percentage coverage or confidence level, is notionally sound compared to characterizing in terms of probabilities, because the probability of the forecast being correct can only be determined using actual observations; Refs. 5 through 7 use only the forecast data and not the observations. The method for computing the probability of detection, the false alarm ratio, and several forecast quality metrics (skill scores) using both the forecast and observation data is given in Ref. 2. This paper extends the statistical verification method of Ref. 2 to determine co-occurrence probabilities. The method consists of computing the probability that a severe weather cell (grid location) is detected in the observation data in the neighborhood of the severe weather cell in the forecast data. Probabilities of occurrence at the grid location and in its neighborhood with higher severity, and with lower severity, in the observation data compared to the forecast data are examined. The method proposed in Refs. 5 through 7 is used for computing the probability that a certain number of cells in the neighborhood of severe weather cells in the forecast data are seen as severe weather cells in the observation data. Finally, the probability of existence of gaps in the observation data in the neighborhood of severe weather cells in forecast data is computed. Gaps are defined as openings between severe weather cells through which an aircraft can safely fly to its intended destination. The rest of the paper is organized as follows. Section II summarizes the statistical verification method described in Ref. 2. The extension of this method for computing the co-occurrence probabilities is discussed in Section III. Numerical examples using NCWF forecast data and NCWD observation data are presented in Section III to elucidate the characteristics of the co-occurrence probabilities. This section also discusses the procedure for computing

  17. Metric Madness

    Science.gov (United States)

    Kroon, Cindy D.

    2007-01-01

    Created for a Metric Day activity, Metric Madness is a board game for two to four players. Students review and practice metric vocabulary, measurement, and calculations by playing the game. Playing time is approximately twenty to thirty minutes.

  18. Design, construction and use of a tomograph with micrometric resolution for applications in soil science

    Energy Technology Data Exchange (ETDEWEB)

    Macedo, Alvaro; Torre Neto, Andre; Cruvinel, Paulo Estevao; Crestana, Silvio [Empresa Brasileira de Pesquisa Agropecuaria (EMBRAPA), Sao Carlos, SP (Brazil). Centro Nacional de Pesquisa e Desenvolvimento de Instrumentacao Agropecuaria (CNPDIA)

    1996-08-01

    This paper describes the design, construction, and use of a tomograph with micrometric resolution for soil science. It discusses the problems involved in the study of soil science and describes the system and methodology. 3 figs.

  19. Improving Quality Metric Adherence to Minimally Invasive Breast Biopsy among Surgeons Within a Multihospital Health Care System.

    Science.gov (United States)

    Tjoe, Judy A; Greer, Danielle M; Ihde, Sue E; Bares, Diane A; Mikkelson, Wendy M; Weese, James L

    2015-09-01

    Minimally invasive breast biopsy (MIBB) is the procedure of choice for diagnosing breast lesions indeterminate for malignancy. Multihospital health care systems face challenges achieving systemwide adherence to standardized guidelines among surgeons with varying practice patterns. This study tested whether providing individual feedback about surgeons' use of MIBB to diagnose breast malignancies improved quality metric adherence across a large health care organization. We conducted a prospective matched-pairs study to test differences (or lack of agreement) between the periods before and after intervention. All analytical cases of primary breast cancer diagnosed during 2011 (period 1) and from July 2012 to June 2013 (period 2) across a multihospital health care system were reviewed for initial diagnosis by MIBB or open surgical biopsy. Open surgical biopsy was considered appropriate care only if MIBB could not be performed for reasons listed in the American Society of Breast Surgeons' quality measure for preoperative diagnosis of breast cancer. Individual and systemwide results of adherence to the MIBB metric during period 1 were sent to each surgeon in June 2012 and were later compared with period 2 results using McNemar's test of marginal homogeneity for matched binary responses. Forty-six surgeons were evaluated on use of MIBB to diagnose breast cancer. In period 1, metric adherence for 100% of cases was achieved by 37 surgeons, for a systemwide full-compliance rate of 80.4%. After notification of individual performance, 44 of 46 surgeons used MIBB solely, or otherwise appropriate care, to diagnose breast cancer, which improved systemwide compliance to 95.7%. Providing individual and systemwide performance results to surgeons can increase self-awareness of practice patterns when diagnosing breast cancer, leading to standardized best-practice care across a large health care organization. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  20. A Medical Image Watermarking Technique for Embedding EPR and Its Quality Assessment Using No-Reference Metrics

    Directory of Open Access Journals (Sweden)

    Rupinder Kaur

    2013-01-01

    Digital watermarking can be used as an important tool for the security and copyright protection of digital multimedia content. The present paper explores its application as a quality indicator for a watermarked medical image subjected to intentional (noise, cropping, alteration) or unintentional (compression, transmission, or filtering) operations. The watermark carries EPR data along with a binary mark used for quality assessment. The binary mark serves as a No-Reference (NR) quality metric that blindly estimates the quality of an image without the need of the original image. It is a semi-fragile watermark which degrades at around the same rate as the original image and thus gives an indication of the quality degradation of the host image at the receiving end. In the proposed method, the original image is divided into two parts: ROI and non-ROI. The ROI is an area that contains diagnostically important information and must be processed without any distortion. The binary mark and EPR are embedded into the DCT domain of the non-ROI. Embedding EPR within a medical image reduces storage and transmission overheads, and no additional file has to be sent along with the image. The watermark (binary mark and EPR) is extracted from the non-ROI part at the receiving end, and a measure of the degradation of the binary mark is used to estimate the quality of the original image. The performance of the proposed method is evaluated by calculating the MSE and PSNR of the original and extracted mark.

  1. Application of sigma metrics for the assessment of quality assurance in clinical biochemistry laboratory in India: a pilot study.

    Science.gov (United States)

    Singh, Bhawna; Goswami, Binita; Gupta, Vinod Kumar; Chawla, Ranjna; Mallika, Venkatesan

    2011-04-01

    Ensuring the quality of laboratory services is the need of the hour in the field of health care. Keeping in mind the revolution ushered in by the six sigma concept in the corporate world, the health care sector may reap the benefits of the same. Six sigma provides a general methodology to describe performance on the sigma scale. We aimed to gauge our laboratory performance by sigma metrics. Internal quality control (QC) data were analyzed retrospectively over a period of 6 months from July 2009 to December 2009. The laboratory mean, standard deviation, and coefficient of variation were calculated for all parameters. Sigma was calculated for both levels of internal QC. Satisfactory sigma values (>6) were elicited for creatinine, triglycerides, SGOT, CPK-Total, and amylase. Blood urea performed poorly on the sigma scale, falling short of six sigma standards for all the analytical processes.
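
    Sigma values in laboratory QC studies of this kind are conventionally derived from the total allowable error, the observed bias, and the observed imprecision; as a reminder of the standard calculation (the usual Westgard convention, not a formula quoted from this paper):

        \[ \sigma = \frac{TE_a - |\text{bias}|}{CV} \]

    where \(TE_a\) is the total allowable error and both bias and CV are expressed in percent.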

  2. Quality of life utilities for pelvic inflammatory disease health states.

    Science.gov (United States)

    Smith, Kenneth J; Tsevat, Joel; Ness, Roberta B; Wiesenfeld, Harold C; Roberts, Mark S

    2008-03-01

    Quality of life utilities for health states associated with pelvic inflammatory disease (PID) have been estimated but not directly measured. Utilities for PID could have important implications for the cost-effectiveness of interventions to prevent and manage this disease. We obtained, in women with versus without a history of PID, visual analogue scale (VAS) and time-tradeoff (TTO) valuations for 5 PID-associated health states: ambulatory PID treatment, hospital PID treatment, ectopic pregnancy, chronic pelvic pain, and infertility. Subjects read brief scenarios describing the medical, functional, and social activity effects typically associated with each state, then gave valuations in the order above. Health state valuations were obtained from 56 women with and 150 women without a PID history. Subjects with a PID history had significantly lower mean valuations (P < 0.05) on the VAS for ectopic pregnancy (0.55 vs. 0.63), pelvic pain (0.45 vs. 0.53), and infertility (0.53 vs. 0.66) but not on the TTO; the VAS differences remained significant when controlling for demographic and childbearing characteristics. VAS and TTO valuations were similar in women with versus without a history of PID for the ambulatory and hospital PID treatment health states. PID has a substantial impact on utility. In addition, some PID-related health states are valued less by women who have experienced PID, which could affect cost-effectiveness analyses of PID treatments when examined from the societal versus the patient perspective.

  3. Adding A Spending Metric To Medicare's Value-Based Purchasing Program Rewarded Low-Quality Hospitals.

    Science.gov (United States)

    Das, Anup; Norton, Edward C; Miller, David C; Ryan, Andrew M; Birkmeyer, John D; Chen, Lena M

    2016-05-01

    In fiscal year 2015 the Centers for Medicare and Medicaid Services expanded its Hospital Value-Based Purchasing program by rewarding or penalizing hospitals for their performance on both spending and quality. This represented a sharp departure from the program's original efforts to incentivize hospitals for quality alone. How this change redistributed hospital bonuses and penalties was unknown. Using data from 2,679 US hospitals that participated in the program in fiscal years 2014 and 2015, we found that the new emphasis on spending rewarded not only low-spending hospitals but some low-quality hospitals as well. Thirty-eight percent of low-spending hospitals received bonuses in fiscal year 2014, compared to 100 percent in fiscal year 2015. However, low-quality hospitals also began to receive bonuses (0 percent in fiscal year 2014 compared to 17 percent in 2015). All high-quality hospitals received bonuses in both years. The Centers for Medicare and Medicaid Services should consider incorporating a minimum quality threshold into the Hospital Value-Based Purchasing program to avoid rewarding low-quality, low-spending hospitals.

  4. Research on the Process and Model of Software Quality Metrics

    Institute of Scientific and Technical Information of China (English)

    杜金环; 金璐璐

    2014-01-01

    Software quality measurement is important work for strengthening software project management: it can predict potential errors in software, measure the product before it is complete, and improve software quality based on the measurement results. Given that software quality is difficult to measure, this paper presents a comprehensive study of the measurement process and model. First, the software quality measurement process is examined with reference to the relevant IEEE and CMMI literature. Second, an index system of software quality metrics is built and organised into a hierarchical analysis structure model. A software quality measurement model is then constructed on the basis of the linear weighted synthesis method, and a case study illustrates its application. This work provides a new method for software measurement; however, because software quality involves many uncertain factors, practical applications should take the particularities of the software fully into account and borrow measurement methods from other disciplines.

  5. Sound quality prediction based on systematic metric selection and shrinkage: Comparison of stepwise, lasso, and elastic-net algorithms and clustering preprocessing

    Science.gov (United States)

    Gauthier, Philippe-Aubert; Scullion, William; Berry, Alain

    2017-07-01

    Sound quality is the impression of quality that is transmitted by the sound of a device. Its importance in the sound and acoustical design of consumer products no longer needs to be demonstrated. One of the challenges is the creation of a prediction model able to predict the results of a listening test from metrics derived from the sound stimuli. Often, these models are either derived using linear regression on a limited set of experimenter-selected metrics, or using more complex algorithms such as neural networks. In the former case, the user-selected metrics can bias the model and reflect the engineer's preconceived idea of sound quality while missing potential features. In the latter case, although prediction might be efficient, the model often takes the form of a black box, which is difficult to use as a sound-design guideline for engineers. In this paper, preprocessing by participant clustering and three different algorithms are compared in order to construct a sound quality prediction model that does not suffer from these limitations. The lasso, elastic-net, and stepwise algorithms are tested on listening tests of a consumer product for which 91 metrics are used as potential predictors. Based on the reported results, the most promising algorithm is the lasso, which is able to (1) efficiently limit the number of metrics, (2) most accurately predict the results of listening tests, and (3) provide a meaningful model that can be used as an understandable design guideline.
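
    A lasso-style metric selection of the kind compared here can be sketched with scikit-learn; a minimal illustration in which the listening-test scores and the 91-column metric matrix are synthetic placeholders, and cross-validated penalty selection stands in for whatever tuning the authors used:

        import numpy as np
        from sklearn.linear_model import LassoCV
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 91))        # 40 stimuli x 91 candidate sound metrics
        y = X[:, 3] - 0.5 * X[:, 17] + rng.normal(scale=0.1, size=40)  # synthetic scores

        X_std = StandardScaler().fit_transform(X)
        model = LassoCV(cv=5).fit(X_std, y)  # L1 shrinkage; penalty picked by cross-validation

        selected = np.flatnonzero(model.coef_)  # metrics surviving the shrinkage
        print(f"kept {selected.size} of 91 metrics: {selected}")

    The point of the L1 penalty is exactly the property the abstract highlights: most coefficients are driven to zero, leaving a short, interpretable list of predictive metrics.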

  6. Cost of Quality (CoQ) metrics for telescope operations and project management

    Science.gov (United States)

    Radziwill, Nicole M.

    2006-06-01

    This study describes the goals, foundational work, and early returns associated with establishing a pilot quality cost program at the Robert C. Byrd Green Bank Telescope (GBT). Quality costs provide a means to communicate the results of process improvement efforts in the universal language of project management: money. This scheme stratifies prevention, appraisal, internal failure and external failure costs, and seeks to quantify and compare the up-front investment in planning and risk management versus the cost of rework. An activity-based Cost of Quality (CoQ) model was blended with the Cost of Software Quality (CoSQ) model that has been successfully deployed at Raytheon Electronic Systems (RES) for this pilot program, analyzing the efforts of the GBT Software Development Division. Using this model, questions that can now be answered include: What is an appropriate length for our development cycle? Are some observing modes more reliable than others? Are we testing too much, or not enough? How good is our software quality, not in terms of defects reported and fixed, but in terms of its impact on the user? The ultimate goal is to provide a higher quality of service to customers of the telescope.

  7. Application of sigma metrics for the assessment of quality control in clinical chemistry laboratory in Ghana: A pilot study

    Directory of Open Access Journals (Sweden)

    Justice Afrifa

    2015-01-01

    Background: Sigma metrics provide a uniquely defined scale with which we can assess the performance of a laboratory. The objective of this study was to assess the internal quality control (QC) in the clinical chemistry laboratory of the University of Cape Coast Hospital (UCC) using the six sigma metrics application. Materials and Methods: We used commercial control sera [normal (L1) and pathological (L2)] for validation of quality control. Metabolites (glucose, urea, and creatinine), lipids [triglycerides (TG), total cholesterol, high-density lipoprotein cholesterol (HDL-C)], enzymes [alkaline phosphatase (ALP), aspartate aminotransferase (AST)], electrolytes (sodium, potassium, chloride), and total protein were assessed. Between-day imprecision (CV), inaccuracy (bias), and sigma values were calculated for each control level. Results: Apart from sodium (2.40%, 3.83%) and chloride (2.52%, 2.51%) for L1 and L2 respectively, and glucose (4.82%) and cholesterol (4.86%) for L2, CVs for all other parameters (both L1 and L2) were >5%. Four parameters (HDL-C, urea, creatinine, and potassium) achieved sigma levels >1 for both controls. Chloride and sodium achieved sigma levels >1 for L1 but not for L2. Glucose and ALP achieved a sigma level >1 for both control levels, whereas TG achieved a sigma level >2 for both control levels. Conclusion: Unsatisfactory sigma levels (<3) were achieved for all parameters using both control levels, which shows instability and low consistency of results. There is a need for detailed assessment of the analytical procedures and strengthening of the laboratory control systems in order to achieve effective six sigma levels for the laboratory.

  8. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2012-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development...... of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed....... This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical...

  9. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development...... of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed....... This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical...

  10. Elliptical Local Vessel Density: a Fast and Robust Quality Metric for Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Chaum, Edward [ORNL; Karnowski, Thomas Paul [ORNL; Meriaudeau, Fabrice [ORNL; Tobin Jr, Kenneth William [ORNL; Abramoff, M.D. [University of Iowa

    2008-01-01

    A great effort of the research community is geared towards the creation of an automatic screening system able to promptly detect diabetic retinopathy with the use of fundus cameras. In addition, there are some documented approaches to the problem of automatically judging image quality. We propose a new set of features, independent of field of view or resolution, to describe the morphology of the patient's vessels. Our initial results suggest that they can be used to estimate image quality in a time one order of magnitude shorter than previous techniques.

  11. A metrics-based comparison of secondary user quality between iOS and Android

    NARCIS (Netherlands)

    Amman, T.

    2014-01-01

    Native mobile applications are gaining popularity in the commercial market. There is no other economic sector that grows as fast. A great deal of economic research has been done in this sector, but there is very little research that deals with quality from the perspective of mobile application developers. This paper compares the q

  12. [Establishing IAQ Metrics and Baseline Measures.] "Indoor Air Quality Tools for Schools" Update #20

    Science.gov (United States)

    US Environmental Protection Agency, 2009

    2009-01-01

    This issue of "Indoor Air Quality Tools for Schools" Update ("IAQ TfS" Update) contains the following items: (1) News and Events; (2) IAQ Profile: Establishing Your Baseline for Long-Term Success (Feature Article); (3) Insight into Excellence: Belleville Township High School District #201, 2009 Leadership Award Winner; and (4) Have Your Questions…

  13. A metrics-based comparison of secondary user quality between iOS and Android

    NARCIS (Netherlands)

    T. Amman

    2014-01-01

    Native mobile applications are gaining popularity in the commercial market. There is no other economic sector that grows as fast. A great deal of economic research has been done in this sector, but there is very little research that deals with quality from the perspective of mobile application developers. This paper

  14. Determine metrics and set targets for soil quality on agriculture residue and energy crop pathways

    Energy Technology Data Exchange (ETDEWEB)

    Ian Bonner; David Muth

    2013-09-01

    There are three objectives for this project: 1) support OBP in meeting MYPP stated performance goals for the Sustainability Platform; 2) develop integrated feedstock production system designs that increase total productivity of the land, decrease delivered feedstock cost to the conversion facilities, and increase environmental performance of the production system; and 3) deliver to the bioenergy community robust datasets and flexible analysis tools for establishing sustainable and viable use of agricultural residues and dedicated energy crops. The key project outcome to date has been the development and deployment of a sustainable agricultural residue removal decision support framework. The modeling framework has been used to produce a revised national assessment of sustainable residue removal potential. The national assessment datasets are being used to update national resource assessment supply curves using POLYSYS. The residue removal modeling framework has also been enhanced to support high-fidelity sub-field scale sustainable removal analyses. The framework has been deployed through a web application and a mobile application. The mobile application is being used extensively in the field with industry, research, and USDA NRCS partners to support and validate sustainable residue removal decisions. The results detailed in this report set targets for increasing soil sustainability by focusing on primary soil quality indicators (total organic carbon and erosion) in two agricultural residue management pathways and a dedicated energy crop pathway. The two residue pathway targets were set to 1) increase residue removal by 50% while maintaining soil quality, and 2) increase soil quality by 5% as measured by Soil Management Assessment Framework indicators. The energy crop pathway target was set to increase soil quality by 10% using these same indicators. To demonstrate the feasibility and impact of each of these targets, seven case studies spanning the US are presented.

  15. Towards Reliable Stereoscopic 3D Quality Evaluation: Subjective Assessment and Objective Metrics

    OpenAIRE

    Xing, Liyuan

    2013-01-01

    Stereoscopic three-dimensional (3D) services have recently become more popular amid the promise of providing an immersive quality of experience (QoE) to end-users with the help of binocular depth. However, various artifacts arising in the stereoscopic 3D processing chain might cause discomfort and severely degrade the QoE. Unfortunately, although the causes and nature of these artifacts are already well understood, it is impossible to eliminate them under the limitations of current stereoscopic...

  16. Mining and Utilizing Dataset Relevancy from Oceanographic Dataset (MUDROD) Metadata, Usage Metrics, and User Feedback to Improve Data Discovery and Access

    Science.gov (United States)

    Jiang, Y.

    2015-12-01

    Oceanographic resource discovery is a critical step for developing ocean science applications. With the increasing number of resources available online, many Spatial Data Infrastructure (SDI) components (e.g. catalogues and portals) have been developed to help manage and discover oceanographic resources. However, efficient and accurate resource discovery is still a big challenge because of the lack of data relevancy information. In this article, we propose a search engine framework for mining and utilizing dataset relevancy from oceanographic dataset metadata, usage metrics, and user feedback. The objective is to improve the discovery accuracy of oceanographic data and reduce the time for scientists to discover, download, and reformat data for their projects. Experiments and a search example show that the proposed engine helps both scientists and general users find more accurate results, with enhanced performance and user experience, through a user-friendly interface.

  17. Challenges, Solutions, and Quality Metrics of Personal Genome Assembly in Advancing Precision Medicine.

    Science.gov (United States)

    Xiao, Wenming; Wu, Leihong; Yavas, Gokhan; Simonyan, Vahan; Ning, Baitang; Hong, Huixiao

    2016-04-22

    Even though each of us shares more than 99% of the DNA sequences in our genome, there are millions of sequence codes or structures in small regions that differ between individuals, giving us different characteristics of appearance or responsiveness to medical treatments. Currently, genetic variants in diseased tissues, such as tumors, are uncovered by exploring the differences between the reference genome and the sequences detected in the diseased tissue. However, the public reference genome was derived from the DNA of multiple individuals. As a result, the reference genome is incomplete and may misrepresent the sequence variants of the general population. The more reliable solution is to compare sequences of diseased tissue with the subject's own genome sequence derived from tissue in a normal state. As the price of sequencing the human genome has dropped dramatically to around $1000, documenting the personal genome for every individual becomes a realistic prospect. However, de novo assembly of individual genomes at an affordable cost is still challenging; thus, to date, only a few human genomes have been fully assembled. In this review, we introduce the history of human genome sequencing and the evolution of sequencing platforms, from Sanger sequencing to emerging "third generation sequencing" technologies. We present the currently available de novo assembly and post-assembly software packages for human genome assembly and their requirements for computational infrastructure. We recommend that a combined hybrid assembly with long and short reads would be a promising way to generate good-quality human genome assemblies, and we specify parameters for the quality assessment of assembly outcomes. We provide a perspective on the benefit of using personal genomes as references and suggestions for obtaining a quality personal genome. Finally, we discuss the usage of the personal genome in aiding vaccine design and development, monitoring host immune-response, tailoring

  18. Informing the judgments of fingerprint analysts using quality metric and statistical assessment tools.

    Science.gov (United States)

    Langenburg, Glenn; Champod, Christophe; Genessay, Thibault

    2012-06-10

    The aim of this research was to evaluate how fingerprint analysts would incorporate information from newly developed tools into their decision making processes. Specifically, we assessed effects using the following: (1) a quality tool to aid in the assessment of the clarity of the friction ridge details, (2) a statistical tool to provide likelihood ratios representing the strength of the corresponding features between compared fingerprints, and (3) consensus information from a group of trained fingerprint experts. The measured variables for the effect on examiner performance were the accuracy and reproducibility of the conclusions against the ground truth (including the impact on error rates) and the analyst accuracy and variation for feature selection and comparison. The results showed that participants using the consensus information from other fingerprint experts demonstrated more consistency and accuracy in minutiae selection. They also demonstrated higher accuracy, sensitivity, and specificity in the decisions reported. The quality tool also affected minutiae selection (which, in turn, had limited influence on the reported decisions); the statistical tool did not appear to influence the reported decisions.
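
    The likelihood ratios supplied by such a statistical tool follow the standard forensic formulation; as a reminder, the general definition (not the specific model used in this study) is:

        \[ LR = \frac{P(E \mid H_{ss})}{P(E \mid H_{ds})} \]

    where E is the observed correspondence of features and \(H_{ss}\), \(H_{ds}\) are the propositions that the prints come from the same or different sources; values above 1 support the same-source proposition.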

  19. Mass Customization Measurements Metrics

    DEFF Research Database (Denmark)

    Nielsen, Kjeld; Brunø, Thomas Ditlev; Jørgensen, Kaj Asbjørn

    2014-01-01

    A recent survey has indicated that 17% of companies have ceased mass customizing less than 1 year after initiating the effort. This paper presents measurements for a company's mass customization performance, utilizing metrics within the three fundamental capabilities: robust process design, choice navigation, and solution space development. A mass customizer assessing performance with these metrics can identify the areas in which improvement would increase competitiveness the most, enabling a more efficient transition to mass customization.

  20. The Sternberg Task as a Workload Metric in Flight Handling Qualities Research

    Science.gov (United States)

    Hemingway, J. C.

    1984-01-01

    The objective of this research was to determine whether the Sternberg item-recognition task, employed as a secondary-task measure of spare mental capacity for flight handling qualities (FHQ) simulation research, could help to differentiate between different flight-control conditions. FHQ evaluations were conducted on the Vertical Motion Simulator at Ames Research Center to investigate different primary flight-control configurations and selected stability and control augmentation levels for helicopters engaged in low-level flight regimes. The Sternberg task was superimposed upon the primary flight-control task in a balanced experimental design. The results of parametric statistical analysis of Sternberg secondary-task data failed to support the continued use of this task as a measure of pilot workload. In addition to the secondary task, subjects provided Cooper-Harper pilot ratings (CHPR) and responded to a workload questionnaire. The CHPR data also failed to provide reliable statistical discrimination between FHQ treatment conditions; some insight into the behavior of the secondary task was gained from the workload questionnaire data.

  1. Assessments of habitat preferences and quality depend on spatial scale and metrics of fitness

    Science.gov (United States)

    Chalfoun, A.D.; Martin, T.E.

    2007-01-01

    1. Identifying the habitat features that influence habitat selection and enhance fitness is critical for effective management. Ecological theory predicts that habitat choices should be adaptive, such that fitness is enhanced in preferred habitats. However, studies often report mismatches between habitat preferences and fitness consequences across a wide variety of taxa based on a single spatial scale and/or a single fitness component. 2. We examined whether habitat preferences of a declining shrub steppe songbird, the Brewer's sparrow Spizella breweri, were adaptive when multiple reproductive fitness components and spatial scales (landscape, territory and nest patch) were considered. 3. We found that birds settled earlier and in higher densities, together suggesting preference, in landscapes with greater shrub cover and height. Yet nest success was not higher in these landscapes; nest success was primarily determined by nest predation rates. Thus landscape preferences did not match nest predation risk. Instead, nestling mass and the number of nesting attempts per pair increased in preferred landscapes, raising the possibility that landscapes were chosen on the basis of food availability rather than safe nest sites. 4. At smaller spatial scales (territory and nest patch), birds preferred different habitat features (i.e. density of potential nest shrubs) that reduced nest predation risk and allowed greater season-long reproductive success. 5. Synthesis and applications. Habitat preferences reflect the integration of multiple environmental factors across multiple spatial scales, and individuals may have more than one option for optimizing fitness via habitat selection strategies. Assessments of habitat quality for management prescriptions should ideally include analysis of diverse fitness consequences across multiple ecologically relevant spatial scales. © 2007 The Authors.

  2. Impact of artefact removal on ChIP quality metrics in ChIP-seq and ChIP-exo data.

    Directory of Open Access Journals (Sweden)

    Thomas Samuel Carroll

    2014-04-01

    With the advent of ChIP-seq multiplexing technologies and the subsequent increase in ChIP-seq throughput, the development of working standards for the quality assessment of ChIP-seq studies has received significant attention. The ENCODE consortium’s large scale analysis of transcription factor binding and epigenetic marks as well as concordant work on ChIP-seq by other laboratories has established a new generation of ChIP-seq quality control measures. The use of these metrics alongside common processing steps has however not been evaluated. In this study, we investigate the effects of blacklisting and removal of duplicated reads on established metrics of ChIP-seq quality and show that the interpretation of these metrics is highly dependent on the ChIP-seq preprocessing steps applied. Further to this we perform the first investigation of the use of these metrics for ChIP-exo data and make recommendations for the adaptation of the NSC statistic to allow for the assessment of ChIP-exo efficiency.

  3. Impact of artifact removal on ChIP quality metrics in ChIP-seq and ChIP-exo data.

    Science.gov (United States)

    Carroll, Thomas S; Liang, Ziwei; Salama, Rafik; Stark, Rory; de Santiago, Ines

    2014-01-01

    With the advent of ChIP-seq multiplexing technologies and the subsequent increase in ChIP-seq throughput, the development of working standards for the quality assessment of ChIP-seq studies has received significant attention. The ENCODE consortium's large scale analysis of transcription factor binding and epigenetic marks as well as concordant work on ChIP-seq by other laboratories has established a new generation of ChIP-seq quality control measures. The use of these metrics alongside common processing steps has however not been evaluated. In this study, we investigate the effects of blacklisting and removal of duplicated reads on established metrics of ChIP-seq quality and show that the interpretation of these metrics is highly dependent on the ChIP-seq preprocessing steps applied. Further to this we perform the first investigation of the use of these metrics for ChIP-exo data and make recommendations for the adaptation of the NSC statistic to allow for the assessment of ChIP-exo efficiency.
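
    The NSC statistic mentioned here is one of the ENCODE cross-correlation metrics; as a reminder of its usual definition (standard in the ChIP-seq QC literature, not quoted from this paper), it is the ratio of the strand cross-correlation at the dominant fragment length to the background minimum:

        \[ NSC = \frac{cc(\ell_{frag})}{\min_{\ell} cc(\ell)} \]

    Values near 1 indicate little enrichment, which is why adapting NSC matters for ChIP-exo, where the read geometry differs from that of ChIP-seq.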

  4. QoS Metrics for Cloud Computing Services Evaluation

    Directory of Open Access Journals (Sweden)

    Amid Khatibi Bardsiri

    2014-11-01

    Cloud systems are transforming the information technology industry by enabling companies to provide access to their infrastructure and software products on a subscription basis. Because of the vast range of Cloud solutions on offer, it has become difficult for customers to decide which provider to use and on what basis to make that choice. In particular, employing suitable metrics is vital in assessing practices. Nevertheless, to the best of our knowledge, there is no systematic description of metrics for evaluating Cloud products and services. QoS (Quality of Service) metrics play an important role in selecting Cloud providers and in optimizing resource utilization efficiency. While many reports are devoted to exploiting QoS metrics, relatively few tools support the observation and investigation of QoS metrics of Cloud programs. To guarantee that a specialized product is published, describing metrics for assessing QoS is an essential necessity. This article therefore suggests various QoS metrics for service vendors, with particular attention to the consumer's concerns, and provides a list of metrics that may support future study and assessment in the field of Cloud service evaluation.

  5. Metrical Quantization

    CERN Document Server

    Klauder, J R

    1998-01-01

    Canonical quantization may be approached from several different starting points. The usual approaches involve promotion of c-numbers to q-numbers, or path integral constructs, each of which generally succeeds only in Cartesian coordinates. All quantization schemes that lead to Hilbert space vectors and Weyl operators---even those that eschew Cartesian coordinates---implicitly contain a metric on a flat phase space. This feature is demonstrated by studying the classical and quantum ``aggregations'', namely, the set of all facts and properties resident in all classical and quantum theories, respectively. Metrical quantization is an approach that elevates the flat phase space metric inherent in any canonical quantization to the level of a postulate. Far from being an unwanted structure, the flat phase space metric carries essential physical information. It is shown how the metric, when employed within a continuous-time regularization scheme, gives rise to an unambiguous quantization procedure that automatically ...

  6. Chest CT using spectral filtration: radiation dose, image quality, and spectrum of clinical utility

    Energy Technology Data Exchange (ETDEWEB)

    Braun, Franziska M.; Johnson, Thorsten R.C.; Sommer, Wieland H.; Thierfelder, Kolja M.; Meinel, Felix G. [University Hospital Munich, Institute for Clinical Radiology, Munich (Germany)

    2015-06-01

    To determine the radiation dose, image quality, and clinical utility of non-enhanced chest CT with spectral filtration. We retrospectively analysed 25 non-contrast chest CT examinations acquired with spectral filtration (tin-filtered Sn100 kVp spectrum) compared to 25 examinations acquired without spectral filtration (120 kV). Radiation metrics were compared. Image noise was measured. The contrast-to-noise ratio (CNR) and a figure of merit (FOM) were calculated. Diagnostic confidence for the assessment of various thoracic pathologies was rated by two independent readers. Effective chest diameters were comparable between groups (P = 0.613). In spectral filtration CT, the median CTDIvol, DLP, and size-specific dose estimate (SSDE) were reduced (0.46 vs. 4.3 mGy, 16 vs. 141 mGy*cm, and 0.65 vs. 5.9 mGy, all P < 0.001). Spectral filtration CT had higher image noise (21.3 vs. 13.2 HU, P < 0.001) and lower CNR (47.2 vs. 75.3, P < 0.001), but was more dose-efficient (FOM 10,659 vs. 2,231/mSv, P < 0.001). Diagnostic confidence for parenchymal lung disease and osseous pathologies was lower with spectral filtration CT, but no significant difference was found for pleural pathologies, pulmonary nodules, or pneumonia. Non-contrast chest CT using spectral filtration appears to be sufficient for the assessment of a considerable spectrum of thoracic pathologies, while providing superior dose efficiency and allowing for substantial radiation dose reduction. (orig.)
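
    The figure of merit reported here in units of 1/mSv is conventionally the squared CNR normalised by effective dose; a minimal statement of the usual definition (an assumption about this paper's exact formula, which the abstract does not spell out):

        \[ FOM = \frac{CNR^2}{E_{eff}} \]

    Normalising CNR² by dose explains how a protocol with lower raw CNR can still be the more dose-efficient one, as reported for the tin-filtered spectrum.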

  7. Improvement in Total Joint Replacement Quality Metrics: Year One Versus Year Three of the Bundled Payments for Care Improvement Initiative.

    Science.gov (United States)

    Dundon, John M; Bosco, Joseph; Slover, James; Yu, Stephen; Sayeed, Yousuf; Iorio, Richard

    2016-12-07

    In January 2013, a large, tertiary, urban academic medical center began participation in the Bundled Payments for Care Improvement (BPCI) initiative for total joint arthroplasty, a program implemented by the Centers for Medicare & Medicaid Services (CMS) in 2011. Medicare Severity-Diagnosis Related Groups (MS-DRGs) 469 and 470 were included. We participated in BPCI Model 2, by which an episode of care includes the inpatient and all post-acute care costs through 90 days following discharge. The goal for this initiative is to improve patient care and quality through a patient-centered approach with increased care coordination supported through payment innovation. Length of stay (LOS), readmissions, discharge disposition, and cost per episode of care were analyzed for year 3 compared with year 1 of the initiative. Multiple programs were implemented after the first year to improve performance metrics: a surgeon-directed preoperative risk-factor optimization program, enhanced care coordination and home services, a change in venous thromboembolic disease (VTED) prophylaxis to a risk-stratified protocol, infection-prevention measures, a continued emphasis on discharge to home rather than to an inpatient facility, and a quality-dependent gain-sharing program among surgeons. There were 721 Medicare primary total joint arthroplasty patients in year 1 and 785 in year 3; their data were compared. The average hospital LOS decreased from 3.58 to 2.96 days. The rate of discharge to an inpatient facility decreased from 44% to 28%. The 30-day all-cause readmission rate decreased from 7% to 5%; the 60-day all-cause readmission rate decreased from 11% to 6%; and the 90-day all-cause readmission rate decreased from 13% to 8%. The average 90-day cost per episode decreased by 20%. Mid-term results from the implementation of Medicare BPCI Model 2 for primary total joint arthroplasty demonstrated decreased LOS, decreased discharges to inpatient facilities, decreased readmissions, and

  8. Alternative metrics

    Science.gov (United States)

    2012-11-01

    As the old 'publish or perish' adage is brought into question, additional research-impact indices, known as altmetrics, are offering new evaluation alternatives. But such metrics may need to adjust to the evolution of science publishing.

  9. DLA Energy Biofuel Feedstock Metrics Study

    Science.gov (United States)

    2012-12-11

    moderately/highly invasive ... Metric 2: Genetically modified organism (GMO) hazard, Yes/No and Hazard Category ... Metric 3: Species hybridization ... Stage #4 - biofuel distribution; Stage #5 - biofuel use ... Metric 1: State invasiveness ranking: Yes / Minimal / Minimal / No / No ... Metric 2: GMO hazard: Yes ... may utilize GMO microbial or microalgae species across the applicable biofuel life cycles (stages 1–3). The following consequence Metrics 4–6 then

  10. The Quality of Life Scale (QOLS): Reliability, Validity, and Utilization

    Directory of Open Access Journals (Sweden)

    Anderson Kathryn L

    2003-10-01

    Full Text Available Abstract The Quality of Life Scale (QOLS), created originally by American psychologist John Flanagan in the 1970s, has been adapted for use in chronic illness groups. This paper reviews the development and psychometric testing of the QOLS. A descriptive review of the published literature was undertaken and findings summarized in the frequently asked questions format. Reliability, content and construct validity testing have been performed on the QOLS and a number of translations have been made. The QOLS has low to moderate correlations with physical health status and disease measures. However, content validity analysis indicates that the instrument measures domains that diverse patient groups with chronic illness define as quality of life. The QOLS is a valid instrument for measuring quality of life across patient groups and cultures and is conceptually distinct from health status or other causal indicators of quality of life.

  11. Using the Consumer Experience with Pharmacy Services Survey as a quality metric for ambulatory care pharmacies: older adults' perspectives.

    Science.gov (United States)

    Shiyanbola, Olayinka O; Mott, David A; Croes, Kenneth D

    2016-05-26

    To describe older adults' perceptions of evaluating and comparing pharmacies based on the Consumer Experience with Pharmacy Services Survey (CEPSS), describe older adults' perceived importance of the CEPSS and its specific domains, and explore older adults' perceptions of the influence of specific CEPSS domains in choosing/switching pharmacies. Focus group methodology was combined with the administration of a questionnaire. The focus groups explored participants' perceived importance of the CEPSS and their perception of using the CEPSS to choose and/or switch pharmacies. Then, using the questionnaire, participants rated their perceived importance of each CEPSS domain in evaluating a pharmacy, and the likelihood of using the CEPSS to switch pharmacies if their current pharmacy had low ratings. Descriptive and thematic analyses were done. Six semistructured focus groups were conducted in a private meeting room in a Midwestern state in the USA with 60 English-speaking adults who were at least 65 years old and had filled a prescription at a retail pharmacy within 90 days. During the focus groups, the older adults perceived the CEPSS to have advantages and disadvantages in evaluating and comparing pharmacies. Older adults thought the CEPSS was important in choosing the best pharmacies and avoiding the worst pharmacies. The perceived influence of the CEPSS in switching pharmacies varied depending on the older adult's personal experience or trust of other consumers' experience. Questionnaire results showed that participants perceived health/medication-focused communication as very important or extremely important (n=47, 82.5%) in evaluating pharmacies and would be extremely likely (n=21, 36.8%) to switch pharmacies if their pharmacy had low ratings in this domain. The older adults in this study are interested in using patient experiences as a quality metric for avoiding the worst pharmacies. Pharmacists' communication about health and medicines is perceived important and likely

  12. Quality of renewable energy utilization in transport in Sweden

    Energy Technology Data Exchange (ETDEWEB)

    Lampinen, Ari

    2015-04-01

    Renewable energy utilization in transportation (RES-T) is a long way behind its utilization in the power (RES-E) and heat (RES-H) sectors. International and national environmental policies have recently placed considerable emphasis on this problem, and information is therefore sought on how to implement solutions both politically and technologically. As Sweden is a global leader in this area, it can provide valuable examples. In 2012 Sweden became the first country to reach the binding requirement of the European Union for at least a 10 % share of renewable energy in transport energy consumption. But qualitative development has been even stronger than quantitative. Among the success stories behind this qualitative progress, the most noteworthy are those created by innovative municipal policies. By 2030 Sweden aims to achieve a fossil-fuel-independent road transport system, and by 2050 a completely carbon-neutral transport system in all modes of transport.

  13. Change in visual acuity is well correlated with change in image-quality metrics for both normal and keratoconic wavefront errors.

    Science.gov (United States)

    Ravikumar, Ayeswarya; Marsack, Jason D; Bedell, Harold E; Shi, Yue; Applegate, Raymond A

    2013-11-26

    We determined the degree to which change in visual acuity (VA) correlates with change in optical quality using image-quality (IQ) metrics for both normal and keratoconic wavefront errors (WFEs). VA was recorded for five normal subjects reading simulated, logMAR acuity charts generated from the scaled WFEs of 15 normal and seven keratoconic eyes. We examined the correlations over a large range of acuity loss (up to 11 lines) and a smaller, more clinically relevant range (up to four lines). Nine IQ metrics were well correlated for both ranges. Over the smaller range of primary interest, eight were also accurate and precise in estimating the variations in logMAR acuity in both normal and keratoconic WFEs. The accuracy for these eight best metrics in estimating the mean change in logMAR acuity ranged from ±0.0065 to ±0.017 logMAR (all less than one letter), and the precision ranged from ±0.10 to ±0.14 logMAR (all less than seven letters).
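
    The correlation analysis described can be sketched as follows; the data points are hypothetical, and the study's regression details are not reproduced here.

        # Minimal sketch: Pearson correlation between change in an image-quality
        # metric and change in logMAR acuity. All data values are hypothetical.
        import math

        def pearson_r(xs, ys):
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
            sy = math.sqrt(sum((y - my) ** 2 for y in ys))
            return cov / (sx * sy)

        delta_metric = [0.05, 0.12, 0.30, 0.41, 0.55]   # hypothetical change in IQ metric
        delta_logmar = [0.02, 0.10, 0.22, 0.35, 0.48]   # hypothetical change in acuity
        print(round(pearson_r(delta_metric, delta_logmar), 3))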

  14. Quality Markers in Cardiology. Main Markers to Measure Quality of Results (Outcomes) and Quality Measures Related to Better Results in Clinical Practice (Performance Metrics). INCARDIO (Indicadores de Calidad en Unidades Asistenciales del Área del Corazón): A SEC/SECTCV Consensus Position Paper.

    Science.gov (United States)

    López-Sendón, José; González-Juanatey, José Ramón; Pinto, Fausto; Cuenca Castillo, José; Badimón, Lina; Dalmau, Regina; González Torrecilla, Esteban; López-Mínguez, José Ramón; Maceira, Alicia M; Pascual-Figal, Domingo; Pomar Moya-Prats, José Luis; Sionis, Alessandro; Zamorano, José Luis

    2015-11-01

    Cardiology practice requires complex organization that impacts overall outcomes and may differ substantially among hospitals and communities. The aim of this consensus document is to define quality markers in cardiology, including markers to measure the quality of results (outcomes metrics) and quality measures related to better results in clinical practice (performance metrics). The document is mainly intended for the Spanish health care system and may serve as a basis for similar documents in other countries.

  15. Metric learning

    CERN Document Server

    Bellet, Aurelien; Sebban, Marc

    2015-01-01

    Similarity between objects plays an important role in both human cognitive processes and artificial systems for recognition and categorization. How to appropriately measure such similarities for a given task is crucial to the performance of many machine learning, pattern recognition and data mining methods. This book is devoted to metric learning, a set of techniques to automatically learn similarity and distance functions from data that has attracted a lot of interest in machine learning and related fields in the past ten years. In this book, we provide a thorough review of the metric learnin

  16. Quantity and Quality Equally Important in Utilizing Foreign Investment

    Institute of Scientific and Technical Information of China (English)

    裴长洪; 樊瑛

    2008-01-01

    A new round of FDI flow and new wave of international industry transfer has offered new opportunities for China to participate in international economic competition and cooperation. Utilization of foreign investment has crucial implications for enhancing China’s position in the international division of labor,cultivating new competitive advantages and strengthening overall competitiveness. China should grasp the opportunity and improve the investment environment in terms of legislation,system,policy,administration and service and ensure that foreign investment plays an important role in China’s economic development and promotion of independent innovation and industry upgrade.

  17. MO-D-213-06: Quantitative Image Quality Metrics Are for Physicists, Not Radiologists: How to Communicate to Your Radiologists Using Their Language

    Energy Technology Data Exchange (ETDEWEB)

    Szczykutowicz, T; Rubert, N; Ranallo, F [University Wisconsin-Madison, Madison, WI (United States)

    2015-06-15

    Purpose: A framework for explaining differences in image quality to non-technical audiences in medical imaging is needed. Currently, this task is something that is learned "on the job." The lack of a formal methodology for communicating optimal acquisition parameters into the clinic effectively blunts the clinical impact of many technological advances. As a community, medical physicists need to be held responsible for not only advancing image science, but also for ensuring its proper use in the clinic. This work outlines a framework that bridges the gap between the results from quantitative image quality metrics like detectability, MTF, and NPS and their effect on specific anatomical structures present in diagnostic imaging tasks. Methods: Specific structures of clinical importance were identified for a body, an extremity, a chest, and a temporal bone protocol. Using these structures, quantitative metrics were used to identify the parameter space that should yield optimal image quality constrained within the confines of clinical logistics and dose considerations. The reading room workflow for presenting the proposed changes for imaging each of these structures is presented. The workflow consists of displaying images for physician review consisting of different combinations of acquisition parameters guided by quantitative metrics. Examples of using detectability index, MTF, NPS, noise and noise non-uniformity are provided. During review, the physician was forced to judge the image quality solely on those features they need for diagnosis, not on the overall "look" of the image. Results: We found that in many cases, use of this framework settled disagreements between physicians. Once forced to judge images on the ability to detect specific structures, inter-reader agreement was obtained. Conclusion: This framework will provide consulting, research/industrial, or in-house physicists with clinically relevant imaging tasks to guide reading room image review. This framework avoids use
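
    Two of the simpler quantitative metrics named above, image noise and noise non-uniformity, can be sketched as follows. The ROI-based definitions are common conventions assumed here, not necessarily the authors' exact implementation.

        # Minimal sketch: noise as the SD inside uniform regions of interest (ROIs),
        # and noise non-uniformity as the relative spread of noise across ROIs.
        # The synthetic phantom and ROI placement are assumptions for illustration.
        import numpy as np

        def roi_noise(image, boxes):
            """Return the noise (SD) measured inside each (row, col, size) square ROI."""
            return [float(np.std(image[r:r + s, c:c + s])) for r, c, s in boxes]

        def noise_nonuniformity(noise_values):
            """Relative spread of per-ROI noise: (max - min) / mean."""
            return (max(noise_values) - min(noise_values)) / (sum(noise_values) / len(noise_values))

        rng = np.random.default_rng(0)
        phantom = rng.normal(0.0, 13.2, size=(256, 256))   # synthetic uniform phantom
        rois = [(10, 10, 32), (10, 214, 32), (214, 10, 32), (214, 214, 32), (112, 112, 32)]
        noise = roi_noise(phantom, rois)
        print([round(n, 2) for n in noise], round(noise_nonuniformity(noise), 4))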

  18. Climate Change Challenges of Managing Quality of Drinking Water: Survey Results from Utilities in California

    Science.gov (United States)

    Ekstrom, J.; Bedsworth, L. W.

    2015-12-01

    Scientists have established that climate change threatens sources of drinking water through many different pathways, both in terms of quantity and quality. Recognizing that water utilities will face the brunt of these impacts, this study seeks to better understand the disconnect between the projections produced and the needs of utilities on the ground. As part of the first stage of the three-year study, this presentation reports results of a statewide survey evaluating how far along water utilities in California are in preparing for the projected climate change impacts on water quality, the range in respondents' perspectives on (and concerns about) the effects of climate change on water quality, and how the state's four-year drought is already presenting treatment challenges. On-going case studies are investigating the needs and capacity of utilities to prepare for and adapt to the projected water quality impacts from increasing extreme events, and how or whether climate scientists can help meet these needs.

  19. Toward metrics and model validation in web-site QEM

    OpenAIRE

    Olsina Santos, Luis Antonio; Pons, Claudia; Rossi, Gustavo Héctor

    2000-01-01

    In this work, a conceptual framework and the associated strategies for metrics and model validation are analyzed regarding website measurement and evaluation. Particularly, we have conducted three case studies in different Web domains in order to evaluate and compare the quality of sites. For such an end the quantitative, model-based methodology, so-called Web-site QEM (Quality Evaluation Methodology), was utilized. In the assessment process of sites, definition of attributes and measurements...

  20. Rainwater harvesting, quality assessment and utilization in Kefalonia Island, Greece.

    Science.gov (United States)

    Sazakli, E; Alexopoulos, A; Leotsinidis, M

    2007-05-01

    The quality of harvested rainwater which is used for domestic and drinking purposes in the northern area of Kefalonia Island in SW Greece and the factors affecting it were assessed through 3-year surveillance. In 12 seasonal samplings, 156 rainwater and 144 ground- or mixed water samples were collected from ferroconcrete storage tanks (300-1000 m3 capacity), which are adjacent to cement-paved catchment areas (600-3000 m2). Common anions and major cations as well as the metals Fe, Mn, Cd, Pb, Cu, Cr, Ni and Zn were tested. The presence of three major groups of organic compounds, polycyclic aromatic hydrocarbons (PAHs), organochlorine pesticides (OCPs) and volatile organic compounds (VOCs), was screened by common analytical techniques. All of the rainwater samples were within the guidelines for chemical parameters established by the 98/93/EU directive. As far as microbiological quality is concerned, total coliforms, Escherichia coli and enterococci were detected in 80.3%, 40.9% and 28.8% of the rainwater samples, respectively, although they were found in low concentrations. Chemical and microbiological parameters showed seasonal fluctuations. Principal component analysis revealed that microbiological parameters were affected mainly by the cleanness level of catchment areas, while chemical parameters were influenced by sea proximity and human activities. Disinfection should be applied in the tanker trucks which distribute the water to the consumers, and not in the big storage tanks, in order to avoid by-product formation. Due to the lack of fluoride in rainwater samples, the consumers must become aware of the fact that supplementation of this element is needed.

  1. Evaluating the Good Ontology Design Guideline (GoodOD) with the ontology quality requirements and evaluation method and metrics (OQuaRE).

    Science.gov (United States)

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold-standard and competency-question-based evaluation methods, respectively. In recent decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained in the previous studies based on the same data. Our results show a significant effect of the GoodOD training on the developed ontologies by topic: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. The GoodOD guideline had a significant effect on the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies.

  2. Quality and utilization of food co-products and residues

    Science.gov (United States)

    Cooke, P.; Bao, G.; Broderick, C.; Fishman, M.; Liu, L.; Onwulata, C.

    2010-06-01

    Some agricultural industries generate large amounts of low value co-products/residues, including citrus peel, sugar beet pulp and whey protein from the production of orange juice, sugar and cheese commodities, respectively. National Program #306 of the USDA Agricultural Research Service aims to characterize and enhance quality and develop new processes and uses for value-added foods and bio-based products. In parallel projects, we applied scanning microscopies to examine the molecular organization of citrus pectin gels, covalent crosslinking to reduce debonding in sugar beet pulp-PLA composites and functional modification of whey protein through extrusion in order to evaluate new methods of processing and formulating new products. Also, qualitative attributes of fresh produce that could potentially guide germ line development and crop management were explored through fluorescence imaging: synthesis and accumulation of oleoresin in habanero peppers suggest a complicated mechanism of secretion that differs from the classical scheme. Integrated imaging appears to offer significant structural insights to help understand practical properties and features of important food co-products/residues.

  3. Quadrupolar metrics

    CERN Document Server

    Quevedo, Hernando

    2016-01-01

    We review the problem of describing the gravitational field of compact stars in general relativity. We focus on the deviations from spherical symmetry which are expected to be due to rotation and to the natural deformations of mass distributions. We assume that the relativistic quadrupole moment takes into account these deviations, and consider the class of axisymmetric static and stationary quadrupolar metrics which satisfy Einstein's equations in empty space and in the presence of matter represented by a perfect fluid. We formulate the physical conditions that must be satisfied for a particular spacetime metric to describe the gravitational field of compact stars. We present a brief review of the main static and axisymmetric exact solutions of Einstein's vacuum equations, satisfying all the physical conditions. We discuss how to derive particular stationary and axisymmetric solutions with quadrupolar properties by using the solution generating techniques which correspond either to Lie symmetries or Bäcklund...

  4. Quality and Diagnostic Utility of Mydriatic Smartphone Photography: The Smartphone Ophthalmoscopy Reliability Trial.

    Science.gov (United States)

    Adam, Murtaza K; Brady, Christopher J; Flowers, Alexis M; Juhn, Alexander T; Hsu, Jason; Garg, Sunir J; Murchison, Ann P; Spirn, Marc J

    2015-06-01

    To establish the quality and diagnostic utility of mydriatic smartphone ophthalmoscopy (SO) fundus images compared to fundus camera (FC) images. In this prospective, cross-sectional study, 94 consecutive patients in an urban eye emergency department underwent SO and FC fundus imaging via one of three study arms: medical student 1 (MS1), medical student 2 (MS2), and ophthalmology resident (OR). Images of 188 eyes were graded for overall quality by two masked reviewers, and observed critical fundus findings were compared to dilated fundus examination documentation. SO images were of higher quality in the OR arm than in the MS1 and MS2 arms, with no significant difference in FC image quality between photographers (all P > .328). In the OR arm, SO images detected 74.3% of critical fundus findings, whereas FC images detected 77.1%. SO produces fundus images approaching the quality and diagnostic utility of traditional FC photographs. Copyright 2015, SLACK Incorporated.

  5. Research and application of static metrics for code quality

    Institute of Scientific and Technical Information of China (English)

    黄沛杰; 杨铭铨

    2011-01-01

    Code quality metrics is an important branch of software quality analysis. Under the trend of networkization and servicization of software, static analysis methods attract increasing attention due to their advantages of low cost, ease of implementation, and independence from the specific program running environment. This paper deals with quality metrics for Java code. Based on the integration of open-source static analysis tools through Apache Ant, a comprehensive evaluation method for Java code quality is proposed. The approach supports comprehensive testing of code size, regularity, maintainability, expandability, and potential risks, and provides a practical code quality evaluation method for project developers, managers and users.

  6. Obesity utilization and health-related quality of life in Medicare enrollees.

    Science.gov (United States)

    Malinoff, Rochelle L; Elliott, Marc N; Giordano, Laura A; Grace, Susan C; Burroughs, James N

    2013-01-01

    The obese, with disproportionate chronic disease incidence, consume a large share of health care resources and drive up per capita Medicare spending. This study examined the prevalence of obesity and its association with health status, health-related quality of life (HRQOL), function, and outpatient utilization among Medicare Advantage seniors. Results indicate that obese beneficiaries, much more than overweight beneficiaries, have poorer health, function, and HRQOL than normal-weight beneficiaries and have substantially higher outpatient utilization. While weight loss is beneficial to both the overweight and obese, the markedly worse health status and high utilization of obese beneficiaries may merit particular attention.

  7. Use of plan quality degradation to evaluate tradeoffs in delivery efficiency and clinical plan metrics arising from IMRT optimizer and sequencer compromises

    Science.gov (United States)

    Wilkie, Joel R.; Matuszak, Martha M.; Feng, Mary; Moran, Jean M.; Fraass, Benedick A.

    2013-01-01

    Purpose: Plan degradation resulting from compromises made to enhance delivery efficiency is an important consideration for intensity modulated radiation therapy (IMRT) treatment plans. IMRT optimization and/or multileaf collimator (MLC) sequencing schemes can be modified to generate more efficient treatment delivery, but the effect those modifications have on plan quality is often difficult to quantify. In this work, the authors present a method for quantitative assessment of overall plan quality degradation due to tradeoffs between delivery efficiency and treatment plan quality, illustrated using comparisons between plans developed allowing different numbers of intensity levels in IMRT optimization and/or MLC sequencing for static segmental MLC IMRT plans. Methods: A plan quality degradation method to evaluate delivery efficiency and plan quality tradeoffs was developed and used to assess planning for 14 prostate and 12 head and neck patients treated with static IMRT. Plan quality was evaluated using a physician's predetermined “quality degradation” factors for relevant clinical plan metrics associated with the plan optimization strategy. Delivery efficiency and plan quality were assessed for a range of optimization and sequencing limitations. The “optimal” (baseline) plan for each case was derived using a clinical cost function with an unlimited number of intensity levels. These plans were sequenced with a clinical MLC leaf sequencer which uses >100 segments, assuring delivered intensities to be within 1% of the optimized intensity pattern. Each patient's optimal plan was also sequenced limiting the number of intensity levels (20, 10, and 5), and then separately optimized with these same numbers of intensity levels. Delivery time was measured for all plans, and direct evaluation of the tradeoffs between delivery time and plan degradation was performed. Results: When considering tradeoffs, the optimal number of intensity levels depends on the treatment

  8. Use of plan quality degradation to evaluate tradeoffs in delivery efficiency and clinical plan metrics arising from IMRT optimizer and sequencer compromises.

    Science.gov (United States)

    Wilkie, Joel R; Matuszak, Martha M; Feng, Mary; Moran, Jean M; Fraass, Benedick A

    2013-07-01

    Plan degradation resulting from compromises made to enhance delivery efficiency is an important consideration for intensity modulated radiation therapy (IMRT) treatment plans. IMRT optimization and/or multileaf collimator (MLC) sequencing schemes can be modified to generate more efficient treatment delivery, but the effect those modifications have on plan quality is often difficult to quantify. In this work, the authors present a method for quantitative assessment of overall plan quality degradation due to tradeoffs between delivery efficiency and treatment plan quality, illustrated using comparisons between plans developed allowing different numbers of intensity levels in IMRT optimization and/or MLC sequencing for static segmental MLC IMRT plans. A plan quality degradation method to evaluate delivery efficiency and plan quality tradeoffs was developed and used to assess planning for 14 prostate and 12 head and neck patients treated with static IMRT. Plan quality was evaluated using a physician's predetermined "quality degradation" factors for relevant clinical plan metrics associated with the plan optimization strategy. Delivery efficiency and plan quality were assessed for a range of optimization and sequencing limitations. The "optimal" (baseline) plan for each case was derived using a clinical cost function with an unlimited number of intensity levels. These plans were sequenced with a clinical MLC leaf sequencer which uses >100 segments, assuring delivered intensities to be within 1% of the optimized intensity pattern. Each patient's optimal plan was also sequenced limiting the number of intensity levels (20, 10, and 5), and then separately optimized with these same numbers of intensity levels. Delivery time was measured for all plans, and direct evaluation of the tradeoffs between delivery time and plan degradation was performed. When considering tradeoffs, the optimal number of intensity levels depends on the treatment site and on the stage in the process

  9. Contribution of landscape metrics to the assessment of scenic quality – the example of the landscape structure plan Havelland/Germany

    Directory of Open Access Journals (Sweden)

    H. Herbst

    2009-03-01

    Full Text Available The scenic quality of a landscape is a natural resource that is to be preserved according to German and international law. One important indicator for the evaluation of this value is the structural diversity of the landscape. Although Landscape Metrics (LM) represent a well-known instrument for the quantification of landscape patterns, they are hardly used in applied landscape and environmental planning. This study shows possibilities for integrating LM into a commonly used method of assessing scenic quality, using the example of a Landscape Structure Plan. First results indicate that Shannon's Diversity Index and Edge Density in particular are suitable for achieving an objective evaluation of structural diversity as an indicator of scenic quality. The addition of qualitative parameters to the objective structural analysis is discussed. Moreover, the use of landscape scenery units and raster cells as basic geometry has been compared. It shows that LM can support the evaluation of aesthetic quality in environmental planning, especially when integrated into commonly used evaluation methods.
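
    The two landscape metrics highlighted by the study can be computed directly from class areas and edge lengths; the following sketch uses standard definitions with hypothetical inputs.

        # Minimal sketch of Shannon's Diversity Index (SHDI) and Edge Density (ED).
        # Class areas and edge length below are hypothetical.
        import math

        def shannon_diversity(class_areas):
            """SHDI = -sum(p_i * ln p_i) over the area proportions of land-cover classes."""
            total = sum(class_areas)
            return -sum((a / total) * math.log(a / total) for a in class_areas if a > 0)

        def edge_density(total_edge_m, landscape_area_ha):
            """ED = total edge length (m) per hectare of landscape."""
            return total_edge_m / landscape_area_ha

        print(round(shannon_diversity([40.0, 25.0, 20.0, 15.0]), 3))   # class areas in ha
        print(edge_density(total_edge_m=12500.0, landscape_area_ha=100.0))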

  10. Measuring intensive care unit performance after sustainable growth rate reform: An example with the National Quality Forum metrics.

    Science.gov (United States)

    Nguyen, Albert P; Hyder, Joseph A; Wanta, Brendan T; Stelfox, Henry T; Schmidt, Ulrich

    2016-12-01

    Performance measurement is essential for quality improvement and is inevitable in the shift to value-based payment. The National Quality Forum is an important clearinghouse for national performance measures in health care in the United States. We reviewed the National Quality Forum library of performance measures to highlight measures that are relevant to critical care medicine, and we describe gaps and opportunities for the future of performance measurement in critical care medicine. Crafting performance measures that address core aspects of critical care will be challenging, as current outcome and performance measures have problems with validity. Future quality measures will likely focus on interdisciplinary measures across the continuum of patient care. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. 42 CFR 423.153 - Drug utilization management, quality assurance, and medication therapy management programs (MTMPs).

    Science.gov (United States)

    2010-10-01

    ... the procedures and performance of its drug utilization management program, according to guidelines...) Screening for potential drug therapy problems due to therapeutic duplication. (ii) Age/gender-related... quality assurance measures and systems, according to guidelines specified by CMS. (d) Medication therapy...

  12. Coverage and quality: A comparison of Web of Science and Scopus databases for reporting faculty nursing publication metrics.

    Science.gov (United States)

    Powell, Kimberly R; Peterson, Shenita R

    2017-03-11

    Web of Science and Scopus are the leading databases of scholarly impact. Recent studies outside the field of nursing report differences in their journal coverage and quality. This study offers a comparative analysis of the reported impact of nursing publications. Journal coverage by each database for the field of nursing was compared. Additionally, publications by 2014 nursing faculty were collected in both databases and compared for overall coverage and reported quality, as modeled by SCImago Journal Rank, peer review status, and MEDLINE inclusion. Individual author impact, modeled by the h-index, was calculated by each database for comparison. Scopus offered significantly higher journal coverage. For 2014 faculty publications, 100% of journals were found in Scopus; Web of Science offered 82%. No significant difference was found in the quality of reported journals. Author h-index was found to be higher in Scopus. When reporting faculty publications and scholarly impact, academic nursing programs may be better represented by Scopus, without compromising journal quality. Programs with strong interdisciplinary work should examine all areas of strength to ensure appropriate coverage. Copyright © 2017 Elsevier Inc. All rights reserved.
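
    The h-index underlying the author-impact comparison is simple to compute: an author has index h if h of their papers have at least h citations each. A minimal sketch with hypothetical citation counts follows; the two databases differ only in the citation counts they supply.

        # Minimal sketch of the h-index; citation counts are hypothetical.

        def h_index(citation_counts):
            counts = sorted(citation_counts, reverse=True)
            h = 0
            for rank, cites in enumerate(counts, start=1):
                if cites >= rank:
                    h = rank
                else:
                    break
            return h

        print(h_index([24, 18, 12, 9, 7, 5, 2]))   # e.g., as one database reports it
        print(h_index([20, 15, 10, 6, 4, 2]))      # e.g., as the other reports it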

  13. Usability evaluations of a wearable inertial sensing system and quality of movement metrics for stroke survivors by care professionals

    NARCIS (Netherlands)

    Klaassen, Bart; Menon, C.; van Beijnum, Bernhard J.F.; Held, J.P.; Reenalda, Jasper; van Meulen, Fokke; Veltink, Petrus H.; Hermens, Hermanus J.

    2017-01-01

    Background Inertial motion capture systems are used in many applications, such as measuring movement quality in stroke survivors. The absence of clinical-effectiveness and usability evidence for these assistive technologies in rehabilitation has delayed the transition of research into clinical

  14. Measuring Research Quality Using the Journal Impact Factor, Citations and "Ranked Journals": Blunt Instruments or Inspired Metrics?

    Science.gov (United States)

    Jarwal, Som D.; Brion, Andrew M.; King, Maxwell L.

    2009-01-01

    This paper examines whether three bibliometric indicators--the journal impact factor, citations per paper and the Excellence in Research for Australia (ERA) initiative's list of "ranked journals"--can predict the quality of individual research articles as assessed by international experts, both overall and within broad disciplinary…

  15. Utility of WHOQOL-BREF in measuring quality of life in Sickle Cell Disease

    Directory of Open Access Journals (Sweden)

    Reid Marvin E

    2009-08-01

    Full Text Available Abstract Background Sickle cell disease is the commonest genetic disorder in Jamaica and most likely exerts numerous effects on quality of life (QOL) of those afflicted with it. The WHOQOL-Bref, which is a commonly utilized generic measure of quality of life, has never previously been utilized in this population. We have sought to study its utility in this disease population. Methods 491 patients with sickle cell disease were administered the questionnaire including demographics, WHOQOL-Bref, Short Form-36 (SF-36), Flanagan's Quality of Life Scale (QOLS) and measures of disease severity at their routine health maintenance visits to the sickle cell unit. Internal consistency reliabilities, construct validity and "known groups" validity of the WHOQOL-Bref, and its domains, were examined; and then compared to those of the other instruments. Results All three instruments had good internal consistency, ranging from 0.70 to 0.93 for the WHOQOL-Bref (except the 'social relationships' domain), 0.86–0.93 for the SF-36 and 0.88 for the QOLS. None of the instruments showed any marked floor or ceiling effects except the SF-36 'physical health' and 'role limitations' domains. The WHOQOL-Bref scale also had moderate concurrent validity and showed strong "known groups" validity. Conclusion This study has shown good psychometric properties of the WHOQOL-Bref instrument in determining QOL of those with sickle cell disease. Its utility in this regard is comparable to that of the SF-36 and QOLS.

  16. Cost and quality of fuels for electric utility plants: Energy data report. 1980 annual

    Energy Technology Data Exchange (ETDEWEB)

    1981-06-25

    In 1980 US electric utilities reported purchasing 594 million tons of coal, 408.5 million barrels of oil, and 3,568.7 billion cubic feet of gas. As compared with 1979 purchases, coal rose 6.7%, oil decreased 20.9%, and gas increased for the fourth year in a row. This volume presents tabulated and graphic data on the cost and quality of fossil fuel receipts to US electric utility plants with a combined capacity of 25 MW or greater. Information is included on fuel origin and destination, fuel types and sulfur content, plant types, capacity, and flue gas desulfurization method used, and fuel costs. (LCL)

  17. An information theoretic approach for privacy metrics

    Directory of Open Access Journals (Sweden)

    Michele Bezzi

    2010-12-01

    Full Text Available Organizations often need to release microdata without revealing sensitive information. To this end, data are anonymized and, to assess the quality of the process, various privacy metrics have been proposed, such as k-anonymity, l-diversity, and t-closeness. These metrics are able to capture different aspects of the disclosure risk, imposing minimal requirements on the association of an individual with the sensitive attributes. If we want to combine them in an optimization problem, we need a common framework able to express all these privacy conditions. Previous studies proposed the notion of mutual information to measure the different kinds of disclosure risks and the utility but, since mutual information is an average quantity, it is not able to completely express these conditions on single records. We introduce here the notion of one-symbol information (i.e., the contribution to mutual information by a single record) that allows us to express and compare the disclosure risk metrics. In addition, we obtain a relation between the risk values t and l, which can be used for parameter setting. We also show, by numerical experiments, how l-diversity and t-closeness can be represented in terms of two different, but equally acceptable, conditions on the information gain.
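
    A minimal sketch of the quantities involved follows: average mutual information between a quasi-identifier and a sensitive attribute in a released table, and the pointwise (single-record) contribution in the spirit of the paper's one-symbol information. The toy table is an assumption for illustration.

        # Minimal sketch: mutual information and its single-record (pointwise)
        # contribution for a toy released table. Data values are hypothetical.
        import math
        from collections import Counter

        def distributions(pairs):
            n = len(pairs)
            pxy = {k: v / n for k, v in Counter(pairs).items()}
            px = {k: v / n for k, v in Counter(x for x, _ in pairs).items()}
            py = {k: v / n for k, v in Counter(y for _, y in pairs).items()}
            return pxy, px, py

        def mutual_information(pairs):
            pxy, px, py = distributions(pairs)
            return sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

        def pointwise_information(pairs, x, y):
            """Information gained about Y=y from observing X=x for one record."""
            pxy, px, py = distributions(pairs)
            return math.log2(pxy[(x, y)] / (px[x] * py[y]))

        table = [("30-40", "flu"), ("30-40", "flu"), ("30-40", "cancer"), ("40-50", "cancer")]
        print(round(mutual_information(table), 3))
        print(round(pointwise_information(table, "30-40", "flu"), 3))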

  18. Practical application of biological variation and Sigma metrics quality models to evaluate 20 chemistry analytes on the Beckman Coulter AU680.

    Science.gov (United States)

    Tran, Mai Thi Chi; Hoang, KienTrung; Greaves, Ronda F

    2016-11-01

    This study aimed to evaluate the imprecision and bias data generated for 20 routine chemistry analytes against both the biological variation fitness for purpose (FFP) and Sigma metrics (SM) criteria. Twenty serum/plasma analytes were evaluated on the Beckman Coulter AU680. Third party commercial lyophilized internal quality control samples of human origin were used for day-to-day imprecision calculations. Commercial external quality assurance (EQA) samples were used to determine the systematic error between the test method result and the instrument group mean result from the EQA program for each analyte. Biological variation data were used to calculate the minimum, desirable and optimal imprecision and bias for determination of FFP. The desirable total allowable error was determined from biological variation data and applied to the SM calculation. The outcomes of both quality approaches were then compared. The day-to-day imprecision of most tested analytes (except sodium and chloride) was smaller than the allowable imprecision (ranging from minimum to optimum). Most analytes achieved at least minimum bias. The SM varied with analyte concentration, with six analytes producing low Sigma values. Comparing the quality processes, eleven analytes produced a green light for both FFP and SM. There was some difference in interpretation for the other nine analytes. The individual interpretation of bias and imprecision using FFP criteria allowed for clear determination of the major source of error, whereas SM provided a summative evaluation of method performance. But the selection of total allowable error (TEa) is fundamental to this interpretation, and harmonisation of the TEa calculation is needed. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
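
    The Sigma metric calculation used in such studies is a one-line formula, SM = (TEa - |bias|) / CV, with all terms in percent. A minimal sketch with illustrative numbers (not the paper's data) follows.

        # Minimal sketch of the Sigma metric; example values are illustrative.

        def sigma_metric(tea_pct, bias_pct, cv_pct):
            """Sigma metric from total allowable error, bias, and imprecision (all %)."""
            return (tea_pct - abs(bias_pct)) / cv_pct

        # An analyte with TEa 10%, bias 2%, and day-to-day CV 1.5% runs at ~5.3 sigma:
        print(round(sigma_metric(10.0, 2.0, 1.5), 1))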

  19. The Impact of Bundled Payment on Emergency Department Utilization: Alternative Quality Contract Effects After Year One

    Science.gov (United States)

    Sharp, Adam L.; Song, Zirui; Safran, Dana G.; Chernew, Michael E.; Fendrick, A. Mark

    2014-01-01

    Objective Identify the effect of the Alternative Quality Contract (AQC), a global payment system implemented by Blue Cross Blue Shield of Massachusetts (BCBS) in 2009, on emergency department (ED) utilization. Methods BCBS claims from 2006–2009 for 332,624 enrollees whose primary care physician (PCP) enrolled in the AQC, and 1,296,399 whose PCP was not enrolled in the AQC were evaluated. We used a pre-post, intervention-control, propensity scored difference-in-difference approach to isolate the AQC effect on ED utilization. The analysis adjusted for age, sex, health status and secular trends to compare ED utilization between the treatment and control groups. Results Overall, secular trends showed ED utilization decreased slightly for both treatment and control groups. The adjusted analysis of the AQC group showed decreases from 0.131 to 0.127 visits per member/quarter, and the control group decreased from 0.157 to 0.152 visits per member/quarter. The difference-in-difference analysis showed the AQC had no statistically significant effect on total ED utilization compared to the control group. Conclusion In its first year, the AQC had no significant effect on ED utilization. Similar global budget programs may not alter ED use in the initial implementation period. PMID:24050802
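
    The difference-in-differences estimate at the core of the analysis can be reproduced from the rates quoted above; the sketch below omits the propensity-score adjustment and significance testing.

        # Minimal sketch of the difference-in-differences (DiD) estimate, using the
        # per-member-per-quarter ED visit rates reported in the abstract.

        def did(treat_pre, treat_post, control_pre, control_post):
            """DiD = (treatment change) - (control change)."""
            return (treat_post - treat_pre) - (control_post - control_pre)

        effect = did(treat_pre=0.131, treat_post=0.127, control_pre=0.157, control_post=0.152)
        print(round(effect, 3))   # 0.001 visits per member/quarter: effectively no effect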

  20. Apple Quicktime vs. Microsoft Windows Media: an objective comparison of video encoding quality

    Science.gov (United States)

    Aygen, Arman; Homayounfar, Kambiz

    2003-06-01

    This paper presents a methodology and a framework for quality assessment of compressed video. Usage of the framework is illustrated by taking five source video clips, compressing them with two commercially available video encoders, and calculating four perceptual metrics for each encoded clip. The perceptual metrics utilized to characterize video quality were: jerkiness, blur, block distortion, and an objective overall quality index. The paper provides over forty curves to show the shape and range of the perceptual metrics.
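
    One plausible form for the objective overall quality index, assumed here for illustration rather than taken from the paper, is a weighted pooling of normalized impairments mapped to a 0-100 scale.

        # Minimal sketch of an overall quality index pooled from three impairments,
        # each normalized to [0, 1] (0 = imperceptible, 1 = maximal). The pooling
        # form and weights are assumptions, not the paper's actual model.

        def overall_quality(jerkiness, blur, blocking, weights=(0.3, 0.3, 0.4)):
            """Weighted impairment pooling mapped to a 0 (worst) - 100 (best) index."""
            impairment = sum(w * m for w, m in zip(weights, (jerkiness, blur, blocking)))
            return 100.0 * (1.0 - min(1.0, impairment))

        print(overall_quality(jerkiness=0.10, blur=0.25, blocking=0.15))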

  1. Matrix Converter Based Unified Power Quality Conditioner (MUPQC) for Power Quality Improvement in a Utility

    Directory of Open Access Journals (Sweden)

    G.L. Valsala

    2014-05-01

    Full Text Available This study proposes a new unified power quality conditioner, made up of a matrix converter without energy storage devices, to mitigate current harmonics and voltage sags and swells. The matrix converter output terminals are connected to the load side through a series transformer, and its input side is connected to the supply side through a step-up transformer. The matrix converter thus injects the compensation voltage on the load side, making it possible to mitigate voltage sag/swell problems and providing an efficient solution for voltage- and current-related power quality problems. The proposed topology can therefore mitigate voltage fluctuations and current harmonics without energy storage elements, and the total harmonic distortion produced by the system is also very low. It also offers reduced volume and cost and lower capacitor power losses, together with higher reliability. Space-Vector Modulation (SVM) is used to control the matrix converter. MATLAB/SIMULINK based simulation results are presented to validate the approach.

  2. Utility of Arden Syntax for Representation of Fuzzy Logic in Clinical Quality Measures.

    Science.gov (United States)

    Jenders, Robert A

    2015-01-01

    Prior work has established that fuzzy logic is prevalent in clinical practice guidelines and that Arden Syntax is suitable for representing clinical quality measures (CQMs). Approved since then, Arden Syntax v2.9 (2012) has formal constructs for fuzzy logic even as new formalisms are proposed to represent quality logic. Determine the prevalence of fuzzy logic in CQMs and assess the utility of a contemporary version of Arden Syntax for representing them. Linguistic variables were tabulated in the 329 Assessing Care of the Vulnerable Elderly (ACOVE-3) CQMs, and these logic statements were encoded in Arden Syntax. In a total of 392 CQMs, linguistic variables occurred in 30.6%, and Arden Syntax could be used to represent these formally. Fuzzy logic occurs commonly in CQMs, and Arden Syntax offers particular utility for the representations of these constructs.

  3. Product Accuracy Effect of Oblique and Vertical Non-Metric Digital Camera Utilization in Uav-Photogrammetry to Determine Fault Plane

    Science.gov (United States)

    Amrullah, C.; Suwardhi, D.; Meilano, I.

    2016-06-01

    This study examines the effect of combining non-metric oblique and vertical cameras, together with the configuration of the ground control points, on the precision and accuracy of a UAV-photogrammetry project. The field observation method is used for data acquisition, with aerial photographs and ground control points. All data are processed through a digital photogrammetric workflow under several scenarios of camera combination and ground control point configuration. The model indicates that precision and accuracy increase with the combination of oblique and vertical cameras at all control point configurations. The best products of the UAV-photogrammetry model, in the form of a Digital Elevation Model (DEM), are compared to the LiDAR DEM. Furthermore, the DEMs from UAV-photogrammetry and LiDAR are used to define the fault plane, by taking cross-sections on the model and interpreting the points of extreme terrain-height change. The resulting fault planes indicate that the two models do not show any significant difference.

  4. [Valuation of health-related quality of life and utilities in health economics].

    Science.gov (United States)

    Greiner, Wolfgang; Klose, Kristina

    2014-01-01

    Measuring health-related quality of life is an important aspect in economic evaluation of health programmes. The development of utility-based (preference-based) measures is advanced by the discipline of health economics. Different preference measures are applied for valuing health states to produce a weighted health state index. Those preference weights should be derived from a general population sample in case of resource allocation on a collective level (as in current valuation studies of the EuroQol group).

  5. Protein quality and quantity and insulin control of mammary gland glucose utilization during lactation

    Energy Technology Data Exchange (ETDEWEB)

    Masor, M.L.

    1987-01-01

    Virgin Sprague-Dawley rats were bred and fed laboratory stock (STOCK), 13% casein plus methionine, 13% wheat gluten, or 5% casein plus methionine through gestation and 4 days of lactation. Diets were switched at parturition to determine the effects of dietary protein quality and quantity fed during gestation and/or lactation on insulin stimulation of mammary glucose utilization. On day 20 of gestation (20G) and day 4 of lactation (4L) the right inguinal-abdominal mammary glands were removed, and acini and tissue slices were incubated in Krebs buffer with or without insulin containing [U-14C]glucose and 5 mM glucose for 1 hour at 37 °C. Glucose incorporation into CO2, lipid and lactose was determined. Glucose incorporation into CO2 and lipid, but not lactose, was stimulated by insulin in mammary slices. Diet effects on glucose utilization in acini were confirmed in slices for basal and insulin-stimulated levels. Treatment affected the absolute increase of insulin stimulation. Regression analysis significantly correlated pup weight gain with total glucose utilization. Poor dietary protein quality and quantity fed during gestation impaired both the overall response of mammary glucose utilization to insulin stimulation and mammary development during pregnancy. Improving protein value at parturition did not overcome those deficits by 4L.

  6. Utility and Weight of Factors of Bus Transit’ s Service Quality Analysis in Nanjing

    Institute of Scientific and Technical Information of China (English)

    Jianrong Liu; Tangyi Guo

    2015-01-01

    Service quality is a major factor that affects how public transport users evaluate bus service. In order to evaluate how bus users make trade-offs across travel cost, time, reliability, etc., and to investigate the extent to which the components of service quality vary according to relevant trip characteristics, this paper analyzes the service quality of bus transit with conjoint analysis. Through data analysis, the utility values of the levels of reliability, waiting time, walking time, etc., on commuter and non-commuter trips are obtained, yielding the utility function of the transit system. The factors' weights are then derived from the utility values. The results show that on the commuter trip, passengers value reliability the most, followed by waiting time and walking time, while the weights of in-bus environment, price and station environment are small. On the non-commuter trip, the weights in descending order are reliability, in-bus environment, walking time, station environment and ticket price.
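
    The step from level utilities to attribute weights is the standard range method of conjoint analysis: each attribute's importance is the range of its part-worth utilities divided by the sum of ranges across attributes. The part-worth values below are hypothetical.

        # Minimal sketch: attribute weights from conjoint part-worth utilities via
        # the range method. All part-worth values are hypothetical.

        part_worths = {
            "reliability":        [0.00, 0.45, 0.90],   # utilities of low/medium/high levels
            "waiting_time":       [0.00, 0.30, 0.55],
            "walking_time":       [0.00, 0.20, 0.40],
            "in_bus_environment": [0.00, 0.10, 0.20],
            "ticket_price":       [0.00, 0.08, 0.15],
        }

        ranges = {attr: max(u) - min(u) for attr, u in part_worths.items()}
        total = sum(ranges.values())
        weights = {attr: r / total for attr, r in ranges.items()}

        for attr, w in sorted(weights.items(), key=lambda kv: -kv[1]):
            print(f"{attr}: {w:.2f}")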

  7. An experimental evaluation of the Sternberg task as a workload metric for helicopter Flight Handling Qualities (FHQ) research

    Science.gov (United States)

    Hemingway, J. C.

    1984-01-01

    The objective was to determine whether the Sternberg item-recognition task, employed as a secondary-task measure of spare mental capacity for flight handling qualities (FHQ) simulation research, could help to differentiate between different flight-control conditions. FHQ evaluations were conducted on the Vertical Motion Simulator at Ames Research Center to investigate different primary flight-control configurations and selected stability and control augmentation levels for helicopters engaged in low-level flight regimes. The Sternberg task was superimposed upon the primary flight-control task in a balanced experimental design. The results of parametric statistical analysis of the Sternberg secondary-task data failed to support the continued use of this task as a measure of pilot workload. In addition to the secondary task, subjects provided Cooper-Harper pilot ratings (CHPR) and responded to a workload questionnaire. The CHPR data also failed to provide reliable statistical discrimination between FHQ treatment conditions; some insight into the behavior of the secondary task was gained from the workload questionnaire data.

  8. Analysis and prediction of the influence of energy utilization on air quality in Beijing

    Institute of Scientific and Technical Information of China (English)

    LI Lin; HAO Jiming; HU Jingnan

    2007-01-01

    This work evaluates the influence of energy consumption on the future air quality in Beijing, using 2000 as the base year and 2008 as the target year. It establishes the emission inventory of primary PM10, SO2 and NOx related to energy utilization in eight areas of Beijing. The air quality model was adopted to simulate the temporal and spatial distribution of each pollutant concentration in the eight urban areas. Their emission, concentration distribution, and sectoral share responsibility rate were analyzed, and air quality in 2008 was predicted. The industrial sector contributed above 40% of primary PM10 and SO2 resulting from energy consumption, while vehicles accounted for about 65% of NOx. According to current policy and development trends, air quality in the eight urban areas could improve by 2008, when the average concentrations of primary PM10, SO2 and NO2 related to energy utilization at each monitored site are predicted to be about 25, 50 and 51 μg/m3, respectively.

  9. FABASOFT BEST PRACTICES AND TEST METRICS MODEL

    Directory of Open Access Journals (Sweden)

    Nadica Hrgarek

    2007-06-01

    Full Text Available Software companies face serious problems in measuring the progress of test activities and the quality of software products in order to estimate test completion criteria and determine whether the shipment milestone will be reached on time. Measurement is a key activity in the testing life cycle and requires an established, managed and well-documented test process, defined software quality attributes, quantitative measures, and the use of test management and bug tracking tools. Test metrics are a subset of software metrics (product metrics, process metrics) and enable the measurement and quality improvement of the test process and/or the software product. The goal of this paper is to briefly present Fabasoft best practices and lessons learned during functional and system testing of big complex software products, and to describe a simple test metrics model applied to the software test process with the purpose of better controlling software projects, and measuring and increasing software quality.

  10. Improving imaging utilization through practice quality improvement (maintenance of certification part IV): a review of requirements and approach to implementation.

    Science.gov (United States)

    Griffith, Brent; Brown, Manuel L; Jain, Rajan

    2014-04-01

    The purposes of this article are to review the American Board of Radiology requirements for practice quality improvement and to describe our approach to improving imaging utilization while offering a guide to implementing similar projects at other institutions, emphasizing the plan-do-study-act approach. There is increased emphasis on improving quality in health care. Our institution has undertaken a multiphase practice quality improvement project addressing the appropriate utilization of screening cervical spinal CT in an emergency department.

  11. Thermodynamic Metrics and Optimal Paths

    Energy Technology Data Exchange (ETDEWEB)

    Sivak, David; Crooks, Gavin

    2012-05-08

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
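
    In one dimension, the thermodynamic length of a protocol reduces to integrating the square root of the friction coefficient along the control path. The friction function and protocol in this sketch are toy assumptions for illustration, not results from the paper.

        # Minimal sketch: discretized thermodynamic length of a 1-D control
        # protocol, ds = sqrt(zeta(lambda)) * |dlambda|. The friction function
        # zeta and the linear ramp protocol are toy assumptions.
        import math

        def thermodynamic_length(protocol, friction):
            """Discretized length of a 1-D protocol lambda(t) under friction zeta."""
            length = 0.0
            for a, b in zip(protocol, protocol[1:]):
                mid = 0.5 * (a + b)
                length += math.sqrt(friction(mid)) * abs(b - a)
            return length

        zeta = lambda lam: 1.0 + lam ** 2            # toy friction coefficient
        ramp = [i / 100.0 for i in range(101)]       # linear protocol from 0 to 1
        print(round(thermodynamic_length(ramp, zeta), 4))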

  12. Random Kaehler metrics

    Energy Technology Data Exchange (ETDEWEB)

    Ferrari, Frank, E-mail: frank.ferrari@ulb.ac.be [Service de Physique Theorique et Mathematique, Universite Libre de Bruxelles and International Solvay Institutes, Campus de la Plaine, CP 231, 1050 Bruxelles (Belgium); Klevtsov, Semyon, E-mail: semyon.klevtsov@ulb.ac.be [Service de Physique Theorique et Mathematique, Universite Libre de Bruxelles and International Solvay Institutes, Campus de la Plaine, CP 231, 1050 Bruxelles (Belgium); ITEP, B. Cheremushkinskaya 25, Moscow 117218 (Russian Federation); Zelditch, Steve, E-mail: zelditch@math.northwestern.edu [Department of Mathematics, Northwestern University, Evanston, IL 60208 (United States)

    2013-04-01

    The purpose of this article is to propose a new method to define and calculate path integrals over metrics on a Kaehler manifold. The main idea is to use finite dimensional spaces of Bergman metrics, as an approximation to the full space of Kaehler metrics. We use the theory of large deviations to decide when a sequence of probability measures on the spaces of Bergman metrics tends to a limit measure on the space of all Kaehler metrics. Several examples are considered.

  13. Dynamic Evaluation of Water Quality Improvement Based on Effective Utilization of Stockbreeding Biomass Resource

    Directory of Open Access Journals (Sweden)

    Jingjing Yan

    2014-11-01

    Full Text Available The stockbreeding industry is growing rapidly in rural regions of China, posing a high risk to the water environment due to the emission of huge amounts of pollutants, in terms of COD, T-N and T-P, to rivers. On the other hand, as a typical biomass resource, stockbreeding waste can be used as a clean energy source through biomass utilization technologies. In this paper, we constructed a dynamic linear optimization model to simulate synthetic water environment management policies, covering both the water environment system and socio-economic changes over 10 years. Based on the simulation, the model can precisely estimate trends in water quality, production of stockbreeding biomass energy and economic development under given restrictions on the water environment. We examined seven towns of Shunyi district of Beijing as the target area, analysing synthetic water environment management policies by computer simulation based on the effective utilization of stockbreeding biomass resources to improve water quality and realize sustainable development. The purpose of our research is to establish an effective utilization method for biomass resources that incorporates water environment preservation, resource reutilization and economic development, and finally realizes the sustainable development of the society.

  14. Assessment of groundwater utilization for irrigating park trees under the spatiotemporal uncertainty condition of water quality

    Science.gov (United States)

    Jang, Cheng-Shin; Kuo, Yi-Ming

    2013-04-01

    Parks have a variety of functions for residents and are important for urban landscape planning. The healthy growth of urban park trees requires regular irrigation. To reduce the pressure of high groundwater levels and to avoid wasting groundwater resources, proper groundwater extraction for irrigating park trees in the Taipei Basin is regarded as a reciprocal solution for sustainable groundwater management and the preservation of excellent urban landscapes. Therefore, this study determines appropriate groundwater use for irrigating park trees in the metropolitan Taipei Basin under spatiotemporal uncertainty in water quality. First, six hydrochemical parameters in groundwater associated with an irrigation water quality standard were collected from a 12-year survey. Upper, median and lower quartiles of the six hydrochemical parameters were obtained to establish three thresholds. According to the irrigation water quality standard, multivariate indicator kriging (MVIK) was adopted to probabilistically evaluate the integration of the six hydrochemical parameters. Entropy was then applied to quantify the spatiotemporal uncertainty of the hydrochemical parameters. Finally, locations with high estimated probabilities for the median-quartile threshold and low local uncertainty are suitable for pumping groundwater to irrigate park trees. The study results demonstrate that MVIK and entropy are capable of characterizing the spatiotemporal uncertainty of groundwater quality parameters and determining parks suitable for groundwater utilization for irrigation. Moreover, the upper, median and lower quartiles of the hydrochemical parameters serve as three estimation thresholds in MVIK, which makes the assessment predictions robust. This study thereby extends the methodological application of MVIK for spatiotemporally analyzing environmental quality compared with previous related works. Furthermore, the analyzed results indicate that 64
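
    To make the MVIK-plus-entropy idea concrete, here is a minimal Python sketch under stated assumptions: a single hydrochemical parameter, a hypothetical irrigation threshold, and a plain indicator probability standing in for the kriged estimate:

      # Indicator probability and its entropy for one monitoring site (illustrative data).
      import numpy as np

      def indicator_probability(samples, threshold):
          """Fraction of observations meeting the irrigation standard (indicator mean)."""
          return float(np.mean(np.asarray(samples) <= threshold))

      def binary_entropy(p):
          """Shannon entropy (bits) of the meets/fails indicator; 0 = certain, 1 = maximally uncertain."""
          if p in (0.0, 1.0):
              return 0.0
          return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

      # Hypothetical electrical-conductivity readings (dS/m) at one park across the survey years
      ec = [0.5, 0.7, 1.1, 0.6, 0.9, 1.4, 0.8, 0.7, 1.0, 0.6, 1.2, 0.9]
      p = indicator_probability(ec, threshold=0.75)   # assumed irrigation standard
      print(f"P(meets standard) = {p:.2f}, entropy = {binary_entropy(p):.2f} bits")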

  15. Drinking water sources, availability, quality, access and utilization for goats in the Karak Governorate, Jordan.

    Science.gov (United States)

    Al-Khaza'leh, Ja'far Mansur; Reiber, Christoph; Al Baqain, Raid; Valle Zárate, Anne

    2015-01-01

    Goat production is an important agricultural activity in Jordan. The country is one of the poorest in the world in terms of water scarcity. Provision of a sufficient quantity of good-quality drinking water is important for goats to maintain feed intake and production. This study aimed to evaluate the seasonal availability and quality of goats' drinking water sources, and their accessibility and utilization, in different zones of the Karak Governorate in southern Jordan. Data collection methods comprised interviews with purposively selected farmers and quality assessment of water sources. The provision of drinking water was considered one of the major constraints on goat production, particularly during the dry season (DS). Long travel distances to the water sources, waiting time at watering points, and high fuel and labor costs were the key reasons associated with the problem. All values of water quality (WQ) parameters were within the acceptable limits of the guidelines for livestock drinking WQ, with the exception of iron, which showed a slightly elevated concentration in one borehole source in the DS. These findings show that water shortage is an important problem with consequences for goat keepers. To alleviate this constraint, and in view of the depleted groundwater sources, alternative water sources at reasonable distances have to be tapped and monitored for water quality, and more efficient use of rainwater harvesting systems in the study area is recommended.

  16. An image quality metric based on bionic models

    Institute of Scientific and Technical Information of China (English)

    侯伟龙; 高新波; 何立火; 高飞

    2011-01-01

    Objective image quality assessment is an important branch of image processing: its indices can serve as measures or criteria for calibrating image processing systems, or for optimizing image processing algorithms and selecting their parameters. Since human eyes are the ultimate receptors of images, and the visual attention mechanism plays a very important role when humans view images, this paper proposes a new bionic image quality assessment (IQA) algorithm based on the visual attention mechanism. Following the principles by which visual attention is formed, a Gaussian pyramid decomposition splits the image into different spatial scales, simulating the multi-channel property of the human visual system (HVS). A contrast sensitivity function is applied to each spatial scale for visual perceptual filtering; image features are then extracted using the center-surround receptive field property and the lateral inhibition mechanism of the HVS, and these features are used to capture the perceptual differences caused by image degradation. Experimental results demonstrate that the proposed method accurately reflects subjective human judgments of image quality at relatively low computational complexity, outperforming comparable assessment algorithms.
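
    A minimal Python sketch of the multi-scale, center-surround comparison described above; the pyramid depth, filter widths and pooling are illustrative assumptions, not the authors' parameters:

      # Full-reference IQA sketch: Gaussian pyramid + difference-of-Gaussians (center-surround).
      import numpy as np
      from scipy.ndimage import gaussian_filter, zoom

      def center_surround(img, sigma_center=1.0, sigma_surround=3.0):
          """Difference-of-Gaussians response, a stand-in for center-surround receptive fields."""
          return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)

      def bionic_iqa(ref, dist, levels=4):
          """Mean absolute difference of center-surround responses, pooled over pyramid levels."""
          score = 0.0
          for _ in range(levels):
              score += np.mean(np.abs(center_surround(ref) - center_surround(dist)))
              ref = zoom(gaussian_filter(ref, 1.0), 0.5)    # next (coarser) pyramid level
              dist = zoom(gaussian_filter(dist, 1.0), 0.5)
          return score / levels                             # lower = smaller perceived difference

      rng = np.random.default_rng(0)
      reference = rng.random((64, 64))
      distorted = reference + 0.05 * rng.standard_normal((64, 64))
      print(bionic_iqa(reference, distorted))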

  17. Analyzing the barriers affecting the effective utilization of quality tools and techniques using Integrated ISM approach

    Directory of Open Access Journals (Sweden)

    Vivek Sharma

    2017-08-01

    Full Text Available The aim of this study is to recognize and scrutinize the barriers affecting the utilization of quality tools and techniques (QT&T) in manufacturing organizations. For this purpose, twelve barriers affecting the execution of QT&T in manufacturing organizations were identified from literature analysis and expert opinion (academic and industrial). A questionnaire-based survey was used to validate the identified barriers. Afterwards, an integrated model of QT&T was developed using Interpretive Structural Modelling (ISM) and the Matrice d'Impacts Croisés Multiplication Appliquée à un Classement (MICMAC) approach. This research gives a clear depiction of how to identify and handle the barriers by computing the effectiveness of each barrier. Barriers such as availability of time and space, inability to change organizational culture, and inadequate coordination and teamwork are found to be the key barriers to utilization of QT&T in manufacturing organizations. The developed model will help manufacturing organizations effectively utilize QT&T.
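
    ISM's central step is turning the pairwise influence (adjacency) matrix into a final reachability matrix via transitive closure; a minimal Python sketch with a hypothetical four-barrier matrix rather than the paper's twelve-barrier data:

      # Final reachability matrix for ISM via Warshall-style transitive closure.
      import numpy as np

      def reachability(adjacency):
          """Closure of (A OR I): entry (i, j) is 1 if barrier i influences barrier j."""
          n = len(adjacency)
          r = np.logical_or(np.asarray(adjacency, dtype=bool), np.eye(n, dtype=bool))
          for k in range(n):
              r = np.logical_or(r, np.outer(r[:, k], r[k, :]))   # paths through barrier k
          return r.astype(int)

      # Illustrative: barrier 0 influences 1, 1 influences 2, 2 influences 3.
      A = [[0, 1, 0, 0],
           [0, 0, 1, 0],
           [0, 0, 0, 1],
           [0, 0, 0, 0]]
      print(reachability(A))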

  18. A Systematic Review on the Impact of Metrics in Software Process Improvement

    Directory of Open Access Journals (Sweden)

    Simran Jaitly

    2014-03-01

    Full Text Available Software Process Improvement is the act of changing the ongoing software development and maintenance process to achieve basic business goals. It is a sequence of catalogued activities required to develop and maintain software within a technical and management schema. Software metrics provide a quantitative basis for planning and predicting software development processes and their required improvement strategies. This research paper focuses on the impact of software metrics on software process improvement. Many metrics and tools have been developed, promoted and utilized, resulting in remarkable successes. The paper also examines the realm of software engineering to see why software metrics are needed and reviews their contribution to software process improvement and quality.

  20. Applying Sigma Metrics to Reduce Outliers.

    Science.gov (United States)

    Litten, Joseph

    2017-03-01

    Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods.
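
    The sigma metric this record relies on is conventionally computed as sigma = (TEa - |bias|) / CV; a small Python sketch with hypothetical assay values, since the abstract reports no numbers:

      # Westgard-style sigma metric; all inputs in percent.
      def sigma_metric(tea_pct, bias_pct, cv_pct):
          """sigma >= 5 suggests the method needs only minimal QC rules."""
          return (tea_pct - abs(bias_pct)) / cv_pct

      # Hypothetical assay: allowable total error 10%, bias 2%, CV 1.5%
      print(f"sigma = {sigma_metric(10.0, 2.0, 1.5):.1f}")   # -> sigma = 5.3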

  1. NASA metric transition plan

    Science.gov (United States)

    NASA science publications have used the metric system of measurement since 1970. Although NASA has maintained a metric use policy since 1979, practical constraints have restricted actual use of metric units. In 1988, an amendment to the Metric Conversion Act of 1975 required the Federal Government to adopt the metric system except where impractical. In response to Public Law 100-418 and Executive Order 12770, NASA revised its metric use policy and developed this Metric Transition Plan. NASA's goal is to use the metric system for program development and functional support activities to the greatest practical extent by the end of 1995. The introduction of the metric system into new flight programs will determine the pace of the metric transition. Transition of institutional capabilities and support functions will be phased to enable use of the metric system in flight program development and operations. Externally oriented elements of this plan will introduce and actively support use of the metric system in education, public information, and small business programs. The plan also establishes a procedure for evaluating and approving waivers and exceptions to the required use of the metric system for new programs. Coordination with other Federal agencies and departments (through the Interagency Council on Metric Policy) and industry (directly and through professional societies and interest groups) will identify sources of external support and minimize duplication of effort.

  2. Novel resource utilization of refloated algal sludge to improve the quality of organic fertilizer.

    Science.gov (United States)

    Huang, Yan; Li, Rong; Liu, Hongjun; Wang, Beibei; Zhang, Chenmin; Shen, Qirong

    2014-08-01

    Refloated algal sludge is produced in large amounts when nitrogen and phosphorus resources are retrieved from Taihu Lake; without further management it may cause serious secondary environmental pollution. The possibility of utilizing this algal sludge to improve the quality of organic fertilizer was investigated in this study. Variations in physicochemical properties, germination index (GI) and microcystin (MC) content were analysed during the composting process. The results showed that the addition of algal sludge improved the contents of nutrients, common free amino acids and total common amino acids in the resulting organic fertilizer. Rapid degradation of MC-LR and MC-RR, a high GI value and a greater abundance of culturable protease-producing bacteria were observed during composting with added algal sludge. Growth experiments showed that the novel organic fertilizer efficiently promoted plant growth. This study provides a novel resource recovery method to reclaim Taihu Lake algal sludge and highlights a novel method to produce a high-quality organic fertilizer.

  3. Economic Valuation on Change of Tourism Quality in Rawapening, Indonesia: An Application of Random Utility Method

    Science.gov (United States)

    Subanti, S.; Irawan, B. R. M. B.; Sasongko, G.; Hakim, A. R.

    2017-04-01

    This study aims to determine the profit (or loss) earned by economic actors in tourism activities if the condition or quality of tourism in Rawapening is improved (or deteriorates). Changes in condition or quality are characterized by travel expenses, the natural environment, Japanese cultural performances, and traditional markets. The method used to measure the change in economic benefit or loss is a random utility approach. The study found that travel cost, the natural environment, Japanese cultural performances, and traditional markets are significant factors in respondents' preferences among the changed tourism conditions. The value of compensation received by visitors as a result of improved conditions is 2,932 billion, while under worsened conditions it is 2,628 billion. This study recommends that the local government consider environmental factors in the formulation of tourism development in Rawapening.
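
    The random utility approach referenced here has a standard specification; a LaTeX sketch of the usual conditional-logit form (attribute names are placeholders, not the study's estimated model):

      % Utility of alternative j for respondent i: systematic part plus random error
      U_{ij} = V_{ij} + \varepsilon_{ij}, \qquad
      V_{ij} = \beta_1\,\mathrm{travelcost}_{ij} + \beta_2\,\mathrm{environment}_{ij} + \cdots
      % With i.i.d. type-I extreme-value errors, choice probabilities take the logit form:
      P_{ij} = \frac{\exp(V_{ij})}{\sum_k \exp(V_{ik})}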

  4. Mining and Utilizing Dataset Relevancy from Oceanographic Dataset (MUDROD) Metadata, Usage Metrics, and User Feedback to Improve Data Discovery and Access

    Science.gov (United States)

    Li, Y.; Jiang, Y.; Yang, C. P.; Armstrong, E. M.; Huang, T.; Moroni, D. F.; McGibbney, L. J.

    2016-12-01

    Big oceanographic data have been produced, archived and made available online, but finding the right data for scientific research and application development is still a significant challenge. A long-standing problem in data discovery is how to find the interrelationships between keywords and data, as well as the intra-relationships of the two individually. Most previous research attempted to solve this problem by building domain-specific ontologies, either manually or through automatic machine learning techniques. The former is costly, labor intensive and hard to keep up to date, while the latter is prone to noise and may be difficult for humans to understand. Large-scale user behavior data represent a largely untapped, unique, and valuable source for discovering semantic relationships among domain-specific vocabulary. In this article, we propose a search engine framework for mining and utilizing dataset relevancy from oceanographic dataset metadata, user behaviors, and existing ontology. The objective is to improve the discovery accuracy of oceanographic data and reduce the time for scientists to discover, download and reformat data for their projects. Experiments and a search example show that the proposed search engine helps both scientists and general users search with better ranking results, recommendations, and ontology navigation.

  5. The Utility of the OMI HCHO/NO2 in Air Quality Decision-Making Activities

    Science.gov (United States)

    Duncan, Bryan

    2010-01-01

    I will discuss a novel and practical application of the OMI HCHO and NO2 data products to the "weight of evidence" in the air quality decision-making process (e.g., the State Implementation Plan (SIP)) for a city, region, or state to demonstrate that it is making progress toward attainment of the National Ambient Air Quality Standard (NAAQS) for ozone. Any trend, or lack thereof, in the observed OMI HCHO/NO2 may support that an emission control strategy implemented to reduce ozone is or is not occurring for a metropolitan area. In addition, the observed OMI HCHO/NO2 may be used to define new emission control strategies as the photochemical environments of urban areas evolve over time. I will demonstrate the utility of the OMI HCHO/NO2 over the U.S. for air quality applications with support from simulations with both a regional model and a photochemical box model. These results support mission planning of an OMI-like instrument for the proposed GEO-CAPE satellite, which has as one of its objectives the study of air quality from space. However, I'm attending the meeting as the Aura Deputy Project Scientist, so I don't technically need to present anything to justify the travel.

  7. Water quality, compliance, and health outcomes among utilities implementing Water Safety Plans in France and Spain.

    Science.gov (United States)

    Setty, Karen E; Kayser, Georgia L; Bowling, Michael; Enault, Jerome; Loret, Jean-Francois; Serra, Claudia Puigdomenech; Alonso, Jordi Martin; Mateu, Arnau Pla; Bartram, Jamie

    2017-05-01

    Water Safety Plans (WSPs), recommended by the World Health Organization since 2004, seek to proactively identify potential risks to drinking water supplies and implement preventive barriers that improve safety. To evaluate the outcomes of WSP application in large drinking water systems in France and Spain, we undertook analysis of water quality and compliance indicators between 2003 and 2015, in conjunction with an observational retrospective cohort study of acute gastroenteritis incidence, before and after WSPs were implemented at five locations. Measured water quality indicators included bacteria (E. coli, fecal streptococci, total coliform, heterotrophic plate count), disinfectants (residual free and total chlorine), disinfection by-products (trihalomethanes, bromate), aluminum, pH, turbidity, and total organic carbon, comprising about 240K manual samples and 1.2M automated sensor readings. We used multiple, Poisson, or Tobit regression models to evaluate water quality before and after the WSP intervention. The compliance assessment analyzed exceedances of regulated, recommended, or operational water quality thresholds using chi-squared or Fisher's exact tests. Poisson regression was used to examine acute gastroenteritis incidence rates in WSP-affected drinking water service areas relative to a comparison area. Implementation of a WSP generally resulted in unchanged or improved water quality, while compliance improved at most locations. Evidence for reduced acute gastroenteritis incidence following WSP implementation was found at only one of the three locations examined. Outcomes of WSPs should be expected to vary across large water utilities in developed nations, as the intervention itself is adapted to the needs of each location. The approach may translate to diverse water quality, compliance, and health outcomes. Copyright © 2017 Elsevier GmbH. All rights reserved.
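
    A minimal Python sketch of the kind of before/after Poisson incidence comparison the study describes, using statsmodels and clearly hypothetical counts (not the study's data):

      # Poisson GLM of annual gastroenteritis counts with a person-time offset.
      import numpy as np
      import statsmodels.api as sm

      cases = np.array([120, 130, 125, 95, 90, 100])   # illustrative annual case counts
      person_years = np.array([50_000] * 6)            # illustrative population at risk
      post_wsp = np.array([0, 0, 0, 1, 1, 1])          # 1 = year after WSP implementation

      X = sm.add_constant(post_wsp)
      fit = sm.GLM(cases, X, family=sm.families.Poisson(),
                   offset=np.log(person_years)).fit()
      print(fit.summary())
      print("incidence rate ratio (post vs. pre):", np.exp(fit.params[1]))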

  8. A CAD system and quality assurance protocol for bone age assessment utilizing digital hand atlas

    Science.gov (United States)

    Gertych, Arakadiusz; Zhang, Aifeng; Ferrara, Benjamin; Liu, Brent J.

    2007-03-01

    Determination of bone age assessment (BAA) in pediatric radiology is a task based on detailed analysis of a patient's left-hand X-ray. The current standard in clinical practice relies on a subjective comparison of the hand with patterns in a book atlas. The computerized approach to BAA (CBAA) automatically analyzes the regions of interest in the hand image. This procedure is followed by extraction of quantitative features sensitive to skeletal development, which are then converted to a bone age value using knowledge from the digital hand atlas (DHA); this also allows BAA results that resemble the current clinical approach. All developed methodologies have been combined into one CAD module with a graphical user interface (GUI). CBAA can also improve statistical and analytical accuracy based on a clinical workflow analysis. For this purpose a quality assurance protocol (QAP) has been developed. Implementation of the QAP helped make the CAD more robust and identify images that cannot meet the conditions required by DHA standards. Moreover, the entire CAD-DHA system may gain further benefits if the clinical acquisition protocol is modified. The goal of this study is to present the performance improvement of the overall CAD-DHA system with QAP and the comparison of the CAD results with the chronological age of 1390 normal subjects from the DHA. The CAD workstation can process images from a local image database or from a PACS server.

  9. Sheaves of metric structures

    CERN Document Server

    Daza, Maicol A Ochoa

    2011-01-01

    We introduce and develop the theory of metric sheaves. A metric sheaf $\mathcal{A}$ is defined on a topological space $X$ such that each fiber is a metric model. We describe the construction of the generic model as the quotient space of the sheaf through an appropriate filter. Semantics in this model is completely controlled and understood by the forcing rules in the sheaf.

  10. Sleep quality and health service utilization in Chinese general population: a cross-sectional study in Dongguan, China.

    Science.gov (United States)

    Zhang, Hui-Shan; Mai, Yan-Bing; Li, Wei-Da; Xi, Wen-Tao; Wang, Jin-Ming; Lei, Yi-Xiong; Wang, Pei-Xi

    The aims of this study were to explore the Pittsburgh Sleep Quality Index (PSQI) and health service utilization in a Chinese general population, to investigate the association between the PSQI and health service utilization, and to identify the independent contributions of social demographic variables, health-related factors and the PSQI to health service utilization. In a cross-sectional community-based health survey using a multi-instrument questionnaire, 4067 subjects (≥15 years old) were studied. The Chinese version of the PSQI was used to assess sleep quality. Health service utilization was measured by the recent two-week physician visit rate and the annual hospitalization rate. Higher PSQI scores were associated with more frequent health service utilization. Higher scores in subjective sleep quality were associated with a higher rate of recent two-week physician visits (adjusted OR = 1.24 per SD increase, P = 0.015), as were higher scores in habitual sleep efficiency (adjusted OR = 1.24 per SD increase, P = 0.038) and sleep disturbances (adjusted OR = 2.09 per SD increase, P < 0.05). Poorer sleep quality predicted more frequent health service utilization. The independent contribution of the PSQI to health service utilization was smaller than that of social demographic variables. Copyright © 2016. Published by Elsevier B.V.

  11. Zonal management of multi-purposes groundwater utilization based on water quality and impact on the aquifer.

    Science.gov (United States)

    Liang, Ching-Ping; Jang, Cheng-Shin; Chen, Ching-Fang; Chen, Jui-Sheng

    2016-07-01

    Groundwater is widely used for drinking, irrigation, and aquaculture in the Pingtung Plain, southwestern Taiwan. The overexploitation and poor quality of groundwater in some areas of the plain pose great challenges for the safe use and sustainable management of groundwater resources, so establishing an effective management plan for multi-purpose groundwater utilization there is imperative. Consideration of groundwater quality and of the potential impact of exploitation on the aquifer is paramount to such management. This study proposes a zonal management plan for the multi-purpose use of groundwater in the Pingtung Plain. The plan is developed by considering the spatial variability of groundwater quality and the impact on the aquifer, defined as the ratio of the actual groundwater extraction rate to transmissivity. A geostatistical kriging approach is used to spatially delineate safe zones based on the water quality standards applied in the three groundwater utilization sectors. Suitable zones for the impact on the aquifer are then spatially determined. The resulting safe water quality zones for the three types of utilization demand and the suitable zones for aquifer impact are integrated into a zonal management map for multi-purpose groundwater utilization, which can help government administrators establish a strategy for the safe and sustainable use of groundwater in the Pingtung Plain.

  12. Landscape morphology metrics for urban areas: analysis of the role of vegetation in the management of the quality of urban environment

    Directory of Open Access Journals (Sweden)

    Danilo Marques de Magalhães

    2013-05-01

    Full Text Available This study aims to demonstrate the applicability of landscape metric analysis to fragments of urban land use. More specifically, it focuses on low vegetation cover, arboreal and shrub vegetation, and their distribution across land uses. Differences in vegetation cover in dense urban areas are explained. It also briefly reviews the state of the art in Landscape Ecology and landscape metrics. As an example, it develops a case study in Belo Horizonte, Minas Gerais, Brazil, using area metrics: the relations between area, perimeter, core, and circumscribed circle. From this analysis, the paper proposes the definition of priority areas for conservation, urban parks, common open spaces, linear parks and green corridors. It is demonstrated that, in order to design urban landscapes, studies of two-dimensional landscape representations are still valuable, but should consider the systemic relations between the different factors of shape and land use.

  13. A study on coverage utilization and quality of maternal care services

    Directory of Open Access Journals (Sweden)

    Neeraj Agarwal, Abhiruchi Galhotra, H M Swami

    2011-01-01

    Full Text Available The objectives of the study were to assess the utilization of various maternal services and to compare the quality of services provided by doctors and health workers in terms of the components and advice received by pregnant women during the antenatal period. It was a cross-sectional study conducted in a village on the border of Chandigarh (UT) and Mohali (Punjab). All the women who had delivered in the past three years in the village of Palsora were included in the study. 92.4% of the pregnancies were registered, 53.2% of which received antenatal care from a doctor and 46.8% from a health worker. Blood pressure was measured significantly more often by doctors, whereas weight was recorded significantly more often by health workers. The advice provided by doctors was significantly more comprehensive than that of health workers regarding diet, danger signs, newborn care, family planning and natal care.

  14. Availability, utilization, and quality of emergency obstetric care services in Bauchi State, Nigeria.

    Science.gov (United States)

    Abegunde, Dele; Kabo, Ibrahim A; Sambisa, William; Akomolafe, Toyin; Orobaton, Nosa; Abdulkarim, Masduk; Sadauki, Habib

    2015-03-01

    To report the availability, utilization, and quality of emergency obstetric care (EmOC) services in Bauchi State, Nigeria. Between June and July 2012, a cross-sectional survey of health facilities was conducted. Data on the performance of EmOC services between June 2011 and May 2012 were obtained from records of 20 general hospitals and 39 primary healthcare centers providing delivery services. Additionally, structured interviews with facility managers were conducted. Only 6 (10.2%) of the 59 facilities met the UN requirements for EmOC centers. None of the three senatorial zones in Bauchi State had the minimum acceptable number of five EmOC facilities per 500 000 population. Overall, 10 517 (4.4%) of the estimated 239 930 annual births took place in EmOC facilities. Cesarean delivery accounted for 3.6% (n=380) of the 10 517 births occurring in EmOC facilities and 0.2% of the 239 930 expected live births. Only 1416 (3.9%) of the expected 35 990 obstetric complications were managed in EmOC facilities. Overall, 45 (3.2%) of 1416 women with major direct obstetric complications treated at EmOC facilities died. Among 379 maternal deaths, 317 (83.6%) were attributable to major direct obstetric complications. Availability, utilization, and quality of EmOC services in Bauchi State, Nigeria, are suboptimal. The health system's capacity to manage emergency obstetric complications needs to be strengthened. Copyright © 2014 International Federation of Gynecology and Obstetrics. All rights reserved.

  15. Abbreviated quality of life scales for schizophrenia: comparison and utility of two brief community functioning measures.

    Science.gov (United States)

    Fervaha, Gagan; Foussias, George; Siddiqui, Ishraq; Agid, Ofer; Remington, Gary

    2014-04-01

    The Heinrichs-Carpenter Quality of Life Scale (QLS) is the most extensively used real-world community functioning scale in schizophrenia research. However, the extensive time required to administer it and the inclusion of items that overlap conceptually with negative symptoms limit its use across studies. The present study examined the validity and utility of two abbreviated QLS measures against the full QLS excluding negative symptom items. The sample included 1427 patients with schizophrenia who completed the baseline visit in the CATIE study. The validity of the two abbreviated QLS measures (7-item and 4-item) was examined against the full QLS, excluding the intrapsychic foundations subscale, using correlation analysis. The utility of the abbreviated measures was explored by examining associations between the functioning scales and clinical variables and longitudinal change. Both abbreviated QLS measures were highly predictive of the full QLS (both r = 0.91, p < 0.001), supporting their use in schizophrenia, especially when assessment of functional outcome is not the focus. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Utilization and quality of cryopreserved red blood cells in transfusion medicine.

    Science.gov (United States)

    Henkelman, S; Noorman, F; Badloe, J F; Lagerberg, J W M

    2015-02-01

    Cryopreserved (frozen) red blood cells have been used in transfusion medicine since the Vietnam War. The main method of freezing red blood cells uses glycerol. Although cryopreserved red blood cells are promising due to the prolonged storage time and the limited cellular deterioration at subzero temperatures, their use has been hampered by the more complex and labour-intensive procedure and the limited shelf life of thawed products. Since the FDA approval of a closed (de)glycerolization procedure in 2002, allowing prolonged post-thaw storage of red blood cells for up to 21 days at 2-6°C, cryopreserved red blood cells have become a more utilized blood product. Currently, cryopreserved red blood cells are mainly used in military operations and to stock red blood cells with rare phenotypes. Yet cryopreserved red blood cells could also be useful for replenishing temporary blood shortages, for prolonging storage time before autologous transfusion, and for IgA-deficient patients. This review describes the main methods of cryopreserving red blood cells, explores the quality of this blood product and highlights clinical settings in which cryopreserved red blood cells are or could be utilized.

  17. New metrics for blog mining

    Science.gov (United States)

    Ulicny, Brian; Baclawski, Ken; Magnus, Amy

    2007-04-01

    Blogs represent an important new arena for knowledge discovery in open source intelligence gathering. Bloggers are a vast network of human (and sometimes non-human) information sources monitoring important local and global events, and other blogs, for items of interest upon which they comment. Increasingly, issues erupt from the blog world into the real world. In order to monitor blogging about important events, we must develop models and metrics that represent blogs correctly. The structure of blogs requires new techniques for evaluating metrics such as the relevance, specificity, credibility and timeliness of blog entries. Techniques developed for standard information retrieval purposes (e.g. Google's PageRank) are suboptimal when applied to blogs because of their high degree of exophoricity, quotation, brevity, and rapidity of update. In this paper, we offer new metrics for blog entry relevance, specificity, timeliness and credibility, which we are implementing in a blog search and analysis tool for international blogs. This tool utilizes new blog-specific metrics and techniques for extracting the necessary information from blog entries automatically, using shallow natural language processing techniques supported by background knowledge captured in domain-specific ontologies.

  18. Software Metrics Evaluation Based on Entropy

    CERN Document Server

    Selvarani, R; Ramachandran, Muthu; Prasad, Kamakshi

    2010-01-01

    Software engineering activities in industry have come a long way, with various improvements brought into the stages of the software development life cycle. The complexity of modern software, commercial constraints and the expectation of high-quality products demand accurate fault prediction based on OO design metrics at the class level in the early stages of software development. Object-oriented class metrics are used as quality predictors throughout the entire OO software development life cycle, even when a highly iterative, incremental model or an agile software process is employed. Recent research has shown that some OO design metrics are useful for predicting the fault-proneness of classes. In this paper the empirical validation of a set of metrics proposed by Chidamber and Kemerer is performed to assess their ability to predict software quality in terms of fault proneness and degradation. We have also proposed the design complexity of object-oriented software with the Weighted Methods per Class metric...
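
    An entropy-based view of a class-level metric can be sketched simply: take the Shannon entropy of the metric's distribution across classes. A Python illustration with hypothetical Weighted Methods per Class values (the binning is an assumption, not the paper's procedure):

      # Shannon entropy of a metric distribution; higher = values spread more evenly across bins.
      import numpy as np

      def metric_entropy(values, bins=5):
          counts, _ = np.histogram(values, bins=bins)
          p = counts[counts > 0] / counts.sum()
          return -np.sum(p * np.log2(p))

      wmc = [3, 5, 4, 12, 7, 6, 25, 4, 5, 9, 3, 30]    # hypothetical per-class WMC values
      print(f"entropy = {metric_entropy(wmc):.2f} bits")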

  19. COMPARISON OF EUROPEAN UNION QUALITY LABELS UTILIZATION IN VISEGRAD GROUP COUNTRIES

    Directory of Open Access Journals (Sweden)

    Šárka Velčovská

    2014-09-01

    Full Text Available The paper focuses on the European Union quality schemes known as Protected Designation of Origin, Protected Geographical Indication and Traditional Speciality Guaranteed, used in the agricultural and food products sector. The aim of the paper is to analyse and compare the utilization of these labels by the Visegrad Group countries. Firstly, a literature review of the topic is given. Further, the European Union quality scheme is specified and a comparison of the Visegrad Group countries according to selected criteria is provided. The empirical part of the paper involves analysis and discussion of marketing research results. Data come from the Database of Origin and Registration. The sample consists of all 93 product names registered as Protected Designation of Origin, Protected Geographical Indication or Traditional Speciality Guaranteed in the database by the Visegrad Group countries as of 30 April 2013. The frequency of use of the labels is analysed by type of label, country of origin and product class. Pearson's chi-square test of independence and Pearson's and Cramér's contingency coefficients were used to confirm whether significant differences exist between variables.

  20. Metric modular spaces

    CERN Document Server

    Chistyakov, Vyacheslav

    2015-01-01

    Aimed toward researchers and graduate students familiar with elements of functional analysis, linear algebra, and general topology, this book contains a general study of modulars, modular spaces, and metric modular spaces. Modulars may be thought of as generalized velocity fields and serve two important purposes: they generate metric spaces in a unified manner and provide a weaker convergence, the modular convergence, whose topology is non-metrizable in general. Metric modular spaces are extensions of metric spaces, metric linear spaces, and classical modular linear spaces. The topics covered include the classification of modulars, metrizability of modular spaces, modular transforms and duality between modular spaces, and metric and modular topologies. Applications illustrated in this book include: the description of superposition operators acting in modular spaces, the existence of regular selections of set-valued mappings, new interpretations of spaces of Lipschitzian and absolutely continuous mappings, the existe...

  1. Metric diffusion along foliations

    CERN Document Server

    Walczak, Szymon M

    2017-01-01

    Up-to-date research in metric diffusion along compact foliations is presented in this book. Beginning with fundamentals from optimal transportation theory and the theory of foliations, the book moves on to cover the Wasserstein distance, the Kantorovich Duality Theorem, and the metrization of the weak topology by the Wasserstein distance. Metric diffusion is defined, the topology of the metric space is studied, and the limits of diffused metrics along compact foliations are discussed. Essentials on foliations, holonomy, heat diffusion, and compact foliations are detailed and vital technical lemmas are proved to aid understanding. Graduate students and researchers in geometry, topology and the dynamics of foliations and laminations will find this supplement useful, as it presents facts about metric diffusion along non-compact foliations and provides a full description of the limit for metrics diffused along a foliation with at least one compact leaf in dimension two.

  2. Enterprise Sustainment Metrics

    Science.gov (United States)

    The Air Force sustainment enterprise does not have metrics that ... adequately measure key sustainment parameters, according to the 2011 National... standardized and do not contribute to the overall assessment of the sustainment enterprise. This paper explores the development of a single metric... is not feasible. To answer the question of whether the sustainment enterprise provides cost-effective readiness for a weapon system, a suite of metrics is

  3. -Metric Space: A Generalization

    Directory of Open Access Journals (Sweden)

    Farshid Khojasteh

    2013-01-01

    Full Text Available We introduce the notion of a -metric as a generalization of a metric, obtained by replacing the triangle inequality with a more general inequality. We investigate the topology of the spaces induced by a -metric and present some of its essential properties. Further, we give characterizations of well-known fixed point theorems, such as the Banach and Caristi types, in the context of such spaces.

  4. Prognostic Performance Metrics

    Data.gov (United States)

    National Aeronautics and Space Administration — This chapter presents several performance metrics for offline evaluation of prognostics algorithms. A brief overview of different methods employed for performance...

  5. Topics in Metric Approximation

    Science.gov (United States)

    Leeb, William Edward

    This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity -- such as the Lipschitz norm -- and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better-behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space and not the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.

  6. Compactness in Metric Spaces

    Directory of Open Access Journals (Sweden)

    Nakasho Kazuhisa

    2016-09-01

    Full Text Available In this article, we mainly formalize in Mizar [2] the equivalence among a few compactness definitions of metric spaces, norm spaces, and the real line. In the first section, we formalize general topological properties of metric spaces. We discuss openness and closedness of subsets in metric spaces in terms of convergence of element sequences. In the second section, we first formalize the definition of sequential compactness, and then discuss the equivalence of compactness, countable compactness, sequential compactness, and total boundedness with completeness in metric spaces.

  7. Impact of nausea/vomiting on quality of life as a visual analogue scale-derived utility score.

    Science.gov (United States)

    Grunberg, S M; Boutin, N; Ireland, A; Miner, S; Silveira, J; Ashikaga, T

    1996-11-01

    Pharmacoeconomic analysis is often based upon incremental cost per increase in survival (cost-effectiveness). Under this definition, supportive care measures, which increase quality but not quantity of life, generate a zero denominator and cannot be directly compared with other components of health care cost. Cost-utility analysis, which measures incremental cost per increase in quality-adjusted life-years (QALY), where QALY = utility score x time at risk, addresses this problem, since successful supportive intervention increases the utility score and thus provides a finite denominator in QALY even when absolute survival is unchanged. However, utility scores for various supportive care modalities have not been well defined. As a pilot study to generate a first approximation of a utility score for nausea/vomiting, we used a rating scale technique and administered two visual analogue scale questions to 30 patients completing a cycle of chemotherapy. Patients rated their global quality of life during their previous cycle of chemotherapy with the hypothetical absence or presence of nausea/vomiting as the only variable. The study population included 8 male and 22 female patients, with a median age of 56 years. The most common malignancies were breast cancer (8 patients), lung cancer (7 patients), and hematologic malignancies (7 patients). On a 100 mm visual analogue scale, the mean score for overall quality of life during chemotherapy was 79 mm without nausea/vomiting and 27 mm with nausea/vomiting (P < 0.001, paired t-test). The implied marked increase in utility with relief of nausea/vomiting suggests a significant impact on cost-utility analysis. Similar methodology could be used to estimate utility scores in other areas of supportive care.
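
    The QALY arithmetic above can be made concrete; a Python sketch that rescales the reported VAS scores to 0-1 utilities and assumes, purely for illustration, a four-month time at risk:

      # QALY = utility score x time at risk, with utilities taken as VAS/100.
      vas_with_nv, vas_without_nv = 27, 79      # mean VAS scores (mm) reported in the study
      time_at_risk_years = 4 / 12               # assumed duration of a chemotherapy course

      u_with = vas_with_nv / 100
      u_without = vas_without_nv / 100
      qaly_gain = (u_without - u_with) * time_at_risk_years
      print(f"utility gain = {u_without - u_with:.2f}, QALY gain = {qaly_gain:.3f}")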

  8. Energy Metrics for State Government Buildings

    Science.gov (United States)

    Michael, Trevor

    Measuring true progress towards energy conservation goals requires the accurate reporting and accounting of energy consumption. An accurate energy metrics framework is also a critical element for verifiable Greenhouse Gas Inventories. Energy conservation in government can reduce expenditures on energy costs leaving more funds available for public services. In addition to monetary savings, conserving energy can help to promote energy security, air quality, and a reduction of carbon footprint. With energy consumption/GHG inventories recently produced at the Federal level, state and local governments are beginning to also produce their own energy metrics systems. In recent years, many states have passed laws and executive orders which require their agencies to reduce energy consumption. In June 2008, SC state government established a law to achieve a 20% energy usage reduction in state buildings by 2020. This study examines case studies from other states who have established similar goals to uncover the methods used to establish an energy metrics system. Direct energy consumption in state government primarily comes from buildings and mobile sources. This study will focus exclusively on measuring energy consumption in state buildings. The case studies reveal that many states including SC are having issues gathering the data needed to accurately measure energy consumption across all state buildings. Common problems found include a lack of enforcement and incentives that encourage state agencies to participate in any reporting system. The case studies are aimed at finding the leverage used to gather the needed data. The various approaches at coercing participation will hopefully reveal methods that SC can use to establish the accurate metrics system needed to measure progress towards its 20% by 2020 energy reduction goal. Among the strongest incentives found in the case studies is the potential for monetary savings through energy efficiency. Framing energy conservation

  9. Creation of a simple natural language processing tool to support an imaging utilization quality dashboard.

    Science.gov (United States)

    Swartz, Jordan; Koziatek, Christian; Theobald, Jason; Smith, Silas; Iturrate, Eduardo

    2017-05-01

    ://iturrate.com/simpleNLP. Results obtained using this tool can be applied to enhance quality by presenting information about utilization and yield to providers via an imaging dashboard. Copyright © 2017 Elsevier B.V. All rights reserved.
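
    The record above (truncated in this listing) describes a simple NLP tool for classifying imaging results to feed a utilization dashboard; a minimal keyword-and-negation sketch in Python of that style of classifier (the patterns are illustrative assumptions, not the authors' rule set):

      # Toy radiology-report classifier: negation check first, then positive findings.
      import re

      NEGATION = re.compile(r"\bno (acute )?(fracture|hemorrhage|abnormality)\b", re.IGNORECASE)
      FINDING = re.compile(r"\b(fracture|hemorrhage|mass|edema)\b", re.IGNORECASE)

      def classify_report(text):
          if NEGATION.search(text):
              return "negative"
          return "positive" if FINDING.search(text) else "indeterminate"

      print(classify_report("No acute fracture or hemorrhage identified."))   # -> negative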

  10. Evaluation of Journal Quality Based on Google Scholar Metrics

    Institute of Scientific and Technical Information of China (English)

    杨毓丽; 陈陶; 张苏

    2013-01-01

    Google announced a new feature of its Scholar service, Google Scholar Metrics, on April 1, 2012, publishing h-index rankings of the top 100 publications for 2007-2011 in each of ten languages, Chinese included; users can search a journal title to retrieve its h-index. Taking the 100 Chinese journals ranked by Google Scholar Metrics as the source data, this study computes journal h-indices from both Google Scholar Metrics and the Chinese journal citation database CNKI to examine the differences and relations between the two. The statistical results show that the Google h-index and the CNKI h-index are correlated; for the same journal, 90% of the Google h-indices are lower than the corresponding CNKI h-indices.

  11. An elicitation of utility for quality of life under prospect theory.

    Science.gov (United States)

    Attema, Arthur E; Brouwer, Werner B F; l'Haridon, Olivier; Pinto, Jose Luis

    2016-07-01

    This paper performs several tests of decision analysis applied to the health domain. First, we conduct a test of the normative expected utility theory. Second, we investigate the possibility to elicit the more general prospect theory. We observe risk aversion for gains and losses and violations of expected utility. These results imply that mechanisms governing decisions in the health domain are similar to those in the monetary domain. However, we also report one important deviation: utility is universally concave for the health outcomes used in this study, in contrast to the commonly found S-shaped utility for monetary outcomes, with concave utility for gains and convex utility for losses.

  12. High quality factor polymeric Fabry-Perot resonators utilizing a polymer waveguide.

    Science.gov (United States)

    Tadayon, Mohammad Amin; Baylor, Martha-Elizabeth; Ashkenazi, Shai

    2014-03-10

    Optical resonators are used in a variety of applications ranging from sensors to lasers and signal routing in high-volume communication networks. Achieving a high quality (Q) factor is necessary for higher sensitivity in sensing applications and for narrow-linewidth light emission in most lasing applications. In this work, we propose a new approach to achieving a very high Q-factor in polymeric Fabry-Perot resonators by overcoming light diffraction inside the optical cavity. This is achieved by inducing a refractive index feature inside the optical cavity that simply creates a waveguide between the two mirrors. The approach eliminates diffraction loss from the cavity, so the Q-factor is limited only by mirror loss and absorption. To demonstrate this claim, a device was fabricated consisting of two dielectric Bragg reflectors with a 100 μm layer of photosensitive polymer between them; the refractive index of this polymer can be modified using standard photolithography processes. The measured finesse of the fabricated device was 692 and the Q-factor was 55000.

  13. Pragmatic security metrics applying metametrics to information security

    CERN Document Server

    Brotby, W Krag

    2013-01-01

    Other books on information security metrics discuss number theory and statistics in academic terms. Light on mathematics and heavy on utility, PRAGMATIC Security Metrics: Applying Metametrics to Information Security breaks the mold. This is the ultimate how-to-do-it guide for security metrics.Packed with time-saving tips, the book offers easy-to-follow guidance for those struggling with security metrics. Step by step, it clearly explains how to specify, develop, use, and maintain an information security measurement system (a comprehensive suite of metrics) to

  14. Surveillance Metrics Sensitivity Study

    Energy Technology Data Exchange (ETDEWEB)

    Bierbaum, R; Hamada, M; Robertson, A

    2011-11-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to develop a more quantitative and/or qualitative metric(s) describing the results of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intending to answer level-of-confidence type questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of four metrics types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.
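
    One level-of-confidence question such metrics address can be sketched as a simple binomial detection calculation: if a defect afflicts a fraction p of the stockpile, what is the chance that n random surveillance tests catch at least one instance? (A Python illustration, not the Tri-Lab team's tolerance-limit or power formulas.)

      # Probability of observing at least one defective unit in n random tests.
      def detection_probability(p_defect, n_tests):
          return 1.0 - (1.0 - p_defect) ** n_tests

      for n in (10, 50, 100):
          print(f"n = {n:3d}: P(detect) = {detection_probability(0.03, n):.2f}")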

  15. Metrics for Transportation.

    Science.gov (United States)

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in transportation, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational terminology,…

  16. Metrics for Food Distribution.

    Science.gov (United States)

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in food distribution, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  17. Metric Education Evaluation Package.

    Science.gov (United States)

    Kansky, Bob; And Others

    This document was developed out of a need for a complete, carefully designed set of evaluation instruments and procedures that might be applied in metric inservice programs across the nation. Components of this package were prepared in such a way as to permit local adaptation to the evaluation of a broad spectrum of metric education activities.…

  18. Computational visual distinctness metric

    NARCIS (Netherlands)

    Martínez-Baena, J.; Toet, A.; Fdez-Vidal, X.R.; Garrido, A.; Rodríguez-Sánchez, R.

    1998-01-01

    A new computational visual distinctness metric based on principles of the early human visual system is presented. The metric is applied to quantify (1) the visual distinctness of targets in complex natural scenes and (2) the perceptual differences between compressed and uncompressed images. The new

  19. Harmonic Bergman Metric

    Institute of Scientific and Technical Information of China (English)

    ZHAOZhen-gang

    2005-01-01

    We have constructed the positive definite metric matrixes for the bounded domains of Rn and proved an inequality which is about the Jacobi matrix of a harmonic mapping on a bounded domain of Rn and the metric matrix of the same bounded domain.

  20. Privacy Metrics and Boundaries

    NARCIS (Netherlands)

    L-F. Pau (Louis-François)

    2005-01-01

    This paper aims at defining a set of privacy metrics (quantitative and qualitative) for the relation between a privacy protector and an information gatherer. The aims of such metrics are: to allow assessment and comparison of different user scenarios and their differences; for ex

  1. Study of water quality improvements during riverbank filtration at three midwestern United States drinking water utilities

    Science.gov (United States)

    Weiss, W.; Bouwer, E.; Ball, W.; O'Melia, C.; Lechevallier, M.; Arora, H.; Aboytes, R.; Speth, T.

    2003-04-01

    Riverbank filtration (RBF) is a process during which surface water is subjected to subsurface flow prior to extraction from wells. During infiltration and soil passage, surface water is subjected to a combination of physical, chemical, and biological processes such as filtration, dilution, sorption, and biodegradation that can significantly improve the raw water quality (Tufenkji et al, 2002; Kuehn and Mueller, 2000; Kivimaki et al, 1998; Stuyfzand, 1998). Transport through alluvial aquifers is associated with a number of water quality benefits, including removal of microbes, pesticides, total and dissolved organic carbon (TOC and DOC), nitrate, and other contaminants (Hiscock and Grischek, 2002; Tufenkji et al., 2002; Ray et al, 2002; Kuehn and Mueller, 2000; Doussan et al, 1997; Cosovic et al, 1996; Juttner, 1995; Miettinen et al, 1994). In comparison to most groundwater sources, alluvial aquifers that are hydraulically connected to rivers are typically easier to exploit (shallow) and more highly productive for drinking water supplies (Doussan et al, 1997). Increased applications of RBF are anticipated as drinking water utilities strive to meet increasingly stringent drinking water regulations, especially with regard to the provision of multiple barriers for protection against microbial pathogens, and with regard to tighter regulations for disinfection by-products (DBPs), such as trihalomethanes (THMs) and haloacetic acids (HAAs). In the above context, research was conducted to document the water quality benefits during RBF at three major river sources in the mid-western United States, specifically with regard to DBP precursor organic matter and microbial pathogens. Specific objectives were to: 1. Evaluate the merits of RBF for removing/controlling DBP precursors and certain other drinking water contaminants (e.g. microorganisms). 2. Evaluate whether RBF can improve finished drinking water quality by removing and/or altering natural organic matter (NOM) in a

  2. Modeling Languages: metrics and assessing tools

    OpenAIRE

    Fonte, Daniela; Boas, Ismael Vilas; Azevedo, José; Peixoto, José João; Faria, Pedro; Silva, Pedro; Sá, Tiago de, 1990-; Costa, Ulisses; da Cruz, Daniela; Henriques, Pedro Rangel

    2012-01-01

    Any traditional engineering field has metrics to rigorously assess the quality of their products. Engineers know that the output must satisfy the requirements, must comply with the production and market rules, and must be competitive. Professionals in the new field of software engineering started a few years ago to define metrics to appraise their product: individual programs and software systems. This concern motivates the need to assess not only the outcome but also the process and tools em...

  3. Metric for Estimating Congruity between Quantum Images

    Directory of Open Access Journals (Sweden)

    Abdullah M. Iliyasu

    2016-10-01

    An enhanced quantum-based image fidelity metric, the QIFM metric, is proposed as a tool to assess the “congruity” between two or more quantum images. The often confounding contrariety that distinguishes between classical and quantum information processing makes the widely accepted peak signal-to-noise ratio (PSNR) ill-suited for use in the quantum computing framework, whereas the prohibitive cost of the probability-based similarity score makes it imprudent for use as an effective image quality metric. Unlike the aforementioned image quality measures, the proposed QIFM metric is calibrated as a pixel-difference-based image quality measure that is sensitive to the intricacies inherent to quantum image processing (QIP). As proposed, the QIFM is configured with in-built non-destructive measurement units that preserve the coherence necessary for quantum computation. This design moderates the cost of executing the QIFM in order to estimate congruity between two or more quantum images. A statistical analysis also shows that our proposed QIFM metric has a better correlation with digital expectation of likeness between images than other available quantum image quality measures. Therefore, the QIFM offers a competent substitute for the PSNR as an image quality measure in the quantum computing framework, thereby providing a tool to effectively assess fidelity between images in quantum watermarking, quantum movie aggregation and other applications in QIP.
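
    For contrast with the quantum-specific QIFM, the classical PSNR that the abstract argues against is straightforward to state; a minimal numpy sketch (8-bit images assumed):

        import numpy as np

        def psnr(reference, test, max_value=255.0):
            # Peak signal-to-noise ratio in dB between two equal-sized images.
            diff = np.asarray(reference, float) - np.asarray(test, float)
            mse = np.mean(diff ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(max_value**2 / mse)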

  4. Association between obesity, quality of life, physical activity and health service utilization in primary care patients with osteoarthritis.

    NARCIS (Netherlands)

    Rosemann, T.J.; Grol, R.P.T.M.; Herman, K.; Wensing, M.J.P.; Szecsenyi, J.

    2008-01-01

    ABSTRACT: OBJECTIVE: To assess the association of obesity with quality of life, health service utilization and physical activity in a large sample of primary care patients with osteoarthritis (OA). METHODS: Data were retrieved from the PraxArt project, representing a cohort of 1021 primary care pati

  5. Health Care of Latino Children with Autism and Other Developmental Disabilities: Quality of Provider Interaction Mediates Utilization

    Science.gov (United States)

    Parish, Susan; Magana, Sandra; Rose, Roderick; Timberlake, Maria; Swaine, Jamie G.

    2012-01-01

    This study examines access to, utilization of, and quality of health care for Latino children with autism and other developmental disabilities. We analyze data from the National Survey of Children with Special Health Care Needs (N = 4,414 children with autism and other developmental disabilities). Compared with White children, Latino children with…

  6. Epidural Steroids for Lumbosacral Radicular Syndrome Compared to Usual Care : Quality of Life and Cost Utility in General Practice

    NARCIS (Netherlands)

    Spijker-Huiges, Antje; Vermeulen, Karin; Winters, Jan C.; van Wijhe, Marten; van der Meer, Klaas

    Objective: To investigate the effect of adding segmental epidural steroid injections (SESIs) to usual care compared with usual care alone on quality of life and cost utility in lumbosacral radicular syndrome (LRS) in general practice. Design: A pragmatic randomized controlled trial. Results were

  7. Quality of life, health status, and health service utilization related to a new measure of health literacy: FLIGHT/VIDAS.

    Science.gov (United States)

    Ownby, Raymond L; Acevedo, Amarilis; Jacobs, Robin J; Caballero, Joshua; Waldrop-Valverde, Drenna

    2014-09-01

    Researchers have identified significant limitations in some currently used measures of health literacy. The purpose of this paper is to present data on the relation of health-related quality of life, health status, and health service utilization to performance on a new measure of health literacy in a nonpatient population. The new measure was administered to 475 English- and Spanish-speaking community-dwelling volunteers along with existing measures of health literacy and assessments of health-related quality of life, health status, and healthcare service utilization. Relations among measures were assessed via correlations, and health status and utilization were tested across levels of health literacy using ANCOVA models. The new health literacy measure is significantly related to existing measures of health literacy as well as to participants' health-related quality of life. Persons with lower levels of health literacy reported more health conditions, more frequent physical symptoms, and greater healthcare service utilization. The new measure of health literacy is valid and shows relations to measures of conceptually related constructs such as quality of life and health behaviors. FLIGHT/VIDAS may be useful to researchers and clinicians interested in a computer-administered and scored measure of health literacy. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. Holographic Spherically Symmetric Metrics

    Science.gov (United States)

    Petri, Michael

    The holographic principle (HP) conjectures, that the maximum number of degrees of freedom of any realistic physical system is proportional to the system's boundary area. The HP has its roots in the study of black holes. It has recently been applied to cosmological solutions. In this article we apply the HP to spherically symmetric static space-times. We find that any regular spherically symmetric object saturating the HP is subject to tight constraints on the (interior) metric, energy-density, temperature and entropy-density. Whenever gravity can be described by a metric theory, gravity is macroscopically scale invariant and the laws of thermodynamics hold locally and globally, the (interior) metric of a regular holographic object is uniquely determined up to a constant factor and the interior matter-state must follow well defined scaling relations. When the metric theory of gravity is general relativity, the interior matter has an overall string equation of state (EOS) and a unique total energy-density. Thus the holographic metric derived in this article can serve as simple interior 4D realization of Mathur's string fuzzball proposal. Some properties of the holographic metric and its possible experimental verification are discussed. The geodesics of the holographic metric describe an isotropically expanding (or contracting) universe with a nearly homogeneous matter-distribution within the local Hubble volume. Due to the overall string EOS the active gravitational mass-density is zero, resulting in a coasting expansion with Ht = 1, which is compatible with the recent GRB-data.

  9. Metric characterizations II

    CERN Document Server

    Blecher, David P

    2012-01-01

    The present paper is a sequel to our paper "Metric characterization of isometries and of unital operator spaces and systems". We characterize certain common objects in the theory of operator spaces (unitaries, unital operator spaces, operator systems, operator algebras, and so on), in terms which are purely linear-metric, by which we mean that they only use the vector space structure of the space and its matrix norms. In the last part we give some characterizations of operator algebras (which are not linear-metric in our strict sense described in the paper).

  10. Characterization of Multiplicative Metric Completeness

    Directory of Open Access Journals (Sweden)

    Badshshah e Romer

    2016-03-01

    We established fixed point theorems in multiplicative metric spaces. The obtained results generalize the Banach contraction principle in multiplicative metric spaces and also characterize completeness of the underlying multiplicative metric space.
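
    For readers new to the setting, one standard formulation of a multiplicative metric d on a set X, together with the contraction condition used in Banach-type theorems, is the following (stated from the general literature on multiplicative metric spaces, not taken verbatim from the paper):

        \begin{align*}
          &d(x,y) \ge 1, \qquad d(x,y) = 1 \iff x = y,\\
          &d(x,y) = d(y,x),\\
          &d(x,z) \le d(x,y)\, d(y,z) \quad \text{(multiplicative triangle inequality)},\\
          &\exists\, \lambda \in [0,1):\ d(Tx,Ty) \le d(x,y)^{\lambda} \quad \text{(multiplicative contraction)}.
        \end{align*}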

  11. 基于敏捷管理模式的软件质量度量方法研究%Research on Software Quality Metric Method in Agile Management Mode

    Institute of Scientific and Technical Information of China (English)

    吴刚

    2016-01-01

    Drawing on the characteristics of the agile software development management mode and the general definition of quality metrics, this article proposes a software quality measurement and tracking method, the agile software quality metric method, based on a team attribute factor and the defect value per unit of product size. The agile software quality measurement model is then applied to process data produced by agile management practice to perform quality measurement experiments; the results are analysed to summarize general methods for improving software product quality.
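
    The abstract does not spell out the formula, but a defect-density-style score scaled by a team attribute factor might look like the following sketch (the function name, signature and functional form are assumptions for illustration, not the paper's definition):

        def agile_quality_score(defect_count, product_size, team_factor=1.0):
            # Hypothetical: defects per unit of product size (e.g. per story
            # point or KLOC), scaled by a team attribute factor; lower is better.
            if product_size <= 0:
                raise ValueError("product size must be positive")
            return team_factor * defect_count / product_size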

  12. General Motors Goes Metric

    Science.gov (United States)

    Webb, Ted

    1976-01-01

    Describes the program to convert to the metric system all of General Motors Corporation products. Steps include establishing policy regarding employee-owned tools, setting up training plans, and making arrangements with suppliers. (MF)

  13. Probabilistic metric spaces

    CERN Document Server

    Schweizer, B

    2005-01-01

    Topics include special classes of probabilistic metric spaces, topologies, and several related structures, such as probabilistic normed and inner-product spaces. 1983 edition, updated with 3 new appendixes. Includes 17 illustrations.

  14. A metric for success

    Science.gov (United States)

    Carver, Gary P.

    1994-05-01

    The federal agencies are working with industry to ease adoption of the metric system. The goal is to help U.S. industry compete more successfully in the global marketplace, increase exports, and create new jobs. The strategy is to use federal procurement, financial assistance, and other business-related activities to encourage voluntary conversion. Based upon the positive experiences of firms and industries that have converted, federal agencies have concluded that metric use will yield long-term benefits that are beyond any one-time costs or inconveniences. It may be time for additional steps to move the Nation out of its dual-system comfort zone and continue to progress toward metrication. This report includes 'Metric Highlights in U.S. History'.

  15. Metrics for Event Driven Software

    Directory of Open Access Journals (Sweden)

    Neha Chaudhary

    2016-01-01

    The evaluation of a Graphical User Interface plays a significant role in improving its quality, yet very few metrics exist for such evaluation. The purpose of metrics is to obtain better measurements in terms of risk management, reliability forecast, project scheduling, and cost repression. In this paper a structural complexity metric is proposed for the evaluation of Graphical User Interfaces. Structural complexity of a Graphical User Interface is considered an indicator of its complexity, and the goal of identifying structural complexity is to measure GUI testability. A process of measuring the complexity of the user interface from a testing perspective is proposed. For GUI evaluation and calculation of structural complexity, an assessment process is designed which is based on types of events. A fuzzy model is developed to evaluate the structural complexity of a GUI. This model takes five types of events as input and returns the structural complexity of the GUI as output. Further, a relationship is established between structural complexity and the testability of event-driven software. The proposed model is evaluated with four different applications. It is evident from the results that the higher the complexity, the lower the testability of the application.
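
    The paper's model is fuzzy, taking five event-type counts as input; a crude crisp stand-in is a weighted sum over event categories. In the sketch below the category names and weights are invented for illustration and only approximate the fuzzy model:

        # Invented event categories and weights; a crisp weighted sum only
        # approximates the paper's fuzzy complexity model.
        EVENT_WEIGHTS = {"simple": 1, "composite": 2, "conditional": 3,
                         "looping": 4, "nested": 5}

        def structural_complexity(event_counts):
            # event_counts: mapping of event category -> number of occurrences.
            return sum(EVENT_WEIGHTS[kind] * n for kind, n in event_counts.items())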

  16. Social media metrics

    OpenAIRE

    Balvín, Radek

    2013-01-01

    With the growing amount of data produced by users on social media, the need to extract relevant data for marketing, research and other uses grows as well. The bachelor thesis named "Social media metrics" presents the issues of monitoring, measurement and metrics of social media. In the research part it also maps and captures present Czech practice in the measurement and monitoring of social media. I also rate the use of social media monitoring tools and usual methods of social media measurem...

  17. Metrics of Risk Associated with Defects Rediscovery

    CERN Document Server

    Miranskyy, Andriy V; Reesor, Mark

    2011-01-01

    Software defects rediscovered by a large number of customers affect various stakeholders and may: 1) hint at gaps in a software manufacturer's Quality Assurance (QA) processes, 2) lead to an overload of a software manufacturer's support and maintenance teams, and 3) consume customers' resources, leading to a loss of reputation and a decrease in sales. Quantifying the risk associated with the rediscovery of defects can help all of these stakeholders. In this chapter we present a set of metrics needed to quantify the risks. The metrics are designed to help: 1) the QA team to assess their processes; 2) the support and maintenance teams to allocate their resources; and 3) the customers to assess the risk associated with using the software product. The chapter includes a validation case study which applies the risk metrics to industrial data. To calculate the metrics we use mathematical instruments like the heavy-tailed Kappa distribution and the G/M/k queuing model.
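
    The chapter's actual instruments (the heavy-tailed Kappa distribution, the G/M/k queue) are beyond a short example, but the underlying tail-risk question can be illustrated empirically. A minimal sketch, assuming per-defect rediscovery counts are available:

        import numpy as np

        def rediscovery_exceedance(rediscovery_counts, k):
            # Empirical probability that a defect is rediscovered by more than
            # k customers -- a crude stand-in for fitting a heavy-tailed model.
            counts = np.asarray(rediscovery_counts, dtype=float)
            return float((counts > k).mean())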

  18. MPLS/VPN traffic engineering: SLA metrics

    Science.gov (United States)

    Cherkaoui, Omar; MacGibbon, Brenda; Blais, Michel; Serhrouchni, Ahmed

    2001-07-01

    Traffic engineering must be concerned with a broad definition of service that includes network availability, reliability and stability, as well as traditional traffic data on loss, throughput, delay and jitter. MPLS and Virtual Private Networks (VPNs) significantly contribute to security and Quality of Service (QoS) within communication networks, but there remains a need for metric measurement and evaluation. The purpose of this paper is to propose a methodology which gives a measure for LSP (Label Switching Path) metrics in MPLS VPN networks. We propose here a statistical method for the evaluation of those metrics. Statistical methodology is very important in this type of study since there is a large amount of data to consider. We use the notions of sample surveys, self-similar processes, linear regression, additive models and bootstrapping. The results obtained allow us to estimate the different metrics for such SLAs.
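
    Of the statistical tools listed, bootstrapping is the easiest to show compactly; a percentile-bootstrap confidence interval for an SLA statistic such as mean delay might be sketched as follows (a generic sketch, not the authors' procedure):

        import numpy as np

        def bootstrap_ci(samples, stat=np.mean, n_boot=10_000, alpha=0.05, seed=0):
            # Percentile bootstrap confidence interval for an SLA statistic
            # (e.g. mean one-way delay) computed from measured samples.
            rng = np.random.default_rng(seed)
            samples = np.asarray(samples, dtype=float)
            boot = np.array([stat(rng.choice(samples, size=samples.size, replace=True))
                             for _ in range(n_boot)])
            return np.quantile(boot, [alpha / 2, 1 - alpha / 2])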

  19. PRODUCTION QUALITY AND UTILIZATION OF FORAGE%饲草产品的质量与利用

    Institute of Scientific and Technical Information of China (English)

    常明; 当周吉

    2011-01-01

    Forage is the foundation of human development of animal husbandry, and the quality of forage products is directly related to forage utilization. We suggest that biotechnology and related techniques be used to continually improve the efficiency of forage utilization.

  20. Utilizing knowledge from prior plans in the evaluation of quality assurance.

    Science.gov (United States)

    Stanhope, Carl; Wu, Q Jackie; Yuan, Lulin; Liu, Jianfei; Hood, Rodney; Yin, Fang-Fang; Adamson, Justus

    2015-06-21

    Increased interest regarding sensitivity of pre-treatment intensity modulated radiotherapy and volumetric modulated arc radiotherapy (VMAT) quality assurance (QA) to delivery errors has led to the development of dose-volume histogram (DVH) based analysis. This paradigm shift necessitates a change in the acceptance criteria and action tolerance for QA. Here we present a knowledge-based technique to objectively quantify degradations in DVH for prostate radiotherapy. Using machine learning, organ-at-risk (OAR) DVHs from a population of 198 prior patients' plans were adapted to a test patient's anatomy to establish patient-specific DVH ranges. This technique was applied to single-arc prostate VMAT plans to evaluate various simulated delivery errors: systematic single leaf offsets, systematic leaf bank offsets, random normally distributed leaf fluctuations, systematic lag in gantry angle of the multi-leaf collimators (MLCs), fluctuations in dose rate, and delivery of each VMAT arc with a constant rather than variable dose rate. Quantitative Analyses of Normal Tissue Effects in the Clinic suggests V75Gy dose limits of 15% for the rectum and 25% for the bladder; however, the knowledge-based constraints were more stringent: 8.48 ± 2.65% for the rectum and 4.90 ± 1.98% for the bladder. 19 ± 10 mm single leaf and 1.9 ± 0.7 mm single bank offsets resulted in rectum DVHs worse than 97.7% (2σ) of clinically accepted plans. PTV degradations fell outside of the acceptable range for 0.6 ± 0.3 mm leaf offsets, 0.11 ± 0.06 mm bank offsets, 0.6 ± 1.3 mm of random noise, and 1.0 ± 0.7° of gantry-MLC lag. Utilizing a training set comprised of prior treatment plans, machine learning is used to predict a range of achievable DVHs for the test patient's anatomy. Consequently, degradations leading to statistical outliers may be identified. A knowledge-based QA evaluation enables customized QA criteria per treatment site, institution and/or physician and can often be more sensitive to
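
    The V75Gy figures quoted above are instances of the generic V_x DVH statistic: the fraction of a structure's volume receiving at least x Gy. From a per-voxel dose array it reduces to a one-liner (a minimal sketch, not the authors' code):

        import numpy as np

        def v_dose(voxel_doses_gy, threshold_gy):
            # Fraction of the structure's voxels receiving >= threshold_gy;
            # v_dose(rectum_doses, 75.0) corresponds to V75Gy.
            doses = np.asarray(voxel_doses_gy, dtype=float)
            return float((doses >= threshold_gy).mean())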

  1. SOFTWARE METRICS VALIDATION METHODOLOGIES IN SOFTWARE ENGINEERING

    Directory of Open Access Journals (Sweden)

    K.P. Srinivasan

    2014-12-01

    In software measurement, assessing the validity of software metrics is a very difficult task due to the lack of theoretical and empirical methodologies [41, 44, 45]. During recent years, a number of researchers have addressed the issue of validating software metrics. At present, software metrics are validated theoretically using properties of measures. Further, software measurement plays an important role in understanding and controlling software development practices and products. The major requirement in software measurement is that the measures must represent accurately those attributes they purport to quantify, and validation is critical to the success of software measurement. Normally, validation is a collection of analysis and testing activities across the full life cycle that complements the efforts of other quality engineering functions, and it is a critical task in any engineering project. The objective of validation is to discover defects in a system and to assess whether or not the system is useful and usable in an operational situation. In the case of software engineering, validation is one of the disciplines that help build quality into software. The major objective of the software validation process is to determine that the software performs its intended functions correctly and to provide information about its quality and reliability. This paper discusses the validation methodologies, techniques and different properties of measures that are used for software metrics validation. In most cases, theoretical and empirical validations are conducted for software metrics validation in software engineering [1-50].

  2. Crowdsourcing metrics of digital collections

    Directory of Open Access Journals (Sweden)

    Tuula Pääkkönen

    2015-12-01

    In the National Library of Finland (NLF) there are millions of digitized newspaper and journal pages, which are openly available via the public website http://digi.kansalliskirjasto.fi. To serve users better, last year the front end was completely overhauled with its main aim in crowdsourcing features, e.g., by giving end-users the opportunity to create digital clippings and a personal scrapbook from the digital collections. But how can you know whether crowdsourcing has had an impact? How much have the crowdsourcing functionalities been used so far? Did crowdsourcing work? In this paper the statistics and metrics of a recent crowdsourcing effort are analysed across the different digitized material types (newspapers, journals, ephemera). The subjects, categories and keywords given by the users are analysed to see which topics are the most appealing. Some notable public uses of the crowdsourced article clippings are highlighted. These metrics give us indications of how the end-users, based on their own interests, are investigating and using the digital collections. Therefore, the suggested metrics illustrate the versatility of the information needs of the users, varying from citizen science to research purposes. By analysing the user patterns, we can respond to the new needs of the users by making minor changes to accommodate the most active participants, while still making the service more approachable for those who are trying out the functionalities for the first time. Participation in the clippings and annotations can enrich the materials in unexpected ways and can possibly pave the way for opportunities of using crowdsourcing more also in research contexts. This creates more opportunities for the goals of open science since source data becomes available, making it possible for researchers to reach out to the general public for help. In the long term, utilizing, for example, text mining methods can allow these different end-user segments to

  3. Utility of WHOQOL-BREF in measuring quality of life in sickle cell disease

    National Research Council Canada - National Science Library

    Asnani, Monika R; Lipps, Garth E; Reid, Marvin E

    2009-01-01

    .... We have sought to study its utility in this disease population. 491 patients with sickle cell disease were administered the questionnaire including demographics, WHOQOL-Bref, Short Form-36 (SF-36...

  4. 48 CFR 711.002-70 - Metric system waivers.

    Science.gov (United States)

    2010-10-01

    ... the Office of Small and Disadvantaged Business Utilization (SDB) will be obtained prior to... each fiscal year, each USAID/W procurement activity and each Mission will submit a copy of the metric waiver log for the year to the USAID Metric Executive. (Mission logs are to be consolidated in a Mission...

  5. Quality improvement project to determine outpatient chemotherapy capacity and improve utilization.

    Science.gov (United States)

    Gruber, Marcia; Smith, Debra; O'Neal, Charnese; Hennessy, Kelli; Therrien, Melissa

    2008-01-01

    Nurses in chemotherapy administration settings are constantly challenged to increase utilization while maintaining patient safety. A performance improvement project was carried out to identify barriers to patient throughput and opportunities to improve utilization while not compromising patient safety. We found ways to safely increase the number of patients from 92 to 108 per day; however, patient tardiness and staff vacancies had a negative impact on patient wait times and nursing staff overtime.

  6. Efficient neural-network-based no-reference approach to an overall quality metric for JPEG and JPEG2000 compressed images

    NARCIS (Netherlands)

    Liu, H.; Redi, J.A.; Alers, H.; Zunino, R.; Heynderickx, I.E.J.R.

    2011-01-01

    Reliably assessing overall quality of JPEG/JPEG2000 coded images without having the original image as a reference is still challenging, mainly due to our limited understanding of how humans combine the various perceived artifacts to an overall quality judgment. A known approach to avoid the explicit

  7. Innovating for quality and value: Utilizing national quality improvement programs to identify opportunities for responsible surgical innovation.

    Science.gov (United States)

    Woo, Russell K; Skarsgard, Erik D

    2015-06-01

    Innovation in surgical techniques, technology, and care processes is essential for improving the care and outcomes of surgical patients, including children. The time and cost associated with surgical innovation can be significant, and unless it leads to improvements in outcome at equivalent or lower costs, it adds little or no value from the perspective of the patients, and decreases the overall resources available to our already financially constrained healthcare system. The emergence of a safety and quality mandate in surgery, and the development of the American College of Surgeons National Surgical Quality Improvement Program (NSQIP), allow needs-based surgical care innovation which leads to value-based improvement in care. In addition to general and procedure-specific clinical outcomes, surgeons should consider the measurement of quality from the patients' perspective. To this end, the integration of validated Patient Reported Outcome Measures (PROMs) into actionable, benchmarked institutional outcomes reporting has the potential to facilitate quality improvement in process, treatment and technology that optimizes value for our patients and health system. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Einstein Metrics on Complex Surfaces

    CERN Document Server

    Lebrun, C

    1995-01-01

    We consider compact complex surfaces with Hermitian metrics which are Einstein but not Kaehler. It is shown that the manifold must be CP2 blown up at 1, 2, or 3 points, and the isometry group of the metric must contain a 2-torus. Thus the Page metric on CP2#(-CP2) is almost the only metric of this type.

  9. The University of NSW electronic practice based research network: disease registers, data quality and utility.

    Science.gov (United States)

    Taggart, J; Liaw, S T; Dennis, S; Yu, H; Rahimi, A; Jalaludin, B; Harris, M

    2012-01-01

    Accurate, well-maintained registers are a prerequisite to co-ordinated care of patients with chronic diseases. Their effectiveness in enabling improved management is dependent on the quality of the information captured. This paper provides an overview of the methodology and data quality of the electronic Practice Based Research Network. Clinical records with no identifying information are routinely extracted from four general practices. The data are linked in the data warehouse. Data quality is assessed for completeness, correctness and consistency. Reports on data quality are given back to practices, and semi-structured interviews provide information to interpret the results and discuss how data quality could be improved. Data were mostly complete for sex and date of birth, but indigenous status, smoking status and weight were incomplete. There are generally high levels of correctness and internal consistency. Completeness of records in assisting the management of diabetes patients using the annual cycle of care was poor. GPs often use the progress notes to enter information during the consultation, and coding diagnoses was considered onerous. The routine capture of electronic clinical health records from primary health care and health services can be used to monitor performance and improve the quality of clinical records. There is a need for accurate and comprehensive clinical records to ensure the safety and quality of clinical practice. Understanding the true reasons for poor data quality is complex. Having a community-based research network may assist in answering some of these questions. Electronic health records are increasingly being used for secondary research and evaluation, beyond the primary purpose of supporting clinical care. The data must be of sufficient quality to support these purposes.
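
    Completeness, the first of the three data-quality dimensions assessed, is simple to operationalize. A sketch assuming records arrive as dictionaries; the field names and missing-value markers are illustrative, not the ePBRN schema:

        def completeness(records, fields):
            # Fraction of records carrying a usable value for each field.
            missing = (None, "", "unknown")
            n = len(records) or 1
            return {f: sum(r.get(f) not in missing for r in records) / n
                    for f in fields}

        # e.g. completeness(patients, ["sex", "date_of_birth", "smoking_status"])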

  10. Cyber threat metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Frye, Jason Neal; Veitch, Cynthia K.; Mateski, Mark Elliot; Michalski, John T.; Harris, James Mark; Trevino, Cassandra M.; Maruoka, Scott

    2012-03-01

    Threats are generally much easier to list than to describe, and much easier to describe than to measure. As a result, many organizations list threats. Fewer describe them in useful terms, and still fewer measure them in meaningful ways. This is particularly true in the dynamic and nebulous domain of cyber threats - a domain that tends to resist easy measurement and, in some cases, appears to defy any measurement. We believe the problem is tractable. In this report we describe threat metrics and models for characterizing threats consistently and unambiguously. The purpose of this report is to support the Operational Threat Assessment (OTA) phase of risk and vulnerability assessment. To this end, we focus on the task of characterizing cyber threats using consistent threat metrics and models. In particular, we address threat metrics and models for describing malicious cyber threats to US FCEB agencies and systems.

  11. Isospectral Metrics on Projective Spaces

    CERN Document Server

    Rueckriemen, Ralf

    2011-01-01

    We construct isospectral non-isometric metrics on real and complex projective space. We recall the construction using isometric torus actions by Carolyn Gordon in chapter 2. In chapter 3 we recall some facts about complex projective space. In chapter 4 we build the isospectral metrics. Chapter 5 is devoted to the non-isometry proof of the metrics built in chapter 4. In chapter 6 isospectral metrics on real projective space are derived from metrics on the sphere.

  12. Metric properties of the MacDQoL, individualized macular-disease-specific quality of life instrument, and newly identified subscales in French, German, Italian, and American populations

    OpenAIRE

    Berdeaux, G; Mesbah, M; Bradley, Clare

    2011-01-01

    The aims of this analysis were to confirm the UK results in other countries and to explore the possibility of subscales of the 25-Item Macular disease Dependent Quality of Life (MacDQoL) questionnaire.

  13. Vortices as degenerate metrics

    CERN Document Server

    Baptista, J M

    2012-01-01

    We note that the Bogomolny equation for abelian vortices is precisely the condition for invariance of the Hermitian-Einstein equation under a degenerate conformal transformation. This leads to a natural interpretation of vortices as degenerate hermitian metrics that satisfy a certain curvature equation. Using this viewpoint, we rephrase standard results about vortices and make some new observations. We note the existence of a conceptually simple, non-linear rule for superposing vortex solutions, and we describe the natural behaviour of the L^2-metric on the moduli space upon certain restrictions.

  14. Uniformly Convex Metric Spaces

    OpenAIRE

    Kell Martin

    2014-01-01

    In this paper the theory of uniformly convex metric spaces is developed. These spaces exhibit a generalized convexity of the metric from a fixed point. Using a (nearly) uniform convexity property, a simple proof of reflexivity is presented and a weak topology of such spaces is analyzed. This topology, called the co-convex topology, agrees with the usual weak topology in Banach spaces. An example of a $CAT(0)$-space with weak topology which is not Hausdorff is given. This answers questions raised b...

  15. Finsler metrics and CPT

    CERN Document Server

    Sarkar, Sarben

    2010-01-01

    The role of Finsler-like metrics in situations involving Lorentz symmetry breaking and CPT violation is discussed. Various physical instances of such metrics, both in quantum gravity and in analogue systems, are discussed. Both differences and similarities between the cases will be emphasised. In particular the medium of D-particles that arises in string theory will be examined. In this case the breaking of Lorentz invariance at the level of quantum fluctuations, together with concomitant CPT violation in certain situations, will be analysed. In particular it will be shown that correlations for neutral meson pairs will be modified and a new contribution to baryogenesis will appear.

  16. Quality of life and utility measures: clinical parameters for decision-making in health

    OpenAIRE

    Campolina,Alessandro Gonçalves; Ciconelli, Rozana Mesquita [UNIFESP

    2006-01-01

    In recent decades, the international scientific community has become increasingly interested in the concept of quality of life. One of the most important implications of the focus on quality of life is a shift from cure to a guarantee of a better life as a health care goal, as well as the inclusion of individuals' preferences for certain health states in the decision-making process associated with treatments, diagnostic strategies, and health spending. This is especially important as the prev...

  17. Future of the PCI Readmission Metric.

    Science.gov (United States)

    Wasfy, Jason H; Yeh, Robert W

    2016-03-01

    Between 2013 and 2014, the Centers for Medicare and Medicaid Services and the National Cardiovascular Data Registry publicly reported risk-adjusted 30-day readmission rates after percutaneous coronary intervention (PCI) as a pilot project. A key strength of this public reporting effort included risk adjustment with clinical rather than administrative data. Furthermore, because readmission after PCI is common, expensive, and preventable, this metric has substantial potential to improve quality and value in American cardiology care. Despite this, concerns about the metric exist. For example, few PCI readmissions are caused by procedural complications, limiting the extent to which improved procedural technique can reduce readmissions. Also, similar to other readmission measures, PCI readmission is associated with socioeconomic status and race. Accordingly, the metric may unfairly penalize hospitals that care for underserved patients. Perhaps in the context of these limitations, Centers for Medicare and Medicaid Services has not yet included PCI readmission among metrics that determine Medicare financial penalties. Nevertheless, provider organizations may still wish to focus on this metric to improve value for cardiology patients. PCI readmission is associated with low-risk chest discomfort and patient anxiety. Therefore, patient education, improved triage mechanisms, and improved care coordination offer opportunities to minimize PCI readmissions. Because PCI readmission is common and costly, reducing PCI readmission offers provider organizations a compelling target to improve the quality of care, and also performance in contracts that involve shared financial risk.

  18. A new universal colour image fidelity metric

    NARCIS (Netherlands)

    Toet, A.; Lucassen, M.P.

    2003-01-01

    We extend a recently introduced universal grayscale image quality index to a newly developed perceptually decorrelated colour space. The resulting colour image fidelity metric quantifies the distortion of a processed colour image relative to its original version. We evaluated the new colour image fi
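
    The underlying grayscale universal quality index of Wang and Bovik combines correlation, luminance distortion and contrast distortion in a single statistic; its global form can be sketched as follows (the published metric is normally computed over sliding windows, and the extension above adds a perceptually decorrelated colour space):

        import numpy as np

        def universal_quality_index(x, y):
            # Global-form Wang-Bovik index: 1.0 means the images are identical.
            x = np.asarray(x, float).ravel()
            y = np.asarray(y, float).ravel()
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(ddof=1), y.var(ddof=1)
            cov = ((x - mx) * (y - my)).sum() / (x.size - 1)
            return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))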

  19. Coal quality and coal utilization in the twenty-first century

    Energy Technology Data Exchange (ETDEWEB)

    Finkelman, R.B. [US Geological Survey, Reston, VA (USA)

    2000-07-01

    Knowledge of coal quality, the physical, chemical, and mineralogical characteristics and properties of coal will become increasingly important as we enter the 21st century. To maximize the efficiency of coal use and to minimize its environmental and human health impacts we will need to generate information on a broad array of coal quality parameters and to improve our ability to generate comprehensive, precise, and accurate coal quality data. To this end there is an international effort to quantify the modes of occurrence of the important elements in coal such as arsenic, mercury, selenium, sodium, nickel, and chromium. Quantitative information on the modes of occurrence of elements in coal is essential for the development of models to predict element behaviour during in-ground leaching, weathering, coal cleaning, and combustion. Anticipating the behaviour of the elements is necessary for evaluating the environmental and human health impacts, technological impacts, and economic by-product potential of coal use. Although millions of coal analyses have been performed worldwide, existing national coal quality databases are generally of limited use because much of the data are not readily accessible; geographic coverages are not comprehensive; analytical data may not be accurate; and samples may not be representative, or current. To address this problem a new international coal quality database is being developed to provide accurate, comprehensive, and reliable coal quality data to the host country and to the world coal community. Reliable coal quality data made available to the world coal community can be used to assess the opportunities for the transfer of appropriate technology and may be of value with various domestic and international policy decisions such as evaluating coal export/import opportunities and in assessing environmental impacts of coal use. 4 refs.

  20. The effect of phytase and fructooligosaccharide supplementation on growth performance, bone quality, and phosphorus utilization in broiler chickens.

    Science.gov (United States)

    Shang, Y; Rogiewicz, A; Patterson, R; Slominski, B A; Kim, W K

    2015-05-01

    An experiment was conducted to investigate the effects of phytase and 2 levels of fructooligosaccharide (FOS) supplementation on growth performance, bone mineralization, and P utilization of broiler chickens. A total of 210 day-old male broiler chickens (Ross) were randomly placed into 7 dietary treatments consisting of 6 replicates with 5 birds per pen. The experiment was designed as an augmented 2 × 3 factorial arrangement with 0 or 500 U/kg of phytase and 0, 0.5% or 1% of FOS added to a reduced Ca (0.8%) and available P (0.25%) negative control diet (NC). A positive control diet (PC) that contained 1% Ca and 0.45% available P was also included. During the entire experimental period, phytase supplementation significantly improved growth performance. Phytase supplementation increased femur BMD. Phytase alone and in combination with 0.5% FOS significantly increased P utilization when compared with other treatments. Overall, phytase supplementation in low Ca and P diets improved growth performance, bone quality, and P utilization. However, supplementing NC diets with phytase and FOS did not result in bone mineralization values comparable with that of the PC diet. The application of dietary FOS alone had a negative effect on broiler bone quality.

  1. Metrical Phonology and SLA.

    Science.gov (United States)

    Tice, Bradley S.

    Metrical phonology, a linguistic process of phonological stress assessment and diagrammatic simplification of sentence and word stress, is discussed as it is found in the English language with the intention that it may be used in second language instruction. Stress is defined by its physical and acoustical correlates, and the principles of…

  2. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  3. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

  4. Metric of States

    Institute of Scientific and Technical Information of China (English)

    MA Zhi-Hao

    2008-01-01

    The metric of quantum states plays an important role in quantum information theory. In this letter, we find a deep connection between quantum logic theory and quantum information theory. Using the method of quantum logic, we obtain a famous inequality in quantum information theory, and we answer a question raised by S. Gudder.

  5. Importance and utility of microbial elements in evaluating soil quality: case studies in silvopastoral systems

    Directory of Open Access Journals (Sweden)

    Victoria Eugenia Vallejo Quintero

    2013-07-01

    Environmental sustainability is achieved by maintaining and improving soil quality. This quality is defined as "the ability of soil to function" and is evaluated by measuring a minimum data set corresponding to different soil properties (physical, chemical and biological). However, assessment of these properties does not meet all the conditions necessary for ideal indicators, such as: clearly discriminating between the use and/or management systems under evaluation, sensitivity to stress conditions associated with anthropogenic actions, easy measurement, accessibility to many users and short response time. Because loss of quality is associated with the alteration of many processes performed by soil microorganisms, microbial measures meet the above conditions and have been proposed as valid indicators for diagnosing the impact of changes in land use and ecosystem restoration. Thus, through the evaluation of the density, activity and/or structure-composition of microorganisms we can determine whether current management systems maintain, improve or degrade the soil. In this article we review the main concepts related to soil quality and its indicators. We discuss the effect of the implementation of silvopastoral systems on soil quality, with an emphasis on the use of microbial indicators.

  6. [Applicability and perceived utility of the European Quality Instrument for Health Promotion (EQUIHP) in a health promotion programme].

    Science.gov (United States)

    Cerdá-Gómez, Rebeca; Paredes-Carbonell, Joan J; López-Sánchez, M Pilar

    2017-03-23

    To describe the results of applying the European Quality Instrument for Health Promotion (EQUIHP) tool in the MIHsalud programme and to discuss its utility as perceived by the programme's team members. Evaluation study applying EQUIHP to a health promotion programme. A total of ten MIHsalud staff (eight women and two men) completed the EQUIHP and participated in two group interviews to discuss its perceived utility. The programme obtained a total quality score of 6.5 points out of 10. The use of EQUIHP enabled the programme's weaknesses to be identified, such as the lack of a communication plan, evaluability and sustainability, as well as its strengths, such as the inclusion of health promotion principles. The MIHsalud team believes that the EQUIHP is a useful tool which can facilitate a comprehensive evaluation of the programme as a health promotion initiative. The use of the EQUIHP has made it possible to evaluate the quality of the programme and to make recommendations for its improvement, and it could be applied to other programmes and activities. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  7. Cohesion Metrics for Ontology Design and Application

    Directory of Open Access Journals (Sweden)

    Haining Yao

    2005-01-01

    Recently, domain-specific ontology development has been driven by research on the Semantic Web. Ontologies have been suggested for use in many application areas targeted by the Semantic Web, such as dynamic web service composition and general web service matching. Fundamental characteristics of these ontologies must be determined in order to make effective use of them: for example, Sirin, Hendler and Parsia have suggested that determining fundamental characteristics of ontologies is important for dynamic web service composition. Our research examines cohesion metrics for ontologies. These cohesion metrics examine the fundamental quality of cohesion as it relates to ontologies.

  8. Implementing the Data Center Energy Productivity Metric

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Marquez, Andres; Rawson, Andrew; Cader, Tahir; Fox, Kevin M.; Gustafson, William I.; Mundy, Christopher J.

    2012-10-01

    As data centers proliferate in both size and number, their energy efficiency is becoming increasingly important. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high performance computing data center. We found that DCeP was successful in clearly distinguishing between different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve (or even maximize) energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and among data centers.
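
    As defined above, DCeP is a plain ratio; the hard part in practice is agreeing on the unit of "useful work". A trivial sketch, with units left to whatever the data center defines (e.g. completed jobs and kWh):

        def dcep(useful_work, energy_consumed):
            # Data Center Energy Productivity: useful work produced divided by
            # the energy consumed performing that work. What counts as "useful
            # work" is workload-specific and must be defined per data center.
            return useful_work / energy_consumed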

  9. A Retention Assessment Process: Utilizing Total Quality Management Principles and Focus Groups

    Science.gov (United States)

    Codjoe, Henry M.; Helms, Marilyn M.

    2005-01-01

    Retaining students is a critical topic in higher education. Methodologies abound to gather attrition data as well as key variables important to retention. Using the theories of total quality management and focus groups, this case study gathers and reports data from current college students. Key results, suggestions for replication, and areas for…

  10. Utilization of Aromatic Rice in Improving Grain Quality of Hybrid Rice (Ⅰ)

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    To improve grain quality of the high-yielding hybrid rice in China, we introduced the aromatic rice MR365, an improved Indian cultivar, from IRRI in 1984 and began to transfer its aroma and good quality characters into the existing maintainer lines. In the meantime, research on the inheritance of aroma, aimed at increasing the breeding efficiency, was also conducted. It was found that the inheritance of aroma in MR365 and its derivatives was controlled by one pair of recessive major genes, based on the KOH-soaking method. There existed disparity in aroma degree among different grains of the F2 generation, and different aromatic CMS lines derived from the same aromatic donor also had a little difference in the degree of aroma, which implies that, besides the major genes, aroma may also be affected by the genetic background or minor genes. Xiangxiang 2A, developed from the cross of V20A//V20B/MR365, is the first aromatic CMS line bred in China. It is not only aromatic but also has good grain quality and combining ability. Using it as the female parent, Xiangyou 63 (Xiangxiang 2A / Minghui 63), the first quasi-aromatic hybrid rice combination in China, was developed and released to farmers in 1995. Xiangyou 63 is characterized as quasi-aromatic or partially aromatic (because only a portion of, not all, grains are aromatic), with good grain quality, high-yielding ability, good blast resistance and wide adaptability.

  12. Intensive Care Unit Utilization and Interhospital Transfers As Potential Indicators of Rural Hospital Quality

    Science.gov (United States)

    Wakefield, Douglas S.; Ward, Marcia; Miller, Thomas; Ohsfeldt, Robert; Jaana, Mirou; Lei, Yang; Tracy, Roger; Schneider, John

    2004-01-01

    Obtaining meaningful information from statistically valid and reliable measures of the quality of care for disease-specific care provided in small rural hospitals is limited by small numbers of cases and different definitive care capacities. An alternative approach may be to aggregate and analyze patient services that reflect more generalized care…

  13. Utilizing Depth of Colonization of Seagrasses to Develop Numeric Water Quality Criteria for Florida Estuaries

    Science.gov (United States)

    US EPA is working with state and local partners in Florida to develop numeric water quality criteria to protect estuaries from nutrient pollution. Similar to other nutrient management programs in Florida, EPA is considering status of seagrass habitats as an indicator of biologic...

  14. The influence of quality maternity waiting homes on utilization of facilities for delivery in rural Zambia.

    Science.gov (United States)

    Henry, Elizabeth G; Semrau, Katherine; Hamer, Davidson H; Vian, Taryn; Nambao, Mary; Mataka, Kaluba; Scott, Nancy A

    2017-05-30

    Residential accommodation for expectant mothers adjacent to health facilities, known as maternity waiting homes (MWH), is an intervention designed to improve access to skilled deliveries in low-income countries like Zambia, where the maternal mortality ratio is estimated at 398 deaths per 100,000 live births. Our study aimed to assess the relationship between MWH quality and the likelihood of facility delivery in Kalomo and Choma Districts in Southern Province, Zambia. We systematically assessed and inventoried the functional capacity of all existing MWH using a quantitative facility survey and photographs of the structures. We calculated a composite score and used multivariate regression to quantify MWH quality and its association with the likelihood of facility delivery using household survey data collected on delivery location in Kalomo and Choma Districts from 2011-2013. MWH were generally in poor condition and composite scores varied widely, with a median score of 28.0, ranging from 12 to 66 out of a possible 75 points. Of the 17,200 total deliveries captured from 2011-2013 in 40 study catchment area facilities, a higher proportion occurred in facilities where there was either a MWH or the health facility provided space for pregnant waiting mothers, compared to those with no accommodations (60.7% versus 55.9%). Where facilities had an MWH, women with catchment area MWHs rated medium or high quality had a 95% increase in the odds of facility delivery relative to those whose catchment area MWHs were of poor quality (OR: 1.95, 95% CI 1.76, 2.16). Improving both the availability and the quality of MWH represents a potentially useful strategy for increasing facility delivery in rural Zambia. The Zambia Chlorhexidine Application Trial is registered at ClinicalTrials.gov (identifier: NCT01241318).

  15. Metric scales for emotion measurement

    Directory of Open Access Journals (Sweden)

    Martin Junge

    2016-09-01

    The scale quality of indirect and direct scalings of the intensity of emotional experiences was investigated from the perspective of representational measurement theory. Study 1 focused on sensory pleasantness and disgust, Study 2 on surprise and amusement, and Study 3 on relief and disappointment. In each study, the emotion intensities elicited by a set of stimuli were estimated using Ordinal Difference Scaling, an indirect probabilistic scaling method based on graded pair comparisons. The obtained scale values were used to select test cases for the quadruple axiom, a central axiom of difference measurement. A parametric bootstrap test was used to decide whether the participants' difference judgments systematically violated the axiom. Most participants passed this test. The indirect scalings of these participants were then linearly correlated with their direct emotion intensity ratings to determine whether they agreed with them up to measurement error, and hence might be metric as well. The majority of the participants did not pass this test. The findings suggest that Ordinal Difference Scaling allows emotion intensity to be measured on a metric scale level for most participants. As a consequence, quantitative emotion theories become amenable to empirical test on the individual level using indirect measurements of emotional experience.

  16. Evaluation of the Design Metric to Reduce the Number of Defects in Software Development

    CERN Document Server

    Qureshi, M Rizwan Jameel; 10.5815/ijitcs.2012.04.02

    2012-01-01

    Software design is one of the most important and key activities in the system development life cycle (SDLC), ensuring the quality of software. Different key areas of design are vital to take into consideration while designing software. Software design describes how the software system is decomposed and managed in smaller components. The object-oriented (OO) paradigm has provided the software industry with more reliable and manageable software and designs. The quality of a software design can be measured through different metric suites such as the Chidamber and Kemerer (CK) design metrics, the MOOD metrics, and the Lorenz and Kidd metrics. The CK suite is one of the oldest and most reliable metric suites available to the software industry for evaluating OO design. This paper presents an evaluation of the CK metrics in order to propose improved CK design metric values that reduce defects during the software design phase. This paper will also describe whether a significant effect of any CK design metri...
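
    Of the CK suite, WMC (weighted methods per class) is the easiest to demonstrate: with unit weights it reduces to a method count per class. A sketch for Python source using the standard ast module (the original CK definitions target any OO language and weight methods by their own complexity):

        import ast

        def wmc_unit_weights(source_code):
            # Crude WMC: count the methods defined directly in each class,
            # weighting every method as 1.
            tree = ast.parse(source_code)
            return {node.name: sum(isinstance(child, (ast.FunctionDef,
                                                      ast.AsyncFunctionDef))
                                   for child in node.body)
                    for node in ast.walk(tree)
                    if isinstance(node, ast.ClassDef)}

        print(wmc_unit_weights("class A:\n    def f(self): pass\n"))  # {'A': 1}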

  17. Utilization of Aromatic Rice in Improving Grain Quality of Hybrid Rice

    Institute of Scientific and Technical Information of China (English)

    周坤炉; 廖伏明

    2004-01-01

    To improve the grain quality of high-yielding hybrid rice in China, we introduced the aromatic rice MR365, an improved Indian cultivar with aroma and other desirable grain quality characters such as long grain and low chalkiness, from IRRI in 1984, and began to transfer its aroma and good quality characters into existing maintainer lines with good combining ability but poor grain quality. In the meantime, we also conducted research on the inheritance of aroma to increase breeding efficiency. Through years of research and breeding practice, two cytoplasmic male sterile (CMS) lines, Xiangxiang 2A and Xinxiang A, and a series of quasi-aromatic hybrids mated from these aromatic CMS lines have been developed and released for commercial production in China. It was found that the inheritance of aroma in MR365 and its derivatives, including Xiangxiang 2A, Xinxiang A and Xiangxiang 2B, was controlled by one pair of recessive major genes, based on identification of aroma by the KOH-soaking method. We also found disparity in aroma degree among different grains of the F2 generation, and different aromatic CMS lines derived from the same aromatic donor, such as Xiangxiang 2A and Xinxiang A, also differed slightly in degree of aroma, which implies that, besides the major genes, aroma may also be affected by genetic backgrounds or minor genes. Xiangxiang 2A, developed from the cross of V20A//V20B/MR365, is the first aromatic CMS line bred in China. It is not only aromatic but also has good grain quality and combining ability. Using it as the female parent, Xiangyou 63 (Xiangxiang 2A/Minghui 63), the first quasi-aromatic hybrid rice combination in China, was developed and approved for release to farmers in 1995. Xiangyou 63 is characterized by quasi-aromaticity or partial aromaticity (because only a portion of, not all, grains are aromatic), good grain quality, high yielding ability, good blast resistance and wide adaptability. However, Xiangxiang 2A has an evident drawback

  18. The Impact of Global Budgets on Pharmaceutical Spending and Utilization: Early Experience from the Alternative Quality Contract

    Science.gov (United States)

    Afendulis, Christopher C.; Fendrick, A. Mark; Song, Zirui; Landon, Bruce E.; Safran, Dana Gelb; Mechanic, Robert E.; Chernew, Michael E.

    2016-01-01

    In 2009, Blue Cross Blue Shield of Massachusetts implemented a global budget-based payment system, the Alternative Quality Contract (AQC), in which provider groups assumed accountability for spending. We investigate the impact of global budgets on the utilization of prescription drugs and related expenditures. Our analyses indicate no statistically significant evidence that the AQC reduced the use of drugs. Although the impact may change over time, early evidence suggests that it is premature to conclude that global budget systems may reduce access to medications. PMID:25500751

  19. A Structure for Three-Phase Four-Wire Distribution System Utilizing Unified Power Quality Conditioner (UPQC)

    OpenAIRE

    B. Santhosh Kumar; K. VIJAY KUMAR

    2014-01-01

    This paper presents a novel structure for a three-phase four-wire (3P4W) distribution system utilizing a unified power quality conditioner (UPQC). The 3P4W system is realized from a three-phase three-wire system, where the neutral of the series transformer used in the series part of the UPQC is considered as the fourth wire for the 3P4W system. A new control strategy to balance the unbalanced load currents is also presented in this paper. The neutral current that may flow toward transformer ne...

  20. Validation metrics for turbulent plasma transport

    Science.gov (United States)

    Holland, C.

    2016-06-01

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. The utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak [J. L. Luxon, Nucl. Fusion 42, 614 (2002)], as part of a multi-year transport model validation activity.
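
    The flavor of such a metric can be shown in a few lines: normalize each simulation-observation discrepancy by the combined uncertainty, reflecting the paper's emphasis on uncertainty quantification. This is a generic sketch assuming independent Gaussian uncertainties, not Holland's exact formulation.

```python
import numpy as np

def validation_metric(sim, obs, sigma_sim, sigma_obs):
    """Uncertainty-weighted discrepancy between simulated and observed
    quantities (e.g., local fluxes or fluctuation levels).

    Values near or below 1 indicate agreement within the combined
    uncertainties; the mean gives a crude composite score.
    """
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    combined = np.hypot(np.asarray(sigma_sim, float),
                        np.asarray(sigma_obs, float))
    d = np.abs(sim - obs) / combined   # normalized error per channel
    return d, d.mean()

# Hypothetical comparisons (heat flux, gradient, fluctuation amplitude)
d, score = validation_metric(sim=[1.8, 5.2, 0.011],
                             obs=[1.5, 5.0, 0.015],
                             sigma_sim=[0.2, 0.4, 0.002],
                             sigma_obs=[0.3, 0.5, 0.003])
print(d, score)
```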

  1. Validation metrics for turbulent plasma transport

    Energy Technology Data Exchange (ETDEWEB)

    Holland, C., E-mail: chholland@ucsd.edu [Center for Energy Research, University of California, San Diego, La Jolla, California 92093-0417 (United States)

    2016-06-15

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. The utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak [J. L. Luxon, Nucl. Fusion 42, 614 (2002)], as part of a multi-year transport model validation activity.

  2. 3D Air Quality and the Clean Air Interstate Rule: Lagrangian Sampling of CMAQ Model Results to Aid Regional Accountability Metrics

    Science.gov (United States)

    Fairlie, T. D.; Szykman, Jim; Pierce, Robert B.; Gilliland, A. B.; Engel-Cox, Jill; Weber, Stephanie; Kittaka, Chieko; Al-Saadi, Jassim A.; Scheffe, Rich; Dimmick, Fred; hide

    2008-01-01

    The Clean Air Interstate Rule (CAIR) is expected to reduce transport of air pollutants (e.g., fine sulfate particles) in nonattainment areas in the Eastern United States. CAIR highlights the need for an integrated air quality observational and modeling system to understand sulfate as it moves in multiple dimensions, both spatially and temporally. Here, we demonstrate how results from an air quality model can be combined with a 3-D monitoring network to provide decision makers with a tool to help quantify the impact of CAIR reductions in SO2 emissions on regional transport contributions to sulfate concentrations at surface monitors in the Baltimore, MD area, and to help improve decision making for State Implementation Plans (SIPs). We sample results from the Community Multiscale Air Quality (CMAQ) model using ensemble back trajectories computed with the NASA Langley Research Center trajectory model to provide Lagrangian time series and vertical profile information that can be compared with NASA satellite (MODIS), EPA surface, and lidar measurements. Results are used to assess the regional transport contribution to surface SO4 measurements in the Baltimore MSA, and to characterize the dominant source regions for low, medium, and high SO4 episodes.

  3. Power Quality Issues In Indian Power Distribution Utilities And Feasible Solutions

    Directory of Open Access Journals (Sweden)

    Narasimha Pandit

    2015-08-01

    Full Text Available One important contributing factor to India's slow pace of development in general, and relatively poor industrial growth in particular, is the poor quality and reliability of electrical power. Earlier, consumers of electrical energy were mere acceptors: interruptions and other voltage disturbances were part of the deal. But today electric power is viewed as a product with certain characteristics which can be measured, predicted, guaranteed, improved, etc., and which has become an integral part of our lives. This paper gives insights on the different power quality (PQ) problems experienced by Indian electricity consumers and the reasons for those problems. The paper proposes feasible solutions to assist in employing or implementing appropriate mitigation techniques, with optimism for an improvement in the field scenario as more and more investments are proposed in the generation, transmission and distribution sectors, and as stringent codes and standards are imposed on those who do not maintain a minimum PQ level in the field.

  4. Case series utilizing exposure, relaxation, and rescripting therapy: impact on nightmares, sleep quality, and psychological distress.

    Science.gov (United States)

    Davis, Joanne L; Wright, David C

    2005-01-01

    Experiencing a traumatic event may initiate or exacerbate the occurrence of nightmares. Nightmares may impact sleep quality and quantity, posttraumatic stress symptoms, and depression. Recently, imagery rehearsal has gained attention in the treatment of trauma-related nightmares and is reported to be promising in the reduction of nightmares. On the basis of the vast literature describing the therapeutic benefits of exposure techniques for anxiety-related problems, the treatment was modified to enhance the exposure component. This article presents a case series using this modified version of imagery rehearsal, Exposure, Relaxation, and Rescripting Therapy, with 1 male and 3 female participants. Overall, the participants treated reported a reduction in nightmare frequency and severity; 3 out of 4 participants also reported a reduction in posttraumatic stress and depression symptomatology and an increase in sleep quality and quantity. Clinical implications and future research directions are discussed.

  5. Utilization of ko-factors for quality assurance in neutron activation analysis

    DEFF Research Database (Denmark)

    Heydorn, K.; Damsgaard, E.

    1994-01-01

    Multielement certification analysis by instrumental neutron activation analysis requires simultaneous irradiation of several elemental comparator standards in order to ascertain traceability. The internal consistency of different comparators may be checked by calculation of k0-ratios, which show large deviations from unity in case of stoichiometric or other gross errors. Quality assurance based on the Analysis of Precision of k0-ratios from replicate analyses detects unexpected variability associated with inaccurate comparator standards. In two actual cases of certification, lack of statistical control

  6. Evaluation of the utility of sediment data in NASQAN (National Stream Quality Accounting Network)

    OpenAIRE

    Koh, Robert C. Y.; Brooks, Norman H.; Vanoni, Vito A.; Taylor, Brent D.

    1983-01-01

    Monthly suspended sediment discharge measurements, made by the USGS as part of the National Stream Quality Accounting Network (NASQAN), are analysed to assess their adequacy in terms of spatial coverage, temporal sampling frequency, and accuracy of measurements, as well as in determining the sediment yield of the nation's rivers. It is concluded that the spatial distribution of NASQAN stations is reasonable but necessarily judgemental. The temporal variations of sediment data contain much high...

  7. Visual quality inspection of capsule heads utilizing shape and gray information

    Science.gov (United States)

    Wang, Qi; Zhang, Tie; Cai, Zhenlin; Jiang, Nan; Wu, Jiamei; Zhang, Xiangde

    2015-11-01

    Capsule quality inspection is important and necessary in the pharmaceutical industry. Popular methods often mis-detect capsule head defects. To solve this problem, we propose a high-quality visual defect inspection method for capsule heads. In detail, capsule head images are first captured by high-speed cameras with ring illuminators. Then, the radial symmetry transform (RST) is employed to locate the region of interest (ROI). Next, the ROI image is enhanced by a homomorphic filter and binarized by basic global thresholding. After that, six discriminative features of the ROI are extracted: a skeleton feature, binary density, number of connected boundaries, RST power, mean, and variance. Finally, these features are classified by a support vector machine to inspect the quality of the capsule head. The experiment was carried out on a self-established capsule image database, Northeastern University Capsule Image Database Version 1.0. According to our experiment, the proposed method detects the ROI correctly for all of the capsule head images, and inspection accuracy achieves a true positive rate of 100.00% and a true negative rate of 100.00%.
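
    A minimal sketch of the final classification stage follows, assuming the six ROI features have already been extracted; the feature values and the scikit-learn pipeline are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row: [skeleton, binary_density, n_boundaries, rst_power, mean, var]
# Toy stand-ins for the paper's six ROI features; label 1 = defective head.
X = np.array([[0.12, 0.80,  3, 5.1, 140.0, 22.0],
              [0.45, 0.55,  9, 2.3, 101.0, 48.0],
              [0.10, 0.82,  2, 5.4, 143.0, 20.0],
              [0.50, 0.50, 11, 2.0,  98.0, 52.0]])
y = np.array([0, 1, 0, 1])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([[0.11, 0.81, 3, 5.2, 141.0, 21.0]]))  # -> [0], a good head
```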

  8. Quality of service improvement, Handoff Prioritization and Channel utilization for Cellular Network

    Directory of Open Access Journals (Sweden)

    Anoop Kumar Gangwar

    2014-10-01

    Full Text Available Call admission control (CAC) is a significant component in wireless networks for guaranteeing quality of service requirements and improving network flexibility. Reliability is measured in terms of quality of service (QoS) and grade of service (GoS). GoS is a call-level factor comprising a new call blocking probability and a handoff call blocking probability, so a robust call admission and power control mechanism is desired. An admission control method considering QoS requirements is accountable for deciding whether an incoming call/connection can be accepted or not. One major challenge in designing a CAC arises from the fact that the cellular network has to service two major types of calls: new calls and handoff calls. The QoS performance related to these two types of calls is generally measured by the new call blocking probability and the handoff call dropping probability. Our work improves the dropping and handoff loss probabilities and presents a coherent framework for comparative studies of existing approaches, which also supports future research and the development of new call admission policies.
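
    The tension between new-call blocking and handoff dropping is commonly analyzed with a guard-channel model, sketched below under standard Poisson-arrival, exponential-service assumptions; the parameters are hypothetical and the scheme is a textbook baseline rather than the policy proposed in the paper.

```python
def guard_channel_probs(C, g, lam_new, lam_ho, mu):
    """Stationary blocking probabilities for a cell with C channels,
    the last g reserved for handoff calls (classic guard-channel CAC).

    Birth-death chain over the number of busy channels: below C - g,
    both new and handoff calls are admitted; at or above, handoffs only.
    Arrivals are Poisson, holding times exponential with rate mu.
    """
    lam_total = lam_new + lam_ho
    p = [1.0]                                  # unnormalized pi_k
    for k in range(1, C + 1):
        rate_in = lam_total if k - 1 < C - g else lam_ho
        p.append(p[-1] * rate_in / (k * mu))
    total = sum(p)
    p = [x / total for x in p]
    p_block_new = sum(p[C - g:])               # new call sees >= C-g busy
    p_drop_ho = p[C]                           # handoff dropped: all busy
    return p_block_new, p_drop_ho

# 20 channels, 2 guard channels, offered loads in Erlangs
print(guard_channel_probs(C=20, g=2, lam_new=8.0, lam_ho=2.0, mu=1.0))
```

    Increasing g lowers the handoff dropping probability at the cost of a higher new-call blocking probability, which is exactly the trade-off a CAC policy must balance.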

  9. Utilization of aromatic rice in improving grain quality of hybrid rice(Ⅱ)

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Xiangxiang 2A has an evident drawback, i.e., instability in male sterility under higher temperature conditions, resulting from the existence of minor restoring genes in it, which greatly hampered the extension of its elite hybrid Xiangyou 63, with both high yield and fine quality, in commercial production. To improve Xiangxiang 2A, the hybridization of Xiangxiang 2B with V20B was made again in 1990. A new aromatic CMS line, Xinxiang A, was successfully developed in 1994. It not only retains the favorable characteristics of Xiangxiang 2A in grain quality and combining ability, but also expresses complete and stable male sterility and high seed production yield potential. Up to now, by using it as the female parent, a series of quasi-aromatic hybrids have been developed, some of which have been released to farmers. Because such hybrids yield as high as or higher than the current common high-yielding hybrid rice varieties while possessing better grain quality, they are preferred and well received by farmers in China. The planting area under these hybrids is increasing rapidly in China.

  10. NPScape Metric GIS Data - Housing

    Data.gov (United States)

    National Park Service, Department of the Interior — NPScape housing metrics are calculated using outputs from the Spatially Explicit Regional Growth Model. Metric GIS datasets are produced seamlessly for the United...

  11. The Kerr Metric

    CERN Document Server

    Teukolsky, Saul A

    2014-01-01

    This review describes the events leading up to the discovery of the Kerr metric in 1963 and the enormous impact the discovery has had in the subsequent 50 years. The review discusses the Penrose process, the four laws of black hole mechanics, uniqueness of the solution, and the no-hair theorems. It also includes Kerr perturbation theory and its application to black hole stability and quasi-normal modes. The Kerr metric's importance in the astrophysics of quasars and accreting stellar-mass black hole systems is detailed. A theme of the review is the "miraculous" nature of the solution, both in describing in a simple analytic formula the most general rotating black hole, and in having unexpected mathematical properties that make many calculations tractable. Also included is a pedagogical derivation of the solution suitable for a first course in general relativity.

  12. Metric adjusted skew information

    DEFF Research Database (Denmark)

    Hansen, Frank

    2008-01-01

    We extend the concept of Wigner-Yanase-Dyson skew information to something we call "metric adjusted skew information" (of a state with respect to a conserved observable). This "skew information" is intended to be a non-negative quantity bounded by the variance (of an observable in a state). We establish a connection between the geometrical formulation of quantum statistics as proposed by Chentsov and Morozova and measures of quantum information as introduced by Wigner and Yanase and extended in this article. We show that the set of normalized Morozova-Chentsov functions describing the possible quantum statistics is a Bauer simplex and determine its extreme points. We determine a particularly simple skew information, the "λ-skew information," parametrized by a λ ∈ (0, 1], and show that the convex cone this family generates coincides with the set of all metric adjusted skew informations.

  13. Making metrics meaningful

    Directory of Open Access Journals (Sweden)

    Linda Bennett

    2013-07-01

    Full Text Available Continuing purchase of AHSS resources is threatened more by library budget squeezes than is that of STM resources. Librarians must justify all expenditure, but quantitative metrical analysis to assess the value to the institution of journals and specialized research databases for AHSS subjects can be inconclusive; often the number of recorded transactions is lower than for STM, as the resource may be relevant to a smaller number of users. This paper draws on a literature review and extensive primary research, including a survey of 570 librarians and academics across the Anglophone countries, findings from focus group meetings, and an analysis of user behaviour at a UK university before and after the installation of the Summon discovery system. It concludes that a new approach to metrics can help to develop resource strategies that meet changing user needs, and that usage statistics can be complemented with supplementary ROI measures to make them more meaningful.

  14. Effect of drug utilization reviews on the quality of in-hospital prescribing: a quasi-experimental study

    Directory of Open Access Journals (Sweden)

    Chabot Isabelle

    2006-03-01

    Full Text Available Abstract. Background: Drug utilization review (DUR) programs are being conducted in Canadian hospitals with the aim of improving the appropriateness of prescriptions. However, there is little evidence of their effectiveness. The objective of this study was to assess the impact of both a retrospective and a concurrent DUR program on the quality of in-hospital prescribing. Methods: We conducted an interrupted time series quasi-experimental study. Using explicit criteria for quality of prescribing, the natural history of cisapride prescription was established retrospectively in three university-affiliated hospitals. A retrospective DUR was implemented in one of the hospitals and a concurrent DUR in another, whereas the third hospital served as a control. An archivist abstracted the records of all patients who were prescribed cisapride during the observation period. The effect of the DURs relative to the control hospital was determined by comparing estimated regression coefficients from the time series models and by testing statistical significance using a 2-tailed Student's t test. Results: The concurrent DUR program significantly improved the appropriateness of prescriptions for the indication for use, whereas the retrospective DUR brought about no significant effect on the quality of prescribing. Conclusion: Results suggest a retrospective DUR approach may not be sufficient to improve the quality of prescribing. However, a concurrent DUR strategy, with direct feedback to prescribers, seems effective and should be tested in other settings with other drugs.
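
    An interrupted time series of this kind is typically analyzed with segmented regression. The sketch below shows the standard level-change/trend-change model fitted to synthetic monthly data; the model form is the textbook one, and the numbers are invented, not the study's.

```python
import numpy as np

def segmented_regression(y, intervention_idx):
    """Fit the standard interrupted-time-series model
    y_t = b0 + b1*t + b2*post_t + b3*(t - T0)*post_t + e_t,
    where post_t indicates observations after the intervention.

    b2 estimates the immediate level change, b3 the trend change.
    """
    y = np.asarray(y, float)
    t = np.arange(len(y), dtype=float)
    post = (t >= intervention_idx).astype(float)
    X = np.column_stack([np.ones_like(t), t, post,
                         (t - intervention_idx) * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [b0, b1, b2 (level change), b3 (trend change)]

# Toy monthly proportion of appropriate prescriptions, with a
# hypothetical jump after a concurrent DUR starts at month 12
rng = np.random.default_rng(0)
y = (0.5 + 0.002 * np.arange(24) + 0.15 * (np.arange(24) >= 12)
     + rng.normal(0, 0.02, 24))
print(segmented_regression(y, 12))  # b2 should recover ~0.15
```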

  15. Lake Water Quality Indexing To Identify Suitable Sites For Household Utility: A Case Study Jambhulwadi Lake;Pune(MS

    Directory of Open Access Journals (Sweden)

    Aher D. N.

    2016-05-01

    Full Text Available Water management practices need a fresh look in order to avoid a water crisis in the next two decades; this essentially requires proper management practices for a growing economy and population. The water resources of lake basins remain almost constant while demand for water for various purposes is increasing. Water pollution, as a corollary of accelerated industrial growth, has drawn concern over public health and the environment. Water is required for different purposes such as domestic, agricultural, hydro-power, navigation and recreation uses; utilization across these diverse uses should be optimized, and an awareness of water as a scarce resource should be fostered. The water quality index (WQI) is a valuable and unique rating that depicts the overall water quality status and helps select an appropriate treatment technique for the concerned issues. This paper elaborates on WQI concepts and the current scenario of Jambhulwadi Lake, which can serve in future as a natural potable groundwater resource. It also presents a case scenario of calculating WQI using the Weighted Arithmetic Water Quality Index on an example dataset. The quality of the water was evaluated by testing various physicochemical parameters such as pH, temperature, Total Dissolved Solids (TDS), alkalinity, total hardness, Dissolved Oxygen (DO), Biological Oxygen Demand (BOD), Chemical Oxygen Demand (COD), nitrites, phosphate and conductivity.
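
    The Weighted Arithmetic WQI mentioned above is directly computable. The sketch below implements the standard formulation (quality rating q_i and a unit weight w_i inversely proportional to the permissible limit S_i); the measured values, limits, and ideal values are illustrative, not the Jambhulwadi Lake data.

```python
def weighted_arithmetic_wqi(measured, standard, ideal=None):
    """Weighted Arithmetic Water Quality Index.

    measured[i] : observed value V_i of parameter i
    standard[i] : permissible limit S_i
    ideal[i]    : ideal value V_id (0 for most parameters, 7 for pH,
                  14.6 mg/L for dissolved oxygen); defaults to 0

    q_i = 100 * (V_i - V_id) / (S_i - V_id)   (quality rating)
    w_i = K / S_i,  K = 1 / sum(1 / S_i)      (unit weight)
    WQI = sum(q_i * w_i) / sum(w_i)
    """
    n = len(measured)
    ideal = ideal or [0.0] * n
    K = 1.0 / sum(1.0 / s for s in standard)
    w = [K / s for s in standard]
    q = [100.0 * (v - vid) / (s - vid)
         for v, s, vid in zip(measured, standard, ideal)]
    return sum(qi * wi for qi, wi in zip(q, w)) / sum(w)

# Hypothetical readings: [pH, TDS (mg/L), DO (mg/L), BOD (mg/L)]
print(weighted_arithmetic_wqi(measured=[7.8, 420.0, 6.2, 4.5],
                              standard=[8.5, 500.0, 5.0, 5.0],
                              ideal=[7.0, 0.0, 14.6, 0.0]))
```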

  16. Quality of fresh organic matter affects priming of soil organic matter and substrate utilization patterns of microbes

    Science.gov (United States)

    Wang, Hui; Boutton, Thomas W.; Xu, Wenhua; Hu, Guoqing; Jiang, Ping; Bai, Edith

    2015-05-01

    Changes in biogeochemical cycles and the climate system due to human activities are expected to change the quantity and quality of plant litter inputs to soils. How changing quality of fresh organic matter (FOM) might influence the priming effect (PE) on soil organic matter (SOM) mineralization is still under debate. Here we determined the PE induced by two 13C-labeled FOMs with contrasting nutritional quality (leaf vs. stalk of Zea mays L.). Soils from two different forest types yielded consistent results: soils amended with leaf tissue switched faster from negative PE to positive PE due to greater microbial growth compared to soils amended with stalks. However, after 16 d of incubation, soils amended with stalks had a higher PE than those amended with leaf. Phospholipid fatty acid (PLFA) results suggested that microbial demand for carbon and other nutrients was one of the major determinants of the PE observed. Therefore, consideration of both microbial demands for nutrients and FOM supply simultaneously is essential to understand the underlying mechanisms of PE. Our study provided evidence that changes in FOM quality could affect microbial utilization of substrate and PE on SOM mineralization, which may exacerbate global warming problems under future climate change.

  17. Learning Sequence Neighbourhood Metrics

    CERN Document Server

    Bayer, Justin; van der Smagt, Patrick

    2011-01-01

    Recurrent neural networks (RNNs) in combination with a pooling operator and the neighbourhood components analysis (NCA) objective function are able to detect the characterizing dynamics of sequences and embed them into a fixed-length vector space of arbitrary dimensionality. Subsequently, the resulting features are meaningful and can be used for visualization or nearest neighbour classification in linear time. This kind of metric learning for sequential data enables the use of algorithms tailored towards fixed length vector spaces such as R^n.

  18. Metric for Early Measurement of Software Complexity

    Directory of Open Access Journals (Sweden)

    Ghazal Keshavarz

    2011-06-01

    Full Text Available Software quality depends on several factors, such as on-time delivery, staying within budget, and fulfilling users' needs. Complexity is one of the most important factors that may affect quality; therefore, measuring and controlling complexity improves quality. So far, most research has tried to identify and measure complexity in the design and code phases. However, once we have the code or design for software, it is too late to control complexity. In this article, with emphasis on the requirement engineering process, we analyze the causes of software complexity, particularly in the first phase of software development, and propose a requirement-based metric. This metric enables a software engineer to measure complexity before actual design and implementation and to choose strategies appropriate to the software's complexity degree, thus saving cost and avoiding wasted human resources and, more importantly, leading to lower maintenance costs.

  19. Metrics and Assessment

    Directory of Open Access Journals (Sweden)

    Todd Carpenter

    2015-07-01

    Full Text Available An important and timely plenary session at the 2015 UKSG Conference and Exhibition focused on the role of metrics in research assessment. The two excellent speakers had slightly divergent views. Todd Carpenter from NISO (National Information Standards Organization) argued that altmetrics aren't "alt" anymore and that downloads and other forms of digital interaction, including social media reference, reference tracking, personal library saving, and secondary linking activity, now provide mainstream approaches to the assessment of scholarly impact. James Wilsdon is professor of science and democracy in the Science Policy Research Unit at the University of Sussex and is chair of the Independent Review of the Role of Metrics in Research Assessment commissioned by the Higher Education Funding Council for England (HEFCE). The outcome of this review will inform the work of HEFCE and the other UK higher education funding bodies as they prepare for the future of the Research Excellence Framework. He is more circumspect, arguing that metrics cannot and should not be used as a substitute for informed judgement. This article provides a summary of both presentations.

  20. Marked metric measure spaces

    CERN Document Server

    Depperschmidt, Andrej; Pfaffelhuber, Peter

    2011-01-01

    A marked metric measure space (mmm-space) is a triple (X, r, mu), where (X, r) is a complete and separable metric space and mu is a probability measure on X × I for some Polish space I of possible marks. We study the space of all (equivalence classes of) marked metric measure spaces for some fixed I. It arises as the state space in the construction of Markov processes which take values in random graphs, e.g. tree-valued dynamics describing randomly evolving genealogical structures in population models. We derive here the topological properties of the space of mmm-spaces needed to study convergence in distribution of random mmm-spaces. Extending the notion of the Gromov-weak topology introduced in (Greven, Pfaffelhuber and Winter, 2009), we define the marked Gromov-weak topology, which turns the set of mmm-spaces into a Polish space. We give a characterization of tightness for families of distributions of random mmm-spaces and identify a convergence determining algebra of functions, called polynomials.

  1. Bottom-up perspectives of extreme event and climate change threats to water quality: Drinking water utilities in California

    Science.gov (United States)

    Ekstrom, J.; Klasic, M.; Fencl, A.; Lubell, M.; Bedsworth, L. W.; Baker, E.

    2016-12-01

    Extreme events impact water quality, posing serious challenges for drinking water systems. Such extreme events, including wildfire, storm surge, and other weather-related extremes, are projected to increase under a changing climate. It remains unclear what climate change information can support water managers in preparing for more extreme events. Exploring this topic requires understanding the larger question: what is the role of scientific information in adapting to climate change? We present two parts of a three-year study geared to understand whether, where, why, and in what way climate information (or the lack of it) is used or needed to support long-term water quality planning for extreme events. In 2015 we surveyed California drinking water utilities and found a wide range of extreme event/water quality issues, perspectives on the severity of climate change threats, drought impacts, and trusted information sources relating to water quality concerns. Approximately 70% of 259 respondents had recently experienced extreme weather-related events that worsen or trigger water quality problems. Survey results informed development of a case study analysis to gain a more in-depth understanding of what type of, or when, extreme events information could support climate adaptation. Projections of extreme events are often not in a form that is usable for water quality planning. Relative to supply-related projections, water quality has received much less scientific attention, leaving it an assumed scientific information gap and management need. The question remains whether filling this gap would help adaptation, whom it would help, and in what way. Based on interviews with water systems in summer 2016, our case study analyses reinforce that extreme events threaten water quality in many ways, largely as secondary impacts of climate change. Secondary impacts involve disinfection byproducts, increasing salinity in the Delta, and the use of lower quality sources. The most common

  2. Geometry of manifolds with area metric: Multi-metric backgrounds

    Energy Technology Data Exchange (ETDEWEB)

    Schuller, Frederic P. [Perimeter Institute for Theoretical Physics, 31 Caroline Street N, Waterloo N2L 2Y5 (Canada) and Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A. Postal 70-543, Mexico D.F. 04510 (Mexico)]. E-mail: fschuller@perimeterinstitute.ca; Wohlfarth, Mattias N.R. [II. Institut fuer Theoretische Physik, Universitaet Hamburg, Luruper Chaussee 149, 22761 Hamburg (Germany)]. E-mail: mattias.wohlfarth@desy.de

    2006-07-24

    We construct the differential geometry of smooth manifolds equipped with an algebraic curvature map acting as an area measure. Area metric geometry provides a spacetime structure suitable for the discussion of gauge theories and strings, and is considerably more general than Lorentzian geometry. Our construction of geometrically relevant objects, such as an area metric compatible connection and derived tensors, makes essential use of a decomposition theorem due to Gilkey, whereby we generate the area metric from a finite collection of metrics. Employing curvature invariants for multi-metric backgrounds we devise a class of gravity theories with inherently stringy character, and discuss gauge matter actions.

  3. Utility of FT-IR imaging spectroscopy in estimating differences between the quality of bovine blastocysts

    Science.gov (United States)

    Wiecheć, A.; Opiela, J.; Lipiec, E.; Kwiatek, W. M.

    2013-10-01

    This study was conducted to verify whether FT-IR spectroscopy and Focal Plane Array (FPA) imaging can be successfully applied to estimate the quality of bovine blastocysts (on the basis of the concentration of nucleic acids and amides). The FT-IR spectra of inner cell mass from blastocysts of three different culture systems were examined. The spectral changes between blastocysts were analyzed in the DNA (spectral range of 1240-950 cm-1) and protein amide (1800-1400 cm-1) regions. Blastocyst 1 (BL1-HA) was developed from a fertilized oocyte cultured with a low concentration of hyaluronan (HA); blastocysts 2 and 3 were developed from oocytes cultured in standard conditions. Cleavage-stage blastocyst 2 (BL2-SOF) was cultured in SOF medium, while blastocyst 3 (BL3-VERO) was cultured in co-culture with VERO cells. Multivariate statistical analysis (Hierarchical Cluster Analysis - HCA and Principal Component Analysis - PCA) of single-cell spectra showed high similarity of the cells forming the inner cell mass within a single blastocyst. The main variance between the three examined blastocysts was related to the amide bands. Differences in the intensities of the amide peaks between the bovine blastocysts derived from different culture systems indicated that specific proteins reflecting the appearance of a new phenotype were produced. However, for all three blastocysts, the typical α-helix peak was twice as intense as the typical β-sheet peak, suggesting that differentiation processes had started. Taking into account the quantitative and qualitative protein composition of the examined blastocysts, it can be assumed that the quality of BL1-HA turned out to be much more similar to BL3-VERO than to BL2-SOF. FT-IR spectroscopy can be successfully applied in reproductive biology research for quality estimation of oocytes and embryos at varied stages of their development. Moreover, this technique proved to be particularly useful when the quantity of the

  4. On the Empirical Estimation of Utility Distribution Damping Parameters Using Power Quality Waveform Data

    Directory of Open Access Journals (Sweden)

    Irene Y. H. Gu

    2007-01-01

    Full Text Available This paper describes an efficient yet accurate methodology for estimating system damping. The proposed technique is based on linear dynamic system theory and Hilbert damping analysis, and requires capacitor switching waveforms only. The detected envelope of the intrinsic transient portion of the voltage waveform after capacitor bank energizing, its decay rate, and the damped resonant frequency are used to quantify the effective X/R ratio of a system. Thus, the proposed method provides complete knowledge of system impedance characteristics. The estimated system damping can also be used to evaluate the system's vulnerability to various PQ disturbances, particularly resonance phenomena, so that a utility may take preventive measures and improve the PQ of the system.
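
    The core of the method, envelope extraction via the Hilbert transform followed by a log-linear fit of the decay, can be sketched compactly. The relation X/R ≈ ω_d/(2σ) below assumes a simple series R-L source behind the capacitor (where σ = R/2L), and the synthetic waveform and sampling rate are illustrative; this is a sketch in the spirit of the paper, not its exact algorithm.

```python
import numpy as np
from scipy.signal import hilbert

def estimate_damping(v, fs):
    """Estimate decay rate and effective X/R from a capacitor-switching
    transient via Hilbert damping analysis.

    v  : detrended voltage transient samples
    fs : sampling rate in Hz
    For v(t) ~ A*exp(-sigma*t)*cos(wd*t), the analytic-signal envelope
    decays as exp(-sigma*t); for a series R-L source, sigma = R/(2L),
    so X/R at the ringing frequency is wd*L/R = wd/(2*sigma).
    """
    analytic = hilbert(v)
    envelope = np.abs(analytic)
    t = np.arange(len(v)) / fs
    sigma = -np.polyfit(t, np.log(envelope + 1e-12), 1)[0]  # decay rate
    wd = np.polyfit(t, np.unwrap(np.angle(analytic)), 1)[0]  # rad/s
    return sigma, wd, wd / (2.0 * sigma)

fs = 10_000.0
t = np.arange(0, 0.1, 1 / fs)
v = np.exp(-60.0 * t) * np.cos(2 * np.pi * 600.0 * t)  # synthetic ring-down
print(estimate_damping(v, fs))  # roughly sigma~60, wd~2*pi*600, X/R~31
```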

  5. Some References on Metric Information.

    Science.gov (United States)

    National Bureau of Standards (DOC), Washington, DC.

    This resource work lists metric information published by the U.S. Government and the American National Standards Institute. Also organizations marketing metric materials for education are given. A short table of conversions is included as is a listing of basic metric facts for everyday living. (LS)

  6. Projectively related complex Finsler metrics

    CERN Document Server

    Aldea, Nicoleta

    2011-01-01

    In this paper we introduce and study projectively related complex Finsler metrics. We prove complex versions of Rapcsák's theorem and characterize the weakly Kähler and generalized Berwald projectively related complex Finsler metrics. The complex version of Hilbert's Fourth Problem is also pointed out. As an application, the projectiveness of a complex Randers metric is described.

  7. Utilization of mixed pond ash in integrated steel plant for manufacturing superior quality bricks

    Indian Academy of Sciences (India)

    Piyush Kant Pandey; Raj Kumar Agrawal

    2002-10-01

    Fly ash (FA) poses serious problems for industry. Integrated steel plants generate huge quantities of FA from their captive power plants and other furnaces. This ash is generally disposed of in ash ponds along with other sludges and residues of steel-making operations. This changes the constitution of the FA and makes brick manufacturing difficult. This paper devises ways to use this mixed ash for successfully manufacturing mixed ash clay bricks. The bricks thus made are superior in structural and aesthetic qualities and portend huge savings in manufacturing costs, with better consumer response.

  8. Sinabung Volcanic Ash Utilization As The Additive for Paving Block Quality A and B

    Science.gov (United States)

    Sembiring, I. S.; Hastuty, I. P.

    2017-03-01

    Paving block is one of the building materials used as the top layer of a road structure, besides asphalt and concrete. Paving blocks are made of mixed materials such as portland cement or other adhesive materials, water, and aggregate. In this research, the material used as an additive to cement and concrete is volcanic ash from Mount Sinabung; based on material testing, Sinabung ash contains 74.3% silica (SiO2). This research aims to analyze the behavior of quality A and B paving blocks with and without a mixture of Sinabung ash, to analyze the workability of fresh concrete using Sinabung ash as an additive, and to compare the test results of paving blocks with and without Sinabung ash. The samples comprise a normal mixture without additive and mixtures with the addition of Sinabung ash at 5%, 10%, 15%, 20%, and 25% of the volume of concrete/m3. Each variation consists of 10 concrete samples with a 28-day curing period. Compressive strength and water absorption tests were performed to determine whether the samples accord with the type needed. According to the test results, paving blocks with Sinabung ash and curing reach quality A at the 0%, 5%, and 10% mixtures, with compressive strengths of 50.14 MPa, 46.20 MPa, and 1.49 MPa respectively, and reach quality B at the 15%, 20%, and 25% mixtures with curing and at the 0%, 5%, 10%, 15%, 20%, and 25% mixtures without curing. The absorption values obtained from the test, 6.66%, 6.73%, 6.88%, 7.03%, 7.09%, and 7.16%, show that all samples have average absorption exceeding the SNI standard of 6% and thus reach quality C. Based on the compressive strength and absorption data obtained, Sinabung ash cannot fully replace cement as the binder because of its low CaO content.

  9. Research on customer satisfaction with the quality of services provided by public utilities of the city of Belgrade

    Directory of Open Access Journals (Sweden)

    Živković Radmila

    2014-01-01

    Full Text Available The monopoly market conditions in which public companies operated ten to twenty years ago substantially dictated how public companies in Serbia conceived and conducted their business. However, changes in the environment, such as more intense competition and changing customer needs and demands, require abandoning old business orientations. Public companies are in a position to create and offer a higher level of service quality, based on better and more intensive communication with their customers. Public enterprises are monitored by public authorities, especially through restrictions on the choice of business strategies, pricing and price restrictions, selection of suppliers, and the like. On the other hand, branch competition is emerging, with which public companies must reckon. In such an environment, creating effective services should be the key strategic objective for the development of the public utility companies of the city of Belgrade. They should be modern service companies, able to participate actively in the market, regarding customers - citizens - as users of their services. The aim of the research is to determine the perception of value and customer satisfaction with the services provided by the public utilities of Belgrade. The results of the study indicate that respondents are not satisfied with the services provided and do not have clearly defined attitudes towards key aspects of public enterprises that are important for positioning and improving the quality of services in the market.

  10. Improving the quality of urban public space through the identification of space utilization index at Imam Bonjol Park, Padang city

    Science.gov (United States)

    Eriawan, Tomi; Setiawati, Lestari

    2017-06-01

    Padang City, a big city with a population approaching one million people, has to address the increased activities of its population and the increased need for land and space for those activities. One effect of population growth and the development of activities in Padang is the decreasing number of open spaces, both natural and artificial, for outdoor public activities. However, Padang City has several open spaces built and managed by the government, including 40 units of open space in the form of plansum parks, playgrounds, and sports parks, with a total area of 10.88 hectares. Despite their status as public open spaces, not all of them can be used and enjoyed by the public, since most are passive parks, made only as gardens without any amenities. This study assessed the quality of public space in the central business district of Padang City, namely Imam Bonjol Park (Taman Imam Bonjol). The study proceeded in several stages: identifying the typology of space function based on Carmona (2008) [1], and assessing the space utilization index based on the Public Space Index approach of Mehta (2007) [2]. The purpose of this study was to assess the quality of a public space in Padang City. Space quality was measured using the Good Public Space Index variables: intensity of use, intensity of social activity, duration of activity, variation in usage, and diversity of use. The public space quality index of Taman Imam Bonjol was determined by assessing these five space quality variables. Based on the results of the analysis, the public space utilization index was 0.696, which places Imam Bonjol Park in the Medium category. The parameters indicated several results, including a lack of diversity in users' activity time, less

  11. Water quality and algal community dynamics of three deepwater lakes in Minnesota utilizing CE-QUAL-W2 models

    Science.gov (United States)

    Smith, Erik A.; Kiesling, Richard L.; Galloway, Joel M.; Ziegeweid, Jeffrey R.

    2014-01-01

    Water quality, habitat, and fish in Minnesota lakes will potentially be facing substantial levels of stress in the coming decades primarily because of two stressors: (1) land-use change (urban and agricultural) and (2) climate change. Several regional and statewide lake modeling studies have identified the potential linkages between land-use and climate change on reductions in the volume of suitable lake habitat for coldwater fish populations. In recent years, water-resource scientists have been making the case for focused assessments and monitoring of sentinel systems to address how these stress agents change lakes over the long term. Currently in Minnesota, a large-scale effort called “Sustaining Lakes in a Changing Environment” is underway that includes a focus on monitoring basic watershed, water quality, habitat, and fish indicators of 24 Minnesota sentinel lakes across a gradient of ecoregions, depths, and nutrient levels. As part of this effort, the U.S. Geological Survey, in cooperation with the Minnesota Department of Natural Resources, developed predictive water quality models to assess water quality and habitat dynamics of three select deepwater lakes in Minnesota. The three lakes (Lake Carlos in Douglas County, Elk Lake in Clearwater County, and Trout Lake in Cook County) were assessed under recent (2010–11) meteorological conditions. The three selected lakes contain deep, coldwater habitats that remain viable during the summer months for coldwater fish species. Hydrodynamics and water-quality characteristics for each of the three lakes were simulated using the CE-QUAL-W2 model, which is a carbon-based, laterally averaged, two-dimensional water-quality model. The CE-QUAL-W2 models address the interaction between nutrient cycling, primary production, and trophic dynamics to predict responses in the distribution of temperature and oxygen in lakes. The CE-QUAL-W2 models for all three lakes successfully predicted water temperature, on the basis of the

  12. THE QUALITY AND UTILITY OF ANNUAL FINANCIAL REPORTS BETWEEN EXPECTATIONS AND REALITY

    Directory of Open Access Journals (Sweden)

    Magdalena Mihai

    2016-12-01

    Full Text Available Any company makes its presence felt in the market through the financial-accounting information it provides. Users will be more interested in the issuing entity to the extent that the information provided is attractive and of high quality, showing favorable results. The quality of any accounting information is thus a measure of the objectivity and transparency pursued in its preparation. Setting objectives for financial statements depends on many factors and, in addition, there is no universal set of objectives valid for all businesses, whatever the accounting system adopted. Over time, in our country, the accounting system has undergone various changes aimed at ensuring that financial accounting information meets qualitative targets, attempting to bring the national accounting system close to international accounting standards. From an analysis of the three periods of improvement of the national accounting system, one can see the instability of the accounting reform process, which initially associated regulations with European directives (as also found in the French accounting system), then with IAS and the European directives, and subsequently reversed the order of accounting regulations, placing the European directives first and IFRS (whose application was reduced) second, a combination that has led to contradictory situations in some cases.

  13. The utilization of six sigma and statistical process control techniques in surgical quality improvement.

    Science.gov (United States)

    Sedlack, Jeffrey D

    2010-01-01

    Surgeons have been slow to incorporate industrial reliability techniques. Process control methods were applied to surgeon waiting time between cases and to length of stay (LOS) after colon surgery. Waiting times between surgeries were evaluated by auditing the operating room records of a single hospital over a 1-month period. The medical records of 628 patients undergoing colon surgery over a 5-year period were reviewed. The average surgeon wait time between cases was 53 min, and the busiest surgeon spent 29½ hours in 1 month waiting between surgeries. Process control charting demonstrated poor overall control of the room turnover process. Average LOS after colon resection also demonstrated very poor control. Mean LOS was 10 days. Weibull's conditional analysis revealed a conditional LOS of 9.83 days. Serious process management problems were identified in both analyses. These process issues are expensive and adversely affect the quality of service offered by the institution. Process control mechanisms were suggested or implemented to improve these surgical processes. Industrial reliability and quality management tools can easily and effectively identify process control problems that occur in surgical services.
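
    Process control charting of a measure like LOS can be reproduced with a standard individuals (I-MR) chart, sketched below; the LOS values are invented, and the moving-range sigma estimate is the textbook approach rather than the author's exact method.

```python
import numpy as np

def individuals_chart_limits(x):
    """Shewhart individuals (I-MR) chart limits for a process series
    such as daily surgeon wait time or post-operative length of stay.

    Sigma is estimated from the average moving range: MRbar / 1.128
    (d2 = 1.128 for subgroups of size 2).
    """
    x = np.asarray(x, float)
    mr = np.abs(np.diff(x))
    sigma_hat = mr.mean() / 1.128
    center = x.mean()
    ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat
    out_of_control = np.where((x > ucl) | (x < lcl))[0]
    return center, lcl, ucl, out_of_control

# Hypothetical LOS values (days) after colon resection
los = [9, 11, 8, 10, 12, 9, 28, 10, 11, 9]
print(individuals_chart_limits(los))  # the 28-day stay is flagged
```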

  14. Enhancing U.S. Coast Guard Metrics

    Science.gov (United States)

    2015-01-01

    Enhancing U.S. Coast Guard Metrics. Scott Savitz, Henry H. Willis, Aaron C. Davenport, Martina Melliand, William Sasser, Elizabeth Tencza, Dulani... The metrics are assessed to evaluate their utility in other contexts.

  15. Quality Metrics of Digitally Derived Imagery and Their Relation to Interpreter Performance. III. Subjective Scaling of Hard-Copy Digital Imagery.

    Science.gov (United States)

    1982-02-01

    ...image acquisition and display requirements. OVERVIEW OF THE RESEARCH PLAN: ...television image. However, in developing television standards, mark points have been reported; a mark point refers to the noise level that intersects a... judgments in scaling image quality. Each of 36 images (scenes) was rated by comparing each image with a catalog of 16 degraded images.

  16. Using Principal Component and Tidal Analysis as a Quality Metric for Detecting Systematic Heading Uncertainty in Long-Term Acoustic Doppler Current Profiler Data

    Science.gov (United States)

    Morley, M. G.; Mihaly, S. F.; Dewey, R. K.; Jeffries, M. A.

    2015-12-01

    Ocean Networks Canada (ONC) operates the NEPTUNE and VENUS cabled ocean observatories to collect data on physical, chemical, biological, and geological ocean conditions over multi-year time periods. Researchers can download real-time and historical data from a large variety of instruments to study complex earth and ocean processes from their home laboratories. Ensuring that the users are receiving the most accurate data is a high priority at ONC, requiring quality assurance and quality control (QAQC) procedures to be developed for all data types. While some data types have relatively straightforward QAQC tests, such as scalar data range limits that are based on expected observed values or measurement limits of the instrument, for other data types the QAQC tests are more comprehensive. Long time series of ocean currents from Acoustic Doppler Current Profilers (ADCP), stitched together from multiple deployments over many years is one such data type where systematic data biases are more difficult to identify and correct. Data specialists at ONC are working to quantify systematic compass heading uncertainty in long-term ADCP records at each of the major study sites using the internal compass, remotely operated vehicle bearings, and more analytical tools such as principal component analysis (PCA) to estimate the optimal instrument alignments. In addition to using PCA, some work has been done to estimate the main components of the current at each site using tidal harmonic analysis. This paper describes the key challenges and presents preliminary PCA and tidal analysis approaches used by ONC to improve long-term observatory current measurements.
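
    The PCA step can be illustrated concisely: the orientation of the first principal component of the horizontal velocity cloud gives the dominant current axis, and comparing that angle across deployments at the same site exposes a systematic heading offset. The sketch below uses synthetic currents with a known 40° axis; it is a simplified stand-in for ONC's procedure, not their code.

```python
import numpy as np

def principal_current_direction(u, v):
    """Orientation (degrees, 0-180) of the first principal axis of the
    demeaned (u, v) velocity cloud from one ADCP deployment."""
    uv = np.column_stack([np.asarray(u, float), np.asarray(v, float)])
    uv -= uv.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(uv, rowvar=False))
    major = eigvecs[:, np.argmax(eigvals)]      # dominant flow axis
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0

# Synthetic currents: strong flow along a 40-degree axis plus cross noise
rng = np.random.default_rng(1)
along = rng.normal(0, 0.30, 5000)
cross = rng.normal(0, 0.05, 5000)
theta = np.radians(40.0)
u = along * np.cos(theta) - cross * np.sin(theta)
v = along * np.sin(theta) + cross * np.cos(theta)
print(principal_current_direction(u, v))        # ~40 degrees
```

    Running the same computation on consecutive deployments and differencing the angles gives a direct estimate of any compass heading offset between them.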

  17. The Utility of the Faces Pain Scale in the Assessment of Shoulder Pain in Turkish Stroke Patients: Its Relation with Quality of Life and Psychologic Status

    Science.gov (United States)

    Dogan, Sebnem Koldas; Ay, Saime; Oztuna, Derya; Aytur, Yesim Kurtais; Evcik, Deniz

    2010-01-01

    This study was planned to investigate the utility of the vertical Faces Pain Scale (FPS) in the assessment of pain in stroke patients using the shoulder pain model and to assess its utility in the Turkish patient population. The secondary aim was to analyze the association of FPS with the quality of life and depression in the study population.…

  18. A Unification of G-Metric, Partial Metric, and b-Metric Spaces

    Directory of Open Access Journals (Sweden)

    Nawab Hussain

    2014-01-01

    Full Text Available Using the concepts of G-metric, partial metric, and b-metric spaces, we define a new concept of generalized partial b-metric space. Topological and structural properties of the new space are investigated and certain fixed point theorems for contractive mappings in such spaces are obtained. Some examples are provided here to illustrate the usability of the obtained results.
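
    For reference, two of the generalizations being unified can be stated compactly; the definitions below are the standard ones from the literature (the paper's generalized partial b-metric combines these notions), rendered as a LaTeX sketch that assumes an amsthm-style definition environment.

```latex
% Standard axioms: a b-metric relaxes the triangle inequality by a
% constant s >= 1; a partial metric allows nonzero self-distance.
\begin{definition}[b-metric]
Let $X$ be a nonempty set and $s \ge 1$. A map
$d\colon X \times X \to [0,\infty)$ is a \emph{b-metric} if for all
$x,y,z \in X$: (i) $d(x,y)=0 \iff x=y$; (ii) $d(x,y)=d(y,x)$;
(iii) $d(x,z) \le s\,[\,d(x,y)+d(y,z)\,]$.
\end{definition}

\begin{definition}[partial metric]
A map $p\colon X \times X \to [0,\infty)$ is a \emph{partial metric} if
for all $x,y,z \in X$: (i) $x=y \iff p(x,x)=p(x,y)=p(y,y)$;
(ii) $p(x,x)\le p(x,y)$; (iii) $p(x,y)=p(y,x)$;
(iv) $p(x,z)\le p(x,y)+p(y,z)-p(y,y)$.
\end{definition}
```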

  19. Utilization of a labeled tracking oligonucleotide for visualization and quality control of spotted 70-mer arrays

    Directory of Open Access Journals (Sweden)

    Khan Shehnaz

    2004-02-01

    Full Text Available Abstract. Background: Spotted 70-mer oligonucleotide arrays offer potentially greater specificity and an alternative to expensive cDNA library maintenance and amplification. Since microarray fabrication is a considerable source of data variance, we previously directly tagged cDNA probes with a third fluorophore for prehybridization quality control. Fluorescently modifying oligonucleotide sets is cost prohibitive; therefore, a co-spotted Staphylococcus aureus-specific fluorescein-labeled "tracking" oligonucleotide is described to monitor fabrication variables of a Mycobacterium tuberculosis oligonucleotide microarray. Results: Significantly (p ...) for M. tuberculosis H37Rv and M. tuberculosis mprA. Linearity between the mean log Cy3/Cy5 ratios of genes differentially expressed from arrays either possessing or lacking the tracking oligonucleotide was observed (R² = 0.90, p ...). Conclusions: This novel approach enables prehybridization array visualization for spotted oligonucleotide arrays and sets the stage for more sophisticated slide qualification and data filtering applications.

  20. Six Sigma Methodology Utilization in Telecom Sector for Quality Improvement- A DMAIC Process

    Directory of Open Access Journals (Sweden)

    MANISH BHARGAVA

    2010-12-01

    Full Text Available This article presents Six Sigma tools for telecom industries, which can achieve powerful operational improvements that produce sustainable business benefits. Six Sigma Qualtec's dedicated Six Sigma for Telecom practice is specifically designed to help traditional and modern telecommunications providers become more efficient in their operating procedures. By learning and implementing improvements such as Voice of the Customer (VOC), Six Sigma, Business Process Management, Design for Six Sigma and Lean Enterprise principles, those companies will be able to dramatically improve the way they do business, thus attracting and keeping customers in this hyper-competitive industry. This paper maps some of the changes in telecom markets that resulted from competitive entry and gives an insight into the dynamics of competitive markets in relation to quality improvement, particularly in the quest for a particular competitive outcome via independent and transparent regulation.

  1. The utilization of crude fish oil (CFO) to increase mudcrab (Scylla serrata) feed quality

    Science.gov (United States)

    Lamid, Mirni; Agustono

    2017-02-01

    Crude fish oil is one of the sources of essential fatty acids and is found in Sardinella lemuru. This research aims to study the improvement of mudcrab (Scylla serrata) feed quality. Five feed formulations were designed using a completely randomized design: P0 = trash fish + 1% tapioca starch; P1 = trash fish + 2.0% crude fish oil + 1% tapioca starch; P2 = trash fish + 4.0% crude fish oil + 1% tapioca starch; P3 = trash fish + 6.0% crude fish oil + 1% tapioca starch; and P4 = trash fish + 8.0% crude fish oil + 1% tapioca starch, each carried out in quadruplicate. This study showed that feed formulation significantly affected the crude protein, crude fiber, crude lipid, ash, organic matter, nitrogen-free extract, and energy of the mudcrab feed. The P2 feed was the best formulation, but it differed only slightly from the P3 formulation.

  2. Operating room metrics score card-creating a prototype for individualized feedback.

    Science.gov (United States)

    Gabriel, Rodney A; Gimlich, Robert; Ehrenfeld, Jesse M; Urman, Richard D

    2014-11-01

    The balance between reducing costs and inefficiencies and maintaining patient safety is a challenging problem faced in the operating room suite. An ongoing challenge is the creation of effective strategies that reduce these inefficiencies and provide real-time personalized metrics and electronic feedback to anesthesia practitioners. We created a sample report card structure, utilizing existing informatics systems. This system gathers and analyzes operating room metrics for each anesthesia provider and offers personalized feedback. To accomplish this task, we identified key metrics that represented time and quality parameters. We collected these data for individual anesthesiologists and compared performance to the overall group average. Data were presented as an electronic score card and made available to individual clinicians on a real-time basis in an effort to provide effective feedback. These metrics included number of cancelled cases, average turnover time, average time to operating room ready and patient in room, number of delayed first case starts, average induction time, average extubation time, average time from recovery room arrival to discharge, performance feedback from other providers, compliance with various protocols, and total anesthetic costs. The concept we propose can easily be generalized to a variety of operating room settings, types of facilities, and OR health care professionals. Such a scorecard can be created using content that is important for operating room efficiency, research, and practice improvement for anesthesia providers.
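
    The comparison logic described here (per-provider means benchmarked against the group average) is simple to prototype. A minimal sketch in Python, assuming a flat list of per-case measurements; the provider names, metric names, and data layout are invented for illustration and are not the authors' system:

        import statistics

        # Hypothetical per-case records: (provider, metric, value).
        cases = [
            ("provider_a", "turnover_min", 32), ("provider_a", "turnover_min", 41),
            ("provider_b", "turnover_min", 25), ("provider_b", "turnover_min", 29),
            ("provider_a", "induction_min", 12), ("provider_b", "induction_min", 9),
        ]

        def scorecard(cases, provider):
            """Mean of each metric for one provider vs. the whole group."""
            report = {}
            for metric in sorted({m for _, m, _ in cases}):
                own = [v for p, m, v in cases if p == provider and m == metric]
                group = [v for _, m, v in cases if m == metric]
                report[metric] = (statistics.mean(own), statistics.mean(group))
            return report

        for metric, (own, group) in scorecard(cases, "provider_a").items():
            print(f"{metric}: provider {own:.1f} vs group {group:.1f}")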

  3. Quality traits of Indian peanut cultivars and their utility as nutritional and functional food.

    Science.gov (United States)

    Bishi, S K; Lokesh, Kumar; Mahatma, M K; Khatediya, N; Chauhan, S M; Misra, J B

    2015-01-15

    Peanut (Arachis hypogaea L.) is considered a highly nutritious foodstuff. Of late, the importance of peanut as a functional food has been growing. Kernels of forty-one Indian peanut cultivars were analyzed for their oil, fatty acid profiles, sucrose, raffinose family oligosaccharides (RFOs), phenolics, and free amino acid contents, along with antioxidant capacity. The range and the mean value (given in parentheses) for each of the traits analysed were, oil: 44.1-53.8% (50.1%), O/L ratio: 0.9-2.8 (1.4), sucrose: 2.61-6.5% (4.63%), RFOs: 0.12-0.76% (0.47%), phenolics: 0.14-0.39% (0.23%), free amino acids: 0.052-0.19% (0.12%), and antioxidant capacity: 1.05-6.97 (3.40) μmol TE g(-1). The significant correlation between phenol content and antioxidant capacity suggests phenol content as an easy marker for rapid screening of genotypes for their antioxidant capacity. A few cultivars with desirable traits and their prospective utility were identified, which would be useful for future breeding programmes to develop nutritionally superior peanuts. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Degenerate pseudo-Riemannian metrics

    CERN Document Server

    Hervik, Sigbjorn; Yamamoto, Kei

    2014-01-01

    In this paper we study pseudo-Riemannian spaces with a degenerate curvature structure, i.e. there exists a continuous family of metrics having identical polynomial curvature invariants. We approach this problem by utilising an idea coming from invariant theory. This involves the existence of a boost, which is assumed to extend to a neighbourhood. This approach proves to be very fruitful: it produces a class of metrics containing all known examples of degenerate metrics. To date, only Kundt and Walker metrics have been given, but our study provides a plethora of examples showing that degenerate metrics extend beyond the Kundt and Walker examples. The approach also gives a useful criterion for a metric to be degenerate. Specifically, we use this to study the subclass of VSI and CSI metrics (i.e., spaces where the polynomial curvature invariants are all vanishing or constant, respectively).

  5. Distance Metric Tracking

    Science.gov (United States)

    2016-03-02

    [Abstract not recoverable from the source record; the extracted text contains only bibliography fragments, citing E.C. Hall and R.M. Willett, "Online convex optimization in dynamic environments" (IEEE Journal of Selected Topics in Signal Processing), work by Eric P. Xing, Michael I. Jordan, Stuart Russell, and Andrew Y. Ng on distance metric learning, and a partial statement of a theorem from Hall & Willett (2015).]

  6. Metrics for Multiagent Systems

    Science.gov (United States)

    Lass, Robert N.; Sultanik, Evan A.; Regli, William C.

    A Multiagent System (MAS) is a software paradigm for building large scale intelligent distributed systems. Increasingly these systems are being deployed on handheld computing devices that rely on non-traditional communications media such as mobile ad hoc networks and satellite links. These systems present new challenges for computer scientists in describing system performance and analyzing competing systems. This chapter surveys existing metrics that can be used to describe MASs and related components. A framework for analyzing MASs is provided, and an example of how this framework might be employed is given for the domain of distributed constraint reasoning.

  7. Sustainable chemistry metrics.

    Science.gov (United States)

    Calvo-Flores, Francisco García

    2009-01-01

    Green chemistry has developed mathematical parameters to describe the sustainability of chemical reactions and processes, in order to quantify their environmental impact. These parameters are related to mass and energy magnitudes, and enable analyses and numerical diagnoses of chemical reactions. The environmental impact factor (E factor), atom economy, and reaction mass efficiency have been the most influential metrics, and they are interconnected by mathematical equations. The ecodesign concept must also be considered for complex industrial syntheses, as a part of the sustainability of manufacturing processes. The aim of this Concept article is to identify the main parameters for evaluating undesirable environmental consequences.
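
    For reference, the three metrics named above have standard definitions in the green chemistry literature (stated here as general background, not quoted from this article):

        $E\text{-factor} = \dfrac{\text{total mass of waste}}{\text{mass of product}}$

        $\text{atom economy} = 100\% \times \dfrac{M_w(\text{product})}{\sum_i M_w(\text{reactant}_i)}$

        $\text{reaction mass efficiency} = 100\% \times \dfrac{\text{mass of isolated product}}{\sum_i \text{mass of reactant}_i}$

    An atom economy of 100% means every reactant atom ends up in the product; the E factor, by contrast, grows with auxiliary materials and solvent losses, which is why the two can diverge sharply for the same reaction.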

  8. Utilization of Cinnamon Leaf and Shrimp Flour as an Enhancer of Catfish Meat Quality

    Directory of Open Access Journals (Sweden)

    Mia Setiawati

    2017-05-01

    Full Text Available Catfish (Pangasianodon hypophthalmus) is a freshwater fish that is often produced in the form of a fillet. One of the problems in producing a good catfish fillet is the compactness and brightness of the farmed catfish meat. This research aimed to obtain a feed formulation that enhances the meat quality of striped catfish by adding cinnamon leaf flour (Cinnamomum burmannii) and shrimp head meal. Fish with a weight of 208.98±25.76 g were reared in 12 floating net cages (2x1x1.5 m3) at a density of 15 fish/net for 60 days. As treatments, fish were fed feed containing 1% cinnamon leaves, 45% shrimp head meal, or a combination of cinnamon leaves and shrimp head meal; as control, a feed formulated without cinnamon leaves and shrimp head meal was used. Fish were fed twice daily at a feeding rate of 3.5% of average body weight. The test parameters observed were the physical, chemical, and organoleptic properties of the catfish meat. The results showed that feed containing cinnamon leaves and shrimp head meal could decrease body fat level by 14.7% compared with the control (p<0.05). Feed with cinnamon leaves and shrimp head meal gave the fish fillets a more compact, elastic texture and a whiter color.

  9. Utilization of Cinnamon Leaf and Shrimp Flour as an Enhancer of Catfish Meat Quality

    Directory of Open Access Journals (Sweden)

    Mia Setiawati

    2017-04-01

    Full Text Available Catfish (Pangasianodon hypophthalmus) is a freshwater fish that is often produced in the form of a fillet. One of the problems in producing a good catfish fillet is the compactness and brightness of the farmed catfish meat. This research aimed to obtain a feed formulation that enhances the meat quality of striped catfish by adding cinnamon leaf flour (Cinnamomum burmannii) and shrimp head meal. Fish with a weight of 208.98±25.76 g were reared in 12 floating net cages (2x1x1.5 m3) at a density of 15 fish/net for 60 days. As treatments, fish were fed feed containing 1% cinnamon leaves, 45% shrimp head meal, or a combination of cinnamon leaves and shrimp head meal; as control, a feed formulated without cinnamon leaves and shrimp head meal was used. Fish were fed twice daily at a feeding rate of 3.5% of average body weight. The test parameters observed were the physical, chemical, and organoleptic properties of the catfish meat. The results showed that feed containing cinnamon leaves and shrimp head meal could decrease body fat level by 14.7% compared with the control (p<0.05). Feed with cinnamon leaves and shrimp head meal gave the fish fillets a more compact, elastic texture and a whiter color. Keywords: Cinnamomum burmannii, fillet, shrimp head meal, formulated feed, Pangasianodon hypophthalmus

  10. Resource utilization after gastrostomy tube placement: defining areas of improvement for future quality improvement projects.

    Science.gov (United States)

    Correa, Jesus A; Fallon, Sara C; Murphy, Kathleen M; Victorian, Veronica A; Bisset, George S; Vasudevan, Sanjeev A; Lopez, Monica E; Brandt, Mary L; Cass, Darrell L; Rodriguez, J Ruben; Wesson, David E; Lee, Timothy C

    2014-11-01

    Gastrostomy tube (GT) placement is a frequent procedure at a tertiary care children's hospital. Because of underlying patient illness and the nature of the device, patients often require multiple visits to the emergency room (ER) for GT-related concerns. We hypothesized that the majority of our patient visits to the ER related to gastrostomy tube concerns were not medically urgent. The purpose of this study was to characterize the incidence of and indications for GT-related emergency room visits and readmission rates in order to develop family educational material that might allow these nonurgent concerns to be addressed on an outpatient basis. We reviewed the medical records of all patients with GT placement in the operating room from January 2011 to September 2012. We evaluated our primary outcome of ER visits at less than 30 days after discharge and 30-365 days after discharge. The purpose of the ER visit was categorized as either mechanical (dislodgement, leaking) or wound-related (infection, granulation tissue). Additional outcomes assessed included readmission rates, reoperation rates, and the use of gastrostomy contrast studies. During the study period, 247 patients had gastrostomy tubes placed at our institution at a median age of 15.3 months (range 0.03 months-22 years). Of the total patient population, 219 were discharged less than 30 days after their operation (89%). Of these, 42 (20%) returned to the emergency room a total of 44 times within 30 days of discharge for concerns related to their GT. Avoidable visits related to leaking, mild clogs, and granulation tissue were seen in 17/44 (39%). An additional 40 patients among the entire cohort of 247 (16%) presented to the ER a total of 71 times between 31 and 365 days post-discharge; 59 (83%) of these visits were potentially avoidable. The readmission rate related to the GT was low (4%). Few studies have attempted to quantify the amount of postoperative resources utilized after GT placement in children. Our findings

  11. Quality of life in haemophilia A: Hemophilia Utilization Group Study Va (HUGS-Va).

    Science.gov (United States)

    Poon, J-L; Zhou, Z-Y; Doctor, J N; Wu, J; Ullman, M M; Ross, C; Riske, B; Parish, K L; Lou, M; Koerper, M A; Gwadry-Sridhar, F; Forsberg, A D; Curtis, R G; Johnson, K A

    2012-09-01

    This study describes health-related quality of life (HRQoL) of persons with haemophilia A in the United States (US) and determines associations between self-reported joint pain, motion limitation and clinically evaluated joint range of motion (ROM), and between HRQoL and ROM. As part of a 2-year cohort study, we collected baseline HRQoL using the SF-12 (adults) and PedsQL (children), along with self-ratings of joint pain and motion limitation, in persons with factor VIII deficiency recruited from six Haemophilia Treatment Centres (HTCs) in geographically diverse regions of the US. Clinically measured joint ROM measurements were collected from medical charts of a subset of participants. Adults (N = 156, mean age: 33.5 ± 12.6 years) had mean physical and mental component scores of 43.4 ± 10.7 and 50.9 ± 10.1, respectively. Children (N = 164, mean age: 9.7 ± 4.5 years) had mean total PedsQL, physical functioning, and psychosocial health scores of 85.9 ± 13.8, 89.5 ± 15.2, and 84.1 ± 15.3, respectively. Persons with more severe haemophilia and higher self-reported joint pain and motion limitation had poorer scores, particularly in the physical aspects of HRQoL. In adults, significant correlations (P < 0.01) were found between ROM measures and both self-reported measures. Except among those with severe disease, children and adults with haemophilia have HRQoL scores comparable with those of the healthy US population. The physical aspects of HRQoL in both adults and children with haemophilia A in the US decrease with increasing severity of illness. However, scores for mental aspects of HRQoL do not differ between severity groups. These findings are comparable with those from studies in European and Canadian haemophilia populations.

  12. A study on correlation between 2D and 3D gamma evaluation metrics in patient-specific quality assurance for VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Rajasekaran, Dhanabalan, E-mail: dhanabalanraj@gmail.com; Jeevanandam, Prakash; Sukumar, Prabakar; Ranganathan, Arulpandiyan; Johnjothi, Samdevakumar; Nagarajan, Vivekanandan

    2014-01-01

    In this study, we investigated the correlation between 2-dimensional (2D) and 3D gamma analysis using the new PTW OCTAVIUS 4D system for various parameters. For this study, we selected 150 clinically approved volumetric-modulated arc therapy (VMAT) plans of head and neck (50), thoracic (esophagus) (50), and pelvic (cervix) (50) sites. Individual verification plans were created and delivered to the OCTAVIUS 4D phantom. Measured and calculated dose distributions were compared using the 2D and 3D gamma analysis by global (maximum), local and selected (isocenter) dose methods. The average gamma passing rate for 2D global gamma analysis in coronal and sagittal plane was 94.81% ± 2.12% and 95.19% ± 1.76%, respectively, for commonly used 3-mm/3% criteria with 10% low-dose threshold. Correspondingly, for the same criteria, the average gamma passing rate for 3D planar global gamma analysis was 95.90% ± 1.57% and 95.61% ± 1.65%. The volumetric 3D gamma passing rate for 3-mm/3% (10% low-dose threshold) global gamma was 96.49% ± 1.49%. Applying stringent gamma criteria resulted in higher differences between 2D planar and 3D planar gamma analysis across all the global, local, and selected dose gamma evaluation methods. The average gamma passing rate for volumetric 3D gamma analysis was 1.49%, 1.36%, and 2.16% higher when compared with 2D planar analyses (coronal and sagittal combined average) for 3 mm/3% global, local, and selected dose gamma analysis, respectively. On the basis of the wide range of analysis and correlation study, we conclude that there is no assured correlation or notable pattern that could provide relation between planar 2D and volumetric 3D gamma analysis. Owing to higher passing rates, higher action limits can be set while performing 3D quality assurance. Site-wise action limits may be considered for patient-specific QA in VMAT.
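
    The gamma index behind these passing rates combines a dose-difference criterion with a distance-to-agreement (DTA) criterion; a point passes when its minimal combined deviation is at most 1. A simplified one-dimensional sketch of a global 3%/3-mm evaluation, assuming uniformly spaced profiles (this illustrates generic gamma-analysis logic, not the OCTAVIUS software; the 10% low-dose threshold used in the study is omitted for brevity):

        import numpy as np

        def gamma_pass_rate_1d(ref, meas, spacing_mm, dd=0.03, dta_mm=3.0):
            """Global 1D gamma analysis: % of points with gamma <= 1.
            ref, meas: dose profiles (Gy) sampled on the same uniform grid."""
            x = np.arange(len(ref)) * spacing_mm
            norm = dd * ref.max()              # "global (maximum)" normalization
            gammas = []
            for xi, di in zip(x, meas):
                dose_term = (di - ref) / norm
                dist_term = (xi - x) / dta_mm
                gammas.append(np.sqrt(dose_term**2 + dist_term**2).min())
            return 100.0 * np.mean(np.array(gammas) <= 1.0)

        ref = np.exp(-((np.arange(100) - 50.0) / 20.0) ** 2)  # toy profile
        meas = 1.02 * ref                                     # 2% dose error
        print(f"passing rate: {gamma_pass_rate_1d(ref, meas, 1.0):.1f}%")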

  13. A study on correlation between 2D and 3D gamma evaluation metrics in patient-specific quality assurance for VMAT.

    Science.gov (United States)

    Rajasekaran, Dhanabalan; Jeevanandam, Prakash; Sukumar, Prabakar; Ranganathan, Arulpandiyan; Johnjothi, Samdevakumar; Nagarajan, Vivekanandan

    2014-01-01

    In this study, we investigated the correlation between 2-dimensional (2D) and 3D gamma analysis using the new PTW OCTAVIUS 4D system for various parameters. For this study, we selected 150 clinically approved volumetric-modulated arc therapy (VMAT) plans of head and neck (50), thoracic (esophagus) (50), and pelvic (cervix) (50) sites. Individual verification plans were created and delivered to the OCTAVIUS 4D phantom. Measured and calculated dose distributions were compared using the 2D and 3D gamma analysis by global (maximum), local and selected (isocenter) dose methods. The average gamma passing rate for 2D global gamma analysis in coronal and sagittal plane was 94.81% ± 2.12% and 95.19% ± 1.76%, respectively, for commonly used 3-mm/3% criteria with 10% low-dose threshold. Correspondingly, for the same criteria, the average gamma passing rate for 3D planar global gamma analysis was 95.90% ± 1.57% and 95.61% ± 1.65%. The volumetric 3D gamma passing rate for 3-mm/3% (10% low-dose threshold) global gamma was 96.49% ± 1.49%. Applying stringent gamma criteria resulted in higher differences between 2D planar and 3D planar gamma analysis across all the global, local, and selected dose gamma evaluation methods. The average gamma passing rate for volumetric 3D gamma analysis was 1.49%, 1.36%, and 2.16% higher when compared with 2D planar analyses (coronal and sagittal combined average) for 3mm/3% global, local, and selected dose gamma analysis, respectively. On the basis of the wide range of analysis and correlation study, we conclude that there is no assured correlation or notable pattern that could provide relation between planar 2D and volumetric 3D gamma analysis. Owing to higher passing rates, higher action limits can be set while performing 3D quality assurance. Site-wise action limits may be considered for patient-specific QA in VMAT. Copyright © 2014 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  14. Toward utilization of data for program management and evaluation: quality assessment of five years of health management information system data in Rwanda

    OpenAIRE

    Nisingizwe, Marie Paul; Hari S Iyer; Gashayija, Modeste; Hirschhorn, Lisa R.; Amoroso, Cheryl; Wilson, Randy; Rubyutsa, Eric; Gaju, Eric; Basinga, Paulin; Muhire, Andrew; Binagwaho, Agnès; Hedt-Gauthier, Bethany

    2014-01-01

    Background: Health data can be useful for effective service delivery, decision making, and evaluating existing programs in order to maintain high quality of healthcare. Studies have shown variability in data quality from national health management information systems (HMISs) in sub-Saharan Africa which threatens utility of these data as a tool to improve health systems. The purpose of this study is to assess the quality of Rwanda's HMIS data over a 5-year period. Methods: The World Health Org...

  15. Toward utilization of data for program management and evaluation: quality assessment of five years of health management information system data in Rwanda

    OpenAIRE

    Nisingizwe, Marie Paul; Hari S Iyer; Gashayija, Modeste; Hirschhorn, Lisa R.; Amoroso, Cheryl; Wilson, Randy; Rubyutsa, Eric; Gaju, Eric; Basinga, Paulin; Muhire, Andrew; Binagwaho, Agnès; Hedt-Gauthier, Bethany

    2014-01-01

    Background: Health data can be useful for effective service delivery, decision making, and evaluating existing programs in order to maintain high quality of healthcare. Studies have shown variability in data quality from national health management information systems (HMISs) in sub-Saharan Africa which threatens utility of these data as a tool to improve health systems. The purpose of this study is to assess the quality of Rwanda's HMIS data over a 5-year period. Methods: The World Health Orga...

  16. A Structure for Three-Phase Four-Wire Distribution System Utilizing Unified Power Quality Conditioner (UPQC)

    Directory of Open Access Journals (Sweden)

    B. Santhosh Kumar

    2014-02-01

    Full Text Available This paper presents a novel structure for a three-phase four-wire (3P4W) distribution system utilizing a unified power quality conditioner (UPQC). The 3P4W system is realized from a three-phase three-wire system, where the neutral of the series transformer used in the series part of the UPQC is considered as the fourth wire for the 3P4W system. A new control strategy to balance the unbalanced load currents is also presented in this paper. The neutral current that may flow toward the transformer neutral point is compensated by using a four-leg voltage source inverter topology for the shunt part. Thus, the series transformer neutral will be at virtual zero potential during all operating conditions. Simulation results based on MATLAB/Simulink are presented to show the effectiveness of the proposed UPQC-based 3P4W distribution system.

  17. A Quality Assurance Method that Utilizes 3D Dosimetry and Facilitates Clinical Interpretation

    Energy Technology Data Exchange (ETDEWEB)

    Oldham, Mark, E-mail: mark.oldham@duke.edu [Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States); Thomas, Andrew; O'Daniel, Jennifer; Juang, Titania [Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States); Ibbott, Geoffrey [University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Adamovics, John [Rider University, Lawrenceville, New Jersey (United States); Kirkpatrick, John P. [Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States)

    2012-10-01

    Purpose: To demonstrate a new three-dimensional (3D) quality assurance (QA) method that provides comprehensive dosimetry verification and facilitates evaluation of the clinical significance of QA data acquired in a phantom. Also to apply the method to investigate the dosimetric efficacy of base-of-skull (BOS) intensity-modulated radiotherapy (IMRT) treatment. Methods and Materials: Two types of IMRT QA verification plans were created for 6 patients who received BOS IMRT. The first plan enabled conventional 2D planar IMRT QA using the Varian portal dosimetry system. The second plan enabled 3D verification using an anthropomorphic head phantom. In the latter, the 3D dose distribution was measured using the DLOS/Presage dosimetry system (DLOS = Duke Large-field-of-view Optical-CT System, Presage Heuris Pharma, Skillman, NJ), which yielded isotropic 2-mm data throughout the treated volume. In a novel step, measured 3D dose distributions were transformed back to the patient's CT to enable calculation of dose-volume histograms (DVH) and dose overlays. Measured and planned patient DVHs were compared to investigate clinical significance. Results: Close agreement between measured and calculated dose distributions was observed for all 6 cases. For gamma criteria of 3%, 2 mm, the mean passing rate for portal dosimetry was 96.8% (range, 92.0%-98.9%), compared to 94.9% (range, 90.1%-98.9%) for 3D. There was no clear correlation between 2D and 3D passing rates. Planned and measured dose distributions were evaluated on the patient's anatomy, using DVH and dose overlays. Minor deviations were detected, and the clinical significance of these are presented and discussed. Conclusions: Two advantages accrue to the methods presented here. First, treatment accuracy is evaluated throughout the whole treated volume, yielding comprehensive verification. Second, the clinical significance of any deviations can be assessed through the generation of DVH curves and dose overlays on

  18. Outcomes of a population-based asthma management program: quality of life, absenteeism, and utilization.

    Science.gov (United States)

    Legorreta, A P; Leung, K M; Berkbigler, D; Evans, R; Liu, X

    2000-07-01

    Despite the availability of the National Asthma Education Program (NAEP) guidelines since 1991, asthma remains inadequately managed. To improve quality of life, functional status, and self-management behavior of asthma patients, a large health maintenance organization (HMO) in California implemented an asthma management program in 1996. To evaluate the effectiveness of an asthma management program in an HMO setting. Prospective study. Survey data from members who participated in the intervention program and data from members who received usual care were analyzed using the propensity score technique. A total of 1,043 asthma patients who responded to both the baseline and follow-up surveys were included in the analysis. From baseline to follow-up, participants in the in-home intervention program reported significant improvement in functional status (improvements ranged from 0.2 to 7.2), daily use of steroid inhaler (+4.1%), daily peak flow meter use (+6.4%), self-reported knowledge of what to do for an asthma attack (+12.4%), and feeling that their asthma was under control (+10.8%). Absenteeism (-11.8%) and hospitalization due to asthma (-3.5%) were significantly reduced from baseline to follow-up. Participants did not report significant changes in overuse of beta2-agonists or emergency room visits due to asthma. In comparison with the asthmatic patients who received usual care (non-participants), participants had significantly greater improvement in daily use of steroid inhaler (+4.0% versus -6.0%), daily use of home peak flow meter (+6.4% versus 1.9%), and self-reported knowledge of what to do for an asthma attack (+12.4% versus +5.4%). These findings suggest that population-based programs can improve functional status, increase self-monitoring and knowledge about asthma, and decrease absenteeism and hospitalization for asthma by directly providing asthmatic patients with educational materials and self-monitoring tools. Such "direct-to-consumer" outreach programs may help bridge

  19. The utility of comparative models and the local model quality for protein crystal structure determination by Molecular Replacement

    Directory of Open Access Journals (Sweden)

    Pawlowski Marcin

    2012-11-01

    Full Text Available Abstract Background Computational models of protein structures have proved useful as search models in Molecular Replacement (MR), a common method to solve the phase problem faced by macromolecular crystallography. The success of MR depends on the accuracy of a search model. Unfortunately, this parameter remains unknown until the final structure of the target protein is determined. During the last few years, several Model Quality Assessment Programs (MQAPs) that predict the local accuracy of theoretical models have been developed. In this article, we analyze whether the application of MQAPs improves the utility of theoretical models in MR. Results For our dataset of 615 search models, using the real local accuracy of a model increased the MR success ratio by 101% compared to corresponding polyalanine templates. On the contrary, when local model quality was not utilized in MR, the computational models solved only 4.5% more MR searches than polyalanine templates. For the same dataset of 615 models, a workflow combining MR with the predicted local accuracy of a model found 45% more correct solutions than polyalanine templates. To predict such accuracy, MetaMQAPclust, a "clustering MQAP", was used. Conclusions Using comparative models only marginally increases the MR success ratio in comparison to polyalanine structures of templates. However, the situation changes dramatically once comparative models are used together with their predicted local accuracy. A new functionality was added to the GeneSilico Fold Prediction Metaserver in order to build models that are more useful for MR searches. Additionally, we have developed a simple method, AmIgoMR (Am I good for MR?), to predict if an MR search with a template-based model for a given template is likely to find the correct solution.

  20. [Utility of the questionnaire for quality of life EORTC-QLQ-C30 in psycho-oncological outcome research].

    Science.gov (United States)

    Determann, M M; Kollenbaum, V-E; Henne-Bruns, D

    2004-01-01

    The aim of this paper is to examine the utility and validity of the quality-of-life questionnaire EORTC-QLQ-C30 (European Organization for Research and Treatment of Cancer). Data were collected within the scope of a study evaluating individual psycho-oncological support for inpatients with colorectal cancer undergoing surgery. The study was sponsored by the German Cancer Aid. The design was a prospective randomized controlled trial. After informed consent, patients were randomized into one of two groups: patients in the experimental group received individualized psychotherapeutic support during the hospital stay; those in the control group received a daily program of classical music. All patients were assessed one day before surgical treatment, and ten days and three months after surgery. Instruments were questionnaires for quality of life and state anxiety. 106 patients met the inclusion criteria. Results show insufficient discriminative power (highly significant bivariate correlations between most EORTC scales, Kendall's tau-b) and insufficient construct validity (high and significant bivariate correlations between most EORTC scales and state anxiety, Kendall's tau-b) of the EORTC scales. The scores for "cognitive functioning" and some symptom scales show insufficient scatter. The high sensitivity of the scales and their tendency toward extreme scores mean that they reflect situational influences and therefore depict the effects of specific interventions insufficiently. The psycho-oncological intervention showed a significant stress-reducing effect on the specific EORTC scale "Emotional Functioning" and on state anxiety (STAI). This testing of the utility and validity of the EORTC questionnaire shows that they are insufficient and that its benefit for the evaluation of specific intervention procedures is therefore restricted.

  1. Effect of coal quality on maintenance costs at utility plants. Final report. [Effect of ash and sulfur content of coal

    Energy Technology Data Exchange (ETDEWEB)

    Holt, E.C. Jr.

    1980-06-01

    In an attempt to determine if correlation exists between coal quality, as measured by its ash and sulfur contents, and the maintenance cost at utility plants, an examination was made of the actual maintenance cost experience of selected portions of five TVA coal-fired power plants as a function of the fuel quality consumed during an extended period of time. The results indicate that, according to our decision rules developed in compliance with accepted statistical practices, correlation does exist in many portions of the coal-fired plants for which sufficient maintenance cost records were available. The degree of correlation varies significantly among the individual portions of a particular plant as well as among the various plants. However, the indicators are sufficient to confirm that a change (within the design constraints of the unit) in the ash and/or sulfur content of the coal being consumed by a utility boiler will have a proportionate effect on the maintenance cost at the plant. In the cases examined, each percent variation in ash content could have a monetary effect of from $0.05 to $0.10 per ton of coal consumed. Similarly, each percent variation in sulfur content could influence maintenance costs from $0.30 to $0.50 per ton of coal. Since these values are based on preliminary analysis of limited data, they must be approached with caution and not removed from the context in which they are presented. However, if borne out by further study, the potential magnitude of such savings may be sufficient to justify the acquisition of superior coal supplies, either by changing the source and/or using preparation to obtain a lower ash and sulfur fuel.
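
    As a worked example of the per-ton sensitivities quoted above (a sketch only; the midpoint rates follow directly from the report's ranges, while the plant tonnage and quality changes are invented):

        def maintenance_cost_delta(tons_per_year, d_ash_pct, d_sulfur_pct,
                                   ash_rate=0.075, sulfur_rate=0.40):
            """Estimated annual maintenance-cost change in dollars, using
            midpoints of the reported ranges: $0.05-0.10/ton per % ash
            and $0.30-0.50/ton per % sulfur."""
            return tons_per_year * (d_ash_pct * ash_rate
                                    + d_sulfur_pct * sulfur_rate)

        # A plant burning 2 million tons/yr that obtains coal with 2% less
        # ash and 0.5% less sulfur:
        saving = maintenance_cost_delta(2_000_000, 2.0, 0.5)
        print(f"estimated maintenance saving: ${saving:,.0f}/yr")  # $700,000/yr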

  2. Metrics and Its Function in Poetry

    Institute of Scientific and Technical Information of China (English)

    XIAO Zhong-qiong; CHEN Min-jie

    2013-01-01

    Poetry is a special combination of musical and linguistic qualities, of sounds regarded both as pure sound and as meaningful speech. Part of the pleasure of poetry lies in its relationship with music. Metrics, including rhythm and meter, is an important method for poetry to express poetic sentiment. Through the introduction of poetic language and typical examples, the writer of this paper tries to discuss the relationship between sound and meaning.

  3. A Metrics Approach for Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Cristian CIUREA

    2009-01-01

    Full Text Available This article presents different types of collaborative systems, their structure and classification. The paper defines the concept of a virtual campus as a collaborative system and builds an architecture for a virtual campus oriented toward collaborative training processes. It analyses the quality characteristics of collaborative systems and proposes techniques for metric construction and validation in order to evaluate them. The article also analyzes different ways to increase the efficiency and the performance level of collaborative banking systems.

  4. Osteoarthritis: quality of life, comorbidities, medication and health service utilization assessed in a large sample of primary care patients

    Directory of Open Access Journals (Sweden)

    Szecsenyi Joachim

    2007-06-01

    Full Text Available Abstract Objective To assess the gender-related impact of osteoarthritis (OA) on quality of life (QoL) and health service utilization (HSU) of primary care patients in Germany. Methods Cross-sectional study with 1250 OA patients attending 75 primary care practices from March to May 2005. QoL was assessed using the GERMAN-AIMS2-SF. Data about comorbidities, prescriptions, health service utilization, and physical activity were obtained by questioning patients or from the patients' medical files. Depression was assessed by means of the Patient Health Questionnaire (PHQ-9). Results 1021 (81.7%) questionnaires were returned; 347 (34%) of the patients were male. The impact of OA on QoL differed between the genders: women achieved significantly higher scores in the AIMS2-SF lower body dimension. Conclusion The extent to which OA impacts men and women differs in primary care patients. This might have resulted in the revealed differences in pharmacological treatment and HSU. Further research is needed to confirm our findings and to assess causality.

  5. Metrics for Evaluating Dialogue Strategies in a Spoken Language System

    CERN Document Server

    Danieli, M; Danieli, Morena; Gerbino, Elisabetta

    1996-01-01

    In this paper, we describe a set of metrics for the evaluation of different dialogue management strategies in an implemented real-time spoken language system. The set of metrics we propose offers useful insights for evaluating how particular choices in dialogue management can affect the overall quality of the man-machine dialogue. The evaluation makes use of established metrics: the transaction success, the contextual appropriateness of system answers, and the calculation of normal and correction turns in a dialogue. We also define a new metric, the implicit recovery, which allows one to measure the ability of a dialogue manager to deal with errors at different levels of analysis. We report evaluation data from several experiments, and we compare two different approaches to dialogue repair strategies using the set of metrics we argue for.
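
    A minimal sketch of how two of these metrics might be tallied from annotated dialogues (the annotation layout is invented for illustration; the paper defines the metrics over its own corpus format):

        # Each annotated dialogue: did the transaction succeed, how many
        # recognition/understanding errors occurred, and how many of those
        # the dialogue manager recovered from implicitly.
        dialogues = [
            {"success": True,  "errors": 2, "recovered": 2},
            {"success": True,  "errors": 1, "recovered": 0},
            {"success": False, "errors": 3, "recovered": 1},
        ]

        transaction_success = sum(d["success"] for d in dialogues) / len(dialogues)
        implicit_recovery = (sum(d["recovered"] for d in dialogues)
                             / sum(d["errors"] for d in dialogues))

        print(f"transaction success: {transaction_success:.0%}")  # 67%
        print(f"implicit recovery:   {implicit_recovery:.0%}")    # 50%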

  6. The air quality and human health effects of integrating utility-scale batteries into the New York State electricity grid

    Science.gov (United States)

    Gilmore, Elisabeth A.; Apt, Jay; Walawalkar, Rahul; Adams, Peter J.; Lave, Lester B.

    In a restructured electricity market, utility-scale energy storage technologies such as advanced batteries can generate revenue by charging at low electricity prices and discharging at high prices. This strategy changes the magnitude and distribution of air quality emissions and the total carbon dioxide (CO2) emissions. We evaluate the social costs associated with these changes using a case study of 500 MW sodium-sulfur battery installations with 80% round-trip efficiency. The batteries displace peaking generators in New York City and charge using off-peak generation in the New York Independent System Operator (NYISO) electricity grid during the summer. We identify and map charging and displaced plant types to generators in the NYISO. We then convert the emissions into ambient concentrations with a chemical transport model, the Particulate Matter Comprehensive Air Quality Model with extensions (PMCAMx). Finally, we transform the concentrations into their equivalent human health effects and social benefits and costs. Reductions in premature mortality from fine particulate matter (PM2.5) result in a benefit of 4.5 ¢/kWh and 17 ¢/kWh from displacing a natural gas and a distillate fuel oil fueled peaking plant, respectively, in New York City. Ozone (O3) concentrations increase due to decreases in nitrogen oxide (NOx) emissions, although the magnitude of the social cost is less certain. Adding the costs from charging, displacing a distillate fuel oil plant yields a net social benefit, while displacing the natural gas plant has a net social cost. With the existing base-load capacity, the upstate population experiences an increase in adverse health effects. If wind generation is charging the battery, both the upstate charging location and New York City benefit. At $20 per tonne of CO2, the costs from CO2 are small compared to those from air quality. We conclude that storage could be added to existing electricity grids as part of an integrated strategy from a

  7. Metric-adjusted skew information

    DEFF Research Database (Denmark)

    Liang, Cai; Hansen, Frank

    2010-01-01

    We give a truly elementary proof of the convexity of metric-adjusted skew information following an idea of Effros. We extend earlier results on weak forms of superadditivity to general metric-adjusted skew information. Recently, Luo and Zhang introduced the notion of semi-quantum states on a bipartite system.
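
    For orientation, the prototype of this family of quantities is the Wigner-Yanase skew information (the metric-adjusted versions replace the underlying monotone metric; this standard formula is given here as background and is not quoted from the paper):

        $I_{WY}(\rho, A) = -\tfrac{1}{2}\,\mathrm{Tr}\!\left([\rho^{1/2}, A]^2\right)$

    where $\rho$ is a density matrix and $A$ a self-adjoint observable; convexity in $\rho$ is the property whose proof the paper simplifies and generalizes.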

  8. Canonical metrics on complex manifold

    Institute of Scientific and Technical Information of China (English)

    YAU Shing-Tung

    2008-01-01

    Complex manifolds are topological spaces that are covered by coordinate charts where the coordinate changes are given by holomorphic transformations. For example, Riemann surfaces are one dimensional complex manifolds. In order to understand complex manifolds, it is useful to introduce metrics that are compatible with the complex structure. In general, we should have a pair (M, ds^2_M) where ds^2_M is the metric. The metric is said to be canonical if any biholomorphisms of the complex manifolds are automatically isometries. Such metrics can naturally be used to describe invariants of the complex structures of the manifold.

  9. Canonical metrics on complex manifold

    Institute of Scientific and Technical Information of China (English)

    YAU; Shing-Tung(Yau; S.-T.)

    2008-01-01

    Complex manifolds are topological spaces that are covered by coordinate charts where the coordinate changes are given by holomorphic transformations. For example, Riemann surfaces are one dimensional complex manifolds. In order to understand complex manifolds, it is useful to introduce metrics that are compatible with the complex structure. In general, we should have a pair (M, ds^2_M) where ds^2_M is the metric. The metric is said to be canonical if any biholomorphisms of the complex manifolds are automatically isometries. Such metrics can naturally be used to describe invariants of the complex structures of the manifold.

  10. The metric system: An introduction

    Science.gov (United States)

    Lumley, Susan M.

    On 13 Jul. 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system, which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost-effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on 25 Jul. 1991, President George Bush signed Executive Order 12770 which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first the report examines the reasons behind the nation's conversion to the metric system. The second part of this report is on applying the metric system.

  11. Two classes of metric spaces

    Directory of Open Access Journals (Sweden)

    Isabel Garrido

    2016-04-01

    Full Text Available The class of metric spaces (X, d) known as small-determined spaces, introduced by Garrido and Jaramillo, is properly defined by means of some type of real-valued Lipschitz functions on X. On the other hand, B-simple metric spaces, introduced by Hejcman, are defined in terms of some kind of bornologies of bounded subsets of X. In this note we present a common framework where both classes of metric spaces can be studied, which allows us not only to see the relationships between them but also to obtain new internal characterizations of these metric properties.

  12. Habitat Complexity Metrics to Guide Restoration of Large Rivers

    Science.gov (United States)

    Jacobson, R. B.; McElroy, B. J.; Elliott, C.; DeLonay, A.

    2011-12-01

    Restoration strategies on large, channelized rivers typically strive to recover lost habitat complexity, based on the assumption that complexity and biophysical capacity are directly related. Although the links between complexity and biotic responses can be tenuous to define, complexity metrics have appeal because of their potential utility in quantifying habitat quality, defining reference conditions and design criteria, and measuring restoration progress. Hydroacoustic instruments provide many ways to measure complexity on large rivers, yet substantive questions remain about which variables and scales of complexity are meaningful to biota, and how complexity can be measured and monitored cost-effectively. We explore these issues on the Missouri River, using the example of channel re-engineering projects that are intended to aid in the recovery of the pallid sturgeon, an endangered benthic fish. We are refining our understanding of what habitat complexity means for adult fish by combining hydroacoustic habitat assessments with acoustic telemetry to map locations during reproductive migrations and spawning. These data indicate that migrating sturgeon select points with relatively low velocity adjacent to areas of high velocity (that is, with high velocity gradients); the integration of such points defines pathways that minimize energy expenditures during upstream migrations of tens to hundreds of km. Complexity metrics that efficiently quantify migration potential at the reach scale are therefore directly relevant to channel restoration strategies. We are also exploring complexity as it relates to larval sturgeon dispersal. Larvae may drift for as many as 17 days (hundreds of km at mean velocities) before using up their yolk sac, after which they "settle" into habitats where they initiate feeding. An assumption underlying channel re-engineering is that additional channel complexity, specifically increased shallow, slow water, is necessary for early feeding and refugia. Development of
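
    One reach-scale metric consistent with the velocity-gradient observation above can be computed directly from a gridded velocity field; a hypothetical sketch (the grid, field, and metric definition are illustrative, not the authors' published method):

        import numpy as np

        def mean_velocity_gradient(v, cell_m):
            """Mean magnitude of the depth-averaged velocity gradient (1/s)
            over a 2D velocity raster v (m/s) with square cells of size cell_m (m)."""
            dv_dy, dv_dx = np.gradient(v, cell_m)
            return float(np.hypot(dv_dx, dv_dy).mean())

        # Toy field: velocity increasing across the channel, 10 m cells.
        v = np.fromfunction(lambda i, j: 0.3 + 0.02 * j, (50, 80))
        print(f"reach-scale gradient metric: {mean_velocity_gradient(v, 10.0):.4f} 1/s")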

  13. [A Handling Qualities Metric for Damaged Aircraft]

    Science.gov (United States)

    Cogan, Bruce; Hayes, Peggy

    2009-01-01

    In recent flight tests of F-15 Intelligent Flight Control System (IFCS), software simulated aircraft control surface failures were inserted to evaluate the IFCS adaptive systems. The failure commanded the left stabilator to a fixed position. The adaptive system uses a neural network that is designed to change control law gains, in the event of damage (real or simulated), that allows the aircraft to fly as it had before the damage. The performance of the adaptive system was assessed in terms of its ability to re-establish good onboard model tracking and its ability to decouple roll and pitch response.

  14. Evaluating health-related quality of life in type 1 diabetes: a systematic literature review of utilities for adults with type 1 diabetes

    Science.gov (United States)

    Smith-Palmer, Jayne; Bae, Jay P; Boye, Kristina S; Norrbacka, Kirsi; Hunt, Barnaby; Valentine, William J

    2016-01-01

    Background and aims Type 1 diabetes is a chronic condition associated with micro- and macrovascular complications that have a notable impact on health-related quality of life, the magnitude of which can be quantified via the use of utility values. The aim of this review was to conduct a systematic literature review to identify and compare published health state utility values for adults with type 1 diabetes, both with and without diabetes-related complications. Methods Literature searches of the PubMed, EMBASE, and Cochrane Library databases were performed to identify English language studies on adults with type 1 diabetes, published from 2000 onward, reporting utility values for patients with or without diabetes-related complications or assessing the impact of changes in HbA1c or body mass index on quality of life. For inclusion, studies were required to report utilities elicited using validated methods. Results A total of 20 studies were included in the final review that included utility values elicited using the EuroQol five dimensions questionnaire (n=9), 15D questionnaire (n=2), Quality of Well-Being scale (n=4), time trade-off (n=3), and standard gamble (n=2) methods. For patients with no complications, reported utility values ranged from 0.90 to 0.98. Complications including stroke (reported disutility range, −0.105 to −0.291), neuropathy (range, −0.055 to −0.358), and blindness (range, −0.132 to −0.208) were associated with the largest decrements in utility values. The magnitude of utility values and utility decrements was influenced by the assessment method used. Conclusion Complications lead to impaired health-related quality of life in patients with type 1 diabetes, the magnitude of which is influenced by the method used to determine utilities. There is currently a lack of utility data for certain complications of type 1 diabetes, meaning that many economic evaluations have relied on a combination of type 1 and type 2 diabetes utilities
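
    In cost-effectiveness models, such values are typically combined by subtracting complication disutilities from a baseline health-state utility. A sketch using midpoints of the ranges reported above (the additive combination rule and the midpoint choice are modeling assumptions, not findings of the review):

        BASELINE = 0.94           # midpoint of the 0.90-0.98 range, no complications
        DISUTILITY = {            # midpoints of the reported decrement ranges
            "stroke": 0.198,      # range 0.105-0.291
            "neuropathy": 0.207,  # range 0.055-0.358
            "blindness": 0.170,   # range 0.132-0.208
        }

        def state_utility(complications):
            """Additive model: baseline minus each complication's decrement."""
            u = BASELINE - sum(DISUTILITY[c] for c in complications)
            return max(u, 0.0)    # utilities floored at 0

        print(f"{state_utility(['neuropathy']):.3f}")           # 0.733
        print(f"{state_utility(['stroke', 'blindness']):.3f}")  # 0.572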

  15. STATISTICAL ANALYSIS FOR OBJECT ORIENTED DESIGN SOFTWARE SECURITY METRICS

    OpenAIRE

    Amjan.Shaik; Dr.C.R.K.Reddy; Dr.A.Damodaran

    2010-01-01

    In the last decade, empirical studies on object-oriented design metrics have shown some of them to be useful for predicting the fault-proneness of classes in object-oriented software systems. In the era of computerization, the object-oriented paradigm is becoming more and more pronounced. This has provoked the need for high-quality object-oriented software, as the traditional metrics cannot be applied to object-oriented systems. This paper gives an evaluation of the CK suite of metrics. There are q...

  16. Health-Related Quality of Life and Health Service Utilization in Chinese Rural-to-Urban Migrant Workers

    Directory of Open Access Journals (Sweden)

    Chu-Hong Lu

    2015-02-01

    Full Text Available Objectives: The number of rural-to-urban migrant workers has been increasing rapidly in China over recent decades, but there is a scarcity of data on health-related quality of life (HRQOL) and health service utilization among Chinese rural-to-urban migrant workers in comparison to local urban residents. We aimed to address this question. Methods: This was a cross-sectional study of 2315 rural-to-urban migrant workers and 2347 local urban residents in the Shenzhen-Dongguan economic zone (China) in 2013. Outcomes included HRQOL (measured by the Health Survey Short Form 36) and health service utilization (self-reported). Results: Compared to local urban residents, rural-to-urban migrant workers had lower scores in all domains of HRQOL, and were more likely to report chronic illnesses (9.2% vs. 6.0%, adjusted OR = 1.62, 95% CI 1.28–2.04) and recent two-week morbidity (21.3% vs. 5.0%, adjusted OR = 5.41, 95% CI 4.26–6.88). Among individuals who reported sickness in the recent two weeks, migrant workers were much less likely to see a doctor (32.7% vs. 66.7%, adjusted OR = 0.21, 95% CI 0.13–0.36). Conclusions: Chinese rural-to-urban migrant workers have lower HRQOL and much more frequent morbidity, but are also much less likely to see a doctor in times of sickness as compared to local urban residents, indicating the existence of significant unmet medical care needs in this population.

  17. Quality demand, raw material utilization and costs at a marked increase in the use of forest fuels; Kvalitetskrav, raavaruutnyttjande och kostnader vid kraftigt oekad anvaendning av skogsbraensle

    Energy Technology Data Exchange (ETDEWEB)

    Arlinger, John; Brunberg, Bengt; Eriksson, Mats; Thor, Magnus [Forestry Research Inst. of Sweden, Uppsala (Sweden)

    2001-03-01

    The work was carried out in three steps: (1) mapping of the present quality of forest fuels at heating and cogeneration utilities and pellets producers; (2) calculation of the gross supply of forest fuels in three forestry administrations at AssiDomaen in southern, central and northern Sweden; and (3) analysis of costs and raw material utilization in the same three forestry administrations. A very detailed description of the results is given in three appendices.

  18. Metrical Phonology: German Sound System.

    Science.gov (United States)

    Tice, Bradley S.

    Metrical phonology, a linguistic process of phonological stress assessment and diagrammatic simplification of sentence and word stress, is discussed as it is found in the English and German languages. The objective is to promote use of metrical phonology as a tool for enhancing instruction in stress patterns in words and sentences, particularly in…

  19. Metrics for Hard Goods Merchandising.

    Science.gov (United States)

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in hard goods merchandising, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  20. Metrics for Soft Goods Merchandising.

    Science.gov (United States)

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in soft goods merchandising, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  1. Conversion to the Metric System

    Science.gov (United States)

    Crunkilton, John C.; Lee, Jasper S.

    1974-01-01

    The authors discuss background information about the metric system and explore the effect of metrication of agriculture in areas such as equipment calibration, chemical measurement, and marketing of agricultural products. Suggestions are given for possible leadership roles and approaches that agricultural education might take in converting to the…

  2. Numerical Calabi-Yau metrics

    CERN Document Server

    Douglas, M R; Lukic, S; Reinbacher, R; Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, Rene

    2006-01-01

    We develop numerical methods for approximating Ricci flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics, and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one parameter family of quintics. We also suggest several ways to extend our results.

  3. Metric Supplement to Technical Drawing.

    Science.gov (United States)

    Henschel, Mark

    This manual is intended for use in training persons whose vocations involve technical drawing to use the metric system of measurement. It could be used in a short course designed for that purpose or for individual study. The manual begins with a brief discussion of the rationale for conversion to the metric system. It then provides a…

  4. Some Results on Metric Trees

    CERN Document Server

    Aksoy, Asuman Guven

    2010-01-01

    Using isometric embeddings of metric trees into Banach spaces, this paper investigates barycenters, type and cotype, and various measures of compactness of metric trees. A metric tree (T, d) is a metric space such that between any two of its points there is a unique arc that is isometric to an interval in $\mathbb{R}$. We begin our investigation by examining isometric embeddings of metric trees into Banach spaces. We then investigate the possible images $x_0 = \pi((x_1 + \ldots + x_n)/n)$, where $\pi$ is a contractive retraction from the ambient Banach space $X$ onto $T$ (such a $\pi$ always exists), in order to understand the "metric" barycenter of a family of points $x_1, \ldots, x_n$ in a tree $T$. Further, we consider metric properties of trees such as their type and cotype. We identify various measures of compactness of metric trees (their covering numbers, $\epsilon$-entropy and Kolmogorov widths) and the connections between them. Additionally, we prove that the limit of the sequence of Kolmogorov...
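
    A standard equivalent characterization (stated here as background, not from the paper's abstract): a geodesic metric space $(T, d)$ is a metric tree if and only if it satisfies the four-point condition

        $d(x,y) + d(u,v) \le \max\{\, d(x,u) + d(y,v),\ d(x,v) + d(y,u) \,\}$ for all $x, y, u, v \in T$,

    i.e., it is 0-hyperbolic in the sense of Gromov.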

  5. Generalized metric spaces and mappings

    CERN Document Server

    Lin, Shou

    2016-01-01

    The idea of mutual classification of spaces and mappings is one of the main research directions of point-set topology. In a systematic way, this book discusses the basic theory of generalized metric spaces by using the mapping method, and summarizes the most important research achievements, particularly those of Chinese scholars, in the theory of spaces and mappings since the 1960s. This book has three chapters, two appendices and a list of more than 400 references. The chapters are "The origin of generalized metric spaces", "Mappings on metric spaces" and "Classes of generalized metric spaces". Graduate students or senior undergraduates majoring in mathematics can use this book as a text to study the theory of generalized metric spaces. Researchers in this field can also use this book as a valuable reference.

  6. A Taxonomy of Metrics for Hosted Databases

    Directory of Open Access Journals (Sweden)

    Jordan Shropshire

    2006-04-01

    Full Text Available The past three years have seen exponential growth in the number of organizations who have elected to entrust core information technology functions to application service providers. Of particular interest is the outsourcing of critical systems such as corporate databases. Major banks and financial service firms are contracting with third-party organizations, sometimes overseas, for their database needs. These sophisticated contracts require careful supervision by both parties. Due to the complexities of web-based applications and the complicated nature of databases, an entire class of software suites has been developed to measure the quality of service the database is providing. This article investigates the performance metrics which have evolved to satisfy this need and describes a taxonomy of performance metrics for hosted databases.

  7. Effect of ruminal vs postruminal administration of degradable protein on utilization of low-quality forage by beef steers.

    Science.gov (United States)

    Bandyk, C A; Cochran, R C; Wickersham, T A; Titgemeyer, E C; Farmer, C G; Higgins, J J

    2001-01-01

    An experiment was designed to determine the effects of ruminal and postruminal infusions of ruminally degradable protein (casein) on intake and digestion of low-quality hay by beef steers. Twelve ruminally fistulated Angus x Hereford steers (initial BW = 563 kg) were blocked by weight and assigned to one of three treatments: control (C; hay only) or hay plus ruminal (R) or postruminal (P) infusion of 400 g/d of sodium caseinate. The trial consisted of five periods: 1) 10-d adaptation to the hay diet; 2) 7-d measurement of hay intake (without infusions); 3) 10-d adaptation to protein infusion treatments (intake measurements continued); 4) 7-d measurement of hay intake and digestibility (infusions continued); and 5) 3-d ruminal sampling period (infusions continued). Steers were given ad libitum access to tallgrass-prairie hay (3.4% CP, 76.6% NDF) throughout the study. Casein was administered once daily before feeding, either directly into the rumen or via anchored infusion lines into the abomasum. Hay intake was increased by supplementation, and ruminal infusion elicited a greater (P = 0.04) increase in hay intake than postruminal infusion. Intake tended (P = 0.11) to be lower in period 4 than in period 2 for control steers but was greater in period 4 than in period 2 for supplemented steers. Overall, infusion of a degradable protein source improved forage utilization, although the response in forage OM intake and total digestible OM intake was greater for ruminal infusion than for postruminal infusion.

  8. METRICS DEVELOPMENT FOR PATENTS.

    Science.gov (United States)

    Veiga, Daniela Francescato; Ferreira, Lydia Masako

    2015-01-01

    To develop a proposal for metrics for patents to be applied in assessing the postgraduate programs of Medicine III - Capes. From the reading and analysis of the 2013 area documents of all 48 Capes areas, a proposal for metrics for patents was developed to be applied in Medicine III programs. Except for the areas of Biotechnology, Food Science, Biological Sciences III, Physical Education, Engineering I, III and IV, and Interdisciplinary, most areas do not adopt a scoring system for patents. The proposal developed was based on the criteria of Biotechnology, with adaptations. In general, deposit, granting, and licensing/production are valued in ascending order. Higher scores are also assigned to patents registered abroad and whenever students participate. This proposal can be applied to the item Intellectual Production of the evaluation form, in the subsection Technical Production/Patents. The percentages of 10% for academic programs and 40% for professional Master's programs should be maintained. A program will be scored as Very Good when it reaches 400 points or more; Good, between 200 and 399 points; Regular, between 71 and 199 points; Weak, up to 70 points; and Insufficient, with no points.
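
    The abstract fixes the program score bands but not the per-patent point values; the sketch below implements the described scheme with hypothetical weights (only the bands and the ascending deposit/grant/licensing valuation come from the abstract).

    ```python
    # Hypothetical point values; the ascending ordering (deposit < grant <
    # licensing) and the program bands are from the abstract, the numbers are not.
    STAGE_POINTS = {"deposit": 20, "grant": 50, "licensing": 100}

    def patent_points(stage, registered_abroad=False, student_participation=False):
        points = STAGE_POINTS[stage]
        if registered_abroad:
            points *= 2        # assumed bonus for registration abroad
        if student_participation:
            points += 10       # assumed bonus for student involvement
        return points

    def program_rating(total):
        if total >= 400: return "Very Good"
        if total >= 200: return "Good"
        if total >= 71:  return "Regular"
        if total > 0:    return "Weak"
        return "Insufficient"

    total = patent_points("grant", registered_abroad=True) + patent_points("deposit")
    print(total, program_rating(total))   # 120 Regular
    ```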

  9. GPS Metric Tracking Unit

    Science.gov (United States)

    2008-01-01

    As Global Positioning Satellite (GPS) applications become more prevalent for land- and air-based vehicles, GPS applications for space vehicles will also increase. The Applied Technology Directorate of Kennedy Space Center (KSC) has developed a lightweight, low-cost GPS Metric Tracking Unit (GMTU), the first of two steps in developing a lightweight, low-cost Space-Based Tracking and Command Subsystem (STACS) designed to meet Range Safety's link margin and latency requirements for vehicle command and telemetry data. The goals of STACS are to improve Range Safety operations and expand tracking capabilities for space vehicles. STACS will track the vehicle, receive commands, and send telemetry data through the space-based asset, which will dramatically reduce dependence on ground-based assets. The other step was the Low-Cost Tracking and Data Relay Satellite System (TDRSS) Transceiver (LCT2), developed by the Wallops Flight Facility (WFF), which allows the vehicle to communicate with a geosynchronous relay satellite. Although the GMTU and LCT2 were independently implemented and tested, the design collaboration of KSC and WFF engineers allowed GMTU and LCT2 to be integrated into one enclosure, leading to the final STACS. In operation, GMTU needs only a radio frequency (RF) input from a GPS antenna and outputs position and velocity data to the vehicle through a serial or pulse code modulation (PCM) interface. GMTU includes one commercial GPS receiver board and a custom board, the Command and Telemetry Processor (CTP) developed by KSC. The CTP design is based on a field-programmable gate array (FPGA) with embedded processors to support GPS functions.

  10. Sharp metric obstructions for quasi-Einstein metrics

    CERN Document Server

    Case, Jeffrey S

    2011-01-01

    Using the tractor calculus to study conformally warped manifolds, we adapt results of Gover and Nurowski to give sharp metric obstructions to the existence of quasi-Einstein metrics on suitably generic manifolds. We do this by introducing an analogue of the curvature tractor, itself the tractor analogue of the curvature of the Fefferman-Graham ambient metric. We then use these obstructions to produce a tensorial invariant which is polynomial in the Riemann curvature and its divergence, and which gives the desired obstruction. In particular, this leads to a generalization to arbitrary dimensions of an algorithm due to Bartnik and Tod for finding static metrics. We also explore the consequences of this work for gradient Ricci solitons, finding an obstruction to their existence on suitably generic manifolds, and observing an interesting similarity between the nonnegativity of the curvature tractor and Hamilton's matrix Harnack inequality.

  11. More on effective composite metrics

    Science.gov (United States)

    Heisenberg, Lavinia

    2015-07-01

    In this work we study different classes of effective composite metrics proposed in the context of one-loop quantum corrections in bimetric gravity. For this purpose we consider contributions of the matter loops in the form of cosmological constants and potential terms yielding two types of effective composite metrics. This guarantees a nice behavior at the quantum level. However, the theoretical consistency at the classical level needs to be ensured additionally. It turns out that among all these possible couplings, only one unique effective metric survives these criteria at the classical level.

  12. More on effective composite metrics

    CERN Document Server

    Heisenberg, Lavinia

    2015-01-01

    In this work we study different classes of effective composite metrics proposed in the context of one-loop quantum corrections in bimetric gravity. For this purpose we consider contributions of the matter loops in the form of cosmological constants and potential terms yielding two types of effective composite metrics. This guarantees a nice behaviour at the quantum level. However, the theoretical consistency at the classical level needs to be ensured additionally. It turns out that among all these possible couplings only one unique effective metric survives these criteria at the classical level.

  13. Generalized Painlevé-Gullstrand metrics

    Energy Technology Data Exchange (ETDEWEB)

    Lin Chunyu [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China)], E-mail: l2891112@mail.ncku.edu.tw; Soo Chopin [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China)], E-mail: cpsoo@mail.ncku.edu.tw

    2009-02-02

    An obstruction to the implementation of spatially flat Painlevé-Gullstrand (PG) slicings is demonstrated, and explicitly discussed for Reissner-Nordström and Schwarzschild-anti-de Sitter spacetimes. Generalizations of PG slicings which are not spatially flat but which remain regular at the horizons are introduced. These metrics can be obtained from standard spherically symmetric metrics by physical Lorentz boosts. With these generalized PG metrics, problematic contributions to the imaginary part of the action in the Parikh-Wilczek derivation of Hawking radiation due to the obstruction can be avoided.

  14. Daylight metrics and energy savings

    Energy Technology Data Exchange (ETDEWEB)

    Mardaljevic, John; Heschong, Lisa; Lee, Eleanor

    2009-12-31

    The drive towards sustainable, low-energy buildings has increased the need for simple yet accurate methods to evaluate whether a daylit building meets minimum standards for energy and human comfort performance. Current metrics account neither for the temporal and spatial aspects of daylight nor for occupants' comfort or interventions. This paper reviews the historical basis of current compliance methods for achieving daylit buildings, proposes a technical basis for the development of better metrics, and provides two case-study examples to stimulate dialogue on how metrics can be applied in a practical, real-world context.

  15. Conformal Patterson-Walker metrics

    CERN Document Server

    Hammerl, Matthias; Šilhan, Josef; Taghavi-Chabert, Arman; Žádník, Vojtěch

    2016-01-01

    The classical Patterson-Walker construction of a split-signature (pseudo-)Riemannian structure from a given torsion-free affine connection is generalized to a construction of a split-signature conformal structure from a given projective class of connections. A characterization of the induced structures is obtained. We achieve a complete description of Einstein metrics in the conformal class formed by the Patterson-Walker metric. Finally, we describe all symmetries of the conformal Patterson-Walker metric. In both cases we obtain descriptions in terms of geometric data on the original structure.

  16. Evaluating the Impact of Parent-Reported Medical Home Status on Children's Health Care Utilization, Expenditures, and Quality: A Difference-in-Differences Analysis with Causal Inference Methods.

    Science.gov (United States)

    Han, Bing; Yu, Hao; Friedberg, Mark W

    2017-04-01

    To evaluate the effects of the parent-reported medical home status on health care utilization, expenditures, and quality for children. Medical Expenditure Panel Survey (MEPS) during 2004-2012, including a total of 9,153 children who were followed up for 2 years in the survey. We took a causal difference-in-differences approach using inverse probability weighting and doubly robust estimators to study how changes in medical home status over a 2-year period affected children's health care outcomes. Our analysis adjusted for children's sociodemographic, health, and insurance statuses. We conducted sensitivity analyses using alternative statistical methods, different approaches to outliers and missing data, and accounting for possible common-method biases. Compared with children whose parents reported having medical homes in both years 1 and 2, those who had medical homes in year 1 but lost them in year 2 had significantly lower parent-reported ratings of health care quality and higher utilization of emergency care. Compared with children whose parents reported having no medical homes in both years, those who did not have medical homes in year 1 but gained them in year 2 had significantly higher ratings of health care quality, but no significant differences in health care expenditures and utilization. Having a medical home may help improve health care quality for children; losing a medical home may lead to higher utilization of emergency care. © Health Research and Educational Trust.
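
    The estimation strategy can be sketched on synthetic data; this is the generic inverse-probability-weighted difference-in-differences estimator of the effect on the treated, not the authors' exact specification, and all variable names are invented.

    ```python
    # Generic IPW difference-in-differences on simulated two-year panel data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.normal(size=(n, 2))                    # child-level covariates
    treated = rng.random(n) < 1 / (1 + np.exp(-(x[:, 0] - 0.5)))  # e.g., lost medical home
    y1 = x[:, 0] + rng.normal(size=n)              # year-1 outcome (e.g., ED visits)
    y2 = y1 + 0.3 * treated + rng.normal(size=n)   # year-2 outcome; true effect = 0.3

    ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]
    w = ps / (1 - ps)                              # odds weights for the comparison group

    delta = y2 - y1
    att = delta[treated].mean() - np.average(delta[~treated], weights=w[~treated])
    print(f"IPW DiD estimate of the effect on the treated: {att:.3f}")
    ```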

  17. A Preliminary Investigation of the Utility of the "Behavior Support Plan Quality Evaluation Guide II" for Use in Australia

    Science.gov (United States)

    Webber, Lynne S.; McVilly, Keith R.; Fester, Tarryn; Zazelis, Telly

    2011-01-01

    Background: The quality of behaviour support plans (BSPs) can be an important influence on the quality of the support provided to people with disability who show challenging behaviours. The Behavior Support Plan Quality Evaluation Guide II (BSP-QE II) is one tool that may be useful in assessing the quality of behaviour support plans. It has…

  18. Obtaining the Knowledge of a Server Performance from Non-Intrusively Measurable Metrics

    Directory of Open Access Journals (Sweden)

    Satoru Ohta

    2016-04-01

    Full Text Available Most network services are provided by server computers. To provide these services with good quality, server performance must be managed adequately. For server management, performance information is commonly obtained from the operating system (OS) and hardware of the managed computer. However, this method has a disadvantage: if performance is degraded by excessive load or hardware faults, it becomes difficult to collect and transmit the information. Thus, it is necessary to obtain the information without interfering with the server's OS and hardware. This paper investigates a technique that utilizes non-intrusively measurable metrics obtained through passive traffic monitoring and through electric currents monitored by sensors attached to the power supply. However, these metrics do not directly represent the performance experienced by users. Hence, it is necessary to discover the complicated function that maps the metrics to the true performance information. To discover this function from the measured samples, a machine learning technique based on a decision tree is examined. The technique is important because it is applicable to the power management of server clusters and the migration control of virtual servers.
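
    A toy sketch of the paper's general idea follows; the two features (wire traffic and supply current) match the kinds of non-intrusive measurements described, but the numbers, labels, and tree settings are invented.

    ```python
    # Learn a mapping from non-intrusive measurements to a performance class,
    # without querying the managed server's own OS or hardware counters.
    from sklearn.tree import DecisionTreeClassifier

    # Each row: [packets per second seen on the wire, mean supply current (A)]
    X = [[1200, 0.8], [5200, 1.4], [9800, 2.1], [900, 0.7], [8700, 2.0], [4900, 1.3]]
    y = ["good", "good", "degraded", "good", "degraded", "good"]   # observed quality

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(model.predict([[9000, 2.2]]))   # -> ['degraded']
    ```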

  19. Revision and extension of Eco-LCA metrics for sustainability assessment of the energy and chemical processes.

    Science.gov (United States)

    Yang, Shiying; Yang, Siyu; Kraslawski, Andrzej; Qian, Yu

    2013-12-17

    Ecologically based life cycle assessment (Eco-LCA) is an appealing approach for evaluating the resource utilization and environmental impacts of the process industries on an ecological scale. However, the aggregated metrics of Eco-LCA suffer from some drawbacks: the environmental impact metric has limited applicability; the resource utilization metric ignores indirect consumption; the renewability metric fails to address quantitative distinctions in resource availability; and the productivity metric seems self-contradictory. In this paper, the existing Eco-LCA metrics are revised and extended for sustainability assessment of energy and chemical processes. A new Eco-LCA metrics system is proposed, comprising four independent dimensions: environmental impact, resource utilization, resource availability, and economic effectiveness. An illustrative comparison of a gas boiler and a solar boiler process provides insight into the features of the proposed approach.

  20. Development of a noise metric for assessment of exposure risk to complex noises.

    Science.gov (United States)

    Zhu, Xiangdong; Kim, Jay H; Song, Won Joon; Murphy, William J; Song, Seongho

    2009-08-01

    Many noise guidelines currently use the A-weighted equivalent sound pressure level L(Aeq) as the noise metric and the equal energy hypothesis to assess the risk of occupational noises. Because of the time-averaging involved in this procedure, current guidelines may significantly underestimate the risk associated with complex noises. This study develops and evaluates several new noise metrics for more accurate assessment of exposure risks to complex and impulsive noises. The analytic wavelet transform was used to obtain time-frequency characteristics of the noise. Six basic, unique metric forms that reflect these time-frequency characteristics were developed, from which 14 noise metrics were derived. The noise metrics were evaluated using existing animal test data obtained by exposing 23 groups of chinchillas to different types of noise. Correlations of the metrics with the hearing losses observed in the chinchillas were compared, and the most promising noise metric was identified.
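
    The time-averaging critique is easy to demonstrate: the equivalent continuous level is computed from mean squared pressure, so sparse impulses barely move it. The sketch below uses the standard Leq definition on synthetic signals (the A-weighting filter is omitted for brevity).

    ```python
    # Equivalent continuous sound level from a pressure trace (Pa); reference
    # pressure p0 = 20 micropascals. A-weighting is omitted in this toy example.
    import numpy as np

    def leq_db(pressure_pa, p_ref=20e-6):
        return 10.0 * np.log10(np.mean(pressure_pa ** 2) / p_ref ** 2)

    fs = 48_000
    t = np.arange(fs) / fs
    steady = 0.2 * np.sin(2 * np.pi * 1000 * t)   # steady 1 kHz tone, ~77 dB Leq
    impulsive = steady.copy()
    impulsive[::4800] += 2.0                      # ten sparse impulses, peak +20 dB

    # The impulses raise Leq by only a fraction of a decibel, illustrating how
    # time-averaged metrics can understate the hazard of impulsive noise.
    print(leq_db(steady), leq_db(impulsive))
    ```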

  1. Predicting EuroQoL 5 Dimensions 5 Levels (EQ-5D-5L) Utilities from Older People's Quality of Life Brief Questionnaire (OPQoL-Brief) Scores.

    Science.gov (United States)

    Kaambwa, Billingsley; Ratcliffe, Julie

    2017-06-16

    Economic evaluation of healthcare treatments and services targeted at older people requires the measurement of utility-based quality-of-life outcomes, but it is not always possible to collect such outcome data. It may, however, be possible to estimate these outcomes using non-utility measures of quality of life where the latter have been collected. The objective of this study was to develop a regression-based algorithm to map a non-utility-based outcome, the Older People's Quality of Life brief questionnaire (OPQoL-brief), onto a utility-based outcome, the EuroQoL 5 Dimensions 5 Levels (EQ-5D-5L). The estimation sample comprised 330 community-based Australian older people (>65 years), while the validation sample consisted of 293 older people from a separate study. Six regression techniques were employed to estimate utilities from OPQoL-brief. The predictive accuracy of 54 regression models (six regression techniques × nine model specifications) was assessed using six criteria: mean absolute error (MAE), root mean squared error (RMSE), correlation, distribution of predicted utilities, distribution of residuals, and proportion of predictions with small absolute errors. The best-performing model was an ordinary least squares (OLS) model in which the OPQoL-brief items were included as continuous variables (OLS 4). RMSE and MAE estimates for this model (0.2201 and 0.1638, respectively) were within the range of published estimates. It is possible to predict valid utilities from OPQoL-brief using regression methods. We recommend OLS model (4) for this exercise.
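
    A minimal sketch of the mapping exercise on synthetic data follows; it mirrors the preferred model's shape (items entered as continuous predictors in OLS, judged by MAE/RMSE), but the data, the 13-item assumption, and the clipping range are invented for illustration.

    ```python
    # Toy mapping of a 13-item profile score onto utilities with OLS, evaluated
    # out of sample by MAE and RMSE, in the spirit of the paper's OLS model (4).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error, mean_squared_error

    rng = np.random.default_rng(1)

    def simulate(n, w):
        X = rng.integers(1, 6, size=(n, 13)).astype(float)   # assumed 13 items
        y = np.clip(0.2 + X @ w + rng.normal(0, 0.1, n), -0.3, 1.0)
        return X, y

    w = rng.normal(0, 0.02, size=13)
    X_est, y_est = simulate(330, w)      # estimation sample size from the paper
    X_val, y_val = simulate(293, w)      # validation sample size from the paper

    model = LinearRegression().fit(X_est, y_est)
    pred = np.clip(model.predict(X_val), -0.3, 1.0)  # assumed plausible utility range
    print("MAE :", round(mean_absolute_error(y_val, pred), 4))
    print("RMSE:", round(mean_squared_error(y_val, pred) ** 0.5, 4))
    ```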

  2. Let's Make Metric Ice Cream

    Science.gov (United States)

    Zimmerman, Marianna

    1975-01-01

    Describes a classroom activity which involved sixth grade students in a learning situation including making ice cream, safety procedures in a science laboratory, calibrating a thermometer, using metric units of volume and mass. (EB)

  3. A unifying process capability metric

    Directory of Open Access Journals (Sweden)

    John Jay Flaig

    2009-07-01

    Full Text Available A new economic approach to process capability assessment is presented, which differs from the commonly used engineering metrics. The proposed metric consists of two economic capability measures – the expected profit and the variation in profit of the process. This dual economic metric offers a number of significant advantages over other engineering or economic metrics used in process capability analysis. First, it is easy to understand and communicate. Second, it is based on a measure of total system performance. Third, it unifies the fraction nonconforming approach and the expected loss approach. Fourth, it reflects the underlying interest of management in knowing the expected financial performance of a process and its potential variation.
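
    Under simple stated assumptions (normally distributed output, fixed profit per conforming unit, fixed scrap loss otherwise), the dual economic metric reduces to the mean and variance of a two-point profit variable; the numbers below are hypothetical.

    ```python
    # Expected profit and profit variance of a process with hypothetical economics.
    from scipy.stats import norm

    mu, sigma = 10.0, 0.5               # process mean and standard deviation
    lsl, usl = 9.0, 11.0                # specification limits
    profit_ok, loss_scrap = 4.0, -6.0   # profit per conforming / nonconforming unit

    p_ok = norm.cdf(usl, mu, sigma) - norm.cdf(lsl, mu, sigma)
    expected_profit = p_ok * profit_ok + (1 - p_ok) * loss_scrap
    variance = p_ok * (1 - p_ok) * (profit_ok - loss_scrap) ** 2   # two-point variable

    print(f"P(conforming) = {p_ok:.4f}")
    print(f"expected profit = {expected_profit:.3f}, profit variance = {variance:.3f}")
    ```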

  4. Let's Make Metric Ice Cream

    Science.gov (United States)

    Zimmerman, Marianna

    1975-01-01

    Describes a classroom activity which involved sixth grade students in a learning situation including making ice cream, safety procedures in a science laboratory, calibrating a thermometer, using metric units of volume and mass. (EB)

  5. Phantom metrics with Killing spinors

    Directory of Open Access Journals (Sweden)

    W.A. Sabra

    2015-11-01

    Full Text Available We study metric solutions of Einstein–anti-Maxwell theory admitting Killing spinors. The analogue of the IWP metric which admits a space-like Killing vector is found and is expressed in terms of a complex function satisfying the wave equation in flat (2+1)-dimensional space–time. As examples, electric and magnetic Kasner spaces are constructed by allowing the solution to depend only on the time coordinate. Euclidean solutions are also presented.

  6. Coverage Metrics for Model Checking

    Science.gov (United States)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.

  7. A Metric for Heterotic Moduli

    CERN Document Server

    Candelas, Philip; McOrist, Jock

    2016-01-01

    Heterotic vacua of string theory are realised, at large radius, by a compact threefold with vanishing first Chern class together with a choice of stable holomorphic vector bundle. These form a wide class of potentially realistic four-dimensional vacua of string theory. Despite all their phenomenological promise, there is little understanding of the metric on the moduli space of these vacua. What is sought is the analogue of special geometry for these vacua. The metric on the moduli space is important in phenomenology as it normalises D-terms and Yukawa couplings. It is also of interest in mathematics, since it generalises the metric, first found by Kobayashi, on the space of gauge field connections to a more general context. Here we construct this metric, correct to first order in alpha', in two ways: first by postulating a metric that is invariant under background gauge transformations of the gauge field, and also by dimensionally reducing heterotic supergravity. These methods agree and the resulting metric is Ka...

  8. Health-Related Quality of Life in Parkinson disease: Correlation between Health Utilities Index III and Unified Parkinson's Disease Rating Scale (UPDRS) in U.S. male veterans

    Directory of Open Access Journals (Sweden)

    Kleiner-Fisman Galit

    2010-08-01

    Full Text Available Abstract Objective: To apply a scaled, preference-based measure to the evaluation of health-related quality of life (HRQoL) in Parkinson's disease (PD); to evaluate the relationship between disease-specific rating scales and estimated HRQoL; and to identify predictors of diminished HRQoL. Background: Scaled, preference-based measures of HRQoL ("utilities") serve as indices of the impact of disease and can be used to generate quality-adjusted estimates of survival for health-economic evaluations. Evaluation of utilities for PD and their correlation with standard rating scales has been limited. Methods: Utilities were generated using the Health Utilities Index Mark III (HUI-III) on consecutive patients attending a PD clinic between October 2003 and June 2006. Disease severity, medical, surgical (subthalamic nucleus deep brain stimulation, STN-DBS), and demographic information were used as model covariates. Predictors of HUI-III utility scores were evaluated using the Wilcoxon rank-sum test and linear regression models. Results: 68 men with a diagnosis of PD and a mean age of 74.0 (SD 7.4) were included in the data analysis. Mean HUI-III utility at first visit was 0.45 (SD 0.33). In multivariable models, UPDRS-II score was a strong predictor of HRQoL (r2 = 0.56). Conclusions: Poor self-care in PD, reflected by worsening UPDRS-II scores, is strongly correlated with low generic HRQoL. HUI-III-based health utilities display convergent validity with the UPDRS-II. These findings highlight the importance of measures of independence as determinants of HRQoL in PD and will facilitate the incorporation of existing UPDRS data into economic analyses of PD therapies.

  9. Do efforts to standardize, assess and improve the quality of health service provision to adolescents by government-run health services in low and middle income countries, lead to improvements in service-quality and service-utilization by adolescents?

    Science.gov (United States)

    Chandra-Mouli, Venkatraman; Chatterjee, Subidita; Bose, Krishna

    2016-02-06

    Researchers and implementers working in adolescent health, and adolescents themselves question whether government-run health services in conservative and resource-constrained settings can be made adolescent friendly. This paper aims to find out what selected low and middle income country (LMIC) governments have set out to do to improve the quality of health service provision to adolescents; whether their efforts led to measurable improvements in quality and to increased health service-utilization by adolescents. We gathered normative guidance and reports from eight LMICs in Asia, Africa, Central and Eastern Europe and the Western Pacific. We analysed national quality standards for adolescent friendly health services, findings from the assessments of the quality of health service provision, and findings on the utilization of health services. Governments of LMICs have set out to improve the accessibility, acceptability, equity, appropriateness and effectiveness of health service provision to adolescents by defining standards and actions to achieve them. Their actions have led to measurable improvements in quality and to increases in health service utilisation by adolescents. With support, government-run health facilities in LMICs can improve the quality of health services and their utilization by adolescents.

  10. The effect of a poverty reduction policy and service quality standards on commune-level primary health care utilization in Thai Nguyen Province, Vietnam.

    Science.gov (United States)

    Nguyen, Phuong; Bich Hanh, Duong; Lavergne, M Ruth; Mai, Tung; Nguyen, Quang; Phillips, James F; Hughes, Jane; Van Thuc, Ha

    2010-07-01

    Although universal access to quality health services is a primary policy goal of the Government of Vietnam (GOVN), economic restructuring and privatization of health services have been associated with emerging inequities in access to care. A GOVN programme for socio-economic development known as Program 135 (P135) designates communes known to be relatively poor as priority localities for development resources. Under this programme, basic curative and preventive health services, including some prescription drugs, are provided free of charge at commune health centres (CHCs). In an effort to improve the quality of care provided at CHCs, the national Ministry of Health (MOH) has implemented a set of national benchmarks for commune health care, which defines a minimum configuration of equipment, staff, training and other elements of service provision. This research examines the impact of P135 poverty reduction policy, achievement of MOH benchmark indicators and commune socio-economic characteristics on CHC utilization rates in Thai Nguyen Province, Vietnam. The analysis uses administrative data reported from 178 CHCs in Thai Nguyen Province for nine quarters, including 2004, 2005 and the first quarter of 2006. Mixed linear regression models are used to estimate the main and interaction effects on utilization rates of exposure to the P135 policies, achievement of MOH benchmarks, poverty, distance to the district hospital and ethnic composition. Communes that are poor and remote have comparatively high CHC utilization rates. Multivariate regression results suggest that communes exposed to the P135 policy have higher utilization rates, but these effects are conditional upon achievement of benchmark standards, thus perceived quality care enhances CHC utilization. Combining Program P135 with benchmark investment reduced the gap between primary health care utilization in poor communes versus those that are less poor. These commune-level findings suggest that CHC policies

  11. Ring-push metric learning for person reidentification

    Science.gov (United States)

    He, Botao; Yu, Shaohua

    2017-05-01

    Person reidentification (re-id) has been widely studied because of its extensive use in video surveillance and forensics applications. It aims to find a specific person within a nonoverlapping camera network, which is highly challenging due to large variations in cluttered background, human pose, and camera viewpoint. We present a metric learning algorithm for learning a Mahalanobis distance for re-id. Generally speaking, two forces act in the conventional metric learning process: a pulling force that draws points of the same class closer, and a pushing force that drives points of different classes as far apart as possible. We argue that, when only a limited number of training data are given, forcing interclass distances to be as large as possible may drive the metric to overfit the uninformative parts of the images, such as noise and background. To alleviate overfitting, we propose the ring-push metric learning algorithm. Unlike other metric learning methods, which only punish interclass distances that are too small, the proposed method punishes both too-small and too-large interclass distances. By introducing the generalized logistic function as the loss, we formulate ring-push metric learning as a convex optimization problem and use the projected gradient descent method to solve it. Experimental results on four public datasets demonstrate the effectiveness of the proposed algorithm.
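
    The ring idea can be sketched with a squared-hinge surrogate in place of the paper's generalized logistic loss: same-class pairs are pulled together, different-class pairs are pushed outside an inner radius but also kept inside an outer one, and the metric is projected back onto the positive semidefinite cone after each gradient step. Radii, step size, and loss shape here are assumptions.

    ```python
    # Simplified ring-push Mahalanobis metric learning via projected gradient descent.
    import numpy as np

    def ring_push_learn(X, y, r_in=1.0, r_out=4.0, lr=0.01, epochs=100):
        n, d = X.shape
        M = np.eye(d)
        for _ in range(epochs):
            G = np.zeros((d, d))
            for i in range(n):
                for j in range(i + 1, n):
                    diff = (X[i] - X[j])[:, None]
                    outer = diff @ diff.T
                    dist2 = float(diff.T @ M @ diff)
                    if y[i] == y[j]:
                        G += outer            # pull same-class pairs together
                    elif dist2 < r_in:
                        G -= outer            # push: different classes too close
                    elif dist2 > r_out:
                        G += outer            # pull back: the "ring" upper bound
            M -= lr * G
            w, V = np.linalg.eigh(M)          # project onto the PSD cone
            M = (V * np.clip(w, 0.0, None)) @ V.T
        return M

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(2, 1, (20, 3))])
    y = np.array([0] * 20 + [1] * 20)
    print(np.linalg.eigvalsh(ring_push_learn(X, y)))   # all eigenvalues >= 0
    ```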

  12. An inheritance complexity metric for object-oriented code: A cognitive approach

    Indian Academy of Sciences (India)

    Sanjay Misra; Ibrahim Akman; Murat Koyuncu

    2011-06-01

    Software metrics should be used to improve the productivity and quality of software, because they provide critical information about the reliability and maintainability of the system. In this paper, we propose a cognitive complexity metric for evaluating the design of object-oriented (OO) code. The proposed metric is based on an important feature of OO systems: inheritance. It calculates complexity at the method level considering the internal structure of methods, and it also considers inheritance to calculate the complexity of class hierarchies. The proposed metric is validated both theoretically and empirically. For theoretical validation, principles of measurement theory are applied, since measurement theory has been proposed and extensively used in the literature as a means to evaluate software engineering metrics. We applied our metric to a real project for empirical validation and compared it with the Chidamber and Kemerer (CK) metrics suite. The theoretical, practical and empirical validations and the comparative study prove the robustness of the measure.

  13. Fixed Point Theory for Cyclic Weak $\phi$-Contraction in Fuzzy Metric Spaces

    Directory of Open Access Journals (Sweden)

    M. Hasan

    2012-02-01

    Full Text Available In this paper, we introduce cyclic weak $\phi$-contractions in fuzzy metric spaces and utilize them to prove some results on the existence and uniqueness of fixed points in fuzzy metric spaces. Some related results are also proved, besides furnishing illustrative examples.

  14. Projectively related metrics, Weyl nullity, and metric projectively invariant equations

    CERN Document Server

    Gover, A Rod

    2015-01-01

    A metric projective structure is a manifold equipped with the unparametrised geodesics of some pseudo-Riemannian metric. We make a comprehensive treatment of such structures in the case that there is a projective Weyl curvature nullity condition. The analysis is simplified by a fundamental and canonical 2-tensor invariant that we discover. It leads to a new canonical tractor connection for these geometries which is defined on a rank $(n+1)$-bundle. We show this connection is linked to the metrisability equations that govern the existence of metrics compatible with the structure. The fundamental 2-tensor also leads to a new class of invariant linear differential operators that are canonically associated to these geometries; included is a third equation studied by Gallot et al. We apply the results to study the metrisability equation, in the nullity setting described. We obtain strong local and global results on the nature of solutions and also on the nature of the geometries admitting such solutions, obtaining ...

  15. Metrics for phylogenetic networks II: nodal and triplets metrics.

    Science.gov (United States)

    Cardona, Gabriel; Llabrés, Mercè; Rosselló, Francesc; Valiente, Gabriel

    2009-01-01

    The assessment of phylogenetic network reconstruction methods requires the ability to compare phylogenetic networks. This is the second in a series of papers devoted to the analysis and comparison of metrics for tree-child time consistent phylogenetic networks on the same set of taxa. In this paper, we generalize to phylogenetic networks two metrics that have already been introduced in the literature for phylogenetic trees: the nodal distance and the triplets distance. We prove that they are metrics on any class of tree-child time consistent phylogenetic networks on the same set of taxa, as well as some basic properties for them. To prove these results, we introduce a reduction/expansion procedure that can be used not only to establish properties of tree-child time consistent phylogenetic networks by induction, but also to generate all tree-child time consistent phylogenetic networks with a given number of leaves.

  16. An Objective Quality Assessment Metric for Stereoscopic Images Based on Perceptual Significance

    Institute of Scientific and Technical Information of China (English)

    段芬芳; 邵枫; 蒋刚毅; 郁梅; 李福翠

    2013-01-01

    Stereoscopic image quality assessment is an effective way to evaluate the performance of stereoscopic video systems. However, how to utilize human visual characteristics in quality assessment remains an unsolved issue. In this paper, an objective stereoscopic image quality assessment method is proposed based on perceptual significance. First, by analyzing the effects of visual saliency and distortion on perceptual quality, we construct a perceptual significance model of stereoscopic images. Then, we separate the stereoscopic image into four types of regions: salient distortion regions, salient non-distortion regions, non-salient distortion regions, and non-salient non-distortion regions, and evaluate each independently. Finally, all evaluation results are integrated into an overall score by assigning a different weight to each region. Experimental results show that the proposed method achieves high consistency with subjective assessments of stereoscopic images and effectively reflects the human visual system.

  17. Non-metric chaotic inflation

    Energy Technology Data Exchange (ETDEWEB)

    Enqvist, Kari [Physics Department, University of Helsinki, and Helsinki Institute of Physics, FIN-00014 Helsinki (Finland); Koivisto, Tomi [Institute for Theoretical Physics and Spinoza Institute, Leuvenlaan 4, 3584 CE Utrecht (Netherlands); Rigopoulos, Gerasimos, E-mail: kari.enqvist@helsinki.fi, E-mail: T.S.Koivisto@astro.uio.no, E-mail: rigopoulos@physik.rwth-aachen.de [Institut für Theoretische Teilchenphysik und Kosmologie, RWTH Aachen University, D-52056 Aachen (Germany)

    2012-05-01

    We consider inflation within the context of what is arguably the simplest non-metric extension of Einstein gravity. There, non-metricity is described by a single graviscalar field with a non-minimal kinetic coupling to the inflaton field Ψ, parameterized by a single parameter γ. There is a simple equivalent description in terms of a massless field and an inflaton with a modified potential. We discuss the implications of non-metricity for chaotic inflation and find that it significantly alters the inflaton dynamics for field values Ψ ≳ M_P/γ, dramatically changing the qualitative behaviour in this regime. In the equivalent single-field description this appears as a cuspy potential that forms a barrier beyond which the inflaton becomes a ghost field. This imposes an upper bound on the possible number of e-folds. For the simplest chaotic inflation models, the spectral index and the tensor-to-scalar ratio receive small corrections dependent on the non-metricity parameter. We also argue that significant post-inflationary non-metricity may be generated.

  18. Lagrange Spaces with (γ,β)-Metric

    Directory of Open Access Journals (Sweden)

    Suresh K. Shukla

    2013-01-01

    Full Text Available We study Lagrange spaces with (γ,β)-metric, where γ is a cubic metric and β is a 1-form. We obtain the fundamental metric tensor, its inverse, the Euler-Lagrange equations, semispray coefficients, and the canonical nonlinear connection for a Lagrange space endowed with a (γ,β)-metric. Several other properties of such spaces are also discussed.

  19. Health-Related Quality of Life and Utility Scores in People with Mental Disorders: A Comparison with the Non-Mentally Ill General Population

    Directory of Open Access Journals (Sweden)

    Amélie Prigent

    2014-03-01

    Full Text Available There is a lack of comparable health-related quality of life (HRQoL) and utility data across all mental disorders and all inpatient and outpatient settings. Our objective was to investigate the HRQoL and utility scores of people with mental disorders in France, treated in outpatient and inpatient settings, and to identify the HRQoL and utility score losses attributable to mental disorders compared to the non-mentally ill general population. A cross-sectional survey was conducted to assess the HRQoL (SF-12) and utility scores of patients with mental disorders followed in four psychiatric sectors in France. Scores were described by demographic and clinical characteristics, then adjusted for age and gender and compared with those of the non-mentally ill general population. Median HRQoL and utility scores were significantly lower in patients with mental disorders than in the non-mentally ill general population; median differences amounted to 5.4 for the HRQoL physical score, 11.8 for the HRQoL mental score, and 0.125 for the utility score. Our findings underscore the negative impact of mental disorders on HRQoL in France and provide a baseline for assessing the global impact of current and future organizational changes in the mental health care system.

  20. RGB-NIR color image fusion: metric and psychophysical experiments

    Science.gov (United States)

    Hayes, Alex E.; Finlayson, Graham D.; Montagna, Roberto

    2015-01-01

    In this paper, we compare four methods of fusing visible RGB and near-infrared (NIR) images to produce a color output image, using a psychophysical experiment and image fusion quality metrics. The results of the psychophysical experiment show that two methods are significantly preferred to the original RGB image, and therefore RGB-NIR image fusion may be useful for photographic enhancement in those cases. The Spectral Edge method is the most preferred, followed by the dehazing method of Schaul et al. We then investigate image fusion metrics whose results correlate with those of the psychophysical experiment. We extend several existing metrics from 2-to-1 channel to M-to-N channel image fusion, as well as introducing new metrics based on output image colorfulness and contrast, and test them on our experimental data. While none of the individual metrics ranks the algorithms exactly as the psychophysical experiment does, a combination of two metrics accurately ranks the two leading fusion methods.
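
    Of the metric ingredients mentioned, colorfulness is the easiest to make concrete. The sketch below implements the widely used Hasler-Süsstrunk opponent-space statistic; whether the paper's colorfulness term matches this exact definition is an assumption.

    ```python
    # Hasler-Suesstrunk style colorfulness: opponent-channel spread plus offset.
    import numpy as np

    def colorfulness(rgb):
        """rgb: float array of shape (H, W, 3)."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        rg = r - g                       # red-green opponent channel
        yb = 0.5 * (r + g) - b           # yellow-blue opponent channel
        sigma = np.hypot(rg.std(), yb.std())
        mu = np.hypot(rg.mean(), yb.mean())
        return sigma + 0.3 * mu

    rng = np.random.default_rng(0)
    print(colorfulness(rng.random((64, 64, 3))))    # colorful random image
    print(colorfulness(np.full((64, 64, 3), 0.5)))  # gray image -> 0.0
    ```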

  1. Adaptive distance metric learning for diffusion tensor image segmentation.

    Science.gov (United States)

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.

  2. Adaptive distance metric learning for diffusion tensor image segmentation.

    Directory of Open Access Journals (Sweden)

    Youyong Kong

    Full Text Available High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.

  3. Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool

    OpenAIRE

    Taha, Abdel Aziz; Hanbury, Allan

    2015-01-01

    Background: Medical image segmentation is an important image processing step. Comparing images to evaluate the quality of segmentation is an essential part of measuring progress in this research area. Some of the challenges in evaluating medical segmentation are: metric selection, the use in the literature of multiple definitions for certain metrics, inefficiency of the metric calculation implementations leading to difficulties with large volumes, and lack of support for fuzzy segmentation by ...

  4. MECO In An Exponential Metric

    CERN Document Server

    Robertson, Stanley L

    2016-01-01

    Magnetic Eternally Collapsing Objects (MECO) have been proposed as the central engines of galactic black hole candidates (GBHC) and supermassive active galactic nuclei (AGN). Previous work has shown that their luminosities and spectral and timing characteristics are in good agreement with observations. These features and the formation of jets are generated primarily by the interactions of accretion disks with an intrinsically magnetic central MECO. The interaction of accretion disks with the anchored magnetic fields of the central objects permits a unified description of properties for GBHC, AGN, neutron stars in low mass x-ray binaries and dwarf novae systems. The previously published MECO models have been based on a quasistatic Schwarzschild metric of General Relativity; however, the only essential feature of this metric is its ability to produce extreme gravitational redshifts. For reasons discussed in this article, an alternative development based on a quasistatic exponential metric is considered here.

  5. Complexity Metrics for Spreadsheet Models

    CERN Document Server

    Bregar, Andrej

    2008-01-01

    Several complexity metrics are described which relate to the logic structure, data structure, and size of spreadsheet models. They primarily concentrate on the dispersion of cell references and cell paths. Most metrics are newly defined, while some are adapted from traditional software engineering. Their purpose is the identification of cells which are liable to errors. In addition, they can be used to estimate the values of dependent process metrics, such as development duration and effort, and especially to adjust the cell error rate in accordance with the contents of each individual cell, in order to accurately assess the reliability of a model. Finally, two conceptual constructs - the reference branching condition cell and the condition block - are discussed, aiming at improving the reliability, modifiability, auditability and comprehensibility of logical tests.

  6. Complexity Metrics for Workflow Nets

    DEFF Research Database (Denmark)

    Lassen, Kristian Bisgaard; van der Aalst, Wil M.P.

    2009-01-01

    Even process engineers and system analysts have difficulties grasping the dynamics implied by a process model. Recent empirical studies show that people make numerous errors when modeling complex business processes, e.g., about 20 percent of the EPCs in the SAP reference model have design flaws resulting in potential deadlocks, livelocks, etc. It seems obvious that the complexity of the model contributes to design errors and a lack of understanding. It is not easy to measure complexity, however. This paper presents three complexity metrics that have been implemented in the process analysis tool ProM. The metrics are defined for a subclass of Petri nets named Workflow nets, but the results can easily be applied to other languages. To demonstrate the applicability of these metrics, we have applied our approach and tool to 262 relatively complex Protos models made in the context of various student projects. This allows us to validate...

  7. Moduli spaces of riemannian metrics

    CERN Document Server

    Tuschmann, Wilderich

    2015-01-01

    This book studies certain spaces of Riemannian metrics on both compact and non-compact manifolds. These spaces are defined by various sign-based curvature conditions, with special attention paid to positive scalar curvature and non-negative sectional curvature, though we also consider positive Ricci and non-positive sectional curvature. If we form the quotient of such a space of metrics under the action of the diffeomorphism group (or possibly a subgroup) we obtain a moduli space. Understanding the topology of both the original space of metrics and the corresponding moduli space form the central theme of this book. For example, what can be said about the connectedness or the various homotopy groups of such spaces? We explore the major results in the area, but provide sufficient background so that a non-expert with a grounding in Riemannian geometry can access this growing area of research.

  8. Rainbow metric from quantum gravity

    CERN Document Server

    Assanioussi, Mehdi; Lewandowski, Jerzy

    2014-01-01

    In this letter, we describe a general mechanism for the emergence of a rainbow metric from a quantum cosmological model. This idea is based on QFT on a quantum space-time. Under general assumptions, we discover that the quantum space-time on which the field propagates can be replaced by a classical space-time whose metric depends explicitly on the energy of the field: as shown by an analysis of dispersion relations, quanta of different energies propagate on different metrics, similar to photons in a refractive material (hence the name "rainbow" used in the literature). In deriving this result, we do not consider any specific theory of quantum gravity: the qualitative behavior of high-energy particles on quantum space-time relies only on the assumption that the quantum space-time is described by a wave function $\Psi_o$ in a Hilbert space $\mathcal{H}_G$.

  9. Evaluation of the Design Metric to Reduce the Number of Defects in Software Development

    Directory of Open Access Journals (Sweden)

    M. Rizwan Jameel Qureshi

    2012-04-01

    Full Text Available Software design is one of the most important and key activities in the system development life cycle (SDLC), as it ensures the quality of software. Different key areas of design are vital to take into consideration while designing software. Software design describes how the software system is decomposed and managed in smaller components. The object-oriented (OO) paradigm has provided the software industry with more reliable and manageable software and designs. The quality of a software design can be measured through different metrics, such as the Chidamber and Kemerer (CK) design metrics, the MOOD metrics, and the Lorenz and Kidd metrics. The CK suite is one of the oldest and most reliable metrics available to the software industry for evaluating OO design. This paper presents an evaluation of the CK metrics and proposes improved CK design metric values to reduce the defects introduced during the software design phase. It also examines whether any CK design metric has a significant effect on the total number of defects per module. This is achieved through a survey conducted in two software development companies.
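
    Three of the CK measures are simple enough to illustrate by introspection on a toy Python class hierarchy; the unit method weights and the hierarchy itself are illustrative choices, not the paper's measurement procedure.

    ```python
    # WMC approximated as method count (unit weights), DIT as depth below object,
    # NOC as the number of direct subclasses.
    import inspect

    class Vehicle:
        def start(self): ...
        def stop(self): ...

    class Car(Vehicle):
        def open_trunk(self): ...

    class SportsCar(Car):
        def launch_control(self): ...

    def ck_metrics(cls):
        methods = inspect.getmembers(cls, inspect.isfunction)  # includes inherited
        return {
            "WMC": len(methods),
            "DIT": len(cls.__mro__) - 2,
            "NOC": len(cls.__subclasses__()),
        }

    for c in (Vehicle, Car, SportsCar):
        print(c.__name__, ck_metrics(c))
    ```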

  10. Remarks on G-Metric Spaces

    Directory of Open Access Journals (Sweden)

    Bessem Samet

    2013-01-01

    Full Text Available In 2005, Mustafa and Sims (2006) introduced and studied a new class of generalized metric spaces, called G-metric spaces, as a generalization of metric spaces. We establish some useful propositions to show that many fixed point theorems on (nonsymmetric) G-metric spaces given recently by many authors follow directly from well-known theorems on metric spaces. Our technique can be easily extended to other results, as shown in an application.

  11. S-curvature of isotropic Berwald metrics

    Institute of Scientific and Technical Information of China (English)

    Akbar TAYEBI; Mehdi RAFIE-RAD

    2008-01-01

    Isotropic Berwald metrics are a generalization of Berwald metrics. Shen proved that every Berwald metric has vanishing S-curvature. In this paper, we generalize this fact and prove that every isotropic Berwald metric has isotropic S-curvature. Let F = α + β be a Randers metric of isotropic Berwald curvature. Then it corresponds to a conformal vector field through the navigation representation.

  12. Separable metrics and radiating stars

    Indian Academy of Sciences (India)

    G Z ABEBE; S D MAHARAJ

    2017-01-01

    We study the junction condition relating the pressure to heat flux at the boundary of an accelerating and expanding spherically symmetric radiating star. We transform the junction condition to an ordinary differential equation by making a separability assumption on the metric functions in the space–time variables. The condition of separability on the metric functions yields several new exact solutions. A class of shear-free models is found which contains a linear equation of state and generalizes a previously obtained model. Four new shearing models are obtained; all the gravitational potentials can be written explicitly. A brief physical analysis indicates that the matter variables are well behaved.

  13. Einstein metrics in projective geometry

    CERN Document Server

    Cap, A; Macbeth, H R

    2012-01-01

    It is well known that pseudo-Riemannian metrics in the projective class of a given torsion-free affine connection can be obtained from (and are equivalent to) the solutions of a certain overdetermined projectively invariant differential equation. This equation is a special case of a so-called first BGG equation. The general theory of such equations singles out a subclass of so-called normal solutions. We prove that non-degenerate normal solutions are equivalent to pseudo-Riemannian Einstein metrics in the projective class and observe that this connects to natural projective extensions of the Einstein condition.

  14. Complexity Metrics for Workflow Nets

    DEFF Research Database (Denmark)

    Lassen, Kristian Bisgaard; van der Aalst, Wil M.P.

    2009-01-01

    Process modeling languages such as EPCs, BPMN, flow charts, UML activity diagrams, Petri nets, etc. are used to model business processes and to configure process-aware information systems. It is known that users have problems understanding these diagrams. In fact, even process engineers and system analysts have difficulties grasping the dynamics implied by a process model. It seems obvious that the complexity of the model contributes to design errors and a lack of understanding. It is not easy to measure complexity, however. This paper presents three complexity metrics that have been implemented in the process analysis tool ProM. The metrics are defined...

  15. The flexibility of optical metrics

    CERN Document Server

    Bittencourt, Eduardo; Smolyaninov, Igor; Smolyaninova, Vera N

    2015-01-01

    We first revisit the importance, naturalness and limitations of the so-called optical metrics for describing the propagation of light rays in the limit of geometric optics. We then exemplify their flexibility and nontriviality in some nonlinear material media and in the context of nonlinear theories of electromagnetism, both underlain by curved backgrounds, where optical metrics could be flat and where membranes impermeable only to photons could be conceived, respectively. Finally, we underline and discuss the relevance and potential applications of our analyses in a broad sense, ranging from material media to compact astrophysical systems.

  16. The Extended Edit Distance Metric

    CERN Document Server

    Fuad, Muhammad Marwan Muhammad

    2007-01-01

    Similarity search is an important problem in information retrieval. This similarity is based on a distance. Symbolic representation of time series has attracted many researchers recently, since it reduces the dimensionality of these high-dimensional data objects. We propose a new distance metric that is applied to symbolic data objects and test it on time series databases in a classification task. We compare it to other distances that are well known in the literature for symbolic data objects. We also prove, mathematically, that our distance is a metric.
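
    For reference, the classical edit (Levenshtein) distance that the proposed metric extends is the following standard dynamic program; the extension itself is not reproduced here.

    ```python
    # Classical Levenshtein distance with a two-row dynamic program.
    def edit_distance(s, t):
        prev = list(range(len(t) + 1))
        for i in range(1, len(s) + 1):
            curr = [i] + [0] * len(t)
            for j in range(1, len(t) + 1):
                cost = 0 if s[i - 1] == t[j - 1] else 1
                curr[j] = min(prev[j] + 1,         # deletion
                              curr[j - 1] + 1,     # insertion
                              prev[j - 1] + cost)  # substitution
            prev = curr
        return prev[-1]

    print(edit_distance("abcdd", "abbcd"))   # 2
    ```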

  17. Performance Metrics for Haptic Interfaces

    CERN Document Server

    Samur, Evren

    2012-01-01

    Haptics technology is being used more and more in different applications, such as in computer games for increased immersion, in surgical simulators to create a realistic environment for training of surgeons, in surgical robotics due to safety issues and in mobile phones to provide feedback from user action. The existence of these applications highlights a clear need to understand performance metrics for haptic interfaces and their implications on device design, use and application. Performance Metrics for Haptic Interfaces aims at meeting this need by establishing standard practices for the ev

  18. The U.S. power industry's activities to expand coal ash utilization in face of lower ash quality

    Energy Technology Data Exchange (ETDEWEB)

    Golden, D.M. [Electric Power Research Inst., Palo Alto, CA (United States)

    2001-07-01

    The use of coal by electric power utilities results in more than 105 million tons of by-products each year in the United States. More restrictive air quality emission limits have resulted in cleaner air, but this means the fly ash is more contaminated and cannot be used in its largest market, the concrete industry. For this reason, the Electric Power Research Institute (EPRI) conducted a 5-year program aimed at increasing ash utilization in the cement and concrete market in the United States. This initiative was in response to recent concerns regarding the impacts on ash quality of more aggressive nitrogen oxide (NOx) controls. The EPRI program provides the technical basis for protecting the bulk sale of coal ash in high-volume applications in cement and concrete and other high-volume civil engineering applications. Fly ash derived from NOx control systems has higher carbon and ammonia levels. Problems with ammoniated ash are a major concern for coal-fired power plants. It was shown that there are four ways to minimize the impact of NOx controls that reduce ash quality and directly affect ash utilization. These are: (1) prevention of carbon accumulation in fly ash for use in sensitive markets, (2) carbon removal, (3) concentration of reactive ash fractions by removal of coarse fractions, and (4) ammonia removal. It was concluded that more studies are needed to examine long-term durability and other properties before any of these options can be exploited on an industrial scale. 21 refs., 1 tab.

  19. The Imprecise Science of Evaluating Scholarly Performance: Utilizing Broad Quality Categories for an Assessment of Business and Management Journals

    Science.gov (United States)

    Lange, Thomas

    2006-01-01

    In a growing number of countries, government-appointed assessment panels develop ranks on the basis of the quality of scholarly outputs to apportion budgets in recognition of evaluated performance and to justify public funds for future R&D activities. When business and management journals are being grouped in broad quality categories, a recent…

  20. Unified Metrical Common Fixed Point Theorems in 2-Metric Spaces via an Implicit Relation

    Directory of Open Access Journals (Sweden)

    Sunny Chauhan

    2013-01-01

    Full Text Available We prove some common fixed point theorems for two pairs of weakly compatible mappings in 2-metric spaces via an implicit relation. As an application to our main result, we derive Bryant's type generalized fixed point theorem for four finite families of self-mappings which can be utilized to derive common fixed point theorems involving any finite number of mappings. Our results improve and extend a host of previously known results. Moreover, we study the existence of solutions of a nonlinear integral equation.

  1. A priori discretization error metrics for distributed hydrologic modeling applications

    Science.gov (United States)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under

  2. Defining a Standard Metric for Electricity Savings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Marilyn; Akbari, Hashem; Blumstein, Carl; Koomey, Jonathan; Brown, Richard; Calwell, Chris; Carter, Sheryl; Cavanagh, Ralph; Chang, Audrey; Claridge, David; Craig, Paul; Diamond, Rick; Eto, Joseph H.; Fulkerson, William; Gadgil, Ashok; Geller, Howard; Goldemberg, Jose; Goldman, Chuck; Goldstein, David B.; Greenberg, Steve; Hafemeister, David; Harris, Jeff; Harvey, Hal; Heitz, Eric; Hirst, Eric; Hummel, Holmes; Kammen, Dan; Kelly, Henry; Laitner, Skip; Levine, Mark; Lovins, Amory; Masters, Gil; McMahon, James E.; Meier, Alan; Messenger, Michael; Millhone, John; Mills, Evan; Nadel, Steve; Nordman, Bruce; Price, Lynn; Romm, Joe; Ross, Marc; Rufo, Michael; Sathaye, Jayant; Schipper, Lee; Schneider, Stephen H; Sweeney, James L; Verdict, Malcolm; Vorsatz, Diana; Wang, Devra; Weinberg, Carl; Wilk, Richard; Wilson, John; Worrell, Ernst

    2009-03-01

    The growing investment by governments and electric utilities in energy efficiency programs highlights the need for simple tools to help assess and explain the size of the potential resource. One technique that is commonly used in this effort is to characterize electricity savings in terms of avoided power plants, because it is easier for people to visualize a power plant than it is to understand an abstraction such as billions of kilowatt-hours. Unfortunately, there is no standardization around the characteristics of such power plants. In this letter we define parameters for a standard avoided power plant that have physical meaning and intuitive plausibility, for use in back-of-the-envelope calculations. For the prototypical plant this article settles on a 500 MW existing coal plant operating at a 70% capacity factor with 7% T&D losses. Displacing such a plant for one year would save 3 billion kWh per year at the meter and reduce emissions by 3 million metric tons of CO2 per year. The proposed name for this metric is the Rosenfeld, in keeping with the tradition among scientists of naming units in honor of the person most responsible for the discovery and widespread adoption of the underlying scientific principle in question: Dr. Arthur H. Rosenfeld.
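
    As a back-of-the-envelope check of the numbers above (the CO2 intensity is implied by them, not quoted from the letter):

    ```python
    # Check the arithmetic behind the "Rosenfeld" unit defined above.
    capacity_mw = 500        # existing coal plant
    capacity_factor = 0.70
    td_losses = 0.07         # transmission & distribution losses
    hours_per_year = 8760

    generation_mwh = capacity_mw * capacity_factor * hours_per_year
    delivered_mwh = generation_mwh * (1 - td_losses)

    print(f"busbar generation: {generation_mwh / 1e6:.2f} billion kWh/yr")  # ~3.07
    print(f"delivered at meter: {delivered_mwh / 1e6:.2f} billion kWh/yr")  # ~2.85, i.e. ~3
    # Implied CO2 intensity if one plant-year equals 3 million metric tons:
    print(f"implied intensity: {3e6 / generation_mwh:.2f} t CO2/MWh")       # ~0.98
    ```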

  3. Value-based metrics and Internet-based enterprises

    Science.gov (United States)

    Gupta, Krishan M.

    2001-10-01

    Within the last few years, a host of value-based metrics like EVA, MVA, TBR, CFORI, and TSR have evolved. This paper attempts to analyze the validity and applicability of EVA and the Balanced Scorecard for Internet-based organizations. Despite the collapse of the dot-com model, firms engaged in e-commerce continue to struggle to find new ways to account for customer base, technology, employees, knowledge, etc., as part of the value of the firm. While some metrics, like the Balanced Scorecard, are geared towards internal use, others like EVA are for external use. Value-based metrics are used for performing internal audits as well as comparing firms against one another, and can also be effectively utilized by individuals outside the firm looking to determine whether the firm is creating value for its stakeholders.

  4. Developing a confidence metric for the Landsat land surface temperature product

    Science.gov (United States)

    Laraby, Kelly G.; Schott, John R.; Raqueno, Nina

    2016-05-01

    Land Surface Temperature (LST) is an important Earth system data record that is useful to fields such as change detection, climate research, environmental monitoring, and smaller scale applications such as agriculture. Certain Earth-observing satellites can be used to derive this metric, and it would be extremely useful if such imagery could be used to develop a global product. Through the support of the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS), a LST product for the Landsat series of satellites has been developed. Currently, it has been validated for scenes in North America, with plans to expand to a trusted global product. For ideal atmospheric conditions (e.g. stable atmosphere with no clouds nearby), the LST product underestimates the surface temperature by an average of 0.26 K. When clouds are directly above or near the pixel of interest, however, errors can extend to several Kelvin. As the product approaches public release, our major goal is to develop a quality metric that will provide the user with a per-pixel map of estimated LST errors. There are several sources of error that are involved in the LST calculation process, but performing standard error propagation is a difficult task due to the complexity of the atmospheric propagation component. To circumvent this difficulty, we propose to utilize the relationship between cloud proximity and the error seen in the LST process to help develop a quality metric. This method involves calculating the distance to the nearest cloud from a pixel of interest in a scene, and recording the LST error at that location. Performing this calculation for hundreds of scenes allows us to observe the average LST error for different ranges of distances to the nearest cloud. This paper describes this process in full, and presents results for a large set of Landsat scenes.
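
    The procedure described, computing each pixel's distance to the nearest cloud and averaging the LST error within distance bins, can be sketched as follows; the array names, pixel size, and binning are our assumptions, not the paper's implementation:

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def error_vs_cloud_distance(cloud_mask, lst_error, bin_edges_km, pixel_km=0.03):
        """Bin per-pixel LST errors by distance to the nearest cloud.

        cloud_mask : 2-D bool array, True where a cloud was detected
        lst_error  : 2-D array of LST residuals (K), NaN where unvalidated
        """
        # distance_transform_edt measures distance to the nearest zero,
        # so invert the mask: cloud pixels become zeros.
        dist_km = distance_transform_edt(~cloud_mask) * pixel_km
        valid = ~np.isnan(lst_error)
        bins = np.digitize(dist_km[valid], bin_edges_km)
        errors = lst_error[valid]
        # mean absolute LST error within each distance-to-cloud bin
        return [np.abs(errors[bins == b]).mean() if np.any(bins == b) else np.nan
                for b in range(1, len(bin_edges_km))]
    ```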

  5. Hybrid metric-Palatini stars

    CERN Document Server

    Danila, Bogdan; Lobo, Francisco S N; Mak, M K

    2016-01-01

    We consider the internal structure and the physical properties of specific classes of neutron, quark and Bose-Einstein Condensate stars in the hybrid metric-Palatini gravity theory, which is a combination of the metric and Palatini $f(R)$ formalisms. The theory is very successful in accounting for the observed phenomenology, since it unifies local constraints at the Solar System level and the late-time cosmic acceleration, even if the scalar field is very light. We derive the equilibrium equations for a spherically symmetric configuration (mass continuity and Tolman-Oppenheimer-Volkoff) in the framework of hybrid metric-Palatini theory, and we investigate their solutions numerically for different equations of state of neutron and quark matter, by adopting for the scalar field potential a Higgs-type form. Stellar models, described by the stiff fluid, radiation-like, the bag model and the Bose-Einstein Condensate equations of state are explicitly constructed in both General Relativity and hybrid metric-Palatini...

  6. Socio-technical security metrics

    NARCIS (Netherlands)

    Gollmann, D.; Herley, C.; Koenig, V.; Pieters, W.; Sasse, M.A.

    2015-01-01

    Report from Dagstuhl seminar 14491. This report documents the program and the outcomes of Dagstuhl Seminar 14491 “Socio-Technical Security Metrics”. In the domain of safety, metrics inform many decisions, from the height of new dikes to the design of nuclear plants. We can state, for example, that t...

  7. Leading Gainful Employment Metric Reporting

    Science.gov (United States)

    Powers, Kristina; MacPherson, Derek

    2016-01-01

    This chapter will address the importance of intercampus involvement in reporting of gainful employment student-level data that will be used in the calculation of gainful employment metrics by the U.S. Department of Education. The authors will discuss why building relationships within the institution is critical for effective gainful employment…

  8. Strong metric dimension: A survey

    Directory of Open Access Journals (Sweden)

    Kratica Jozef

    2014-01-01

    Full Text Available The strong metric dimension has been a subject of a considerable amount of research in recent years. This survey describes the related development by bringing together theoretical results and computational approaches, and places the recent results within their historical and scientific framework. [Project of the Ministry of Science of the Republic of Serbia, nos. 174010 and 174033]

  9. On a Schwarzschild like metric

    CERN Document Server

    Anastasiei, M

    2011-01-01

    In this short Note we would like to bring to the attention of people working in General Relativity a Schwarzschild-like metric found by Professor Cleopatra Mociuţchi in the sixties. It was obtained by A. Sommerfeld's reasoning from his treatise "Elektrodynamik", but using the relativistic energy conservation law instead of the classical one.

  10. Area metric gravity and accelerating cosmology

    CERN Document Server

    Punzi, R; Wohlfarth, M N R; Punzi, Raffaele; Schuller, Frederic P.; Wohlfarth, Mattias N.R.

    2007-01-01

    Area metric manifolds emerge as effective classical backgrounds in quantum string theory and quantum gauge theory, and present a true generalization of metric geometry. Here, we consider area metric manifolds in their own right, and develop in detail the foundations of area metric differential geometry. Based on the construction of an area metric curvature scalar, which reduces in the metric-induced case to the Ricci scalar, we re-interpret the Einstein-Hilbert action as dynamics for an area metric spacetime. In contrast to modifications of general relativity based on metric geometry, no continuous deformation scale needs to be introduced; the extension to area geometry is purely structural and thus rigid. We present an intriguing prediction of area metric gravity: without dark energy or fine-tuning, the late universe exhibits a small acceleration.

  11. Utility of registries for post-marketing evaluation of medicines. A survey of Swedish health care quality registries from a regulatory perspective.

    Science.gov (United States)

    Feltelius, Nils; Gedeborg, Rolf; Holm, Lennart; Zethelius, Björn

    2017-06-01

    The aim of this study was to describe content and procedures in some selected Swedish health care quality registries (QRs) of relevance to regulatory decision-making. A workshop was organized with participation of seven Swedish QRs which subsequently answered a questionnaire regarding registry content on drug treatments and outcomes. Patient populations, coverage, data handling and quality control, as well as legal and ethical aspects are presented. Scientific publications from the QRs are used as a complementary measure of quality and scientific relevance. The registries under study collect clinical data of high relevance to regulatory and health technology agencies. Five out of seven registries provide information on the drug of interest. When applying external quality criteria, we found a high degree of fulfillment, although information on medication was not sufficient to answer all questions of regulatory interest. A notable strength is the option for linkage to the Prescribed Drug Registry and to information on education and socioeconomic status. Data on drugs used during hospitalization were also collected to some extent. Outcome measures collected resemble those used in relevant clinical trials. All registries collected patient-reported outcome measures. The number of publications from the registries was substantial, with studies of appropriate design, including randomized registry trials. Quality registries may provide a valuable source of post-marketing data on drug effectiveness, safety, and cost-effectiveness. Closer collaboration between registries and regulators to improve quality and usefulness of registry data could benefit both regulatory utility and value for health care providers.

  12. Statistical Assessment of QC Metrics on Raw LC-MS/MS Data.

    Science.gov (United States)

    Wang, Xia

    2017-01-01

    Data quality assessment is important for reproducibility of proteomics experiments and reusability of proteomics data. We describe a set of statistical tools to routinely visualize and examine the quality control (QC) metrics obtained for raw LC-MS/MS data on different instrument types and mass spectrometers. The QC metrics used here are the identification-free QuaMeter metrics. Statistical assessments introduced include (a) principal component analysis, (b) dissimilarity measures, (c) the T²-chart for quality control, and (d) change-point analysis. We demonstrate the workflow by a step-by-step assessment of a subset of Study 5 of the Clinical Proteomics Technology Assessment for Cancer (CPTAC) using our R functions.
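
    The paper's R functions are not reproduced here; the following Python sketch shows one common way to combine two of the listed tools, principal component analysis and a T²-chart, on a runs-by-metrics matrix. The control-limit formula is the standard one for individual observations and is our choice, not necessarily the paper's.

    ```python
    import numpy as np
    from scipy.stats import f as f_dist
    from sklearn.decomposition import PCA

    def hotelling_t2(X, n_components=2, alpha=0.05):
        """PCA-based Hotelling T^2 per LC-MS/MS run (rows of X are runs,
        columns are QC metrics); returns the statistics and a control limit."""
        Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize metrics
        pca = PCA(n_components=n_components).fit(Xs)
        scores = pca.transform(Xs)
        t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
        n, k = X.shape[0], n_components
        ucl = k * (n - 1) * (n + 1) / (n * (n - k)) * f_dist.ppf(1 - alpha, k, n - k)
        return t2, ucl  # runs with t2 > ucl are flagged for inspection
    ```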

  13. Health-Related Quality of Life and Health Utility Values in Beta Thalassemia Major Patients Receiving Different Types of Iron Chelators in Iran.

    Science.gov (United States)

    Seyedifar, Meysam; Dorkoosh, Farid Abedin; Hamidieh, Amir Ali; Naderi, Majid; Karami, Hossein; Karimi, Mehran; Fadaiyrayeny, Masoomeh; Musavi, Masoumeh; Safaei, Sanaz; Ahmadian-Attari, Mohammad Mahdi; Hadjibabaie, Molouk; Cheraghali, Abdol Majid; Akbari Sari, Ali

    2016-10-01

    Background: Thalassemia is a chronic, inherited blood disorder which, in its most severe form, causes life-threatening anemia. Thalassemia patients not only contend with the difficulties of blood transfusion and iron-chelating therapy but also face social challenges and health-threatening factors. There are reports on quality of life in thalassemia patients around the world, from Southeast Asia to Italy in Europe and the United States. In this study, we evaluated and compared health-related quality of life (HRQoL) and health utility in beta thalassemia major patients receiving different types of iron chelators and living in different socio-economic situations. Subjects and Methods: The EQ-5D-3L, accompanied by a Visual Analogue Scale (VAS) questionnaire, was used. The respondents were patients with beta thalassemia major, at least 12 years old, selected from the three provinces of Sistan-Baluchestan, Fars and Mazandaran. Comorbidities, including heart complications, diabetes mellitus and hepatitis, and the type of iron chelator (oral, injectable, or a combination of both) were also recorded. Cross-tab and ANOVA analyses were conducted to evaluate differences in each dimension score and in health utility between provinces, iron chelation methods, comorbidities, age groups and gender. Results: 528 patients answered the questionnaires. The health utility was 0.87 ± 0.01 for patients receiving oral iron chelators versus 0.81 ± 0.01 for those on the injectable dosage form (p<0.05). Increase in age was accompanied by a decrease in health utility. Females faced more problems with usual activities, anxiety and depression. Heart problems were more prevalent in males. Conclusion: This study suggests that the quality of life of beta thalassemia major patients depends on the type of iron chelation treatment they receive, their gender, the comorbidities they suffer and the socio-economic situation they live in.

  14. A generalized Web Service response time metric to support collaborative and corroborative Web Service monitoring

    CSIR Research Space (South Africa)

    Makitla, I

    2015-12-01

    Full Text Available In this paper, we describe the development of a generalized metric for computing the response time of a web service. Such a generalized metric would help to develop consensus with regard to the meanings of contracted Quality of Service (QoS) parameters...

  15. A proposed set of metrics for standardized outcome reporting in the management of low back pain

    DEFF Research Database (Denmark)

    Clement, R Carter; Welander, Adina; Stowell, Caleb

    2015-01-01

    ...metrics through a 6-round modified Delphi process. The scope of the outcome set was degenerative lumbar conditions. RESULTS: Patient-reported metrics include numerical pain scales, lumbar-related function using the Oswestry disability index, health-related quality of life using the EQ-5D-3L questionnaire...

  16. Advanced system demonstration for utilization of biomass as an energy source. Technical Appendix B: air quality studies. Environmental report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1979-01-01

    The plant site studies include: climate, present ambient air quality, construction impacts, and operation impacts. The fuelwood harvest region studies include: present environment and harvesting impacts. The use of the Valley Model for alternative sites analysis is discussed. (MHR)

  17. Quality assurance for image-guided radiation therapy utilizing CT-based technologies: A report of the AAPM TG-179

    Energy Technology Data Exchange (ETDEWEB)

    Bissonnette, Jean-Pierre; Balter, Peter A.; Dong Lei; Langen, Katja M.; Lovelock, D. Michael; Miften, Moyed; Moseley, Douglas J.; Pouliot, Jean; Sonke, Jan-Jakob; Yoo, Sua [Task Group 179, Department of Radiation Physics, Princess Margaret Hospital, University of Toronto, Toronto, Ontario, M5G 2M9 (Canada); Department of Radiation Physics, University of Texas M.D. Anderson Cancer Center, Houston, Texas 77030 (United States); Department of Radiation Oncology, M. D. Anderson Cancer Center Orlando, Orlando, Florida 32806 (United States); Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10021 (United States); Department of Radiation Oncology, University of Colorado School of Medicine, Aurora, Colorado 80045 (United States); Department of Radiation Physics, Princess Margaret Hospital, University of Toronto, Toronto, Ontario, M5G 2M9 (Canada); Department of Radiation Oncology, UCSF Comprehensive Cancer Center, 1600 Divisadero St., Suite H 1031, San Francisco, California 94143-1708 (United States); Department of Radiation Oncology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands); Department of Radiation Oncology, Duke University, Durham, North Carolina 27710 (United States)

    2012-04-15

    Purpose: Commercial CT-based image-guided radiotherapy (IGRT) systems allow widespread management of geometric variations in patient setup and internal organ motion. This document provides consensus recommendations for quality assurance protocols that ensure patient safety and patient treatment fidelity for such systems. Methods: The AAPM TG-179 reviews clinical implementation and quality assurance aspects for commercially available CT-based IGRT, each with their unique capabilities and underlying physics. The systems described are kilovolt and megavolt cone-beam CT, fan-beam MVCT, and CT-on-rails. A summary of the literature describing current clinical usage is also provided. Results: This report proposes a generic quality assurance program for CT-based IGRT systems in an effort to provide a vendor-independent program for clinical users. Published data from long-term, repeated quality control tests form the basis of the proposed test frequencies and tolerances. Conclusion: A program for quality control of CT-based image-guidance systems has been produced, with focus on geometry, image quality, image dose, system operation, and safety. Agreement and clarification with respect to reports from the AAPM TG-101, TG-104, TG-142, and TG-148 has been addressed.

  18. Hurricane exposure and county fetal death rates, utilization of a county environmental quality index for confounding control.

    Science.gov (United States)

    The effects of natural disasters on public health are a rising concern, with increasing severity of disaster events. Many disaster studies utilize county-level analysis, however most do not control for county level environmental factors. Hurricane exposure during pregnancy could ...

  20. Applications of image metrics in dynamic scene adaptation

    Science.gov (United States)

    Sadjadi, Firooz A.

    1992-08-01

    One of the major problems in dealing with changes in the information content of a scene, a defining characteristic of any dynamic scene, is how to adapt to these variations so that the performance of an automatic scene analyzer, such as an object recognizer, stays at its optimum. In this paper we examine the use of image and signal metrics for characterizing scene variations, describe an automated system for the extraction of these quality measures, and finally show how these metrics can be used for the automatic adaptation of an object recognition system, with a resulting jump in the performance of that system.

  1. Length spectra and degeneration of flat metrics

    CERN Document Server

    Duchin, Moon; Rafi, Kasra

    2009-01-01

    In this paper we consider flat metrics (semi-translation structures) on surfaces of finite type. There are two main results. The first is a complete description of when a set of simple closed curves is spectrally rigid, that is, when the length vector determines a metric among the class of flat metrics. Secondly, we give an embedding into the space of geodesic currents and use this to get a boundary for the space of flat metrics. The geometric interpretation is that flat metrics degenerate to "mixed structures" on the surface: part flat metric and part measured foliation.

  2. A possible molecular metric for biological evolvability

    Indian Academy of Sciences (India)

    Aditya Mittal; B Jayaram

    2012-07-01

    Proteins manifest themselves as phenotypic traits, retained or lost in living systems via evolutionary pressures. Simply put, survival is essentially the ability of a living system to synthesize a functional protein that allows for a response to environmental perturbations (adaptation). Loss of functional proteins leads to extinction. Currently there are no universally applicable quantitative metrics at the molecular level either for measuring ‘evolvability’ of life or for assessing the conditions under which a living system would go extinct, and why. In this work, we show the emergence of the first such metric by utilizing the recently discovered stoichiometric margin of life for all known naturally occurring (and functional) proteins. The constraint of having well-defined stoichiometries of the 20 amino acids in naturally occurring protein sequences requires utilization of the full scope of degeneracy in the genetic code, i.e. usage of all codons coding for an amino acid, by only 11 of the 20 amino acids. This shows that the non-availability of individual codons for these 11 amino acids would disturb the fine stoichiometric balance, resulting in non-functional proteins and hence extinction. Remarkably, these amino acids are found in close proximity to any given amino acid in the backbones of thousands of known crystal structures of folded proteins. On the other hand, the stoichiometry of the remaining 9 amino acids, found to be farther/distal from any given amino acid in the backbones of folded proteins, is maintained independent of the number of codons available to synthesize them, thereby providing some robustness and hence survivability.
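
    For reference, the degeneracy being exploited is fixed by the standard genetic code; the table below reproduces the well-known codon counts per amino acid (which 11 amino acids the paper singles out is not restated here):

    ```python
    # Codon degeneracy of the standard genetic code (61 sense codons in total).
    degeneracy = {
        "Leu": 6, "Ser": 6, "Arg": 6,
        "Ala": 4, "Gly": 4, "Pro": 4, "Thr": 4, "Val": 4,
        "Ile": 3,
        "Phe": 2, "Tyr": 2, "Cys": 2, "His": 2, "Gln": 2,
        "Asn": 2, "Lys": 2, "Asp": 2, "Glu": 2,
        "Met": 1, "Trp": 1,
    }
    assert sum(degeneracy.values()) == 61
    ```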

  3. MEASURING OBJECT-ORIENTED SYSTEMS BASED ON THE EXPERIMENTAL ANALYSIS OF THE COMPLEXITY METRICS

    Directory of Open Access Journals (Sweden)

    J.S.V.R.S.SASTRY

    2011-05-01

    Full Text Available Metrics are used to help a software engineer perform quantitative analysis to assess the quality of a design before the system is built. The focus of object-oriented metrics is on the class, which is the fundamental building block of the object-oriented architecture. These metrics address internal and external object structure. Internal object structure reflects the complexity of each individual entity, such as methods and classes. External complexity measures the interaction among entities, such as coupling and inheritance. This paper focuses on a set of object-oriented metrics that can be used to measure the quality of an object-oriented design, covering two families of complexity metrics in the object-oriented paradigm: the MOOD metrics and the Lorenz & Kidd metrics. The MOOD metrics consist of Method Inheritance Factor (MIF), Coupling Factor (CF), Attribute Inheritance Factor (AIF), Method Hiding Factor (MHF), Attribute Hiding Factor (AHF), and Polymorphism Factor (PF). The Lorenz & Kidd metrics consist of Number of Operations Overridden (NOO), Number of Operations Added (NOA), and Specialization Index (SI). These measurements are used mainly by designers and testers: designers use the metrics to assess the software early in the process, making changes that will reduce complexity and improve the continuing capability of the design; testers use them to probe the complexity, performance, and quality of the software. This paper reviews the MOOD and Lorenz & Kidd metrics and validates them theoretically and empirically. A number of object-oriented metrics have been proposed in the literature for measuring design attributes such as inheritance, coupling, and polymorphism; here, these metrics are used to analyze various features of software components. Complexity of methods...
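
    As a hedged illustration of two of the MOOD factors named above, the sketch below computes MIF and MHF on a toy class model; the counting rules are a simplified reading of the definitions, not the paper's exact formulation:

    ```python
    from dataclasses import dataclass

    @dataclass
    class ClassMetrics:
        defined: int    # methods declared in the class itself
        inherited: int  # methods inherited and not overridden
        hidden: int     # defined methods that are private/protected

    def mif(classes):
        """Method Inheritance Factor: inherited methods / all available methods."""
        available = sum(c.defined + c.inherited for c in classes)
        return sum(c.inherited for c in classes) / available

    def mhf(classes):
        """Method Hiding Factor: hidden methods / all defined methods."""
        return sum(c.hidden for c in classes) / sum(c.defined for c in classes)

    system = [ClassMetrics(5, 0, 2), ClassMetrics(3, 4, 1), ClassMetrics(2, 4, 0)]
    print(f"MIF = {mif(system):.2f}, MHF = {mhf(system):.2f}")  # MIF = 0.44, MHF = 0.30
    ```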

  4. Quality of Life, Depression, and Healthcare Resource Utilization among Adults with Type 2 Diabetes Mellitus and Concomitant Hypertension and Obesity: A Prospective Survey

    Directory of Open Access Journals (Sweden)

    Andrew J. Green

    2012-01-01

    Full Text Available Background. This study compared quality of life, depression, and healthcare resource utilization among adults with type 2 diabetes mellitus (T2DM) and comorbid hypertension (HTN) and obesity with those of adults reporting T2DM alone. Methods. Respondents to the US SHIELD survey self-reported their height, weight, comorbid conditions, hospitalizations, and outpatient visits and completed the Short Form-12 (SF-12) and Patient Health Questionnaire (PHQ-9). Respondents reporting T2DM and HTN and obesity (body mass index, BMI, ≥30 kg/m²) were compared with a T2DM-alone group. Results. Respondents with T2DM, HTN, and obesity (n=1292) had significantly lower SF-12 Physical and Mental Component Summary scores (37.3 and 50.9, resp.) than T2DM-alone respondents (n=349; 45.8 and 53.5, resp.; P<0.0001). Mean PHQ-9 scores were significantly higher among T2DM respondents with comorbid HTN and obesity (5.0 versus 2.5, P<0.0001), indicating a greater depression burden. Respondents with T2DM, HTN, and obesity had significantly more resource utilization with respect to physician visits and emergency room visits, but not hospitalizations, than respondents with T2DM alone (P=0.03). Conclusions. SHIELD respondents with the comorbid conditions of T2DM, HTN, and obesity reported greater healthcare resource utilization, more depression symptoms, and lower quality of life than the T2DM-alone group.

  5. Learning Objects Reusability Effectiveness Metric (LOREM

    Directory of Open Access Journals (Sweden)

    Torky Ibrahim Sultan

    2014-03-01

    Full Text Available In this research we propose an advanced metric to evaluate the effectiveness of learning objects for reuse in new contexts. Learning object reusability achieves economic benefits from educational technology by saving time and improving quality, but choosing an unsuitable learning object may be less beneficial than creating the learning object from scratch. Learning object reusability can also facilitate system development and adaptation. Surveying the current evaluation metrics, we found that while they cover essential aspects, they allow all reviewers of a learning object to evaluate all criteria, without regard to the reviewers' roles in creating the learning object, which affects their ability to evaluate specific criteria. Our proposed approach (LOREM) evaluates learning objects based on a group of aspects that measure their level of effectiveness for reuse in other contexts. LOREM classifies reviewers into 3 categories: 1. Academic group: Subject Matter Expert (SME) and Instructor. 2. Technical group: Instructional Designer (ID), LO Developer, and LO Designer. 3. Students group. The authorization of reviewers in these categories is differentiated according to the reviewer's type (e.g., Instructor, LO Developer) and area of expertise (their subject expertise, for academic and student reviewers).

  6. Hausdorff metric in the fuzzy environment

    Directory of Open Access Journals (Sweden)

    Laura Victoria Forero Vega

    2016-10-01

    Full Text Available Context: intuitively, a set is understood as a collection of distinct elements; that is, a set is determined by the membership relation between the elements of a universe and the set as a whole. Classically, an element either belongs or does not belong; in a fuzzy subset, each element of the universe is associated with a degree of membership, which is a number between 0 and 1. Fuzzy subsets are thus established as a correspondence between each element of the universe and a degree of membership. Method: the study was based on previous work, such as articles and books in which authors present ideas about the importance of fuzzy subsets and the need to build new theories and spaces upon them. Results: by combining the two theories, we obtain a new setting in which a counterpart of the Hausdorff distance can be stated; it extends and adjusts the notion of distance between nonempty compact subsets from the setting of metric spaces, most concretely as generated in (Rn, du). Conclusions: the construction carried out yields a metric space with several good qualities, as a consequence of the initial study.

  7. Classifier-assisted metric for chromosome pairing.

    Science.gov (United States)

    Ventura, Rodrigo; Khmelinskii, Artem; Sanches, J

    2010-01-01

    Cytogenetics plays a central role in the detection of chromosomal abnormalities and in the diagnosis of genetic diseases. A karyogram is an image representation of human chromosomes arranged in order of decreasing size and paired into 23 classes. In this paper we propose an approach to automatically pair the chromosomes into a karyogram, using the information obtained in a rough SVM-based classification step to help a pairing process mainly based on similarity metrics between the chromosomes. Using a set of geometric and band-pattern features extracted from the chromosome images, the algorithm is formulated in a Bayesian framework, combining the similarity metric with the results from the classifier. The solution is obtained by solving a mixed integer program. Two datasets with contrasting quality levels and 836 chromosomes each were used to test and validate the algorithm. Relevant improvements with respect to the algorithm described by the authors in [1] were obtained, with average pairing rates above 92%, close to the rates obtained by human operators.
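
    The paper combines the classifier output with the similarity metric in a mixed integer program; as a simplified stand-in, pairing by feature distance alone can be posed as a minimum-cost perfect matching, sketched here (the feature model and weighting are illustrative assumptions):

    ```python
    import numpy as np
    import networkx as nx

    def pair_chromosomes(features):
        """Pair 2N chromosomes by minimizing total feature distance.
        features: (2N, d) array of geometric/band-pattern descriptors."""
        n = len(features)
        g = nx.Graph()
        for i in range(n):
            for j in range(i + 1, n):
                dist = np.linalg.norm(features[i] - features[j])
                g.add_edge(i, j, weight=-dist)  # max-weight matching == min total distance
        return nx.max_weight_matching(g, maxcardinality=True)

    rng = np.random.default_rng(0)
    proto = rng.normal(size=(23, 8))  # 23 chromosome classes
    features = np.repeat(proto, 2, axis=0) + 0.05 * rng.normal(size=(46, 8))
    print(pair_chromosomes(features))  # mostly {(0, 1), (2, 3), ...}
    ```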

  8. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    Science.gov (United States)

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    The objective approaches of 3D image quality assessment play a key role in the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterpart. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs) when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors, namely binocular combination and binocular frequency integration, are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency could be reached between the measured MOS and the proposed metrics, in which the correlation coefficient between them can go up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of the synthesized color-plus-depth 3D images well. Therefore, it is our belief that the binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.

  9. Impact of Risk Aversion on Price and Quality Decisions under Demand Uncertainty via the CARA Utility Function

    Directory of Open Access Journals (Sweden)

    Qinqin Li

    2014-01-01

    Full Text Available This paper investigates optimal price and quality decisions of a manufacturer-retailer supply chain under demand uncertainty, in which both players are risk-averse decision makers. The manufacturer determines the wholesale price and quality of the product, and the retailer determines the retail price. By means of game theory, we employ the constant absolute risk aversion (CARA) utility function to analyze two different supply chain structures, that is, the manufacturer Stackelberg model (MS) and the retailer Stackelberg model (RS). We then analyze the results to explore the effects of the risk aversion of the manufacturer and the retailer upon the equilibrium decisions. Our results imply that the risk aversion of both the manufacturer and the retailer plays an important role in the price and quality decisions. We find that, in general, in the MS and RS models, the optimal wholesale price and quality decrease with the risk aversion of the manufacturer but increase with the risk aversion of the retailer, while the retail price decreases with the risk aversion of the manufacturer as well as the retailer. We also examine the impact of the quality cost coefficient on the optimal decisions. Finally, numerical examples are presented to illustrate the different degrees of effect of the players' risk aversion on the equilibrium results and to compare results across the models considered.
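
    For readers unfamiliar with the CARA family, the utility function and its standard mean-variance simplification under normally distributed profit are as follows (a textbook identity, not a result of this paper):

    ```latex
    u(\pi) = -e^{-\lambda \pi}, \quad \lambda > 0, \qquad
    \pi \sim N(\mu, \sigma^2) \;\Longrightarrow\;
    \mathrm{CE}(\pi) = \mu - \tfrac{\lambda}{2}\,\sigma^{2},
    ```

    so maximizing expected CARA utility reduces to trading expected profit against variance, with a larger risk-aversion coefficient $\lambda$ penalizing variance more heavily.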

  10. A Model-Based Approach to Object-Oriented Software Metrics

    Institute of Scientific and Technical Information of China (English)

    梅宏; 谢涛; 杨芙清

    2002-01-01

    The need to improve software productivity and software quality has put forward the research on software metrics technology and the development of software metrics tools to support related activities. To support object-oriented software metrics practice effectively, a model-based approach to object-oriented software metrics is proposed in this paper. This approach guides the metrics users to adopt the quality metrics model to measure the object-oriented software products. The development of the model can be achieved by using a top-down approach. This approach explicitly proposes the conception of absolute normalization computation and relative normalization computation for a metrics model. Moreover, a generic software metrics tool, the Jade Bird Object-Oriented Metrics Tool (JBOOMT), is designed to implement this approach. The parser-based approach adopted by the tool makes the information of the source program accurate and complete for measurement. It supports various customizable hierarchical metrics models and provides a flexible user interface for users to manipulate the models. It also supports absolute and relative normalization mechanisms in different situations.

  11. Optimizing Network Routing by Deducing a QoS Metric Using Rough Sets

    Directory of Open Access Journals (Sweden)

    Ali.A.Sakr,

    2010-07-01

    Full Text Available Routing within networks must satisfy QoS metrics. In traditional data networks, routing is concerned with connectivity or cost, and routing protocols usually characterize the network with one or more metric(s). However, in order to support a wide range of QoS requirements, routing protocols need a more complex model: the network is characterized with multiple metrics such as bandwidth, delay, jitter, loss rate, authentication, security, etc. This complex model takes a long time to process. The Rough Set Theory (RST) is applied to reduce these metrics successfully and decide the most effective ones. In this paper, RST is applied to reduce the online metrics that are reported by Routing Information Protocols (RIP). The paper represents information about network elements (links or nodes) to obtain the Quality of Service (QoS) core [1]. The ROSETTA software is applied to deduce a QoS metric as a substitute for all routing metrics. This metric is used to select the optimal routes. The results confirm that the proposed metric is well suited for selecting the proper routes.
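
    A toy version of the rough-set reduction step can be sketched as follows: keep the smallest set of QoS attributes whose removal leaves the decision-relative positive region unchanged. The decision table and attribute names below are hypothetical:

    ```python
    from itertools import combinations

    # Hypothetical decision table: condition attributes are QoS metrics,
    # the decision says whether the route proved acceptable.
    rows = [  # (bandwidth, delay, loss, jitter) -> route quality
        (("hi", "lo", "lo", "lo"), "good"),
        (("hi", "lo", "hi", "lo"), "bad"),
        (("lo", "hi", "lo", "hi"), "bad"),
        (("hi", "lo", "lo", "hi"), "good"),
        (("lo", "lo", "lo", "lo"), "good"),
    ]
    ALL = range(4)

    def positive_region(attrs):
        """Number of rows whose attrs-equivalence class is decision-consistent."""
        blocks = {}
        for cond, dec in rows:
            blocks.setdefault(tuple(cond[a] for a in attrs), set()).add(dec)
        return sum(1 for cond, _ in rows
                   if len(blocks[tuple(cond[a] for a in attrs)]) == 1)

    full = positive_region(ALL)
    # A reduct: a minimal attribute subset preserving the positive region.
    reduct = next(set(c) for k in range(1, 5)
                  for c in combinations(ALL, k)
                  if positive_region(c) == full)
    print("reduct attribute indices:", reduct)  # here: delay and loss suffice
    ```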

  12. Metrics for Evaluating the Accuracy of Solar Power Forecasting: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, J.; Hodge, B. M.; Florita, A.; Lu, S.; Hamann, H. F.; Banunarayanan, V.

    2013-10-01

    Forecasting solar energy generation is a challenging task due to the variety of solar power systems and weather regimes encountered. Forecast inaccuracies can result in substantial economic losses and power system reliability issues. This paper presents a suite of generally applicable and value-based metrics for solar forecasting for a comprehensive set of scenarios (i.e., different time horizons, geographic locations, applications, etc.). In addition, a comprehensive framework is developed to analyze the sensitivity of the proposed metrics to three types of solar forecasting improvements using a design of experiments methodology, in conjunction with response surface and sensitivity analysis methods. The results show that the developed metrics can efficiently evaluate the quality of solar forecasts, and assess the economic and reliability impact of improved solar forecasting.
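
    The paper's full suite includes value-based metrics not reproduced here; the sketch below computes a few of the standard statistical members usually found in such suites (the metric selection and persistence baseline are our assumptions):

    ```python
    import numpy as np

    def forecast_metrics(forecast, actual):
        """A few widely used solar-forecast accuracy metrics."""
        err = forecast - actual
        persistence_rmse = np.sqrt(((actual[:-1] - actual[1:])**2).mean())
        return {
            "MBE": err.mean(),                    # bias
            "MAE": np.abs(err).mean(),            # average error magnitude
            "RMSE": np.sqrt((err**2).mean()),     # penalizes large errors
            "skill_vs_persistence":
                1 - np.sqrt((err[1:]**2).mean()) / persistence_rmse,
        }

    actual = np.array([0.0, 5.0, 20.0, 45.0, 60.0, 55.0, 30.0, 10.0])  # MW
    forecast = np.array([0.0, 8.0, 18.0, 40.0, 66.0, 50.0, 33.0, 7.0])
    print(forecast_metrics(forecast, actual))
    ```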

  13. A Trustability Metric for Code Search based on Developer Karma

    CERN Document Server

    Gysin, Florian S

    2010-01-01

    The promise of search-driven development is that developers will save time and resources by reusing external code in their local projects. To efficiently integrate this code, users must be able to trust it; thus the trustability of code search results is just as important as their relevance. In this paper, we introduce a trustability metric to help users assess the quality of code search results and therefore ease the cost-benefit analysis they undertake when trying to find suitable integration candidates. The proposed trustability metric incorporates both user votes and cross-project activity of developers to calculate a "karma" value for each developer. Through the karma values of all its developers, a project is ranked on a trustability scale. We present JBender, a proof-of-concept code search engine which implements our trustability metric, and we discuss preliminary results from an evaluation of the prototype.
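
    The exact karma weighting is not given in this abstract; a toy version of the idea, votes plus a concave credit for cross-project activity rolled up into a per-project score, might look like this:

    ```python
    import math

    def developer_karma(votes, projects):
        """votes: net up/down votes; projects: mapping project -> commit count.
        log1p keeps prolific single-project developers from dominating."""
        activity = sum(math.log1p(commits) for commits in projects.values())
        return votes + activity

    def project_trustability(developers):
        """Rank a project by the mean karma of its contributors."""
        return sum(developer_karma(v, p) for v, p in developers) / len(developers)

    devs = [(12, {"libA": 40, "libB": 3}), (2, {"libA": 5}), (-1, {"libC": 120})]
    print(f"trustability: {project_trustability(devs):.2f}")
    ```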

  14. A metrics suite for coupling measurement of software architecture

    Institute of Scientific and Technical Information of China (English)

    KONG Qing-yan; LUN Li-jun; ZHAO Jia-hua; WANG Yi-he

    2009-01-01

    To better evaluate the quality of software architecture, a metrics suite is proposed to measure the coupling of software architecture models, in which CBC is used to measure the coupling between components, CBCC is used to measure the coupling of transferring messages between components, CBCCT is used to measure the coupling of the software architecture, WCBCC is used to measure the coupling of transferring messages with weight between components, and WCBCCT is used to measure the coupling of message transmission with weight in the whole software architecture. The proposed algorithm for the coupling metrics is applied to the design of server software architecture. Analysis of an example validates the feasibility of this metrics suite.
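
    The abstract names the metrics without their formulas; one plausible toy reading, counting (optionally weighted) message links on a component graph, is sketched below, with all definitions flagged as assumptions:

    ```python
    # Hypothetical component message graph: (source, target) -> (messages, weight).
    # (The paper's exact definitions of CBC/CBCCT/WCBCCT are not given in the
    #  abstract; counting and weighting links as below is an assumption.)
    links = {
        ("UI", "Core"): (12, 1.0),
        ("Core", "Store"): (7, 2.0),
        ("UI", "Store"): (2, 0.5),
    }

    def cbc(a, b):
        """Coupling between two components: direct links between them."""
        return sum(1 for s, t in links if {s, t} == {a, b})

    def cbcct():
        """Architecture-level coupling: total inter-component message traffic."""
        return sum(n for n, _ in links.values())

    def wcbcct():
        """Weighted architecture-level coupling."""
        return sum(n * w for n, w in links.values())

    print(cbc("UI", "Core"), cbcct(), wcbcct())  # 1 21 27.0
    ```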

  15. Quality of Life and Utility in Patients with Metastatic Soft Tissue and Bone Sarcoma: The Sarcoma Treatment and Burden of Illness in North America and Europe (SABINE Study

    Directory of Open Access Journals (Sweden)

    Peter Reichardt

    2012-01-01

    Full Text Available The aim of the study was to assess health-related quality of life (HRQoL) among metastatic soft tissue (mSTS) or bone sarcoma (mBS) patients who had attained a favourable response to chemotherapy. We employed the EORTC QLQ-C30, the 3-item Cancer-Related Symptoms Questionnaire, and the EQ-5D instrument. HRQoL was evaluated overall and by health state in 120 mSTS/mBS patients enrolled in the SABINE study across nine countries in Europe and North America. Utility was estimated from responses to the EQ-5D instrument using UK population-based weights. The mean EQ-5D utility score was 0.69 for the pooled patient sample, with little variation across health states. However, patients with progressive disease reported a clinically significantly lower utility (0.56). Among disease symptoms, pain and respiratory symptoms are common. This study showed that mSTS/mBS is associated with reduced HRQoL and utility among patients with metastatic disease.

  16. The escape velocity and Schwarzschild metric

    CERN Document Server

    Murzagalieva, A G; Murzagaliev, G Z

    2002-01-01

    The escape velocity is derived in terms of general relativity by means of the Schwarzschild metric, and the equation of motion is connected with the behavior of the Friedmann cosmological model built on the Robertson-Walker metric. (author)

  17. Security Metrics in Industrial Control Systems

    CERN Document Server

    Collier, Zachary A; Ganin, Alexander A; Kott, Alex; Linkov, Igor

    2015-01-01

    Risk is the best known and perhaps the best studied example within a much broader class of cyber security metrics. However, risk is not the only possible cyber security metric. Other metrics, such as resilience, can exist and could be potentially very valuable to defenders of ICSs. Often, metrics are defined as measurable properties of a system that quantify the degree to which objectives of the system are achieved. Metrics can provide cyber defenders of an ICS with critical insights regarding the system. Metrics are generally acquired by analyzing relevant attributes of that system. In terms of cyber security metrics, ICSs tend to have unique features: in many cases, these systems are older technologies that were designed for functionality rather than security. They are also extremely diverse systems that have different requirements and objectives. Therefore, metrics for ICSs must be tailored to a diverse group of systems with many features that perform many different functions. In this chapter, we first...

  18. Dimension of the boundary in different metrics

    CERN Document Server

    Klén, Riku

    2010-01-01

    On domains $\Omega \subset \mathbb{R}^n$, we consider metrics induced by continuous densities $\rho \colon \Omega \to (0,\infty)$ and study the Hausdorff and packing dimensions of the boundary of $\Omega$ with respect to these metrics.

  19. Hybrid metric-Palatini gravity

    CERN Document Server

    Capozziello, Salvatore; Koivisto, Tomi S; Lobo, Francisco S N; Olmo, Gonzalo J

    2015-01-01

    Recently, the phenomenology of f(R) gravity has been scrutinized motivated by the possibility to account for the self-accelerated cosmic expansion without invoking dark energy sources. Besides, this kind of modified gravity is capable of addressing the dynamics of several self-gravitating systems alternatively to the presence of dark matter. It has been established that both metric and Palatini versions of these theories have interesting features but also manifest severe and different downsides. A hybrid combination of theories, containing elements from both these two formalisms, turns out to be also very successful accounting for the observed phenomenology and is able to avoid some drawbacks of the original approaches. This article reviews the formulation of this hybrid metric-Palatini approach and its main achievements in passing the local tests and in applications to astrophysical and cosmological scenarios, where it provides a unified approach to the problems of dark energy and dark matter.

  20. Hofer's metrics and boundary depth

    CERN Document Server

    Usher, Michael

    2011-01-01

    We show that if $(M,\omega)$ is a closed symplectic manifold which admits a nontrivial Hamiltonian vector field all of whose contractible closed orbits are constant, then Hofer's metric on the group of Hamiltonian diffeomorphisms of $(M,\omega)$ has infinite diameter, and indeed admits infinite-dimensional quasi-isometrically embedded normed vector spaces. A similar conclusion applies to Hofer's metric on various spaces of Lagrangian submanifolds, including those Hamiltonian-isotopic to the diagonal in $M \times M$ when $M$ satisfies the above dynamical condition. To prove this, we use the properties of a Floer-theoretic quantity called the boundary depth, which measures the nontriviality of the boundary operator on the Floer complex in a way that encodes robust symplectic-topological information.