WorldWideScience

Sample records for quality metrics utility

  1. Supporting analysis and assessments quality metrics: Utility market sector

    Energy Technology Data Exchange (ETDEWEB)

    Ohi, J. [National Renewable Energy Lab., Golden, CO (United States)]

    1996-10-01

    In FY96, NREL was asked to coordinate all analysis tasks so that in FY97 these tasks will be part of an integrated analysis agenda that will begin to define a 5-15 year R&D roadmap and portfolio for the DOE Hydrogen Program. The purpose of the Supporting Analysis and Assessments task at NREL is to provide this coordination and conduct specific analysis tasks. One of these tasks is to prepare the Quality Metrics (QM) for the Program as part of the overall QM effort at DOE/EERE. The Hydrogen Program is one of 39 program planning units conducting QM, a process begun in FY94 to assess the benefits and costs of DOE/EERE programs. The purpose of QM is to inform decision making during the budget formulation process by describing the expected outcomes of programs in the budget request. QM is expected to establish a first step toward merit-based budget formulation and allow DOE/EERE to get the "most bang for its (R&D) buck." In FY96, NREL coordinated a QM team that prepared a preliminary QM for the utility market sector. In the electricity supply sector, the QM analysis shows hydrogen fuel cells capturing 5% (or 22 GW) of the total market of 390 GW of new capacity additions through 2020. Hydrogen consumption in the utility sector increases from 0.009 Quads in 2005 to 0.4 Quads in 2020. Hydrogen fuel cells are projected to displace over 0.6 Quads of primary energy in 2020. In future work, NREL will assess the market for decentralized, on-site generation; develop cost credits for distributed generation benefits (such as deferral of transmission and distribution investments and uninterruptible power service), for by-products such as heat and potable water, and for environmental benefits (reduction of criteria air pollutants and greenhouse gas emissions); compete different fuel cell technologies against each other for market share; and begin to address economic benefits, especially employment.

  2. Beyond metrics? Utilizing 'soft intelligence' for healthcare quality and safety.

    Science.gov (United States)

    Martin, Graham P; McKee, Lorna; Dixon-Woods, Mary

    2015-10-01

    Formal metrics for monitoring the quality and safety of healthcare have a valuable role, but may not, by themselves, yield full insight into the range of fallibilities in organizations. 'Soft intelligence' is usefully understood as the processes and behaviours associated with seeking and interpreting soft data (of the kind that evade easy capture, straightforward classification and simple quantification) to produce forms of knowledge that can provide the basis for intervention. With the aim of examining current and potential practice in relation to soft intelligence, we conducted and analysed 107 in-depth qualitative interviews with senior leaders, including managers and clinicians, involved in healthcare quality and safety in the English National Health Service. We found that participants were in little doubt about the value of softer forms of data, especially for their role in revealing troubling issues that might be obscured by conventional metrics. Their struggles lay in how to access softer data and turn them into a useful form of knowing. Some of the dominant approaches they used risked replicating the limitations of hard, quantitative data. They relied on processes of aggregation and triangulation that prioritised reliability, or on instrumental use of soft data to animate the metrics. The unpredictable, untameable, spontaneous quality of soft data could be lost in efforts to systematize their collection and interpretation to render them more tractable. A more challenging but potentially rewarding approach involved processes and behaviours aimed at disrupting taken-for-granted assumptions about quality, safety, and organizational performance. This approach, which explicitly values the seeking out and the hearing of multiple voices, is consistent with conceptual frameworks of organizational sensemaking and dialogical understandings of knowledge. Using soft intelligence this way can be challenging and discomfiting, but may offer a critical defence against the

  3. Utility Validation of a New Fingerprint Quality Metric

    OpenAIRE

    Yao, Zhigang; Charrier, Christophe; Rosenberger, Christophe

    2014-01-01

    International audience; Fingerprint recognition can be regarded as a relatively full-fledged application of biometrics. The use of this biometric modality is not limited to the traditional public security area but has spread into daily life, for instance smartphone authentication and e-payment. However, quality control of biometric samples is still necessary in order to optimize operational performance. Research has shown that biometric system performance could be great...

  4. Beyond metrics? Utilizing ‘soft intelligence’ for healthcare quality and safety

    OpenAIRE

    Martin, Graham P.; McKee, Lorna; Dixon-Woods, Mary

    2015-01-01

    Formal metrics for monitoring the quality and safety of healthcare have a valuable role, but may not, by themselves, yield full insight into the range of fallibilities in organizations. ‘Soft intelligence’ is usefully understood as the processes and behaviours associated with seeking and interpreting soft data—of the kind that evade easy capture, straightforward classification and simple quantification—to produce forms of knowledge that can provide the basis for intervention. With the aim of ...

  5. Software Quality Assurance Metrics

    Science.gov (United States)

    McRae, Kalindra A.

    2004-01-01

    Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and a degree of excellence and refinement of a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product. The process is measured to improve it, and the product is measured to increase quality throughout the life cycle of software. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of the software development can be measured. If software metrics are implemented in software development, they can save time and money, and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects but are not currently being used by the SA team, and to report them to the Software Assurance team to see if any can be implemented in their software assurance life cycle process.

  6. Software metrics: Software quality metrics for distributed systems. [reliability engineering]

    Science.gov (United States)

    Post, J. V.

    1981-01-01

    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  7. Software quality metrics aggregation in industry

    NARCIS (Netherlands)

    Mordal, K.; Anquetil, N.; Laval, J.; Serebrenik, A.; Vasilescu, B.N.; Ducasse, S.

    2013-01-01

    With the growing need for quality assessment of entire software systems in the industry, new issues are emerging. First, because most software quality metrics are defined at the level of individual software components, there is a need for aggregation methods to summarize the results at the system

  8. Assessing Software Quality Through Visualised Cohesion Metrics

    Directory of Open Access Journals (Sweden)

    Timothy Shih

    2001-05-01

    Full Text Available Cohesion is one of the most important factors for software quality, as well as for maintainability, reliability and reusability. Module cohesion is defined as a quality attribute that seeks to measure the singleness of purpose of a module. A module of poor quality can be a serious obstacle to system quality. In order to achieve good software quality, software managers and engineers need to introduce cohesion metrics to measure and produce desirable software. Highly cohesive software is thought to be a desirable construct. In this paper, we propose a function-oriented cohesion metric based on the analysis of live variables, live span and the visualization of the processing element dependency graph. We give six typical cohesion examples to be measured as our experiments and justification. The result is a well-defined, well-normalized, well-visualized and well-experimented cohesion metric that indicates, and thus helps enhance, software cohesion strength. Furthermore, this cohesion metric can easily be incorporated with a software CASE tool to help software engineers improve software quality.
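
    Since the abstract leans on live variables and live spans, a minimal sketch of one classical live-variable view of cohesion may help; this is a simplified stand-in for the paper's metric, with liveness spans specified by hand rather than derived by data-flow analysis:

    ```python
    # Hypothetical module with 6 statements and 3 variables; each variable's
    # liveness is given as a (first, last) statement span.
    live_spans = {
        "total": (1, 6),
        "count": (1, 6),
        "mean":  (5, 6),
    }
    n_statements = 6

    # Count how many variables are live at each statement.
    live_counts = [
        sum(first <= s <= last for first, last in live_spans.values())
        for s in range(1, n_statements + 1)
    ]

    # Cohesion as the mean fraction of variables live per statement:
    # 1.0 means every variable participates everywhere (high cohesion).
    cohesion = sum(live_counts) / (len(live_spans) * n_statements)
    print(f"live-variable cohesion: {cohesion:.2f}")
    ```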

  9. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking of laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing. © 2013.

  10. Decision Analysis for Metric Selection on a Clinical Quality Scorecard.

    Science.gov (United States)

    Guth, Rebecca M; Storey, Patricia E; Vitale, Michael; Markan-Aurora, Sumita; Gordon, Randolph; Prevost, Traci Q; Dunagan, Wm Claiborne; Woeltje, Keith F

    2016-09-01

    Clinical quality scorecards are used by health care institutions to monitor clinical performance and drive quality improvement. Because of the rapid proliferation of quality metrics in health care, BJC HealthCare found it increasingly difficult to select the most impactful scorecard metrics while still monitoring metrics for regulatory purposes. A 7-step measure selection process was implemented incorporating Kepner-Tregoe Decision Analysis, a systematic process that considers key criteria that must be satisfied in order to make the best decision. The decision analysis process evaluates which metrics will most appropriately fulfill these criteria and identifies potential risks associated with a particular metric that could threaten its implementation. Using this process, a list of 750 potential metrics was narrowed to 25 that were selected for scorecard inclusion. This decision analysis process created a more transparent, reproducible approach for selecting quality metrics for clinical quality scorecards. © The Author(s) 2015.
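
    The Kepner-Tregoe details are not reproduced in the abstract; below is a hedged sketch of the weighted-criteria scoring at the core of such a decision analysis, with hypothetical criteria, weights, and candidate metrics (not BJC HealthCare's):

    ```python
    # Weights express how much each selection criterion matters (invented values).
    criteria_weights = {"clinical impact": 10, "data availability": 7, "actionability": 8}

    # Candidate scorecard metrics scored 1-10 against each criterion (invented).
    candidates = {
        "central line infection rate": {"clinical impact": 9, "data availability": 8, "actionability": 7},
        "door-to-doctor time":         {"clinical impact": 6, "data availability": 9, "actionability": 8},
        "30-day readmission rate":     {"clinical impact": 8, "data availability": 6, "actionability": 5},
    }

    def weighted_score(scores):
        """Sum of criterion weight times the candidate's score on that criterion."""
        return sum(criteria_weights[c] * s for c, s in scores.items())

    # Rank candidates; the top of the list would be shortlisted for the scorecard.
    for name, scores in sorted(candidates.items(), key=lambda kv: -weighted_score(kv[1])):
        print(f"{name}: {weighted_score(scores)}")
    ```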

  11. A software quality model and metrics for risk assessment

    Science.gov (United States)

    Hyatt, L.; Rosenberg, L.

    1996-01-01

    A software quality model and its associated attributes are defined and used as the basis for a discussion on risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.

  12. Assessment and improvement of radiation oncology trainee contouring ability utilizing consensus-based penalty metrics

    International Nuclear Information System (INIS)

    Hallock, Abhirami; Read, Nancy; D'Souza, David

    2012-01-01

    The objective of this study was to develop and assess the feasibility of utilizing consensus-based penalty metrics for the purpose of critical structure and organ-at-risk (OAR) contouring quality assurance and improvement. A Delphi study was conducted to obtain consensus on contouring penalty metrics to assess trainee-generated OAR contours. Voxel-based penalty metric equations were used to score regions of discordance between trainee and expert contour sets. The utility of these penalty metric scores for objective feedback on contouring quality was assessed using cases prepared for weekly radiation oncology trainee treatment planning rounds. In two Delphi rounds, six radiation oncology specialists reached agreement on clinical importance/impact and organ radiosensitivity as the two primary criteria for the creation of the Critical Structure Inter-comparison of Segmentation (CriSIS) penalty functions. Linear/quadratic penalty scoring functions (for over- and under-contouring) with one of four levels of severity (none, low, moderate and high) were assigned for each of 20 OARs in order to generate a CriSIS score when new OAR contours are compared with reference/expert standards. Six cases (central nervous system, head and neck, gastrointestinal, genitourinary, gynaecological and thoracic) were then used to validate 18 OAR metrics through comparison of trainee and expert contour sets using the consensus-derived CriSIS functions. For 14 OARs, there was an improvement in CriSIS score post-educational intervention. The use of consensus-based contouring penalty metrics to provide quantitative information for contouring improvement is feasible.
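
    The CriSIS functions themselves are not given in the abstract; the following is a hedged sketch of a voxel-based, severity-weighted penalty of the kind described (the masks, weights, and the linear/quadratic form are illustrative assumptions):

    ```python
    # Sketch of a CriSIS-style penalty: voxels present in exactly one of the two
    # contours are counted, normalised, and weighted by the OAR's severity level.
    import numpy as np

    SEVERITY_WEIGHT = {"none": 0.0, "low": 1.0, "moderate": 2.0, "high": 4.0}

    def crisis_score(trainee_mask, expert_mask, severity="high", quadratic=True):
        """Penalty over voxels of discordance between trainee and expert masks."""
        over  = np.logical_and(trainee_mask, ~expert_mask).sum()   # over-contoured
        under = np.logical_and(~trainee_mask, expert_mask).sum()   # under-contoured
        discord = (over + under) / max(expert_mask.sum(), 1)       # normalised
        penalty = discord ** 2 if quadratic else discord           # quadratic/linear
        return SEVERITY_WEIGHT[severity] * penalty

    # Toy 3-D masks: the trainee contour is shifted/enlarged relative to the expert.
    expert  = np.zeros((32, 32, 16), dtype=bool); expert[10:20, 10:20, 4:12] = True
    trainee = np.zeros_like(expert);              trainee[11:22, 10:20, 4:12] = True
    print(f"CriSIS-style penalty: {crisis_score(trainee, expert):.3f}")
    ```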

  13. Development of quality metrics for ambulatory pediatric cardiology: Infection prevention.

    Science.gov (United States)

    Johnson, Jonathan N; Barrett, Cindy S; Franklin, Wayne H; Graham, Eric M; Halnon, Nancy J; Hattendorf, Brandy A; Krawczeski, Catherine D; McGovern, James J; O'Connor, Matthew J; Schultz, Amy H; Vinocur, Jeffrey M; Chowdhury, Devyani; Anderson, Jeffrey B

    2017-12-01

    In 2012, the American College of Cardiology's (ACC) Adult Congenital and Pediatric Cardiology Council established a program to develop quality metrics to guide ambulatory practices for pediatric cardiology. The council chose five areas on which to focus their efforts: chest pain, Kawasaki disease, tetralogy of Fallot, transposition of the great arteries after arterial switch, and infection prevention. Here, we sought to describe the process, evaluation, and results of the Infection Prevention Committee's metric design process. The infection prevention metrics team consisted of 12 members from 11 institutions in North America. The group agreed to work on specific infection prevention topics including antibiotic prophylaxis for endocarditis, rheumatic fever, and asplenia/hyposplenism; influenza vaccination and respiratory syncytial virus prophylaxis (palivizumab); preoperative methods to reduce intraoperative infections; vaccinations after cardiopulmonary bypass; hand hygiene; and testing to identify splenic function in patients with heterotaxy. An extensive literature review was performed. When available, previously published guidelines were used fully in determining metrics. The committee chose eight metrics to submit to the ACC Quality Metric Expert Panel for review. Ultimately, metrics regarding hand hygiene and influenza vaccination recommendation for patients did not pass the RAND analysis. Both endocarditis prophylaxis metrics and the RSV/palivizumab metric passed the RAND analysis but fell out during the open comment period. Three metrics passed all analyses, including those for antibiotic prophylaxis in patients with heterotaxy/asplenia, for influenza vaccination compliance in healthcare personnel, and for adherence to recommended regimens of secondary prevention of rheumatic fever. The lack of convincing data to guide quality improvement initiatives in pediatric cardiology is widespread, particularly in infection prevention. Despite this, three metrics were

  14. Experiences with Software Quality Metrics in the EMI middleware

    OpenAIRE

    Alandes, M; Kenny, E M; Meneses, D; Pucciani, G

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristi...

  15. Degraded visual environment image/video quality metrics

    Science.gov (United States)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.

  16. Research on quality metrics of wireless adaptive video streaming

    Science.gov (United States)

    Li, Xuefei

    2018-04-01

    With the development of wireless networks and intelligent terminals, video traffic has increased dramatically. Adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, a good QoS (Quality of Service) of the wireless network does not always guarantee that all customers have a good experience. Thus, new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate quality metrics for wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, a PSNR model, an SSIM model and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC and RMSE), which compare subjective and predicted MOS (Mean Opinion Score), are calculated. From these performance metrics, the monotonicity, linearity and accuracy of the quality metrics can be observed.
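
    The three performance criteria named above are standard; a minimal sketch of how they are computed, with hypothetical score arrays standing in for real subjective and predicted MOS:

    ```python
    # SROCC measures monotonicity, PLCC linearity, RMSE accuracy of predictions.
    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    subjective_mos = np.array([4.2, 3.8, 2.5, 1.9, 3.1])   # viewer ratings (invented)
    predicted_mos  = np.array([4.0, 3.5, 2.9, 2.1, 3.3])   # model output (invented)

    srocc, _ = spearmanr(subjective_mos, predicted_mos)    # rank correlation
    plcc, _  = pearsonr(subjective_mos, predicted_mos)     # linear correlation
    rmse     = np.sqrt(np.mean((subjective_mos - predicted_mos) ** 2))

    print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}  RMSE={rmse:.3f}")
    ```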

  17. [Clinical trial data management and quality metrics system].

    Science.gov (United States)

    Chen, Zhao-hua; Huang, Qin; Deng, Ya-zhong; Zhang, Yue; Xu, Yu; Yu, Hao; Liu, Zong-fan

    2015-11-01

    A data quality management system is essential to ensure accurate, complete, consistent, and reliable data collection in clinical research. This paper is devoted to various choices of data quality metrics. They are categorized by study status, e.g. study start-up, conduct, and close-out. In each category, metrics for different purposes are listed according to ALCOA+ principles such as completeness, accuracy, timeliness, traceability, etc. Some frequently used general quality metrics are also introduced. This paper provides as much detail as possible for each metric, including definition, purpose, evaluation, referenced benchmark, and recommended targets, in support of real practice. It is important that sponsors and data management service providers establish a robust, integrated clinical trial data quality management system to ensure sustainably high quality of clinical trial deliverables. It will also support enterprise-level data evaluation and benchmarking of data quality across projects, sponsors, and data management service providers, using objective metrics from real clinical trials. We hope this will be a significant input to accelerate the improvement of clinical trial data quality in the industry.

  18. A universal color image quality metric

    NARCIS (Netherlands)

    Toet, A.; Lucassen, M.P.

    2003-01-01

    We extend a recently introduced universal grayscale image quality index to a newly developed perceptually decorrelated color space. The resulting color image quality index quantifies the distortion of a processed color image relative to its original version. We evaluated the new color image quality

  19. Experiences with Software Quality Metrics in the EMI middleware

    International Nuclear Information System (INIS)

    Alandes, M; Meneses, D; Pucciani, G; Kenny, E M

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project to extract “code metrics” on the status of the software products and “process metrics” related to the quality of the development and support process such as reaction time to critical bugs, requirements tracking and delays in product releases.

  20. Experiences with Software Quality Metrics in the EMI Middleware

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project t...

  1. Development of soil quality metrics using mycorrhizal fungi

    Energy Technology Data Exchange (ETDEWEB)

    Baar, J.

    2010-07-01

    Based on the Treaty on Biological Diversity of Rio de Janeiro in 1992 for maintaining and increasing biodiversity, several countries have started programmes monitoring soil quality and above- and below-ground biodiversity. Within the European Union, policy makers are working on legislation for soil protection and management. Therefore, indicators are needed to monitor the status of soils, and these indicators, reflecting soil quality, can be integrated into working standards or soil quality metrics. Soil micro-organisms, particularly arbuscular mycorrhizal fungi (AMF), are indicative of soil changes. These soil fungi live in symbiosis with the great majority of plants and are sensitive to changes in the physico-chemical conditions of the soil. The aim of this study was to investigate whether AMF are reliable and sensitive indicators of disturbances in soils and can be used for the development of soil quality metrics. It was also studied whether soil quality metrics based on AMF meet the requirements for applicability set by users and policy makers. Ecological criteria were set for the development of soil quality metrics for different soils. Multiple root samples containing AMF from various locations in The Netherlands were analyzed. The results of the analyses were related to the defined criteria. This resulted in two soil quality metrics, one for sandy soils and a second one for clay soils, with six different categories ranging from very bad to very good. These soil quality metrics meet the majority of requirements for applicability and are potentially useful for the development of legislation for the protection of soil quality. (Author) 23 refs.

  2. Software metrics to improve software quality in HEP

    International Nuclear Information System (INIS)

    Lancon, E.

    1996-01-01

    The maintainability of the ALEPH reconstruction program has been evaluated with a CASE tool implementing an ISO-standard methodology based on software metrics. It has been found that the overall quality of the program is good and has shown improvement over the past five years. Frequently modified routines exhibit lower quality; most bugs were located in routines with particularly low quality. Implementing quality criteria from the beginning could have avoided time lost on bug corrections. (author)

  3. Quality metric for spherical panoramic video

    Science.gov (United States)

    Zakharchenko, Vladyslav; Choi, Kwang Pyo; Park, Jeong Hoon

    2016-09-01

    Virtual reality (VR) / augmented reality (AR) applications allow users to view artificial content of a surrounding space, simulating a presence effect with the help of special applications or devices. Synthetic content production is a well-known process from the computer graphics domain, and its pipeline is already established in the industry. However, emerging multimedia formats for immersive entertainment applications, such as free-viewpoint television (FTV) or spherical panoramic video, require different approaches to content management and quality assessment. International standardization on FTV has been promoted by MPEG. This paper is dedicated to a discussion of the immersive media distribution format and the quality estimation process. The accuracy and reliability of the proposed objective quality estimation method were verified with spherical panoramic images, demonstrating good correlation with subjective quality estimation performed by a group of experts.
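
    The paper's specific method is not given in the abstract; a common latitude-weighted PSNR for equirectangular panoramas illustrates the general idea of sphere-aware quality estimation (all data synthetic):

    ```python
    # Pixels near the poles of an equirectangular panorama are oversampled on the
    # sphere, so their errors are down-weighted by the cosine of latitude.
    import numpy as np

    h, w = 90, 180
    ref  = np.random.rand(h, w)                            # reference frame in [0, 1]
    test = ref + np.random.normal(0, 0.02, (h, w))         # degraded version

    lat = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2     # latitude of each row
    weights = np.cos(lat)[:, None] * np.ones((1, w))       # sphere-area weights
    weights /= weights.sum()                               # normalise to sum to 1

    wmse = np.sum(weights * (ref - test) ** 2)             # weighted mean squared error
    print(f"latitude-weighted PSNR: {10 * np.log10(1.0 / wmse):.2f} dB")
    ```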

  4. Pragmatic quality metrics for evolutionary software development models

    Science.gov (United States)

    Royce, Walker

    1990-01-01

    Due to the large number of product, project, and people parameters which impact large custom software development efforts, measurement of software product quality is a complex undertaking. Furthermore, the absolute perspective from which quality is measured (customer satisfaction) is intangible. While we probably can't say what the absolute quality of a software product is, we can determine the relative quality, the adequacy of this quality with respect to pragmatic considerations, and identify good and bad trends during development. While no two software engineers will ever agree on an optimum definition of software quality, they will agree that the most important perspective of software quality is its ease of change. We can call this flexibility, adaptability, or some other vague term, but the critical characteristic of software is that it is soft. The easier the product is to modify, the easier it is to achieve any other software quality perspective. This paper presents objective quality metrics derived from consistent lifecycle perspectives of rework which, when used in concert with an evolutionary development approach, can provide useful insight to produce better quality per unit cost/schedule or to achieve adequate quality more efficiently. The usefulness of these metrics is evaluated by applying them to a large, real world, Ada project.

  5. Development of Quality Metrics in Ambulatory Pediatric Cardiology.

    Science.gov (United States)

    Chowdhury, Devyani; Gurvitz, Michelle; Marelli, Ariane; Anderson, Jeffrey; Baker-Smith, Carissa; Diab, Karim A; Edwards, Thomas C; Hougen, Tom; Jedeikin, Roy; Johnson, Jonathan N; Karpawich, Peter; Lai, Wyman; Lu, Jimmy C; Mitchell, Stephanie; Newburger, Jane W; Penny, Daniel J; Portman, Michael A; Satou, Gary; Teitel, David; Villafane, Juan; Williams, Roberta; Jenkins, Kathy

    2017-02-07

    The American College of Cardiology Adult Congenital and Pediatric Cardiology (ACPC) Section had attempted to create quality metrics (QM) for ambulatory pediatric practice, but limited evidence made the process difficult. The ACPC sought to develop QMs for ambulatory pediatric cardiology practice. Five areas of interest were identified, and QMs were developed in a 2-step review process. In the first step, an expert panel, using the modified RAND-UCLA methodology, rated each QM for feasibility and validity. The second step sought input from ACPC Section members; final approval was by a vote of the ACPC Council. Work groups proposed a total of 44 QMs. Thirty-one metrics passed the RAND process and, after the open comment period, the ACPC council approved 18 metrics. The project resulted in successful development of QMs in ambulatory pediatric cardiology for a range of ambulatory domains. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  6. Performance evaluation of objective quality metrics for HDR image compression

    Science.gov (United States)

    Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic

    2014-09-01

    Due to the much larger luminance and contrast characteristics of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists in computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim at providing a better comprehension of the limits and the potentialities of this approach, by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
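
    A minimal sketch of the "simpler approach" the abstract examines, computing PSNR on perceptually encoded luminance; a log encoding stands in here for a true perceptually uniform (PU) transfer function, and the images are synthetic:

    ```python
    # Encode absolute luminance into an approximately perceptually uniform domain,
    # then compute an ordinary PSNR on the encoded values.
    import numpy as np

    def perceptual_encode(luminance_cd_m2):
        """Stand-in perceptual encoding: log10 of clipped absolute luminance."""
        return np.log10(np.clip(luminance_cd_m2, 1e-4, None))

    ref_hdr  = np.random.uniform(0.01, 4000.0, (64, 64))     # reference luminance map
    test_hdr = ref_hdr + np.random.normal(0, 5.0, (64, 64))  # "compressed" version

    ref_enc  = perceptual_encode(ref_hdr)
    test_enc = perceptual_encode(test_hdr)

    peak = ref_enc.max() - ref_enc.min()      # dynamic range of encoded values
    mse = np.mean((ref_enc - test_enc) ** 2)
    print(f"perceptual-domain PSNR: {10 * np.log10(peak ** 2 / mse):.2f} dB")
    ```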

  7. PQSM-based RR and NR video quality metrics

    Science.gov (United States)

    Lu, Zhongkang; Lin, Weisi; Ong, Eeping; Yang, Xiaokang; Yao, Susu

    2003-06-01

    This paper presents a new and general concept, the PQSM (Perceptual Quality Significance Map), to be used in measuring visual distortion. It makes use of the selectivity characteristic of the HVS (Human Visual System): it pays more attention to certain areas/regions of the visual signal due to one or more of the following factors: salient features in the image/video, cues from domain knowledge, and association of other media (e.g., speech or audio). The PQSM is an array whose elements represent the relative perceptual-quality significance levels of the corresponding areas/regions of images or video. Due to its generality, the PQSM can be incorporated into any visual distortion metric: to improve the effectiveness or/and efficiency of perceptual metrics, or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed in this paper, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show the scheme can improve the performance of current image/video distortion metrics.
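
    A hedged sketch of how a significance map can weight a per-pixel distortion map (folded here into a PSNR-style score); the maps and the weighting are illustrative, not the paper's three-stage estimation method:

    ```python
    # Weight per-pixel distortion by a normalised perceptual significance map.
    import numpy as np

    h, w = 72, 128
    distortion = np.random.rand(h, w) ** 2            # e.g. squared error per pixel
    pqsm = np.ones((h, w))
    pqsm[20:50, 40:90] = 3.0                          # hypothetical face/skin region
    pqsm /= pqsm.sum()                                # normalise significance weights

    weighted_mse = np.sum(pqsm * distortion)          # perceptually weighted MSE
    pqsm_psnr = 10 * np.log10(1.0 / weighted_mse)     # signals assumed in [0, 1]
    print(f"PQSM-weighted PSNR: {pqsm_psnr:.2f} dB")
    ```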

  8. Towards Video Quality Metrics Based on Colour Fractal Geometry

    Directory of Open Access Journals (Sweden)

    Richard Noël

    2010-01-01

    Full Text Available Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology and colour. Unfortunately, so far all these elements have been taken into consideration independently in the development of image and video quality metrics; we therefore propose an approach that blends them together. Our approach allows for the analysis of the complexity of colour images in the RGB colour space, based on the probabilistic algorithm for calculating the fractal dimension and lacunarity. Given that all the existing fractal approaches are defined only for gray-scale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the modification of the fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal and can be used as metrics for user-perceived video quality degradation; we validated them through experimental results obtained for an MPEG-4 video streaming application. Finally, the results are compared against the ones given by unanimously accepted metrics and subjective tests.

  9. Project, building and utilization of a tomograph of micrometric resolution for application in soil science

    International Nuclear Information System (INIS)

    Macedo, Alvaro; Torre Neto, Andre; Cruvinel, Paulo Estevao; Crestana, Silvio

    1996-08-01

    This paper describes the design, construction and utilization of a tomograph of micrometric resolution for use in soil science. It describes the problems involved in the study of soil science, as well as the system and methodology.

  10. Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling

    Science.gov (United States)

    Mog, Robert A.

    1997-01-01

    Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.

  11. SU-E-T-222: How to Define and Manage Quality Metrics in Radiation Oncology.

    Science.gov (United States)

    Harrison, A; Cooper, K; DeGregorio, N; Doyle, L; Yu, Y

    2012-06-01

    Since the 2001 IOM Report Crossing the Quality Chasm: A New Health System for the 21st Century, the need to provide quality metrics in health care has increased. Quality metrics have yet to be defined for the field of radiation oncology. This study represents one institute's initial efforts to define and measure quality metrics using our electronic medical record and verify system (EMR) as a primary data collection tool. This effort began by selecting meaningful quality metrics rooted in the IOM definition of quality (safe, timely, efficient, effective, equitable and patient-centered care) that were also measurable targets based on current data input and workflow. Elekta MOSAIQ 2.30.04D1 was used to generate reports on the number of Special Physics Consults (SPC) charged, as a surrogate for treatment complexity; daily patient time in department (DTP), as a measure of efficiency and timeliness; and time from CT-simulation to first LINAC appointment (STL). The number of IMRT QAs delivered in the department was also analyzed to assess complexity. Although initial MOSAIQ reports were easily generated, the data needed to be assessed and adjusted for outliers. Patients with delays outside of radiation oncology, such as chemotherapy or surgery, were excluded from STL data. We found an average STL of six days for all CT-simulated patients and an average DTP of 52 minutes total time, with 23 minutes in the LINAC vault. Annually, 7.3% of all patients require additional physics support, indicated by SPC. Utilizing our EMR, an entire year's worth of useful data characterizing our clinical experience was analyzed in less than one day. Having baseline quality metrics is necessary to improve patient care. Future plans include dissecting this data into more specific categories such as IMRT DTP, workflow timing following CT-simulation, beam-on hours, chart review outcomes, and dosimetric quality indicators. © 2012 American Association of Physicists in Medicine.

  12. A composite efficiency metrics for evaluation of resource and energy utilization

    International Nuclear Information System (INIS)

    Yang, Siyu; Yang, Qingchun; Qian, Yu

    2013-01-01

    Polygeneration systems are commonly found in the chemical and energy industries. These systems often involve both chemical and energy conversions. Studies of these systems are interdisciplinary, mainly involving the fields of chemical engineering, energy engineering, environmental science, and economics. Each of these fields has developed an isolated index system different from the others. Analyses of polygeneration systems are therefore likely to give biased results when only the indexes from one field are used. Motivated by this problem, this paper develops a new composite efficiency metric for polygeneration systems. The new metric is based on the second law of thermodynamics, exergy theory. We introduce the exergy cost of waste treatment as an energy penalty into conventional exergy efficiency. Using this new metric avoids the situations of spending too much energy to increase production or sacrificing production capacity to save energy consumption. The composite metric is studied on a simplified co-production process, syngas to methanol and electricity. The advantage of the new efficiency metric is demonstrated by comparison with carbon element efficiency, energy efficiency, and exergy efficiency. Results show that the new metric gives a more rational analysis than the other indexes. - Highlights: • The composite efficiency metric gives a balanced evaluation of resource and energy utilization. • The metric uses the exergy cost of waste treatment as an energy penalty. • The metric is applied to a simplified co-production process. • Results show that the composite metric is better than energy and resource efficiencies.
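
    A minimal sketch of the metric's structure as described, conventional exergy efficiency with an added exergy charge for waste treatment; all stream values are invented, not the paper's case-study numbers:

    ```python
    # Composite efficiency: product exergy out over feed exergy in, with the
    # exergy cost of treating wastes added to the denominator as a penalty.
    def composite_exergy_efficiency(exergy_products, exergy_feed, exergy_waste_treatment):
        return exergy_products / (exergy_feed + exergy_waste_treatment)

    methanol, electricity = 310.0, 120.0   # product exergy flows, MW (hypothetical)
    syngas_feed = 640.0                    # feed exergy, MW (hypothetical)
    waste_penalty = 35.0                   # exergy cost of treating wastes, MW

    eta = composite_exergy_efficiency(methanol + electricity, syngas_feed, waste_penalty)
    print(f"composite exergy efficiency: {eta:.1%}")
    ```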

  13. Duration of Postoperative Mechanical Ventilation as a Quality Metric for Pediatric Cardiac Surgical Programs.

    Science.gov (United States)

    Gaies, Michael; Werho, David K; Zhang, Wenying; Donohue, Janet E; Tabbutt, Sarah; Ghanayem, Nancy S; Scheurer, Mark A; Costello, John M; Gaynor, J William; Pasquali, Sara K; Dimick, Justin B; Banerjee, Mousumi; Schwartz, Steven M

    2018-02-01

    Few metrics exist to assess quality of care at pediatric cardiac surgical programs, limiting opportunities for benchmarking and quality improvement. Postoperative duration of mechanical ventilation (POMV) may be an important quality metric because of its association with complications and resource utilization. In this study we modelled case-mix-adjusted POMV duration and explored hospital performance across POMV metrics. This study used the Pediatric Cardiac Critical Care Consortium clinical registry to analyze 4,739 hospitalizations from 15 hospitals (October 2013 to August 2015). All patients admitted to pediatric cardiac intensive care units after an index cardiac operation were included. We fitted a model to predict duration of POMV accounting for patient characteristics. Robust estimates of SEs were obtained using bootstrap resampling. We created performance metrics based on observed-to-expected (O/E) POMV to compare hospitals. Overall, 3,108 patients (65.6%) received POMV; the remainder were extubated intraoperatively. Our model was well calibrated across groups; neonatal age had the largest effect on predicted POMV. These comparisons suggested clinically and statistically important variation in POMV duration across centers with a threefold difference observed in O/E ratios (0.6 to 1.7). We identified 1 hospital with better-than-expected and 3 hospitals with worse-than-expected performance (p < 0.05) based on the O/E ratio. We developed a novel case-mix-adjusted model to predict POMV duration after congenital heart operations. We report variation across hospitals on metrics of O/E duration of POMV that may be suitable for benchmarking quality of care. Identifying high-performing centers and practices that safely limit the duration of POMV could stimulate quality improvement efforts. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
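
    A minimal sketch of the observed-to-expected comparison described above (invented numbers; the paper additionally uses bootstrap resampling to attach confidence to each O/E ratio):

    ```python
    # Each hospital's mean observed POMV duration over the case-mix-model
    # prediction; O/E > 1 suggests longer ventilation than expected.
    observed_hours = {"A": 41.0, "B": 18.5, "C": 66.2}
    expected_hours = {"A": 40.1, "B": 30.8, "C": 38.9}   # from the case-mix model

    for hospital in observed_hours:
        oe = observed_hours[hospital] / expected_hours[hospital]
        flag = "worse than expected" if oe > 1 else "better than expected"
        print(f"hospital {hospital}: O/E = {oe:.2f} ({flag})")
    ```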

  14. Reliability, Validity, Comparability and Practical Utility of Cybercrime-Related Data, Metrics, and Information

    OpenAIRE

    Nir Kshetri

    2013-01-01

    With an increasing pervasiveness, prevalence and severity of cybercrimes, various metrics, measures and statistics have been developed and used to measure various aspects of this phenomenon. Cybercrime-related data, metrics, and information, however, pose important and difficult dilemmas regarding the issues of reliability, validity, comparability and practical utility. While many of the issues of the cybercrime economy are similar to other underground and underworld industries, this economy ...

  15. Proxy Graph: Visual Quality Metrics of Big Graph Sampling.

    Science.gov (United States)

    Nguyen, Quan Hoang; Hong, Seok-Hee; Eades, Peter; Meidiana, Amyra

    2017-06-01

    Data sampling has been extensively studied for large scale graph mining. Many analyses and tasks become more efficient when performed on graph samples of much smaller size. The use of proxy objects is common in software engineering for analysis and interaction with heavy objects or systems. In this paper, we coin the term 'proxy graph' and empirically investigate how well a proxy graph visualization can represent a big graph. Our investigation focuses on proxy graphs obtained by sampling; this is one of the most common proxy approaches. Despite the plethora of data sampling studies, this is the first evaluation of sampling in the context of graph visualization. For an objective evaluation, we propose a new family of quality metrics for visual quality of proxy graphs. Our experiments cover popular sampling techniques. Our experimental results lead to guidelines for using sampling-based proxy graphs in visualization.
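
    A hedged sketch of the proxy-graph idea: sample a big graph and check how well one simple surrogate property survives. The sampling scheme and the degree-distribution check are illustrative, not the authors' proposed family of visual quality metrics:

    ```python
    # Build a "big" graph, take a random node sample as the proxy graph, and
    # compare degree distributions with a Kolmogorov-Smirnov distance.
    import random
    import networkx as nx
    from scipy.stats import ks_2samp

    big = nx.barabasi_albert_graph(n=5000, m=3, seed=1)        # stand-in big graph
    sampled_nodes = random.Random(1).sample(list(big.nodes), 500)
    proxy = big.subgraph(sampled_nodes)                        # random node sample

    d_big   = [d for _, d in big.degree()]
    d_proxy = [d for _, d in proxy.degree()]
    ks_stat, _ = ks_2samp(d_big, d_proxy)
    print(f"degree-distribution KS distance: {ks_stat:.3f} (0 = perfectly preserved)")
    ```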

  16. Quality Metrics in Neonatal and Pediatric Critical Care Transport: A National Delphi Project.

    Science.gov (United States)

    Schwartz, Hamilton P; Bigham, Michael T; Schoettker, Pamela J; Meyer, Keith; Trautman, Michael S; Insoft, Robert M

    2015-10-01

    The transport of neonatal and pediatric patients to tertiary care facilities for specialized care demands monitoring the quality of care delivered during transport and its impact on patient outcomes. In 2011, pediatric transport teams in Ohio met to identify quality indicators permitting comparisons among programs. However, no set of national consensus quality metrics exists for benchmarking transport teams. The aim of this project was to achieve national consensus on appropriate neonatal and pediatric transport quality metrics. Modified Delphi technique. The first round of consensus determination was via electronic mail survey, followed by rounds of consensus determination in-person at the American Academy of Pediatrics Section on Transport Medicine's 2012 Quality Metrics Summit. All attendees of the American Academy of Pediatrics Section on Transport Medicine Quality Metrics Summit, conducted on October 21-23, 2012, in New Orleans, LA, were eligible to participate. Candidate quality metrics were identified through literature review and those metrics currently tracked by participating programs. Participants were asked in a series of rounds to identify "very important" quality metrics for transport. It was determined a priori that consensus on a metric's importance was achieved when at least 70% of respondents were in agreement. This is consistent with other Delphi studies. Eighty-two candidate metrics were considered initially. Ultimately, 12 metrics achieved consensus as "very important" to transport. These include metrics related to airway management, team mobilization time, patient and crew injuries, and adverse patient care events. Definitions were assigned to the 12 metrics to facilitate uniform data tracking among programs. The authors succeeded in achieving consensus among a diverse group of national transport experts on 12 core neonatal and pediatric transport quality metrics. We propose that transport teams across the country use these metrics to
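
    A minimal sketch of the a priori consensus rule (at least 70% agreement); the candidate metrics and vote counts below are invented, loosely echoing the areas the abstract names:

    ```python
    # A metric is retained when >= 70% of respondents rate it "very important".
    votes = {
        "unplanned airway event":      (34, 40),   # (very important, total respondents)
        "team mobilization time":      (31, 40),
        "patient or crew injury":      (36, 40),
        "use of transport checklist":  (21, 40),
    }

    for metric, (very_important, total) in votes.items():
        agreement = very_important / total
        status = "consensus" if agreement >= 0.70 else "no consensus"
        print(f"{metric}: {agreement:.0%} -> {status}")
    ```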

  17. Applicability of Existing Objective Metrics of Perceptual Quality for Adaptive Video Streaming

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Krasula, Lukás; Shahid, Muhammad

    2016-01-01

    Objective video quality metrics are designed to estimate the quality of experience of the end user. However, these objective metrics are usually validated with video streams degraded under common distortion types. In the presented work, we analyze the performance of published and known full-reference and no-reference quality metrics in estimating the perceived quality of adaptive bit-rate video streams, knowingly out of their scope. Experimental results indicate, not surprisingly, that state-of-the-art objective quality metrics overlook the perceived degradations in adaptive video streams and perform poorly...

  18. Quality Evaluation in Wireless Imaging Using Feature-Based Objective Metrics

    OpenAIRE

    Engelke, Ulrich; Zepernick, Hans-Jürgen

    2007-01-01

    This paper addresses the evaluation of image quality in the context of wireless systems using feature-based objective metrics. The considered metrics comprise a weighted combination of feature values that are used to quantify the extent to which the related artifacts are present in a processed image. In view of imaging applications in mobile radio and wireless communication systems, reduced-reference objective quality metrics are investigated for quantifying user-perceived quality. The exa...

  19. MUSTANG: A Community-Facing Web Service to Improve Seismic Data Quality Awareness Through Metrics

    Science.gov (United States)

    Templeton, M. E.; Ahern, T. K.; Casey, R. E.; Sharer, G.; Weertman, B.; Ashmore, S.

    2014-12-01

    IRIS DMC is engaged in a new effort to provide broad and deep visibility into the quality of data and metadata found in its terabyte-scale geophysical data archive. Taking advantage of large and fast disk capacity, modern advances in open database technologies, and nimble provisioning of virtual machine resources, we are creating an openly accessible treasure trove of data measurements for scientists and the general public to utilize in providing new insights into the quality of this data. We have branded this statistical gathering system MUSTANG, and have constructed it as a component of the web services suite that IRIS DMC offers. MUSTANG measures over forty data metrics addressing issues with archive status, data statistics and continuity, signal anomalies, noise analysis, metadata checks, and station state of health. These metrics could potentially be used both by network operators to diagnose station problems and by data users to sort suitable data from unreliable or unusable data. Our poster details what MUSTANG is, how users can access it, what measurements they can find, and how MUSTANG fits into the IRIS DMC's data access ecosystem. Progress in data processing, approaches to data visualization, and case studies of MUSTANG's use for quality assurance will be presented. We want to illustrate what is possible with data quality assurance, the need for data quality assurance, and how the seismic community will benefit from this freely available analytics service.

  20. The Quality Adjusted Life Year: A Total-Utility Perspective.

    Science.gov (United States)

    Firth, Steven J

    2018-04-01

    Given that a properly formed utilitarian response to healthcare distribution issues should evaluate cost effectiveness against the total utility increase, it follows that any utilitarian cost-effectiveness metric should be sensitive to increases in both the individual and the social utility afforded by a given intervention. Quality adjusted life year (QALY) based decisionmaking in healthcare cannot track increases in social utility, and as a result, the QALY cannot be considered a strict utilitarian response to issues of healthcare distribution. This article considers arguments against, and a possible defence of, the QALY as a utilitarian concept; in response, the article offers a similar, but properly formed, utilitarian metric called the IALY. This article also advances a tool called the 'glee factor' (GF), on which the IALY may lean in a similar way to that in which the QALY leans on the Rosser Index.
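
    For orientation, the standard QALY arithmetic the article critiques is simply life-years weighted by a 0-1 quality-of-life utility; the IALY's exact form is not reproduced in the abstract:

    ```python
    # QALY = duration of a health state times its quality weight (1.0 = full health).
    def qaly(years, quality_weight):
        return years * quality_weight

    # Ten years at utility 0.7 counts the same as seven years in full health.
    print(qaly(10, 0.7))   # -> 7.0 QALYs
    ```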

  1. A no-reference image and video visual quality metric based on machine learning

    Science.gov (United States)

    Frantc, Vladimir; Voronin, Viacheslav; Semenishchev, Evgenii; Minkin, Maxim; Delov, Aliy

    2018-04-01

    The paper presents a novel visual quality metric for the quality assessment of lossy compressed video. A high degree of correlation with subjective quality estimates is achieved by using a convolutional neural network trained on a large number of pairs of video sequences and subjective quality scores. We demonstrate how our predicted no-reference quality metric correlates with qualitative opinion in a human observer study. Results are shown on the EVVQ dataset with comparison to existing approaches.
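
    The paper's network is not specified in the abstract; a toy no-reference CNN regressor sketches the general shape of the approach (architecture and sizes are invented):

    ```python
    # A tiny CNN that maps a frame directly to a quality score, no reference needed.
    import torch
    import torch.nn as nn

    class NRQualityNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 1)   # regresses a predicted MOS

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    # Training would minimise, e.g., MSE against subjective scores; here we
    # just run a forward pass on one random frame.
    model = NRQualityNet()
    frame = torch.rand(1, 3, 224, 224)
    print(f"predicted quality score: {model(frame).item():.3f}")
    ```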

  2. Software metrics: The key to quality software on the NCC project

    Science.gov (United States)

    Burns, Patricia J.

    1993-01-01

    Network Control Center (NCC) Project metrics are captured during the implementation and testing phases of the NCCDS software development lifecycle. The metrics data collection and reporting function has interfaces with all elements of the NCC project. Close collaboration with all project elements has resulted in the development of a defined and repeatable set of metrics processes. The resulting data are used to plan and monitor release activities on a weekly basis. The use of graphical outputs facilitates the interpretation of progress and status. The successful application of metrics throughout the NCC project has been instrumental in the delivery of quality software. The use of metrics on the NCC Project supports the needs of the technical and managerial staff. This paper describes the project, the functions supported by metrics, the data that are collected and reported, how the data are used, and the improvements in the quality of deliverable software since the metrics processes and products have been in use.

  3. Development of quality metrics for ambulatory pediatric cardiology: Chest pain.

    Science.gov (United States)

    Lu, Jimmy C; Bansal, Manish; Behera, Sarina K; Boris, Jeffrey R; Cardis, Brian; Hokanson, John S; Kakavand, Bahram; Jedeikin, Roy

    2017-12-01

    As part of the American College of Cardiology Adult Congenital and Pediatric Cardiology Section effort to develop quality metrics (QMs) for ambulatory pediatric practice, the chest pain subcommittee aimed to develop QMs for evaluation of chest pain. A group of 8 pediatric cardiologists formulated candidate QMs in the areas of history, physical examination, and testing. Consensus candidate QMs were submitted to an expert panel for scoring by the RAND-UCLA modified Delphi process. Recommended QMs were then available for open comments from all members. These QMs are intended for use in patients 5-18 years old, referred for initial evaluation of chest pain in an ambulatory pediatric cardiology clinic, with no known history of pediatric or congenital heart disease. A total of 10 candidate QMs were submitted; 2 were rejected by the expert panel, and 5 were removed after the open comment period. The 3 approved QMs included: (1) documentation of family history of cardiomyopathy, early coronary artery disease or sudden death, (2) performance of electrocardiogram in all patients, and (3) performance of an echocardiogram to evaluate coronary arteries in patients with exertional chest pain. Despite practice variation and limited prospective data, 3 QMs were approved, with measurable data points which may be extracted from the medical record. However, further prospective studies are necessary to define practice guidelines and to develop appropriate use criteria in this population. © 2017 Wiley Periodicals, Inc.

  4. Effective dose efficiency: an application-specific metric of quality and dose for digital radiography

    Energy Technology Data Exchange (ETDEWEB)

    Samei, Ehsan; Ranger, Nicole T; Dobbins, James T III; Ravin, Carl E, E-mail: samei@duke.edu [Carl E Ravin Advanced Imaging Laboratories, Department of Radiology (United States)

    2011-08-21

    The detective quantum efficiency (DQE) and the effective DQE (eDQE) are relevant metrics of image quality for digital radiography detectors and systems, respectively. The current study further extends the eDQE methodology to technique optimization using a new metric, the effective dose efficiency (eDE), reflecting both the image quality and the effective dose (ED) attributes of the imaging system. Using phantoms representing pediatric, adult and large adult body habitus, image quality measurements were made at 80, 100, 120 and 140 kVp using the standard eDQE protocol and exposures. ED was computed using Monte Carlo methods. The eDE was then computed as a ratio of image quality to ED for each of the phantom/spectral conditions. The eDQE and eDE results showed the same trends across tube potential, with 80 kVp yielding the highest values and 120 kVp yielding the lowest. The eDE results for the pediatric phantom were markedly lower than the results for the adult phantom at spatial frequencies lower than 1.2-1.7 mm⁻¹, primarily due to a correspondingly higher value of ED per entrance exposure. The relative performance for the adult and large adult phantoms was generally comparable but affected by kVp. The eDE results for the large adult configuration were lower than the eDE results for the adult phantom, across all spatial frequencies (120 and 140 kVp) and at spatial frequencies greater than 1.0 mm⁻¹ (80 and 100 kVp). Demonstrated for chest radiography, the eDE shows promise as an application-specific metric of imaging performance, reflective of body habitus and radiographic technique, with utility for radiography protocol assessment and optimization.
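
    A minimal sketch of the headline quantity, eDE as the ratio of image quality (eDQE) to effective dose at each spatial frequency; the eDQE curve and dose value below are hypothetical:

    ```python
    # eDE(f) = eDQE(f) / ED, evaluated over a set of spatial frequencies.
    import numpy as np

    freq = np.array([0.5, 1.0, 1.5, 2.0])      # spatial frequency, mm^-1
    edqe = np.array([0.28, 0.17, 0.10, 0.06])  # measured effective DQE (invented)
    effective_dose_msv = 0.02                  # ED for this phantom/technique (invented)

    ede = edqe / effective_dose_msv            # effective dose efficiency
    for f, v in zip(freq, ede):
        print(f"eDE({f} mm^-1) = {v:.1f} per mSv")
    ```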

  5. Transfusion rate as a quality metric: is blood conservation a learnable skill?

    Science.gov (United States)

    Paone, Gaetano; Brewer, Robert; Likosky, Donald S; Theurer, Patricia F; Bell, Gail F; Cogan, Chad M; Prager, Richard L

    2013-10-01

    Between January 2008 and December 2012, a multicenter quality collaborative initiated a focus on blood conservation as a quality metric, with educational presentations and quarterly reporting of institutional-level perioperative transfusion rates and outcomes. This prospective cohort study was undertaken to determine the effect of that initiative on transfusion rates after isolated coronary artery bypass grafting (CABG). Between January 1, 2008, and December 31, 2012, 30,271 patients underwent isolated CABG in Michigan. Evaluated were annual crude and adjusted trends in overall transfusion rates for red blood cells (RBCs), fresh frozen plasma (FFP), and platelets, and in operative death. Transfusion rates continuously decreased for all blood products. RBC use decreased from 56.4% in 2008 (baseline) to 38.3% in 2012, FFP use decreased from 14.8% to 9.1%, and platelet use decreased from 20.5% to 13.4% (all trends significant). Increased use of blood conservation techniques, coincident with regular reporting and review of perioperative transfusion rates as a quality metric, was associated with a significant decrease in blood product utilization. These reductions were concurrent with significant improvement in most perioperative outcomes. This intervention was also safe, as it was not associated with any increases in mortality. Copyright © 2013 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  6. A guide to calculating habitat-quality metrics to inform conservation of highly mobile species

    Science.gov (United States)

    Bieri, Joanna A.; Sample, Christine; Thogmartin, Wayne E.; Diffendorfer, James E.; Earl, Julia E.; Erickson, Richard A.; Federico, Paula; Flockhart, D. T. Tyler; Nicol, Sam; Semmens, Darius J.; Skraber, T.; Wiederholt, Ruscena; Mattsson, Brady J.

    2018-01-01

    Many metrics exist for quantifying the relative value of habitats and pathways used by highly mobile species. Properly selecting and applying such metrics requires substantial background in mathematics and understanding of the relevant management arena. To address this multidimensional challenge, we demonstrate and compare three classes of habitat-quality metrics: graph-, occupancy-, and demographic-based metrics. Each class provides insights into system dynamics, at the expense of increasing amounts and complexity of data and models. Our descriptions and comparisons of diverse habitat-quality metrics provide means for practitioners to overcome the modeling challenges associated with management or conservation of highly mobile species. Whereas previous guidance for applying habitat-quality metrics has been scattered across disparate tracks of literature, we have brought this information together into an approachable format, including accessible descriptions and a modeling case study that conservation professionals can adapt for their own decision contexts and focal populations. Considerations for resource managers: Management objectives, proposed actions, data availability and quality, and model assumptions are all relevant considerations when applying and interpreting habitat-quality metrics. Graph-based metrics answer questions related to habitat centrality and connectivity, are suitable for populations with any movement pattern, quantify basic spatial and temporal patterns of occupancy and movement, and require the least data (see the sketch following this record). Occupancy-based metrics answer questions about likelihood of persistence or colonization, are suitable for populations that undergo localized extinctions, quantify spatial and temporal patterns of occupancy and movement, and require a moderate amount of data. Demographic-based metrics answer questions about relative or absolute population size, are suitable for populations with any movement pattern, quantify demographic processes in addition to patterns of occupancy and movement, and require the most data.
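    As a minimal illustration of the graph-based class, the sketch below scores patches in a hypothetical migratory network by betweenness centrality, one common connectivity metric; the network, patch names, and weights are invented for the example:

```python
import networkx as nx

# Hypothetical migratory network: nodes are habitat patches, edges are
# movement pathways annotated with illustrative transition probabilities.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("breeding", "stopover_A", 0.7), ("breeding", "stopover_B", 0.3),
    ("stopover_A", "wintering", 1.0), ("stopover_B", "wintering", 1.0),
])

# Betweenness centrality flags patches that lie on many movement routes;
# losing a high-centrality patch would disconnect much of the network.
centrality = nx.betweenness_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1]))
```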

  7. Analytical performance evaluation of a high-volume hematology laboratory utilizing sigma metrics as standard of excellence.

    Science.gov (United States)

    Shaikh, M S; Moiz, B

    2016-04-01

    Around two-thirds of important clinical decisions about the management of patients are based on laboratory test results. Clinical laboratories are required to adopt quality control (QC) measures to ensure the provision of accurate and precise results. Six sigma is a statistical tool which provides an opportunity to assess performance at the highest level of excellence. The purpose of this study was to assess the performance of our hematological parameters on the sigma scale in order to identify gaps, and hence areas of improvement, in patient care. Twelve analytes included in the study were hemoglobin (Hb), hematocrit (Hct), red blood cell count (RBC), mean corpuscular volume (MCV), red cell distribution width (RDW), total leukocyte count (TLC) with percentages of neutrophils (Neutr%) and lymphocytes (Lymph%), platelet count (Plt), mean platelet volume (MPV), prothrombin time (PT), and fibrinogen (Fbg). Internal quality control data and external quality assurance survey results were utilized for the calculation of sigma metrics for each analyte. An acceptable sigma value of ≥3 was obtained for the majority of the analytes included in the analysis. MCV, Plt, and Fbg performed poorly on both level 1 and level 2 controls, with sigma values of <3. Despite acceptable conventional QC tools, application of sigma metrics can identify analytical deficits and hence prospects for improvement in clinical laboratories. © 2016 John Wiley & Sons Ltd.
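    The abstract does not spell out the calculation, but the conventional laboratory six-sigma formula combines the allowable total error with the bias and imprecision observed in QC data; a minimal sketch with illustrative numbers:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Conventional clinical-laboratory six-sigma calculation:
    sigma = (TEa - |bias|) / CV, with all terms as percentages.
    TEa is the total allowable error for the analyte; bias typically
    comes from external quality assurance surveys and CV from internal
    QC, matching the data sources named in the abstract."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Illustrative values only (not from the study)
print(sigma_metric(tea_pct=7.0, bias_pct=1.2, cv_pct=1.5))  # ~3.9 sigma
```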

  8. Reliability, Validity, Comparability and Practical Utility of Cybercrime-Related Data, Metrics, and Information

    Directory of Open Access Journals (Sweden)

    Nir Kshetri

    2013-02-01

    Full Text Available With the increasing pervasiveness, prevalence and severity of cybercrimes, various metrics, measures and statistics have been developed and used to measure aspects of this phenomenon. Cybercrime-related data, metrics, and information, however, pose important and difficult dilemmas regarding reliability, validity, comparability and practical utility. While many issues of the cybercrime economy are similar to those of other underground and underworld industries, this economy also has various unique aspects. For one thing, the industry suffers from a problem partly rooted in the incredibly broad definition of the term “cybercrime”. This article seeks to provide insights and analysis into this phenomenon, with the aim of advancing our understanding of cybercrime-related information.

  9. Evaluating which plan quality metrics are appropriate for use in lung SBRT.

    Science.gov (United States)

    Yaparpalvi, Ravindra; Garg, Madhur K; Shen, Jin; Bodner, William R; Mynampati, Dinesh K; Gafar, Aleiya; Kuo, Hsiang-Chi; Basavatia, Amar K; Ohri, Nitin; Hong, Linda X; Kalnicki, Shalom; Tome, Wolfgang A

    2018-02-01

    Several dose metrics in the categories of homogeneity, coverage, conformity and gradient have been proposed in the literature for evaluating treatment plan quality. In this study, we applied these metrics to characterize and identify the plan quality metrics that would merit plan quality assessment in lung stereotactic body radiation therapy (SBRT) dose distributions. Treatment plans of 90 lung SBRT patients, comprising 91 targets, treated in our institution were retrospectively reviewed. Dose calculations were performed using the anisotropic analytical algorithm (AAA) with heterogeneity correction. A literature review of published plan quality metrics in the categories of coverage, homogeneity, conformity and gradient was performed. For each patient, plan quality metric values were quantified and analysed using dose-volume histogram data. For the study, the radiation therapy oncology group (RTOG)-defined plan quality metrics were: coverage (0.90 ± 0.08); homogeneity (1.27 ± 0.07); conformity (1.03 ± 0.07) and gradient (4.40 ± 0.80). Geometric conformity strongly correlated with conformity index. We recommend the following plan quality guidelines: coverage % (ICRU 62), conformity (CN or CI_Paddick) and gradient (R_50%). Furthermore, we strongly recommend that RTOG lung SBRT protocols adopt either CN or CI_Paddick in place of the prescription isodose to target volume ratio for conformity index evaluation. Advances in knowledge: Our study metrics are valuable tools for establishing lung SBRT plan quality guidelines.
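    Two of the recommended quantities have standard published definitions; a minimal sketch of both follows (volumes in cc; variable names are illustrative):

```python
def paddick_cn(tv_piv_cc, tv_cc, piv_cc):
    """Paddick conformity number: CN = TV_PIV^2 / (TV * PIV), where
    TV_PIV is the target volume covered by the prescription isodose,
    TV the target volume, and PIV the prescription isodose volume.
    CN = 1 indicates perfect conformity."""
    return tv_piv_cc**2 / (tv_cc * piv_cc)

def r50(v50_cc, ptv_cc):
    """Dose-gradient metric R_50%: volume enclosed by 50% of the
    prescription isodose divided by the PTV volume; smaller values
    indicate a steeper dose fall-off."""
    return v50_cc / ptv_cc

# Illustrative plan: 28 cc PTV, 26 cc covered, 30 cc prescription isodose
print(paddick_cn(26.0, 28.0, 30.0), r50(120.0, 28.0))
```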

  10. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged, and methods exist to evaluate camera speed. However, speed metrics for the mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important performance feature. This work comprises several tasks. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made through the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
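    The abstract does not give the combination rule, so the sketch below shows one plausible form, a weighted average of normalized quality and speed sub-scores; the weights and scaling are assumptions, not the paper's formula:

```python
import numpy as np

def benchmark_score(quality_subscores, speed_subscores, w_quality=0.5):
    """Combine image-quality and speed sub-metrics, each pre-scaled to
    [0, 1], into one benchmarking score. The equal default weighting is
    an illustrative assumption."""
    q = float(np.mean(quality_subscores))
    s = float(np.mean(speed_subscores))
    return w_quality * q + (1.0 - w_quality) * s

# Illustrative phone: good quality scores, middling shot-to-shot speed
print(benchmark_score([0.8, 0.9, 0.75], [0.5, 0.6]))
```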

  11. Fly ash quality and utilization

    Energy Technology Data Exchange (ETDEWEB)

    Barta, L.E.; Lachner, L.; Wenzel, G.B. [Inst. for Energy, Budapest (Hungary); Beer, M.J. [Massachusetts Inst. of Technology, Cambridge, MA (United States)

    1995-12-01

    The quality of fly ash is of considerable importance to fly ash utilizers. Pozzolanic activity is one of the most important properties determining the role of fly ash as a binding agent in the cementing process. Pozzolanic activity, however, is a function of fly ash particle size and chemical composition, and these parameters are closely related to the process of fly ash formation in pulverized coal fired furnaces. It is therefore essential to understand the transformation of mineral matter during coal combustion. Due to the particle-to-particle variation of coal properties and the random coalescence of mineral particles, the properties of fly ash particles (e.g. size, SiO{sub 2} content, viscosity) can change considerably from particle to particle. These variations can be described by use of probability theory. Since the mean values of these randomly changing parameters are not sufficient to describe the behavior of individual fly ash particles during the formation of concrete, it is necessary to investigate the distributions of these variables. Examples of these variations were examined by Computer Controlled Scanning Electron Microscopy (CCSEM) for particle size and chemical composition for Texas lignite and Eagle Butte mineral matter and fly ash. The effect of combustion on the variations of these properties for both the fly ash and mineral matter was studied using a laminar flow reactor. Our paper shows that there are significant variations (about 40-50% around the mean values) of the above-listed properties for both coal samples. By comparing the particle size and chemical composition distributions of the mineral matter and fly ash, it was possible to conclude that for the Texas lignite mineral matter the combustion did not significantly affect the distributions of these properties; for the Eagle Butte coal, however, combustion had a major impact on these mineral matter parameters.

  12. Mining and Utilizing Dataset Relevancy from Oceanographic Dataset Metadata, Usage Metrics, and User Feedback to Improve Data Discovery and Access

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to mine and utilize the combination of Earth Science dataset, metadata with usage metrics and user feedback to objectively extract relevance for improved...

  13. Utility of ck metrics in predicting size of board-based software games

    International Nuclear Information System (INIS)

    Sabhat, N.; Azam, F.; Malik, A.A.

    2017-01-01

    Software size is one of the most important inputs to many software cost and effort estimation models, and early estimation plays an important role at the time of project inception. An accurate estimate of software size is, therefore, crucial for planning, managing, and controlling projects dealing with the development of software games. However, software size is unavailable during the early phase of software development. This research determines the utility of CK (Chidamber and Kemerer) metrics, a well-known suite of object-oriented metrics, in estimating the size of software applications using information from their UML (Unified Modeling Language) class diagrams. This work focuses on a small subset dealing with board-based software games. Almost sixty games written in object-oriented programming languages were downloaded from open source repositories, analyzed, and used to calibrate a regression-based size estimation model. Forward stepwise MLR (Multiple Linear Regression) was used for model fitting. The resulting model was assessed using a variety of accuracy measures, such as MMRE (Mean Magnitude of Relative Error), prediction at level x (PRED(x)), and MdMRE (Median Magnitude of Relative Error), and validated using K-fold cross validation. The accuracy of this model was also compared with an existing model tailored for size estimation of board games. Based on a small subset of desktop games developed in various object-oriented languages, we obtained a model using CK metrics and forward stepwise multiple linear regression with reasonable estimation accuracy, as indicated by the value of the coefficient of determination (R2 = 0.756). Comparison results indicate that the existing size estimation model outperforms the model derived using CK metrics in terms of prediction accuracy. (author)
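    The accuracy measures named above have standard definitions; a minimal sketch with illustrative data:

```python
import numpy as np

def mre(actual, predicted):
    """Magnitude of relative error for each observation."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return np.abs(a - p) / a

def mmre(actual, predicted):
    """Mean magnitude of relative error (lower is better)."""
    return float(mre(actual, predicted).mean())

def mdmre(actual, predicted):
    """Median magnitude of relative error (robust to outliers)."""
    return float(np.median(mre(actual, predicted)))

def pred(actual, predicted, x=0.25):
    """PRED(x): fraction of estimates with MRE <= x (e.g. within 25%)."""
    return float((mre(actual, predicted) <= x).mean())

sizes_actual = [1200, 800, 1500, 950]   # e.g. lines of code, invented
sizes_model = [1100, 900, 1400, 1200]
print(mmre(sizes_actual, sizes_model), pred(sizes_actual, sizes_model))
```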

  14. SU-E-T-776: Use of Quality Metrics for a New Hypo-Fractionated Pre-Surgical Mesothelioma Protocol

    International Nuclear Information System (INIS)

    Richardson, S; Mehta, V

    2015-01-01

    Purpose: The “SMART” (Surgery for Mesothelioma After Radiation Therapy) approach involves hypo-fractionated radiotherapy of the lung pleura to 25Gy over 5 days, followed by surgical resection within 7 days. Early clinical results suggest that this approach is very promising, but also logistically challenging due to the multidisciplinary involvement. Due to the compressed schedule, high dose, and shortened planning time, the delivery of the planned doses was monitored for safety with quality metric software. Methods: Hypo-fractionated IMRT treatment plans were developed for all patients and exported to Quality Reports™ software. Plan quality metrics or PQMs™ were created to calculate an objective scoring function for each plan. This allows for an objective assessment of the quality of the plan and a benchmark for plan improvement for subsequent patients. The priorities of various components were incorporated based on similar hypo-fractionated protocols such as lung SBRT treatments. Results: Five patients have been treated at our institution using this approach. The plans were developed, QA performed, and ready within 5 days of simulation. Plan quality metrics utilized in scoring included doses to OAR and target coverage. All patients tolerated treatment well and proceeded to surgery as scheduled. Reported toxicity included grade 1 nausea (n=1), grade 1 esophagitis (n=1), and grade 2 fatigue (n=3). One patient had recurrent fluid accumulation following surgery. No patients experienced any pulmonary toxicity prior to surgery. Conclusion: An accelerated course of pre-operative high dose radiation for mesothelioma is an innovative and promising new protocol. Without historical data, one must proceed cautiously and monitor the data carefully. The development of quality metrics and scoring functions for these treatments allows us to benchmark our plans and monitor improvement. If subsequent toxicities occur, they will be easy to investigate and incorporate into the quality metrics.

  15. SU-E-T-776: Use of Quality Metrics for a New Hypo-Fractionated Pre-Surgical Mesothelioma Protocol

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, S; Mehta, V [Swedish Cancer Institute, Seattle, WA (United States)

    2015-06-15

    Purpose: The “SMART” (Surgery for Mesothelioma After Radiation Therapy) approach involves hypo-fractionated radiotherapy of the lung pleura to 25Gy over 5 days, followed by surgical resection within 7 days. Early clinical results suggest that this approach is very promising, but also logistically challenging due to the multidisciplinary involvement. Due to the compressed schedule, high dose, and shortened planning time, the delivery of the planned doses was monitored for safety with quality metric software. Methods: Hypo-fractionated IMRT treatment plans were developed for all patients and exported to Quality Reports™ software. Plan quality metrics or PQMs™ were created to calculate an objective scoring function for each plan. This allows for an objective assessment of the quality of the plan and a benchmark for plan improvement for subsequent patients. The priorities of various components were incorporated based on similar hypo-fractionated protocols such as lung SBRT treatments. Results: Five patients have been treated at our institution using this approach. The plans were developed, QA performed, and ready within 5 days of simulation. Plan quality metrics utilized in scoring included doses to OAR and target coverage. All patients tolerated treatment well and proceeded to surgery as scheduled. Reported toxicity included grade 1 nausea (n=1), grade 1 esophagitis (n=1), and grade 2 fatigue (n=3). One patient had recurrent fluid accumulation following surgery. No patients experienced any pulmonary toxicity prior to surgery. Conclusion: An accelerated course of pre-operative high dose radiation for mesothelioma is an innovative and promising new protocol. Without historical data, one must proceed cautiously and monitor the data carefully. The development of quality metrics and scoring functions for these treatments allows us to benchmark our plans and monitor improvement. If subsequent toxicities occur, they will be easy to investigate and incorporate into the quality metrics.

  16. Extracting Patterns from Educational Traces via Clustering and Associated Quality Metrics

    NARCIS (Netherlands)

    Mihaescu, Marian; Tanasie, Alexandru; Dascalu, Mihai; Trausan-Matu, Stefan

    2016-01-01

    Clustering algorithms, pattern mining techniques and associated quality metrics emerged as reliable methods for modeling learners’ performance, comprehension and interaction in given educational scenarios. The specificity of available data such as missing values, extreme values or outliers,

  17. National Quality Forum Colon Cancer Quality Metric Performance: How Are Hospitals Measuring Up?

    Science.gov (United States)

    Mason, Meredith C; Chang, George J; Petersen, Laura A; Sada, Yvonne H; Tran Cao, Hop S; Chai, Christy; Berger, David H; Massarweh, Nader N

    2017-12-01

    To evaluate the impact of care at high-performing hospitals on the National Quality Forum (NQF) colon cancer metrics. The NQF endorses evaluating ≥12 lymph nodes (LNs), adjuvant chemotherapy (AC) for stage III patients, and AC within 4 months of diagnosis as colon cancer quality indicators. Data on hospital-level metric performance and the association with survival are unclear. Retrospective cohort study of 218,186 patients with resected stage I to III colon cancer in the National Cancer Data Base (2004-2012). High-performing hospitals (>75% achievement) were identified by the proportion of patients achieving each measure. The association between hospital performance and survival was evaluated using Cox shared frailty modeling. Only hospital LN performance improved over the study period (15.8% in 2004 vs 80.7% in 2012; significant trend test). The adjusted hazard of death decreased in a stepwise fashion with the number of metrics achieved [0 metrics, reference; 1, hazard ratio (HR) 0.96 (0.89-1.03); 2, HR 0.92 (0.87-0.98); 3, HR 0.85 (0.80-0.90); 2 vs 1, HR 0.96 (0.91-1.01); 3 vs 1, HR 0.89 (0.84-0.93); 3 vs 2, HR 0.95 (0.89-0.95)]. Performance on metrics in combination was associated with lower risk of death [LN + AC, HR 0.86 (0.78-0.95); AC + timely AC, HR 0.92 (0.87-0.98); LN + AC + timely AC, HR 0.85 (0.80-0.90)], whereas individual measures were not [LN, HR 0.95 (0.88-1.04); AC, HR 0.95 (0.87-1.05)]. Less than half of hospitals perform well on these NQF colon cancer metrics concurrently, and high performance on individual measures is not associated with improved survival. Quality improvement efforts should shift focus from individual measures to defining composite measures encompassing the overall multimodal care pathway and capturing successful transitions from one care modality to another.

  18. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    Science.gov (United States)

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics has been lacking. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
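    Both families of metrics are available in common Python libraries; the sketch below (assuming networkx ≥ 3.0 and scikit-learn) contrasts stand-alone scores with information recovery against known ground truth, using a toy graph and one of the paper's algorithms:

```python
import networkx as nx
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

G = nx.karate_club_graph()
communities = nx.community.louvain_communities(G, seed=0)

# Stand-alone quality metrics
modularity = nx.community.modularity(G, communities)
conductances = [nx.conductance(G, c) for c in communities]

# Information-recovery metrics against the known ground-truth factions
truth = [G.nodes[n]["club"] for n in G]
labels = [next(i for i, c in enumerate(communities) if n in c) for n in G]
print("modularity:", round(modularity, 3),
      "worst conductance:", round(max(conductances), 3))
print("ARI:", adjusted_rand_score(truth, labels),
      "NMI:", normalized_mutual_info_score(truth, labels))
```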

  19. Better Metrics to Automatically Predict the Quality of a Text Summary

    Directory of Open Access Journals (Sweden)

    Judith D. Schlesinger

    2012-09-01

    Full Text Available In this paper we demonstrate a family of metrics for estimating the quality of a text summary relative to one or more human-generated summaries. The improved metrics are based on features automatically computed from the summaries to measure content and linguistic quality. The features are combined using one of three methods—robust regression, non-negative least squares, or canonical correlation, an eigenvalue method. The new metrics significantly outperform the previous standard for automatic text summarization evaluation, ROUGE.
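    Of the three combination methods listed, non-negative least squares is the simplest to sketch; below, hypothetical feature rows are fitted to stand-in human judgments with scipy (all data invented for illustration):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
X = rng.random((50, 6))     # rows: summaries; cols: content/linguistic features
true_w = np.array([0.5, 0.2, 0.0, 0.1, 0.15, 0.05])
y = X @ true_w + 0.01 * rng.standard_normal(50)   # stand-in human scores

weights, residual = nnls(X, y)   # non-negative feature weights
predicted_quality = X @ weights  # automatic quality estimate per summary
print(np.round(weights, 2))
```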

  20. A Metric Tool for Predicting Source Code Quality from a PDL Design

    OpenAIRE

    Henry, Sallie M.; Selig, Calvin

    1987-01-01

    The software crisis has increased the demand for automated tools to assist software developers in the production of quality software. Quality metrics have given software developers a tool to measure software quality. These measurements, however, are available only after the software has been produced. Due to high cost, software managers are reluctant to redesign and reimplement low quality software. Ideally, a life cycle which allows early measurement of software quality is a necessary ingre...

  1. Using business intelligence to monitor clinical quality metrics.

    Science.gov (United States)

    Resetar, Ervina; Noirot, Laura A; Reichley, Richard M; Storey, Patricia; Skiles, Ann M; Traynor, Patrick; Dunagan, W Claiborne; Bailey, Thomas C

    2007-10-11

    BJC HealthCare (BJC) uses a number of industry standard indicators to monitor the quality of services provided by each of its hospitals. By establishing an enterprise data warehouse as a central repository of clinical quality information, BJC is able to monitor clinical quality performance in a timely manner and improve clinical outcomes.

  2. Defining quality metrics and improving safety and outcome in allergy care.

    Science.gov (United States)

    Lee, Stella; Stachler, Robert J; Ferguson, Berrylin J

    2014-04-01

    The delivery of allergy immunotherapy in the otolaryngology office is variable and lacks standardization. Quality metrics encompass the measurement of factors associated with good patient-centered care, but these factors have yet to be defined in the delivery of allergy immunotherapy. We developed and applied quality metrics to 6 allergy practices affiliated with an academic otolaryngic allergy center. This work was conducted at a tertiary academic center providing care to over 1500 patients. We evaluated methods and variability between the 6 sites, and tracking of errors and anaphylaxis was initiated across all sites. A nationwide survey of academic and private allergists was used to collect data on current practice and use of quality metrics. The most common types of errors recorded were patient identification errors (n = 4), followed by vial mixing errors (n = 3) and dosing errors (n = 2). There were 7 episodes of anaphylaxis, of which 2 were secondary to dosing errors, for a rate of 0.01% or 1 in every 10,000 injection visits/year. Site visits showed that 86% of key safety measures were followed. Analysis of nationwide survey responses revealed that quality metrics are still not well defined by either medical or otolaryngic allergy practices. Academic practices were statistically more likely to use quality metrics (p = 0.021) and to perform systems reviews and audits in comparison to private practices (p = 0.005). Quality metrics in allergy delivery can help improve safety and quality of care. These metrics need to be further defined by otolaryngic allergists in the changing health care environment. © 2014 ARS-AAOA, LLC.

  3. A management-oriented framework for selecting metrics used to assess habitat- and path-specific quality in spatially structured populations

    Science.gov (United States)

    Nicol, Sam; Wiederholt, Ruscena; Diffendorfer, James E.; Mattsson, Brady; Thogmartin, Wayne E.; Semmens, Darius J.; Laura Lopez-Hoffman,; Norris, Ryan

    2016-01-01

    Mobile species with complex spatial dynamics can be difficult to manage because their population distributions vary across space and time, and because the consequences of managing particular habitats are uncertain when evaluated at the level of the entire population. Metrics to assess the importance of habitats and pathways connecting habitats in a network are necessary to guide a variety of management decisions. Given the many metrics developed for spatially structured models, it can be challenging to select the most appropriate one for a particular decision. To guide the management of spatially structured populations, we define three classes of metrics describing habitat and pathway quality based on their data requirements (graph-based, occupancy-based, and demographic-based metrics) and synopsize the ecological literature relating to these classes. Applying the first steps of a formal decision-making approach (problem framing, objectives, and management actions), we assess the utility of metrics for particular types of management decisions. Our framework can help managers with problem framing, choosing metrics of habitat and pathway quality, and to elucidate the data needs for a particular metric. Our goal is to help managers to narrow the range of suitable metrics for a management project, and aid in decision-making to make the best use of limited resources.

  4. Utility of different glycemic control metrics for optimizing management of diabetes.

    Science.gov (United States)

    Kohnert, Klaus-Dieter; Heinke, Peter; Vogt, Lutz; Salzsieder, Eckhard

    2015-02-15

    The benchmark for assessing quality of long-term glycemic control and adjustment of therapy is currently glycated hemoglobin (HbA1c). Despite its importance as an indicator for the development of diabetic complications, recent studies have revealed that this metric has some limitations; it conveys a rather complex message, which has to be taken into consideration for diabetes screening and treatment. On the basis of recent clinical trials, the relationship between HbA1c and cardiovascular outcomes in long-standing diabetes has been called into question. It has become obvious that other surrogate markers and biomarkers are needed to better predict cardiovascular diabetes complications and assess the efficiency of therapy. Glycated albumin, fructosamine, and 1,5-anhydroglucitol have received growing interest as alternative markers of glycemic control. In addition to measures of hyperglycemia, advanced glucose monitoring methods have become available. An indispensable adjunct to HbA1c in routine diabetes care is self-monitoring of blood glucose. This monitoring method is now widely used, as it provides immediate feedback to patients on short-term changes, involving fasting, preprandial, and postprandial glucose levels. Beyond the traditional metrics, glycemic variability has been identified as a predictor of hypoglycemia, and it might also be implicated in the pathogenesis of vascular diabetes complications. Assessment of glycemic variability is thus important, but exact quantification requires frequently sampled glucose measurements. In order to optimize diabetes treatment, there is a need both for key metrics of glycemic control on a day-to-day basis and for more advanced, user-friendly monitoring methods. In addition to traditional discontinuous glucose testing, continuous glucose sensing has become a useful tool to reveal insufficient glycemic management. This new technology is particularly effective in patients with complicated diabetes and provides the opportunity to characterize glycemic variability in detail.
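    Glycemic variability is commonly summarized from frequently sampled (e.g., CGM) readings by the standard deviation and coefficient of variation; a minimal sketch (the threshold in the comment is a commonly cited consensus figure, not from this article):

```python
import numpy as np

def glycemic_variability(glucose_mg_dl):
    """Basic variability summaries from frequently sampled glucose
    readings: sample standard deviation and coefficient of variation."""
    g = np.asarray(glucose_mg_dl, dtype=float)
    sd = g.std(ddof=1)          # standard deviation, mg/dL
    cv = 100.0 * sd / g.mean()  # coefficient of variation, percent
    return sd, cv

# A CV above ~36% is often cited as a marker of unstable glycemia
print(glycemic_variability([110, 145, 95, 180, 130, 88, 160]))
```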

  5. Metrics for analyzing the quality of model transformations

    NARCIS (Netherlands)

    Amstel, van M.F.; Lange, C.F.J.; Brand, van den M.G.J.; Falcone, G.; Guéhéneuc, Y.G.; Lange, C.F.J.; Porkoláb, Z.; Sahraoui, H.A.

    2008-01-01

    Model transformations become increasingly important with the emergence of model driven engineering of, amongst others, object-oriented software systems. It is therefore necessary to define and evaluate the quality of model transformations. The goal of our research is to make the quality of model transformations measurable.

  6. Quality of Service Metrics in Wireless Sensor Networks: A Survey

    Science.gov (United States)

    Snigdh, Itu; Gupta, Nisha

    2016-03-01

    Wireless ad hoc networks are characterized by autonomous nodes communicating with each other by forming a multi-hop radio network and maintaining connectivity in a decentralized manner. This paper presents a systematic approach to the interdependencies among the various factors that affect and constrain wireless sensor networks. It elaborates the quality of service parameters addressed in the literature to date, in terms of deployment methods, coverage, and connectivity, all of which affect the lifetime of the network. It also discusses essential factors that determine the quality of service achieved but have not yet received due attention.

  7. The use of quality metrics in service centres

    NARCIS (Netherlands)

    Petkova, V.T.; Sander, P.C.; Brombacher, A.C.

    2000-01-01

    In industry it is not well realised that a service centre is potentially one of the major contributors to quality improvement. Service is able to collect vital information about the field behaviour of products in interaction with customers. If this information is well analysed and communicated, it can make a substantial contribution to improving product quality.

  8. Developing a more useful surface quality metric for laser optics

    Science.gov (United States)

    Turchette, Quentin; Turner, Trey

    2011-02-01

    Light scatter due to surface defects on laser resonator optics produces losses which lower system efficiency and output power. The traditional methodology for surface quality inspection involves visual comparison of a component to scratch and dig (SAD) standards under controlled lighting and viewing conditions. Unfortunately, this process is subjective and operator dependent. Also, there is no clear correlation between inspection results and the actual performance impact of the optic in a laser resonator. As a result, laser manufacturers often overspecify surface quality in order to ensure that optics will not degrade laser performance due to scatter. This can drive up component costs and lengthen lead times. Alternatively, an objective test system for measuring optical scatter from defects can be constructed with a microscope, calibrated lighting, a CCD detector and image processing software. This approach is quantitative, highly repeatable and totally operator independent. Furthermore, it is flexible, allowing the user to set threshold levels as to what will or will not constitute a defect. This paper details how this automated, quantitative type of surface quality measurement can be constructed, and shows how its results correlate against conventional loss measurement techniques such as cavity ringdown times.

  9. A Validation of Object-Oriented Design Metrics as Quality Indicators

    Science.gov (United States)

    Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio

    1997-01-01

    This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in earlier work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to work in which the same suite of metrics was used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method, and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development process.

  10. Performance evaluation of no-reference image quality metrics for face biometric images

    Science.gov (United States)

    Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick

    2018-03-01

    The accuracy of face recognition systems is significantly affected by the quality of face sample images, and recently established standards propose several important aspects for the assessment of face sample quality. There are many existing no-reference image quality metrics (IQMs) that are able to assess natural image quality by taking into account image-based quality attributes similar to those introduced in the standards. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality in line with system performance. We also analyze the strengths and weaknesses of different IQMs, as well as why some of them fail to assess face sample quality. Retraining an original IQM using a face database can improve the performance of such a metric. In addition, the contribution of this paper can be used for the evaluation of IQMs on other biometric modalities; furthermore, it can be used for the development of multimodality biometric IQMs.
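    One common way to check whether an IQM assesses quality "in line with system performance" is to correlate per-image quality scores with recognition scores; a minimal sketch with invented numbers:

```python
import numpy as np
from scipy.stats import spearmanr

# Per-image no-reference quality scores and the corresponding biometric
# comparison (match) scores for the same probe images; values invented.
iqm_scores = np.array([0.82, 0.45, 0.91, 0.33, 0.70, 0.58])
match_scores = np.array([0.88, 0.52, 0.93, 0.40, 0.66, 0.61])

rho, p = spearmanr(iqm_scores, match_scores)   # rank correlation
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```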

  11. National evaluation of multidisciplinary quality metrics for head and neck cancer.

    Science.gov (United States)

    Cramer, John D; Speedy, Sedona E; Ferris, Robert L; Rademaker, Alfred W; Patel, Urjeet A; Samant, Sandeep

    2017-11-15

    The National Quality Forum has endorsed quality-improvement measures for multiple cancer types that are being developed into actionable tools to improve cancer care. No nationally endorsed quality metrics currently exist for head and neck cancer. The authors identified patients with surgically treated, invasive, head and neck squamous cell carcinoma in the National Cancer Data Base from 2004 to 2014 and compared the rate of adherence to 5 different quality metrics and whether compliance with these quality metrics impacted overall survival. The metrics examined included negative surgical margins, neck dissection lymph node (LN) yield ≥ 18, appropriate adjuvant radiation, appropriate adjuvant chemoradiation, adjuvant therapy within 6 weeks, as well as overall quality. In total, 76,853 eligible patients were identified. There was substantial variability in patient-level adherence, which was 80% for negative surgical margins, 73.1% for neck dissection LN yield, 69% for adjuvant radiation, 42.6% for adjuvant chemoradiation, and 44.5% for adjuvant therapy within 6 weeks. Risk-adjusted Cox proportional-hazard models indicated that all metrics were associated with a reduced risk of death: negative margins (hazard ratio [HR] 0.73; 95% confidence interval [CI], 0.71-0.76), LN yield ≥ 18 (HR, 0.93; 95% CI, 0.89-0.96), adjuvant radiation (HR, 0.67; 95% CI, 0.64-0.70), adjuvant chemoradiation (HR, 0.84; 95% CI, 0.79-0.88), and adjuvant therapy ≤6 weeks (HR, 0.92; 95% CI, 0.89-0.96). Patients who received high-quality care had a 19% reduced adjusted hazard of mortality (HR, 0.81; 95% CI, 0.79-0.83). Five head and neck cancer quality metrics were identified that have substantial variability in adherence and meaningfully impact overall survival. These metrics are appropriate candidates for national adoption. Cancer 2017;123:4372-81. © 2017 American Cancer Society.

  12. Quality metric for accurate overlay control in <20nm nodes

    Science.gov (United States)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the Overlay (OVL) budget is getting tighter at these advanced nodes, accuracy in each nanometer of OVL error is critical. When process owners select OVL targets and methods for their process, they must do it wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going towards the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named 'Qmerit' for its imaging based OVL (IBO) targets, which is obtained on the fly for each OVL measurement point in X & Y. This Qmerit score will enable process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  13. Quality Assessment of Adaptive Bitrate Videos using Image Metrics and Machine Learning

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Brunnström, Kjell

    2015-01-01

    Adaptive bitrate (ABR) streaming is widely used for distribution of videos over the internet. In this work, we investigate how well we can predict the quality of such videos using well-known image metrics, information about the bitrate levels, and a relatively simple machine learning method...

  14. No Reference Prediction of Quality Metrics for H.264 Compressed Infrared Image Sequences for UAV Applications

    DEFF Research Database (Denmark)

    Hossain, Kabir; Mantel, Claire; Forchhammer, Søren

    2018-01-01

    The framework for this research work is the acquisition of infrared (IR) images from Unmanned Aerial Vehicles (UAV). In this paper we consider the No-Reference (NR) prediction of Full Reference Quality Metrics for IR video sequences which are compressed, and thus distorted, by an H.264 codec.

  15. Improved color metrics in solid-state lighting via utilization of on-chip quantum dots

    Science.gov (United States)

    Mangum, Benjamin D.; Landes, Tiemo S.; Theobald, Brian R.; Kurtin, Juanita N.

    2017-02-01

    While Quantum Dots (QDs) have found commercial success in display applications, there are currently no widely available solid state lighting products making use of QD nanotechnology. In order to have real-world success in today's lighting market, QDs must be capable of being placed in on-chip configurations, as remote phosphor configurations are typically much more expensive. Here we demonstrate solid-state lighting devices made with on-chip QDs. These devices show robust reliability under both dry and wet high stress conditions. High color quality lighting metrics can easily be achieved using these narrow, tunable QD downconverters: CRI values of Ra > 90 as well as R9 values > 80 are readily available when combining QDs with green phosphors. Furthermore, we show that QDs afford a 15% increase in overall efficiency compared to traditional phosphor downconverted SSL devices. The fundamental limit of QD linewidth is examined through single particle QD emission studies. Using standard Cd-based QD synthesis, it is found that single particle linewidths of 20 nm FWHM represent a lower limit to the narrowness of QD emission in the near term.

  16. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)*

    Science.gov (United States)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H.; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W.; Deutsch, Eric W.; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L.; Omenn, Gilbert S.; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L.; Simpson, Richard J.; Slotta, Douglas; Smith, Richard D.; Stein, Stephen E.; Tabb, David L.; Tagle, Danilo; Yates, John R.; Rodriguez, Henry

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the “International Workshop on Proteomic Data Quality Metrics” in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: 1) an evolving list of comprehensive quality metrics and 2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals. PMID:22052993

  17. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    Science.gov (United States)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H.; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W.; Deutsch, Eric W.; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L.; Omenn, Gilbert S.; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L.; Simpson, Richard J.; Slotta, Douglas; Smith, Richard D.; Stein, Stephen E.; Tabb, David L.; Tagle, Danilo; Yates, John R.; Rodriguez, Henry

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the U.S. National Cancer Institute (NCI) convened the “International Workshop on Proteomic Data Quality Metrics” in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: (1) an evolving list of comprehensive quality metrics and (2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals. PMID:22053864

  18. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: an evolving list of comprehensive quality metrics and standards accompanied by software analytics.

  19. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2012-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the "International Workshop on Proteomic Data Quality Metrics" in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: an evolving list of comprehensive quality metrics and standards accompanied by software analytics.

  20. Integrated concurrent utilization quality review, Part one.

    Science.gov (United States)

    Caterinicchio, R P

    1987-01-01

    This article is the first of a two-part series which argues for the concurrent management of the appropriateness, necessity, and quality of patient care. Intensifying scrutiny by the credentialing groups, the PROs and all third-party payors underscores the vital need to implement cost-effective information systems which integrate the departmentalized functions of patient-physician profiling, DRG case-mix analyses, length of stay monitoring, pre-admission/admission and continued stay review, discharge planning, risk management, incident reporting and quality review. In the domain of physician performance regarding admitting and practice patterns, the ability to exercise concurrent utilization-quality review means early detection and prevention of events which would otherwise result in denials of payment and/or compromised patient care. Concurrent utilization-quality review must, by definition, be managerially invasive and focused; hence, it is integral to maintaining the integrity of the services and product lines offered by the provider. In fact, if PPO status is a marketing agenda, then the institutional objectives of cost-effectiveness, productivity, value, and competitiveness can only be achieved through concurrent utilization-quality review.

  1. Quality Utilization Aware Based Data Gathering for Vehicular Communication Networks

    Directory of Open Access Journals (Sweden)

    Yingying Ren

    2018-01-01

    Full Text Available The vehicular communication networks, which can employ mobile, intelligent sensing devices with participatory sensing to gather data, could be an efficient and economical way to build various applications based on big data. However, the high quality data gathering that vehicular communication networks urgently need faces many challenges, so in this paper a fine-grained data collection framework is proposed to cope with them. Different from classical data gathering, which concentrates on collecting enough data to satisfy the requirements of applications, a Quality Utilization Aware Data Gathering (QUADG) scheme is proposed for vehicular communication networks to collect the most appropriate data and to best satisfy the multidimensional requirements (mainly data gathering quantity, quality, and cost) of the application. In the QUADG scheme, data sensing is fine-grained, in that the data gathering time and data gathering area are divided at very fine granularity. A metric named "Quality Utilization" (QU) quantifies the ratio of the quality of the collected sensing data to the cost of the system. Three data collection algorithms are proposed. The first ensures that an application which has specified the quantity of sensing data to be collected minimizes cost and maximizes data quality by maximizing QU. The second ensures that an application which has specified two requirements (quantity and quality, or quantity and cost of data collection) maximizes QU. The third ensures that an application aiming to satisfy requirements on quantity, quality, and cost simultaneously maximizes QU. Finally, we compare our proposed scheme with existing schemes via extensive simulations, which demonstrate the effectiveness of our scheme.
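    A minimal sketch of the QU ratio and a naive quantity-constrained selection loop follows; the greedy loop is an illustrative stand-in, not the paper's algorithms, and the field names are invented:

```python
def quality_utilization(total_quality, total_cost):
    """QU: ratio of collected data quality to system cost (higher is better)."""
    return total_quality / total_cost

def collect_fixed_quantity(candidates, n_required):
    """Greedy stand-in: pick the n_required sensing reports with the best
    individual quality/cost ratio, then report the achieved QU."""
    chosen = sorted(candidates, key=lambda c: c["quality"] / c["cost"],
                    reverse=True)[:n_required]
    q = sum(c["quality"] for c in chosen)
    cost = sum(c["cost"] for c in chosen)
    return chosen, quality_utilization(q, cost)

reports = [{"quality": 0.9, "cost": 2.0}, {"quality": 0.7, "cost": 1.0},
           {"quality": 0.4, "cost": 0.5}, {"quality": 0.8, "cost": 3.0}]
print(collect_fixed_quantity(reports, n_required=2)[1])
```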

  2. Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.

    Science.gov (United States)

    Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel

    2017-07-28

    New challenges have been brought about by emerging 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), with applications in remote surveillance, remote education, etc., based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn the attention of a wide range of researchers. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. But existing assessment metrics do not render human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method as compared with prevailing full-, reduced- and no-reference models.

  3. SU-E-J-155: Automatic Quantitative Decision Making Metric for 4DCT Image Quality

    International Nuclear Information System (INIS)

    Kiely, J Blanco; Olszanski, A; Both, S; White, B; Low, D

    2015-01-01

    Purpose: To develop a quantitative decision making metric for automatically detecting irregular breathing, using a large patient population that received phase-sorted 4DCT. Methods: This study employed two patient cohorts. Cohort#1 contained 256 patients who received a phase-sorted 4DCT. Cohort#2 contained 86 patients who received three weekly phase-sorted 4DCT scans. A previously published technique used a single abdominal surrogate to calculate the ratio of extreme inhalation tidal volume to normal inhalation tidal volume, referred to as the κ metric. Since a single surrogate is standard for phase-sorted 4DCT in radiation oncology clinical practice, tidal volume was not quantified. Without tidal volume, the absolute κ metric could not be determined, so a relative κ (κrel) metric was defined based on the measured surrogate amplitude instead of tidal volume. Receiver operator characteristic (ROC) curves were used to quantitatively determine the optimal cutoff value (jκ) and efficiency cutoff value (τκ) of κrel to automatically identify irregular breathing that would reduce the image quality of phase-sorted 4DCT. Discriminatory accuracy (area under the ROC curve) of κrel was calculated by a trapezoidal numeric integration technique. Results: The discriminatory accuracy of κrel was found to be 0.746. The key values of jκ and τκ were calculated to be 1.45 and 1.72, respectively. For values of κrel such that jκ≤κrel≤τκ, the decision to reacquire the 4DCT would be at the discretion of the physician; this accounted for only 11.9% of the patients in this study. The magnitude of κrel held consistent over 3 weeks for 73% of the patients in cohort#2. Conclusion: The decision making metric, κrel, was shown to be an accurate classifier of irregular breathing patients in a large patient population. This work provides an automatic quantitative decision making metric to quickly and accurately assess the extent of irregular breathing during phase-sorted 4DCT acquisition.

  4. SU-E-J-155: Automatic Quantitative Decision Making Metric for 4DCT Image Quality

    Energy Technology Data Exchange (ETDEWEB)

    Kiely, J Blanco; Olszanski, A; Both, S; White, B [University of Pennsylvania, Philadelphia, PA (United States); Low, D [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA (United States)]

    2015-06-15

    Purpose: To develop a quantitative decision making metric for automatically detecting irregular breathing using a large patient population that received phase-sorted 4DCT. Methods: This study employed two patient cohorts. Cohort#1 contained 256 patients who received a phase-sorted 4DCT. Cohort#2 contained 86 patients who received three weekly phase-sorted 4DCT scans. A previously published technique used a single abdominal surrogate to calculate the ratio of extreme inhalation tidal volume to normal inhalation tidal volume, referred to as the κ metric. Since a single surrogate is standard for phase-sorted 4DCT in radiation oncology clinical practice, tidal volume was not quantified. Without tidal volume, the absolute κ metric could not be determined, so a relative κ (κrel) metric was defined based on the measured surrogate amplitude instead of tidal volume. Receiver operating characteristic (ROC) curves were used to quantitatively determine the optimal cutoff value (jk) and efficiency cutoff value (τk) of κrel to automatically identify irregular breathing that would reduce the image quality of phase-sorted 4DCT. Discriminatory accuracy (area under the ROC curve) of κrel was calculated by a trapezoidal numeric integration technique. Results: The discriminatory accuracy of κrel was found to be 0.746. The key values of jk and τk were calculated to be 1.45 and 1.72, respectively. For values of κrel such that jk≤κrel≤τk, the decision to reacquire the 4DCT would be at the discretion of the physician. This accounted for only 11.9% of the patients in this study. The magnitude of κrel held consistent over 3 weeks for 73% of the patients in cohort#2. Conclusion: The decision making metric, κrel, was shown to be an accurate classifier of irregular breathing patients in a large patient population. This work provided an automatic quantitative decision making metric to quickly and accurately assess the extent to which irregular breathing is occurring during phase
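
    A minimal sketch of the ROC machinery both records describe, with toy κrel values and degradation labels in place of real data; the Youden-style cutoff below is one common selection rule, not necessarily the paper's definitions of jk and τk.

```python
import numpy as np

kappa_rel = np.array([1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 1.9, 2.0])  # toy surrogate ratios
degraded  = np.array([0,   0,   0,   0,   1,   1,   1,   1  ])  # toy quality labels

def tpr_fpr(threshold):
    """True/false positive rates when flagging kappa_rel >= threshold."""
    pred = kappa_rel >= threshold
    tpr = (pred & (degraded == 1)).sum() / (degraded == 1).sum()
    fpr = (pred & (degraded == 0)).sum() / (degraded == 0).sum()
    return tpr, fpr

thresholds = np.sort(np.unique(kappa_rel))[::-1]     # strict to lenient
pts = np.array([tpr_fpr(t) for t in thresholds])
auc = np.trapz(pts[:, 0], pts[:, 1])                 # trapezoidal AUC, as in the abstract
cutoff = thresholds[int(np.argmax(pts[:, 0] - pts[:, 1]))]
print(f"AUC = {auc:.3f}, cutoff = {cutoff:.2f}")
```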

  5. A Novel Scoring Metrics for Quality Assurance of Ocean Color Observations

    Science.gov (United States)

    Wei, J.; Lee, Z.

    2016-02-01

    Interpretation of the ocean bio-optical properties from ocean color observations depends on the quality of the ocean color data, specifically the spectrum of remote sensing reflectance (Rrs). The in situ and remotely measured Rrs spectra are inevitably subject to errors induced by instrument calibration, sea-surface correction, atmospheric correction, and other environmental factors. Great efforts have been devoted to ocean color calibration and validation, yet there exist no objective, consensus criteria for assessing ocean color data quality. In this study, that gap is filled by developing a novel metric for such data quality assurance and quality control (QA/QC). This new QA metric is not intended to discard "suspicious" Rrs spectra from available datasets. Rather, it takes the Rrs spectral shapes and amplitudes into account as a whole and grades each Rrs spectrum. The scoring system is developed from a large ensemble of in situ hyperspectral remote sensing reflectance data measured in various aquatic environments and processed with robust procedures. It is then tested with the NASA bio-Optical Marine Algorithm Data set (NOMAD), with results indicating significant improvements in the estimation of bio-optical properties when Rrs spectra marked with higher quality assurance are used. The scoring system is further verified with simulated data and satellite ocean color data in various regions, and we envision higher quality ocean color products with the implementation of such a quality screening system.
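
    A minimal sketch of a shape-and-amplitude grading scheme in the spirit of this abstract: normalize a measured Rrs spectrum, match it to its nearest reference shape, and grade it by the fraction of bands falling inside that shape's envelope. The reference shapes and the 10% tolerance are placeholders, not the published scoring tables.

```python
import numpy as np

def score_rrs(rrs, ref_shapes, tol=0.1):
    """rrs: (n_bands,) reflectance spectrum; ref_shapes: (n_classes, n_bands)
    unit-normalized reference spectra.  Returns a grade in [0, 1]."""
    nrs = rrs / np.linalg.norm(rrs)                    # remove amplitude
    i = int(np.argmin(np.linalg.norm(ref_shapes - nrs, axis=1)))
    within = np.abs(nrs - ref_shapes[i]) <= tol * np.abs(ref_shapes[i])
    return float(within.mean())                        # fraction of bands in envelope
```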

  6. Simulation of devices mobility to estimate wireless channel quality metrics in 5G networks

    Science.gov (United States)

    Orlov, Yu.; Fedorov, S.; Samuylov, A.; Gaidamaka, Yu.; Molchanov, D.

    2017-07-01

    The problem of channel quality estimation for devices in a wireless 5G network is formulated. As the performance metric of interest we choose the signal-to-interference-plus-noise ratio (SINR), which depends essentially on the distance between the communicating devices. A model with a plurality of moving devices in a bounded three-dimensional space and a simulation algorithm to determine the distances between the devices for a given motion model are devised.
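
    A minimal sketch of such a simulation: devices drift inside a bounded 3D box and the SINR of one link is tracked over time. The motion model, path-loss exponent, and power levels are illustrative assumptions, not the authors' parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
N, STEPS, BOX = 20, 100, 100.0        # devices, time steps, cube edge (m)
P_TX, NOISE, ALPHA = 1.0, 1e-9, 3.5   # tx power (W), noise power (W), path-loss exponent

pos = rng.uniform(0, BOX, size=(N, 3))        # devices in a bounded 3D space
vel = rng.normal(0, 1.0, size=(N, 3))         # meters per step

def sinr(pos, rx=0, tx=1):
    """SINR at device rx for the link from tx, all other devices interfering."""
    d = np.linalg.norm(pos - pos[rx], axis=1)
    d[rx] = np.inf                            # receiver contributes no power
    p = P_TX * d ** -ALPHA                    # power-law path loss
    return p[tx] / (p.sum() - p[tx] + NOISE)

trace = []
for _ in range(STEPS):
    pos = np.clip(pos + vel, 0, BOX)          # keep devices inside the box
    trace.append(10 * np.log10(sinr(pos)))
print(f"mean SINR over the trajectory: {np.mean(trace):.1f} dB")
```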

  7. Performance of different colour quality metrics proposed to CIE TC 1-91

    OpenAIRE

    Bhusal, Pramod; Dangol, Rajendra

    2017-01-01

    The main aim of the article is to assess the performance of the different metrics proposed to CIE TC 1-91. Currently, six different indices have been proposed to CIE TC 1-91: Colour Quality Scale (CQS), Feeling of Contrast Index (FCI), Memory colour rendering index (MCRI), Preference of skin (PS), Relative gamut area index (RGAI) and the Illuminating Engineering Society Method for evaluating light source colour rendition (IES TM-30). The evaluation and analysis are based on previously conducted exp...

  8. Developing a composite weighted quality metric to reflect the total benefit conferred by a health plan.

    Science.gov (United States)

    Taksler, Glen B; Braithwaite, R Scott

    2015-03-01

    Objective: To improve upon individual health quality measures, which are associated with varying degrees of health benefit, and composite quality metrics, which weight individual measures identically. We developed a health-weighted composite quality measure reflecting the total health benefit conferred by a health plan annually, using preventive care as a test case. Using national disease prevalence, we simulated a hypothetical insurance panel of individuals aged 25 to 84 years. For each individual, we estimated the gain in life expectancy associated with 1 year of health system exposure to encourage adherence to major preventive care guidelines, controlling for patient characteristics (age, race, gender, comorbidity) and variation in individual adherence rates. This personalized gain in life expectancy was used as a proxy for the amount of health benefit conferred by a health plan annually to its members, and formed the weights in our health-weighted composite quality measure. We aggregated health benefits across the health insurance membership panel to analyze total health system performance. Our composite quality metric gave the highest weights to health plans that succeeded in implementing tobacco cessation and weight loss. One year of compliance with these goals was associated with 2 to 10 times as much health benefit as compliance with easier-to-follow preventive care services, such as mammography, aspirin, and antihypertensives. For example, for women aged 55 to 64 years, successful interventions to encourage weight loss were associated with 2.1 times the health benefit of blood pressure reduction and 3.9 times the health benefit of increasing adherence with screening mammography. A single health-weighted quality metric may inform measurement of total health system performance.
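
    A minimal sketch of the weighting idea: each preventive service carries a weight equal to its estimated life-expectancy gain, and a plan's composite score is the mean weighted benefit across members. The gains below are made-up placeholders, not the paper's estimates.

```python
# Hypothetical per-service life-expectancy gains (years); placeholders only.
life_exp_gain = {"tobacco_cessation": 0.20, "weight_loss": 0.15,
                 "mammography": 0.05, "antihypertensives": 0.04}

def composite_score(panel):
    """panel: list of dicts mapping service name -> 0/1 adherence achieved.
    Returns the mean life-expectancy-weighted benefit per member."""
    total = sum(life_exp_gain[svc] * met
                for member in panel
                for svc, met in member.items())
    return total / len(panel)

# Example: a two-member panel where only one member quit smoking.
print(composite_score([{"tobacco_cessation": 1, "mammography": 1},
                       {"tobacco_cessation": 0, "weight_loss": 0}]))
```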

  9. Studying the added value of visual attention in objective image quality metrics based on eye movement data

    NARCIS (Netherlands)

    Liu, H.; Heynderickx, I.E.J.

    2009-01-01

    Current research on image quality assessment tends to include visual attention in objective metrics to further enhance their performance. A variety of computational models of visual attention are implemented in different metrics, but their accuracy in representing human visual attention is not fully

  10. qcML: an exchange format for quality control metrics from mass spectrometry experiments.

    Science.gov (United States)

    Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W P; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A; Kelstrup, Christian D; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S; Olsen, Jesper V; Heck, Albert J R; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart

    2014-08-01

    Quality control is increasingly recognized as a crucial aspect of mass spectrometry-based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exchange format for reporting these performance metrics. We therefore developed the qcML format, an XML-based standard that follows the design principles of the related mzML, mzIdentML, mzQuantML, and TraML standards from the HUPO-PSI (Proteomics Standards Initiative). In addition to the XML format, we also provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. Here we describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.

  11. qcML: An Exchange Format for Quality Control Metrics from Mass Spectrometry Experiments*

    Science.gov (United States)

    Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W. P.; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A.; Kelstrup, Christian D.; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S.; Olsen, Jesper V.; Heck, Albert J. R.; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart

    2014-01-01

    Quality control is increasingly recognized as a crucial aspect of mass spectrometry-based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exchange format for reporting these performance metrics. We therefore developed the qcML format, an XML-based standard that follows the design principles of the related mzML, mzIdentML, mzQuantML, and TraML standards from the HUPO-PSI (Proteomics Standards Initiative). In addition to the XML format, we also provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. Here we describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml. PMID:24760958

  12. Parameter Search Algorithms for Microwave Radar-Based Breast Imaging: Focal Quality Metrics as Fitness Functions.

    Science.gov (United States)

    O'Loughlin, Declan; Oliveira, Bárbara L; Elahi, Muhammad Adnan; Glavin, Martin; Jones, Edward; Popović, Milica; O'Halloran, Martin

    2017-12-06

    Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate although this is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves without additional hardware. In this work, qualities of accurately reconstructed images are identified from point spread functions. As the qualities of accurately reconstructed microwave images are similar to the qualities of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated using experimental dielectrically heterogeneous phantoms on the three-dimensional volumetric image. Based on a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.
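
    A minimal sketch of a focal quality metric serving as the fitness function in a parameter search. Here reconstruct(permittivity) stands in for the radar image-formation step, which is not shown, and normalized gradient energy stands in for the focal metric; both are assumptions rather than the authors' choices.

```python
import numpy as np

def gradient_energy(img):
    """A simple focal quality metric: mean squared gradient magnitude."""
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.mean(gx ** 2 + gy ** 2))

def estimate_permittivity(reconstruct, lo=6.0, hi=14.0, steps=17):
    """Grid-search the assumed average permittivity; the candidate whose
    reconstruction is best focused wins."""
    candidates = np.linspace(lo, hi, steps)   # broad initial estimate
    scores = [gradient_energy(reconstruct(eps)) for eps in candidates]
    return float(candidates[int(np.argmax(scores))])
```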

  13. Image-guided radiotherapy quality control: Statistical process control using image similarity metrics.

    Science.gov (United States)

    Shiraishi, Satomi; Grams, Michael P; Fong de Los Santos, Luis E

    2018-05-01

    The purpose of this study was to demonstrate an objective quality control framework for the image review process. A total of 927 cone-beam computed tomography (CBCT) registrations were retrospectively analyzed for 33 bilateral head and neck cancer patients who received definitive radiotherapy. Two registration tracking volumes (RTVs) - cervical spine (C-spine) and mandible - were defined, within which a similarity metric was calculated and used as a registration quality tracking metric over the course of treatment. First, sensitivity to large misregistrations was analyzed for normalized cross-correlation (NCC) and mutual information (MI) in the context of statistical analysis. The distribution of metrics was obtained for displacements that varied according to a normal distribution with standard deviation of σ = 2 mm, and the detectability of displacements greater than 5 mm was investigated. Then, similarity metric control charts were created using a statistical process control (SPC) framework to objectively monitor the image registration and review process. Patient-specific control charts were created using NCC values from the first five fractions to set a patient-specific process capability limit. Population control charts were created using the average of the first five NCC values for all patients in the study. For each patient, the similarity metrics were calculated as a function of unidirectional translation, referred to as the effective displacement. Patient-specific action limits corresponding to 5 mm effective displacements were defined. Furthermore, effective displacements of the ten registrations with the lowest similarity metrics were compared with the three degree-of-freedom (3DoF) couch displacement required to align the anatomical landmarks. Normalized cross-correlation identified suboptimal registrations more effectively than MI within the framework of SPC. Deviations greater than 5 mm were detected at 2.8σ and 2.1σ from the mean for NCC and MI, respectively.
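
    A minimal sketch of the two building blocks this abstract combines: normalized cross-correlation over a tracking volume, and patient-specific control limits derived from the first five fractions. The 3-sigma limits are a generic SPC choice, not necessarily the study's process capability limit.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally shaped image volumes."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def control_limits(baseline_ncc, k=3.0):
    """Patient-specific limits from the first fractions' NCC values;
    registrations falling below the lower limit get flagged for review."""
    m = np.mean(baseline_ncc)
    s = np.std(baseline_ncc, ddof=1)
    return m - k * s, m + k * s
```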

  14. Using research metrics to evaluate the International Atomic Energy Agency guidelines on quality assurance for R&D

    Energy Technology Data Exchange (ETDEWEB)

    Bodnarczuk, M.

    1994-06-01

    The objective of the International Atomic Energy Agency (IAEA) Guidelines on Quality Assurance for R&D is to provide guidance for developing quality assurance (QA) programs for R&D work on items, services, and processes important to safety, and to support the siting, design, construction, commissioning, operation, and decommissioning of nuclear facilities. The standard approach to writing papers describing new quality guidelines documents is to present a descriptive overview of the contents of the document. I will depart from this approach. Instead, I will first discuss a conceptual framework of metrics for evaluating and improving basic and applied experimental science as well as the associated role that quality management should play in understanding and implementing these metrics. I will conclude by evaluating how well the IAEA document addresses the metrics from this conceptual framework and the broader principles of quality management.

  15. Testing Quality and Metrics for the LHC Magnet Powering System throughout Past and Future Commissioning

    CERN Document Server

    Anderson, D; Charifoulline, Z; Dragu, M; Fuchsberger, K; Garnier, JC; Gorzawski, AA; Koza, M; Krol, K; Rowan, S; Stamos, K; Zerlauth, M

    2014-01-01

    The LHC magnet powering system is composed of thousands of individual components to assure safe operation with stored energies as high as 10 GJ in the superconducting LHC magnets. Each of these components has to be thoroughly commissioned following interventions and machine shutdown periods to assure its protection function in case of powering failures. As well as dependable tracking of test executions, it is vital that the executed commissioning steps and applied analysis criteria adequately represent the operational state of each component. The Accelerator Testing (AccTesting) framework, in combination with a domain-specific analysis language, provides the means to quantify and improve the quality of analysis for future campaigns. Dedicated tools were developed to analyse in detail the reasons for failures and successes of commissioning steps in past campaigns and to compare the results with newly developed quality metrics. Observed shortcomings and discrepancies are used to propose addi...

  16. The role of metrics and measurements in a software intensive total quality management environment

    Science.gov (United States)

    Daniels, Charles B.

    1992-01-01

    Paramax Space Systems began its mission as a member of the Rockwell Space Operations Company (RSOC) team which was the successful bidder on a massive operations consolidation contract for the Mission Operations Directorate (MOD) at JSC. The contract awarded to the team was the Space Transportation System Operations Contract (STSOC). Our initial challenge was to accept responsibility for a very large, highly complex and fragmented collection of software from eleven different contractors and transform it into a coherent, operational baseline. Concurrently, we had to integrate a diverse group of people from eleven different companies into a single, cohesive team. Paramax executives recognized the absolute necessity to develop a business culture based on the concept of employee involvement to execute and improve the complex process of our new environment. Our executives clearly understood that management needed to set the example and lead the way to quality improvement. The total quality management policy and the metrics used in this endeavor are presented.

  17. Hospital readiness for health information exchange: development of metrics associated with successful collaboration for quality improvement.

    Science.gov (United States)

    Korst, Lisa M; Aydin, Carolyn E; Signer, Jordana M K; Fink, Arlene

    2011-08-01

    The development of readiness metrics for organizational participation in health information exchange is critical for monitoring progress toward, and achievement of, successful inter-organizational collaboration. In preparation for the development of a tool to measure readiness for data-sharing, we tested whether organizational capacities known to be related to readiness were associated with successful participation in an American data-sharing collaborative for quality improvement. Cross-sectional design, using an on-line survey of hospitals in a large, mature data-sharing collaborative organized for benchmarking and improvement in nursing care quality. Factor analysis was used to identify salient constructs, and identified factors were analyzed with respect to "successful" participation. "Success" was defined as the incorporation of comparative performance data into the hospital dashboard. The most important factor in predicting success included survey items measuring the strength of organizational leadership in fostering a culture of quality improvement (QI Leadership): (1) presence of a supportive hospital executive; (2) the extent to which a hospital values data; (3) the presence of leaders' vision for how the collaborative advances the hospital's strategic goals; (4) hospital use of the collaborative data to track quality outcomes; and (5) staff recognition of a strong mandate for collaborative participation (α=0.84, correlation with Success 0.68 [P<0.0001]). The data emphasize the importance of hospital QI Leadership in collaboratives that aim to share data for QI or safety purposes. Such metrics should prove useful in the planning and development of this complex form of inter-organizational collaboration. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  18. The Relationship between the Level and Modality of HRM Metrics, Quality of HRM Practice and Organizational Performance

    OpenAIRE

    Nina Pološki Vokić

    2011-01-01

    The paper explores the relationship between the way organizations measure HRM and the overall quality of HRM activities, as well as the relationship between the HRM metrics used and the financial performance of an organization. In the theoretical part of the paper, modalities of HRM metrics are grouped into five categories (evaluating HRM using accounting principles, evaluating HRM using management techniques, evaluating individual HRM activities, aggregate evaluation of HRM, and evaluating HRM de...

  19. The software product assurance metrics study: JPL's software systems quality and productivity

    Science.gov (United States)

    Bush, Marilyn W.

    1989-01-01

    The findings are reported of the Jet Propulsion Laboratory (JPL)/Software Product Assurance (SPA) Metrics Study, conducted as part of a larger JPL effort to improve software quality and productivity. Until recently, no comprehensive data had been assembled on how JPL manages and develops software-intensive systems. The first objective was to collect data on software development from as many projects and for as many years as possible. Results from five projects are discussed. These results reflect 15 years of JPL software development, representing over 100 data points (systems and subsystems), over a third of a billion dollars, over four million lines of code and 28,000 person months. Analysis of this data provides a benchmark for gauging the effectiveness of past, present and future software development work. In addition, the study is meant to encourage projects to record existing metrics data and to gather future data. The SPA long term goal is to integrate the collection of historical data and ongoing project data with future project estimations.

  20. RNA-SeQC: RNA-seq metrics for quality control and process optimization.

    Science.gov (United States)

    DeLuca, David S; Levin, Joshua Z; Sivachenko, Andrey; Fennell, Timothy; Nazaire, Marc-Danie; Williams, Chris; Reich, Michael; Winckler, Wendy; Getz, Gad

    2012-06-01

    RNA-seq, the application of next-generation sequencing to RNA, provides transcriptome-wide characterization of cellular activity. Assessment of sequencing performance and library quality is critical to the interpretation of RNA-seq data, yet few tools exist to address this issue. We introduce RNA-SeQC, a program which provides key measures of data quality. These metrics include yield, alignment and duplication rates; GC bias, rRNA content, regions of alignment (exon, intron and intragenic), continuity of coverage, 3'/5' bias and count of detectable transcripts, among others. The software provides multi-sample evaluation of library construction protocols, input materials and other experimental parameters. The modularity of the software enables pipeline integration and the routine monitoring of key measures of data quality such as the number of alignable reads, duplication rates and rRNA contamination. RNA-SeQC allows investigators to make informed decisions about sample inclusion in downstream analysis. In summary, RNA-SeQC provides quality control measures critical to experiment design, process optimization and downstream computational analysis. See www.genepattern.org to run online, or www.broadinstitute.org/rna-seqc/ for a command line tool.

  1. On use of image quality metrics for perceptual blur modeling: image/video compression case

    Science.gov (United States)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate perceived MTF, has supported the finding that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  2. Workshop summary: 'Integrating air quality and climate mitigation - is there a need for new metrics to support decision making?'

    Science.gov (United States)

    von Schneidemesser, E.; Schmale, J.; Van Aardenne, J.

    2013-12-01

    Air pollution and climate change are often treated at the national and international level as separate problems under different regulatory or thematic frameworks and different policy departments. With air pollution and climate change being strongly linked with regard to their causes, effects, and mitigation options, the integration of policies that steer air pollutant and greenhouse gas emission reductions might result in cost-efficient, more effective, and thus more sustainable tackling of the two problems. To support informed decision making and to work towards an integrated air quality and climate change mitigation policy requires the identification, quantification, and communication of present-day and potential future co-benefits and trade-offs. The identification of co-benefits and trade-offs requires the application of appropriate metrics that are well rooted in science, easy to understand, and reflect the needs of policy, industry, and the public for informed decision making. For the purpose of this workshop, metrics were loosely defined as a quantified measure of effect or impact used to inform decision-making and to evaluate mitigation measures. The workshop, held on October 9 and 10 and co-organized by the European Environment Agency and the Institute for Advanced Sustainability Studies, brought together representatives from science, policy, NGOs, and industry to discuss whether currently available metrics are 'fit for purpose' or whether there is a need to develop alternative metrics or reassess the way current metrics are used and communicated. Based on the workshop outcome the presentation will (a) summarize the informational needs and current application of metrics by the end-users, who, depending on their field and area of operation, might require health, policy, and/or economically relevant parameters at different scales, (b) provide an overview of the state of the science of currently used and newly developed metrics, and the scientific validity of these

  3. Quality electric motor repair: A guidebook for electric utilities

    Energy Technology Data Exchange (ETDEWEB)

    Schueler, V.; Douglass, J.

    1995-08-01

    This guidebook provides utilities with a resource for better understanding and developing their roles in relation to electric motor repair shops and the industrial and commercial utility customers that use them. The guidebook includes information and tools that utilities can use to raise the quality of electric motor repair practices in their service territories.

  4. Using animation quality metric to improve efficiency of global illumination computation for dynamic environments

    Science.gov (United States)

    Myszkowski, Karol; Tawara, Takehiro; Seidel, Hans-Peter

    2002-06-01

    In this paper, we consider applications of perception-based video quality metrics to improve the performance of global lighting computations for dynamic environments. For this purpose we extend the Visible Difference Predictor (VDP) developed by Daly to handle computer animations. We incorporate into the VDP the spatio-velocity CSF model developed by Kelly. The CSF model requires data on the velocity of moving patterns across the image plane. We use the 3D image warping technique to compensate for the camera motion, and we conservatively assume that the motion of animated objects (usually strong attractors of the visual attention) is fully compensated by the smooth pursuit eye motion. Our global illumination solution is based on stochastic photon tracing and takes advantage of temporal coherence of lighting distribution, by processing photons both in the spatial and temporal domains. The VDP is used to keep noise inherent in stochastic methods below the sensitivity level of the human observer. As a result a perceptually-consistent quality across all animation frames is obtained.

  5. A new normalizing algorithm for BAC CGH arrays with quality control metrics.

    Science.gov (United States)

    Miecznikowski, Jeffrey C; Gaile, Daniel P; Liu, Song; Shepherd, Lori; Nowak, Norma

    2011-01-01

    The main focus in pin-tip (or print-tip) microarray analysis is determining which probes, genes, or oligonucleotides are differentially expressed. Specifically in array comparative genomic hybridization (aCGH) experiments, researchers search for chromosomal imbalances in the genome. To model this data, scientists apply statistical methods to the structure of the experiment and assume that the data consist of the signal plus random noise. In this paper we propose "SmoothArray", a new method to preprocess comparative genomic hybridization (CGH) bacterial artificial chromosome (BAC) arrays and we show the effects on a cancer dataset. As part of our R software package "aCGHplus," this freely available algorithm removes the variation due to the intensity effects, pin/print-tip, the spatial location on the microarray chip, and the relative location from the well plate. Removal of this variation improves the downstream analysis and subsequent inferences made on the data. Further, we present measures to evaluate the quality of the dataset according to the arrayer pins, 384-well plates, plate rows, and plate columns. We compare our method against competing methods using several metrics to measure the biological signal. With this novel normalization algorithm and quality control measures, the user can improve their inferences on datasets and pinpoint problems that may arise in their BAC aCGH technology.

  6. On the performance of metrics to predict quality in point cloud representations

    Science.gov (United States)

    Alexiou, Evangelos; Ebrahimi, Touradj

    2017-09-01

    Point clouds are a promising alternative for immersive representation of visual contents. Recently, an increased interest has been observed in the acquisition, processing and rendering of this modality. Although subjective and objective evaluations are critical in order to assess the visual quality of media content, they still remain open problems for point cloud representation. In this paper we focus our efforts on subjective quality assessment of point cloud geometry, subject to typical types of impairments such as noise corruption and compression-like distortions. In particular, we propose a subjective methodology that is closer to real-life scenarios of point cloud visualization. The performance of the state-of-the-art objective metrics is assessed by considering the subjective scores as the ground truth. Moreover, we investigate the impact of adopting different test methodologies by comparing them. Advantages and drawbacks of every approach are reported, based on statistical analysis. The results and conclusions of this work provide useful insights that could be considered in future experimentation.

  7. A New Normalizing Algorithm for BAC CGH Arrays with Quality Control Metrics

    Directory of Open Access Journals (Sweden)

    Jeffrey C. Miecznikowski

    2011-01-01

    Full Text Available The main focus in pin-tip (or print-tip) microarray analysis is determining which probes, genes, or oligonucleotides are differentially expressed. Specifically in array comparative genomic hybridization (aCGH) experiments, researchers search for chromosomal imbalances in the genome. To model this data, scientists apply statistical methods to the structure of the experiment and assume that the data consist of the signal plus random noise. In this paper we propose “SmoothArray”, a new method to preprocess comparative genomic hybridization (CGH) bacterial artificial chromosome (BAC) arrays and we show the effects on a cancer dataset. As part of our R software package “aCGHplus,” this freely available algorithm removes the variation due to the intensity effects, pin/print-tip, the spatial location on the microarray chip, and the relative location from the well plate. Removal of this variation improves the downstream analysis and subsequent inferences made on the data. Further, we present measures to evaluate the quality of the dataset according to the arrayer pins, 384-well plates, plate rows, and plate columns. We compare our method against competing methods using several metrics to measure the biological signal. With this novel normalization algorithm and quality control measures, the user can improve their inferences on datasets and pinpoint problems that may arise in their BAC aCGH technology.

  8. Utility service quality - telecommunications, electricity, water

    Energy Technology Data Exchange (ETDEWEB)

    Holt, L. [Florida Univ., Gainesville, FL (United States). Public Utility Research Center

    2005-09-01

    This survey of quality-of-service issues raised by regulation identifies 12 steps for promoting efficient sector performance. First, regulators must identify objectives and prioritize them. Inter-agency coordination is often required to establish targets. Regulators must also determine a process for selecting measures and an appropriate method for evaluating them. Finally, performance incentives must be established and outcomes periodically reviewed. Telecommunications, electricity, and water all have multiple dimensions of quality that warrant careful attention. (Author)

  9. Evaluation of the performance of a micromethod for measuring urinary iodine by using six sigma quality metrics.

    Science.gov (United States)

    Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud

    2013-09-01

    The urinary iodine micromethod (UIMM) is a modification of the conventional method and its performance needs evaluation. UIMM performance was evaluated using the method validation and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM obeyed various method acceptability test criteria with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed that the Sigma metrics were at 2.75, 1.80, and 3.80 for 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc) within the TEa together with acceptable Sigma metrics at all concentrations. Only one laboratory had TEcalc

  10. Engineering performance metrics

    Science.gov (United States)

    Delozier, R.; Snyder, N.

    1993-03-01

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved normal systems design phases including conceptual design, detailed design, implementation, and integration. The lessons learned from this effort will be explored in this paper. These lessons learned may provide a starting point for other large engineering organizations seeking to institute a performance measurement system. To facilitate this effort, a team was chartered to assist in the development of the metrics system. This team, consisting of customers and Engineering staff members, was utilized to ensure that the needs and views of the customers were considered in the development of performance measurements. The development of a system of metrics is no different than the development of any type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  11. Metric qualities of the cognitive behavioral assessment for outcome evaluation to estimate psychological treatment effects.

    Science.gov (United States)

    Bertolotti, Giorgio; Michielin, Paolo; Vidotto, Giulio; Sanavio, Ezio; Bottesi, Gioia; Bettinardi, Ornella; Zotti, Anna Maria

    2015-01-01

    Cognitive behavioral assessment for outcome evaluation was developed to evaluate psychological treatment interventions, especially for counseling and psychotherapy. It is made up of 80 items and five scales: anxiety, well-being, perception of positive change, depression, and psychological distress. The aim of the study was to present the metric qualities and to show validity and reliability of the five constructs of the questionnaire both in nonclinical and clinical subjects. Four steps were completed to assess reliability and factor structure: criterion-related and concurrent validity, responsiveness, and convergent-divergent validity. A nonclinical group of 269 subjects was enrolled, as was a clinical group comprising 168 adults undergoing psychotherapy and psychological counseling provided by the Italian public health service. Cronbach's alphas were between 0.80 and 0.91 for the clinical sample and between 0.74 and 0.91 in the nonclinical one. We observed an excellent structural validity for the five interrelated dimensions. The clinical group showed higher scores in the anxiety, depression, and psychological distress scales, as well as lower scores in well-being and perception of positive change scales than those observed in the nonclinical group. Responsiveness was large for the anxiety, well-being, and depression scales; the psychological distress and perception of positive change scales showed a moderate effect. The questionnaire showed excellent psychometric properties, thus demonstrating that the questionnaire is a good evaluative instrument, with which to assess pre- and post-treatment outcomes.

  12. A No Reference Image Quality Assessment Metric Based on Visual Perception

    Directory of Open Access Journals (Sweden)

    Yan Fu

    2016-12-01

    Full Text Available Nowadays, how to evaluate image quality reasonably is a basic and challenging problem. Existing no-reference evaluation methods cannot accurately reflect human visual perception of image quality. In this paper, we propose an efficient general-purpose no-reference image quality assessment (NRIQA) method based on visual perception, which effectively integrates human visual characteristics into NRIQA. First, a novel algorithm for salient region extraction is presented: two feature maps of texture and edges from the original image are added to the Itti model. Because the normalized luminance coefficients of natural images obey a generalized Gaussian probability distribution, we utilize this characteristic to extract statistical features in the regions of interest (ROI) and regions of non-interest, respectively. The extracted features are then fused and used as input to establish a support vector regression (SVR) model. Finally, the IQA model obtained by training is used to predict the quality of an image. Experimental results show that this method has good predictive ability and that its evaluation performance is better than existing classical algorithms. Moreover, the predicted results are more consistent with human subjective perception, accurately reflecting human visual perception of image quality.
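
    A minimal sketch of the pipeline the abstract outlines: MSCN-style normalized luminance statistics pooled separately over salient and non-salient regions, then fed to a support vector regressor. The saliency map and the subjective training scores are assumed inputs, and the four-number feature vector is a simplification.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVR

def mscn(img, sigma=1.2, eps=1e-3):
    """Mean-subtracted contrast-normalized (MSCN) luminance coefficients."""
    img = img.astype(np.float64)
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img ** 2, sigma) - mu ** 2
    return (img - mu) / (np.sqrt(np.maximum(var, 0)) + eps)

def features(img, saliency):
    """Pool MSCN statistics separately inside and outside the salient region."""
    c = mscn(img)
    roi, rest = c[saliency > 0.5], c[saliency <= 0.5]
    return np.array([roi.mean(), roi.std(), rest.mean(), rest.std()])

# Training, given lists of images, saliency maps, and subjective scores:
# X = np.stack([features(im, sal) for im, sal in zip(images, saliencies)])
# model = SVR(kernel="rbf").fit(X, subjective_scores)
```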

  13. Constructing a no-reference H.264/AVC bitstream-based video quality metric using genetic programming-based symbolic regression

    OpenAIRE

    Staelens, Nicolas; Deschrijver, Dirk; Vladislavleva, E; Vermeulen, Brecht; Dhaene, Tom; Demeester, Piet

    2013-01-01

    In order to ensure optimal quality of experience toward end users during video streaming, automatic video quality assessment becomes an important field-of-interest to video service providers. Objective video quality metrics try to estimate perceived quality with high accuracy and in an automated manner. In traditional approaches, these metrics model the complex properties of the human visual system. More recently, however, it has been shown that machine learning approaches can also yield comp...

  14. Sigma metrics as a tool for evaluating the performance of internal quality control in a clinical chemistry laboratory.

    Science.gov (United States)

    Kumar, B Vinodh; Mohan, Thuthi

    2018-01-01

    Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a Sigma scale by calculating the sigma metrics for individual parameters and to follow the Westgard guidelines for appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. This is a retrospective study, and data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study are IQC - coefficient of variation percentage and External Quality Assurance Scheme (EQAS) - Bias% for 16 biochemical parameters. For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level, and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level; for the level 2 IQCs, the same four analytes of level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes at <3 sigma level, the quality goal index (QGI) was >1.2, indicating inaccuracy. This study shows that sigma metrics is a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes.
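
    A minimal sketch of the sigma-metric calculation these laboratory studies rely on, sigma = (TEa - |bias|) / CV with all quantities in percent; the example numbers are illustrative, not the study's data.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (TEa - |bias|) / CV, with all three quantities in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# e.g. a 10% total allowable error, 1.5% bias, and 2% CV give 4.25 sigma:
print(sigma_metric(tea_pct=10.0, bias_pct=1.5, cv_pct=2.0))
```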

  15. Utilizing Machine Learning and Automated Performance Metrics to Evaluate Robot-Assisted Radical Prostatectomy Performance and Predict Outcomes.

    Science.gov (United States)

    Hung, Andrew J; Chen, Jian; Che, Zhengping; Nilanon, Tanachat; Jarc, Anthony; Titus, Micha; Oh, Paul J; Gill, Inderbir S; Liu, Yan

    2018-05-01

    Surgical performance is critical for clinical outcomes. We present a novel machine learning (ML) method of processing automated performance metrics (APMs) to evaluate surgical performance and predict clinical outcomes after robot-assisted radical prostatectomy (RARP). We trained three ML algorithms utilizing APMs directly from robot system data (training material) and hospital length of stay (LOS; training label) (≤2 days and >2 days) from 78 RARP cases, and selected the algorithm with the best performance. The selected algorithm categorized the cases as "Predicted as expected LOS (pExp-LOS)" and "Predicted as extended LOS (pExt-LOS)." We compared postoperative outcomes of the two groups (Kruskal-Wallis/Fisher's exact tests). The algorithm then predicted individual clinical outcomes, which we compared with actual outcomes (Spearman's correlation/Fisher's exact tests). Finally, we identified the five most relevant APMs adopted by the algorithm during prediction. The "Random Forest-50" (RF-50) algorithm had the best performance, reaching 87.2% accuracy in predicting LOS (73 cases as "pExp-LOS" and 5 cases as "pExt-LOS"). The "pExp-LOS" cases outperformed the "pExt-LOS" cases in surgery time (3.7 hours vs 4.6 hours, p = 0.007), LOS (2 days vs 4 days, p = 0.02), and Foley duration (9 days vs 14 days, p = 0.02). Patient outcomes predicted by the algorithm had a significant association with the "ground truth" in surgery time. The five most relevant APMs adopted by the algorithm in prediction were largely related to camera manipulation. To our knowledge, ours is the first study to show that APMs and ML algorithms may help assess surgical RARP performance and predict clinical outcomes. With further accrual of clinical data (oncologic and functional data), this process will become increasingly relevant and valuable in surgical assessment and training.

  16. Utilizing Educational Corporate Culture To Create a Quality School.

    Science.gov (United States)

    Osborne, Bill

    Strategies for utilizing educational corporate culture to create a quality school are presented in this paper, which argues that the understanding of the shared belief system of organizational members is crucial to the process. Creating a quality school entails moving from a "teach the process" oriented model to one that internalizes the…

  17. SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric, and even a standard to compare candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics' quality, based on a novel perspective of the metric as a surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates' ability in prognosticating the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates' quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with the surrogate metric exemplified by several widely used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates' behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with eCNR of 0.12 resulted in statistically better segmentation (mean DSC of about 0.85 and first and third quartiles of 0.83 and 0.89) than MSD with eCNR of 0.10 (mean DSC of 0.84 and first and third quartiles of 0.81 and 0.89). Conclusion: The designed eCNR is capable of characterizing surrogate metrics' quality in prognosticating the oracle relevance value. It has been demonstrated to be

  18. Application of Sigma Metrics Analysis for the Assessment and Modification of Quality Control Program in the Clinical Chemistry Laboratory of a Tertiary Care Hospital.

    Science.gov (United States)

    Iqbal, Sahar; Mustansar, Tazeen

    2017-03-01

    Sigma is a metric that quantifies the performance of a process as a rate of Defects-Per-Million opportunities. In clinical laboratories, sigma metric analysis is used to assess the performance of the laboratory process system. The sigma metric is also used as a quality management strategy for a laboratory process to improve quality by addressing errors after identification. The aim of this study is to evaluate the errors in quality control of the analytical phase of the laboratory system by sigma metric. For this purpose sigma metric analysis was done for analytes using the internal and external quality control as quality indicators. Results of sigma metric analysis were used to identify the gaps and the need for modification in the strategy of the laboratory quality control procedure. The sigma metric was calculated for the quality control program of ten clinical chemistry analytes, including glucose, chloride, cholesterol, triglyceride, HDL, albumin, direct bilirubin, total bilirubin, protein and creatinine, at two control levels. To calculate the sigma metric, imprecision and bias were calculated from internal and external quality control data, respectively. The minimum acceptable performance was considered as 3 sigma. Westgard sigma rules were applied to customize the quality control procedure. The sigma level was found acceptable (≥3) for glucose (L2), cholesterol, triglyceride, HDL, direct bilirubin and creatinine at both levels of control. For the rest of the analytes the sigma metric fell below the acceptable level at one or both control levels; the highest sigma metric was achieved for HDL (8.8 and 8.0 at L2 and L3, respectively). We conclude that analytes with a low sigma value require a modified, more stringent quality control procedure. In this study, application of sigma rules provided us the practical solution for improved and focused design of the QC procedure.

  19. Problems in Systematic Application of Software Metrics and Possible Solution

    OpenAIRE

    Rakic, Gordana; Budimac, Zoran

    2013-01-01

    Systematic application of software metric techniques can lead to significant improvements in the quality of a final software product. However, there is still an evident lack of wider utilization of software metrics techniques and tools, for many reasons. In this paper we investigate some limitations of contemporary software metrics tools and then propose the construction of a new tool that would solve some of these problems. We describe the promising prototype, its internal structure, and then f...

  20. Leveraging multi-channel x-ray detector technology to improve quality metrics for industrial and security applications

    Science.gov (United States)

    Jimenez, Edward S.; Thompson, Kyle R.; Stohn, Adriana; Goodner, Ryan N.

    2017-09-01

    Sandia National Laboratories has recently developed the capability to acquire multi-channel radiographs for multiple research and development applications in industry and security. This capability allows x-ray radiographs or sinogram data to be acquired at up to 300 keV with up to 128 channels per pixel. This work will investigate whether multiple quality metrics for computed tomography can actually benefit from binned projection data compared to traditionally acquired grayscale sinogram data. Features and metrics to be evaluated include the ability to distinguish between two different materials with similar absorption properties, artifact reduction, and signal-to-noise for both raw data and reconstructed volumetric data. The impact of this technology on non-destructive evaluation, national security, and industry is wide-ranging and has the potential to improve upon many inspection methods such as dual-energy methods, material identification, object segmentation, and computer vision on radiographs.

  1. Indoor air quality in public utility environments-a review.

    Science.gov (United States)

    Śmiełowska, Monika; Marć, Mariusz; Zabiegała, Bożena

    2017-04-01

    Indoor air quality has been the object of interest for scientists and specialists in fields such as chemistry, medicine, and ventilation system design. This stems from the considerable number of factors that may negatively influence broadly understood indoor air quality. Poor indoor air quality in various types of public utility buildings may significantly increase the incidence of various lifestyle diseases. This paper presents information about the broad spectrum of chemical compounds that have been identified and determined in the indoor environment of various types of public utility rooms such as churches, museums, libraries, temples, and hospitals. An analysis of literature data allowed for identification of the most important transport paths of chemical compounds that significantly influence the quality of the indoor environment, and thus the comfort and health of the people staying in it.

  2. UTILIZATION OF QUALITY TOOLS: DOES SECTOR AND SIZE MATTER?

    Directory of Open Access Journals (Sweden)

    Luis Fonseca

    2015-12-01

    Full Text Available This research focuses on the influence of company sector and size on the level of utilization of Basic and Advanced Quality Tools. The paper starts with a literature review and then presents the methodology used for the survey. Based on the responses from 202 managers of Portuguese ISO 9001:2008 Quality Management System certified organizations, statistical tests were performed. Results show, at a 95% confidence level, that industry and services have a similar proportion of use of Basic and Advanced Quality Tools. Concerning size, bigger companies show a higher tendency to use Advanced Quality Tools than smaller ones. For Basic Quality Tools, there was no statistically significant difference at a 95% confidence level across company sizes. The three Basic Quality Tools with the highest utilization were Check Sheets, Flow Charts and Histograms (for services) or Control Charts (for industry); however, 22% of the surveyed organizations reported not using Basic Quality Tools, which highlights a major improvement opportunity for these companies. Additional studies addressing motivations, benefits and barriers to Quality Tools application should be undertaken for further validation and understanding of these results.

  3. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery

    International Nuclear Information System (INIS)

    Shiraishi, Satomi; Moore, Kevin L.; Tan, Jun; Olsen, Lindsey A.

    2015-01-01

    Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V10Gy (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through the initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QMclin − QMpred, and a coefficient of determination, R2. For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are stratified based on

  4. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Shiraishi, Satomi; Moore, Kevin L., E-mail: kevinmoore@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California 92093 (United States); Tan, Jun [Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas 75490 (United States); Olsen, Lindsey A. [Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri 63110 (United States)

    2015-02-15

    Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V{sub 10Gy} (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QM{sub clin} − QM{sub pred}, and a coefficient of determination, R{sup 2}. For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are

  5. Cost and quality of fuels for electric utility plants, 1994

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-14

    This document presents an annual summary of statistics at the national, Census division, State, electric utility, and plant levels regarding the quantity, quality, and cost of fossil fuels used to produce electricity. The purpose of this publication is to provide energy decision-makers with accurate, timely information that may be used in forming various perspectives on issues regarding electric power.

  6. A Metric and Workflow for Quality Control in the Analysis of Heterogeneity in Phenotypic Profiles and Screens

    Science.gov (United States)

    Gough, Albert; Shun, Tongying; Taylor, D. Lansing; Schurdak, Mark

    2016-01-01

    Heterogeneity is well recognized as a common property of cellular systems that impacts biomedical research and the development of therapeutics and diagnostics. Several studies have shown that analysis of heterogeneity gives insight into mechanisms of action of perturbagens, can be used to predict optimal combination therapies, and can quantify heterogeneity in tumors, where heterogeneity is believed to be associated with adaptation and resistance. Cytometry methods including high content screening (HCS), high throughput microscopy, flow cytometry, mass spectrometry imaging, and digital pathology capture cell-level data for populations of cells. However, it is often assumed that the population response is normally distributed and therefore that the average adequately describes the results. A deeper understanding of the results of the measurements and more effective comparison of perturbagen effects requires analysis that takes into account the distribution of the measurements, i.e. the heterogeneity. However, the reproducibility of heterogeneous data collected on different days, and in different plates/slides, has not previously been evaluated. Here we show that conventional assay quality metrics alone are not adequate for quality control of the heterogeneity in the data. To address this need, we demonstrate the use of the Kolmogorov-Smirnov statistic as a metric for monitoring the reproducibility of heterogeneity in an SAR screen and describe a workflow for quality control in heterogeneity analysis. One major challenge in high throughput biology is the evaluation and interpretation of heterogeneity in thousands of samples, such as compounds in a cell-based screen. In this study we also demonstrate that three previously reported heterogeneity indices capture the shapes of the distributions and provide a means to filter and browse big data sets of cellular distributions in order to compare and identify distributions of interest. These metrics and methods are presented as a
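
    A minimal sketch of the Kolmogorov-Smirnov check described above, assuming cell-level readouts from two runs of the same sample; the data are simulated so the replicate becomes bimodal while its mean barely moves, which is exactly the case a mean-based QC metric would miss.

```python
# Two-sample KS statistic as a reproducibility metric for heterogeneity.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
day1 = rng.normal(1.5, 0.3, 5000)                    # unimodal reference run
day2 = np.concatenate([rng.normal(1.0, 0.3, 2500),   # bimodal replicate:
                       rng.normal(2.0, 0.3, 2500)])  # heterogeneity appeared

res = ks_2samp(day1, day2)
print(f"means: {day1.mean():.2f} vs {day2.mean():.2f}; KS D = {res.statistic:.3f}")
# A large D flags the distributional shift even though the means agree.
```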

  7. Custom power - the utility solution to distribution power quality

    Energy Technology Data Exchange (ETDEWEB)

    Woodley, N H [Westinghouse Electric Corp., Pittsburgh, PA (United States)

    1997-04-01

    The design of custom power products for electric power distribution systems was discussed. Problems with power quality that result in loss of production to critical processes are costly and create a problem for the customer as well as the electric utility. Westinghouse has developed power quality improvement equipment for customers and utilities, using new technologies based on power electronics concepts. The Distribution Static Compensator (DSTATCOM) is a fast-response, solid-state power controller that provides flexible voltage control for improving power quality at the point of connection to the utility's 4.16 to 69 kV distribution feeder. STATCOM is a larger version of the DSTATCOM that can be used to solve voltage flicker problems caused by electric arc furnaces. Westinghouse has also developed a Dynamic Voltage Restorer (DVR), which protects a critical customer plant load from power system voltage disturbances. Solid-State Breakers (SSB) have also been developed, which offer a solution to many of the distribution system problems that result in voltage sags, swells, and power outages. 6 refs., 8 figs.

  8. Quality measurement affecting surgical practice: Utility versus utopia.

    Science.gov (United States)

    Henry, Leonard R; von Holzen, Urs W; Minarich, Michael J; Hardy, Ashley N; Beachy, Wilbur A; Franger, M Susan; Schwarz, Roderich E

    2018-03-01

    The Triple Aim of improving healthcare quality, cost, and patient experience has resulted in massive healthcare "quality" measurement. For many surgeons the origins, intent, and strengths of this measurement barrage seem nebulous, though its shortcomings are noticeable. This article reviews the major organizations and programs (namely the Centers for Medicare and Medicaid Services) driving the somewhat burdensome healthcare quality climate. The success of this top-down approach is mixed, and far from convincing. We contend that the current programs disproportionately reflect the definitions of quality from (and the interests of) the national payer perspective, rather than a more balanced representation of all stakeholders' interests, most importantly, patients' beneficence. The result is an environment more like performance management than one of valid quality assessment. Suggestions for a more meaningful construction of surgical quality measurement are offered, as well as a strategy to describe surgical quality from all of the stakeholders' perspectives. Our hope is to entice surgeons to engage in institution-level quality improvement initiatives that promise utility and are less utopian than what is currently present. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Development of quality metrics for ambulatory pediatric cardiology: Transposition of the great arteries after arterial switch operation.

    Science.gov (United States)

    Baker-Smith, Carissa M; Carlson, Karina; Ettedgui, Jose; Tsuda, Takeshi; Jayakumar, K Anitha; Park, Matthew; Tede, Nikola; Uzark, Karen; Fleishman, Craig; Connuck, David; Likes, Maggie; Penny, Daniel J

    2018-01-01

    To develop quality metrics (QMs) for the ambulatory care of patients with transposition of the great arteries following arterial switch operation (TGA/ASO). Under the auspices of the American College of Cardiology Adult Congenital and Pediatric Cardiology (ACPC) Steering Committee, the TGA/ASO team generated candidate QMs related to TGA/ASO ambulatory care. Candidate QMs were submitted to the ACPC Steering Committee and were reviewed for validity and feasibility using individual expert panel member scoring according to the RAND-UCLA methodology. QMs were then made available for review by the entire ACC ACPC during an "open comment period." Final approval of each QM was provided by a vote of the ACC ACPC Council. Patients with TGA who had undergone an ASO were included. Patients with complex transposition were excluded. Twelve candidate QMs were generated. Seven metrics passed the RAND-UCLA process. Four passed the "open comment period" and were ultimately approved by the Council. These included: (1) at least 1 echocardiogram performed during the first year of life reporting on the function, aortic dimension, degree of neoaortic valve insufficiency, the patency of the systemic and pulmonary outflows, and the patency of the branch pulmonary arteries and coronary arteries; (2) neurodevelopmental (ND) assessment after ASO; (3) lipid profile by age 11 years; and (4) documentation of a transition of care plan to an adult congenital heart disease (CHD) provider by 18 years of age. Application of the RAND-UCLA methodology and linkage of this methodology to the ACPC approval process led to successful generation of 4 QMs relevant to the care of TGA/ASO pediatric patients in the ambulatory setting. These metrics have now been incorporated into the ACPC Quality Network, providing guidance for the care of TGA/ASO patients across 30 CHD centers. © 2017 Wiley Periodicals, Inc.

  10. Utility of whole-lesion ADC histogram metrics for assessing the malignant potential of pancreatic intraductal papillary mucinous neoplasms (IPMNs).

    Science.gov (United States)

    Hoffman, David H; Ream, Justin M; Hajdu, Christina H; Rosenkrantz, Andrew B

    2017-04-01

    To evaluate whole-lesion ADC histogram metrics for assessing the malignant potential of pancreatic intraductal papillary mucinous neoplasms (IPMNs), including in comparison with conventional MRI features. Eighteen branch-duct IPMNs underwent MRI with DWI prior to resection (n = 16) or FNA (n = 2). A blinded radiologist placed 3D volumes-of-interest on the entire IPMN on the ADC map, from which whole-lesion histogram metrics were generated. The reader also assessed IPMN size, mural nodularity, and adjacent main-duct dilation. Benign (low-to-intermediate grade dysplasia; n = 10) and malignant (high-grade dysplasia or invasive adenocarcinoma; n = 8) IPMNs were compared. Whole-lesion ADC histogram metrics demonstrating significant differences between benign and malignant IPMNs were: entropy (5.1 ± 0.2 vs. 5.4 ± 0.2; p = 0.01, AUC = 86%); mean of the bottom 10th percentile (2.2 ± 0.4 vs. 1.6 ± 0.7; p = 0.03; AUC = 81%); and mean of the 10-25th percentile (2.8 ± 0.4 vs. 2.3 ± 0.6; p = 0.04; AUC = 79%). The overall mean ADC, skewness, and kurtosis were not significantly different between groups (p ≥ 0.06; AUC = 50-78%). For entropy (highest performing histogram metric), an optimal threshold of >5.3 achieved a sensitivity of 100%, a specificity of 70%, and an accuracy of 83% for predicting malignancy. No significant difference (p = 0.18-0.64) was observed between benign and malignant IPMNs for cyst size ≥3 cm, adjacent main-duct dilatation, or mural nodule. At multivariable analysis of entropy in combination with all other ADC histogram and conventional MRI features, entropy was the only significant independent predictor of malignancy (p = 0.004). Although requiring larger studies, ADC entropy obtained from 3D whole-lesion histogram analysis may serve as a biomarker for identifying the malignant potential of IPMNs, independent of conventional MRI features.
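
    The histogram metrics reported above can be illustrated with a short computation; the ADC values below are synthetic, and the 64-bin histogram used for the entropy is an assumption (the paper's binning is not given here).

```python
# Whole-lesion ADC histogram metrics: entropy and low-percentile means.
import numpy as np

adc = np.random.default_rng(1).gamma(shape=9.0, scale=0.25, size=4000)  # fake voxels

counts, _ = np.histogram(adc, bins=64)
p = counts / counts.sum()
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # Shannon entropy of the histogram

p10, p25 = np.percentile(adc, [10, 25])
mean_bottom10 = adc[adc <= p10].mean()               # mean of the bottom 10th percentile
mean_10_25 = adc[(adc > p10) & (adc <= p25)].mean()  # mean of the 10-25th percentile

print(f"entropy = {entropy:.2f} bits, mean<=P10 = {mean_bottom10:.2f}, "
      f"mean P10-P25 = {mean_10_25:.2f}")
```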

  11. Compromises Between Quality of Service Metrics and Energy Consumption of Hierarchical and Flat Routing Protocols for Wireless Sensors Network

    Directory of Open Access Journals (Sweden)

    Abdelbari BEN YAGOUTA

    2016-11-01

    Full Text Available A Wireless Sensor Network (WSN) is a wireless network composed of spatially distributed, tiny autonomous nodes, which cooperatively monitor physical or environmental conditions. Among the concerns of these networks is prolonging the lifetime by saving node energy. There are several protocols specially designed for WSNs based on energy conservation. However, many WSN applications require QoS (Quality of Service) criteria, such as latency, reliability, and throughput. In this paper, we compare three routing protocols for wireless sensor networks, LEACH (Low Energy Adaptive Clustering Hierarchy), AODV (Ad hoc On-demand Distance Vector), and LABILE (Link Quality-Based Lexical Routing), using the Castalia simulator in terms of energy consumption, throughput, reliability, and latency of packets received by the sink under different conditions, to determine the configurations that offer the most suitable compromises between energy conservation and all QoS metrics for each routing protocol. The results show that the best configurations are a large number of deployed nodes with a low packet rate for LEACH (300 nodes and 1 packet/s), a medium number of deployed nodes with a low packet rate for AODV (100 nodes and 1 packet/s), and a very low node density with a low packet rate for LABILE (50 nodes and 1 packet/s).

  12. Application of sigma metrics for the assessment of quality control in clinical chemistry laboratory in Ghana: A pilot study.

    Science.gov (United States)

    Afrifa, Justice; Gyekye, Seth A; Owiredu, William K B A; Ephraim, Richard K D; Essien-Baidoo, Samuel; Amoah, Samuel; Simpong, David L; Arthur, Aaron R

    2015-01-01

    Sigma metrics provide a uniquely defined scale with which we can assess the performance of a laboratory. The objective of this study was to assess the internal quality control (QC) in the clinical chemistry laboratory of the University of Cape Coast Hospital (UCC) using the six sigma metrics application. We used commercial control serum [normal (L1) and pathological (L2)] for validation of quality control. Metabolites (glucose, urea, and creatinine), lipids [triglycerides (TG), total cholesterol, high-density lipoprotein cholesterol (HDL-C)], enzymes [alkaline phosphatase (ALP), alanine aminotransferase (ALT)], electrolytes (sodium, potassium, chloride) and total protein were assessed. Between-day imprecision (CV), inaccuracy (bias), and sigma values were calculated for each control level. Apart from sodium (2.40% and 3.83%) and chloride (2.52% and 2.51%) for L1 and L2, respectively, and glucose (4.82%) and cholesterol (4.86%) for L2, CVs for all other parameters (both L1 and L2) were >5%. Four parameters (HDL-C, urea, creatinine and potassium) achieved sigma levels >1 for both controls. Chloride and sodium achieved sigma levels >1 for L1 but <1 for L2. Glucose and ALP achieved a sigma level >1 for both control levels, whereas TG achieved a sigma level >2 for both control levels. Unsatisfactory sigma levels (<3) for most parameters indicate the need to improve analytical performance to attain the desired six sigma levels for the laboratory.
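
    The abstract does not restate the formula, but the sigma metric conventionally used in laboratory QC is sigma = (TEa − |bias|) / CV, with all terms in percent. A minimal sketch, with an illustrative allowable total error (TEa) and bias rather than the study's values:

```python
# Conventional laboratory sigma-metric calculation (TEa and bias illustrative).
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """sigma = (allowable total error - |bias|) / imprecision, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# e.g. glucose on control level L2 with the CV reported above (bias assumed)
print(f"sigma = {sigma_metric(tea_pct=10.0, bias_pct=2.1, cv_pct=4.82):.2f}")
```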

  13. Optimization of the alpha image reconstruction. An iterative CT-image reconstruction with well-defined image quality metrics

    International Nuclear Information System (INIS)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelriess, Marc

    2017-01-01

    Optimization of the AIR-algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well defined, regular image quality metrics, e.g. point spread function (PSF) or modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm uses an approach that alternates between the optimization of raw-data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR using a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function for the AIR image. The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters that can be narrowed down to a relatively simple framework to compute high quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6, while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their nonlinearity. A simple set of parameters for the algorithm is discussed that provides
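
    The key structural idea, as described, is that the reconstructed image is a voxel-wise linear blend of basis images, which is why metrics such as the MTF stay well defined. A toy sketch of that blend follows, with random stand-ins for the real basis images and weighting image:

```python
# Voxel-wise alpha-blending of basis images, the linear core of AIR (toy data).
import numpy as np

rng = np.random.default_rng(2)
sharp = rng.normal(100.0, 10.0, (64, 64))        # high-resolution, noisy basis image
smooth = np.full((64, 64), 100.0)                # low-noise, low-resolution stand-in

alpha = rng.random((64, 64))                     # weighting image alpha in [0, 1]
air_image = alpha * sharp + (1.0 - alpha) * smooth

# Linearity in the basis images is what keeps PSF/MTF analysis applicable.
print(air_image.std() < sharp.std())             # blending reduces the noise level
```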

  14. Application of a simple, affordable quality metric tool to colorectal, upper gastrointestinal, hernia, and hepatobiliary surgery patients: the HARM score.

    Science.gov (United States)

    Brady, Justin T; Ko, Bona; Hohmann, Samuel F; Crawshaw, Benjamin P; Leinicke, Jennifer A; Steele, Scott R; Augestad, Knut M; Delaney, Conor P

    2018-06-01

    Quality is the major driver for both clinical and financial assessment. There remains a need for simple, affordable, quality metric tools to evaluate patient outcomes, which led us to develop the HospitAl length of stay, Readmission and Mortality (HARM) score. We hypothesized that the HARM score would be a reliable tool to assess patient outcomes across various surgical specialties. From 2011 to 2015, we identified colorectal, hepatobiliary, upper gastrointestinal, and hernia surgery admissions using the Vizient Clinical Database. Individual and hospital HARM scores were calculated from length of stay, 30-day readmission, and mortality rates. We evaluated the correlation of HARM scores with complication rates using the Clavien-Dindo classification. We identified 525,083 surgical patients: 206,981 colorectal, 164,691 hepatobiliary, 97,157 hernia, and 56,254 upper gastrointestinal. Overall, 53.8% of patients were admitted electively with a mean HARM score of 2.24; 46.2% were admitted emergently with a mean HARM score of 1.45 (p < 0.001). For admissions with HARM scores <2, 2-3, 3-4, and >4, complication rates were 9.3, 23.2, 38.8, and 71.6%, respectively. There was a similar trend for increasing HARM score in emergent admissions. For all surgical procedure categories, increasing HARM score, with and without risk adjustment, correlated with increasing severity of complications by Clavien-Dindo classification. The HARM score is an easy-to-use quality metric that correlates with increasing complication rates and complication severity across multiple surgical disciplines when evaluated on a large administrative database. This inexpensive tool could be adopted across multiple institutions to compare the quality of surgical care.
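
    The published weighting of the HARM score is not given in this abstract, so the sketch below is only a hypothetical composite built from the three named inputs (length of stay, 30-day readmission, mortality); it is not the authors' scoring rule.

```python
# Hypothetical HARM-like composite from the three named inputs (not the
# published formula; weights and caps are invented for illustration).
def harm_like_score(los_days: float, readmitted_30d: bool, died: bool) -> float:
    score = min(los_days / 5.0, 5.0)   # length-of-stay component, capped (assumed)
    score += 2.0 if readmitted_30d else 0.0
    score += 5.0 if died else 0.0
    return score

print(harm_like_score(los_days=6, readmitted_30d=True, died=False))  # -> 3.2
```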

  15. Optimization of the alpha image reconstruction. An iterative CT-image reconstruction with well-defined image quality metrics

    Energy Technology Data Exchange (ETDEWEB)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelriess, Marc [German Cancer Research Center, Heidelberg (Germany).

    2017-10-01

    Optimization of the AIR-algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well defined, regular image quality metrics, e.g. point spread function (PSF) or modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm uses an approach that alternates between the optimization of raw-data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR using a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function for the AIR image. The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters that can be narrowed down to a relatively simple framework to compute high quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6, while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their nonlinearity. A simple set of parameters for the algorithm is discussed that provides

  16. Quality of life and utility in irradiated laryngeal cancer patients

    International Nuclear Information System (INIS)

    Ringash, Jolie; Redelmeier, Donald A.; O'Sullivan, Brian; Bezjak, Andrea

    2000-01-01

    Purpose: To determine quality of life (QOL) and health utility in irradiated laryngeal cancer survivors. Materials and Methods: Over 6 months, consecutive follow-up patients at a comprehensive cancer centre completed the QOL questionnaire FACT-H&N and the time trade-off (TTO) utility instrument. Results: Inclusion criteria were met by 339 patients, of whom 269 were eligible, 245 were approached, and 120 agreed to participate. Most participants were men (83%) who had received radiotherapy (97%) for Stage I disease (53%) of the glottis (75%); 7% had undergone total laryngectomy. Participants differed from nonparticipants only in being younger (mean age, 65 vs. 68 years, p = 0.0049) and having higher performance status (Karnofsky 88 vs. 84, p = 0.0012). The average scores for FACT-H&N and the TTO were 124/144 (SD, 14) and 0.90/1.0 (SD, 0.16), respectively. The FACT-H&N score was more highly correlated with Karnofsky score (r = 0.43, p = 0.001) than with the TTO (r = 0.29, p = 0.002). Gender predicted QOL (means: M = 125, F = 118), while natural speech, no relapses, and more time since initial treatment predicted higher utility. Conclusion: The QOL of irradiated laryngeal cancer survivors was reasonably high and independent of initial disease variables. The QOL questionnaire correlated more strongly with performance status than with utility, suggesting that QOL and utility measures may be perceived differently by patients.
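
    The time trade-off instrument used here has a standard scoring rule: a respondent indifferent between t years in their current health and x years in full health is assigned utility u = x / t. A one-line sketch follows (the 0.90 mean above corresponds to trading away about 10% of remaining time):

```python
# Standard time trade-off (TTO) utility scoring.
def tto_utility(years_full_health: float, years_current_health: float) -> float:
    """Utility of current health = x / t for an indifferent respondent."""
    return years_full_health / years_current_health

print(tto_utility(9.0, 10.0))  # -> 0.9, matching the cohort's mean utility
```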

  17. qcML : an exchange format for quality control metrics from mass spectrometry experiments

    NARCIS (Netherlands)

    Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W P|info:eu-repo/dai/nl/31093205X; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A; Kelstrup, Christian D; Gatto, Laurent; van Breukelen, Bas|info:eu-repo/dai/nl/244219087; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S; Olsen, Jesper V; Heck, Albert J R|info:eu-repo/dai/nl/105189332; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart

    Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data

  18. The Quality of Cost-Utility Analyses in Orthopedic Trauma.

    Science.gov (United States)

    Nwachukwu, Benedict U; Schairer, William W; O'Dea, Evan; McCormick, Frank; Lane, Joseph M

    2015-08-01

    As health care in the United States transitions toward a value-based model, there is increasing interest in applying cost-effectiveness analysis within orthopedic surgery. Orthopedic trauma care has traditionally underemphasized economic analysis. The goals of this review were to identify US-based cost-utility analysis in orthopedic trauma, to assess the quality of the available evidence, and to identify cost-effective strategies within orthopedic trauma. Based on a review of 971 abstracts, 8 US-based cost-utility analyses evaluating operative strategies in orthopedic trauma were identified. Study findings were recorded, and the Quality of Health Economic Studies (QHES) instrument was used to grade the overall quality. Of the 8 studies included in this review, 4 studies evaluated hip and femur fractures, 3 studies analyzed upper extremity fractures, and 1 study assessed open tibial fracture management. Cost-effective interventions identified in this review include total hip arthroplasty (over hemiarthroplasty) for femoral neck fractures in the active elderly, open reduction and internal fixation (over nonoperative management) for distal radius and scaphoid fractures, limb salvage (over amputation) for complex open tibial fractures, and systems-based interventions to prevent delay in hip fracture surgery. The mean QHES score of the studies was 79.25 (range, 67-89). Overall, there is a paucity of cost-utility analyses in orthopedic trauma; however, the available evidence suggests that certain operative interventions can be cost-effective. The quality of these studies, however, is fair, based on QHES grading. More attention should be paid to evaluating the cost-effectiveness of operative intervention in orthopedic trauma. Copyright 2015, SLACK Incorporated.

  19. THE MAQC PROJECT: ESTABLISHING QC METRICS AND THRESHOLDS FOR MICROARRAY QUALITY CONTROL

    Science.gov (United States)

    Microarrays represent a core technology in pharmacogenomics and toxicogenomics; however, before this technology can successfully and reliably be applied in clinical practice and regulatory decision-making, standards and quality measures need to be developed. The Microarray Qualit...

  20. Elliptical local vessel density: a fast and robust quality metric for retinal images

    OpenAIRE

    Giancardo, L.; Abramoff, M.D.; Chaum, E.; Karnowski, T.P.; Meriaudeau, F.; Tobin, K.W.

    2008-01-01

    A great effort of the research community is geared towards the creation of an automatic screening system able to promptly detect diabetic retinopathy with the use of fundus cameras. In addition, there are some documented approaches for automatically judging the image quality. We propose a new set of features independent of field of view or resolution to describe the morphology of the patient's vessels. Our initial results suggest that these features can be used to estimate the image quality i...

  1. Impact of Fellowship Training Level on Colonoscopy Quality and Efficiency Metrics.

    Science.gov (United States)

    Bitar, Hussein; Zia, Hassaan; Bashir, Muhammad; Parava, Pratyusha; Hanafi, Muhammad; Tierney, William; Madhoun, Mohammad

    2018-04-18

    Previous studies have described variable effects of fellow involvement on the adenoma detection rate (ADR), but few have stratified this effect by level of training. We aimed to evaluate the "fellow effect" on multiple procedural metrics, including a newly defined adenoma management efficiency index, which may have a role in documenting colonoscopy proficiency for trainees. We also describe the impact of level of training on moderate sedation use. We performed a retrospective review of 2024 patients (mean age 60.9 ± 10 years; 94% male) who underwent outpatient colonoscopy between June 2012 and December 2014 at our Veterans Affairs Medical Center. Colonoscopies were divided into 5 groups. The first 2 groups were first year fellows in the first 6 months and last 6 months of the training year. Second and third year fellows and attending-only procedures accounted for one group each. We collected data on doses of sedatives used, frequency of adjunctive agent use, and procedural times, as well as location, size, and histology of polyps. We defined the adenoma management efficiency index as the average time required per adenoma resected during withdrawal. 1675 colonoscopies involved a fellow; 349 were performed by the attending alone. There was no difference in ADR between fellows according to level of training (P=0.8), or between fellows compared with attending-only procedures (P=0.67). Procedural times decreased consistently during training, and declined further for attending-only procedures. This translated into improvement in the adenoma management efficiency index (fellow groups by ascending level of training: 23.5 minutes vs 18.3 minutes vs 13.7 minutes vs 13.4 minutes vs attending group 11.7 minutes; P<0.001). Efficiency of detecting and resecting polyps improved throughout training without reaching the attending level. Fellow involvement led to greater use of moderate sedation, which may relate to a longer procedure duration and an evolving experience in endoscopic technique. Copyright
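
    Taking the abstract's definition literally, the index is the average withdrawal time required per adenoma resected; the aggregation below (total minutes over total adenomas across procedures) is one plausible reading, and the numbers are invented.

```python
# Adenoma management efficiency index: withdrawal minutes per adenoma resected.
def adenoma_efficiency_index(withdrawal_minutes: list[float],
                             adenomas_resected: list[int]) -> float:
    total_adenomas = sum(adenomas_resected)
    if total_adenomas == 0:
        raise ValueError("no adenomas resected in this set of procedures")
    return sum(withdrawal_minutes) / total_adenomas

print(f"{adenoma_efficiency_index([25, 30, 18], [1, 2, 1]):.1f} min/adenoma")
```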

  2. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2012-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development...... of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed....... This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical...

  3. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    DEFF Research Database (Denmark)

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development...... of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed....... This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical...

  4. Elliptical Local Vessel Density: a Fast and Robust Quality Metric for Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Chaum, Edward [ORNL; Karnowski, Thomas Paul [ORNL; Meriaudeau, Fabrice [ORNL; Tobin Jr, Kenneth William [ORNL; Abramoff, M.D. [University of Iowa

    2008-01-01

    A great effort of the research community is geared towards the creation of an automatic screening system able to promptly detect diabetic retinopathy with the use of fundus cameras. In addition, there are some documented approaches to the problem of automatically judging the image quality. We propose a new set of features independent of Field of View or resolution to describe the morphology of the patient's vessels. Our initial results suggest that they can be used to estimate the image quality in a time one order of magnitude shorter than previous techniques.

  5. Habitat connectivity as a metric for aquatic microhabitat quality: Application to Chinook salmon spawning habitat

    Science.gov (United States)

    Ryan Carnie; Daniele Tonina; Jim McKean; Daniel Isaak

    2016-01-01

    Quality of fish habitat at the scale of a single fish, at the metre resolution, which we defined here as microhabitat, has been primarily evaluated on short reaches, and their results have been extended through long river segments with methods that do not account for connectivity, a measure of the spatial distribution of habitat patches. However, recent...

  6. Ability to Work among Patients with ESKD: Relevance of Quality Care Metrics.

    Science.gov (United States)

    Kutner, Nancy G; Zhang, Rebecca

    2017-08-07

    Enabling patient ability to work was a key rationale for enacting the United States (US) Medicare program that provides financial entitlement to renal replacement therapy for persons with end-stage kidney disease (ESKD). However, fewer than half of working-age individuals in the US report the ability to work after starting maintenance hemodialysis (HD). Quality improvement is a well-established objective in oversight of the dialysis program, but a more patient-centered quality assessment approach is increasingly advocated. The ESKD Quality Incentive Program (QIP) initiated in 2012 emphasizes clinical performance indicators, but a newly-added measure requires the monitoring of patient depression-an issue that is important for work ability and employment. We investigated depression scores and four dialysis-specific QIP measures in relation to work ability reported by a multi-clinic cohort of 528 working-age maintenance HD patients. The prevalence of elevated depression scores was substantially higher among patients who said they were not able to work, while only one of the four dialysis-specific clinical measures differed for patients able/not able to work. Ability to work may be among patients' top priorities. As the parameters of quality assessment continue to evolve, increased attention to patient priorities might facilitate work ability and employment outcomes.

  7. A metrics-based comparison of secondary user quality between iOS and Android

    NARCIS (Netherlands)

    T. Amman

    2014-01-01

    Native mobile applications are gaining popularity in the commercial market. No other economic sector grows as fast. A lot of economic research has been done in this sector, but there is very little research that deals with quality for mobile application developers. This paper

  8. [Establishing IAQ Metrics and Baseline Measures.] "Indoor Air Quality Tools for Schools" Update #20

    Science.gov (United States)

    US Environmental Protection Agency, 2009

    2009-01-01

    This issue of "Indoor Air Quality Tools for Schools" Update ("IAQ TfS" Update) contains the following items: (1) News and Events; (2) IAQ Profile: Establishing Your Baseline for Long-Term Success (Feature Article); (3) Insight into Excellence: Belleville Township High School District #201, 2009 Leadership Award Winner; and (4) Have Your Questions…

  9. Tracker Performance Metric

    National Research Council Canada - National Science Library

    Olson, Teresa; Lee, Harry; Sanders, Johnnie

    2002-01-01

    .... We have developed the Tracker Performance Metric (TPM) specifically for this purpose. It was designed to measure the output performance, on a frame-by-frame basis, using its output position and quality...

  10. Determine metrics and set targets for soil quality on agriculture residue and energy crop pathways

    Energy Technology Data Exchange (ETDEWEB)

    Ian Bonner; David Muth

    2013-09-01

    There are three objectives for this project: 1) support OBP in meeting MYPP stated performance goals for the Sustainability Platform, 2) develop integrated feedstock production system designs that increase total productivity of the land, decrease delivered feedstock cost to the conversion facilities, and increase environmental performance of the production system, and 3) deliver to the bioenergy community robust datasets and flexible analysis tools for establishing sustainable and viable use of agricultural residues and dedicated energy crops. The key project outcome to date has been the development and deployment of a sustainable agricultural residue removal decision support framework. The modeling framework has been used to produce a revised national assessment of sustainable residue removal potential. The national assessment datasets are being used to update national resource assessment supply curves using POLYSIS. The residue removal modeling framework has also been enhanced to support high fidelity sub-field scale sustainable removal analyses. The framework has been deployed through a web application and a mobile application. The mobile application is being used extensively in the field with industry, research, and USDA NRCS partners to support and validate sustainable residue removal decisions. The results detailed in this report have set targets for increasing soil sustainability by focusing on primary soil quality indicators (total organic carbon and erosion) in two agricultural residue management pathways and a dedicated energy crop pathway. The two residue pathway targets were set to: 1) increase residue removal by 50% while maintaining soil quality, and 2) increase soil quality by 5% as measured by Soil Management Assessment Framework indicators. The energy crop pathway target was set to increase soil quality by 10% using these same indicators. To demonstrate the feasibility and impact of each of these targets, seven case studies spanning the US are presented.

  11. Challenges, Solutions, and Quality Metrics of Personal Genome Assembly in Advancing Precision Medicine

    Directory of Open Access Journals (Sweden)

    Wenming Xiao

    2016-04-01

    Full Text Available Even though each of us shares more than 99% of the DNA sequences in our genome, there are millions of sequence codes or structures in small regions that differ between individuals, giving us different characteristics of appearance or responsiveness to medical treatments. Currently, genetic variants in diseased tissues, such as tumors, are uncovered by exploring the differences between the reference genome and the sequences detected in the diseased tissue. However, the public reference genome was derived with the DNA from multiple individuals. As a result of this, the reference genome is incomplete and may misrepresent the sequence variants of the general population. The more reliable solution is to compare sequences of diseased tissue with its own genome sequence derived from tissue in a normal state. As the price of sequencing a human genome has dropped dramatically to around $1000, documenting the personal genome for every individual has a promising future. However, de novo assembly of individual genomes at an affordable cost is still challenging. Thus, to date, only a few human genomes have been fully assembled. In this review, we introduce the history of human genome sequencing and the evolution of sequencing platforms, from Sanger sequencing to emerging “third generation sequencing” technologies. We present the currently available de novo assembly and post-assembly software packages for human genome assembly and their requirements for computational infrastructures. We recommend that a combined hybrid assembly with long and short reads would be a promising way to generate good quality human genome assemblies and specify parameters for the quality assessment of assembly outcomes. We provide a perspective view of the benefit of using personal genomes as references and suggestions for obtaining a quality personal genome. Finally, we discuss the usage of the personal genome in aiding vaccine design and development, monitoring host

  12. Semantic metrics

    OpenAIRE

    Hu, Bo; Kalfoglou, Yannis; Dupplaw, David; Alani, Harith; Lewis, Paul; Shadbolt, Nigel

    2006-01-01

    In the context of the Semantic Web, many ontology-related operations, e.g. ontology ranking, segmentation, alignment, articulation, reuse, evaluation, can be boiled down to one fundamental operation: computing the similarity and/or dissimilarity among ontological entities, and in some cases among ontologies themselves. In this paper, we review standard metrics for computing distance measures and we propose a series of semantic metrics. We give a formal account of semantic metrics drawn from a...

  13. Mass Customization Measurements Metrics

    DEFF Research Database (Denmark)

    Nielsen, Kjeld; Brunø, Thomas Ditlev; Jørgensen, Kaj Asbjørn

    2014-01-01

    A recent survey has indicated that 17% of companies have ceased mass customizing less than 1 year after initiating the effort. This paper presents measurements for a company's mass customization performance, utilizing metrics within the three fundamental capabilities: robust process design, choice navigation, and solution space development. A mass customizer assessing performance with these metrics can identify within which areas improvement would increase competitiveness the most and enable a more efficient transition to mass customization.

  14. DOE JGI Quality Metrics; Approaches to Scaling and Improving Metagenome Assembly (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    Energy Technology Data Exchange (ETDEWEB)

    Copeland, Alex; Brown, C. Titus

    2011-10-13

    DOE JGI's Alex Copeland on "DOE JGI Quality Metrics" and Michigan State University's C. Titus Brown on "Approaches to Scaling and Improving Metagenome Assembly" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  15. Survival As a Quality Metric of Cancer Care: Use of the National Cancer Data Base to Assess Hospital Performance.

    Science.gov (United States)

    Shulman, Lawrence N; Palis, Bryan E; McCabe, Ryan; Mallin, Kathy; Loomis, Ashley; Winchester, David; McKellar, Daniel

    2018-01-01

    Survival is considered an important indicator of the quality of cancer care, but the validity of different methodologies to measure comparative survival rates is less well understood. We explored whether the National Cancer Data Base (NCDB) could serve as a source of unadjusted and risk-adjusted cancer survival data and whether these data could be used as quality indicators for individual hospitals or in the aggregate by hospital type. The NCDB, an aggregate of > 1,500 hospital cancer registries, was queried to analyze unadjusted and risk-adjusted hazards of death for patients with stage III breast cancer (n = 116,787) and stage IIIB or IV non-small-cell lung cancer (n = 252,392). Data were analyzed at the individual hospital level and by hospital type. At the hospital level, after risk adjustment, few hospitals had comparative risk-adjusted survival rates that were statistically better or worse. By hospital type, National Cancer Institute-designated comprehensive cancer centers had risk-adjusted survival ratios that were statistically significantly better than those of academic cancer centers and community hospitals. Using the NCDB as the data source, survival rates for patients with stage III breast cancer and stage IIIB or IV non-small-cell lung cancer were statistically better at National Cancer Institute-designated comprehensive cancer centers when compared with other hospital types. Compared with academic hospitals, risk-adjusted survival was lower in community hospitals. At the individual hospital level, after risk adjustment, few hospitals were shown to have statistically better or worse survival, suggesting that, using NCDB data, survival may not be a good metric to determine relative quality of cancer care at this level.
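
    A hedged sketch of the kind of risk adjustment the abstract describes: a Cox proportional hazards model fit to synthetic registry-style records, from which adjusted hazards could be compared across hospital types. The column names and data are invented; only the lifelines API usage is real.

```python
# Risk-adjusted hazard of death via a Cox model (synthetic registry-style data).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months": [12, 30, 7, 44, 19, 25, 9, 38],    # follow-up time
    "death":  [1, 0, 1, 1, 0, 1, 1, 0],          # event indicator
    "age":    [62, 55, 71, 49, 68, 59, 74, 52],  # risk-adjustment covariate
    "ncicc":  [0, 1, 0, 1, 0, 1, 0, 1],          # NCI comprehensive center (0/1)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
print(cph.summary[["coef", "exp(coef)", "p"]])
```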

  16. A simple metric to predict stream water quality from storm runoff in an urban watershed.

    Science.gov (United States)

    Easton, Zachary M; Sullivan, Patrick J; Walter, M Todd; Fuka, Daniel R; Petrovic, A Martin; Steenhuis, Tammo S

    2010-01-01

    The contribution of runoff from various land uses to stream channels in a watershed is often speculated about and used to underpin many model predictions. However, these contributions, often based on little or no measurement in the watershed, fail to appropriately consider the influence of the hydrologic location of a particular landscape unit in relation to the stream network. A simple model was developed to predict storm runoff and the phosphorus (P) status of a perennial stream in an urban watershed in New York State, using the covariance structure of runoff from different landscape units in the watershed to predict runoff in time. One hundred and twenty-seven storm events were divided into parameterization (n = 85) and forecasting (n = 42) data sets. Runoff, dissolved P (DP), and total P (TP) were measured at nine sites distributed among three land uses (high maintenance, unmaintained, wooded), three positions in the watershed (near the outlet, midwatershed, upper watershed), and in the stream at the watershed outlet. The autocorrelation among runoff and P concentrations from the watershed landscape units (n = 9) and the covariance between measurements from the landscape units and measurements from the stream were calculated and used to predict the stream response. Models, validated using leave-one-out cross-validation and a forecasting method, were able to correctly capture temporal trends in streamflow and stream P chemistry (Nash-Sutcliffe efficiencies, 0.49-0.88). The analysis suggests that the covariance structure was consistent for all models, indicating that the physical processes governing runoff and P loss from these landscape units were stationary in time and that landscapes located in hydraulically active areas have a direct hydraulic link to the stream. This methodology provides insight into the impact of various urban landscape units on stream water quantity and quality.
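
    A compact sketch of the covariance idea: estimate weights for the landscape-unit runoff series from their covariance with the stream record, then predict the stream series as a linear blend. The data are synthetic, and the model below is ordinary least squares written in covariance form, not necessarily the authors' exact estimator.

```python
# Covariance-based linear prediction of stream runoff from landscape units.
import numpy as np

rng = np.random.default_rng(3)
units = rng.random((85, 9))    # 85 storm events x 9 landscape units (synthetic)
stream = units @ rng.random(9) + rng.normal(0, 0.05, 85)  # synthetic stream runoff

X = units - units.mean(axis=0)
y = stream - stream.mean()
w = np.linalg.solve(X.T @ X, X.T @ y)    # weights from the covariance structure

pred = units @ w + (stream.mean() - units.mean(axis=0) @ w)
nse = 1 - np.sum((stream - pred) ** 2) / np.sum((stream - stream.mean()) ** 2)
print(f"Nash-Sutcliffe efficiency = {nse:.2f}")
```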

  17. Brief educational interventions to improve performance on novel quality metrics in ambulatory settings in Kenya: A multi-site pre-post effectiveness trial.

    Science.gov (United States)

    Korom, Robert Ryan; Onguka, Stephanie; Halestrap, Peter; McAlhaney, Maureen; Adam, Mary

    2017-01-01

    The quality of primary care delivered in resource-limited settings is low. While some progress has been made using educational interventions, it is not yet clear how to sustainably improve care for common acute illnesses in the outpatient setting. Management of urinary tract infection is particularly important in resource-limited settings, where it is commonly diagnosed and associated with high levels of antimicrobial resistance. We describe an educational programme targeting non-physician health care providers and its effects on various clinical quality metrics for urinary tract infection. We used a series of educational interventions including 1) formal introduction of a clinical practice guideline, 2) peer-to-peer chart review, and 3) peer-reviewed literature describing local antimicrobial resistance patterns. Interventions were conducted for clinical officers (N = 24) at two outpatient centers near Nairobi, Kenya over a one-year period. The medical records of 474 patients with urinary tract infections were scored on five clinical quality metrics, with the primary outcome being the proportion of cases in which the guideline-recommended antibiotic was prescribed. The results at baseline and following each intervention were compared using chi-squared tests and unpaired two-tailed T-tests for significance. Logistic regression analysis was used to assess for possible confounders. Clinician adherence to the guideline-recommended antibiotic improved significantly during the study period, from 19% at baseline to 68% following all interventions (Χ2 = 150.7, p < 0.001). The overall quality score also improved significantly, from an average of 2.16 to 3.00 on a five-point scale (t = 6.58, p < 0.001). Brief educational interventions can dramatically improve the quality of care for routine acute illnesses in the outpatient setting. Measurement of quality metrics allows for further targeting of educational interventions depending on the needs of the providers and the community. Further study is needed to expand

  18. Impact of artefact removal on ChIP quality metrics in ChIP-seq and ChIP-exo data.

    Directory of Open Access Journals (Sweden)

    Thomas Samuel Carroll

    2014-04-01

    Full Text Available With the advent of ChIP-seq multiplexing technologies and the subsequent increase in ChIP-seq throughput, the development of working standards for the quality assessment of ChIP-seq studies has received significant attention. The ENCODE consortium’s large-scale analysis of transcription factor binding and epigenetic marks, as well as concordant work on ChIP-seq by other laboratories, has established a new generation of ChIP-seq quality control measures. The use of these metrics alongside common processing steps has, however, not been evaluated. In this study, we investigate the effects of blacklisting and removal of duplicated reads on established metrics of ChIP-seq quality and show that the interpretation of these metrics is highly dependent on the ChIP-seq preprocessing steps applied. Further to this, we perform the first investigation of the use of these metrics for ChIP-exo data and make recommendations for the adaptation of the NSC statistic to allow for the assessment of ChIP-exo efficiency.
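
    For context on the NSC statistic mentioned at the end: it is conventionally the maximum of the strand cross-correlation profile divided by its background minimum. The sketch below applies that definition to a synthetic profile; it does not reproduce the authors' ChIP-exo adaptation.

```python
# NSC from a strand cross-correlation profile (profile is synthetic).
import numpy as np

shifts = np.arange(0, 400)
profile = 0.02 + 0.05 * np.exp(-0.5 * ((shifts - 180) / 25.0) ** 2)  # fake cc curve

nsc = profile.max() / profile.min()          # normalized strand coefficient
fragment_length = shifts[profile.argmax()]   # peak shift ~ fragment length
print(f"NSC = {nsc:.2f} at estimated fragment length {fragment_length} bp")
```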

  19. Modeling Relationships between Surface Water Quality and Landscape Metrics Using the Adaptive Neuro-Fuzzy Inference System, A Case Study in Mazandaran Province

    Directory of Open Access Journals (Sweden)

    mohsen Mirzayi

    2016-03-01

    Full Text Available Landscape indices can be used as an approach for predicting water quality changes to monitor non-point source pollution. In the present study, data collected over the period from 2012 to 2013 from 81 water quality stations along the rivers flowing in Mazandaran Province were analyzed. Upstream boundaries were drawn and landscape metrics were extracted for each of the sub-watersheds at the class and landscape levels. Principal component analysis was used to single out the relevant water quality parameters, and forward linear regression was employed to determine the optimal metrics for the description of each parameter. The first five components were able to describe 96.61% of the variation in water quality in Mazandaran Province. The Adaptive Neuro-Fuzzy Inference System (ANFIS) and multiple linear regression were used to model the relationship between landscape metrics and water quality parameters. The results indicate that multiple regression was able to predict SAR, TDS, pH, NO3‒, and PO43‒ in the test step, with R2 values equal to 0.81, 0.56, 0.73, 0.44, and 0.63, respectively. The corresponding R2 values of ANFIS in the test step were 0.82, 0.79, 0.82, 0.31, and 0.36, respectively. Clearly, ANFIS exhibited a better performance in each case than did the linear regression model. This indicates a nonlinear relationship between the water quality parameters and landscape metrics. Since different land cover/uses have considerable impacts on both the outflow water quality and the available and dissolved pollutants in rivers, the method can be reasonably used for regional planning and environmental impact assessment in development projects in the region.
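
    As a stand-in for the regression half of the comparison, the sketch below fits a multiple linear regression from landscape metrics to a single water quality parameter and reports test-set R2. The data are synthetic, and ANFIS itself is not reproduced here.

```python
# Multiple linear regression of a water quality parameter on landscape metrics.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.random((81, 6))   # 81 sub-watersheds x 6 landscape metrics (synthetic)
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5, -0.5]) + rng.normal(0, 0.2, 81)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print(f"test R^2 = {r2_score(y_te, model.predict(X_te)):.2f}")
```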

  20. In Data We Trust? Comparison of Electronic Versus Manual Abstraction of Antimicrobial Prescribing Quality Metrics for Hospitalized Veterans With Pneumonia.

    Science.gov (United States)

    Jones, Barbara E; Haroldsen, Candace; Madaras-Kelly, Karl; Goetz, Matthew B; Ying, Jian; Sauer, Brian; Jones, Makoto M; Leecaster, Molly; Greene, Tom; Fridkin, Scott K; Neuhauser, Melinda M; Samore, Matthew H

    2018-07-01

    Electronic health records provide the opportunity to assess system-wide quality measures. Veterans Affairs Pharmacy Benefits Management Center for Medication Safety uses medication use evaluation (MUE) through manual review of the electronic health records. To compare an electronic MUE approach versus human/manual review for extraction of antibiotic use (choice and duration) and severity metrics. Retrospective. Hospitalizations for uncomplicated pneumonia occurring during 2013 at 30 Veterans Affairs facilities. We compared summary statistics, individual hospitalization-level agreement, facility-level consistency, and patterns of variation between electronic and manual MUE for initial severity, antibiotic choice, daily clinical stability, and antibiotic duration. Among 2004 hospitalizations, electronic and manual abstraction methods showed high individual hospitalization-level agreement for initial severity measures (agreement=86%-98%, κ=0.5-0.82), antibiotic choice (agreement=89%-100%, κ=0.70-0.94), and facility-level consistency for empiric antibiotic choice (anti-MRSA r=0.97, P<0.001; antipseudomonal r=0.95, P<0.001) and therapy duration (r=0.77, P<0.001) but lower facility-level consistency for days to clinical stability (r=0.52, P=0.006) or excessive duration of therapy (r=0.55, P=0.005). Both methods identified widespread facility-level variation in antibiotic choice, but we found additional variation in manual estimation of excessive antibiotic duration and initial illness severity. Electronic and manual MUE agreed well for illness severity, antibiotic choice, and duration of therapy in pneumonia at both the individual and facility levels. Manual MUE showed additional reviewer-level variation in estimation of initial illness severity and excessive antibiotic use. Electronic MUE allows for reliable, scalable tracking of national patterns of antimicrobial use, enabling the examination of system-wide interventions to improve quality.
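
    The hospitalization-level agreement figures quoted above pair percent agreement with Cohen's kappa; the sketch below computes both for a toy set of paired labels (the categories are invented for illustration).

```python
# Percent agreement and Cohen's kappa between manual and electronic abstraction.
from sklearn.metrics import cohen_kappa_score

manual     = ["anti-MRSA", "standard", "anti-MRSA", "standard", "standard", "anti-MRSA"]
electronic = ["anti-MRSA", "standard", "standard",  "standard", "standard", "anti-MRSA"]

agreement = sum(m == e for m, e in zip(manual, electronic)) / len(manual)
kappa = cohen_kappa_score(manual, electronic)
print(f"agreement = {agreement:.0%}, kappa = {kappa:.2f}")
```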

  1. Does Objective Quality of Physicians Correlate with Patient Satisfaction Measured by Hospital Compare Metrics in New York State?

    Science.gov (United States)

    Bekelis, Kimon; Missios, Symeon; MacKenzie, Todd A; O'Shaughnessy, Patrick M

    2017-07-01

    It is unclear whether publicly reported benchmarks correlate with quality of physicians and institutions. We investigated the association of patient satisfaction measures from a public reporting platform with performance of neurosurgeons in New York State. This cohort study comprised patients undergoing neurosurgical operations from 2009 to 2013 who were registered in the Statewide Planning and Research Cooperative System database. The cohort was merged with publicly available data from the Centers for Medicare and Medicaid Services Hospital Compare website. Propensity-adjusted regression analysis was used to investigate the association of patient satisfaction metrics with neurosurgeon quality, as measured by the neurosurgeon's individual rate of mortality and average length of stay. During the study period, 166,365 patients underwent neurosurgical procedures. Using propensity-adjusted multivariable regression analysis, we demonstrated that undergoing neurosurgical operations in hospitals with a greater percentage of patient-assigned "high" scores was associated with higher chance of being treated by a physician with superior performance in terms of mortality (odds ratio 1.90, 95% confidence interval 1.86-1.95), and a higher chance of being treated by a physician with superior performance in terms of length of stay (odds ratio 1.24, 95% confidence interval 1.21-1.27). Similar associations were identified for hospitals with a higher percentage of patients who claimed they would recommend these institutions to others. Merging a comprehensive all-payer cohort of neurosurgery patients in New York State with data from the Hospital Compare website, we observed an association of superior hospital-level patient satisfaction measures with objective performance of individual neurosurgeons in the corresponding hospitals. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. The health-related quality of life journey of gynecologic oncology surgical patients: Implications for the incorporation of patient-reported outcomes into surgical quality metrics.

    Science.gov (United States)

    Doll, Kemi M; Barber, Emma L; Bensen, Jeannette T; Snavely, Anna C; Gehrig, Paola A

    2016-05-01

    To report the changes in patient-reported quality of life for women undergoing gynecologic oncology surgeries. In a prospective cohort study from 10/2013 to 10/2014, women were enrolled pre-operatively and completed comprehensive interviews at baseline and 1, 3, and 6 months post-operatively. Measures included the disease-specific Functional Assessment of Cancer Therapy-General (FACT-GP), general Patient Reported Outcome Measure Information System (PROMIS) global health, and validated measures of anxiety and depression. Bivariate statistics were used to analyze demographic groups and changes in mean scores over time. Of 231 patients completing baseline interviews, 185 (80%) completed 1-month, 170 (74%) 3-month, and 174 (75%) 6-month interviews. Minimally invasive (n=115, 63%) and laparotomy (n=60, 32%) procedures were performed. Functional wellbeing declined (20 → 17.6) during the period of therapy administration. In an exploratory analysis of the interaction of QOL and quality, patients with increased postoperative healthcare resource use were noted to have higher baseline levels of anxiety. For women undergoing gynecologic oncology procedures, temporary declines in functional wellbeing are balanced by improvements in emotional wellbeing and decreased anxiety symptoms after surgery. Not all commonly used QOL surveys are sensitive to changes during the perioperative period and may not be suitable for use in surgical quality metrics. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Improvement in Total Joint Replacement Quality Metrics: Year One Versus Year Three of the Bundled Payments for Care Improvement Initiative.

    Science.gov (United States)

    Dundon, John M; Bosco, Joseph; Slover, James; Yu, Stephen; Sayeed, Yousuf; Iorio, Richard

    2016-12-07

    In January 2013, a large, tertiary, urban academic medical center began participation in the Bundled Payments for Care Improvement (BPCI) initiative for total joint arthroplasty, a program implemented by the Centers for Medicare & Medicaid Services (CMS) in 2011. Medicare Severity-Diagnosis Related Groups (MS-DRGs) 469 and 470 were included. We participated in BPCI Model 2, by which an episode of care includes the inpatient and all post-acute care costs through 90 days following discharge. The goal for this initiative is to improve patient care and quality through a patient-centered approach with increased care coordination supported through payment innovation. Length of stay (LOS), readmissions, discharge disposition, and cost per episode of care were analyzed for year 3 compared with year 1 of the initiative. Multiple programs were implemented after the first year to improve performance metrics: a surgeon-directed preoperative risk-factor optimization program, enhanced care coordination and home services, a change in venous thromboembolic disease (VTED) prophylaxis to a risk-stratified protocol, infection-prevention measures, a continued emphasis on discharge to home rather than to an inpatient facility, and a quality-dependent gain-sharing program among surgeons. There were 721 Medicare primary total joint arthroplasty patients in year 1 and 785 in year 3; their data were compared. The average hospital LOS decreased from 3.58 to 2.96 days. The rate of discharge to an inpatient facility decreased from 44% to 28%. The 30-day all-cause readmission rate decreased from 7% to 5%; the 60-day all-cause readmission rate decreased from 11% to 6%; and the 90-day all-cause readmission rate decreased from 13% to 8%. The average 90-day cost per episode decreased by 20%. Mid-term results from the implementation of Medicare BPCI Model 2 for primary total joint arthroplasty demonstrated decreased LOS, decreased discharges to inpatient facilities, decreased readmissions, and

  4. DLA Energy Biofuel Feedstock Metrics Study

    Science.gov (United States)

    2012-12-11

    Metric 1: state invasiveness ranking (e.g., moderately/highly invasive). Metric 2: genetically modified organism (GMO) hazard (Yes/No and hazard category). Metric 3: species hybridization. The metrics are scored across the biofuel life-cycle stages, through stage 4 (biofuel distribution) and stage 5 (biofuel use); production may utilize GMO microbial or microalgae species across the applicable biofuel life cycles (stages 1-3). The following consequence Metrics 4-6 then

  5. Investigation of stormwater quality improvements utilizing permeable friction course (PFC).

    Science.gov (United States)

    2010-09-01

    This report describes research into the water quality and hydraulics of the Permeable Friction Course (PFC). Water quality monitoring of three locations in the Austin area indicates up to a 90 percent reduction in pollutant discharges from PFC compar...

  6. General discussion of data quality challenges in social media metrics: Extensive comparison of four major altmetric data aggregators

    Science.gov (United States)

    2018-01-01

    The data collection and reporting approaches of four major altmetric data aggregators are studied. The main aim of this study is to understand how differences in social media tracking and data collection methodologies can have effects on the analytical use of altmetric data. For this purpose, discrepancies in the metrics across aggregators have been studied in order to understand how the methodological choices adopted by these aggregators can explain the discrepancies found. Our results show that different forms of accessing the data from diverse social media platforms, together with different approaches of collecting, processing, summarizing, and updating social media metrics cause substantial differences in the data and metrics offered by these aggregators. These results highlight the importance that methodological choices in the tracking, collecting, and reporting of altmetric data can have in the analytical value of the data. Some recommendations for altmetric users and data aggregators are proposed and discussed. PMID:29772003

  7. Metric learning

    CERN Document Server

    Bellet, Aurelien; Sebban, Marc

    2015-01-01

    Similarity between objects plays an important role in both human cognitive processes and artificial systems for recognition and categorization. How to appropriately measure such similarities for a given task is crucial to the performance of many machine learning, pattern recognition and data mining methods. This book is devoted to metric learning, a set of techniques to automatically learn similarity and distance functions from data that has attracted a lot of interest in machine learning and related fields in the past ten years. In this book, we provide a thorough review of metric learning.
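
    As a toy illustration of what such techniques do, the sketch below learns a Mahalanobis-type distance d(x, y)^2 = ||L(x - y)||^2 by stochastic updates that shrink same-class distances and grow cross-class ones; this is a generic sketch of the idea, not an algorithm taken from the book:

      import numpy as np

      rng = np.random.default_rng(2)
      X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])  # two clusters
      y = np.array([0] * 20 + [1] * 20)

      L = np.eye(2)
      lr = 0.005
      for _ in range(500):
          i, j = rng.integers(0, len(X), size=2)
          diff = (X[i] - X[j]).reshape(-1, 1)
          grad = 2 * L @ diff @ diff.T            # gradient of ||L (x_i - x_j)||^2 w.r.t. L
          sign = 1.0 if y[i] == y[j] else -1.0    # pull same-class pairs, push different ones
          L -= lr * sign * grad
          L /= np.linalg.norm(L)                  # fix the overall scale of the metric
      M = L.T @ L                                 # learned positive semi-definite metric matrix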

  8. Using the Consumer Experience with Pharmacy Services Survey as a quality metric for ambulatory care pharmacies: older adults' perspectives.

    Science.gov (United States)

    Shiyanbola, Olayinka O; Mott, David A; Croes, Kenneth D

    2016-05-26

    To describe older adults' perceptions of evaluating and comparing pharmacies based on the Consumer Experience with Pharmacy Services Survey (CEPSS), describe older adults' perceived importance of the CEPSS and its specific domains, and explore older adults' perceptions of the influence of specific CEPSS domains in choosing/switching pharmacies. Focus group methodology was combined with the administration of a questionnaire. The focus groups explored participants' perceived importance of the CEPSS and their perception of using the CEPSS to choose and/or switch pharmacies. Then, using the questionnaire, participants rated their perceived importance of each CEPSS domain in evaluating a pharmacy, and the likelihood of using the CEPSS to switch pharmacies if their current pharmacy had low ratings. Descriptive and thematic analyses were done. Six semistructured focus groups were conducted in a private meeting room in a Midwestern state in the USA with 60 English-speaking adults who were at least 65 years old and had filled a prescription at a retail pharmacy within 90 days. During the focus groups, the older adults perceived the CEPSS to have advantages and disadvantages in evaluating and comparing pharmacies. Older adults thought the CEPSS was important in choosing the best pharmacies and avoiding the worst pharmacies. The perceived influence of the CEPSS in switching pharmacies varied depending on the older adult's personal experience or trust of other consumers' experience. Questionnaire results showed that participants perceived health/medication-focused communication as very important or extremely important (n=47, 82.5%) in evaluating pharmacies and would be extremely likely (n=21, 36.8%) to switch pharmacies if their pharmacy had low ratings in this domain. The older adults in this study are interested in using patient experiences as a quality metric for avoiding the worst pharmacies. Pharmacists' communication about health and medicines is perceived as important and likely

  9. Power quality enhancement at distribution level utilizing the unified power quality conditioner (UPQC)

    Science.gov (United States)

    Khadkikar, Vinod

    The present doctoral work is based on the philosophy of optimally utilizing the available resources in the most effective and efficient way to improve product efficiency and reduce overall cost. This work proposes a novel control philosophy termed power angle control (PAC), in which both the series and shunt inverters share the load reactive power in coordination with each other without affecting the basic UPQC compensation capabilities. This eventually results in better utilization of the series inverter, some reduction in the shunt inverter rating, and ultimately a reduction in the overall cost of the UPQC. Moreover, this thesis work also proposes several other control approaches, such as unit vector template generation, quadrature voltage injection, a generalized single-phase p-q theory, and a novel current unbalance compensation approach. All the developed concepts are successfully validated through digital simulation as well as extensive experimental investigations. Keywords: power quality, active power filter, unified power quality conditioner, reactive power compensation, harmonics compensation.

  10. Brief educational interventions to improve performance on novel quality metrics in ambulatory settings in Kenya: A multi-site pre-post effectiveness trial

    Science.gov (United States)

    Onguka, Stephanie; Halestrap, Peter; McAlhaney, Maureen; Adam, Mary

    2017-01-01

    Background The quality of primary care delivered in resource-limited settings is low. While some progress has been made using educational interventions, it is not yet clear how to sustainably improve care for common acute illnesses in the outpatient setting. Management of urinary tract infection is particularly important in resource-limited settings, where it is commonly diagnosed and associated with high levels of antimicrobial resistance. We describe an educational programme targeting non-physician health care providers and its effects on various clinical quality metrics for urinary tract infection. Methods We used a series of educational interventions including 1) formal introduction of a clinical practice guideline, 2) peer-to-peer chart review, and 3) peer-reviewed literature describing local antimicrobial resistance patterns. Interventions were conducted for clinical officers (N = 24) at two outpatient centers near Nairobi, Kenya over a one-year period. The medical records of 474 patients with urinary tract infections were scored on five clinical quality metrics, with the primary outcome being the proportion of cases in which the guideline-recommended antibiotic was prescribed. The results at baseline and following each intervention were compared using chi-squared tests and unpaired two-tailed t-tests for significance. Logistic regression analysis was used to assess for possible confounders. Findings Clinician adherence to the guideline-recommended antibiotic improved significantly during the study period, from 19% at baseline to 68% following all interventions (χ2 = 150.7, p < 0.001). The secondary outcome of composite quality score also improved significantly, from an average of 2.16 to 3.00 on a five-point scale (t = 6.58, p < 0.001). Interventions had different effects at different clinical sites; the primary outcome of appropriate antibiotic prescription was met 83% of the time at Penda Health, and 50% of the time at AICKH, possibly reflecting
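
    The baseline-versus-post comparison of the primary outcome can be reproduced in form with a chi-squared test on a 2x2 table; the counts below are illustrative stand-ins consistent with the reported 19% and 68% rates, not the study's actual table:

      from scipy.stats import chi2_contingency

      baseline = [30, 128]   # [guideline-concordant, non-concordant] at baseline
      post = [108, 51]       # after all three interventions
      chi2, p, dof, _ = chi2_contingency([baseline, post])
      print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2g}")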

  11. 42 CFR 423.153 - Drug utilization management, quality assurance, and medication therapy management programs (MTMPs).

    Science.gov (United States)

    2010-10-01

    Prescription Drug Benefit, Cost Control and Quality Improvement Requirements, § 423.153: Drug utilization management, quality assurance, and medication therapy management programs (MTMPs). Title 42 (Public Health), Vol. 3, revised as of 2010-10-01. Centers for Medicare ...

  12. Chest CT using spectral filtration: radiation dose, image quality, and spectrum of clinical utility

    Energy Technology Data Exchange (ETDEWEB)

    Braun, Franziska M.; Johnson, Thorsten R.C.; Sommer, Wieland H.; Thierfelder, Kolja M.; Meinel, Felix G. [University Hospital Munich, Institute for Clinical Radiology, Munich (Germany)

    2015-06-01

    To determine the radiation dose, image quality, and clinical utility of non-enhanced chest CT with spectral filtration. We retrospectively analysed 25 non-contrast chest CT examinations acquired with spectral filtration (tin-filtered Sn100 kVp spectrum) compared to 25 examinations acquired without spectral filtration (120 kV). Radiation metrics were compared. Image noise was measured. Contrast-to-noise ratio (CNR) and figure-of-merit (FOM) were calculated. Diagnostic confidence for the assessment of various thoracic pathologies was rated by two independent readers. Effective chest diameters were comparable between groups (P = 0.613). In spectral filtration CT, median CTDIvol, DLP, and size-specific dose estimate (SSDE) were reduced (0.46 vs. 4.3 mGy, 16 vs. 141 mGy*cm, and 0.65 vs. 5.9 mGy, all P < 0.001). Spectral filtration CT had higher image noise (21.3 vs. 13.2 HU, P < 0.001) and lower CNR (47.2 vs. 75.3, P < 0.001), but was more dose-efficient (FOM 10,659 vs. 2,231/mSv, P < 0.001). Diagnostic confidence for parenchymal lung disease and osseous pathologies was lower with spectral filtration CT, but no significant difference was found for pleural pathologies, pulmonary nodules, or pneumonia. Non-contrast chest CT using spectral filtration appears to be sufficient for the assessment of a considerable spectrum of thoracic pathologies, while providing superior dose efficiency, allowing for substantial radiation dose reduction. (orig.)
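
    For reference, the dose metrics named in this record are commonly defined as follows (standard definitions, assumed rather than quoted from the paper): the size-specific dose estimate scales CTDIvol by a patient-size conversion factor (AAPM Report 204), and the figure-of-merit normalizes squared contrast-to-noise ratio by effective dose E, which is why a lower-dose protocol can be more dose-efficient even with lower CNR:

      \mathrm{SSDE} = f_{\mathrm{size}} \cdot \mathrm{CTDI}_{\mathrm{vol}},
      \qquad
      \mathrm{FOM} = \frac{\mathrm{CNR}^2}{E}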

  13. The Quality of Life Scale (QOLS): Reliability, Validity, and Utilization

    Directory of Open Access Journals (Sweden)

    Anderson Kathryn L

    2003-10-01

    The Quality of Life Scale (QOLS), created originally by American psychologist John Flanagan in the 1970s, has been adapted for use in chronic illness groups. This paper reviews the development and psychometric testing of the QOLS. A descriptive review of the published literature was undertaken and findings summarized in a frequently-asked-questions format. Reliability, content validity, and construct validity testing have been performed on the QOLS, and a number of translations have been made. The QOLS has low to moderate correlations with physical health status and disease measures. However, content validity analysis indicates that the instrument measures domains that diverse patient groups with chronic illness define as quality of life. The QOLS is a valid instrument for measuring quality of life across patient groups and cultures and is conceptually distinct from health status or other causal indicators of quality of life.

  14. MO-D-213-06: Quantitative Image Quality Metrics Are for Physicists, Not Radiologists: How to Communicate to Your Radiologists Using Their Language

    International Nuclear Information System (INIS)

    Szczykutowicz, T; Rubert, N; Ranallo, F

    2015-01-01

    Purpose: A framework is needed for explaining differences in image quality to non-technical audiences in medical imaging. Currently, this task is something that is learned “on the job.” The lack of a formal methodology for communicating optimal acquisition parameters into the clinic effectively limits the clinical impact of many technological advances. As a community, medical physicists need to be held responsible not only for advancing image science, but also for ensuring its proper use in the clinic. This work outlines a framework that bridges the gap between the results from quantitative image quality metrics like detectability, MTF, and NPS and their effect on specific anatomical structures present in diagnostic imaging tasks. Methods: Specific structures of clinical importance were identified for a body, an extremity, a chest, and a temporal bone protocol. Using these structures, quantitative metrics were used to identify the parameter space that should yield optimal image quality, constrained within the confines of clinical logistics and dose considerations. The reading room workflow for presenting the proposed changes for imaging each of these structures is presented. The workflow consists of displaying images for physician review consisting of different combinations of acquisition parameters guided by quantitative metrics. Examples of using detectability index, MTF, NPS, noise, and noise non-uniformity are provided. During review, the physician was forced to judge the image quality solely on those features they need for diagnosis, not on the overall “look” of the image. Results: We found that in many cases, use of this framework settled disagreements between physicians. Once physicians were forced to judge images on the ability to detect specific structures, inter-reader agreement was obtained. Conclusion: This framework will provide consulting, research/industrial, or in-house physicists with clinically relevant imaging tasks to guide reading room image review. This framework avoids use

  15. MO-D-213-06: Quantitative Image Quality Metrics Are for Physicists, Not Radiologists: How to Communicate to Your Radiologists Using Their Language

    Energy Technology Data Exchange (ETDEWEB)

    Szczykutowicz, T; Rubert, N; Ranallo, F [University Wisconsin-Madison, Madison, WI (United States)

    2015-06-15

    Purpose: A framework is needed for explaining differences in image quality to non-technical audiences in medical imaging. Currently, this task is something that is learned “on the job.” The lack of a formal methodology for communicating optimal acquisition parameters into the clinic effectively limits the clinical impact of many technological advances. As a community, medical physicists need to be held responsible not only for advancing image science, but also for ensuring its proper use in the clinic. This work outlines a framework that bridges the gap between the results from quantitative image quality metrics like detectability, MTF, and NPS and their effect on specific anatomical structures present in diagnostic imaging tasks. Methods: Specific structures of clinical importance were identified for a body, an extremity, a chest, and a temporal bone protocol. Using these structures, quantitative metrics were used to identify the parameter space that should yield optimal image quality, constrained within the confines of clinical logistics and dose considerations. The reading room workflow for presenting the proposed changes for imaging each of these structures is presented. The workflow consists of displaying images for physician review consisting of different combinations of acquisition parameters guided by quantitative metrics. Examples of using detectability index, MTF, NPS, noise, and noise non-uniformity are provided. During review, the physician was forced to judge the image quality solely on those features they need for diagnosis, not on the overall “look” of the image. Results: We found that in many cases, use of this framework settled disagreements between physicians. Once physicians were forced to judge images on the ability to detect specific structures, inter-reader agreement was obtained. Conclusion: This framework will provide consulting, research/industrial, or in-house physicists with clinically relevant imaging tasks to guide reading room image review. This framework avoids use

  16. Metrication manual

    International Nuclear Information System (INIS)

    Harper, A.F.A.; Digby, R.B.; Thong, S.P.; Lacey, F.

    1978-04-01

    In April 1978 a meeting of senior metrication officers convened by the Commonwealth Science Council of the Commonwealth Secretariat, was held in London. The participants were drawn from Australia, Bangladesh, Britain, Canada, Ghana, Guyana, India, Jamaica, Papua New Guinea, Solomon Islands and Trinidad and Tobago. Among other things, the meeting resolved to develop a set of guidelines to assist countries to change to SI and to compile such guidelines in the form of a working manual

  17. Quality of renewable energy utilization in transport in Sweden

    Energy Technology Data Exchange (ETDEWEB)

    Lampinen, Ari

    2015-04-01

    Renewable energy utilization in transportation (RES-T) lags far behind its utilization in the power (RES-E) and heat (RES-H) sectors. International and national environmental policies have recently placed considerable emphasis on this problem, and information is therefore sought on how to implement solutions both politically and technologically. As Sweden is a global leader in this area, it can provide valuable examples. In 2012 Sweden became the first country to reach the binding European Union requirement of at least a 10 % share of renewable energy in transport energy consumption. Qualitative development, however, has been even stronger than quantitative development. Among the success stories behind this qualitative progress, the most noteworthy are those created by innovative municipal policies. By 2030 Sweden aims to achieve a fossil-fuel-independent road transport system, and by 2050 a completely carbon-neutral transport system across all modes of transport.

  18. WE-AB-209-07: Explicit and Convex Optimization of Plan Quality Metrics in Intensity-Modulated Radiation Therapy Treatment Planning

    International Nuclear Information System (INIS)

    Engberg, L; Eriksson, K; Hardemark, B; Forsgren, A

    2016-01-01

    Purpose: To formulate objective functions of a multicriteria fluence map optimization model that correlate well with plan quality metrics, and to solve this multicriteria model by convex approximation. Methods: In this study, objectives of a multicriteria model are formulated to explicitly either minimize or maximize a dose-at-volume measure. Given the widespread agreement that dose-at-volume levels play important roles in plan quality assessment, these objectives correlate well with plan quality metrics. This is in contrast to the conventional objectives, which are to maximize clinical goal achievement by relating to deviations from given dose-at-volume thresholds: while balancing the new objectives means explicitly balancing dose-at-volume levels, balancing the conventional objectives effectively means balancing deviations. Constituted by the inherently non-convex dose-at-volume measure, the new objectives are approximated by the convex mean-tail-dose measure (CVaR measure), yielding a convex approximation of the multicriteria model. Results: Advantages of using the convex approximation are investigated through juxtaposition with the conventional objectives in a computational study of two patient cases. Clinical goals of each case respectively point out three ROI dose-at-volume measures to be considered for plan quality assessment. This is translated in the convex approximation into minimizing three mean-tail-dose measures. Evaluations of the three ROI dose-at-volume measures on Pareto optimal plans are used to represent plan quality of the Pareto sets. Besides providing increased accuracy in terms of feasibility of solutions, the convex approximation generates Pareto sets with overall improved plan quality. In one case, the Pareto set generated by the convex approximation entirely dominates that generated with the conventional objectives. Conclusion: The initial computational study indicates that the convex approximation outperforms the conventional objectives
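
    The convex mean-tail-dose surrogate mentioned above has a standard Rockafellar-Uryasev form; for the hottest fraction v of n voxel doses d_1, ..., d_n it can be written as below (a sketch of the standard CVaR formulation, not necessarily the authors' exact notation). Since it upper-bounds the non-convex dose-at-volume D_v(d), minimizing it conservatively controls D_v while keeping the optimization convex:

      \mathrm{MTD}_v(d) \;=\; \min_{\lambda \in \mathbb{R}}
      \left\{ \lambda + \frac{1}{v\,n} \sum_{i=1}^{n} \bigl(d_i - \lambda\bigr)_+ \right\}
      \;\ge\; D_v(d), \qquad (x)_+ = \max(x, 0)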

  19. Cost and quality of fuels for electric utility plants 1991

    International Nuclear Information System (INIS)

    1992-01-01

    Data for 1991 and 1990 receipts and costs for fossil fuels discussed in the Executive Summary are displayed in Tables ES1 through ES7. These data are for electric generating plants with a total steam-electric and combined-cycle nameplate capacity of 50 or more megawatts. Data presented in the Executive Summary on generation, consumption, and stocks of fossil fuels at electric utilities are based on data collected on the Energy Information Administration Form EIA-759, ''Monthly Power Plant Report.'' These data cover all electric generating plants. The average delivered cost of coal, petroleum, and gas each decreased in 1991 from 1990 levels. Overall, the average annual cost of fossil fuels delivered to electric utilities in 1991 was $1.60 per million Btu, a decrease of $0.09 per million Btu from 1990. This was the lowest average annual cost since 1978 and was the result of the abundant supply of coal, petroleum, and gas available to electric utilities. US net generation of electricity by all electric utilities in 1991 increased by less than 1 percent--the smallest increase since the decline that occurred in 1982. Coal- and gas-fired steam net generation each decreased by less than 1 percent, and petroleum-fired steam net generation by nearly 5 percent. Nuclear-powered net generation, however, increased by 6 percent. Fossil fuels accounted for 68 percent of all generation; nuclear, 22 percent; and hydroelectric, 10 percent. Sales of electricity to ultimate consumers in 1991 were 2 percent higher than during 1990.

  20. Utility of pollution indices in assessment of soil quality around ...

    African Journals Online (AJOL)

    The quality of soil in the vicinity of Madaka mining sites were investigated in this study using Environmental Pollution Indices. Geological mapping of the study area indicated that the area was dominated by schist and granite. The static water level measurement revealed a westward groundwater flow direction which also ...

  1. Information Accessibility and Utilization as Correlate of Quality Of ...

    African Journals Online (AJOL)

    The quality of life of people in developing countries, including Nigeria, is often adjudged to be lower than the expected standard. This is worse with women living in the rural areas whose lives are characterised by inadequate access and use of basic amenities of life. The situation is compounded by the women's lack of ...

  2. Quality assessment and potential utilization of high amylolytic ...

    African Journals Online (AJOL)


    2008-12-03

    This study was carried out to compare the qualities of two acclaimed Nigerian amylolytic maize cultivars, SPMAT ... growing demand for usage as a gluten-free cereal (Sweeny ... The grain was malted and the malting loss was calculated using the ... also confirmed that there was no significant difference in ...

  3. Nutritional quality and utilization of local and improved cowpea ...

    African Journals Online (AJOL)

    Cowpeas are grown for their leaves and grains both of which are used as relish or side dishes together with the staple food. Little information is available on the nutritional quality of local and improved cowpea varieties grown in Tanzania as well as the recipes in which they are ingredients. This study was done to investigate ...

  4. The Impact of Patient Complexity on Healthcare Utilization

    Science.gov (United States)

    2017-10-27

    Primary Care Quality Metrics; Well Child Visits in First 15 Months of Life NQF 1392; Diabetes Mellitus NQF 0059; Colorectal Cancer Screening NQF 0034; Emergency Department Utilization; Alcohol and Drug Screening

  5. Development and utilization of sunflower genotypes with altered oil quality

    OpenAIRE

    Cvejić, Sandra; Jocić, Siniša; Miladinović, Dragana; Jocković, Milan; Imerovski, Ivana; Sakač, Zvonimir; Miklič, Vladimir

    2014-01-01

    Sunflower oil is among the highest quality oils of plant origin. The oil of standard sunflowers has an average of 10% saturated fatty acids, 20-30% oleic acid and 60-70% linoleic acid. The total content of tocopherols in standard sunflower oil is 700-1000 mg/kg with the predominant being alpha-tocopherol (vitamin-E). Following the trends of the food and non-food industries sunflower breeders have been able to significantly change the fatty acid composition of the oil. The oil of high-oleic hy...

  6. SU-F-T-600: Influence of Acuros XB and AAA Dose Calculation Algorithms On Plan Quality Metrics and Normal Lung Doses in Lung SBRT

    International Nuclear Information System (INIS)

    Yaparpalvi, R; Mynampati, D; Kuo, H; Garg, M; Tome, W; Kalnicki, S

    2016-01-01

    Purpose: To study the influence of the superposition-beam model (AAA) and determinant-photon transport-solver (Acuros XB) dose calculation algorithms on treatment plan quality metrics and on normal lung dose in lung SBRT. Methods: Treatment plans of 10 lung SBRT patients were randomly selected. Patients were prescribed a total dose of 50-54 Gy in 3-5 fractions (10 Gy × 5 or 18 Gy × 3). Plans were optimized for 6-MV delivery using two arcs (VMAT). Doses were calculated using the AAA algorithm with heterogeneity correction. For each plan, plan quality metrics in the categories of coverage, homogeneity, conformity, and gradient were quantified. Repeat dosimetry for these AAA treatment plans was performed using the AXB algorithm with heterogeneity correction for the same beam and MU parameters. Plan quality metrics were again evaluated and compared with the AAA plan metrics. For normal lung dose, V20 and V5 of (total lung - GTV) were evaluated. Results: The results are summarized in Supplemental Table 1. PTV volume was a mean 11.4 (±3.3) cm3. Comparing RTOG 0813 protocol criteria for conformality, AXB plans yielded, on average, a similar PITV ratio (individual PITV ratio differences varied from −9 to +15%), reduced target coverage (−1.6%), and increased R50% (+2.6%). Comparing normal lung doses, the lung V20 (+3.1%) and V5 (+1.5%) were slightly higher for AXB plans than for AAA plans. High-dose spillage ((V105%PD - PTV)/PTV) was slightly lower for AXB plans, but the low-dose spillage (D2cm) was similar between the two calculation algorithms. Conclusion: The AAA algorithm overestimates lung target dose. Routinely adopting AXB for dose calculations in lung SBRT planning may improve dose calculation accuracy, as AXB-based calculations have been shown to be closer to Monte Carlo based dose predictions in accuracy and with relatively faster computational time. For clinical practice, revisiting dose-fractionation in lung SBRT to correct for dose overestimates attributable to algorithm
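
    For reference, the RTOG-style conformity metrics cited in this record are conventionally defined as ratios to the planning target volume (standard definitions, assumed rather than quoted from the abstract):

      \mathrm{PITV} = \frac{V_{100\%\,\mathrm{Rx}}}{V_{\mathrm{PTV}}},
      \qquad
      R_{50\%} = \frac{V_{50\%\,\mathrm{Rx}}}{V_{\mathrm{PTV}}}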

  7. SU-F-T-600: Influence of Acuros XB and AAA Dose Calculation Algorithms On Plan Quality Metrics and Normal Lung Doses in Lung SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Yaparpalvi, R; Mynampati, D; Kuo, H; Garg, M; Tome, W; Kalnicki, S [Montefiore Medical Center, Bronx, NY (United States)

    2016-06-15

    Purpose: To study the influence of the superposition-beam model (AAA) and determinant-photon transport-solver (Acuros XB) dose calculation algorithms on treatment plan quality metrics and on normal lung dose in lung SBRT. Methods: Treatment plans of 10 lung SBRT patients were randomly selected. Patients were prescribed a total dose of 50-54 Gy in 3-5 fractions (10 Gy × 5 or 18 Gy × 3). Plans were optimized for 6-MV delivery using two arcs (VMAT). Doses were calculated using the AAA algorithm with heterogeneity correction. For each plan, plan quality metrics in the categories of coverage, homogeneity, conformity, and gradient were quantified. Repeat dosimetry for these AAA treatment plans was performed using the AXB algorithm with heterogeneity correction for the same beam and MU parameters. Plan quality metrics were again evaluated and compared with the AAA plan metrics. For normal lung dose, V20 and V5 of (total lung - GTV) were evaluated. Results: The results are summarized in Supplemental Table 1. PTV volume was a mean 11.4 (±3.3) cm3. Comparing RTOG 0813 protocol criteria for conformality, AXB plans yielded, on average, a similar PITV ratio (individual PITV ratio differences varied from −9 to +15%), reduced target coverage (−1.6%), and increased R50% (+2.6%). Comparing normal lung doses, the lung V20 (+3.1%) and V5 (+1.5%) were slightly higher for AXB plans than for AAA plans. High-dose spillage ((V105%PD - PTV)/PTV) was slightly lower for AXB plans, but the low-dose spillage (D2cm) was similar between the two calculation algorithms. Conclusion: The AAA algorithm overestimates lung target dose. Routinely adopting AXB for dose calculations in lung SBRT planning may improve dose calculation accuracy, as AXB-based calculations have been shown to be closer to Monte Carlo based dose predictions in accuracy and with relatively faster computational time. For clinical practice, revisiting dose-fractionation in lung SBRT to correct for dose overestimates

  8. Near-Port Air Quality Assessment Utilizing a Mobile Monitoring Approach

    Data.gov (United States)

    U.S. Environmental Protection Agency — Near-Port Air Quality Assessment Utilizing a Mobile Monitoring Approach. This dataset is associated with the following publication: Steffens, J., S. Kimbrough, R....

  9. Does quality influence utilization of primary health care? Evidence from Haiti.

    Science.gov (United States)

    Gage, Anna D; Leslie, Hannah H; Bitton, Asaf; Jerome, J Gregory; Joseph, Jean Paul; Thermidor, Roody; Kruk, Margaret E

    2018-06-20

    Expanding coverage of primary healthcare services such as antenatal care and vaccinations is a global health priority; however, many Haitians do not utilize these services. One reason may be that the population avoids low quality health facilities. We examined how facility infrastructure and the quality of primary health care service delivery were associated with community utilization of primary health care services in Haiti. We constructed two composite measures of quality for all Haitian facilities using the 2013 Service Provision Assessment survey. We geographically linked population clusters from the Demographic and Health Surveys to nearby facilities offering primary health care services. We assessed the cross-sectional association between quality and utilization of four primary care services: antenatal care, postnatal care, vaccinations and sick child care, as well as one more complex service: facility delivery. Facilities performed poorly on both measures of quality, scoring 0.55 and 0.58 out of 1 on infrastructure and service delivery quality respectively. In rural areas, utilization of several primary care services (antenatal care, postnatal care, and vaccination) was associated with both infrastructure and quality of service delivery, with stronger associations for service delivery. Facility delivery was associated with infrastructure quality, and there was no association for sick child care. In urban areas, care utilization was not associated with either quality measure. Poor quality of care may deter utilization of beneficial primary health care services in rural areas of Haiti. Improving health service quality may offer an opportunity not only to improve health outcomes for patients, but also to expand coverage of key primary health care services.

  10. Exploring the clinical utility of optical quality and fundus autofluorescence metrics for monitoring and screening for diabetes mellitus

    OpenAIRE

    Calvo Maroto, Ana María

    2017-01-01

    Diabetes mellitus (DM) is a systemic disease characterized by chronic hyperglycemia associated with long-term damage to various organs, including the eyes, kidneys, heart, and blood vessels, among others. DM is usually classified into type 1 DM and type 2 DM, to which gestational diabetes and other types of diabetes caused by genetic factors or by other diseases and infections must be added. DM is a disease with a great social impact...

  11. Utilization of Portulaca Oleracea L. to Improve Quality of Yoghurt

    International Nuclear Information System (INIS)

    Sallam, E.M.; Anwar, M.M.

    2015-01-01

    The present investigation was conducted to study the possibility of using Portulaca oleracea L. as a source of omega-3 and omega-6 fatty acids, as well as of vitamins and minerals, to improve the quality of yoghurt. The microbial characteristics of the treated yoghurt were also evaluated. The obtained results showed that the replacement of milk fat by dry leaves of P. oleracea had no effect on the chemical composition and sensory properties of yoghurt treated with 50 and 100% P. oleracea L. leaf oil as a milk fat substitute compared with the untreated control. In conclusion, the manufactured yoghurt is suitable as a nutrient-rich foodstuff for people suffering from hypertension, high blood cholesterol, and liver and heart diseases.

  12. Quality of Mount Etna groundwaters utilized for the potable supply

    International Nuclear Information System (INIS)

    Giammanco, G.; Giammanco, S.; Valenza, M.

    1995-01-01

    The groundwaters of many aquifers of Mt. Etna are naturally enriched in a number of elements that are present in the rocks making up the volcanic edifice. The concentrations of magnesium, iron, and manganese in the waters from many wells and springs used for the potable supply of Catania and various other villages exceed the maximum admissible concentrations (CMA) fixed by law n. 236 of 1988. Literal observance of the law in force has led to a prohibition on drinking such waters, although the above-mentioned substances are not harmful to health at the concentrations found. Further problems have arisen from the presence of vanadium, even though no CMA has been fixed for this element. All this has caused serious hardship to the population and health risks due to the reduced water supply. To avoid such inconveniences, revision of the law in force is necessary for all geographical areas whose waters are naturally rich in non-toxic elements. For these elements it would be appropriate to establish indicative, non-prescriptive acceptability levels instead of CMA.

  13. Air Quality Monitoring with Routine Utilization of Ion Beam Analysis

    International Nuclear Information System (INIS)

    Wegrzynek, D.

    2013-01-01

    Full text: Information on source contributions to ambient air particulate concentrations is a vital tool for air quality management. Traditional gravimetric analysis of airborne particulate matter is unable to provide information on the sources contributing to air particulate concentrations. Ion beam analysis is used to identify the elemental composition of air particulates for source apportionment and for determining the relative contribution of biogenic and anthropogenic sources to air particulate pollution. The elemental composition is obtained by the proton induced X-ray emission (PIXE) technique, which is an ion beam analysis (IBA) technique. The element concentrations are deduced from the X-ray spectra produced when the particulate collected on a filter is bombarded with a high-energy proton beam. As part of the UNDP/IAEA/RCA Project RAS/8/082 ‘Better Management of the Environment, Natural Resources and Industrial Growth through Isotope and Radiation Technology,’ a collaborative alliance was formed between the Institute of Geological and Nuclear Sciences Limited and the Wellington Regional Council, New Zealand [1]. The purpose of the project was to examine the elemental composition of air particulate matter and determine its origins through source apportionment techniques. In New Zealand, PM10 and PM2.5 fractions have been collected at the industrial area of Seaview, Wellington over two years using a GENT stacked filter unit sampler. Concentrations of elements with atomic mass above neon were determined using ion beam analysis, and elemental carbon concentrations were determined using a reflectometer. Specific ambient source elemental 'fingerprints' were then determined by factor analysis, and the relative contributions of various local and regional sources were assessed. The significant factors (sources) were determined to be sea salt, soil, industry, and combustion sources. Local industry was found to contribute to ambient lead concentrations. (author)

  14. Multi-Robot Assembly Strategies and Metrics

    Science.gov (United States)

    MARVEL, JEREMY A.; BOSTELMAN, ROGER; FALCO, JOE

    2018-01-01

    We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies. PMID:29497234

  15. Multi-Robot Assembly Strategies and Metrics.

    Science.gov (United States)

    Marvel, Jeremy A; Bostelman, Roger; Falco, Joe

    2018-02-01

    We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies.

  16. Factors Associated with the Utilization and Quality of Prenatal Care in Western Rural Regions of China

    Science.gov (United States)

    Dongxu, Wang; Yuhui, Shi; Stewart, Donald; Chun, Chang; Chaoyang, Li

    2012-01-01

    Purpose: The paper seeks to identify key features of prenatal care utilization and quality in western regions of China and to determine the factors affecting the quality of prenatal care. Design/methodology/approach: A descriptive, cross-sectional study was conducted. The instrument for the study was a 10-stem respondent-administered, structured…

  17. The study of nano technology utilization in upgrading the quality of ...

    African Journals Online (AJOL)

    Today, the first requirement for increasing quality and strength in the construction industry is the use of appropriate, high-quality materials: in addition to reducing maintenance costs and extending the structure's service life, such materials offer engineering benefits such as weight reduction ...

  18. Validation of Metrics for Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2008-01-01

    This paper describes new concepts for the validation of collaborative systems metrics. It defines the quality characteristics of collaborative systems, proposes a metric to estimate the quality level of collaborative systems, and reports measurements of collaborative systems quality performed using specially designed software.

  19. Validation of Metrics for Collaborative Systems

    OpenAIRE

    Ion IVAN; Cristian CIUREA

    2008-01-01

    This paper describes new concepts for the validation of collaborative systems metrics. It defines the quality characteristics of collaborative systems, proposes a metric to estimate the quality level of collaborative systems, and reports measurements of collaborative systems quality performed using specially designed software.

  20. Use of plan quality degradation to evaluate tradeoffs in delivery efficiency and clinical plan metrics arising from IMRT optimizer and sequencer compromises

    Science.gov (United States)

    Wilkie, Joel R.; Matuszak, Martha M.; Feng, Mary; Moran, Jean M.; Fraass, Benedick A.

    2013-01-01

    Purpose: Plan degradation resulting from compromises made to enhance delivery efficiency is an important consideration for intensity modulated radiation therapy (IMRT) treatment plans. IMRT optimization and/or multileaf collimator (MLC) sequencing schemes can be modified to generate more efficient treatment delivery, but the effect those modifications have on plan quality is often difficult to quantify. In this work, the authors present a method for quantitative assessment of overall plan quality degradation due to tradeoffs between delivery efficiency and treatment plan quality, illustrated using comparisons between plans developed allowing different numbers of intensity levels in IMRT optimization and/or MLC sequencing for static segmental MLC IMRT plans. Methods: A plan quality degradation method to evaluate delivery efficiency and plan quality tradeoffs was developed and used to assess planning for 14 prostate and 12 head and neck patients treated with static IMRT. Plan quality was evaluated using a physician's predetermined “quality degradation” factors for relevant clinical plan metrics associated with the plan optimization strategy. Delivery efficiency and plan quality were assessed for a range of optimization and sequencing limitations. The “optimal” (baseline) plan for each case was derived using a clinical cost function with an unlimited number of intensity levels. These plans were sequenced with a clinical MLC leaf sequencer which uses >100 segments, assuring delivered intensities to be within 1% of the optimized intensity pattern. Each patient's optimal plan was also sequenced limiting the number of intensity levels (20, 10, and 5), and then separately optimized with these same numbers of intensity levels. Delivery time was measured for all plans, and direct evaluation of the tradeoffs between delivery time and plan degradation was performed. Results: When considering tradeoffs, the optimal number of intensity levels depends on the treatment
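
    A minimal sketch of the kind of composite quality-degradation score the abstract describes, with entirely hypothetical metric names, physician-assigned degradation factors, and plan values:

      # degradation factor = penalty per unit deviation from the baseline plan's metric value
      factors = {"PTV_D95_Gy": 2.0, "rectum_V70_pct": 1.5, "bladder_V65_pct": 1.0}
      baseline = {"PTV_D95_Gy": 76.0, "rectum_V70_pct": 12.0, "bladder_V65_pct": 20.0}

      def degradation(plan):
          """Weighted sum of deviations from the baseline (optimal) plan's metrics."""
          return sum(f * abs(plan[m] - baseline[m]) for m, f in factors.items())

      # e.g., the same case re-optimized with only 10 intensity levels
      plan_10_levels = {"PTV_D95_Gy": 75.2, "rectum_V70_pct": 13.1, "bladder_V65_pct": 21.5}
      print(degradation(plan_10_levels))  # weigh this against the measured delivery time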

  1. Utilization of Light Detection and Ranging for Quality Control and Quality Assurance of Pavement Grades

    Science.gov (United States)

    2018-02-01

    Light Detection and Ranging (Lidar) technology is a useful tool that can assist transportation agencies during the design, construction, and maintenance phases of transportation projects. To demonstrate the utility of Lidar, this report discusses how...

  2. Coverage and quality: A comparison of Web of Science and Scopus databases for reporting faculty nursing publication metrics.

    Science.gov (United States)

    Powell, Kimberly R; Peterson, Shenita R

    Web of Science and Scopus are the leading databases of scholarly impact. Recent studies outside the field of nursing report differences in their journal coverage and quality. We performed a comparative analysis of the reported impact of nursing publications. Journal coverage by each database for the field of nursing was compared. Additionally, publications by 2014 nursing faculty were collected in both databases and compared for overall coverage and reported quality, as modeled by SCImago Journal Rank, peer review status, and MEDLINE inclusion. Individual author impact, modeled by the h-index, was calculated in each database for comparison. Scopus offered significantly higher journal coverage. For 2014 faculty publications, 100% of journals were found in Scopus; Web of Science offered 82%. No significant difference was found in the quality of reported journals. Author h-index was found to be higher in Scopus. When reporting faculty publications and scholarly impact, academic nursing programs may be better represented by Scopus, without compromising journal quality. Programs with strong interdisciplinary work should examine all areas of strength to ensure appropriate coverage. Copyright © 2017 Elsevier Inc. All rights reserved.
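
    Since the h-index is the quantity each database computes here, a short sketch makes the coverage effect concrete: with fewer indexed papers, the same author's h-index can only stay equal or drop. The citation counts below are hypothetical:

      def h_index(citations):
          """Largest h such that at least h papers have >= h citations each."""
          h = 0
          for rank, c in enumerate(sorted(citations, reverse=True), start=1):
              if c >= rank:
                  h = rank
              else:
                  break
          return h

      scopus_counts = [12, 9, 7, 7, 5, 3, 2, 1]  # all papers indexed in Scopus
      wos_counts = [12, 9, 7, 5, 3, 1]           # the subset indexed in Web of Science
      print(h_index(scopus_counts), h_index(wos_counts))  # -> 5 4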

  3. Quality Markers in Cardiology. Main Markers to Measure Quality of Results (Outcomes) and Quality Measures Related to Better Results in Clinical Practice (Performance Metrics). INCARDIO (Indicadores de Calidad en Unidades Asistenciales del Área del Corazón): A SEC/SECTCV Consensus Position Paper.

    Science.gov (United States)

    López-Sendón, José; González-Juanatey, José Ramón; Pinto, Fausto; Cuenca Castillo, José; Badimón, Lina; Dalmau, Regina; González Torrecilla, Esteban; López-Mínguez, José Ramón; Maceira, Alicia M; Pascual-Figal, Domingo; Pomar Moya-Prats, José Luis; Sionis, Alessandro; Zamorano, José Luis

    2015-11-01

    Cardiology practice requires complex organization that impacts overall outcomes and may differ substantially among hospitals and communities. The aim of this consensus document is to define quality markers in cardiology, including markers to measure the quality of results (outcomes metrics) and quality measures related to better results in clinical practice (performance metrics). The document is mainly intended for the Spanish health care system and may serve as a basis for similar documents in other countries. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  4. Use of a line-pair resolution phantom for comprehensive quality assurance of electronic portal imaging devices based on fundamental imaging metrics

    International Nuclear Information System (INIS)

    Gopal, Arun; Samant, Sanjiv S.

    2009-01-01

    Image guided radiation therapy solutions based on megavoltage computed tomography (MVCT) involve the extension of electronic portal imaging devices (EPIDs) from their traditional role of weekly localization imaging and planar dose mapping to volumetric imaging for 3D setup and dose verification. To sustain the potential advantages of MVCT, EPIDs are required to provide improved levels of portal image quality. Therefore, it is vital that the performance of EPIDs in clinical use is maintained at an optimal level through regular and rigorous quality assurance (QA). Traditionally, portal imaging QA has been carried out by imaging calibrated line-pair and contrast resolution phantoms and obtaining arbitrarily defined QA indices that are usually dependent on imaging conditions and merely indicate relative trends in imaging performance. They are not adequately sensitive to all aspects of image quality unlike fundamental imaging metrics such as the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) that are widely used to characterize detector performance in radiographic imaging and would be ideal for QA purposes. However, due to the difficulty of performing conventional MTF measurements, they have not been used for routine clinical QA. The authors present a simple and quick QA methodology based on obtaining the MTF, NPS, and DQE of a megavoltage imager by imaging standard open fields and a bar-pattern QA phantom containing 2 mm thick tungsten line-pair bar resolution targets. Our bar-pattern based MTF measurement features a novel zero-frequency normalization scheme that eliminates normalization errors typically associated with traditional bar-pattern measurements at megavoltage x-ray energies. The bar-pattern QA phantom and open-field images are used in conjunction with an automated image analysis algorithm that quickly computes the MTF, NPS, and DQE of an EPID system. Our approach combines the fundamental advantages of
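
    For context, the three detector metrics the phantom method yields are linked by the standard cascaded-systems relation (a textbook definition, not a formula quoted from the paper), with d-bar the mean detector signal and q-bar the incident photon fluence per unit area:

      \mathrm{DQE}(f) \;=\; \frac{\bar{d}^{\,2}\,\mathrm{MTF}^{2}(f)}{\bar{q}\,\mathrm{NPS}(f)}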

  5. Drug utilization research in primary health care as exemplified by physicians' quality assessment groups.

    Science.gov (United States)

    von Ferber, L; Luciano, A; Köster, I; Krappweis, J

    1992-11-01

    Drugs in primary health care are often prescribed for nonrational reasons. Drug utilization research investigates the prescription of drugs with an eye to the medical, social, and economic causes and consequences of drug utilization. The results of this research show distinct differences in drug utilization between different age groups and between men and women. Indication and dosage often appear irrational from a textbook point of view, which points to nonpharmacological causes of drug utilization. To advise changes for the better successfully, quality assessment groups of primary health care physicians receive information about their established prescribing behavior through analysis of their prescriptions. The discussion and comparisons within the group allow them to recognize their irrational prescribing and the social, psychological, and economic reasons behind it. Guidelines for treatment are worked out which take into account the primary health care physician's situation. After a year with six meetings of the quality assessment groups, the educational process is evaluated by another drug utilization analysis based on the physicians' prescriptions. The evaluation shows a remarkable improvement in the quality and cost-effectiveness of the drug therapy of the participating physicians.

  6. Metrics of quantum states

    International Nuclear Information System (INIS)

    Ma Zhihao; Chen Jingling

    2011-01-01

    In this work we study metrics of quantum states, which are natural generalizations of the usual trace metric and Bures metric. Some useful properties of the metrics are proved, such as joint convexity and contractivity under quantum operations. Our result has a potential application in studying the geometry of quantum states as well as in entanglement detection.
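
    For reference, the two base metrics the abstract generalizes are conventionally defined as follows (standard definitions), where F is the Uhlmann fidelity:

      D_{\mathrm{tr}}(\rho,\sigma) = \tfrac{1}{2}\left\lVert \rho-\sigma \right\rVert_{1},
      \qquad
      D_{B}(\rho,\sigma) = \sqrt{2\Bigl(1-\sqrt{F(\rho,\sigma)}\Bigr)},
      \qquad
      F(\rho,\sigma) = \Bigl(\mathrm{Tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\Bigr)^{2}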

  7. Next-Generation Metrics: Responsible Metrics & Evaluation for Open Science

    Energy Technology Data Exchange (ETDEWEB)

    Wilsdon, J.; Bar-Ilan, J.; Peters, I.; Wouters, P.

    2016-07-01

    Metrics evoke a mixed reaction from the research community. A commitment to using data to inform decisions makes some enthusiastic about the prospect of granular, real-time analysis of research and its wider impacts. Yet we only have to look at the blunt use of metrics such as journal impact factors, h-indices and grant income targets to be reminded of the pitfalls. Some of the most precious qualities of academic culture resist simple quantification, and individual indicators often struggle to do justice to the richness and plurality of research. Too often, poorly designed evaluation criteria are “dominating minds, distorting behaviour and determining careers (Lawrence, 2007).” Metrics hold real power: they are constitutive of values, identities and livelihoods. How to exercise that power to more positive ends has been the focus of several recent and complementary initiatives, including the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto and The Metric Tide (a UK government review of the role of metrics in research management and assessment). Building on these initiatives, the European Commission, under its new Open Science Policy Platform, is now looking to develop a framework for responsible metrics for research management and evaluation, which can be incorporated into the successor framework to Horizon 2020. (Author)

  8. Return to intended oncologic treatment (RIOT): a novel metric for evaluating the quality of oncosurgical therapy for malignancy.

    Science.gov (United States)

    Aloia, Thomas A; Zimmitti, Giuseppe; Conrad, Claudius; Gottumukalla, Vijaya; Kopetz, Scott; Vauthey, Jean-Nicolas

    2014-08-01

    After cancer surgery, complications and disability prevent some patients from receiving subsequent treatments. Given that an inability to complete all intended cancer therapies might negate the oncologic benefits of surgical therapy, strategies to improve return to intended oncologic treatment (RIOT), including minimally invasive surgery (MIS), are being investigated. This project was designed to evaluate liver tumor patients to determine the RIOT rate, risk factors for inability to RIOT, and its impact on survivals. Outcomes for a homogenous cohort of 223 patients who underwent open-approach surgery for metachronous colorectal liver metastases and a group of 27 liver tumor patients treated with MIS hepatectomy were examined. Of the 223 open-approach patients, 167 were offered postoperative therapy, yielding a RIOT rate of 75%. The remaining 56 (25%) patients were unable to receive further treatment due to surgical complications (n = 29) or poor performance status (n = 27). Risk factors associated with inability to RIOT were hypertension (OR 2.2, P = 0.025), multiple preoperative chemotherapy regimens (OR 5.9, P = 0.039), and postoperative complications (OR 2.0, P = 0.039). Inability to RIOT correlated with shorter disease-free and overall survivals. This relationship between RIOT and long-term oncologic outcomes suggests that RIOT rates for both open- and MIS-approach cancer surgery should routinely be reported as a quality indicator. © 2014 Wiley Periodicals, Inc.

  9. $\eta$-metric structures

    OpenAIRE

    Gaba, Yaé Ulrich

    2017-01-01

    In this paper, we discuss recent results about generalized metric spaces and fixed point theory. We introduce the notion of $\eta$-cone metric spaces, give some topological properties and prove some fixed point theorems for contractive type maps on these spaces. In particular we show that these $\eta$-cone metric spaces are natural generalizations of both cone metric spaces and metric type spaces.

  10. [Professional's expectations to improve quality of care and social services utilization in geriatric oncology].

    Science.gov (United States)

    Antoine, Valéry; de Wazières, Benoît; Houédé, Nadine

    2015-02-01

    Coordination of a multidisciplinary and multi-professional intervention is a key issue in the management of elderly cancer patients to improve health status and quality of life. Optimizing the links between professionals is needed to improve care planning and health and social services utilization. Descriptive study in a French University Hospital. A 6-item structured questionnaire was addressed to professionals involved in the global and supportive care of elderly cancer patients (name, location, health care and services actually offered, needs to improve the quality of their intervention). After analysis of the answers, propositions to improve care and services utilization were defined. The 37 respondents identified a total of 166 needs to improve quality of care in geriatric oncology. The major expectations concerned improvement of global/supportive care, better use of health care services, and better coordination between geriatric teams and oncologists. Ten propositions, including a model of in-hospital health care planning, were defined to answer professionals' needs with the aim of optimizing cancer treatment and global care. Identification of available services and of needs can represent a first step in a continuous program to improve quality of care, in line with the French national cancer plan 2014-2019. It allows federating professionals in a coordination effort and better organizing clinical activity in geriatric oncology, to optimize clinical practice and global care. Copyright © 2014 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  11. Thermodynamic metrics and optimal paths.

    Science.gov (United States)

    Sivak, David A; Crooks, Gavin E

    2012-05-11

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
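
    For reference, the friction-tensor construction described above can be written compactly. A minimal sketch in the notation common to the thermodynamic-length literature (conjugate forces $X_i$, control parameters $\lambda_i$, inverse temperature $\beta$); the paper's exact conventions may differ:

        $$ \zeta_{ij}(\lambda) = \beta \int_{0}^{\infty} dt'\,
           \langle \delta X_i(0)\, \delta X_j(t') \rangle_{\lambda}, \qquad
           \langle P_{\mathrm{ex}} \rangle \approx
           \dot{\lambda}^{\top}\, \zeta(\lambda)\, \dot{\lambda} $$

    Within linear response, the excess power is a quadratic form in the control velocities, so minimum-dissipation protocols follow geodesics of the Riemannian metric $\zeta$.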

  12. Quality of life and functional capacity outcomes in the MOMENTUM 3 trial at 6 months: A call for new metrics for left ventricular assist device patients.

    Science.gov (United States)

    Cowger, Jennifer A; Naka, Yoshifumi; Aaronson, Keith D; Horstmanshof, Douglas; Gulati, Sanjeev; Rinde-Hoffman, Debbie; Pinney, Sean; Adatya, Sirtaz; Farrar, David J; Jorde, Ulrich P

    2018-01-01

    The Multicenter Study of MAGLEV Technology in Patients Undergoing Mechanical Circulatory Support Therapy with HeartMate 3 (MOMENTUM 3) clinical trial demonstrated improved 6-month event-free survival, but a detailed analysis of health-related quality of life (HR-QOL) and functional capacity (FC) was not presented. Further, the effect of early serious adverse events (SAEs) on these metrics and on the general ability to live well while supported with a left ventricular assist system (LVAS) warrants evaluation. FC (New York Heart Association [NYHA] and 6-minute walk test [6MWT]) and HR-QOL (European Quality of Life [EQ-5D-5L] and the Kansas City Cardiomyopathy Questionnaire [KCCQ]) assessments were obtained at baseline and 6 months after HeartMate 3 (HM3, n = 151; Abbott, Abbott Park, IL) or HeartMate II (HMII, n = 138; Abbott) implant as part of the MOMENTUM 3 clinical trial. Metrics were compared between devices and in those with and without events. The proportion of patients "living well on an LVAS" at 6 months, defined as alive with satisfactory FC (NYHA I/II or 6MWT > 300 meters) and HR-QOL (overall KCCQ > 50), was evaluated. Although the median (25th-75th percentile) patient KCCQ (change for HM3: +28 [10-46]; HMII: +29 [9-48]) and EQ-5D-5L (change for HM3: -1 [-5 to 0]; HMII: -2 [-6 to 0]) scores improved from baseline to 6 months (p < ...), the improvements did not differ between devices (p > 0.05). Likewise, there was an equivalent improvement in 6MWT distance at 6 months in HM3 (+94 [1-274] meters) and HMII (+188 [43-340] meters) from baseline. In patients with SAEs (n = 188), 6MWTs increased from baseline (p < 0.001), but gains for both devices were less than those without SAE (HM3: +74 [-9 to 183] meters with SAE vs +140 [35-329] meters without SAE; HMII: +177 [47-356] meters with SAE vs +192 [23-337] meters without SAE, both p < 0.003). SAEs did not affect the 6-month HR-QOL scores. The "living well" end point was achieved in 145 HM3 (63%) and 120 HMII (68%) patients (p = 0.44). Gains in HR-QOL and FC were similar early after HM3
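
    The "living well on an LVAS" composite defined above is straightforward to compute from the stated criteria. A minimal sketch in Python, assuming per-patient 6-month records with the four fields used in the definition (field and function names are hypothetical):

        from dataclasses import dataclass

        @dataclass
        class Patient6Mo:
            alive: bool
            nyha_class: int       # NYHA functional class, 1-4
            six_mwt_m: float      # 6-minute walk distance, meters
            kccq_overall: float   # overall KCCQ score, 0-100

        def living_well(p: Patient6Mo) -> bool:
            """Alive with satisfactory functional capacity (NYHA I/II or
            6MWT > 300 m) and satisfactory HR-QOL (overall KCCQ > 50)."""
            satisfactory_fc = p.nyha_class <= 2 or p.six_mwt_m > 300
            satisfactory_qol = p.kccq_overall > 50
            return p.alive and satisfactory_fc and satisfactory_qol

        # Proportion of a cohort meeting the endpoint
        cohort = [Patient6Mo(True, 2, 320.0, 62.5), Patient6Mo(True, 3, 250.0, 45.0)]
        rate = sum(living_well(p) for p in cohort) / len(cohort)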

  13. The application of a figure of merit for nuclear explosive utility as a metric for material attractiveness in a nuclear material theft scenario

    International Nuclear Information System (INIS)

    King, Wayne E.; Bradley, Keith; Jones, Edwin D.; Kramer, Kevin J.; Latkowski, Jeffery F.; Robel, Martin; Sleaford, Brad W.

    2010-01-01

    Effective integration of nonproliferation management into the design process is key to the broad deployment of advanced nuclear energy systems, and is an explicit goal of the Laser Inertial Fusion Energy (LIFE) project at Lawrence Livermore National Laboratory. The nuclear explosives utility of a nuclear material to a state (proliferator) or sub-state (terrorist) is a critical factor to be assessed and is one aspect of material attractiveness. In this work, we approached nuclear explosives utility through the calculation of a 'figure of merit' (FOM) that has recently been developed to capture the relative viability and difficulty of constructing nuclear explosives starting from various nuclear material forms and compositions. We discuss the integration of the figure of merit into an assessment of a nuclear material theft scenario and its use in the assessment. This paper demonstrates that material attractiveness is a multidimensional concept that embodies more than the FOM. It also seeks to propose that other attributes may be able to be quantified through analogous FOMs (e.g., transformation) and that, with quantification, aggregation may be possible using concepts from the risk community.

  14. The application of a figure of merit for nuclear explosive utility as a metric for material attractiveness in a nuclear material theft scenario

    Energy Technology Data Exchange (ETDEWEB)

    King, Wayne E., E-mail: weking@llnl.gov [Physical and Life Sciences Directorate, Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States); Bradley, Keith [Global Security Directorate, Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States); Jones, Edwin D. [Physical and Life Sciences Directorate, Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States); Kramer, Kevin J.; Latkowski, Jeffery F. [Engineering Directorate, Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States); Robel, Martin [Physical and Life Sciences Directorate, Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States); Sleaford, Brad W. [Engineering Directorate, Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States)

    2010-11-15

    Effective integration of nonproliferation management into the design process is key to the broad deployment of advanced nuclear energy systems, and is an explicit goal of the Laser Inertial Fusion Energy (LIFE) project at Lawrence Livermore National Laboratory. The nuclear explosives utility of a nuclear material to a state (proliferator) or sub-state (terrorist) is a critical factor to be assessed and is one aspect of material attractiveness. In this work, we approached nuclear explosives utility through the calculation of a 'figure of merit' (FOM) that has recently been developed to capture the relative viability and difficulty of constructing nuclear explosives starting from various nuclear material forms and compositions. We discuss the integration of the figure of merit into an assessment of a nuclear material theft scenario and its use in the assessment. This paper demonstrates that material attractiveness is a multidimensional concept that embodies more than the FOM. It also seeks to propose that other attributes may be able to be quantified through analogous FOMs (e.g., transformation) and that, with quantification, aggregation may be possible using concepts from the risk community.

  15. Neurosurgical virtual reality simulation metrics to assess psychomotor skills during brain tumor resection.

    Science.gov (United States)

    Azarnoush, Hamed; Alzhrani, Gmaan; Winkler-Schwartz, Alexander; Alotaibi, Fahad; Gelinas-Phaneuf, Nicholas; Pazos, Valérie; Choudhury, Nusrat; Fares, Jawad; DiRaddo, Robert; Del Maestro, Rolando F

    2015-05-01

    Virtual reality simulator technology together with novel metrics could advance our understanding of expert neurosurgical performance and modify and improve resident training and assessment. This pilot study introduces innovative metrics that can be measured by the state-of-the-art simulator to assess performance. Such metrics cannot be measured in an operating room and have not been used previously to assess performance. Three sets of performance metrics were assessed utilizing the NeuroTouch platform in six scenarios with simulated brain tumors having different visual and tactile characteristics. Tier 1 metrics included percentage of brain tumor resected and volume of simulated "normal" brain tissue removed. Tier 2 metrics included instrument tip path length, time taken to resect the brain tumor, pedal activation frequency, and sum of applied forces. Tier 3 metrics included sum of forces applied to different tumor regions and the force bandwidth derived from the force histogram. The results outlined are from a novice resident in the second year of training and an expert neurosurgeon. The three tiers of metrics obtained from the NeuroTouch simulator do encompass the wide variability of technical performance observed during novice/expert resections of simulated brain tumors and can be employed to quantify the safety, quality, and efficiency of technical performance during simulated brain tumor resection. Tier 3 metrics derived from force pyramids and force histograms may be particularly useful in assessing simulated brain tumor resections. Our pilot study demonstrates that the safety, quality, and efficiency of novice and expert operators can be measured using metrics derived from the NeuroTouch platform, helping to understand how specific operator performance is dependent on both psychomotor ability and cognitive input during multiple virtual reality brain tumor resections.
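
    The tier 2 measures described (path length, resection time, applied force) can be reconstructed from sampled simulator logs. A minimal sketch, assuming uniformly sampled tip positions and force magnitudes; the NeuroTouch internals are not public, so names and units here are illustrative:

        import numpy as np

        def tier2_metrics(tip_xyz: np.ndarray, force_n: np.ndarray, dt: float) -> dict:
            """tip_xyz: (T, 3) instrument tip positions in meters;
            force_n: (T,) applied force magnitudes in newtons;
            dt: sampling interval in seconds."""
            path_length = np.linalg.norm(np.diff(tip_xyz, axis=0), axis=1).sum()
            return {
                "path_length_m": float(path_length),
                "resection_time_s": len(tip_xyz) * dt,
                "sum_of_forces_Ns": float(force_n.sum() * dt),  # force-time integral
            }

    A tier 3 style force histogram is then simply np.histogram(force_n), with its spread (the "force bandwidth") summarized from that histogram.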

  16. An entropy generation metric for non-energy systems assessments

    International Nuclear Information System (INIS)

    Sekulic, Dusan P.

    2009-01-01

    Processes in non-energy systems have not been as frequent a subject of thermodynamics-based sustainability studies as processes in energy systems. This paper offers insight into thermodynamic thinking devoted to the selection of a sustainability metric based on entropy balancing of a non-energy system. An underlying objective in this sustainability-oriented study is product quality involving thermal processing during manufacturing vs. resource (say, energy) utilization. The product quality for the considered family of materials processing for manufacturing is postulated as inherently controlled by the imposed temperature non-uniformity margins. These temperature non-uniformities can be converted into a thermodynamic metric which can be related either to destruction of exergy of the available resource or, on a more fundamental level of process quality, to entropy generation inherent to the considered manufacturing system. Hence, a manufacturing system can be considered as if it were an energy system, although in the latter case the system objective would be quite different. In a non-energy process, a metric may indicate the level of perfection of the process (not necessarily energy efficiency) and may be related to the sustainability footprint or, as advocated in this paper, to product quality. Controlled atmosphere brazing (CAB) of aluminum, a state-of-the-art manufacturing process involving mass production of compact heat exchangers for the automotive, aerospace and process industries, is used as an example.

  17. Dynamic Evaluation of Water Quality Improvement Based on Effective Utilization of Stockbreeding Biomass Resource

    Directory of Open Access Journals (Sweden)

    Jingjing Yan

    2014-11-01

    Full Text Available The stockbreeding industry is growing rapidly in rural regions of China, carrying a high risk to the water environment due to the emission of huge amounts of pollutants in terms of COD, T-N and T-P into rivers. On the other hand, as a typical biomass resource, stockbreeding waste can be used as a clean energy source through biomass utilization technologies. In this paper, we constructed a dynamic linear optimization model to simulate synthetic water environment management policies, covering both the water environment system and socio-economic changes over 10 years. Based on the simulation, the model can estimate trends in water quality, production of stockbreeding biomass energy and economic development under given water environment restrictions. We examined seven towns of Shunyi district of Beijing as the target area, analysing synthetic water environment management policies by computer simulation based on the effective utilization of stockbreeding biomass resources to improve water quality and realize sustainable development. The purpose of our research is to establish an effective utilization method for biomass resources incorporating water environment preservation, resource reutilization and economic development, and finally to realize the sustainable development of the society.
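
    A dynamic linear optimization of this kind can be prototyped with an off-the-shelf LP solver. A minimal single-period sketch using scipy.optimize.linprog; all coefficients are hypothetical placeholders, not values from the study:

        import numpy as np
        from scipy.optimize import linprog

        # x0 = livestock production level; x1 = waste routed to biomass
        # utilization. Objective: maximize net economic benefit.
        profit = np.array([5.0, 1.2])
        c = -profit  # linprog minimizes

        A_ub = np.array([
            [8.0, -6.0],   # net COD load: emission minus treatment removal
            [1.0,  0.0],   # herd-size (capacity) cap
            [-1.0, 1.0],   # cannot treat more waste than is produced
        ])
        b_ub = np.array([40.0, 10.0, 0.0])

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        print(res.x, -res.fun)  # optimal activity levels, net benefit

    A 10-year dynamic version stacks one such block per year and links the years with additional constraints (e.g., herd growth, cumulative pollutant loads).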

  18. Energy and water quality management systems for water utility's operations: a review.

    Science.gov (United States)

    Cherchi, Carla; Badruzzaman, Mohammad; Oppenheimer, Joan; Bros, Christopher M; Jacangelo, Joseph G

    2015-04-15

    Holistic management of water and energy resources is critical for water utilities facing increasing energy prices, water supply shortages and stringent regulatory requirements. In the early 1990s, the concept of an integrated Energy and Water Quality Management System (EWQMS) was developed as an operational optimization framework for solving water quality, water supply and energy management problems simultaneously. Approximately twenty water utilities have implemented an EWQMS by interfacing commercial or in-house software optimization programs with existing control systems. For utilities with an installed EWQMS, operating cost savings of 8-15% have been reported, due to greater use of cheaper tariff periods and better operating efficiencies, with resulting reductions in energy consumption of ∼6-9%. This review provides the current state of knowledge on typical EWQMS structural features and operational strategies, and analyzes their benefits and drawbacks. The review also highlights the challenges encountered during installation and implementation of EWQMS and identifies the knowledge gaps that should motivate new research efforts. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Drinking water sources, availability, quality, access and utilization for goats in the Karak Governorate, Jordan.

    Science.gov (United States)

    Al-Khaza'leh, Ja'far Mansur; Reiber, Christoph; Al Baqain, Raid; Valle Zárate, Anne

    2015-01-01

    Goat production is an important agricultural activity in Jordan. The country is one of the poorest countries in the world in terms of water scarcity. Provision of a sufficient quantity of good-quality drinking water is important for goats to maintain feed intake and production. This study aimed to evaluate the seasonal availability and quality of goats' drinking water sources, and their accessibility and utilization, in different zones of the Karak Governorate in southern Jordan. Data collection methods comprised interviews with purposively selected farmers and quality assessment of water sources. The provision of drinking water was considered one of the major constraints on goat production, particularly during the dry season (DS). Long travel distances to the water sources, waiting time at watering points, and high fuel and labor costs were the key reasons associated with the problem. All values of water quality (WQ) parameters were within the acceptable limits of the guidelines for livestock drinking WQ, with the exception of iron, which showed a slightly elevated concentration in one borehole source in the DS. These findings show that water shortage is an important problem with consequences for goat keepers. To alleviate the water shortage constraint, and in view of the depleted groundwater sources, alternative water sources at reasonable distance have to be tapped and monitored for water quality, and more efficient use of rainwater harvesting systems in the study area is recommended.

  20. METRIC context unit architecture

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, R.O.

    1988-01-01

    METRIC is an architecture for a simple but powerful Reduced Instruction Set Computer (RISC). Its speed comes from the simultaneous processing of several instruction streams, with instructions from the various streams being dispatched into METRIC's execution pipeline as they become available for execution. The pipeline is thus kept full, with a mix of instructions for several contexts in execution at the same time. True parallel programming is supported within a single execution unit, the METRIC Context Unit. METRIC's architecture provides for expansion through the addition of multiple Context Units and of specialized Functional Units. The architecture thus spans a range of size and performance from a single-chip microcomputer up through large and powerful multiprocessors. This research concentrates on the specification of the METRIC Context Unit at the architectural level. Performance tradeoffs made during METRIC's design are discussed, and projections of METRIC's performance are made based on simulation studies.

  1. An Innovative Metric to Evaluate Satellite Precipitation's Spatial Distribution

    Science.gov (United States)

    Liu, H.; Chu, W.; Gao, X.; Sorooshian, S.

    2011-12-01

    Thanks to their capability to cover mountains, where ground measurement instruments cannot reach, satellites provide a good means of estimating precipitation over mountainous regions. In regions with complex terrain, accurate information on the high-resolution spatial distribution of precipitation is critical for many important issues, such as flood/landslide warning, reservoir operation, water system planning, etc. Therefore, in order to be useful in many practical applications, satellite precipitation products should possess high quality in characterizing spatial distribution. However, most existing validation metrics, which are based on point/grid comparison using simple statistics, cannot effectively measure a satellite's skill in capturing the spatial patterns of precipitation fields. This deficiency results from the fact that point/grid-wise comparison does not take into account the spatial coherence of precipitation fields. Furthermore, another weakness of many metrics is that they can barely provide information on why satellite products perform well or poorly. Motivated by our recent findings of consistent spatial patterns in the precipitation field over the western U.S., we developed a new metric utilizing EOF analysis and Shannon entropy. The metric is derived in two steps: 1) capture the dominant spatial patterns of precipitation fields from both satellite products and reference data through EOF analysis, and 2) compute the similarities between the corresponding dominant patterns using a mutual information measure defined with Shannon entropy. Instead of individual points/grids, the new metric treats the entire precipitation field simultaneously, naturally taking advantage of spatial dependence. Since the dominant spatial patterns are shaped by physical processes, the new metric can shed light on why a satellite product can or cannot capture the spatial patterns. For demonstration, an experiment was carried out to evaluate a satellite
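
    The two-step metric can be sketched directly. A minimal Python version, assuming satellite and reference precipitation are (time, space) arrays on a common grid; the histogram MI estimator here is a simple stand-in for whatever estimator the authors used:

        import numpy as np

        def leading_eofs(field: np.ndarray, k: int = 3) -> np.ndarray:
            """Leading spatial patterns (EOFs) of a (time, space) field via SVD."""
            anom = field - field.mean(axis=0)
            _, _, vt = np.linalg.svd(anom, full_matrices=False)
            return vt[:k]  # (k, space) dominant patterns

        def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 16) -> float:
            """Histogram estimate of the Shannon mutual information of two patterns."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy /= pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

        # Compare the dominant pattern of a satellite product with the reference:
        # score = mutual_information(leading_eofs(sat)[0], leading_eofs(ref)[0])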

  2. [Valuation of health-related quality of life and utilities in health economics].

    Science.gov (United States)

    Greiner, Wolfgang; Klose, Kristina

    2014-01-01

    Measuring health-related quality of life is an important aspect in economic evaluation of health programmes. The development of utility-based (preference-based) measures is advanced by the discipline of health economics. Different preference measures are applied for valuing health states to produce a weighted health state index. Those preference weights should be derived from a general population sample in case of resource allocation on a collective level (as in current valuation studies of the EuroQol group). Copyright © 2014. Published by Elsevier GmbH.

  3. Economic Valuation on Change of Tourism Quality in Rawapening, Indonesia: An Application of Random Utility Method

    Science.gov (United States)

    Subanti, S.; Irawan, B. R. M. B.; Sasongko, G.; Hakim, A. R.

    2017-04-01

    This study aims to determine the gain (or loss) earned by economic actors in tourism activities if the condition or quality of tourism in Rawapening is improved (or deteriorates). Changes in condition or quality can be described by travel cost, the natural environment, Japanese cultural performances, and traditional markets. The method used to measure the change in economic benefit or loss is a random utility approach. The study found that travel cost, the natural environment, Japanese cultural performances, and traditional markets were significant factors in respondents' preferences for choosing a change of tourism condition. The value of compensation received by visitors as a result of improved conditions is 2,932 billion, while that for worsened conditions is 2,628 billion. The recommendation of this study is that the local government should consider environmental factors in the formulation of tourism development in Rawapening.
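
    Welfare changes of this kind are typically computed with the log-sum (compensating variation) formula of the logit random utility model. A minimal sketch, assuming a linear-in-income utility with marginal utility of income $\alpha$ (usually the negative of the travel-cost coefficient); the paper's exact specification may differ:

        $$ CV = \frac{1}{\alpha}\left[\ln \sum_{j} e^{V_j^{1}} - \ln \sum_{j} e^{V_j^{0}}\right] $$

    where $V_j^{0}$ and $V_j^{1}$ are the representative utilities of alternative $j$ before and after the change in tourism quality.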

  4. Quality of electric service in utility distribution networks under electromagnetic compatibility principles. [ENEL

    Energy Technology Data Exchange (ETDEWEB)

    Chizzolini, P.; Lagostena, L.; Mirra, C.; Sani, G. (ENEL, Rome Milan (Italy))

    1989-03-01

    The development of electromagnetic compatibility criteria, being worked out in international standardization activities, requires establishing the characteristics of public utility distribution networks as a reference environment. This is necessary for gauging immunity levels towards users and for defining disturbance emission limits. It is therefore a new way to look at the quality of electric service. Consequently, it is necessary to check and specify, in a homogeneous manner, the phenomena that affect electric service, making use of experimental tests and of the collection and processing of operating data. In addition to testing techniques, this paper describes the checking procedures for the quality of electric service as implemented in the information system developed by ENEL (Italian Electricity Board) for distribution activities. The first reference data obtained in the national and international fields on voltage shape and supply continuity are also presented.

  5. Report on probabilistic safety assessment (PSA) quality assurance in utilization of risk information

    International Nuclear Information System (INIS)

    2006-12-01

    Recently in Japan, the introduction of nuclear safety regulations using risk information such as probabilistic safety assessment (PSA) has been considered, and the utilization of risk information in rational and practical safety assurance measures has made progress, starting with the operation and inspection areas. This report compiles the results of investigations and studies of PSA quality assurance in risk-informed activities in the USA. The relevant regulatory guide and standard review plan, as well as issues and recommendations, were reviewed for technical adequacy and advancement of probabilistic risk assessment technology in risk-informed decision making. Useful and important information to be referred to as issues in PSA quality assurance was identified. (T. Tanaka)

  6. [Quality of life of patients with asthma on beclomethasone/formoterol. Cost-utility analysis].

    Science.gov (United States)

    García-Ruiz, A J; Quintano Jiménez, J A; García-Agua Soler, N; Ginel Mendoza, L; Hidalgo Requena, A; Del Moral, F

    2016-01-01

    To perform a cost-utility analysis of asthmatic patients on a beclomethasone/formoterol fixed combination in Primary Health Care. Material and methods: Non-probability sampling was used to select a group of patients with moderate/severe persistent asthma (GEMA 2009), treated with the beclomethasone/formoterol fixed combination, aged over 18 years, who had given their informed consent. The study observation period was 6 months. The variables studied were: age, sex, duration of disease, health resources used, and health-related quality of life assessed by the EQ-5D and SF-36 and by the specific Asthma Quality of Life Questionnaire. For the qualitative variables, frequencies and percentages were calculated, and for the quantitative variables, the mean, SD and 95% CI. Chi-square, Student's t-test and ANOVA were used for statistical inference. Comparisons were made at a statistical significance level of 0.05. Of the 64 patients who completed the study, 59.4% were female. The mean age was 49 years, and mean disease duration was 93 months. For asthma control, 53% of patients had a prescription pattern of one inhalation every 12 h. All health-related quality of life scales changed with respect to baseline, and the differences were statistically significant. Our patients had a better health-related quality of life than the Spanish asthma cohort. The incremental cost-utility of beclomethasone/formoterol versus the usual treatment option was €6,256/QALY. Copyright © 2015 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Publicado por Elsevier España, S.L.U. All rights reserved.

  7. Landscape morphology metrics for urban areas: analysis of the role of vegetation in the management of the quality of urban environment

    Directory of Open Access Journals (Sweden)

    Danilo Marques de Magalhães

    2013-05-01

    Full Text Available This study aims to demonstrate the applicability of landscape metric analysis to fragments of urban land use. More specifically, it focuses on low vegetation cover, arboreal and shrub vegetation, and their distribution across land uses. Differences in vegetation cover in dense urban areas are explained. It also briefly discusses the state of the art in Landscape Ecology and landscape metrics. As an example, it develops a case study in Belo Horizonte, Minas Gerais, Brazil. For this study, it uses area metrics: the relations between area, perimeter, core, and circumscribed circle. From this analysis, the paper proposes the definition of priority areas for conservation, urban parks, free spaces of common land, linear parks and green corridors. It is demonstrated that, in order to design the urban landscape, studies of two-dimensional landscape representations are still of interest, but should consider the systemic relations between the different factors of shape and land use.

  8. The Utility of the OMI HCHO/NO2 in Air Quality Decision-Making Activities

    Science.gov (United States)

    Duncan, Bryan

    2010-01-01

    I will discuss a novel and practical application of the OMI HCHO and NO2 data products to the "weight of evidence" in the air quality decision-making process (e.g., State Implementation Plan (SIP)) for a city, region, or state to demonstrate that it is making progress toward attainment of the National Ambient Air Quality Standard (NAAQS) for ozone. Any trend, or lack thereof, in the observed OMI HCHO/NO2 may support that an emission control strategy implemented to reduce ozone is or is not working for a metropolitan area. In addition, the observed OMI HCHO/NO2 may be used to define new emission control strategies as the photochemical environments of urban areas evolve over time. I will demonstrate the utility of the OMI HCHO/NO2 over the U.S. for air quality applications with support from simulations with both a regional model and a photochemical box model. These results support mission planning of an OMI-like instrument for the proposed GEO-CAPE satellite that has as one of its objectives to study air quality from space. However, I'm attending the meeting as the Aura Deputy Project Scientist, so I don't technically need to present anything to justify the travel.

  9. Synthesized view comparison method for no-reference 3D image quality assessment

    Science.gov (United States)

    Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun

    2018-04-01

    We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric, named Synthesized View Comparison (SVC), is designed for real-time quality monitoring at the receiver side of a 3D-TV system. The metric utilizes virtual views at the middle viewpoint that are warped from the left and right views by a Depth-image-based rendering (DIBR) algorithm, and compares the difference between the virtual views rendered from the different cameras using Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
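
    The core comparison is compact enough to sketch. A minimal version using scikit-image's SSIM; the DIBR warping step is assumed to have been done upstream, and grayscale 8-bit views are assumed:

        import numpy as np
        from skimage.metrics import structural_similarity

        def svc_score(virt_from_left: np.ndarray, virt_from_right: np.ndarray) -> float:
            """Quality proxy for a synthesized middle view: SSIM between the
            virtual view warped from the left camera and the one warped from
            the right. Lower consistency suggests poorer synthesis."""
            return float(structural_similarity(virt_from_left, virt_from_right,
                                               data_range=255))

    No reference image of the middle viewpoint is needed, which is what makes the metric usable at the receiver side.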

  10. Pain, health related quality of life and healthcare resource utilization in Spain.

    Science.gov (United States)

    Langley, Paul; Pérez Hernández, Concepción; Margarit Ferri, César; Ruiz Hidalgo, Domingo; Lubián López, Manuel

    2011-01-01

    The aim of this paper is to consider the relationship between the experience of pain, health related quality of life (HRQoL) and healthcare resource utilization in Spain. The analysis contrasts the contribution of pain severity and frequency of pain reported against respondents reporting no pain in the previous month. Data are from the 2010 National Health and Wellness Survey (NHWS) for Spain. Single equation generalized linear regression models are used to evaluate the association of pain with the physical and mental component scores of the SF-12 questionnaire as well as health utilities generated from the SF-6D. In addition, the role of pain is assessed in its association with self-reported healthcare provider visits, emergency room visits and hospitalizations in the previous 6 months. The results indicate that the experience of pain, notably severe and frequent pain, is substantial and is significantly associated with the SF-12 physical component scores, health utilities and all aspects of healthcare resource utilization, which far outweighs the role of demographic and socioeconomic variables, health risk factors (in particular body mass index) and the presence of comorbidities. In the case of severe daily pain, the marginal contribution of the SF-12 physical component score is a deficit of -17.86 compared to those reporting no pain (population average score 46.49), while persons who are morbidly obese report a deficit of only -6.63 compared to those who are normal weight. The corresponding association with health utilities is equally dramatic with a severe daily pain deficit of -0.186 compared to those reporting no pain (average population utility 0.71). The impact of pain on healthcare resource utilization is marked. Severe daily pain increases traditional provider visits by 208.8%, emergency room visits by 373.0% and hospitalizations by 348.5%. As an internet-based survey there is the possibility of bias towards those with internet access, although telephone

  11. A CAD system and quality assurance protocol for bone age assessment utilizing digital hand atlas

    Science.gov (United States)

    Gertych, Arakadiusz; Zhang, Aifeng; Ferrara, Benjamin; Liu, Brent J.

    2007-03-01

    Determination of bone age assessment (BAA) in pediatric radiology is a task based on detailed analysis of a patient's left-hand X-ray. The current standard utilized in clinical practice relies on a subjective comparison of the hand with patterns in a book atlas. The computerized approach to BAA (CBAA) utilizes automatic analysis of the regions of interest in the hand image. This procedure is followed by extraction of quantitative features sensitive to skeletal development that are further converted to a bone age value utilizing knowledge from the digital hand atlas (DHA). This also allows providing BAA results that resemble the current clinical approach. All developed methodologies have been combined into one CAD module with a graphical user interface (GUI). CBAA can also improve statistical and analytical accuracy based on a clinical work-flow analysis. For this purpose a quality assurance protocol (QAP) has been developed. Implementation of the QAP helped to make the CAD more robust and to find images that cannot meet the conditions required by DHA standards. Moreover, the entire CAD-DHA system may gain further benefits if the clinical acquisition protocol is modified. The goal of this study is to present the performance improvement of the overall CAD-DHA system with the QAP and the comparison of the CAD results with the chronological age of 1390 normal subjects from the DHA. The CAD workstation can process images from a local image database or from a PACS server.

  12. Baseline Utilization of Breast Radiotherapy Before Institution of the Medicare Practice Quality Reporting Initiative

    International Nuclear Information System (INIS)

    Smith, Benjamin D.; Smith, Grace L.; Roberts, Kenneth B.; Buchholz, Thomas A.

    2009-01-01

    Purpose: In 2007, Medicare implemented the Physician Quality Reporting Initiative (PQRI), which provides financial incentives to physicians who report their performance on certain quality measures. PQRI measure no. 74 recommends radiotherapy for patients treated with conservative surgery (CS) for invasive breast cancer. As a first step in evaluating the potential impact of this measure, we assessed baseline use of radiotherapy among women diagnosed with invasive breast cancer before implementation of PQRI. Methods and Materials: Using the SEER-Medicare data set, we identified women aged 66-70 diagnosed with invasive breast cancer and treated with CS between 2000 and 2002. Treatment with radiotherapy was determined using SEER and claims data. Multivariate logistic regression tested whether receipt of radiotherapy varied significantly across clinical, pathologic, and treatment covariates. Results: Of 3,674 patients, 94% (3,445) received radiotherapy. In adjusted analysis, the presence of comorbid illness (odds ratio [OR] 1.69; 95% confidence interval [CI], 1.19-2.42) and unmarried marital status were associated with omission of radiotherapy (OR 1.65; 95% CI, 1.22-2.20). In contrast, receipt of chemotherapy was protective against omission of radiotherapy (OR 0.25; 95% CI, 0.16-0.38). Race and geographic region did not correlate with radiotherapy utilization. Conclusions: Utilization of radiotherapy following CS was high for patients treated before institution of PQRI, suggesting that at most 6% of patients could benefit from measure no. 74. Further research is needed to determine whether institution of PQRI will affect radiotherapy utilization.

  13. Metric diffusion along foliations

    CERN Document Server

    Walczak, Szymon M

    2017-01-01

    Up-to-date research in metric diffusion along compact foliations is presented in this book. Beginning with fundamentals from optimal transportation theory and the theory of foliations, the book moves on to cover the Wasserstein distance, the Kantorovich duality theorem, and the metrization of the weak topology by the Wasserstein distance. Metric diffusion is defined, the topology of the metric space is studied, and the limits of diffused metrics along compact foliations are discussed. Essentials on foliations, holonomy, heat diffusion, and compact foliations are detailed, and vital technical lemmas are proved to aid understanding. Graduate students and researchers in geometry, topology and the dynamics of foliations and laminations will find this supplement useful, as it presents facts about metric diffusion along non-compact foliations and provides a full description of the limit for metrics diffused along foliations with at least one compact leaf in two dimensions.

  14. Metric modular spaces

    CERN Document Server

    Chistyakov, Vyacheslav

    2015-01-01

    Aimed toward researchers and graduate students familiar with elements of functional analysis, linear algebra, and general topology; this book contains a general study of modulars, modular spaces, and metric modular spaces. Modulars may be thought of as generalized velocity fields and serve two important purposes: generate metric spaces in a unified manner and provide a weaker convergence, the modular convergence, whose topology is non-metrizable in general. Metric modular spaces are extensions of metric spaces, metric linear spaces, and classical modular linear spaces. The topics covered include the classification of modulars, metrizability of modular spaces, modular transforms and duality between modular spaces, metric and modular topologies. Applications illustrated in this book include: the description of superposition operators acting in modular spaces, the existence of regular selections of set-valued mappings, new interpretations of spaces of Lipschitzian and absolutely continuous mappings, the existe...

  15. Understanding Acceptance of Software Metrics--A Developer Perspective

    Science.gov (United States)

    Umarji, Medha

    2009-01-01

    Software metrics are measures of software products and processes. Metrics are widely used by software organizations to help manage projects, improve product quality and increase efficiency of the software development process. However, metrics programs tend to have a high failure rate in organizations, and developer pushback is one of the sources…

  16. Considerations on development, validation, application, and quality control of immuno(metric) biomarker assays in clinical cancer research: an EORTC-NCI working group report.

    NARCIS (Netherlands)

    Sweep, C.G.J.; Fritsche, H.A.; Gion, M.; Klee, G.G.; Schmitt, M.

    2003-01-01

    A major dilemma associated with immuno(metric) assays for biomarkers is that various kits employing antibodies with differing specificities and binding affinities may generate non-equivalent test results. Also, variation in sample processing and the use of different standards (reference material)

  17. The independence of software metrics taken at different life-cycle stages

    Science.gov (United States)

    Kafura, D.; Canning, J.; Reddy, G.

    1984-01-01

    Over the past few years a large number of software metrics have been proposed and, in varying degrees, a number of these metrics have been subjected to empirical validation which demonstrated the utility of the metrics in the software development process. Attempts to classify these metrics and to determine if the metrics in these different classes appear to be measuring distinct attributes of the software product are studied. Statistical analysis is used to determine the degree of relationship among the metrics.

  18. Prognostic Performance Metrics

    Data.gov (United States)

    National Aeronautics and Space Administration — This chapter presents several performance metrics for offline evaluation of prognostics algorithms. A brief overview of different methods employed for performance...

  19. Overview of journal metrics

    Directory of Open Access Journals (Sweden)

    Kihong Kim

    2018-02-01

    Full Text Available Various kinds of metrics used for the quantitative evaluation of scholarly journals are reviewed. The impact factor and related metrics, including the immediacy index and the aggregate impact factor, which are provided by the Journal Citation Reports, are explained in detail. The Eigenfactor score and the article influence score are also reviewed. In addition, journal metrics such as CiteScore, Source Normalized Impact per Paper, SCImago Journal Rank, h-index, and g-index are discussed. Limitations and problems of these metrics are pointed out. We should be cautious not to rely too much on these quantitative measures when we evaluate journals or researchers.
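
    As a concrete example of one of these metrics, the two-year impact factor for year Y is the number of citations received in Y to items published in Y-1 and Y-2, divided by the citable items published in those two years. A minimal sketch (the counts in the example are hypothetical):

        def impact_factor(cites_in_y_by_pub_year: dict, citable_items: dict, y: int) -> float:
            """Two-year journal impact factor for year y."""
            cites = cites_in_y_by_pub_year[y - 1] + cites_in_y_by_pub_year[y - 2]
            items = citable_items[y - 1] + citable_items[y - 2]
            return cites / items

        # impact_factor({2016: 150, 2017: 210}, {2016: 80, 2017: 95}, 2018)
        # -> 360 / 175 = 2.06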

  20. Utility QA viewpoint: Quality assurance program conforming to 10CFR50, Appendix B and 10CFR71, subpart H

    International Nuclear Information System (INIS)

    Grodi, D.L.

    1987-01-01

    The Nuclear Regulatory Commission issued IE Information Notice No. 84-50, ''Clarification of Scope of Quality Assurance Programs for Transport Packages Pursuant to 10CFR50, Appendix B,'' in June 1984. The reason for this notice was to eliminate confusion over the applicability of the quality assurance provisions of Appendix B, 10CFR50 to certain transport packages for which a quality assurance program is required by 10CFR71. The purpose of this paper is to provide a methodology for establishing, implementing and verifying that all 10CFR71, Subpart H requirements are met by the utility's NRC-approved 10CFR50, Appendix B quality assurance program when utilizing a contractor (with an NRC-approved quality assurance program for radioactive waste packaging and transport) providing radioactive waste solidification, packaging and transport for the utility. Collectively, the utility's and contractor's quality assurance programs will meet the applicable regulatory requirements without the necessity of the utility establishing a separate and specific quality assurance program for packaging and transport of radioactive waste.

  1. Power Quality Improvement Utilizing Photovoltaic Generation Connected to a Weak Grid

    Energy Technology Data Exchange (ETDEWEB)

    Muljadi, Eduard [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Tumbelaka, Hanny H. [Petra Christian University; Gao, Wenzhong [UNiversity of Denver

    2017-11-07

    Microgrid research and development have been among the most popular topics of the past decades. Similarly, photovoltaic generation has been surging among renewable generation in the past few years, thanks to the availability, affordability, and technological maturity of PV panels and PV inverters in the general market. Unfortunately, quite often, PV installations are connected to weak grids and may be considered the culprit of poor power quality affecting other loads, in particular sensitive loads connected to the same point of common coupling (PCC). This paper is intended to demystify renewable generation and to turn this negative perception into a positive revelation of the capability of PV generation to improve power quality in a microgrid system. The main objective of this work is to develop a control method for the PV inverter so that the power quality at the PCC is improved under various disturbances. The method controls the reactive current based on the grid current so as to counteract the negative impact of the disturbances. The proposed control method is verified on the PSIM platform. Promising results have been obtained.

  2. Sleep quality and health service utilization in Chinese general population: a cross-sectional study in Dongguan, China.

    Science.gov (United States)

    Zhang, Hui-Shan; Mai, Yan-Bing; Li, Wei-Da; Xi, Wen-Tao; Wang, Jin-Ming; Lei, Yi-Xiong; Wang, Pei-Xi

    The aims of this study were to explore the Pittsburgh Sleep Quality Index (PSQI) and health service utilization in a Chinese general population, to investigate the association between PSQI and health service utilization, and to identify the independent contributions of social demographic variables, health-related factors and PSQI to health service utilization. In a cross-sectional community-based health survey using a multi-instrument questionnaire, 4067 subjects (≥15 years old) were studied. The Chinese version of the PSQI was used to assess sleep quality. Health service utilization was measured by the recent two-week physician visit rate and the annual hospitalization rate. Higher PSQI scores were associated with more frequent health service utilization. Higher scores in subjective sleep quality were associated with a higher rate of recent two-week physician visits (adjusted OR = 1.24 per SD increase, P = 0.015). Higher scores in habitual sleep efficiency (adjusted OR = 1.24 per SD increase, P = 0.038) and sleep disturbances (adjusted OR = 2.09 per SD increase, P < ...) were also associated with more frequent utilization. Poorer sleep quality predicted more frequent health service utilization. The independent contribution of PSQI to health service utilization was smaller than that of social demographic variables. Copyright © 2016. Published by Elsevier B.V.
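
    Odds ratios "per SD increase", as reported above, come from a logistic model fit on standardized predictors. A minimal sketch with statsmodels; the variable names are illustrative, not the study's dataset:

        import numpy as np
        import statsmodels.api as sm

        def odds_ratios_per_sd(X: np.ndarray, y: np.ndarray, names: list) -> dict:
            """Fit logit(y) ~ standardized X and return exp(coef) for each
            predictor, i.e. the adjusted OR for a one-SD increase."""
            Xz = (X - X.mean(axis=0)) / X.std(axis=0)  # per-SD scaling
            fit = sm.Logit(y, sm.add_constant(Xz)).fit(disp=0)
            return dict(zip(names, np.exp(fit.params[1:])))

        # ors = odds_ratios_per_sd(X, visited_physician_2wk,
        #                          ["subjective_quality", "sleep_efficiency", ...])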

  3. Energy Metrics for State Government Buildings

    Science.gov (United States)

    Michael, Trevor

    Measuring true progress towards energy conservation goals requires the accurate reporting and accounting of energy consumption. An accurate energy metrics framework is also a critical element for verifiable Greenhouse Gas Inventories. Energy conservation in government can reduce expenditures on energy costs leaving more funds available for public services. In addition to monetary savings, conserving energy can help to promote energy security, air quality, and a reduction of carbon footprint. With energy consumption/GHG inventories recently produced at the Federal level, state and local governments are beginning to also produce their own energy metrics systems. In recent years, many states have passed laws and executive orders which require their agencies to reduce energy consumption. In June 2008, SC state government established a law to achieve a 20% energy usage reduction in state buildings by 2020. This study examines case studies from other states who have established similar goals to uncover the methods used to establish an energy metrics system. Direct energy consumption in state government primarily comes from buildings and mobile sources. This study will focus exclusively on measuring energy consumption in state buildings. The case studies reveal that many states including SC are having issues gathering the data needed to accurately measure energy consumption across all state buildings. Common problems found include a lack of enforcement and incentives that encourage state agencies to participate in any reporting system. The case studies are aimed at finding the leverage used to gather the needed data. The various approaches at coercing participation will hopefully reveal methods that SC can use to establish the accurate metrics system needed to measure progress towards its 20% by 2020 energy reduction goal. Among the strongest incentives found in the case studies is the potential for monetary savings through energy efficiency. Framing energy conservation

  4. Brand metrics that matter

    NARCIS (Netherlands)

    Muntinga, D.; Bernritter, S.

    2017-01-01

    Brands are increasingly central to organizations. It is therefore essential to measure a brand's health, performance and development. Selecting the right brand metrics, however, is a challenge. An enormous number of metrics compete for brand managers' attention. But which

  5. Privacy Metrics and Boundaries

    NARCIS (Netherlands)

    L-F. Pau (Louis-François)

    2005-01-01

    This paper aims at defining a set of privacy metrics (quantitative and qualitative) for the relation between a privacy protector and an information gatherer. The aims of such metrics are: to allow assessment and comparison of different user scenarios and their differences; for

  6. A Delphi study assessing the utility of quality improvement tools and resources in Australian primary care.

    Science.gov (United States)

    Upham, Susan J; Janamian, Tina; Crossland, Lisa; Jackson, Claire L

    2016-04-18

    To determine the relevance and utility of online tools and resources to support organisational performance development in primary care and to complement the Primary Care Practice Improvement Tool (PC-PIT). A purposively recruited Expert Advisory Panel of 12 end users used a modified Delphi technique to evaluate 53 tools and resources identified through a previously conducted systematic review. The panel comprised six practice managers and six general practitioners who had participated in the PC-PIT pilot study in 2013-2014. Tools and resources were reviewed in three rounds using a standard pre-tested assessment form. Recommendations, scores and reasons for recommending or rejecting each tool or resource were analysed to determine the final suite of tools and resources. The evaluation was conducted from November 2014 to August 2015. Recommended tools and resources scored highly (mean score, 16/20) in Rounds 1 and 2 of review (n = 25). These tools and resources were perceived to be easily used, useful to the practice and supportive of the PC-PIT. Rejected resources scored considerably lower (mean score, 5/20) and were noted to have limitations such as having no value to the practice and poor utility (n = 6). A final review (Round 3) of 28 resources resulted in a suite of 21 to support the elements of the PC-PIT. This suite of tools and resources offers one approach to supporting the quality improvement initiatives currently in development in primary care reform.

  7. A concept paper: using the outcomes of common surgical conditions as quality metrics to benchmark district surgical services in South Africa as part of a systematic quality improvement programme.

    Science.gov (United States)

    Clarke, Damian L; Kong, Victor Y; Handley, Jonathan; Aldous, Colleen

    2013-07-31

    The fourth, fifth and sixth Millennium Development Goals relate directly to improving global healthcare and health outcomes. The focus is to improve global health outcomes by reducing maternal and childhood mortality and the burden of infectious diseases such as HIV/AIDS, tuberculosis and malaria. Specific targets and time frames have been set for these diseases. There is, however, no specific mention of surgically treated diseases in these goals, reflecting a bias that is slowly changing with the emerging consensus that surgical care is an integral part of primary healthcare systems in the developing world. The disparities between the developed and developing world in terms of wealth and social indicators are reflected in disparities in access to surgical care. Health administrators must develop plans and strategies to reduce these disparities. However, any strategic plan that addresses deficits in healthcare must have a system of metrics that benchmarks the current quality of care so that specific improvement targets may be set. This concept paper outlines the role of surgical services in a primary healthcare system, highlights the ongoing disparities in access to surgical care and in outcomes of surgical care, discusses the importance of a systems-based approach to healthcare and quality improvement, and reviews the current state of surgical care at district hospitals in South Africa. Finally, it proposes that the results from a recently published study on acute appendicitis, as well as data from a number of other common surgical conditions, can provide measurable outcomes across a healthcare system and so act as an indicator for judging improvements in surgical care. This would provide a framework for introducing the collection of these outcomes as a routine epidemiological health policy tool.

  8. Metrics Are Needed for Collaborative Software Development

    Directory of Open Access Journals (Sweden)

    Mojgan Mohtashami

    2011-10-01

    Full Text Available There is a need for metrics for inter-organizational collaborative software development projects, encompassing management and technical concerns. In particular, metrics are needed that are aimed at the collaborative aspect itself, such as readiness for collaboration, the quality and/or the costs and benefits of collaboration in a specific ongoing project. We suggest questions and directions for such metrics, spanning the full lifespan of a collaborative project, from considering the suitability of collaboration through evaluating ongoing projects to final evaluation of the collaboration.

  9. Pragmatic security metrics applying metametrics to information security

    CERN Document Server

    Brotby, W Krag

    2013-01-01

    Other books on information security metrics discuss number theory and statistics in academic terms. Light on mathematics and heavy on utility, PRAGMATIC Security Metrics: Applying Metametrics to Information Security breaks the mold. This is the ultimate how-to-do-it guide for security metrics.Packed with time-saving tips, the book offers easy-to-follow guidance for those struggling with security metrics. Step by step, it clearly explains how to specify, develop, use, and maintain an information security measurement system (a comprehensive suite of metrics) to

  10. A choice modelling analysis on the similarity between distribution utilities' and industrial customers' price and quality preferences

    International Nuclear Information System (INIS)

    Soederberg, Magnus

    2008-01-01

    The Swedish Electricity Act states that electricity distribution must comply with both price and quality requirements. In order to maintain efficient regulation it is necessary, first, to define quality attributes and, second, to determine customers' priorities concerning price and quality attributes. If distribution utilities understand customer preferences, and have incentives to report them truthfully, the regulator can save a lot of time by surveying utilities rather than their customers. This study applies a choice modelling methodology in which utilities and industrial customers are asked to evaluate the same twelve choice situations, in which price and four specific quality attributes are varied. The preferences expressed by the utilities, estimated by a random parameter logit, correspond quite well with the preferences expressed by the largest industrial customers. The preferences expressed by the utilities are reasonably homogeneous across forms of association (private limited, public and trading partnership). If the regulator acts according to the preferences expressed by the utilities, smaller industrial customers will have to pay for quality they have not asked for. (author)
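
    A random parameter logit of this kind averages a conditional logit kernel over draws of the taste coefficients. A minimal sketch; the attribute layout and the normal mixing distribution are assumptions, not the paper's exact model:

        import numpy as np

        def logit_probs(X: np.ndarray, beta: np.ndarray) -> np.ndarray:
            """Conditional logit choice probabilities for one choice situation.
            X: (n_alternatives, n_attributes), e.g. price + four quality attributes."""
            v = X @ beta
            ev = np.exp(v - v.max())  # numerically stabilized softmax
            return ev / ev.sum()

        def mixed_logit_probs(X, beta_mean, beta_sd, n_draws=500, seed=0):
            """Simulated probabilities: average the logit kernel over normal
            draws of the random coefficients."""
            rng = np.random.default_rng(seed)
            draws = rng.normal(beta_mean, beta_sd, size=(n_draws, len(beta_mean)))
            return np.mean([logit_probs(X, b) for b in draws], axis=0)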

  11. Holographic Spherically Symmetric Metrics

    Science.gov (United States)

    Petri, Michael

    The holographic principle (HP) conjectures, that the maximum number of degrees of freedom of any realistic physical system is proportional to the system's boundary area. The HP has its roots in the study of black holes. It has recently been applied to cosmological solutions. In this article we apply the HP to spherically symmetric static space-times. We find that any regular spherically symmetric object saturating the HP is subject to tight constraints on the (interior) metric, energy-density, temperature and entropy-density. Whenever gravity can be described by a metric theory, gravity is macroscopically scale invariant and the laws of thermodynamics hold locally and globally, the (interior) metric of a regular holographic object is uniquely determined up to a constant factor and the interior matter-state must follow well defined scaling relations. When the metric theory of gravity is general relativity, the interior matter has an overall string equation of state (EOS) and a unique total energy-density. Thus the holographic metric derived in this article can serve as simple interior 4D realization of Mathur's string fuzzball proposal. Some properties of the holographic metric and its possible experimental verification are discussed. The geodesics of the holographic metric describe an isotropically expanding (or contracting) universe with a nearly homogeneous matter-distribution within the local Hubble volume. Due to the overall string EOS the active gravitational mass-density is zero, resulting in a coasting expansion with Ht = 1, which is compatible with the recent GRB-data.

  12. Probabilistic metric spaces

    CERN Document Server

    Schweizer, B

    2005-01-01

    Topics include special classes of probabilistic metric spaces, topologies, and several related structures, such as probabilistic normed and inner-product spaces. 1983 edition, updated with 3 new appendixes. Includes 17 illustrations.
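
    For readers new to the topic, the central definition replaces the distance number d(p,q) with a distribution function; the usual Menger-space axioms (standard in the literature, not quoted from this edition) read:

```latex
% Menger probabilistic metric space (S, F, T): F_{p,q}(t) is read as the
% probability that the distance between p and q is less than t; T is a t-norm.
F_{p,q}(t) = 1 \;\, \forall t > 0 \iff p = q, \qquad
F_{p,q} = F_{q,p}, \qquad
F_{p,r}(s + t) \,\ge\, T\!\left( F_{p,q}(s),\, F_{q,r}(t) \right).
```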

  13. Creation of a simple natural language processing tool to support an imaging utilization quality dashboard.

    Science.gov (United States)

    Swartz, Jordan; Koziatek, Christian; Theobald, Jason; Smith, Silas; Iturrate, Eduardo

    2017-05-01

    The tool is available at http://iturrate.com/simpleNLP. Results obtained using this tool can be applied to enhance quality by presenting information about utilization and yield to providers via an imaging dashboard. Copyright © 2017 Elsevier B.V. All rights reserved.
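
    The published tool itself is not reproduced here; the following is a hedged sketch of the kind of simple rule-based NLP such a dashboard can rest on, with hypothetical negation cues and finding terms:

```python
# Classify a radiology report impression as positive/negative for acute
# findings by scrubbing negated spans before searching for finding terms.
import re

NEGATION = re.compile(r'\b(no|without|negative for|free of)\b[^.]*', re.I)
FINDINGS = re.compile(r'\b(fracture|hemorrhage|appendicitis|embolism)\b', re.I)

def classify_impression(text: str) -> str:
    """Return 'positive' if a finding term appears outside a negated span."""
    scrubbed = NEGATION.sub('', text)   # drop spans governed by a negation cue
    return 'positive' if FINDINGS.search(scrubbed) else 'negative'

print(classify_impression("No evidence of fracture or hemorrhage."))  # negative
print(classify_impression("Acute appendicitis with free fluid."))     # positive
```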

  14. IT Project Management Metrics

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Many software and IT projects fail to meet their objectives for a variety of causes, among which the management of the projects carries considerable weight. In order to have successful projects, lessons learned have to be applied, historical data collected, and metrics and indicators computed and compared with those of past projects so that failure can be avoided. This paper presents some metrics that can be used for IT project management.

  15. Utilization of Prickly Pear Peels to Improve Quality of Pan Bread

    International Nuclear Information System (INIS)

    Anwar, M.M.; Sallam, E.M.

    2016-01-01

    This investigation aimed to study the utilization of prickly pear peels to improve the quality of pan bread. Prickly pear peel powder was added to 72% extraction wheat flour at levels of 1.0 and 2.0% to make pan bread. In this study, the nutrients, chemical constituents and functional properties of prickly pear peels were evaluated, as well as the rheological properties of dough containing prickly pear peels at the 1% and 2% levels; the organoleptic characteristics and staling rate of the resulting pan bread were then determined. Results showed that prickly pear peels had a high content of fiber (32.67%), pectin (14.25%) and ascorbic acid (87.82%), and high contents of antioxidant components: 441.11 mg/100 g total phenols, 35.2 mg/100 g flavonoids and 62.14% DPPH radical-scavenging activity; water holding capacity was 1.8 ml H₂O/g, oil holding capacity 2.35 ml oil/g and foam stability 7.15%. The major phenolic compounds were oleuropein (1264.407), pyrogallol (1149.68), benzoic acid (982.37), 3-OH-tyrosol (588.53), ellagic acid (413.26), chlorogenic acid (271.10), protocatechuic acid (176.02), p-OH-benzoic acid (112.78), epicatechin (105.99) and gallic acid (61.26) ppm. The results revealed that the addition of prickly pear peels to wheat flour increased the nutritional value of the pan bread made from it, owing to the high contents of fiber, ascorbic acid and natural antioxidants, and also decreased staling, which improves the quality and extends the shelf-life of pan bread.

  16. Evaluating Modeled Impact Metrics for Human Health, Agriculture Growth, and Near-Term Climate

    Science.gov (United States)

    Seltzer, K. M.; Shindell, D. T.; Faluvegi, G.; Murray, L. T.

    2017-12-01

    Simulated metrics that assess impacts on human health, agriculture growth, and near-term climate were evaluated using ground-based and satellite observations. The NASA GISS ModelE2 and GEOS-Chem models were used to simulate the near-present chemistry of the atmosphere. A suite of simulations that varied by model, meteorology, horizontal resolution, emissions inventory, and emissions year was performed, enabling an analysis of metric sensitivities to various model components. All simulations utilized consistent anthropogenic global emissions inventories (ECLIPSE V5a or CEDS), and an evaluation of simulated results was carried out for 2004-2006 and 2009-2011 over the United States and 2014-2015 over China. Results for O3- and PM2.5-based metrics featured minor differences due to the model resolutions considered here (2.0° × 2.5° and 0.5° × 0.666°); model, meteorology, and emissions inventory each played larger roles in the variance. Surface metrics related to O3 were consistently high biased, though to varying degrees, demonstrating the need to evaluate particular modeling frameworks before O3 impacts are quantified. Surface metrics related to PM2.5 were diverse, indicating that a multimodel mean is a valuable tool in predicting PM2.5-related impacts. Oftentimes, the configuration that best captured the change of a metric over time differed from the configuration that best captured the magnitude of the same metric, demonstrating the challenge of skillfully simulating impacts. These results highlight the strengths and weaknesses of these models in simulating impact metrics related to air quality and near-term climate. With such information, the reliability of historical and future simulations can be better understood.

  17. Empirical analysis of change metrics for software fault prediction

    NARCIS (Netherlands)

    Choudhary, Garvit Rajesh; Kumar, Sandeep; Kumar, Kuldeep; Mishra, Alok; Catal, Cagatay

    2018-01-01

    A quality assurance activity, known as software fault prediction, can reduce development costs and improve software quality. The objective of this study is to investigate change metrics in conjunction with code metrics to improve the performance of fault prediction models. Experimental studies are
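
    A hedged sketch of the experimental setup this record describes, with synthetic data standing in for metrics mined from version control, might look as follows; the gain from change metrics is read off by comparing against a code-metrics-only baseline:

```python
# Fault prediction combining change metrics (revisions, churn, authors) with
# static code metrics; the synthetic data stands in for a mined per-module set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
code = rng.poisson(lam=[200, 8, 5], size=(n, 3))          # loc, complexity, coupling
change = rng.poisson(lam=[10, 120, 60, 2], size=(n, 4))   # revisions, add, del, authors
X = np.hstack([code, change])
# synthetic ground truth: fault-proneness driven by churn and complexity
y = (0.01 * change[:, 1] + 0.3 * code[:, 1] + rng.normal(0, 1, n) > 4).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("code+change F1:", cross_val_score(clf, X, y, cv=5, scoring="f1").mean())
print("code-only   F1:", cross_val_score(clf, code, y, cv=5, scoring="f1").mean())
```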

  18. Utilization of low quality roughage by ruminants. A contribution to animal nutrition in the tropics

    International Nuclear Information System (INIS)

    Sastradipradja, D.; Soewardi, B.; Sofjan, L.A.; Hendratno, C.W.

    1976-01-01

    Because of the importance of low quality roughage in ruminant production in Indonesia, a research project to evaluate the nutritive value of forages, and possible improvements in their utilization, was undertaken. These studies are reviewed here. The effects of cassava meal, sago starch and molasses as supplements in urea-containing rations with alangalang (Imperata cylindrica (L) Beauv.) as the sole roughage source were investigated in Ongole grade calves. Using ³⁵S in vitro systems, the effects of different levels of cassava meal supplementation in alangalang rations on rumen microbial protein synthesis were estimated. Another experiment was carried out to study alangalang in pelleted rations differing in main energy source and concentrate-to-roughage ratio for Ongole grade bulls. Data on the average daily gain in bodyweight and ruminal ammonia concentration suggest that alangalang alone already produced enough ammonia for protein synthesis, and that non-protein nitrogen supplementation can be beneficial if cassava meal is offered at a level greater than 3% metabolic body size (MBS). Results from the in vitro studies show that a level of 1.0% MBS gave the highest rate of microbial protein synthesis. With alangalang in pelleted complete rations, acceptable gains in body weight were obtained, and it was concluded that dried cassava roots can replace corn as the main energy source. (author)

  19. Ophthalmic patients' utilization of technology and social media: an assessment to improve quality of care.

    Science.gov (United States)

    Aleo, Chelsea L; Hark, Lisa; Leiby, Benjamin; Dai, Yang; Murchison, Ann P; Martinez, Patricia; Haller, Julia A

    2014-10-01

    E-health tools have the potential to improve the quality of care for ophthalmic patients, many of whom have chronic conditions. However, little research has assessed ophthalmic patients' use or acceptance of technological devices and social media platforms for health-related purposes. The present study evaluated the utilization of technological devices and social media platforms by eye clinic patients, as well as their willingness to receive health reminders through these technologies. A 31-item paper questionnaire was administered to eye clinic patients (n=843) at an urban, tertiary-care center. Questions focused on technology ownership, comfort levels, frequency of use, and preferences for receiving health reminders. Demographic data were also recorded. Eye clinic patients most commonly owned cellular phones (90%), landline phones (81%), and computers (80%). Overall, eye clinic patients preferred to receive health reminders through phone calls and e-mail and used these technologies frequently and with a high level of comfort. Less than 3% of patients preferred using social networking to receive health reminders. In addition, age was significantly associated with technology ownership, comfort level, and frequency of use, with younger patients using these technologies more frequently and with a higher comfort level. Patients were willing to use these technologies for appointment reminders, general eye and vision health information, asking urgent medical questions, and requesting prescription refills. Future controlled trials could further explore the efficacy of e-health tools for these purposes.

  20. Reducing the CP content in broiler feeds: impact on animal performance, meat quality and nitrogen utilization.

    Science.gov (United States)

    Belloir, P; Méda, B; Lambert, W; Corrent, E; Juin, H; Lessire, M; Tesseraud, S

    2017-11-01

    Reducing the dietary CP content is an efficient way to limit nitrogen excretion in broilers but, as reported in the literature, it often reduces performance, probably because of an inadequate provision in amino acids (AA). The aim of this study was to investigate the effect of decreasing the CP content in the diet on animal performance, meat quality and nitrogen utilization in growing-finishing broilers using an optimized dietary AA profile based on the ideal protein concept. Two experiments (1 and 2) were performed using 1-day-old PM3 Ross male broilers (1520 and 912 for experiments 1 and 2, respectively) using the minimum AA:Lys ratios proposed by Mack et al. with modifications for Thr and Arg. The digestible Thr (dThr): dLys ratio was increased from 63% to 68% and the dArg:dLys ratio was decreased from 112% to 108%. In experiment 1, the reduction of dietary CP from 19% to 15% (five treatments) did not alter feed intake or BW, but the feed conversion ratio was increased for the 16% and 15% CP diets (+2.4% and +3.6%, respectively), while in experiment 2 (three treatments: 19%, 17.5% and 16% CP) there was no effect of dietary CP on performance. In both experiments, dietary CP content did not affect breast meat yield. However, abdominal fat content (expressed as a percentage of BW) was increased by the decrease in CP content (up to +0.5 and +0.2 percentage point, in experiments 1 and 2, respectively). In experiment 2, meat quality traits responded to dietary CP content with a higher ultimate pH and lower lightness and drip loss values for the low CP diets. Nitrogen retention efficiency increased when reducing CP content in both experiments (+3.5 points/CP percentage point). The main consequence of this higher efficiency was a decrease in nitrogen excretion (-2.5 g N/kg BW gain) and volatilization (expressed as a percentage of excretion: -5 points/CP percentage point). In conclusion, this study demonstrates that with an adapted AA profile, it is possible to reduce

  1. Crowdsourcing metrics of digital collections

    Directory of Open Access Journals (Sweden)

    Tuula Pääkkönen

    2015-12-01

    Full Text Available In the National Library of Finland (NLF) there are millions of digitized newspaper and journal pages, which are openly available via the public website http://digi.kansalliskirjasto.fi. To serve users better, the front end was completely overhauled last year, with a main aim of adding crowdsourcing features, e.g., giving end-users the opportunity to create digital clippings and a personal scrapbook from the digital collections. But how can you know whether crowdsourcing has had an impact? How much have the crowdsourcing functionalities been used so far? Did crowdsourcing work? In this paper the statistics and metrics of a recent crowdsourcing effort are analysed across the different digitized material types (newspapers, journals, ephemera). The subjects, categories and keywords given by the users are analysed to see which topics are the most appealing. Some notable public uses of the crowdsourced article clippings are highlighted. These metrics give us indications of how the end-users, based on their own interests, are investigating and using the digital collections. The suggested metrics therefore illustrate the versatility of the information needs of the users, varying from citizen science to research purposes. By analysing the user patterns, we can respond to the new needs of the users by making minor changes to accommodate the most active participants, while still making the service more approachable for those who are trying out the functionalities for the first time. Participation in the clippings and annotations can enrich the materials in unexpected ways and can possibly pave the way for using crowdsourcing more in research contexts. This creates more opportunities for the goals of open science, since source data becomes available, making it possible for researchers to reach out to the general public for help. In the long term, utilizing, for example, text mining methods can allow these different end-user segments to

  2. Association between obesity, quality of life, physical activity and health service utilization in primary care patients with osteoarthritis.

    NARCIS (Netherlands)

    Rosemann, T.J.; Grol, R.P.T.M.; Herman, K.; Wensing, M.J.P.; Szecsenyi, J.

    2008-01-01

    ABSTRACT: OBJECTIVE: To assess the association of obesity with quality of life, health service utilization and physical activity in a large sample of primary care patients with osteoarthritis (OA). METHODS: Data were retrieved from the PraxArt project, representing a cohort of 1021 primary care

  3. Fault Management Metrics

    Science.gov (United States)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.

  4. Deep Transfer Metric Learning.

    Science.gov (United States)

    Junlin Hu; Jiwen Lu; Yap-Peng Tan; Jie Zhou

    2016-12-01

    Conventional metric learning methods usually assume that the training and test samples are captured in similar scenarios so that their distributions are assumed to be the same. This assumption does not hold in many real visual recognition applications, especially when samples are captured across different data sets. In this paper, we propose a new deep transfer metric learning (DTML) method to learn a set of hierarchical nonlinear transformations for cross-domain visual recognition by transferring discriminative knowledge from the labeled source domain to the unlabeled target domain. Specifically, our DTML learns a deep metric network by maximizing the inter-class variations and minimizing the intra-class variations, and minimizing the distribution divergence between the source domain and the target domain at the top layer of the network. To better exploit the discriminative information from the source domain, we further develop a deeply supervised transfer metric learning (DSTML) method by including an additional objective on DTML, where the output of both the hidden layers and the top layer are optimized jointly. To preserve the local manifold of input data points in the metric space, we present two new methods, DTML with autoencoder regularization and DSTML with autoencoder regularization. Experimental results on face verification, person re-identification, and handwritten digit recognition validate the effectiveness of the proposed methods.
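
    The following is a simplified PyTorch sketch of the loss structure described (intra-class compactness, inter-class separation, and a distribution-divergence term between domains); layer sizes, the margin, the linear-kernel MMD, and the trade-off weight are illustrative, not the authors' settings:

```python
# Deep transfer metric learning sketch: a small embedding network trained so
# that same-class pairs are close, different-class pairs are separated by a
# margin, and the source/target embedding distributions are aligned via MMD.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(64, 32), nn.Tanh(), nn.Linear(32, 16))

def mmd(source, target):
    """Linear-kernel maximum mean discrepancy between domain embeddings."""
    return (source.mean(0) - target.mean(0)).pow(2).sum()

def dtml_loss(xs, ys, xt, margin=1.0, alpha=0.5):
    zs, zt = embed(xs), embed(xt)
    d = torch.cdist(zs, zs).pow(2)                     # pairwise squared dists
    same = (ys[:, None] == ys[None, :]).float()
    intra = (same * d).sum() / same.sum()              # pull same-class pairs in
    inter = ((1 - same) * torch.clamp(margin - d, min=0)).sum() / (1 - same).sum()
    return intra + inter + alpha * mmd(zs, zt)         # add domain alignment

xs, ys = torch.randn(32, 64), torch.randint(0, 4, (32,))   # labeled source
xt = torch.randn(32, 64)                                   # unlabeled target
loss = dtml_loss(xs, ys, xt)
loss.backward()
```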

  5. Cyber threat metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Frye, Jason Neal; Veitch, Cynthia K.; Mateski, Mark Elliot; Michalski, John T.; Harris, James Mark; Trevino, Cassandra M.; Maruoka, Scott

    2012-03-01

    Threats are generally much easier to list than to describe, and much easier to describe than to measure. As a result, many organizations list threats. Fewer describe them in useful terms, and still fewer measure them in meaningful ways. This is particularly true in the dynamic and nebulous domain of cyber threats - a domain that tends to resist easy measurement and, in some cases, appears to defy any measurement. We believe the problem is tractable. In this report we describe threat metrics and models for characterizing threats consistently and unambiguously. The purpose of this report is to support the Operational Threat Assessment (OTA) phase of risk and vulnerability assessment. To this end, we focus on the task of characterizing cyber threats using consistent threat metrics and models. In particular, we address threat metrics and models for describing malicious cyber threats to US FCEB agencies and systems.

  6. Changes in quality management for light water reactor fuel manufacturing: A utility's view of driving forces and status

    International Nuclear Information System (INIS)

    Huettmann, A.; Skusa, J.; Ketteler, M.

    2000-01-01

    Quality management in LWR fuel manufacturing for use in German reactors is based on international guidelines and national/local authority requirements defined in operating licenses. The quality management is twofold, comprising a quality assurance system and the checking of manufacturing documents, including witnessing of fabrication processes and inspections. Utility- and authority-appointed technical experts witness manufacturing and take part in inspections performed by the manufacturer, where the scope is strictly defined and offers no possibility of flexible responses to manufacturing occurrences. For future developments in quality management, HEW supports strengthening the ideas of quality planning. Analysis of all factors influencing fuel reliability shall be performed prior to manufacturing. This will increase the effort spent reviewing drawings and specifications, including a review of the processes that will be used in manufacturing. The qualification and robustness of processes shall be demonstrated with special qualification programs and analysis of manufacturing statistics. Instead of product/project-related inspections, the use of all manufacturing data will provide a complete picture of manufacturing quality. By applying statistical methods it will be possible to identify trends in manufacturing before deviations occur. Thus the basic driving force for utilities to implement statistical process control is the wish to obtain comprehensive information on delivered quality, whereas for manufacturers it might be to increase production yields and thus lower costs. The introduction and full use of statistical process control requires open information about manufacturing processes and inspection results by the manufacturers. This might include data judged to be economically sensitive. It also requires changes in attitude at the utilities and appointed experts. HEW has started to review and change internal guidelines to allow
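
    As a concrete illustration of the statistical process control the abstract points toward, the sketch below builds an X-bar chart over per-lot inspection data and flags both limit violations and sustained trends; the data, subgroup structure, and trend rule are illustrative:

```python
# X-bar control chart: compute 3-sigma limits from subgroup means, flag
# out-of-control lots, and detect a sustained monotone trend before a hard
# deviation occurs.
import numpy as np

rng = np.random.default_rng(1)
subgroups = rng.normal(loc=10.0, scale=0.1, size=(30, 5))   # 30 lots x 5 samples

xbar = subgroups.mean(axis=1)
center, sigma = xbar.mean(), xbar.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma            # 3-sigma limits

violations = np.flatnonzero((xbar > ucl) | (xbar < lcl))

# trend rule: six consecutive steps all rising or all falling (seven points)
steps = np.sign(np.diff(xbar))
trend = any(abs(steps[i:i + 6].sum()) == 6 for i in range(len(steps) - 5))

print(f"limits [{lcl:.3f}, {ucl:.3f}], violations {violations}, trend {trend}")
```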

  7. An elicitation of utility for quality of life under prospect theory.

    Science.gov (United States)

    Attema, Arthur E; Brouwer, Werner B F; l'Haridon, Olivier; Pinto, Jose Luis

    2016-07-01

    This paper performs several tests of decision analysis applied to the health domain. First, we conduct a test of normative expected utility theory. Second, we investigate the possibility of eliciting the more general prospect theory. We observe risk aversion for gains and losses, and violations of expected utility. These results imply that the mechanisms governing decisions in the health domain are similar to those in the monetary domain. However, we also report one important deviation: utility is universally concave for the health outcomes used in this study, in contrast to the commonly found S-shaped utility for monetary outcomes, with concave utility for gains and convex utility for losses. Copyright © 2016 Elsevier B.V. All rights reserved.
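
    A small worked sketch of the standard-gamble logic behind such elicitations follows; the probability weighting function is the common one-parameter Tversky-Kahneman form, used here as an assumption rather than the authors' exact specification:

```python
# Standard gamble: a respondent is indifferent between health state Q for sure
# and a gamble giving full health with probability p and death with 1 - p.
# Under expected utility, u(Q) = p; under prospect theory the indifference
# probability is first passed through a probability weighting function.

def w(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman weighting w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def utility_expected(p_indiff: float) -> float:
    return p_indiff

def utility_prospect(p_indiff: float, gamma: float = 0.61) -> float:
    return w(p_indiff, gamma)   # corrected utility of the health state

print(utility_expected(0.85))             # 0.85 under expected utility
print(round(utility_prospect(0.85), 3))   # lower once weighting is applied
```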

  8. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows the importance of different dimensions to be adjusted automatically. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms…

  9. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard…
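
    The adaptive-metric idea in records 8 and 9 can be sketched as Nadaraya-Watson regression whose per-dimension bandwidth weights are tuned by minimising a leave-one-out cross-validation error; the data and dimensions below are illustrative:

```python
# Adaptive-metric kernel regression sketch: per-dimension weights define the
# input metric, and are chosen to minimise leave-one-out CV error, so
# irrelevant dimensions are automatically down-weighted.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(100, 3))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(100)   # only dim 0 matters

def loo_error(log_w):
    w = np.exp(log_w)                                    # per-dimension weights
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2 * w).sum(-1)
    K = np.exp(-0.5 * d2)
    np.fill_diagonal(K, 0.0)                             # leave one out
    pred = K @ y / K.sum(axis=1)                         # Nadaraya-Watson
    return ((y - pred) ** 2).mean()

res = minimize(loo_error, np.zeros(3), method="Nelder-Mead")
print(np.exp(res.x))   # weight on dimension 0 should dominate the others
```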

  10. Identifying role of perceived quality and satisfaction on the utilization status of the community clinic services; Bangladesh context.

    Science.gov (United States)

    Karim, Rizwanul M; Abdullah, Mamun S; Rahman, Anisur M; Alam, Ashraful M

    2016-06-24

    Bangladesh is one of the few countries in the world that provide free medical services at the community level through various public health facilities. It is now evident that clients' perceived quality of services and their expectations of service standards affect health service utilization to a great extent. The aim of the study was to develop and validate measures of perception and satisfaction of primary health care quality in the Bangladesh context and to identify their effects on the utilization status of Community Clinic services. This mixed-method cross-sectional survey was conducted from January to June 2012 in the catchment area of 12 community clinics. Since most of the outcome indicators focus mainly on women and children, women having children less than 2 years of age were randomly selected and interviewed for the study purpose. Data were collected through FGDs, key informant interviews and a pretested semi-structured questionnaire. About 95% of the respondents were Muslims and 5% were Hindus. The average age of the respondents was 23.38 years (SD 4.15) and almost all of them were homemakers. The average monthly expenditure of their families was US$95 (SD US$32). At the beginning of the study, two psychometric research instruments were constructed and validated: a 24-item perceived quality of primary care services (PQPCS) scale (Cronbach's α = .89) and a 22-item community clinic service satisfaction (CCSS) scale (Cronbach's α = .97). This study showed that less educated, poor, landless mothers utilized the community clinic services more than their educated and wealthier counterparts. Women who lived in their own residence used the community clinic services more frequently than those who lived in a rental house. Perceptions concerning the skill and competence of the health care provider, and satisfaction with the interpersonal communication and attitude of the care provider, were important predictors of community clinic service utilization

  11. Development of a validation process for parameters utilized in optimizing construction quality management of pavements.

    Science.gov (United States)

    2006-01-01

    The implementation of an effective performance-based construction quality management requires a tool for determining impacts of construction quality on the life-cycle performance of pavements. This report presents an update on the efforts in the deve...

  12. Benefits of a good quality assurance program to an electric utility

    Energy Technology Data Exchange (ETDEWEB)

    Mahoney, W.J. (Detroit Edison, Detroit, MI (United States))

    1994-10-01

    A good quality assurance program at a coal mine or power plant should be timely and consistent. The quality analysis is accurate due to a complete sampling of the coal stream loaded into the unit train. The sample analysis is accurate because standardized testing procedures are applied. A good coal quality assurance program includes: coal quality analysis of the delivered coal; bias testing of mechanical coal samplers; dust control during coal handling; and freeze conditioning during the winter. 2 figs., 2 plates

  13. Structural Quality and Utilization of Outpatient Curative Care Under Family Medicine Scheme in Rural Area of Mazandaran– Iran

    Directory of Open Access Journals (Sweden)

    Samad Rouhani

    2013-09-01

    Full Text Available Background & purpose: Since 2005, a reform known as the Rural Insurance and Family Medicine Scheme has been introduced into the primary health care network in rural areas and small towns of Iran. The content of the reform implies a substantial change in those aspects of health centers that can mainly be categorized as structural quality. Although these requirements apply to all health care providers, providers are not identical in those respects. In this article, we report the relation between the structural quality of health centers and the utilization of curative care in Mazandaran province. Materials & Methods: This was a cross-sectional study conducted in 2013. Secondary, routinely collected data were used to answer the research questions. The source of the original data was the provincial health authority's data set. A checklist containing pre-identified variables was used to extract the data. Using the SPSS software package, regression analysis was run to measure the role of the different independent variables on the dependent variable. Results: There were 215 rural health centers affiliated to 16 cities or small towns in which the reform had taken place. The outreach area population of these health centers was 1,330,212, of which 834,189 (62.71%) were covered solely by rural insurance. Health centers were not identical in terms of their characteristics or their utilization. Among the variables with a significant impact on the utilization of outpatient care, except for the number of physicians in each health center and the existence of a state-owned pharmacy found in some health centers, the remaining variables had a significant positive impact on the demand for physician visits. Conclusion: Structural quality has a significant impact on the utilization of curative care in primary healthcare units in rural areas of Iran. The reform appears well targeted at quality improvement and the utilization of effective primary health care.

  14. Metrical Phonology and SLA.

    Science.gov (United States)

    Tice, Bradley S.

    Metrical phonology, a linguistic process of phonological stress assessment and diagrammatic simplification of sentence and word stress, is discussed as it is found in the English language with the intention that it may be used in second language instruction. Stress is defined by its physical and acoustical correlates, and the principles of…

  15. Metrics for Probabilistic Geometries

    DEFF Research Database (Denmark)

    Tosi, Alessandra; Hauberg, Søren; Vellido, Alfredo

    2014-01-01

    the distribution over mappings is given by a Gaussian process. We treat the corresponding latent variable model as a Riemannian manifold and we use the expectation of the metric under the Gaussian process prior to define interpolating paths and measure distance between latent points. We show how distances...

  16. Using software metrics and software reliability models to attain acceptable quality software for flight and ground support software for avionic systems

    Science.gov (United States)

    Lawrence, Stella

    1992-01-01

    This paper is concerned with methods of measuring and developing quality software. Reliable flight and ground support software is a highly important factor in the successful operation of the space shuttle program. Reliability is probably the most important of the characteristics inherent in the concept of 'software quality'. It is the probability of failure free operation of a computer program for a specified time and environment.

  17. Development of quality control and instrumentation performance metrics for diffuse optical spectroscopic imaging instruments in the multi-center clinical environment

    Science.gov (United States)

    Keene, Samuel T.; Cerussi, Albert E.; Warren, Robert V.; Hill, Brian; Roblyer, Darren; Leproux, Anaïs; Durkin, Amanda F.; O'Sullivan, Thomas D.; Haghany, Hosain; Mantulin, William W.; Tromberg, Bruce J.

    2013-03-01

    Instrument equivalence and quality control are critical elements of multi-center clinical trials. We currently have five identical Diffuse Optical Spectroscopic Imaging (DOSI) instruments enrolled in the American College of Radiology Imaging Network (ACRIN, #6691) trial located at five academic clinical research sites in the US. The goal of the study is to predict the response of breast tumors to neoadjuvant chemotherapy in 60 patients. In order to reliably compare DOSI measurements across different instruments, operators and sites, we must be confident that the data quality is comparable. We require objective and reliable methods for identifying, correcting, and rejecting low quality data. To achieve this goal, we developed and tested an automated quality control algorithm that rejects data points below the instrument noise floor, improves tissue optical property recovery, and outputs a detailed data quality report. Using a new protocol for obtaining dark-noise data, we applied the algorithm to ACRIN patient data and successfully improved the quality of recovered physiological data in some cases.
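
    A hedged sketch of the automated quality-control step described, rejecting data points below a noise floor estimated from dark-noise frames and emitting a data quality report, might look as follows (array shapes and the SNR threshold are illustrative):

```python
# Reject spectral data points whose amplitude is too close to the instrument
# noise floor, and summarize what survived for a per-measurement QC report.
import numpy as np

def qc_filter(amplitude, dark_noise, snr_min=3.0):
    """Mask data points within snr_min of the dark-noise floor."""
    noise_floor = dark_noise.mean(axis=0) + 2 * dark_noise.std(axis=0)
    keep = amplitude > snr_min * noise_floor
    report = {
        "n_total": amplitude.size,
        "n_rejected": int((~keep).sum()),
        "fraction_kept": float(keep.mean()),
    }
    return np.where(keep, amplitude, np.nan), report

rng = np.random.default_rng(3)
dark = np.abs(rng.normal(0.01, 0.002, size=(50, 256)))    # dark-noise frames
signal = np.abs(rng.normal(0.2, 0.1, size=(10, 256)))     # measurement frames
cleaned, report = qc_filter(signal, dark)
print(report)   # would feed the detailed data quality report
```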

  18. The impact of chronic hepatitis B on quality of life: a multinational study of utilities from infected and uninfected persons.

    Science.gov (United States)

    Levy, Adrian R; Kowdley, Kris V; Iloeje, Uchenna; Tafesse, Eskinder; Mukherjee, Jayanti; Gish, Robert; Bzowej, Natalie; Briggs, Andrew H

    2008-01-01

    Chronic hepatitis B (CHB) is a condition that results in substantial morbidity and mortality worldwide because of progressive liver damage. Investigators undertaking economic evaluations of new therapeutic agents require estimates of health-related quality of life (HRQOL). Recently, evidence has begun to accumulate that differences in cultural backgrounds have a quantifiable impact on perceptions of health. The objective was to elicit utilities for six health states that occur after infection with the hepatitis B virus, from infected and uninfected respondents living in jurisdictions with low and with high CHB endemicity. Standard gamble utilities were elicited from hepatitis patients and uninfected respondents using an interviewer-administered survey in the United States, Canada, United Kingdom, Spain, Hong Kong, and mainland China. Generalized linear models were used to estimate the effect on utilities of current health, age and sex, jurisdiction and, for infected respondents, current disease state. The sample included 534 CHB-infected patients and 600 uninfected respondents. CHB and compensated cirrhosis had a moderate impact on HRQOL, with utilities ranging from 0.68 to 0.80. Decompensated cirrhosis and hepatocellular carcinoma had a stronger impact, with utilities ranging from 0.35 to 0.41. Significant variation was observed between countries, with both types of respondents in mainland China and Hong Kong reporting systematically lower utilities. Health states related to CHB infection carry substantial reductions in HRQOL, and the utilities reported in this study provide valuable information for comparing new treatment options. The observed intercountry differences suggest that economic evaluations may benefit from country-specific utility estimates. The extent to which systematic intercountry differences in utilities hold true for other infectious and chronic diseases remains an open question and has considerable implications for the proper conduct and interpretation of

  19. Utilizing knowledge from prior plans in the evaluation of quality assurance

    International Nuclear Information System (INIS)

    Stanhope, Carl; Wu, Q Jackie; Yuan, Lulin; Liu, Jianfei; Hood, Rodney; Yin, Fang-Fang; Adamson, Justus

    2015-01-01

    Increased interest regarding the sensitivity of pre-treatment intensity modulated radiotherapy and volumetric modulated arc radiotherapy (VMAT) quality assurance (QA) to delivery errors has led to the development of dose-volume histogram (DVH) based analysis. This paradigm shift necessitates a change in the acceptance criteria and action tolerance for QA. Here we present a knowledge-based technique to objectively quantify degradations in DVH for prostate radiotherapy. Using machine learning, organ-at-risk (OAR) DVHs from a population of 198 prior patients' plans were adapted to a test patient's anatomy to establish patient-specific DVH ranges. This technique was applied to single-arc prostate VMAT plans to evaluate various simulated delivery errors: systematic single leaf offsets, systematic leaf bank offsets, random normally distributed leaf fluctuations, systematic lag in gantry angle of the multi-leaf collimators (MLCs), fluctuations in dose rate, and delivery of each VMAT arc with a constant rather than variable dose rate. Quantitative Analyses of Normal Tissue Effects in the Clinic suggests V75Gy dose limits of 15% for the rectum and 25% for the bladder; however, the knowledge-based constraints were more stringent: 8.48 ± 2.65% for the rectum and 4.90 ± 1.98% for the bladder. Single leaf offsets of 19 ± 10 mm and single bank offsets of 1.9 ± 0.7 mm resulted in rectum DVHs worse than 97.7% (2σ) of clinically accepted plans. PTV degradations fell outside of the acceptable range for 0.6 ± 0.3 mm leaf offsets, 0.11 ± 0.06 mm bank offsets, 0.6 ± 1.3 mm of random noise, and 1.0 ± 0.7° of gantry-MLC lag. Utilizing a training set comprised of prior treatment plans, machine learning is used to predict a range of achievable DVHs for the test patient's anatomy. Consequently, degradations leading to statistical outliers may be identified.

  20. Metrics for energy resilience

    International Nuclear Information System (INIS)

    Roege, Paul E.; Collier, Zachary A.; Mancillas, James; McDonagh, John A.; Linkov, Igor

    2014-01-01

    Energy lies at the backbone of any advanced society and constitutes an essential prerequisite for economic growth, social order and national defense. However there is an Achilles heel to today's energy and technology relationship; namely a precarious intimacy between energy and the fiscal, social, and technical systems it supports. Recently, widespread and persistent disruptions in energy systems have highlighted the extent of this dependence and the vulnerability of increasingly optimized systems to changing conditions. Resilience is an emerging concept that offers to reconcile considerations of performance under dynamic environments and across multiple time frames by supplementing traditionally static system performance measures to consider behaviors under changing conditions and complex interactions among physical, information and human domains. This paper identifies metrics useful to implement guidance for energy-related planning, design, investment, and operation. Recommendations are presented using a matrix format to provide a structured and comprehensive framework of metrics relevant to a system's energy resilience. The study synthesizes previously proposed metrics and emergent resilience literature to provide a multi-dimensional model intended for use by leaders and practitioners as they transform our energy posture from one of stasis and reaction to one that is proactive and which fosters sustainable growth. - Highlights: • Resilience is the ability of a system to recover from adversity. • There is a need for methods to quantify and measure system resilience. • We developed a matrix-based approach to generate energy resilience metrics. • These metrics can be used in energy planning, system design, and operations

  1. Is quality of care a key predictor of perinatal health care utilization and patient satisfaction in Malawi?

    Science.gov (United States)

    Creanga, Andreea A; Gullo, Sara; Kuhlmann, Anne K Sebert; Msiska, Thumbiko W; Galavotti, Christine

    2017-05-22

    The Malawi government encourages early antenatal care, delivery in health facilities, and timely postnatal care. Efforts to sustain or increase current levels of perinatal service utilization may not achieve desired gains if the quality of care provided is neglected. This study examined predictors of perinatal service utilization and patients' satisfaction with these services with a focus on quality of care. We used baseline, two-stage cluster sampling household survey data collected between November and December, 2012 before implementation of CARE's Community Score Card© intervention in Ntcheu district, Malawi. Women with a birth during the last year (N = 1301) were asked about seeking: 1) family planning, 2) antenatal, 3) delivery, and 4) postnatal care; the quality of care received; and their overall satisfaction with the care received. Specific quality of care items were assessed for each type of service, and up to five such items per type of service were used in analyses. Separate logistic regression models were fitted to examine predictors of family planning, antenatal, delivery, and postnatal service utilization and of complete satisfaction with each of these services; all models were adjusted for women's socio-demographic characteristics, perceptions of the closest facility to their homes, service use indicators, and quality of care items. We found higher levels of perinatal service use than previously documented in Malawi (baseline antenatal care 99.4%; skilled birth attendance 97.3%; postnatal care 77.5%; current family planning use 52.8%). Almost 73% of quality of perinatal care items assessed were favorably reported by > 90% of women. Women reported high overall satisfaction (≥85%) with all types of services examined, higher for antenatal and postnatal care than for family planning and delivery care. We found significant associations between perceived and actual quality of care and both women's use and satisfaction with the perinatal health

  2. Future of the PCI Readmission Metric.

    Science.gov (United States)

    Wasfy, Jason H; Yeh, Robert W

    2016-03-01

    Between 2013 and 2014, the Centers for Medicare and Medicaid Services and the National Cardiovascular Data Registry publicly reported risk-adjusted 30-day readmission rates after percutaneous coronary intervention (PCI) as a pilot project. A key strength of this public reporting effort was risk adjustment with clinical rather than administrative data. Furthermore, because readmission after PCI is common, expensive, and preventable, this metric has substantial potential to improve quality and value in American cardiology care. Despite this, concerns about the metric exist. For example, few PCI readmissions are caused by procedural complications, limiting the extent to which improved procedural technique can reduce readmissions. Also, similar to other readmission measures, PCI readmission is associated with socioeconomic status and race. Accordingly, the metric may unfairly penalize hospitals that care for underserved patients. Perhaps in the context of these limitations, the Centers for Medicare and Medicaid Services has not yet included PCI readmission among the metrics that determine Medicare financial penalties. Nevertheless, provider organizations may still wish to focus on this metric to improve value for cardiology patients. PCI readmission is associated with low-risk chest discomfort and patient anxiety. Therefore, patient education, improved triage mechanisms, and improved care coordination offer opportunities to minimize PCI readmissions. Because PCI readmission is common and costly, reducing it offers provider organizations a compelling target to improve the quality of care, and also performance in contracts involving shared financial risk. © 2016 American Heart Association, Inc.

  3. A new universal colour image fidelity metric

    NARCIS (Netherlands)

    Toet, A.; Lucassen, M.P.

    2003-01-01

    We extend a recently introduced universal grayscale image quality index to a newly developed perceptually decorrelated colour space. The resulting colour image fidelity metric quantifies the distortion of a processed colour image relative to its original version. We evaluated the new colour image
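
    For reference, the grayscale universal image quality index that such a colour fidelity metric builds on can be computed as below; in practice the index is evaluated over sliding local windows and averaged, and the paper's perceptually decorrelated colour space is not reproduced here:

```python
# Universal image quality index (Wang & Bovik): combines correlation,
# luminance, and contrast distortion in a single value in [-1, 1].
import numpy as np

def universal_quality_index(x: np.ndarray, y: np.ndarray) -> float:
    """Q = (4 * cov * mx * my) / ((vx + vy) * (mx^2 + my^2))."""
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    cov = ((x - mx) * (y - my)).sum() / (x.size - 1)
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

rng = np.random.default_rng(4)
img = rng.uniform(50, 200, size=(64, 64))
noisy = img + rng.normal(0, 10, size=img.shape)
print(universal_quality_index(img, img))    # 1.0 for identical images
print(universal_quality_index(img, noisy))  # < 1.0 under distortion
```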

  4. Mapping EQ-5D utilities to GBD 2010 and GBD 2013 disability weights: Results of two pilot studies in Belgium

    NARCIS (Netherlands)

    C. Maertens De Noordhout (Charline); B. Devleesschauwer (Brecht); Gielens, L.; Plasmans, M.H.D.; J.A. Haagsma (Juanita); N. Speybroeck (Niko)

    2017-01-01

    Background: Utilities and disability weights (DWs) are metrics used for calculating Quality-Adjusted Life Years and Disability-Adjusted Life Years (DALYs), respectively. Utilities can be obtained with multi-attribute instruments such as the EuroQol 5 dimensions questionnaire (EQ-5D). In

  5. Improving the Quality of Radiographs in Neonatal Intensive Care Unit Utilizing Educational Interventions.

    Science.gov (United States)

    Gupta, Ashish O; Rorke, Jeanne; Abubakar, Kabir

    2015-08-01

    We aimed to develop an educational tool to improve radiograph quality, sustain this improvement over time, and reduce the number of repeat radiographs. A three-phase quality control study was conducted at a tertiary care NICU. A retrospective data collection (phase 1) revealed suboptimal radiograph quality and led to an educational intervention and the development of an X-ray preparation checklist (primary intervention), followed by a prospective data collection for 4 months (phase 2). At the end of phase 2, interim analysis revealed a gradual decline in radiograph quality, which prompted a more comprehensive educational session with constructive feedback to the NICU staff (secondary intervention), followed by another data collection for 6 months (phase 3). There was a significant improvement in the quality of radiographs obtained after the primary educational intervention (phase 2) compared with phase 1. Over time, radiograph quality declined but still remained significantly better than in phase 1. The secondary intervention resulted in significant improvement in radiograph quality, to >95% in all domains of image quality. No radiographs were repeated in phase 3, compared with 5.8% (16/277) in phase 1. A structured, collaborative educational intervention successfully improves radiograph quality and decreases the need for repeat radiographs and radiation exposure in neonates.

  6. Utility of routine data sources for feedback on the quality of cancer care: an assessment based on clinical practice guidelines

    OpenAIRE

    Coory, Michael; Thompson, Bridie; Baade, Peter; Fritschi, Lin

    2009-01-01

    Abstract Background Not all cancer patients receive state-of-the-art care and providing regular feedback to clinicians might reduce this problem. The purpose of this study was to assess the utility of various data sources in providing feedback on the quality of cancer care. Methods Published clinical practice guidelines were used to obtain a list of processes-of-care of interest to clinicians. These were assigned to one of four data categories according to their availability and the marginal ...

  7. Extended depth of focus contact lenses vs. two commercial multifocals: Part 1. Optical performance evaluation via computed through-focus retinal image quality metrics

    Directory of Open Access Journals (Sweden)

    Ravi C. Bakaraju

    2018-01-01

    Conclusion: With through-focus retinal image quality as the gauge of optical performance, we demonstrated that the prototype EDOF designs were less susceptible to variations in pupil size, inherent ocular aberrations and decentration than the commercial designs. Ascertaining whether these incremental improvements translate into a clinically palpable outcome requires investigation through human trials.

  8. Innovating for quality and value: Utilizing national quality improvement programs to identify opportunities for responsible surgical innovation.

    Science.gov (United States)

    Woo, Russell K; Skarsgard, Erik D

    2015-06-01

    Innovation in surgical techniques, technology, and care processes are essential for improving the care and outcomes of surgical patients, including children. The time and cost associated with surgical innovation can be significant, and unless it leads to improvements in outcome at equivalent or lower costs, it adds little or no value from the perspective of the patients, and decreases the overall resources available to our already financially constrained healthcare system. The emergence of a safety and quality mandate in surgery, and the development of the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) allow needs-based surgical care innovation which leads to value-based improvement in care. In addition to general and procedure-specific clinical outcomes, surgeons should consider the measurement of quality from the patients' perspective. To this end, the integration of validated Patient Reported Outcome Measures (PROMs) into actionable, benchmarked institutional outcomes reporting has the potential to facilitate quality improvement in process, treatment and technology that optimizes value for our patients and health system. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Effectiveness of trauma team on medical resource utilization and quality of care for patients with major trauma.

    Science.gov (United States)

    Wang, Chih-Jung; Yen, Shu-Ting; Huang, Shih-Fang; Hsu, Su-Chen; Ying, Jeremy C; Shan, Yan-Shen

    2017-07-24

    Trauma is one of the leading causes of death in Taiwan, and its medical expenditure has escalated drastically. This study aimed to explore the effectiveness of the trauma team, established in September 2010, on medical resource utilization and quality of care among major trauma patients. This was a retrospective study using the trauma registry data bank and inpatient medical service charge databases. Study subjects were major trauma patients admitted to a medical center in Tainan between 2009 and 2013, and were divided into a case group (January 2011 to August 2013) and a comparison group (January 2009 to August 2010). Significant reductions in several items of medical resource utilization were identified after the establishment of the trauma team. In the subgroup of patients who survived to discharge, examination, radiology and operation charges declined significantly. Radiation and examination charges were significantly reduced in the subcategories of ISS = 16-24 and ISS > 24, respectively. However, no significant effect on quality of care was identified. The establishment of a trauma team is effective in containing medical resource utilization. To verify the effectiveness on quality of care, an extended time frame and additional study subjects are needed.

  10. Diabetes and quality of life: Comparing results from utility instruments and Diabetes-39.

    Science.gov (United States)

    Chen, Gang; Iezzi, Angelo; McKie, John; Khan, Munir A; Richardson, Jeff

    2015-08-01

    To compare the Diabetes-39 (D-39) with six multi-attribute utility (MAU) instruments (15D, AQoL-8D, EQ-5D, HUI3, QWB, and SF-6D), and to develop mapping algorithms which could be used to transform D-39 scores into MAU scores. Self-reported diabetes sufferers (N=924) and members of the healthy public (N=1760), aged 18 years and over, were recruited from 6 countries (Australia 18%, USA 18%, UK 17%, Canada 16%, Norway 16%, and Germany 15%). Apart from the QWB, which was distributed normally, non-parametric rank tests were used to compare subgroup utilities and D-39 scores. Mapping algorithms were estimated using ordinary least squares (OLS) and generalised linear models (GLM). MAU instruments discriminated between diabetes patients and the healthy public; however, utilities varied between instruments. The 15D, SF-6D, and AQoL-8D had the strongest correlations with the D-39. Except for the HUI3, there were significant differences by gender. Mapping algorithms based on the OLS estimator consistently gave better goodness-of-fit results. The mean absolute error (MAE) values ranged from 0.061 to 0.147, the root mean square error (RMSE) values from 0.083 to 0.198, and the R-square statistics from 0.428 to 0.610. Based on MAE and RMSE values the preferred mapping is D-39 into 15D. R-square statistics and the range of predicted utilities indicate the preferred mapping is D-39 into AQoL-8D. Utilities estimated from different MAU instruments differ significantly, and the outcome of a study could depend upon the instrument used. The algorithms reported in this paper enable D-39 data to be mapped into utilities predicted from any of six instruments. This provides choice for those conducting cost-utility analyses. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
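
    A hedged sketch of the mapping exercise follows: regress a MAU utility score on D-39 dimension scores with OLS and judge the mapping by MAE and RMSE as the study does; the data, dimension count, and coefficients are synthetic stand-ins, not the study's estimates:

```python
# OLS mapping from disease-specific scores to a utility scale, evaluated by
# MAE and RMSE on a held-out split.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 900
D39 = rng.uniform(0, 100, size=(n, 5))     # five D-39 dimension scores, 0-100
true_w = np.array([-0.002, -0.001, -0.0015, -0.0005, -0.001])
utility = np.clip(0.95 + D39 @ true_w + rng.normal(0, 0.05, n), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(D39, utility, random_state=0)
ols = LinearRegression().fit(X_tr, y_tr)
pred = np.clip(ols.predict(X_te), 0, 1)    # keep predictions on the utility scale

print("MAE :", mean_absolute_error(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```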

  11. Enterprise Sustainment Metrics

    Science.gov (United States)

    2015-06-19

    …must have the authority to “take immediate action to rectify situations that are negatively impacting KPIs” (Parmenter, 2010: 31). In the current state, the Air Force’s AA and PBL metrics are once again split… “highest profitability and shareholder value for each company” (2014: 273). By systematically diagraming a process, either through a swim lane flowchart…

  12. Symmetries of the dual metrics

    International Nuclear Information System (INIS)

    Baleanu, D.

    1998-01-01

    The geometric duality between the metric g_μν and a Killing tensor K_μν is studied. The conditions were found under which the symmetries of the metric g_μν and the dual metric K_μν are the same. Dual spinning space was constructed without the introduction of torsion. The general results are applied to the case of the Kerr-Newman metric.
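
    For context, the standard definitions behind this duality (textbook relations, not equations quoted from the paper) are the generalized Killing equation for the symmetric tensor K, which geometric duality then reads as a metric in its own right:

```latex
% Killing tensor of the metric g: the symmetrized covariant derivative
% vanishes. Geometric duality treats K_{\mu\nu} itself as a (dual) metric.
\nabla_{(\lambda} K_{\mu\nu)} = 0 , \qquad K_{\mu\nu} = K_{\nu\mu} .
```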

  13. Environmental benefits of utilizing biogas and upgrading it to natural gas quality

    International Nuclear Information System (INIS)

    Eriksen, K.; Jensby, T.; Norddahl, B.

    1997-01-01

    In case of successful development of the upgrading technology, the utilization of biomass for biogas production may be increased tenfold in Funen. The pilot plant using low-pressure membranes for the upgrading of biogas is expected to be able to undertake a cost-neutral upgrading of biogas due to earnings which arise from the sale of CO₂. It appears that the utilization of the upgrading technology for the production of 10⁷ Nm³ within the Funen region alone will result in a reduction of the greenhouse effect corresponding to 74 × 10³ t CO₂. (au)

  14. Landscape pattern metrics and regional assessment

    Science.gov (United States)

    O'Neill, R. V.; Riitters, K.H.; Wickham, J.D.; Jones, K.B.

    1999-01-01

    The combination of remote imagery data, geographic information systems software, and landscape ecology theory provides a unique basis for monitoring and assessing large-scale ecological systems. The unique feature of the work has been the need to develop and interpret quantitative measures of spatial pattern: the landscape indices. This article reviews what is known about the statistical properties of these pattern metrics and suggests some additional metrics based on island biogeography, percolation theory, hierarchy theory, and economic geography. Assessment applications of this approach have required interpreting the pattern metrics in terms of specific environmental endpoints, such as wildlife and water quality, and research into how to represent synergistic effects of many overlapping sources of stress.
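
    As a concrete example of the kind of pattern metric reviewed, the sketch below labels contiguous habitat patches in a binary land-cover grid and computes the patch count and largest-patch share, quantities with a percolation-theory reading; the grid is illustrative:

```python
# Label contiguous habitat patches in a binary land-cover grid and compute
# two simple landscape indices: patch count and largest-patch index.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
cover = rng.random((100, 100)) < 0.55        # binary habitat map (illustrative)

labels, n_patches = ndimage.label(cover)     # 4-neighbour connectivity
sizes = ndimage.sum(cover, labels, index=range(1, n_patches + 1))

largest_patch_index = sizes.max() / cover.sum()   # share of habitat in biggest patch
print(f"patches: {n_patches}, largest-patch index: {largest_patch_index:.3f}")
```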

  15. Study of water quality improvements during riverbank filtration at three midwestern United States drinking water utilities

    Science.gov (United States)

    Weiss, W.; Bouwer, E.; Ball, W.; O'Melia, C.; Lechevallier, M.; Arora, H.; Aboytes, R.; Speth, T.

    2003-04-01

    Riverbank filtration (RBF) is a process during which surface water is subjected to subsurface flow prior to extraction from wells. During infiltration and soil passage, surface water is subjected to a combination of physical, chemical, and biological processes such as filtration, dilution, sorption, and biodegradation that can significantly improve the raw water quality (Tufenkji et al, 2002; Kuehn and Mueller, 2000; Kivimaki et al, 1998; Stuyfzand, 1998). Transport through alluvial aquifers is associated with a number of water quality benefits, including removal of microbes, pesticides, total and dissolved organic carbon (TOC and DOC), nitrate, and other contaminants (Hiscock and Grischek, 2002; Tufenkji et al., 2002; Ray et al, 2002; Kuehn and Mueller, 2000; Doussan et al, 1997; Cosovic et al, 1996; Juttner, 1995; Miettinen et al, 1994). In comparison to most groundwater sources, alluvial aquifers that are hydraulically connected to rivers are typically easier to exploit (shallow) and more highly productive for drinking water supplies (Doussan et al, 1997). Increased applications of RBF are anticipated as drinking water utilities strive to meet increasingly stringent drinking water regulations, especially with regard to the provision of multiple barriers for protection against microbial pathogens, and with regard to tighter regulations for disinfection by-products (DBPs), such as trihalomethanes (THMs) and haloacetic acids (HAAs). In the above context, research was conducted to document the water quality benefits during RBF at three major river sources in the mid-western United States, specifically with regard to DBP precursor organic matter and microbial pathogens. Specific objectives were to: 1. Evaluate the merits of RBF for removing/controlling DBP precursors and certain other drinking water contaminants (e.g. microorganisms). 2. Evaluate whether RBF can improve finished drinking water quality by removing and/or altering natural organic matter (NOM) in a

  16. Kerr metric in cosmological background

    Energy Technology Data Exchange (ETDEWEB)

    Vaidya, P C [Gujarat Univ., Ahmedabad (India). Dept. of Mathematics

    1977-06-01

    A metric satisfying Einstein's equation is given which in the vicinity of the source reduces to the well-known Kerr metric and which at large distances reduces to the Robertson-Walker metric of a homogeneous cosmological model. The radius of the event horizon of the Kerr black hole in the cosmological background is determined.

  17. Learning Low-Dimensional Metrics

    OpenAIRE

    Jain, Lalit; Mason, Blake; Nowak, Robert

    2017-01-01

    This paper investigates the theoretical foundations of metric learning, focused on three key questions that are not fully addressed in prior work: 1) we consider learning general low-dimensional (low-rank) metrics as well as sparse metrics; 2) we develop upper and lower (minimax) bounds on the generalization error; 3) we quantify the sample complexity of metric learning in terms of the dimension of the feature space and the dimension/rank of the underlying metric; 4) we also bound the accuracy ...
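
    To make the setting concrete, here is a hedged sketch (not the authors' algorithm; the hinge margin, learning rate, and triplet format are assumptions) of learning a low-rank Mahalanobis metric M = L L^T from relative-similarity triplets by gradient descent:

    ```python
    import numpy as np

    def learn_low_rank_metric(triplets, dim, rank, lr=0.01, epochs=50):
        """Fit M = L L^T (PSD, rank <= `rank`) from triplets (x, y, z),
        where x should end up closer to y than to z under the metric."""
        rng = np.random.default_rng(0)
        L = rng.normal(scale=0.1, size=(dim, rank))      # low-rank factor
        for _ in range(epochs):
            for x, y, z in triplets:
                u, v = x - y, x - z
                d_pos = np.sum((L.T @ u) ** 2)           # distance to similar point
                d_neg = np.sum((L.T @ v) ** 2)           # distance to dissimilar point
                if 1.0 + d_pos - d_neg > 0:              # hinge loss is active
                    grad = 2.0 * (np.outer(u, u) - np.outer(v, v)) @ L
                    L -= lr * grad
        return L @ L.T

    # toy usage: x should be nearer y than z in the learned metric
    x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
    y = x + 0.1
    z = np.array([0.0, 1.0, 0.0, 0.0, 0.0])
    M = learn_low_rank_metric([(x, y, z)], dim=5, rank=2)
    ```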

  18. Importance and utility of microbial elements in evaluating soil quality: case studies in silvopastoral systems

    Directory of Open Access Journals (Sweden)

    Victoria Eugenia Vallejo Quintero

    2013-07-01

    Full Text Available Environmental sustainability is achieved by maintaining and improving soil quality. This quality is defined as "the ability of soil to function" and is evaluated by measuring a minimum data set corresponding to different soil properties (physical, chemical, and biological). However, these properties do not meet all the conditions of ideal indicators, such as: clearly discriminating between systems of use and/or management, sensitivity to stress conditions associated with anthropogenic actions, easy measurement, accessibility to many users, and short response time. Because loss of quality is associated with the alteration of many processes performed by soil microorganisms, microbial properties meet the above conditions and have been proposed as valid indicators for diagnosing the impact of changes in land use and ecosystem restoration. Thus, through the evaluation of the density, activity, and/or structure-composition of microorganisms, we can determine whether current management systems maintain, improve, or degrade the soil. In this article we review the main concepts related to soil quality and its indicators. We discuss the effect of the implementation of silvopastoral systems on soil quality, with an emphasis on the use of microbial indicators.

  19. The effect of phytase and fructooligosaccharide supplementation on growth performance, bone quality, and phosphorus utilization in broiler chickens.

    Science.gov (United States)

    Shang, Y; Rogiewicz, A; Patterson, R; Slominski, B A; Kim, W K

    2015-05-01

    An experiment was conducted to investigate the effects of phytase and 2 levels of fructooligosaccharide (FOS) supplementation on growth performance, bone mineralization, and P utilization of broiler chickens. A total of 210 day-old male broiler chickens (Ross) were randomly placed into 7 dietary treatments consisting of 6 replicates with 5 birds per pen. The experiment was designed as an augmented 2 × 3 factorial arrangement with 0 or 500 U/kg of phytase and 0, 0.5%, or 1% of FOS added to a reduced Ca (0.8%) and available P (0.25%) negative control diet (NC). A positive control diet (PC) that contained 1% Ca and 0.45% available P was also included. During the entire experimental period, phytase supplementation significantly improved growth performance (P < 0.05). Phytase supplementation increased femur BMD (P < 0.05). Phytase alone and in combination with 0.5% FOS increased P utilization significantly when compared with other treatments (P < 0.05). Overall, phytase supplementation in low Ca and P diets improved growth performance, bone quality, and P utilization. However, supplementing NC diets with phytase and FOS did not result in bone mineralization values comparable with that of the PC diet. The application of dietary FOS alone had a negative effect on broiler bone quality. © 2015 Poultry Science Association Inc.

  20. Distance from health facility and mothers' perception of quality related to skilled delivery service utilization in northern Ethiopia.

    Science.gov (United States)

    Fisseha, Girmatsion; Berhane, Yemane; Worku, Alemayehu; Terefe, Wondwossen

    2017-01-01

    Poor maternal health service utilization is one of the contributing factors to a high level of maternal and newborn mortality in Ethiopia. The factors associated with utilization of services are believed to differ from one context to another. We assessed the factors associated with skilled delivery service utilization in rural northern Ethiopia. A community-based survey was conducted among mothers who gave birth in the 12 months preceding the study period, from January to February 2015, in the Tigray region of Ethiopia. A multistage sampling technique was used to select mothers from the identified clusters. Households within a 10 km radius of the health facility were taken as a cluster for the community survey. Data were collected using face-to-face interviews at the household level. We compared the mothers who reported giving birth to the index child in a health facility and those who reported delivering at home, in order to identify the predictors of skilled delivery utilization. A multivariable logistic regression model was used to determine the predictors of skilled delivery service utilization. The results are presented with odds ratios (OR) and 95% confidence intervals (CI). A total of 1,796 mothers participated in the study, with a 100% response rate. Distance to health facilities (adjusted odds ratio [AOR] = 0.53 [95% CI: 0.39, 0.71]), mothers' perception of the availability of adequate equipment in the delivery service in their catchment area (AOR = 1.5 [95% CI: 1.11, 2.13]), experiencing any complication during childbirth, using antenatal care, lower birth order, and having an educated partner were the significant predictors of skilled delivery service utilization. Implementing community-based intervention programs that address the physical accessibility of delivery services, such as ambulance service, road issues, and waiting rooms, and improving the quality of maternity services will likely reduce the current problem.

  1. Distance from health facility and mothers’ perception of quality related to skilled delivery service utilization in northern Ethiopia

    Science.gov (United States)

    Fisseha, Girmatsion; Berhane, Yemane; Worku, Alemayehu; Terefe, Wondwossen

    2017-01-01

    Background Poor maternal health service utilization is one of the contributing factors to a high level of maternal and newborn mortality in Ethiopia. The factors associated with utilization of services are believed to differ from one context to another. We assessed the factors associated with skilled delivery service utilization in rural northern Ethiopia. Subjects and methods A community-based survey was conducted among mothers who gave birth in the 12 months preceding the study period, from January to February 2015, in the Tigray region of Ethiopia. A multistage sampling technique was used to select mothers from the identified clusters. Households within a 10 km radius of the health facility were taken as a cluster for the community survey. Data were collected using face-to-face interviews at the household level. We compared the mothers who reported giving birth to the index child in a health facility and those who reported delivering at home, in order to identify the predictors of skilled delivery utilization. A multivariable logistic regression model was used to determine the predictors of skilled delivery service utilization. The results are presented with odds ratios (OR) and 95% confidence intervals (CI). Results A total of 1,796 mothers participated in the study, with a 100% response rate. Distance to health facilities (adjusted odds ratio [AOR] = 0.53 [95% CI: 0.39, 0.71]), mothers' perception of the availability of adequate equipment in the delivery service in their catchment area (AOR = 1.5 [95% CI: 1.11, 2.13]), experiencing any complication during childbirth, using antenatal care, lower birth order, and having an educated partner were the significant predictors of skilled delivery service utilization. Conclusion Implementing community-based intervention programs that address the physical accessibility of delivery services, such as ambulance service, road issues, and waiting rooms, and improving the quality of maternity services will likely reduce the current problem.

  2. Historical instrumental climate data for Australia - quality and utility for palaeoclimatic studies

    Science.gov (United States)

    Nicholls, Neville; Collins, Dean; Trewin, Blair; Hope, Pandora

    2006-10-01

    The quality and availability of climate data suitable for palaeoclimatic calibration and verification for the Australian region are discussed and documented. Details of the various datasets, including problems with the data, are presented. High-quality datasets, where such problems are reduced or even eliminated, are discussed. Many climate datasets are now analysed onto grids, facilitating the preparation of regional-average time series. Work is under way to produce such high-quality, gridded datasets for a variety of hitherto unavailable climate data, including surface humidity, pan evaporation, wind, and cloud. An experiment suggests that only a relatively small number of palaeoclimatic time series could provide a useful estimate of long-term changes in Australian annual average temperature.

  3. UTILIZATION OF QUALITY MANAGERIAL SYSTEMS IN BUSINESS ENTITIES IN THE SLOVAK REPUBLIC

    Directory of Open Access Journals (Sweden)

    Zuzana Kapsdorferová

    2015-06-01

    Full Text Available Current global trends force businesses to enhance their competitiveness via quality, innovation, streamlining of production processes and shortening of production cycles, development of employees, and satisfaction of customer needs. At the same time, society demands that entities place more emphasis on sustainable development, environmental protection, social responsibility, and other social aspects of business. Many firms seek ways to master such important demands and gain recognition on the market. One avenue to achieving the planned results is the implementation of Total Quality Management systems, which also provide grounds for attaining the status of a reliable business partner. This research paper reports on a study conducted to determine the status of implementation of quality management systems in Slovak businesses and to identify the reasons for, and contributions of, the use of these systems in their activities.

  4. Using Principal Component and Tidal Analysis as a Quality Metric for Detecting Systematic Heading Uncertainty in Long-Term Acoustic Doppler Current Profiler Data

    Science.gov (United States)

    Morley, M. G.; Mihaly, S. F.; Dewey, R. K.; Jeffries, M. A.

    2015-12-01

    Ocean Networks Canada (ONC) operates the NEPTUNE and VENUS cabled ocean observatories to collect data on physical, chemical, biological, and geological ocean conditions over multi-year time periods. Researchers can download real-time and historical data from a large variety of instruments to study complex earth and ocean processes from their home laboratories. Ensuring that the users are receiving the most accurate data is a high priority at ONC, requiring quality assurance and quality control (QAQC) procedures to be developed for all data types. While some data types have relatively straightforward QAQC tests, such as scalar data range limits that are based on expected observed values or measurement limits of the instrument, for other data types the QAQC tests are more comprehensive. Long time series of ocean currents from Acoustic Doppler Current Profilers (ADCP), stitched together from multiple deployments over many years, are one such data type where systematic data biases are more difficult to identify and correct. Data specialists at ONC are working to quantify systematic compass heading uncertainty in long-term ADCP records at each of the major study sites using the internal compass, remotely operated vehicle bearings, and analytical tools such as principal component analysis (PCA) to estimate the optimal instrument alignments. In addition to using PCA, some work has been done to estimate the main components of the current at each site using tidal harmonic analysis. This paper describes the key challenges and presents preliminary PCA and tidal analysis approaches used by ONC to improve long-term observatory current measurements.
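
    A minimal sketch of the PCA idea (illustrative only, not ONC's pipeline; the reference orientation of 35 degrees and the synthetic velocities are assumptions) is to compute the major axis of variance of the horizontal currents and compare it against an independently known flow orientation:

    ```python
    import numpy as np

    def principal_axis_deg(u, v):
        """Orientation (degrees from north, 0-180) of the major variance axis
        of horizontal currents with east (u) and north (v) components."""
        uv = np.column_stack([u - u.mean(), v - v.mean()])
        cov = np.cov(uv, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
        major = eigvecs[:, np.argmax(eigvals)]        # largest-variance direction
        # atan2(east, north) converts math convention to a compass bearing
        return np.degrees(np.arctan2(major[0], major[1])) % 180.0

    rng = np.random.default_rng(1)
    u = rng.normal(size=1000)                         # synthetic east velocity
    v = 0.5 * u + 0.1 * rng.normal(size=1000)         # correlated north velocity
    expected_deg = 35.0                               # assumed isobath orientation
    offset = principal_axis_deg(u, v) - expected_deg
    print(f"apparent heading offset: {offset:+.1f} deg")
    ```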

  5. Extended depth of focus contact lenses vs. two commercial multifocals: Part 1. Optical performance evaluation via computed through-focus retinal image quality metrics.

    Science.gov (United States)

    Bakaraju, Ravi C; Ehrmann, Klaus; Ho, Arthur

    To compare the computed optical performance of prototype lenses designed using deliberate manipulation of higher-order spherical aberrations to extend depth of focus (EDOF) with two commercial multifocals. Emmetropic, presbyopic, schematic eyes were coupled with prototype EDOF and commercial multifocal lenses (Acuvue Oasys for Presbyopia, AOP, Johnson & Johnson; Air Optix Aqua multifocal, AOMF, Alcon). For each test configuration, the through-focus retinal image quality (TFRIQ) values were computed over 21 vergences, ranging from -0.50 to 2.00 D, in 0.125 D steps. Analysis was performed considering eyes with three different inherent aberration profiles, five different pupils, and five different lens decentration levels. Except for the LOW design, the AOP lenses offered 'bifocal'-like TFRIQ performance. Lens performance was relatively independent of pupil and aberrations, but not of centration. Contrastingly, AOMF demonstrated distance-centric performance, most dominant in the LOW followed by the MED and HIGH designs. AOMF lenses were the most sensitive to pupil, aberrations, and centration. The prototypes demonstrated a 'lift-off' in TFRIQ performance, particularly at intermediate and near, without trading performance at distance. When compared with AOP and AOMF, the EDOF lenses demonstrated reduced sensitivity to pupil, aberrations, and centration. With through-focus retinal image quality as the gauge of optical performance, we demonstrated that the prototype EDOF designs were less susceptible to variations in pupil, inherent ocular aberrations, and decentration than the commercial designs. To ascertain whether these incremental improvements translate to a clinically palpable outcome requires investigation through human trials. Copyright © 2017 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  6. Distance from health facility and mothers’ perception of quality related to skilled delivery service utilization in northern Ethiopia

    Directory of Open Access Journals (Sweden)

    Fisseha G

    2017-10-01

    birth order and having an educated partner were the significant predictors of skilled delivery service utilization. Conclusion: Implementing community-based intervention programs that will address the physical accessibility of delivery services, such as the ambulance service, road issues and waiting rooms, and improving quality maternity service will likely reduce the current problem. Keywords: distance from health facility, perception of quality, skilled delivery service utilization, Northern Ethiopia

  7. A Retention Assessment Process: Utilizing Total Quality Management Principles and Focus Groups

    Science.gov (United States)

    Codjoe, Henry M.; Helms, Marilyn M.

    2005-01-01

    Retaining students is a critical topic in higher education. Methodologies abound to gather attrition data as well as key variables important to retention. Using the theories of total quality management and focus groups, this case study gathers and reports data from current college students. Key results, suggestions for replication, and areas for…

  8. Utilization of ko-factors for quality assurance in neutron activation analysis

    DEFF Research Database (Denmark)

    Heydorn, K.; Damsgaard, E.

    1994-01-01

    deviations from unity in case of stoichiometric or other gross errors. Quality assurance based on the Analysis of Precision of k0-ratios from replicate analyses detects unexpected variability associated with inaccurate comparator standards. In two actual cases of certification, lack of statistical control...

  9. Effects of mowing utilization on forage yield and quality in five oat ...

    African Journals Online (AJOL)

    Oat (Avena sativa) is grown to provide feed in winter for livestock production in the alpine area of the Qinghai-Tibetan Plateau. The effects of early cutting (T1), late cutting (T2), as well as once cutting and twice cutting (T3), on forage yields and qualities were investigated for five oat varieties (YTA, CNC, B3, Q473 and Q444).

  10. Quality improvement in healthcare delivery utilizing the patient-centered medical home model.

    Science.gov (United States)

    Akinci, Fevzi; Patel, Poonam M

    2014-01-01

    Despite the fact that the United States dedicates so much of its resources to healthcare, the current healthcare delivery system still faces significant quality challenges. The lack of effective communication and coordination of care services across the continuum of care poses disadvantages for those requiring long-term management of their chronic conditions. This is why the new transformation in healthcare known as the patient-centered medical home (PCMH) can help restore confidence in our population that the healthcare services they receive are of the utmost quality and will effectively enhance their quality of life. Healthcare using the PCMH model is delivered with the patient at the center of the transformation and by reinvigorating primary care. The PCMH model strives to deliver effective quality care while attempting to reduce costs. In order to relieve some of our healthcare system distresses, organizations can modify their delivery of care to be patient centered. Enhanced coordination of services, better provider access, self-management, and a team-based approach to care represent some of the key principles of the PCMH model. Patients that can most benefit are those that require long-term management of their conditions, such as chronic disease and behavioral health patient populations. The PCMH is a feasible option for delivery reform, as pilot studies have documented successful outcomes. Controversy about the lack of a medical neighborhood has created concern about the overall sustainability of the medical home. The medical home can stand independently and continuously provide enhanced care services as a movement toward higher quality care while organizations and government policy assess what types of incentives to put into place for the full collaboration and coordination of care in the healthcare system.

  11. Medicare program; acquisition, protection and disclosure of utilization and quality control peer review organization (PRO) information--HCFA. Proposed rule.

    Science.gov (United States)

    1984-04-16

    This proposal would govern the acquisition, protection and disclosure of information obtained or generated by Utilization and Quality Control Peer Review Organizations (PROs). The Peer Review Improvement Act of 1982 authorizes PROs to acquire information necessary to fulfill their duties and functions, places limits on the disclosure of PRO information, and establishes penalties for unauthorized disclosure. These regulations would implement the PROs' statutory right of access to necessary information and set forth their responsibilities to assure that information once acquired is adequately safeguarded, and used only for proper purposes.

  12. Identifying influence of perceived quality and satisfaction on the utilization status of the community clinic services; Bangladesh context.

    Science.gov (United States)

    Karim, R M; Abdullah, M S; Rahman, A M; Alam, A M

    2015-04-01

    Bangladesh is one of the few countries of the world that provides free medical services at the community level through various public health facilities. It is now evident that clients' perceived quality of services and their expectations of service standards affect health service utilization to a great extent. The aim of the study was to develop and validate measures for the perception and satisfaction of primary health care quality in the Bangladesh context and to identify their influence on the utilization status of Community Clinic (CC) services. This mixed-method cross-sectional survey was conducted from January to June 2012 in the catchment areas of 12 Community Clinics (CCs). Since most of the outcome indicators focus mainly on women and children, women having children less than two years of age were randomly assigned and interviewed for the study purpose. Data for the development of the perceived service quality and satisfaction tools were collected through Focus Group Discussions (FGDs) and key informant interviews, and data for measuring the utilization status were collected with an interviewer-administered, pretested, semi-structured questionnaire. About 95% of the respondents were Muslims and 5% were Hindus. The average age of the respondents was 23.38 (SD ± 4.15) years and almost all of them were homemakers. The average monthly expenditure of their families was 7462.92 (SD ± 2545) BDT, equivalent to 95 (SD ± 32) US$. To measure lay people's perception and satisfaction regarding primary health care service quality, two reference tools were chosen: Slim Haddad's 20-item scale for measuring perceived quality of primary health care services (PQPCS), validated in Guinea and Burkina Faso, and the primary care satisfaction survey for women (PCSSW), a 24-item survey tool developed by Scholle and colleagues (2004) and validated in Turkey. Based on those, two psychometric research instruments were developed: a 24-item PQPCS scale (Cronbach's α = 0.89) and a 22-item Community Clinic
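
    For reference, the internal-consistency statistic quoted above (Cronbach's α) can be computed from an items-by-respondents response matrix as follows; this is a generic sketch, not the study's analysis code, and the simulated responses are purely illustrative:

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: 2D array with rows = respondents, columns = scale items."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
        return k / (k - 1) * (1.0 - item_vars / total_var)

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))                      # shared construct
    responses = latent + 0.5 * rng.normal(size=(200, 24))   # 24 correlated items
    print(f"alpha = {cronbach_alpha(responses):.2f}")
    ```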

  13. Metrics with vanishing quantum corrections

    International Nuclear Information System (INIS)

    Coley, A A; Hervik, S; Gibbons, G W; Pope, C N

    2008-01-01

    We investigate solutions of the classical Einstein or supergravity equations that solve any set of quantum-corrected Einstein equations in which the Einstein tensor plus a multiple of the metric is equated to a symmetric conserved tensor T_{μν}(g_{αβ}, ∂_τ g_{αβ}, ∂_τ ∂_σ g_{αβ}, ...) constructed from sums of terms involving contractions of the metric and powers of arbitrary covariant derivatives of the curvature tensor. A classical solution, such as an Einstein metric, is called universal if, when evaluated on that Einstein metric, T_{μν} is a multiple of the metric. A Ricci-flat classical solution is called strongly universal if, when evaluated on that Ricci-flat metric, T_{μν} vanishes. It is well known that pp-waves in four spacetime dimensions are strongly universal. We focus attention on a natural generalization: Einstein metrics with holonomy Sim(n - 2) in which all scalar invariants are zero or constant. In four dimensions we demonstrate that the generalized Ghanam-Thompson metric is weakly universal and that the Goldberg-Kerr metric is strongly universal; indeed, we show that universality extends to all four-dimensional Sim(2) Einstein metrics. We also discuss generalizations to higher dimensions

  14. Utilizing Operational and Improved Remote Sensing Measurements to Assess Air Quality Monitoring Model Forecasts

    Science.gov (United States)

    Gan, Chuen-Meei

    Air quality model forecasts from Weather Research and Forecast (WRF) and Community Multiscale Air Quality (CMAQ) are often used to support air quality applications such as regulatory issues and scientific inquiries on atmospheric science processes. In urban environments, these models become more complex due to the inherent complexity of the land surface coupling and the enhanced pollutant emissions. This makes it very difficult to diagnose the model if the surface parameter forecasts, such as PM2.5 (particulate matter with aerodynamic diameter less than 2.5 μm), are not accurate. For this reason, getting accurate boundary layer dynamic forecasts is as essential as quantifying realistic pollutant emissions. In this thesis, we explore the usefulness of vertical sounding measurements for assessing meteorological and air quality forecast models. In particular, we focus on assessing the WRF model (12 km x 12 km) coupled with the CMAQ model for the urban New York City (NYC) area using multiple vertical profiling and column-integrated remote sensing measurements. This assessment is helpful in probing the root causes of WRF-CMAQ overestimates of surface PM2.5 occurring both predawn and post-sunset in the NYC area during the summer. In particular, we find that the significant underestimate in the WRF PBL height forecast is a key factor in explaining this anomaly. On the other hand, the model predictions of the PBL height during daytime, when convective heating dominates, were found to be highly correlated with lidar-derived PBL heights with minimal bias. Additional topics covered in this thesis include a mathematical method using a direct Mie scattering approach to convert aerosol microphysical properties from CMAQ into optical parameters, making direct comparisons with lidar and multispectral radiometers feasible. Finally, we explore some tentative ideas on combining visible (VIS) and mid-infrared (MIR) sensors to better separate aerosols into fine and coarse modes.

  15. lakemorpho: Calculating lake morphometry metrics in R.

    Science.gov (United States)

    Hollister, Jeffrey; Stachelek, Joseph

    2017-01-01

    Metrics describing the shape and size of lakes, known as lake morphometry metrics, are important for any limnological study. In cases where a lake has long been the subject of study, these data are often already collected and are openly available. Many other lakes have these data collected, but access is challenging as the data are often stored on individual computers (or worse, in filing cabinets) and are available only to the primary investigators. The vast majority of lakes fall into a third category in which the data are not available. This makes broad-scale modelling of lake ecology a challenge, as some of the key information about in-lake processes is unavailable. While this valuable in situ information may be difficult to obtain, several national datasets exist that may be used to model and estimate lake morphometry. In particular, digital elevation models and hydrography have been shown to be predictive of several lake morphometry metrics. The R package lakemorpho has been developed to utilize these data and estimate the following morphometry metrics: surface area, shoreline length, major axis length, minor axis length, major and minor axis length ratio, shoreline development, maximum depth, mean depth, volume, maximum lake length, mean lake width, maximum lake width, and fetch. In this software tool article we describe the motivation behind developing lakemorpho, discuss the implementation in R, and describe the use of lakemorpho with an example of a typical use case.
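
    Two of the listed metrics have simple closed forms; the sketch below (plain Python rather than the R package itself, with made-up input values) shows shoreline development, the ratio of shoreline length to the circumference of a circle of equal area, and mean depth:

    ```python
    import math

    def shoreline_development(shoreline_m, area_m2):
        # 1.0 for a perfectly circular lake, larger for convoluted shorelines
        return shoreline_m / (2.0 * math.sqrt(math.pi * area_m2))

    def mean_depth(volume_m3, area_m2):
        # volume divided by surface area
        return volume_m3 / area_m2

    # a 2.5 km^2 lake with 12 km of shoreline and 2.0e7 m^3 of water (made up)
    print(shoreline_development(12_000.0, 2_500_000.0))  # ~2.14
    print(mean_depth(2.0e7, 2_500_000.0))                # 8.0 m
    ```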

  16. Community drinking water quality monitoring data: utility for public health research and practice.

    Science.gov (United States)

    Jones, Rachael M; Graber, Judith M; Anderson, Robert; Rockne, Karl; Turyk, Mary; Stayner, Leslie T

    2014-01-01

    Environmental Public Health Tracking (EPHT) tracks the occurrence and magnitude of environmental hazards and associated adverse health effects over time. The EPHT program has formally expanded its scope to include finished drinking water quality. Our objective was to describe the features, strengths, and limitations of using finished drinking water quality data from community water systems (CWSs) for EPHT applications, focusing on atrazine and nitrogen compounds in 8 Midwestern states. Water quality data were acquired after meeting with state partners and reviewed and merged for analysis. Data and the coding of variables, particularly with respect to censored results (nondetects), were not standardized between states. Monitoring frequency varied between CWSs and between atrazine and nitrates, but this was in line with regulatory requirements. Cumulative distributions of all contaminants were not the same in all states (Peto-Prentice test, P < 0.05), and atrazine concentrations were associated with surface water as the CWS source water type. Nitrate results showed substantial state-to-state variability in censoring (20.5%-100%) and in associations between concentrations and the CWS source water type. Statistical analyses of these data are challenging due to high rates of censoring and uncertainty about the appropriateness of parametric assumptions for time-series data. Although monitoring frequency was consistent with regulations, the magnitude of time gaps coupled with uncertainty about CWS service areas may limit linkage with health outcome data.

  17. Quality of community basic medical service utilization in urban and suburban areas in Shanghai from 2009 to 2014

    Science.gov (United States)

    Ma, Jun; Li, Shujun; Cai, Yuyang; Sun, Wei; Liu, Qiaohong

    2018-01-01

    Urban areas usually display better health care services than rural areas, but data about suburban areas in China are lacking. Hence, this cross-sectional study compared the utilization of community basic medical services in Shanghai urban and suburban areas between 2009 and 2014. These data were used to improve the efficiency of community health service utilization and to provide a reference for solving the main health problems of the residents in urban and suburban areas of Shanghai. Using a two-stage random sampling method, questionnaires were completed by 73 community health service centers that were randomly selected from six districts that were also randomly selected from 17 counties in Shanghai. Descriptive statistics, principal component analysis, and forecast analysis were used to complete a gap analysis of basic health services utilization quality between urban and suburban areas. During the 6-year study period, there was an increasing trend toward greater efficiency of basic medical service provision, benefits of basic medical service provision, effectiveness of common chronic disease management, overall satisfaction of community residents, and two-way referral effects. In addition to the implementation effect of hypertension management and two-way referral, the remaining indicators showed a superior effect in urban areas compared with the suburbs (P<0.001). In summary, over the 6-year period, there was a rapidly increasing trend in basic medical service utilization. Comprehensive satisfaction clearly improved as well. Nevertheless, there was an imbalance in health service utilization between urban and suburban areas. There is a need for the health administrative department to address this imbalance between urban and suburban institutions and to provide the required support to underdeveloped areas to improve resident satisfaction. PMID:29791470

  18. Quality of community basic medical service utilization in urban and suburban areas in Shanghai from 2009 to 2014.

    Science.gov (United States)

    Guo, Lijun; Bao, Yong; Ma, Jun; Li, Shujun; Cai, Yuyang; Sun, Wei; Liu, Qiaohong

    2018-01-01

    Urban areas usually display better health care services than rural areas, but data about suburban areas in China are lacking. Hence, this cross-sectional study compared the utilization of community basic medical services in Shanghai urban and suburban areas between 2009 and 2014. These data were used to improve the efficiency of community health service utilization and to provide a reference for solving the main health problems of the residents in urban and suburban areas of Shanghai. Using a two-stage random sampling method, questionnaires were completed by 73 community health service centers that were randomly selected from six districts that were also randomly selected from 17 counties in Shanghai. Descriptive statistics, principal component analysis, and forecast analysis were used to complete a gap analysis of basic health services utilization quality between urban and suburban areas. During the 6-year study period, there was an increasing trend toward greater efficiency of basic medical service provision, benefits of basic medical service provision, effectiveness of common chronic disease management, overall satisfaction of community residents, and two-way referral effects. In addition to the implementation effect of hypertension management and two-way referral, the remaining indicators showed a superior effect in urban areas compared with the suburbs (P<0.001). In summary, over the 6-year period, there was a rapidly increasing trend in basic medical service utilization. Comprehensive satisfaction clearly improved as well. Nevertheless, there was an imbalance in health service utilization between urban and suburban areas. There is a need for the health administrative department to address this imbalance between urban and suburban institutions and to provide the required support to underdeveloped areas to improve resident satisfaction.

  19. Quality of community basic medical service utilization in urban and suburban areas in Shanghai from 2009 to 2014.

    Directory of Open Access Journals (Sweden)

    Lijun Guo

    Full Text Available Urban areas usually display better health care services than rural areas, but data about suburban areas in China are lacking. Hence, this cross-sectional study compared the utilization of community basic medical services in Shanghai urban and suburban areas between 2009 and 2014. These data were used to improve the efficiency of community health service utilization and to provide a reference for solving the main health problems of the residents in urban and suburban areas of Shanghai. Using a two-stage random sampling method, questionnaires were completed by 73 community health service centers that were randomly selected from six districts that were also randomly selected from 17 counties in Shanghai. Descriptive statistics, principal component analysis, and forecast analysis were used to complete a gap analysis of basic health services utilization quality between urban and suburban areas. During the 6-year study period, there was an increasing trend toward greater efficiency of basic medical service provision, benefits of basic medical service provision, effectiveness of common chronic disease management, overall satisfaction of community residents, and two-way referral effects. In addition to the implementation effect of hypertension management and two-way referral, the remaining indicators showed a superior effect in urban areas compared with the suburbs (P<0.001). In addition, among the seven principal components, four principal component scores were better in urban areas than in suburban areas (P < 0.001, 0.004, 0.036, and 0.022). The urban comprehensive score also exceeded that of the suburbs (P<0.001). In summary, over the 6-year period, there was a rapidly increasing trend in basic medical service utilization. Comprehensive satisfaction clearly improved as well. Nevertheless, there was an imbalance in health service utilization between urban and suburban areas. There is a need for the health administrative department to address this imbalance between urban and suburban institutions and to provide the required support to underdeveloped areas to improve resident satisfaction.

  20. Narrowing the Gap Between QoS Metrics and Web QoE Using Above-the-fold Metrics

    OpenAIRE

    da Hora, Diego Neves; Asrese, Alemnew; Christophides, Vassilis; Teixeira, Renata; Rossi, Dario

    2018-01-01

    Page load time (PLT) is still the most common application Quality of Service (QoS) metric to estimate the Quality of Experience (QoE) of Web users. Yet, recent literature abounds with proposals for alternative metrics (e.g., Above The Fold, SpeedIndex and variants) that aim at better estimating user QoE. The main purpose of this work is thus to thoroughly investigate a mapping between established and recently proposed objective metrics and user QoE. We obtain ground tr...

  1. Sharp metric obstructions for quasi-Einstein metrics

    Science.gov (United States)

    Case, Jeffrey S.

    2013-02-01

    Using the tractor calculus to study smooth metric measure spaces, we adapt results of Gover and Nurowski to give sharp metric obstructions to the existence of quasi-Einstein metrics on suitably generic manifolds. We do this by introducing an analogue of the Weyl tractor W to the setting of smooth metric measure spaces. The obstructions we obtain can be realized as tensorial invariants which are polynomial in the Riemann curvature tensor and its divergence. By taking suitable limits of their tensorial forms, we then find obstructions to the existence of static potentials, generalizing to higher dimensions a result of Bartnik and Tod, and to the existence of potentials for gradient Ricci solitons.

  2. A study on correlation between 2D and 3D gamma evaluation metrics in patient-specific quality assurance for VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Rajasekaran, Dhanabalan, E-mail: dhanabalanraj@gmail.com; Jeevanandam, Prakash; Sukumar, Prabakar; Ranganathan, Arulpandiyan; Johnjothi, Samdevakumar; Nagarajan, Vivekanandan

    2014-01-01

    In this study, we investigated the correlation between 2-dimensional (2D) and 3D gamma analysis using the new PTW OCTAVIUS 4D system for various parameters. For this study, we selected 150 clinically approved volumetric-modulated arc therapy (VMAT) plans of head and neck (50), thoracic (esophagus) (50), and pelvic (cervix) (50) sites. Individual verification plans were created and delivered to the OCTAVIUS 4D phantom. Measured and calculated dose distributions were compared using 2D and 3D gamma analysis by global (maximum), local, and selected (isocenter) dose methods. The average gamma passing rate for 2D global gamma analysis in the coronal and sagittal planes was 94.81% ± 2.12% and 95.19% ± 1.76%, respectively, for the commonly used 3 mm/3% criteria with a 10% low-dose threshold. Correspondingly, for the same criteria, the average gamma passing rate for 3D planar global gamma analysis was 95.90% ± 1.57% and 95.61% ± 1.65%. The volumetric 3D gamma passing rate for 3 mm/3% (10% low-dose threshold) global gamma was 96.49% ± 1.49%. Applying stringent gamma criteria resulted in larger differences between 2D planar and 3D planar gamma analysis across all the global, local, and selected dose gamma evaluation methods. The average gamma passing rate for volumetric 3D gamma analysis was 1.49%, 1.36%, and 2.16% higher when compared with 2D planar analyses (coronal and sagittal combined average) for 3 mm/3% global, local, and selected dose gamma analysis, respectively. On the basis of this wide-ranging analysis and correlation study, we conclude that there is no assured correlation or notable pattern relating planar 2D and volumetric 3D gamma analysis. Owing to the higher passing rates, higher action limits can be set when performing 3D quality assurance. Site-wise action limits may be considered for patient-specific QA in VMAT.
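
    To make the evaluation criterion concrete, the following is a hedged 1D sketch of a global gamma computation (3%/3 mm with a 10% low-dose threshold); clinical systems such as the one studied here evaluate 2D planes and 3D volumes, and the profiles below are synthetic:

    ```python
    import numpy as np

    def gamma_passing_rate(ref, meas, pos_mm, dose_pct=3.0, dta_mm=3.0, low_cut=0.10):
        """1D global gamma: dose normalized to the maximum of the reference."""
        dose_crit = dose_pct / 100.0 * ref.max()
        gammas = []
        for r, d_ref in zip(pos_mm, ref):
            if d_ref < low_cut * ref.max():           # 10% low-dose threshold
                continue
            dist_term = (pos_mm - r) / dta_mm         # distance-to-agreement term
            dose_term = (meas - d_ref) / dose_crit    # dose-difference term
            gammas.append(np.sqrt(dist_term ** 2 + dose_term ** 2).min())
        return 100.0 * np.mean(np.array(gammas) <= 1.0)

    x = np.linspace(-30.0, 30.0, 121)                 # positions in mm
    ref = np.exp(-x ** 2 / 400.0)                     # synthetic reference profile
    meas = 1.02 * np.exp(-(x - 0.5) ** 2 / 400.0)     # 2% scaled, 0.5 mm shifted
    print(f"passing rate: {gamma_passing_rate(ref, meas, x):.1f}%")
    ```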

  3. Quality of fresh organic matter affects priming of soil organic matter and substrate utilization patterns of microbes

    Science.gov (United States)

    Wang, Hui; Boutton, Thomas W.; Xu, Wenhua; Hu, Guoqing; Jiang, Ping; Bai, Edith

    2015-05-01

    Changes in biogeochemical cycles and the climate system due to human activities are expected to change the quantity and quality of plant litter inputs to soils. How changing quality of fresh organic matter (FOM) might influence the priming effect (PE) on soil organic matter (SOM) mineralization is still under debate. Here we determined the PE induced by two 13C-labeled FOMs with contrasting nutritional quality (leaf vs. stalk of Zea mays L.). Soils from two different forest types yielded consistent results: soils amended with leaf tissue switched faster from negative PE to positive PE due to greater microbial growth compared to soils amended with stalks. However, after 16 d of incubation, soils amended with stalks had a higher PE than those amended with leaf. Phospholipid fatty acid (PLFA) results suggested that microbial demand for carbon and other nutrients was one of the major determinants of the PE observed. Therefore, consideration of both microbial demands for nutrients and FOM supply simultaneously is essential to understand the underlying mechanisms of PE. Our study provided evidence that changes in FOM quality could affect microbial utilization of substrate and PE on SOM mineralization, which may exacerbate global warming problems under future climate change.

  4. Effect of drug utilization reviews on the quality of in-hospital prescribing: a quasi-experimental study

    Directory of Open Access Journals (Sweden)

    Chabot Isabelle

    2006-03-01

    Full Text Available Abstract Background Drug utilization review (DUR) programs are being conducted in Canadian hospitals with the aim of improving the appropriateness of prescriptions. However, there is little evidence of their effectiveness. The objective of this study was to assess the impact of both a retrospective and a concurrent DUR program on the quality of in-hospital prescribing. Methods We conducted an interrupted time series quasi-experimental study. Using explicit criteria for quality of prescribing, the natural history of cisapride prescription was established retrospectively in three university-affiliated hospitals. A retrospective DUR was implemented in one of the hospitals, a concurrent DUR in another, whereas the third hospital served as a control. An archivist abstracted the records of all patients who were prescribed cisapride during the observation period. The effect of the DURs relative to the control hospital was determined by comparing estimated regression coefficients from the time series models and by testing statistical significance using a 2-tailed Student's t test. Results The concurrent DUR program significantly improved the appropriateness of prescriptions for the indication for use, whereas the retrospective DUR brought about no significant effect on the quality of prescribing. Conclusion The results suggest that a retrospective DUR approach may not be sufficient to improve the quality of prescribing. However, a concurrent DUR strategy, with direct feedback to prescribers, seems effective and should be tested in other settings with other drugs.

  5. Completion of a Dislocated Metric Space

    Directory of Open Access Journals (Sweden)

    P. Sumati Kumari

    2015-01-01

    Full Text Available We provide a construction for the completion of a dislocated metric space (abbreviated d-metric space); we also prove that the completion of the metric associated with a d-metric coincides with the metric associated with the completion of the d-metric.

  6. On the Empirical Estimation of Utility Distribution Damping Parameters Using Power Quality Waveform Data

    Directory of Open Access Journals (Sweden)

    Irene Y. H. Gu

    2007-01-01

    Full Text Available This paper describes an efficient yet accurate methodology for estimating system damping. The proposed technique is based on linear dynamic system theory and Hilbert damping analysis, and requires capacitor switching waveforms only. The detected envelope of the intrinsic transient portion of the voltage waveform after capacitor bank energizing, together with its decay rate and the damped resonant frequency, is used to quantify the effective X/R ratio of the system. Thus, the proposed method provides complete knowledge of system impedance characteristics. The estimated system damping can also be used to evaluate the system's vulnerability to various PQ disturbances, particularly resonance phenomena, so that a utility may take preventive measures and improve the PQ of the system.
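
    The envelope-decay idea can be sketched as follows (a synthetic example, not the authors' implementation; the sampling rate, decay rate, and frequency are assumed values). For a series RLC response, the effective X/R ratio at the damped frequency follows as ω_d/(2σ):

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs = 10_000.0                      # sampling rate, Hz (assumed)
    t = np.arange(0.0, 0.2, 1.0 / fs)
    sigma, f_d = 60.0, 600.0           # true decay rate (1/s), damped frequency (Hz)
    x = np.exp(-sigma * t) * np.cos(2 * np.pi * f_d * t)   # synthetic transient

    envelope = np.abs(hilbert(x))      # analytic-signal (Hilbert) envelope
    mask = envelope > 0.05 * envelope.max()                # keep well-resolved part
    slope, _ = np.polyfit(t[mask], np.log(envelope[mask]), 1)

    sigma_hat = -slope
    x_over_r = 2 * np.pi * f_d / (2.0 * sigma_hat)         # effective X/R at f_d
    print(f"decay rate ~ {sigma_hat:.1f} 1/s, X/R ~ {x_over_r:.1f}")
    ```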

  7. Metric adjusted skew information

    DEFF Research Database (Denmark)

    Hansen, Frank

    2008-01-01

    We extend the concept of Wigner-Yanase-Dyson skew information to something we call "metric adjusted skew information" (of a state with respect to a conserved observable). This "skew information" is intended to be a non-negative quantity bounded by the variance (of an observable in a state) that vanishes for observables commuting with the state. We show that the skew information is a convex function on the manifold of states. It also satisfies other requirements, proposed by Wigner and Yanase, for an effective measure-of-information content of a state relative to a conserved observable. We establish a connection between the geometrical formulation of quantum statistics as proposed by Chentsov and Morozova and measures of quantum information as introduced by Wigner and Yanase and extended in this article. We show that the set of normalized Morozova-Chentsov functions describing the possible ...

  8. Metrics for Analyzing Quantifiable Differentiation of Designs with Varying Integrity for Hardware Assurance

    Science.gov (United States)

    2017-03-01

    Keywords: Trojan; integrity; trust; quantify; hardware; assurance; verification; metrics; reference; quality; profile. ... as a framework for benchmarking Trusted Part certifications. Previous work in Trust Metric development has focused on measures at the ... the lowest integrities. Based on the analysis, the DI metric shows measurable differentiation between all five Test Articles.

  9. A Metrics Approach for Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Cristian CIUREA

    2009-01-01

    Full Text Available This article presents different types of collaborative systems, their structure, and their classification. It defines the concept of a virtual campus as a collaborative system and builds an architecture for a virtual campus oriented on collaborative training processes. It analyses the quality characteristics of collaborative systems and proposes techniques for metric construction and validation in order to evaluate them. The article also analyzes different ways to increase the efficiency and the performance level in collaborative banking systems.

  10. Metrics and Its Function in Poetry

    Institute of Scientific and Technical Information of China (English)

    XIAO Zhong-qiong; CHEN Min-jie

    2013-01-01

    Poetry is a special combination of musical and linguistic qualities: of sounds regarded both as pure sound and as meaningful speech. Part of the pleasure of poetry lies in its relationship with music. Metrics, including rhythm and meter, is an important method for poetry to express poetic sentiment. Through the introduction of poetic language and typical examples, the writer of this paper tries to discuss the relationship between sound and meaning.

  11. The impact of an integrated hospital-community medical information system on quality and service utilization in hospital departments.

    Science.gov (United States)

    Nirel, Nurit; Rosen, Bruce; Sharon, Assaf; Blondheim, Orna; Sherf, Michael; Samuel, Hadar; Cohen, Arnon D

    2010-09-01

    In 2005, an innovative system of hospital-community on-line medical records (OFEK) was implemented at Clalit Health Services (CHS). The goals of the study were to examine the extent of OFEK's use and its impact on quality indicators and medical-service utilization in the Internal Medicine and General Surgery wards of CHS hospitals. Methods: examining the frequency of OFEK's use with its own track-log data; comparing, "before" and "after", quality indicators and service utilization data in experimental (CHS patients) versus control groups (other patients). OFEK's use increased by tens of percent each year; Internal Medicine wards showed a significant decrease in the number of laboratory tests and 3 CT tests performed compared with the control group. Wards using OFEK extensively showed a greater decrease in CT tests, in one imaging test, and in the average number of ambulatory hospitalizations. No similar changes were found in General Surgery wards. The study helps evaluate the extent to which OFEK's targets were achieved and contributes to the development of measures to examine the impact of such systems, which can be used to assess a broad range of Health Information Technology (HIT) systems. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  12. Research on customer satisfaction with the quality of services provided by public utilities of the city of Belgrade

    Directory of Open Access Journals (Sweden)

    Živković Radmila

    2014-01-01

    Full Text Available The monopoly market conditions in which public companies operated ten to twenty years ago substantially dictated the way business was conceived and conducted by public companies in Serbia. However, changes in the environment, such as more intensive competition and changing customer needs and demands, require abandoning the old orientation to business. Public companies are in a position to create and offer a higher level of service quality, based on better and more intensive communication with their customers. Public enterprises are monitored by public authorities, especially in the areas of restrictions on the choice of business strategies, pricing and price restrictions, selection of suppliers, and the like. On the other hand, branch competition is emerging, with which public companies must reckon. In such an environment, creating effective services should be the key strategic objective for the development of the public utility companies of the city of Belgrade. They should be modern service companies, able to participate actively in the market, looking upon customers, the citizens, as users of their services. The aim of the research is to determine the perception of value and customer satisfaction with the services provided by the public utilities of Belgrade. The results of the study indicate that respondents are not satisfied with the provided services and do not have clearly defined attitudes towards key aspects of public enterprises that are supposed to be important for positioning and improving the quality of services in the market.

  13. Improving the quality of urban public space through the identification of space utilization index at Imam Bonjol Park, Padang city

    Science.gov (United States)

    Eriawan, Tomi; Setiawati, Lestari

    2017-06-01

    Padang City, a big city with a population approaching one million people, has to address the increased activities of its population and the increased need for land and space for those activities. One of the effects of population growth and the development of activities in Padang is the decreasing number of open spaces for outdoor public activities, both natural and artificial. Padang City does have several open spaces built and managed by the government, including 40 units of open space in the form of plansum parks, playgrounds, and sports parks, with a total area of 10.88 hectares. Despite their status as public open spaces, not all of them can be used and enjoyed by the public, since most are passive parks, made only as gardens without amenities. This study assessed the quality of public space in the central business district of Padang City, namely Imam Bonjol Park (Taman Imam Bonjol). The study was carried out in several stages: identifying the typology of space function based on [1] Carmona (2008), and assessing the space utilization index based on the Public Space Index approach of Mehta [2] (2007). The space quality was measured using the variables of the Good Public Space Index: the intensity of use, the intensity of social activity, the duration of activity, the variations in usage, and the diversity of use. The index of public space quality at Taman Imam Bonjol was determined by assessing these 5 (five) variables of space quality. Based on the results of the analysis, the public space utilization index was 0.696, placing Imam Bonjol Park in the Medium category. The parameters indicated several results, including the lack of diversity in users' activity time, less

  14. Precision scan-imaging for paperboard quality inspection utilizing X-ray fluorescence

    Science.gov (United States)

    Norlin, B.; Reza, S.; Fröjdh, C.; Nordin, T.

    2018-01-01

    Paperboard is typically made up of a core of cellulose fibers [C6H10O5] and a coating layer of [CaCO3]. The uniformity of these layers is a critical parameter for printing quality. Current quality control methods include chemistry-based visual inspection methods as well as X-ray based methods to measure the coating thickness. In this work we combine the X-ray fluorescence signals from the Ca atoms (3.7 keV) in the coating and from a Cu target (8.0 keV) placed behind the paper to simultaneously measure both the coating and the fibers. Cu was selected as the target material since its fluorescence signal is well separated from the Ca signal while still being sufficiently absorbed in the paper. A laboratory-scale setup was built using stepper motors, a silicon drift detector based spectrometer, and a collimated X-ray beam. The spectroscopic image is retrieved by scanning the paperboard surface and registering the fluorescence signals from Ca and Cu. The exposure time for this type of setup can be significantly improved by implementing spectroscopic imaging sensors. The material contents of the layers can then be retrieved from the absolute and relative intensities of these two signals.
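
    The measurement principle for the Cu line can be sketched with a Beer-Lambert estimate (illustrative only; the attenuation coefficient and count rates are assumed, and geometry and excitation-path effects are ignored):

    ```python
    import math

    MU_8KEV = 10.0   # assumed effective mass attenuation coefficient, cm^2/g

    def sheet_grammage(i_cu, i_cu_open):
        """Area density (g/cm^2) of the sheet from Cu-line transmission.

        i_cu      : Cu 8.0 keV count rate with the paperboard in place
        i_cu_open : Cu count rate with no sample in the beam
        """
        return -math.log(i_cu / i_cu_open) / MU_8KEV

    # made-up count rates: 800 counts/s through the sheet vs 1200 counts/s open
    print(sheet_grammage(800.0, 1200.0))   # ~0.041 g/cm^2, i.e. ~405 g/m^2
    ```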

  15. THE QUALITY AND UTILITY OF ANNUAL FINANCIAL REPORTS BETWEEN EXPECTATIONS AND REALITY

    Directory of Open Access Journals (Sweden)

    Magdalena Mihai

    2016-12-01

    Full Text Available Any company makes its presence felt in the market through the financial-accounting information it provides. Users will be more interested in the issuing entity to the extent that the information provided is attractive and of high quality, showing favorable results. The quality of any accounting information is thus a measure of the objectivity and transparency of its presentation. Setting objectives for financial statements depends on many factors, and there is no universal set of objectives valid for all businesses, whatever the accounting system adopted. Over time, the accounting system in our country has undergone various changes aimed at ensuring that financial-accounting information meets its qualitative targets, attempting to bring the national accounting system close to international accounting standards. An analysis of the three periods of improvement of the national accounting system reveals the instability of the accounting reform process, which associated regulations initially with the European directives (as in the French accounting system), then with IAS together with the European directives, and subsequently reversed the order of the accounting regulations, putting the European directives first and IFRS (whose application is reduced) second, a combination that has led to contradictory situations in some cases.

  16. Utilization of laserpuncture induction as spawning stimulation in catfish (Clarias spp. crossbreeding toward egg quality

    Directory of Open Access Journals (Sweden)

    Pungky S.W. Kusuma

    2015-12-01

    Full Text Available The induction of laserpuncture at the reproductive acupoint of catfish can accelerate gonadotropin hormone release from the pituitary, especially gonadotropin II (GTH-II), which has a role in the final stage of oocyte maturation, ovulation, and spawning stimulation. The purpose of this study was to evaluate the effects of laserpuncture induction on the quality of eggs from crossbreeding male catfish var. Paiton and female var. Sangkuriang. Egg quality was measured with the following parameters: fertilization rate (FR), egg hatching rate (HR), and larvae survival rate (SR). The treatments were conducted at two levels with eight repetitions. The results show that laserpuncture induction in the crossbred catfish significantly increased the mean fertilization rate, egg hatching rate, and larvae survival rate (P < 0.05) compared with the mean values for fish without induction. This study concluded that laserpuncture induction on the crossbreeding between broodstock of male catfish var. Paiton and female var. Sangkuriang will increase FR, HR, and SR.

  17. The metric system: An introduction

    Energy Technology Data Exchange (ETDEWEB)

    Lumley, S.M.

    1995-05-01

    On July 13, 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on July 25, 1991, President George Bush signed Executive Order 12770 which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first they examine the reasons behind the nation's conversion to the metric system. The second part of this report is on applying the metric system.

  18. Attack-Resistant Trust Metrics

    Science.gov (United States)

    Levien, Raph

    The Internet is an amazingly powerful tool for connecting people together, unmatched in human history. Yet, with that power comes great potential for spam and abuse. Trust metrics are an attempt to compute which people are trustworthy and which are likely attackers. This chapter presents two specific trust metrics developed and deployed on the Advogato Website, a community blog for free software developers. This real-world experience demonstrates that the trust metrics fulfilled their goals, but that, for good results, it is important to match the assumptions of the abstract trust metric computation to the real-world implementation.

  19. The metric system: An introduction

    Science.gov (United States)

    Lumley, Susan M.

    On 13 Jul. 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on 25 Jul. 1991, President George Bush signed Executive Order 12770 which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first the report examines the reasons behind the nation's conversion to the metric system. The second part of this report is on applying the metric system.

  20. Metric-adjusted skew information

    DEFF Research Database (Denmark)

    Liang, Cai; Hansen, Frank

    2010-01-01

    We give a truly elementary proof of the convexity of metric-adjusted skew information following an idea of Effros. We extend earlier results of weak forms of superadditivity to general metric-adjusted skew information. Recently, Luo and Zhang introduced the notion of semi-quantum states on a bipartite system and proved superadditivity of the Wigner-Yanase-Dyson skew informations for such states. We extend this result to the general metric-adjusted skew information. We finally show that a recently introduced extension to parameter values 1 < p ≤ 2 is a special case of (unbounded) metric-adjusted skew information.
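
    For background, the abstract does not spell out the definition being generalized; the standard Wigner-Yanase-Dyson skew information (textbook form, stated here only for orientation) is

        \[
          I_{\rho}^{p}(A) \;=\; -\tfrac{1}{2}\,
            \operatorname{Tr}\!\bigl([\rho^{p},A]\,[\rho^{1-p},A]\bigr),
          \qquad 0 < p < 1,
        \]

    which at p = 1/2 reduces to the Wigner-Yanase skew information -(1/2) Tr([√ρ, A]²); metric-adjusted skew information keeps this structure while replacing the underlying monotone metric.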

  1. Two classes of metric spaces

    Directory of Open Access Journals (Sweden)

    Isabel Garrido

    2016-04-01

    Full Text Available The class of metric spaces (X,d) known as small-determined spaces, introduced by Garrido and Jaramillo, is properly defined by means of some type of real-valued Lipschitz functions on X. On the other hand, B-simple metric spaces introduced by Hejcman are defined in terms of some kind of bornologies of bounded subsets of X. In this note we present a common framework where both classes of metric spaces can be studied, which allows us to see not only the relationships between them but also to obtain new internal characterizations of these metric properties.

  2. Responses of macroinvertebrate community metrics to a wastewater discharge in the Upper Blue River of Kansas and Missouri, USA

    Science.gov (United States)

    Poulton, Barry C.; Graham, Jennifer L.; Rasmussen, Teresa J.; Stone, Mandy L.

    2015-01-01

    The Blue River Main wastewater treatment facility (WWTF) discharges into the upper Blue River (725 km2) and was recently upgraded to implement biological nutrient removal. We measured biotic condition upstream and downstream of the discharge utilizing the macroinvertebrate protocol developed for Kansas streams. We examined responses of 34 metrics to determine the best indicators for discriminating site differences and for predicting biological condition. Significant differences between sites upstream and downstream of the discharge were identified for 15 metrics in April and 12 metrics in August. Upstream biotic condition scores were significantly greater than scores at both downstream sites in April (p = 0.02), and in August the most downstream site was classified as non-biologically supporting. Thirteen EPT taxa (Ephemeroptera, Plecoptera, Trichoptera) considered intolerant of degraded stream quality were absent at one or both downstream sites. Increases in tolerance metrics and filtering macroinvertebrates, and a decline in the ratio of scrapers to filterers, all indicated effects of increased nutrient enrichment. Stepwise regressions identified several significant models containing a suite of metrics with low redundancy (R2 = 0.90-0.99). Based on the rapid decline in biological condition downstream of the discharge, the level of nutrient removal resulting from the facility upgrade (10%-20%) was not enough to mitigate negative effects on macroinvertebrate communities.

  3. Utility of gram staining for evaluation of the quality of cystic fibrosis sputum samples.

    Science.gov (United States)

    Nair, Bindu; Stapp, Jenny; Stapp, Lynn; Bugni, Linda; Van Dalfsen, Jill; Burns, Jane L

    2002-08-01

    The microscopic examination of Gram-stained sputum specimens is very helpful in the evaluation of patients with community-acquired pneumonia and has also been recommended for use in cystic fibrosis (CF) patients. This study was undertaken to evaluate that recommendation. One hundred one sputum samples from CF patients were cultured for gram-negative bacilli and examined by Gram staining for both sputum adequacy (using the quality [Q] score) and bacterial morphology. Subjective evaluation of adequacy was also performed and categorized. Based on Q score evaluation, 41% of the samples would have been rejected despite a subjective appearance of purulence. Only three of these rejected samples were culture negative for gram-negative CF pathogens. Correlation between culture results and quantitative Gram stain examination was also poor. These data suggest that subjective evaluation combined with comprehensive bacteriology is superior to Gram staining in identifying pathogens in CF sputum.

  4. Development of functional extruded snacks by utilizing paste shrimp (Acetes spp.): process optimization and quality evaluation.

    Science.gov (United States)

    Kumar, Raushan; Xavier, Ka Martin; Lekshmi, Manjusha; Dhanabalan, Vignaesh; Thachil, Madonna T; Balange, Amjad K; Gudipati, Venkateshwarlu

    2018-04-01

    Functional extruded snacks were prepared using paste shrimp powder (Acetes spp.), which is rich in protein. The process variables required for the preparation of extruded snacks were optimized using response surface methodology. Extrusion temperature (130-144 °C), level of Acetes powder (100-200 g kg-1) and feed moisture (140-200 g kg-1) were selected as design variables, and expansion ratio, porosity, hardness, crispness and thiobarbituric acid reactive substance value were taken as the response variables. Extrusion temperature significantly influenced all the response variables, while Acetes inclusion influenced all variables except porosity. Feed moisture content showed a significant quadratic effect on all responses and an interactive effect on expansion ratio and hardness. Shrimp powder incorporation increased the protein and mineral content of the final product. The extruded snack made with the combination of extrusion temperature 144.59 °C, feed moisture 178.5 g kg-1 and Acetes inclusion level 146.7 g kg-1 was found to be the best based on sensory evaluation. The study suggests that the use of Acetes species for the development of extruded snacks will serve as a means of utilizing Acetes as well as providing a rich source of protein for human consumption, which would otherwise remain unexploited as a by-catch. © 2017 Society of Chemical Industry.

  5. Health-related quality of life of cataract patients: cross-cultural comparisons of utility and psychometric measures.

    Science.gov (United States)

    Lee, Jae Eun; Fos, Peter J; Zuniga, Miguel A; Kastl, Peter R; Sung, Jung Hye

    2003-07-01

    This study was conducted to assess the presence and/or absence of cross-cultural differences or similarities between Korean and United States cataract patients. A systematic assessment was performed using utility and psychometric measures in the study population. A cross-sectional study design was used to examine the comparison of preoperative outcomes measures in cataract patients in Korea and the United States. Study subjects were selected using non-probabilistic methods and included 132 patients scheduled for cataract surgery in one eye. Subjects were adult cataract patients at Samsung and Kunyang General Hospital in Seoul, Korea, and Tulane University Hospital and Clinics in New Orleans, Louisiana. Preoperative utility was assessed using the verbal rating scale and standard reference gamble techniques. Current preoperative health status was assessed using the SF-36 and VF-14 surveys. Current preoperative Snellen visual acuity was used as a clinical measure of vision status. Korean patients were more likely to be younger (p = 0.001), less educated (p = 0.001), and to have worse Snellen visual acuity (p = 0.002) than United States patients. Multivariate analysis of variance (MANOVA) revealed that in contrast to Korean patients, United States patients were assessed to have higher scoring in general health, vitality, VF-14, and verbal rating for visual health. This higher scoring trend persisted after controlling for age, gender, education and Snellen visual acuity. The difference in health-related quality of life (HRQOL) between the two countries was quite clear, especially in the older age and highly educated group. Subjects in Korea and the United States were significantly different in quality of life, functional status and clinical outcomes. Subjects in the United States had more favorable health outcomes than those in Korea. These differences may be caused by multiple factors, including country-specific differences in economic status and health care systems.

  6. Construction of the descriptive system for the Assessment of Quality of Life AQoL-6D utility instrument.

    Science.gov (United States)

    Richardson, Jeffrey R J; Peacock, Stuart J; Hawthorne, Graeme; Iezzi, Angelo; Elsworth, Gerald; Day, Neil A

    2012-04-17

    Multi attribute utility (MAU) instruments are used to include the health related quality of life (HRQoL) in economic evaluations of health programs. Comparative studies suggest different MAU instruments measure related but different constructs. The objective of this paper is to describe the methods employed to achieve content validity in the descriptive system of the Assessment of Quality of Life (AQoL)-6D, MAU instrument. The AQoL program introduced the use of psychometric methods in the construction of health related MAU instruments. To develop the AQoL-6D we selected 112 items from previous research, focus groups and expert judgment and administered them to 316 members of the public and 302 hospital patients. The search for content validity across a broad spectrum of health states required both formative and reflective modelling. We employed Exploratory Factor Analysis and Structural Equation Modelling (SEM) to meet these dual requirements. The resulting instrument employs 20 items in a multi-tier descriptive system. Latent dimension variables achieve sensitive descriptions of 6 dimensions which, in turn, combine to form a single latent QoL variable. Diagnostic statistics from the SEM analysis are exceptionally good and confirm the hypothesised structure of the model. The AQoL-6D descriptive system has good psychometric properties. They imply that the instrument has achieved construct validity and provides a sensitive description of HRQoL. This means that it may be used with confidence for measuring health related quality of life and that it is a suitable basis for modelling utilities for inclusion in the economic evaluation of health programs.

  7. Construction of the descriptive system for the assessment of quality of life AQoL-6D utility instrument

    Directory of Open Access Journals (Sweden)

    Richardson Jeffrey RJ

    2012-04-01

    Full Text Available Abstract Background Multi attribute utility (MAU) instruments are used to include the health related quality of life (HRQoL) in economic evaluations of health programs. Comparative studies suggest different MAU instruments measure related but different constructs. The objective of this paper is to describe the methods employed to achieve content validity in the descriptive system of the Assessment of Quality of Life (AQoL)-6D MAU instrument. Methods The AQoL program introduced the use of psychometric methods in the construction of health related MAU instruments. To develop the AQoL-6D we selected 112 items from previous research, focus groups and expert judgment and administered them to 316 members of the public and 302 hospital patients. The search for content validity across a broad spectrum of health states required both formative and reflective modelling. We employed Exploratory Factor Analysis and Structural Equation Modelling (SEM) to meet these dual requirements. Results and Discussion The resulting instrument employs 20 items in a multi-tier descriptive system. Latent dimension variables achieve sensitive descriptions of 6 dimensions which, in turn, combine to form a single latent QoL variable. Diagnostic statistics from the SEM analysis are exceptionally good and confirm the hypothesised structure of the model. Conclusions The AQoL-6D descriptive system has good psychometric properties. They imply that the instrument has achieved construct validity and provides a sensitive description of HRQoL. This means that it may be used with confidence for measuring health related quality of life and that it is a suitable basis for modelling utilities for inclusion in the economic evaluation of health programs.

  8. A condition metric for Eucalyptus woodland derived from expert evaluations.

    Science.gov (United States)

    Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D

    2018-02-01

    The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
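
    A minimal sketch of the metric-building step described above, assuming scikit-learn: an ensemble of 30 bagged regression trees is fitted to expert quality scores over the 13 site variables. The toy data and names below are illustrative stand-ins, not the paper's data.

        import numpy as np
        from sklearn.ensemble import BaggingRegressor

        rng = np.random.default_rng(0)

        # Hypothetical stand-ins: 200 expert-scored hypothetical sites, each
        # described by the 13 site variables (e.g. shrub cover, native forb
        # richness); y holds the experts' quality score for each site.
        X = rng.random((200, 13))
        y = rng.random(200)

        # 30 bagged regression trees, as in the paper (BaggingRegressor's
        # default base learner is a decision tree).
        metric = BaggingRegressor(n_estimators=30, random_state=0).fit(X, y)

        # The fitted ensemble then acts as the condition metric: measure the
        # 13 variables at a new site and predict its perceived quality.
        new_site = rng.random((1, 13))
        print(f"predicted condition score: {metric.predict(new_site)[0]:.2f}")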

  9. The utilization of coconut waste fermentated by aspergillus niger and saccharomyces cerevisiae on meat quality of weaning males rex rabbit

    Science.gov (United States)

    Wahyuni, T. H.; Ginting, N.; Yunilas; Hasnudi; Mirwandono, E.; Siregar, G. A.; Sinaga, I. G.; Sembiring, I.

    2018-02-01

    Coconut waste (CW) can be used as animal feed, but its nutritional quality is low. This study investigated the effect of fermented CW on the meat quality of Rex rabbits fed CW fermented with either Aspergillus niger or tape yeast. The research was conducted at a rabbit farm in Brastagi from July to October 2016, using 24 male Rex rabbits with an initial weight of 1012 ± 126.67 g. The design was a completely randomized design with 6 treatments and 4 replications. The treatments were T1 (unfermented, 10%); T2 (unfermented, 20%); T3 (A. niger fermentation, 10%); T4 (A. niger fermentation, 20%); T5 (tape yeast fermentation, 10%); and T6 (tape yeast fermentation, 20%). The parameters were pH, texture of raw and cooked meat, water content, fat content, protein content of meat, and cooking loss. The results showed no significant treatment effect (P>0.05) on pH and raw meat texture, a significant effect (P<0.05) on cooked meat texture and meat fat content, and a highly significant effect (P<0.01) on cooking loss, water content and protein content of meat. The study concludes that fermentation of CW with Aspergillus niger or tape yeast improved the quality of Rex rabbit meat.

  10. Using meta-quality to assess the utility of volunteered geographic information for science.

    Science.gov (United States)

    Langley, Shaun A; Messina, Joseph P; Moore, Nathan

    2017-11-06

    Volunteered geographic information (VGI) has strong potential to be increasingly valuable to scientists in collaboration with non-scientists. The abundance of mobile phones and other wireless forms of communication open up significant opportunities for the public to get involved in scientific research. As these devices and activities become more abundant, questions of uncertainty and error in volunteer data are emerging as critical components for using volunteer-sourced spatial data. Here we present a methodology for using VGI and assessing its sensitivity to three types of error. More specifically, this study evaluates the reliability of data from volunteers based on their historical patterns. The specific context is a case study in surveillance of tsetse flies, a health concern for being the primary vector of African Trypanosomiasis. Reliability, as measured by a reputation score, determines the threshold for accepting the volunteered data for inclusion in a tsetse presence/absence model. Higher reputation scores are successful in identifying areas of higher modeled tsetse prevalence. A dynamic threshold is needed but the quality of VGI will improve as more data are collected and the errors in identifying reliable participants will decrease. This system allows for two-way communication between researchers and the public, and a way to evaluate the reliability of VGI. Boosting the public's ability to participate in such work can improve disease surveillance and promote citizen science. In the absence of active surveillance, VGI can provide valuable spatial information given that the data are reliable.
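
    The abstract does not give the reputation formula, so the following is only a schematic illustration of the idea: a volunteer's score rises with corroborated reports, falls with contradicted ones, and gates inclusion against a (possibly dynamic) threshold. All names and numbers are hypothetical.

        def reputation(n_confirmed: int, n_rejected: int) -> float:
            """Laplace-smoothed share of a volunteer's past reports confirmed."""
            return (n_confirmed + 1) / (n_confirmed + n_rejected + 2)

        def accept_report(n_confirmed: int, n_rejected: int, threshold: float = 0.7) -> bool:
            """Admit a volunteer's observation into the presence/absence model
            only if their historical reliability clears the threshold."""
            return reputation(n_confirmed, n_rejected) >= threshold

        # A volunteer with 8 corroborated and 1 contradicted report clears 0.7:
        print(reputation(8, 1))     # 0.818...
        print(accept_report(8, 1))  # True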

  11. General review of quality assurance system requirements. The utility or customer requirement

    International Nuclear Information System (INIS)

    Fowler, J.L.

    1976-01-01

    What are the customer's quality assurance requirements, and how does he convey these to his contractor or apply them to himself? Many documents have been prepared, mostly by countries with high technology availability, and it is significant that many of them, particularly those of the United States of America, were prepared for nuclear-safety-related plant; yet the logic of these documents applies equally to heavy engineering projects that are cost effective, and this is the current thinking and practice within the CEGB (Central Electricity Generating Board). Some documents have legislative backing, others rely on contractual disciplines, but they all appear to repeat the same basic requirements, so why does one continue to write more documents? The basic problem is that customers have to satisfy differing national legislative, economic and commercial requirements and, like all discerning customers, wish to reserve the right to satisfy their own needs, which are very often highly specialized. The CEGB is aware of this problem and is actively co-operating with most of the national and international authorities who are leading in this field, with a view to obtaining compatibility of requirements, but there still remains the problem of satisfying national custom and practice. (author)

  12. Utilization of Cinnamon Leaf and Shrimp Flour as an Enhancer of Catfish Meat Quality

    Directory of Open Access Journals (Sweden)

    Mia Setiawati

    2017-04-01

    Full Text Available Catfish (Pangasianodon hypophthalmus) is a freshwater fish that is marketed in the form of fillets. One of the problems in producing good catfish fillets is the compactness and brightness of the farmed catfish meat. This research aimed to obtain a feed formulation that enhances the meat quality of striped catfish by adding cinnamon leaf flour (Cinnamomum burmannii) and using shrimp head meal. Fish with a weight of 208.98±25.76 g were reared in 12 floating net cages (2x1x1.5 m3) at a density of 15 fish per cage for 60 days. As treatments, fish were fed diets containing 1% cinnamon leaves, 45% shrimp head meal, or a combination of the two; the control feed was formulated without cinnamon leaves or shrimp head meal. Fish were fed twice daily at a feeding rate of 3.5% of average body weight. The test parameters observed were physical, chemical and organoleptic properties of the catfish meat. The results showed that feed containing cinnamon leaves and shrimp head meal decreased body fat by 14.7% compared with the control (p<0.05). Feed with cinnamon leaves and shrimp head meal gave fillets a more compact, elastic texture and a whiter colour. Keywords: Cinnamomum burmannii, fillet, shrimp head meal, feed formulated, Pangasianodon hypophthalmus

  14. Multimetric indices: How many metrics?

    Science.gov (United States)

    Multimetric indices (MMIs) often include 5 to 15 metrics, each representing a different attribute of assemblage condition, such as species diversity, tolerant taxa, and nonnative taxa. Is there an optimal number of metrics for MMIs? To explore this question, I created 1000 9-met...

  15. Metrical Phonology: German Sound System.

    Science.gov (United States)

    Tice, Bradley S.

    Metrical phonology, a linguistic process of phonological stress assessment and diagrammatic simplification of sentence and word stress, is discussed as it is found in the English and German languages. The objective is to promote use of metrical phonology as a tool for enhancing instruction in stress patterns in words and sentences, particularly in…

  16. Extending cosmology: the metric approach

    OpenAIRE

    Mendoza, S.

    2012-01-01

    Review article for an Intech "Open Questions in Cosmology" book chapter (19 pages, 3 figures). Available from: http://www.intechopen.com/books/open-questions-in-cosmology/extending-cosmology-the-metric-approach

  17. Numerical Calabi-Yau metrics

    International Nuclear Information System (INIS)

    Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, Rene

    2008-01-01

    We develop numerical methods for approximating Ricci flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one parameter family of quintics. We also suggest several ways to extend our results

  18. High resolution metric imaging payload

    Science.gov (United States)

    Delclaud, Y.

    2017-11-01

    Alcatel Space Industries has become Europe's leader in the field of high and very high resolution optical payloads, in the framework of earth-observation systems able to provide military and government users with metric images from space. This leadership has allowed Alcatel to propose to the export market, within a French collaboration framework, a complete space-based system for metric observation.

  19. At most hospitals in the state of Iowa, most surgeons' daily lists of elective cases include only 1 or 2 cases: Individual surgeons' percentage operating room utilization is a consistently unreliable metric.

    Science.gov (United States)

    Dexter, Franklin; Jarvie, Craig; Epstein, Richard H

    2017-11-01

    Percentage utilization of operating room (OR) time is not an appropriate endpoint for planning additional OR time for surgeons with high caseloads, and cannot be measured accurately for surgeons with low caseloads. Nonetheless, many OR directors claim that their hospitals make decisions based on individual surgeons' OR utilizations. This incongruity could be explained by the OR managers considering the earlier mathematical studies, performed using data from a few large teaching hospitals, as irrelevant to their hospitals. The important mathematical parameter for the prior observations is the percentage of surgeon lists of elective cases that include 1 or 2 cases; "list" meaning a combination of surgeon, hospital, and date. We measure the incidence among many hospitals. Observational cohort study. 117 hospitals in Iowa from July 2013 through September 2015. Surgeons with the same identifier among hospitals. Surgeon lists of cases including at least one outpatient surgical case, so that Relative Value Units (RVUs) could be measured. Averaging among hospitals in Iowa, more than half of the surgeons' lists included 1 or 2 cases (77%; P<0.00001 vs. 50%). Approximately half had 1 case (54%; P=0.0012 vs. 50%). These percentages exceeded 50% even though nearly all the surgeons operated at just 1 hospital on days with at least 1 case (97.74%; P<0.00001 vs. 50%). The cases were not of long durations; among the 82,928 lists with 1 case, the median was 6 intraoperative RVUs (e.g., adult inguinal herniorrhaphy). Accurate confidence intervals for raw or adjusted utilizations are so wide for individual surgeons that decisions based on utilization are equivalent to decisions based on random error. The implication of the current study is generalizability of that finding from the largest teaching hospital in the state to the other hospitals in the state. Copyright © 2017 Elsevier Inc. All rights reserved.
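
    A small sketch of the statistical point made here: with the few lists most surgeons have, any per-surgeon percentage carries a confidence interval too wide to act on. The Wilson score interval is implemented directly; the caseload numbers are invented.

        import math

        def wilson_ci(successes, n, z=1.96):
            """95% Wilson score interval for a binomial proportion."""
            p = successes / n
            denom = 1 + z**2 / n
            centre = (p + z**2 / (2 * n)) / denom
            half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
            return centre - half, centre + half

        # A hypothetical surgeon observed on 10 lists, 6 of them with 1-2 cases:
        lo, hi = wilson_ci(6, 10)
        print(f"share of short lists: 0.60, 95% CI ({lo:.2f}, {hi:.2f})")
        # -> roughly (0.31, 0.83): far too wide to rank or reward individuals.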

  20. Quality of life in haemophilia A: Hemophilia Utilization Group Study Va (HUGS-Va).

    Science.gov (United States)

    Poon, J-L; Zhou, Z-Y; Doctor, J N; Wu, J; Ullman, M M; Ross, C; Riske, B; Parish, K L; Lou, M; Koerper, M A; Gwadry-Sridhar, F; Forsberg, A D; Curtis, R G; Johnson, K A

    2012-09-01

    This study describes health-related quality of life (HRQoL) of persons with haemophilia A in the United States (US) and determines associations between self-reported joint pain, motion limitation and clinically evaluated joint range of motion (ROM), and between HRQoL and ROM. As part of a 2-year cohort study, we collected baseline HRQoL using the SF-12 (adults) and PedsQL (children), along with self-ratings of joint pain and motion limitation, in persons with factor VIII deficiency recruited from six Haemophilia Treatment Centres (HTCs) in geographically diverse regions of the US. Clinically measured joint ROM measurements were collected from medical charts of a subset of participants. Adults (N = 156, mean age: 33.5 ± 12.6 years) had mean physical and mental component scores of 43.4 ± 10.7 and 50.9 ± 10.1, respectively. Children (N = 164, mean age: 9.7 ± 4.5 years) had mean total PedsQL, physical functioning, and psychosocial health scores of 85.9 ± 13.8, 89.5 ± 15.2, and 84.1 ± 15.3, respectively. Persons with more severe haemophilia and higher self-reported joint pain and motion limitation had poorer scores, particularly in the physical aspects of HRQoL. In adults, significant correlations (P < 0.01) were found between ROM measures and both self-reported measures. Except among those with severe disease, children and adults with haemophilia have HRQoL scores comparable with those of the healthy US population. The physical aspects of HRQoL in both adults and children with haemophilia A in the US decrease with increasing severity of illness. However, scores for mental aspects of HRQoL do not differ between severity groups. These findings are comparable with those from studies in European and Canadian haemophilia populations. © 2012 Blackwell Publishing Ltd.

  1. Automated image quality evaluation of T2 -weighted liver MRI utilizing deep learning architecture.

    Science.gov (United States)

    Esses, Steven J; Lu, Xiaoguang; Zhao, Tiejun; Shanbhogue, Krishna; Dane, Bari; Bruno, Mary; Chandarana, Hersh

    2018-03-01

    To develop and test a deep learning approach named Convolutional Neural Network (CNN) for automated screening of T2-weighted (T2WI) liver acquisitions for nondiagnostic images, and to compare this automated approach to evaluation by two radiologists. We evaluated 522 liver magnetic resonance imaging (MRI) exams performed at 1.5T and 3T at our institution between November 2014 and May 2016 for CNN training and validation. The CNN consisted of an input layer, convolutional layer, fully connected layer, and output layer. 351 T2WI were anonymized for training. Each case was annotated with a label of being diagnostic or nondiagnostic for detecting lesions and assessing liver morphology. Another independently collected 171 cases were sequestered for a blind test. These 171 T2WI were assessed independently by two radiologists and annotated as being diagnostic or nondiagnostic. These 171 T2WI were presented to the CNN algorithm and the image quality (IQ) output of the algorithm was compared to that of the two radiologists. There was concordance in IQ label between Reader 1 and CNN in 79% of cases and between Reader 2 and CNN in 73%. The sensitivity and specificity of the CNN algorithm in identifying nondiagnostic IQ were 67% and 81% with respect to Reader 1 and 47% and 80% with respect to Reader 2. The negative predictive value of the algorithm for identifying nondiagnostic IQ was 94% and 86% (relative to Readers 1 and 2). We demonstrate a CNN algorithm that yields a high negative predictive value when screening for nondiagnostic T2WI of the liver. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:723-728. © 2017 International Society for Magnetic Resonance in Medicine.
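
    The paper describes the CNN only at the level of its four layer types, so the following PyTorch sketch fills in illustrative sizes (input resolution, channel counts) that are assumptions, not values from the study.

        import torch
        import torch.nn as nn

        class IQScreenCNN(nn.Module):
            """Input -> convolutional -> fully connected -> 2-class output."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool2d((8, 8)),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(16 * 8 * 8, 2),
                )

            def forward(self, x):
                return self.classifier(self.features(x))

        model = IQScreenCNN()
        batch = torch.randn(4, 1, 256, 256)   # four grayscale slices (size assumed)
        logits = model(batch)
        print(logits.argmax(dim=1))           # per-slice diagnostic/nondiagnostic label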

  2. Weyl metrics and wormholes

    Energy Technology Data Exchange (ETDEWEB)

    Gibbons, Gary W. [DAMTP, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA U.K. (United Kingdom); Volkov, Mikhail S., E-mail: gwg1@cam.ac.uk, E-mail: volkov@lmpt.univ-tours.fr [Laboratoire de Mathématiques et Physique Théorique, LMPT CNRS—UMR 7350, Université de Tours, Parc de Grandmont, Tours, 37200 France (France)

    2017-05-01

    We study solutions obtained via applying dualities and complexifications to the vacuum Weyl metrics generated by massive rods and by point masses. Rescaling them and extending to complex parameter values yields axially symmetric vacuum solutions containing singularities along circles that can be viewed as singular matter sources. These solutions have wormhole topology with several asymptotic regions interconnected by throats and their sources can be viewed as thin rings of negative tension encircling the throats. For a particular value of the ring tension the geometry becomes exactly flat although the topology remains non-trivial, so that the rings literally produce holes in flat space. To create a single ring wormhole of one metre radius one needs a negative energy equivalent to the mass of Jupiter. Further duality transformations dress the rings with the scalar field, either conventional or phantom. This gives rise to large classes of static, axially symmetric solutions, presumably including all previously known solutions for a gravity-coupled massless scalar field, as for example the spherically symmetric Bronnikov-Ellis wormholes with phantom scalar. The multi-wormholes contain infinite struts everywhere at the symmetry axes, apart from solutions with locally flat geometry.

  3. Metrics for image segmentation

    Science.gov (United States)

    Rees, Gareth; Greenway, Phil; Morray, Denise

    1998-07-01

    An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state of the art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.
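
    The abstract states that the metric is information theoretic without giving its form. As a flavour of that family, here is the standard variation of information between two label maps, VI(A,B) = H(A) + H(B) - 2 I(A;B); this is an illustration, not necessarily the authors' exact measure.

        import numpy as np

        def variation_of_information(a, b):
            """VI between two integer label maps of equal shape, in bits."""
            a, b = a.ravel(), b.ravel()
            joint = np.zeros((a.max() + 1, b.max() + 1))
            np.add.at(joint, (a, b), 1)           # joint label histogram
            joint /= a.size
            pa, pb = joint.sum(axis=1), joint.sum(axis=0)
            h = lambda p: -(p[p > 0] * np.log2(p[p > 0])).sum()
            return 2 * h(joint) - h(pa) - h(pb)   # = H(A|B) + H(B|A)

        seg_a = np.array([[0, 0, 1], [0, 1, 1]])
        seg_b = np.array([[0, 0, 1], [1, 1, 1]])
        print(f"VI = {variation_of_information(seg_a, seg_b):.3f} bits (0 = identical)")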

  4. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  5. A Quality Assurance Method that Utilizes 3D Dosimetry and Facilitates Clinical Interpretation

    Energy Technology Data Exchange (ETDEWEB)

    Oldham, Mark, E-mail: mark.oldham@duke.edu [Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States); Thomas, Andrew; O' Daniel, Jennifer; Juang, Titania [Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States); Ibbott, Geoffrey [University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Adamovics, John [Rider University, Lawrenceville, New Jersey (United States); Kirkpatrick, John P. [Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States)

    2012-10-01

    Purpose: To demonstrate a new three-dimensional (3D) quality assurance (QA) method that provides comprehensive dosimetry verification and facilitates evaluation of the clinical significance of QA data acquired in a phantom. Also to apply the method to investigate the dosimetric efficacy of base-of-skull (BOS) intensity-modulated radiotherapy (IMRT) treatment. Methods and Materials: Two types of IMRT QA verification plans were created for 6 patients who received BOS IMRT. The first plan enabled conventional 2D planar IMRT QA using the Varian portal dosimetry system. The second plan enabled 3D verification using an anthropomorphic head phantom. In the latter, the 3D dose distribution was measured using the DLOS/Presage dosimetry system (DLOS = Duke Large-field-of-view Optical-CT System, Presage Heuris Pharma, Skillman, NJ), which yielded isotropic 2-mm data throughout the treated volume. In a novel step, measured 3D dose distributions were transformed back to the patient's CT to enable calculation of dose-volume histograms (DVH) and dose overlays. Measured and planned patient DVHs were compared to investigate clinical significance. Results: Close agreement between measured and calculated dose distributions was observed for all 6 cases. For gamma criteria of 3%, 2 mm, the mean passing rate for portal dosimetry was 96.8% (range, 92.0%-98.9%), compared to 94.9% (range, 90.1%-98.9%) for 3D. There was no clear correlation between 2D and 3D passing rates. Planned and measured dose distributions were evaluated on the patient's anatomy, using DVH and dose overlays. Minor deviations were detected, and the clinical significance of these are presented and discussed. Conclusions: Two advantages accrue to the methods presented here. First, treatment accuracy is evaluated throughout the whole treated volume, yielding comprehensive verification. Second, the clinical significance of any deviations can be assessed through the generation of DVH curves and dose overlays on the patient's anatomy.
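
    The quoted passing rates come from a gamma test; the sketch below reduces it to one dimension for clarity (the study evaluates full 3D distributions), using the same 3%, 2 mm criterion. The profiles are synthetic.

        import numpy as np

        def gamma_pass_rate(ref, meas, x, dose_tol=0.03, dist_tol=2.0):
            """Global gamma passing rate (%) for 1D profiles sampled at x (mm)."""
            dd = dose_tol * ref.max()             # global 3% dose criterion
            gammas = np.empty(len(x))
            for i, (xi, mi) in enumerate(zip(x, meas)):
                g2 = ((mi - ref) / dd) ** 2 + ((xi - x) / dist_tol) ** 2
                gammas[i] = np.sqrt(g2.min())     # best match over reference points
            return 100.0 * np.mean(gammas <= 1.0)

        x = np.linspace(0.0, 50.0, 101)                  # positions in mm
        ref = np.exp(-((x - 25.0) / 10.0) ** 2)          # reference profile
        meas = 1.02 * np.exp(-((x - 25.5) / 10.0) ** 2)  # slightly shifted/scaled
        print(f"gamma(3%, 2 mm) pass rate: {gamma_pass_rate(ref, meas, x):.1f}%")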

  6. [Utilization of Quality-of-life assessment Questionnaires for Intermittent Exotropia in China].

    Science.gov (United States)

    Zhu, H; Xu, S; Leng, Z H; Fu, Z J; Xiao, Y H; Liu, H

    2016-08-01

    To evaluate the reliability and validity of the Chinese version of the Quality-of-life assessment Questionnaires for Intermittent Exotropia (CIXTQ). Cross-sectional study. The original English version of the IXTQ was translated into Chinese. The final Chinese version (CIXTQ) consists of 3 parts: the 12-item child CIXTQ (for children aged 5 years and older to assess their own health-related quality of life (HRQoL)), the 12-item proxy CIXTQ (for parents to assess children's HRQoL), and the 17-item parent CIXTQ (containing functional, psychosocial, and surgery subscales, for parents to assess their own HRQoL). 175 IXT children and 151 control children, along with one of their parents, were recruited to answer the CIXTQ. Cronbach's α coefficient and split-half reliability were used to test the internal consistency reliability of the CIXTQ. The Kappa coefficient was used to assess test-retest reliability. The scale-level content validity index/average (S-CVI/Ave) was used to evaluate the content validity of the CIXTQ. Principal component analysis (PCA) was used to verify the construct validity of the parent CIXTQ. Discriminant validity was evaluated by comparing CIXTQ scores of IXT patients with those of controls using independent-samples t tests. For all scales and subscales of the CIXTQ in the different age groups, Cronbach's α ranged from 0.804 to 0.963; split-half reliability ranged from 0.658 to 0.963 and was higher than 0.7 except for the proxy CIXTQ in the youngest age group. The S-CVI/Ave of the child, proxy and parent CIXTQ was 0.988, 0.988 and 0.966, respectively. Principal factors identified by PCA for the parent CIXTQ could be regrouped into the originally described 3 subscales (functional, psychosocial and surgery) in the different age groups. The mean scores of all the scales and subscales among IXT children and their parents (8.0±12.5 to 81.6±15.1) were significantly lower than those among control children and their parents (83.1±11.3 to 99.6±1.2) (t values ranged from -50.36 to -6.93, all P<0.05), indicating the adverse influence of IXT on HRQoL among children and their parents.

  7. The utility of comparative models and the local model quality for protein crystal structure determination by Molecular Replacement

    Directory of Open Access Journals (Sweden)

    Pawlowski Marcin

    2012-11-01

    Full Text Available Abstract Background Computational models of protein structures have proved to be useful as search models in Molecular Replacement (MR), a common method to solve the phase problem faced by macromolecular crystallography. The success of MR depends on the accuracy of the search model. Unfortunately, this parameter remains unknown until the final structure of the target protein is determined. During the last few years, several Model Quality Assessment Programs (MQAPs) that predict the local accuracy of theoretical models have been developed. In this article, we analyze whether the application of MQAPs improves the utility of theoretical models in MR. Results For our dataset of 615 search models, using the real local accuracy of a model increases the MR success ratio by 101% compared with the corresponding polyalanine templates. By contrast, when local model quality is not utilized in MR, the computational models solved only 4.5% more MR searches than polyalanine templates. For the same dataset of 615 models, a workflow combining MR with the predicted local accuracy of a model found 45% more correct solutions than polyalanine templates. To predict this accuracy, MetaMQAPclust, a "clustering MQAP", was used. Conclusions Using comparative models only marginally increases the MR success ratio in comparison to polyalanine structures of templates. However, the situation changes dramatically once comparative models are used together with their predicted local accuracy. A new functionality was added to the GeneSilico Fold Prediction Metaserver in order to build models that are more useful for MR searches. Additionally, we have developed a simple method, AmIgoMR (Am I good for MR?), to predict whether an MR search with a template-based model for a given template is likely to find the correct solution.

  8. Quality of Recovery, Postdischarge Hospital Utilization, and 2-Year Functional Outcomes After an Outpatient Total Knee Arthroplasty Program.

    Science.gov (United States)

    Gauthier-Kwan, Olivier Y; Dobransky, Johanna S; Dervin, Geoffrey F

    2018-02-05

    Outpatient total knee arthroplasty (TKA) has been made possible with advances in perioperative care and standardized clinical inpatient pathways. While many studies report on benefits of outpatient programs, none explore patient-reported outcome measures. As such, our goals were to compare the short-term quality of recovery; highlight postdischarge hospital resources utilization; and report on 2-year functional outcomes scores. This was a prospective comparative cohort study of 43 inpatients (43 TKAs) and 43 outpatients (43 TKAs) operated on by a single surgeon between September 28, 2010 and May 5, 2015. All patients were given a diary to complete at 1, 3, 7, 14, and 28 days postoperatively; we collected 90-day complications, readmissions, and emergency department visits; Knee Injury and Osteoarthritis Outcome Score and Western Ontario and McMaster Universities Osteoarthritis Index scores were completed preoperatively and 2 years postoperatively. SPSS (IBM, version 22.0) was used for all statistical analyses. Quality of recovery (QoR-9) was similar in the outpatient TKA group compared with the inpatient group. No statistically significant differences were observed for Knee Injury and Osteoarthritis Outcome Score and Western Ontario and McMaster Universities Osteoarthritis Index subscores (P > .05). There was 1 readmission in both outpatient and inpatient groups. Six inpatients and 8 outpatients returned to the emergency department for any reason within 90 days, with no statistical significance observed between the 2 groups (P = .771). Outpatient TKA in selected patients produced similar short-term and 2-year patient-reported outcome measures and a comparable 90-day postdischarge hospital resource utilization when compared to an inpatient cohort, supporting further investigation into outpatient TKA. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Corn silage hybrid type and quality of alfalfa hay affect dietary nitrogen utilization by early lactating dairy cows.

    Science.gov (United States)

    Holt, M S; Neal, K; Eun, J-S; Young, A J; Hall, J O; Nestor, K E

    2013-10-01

    This experiment was conducted to determine the effects of corn silage (CS) hybrids and quality of alfalfa hay (AH) in high-forage dairy diets on N utilization, ruminal fermentation, and lactational performance by early-lactating dairy cows. Eight multiparous Holstein cows were used in a duplicated 4 × 4 Latin square experiment with a 2 × 2 factorial arrangement of dietary treatments. The 8 cows (average days in milk = 23 ± 11.2) were surgically fitted with ruminal cannulas, and the 2 squares were conducted simultaneously. Within square, cows were randomly assigned to a sequence of 4 diets: conventional CS (CCS) or brown midrib CS (BMR) was combined with fair-quality AH [FAH: 46.7% neutral detergent fiber (NDF) and 18.4% crude protein (CP)] or high-quality AH (HAH: 39.2% NDF and 20.7% CP) to form 4 treatments: CCS with FAH, CCS with HAH, BMR with FAH, and BMR with HAH. Diets were isonitrogenous across treatments, averaging 15.9% CP. Each period lasted a total of 21 d, with 14 d for treatment adaptation and 7 d for data collection and sampling. Intake of DM and milk yield did not differ in response to CS hybrids or AH quality. Although feeding BMR-based diets decreased urinary N output by 24%, it did not affect fecal N output. Feeding HAH decreased urinary N output by 15% but increased fecal N output by 20%. Nitrogen efficiency [milk N (g/d)/intake N (g/d)] tended to increase for BMR treatments. Ruminal ammonia-N concentration was lower for cows fed BMR-based diets than for those fed CCS-based diets but was not affected by quality of AH. Feeding BMR-based diets or HAH decreased milk urea N concentration by 23 or 15%, respectively, compared with CCS-based diets or FAH. Total volatile fatty acid concentration increased with HAH but was not influenced by CS hybrids. Feeding BMR-based diets decreased the urinary N-to-fecal N ratio (UN:FN), and it was further reduced by feeding HAH. Cows fed the BMR-based diets also tended to have a higher milk N-to-manure N ratio.

  10. Utilization of research reactor to the environmental application in Thailand. Air quality study in Saraburi Province, central Thailand

    International Nuclear Information System (INIS)

    Laoharojanaphand, Sirinart; Ninlaphruk, Sumalee; Mungpayaban, Harinate; Siese, Piyamaporn; Suanmamuang, Boonlert

    2007-01-01

    Saraburi Province faces air-quality difficulties due to dust-generating industries, which form the major part of the local economy. The elemental composition of suspended particulate matter (SPM) in Tumbon Na Phra Lan, Saraburi Province, is therefore being monitored. Samples were collected in each quarter from May 2005 to March 2006. Soil, as well as fine particles from the stacks of selected manufacturers, was also analyzed. The average weight of SPM was found to be lowest in the wet season and highest in the middle of the dry season. The elements found in the samples were Na, Mg, Al, As, Sr, Br, Si, P, S, Cl, K, Ca, Ti, V, Mn, Fe, Co, Cu, and Zn. Calcium was selected as the key element, since the main postulated source of pollution is industrial utilization of the limestone deposit. The amounts of fine particles from the stacks were quite low, which indicates effective control of fine-particle emissions at the selected manufacturers. The data are being used by the Pollution Control Department, Ministry of Natural Resources and Environment, the environmental authority in Thailand; the authority will use them to find possible solutions for improving the air quality of the area. Future collaboration with the environmental authority will be on the study of the Thalenoi conservation area in the southern part of Thailand. (author)

  11. METRICS DEVELOPMENT FOR PATENTS.

    Science.gov (United States)

    Veiga, Daniela Francescato; Ferreira, Lydia Masako

    2015-01-01

    To develop a proposal of metrics for patents to be applied in assessing the postgraduate programs of the Medicine III area of Capes. From the reading and analysis of the 2013 area documents of all 48 Capes areas, a proposal of metrics for patents was developed to be applied to Medicine III programs. Except for the areas of Biotechnology, Food Science, Biological Sciences III, Physical Education, Engineering I, III and IV, and Interdisciplinary, most areas do not adopt a scoring system for patents. The proposal developed was based on the criteria of Biotechnology, with adaptations. In general, deposit, granting and licensing/production are valued, in ascending order. Higher scores are also assigned to patents registered abroad and to patents with student participation. This proposal can be applied to the Intellectual Production item of the evaluation form, in the Technical Production/Patents subsection. The percentages of 10% for academic programs and 40% for Professional Masters should be maintained. A program will be scored as Very Good when it reaches 400 points or more; Good, between 200 and 399 points; Regular, between 71 and 199 points; Weak, up to 70 points; and Insufficient when it has no points.
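
    A toy encoding of the proposed rating bands, assuming that a program with no scored patent output maps to Insufficient; the per-patent point values for deposit, granting and licensing are defined in the proposal itself and are not reproduced in this abstract.

        from typing import Optional

        def classify_program(points: Optional[int]) -> str:
            """Map a program's total patent points to the proposed bands."""
            if points is None:        # no scored patent output at all (assumed reading)
                return "Insufficient"
            if points >= 400:
                return "Very Good"
            if points >= 200:
                return "Good"
            if points >= 71:
                return "Regular"
            return "Weak"

        print(classify_program(420))   # Very Good
        print(classify_program(150))   # Regular
        print(classify_program(None))  # Insufficient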

  12. State of the art metrics for aspect oriented programming

    Science.gov (United States)

    Ghareb, Mazen Ismaeel; Allen, Gary

    2018-04-01

    The quality evaluation of software, e.g., defect measurement, gains significance as the use of software applications grows. Metric measurements are considered the primary indicators of imperfection prediction and software maintenance in various empirical studies of software products. However, there is no agreement on which metrics are compelling quality indicators for novel development approaches such as Aspect Oriented Programming (AOP). AOP intends to enhance programming quality by providing new and novel constructs for the development of systems, for example point cuts, advice and inter-type relationships. Hence, it is not evident whether quality indicators for AOP can be derived from direct extensions of traditional OO measurements. On the other hand, investigations of AOP do regularly depend on established coupling measurements. Notwithstanding the recent adoption of AOP in empirical studies, coupling measurements have been adopted as useful markers of flaw proneness in this context. In this paper we investigate the state-of-the-art metrics for the measurement of Aspect Oriented systems development.

  13. A Metric for Heterotic Moduli

    Science.gov (United States)

    Candelas, Philip; de la Ossa, Xenia; McOrist, Jock

    2017-12-01

    Heterotic vacua of string theory are realised, at large radius, by a compact threefold with vanishing first Chern class together with a choice of stable holomorphic vector bundle. These form a wide class of potentially realistic four-dimensional vacua of string theory. Despite all their phenomenological promise, there is little understanding of the metric on the moduli space of these. What is sought is the analogue of special geometry for these vacua. The metric on the moduli space is important in phenomenology as it normalises D-terms and Yukawa couplings. It is also of interest in mathematics, since it generalises the metric, first found by Kobayashi, on the space of gauge field connections, to a more general context. Here we construct this metric, correct to first order in α′, in two ways: first by postulating a metric that is invariant under background gauge transformations of the gauge field, and also by dimensionally reducing heterotic supergravity. These methods agree and the resulting metric is Kähler, as is required by supersymmetry. Checking the metric is Kähler is intricate and the anomaly cancellation equation for the H field plays an essential role. The Kähler potential nevertheless takes a remarkably simple form: it is the Kähler potential of special geometry with the Kähler form replaced by the α′-corrected hermitian form.

  14. Implications of Metric Choice for Common Applications of Readmission Metrics

    OpenAIRE

    Davies, Sheryl; Saynina, Olga; Schultz, Ellen; McDonald, Kathryn M; Baker, Laurence C

    2013-01-01

    Objective. To quantify the differential impact on hospital performance of three readmission metrics: all-cause readmission (ACR), 3M Potential Preventable Readmission (PPR), and Centers for Medicare and Medicaid 30-day readmission (CMS).

  15. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
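
    A worked illustration, with invented numbers, of how the two means can disagree about the same tuning and thereby steer benchmark conduct:

        from math import prod

        times_before = [100.0, 10.0, 10.0, 10.0]   # seconds; one slow outlier query
        times_after  = [40.0, 20.0, 20.0, 20.0]    # outlier tuned, fast queries slower

        def arithmetic(xs):
            return sum(xs) / len(xs)

        def geometric(xs):
            return prod(xs) ** (1.0 / len(xs))

        print(f"arithmetic: {arithmetic(times_before):.2f} -> {arithmetic(times_after):.2f}")
        print(f"geometric:  {geometric(times_before):.2f} -> {geometric(times_after):.2f}")
        # arithmetic: 32.50 -> 25.00  (total elapsed time improved)
        # geometric:  17.78 -> 23.78  (the same tuning looks like a regression)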

  16. Background metric in supergravity theories

    International Nuclear Information System (INIS)

    Yoneya, T.

    1978-01-01

    In supergravity theories, we investigate the conformal anomaly of the path-integral determinant and the problem of fermion zero modes in the presence of a nontrivial background metric. Except in SO(3)-invariant supergravity, there are nonvanishing conformal anomalies. As a consequence, amplitudes around the nontrivial background metric contain unpredictable arbitrariness. The fermion zero modes which are explicitly constructed for the Euclidean Schwarzschild metric are interpreted as an indication of the supersymmetric multiplet structure of a black hole. The degree of degeneracy of a black hole is 2^(4n) in SO(n) supergravity.

  17. Generalized Painleve-Gullstrand metrics

    Energy Technology Data Exchange (ETDEWEB)

    Lin Chunyu [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China)], E-mail: l2891112@mail.ncku.edu.tw; Soo Chopin [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China)], E-mail: cpsoo@mail.ncku.edu.tw

    2009-02-02

    An obstruction to the implementation of spatially flat Painlevé-Gullstrand (PG) slicings is demonstrated, and explicitly discussed for Reissner-Nordström and Schwarzschild-anti-de Sitter spacetimes. Generalizations of PG slicings which are not spatially flat but which remain regular at the horizons are introduced. These metrics can be obtained from standard spherically symmetric metrics by physical Lorentz boosts. With these generalized PG metrics, problematic contributions to the imaginary part of the action in the Parikh-Wilczek derivation of Hawking radiation due to the obstruction can be avoided.

  18. Daylight metrics and energy savings

    Energy Technology Data Exchange (ETDEWEB)

    Mardaljevic, John; Heschong, Lisa; Lee, Eleanor

    2009-12-31

    The drive towards sustainable, low-energy buildings has increased the need for simple, yet accurate methods to evaluate whether a daylit building meets minimum standards for energy and human comfort performance. Current metrics account neither for the temporal and spatial aspects of daylight nor for occupants' comfort or interventions. This paper reviews the historical basis of current compliance methods for achieving daylit buildings, proposes a technical basis for the development of better metrics, and provides two case study examples to stimulate dialogue on how metrics can be applied in a practical, real-world context.

  19. The air quality and human health effects of integrating utility-scale batteries into the New York State electricity grid

    Science.gov (United States)

    Gilmore, Elisabeth A.; Apt, Jay; Walawalkar, Rahul; Adams, Peter J.; Lave, Lester B.

    In a restructured electricity market, utility-scale energy storage technologies such as advanced batteries can generate revenue by charging at low electricity prices and discharging at high prices. This strategy changes the magnitude and distribution of air quality emissions and the total carbon dioxide (CO2) emissions. We evaluate the social costs associated with these changes using a case study of 500 MW sodium-sulfur battery installations with 80% round-trip efficiency. The batteries displace peaking generators in New York City and charge using off-peak generation in the New York Independent System Operator (NYISO) electricity grid during the summer. We identify and map charging and displaced plant types to generators in the NYISO. We then convert the emissions into ambient concentrations with a chemical transport model, the Particulate Matter Comprehensive Air Quality Model with extensions (PMCAMx). Finally, we transform the concentrations into their equivalent human health effects and social benefits and costs. Reductions in premature mortality from fine particulate matter (PM2.5) result in a benefit of 4.5 ¢/kWh and 17 ¢/kWh from displacing a natural gas and a distillate fuel oil fueled peaking plant, respectively, in New York City. Ozone (O3) concentrations increase due to decreases in nitrogen oxide (NOx) emissions, although the magnitude of the social cost is less certain. Adding the costs from charging, displacing a distillate fuel oil plant yields a net social benefit, while displacing the natural gas plant has a net social cost. With the existing base-load capacity, the upstate population experiences an increase in adverse health effects. If wind generation is charging the battery, both the upstate charging location and New York City benefit. At $20 per tonne of CO2, the costs from CO2 are small compared to those from air quality. We conclude that storage could be added to existing electricity grids as part of an integrated strategy from a
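
    The paper's bottom line can be restated as simple arithmetic. In the sketch below, the two displacement benefits come from the abstract; the charging-related social cost is a hypothetical placeholder chosen only to reproduce the sign of the reported result, not a figure from the study:

        # All figures in cents/kWh. The two benefits are from the abstract;
        # the charging cost is a hypothetical placeholder.
        benefit_gas_peaker = 4.5    # PM2.5 mortality benefit, gas peaker displaced
        benefit_oil_peaker = 17.0   # distillate fuel oil peaker displaced
        charging_cost      = 10.0   # assumed social cost of off-peak charging

        print("net, gas displaced:", benefit_gas_peaker - charging_cost)   # net cost
        print("net, oil displaced:", benefit_oil_peaker - charging_cost)   # net benefit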

  20. Do generic utility measures capture what is important to the quality of life of people with multiple sclerosis?

    Science.gov (United States)

    Kuspinar, Ayse; Mayo, Nancy E

    2013-04-25

    The three most widely used utility measures are the Health Utilities Index Mark 2 and 3 (HUI2 and HUI3), the EuroQol-5D (EQ-5D) and the Short-Form-6D (SF-6D). In line with guidelines for economic evaluation from agencies such as the National Institute for Health and Clinical Excellence (NICE) and the Canadian Agency for Drugs and Technologies in Health (CADTH), these measures are currently being used to evaluate the cost-effectiveness of different interventions in MS. However, the challenge of using such measures in people with a specific health condition, such as MS, is that they may not capture all of the domains that are impacted upon by the condition. If important domains are missing from the generic measures, the value derived will be higher than the real impact, creating invalid comparisons across interventions and populations. Therefore, the objective of this study is to estimate the extent to which generic utility measures capture important domains that are affected by MS. The available study population consisted of men and women who had been registered after 1994 in three participating MS clinics in Greater Montreal, Quebec, Canada. Subjects were first interviewed on an individualized measure of quality of life (QOL) called the Patient Generated Index (PGI). The domains identified with the PGI were then classified and grouped together using the World Health Organization's International Classification of Functioning, Disability and Health (ICF), and mapped onto the HUI2, HUI3, EQ-5D and SF-6D. A total of 185 persons with MS were interviewed on the PGI. The sample was relatively young (mean age 43) and predominantly female. Both men and women had mild disability with a median Expanded Disability Status Scale (EDSS) score of 2. The top 10 domains that patients identified to be the most affected by their MS were work (62%), fatigue (48%), sports (39%), social life (28%), relationships (23%), walking/mobility (22%), cognition (21%), balance (14%), housework (12

  1. Let's Make Metric Ice Cream

    Science.gov (United States)

    Zimmerman, Marianna

    1975-01-01

    Describes a classroom activity which involved sixth grade students in a learning situation including making ice cream, safety procedures in a science laboratory, calibrating a thermometer, using metric units of volume and mass. (EB)

  2. Experiential space is hardly metric

    Czech Academy of Sciences Publication Activity Database

    Šikl, Radovan; Šimeček, Michal; Lukavský, Jiří

    2008-01-01

    Roč. 2008, č. 37 (2008), s. 58-58 ISSN 0301-0066. [European Conference on Visual Perception. 24.08-28.08.2008, Utrecht] R&D Projects: GA ČR GA406/07/1676 Institutional research plan: CEZ:AV0Z70250504 Keywords : visual space perception * metric and non-metric perceptual judgments * ecological validity Subject RIV: AN - Psychology

  3. Coverage Metrics for Model Checking

    Science.gov (United States)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.
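
    As a toy illustration of the idea (not the group's actual metrics): for explicit-state search, one simple coverage figure is the fraction of reachable states a budget-limited run manages to visit.

        from collections import deque

        # Toy transition system standing in for a program's state graph.
        transitions = {0: [1, 2], 1: [3], 2: [3], 3: []}

        def reachable(start, limit):
            # Breadth-first search that stops once `limit` states are seen.
            seen, queue = {start}, deque([start])
            while queue and len(seen) < limit:
                for nxt in transitions[queue.popleft()]:
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return seen

        visited = reachable(0, limit=3)       # run cut off by a state budget
        total   = reachable(0, limit=10**9)   # exhaustive run (toy-sized here)
        print(f"state coverage: {len(visited) / len(total):.0%}")   # 75%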

  4. Phantom metrics with Killing spinors

    Directory of Open Access Journals (Sweden)

    W.A. Sabra

    2015-11-01

    Full Text Available We study metric solutions of Einstein–anti-Maxwell theory admitting Killing spinors. The analogue of the IWP metric which admits a space-like Killing vector is found and is expressed in terms of a complex function satisfying the wave equation in flat (2+1)-dimensional space–time. As examples, electric and magnetic Kasner spaces are constructed by allowing the solution to depend only on the time coordinate. Euclidean solutions are also presented.

  5. Relevance of motion-related assessment metrics in laparoscopic surgery.

    Science.gov (United States)

    Oropesa, Ignacio; Chmarra, Magdalena K; Sánchez-González, Patricia; Lamata, Pablo; Rodrigues, Sharon P; Enciso, Silvia; Sánchez-Margallo, Francisco M; Jansen, Frank-Willem; Dankelman, Jenny; Gómez, Enrique J

    2013-06-01

    Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation processes of basic psychomotor laparoscopic skills and their correlation with the different abilities sought to measure. A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, 3 novel tasks for surgical assessment were designed. Face and construct validation was performed, with focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents, and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. Time, path length, and depth showed construct validity for all 3 tasks. Motion smoothness and idle time also showed validity for tasks involving bimanual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed a high internal consistency, proving them to be the most task-independent of all the metrics analyzed. Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight on the relevance of the results shown in this study.
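
    For orientation, the sketch below computes two of the metrics named above from a tracked instrument-tip trajectory. The sampling rate and trajectory are invented, and mean squared jerk is one common smoothness definition, not necessarily the exact one used with the TrEndo data:

        import numpy as np

        dt = 0.01                                   # assumed 100 Hz sampling
        rng = np.random.default_rng(0)
        pos = np.cumsum(rng.normal(scale=1e-3, size=(500, 3)), axis=0)  # metres

        # Path length: summed Euclidean distance between successive samples.
        path_length = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))

        # Motion smoothness via mean squared jerk (third derivative of position).
        vel  = np.gradient(pos, dt, axis=0)
        acc  = np.gradient(vel, dt, axis=0)
        jerk = np.gradient(acc, dt, axis=0)
        smoothness = np.mean(np.sum(jerk**2, axis=1))   # lower = smoother

        print(f"path length: {path_length:.3f} m, mean squared jerk: {smoothness:.3g}")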

  6. Interactions of visual attention and quality perception

    NARCIS (Netherlands)

    Redi, J.A.; Liu, H.; Zunino, R.; Heynderickx, I.E.J.R.

    2011-01-01

    Several attempts to integrate visual saliency information in quality metrics are described in literature, albeit with contradictory results. The way saliency is integrated in quality metrics should reflect the mechanisms underlying the interaction between image quality assessment and visual

  7. Overall Environmental Equipment Effectiveness as a Metric of a Lean and Green Manufacturing System

    Directory of Open Access Journals (Sweden)

    Rosario Domingo

    2015-07-01

    Full Text Available This paper presents a new metric for describing the sustainability improvements achieved, relative to the company’s initial situation, after implementing a lean and green manufacturing system. The final value of this metric is identified as the Overall Environmental Equipment Effectiveness (OEEE), which is used to analyze the evolution between two identified states of the Overall Equipment Effectiveness (OEE) and the sustainability together, and references, globally and individually, the production steps. The OEE is a known measure of equipment utilization, which includes the availability, quality and performance of each production step. In addition to these factors, the OEEE incorporates the concept of sustainability based on the calculated environmental impact of the complete product life cycle. Action research based on the different manufacturing processes of a tube fabrication company is conducted to assess the potential impact of this new indicator. The case study demonstrates the compatibility between green and lean manufacturing, using a common metric. The OEEE allows sustainability to be integrated into business decisions, and compares the environmental impact of two states, by identifying the improvements undertaken within the company’s processes.
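
    The classical OEE underneath this proposal is simply the product of availability, performance and quality. The sustainability factor that turns it into the OEEE is not specified in the abstract, so it appears below only as a hypothetical normalised score:

        # Classical OEE factors (invented example values).
        availability = 0.90   # run time / planned production time
        performance  = 0.95   # actual output rate / ideal rate
        quality      = 0.98   # good units / total units

        oee = availability * performance * quality   # ~0.84

        # Hypothetical sustainability score derived from a life-cycle
        # environmental impact comparison between two states (assumed form).
        sustainability = 0.88
        oeee = oee * sustainability                  # ~0.74

        print(f"OEE = {oee:.2f}, OEEE = {oeee:.2f}")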

  8. Revision and extension of Eco-LCA metrics for sustainability assessment of the energy and chemical processes.

    Science.gov (United States)

    Yang, Shiying; Yang, Siyu; Kraslawski, Andrzej; Qian, Yu

    2013-12-17

    Ecologically based life cycle assessment (Eco-LCA) is an appealing approach for evaluating the resource utilization and environmental impacts of the process industries at an ecological scale. However, the aggregated metrics of Eco-LCA suffer from some drawbacks: the environmental impact metric has limited applicability; the resource utilization metric ignores indirect consumption; the renewability metric fails to address the quantitative distinction of resource availability; and the productivity metric seems self-contradictory. In this paper, the existing Eco-LCA metrics are revised and extended for sustainability assessment of energy and chemical processes. A new Eco-LCA metrics system is proposed, comprising four independent dimensions: environmental impact, resource utilization, resource availability, and economic effectiveness. An illustrative example comparing a gas boiler and a solar boiler process provides insight into the features of the proposed approach.

  9. Addressing water quality issues on a watershed basis: a comprehensive approach for utilizing chapter 20 of the Michigan drain code

    International Nuclear Information System (INIS)

    McCulloch, J.P.

    2002-01-01

    There are five major watersheds in Oakland County. They are the Clinton, Flint, Huron, Rouge and Shiawassee. Included in these watersheds are 61 individual cities, villages and townships. Actions taken by one community within the watershed have a significant impact on other communities in the watershed. Consequently, a multi-community approach needs to be identified and utilized to comprehensively address public health and water quality issues. Some of the issues faced by these communities individually include stormwater management, flooding, drainage, and river and stream management. Failing septic systems, illicit connections causing groundwater contamination, and habitat and wetland degradation are also primary concerns. Finally, wastewater treatment capacity and sanitary sewer service also are regularly dealt with by these communities. Traditionally, short-term solutions to these often urgent problems required the construction of relief sewers or temporary retention structures. Unfortunately, solving the problem in one area often meant the creation of new problems downstream. Coordinating efforts among these 61 individual communities is difficult. These difficult challenges are best met with a coordinated, comprehensive plan. (author)

  10. On the utility of vacancies and tensile strain-induced quality factor enhancement for mass sensing using graphene monolayers

    International Nuclear Information System (INIS)

    Kim, Sung Youb; Park, Harold S

    2010-01-01

    We have utilized classical molecular dynamics to investigate the mass sensing potential of graphene monolayers, using gold as the model adsorbed atom. In doing so, we report two key findings. First, we find that while perfect graphene monolayers are effective mass sensors at very low (T < 10 K) temperatures, their mass sensing capability is lost at higher temperatures due to diffusion of the adsorbed atom at elevated temperatures. We demonstrate that even if the quality (Q) factors are significantly elevated through the application of tensile mechanical strain, the mass sensing resolution is still lost at elevated temperatures, which demonstrates that high Q-factors alone are insufficient to ensure the mass sensing capability of graphene. Second, we find that while the introduction of single vacancies into the graphene monolayer prevents the diffusion of the adsorbed atom, the mass sensing resolution is still lost at higher temperatures, again due to Q-factor degradation. We finally demonstrate that if the Q-factors of the graphene monolayers with single vacancies are kept acceptably high through the application of tensile strain, then the high Q-factors, in conjunction with the single atom vacancies to stop the diffusion of the adsorbed atom, enable graphene to maintain its mass sensing capability across a range of technologically relevant operating temperatures.
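
    The sensing principle at work here is standard resonator physics rather than anything specific to this paper: an adsorbed mass $\delta m$ shifts the resonant frequency $f_0$ of a resonator with effective mass $m_{\mathrm{eff}}$, so the smallest resolvable mass is set by the smallest resolvable frequency shift,

        \delta f \;\approx\; -\frac{f_0}{2\, m_{\mathrm{eff}}}\,\delta m
        \qquad\Longrightarrow\qquad
        \delta m_{\min} \;\approx\; \frac{2\, m_{\mathrm{eff}}}{f_0}\,\delta f_{\min},

    where $\delta f_{\min}$ is noise-limited and improves as the quality factor rises. This is why high Q-factors matter, and why the abstract's finding that they are necessary but not sufficient (once the adsorbate diffuses) is notable.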

  11. The air quality and human health effects of integrating utility-scale batteries into the New York State electricity grid

    International Nuclear Information System (INIS)

    Gilmore, Elisabeth A.; Apt, Jay; Lave, Lester B.; Walawalkar, Rahul; Adams, Peter J.

    2010-01-01

    In a restructured electricity market, utility-scale energy storage technologies such as advanced batteries can generate revenue by charging at low electricity prices and discharging at high prices. This strategy changes the magnitude and distribution of air quality emissions and the total carbon dioxide (CO 2 ) emissions. We evaluate the social costs associated with these changes using a case study of 500 MW sodium-sulfur battery installations with 80% round-trip efficiency. The batteries displace peaking generators in New York City and charge using off-peak generation in the New York Independent System Operator (NYISO) electricity grid during the summer. We identify and map charging and displaced plant types to generators in the NYISO. We then convert the emissions into ambient concentrations with a chemical transport model, the Particulate Matter Comprehensive Air Quality Model with extensions (PMCAM x ). Finally, we transform the concentrations into their equivalent human health effects and social benefits and costs. Reductions in premature mortality from fine particulate matter (PM 2.5 ) result in a benefit of 4.5 cents kWh -1 and 17 cents kWh -1 from displacing a natural gas and distillate fuel oil fueled peaking plant, respectively, in New York City. Ozone (O 3 ) concentrations increase due to decreases in nitrogen oxide (NO x ) emissions, although the magnitude of the social cost is less certain. Adding the costs from charging, displacing a distillate fuel oil plant yields a net social benefit, while displacing the natural gas plant has a net social cost. With the existing base-load capacity, the upstate population experiences an increase in adverse health effects. If wind generation is charging the battery, both the upstate charging location and New York City benefit. At $20 per tonne of CO 2 , the costs from CO 2 are small compared to those from air quality. We conclude that storage could be added to existing electricity grids as part of an integrated

  12. Moisture Metrics Project

    Energy Technology Data Exchange (ETDEWEB)

    Schuchmann, Mark

    2011-08-31

    The goal of this project was to determine the optimum moisture levels for commercial biomass pellet processing by correlating data taken from numerous points in the process and across several different feedstock materials produced and harvested using a variety of different management practices. This was done by correlating energy consumption and material throughput rates with the moisture content of incoming biomass (corn and wheat stubble, native grasses, weeds, and grass straws) and with the quality of the final pellet product. The project disseminated the data through a public website and answered questions from universities across Missouri that are engaged in biomass conversion technologies. Student interns from a local university were employed to help collect data, which enabled them to learn firsthand about biomass processing.

  13. Scalar-metric and scalar-metric-torsion gravitational theories

    International Nuclear Information System (INIS)

    Aldersley, S.J.

    1977-01-01

    The techniques of dimensional analysis and of the theory of tensorial concomitants are employed to study field equations in gravitational theories which incorporate scalar fields of the Brans-Dicke type. Within the context of scalar-metric gravitational theories, a uniqueness theorem for the geometric (or gravitational) part of the field equations is proven and a Lagrangian is determined which is uniquely specified by dimensional analysis. Within the context of scalar-metric-torsion gravitational theories a uniqueness theorem for field Lagrangians is presented and the corresponding Euler-Lagrange equations are given. Finally, an example of a scalar-metric-torsion theory is presented which is similar in many respects to the Brans-Dicke theory and the Einstein-Cartan theory

  14. Regge calculus from discontinuous metrics

    International Nuclear Information System (INIS)

    Khatsymovsky, V.M.

    2003-01-01

    Regge calculus is considered as a particular case of a more general system in which the link lengths of two neighbouring 4-tetrahedra do not necessarily coincide on their common face. This system is treated as one described by a metric which is discontinuous on the faces. In the superspace of all discontinuous metrics, the Regge calculus metrics form a hypersurface defined by continuity conditions. Quantum theory of the discontinuous metric system is assumed to be fixed somehow in the form of a quantum measure on (the space of functionals on) the superspace. The problem of reducing this measure to the Regge hypersurface is addressed. The quantum Regge calculus measure is defined from a discontinuous metric measure by inserting a δ-function-like phase factor. The requirement that continuity conditions be imposed in a 'face-independent' way fixes this factor uniquely. The term 'face-independent' means that this factor depends only on the (hyper)plane spanned by the face, not on its form and size. This requirement seems natural from the viewpoint of the existence of a well-defined continuum limit maximally free of lattice artefacts.

  15. FQC Dashboard: integrates FastQC results into a web-based, interactive, and extensible FASTQ quality control tool

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Joseph; Pirrung, Meg; McCue, Lee Ann

    2017-06-09

    FQC is software that facilitates large-scale quality control of FASTQ files by carrying out a QC protocol, parsing results, and aggregating quality metrics within and across experiments into an interactive dashboard. The dashboard utilizes human-readable configuration files to manipulate the pages and tabs, and is extensible with CSV data.
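
    The aggregation step FQC automates can be approximated in a few lines. The sketch below assumes FastQC has already been run and has produced its usual per-sample summary.txt files (tab-separated status, module, filename); the directory layout and output file are illustrative, not FQC's actual code:

        import csv
        import glob
        import os

        rows = []
        # FastQC writes one summary.txt per input file, with tab-separated
        # lines: status (PASS/WARN/FAIL), module name, input filename.
        for summary in glob.glob("qc/*_fastqc/summary.txt"):   # path is illustrative
            sample = os.path.basename(os.path.dirname(summary))
            with open(summary) as fh:
                for status, module, _ in csv.reader(fh, delimiter="\t"):
                    rows.append({"sample": sample, "module": module, "status": status})

        with open("qc_dashboard.csv", "w", newline="") as out:
            writer = csv.DictWriter(out, fieldnames=["sample", "module", "status"])
            writer.writeheader()
            writer.writerows(rows)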

  16. Symmetries of Taub-NUT dual metrics

    International Nuclear Information System (INIS)

    Baleanu, D.; Codoban, S.

    1998-01-01

    Recently geometric duality was analyzed for a metric which admits Killing tensors. An interesting example arises when the manifold has Killing-Yano tensors. The symmetries of the dual metrics in the case of Taub-NUT metric are investigated. Generic and non-generic symmetries of dual Taub-NUT metric are analyzed

  17. A Kerr-NUT metric

    International Nuclear Information System (INIS)

    Vaidya, P.C.; Patel, L.K.; Bhatt, P.V.

    1976-01-01

    Using Galilean time and retarded distance as coordinates, the usual Kerr metric is expressed in a form similar to the Newman-Unti-Tamburino (NUT) metric. The combined Kerr-NUT metric is then investigated. In addition to the Kerr and NUT solutions of Einstein's equations, three other types of solutions are derived. These are (i) the radiating Kerr solution, (ii) the radiating NUT solution satisfying $R_{ik} = \sigma \xi_i \xi_k$, $\xi_i \xi^i = 0$, and (iii) the associated Kerr solution satisfying $R_{ik} = 0$. Solution (i) is distinct from and simpler than the one reported earlier by Vaidya and Patel (Phys. Rev. D 7:3590 (1973)). Solutions (ii) and (iii) give line elements which have the axis of symmetry as a singular line. (author)

  18. Complexity Metrics for Workflow Nets

    DEFF Research Database (Denmark)

    Lassen, Kristian Bisgaard; van der Aalst, Wil M.P.

    2009-01-01

    analysts have difficulties grasping the dynamics implied by a process model. Recent empirical studies show that people make numerous errors when modeling complex business processes, e.g., about 20 percent of the EPCs in the SAP reference model have design flaws resulting in potential deadlocks, livelocks, etc. It seems obvious that the complexity of the model contributes to design errors and a lack of understanding. It is not easy to measure complexity, however. This paper presents three complexity metrics that have been implemented in the process analysis tool ProM. The metrics are defined for a subclass of Petri nets named Workflow nets, but the results can easily be applied to other languages. To demonstrate the applicability of these metrics, we have applied our approach and tool to 262 relatively complex Protos models made in the context of various student projects. This allows us to validate

  19. The uniqueness of the Fisher metric as information metric

    Czech Academy of Sciences Publication Activity Database

    Le, Hong-Van

    2017-01-01

    Roč. 69, č. 4 (2017), s. 879-896 ISSN 0020-3157 Institutional support: RVO:67985840 Keywords : Chentsov’s theorem * mixed topology * monotonicity of the Fisher metric Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.049, year: 2016 https://link.springer.com/article/10.1007%2Fs10463-016-0562-0

  20. Invariant metrics for Hamiltonian systems

    International Nuclear Information System (INIS)

    Rangarajan, G.; Dragt, A.J.; Neri, F.

    1991-05-01

    In this paper, invariant metrics are constructed for Hamiltonian systems. These metrics give rise to norms on the space of homogeneous polynomials of phase-space variables. For an accelerator lattice described by a Hamiltonian, these norms characterize the nonlinear content of the lattice. Therefore, the performance of the lattice can be improved by minimizing the norm as a function of parameters describing the beam-line elements in the lattice. A four-fold increase in the dynamic aperture of a model FODO cell is obtained using this procedure. 7 refs

  1. Generalization of Vaidya's radiation metric

    Energy Technology Data Exchange (ETDEWEB)

    Gleiser, R J; Kozameh, C N [Universidad Nacional de Cordoba (Argentina). Instituto de Matematica, Astronomia y Fisica

    1981-11-01

    In this paper it is shown that if Vaidya's radiation metric is considered from the point of view of kinetic theory in general relativity, the corresponding phase space distribution function can be generalized in a particular way. The new family of spherically symmetric radiation metrics obtained contains Vaidya's as a limiting situation. The Einstein field equations are solved in a 'comoving' coordinate system. Two arbitrary functions of a single variable are introduced in the process of solving these equations. Particular examples considered are a stationary solution, a nonvacuum solution depending on a single parameter, and several limiting situations.

  2. Technical Privacy Metrics: a Systematic Survey

    OpenAIRE

    Wagner, Isabel; Eckhoff, David

    2018-01-01

    The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature makes an informed choice of metrics challenging. As a result, instead of using existing metrics, n...

  3. Remarks on G-Metric Spaces

    Directory of Open Access Journals (Sweden)

    Bessem Samet

    2013-01-01

    Full Text Available In 2005, Mustafa and Sims (2006) introduced and studied a new class of generalized metric spaces, which are called G-metric spaces, as a generalization of metric spaces. We establish some useful propositions to show that many fixed point theorems on (nonsymmetric) G-metric spaces given recently by many authors follow directly from well-known theorems on metric spaces. Our technique can be easily extended to other results, as shown in application.

  4. Utility of routine data sources for feedback on the quality of cancer care: an assessment based on clinical practice guidelines

    Directory of Open Access Journals (Sweden)

    Baade Peter

    2009-05-01

    Full Text Available Abstract. Background: Not all cancer patients receive state-of-the-art care and providing regular feedback to clinicians might reduce this problem. The purpose of this study was to assess the utility of various data sources in providing feedback on the quality of cancer care. Methods: Published clinical practice guidelines were used to obtain a list of processes-of-care of interest to clinicians. These were assigned to one of four data categories according to their availability and the marginal cost of using them for feedback. Results: Only 8 (3%) of 243 processes-of-care could be measured using population-based registry or administrative inpatient data (lowest cost). A further 119 (49%) could be measured using a core clinical registry, which contains information on important prognostic factors (e.g., clinical stage, physiological reserve, hormone-receptor status). Another 88 (36%) required an expanded clinical registry or medical record review, mainly because they concerned long-term management of disease progression (recurrences and metastases), and 28 (11.5%) required patient interview or audio-taping of consultations because they involved information sharing between clinician and patient. Conclusion: The advantages of population-based cancer registries and administrative inpatient data are wide coverage and low cost. The disadvantage is that they currently contain information on only a few processes-of-care. In most jurisdictions, clinical cancer registries, which can be used to report on many more processes-of-care, do not cover smaller hospitals. If we are to provide feedback about all patients, not just those in larger academic hospitals with the most developed data systems, then we need to develop sustainable population-based data systems that capture information on prognostic factors at the time of initial diagnosis and information on management of disease progression.

  5. Utility of routine data sources for feedback on the quality of cancer care: an assessment based on clinical practice guidelines.

    Science.gov (United States)

    Coory, Michael; Thompson, Bridie; Baade, Peter; Fritschi, Lin

    2009-05-27

    Not all cancer patients receive state-of-the-art care and providing regular feedback to clinicians might reduce this problem. The purpose of this study was to assess the utility of various data sources in providing feedback on the quality of cancer care. Published clinical practice guidelines were used to obtain a list of processes-of-care of interest to clinicians. These were assigned to one of four data categories according to their availability and the marginal cost of using them for feedback. Only 8 (3%) of 243 processes-of-care could be measured using population-based registry or administrative inpatient data (lowest cost). A further 119 (49%) could be measured using a core clinical registry, which contains information on important prognostic factors (e.g., clinical stage, physiological reserve, hormone-receptor status). Another 88 (36%) required an expanded clinical registry or medical record review; mainly because they concerned long-term management of disease progression (recurrences and metastases) and 28 (11.5%) required patient interview or audio-taping of consultations because they involved information sharing between clinician and patient. The advantages of population-based cancer registries and administrative inpatient data are wide coverage and low cost. The disadvantage is that they currently contain information on only a few processes-of-care. In most jurisdictions, clinical cancer registries, which can be used to report on many more processes-of-care, do not cover smaller hospitals. If we are to provide feedback about all patients, not just those in larger academic hospitals with the most developed data systems, then we need to develop sustainable population-based data systems that capture information on prognostic factors at the time of initial diagnosis and information on management of disease progression.

  6. Separable metrics and radiating stars

    Indian Academy of Sciences (India)

    We study the junction condition relating the pressure to heat flux at the boundary of an accelerating and expanding spherically symmetric radiating star. We transform the junction condition to an ordinary differential equation by making a separability assumption on the metric functions in the space–time variables.

  7. Socio-technical security metrics

    NARCIS (Netherlands)

    Gollmann, D.; Herley, C.; Koenig, V.; Pieters, W.; Sasse, M.A.

    2015-01-01

    Report from Dagstuhl seminar 14491. This report documents the program and the outcomes of Dagstuhl Seminar 14491 “Socio-Technical Security Metrics”. In the domain of safety, metrics inform many decisions, from the height of new dikes to the design of nuclear plants. We can state, for example, that

  8. Leading Gainful Employment Metric Reporting

    Science.gov (United States)

    Powers, Kristina; MacPherson, Derek

    2016-01-01

    This chapter will address the importance of intercampus involvement in reporting of gainful employment student-level data that will be used in the calculation of gainful employment metrics by the U.S. Department of Education. The authors will discuss why building relationships within the institution is critical for effective gainful employment…

  9. Defining a standard metric for electricity savings

    International Nuclear Information System (INIS)

    Koomey, Jonathan; Akbari, Hashem; Blumstein, Carl; Brown, Marilyn; Brown, Richard; Calwell, Chris; Carter, Sheryl; Cavanagh, Ralph; Chang, Audrey; Claridge, David; Craig, Paul; Diamond, Rick; Eto, Joseph H; Fulkerson, William; Gadgil, Ashok; Geller, Howard; Goldemberg, Jose; Goldman, Chuck; Goldstein, David B; Greenberg, Steve

    2010-01-01

    The growing investment by governments and electric utilities in energy efficiency programs highlights the need for simple tools to help assess and explain the size of the potential resource. One technique that is commonly used in this effort is to characterize electricity savings in terms of avoided power plants, because it is easier for people to visualize a power plant than it is to understand an abstraction such as billions of kilowatt-hours. Unfortunately, there is no standardization around the characteristics of such power plants. In this letter we define parameters for a standard avoided power plant that have physical meaning and intuitive plausibility, for use in back-of-the-envelope calculations. For the prototypical plant this article settles on a 500 MW existing coal plant operating at a 70% capacity factor with 7% T&D losses. Displacing such a plant for one year would save 3 billion kWh/year at the meter and reduce emissions by 3 million metric tons of CO2 per year. The proposed name for this metric is the Rosenfeld, in keeping with the tradition among scientists of naming units in honor of the person most responsible for the discovery and widespread adoption of the underlying scientific principle in question: Dr Arthur H Rosenfeld.
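
    The letter's numbers check out with simple arithmetic. In the sketch below, the plant parameters come from the abstract, while the coal emission factor (about 1 metric ton of CO2 per MWh for an existing coal plant) is an assumption consistent with the stated 3 million tons per year:

        mw              = 500
        capacity_factor = 0.70
        td_losses       = 0.07
        hours_per_year  = 8760

        generated_kwh = mw * 1_000 * capacity_factor * hours_per_year  # ~3.07e9 kWh
        at_meter_kwh  = generated_kwh * (1 - td_losses)                # ~2.85e9 kWh, i.e. ~3 billion

        tonnes_co2 = generated_kwh / 1_000 * 1.0   # assumed 1 t CO2 per MWh
        print(f"{at_meter_kwh:.2e} kWh/year at the meter, {tonnes_co2:.2e} t CO2/year")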

  10. Defining a standard metric for electricity savings

    Energy Technology Data Exchange (ETDEWEB)

    Koomey, Jonathan [Lawrence Berkeley National Laboratory and Stanford University, PO Box 20313, Oakland, CA 94620-0313 (United States); Akbari, Hashem; Blumstein, Carl; Brown, Marilyn; Brown, Richard; Calwell, Chris; Carter, Sheryl; Cavanagh, Ralph; Chang, Audrey; Claridge, David; Craig, Paul; Diamond, Rick; Eto, Joseph H; Fulkerson, William; Gadgil, Ashok; Geller, Howard; Goldemberg, Jose; Goldman, Chuck; Goldstein, David B; Greenberg, Steve, E-mail: JGKoomey@stanford.ed

    2010-01-15

    The growing investment by governments and electric utilities in energy efficiency programs highlights the need for simple tools to help assess and explain the size of the potential resource. One technique that is commonly used in this effort is to characterize electricity savings in terms of avoided power plants, because it is easier for people to visualize a power plant than it is to understand an abstraction such as billions of kilowatt-hours. Unfortunately, there is no standardization around the characteristics of such power plants. In this letter we define parameters for a standard avoided power plant that have physical meaning and intuitive plausibility, for use in back-of-the-envelope calculations. For the prototypical plant this article settles on a 500 MW existing coal plant operating at a 70% capacity factor with 7% T&D losses. Displacing such a plant for one year would save 3 billion kWh/year at the meter and reduce emissions by 3 million metric tons of CO2 per year. The proposed name for this metric is the Rosenfeld, in keeping with the tradition among scientists of naming units in honor of the person most responsible for the discovery and widespread adoption of the underlying scientific principle in question: Dr Arthur H Rosenfeld.

  11. Defining a Standard Metric for Electricity Savings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Marilyn; Akbari, Hashem; Blumstein, Carl; Koomey, Jonathan; Brown, Richard; Calwell, Chris; Carter, Sheryl; Cavanagh, Ralph; Chang, Audrey; Claridge, David; Craig, Paul; Diamond, Rick; Eto, Joseph H.; Fulkerson, William; Gadgil, Ashok; Geller, Howard; Goldemberg, Jose; Goldman, Chuck; Goldstein, David B.; Greenberg, Steve; Hafemeister, David; Harris, Jeff; Harvey, Hal; Heitz, Eric; Hirst, Eric; Hummel, Holmes; Kammen, Dan; Kelly, Henry; Laitner, Skip; Levine, Mark; Lovins, Amory; Masters, Gil; McMahon, James E.; Meier, Alan; Messenger, Michael; Millhone, John; Mills, Evan; Nadel, Steve; Nordman, Bruce; Price, Lynn; Romm, Joe; Ross, Marc; Rufo, Michael; Sathaye, Jayant; Schipper, Lee; Schneider, Stephen H; Sweeney, James L; Verdict, Malcolm; Vorsatz, Diana; Wang, Devra; Weinberg, Carl; Wilk, Richard; Wilson, John; Worrell, Ernst

    2009-03-01

    The growing investment by governments and electric utilities in energy efficiency programs highlights the need for simple tools to help assess and explain the size of the potential resource. One technique that is commonly used in this effort is to characterize electricity savings in terms of avoided power plants, because it is easier for people to visualize a power plant than it is to understand an abstraction such as billions of kilowatt-hours. Unfortunately, there is no standardization around the characteristics of such power plants. In this letter we define parameters for a standard avoided power plant that have physical meaning and intuitive plausibility, for use in back-of-the-envelope calculations. For the prototypical plant this article settles on a 500 MW existing coal plant operating at a 70% capacity factor with 7% T&D losses. Displacing such a plant for one year would save 3 billion kWh per year at the meter and reduce emissions by 3 million metric tons of CO2 per year. The proposed name for this metric is the Rosenfeld, in keeping with the tradition among scientists of naming units in honor of the person most responsible for the discovery and widespread adoption of the underlying scientific principle in question: Dr. Arthur H. Rosenfeld.

  12. Do efforts to standardize, assess and improve the quality of health service provision to adolescents by government-run health services in low and middle income countries, lead to improvements in service-quality and service-utilization by adolescents?

    Science.gov (United States)

    Chandra-Mouli, Venkatraman; Chatterjee, Subidita; Bose, Krishna

    2016-02-06

    Researchers and implementers working in adolescent health, and adolescents themselves question whether government-run health services in conservative and resource-constrained settings can be made adolescent friendly. This paper aims to find out what selected low and middle income country (LMIC) governments have set out to do to improve the quality of health service provision to adolescents; whether their efforts led to measurable improvements in quality and to increased health service-utilization by adolescents. We gathered normative guidance and reports from eight LMICs in Asia, Africa, Central and Eastern Europe and the Western Pacific. We analysed national quality standards for adolescent friendly health services, findings from the assessments of the quality of health service provision, and findings on the utilization of health services. Governments of LMICs have set out to improve the accessibility, acceptability, equity, appropriateness and effectiveness of health service provision to adolescents by defining standards and actions to achieve them. Their actions have led to measurable improvements in quality and to increases in health service utilisation by adolescents. With support, government-run health facilities in LMICs can improve the quality of health services and their utilization by adolescents.

  13. A weighted coupling metric for business process models

    NARCIS (Netherlands)

    Vanderfeesten, I.T.P.; Cardoso, J.; Reijers, H.A.; Eder, J.; Tomassen, S.L.; Opdahl, A.; Sindre, G.

    2007-01-01

    Various efforts recently aimed at the development of quality metrics for process models. In this paper, we propose a new notion of coupling, which has been used successfully in software engineering for many years. It extends other work by specifically incorporating the effects of different types of

  14. Value-based metrics and Internet-based enterprises

    Science.gov (United States)

    Gupta, Krishan M.

    2001-10-01

    Within the last few years, a host of value-based metrics like EVA, MVA, TBR, CFROI, and TSR have evolved. This paper attempts to analyze the validity and applicability of EVA and the Balanced Scorecard for Internet-based organizations. Despite the collapse of the dot-com model, firms engaged in e-commerce continue to struggle to find new ways to account for customer base, technology, employees, knowledge, etc., as part of the value of the firm. While some metrics, like the Balanced Scorecard, are geared towards internal use, others, like EVA, are for external use. Value-based metrics are used for performing internal audits as well as comparing firms against one another, and can also be effectively utilized by individuals outside the firm looking to determine if the firm is creating value for its stakeholders.
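
    For reference, EVA in its textbook form is after-tax operating profit minus a charge for the capital employed; the figures below are invented for illustration:

        # EVA = NOPAT - WACC x invested capital (textbook definition; all
        # figures below are invented for illustration).
        nopat            = 12_000_000   # net operating profit after tax
        invested_capital = 80_000_000
        wacc             = 0.11         # weighted average cost of capital

        eva = nopat - wacc * invested_capital
        print(f"EVA = {eva:,.0f}")      # 3,200,000 > 0: value created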

  15. Utilizing a Post-discharge Telephone Call in Outpatient Parenteral Antimicrobial Therapy (OPAT): Findings from a Quality Improvement Project

    Science.gov (United States)

    Felder, Kimberly; Vaz, Louise; Barnes, Penelope; Varley, Cara

    2017-01-01

    Abstract. Background: Transitions of care from hospitals to outpatient settings, especially for patients requiring outpatient parenteral antimicrobial therapy (OPAT), are complex. OPAT complications, such as adverse antimicrobial reactions, vascular access problems, and hospital readmissions are common. Data from the transitions of care literature suggest that post-discharge telephone calls (TCs) may significantly decrease re-hospitalization but no studies have assessed the utility of post-discharge TCs as an OPAT program quality improvement process. Methods: Adult OPAT patients discharged from our hospital between April 1, 2015 and May 31, 2016 were queried for post-discharge concerns. TCs to patients or their caregivers were administered by trained medical assistants within the Department of Infectious Diseases using a standardized script and documented in the electronic medical record (EMR). Feasibility was assessed using call completion rate. The type and frequency of reported issues were analyzed by retrospective chart review. Results: 636 of 689 eligible adult OPAT patients or their caregivers received a TC with responses to scripted questions documented in the EMR (92% completion rate). 302 patients (47%) reported 319 issues, including 293 (92%) relevant to OPAT. Antimicrobial issues included diarrhea/stool changes (58; 9%); nausea/vomiting (27; 4%); and missed antimicrobial doses (22; 3%). Vascular access issues included line patency concerns (21; 3%); vascular access dressing problems (17; 2.6%) and arm pain/swelling (6; 1%). OPAT vendor issues included delays in lab or line care services (23; 4%) and OPAT orders reported as lost/not received (21; 3%). Other ID-related issues included fevers/chills/sweats (27; 4%), wound concerns (16; 2.5%), and pain (15; 2.5%). Conclusion: Adding a post-discharge TC to an OPAT program was feasible and resulted in frequent and early identification of significant OPAT patient and caregiver concerns. Findings suggest potential high

  16. A generalized Web Service response time metric to support collaborative and corroborative Web Service monitoring

    CSIR Research Space (South Africa)

    Makitla, I

    2015-12-01

    Full Text Available In this paper, we describe the development of a generalized metric for computing response time of a web service. Such a generalized metric would help to develop consensus with regards to the meanings of contracted Quality of Service (QoS) parameters...
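
    As a baseline for the kind of measurement the paper generalizes, a client-side response-time sample can be taken as below. The URL is a placeholder, and a consensus metric would additionally specify how many samples to take and how to separate network latency from service processing time:

        import time
        import urllib.request

        def response_time(url, timeout=10):
            # Wall-clock time from request start until the full body is read.
            start = time.perf_counter()
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                resp.read()
            return time.perf_counter() - start

        samples = [response_time("http://example.com/") for _ in range(5)]
        print(f"mean response time: {sum(samples) / len(samples):.3f} s")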

  17. Group covariance and metrical theory

    International Nuclear Information System (INIS)

    Halpern, L.

    1983-01-01

    The a priori introduction of a Lie group of transformations into a physical theory has often proved to be useful; it usually serves to describe special simplified conditions before a general theory can be worked out. Newton's assumptions of absolute space and time are examples where the Euclidean group and translation group have been introduced. These groups were extended to the Galilei group and modified in the special theory of relativity to the Poincaré group to describe physics under the given conditions covariantly in the simplest way. The criticism of the a priori character leads to the formulation of the general theory of relativity. The general metric theory does not really give preference to a particular invariance group - even the principle of equivalence can be adapted to a whole family of groups. The physical laws covariantly inserted into the metric space are however adapted to the Poincaré group. 8 references

  18. General relativity: An erfc metric

    Science.gov (United States)

    Plamondon, Réjean

    2018-06-01

    This paper proposes an erfc potential to incorporate in a symmetric metric. One key feature of this model is that it relies on the existence of an intrinsic physical constant σ, a star-specific proper length that scales all its surroundings. Based thereon, the new metric is used to study the space-time geometry of a static symmetric massive object, as seen from its interior. The analytical solutions to the Einstein equation are presented, highlighting the absence of singularities and discontinuities in such a model. The geodesics are derived in their second- and first-order differential formats. Recalling the slight impact of the new model on the classical general relativity tests in the solar system, a number of facts and open problems are briefly revisited on the basis of a heuristic definition of σ. A special attention is given to gravitational collapses and non-singular black holes.

  19. hdm: High-dimensional metrics

    OpenAIRE

    Chernozhukov, Victor; Hansen, Christian; Spindler, Martin

    2016-01-01

    In this article the package High-dimensional Metrics (hdm) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e...

  20. Multi-Metric Sustainability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cowlin, Shannon [National Renewable Energy Lab. (NREL), Golden, CO (United States); Heimiller, Donna [National Renewable Energy Lab. (NREL), Golden, CO (United States); Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Pless, Jacquelyn [National Renewable Energy Lab. (NREL), Golden, CO (United States); Munoz, David [Colorado School of Mines, Golden, CO (United States)

    2014-12-01

    A readily accessible framework that allows for evaluating impacts and comparing tradeoffs among factors in energy policy, expansion planning, and investment decision making is lacking. Recognizing this, the Joint Institute for Strategic Energy Analysis (JISEA) funded an exploration of multi-metric sustainability analysis (MMSA) to provide energy decision makers with a means to make more comprehensive comparisons of energy technologies. The resulting MMSA tool lets decision makers simultaneously compare technologies and potential deployment locations.

  1. Sensory Metrics of Neuromechanical Trust.

    Science.gov (United States)

    Softky, William; Benford, Criscillia

    2017-09-01

    Today digital sources supply a historically unprecedented component of human sensorimotor data, the consumption of which is correlated with poorly understood maladies such as Internet addiction disorder and Internet gaming disorder. Because both natural and digital sensorimotor data share common mathematical descriptions, one can quantify our informational sensorimotor needs using the signal processing metrics of entropy, noise, dimensionality, continuity, latency, and bandwidth. Such metrics describe in neutral terms the informational diet human brains require to self-calibrate, allowing individuals to maintain trusting relationships. With these metrics, we define the trust humans experience using the mathematical language of computational models, that is, as a primitive statistical algorithm processing finely grained sensorimotor data from neuromechanical interaction. This definition of neuromechanical trust implies that artificial sensorimotor inputs and interactions that attract low-level attention through frequent discontinuities and enhanced coherence will decalibrate a brain's representation of its world over the long term by violating the implicit statistical contract for which self-calibration evolved. Our hypersimplified mathematical understanding of human sensorimotor processing as multiscale, continuous-time vibratory interaction allows equally broad-brush descriptions of failure modes and solutions. For example, we model addiction in general as the result of homeostatic regulation gone awry in novel environments (sign reversal) and digital dependency as a sub-case in which the decalibration caused by digital sensorimotor data spurs yet more consumption of them. We predict that institutions can use these sensorimotor metrics to quantify media richness to improve employee well-being; that dyads and family-size groups will bond and heal best through low-latency, high-resolution multisensory interaction such as shared meals and reciprocated touch; and

  2. Metric reconstruction from Weyl scalars

    Energy Technology Data Exchange (ETDEWEB)

    Whiting, Bernard F; Price, Larry R [Department of Physics, PO Box 118440, University of Florida, Gainesville, FL 32611 (United States)

    2005-08-07

    The Kerr geometry has remained an elusive world in which to explore physics and delve into the more esoteric implications of general relativity. Following the discovery, by Kerr in 1963, of the metric for a rotating black hole, the most major advance has been an understanding of its Weyl curvature perturbations based on Teukolsky's discovery of separable wave equations some ten years later. In the current research climate, where experiments across the globe are preparing for the first detection of gravitational waves, a more complete understanding than concerns just the Weyl curvature is now called for. To understand precisely how comparatively small masses move in response to the gravitational waves they emit, a formalism has been developed based on a description of the whole spacetime metric perturbation in the neighbourhood of the emission region. Presently, such a description is not available for the Kerr geometry. While there does exist a prescription for obtaining metric perturbations once curvature perturbations are known, it has become apparent that there are gaps in that formalism which are still waiting to be filled. The most serious gaps include gauge inflexibility, the inability to include sources-which are essential when the emitting masses are considered-and the failure to describe the l = 0 and 1 perturbation properties. Among these latter properties of the perturbed spacetime, arising from a point mass in orbit, are the perturbed mass and axial component of angular momentum, as well as the very elusive Carter constant for non-axial angular momentum. A status report is given on recent work which begins to repair these deficiencies in our current incomplete description of Kerr metric perturbations.

  3. Metric reconstruction from Weyl scalars

    International Nuclear Information System (INIS)

    Whiting, Bernard F; Price, Larry R

    2005-01-01

    The Kerr geometry has remained an elusive world in which to explore physics and delve into the more esoteric implications of general relativity. Following the discovery, by Kerr in 1963, of the metric for a rotating black hole, the most major advance has been an understanding of its Weyl curvature perturbations based on Teukolsky's discovery of separable wave equations some ten years later. In the current research climate, where experiments across the globe are preparing for the first detection of gravitational waves, a more complete understanding than concerns just the Weyl curvature is now called for. To understand precisely how comparatively small masses move in response to the gravitational waves they emit, a formalism has been developed based on a description of the whole spacetime metric perturbation in the neighbourhood of the emission region. Presently, such a description is not available for the Kerr geometry. While there does exist a prescription for obtaining metric perturbations once curvature perturbations are known, it has become apparent that there are gaps in that formalism which are still waiting to be filled. The most serious gaps include gauge inflexibility, the inability to include sources-which are essential when the emitting masses are considered-and the failure to describe the l = 0 and 1 perturbation properties. Among these latter properties of the perturbed spacetime, arising from a point mass in orbit, are the perturbed mass and axial component of angular momentum, as well as the very elusive Carter constant for non-axial angular momentum. A status report is given on recent work which begins to repair these deficiencies in our current incomplete description of Kerr metric perturbations

  4. Sustainability Metrics: The San Luis Basin Project

    Science.gov (United States)

    Sustainability is about promoting humanly desirable dynamic regimes of the environment. Metrics: ecological footprint, net regional product, exergy, emergy, and Fisher Information. Adaptive management: (1) metrics assess problem, (2) specific problem identified, and (3) managemen...

  5. [Sexual hormone and traditional Chinese patent medicine for early postmenopausal women: effect on quality of life and cost-utility analysis].

    Science.gov (United States)

    Zhou, Ling-Ling; Xu, Liang-Zhi; Liu, Hong-Wei; Zhang, Jing; Liu, Ying; Liu, Xiao-Fang; Tang, Liu-Lin; Zhuang, Jing; Liu, Xiao-Xian; Qiao, Lin

    2009-11-01

    To evaluate the effect of Premarin and Kuntai capsule (a traditional Chinese patent medicine) on the quality of life (QOL) and their cost-utility in early postmenopausal women. Fifty-seven women with menopausal syndrome in the early postmenopausal stage were randomly allocated into Premarin group (0.3 mg/day and 0.6 mg/day alternately, n=29) and Kuntai group (4 g/day, n=28). The therapies lasted for one year and the patients were followed up every 3 months. The QOL of the patients was evaluated and the utility scores were obtained from rating scale to conduct a cost-utility analysis (CUA). At each follow-up examination, no significant difference was found in the QOL between the two groups (P>0.05). The QOL obviously increased after the 1-year-long therapy in both the groups, and Kuntai required longer treatment time than Premarin to take effect. The cost-utility ratio of Premarin and Kuntai were 13581.45 yuan/QALY (quality adjusted life year) and 25105.12 yuan/QALY, respectively. Both incremental cost analysis and sensitivity analysis showed that Kuntai was more costly than Premarin. The result of per-protocol analysis was consistent with that of intention-to-treat analysis. At early stage of menopause, the QOL of women with menopausal syndrome can be significantly improved by low-dose Premarin and Kuntai capsule, but the latter is more costly.
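
    The cost-utility figures above follow the standard definition, total cost divided by quality-adjusted life years gained. The absolute costs and QALY gains are not reported in the abstract, so the sketch below restates the comparison using only the two published ratios:

        # Cost-utility ratios reported in the abstract (yuan per QALY gained).
        cur_premarin = 13581.45
        cur_kuntai   = 25105.12

        # CUR = total cost / QALYs gained, so for any fixed QALY gain the
        # comparison reduces to the difference in cost per QALY:
        extra = cur_kuntai - cur_premarin
        print(f"Kuntai costs {extra:,.2f} yuan more per QALY than Premarin")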

  6. A Single Conjunction Risk Assessment Metric: the F-Value

    Science.gov (United States)

    Frigm, Ryan Clayton; Newman, Lauri K.

    2009-01-01

    The Conjunction Assessment Team at NASA Goddard Space Flight Center provides conjunction risk assessment for many NASA robotic missions. These risk assessments are based on several figures of merit, such as miss distance, probability of collision, and orbit determination solution quality. However, these individual metrics do not singly capture the overall risk associated with a conjunction, making it difficult for someone without this complete understanding to take action, such as an avoidance maneuver. The goal of this analysis is to introduce a single risk index metric that can easily convey the level of risk without all of the technical details. The proposed index is called the conjunction "F-value." This paper presents the concept of the F-value and the tuning of the metric for use in routine Conjunction Assessment operations.
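
    The record introduces the F-value only as a tuned single index over figures of merit such as miss distance, collision probability, and orbit-determination quality; it does not publish the combining rule. The sketch below is therefore purely illustrative: all names, normalizations, weights, and thresholds are invented assumptions, not the operational metric.

        import math

        def f_value(miss_distance_m, collision_prob, od_quality,
                    weights=(0.4, 0.4, 0.2)):
            """Hypothetical single-index conjunction risk score in [0, 1].

            Assumptions (not from the source): risk rises as miss distance
            shrinks below ~10 km, as Pc approaches 1, and as orbit-determination
            quality (0 = poor, 1 = excellent) degrades.
            """
            # Normalize miss distance: zero risk at >= 10 km, full risk at 0 m.
            r_miss = max(0.0, 1.0 - miss_distance_m / 10_000.0)
            # Map collision probability onto a log scale between 1e-10 and 1e-2.
            p = min(max(collision_prob, 1e-10), 1e-2)
            r_pc = (math.log10(p) + 10.0) / 8.0
            # Poor orbit determination increases risk.
            r_od = 1.0 - od_quality
            w1, w2, w3 = weights
            return w1 * r_miss + w2 * r_pc + w3 * r_od

        # Example: 1.2 km miss, Pc = 1e-4, good orbit-determination solution.
        print(round(f_value(1200.0, 1e-4, 0.9), 3))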

  7. The Imprecise Science of Evaluating Scholarly Performance: Utilizing Broad Quality Categories for an Assessment of Business and Management Journals

    Science.gov (United States)

    Lange, Thomas

    2006-01-01

    In a growing number of countries, government-appointed assessment panels develop ranks on the basis of the quality of scholarly outputs to apportion budgets in recognition of evaluated performance and to justify public funds for future R&D activities. When business and management journals are being grouped in broad quality categories, a recent…

  8. A family of metric gravities

    Science.gov (United States)

    Shuler, Robert

    2018-04-01

    The goal of this paper is to take a completely fresh approach to metric gravity, in which the metric principle is strictly adhered to but its properties in local space-time are derived from conservation principles, not inferred from a global field equation. The global field strength variation then gains some flexibility, but only in the regime of very strong fields (2nd-order terms) whose measurement is now being contemplated. Doing so provides a family of similar gravities, differing only in strong fields, which could be developed into meaningful verification targets for strong fields after the manner in which far-field variations were used in the 20th century. General Relativity (GR) is shown to be a member of the family, and this is demonstrated by deriving the Schwarzschild metric exactly from a suitable field strength assumption. The method of doing so is interesting in itself because it involves only one differential equation rather than the usual four. Exact static symmetric field solutions are also given for one pedagogical alternative based on potential, and one theoretical alternative based on inertia, and the prospects of experimentally differentiating these are analyzed. Whether the method overturns the conventional wisdom that GR is the only metric theory of gravity, and that alternatives must introduce additional interactions and fields, is somewhat a matter of semantics, depending on whether one views the field strength assumption as a field and whether the assumption that produces GR is considered unique in some way. It is of course possible to have other fields, and the local space-time principle can be applied to field gravities which usually are weak-field approximations having only time dilation, giving them the spatial factor and promoting them to full metric theories. Though usually pedagogical, some of them are interesting from a quantum gravity perspective. Cases are noted where mass measurement errors, or distributions of dark matter, can cause one
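
    For concreteness, the Schwarzschild metric that the paper re-derives from a field strength assumption has the standard form (geometrized units, G = c = 1)

        ds^2 = -\left(1 - \frac{2M}{r}\right)dt^2 + \left(1 - \frac{2M}{r}\right)^{-1}dr^2 + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right);

    in the family described above, members agree with this far-field behaviour and differ only in strong-field (2nd-order) terms.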

  9. Hybrid metric-Palatini stars

    Science.gov (United States)

    Danilǎ, Bogdan; Harko, Tiberiu; Lobo, Francisco S. N.; Mak, M. K.

    2017-02-01

    We consider the internal structure and the physical properties of specific classes of neutron, quark and Bose-Einstein condensate stars in the recently proposed hybrid metric-Palatini gravity theory, which is a combination of the metric and Palatini f(R) formalisms. It turns out that the theory is very successful in accounting for the observed phenomenology, since it unifies local constraints at the Solar System level and the late-time cosmic acceleration, even if the scalar field is very light. In this paper, we derive the equilibrium equations for a spherically symmetric configuration (mass continuity and Tolman-Oppenheimer-Volkoff) in the framework of the scalar-tensor representation of the hybrid metric-Palatini theory, and we investigate their solutions numerically for different equations of state of neutron and quark matter, by adopting for the scalar field potential a Higgs-type form. It turns out that the scalar-tensor definition of the potential can be represented as a Clairaut differential equation, and provides an explicit form for f(R) given by f(R) ~ R + Λ_eff, where Λ_eff is an effective cosmological constant. Furthermore, stellar models, described by the stiff fluid, radiation-like, bag model and Bose-Einstein condensate equations of state, are explicitly constructed in both general relativity and hybrid metric-Palatini gravity, thus allowing an in-depth comparison between the predictions of these two gravitational theories. As a general result it turns out that for all the considered equations of state, hybrid gravity stars are more massive than their general relativistic counterparts. Furthermore, two classes of stellar models corresponding to two particular choices of the functional form of the scalar field (constant value, and logarithmic form, respectively) are also investigated. Interestingly enough, in the case of a constant scalar field the equation of state of the matter takes the form of the bag model equation of state describing
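
    For orientation, the general-relativistic limit of the equilibrium equations mentioned above is the familiar mass-continuity and Tolman-Oppenheimer-Volkoff (TOV) system (geometrized units, G = c = 1),

        \frac{dm}{dr} = 4\pi r^2 \rho, \qquad
        \frac{dP}{dr} = -\frac{(\rho + P)\,(m + 4\pi r^3 P)}{r\,(r - 2m)},

    with density ρ, pressure P, and mass function m(r); the scalar-tensor representation of hybrid metric-Palatini gravity modifies this system with scalar-field contributions.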

  10. Metrics for Evaluation of Student Models

    Science.gov (United States)

    Pelanek, Radek

    2015-01-01

    Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…

  11. Context-dependent ATC complexity metric

    NARCIS (Netherlands)

    Mercado Velasco, G.A.; Borst, C.

    2015-01-01

    Several studies have investigated Air Traffic Control (ATC) complexity metrics in a search for a metric that could best capture workload. These studies have shown how daunting the search for a universal workload metric (one that could be applied in different contexts: sectors, traffic patterns,

  12. Properties of C-metric spaces

    Science.gov (United States)

    Croitoru, Anca; Apreutesei, Gabriela; Mastorakis, Nikos E.

    2017-09-01

    The subject of this paper belongs to the theory of approximate metrics [23]. An approximate metric on X is a real-valued function defined on X × X that satisfies only some of the metric axioms. In a recent paper [23], we introduced a new type of approximate metric, named C-metric, which is a function satisfying only two metric axioms: symmetry and the triangle inequality. The remarkable fact is that a C-metric induces a topological structure on the space. The innovative idea of this paper is that we obtain convergence properties of a C-metric space in the absence of a metric. In this paper we investigate C-metric spaces. The paper is divided into four sections. Section 1 is the Introduction. In Section 2 we recall some concepts and preliminary results. In Section 3 we present some properties of C-metric spaces, such as convergence properties, a canonical decomposition and a C-fixed point theorem. Finally, in Section 4 some conclusions are highlighted.
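
    In symbols, a C-metric in the sense above is a map d : X × X → R required to satisfy only

        d(x, y) = d(y, x) \qquad \text{and} \qquad d(x, z) \le d(x, y) + d(y, z) \qquad \text{for all } x, y, z \in X;

    the remaining metric axioms, non-negativity and the identity of indiscernibles (d(x, y) = 0 if and only if x = y), are not assumed.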

  13. Effective quality auditing

    International Nuclear Information System (INIS)

    Sivertsen, Terje

    2004-01-01

    The present report focuses on how to improve the effectiveness of quality audits and organization-wide quality management. It discusses several concepts related to internal quality auditing, includes guidelines on how to establish auditing as a key process of the organization, and exemplifies its application in the management of quality, strategy, and change. The report follows a line of research documented previously in the reports 'Continuous Improvement of Software Quality' (HWR-584) and 'ISO 9000 Quality Systems for Software Development' (HWR-629). In particular, the concepts of measurement programmes and process improvement cycles, discussed in HWR-584, form the basis for the approach advocated in the present report to the continual improvement of the internal quality audit process. Internal auditing is an important ingredient in ISO 9000 quality systems, and continual improvement of this process is consistent with the process-oriented view of the 2000 revision of the ISO 9000 family (HWR-629). The overall aim of the research is to provide utilities and their system vendors with better tools for quality management in digital I and C projects. The research results are expected to provide guidance on the choice of software engineering practices for obtaining a system that fulfils safety requirements at an acceptable cost. For licensing authorities, the results are intended to make the review process more efficient through the use of appropriate measures (metrics), and to help establish requirements for software quality assurance in digital I and C projects. (Author)

  14. Características métricas del Cuestionario de Calidad de Vida Profesional (CVP-35) [Metric characteristics of the Professional Quality of Life Questionnaire (QPL-35) in primary care professionals]

    Directory of Open Access Journals (Sweden)

    Jesús Martín

    2004-04-01

    Objective: To assess the internal consistency, discriminative capacity and factorial composition of the Professional Quality of Life Questionnaire (QPL-35) in a population of primary care professionals. Methods: We performed a cross-sectional analytical study in a primary care area in Madrid from 2001 to 2003. Random sampling of 450 healthcare professionals was performed on 2 occasions. The sample was stratified into 3 groups: group I (clinicians, pharmacologists, psychologists), group II (nurses, midwives, physiotherapists, social workers) and group III (administrative staff, porters, auxiliary nurses). The self-administered QPL-35 questionnaire was sent in January 2001 and January 2003, and on each occasion the questionnaire was sent again 1 month later. The percentages of total responses and responses per item were studied. We also studied the distribution of each answer by examining the «floor effect» and «ceiling effect», as well as the factorial composition based on a previous validation study. Results: Five hundred sixty-three questionnaires (62.6%) were returned. All the questions had a response rate of more than 96%. At least one unanswered question was found in 22.0% of the questionnaires, and at least 2 were unanswered in 7.1%. The distribution of the answers did not fit a normal distribution in any of the cases. The floor effect was present in questions related to management support and the ceiling effect was found in those related to motivation. The factorial analysis found 3 factors that explained 39.6% of the variance in the total number of questions. These factors were very similar to those of the previous validation study: «management support», «perception of workload» and «intrinsic motivation» explained 17.0%, 13.2% and 9.4% of the variance, respectively. Internal consistency was high for each factor (Cronbach's α > 0.7) and for the total score (Cronbach's α = 0.81). Conclusions: The metric properties of

  15. Development of a Premium Quality Plasma-derived IVIg (IQYMUNE®) Utilizing the Principles of Quality by Design-A Worked-through Case Study.

    Science.gov (United States)

    Paolantonacci, Philippe; Appourchaux, Philippe; Claudel, Béatrice; Ollivier, Monique; Dennett, Richard; Siret, Laurent

    2018-01-01

    Polyvalent human normal immunoglobulins for intravenous use (IVIg), indicated for rare and often severe diseases, are complex plasma-derived protein preparations. A quality by design approach has been used to develop the Laboratoire Français du Fractionnement et des Biotechnologies new-generation IVIg, targeting a high level of purity to generate an enhanced safety profile while maintaining a high level of efficacy. A modular quality by design approach was implemented, consisting of five consecutive steps to cover all the stages from product design to the final product control strategy. A well-defined target product profile was translated into 27 product quality attributes that formed the basis of the process design. In parallel, a product risk analysis was conducted and identified 19 critical quality attributes among the product quality attributes. Process risk analysis was carried out to establish the links between process parameters and critical quality attributes. Twelve critical steps were identified, and for each of these steps a risk mitigation plan was established. Among the different process risk mitigation exercises, five process robustness studies were conducted at qualified small scale with a design of experiment approach. For each process step, critical process parameters were identified and, for each critical process parameter, proven acceptable ranges were established. The quality risk management and risk mitigation outputs, including verification of proven acceptable ranges, were used to design the process verification exercise at industrial scale. Finally, the control strategy was established using a mix, or hybrid, of the traditional approach plus elements of the quality by design enhanced approach, as illustrated, to more robustly assign material and process controls and in order to securely meet product specifications. The advantages of this quality by design approach were improved process knowledge for industrial design and process

  16. METRIC EVALUATION PIPELINE FOR 3D MODELING OF URBAN SCENES

    Directory of Open Access Journals (Sweden)

    M. Bosch

    2017-05-01

    Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state of the art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline developed as publicly available open source software to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software is made publicly available to enable further research and planned benchmarking activities.
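
    The exact metric definitions live in the released software rather than in this abstract, but completeness/correctness-style point-cloud scores are commonly computed from nearest-neighbour distances under a tolerance. A minimal sketch under that assumption (the function name, threshold, and data are illustrative):

        import numpy as np
        from scipy.spatial import cKDTree

        def completeness_and_correctness(model_pts, truth_pts, tol=1.0):
            """Fraction of truth points matched by the model (completeness)
            and fraction of model points supported by the truth (correctness),
            using a nearest-neighbour distance threshold `tol` in metres."""
            model_tree = cKDTree(model_pts)
            truth_tree = cKDTree(truth_pts)
            d_truth, _ = model_tree.query(truth_pts)   # truth -> nearest model point
            d_model, _ = truth_tree.query(model_pts)   # model -> nearest truth point
            completeness = float(np.mean(d_truth <= tol))
            correctness = float(np.mean(d_model <= tol))
            return completeness, correctness

        # Example with random stand-in point clouds.
        rng = np.random.default_rng(0)
        truth = rng.uniform(0, 100, size=(1000, 3))
        model = truth + rng.normal(0, 0.5, size=truth.shape)
        print(completeness_and_correctness(model, truth, tol=1.0))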

  17. Do generic utility measures capture what is important to the quality of life of people with multiple sclerosis?

    OpenAIRE

    Kuspinar, Ayse; Mayo, Nancy E

    2013-01-01

    Purpose: The three most widely used utility measures are the Health Utilities Index Mark 2 and 3 (HUI2 and HUI3), the EuroQol-5D (EQ-5D) and the Short-Form-6D (SF-6D). In line with guidelines for economic evaluation from agencies such as the National Institute for Health and Clinical Excellence (NICE) and the Canadian Agency for Drugs and Technologies in Health (CADTH), these measures are currently being used to evaluate the cost-effectiveness of different interventions in MS. However, the cha...

  18. Microservice scaling optimization based on metric collection in Kubernetes

    OpenAIRE

    Blažej, Aljaž

    2017-01-01

    As web applications become more complex and the number of internet users rises, so does the need to optimize the use of hardware supporting these applications. Optimization can be achieved with microservices, as they offer several advantages compared to the monolithic approach, such as better utilization of resources, scalability and isolation of different parts of an application. Another important part is collecting metrics, since they can be used for analysis and debugging as well as the ba...

  19. Investigation of in-vehicle speech intelligibility metrics for normal hearing and hearing impaired listeners

    Science.gov (United States)

    Samardzic, Nikolina

    The effectiveness of in-vehicle speech communication can be a good indicator of the perception of the overall vehicle quality and customer satisfaction. Currently available speech intelligibility metrics do not account in their procedures for essential parameters needed for a complete and accurate evaluation of in-vehicle speech intelligibility. These include the directivity and the distance of the talker with respect to the listener, binaural listening, the hearing profile of the listener, vocal effort, and multisensory hearing. In the first part of this research, the effectiveness of in-vehicle application of these metrics is investigated in a series of studies to reveal their shortcomings, including a wide range of scores resulting from each of the metrics for a given measurement configuration and vehicle operating condition. In addition, the nature of a possible correlation between the scores obtained from each metric is unknown. The metrics and the subjective perception of speech intelligibility using, for example, the same speech material have not been compared in the literature. As a result, in the second part of this research, an alternative method for speech intelligibility evaluation is proposed for use in the automotive industry by utilizing a virtual reality driving environment for ultimately setting targets, including the associated statistical variability, for future in-vehicle speech intelligibility evaluation. The Speech Intelligibility Index (SII) was evaluated at the sentence Speech Reception Threshold (sSRT) for various listening situations and hearing profiles using acoustic perception jury testing and a variety of talker and listener configurations and background noise. In addition, the effect of individual sources and transfer paths of sound in an operating vehicle on the vehicle interior sound, specifically their effect on speech intelligibility, was quantified in the framework of the newly developed speech intelligibility evaluation method. Lastly

  1. Utility of registries for post-marketing evaluation of medicines. A survey of Swedish health care quality registries from a regulatory perspective.

    Science.gov (United States)

    Feltelius, Nils; Gedeborg, Rolf; Holm, Lennart; Zethelius, Björn

    2017-06-01

    The aim of this study was to describe content and procedures in some selected Swedish health care quality registries (QRs) of relevance to regulatory decision-making. A workshop was organized with participation of seven Swedish QRs which subsequently answered a questionnaire regarding registry content on drug treatments and outcomes. Patient populations, coverage, data handling and quality control, as well as legal and ethical aspects are presented. Scientific publications from the QRs are used as a complementary measure of quality and scientific relevance. The registries under study collect clinical data of high relevance to regulatory and health technology agencies. Five out of seven registries provide information on the drug of interest. When applying external quality criteria, we found a high degree of fulfillment, although information on medication was not sufficient to answer all questions of regulatory interest. A notable strength is the option for linkage to the Prescribed Drug Registry and to information on education and socioeconomic status. Data on drugs used during hospitalization were also collected to some extent. Outcome measures collected resemble those used in relevant clinical trials. All registries collected patient-reported outcome measures. The number of publications from the registries was substantial, with studies of appropriate design, including randomized registry trials. Quality registries may provide a valuable source of post-marketing data on drug effectiveness, safety, and cost-effectiveness. Closer collaboration between registries and regulators to improve quality and usefulness of registry data could benefit both regulatory utility and value for health care providers.

  2. Metric integration architecture for product development

    Science.gov (United States)

    Sieger, David B.

    1997-06-01

    Present-day product development endeavors utilize the concurrent engineering philosophy as a logical means for incorporating a variety of viewpoints into the design of products. Since this approach provides no explicit procedural provisions, it is necessary to establish at least a mental coupling with a known design process model. The central feature of all such models is the management and transformation of information. While these models assist in structuring the design process, characterizing the basic flow of operations that are involved, they provide no guidance facilities. The significance of this feature, and the role it plays in the time required to develop products, is increasing in importance due to the inherent process dynamics, system/component complexities, and competitive forces. The methodology presented in this paper involves the use of a hierarchical system structure, discrete event system specification (DEVS), and multidimensional state variable based metrics. This approach is unique in its capability to quantify designer's actions throughout product development, provide recommendations about subsequent activity selection, and coordinate distributed activities of designers and/or design teams across all design stages. Conceptual design tool implementation results are used to demonstrate the utility of this technique in improving the incremental decision making process.

  3. On characterizations of quasi-metric completeness

    Energy Technology Data Exchange (ETDEWEB)

    Dag, H.; Romaguera, S.; Tirado, P.

    2017-07-01

    Hu proved in [4] that a metric space (X, d) is complete if and only if, for any closed subspace C of (X, d), every Banach contraction on C has a fixed point. Since then several authors have investigated the problem of characterizing metric completeness by means of fixed point theorems. Recently this problem has been studied in the more general context of quasi-metric spaces for different notions of completeness. Here we present a characterization of a kind of completeness for quasi-metric spaces by means of a quasi-metric version of Hu's theorem. (Author)

  4. With better connection between utility and its customers and with more quality database toward more efficiently DSM program

    International Nuclear Information System (INIS)

    Tomasic-Skevin, S.

    1996-01-01

    In this paper, new demand-side technologies and their influence on the power system are described. A better connection between a utility and its customers is the key to building a good database, and that database is the basis for the efficient use of a DSM program. (author)

  5. Hurricane exposure and county fetal death rates, utilization of a county environmental quality index for confounding control.

    Science.gov (United States)

    The effects of natural disasters on public health are a rising concern, with increasing severity of disaster events. Many disaster studies utilize county-level analysis, however most do not control for county level environmental factors. Hurricane exposure during pregnancy could ...

  6. National phantoms bank for the service of nuclear medicine in Cuba. Utility for the quality control of the instrumentation

    International Nuclear Information System (INIS)

    Varela C, C.; Diaz B, M.; Lopez B, G.M.

    2006-01-01

    Although most applications in nuclear medicine have diagnostic purposes, therapeutic applications are growing considerably. So that neither diagnostic accuracy nor therapeutic effectiveness is compromised, quality control of the instrumentation is indispensable, independently of its technological complexity and/or its period of exploitation. Given the real shortage of phantoms in the institutions, a bank was created that puts at the disposal of all institutions the phantoms existing in the country, as well as those yet to be acquired, centralized by the State Control of Medical Equipment Center (CCEEM) and accessible through its website www.eqmed.sld.cu. On the basis of the National Protocol for the Quality Control of Instrumentation in Nuclear Medicine, which takes into account international standards as well as the country's own conditions, two national regulations were issued and established, and the first audits of instrumentation quality are being carried out. These audits have revealed that the established quality controls are only partially performed in the services, the need to raise awareness regarding compliance with instrumentation quality criteria and concepts, and the need to add more phantoms to the bank to guarantee fulfillment of the Quality Control Programs. (Author)

  7. Load Balancing Metric with Diversity for Energy Efficient Routing in Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Moad, Sofiane; Hansen, Morten Tranberg; Jurdak, Raja

    2011-01-01

    The expected number of transmissions (ETX) is a routing metric that accounts for the highly variable link qualities of a specific radio in Wireless Sensor Networks (WSNs). Radio diversity is a recently explored solution for WSNs to adapt to these differences. In this paper, we propose an energy balancing metric which exploits the diversity in link qualities present at different radios. The goal is to use the energy of the network effectively and thereby extend the network lifetime. The proposed metric takes into account the transmission and reception costs for a specific radio in order to choose an energy-efficient radio. In addition, the metric uses the remaining energy of nodes in order to regulate the traffic so that critical nodes are avoided. We show by simulations that our metric can improve the network lifetime by up to 20%.
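
    The classical ETX of a link is usually defined from the forward and reverse packet delivery ratios as ETX = 1/(d_f · d_r). The record does not give the exact form of the proposed energy-balancing variant, so the per-radio energy weighting and residual-energy term below are assumed for illustration only:

        def etx(d_f, d_r):
            """Classical expected transmission count for a link with
            forward delivery ratio d_f and reverse delivery ratio d_r."""
            return 1.0 / (d_f * d_r)

        def radio_cost(d_f, d_r, e_tx, e_rx, e_residual):
            """Hypothetical energy-balancing link cost: expected number of
            transmissions times per-attempt radio energy, inflated when the
            node's residual energy is low (all weights assumed)."""
            energy_per_attempt = e_tx + e_rx          # transmit + listen/ack cost
            return etx(d_f, d_r) * energy_per_attempt / e_residual

        # Choose between two radios on the same node.
        radio_a = radio_cost(0.9, 0.8, e_tx=2.0, e_rx=1.0, e_residual=0.7)
        radio_b = radio_cost(0.7, 0.7, e_tx=0.5, e_rx=0.3, e_residual=0.7)
        print("pick radio", "A" if radio_a < radio_b else "B")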

  8. The Metric of Colour Space

    DEFF Research Database (Denmark)

    Gravesen, Jens

    2015-01-01

    The space of colours is a fascinating space. It is a real vector space, but no matter what inner product you put on the space, the resulting Euclidean distance does not correspond to human perception of difference between colours. In 1942 MacAdam performed the first experiments on colour matching and found the MacAdam ellipses, which are often interpreted as defining the metric tensor at their centres. An important question is whether it is possible to define colour coordinates such that the Euclidean distance in these coordinates corresponds to human perception. Using cubic splines to represent...

  9. Product Operations Status Summary Metrics

    Science.gov (United States)

    Takagi, Atsuya; Toole, Nicholas

    2010-01-01

    The Product Operations Status Summary Metrics (POSSUM) computer program provides a readable view into the state of the Phoenix Operations Product Generation Subsystem (OPGS) data pipeline. POSSUM provides a user interface that can search the data store, collect product metadata, and display the results in an easily-readable layout. It was designed with flexibility in mind for support in future missions. Flexibility over various data store hierarchies is provided through the disk-searching facilities of Marsviewer. This is a proven program that has been in operational use since the first day of the Phoenix mission.

  10. Web metrics for library and information professionals

    CERN Document Server

    Stuart, David

    2014-01-01

    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional.Th...

  11. Disturbance metrics predict a wetland Vegetation Index of Biotic Integrity

    Science.gov (United States)

    Stapanian, Martin A.; Mack, John; Adams, Jean V.; Gara, Brian; Micacchion, Mick

    2013-01-01

    Indices of biological integrity of wetlands based on vascular plants (VIBIs) have been developed in many areas in the USA. Knowledge of the best predictors of VIBIs would enable management agencies to make better decisions regarding mitigation site selection and performance monitoring criteria. We use a novel statistical technique to develop predictive models for an established index of wetland vegetation integrity (Ohio VIBI), using as independent variables 20 indices and metrics of habitat quality, wetland disturbance, and buffer area land use from 149 wetlands in Ohio, USA. For emergent and forest wetlands, predictive models explained 61% and 54% of the variability, respectively, in Ohio VIBI scores. In both cases the most important predictor of Ohio VIBI score was a metric that assessed habitat alteration and development in the wetland. Of secondary importance as a predictor was a metric that assessed microtopography, interspersion, and quality of vegetation communities in the wetland. Metrics and indices assessing disturbance and land use of the buffer area were generally poor predictors of Ohio VIBI scores. Our results suggest that vegetation integrity of emergent and forest wetlands could be most directly enhanced by minimizing substrate and habitat disturbance within the wetland. Such efforts could include reducing or eliminating any practices that disturb the soil profile, such as nutrient enrichment from adjacent farm land, mowing, grazing, or cutting or removing woody plants.

  12. A Practitioners’ Perspective on Developmental Models, Metrics and Community

    Directory of Open Access Journals (Sweden)

    Chad Stewart

    2009-12-01

    This article builds on a paper by Stein and Heikkinen (2009) and suggests ways to expand and improve our measurement of the quality of developmental models, metrics and instruments and the results we get in collaborating with clients. We suggest that this dialogue needs to be about more than stage development measured by (even calibrated) stage-development-focused, linguistic-based, developmental psychology metrics that produce lead indicators and are shown to be reliable and valid by psychometric qualities alone. The article first provides a brief overview of our background and biases, and an applied version of Ken Wilber's Integral Operating System that has provided increased development, client satisfaction, and contribution to our communities, measured by verifiable, tangible results (as well as intangible results such as increased ability to cope with complex surroundings, reduced stress and growth in developmental stages to better fit the environment in which our clients were engaged at that time). It then addresses four key points raised by Stein and Heikkinen (the need for quality control, defining and deciding on appropriate metrics, building a system to evaluate models and metrics, and clarifying and increasing the reliability and validity of the models and metrics we use) by providing initial concrete steps to: adopt a systemic value-chain approach; measure results in addition to language; build on the evaluation system for instruments, models and metrics suggested by Stein and Heikkinen; and clarify and improve the reliability and validity of the instruments, models and metrics we use. We complete the article with an echoing call for the community of Applied Developmental Theory suggested by Ross (2008) and Stein and Heikkinen, a brief description of that community (from our perspective), and a table that builds on Table 2 proposed by Stein and Heikkinen.

  13. Expected utility without utility

    OpenAIRE

    Castagnoli, E.; Licalzi, M.

    1996-01-01

    This paper advances an interpretation of Von Neumann–Morgenstern’s expected utility model for preferences over lotteries which does not require the notion of a cardinal utility over prizes and can be phrased entirely in the language of probability. According to it, the expected utility of a lottery can be read as the probability that this lottery outperforms another given independent lottery. The implications of this interpretation for some topics and models in decision theory are considered....

  14. Impact of Risk Aversion on Price and Quality Decisions under Demand Uncertainty via the CARA Utility Function

    Directory of Open Access Journals (Sweden)

    Qinqin Li

    2014-01-01

    This paper investigates optimal price and quality decisions of a manufacturer-retailer supply chain under demand uncertainty, in which both players are risk-averse decision makers. The manufacturer determines the wholesale price and quality of the product, and the retailer determines the retail price. By means of game theory, we employ the constant absolute risk aversion (CARA) utility function to analyze two different supply chain structures, that is, the manufacturer Stackelberg model (MS) and the retailer Stackelberg model (RS). We then analyze the results to explore the effects of the risk aversion of the manufacturer and the retailer upon the equilibrium decisions. Our results imply that the risk aversion of both the manufacturer and the retailer plays an important role in the price and quality decisions. We find that, in general, in the MS and RS models, the optimal wholesale price and quality decrease with the risk aversion of the manufacturer but increase with the risk aversion of the retailer, while the retail price decreases with the risk aversion of the manufacturer as well as the retailer. We also examine the impact of the quality cost coefficient on the optimal decisions. Finally, numerical examples are presented to illustrate the different degrees of effect of players' risk aversion on equilibrium results and to compare results in the different models considered.
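
    The CARA utility function referred to above is conventionally written, for outcome x and risk-aversion coefficient a > 0, as

        u(x) = -e^{-a x} \qquad \text{with} \qquad -\frac{u''(x)}{u'(x)} = a,

    so the Arrow-Pratt measure of absolute risk aversion is constant in x, which is what makes the comparative statics in the players' risk aversion tractable.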

  15. Quality of Life, Depression, and Healthcare Resource Utilization among Adults with Type 2 Diabetes Mellitus and Concomitant Hypertension and Obesity: A Prospective Survey

    Directory of Open Access Journals (Sweden)

    Andrew J. Green

    2012-01-01

    Background. This study compared quality of life, depression, and healthcare resource utilization among adults with type 2 diabetes mellitus (T2DM) and comorbid hypertension (HTN) and obesity with those of adults reporting T2DM alone. Methods. Respondents to the US SHIELD survey self-reported their height, weight, comorbid conditions, hospitalizations, and outpatient visits and completed the Short Form-12 (SF-12) and Patient Health Questionnaire (PHQ-9). Respondents reporting T2DM and HTN and obesity (body mass index, BMI, ≥30 kg/m2) were compared with a T2DM-alone group. Results. Respondents with T2DM, HTN, and obesity (n=1292) had significantly lower SF-12 Physical and Mental Component Summary scores (37.3 and 50.9, resp.) than T2DM-alone respondents (n=349; 45.8 and 53.5, resp.; P<0.0001). Mean PHQ-9 scores were significantly higher among T2DM respondents with comorbid HTN and obesity (5.0 versus 2.5, P<0.0001), indicating greater depression burden. Respondents with T2DM, HTN, and obesity had significantly more resource utilization with respect to physician visits and emergency room visits, but not hospitalizations, than respondents with T2DM alone (P=0.03). Conclusions. SHIELD respondents with the comorbid conditions of T2DM, HTN, and obesity reported greater healthcare resource utilization, more depression symptoms, and lower quality of life than the T2DM-alone group.

  16. VALUING BENEFITS FROM WATER QUALITY IMPROVEMENTS USING KUHN TUCKER MODEL - A COMPARATIVE ANALYSIS ON UTILITY FUNCTIONAL FORMS-

    Science.gov (United States)

    Okuyama, Tadahiro

    The Kuhn-Tucker model, which has been studied in recent years, is a benefit valuation technique using revealed-preference data; its distinctive feature is that it treats various patterns of corner solutions flexibly. It is widely known, for benefit calculation using revealed-preference data, that the value of a benefit changes depending on the functional form. However, there are few studies which examine the relationship between utility functions and values of benefits in the Kuhn-Tucker model. The purpose of this study is to analyze the influence of the functional form on the value of a benefit. Six types of utility functions are employed for the benefit calculations. Data on recreational activity at 26 beaches in Miyagi Prefecture were employed. Calculation results indicated that the functional forms of Phaneuf and Siderelis (2003) and Whitehead et al. (2010) are useful for benefit calculations.

  17. Distance from health facility and mothers’ perception of quality related to skilled delivery service utilization in northern Ethiopia

    OpenAIRE

    Fisseha,Girmatsion; Berhane,Yemane; Worku,Alemayehu; Terefe,Wondwossen

    2017-01-01

    Background: Poor maternal health service utilization is one of the contributing factors to a high level of maternal and newborn morta...

  18. Metrics for building performance assurance

    Energy Technology Data Exchange (ETDEWEB)

    Koles, G.; Hitchcock, R.; Sherman, M.

    1996-07-01

    This report documents part of the work performed in phase I of a Laboratory Directed Research and Development (LDRD) funded project entitled Building Performance Assurances (BPA). The focus of the BPA effort is to transform the way buildings are built and operated in order to improve building performance by facilitating or providing tools, infrastructure, and information. The efforts described herein focus on the development of metrics with which to evaluate building performance and for which information and optimization tools need to be developed. The classes of building performance metrics reviewed are (1) Building Services, (2) First Costs, (3) Operating Costs, (4) Maintenance Costs, and (5) Energy and Environmental Factors. The first category defines the direct benefits associated with buildings; the next three are different kinds of costs associated with providing those benefits; the last category includes concerns that are broader than the direct costs and benefits to the building owner and building occupants. The level of detail of the various issues reflects the current state of knowledge in those scientific areas and the ability to determine that state of knowledge, rather than directly reflecting the importance of these issues; the report intentionally does not focus specifically on energy issues. The report describes work in progress, is intended as a resource, and can be used to indicate the areas needing more investigation. Other reports on BPA activities are also available.

  19. Quality of Life and Utility in Patients with Metastatic Soft Tissue and Bone Sarcoma: The Sarcoma Treatment and Burden of Illness in North America and Europe (SABINE) Study

    Directory of Open Access Journals (Sweden)

    Peter Reichardt

    2012-01-01

    The aim of the study was to assess health-related quality of life (HRQoL) among metastatic soft tissue (mSTS) or bone sarcoma (mBS) patients who had attained a favourable response to chemotherapy. We employed the EORTC QLQ-C30, the 3-item Cancer-Related Symptoms Questionnaire, and the EQ-5D instrument. HRQoL was evaluated overall and by health state in 120 mSTS/mBS patients enrolled in the SABINE study across nine countries in Europe and North America. Utility was estimated from responses to the EQ-5D instrument using UK population-based weights. The mean EQ-5D utility score was 0.69 for the pooled patient sample with little variation across health states. However, patients with progressive disease reported a clinically significant lower utility (0.56). Among disease symptoms, pain and respiratory symptoms are common. This study showed that mSTS/mBS is associated with reduced HRQoL and utility among patients with metastatic disease.

  20. Utilization of Chemometric Technique to Determine the Quality of Fresh and Used Palm, Corn and Coconut Oil

    International Nuclear Information System (INIS)

    Hamizah Mat Agil; Mohd Zuli Jaafar; Suzeren Jamil; Azwan Mat Lazim

    2014-01-01

    This study was conducted to evaluate the quality of natural oils and the deterioration of frying oil. A total of 12 different oil samples from palm oil, corn oil and coconut oil were used. The frying process was repeated four times at 180 degree Celsius in order to observe the stability of the oil towards oxidation. Three main parameters were studied to determine oil quality: peroxide value, iodine value and acid value. This study emphasized the use of FTIR in the range of 4000-700 cm-1. Alternatively, a chemometric method based on pattern recognition was used to determine the oil quality. Data analysis was conducted using the PCA and PLS methods in Matlab modeling. PCA provided data classification according to the type of oil, while PLS predicted the oil quality for the parameters studied. For the classification of pure oil, the variance for PC1 was 70 % while PC2 was 15 %. For the fried/used oil, PC1 gave 57 % while PC2 gave 25 %. Using PLS, the iodine value model was the best model for pure oils, based on correlation, with R2CV > 0.984, whereas the peroxide value model was the best for fried/used oils, with R2CV > 0.7423. (author)

  1. Connection Setup Signaling Scheme with Flooding-Based Path Searching for Diverse-Metric Network

    Science.gov (United States)

    Kikuta, Ko; Ishii, Daisuke; Okamoto, Satoru; Oki, Eiji; Yamanaka, Naoaki

    Connection setup on various computer networks is now achieved by GMPLS. This technology is based on the source-routing approach, which requires the source node to store metric information for the entire network prior to computing a route. Thus all metric information must be distributed to all network nodes and kept up to date. However, as metric information becomes more diverse and generalized, it is hard to keep all of it current due to the huge update overhead. Emerging network services and applications require the network to support diverse metrics for achieving various communication qualities. Increasing the number of metrics supported by the network causes excessive processing of metric update messages. To reduce the number of metric update messages, another scheme is required. This paper proposes a connection setup scheme that uses flooding-based signaling rather than the distribution of metric information. The proposed scheme requires only the flooding of signaling messages carrying the requested metric information; no routing protocol is required. Evaluations confirm that the proposed scheme achieves connection establishment without excessive overhead. Our analysis shows that the proposed scheme greatly reduces the number of control messages compared to the conventional scheme, while their blocking probabilities are comparable.
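
    A minimal sketch of the flooding idea described above, under the assumption of a simple graph model in which each link advertises its own metrics and a setup message is forwarded only over links satisfying the request; all data structures, names, and numbers are illustrative, not the authors' protocol:

        from collections import deque

        # Each link stores several metrics; a request gives per-metric bounds.
        graph = {
            "A": {"B": {"delay": 5, "loss": 0.01}, "C": {"delay": 2, "loss": 0.10}},
            "B": {"D": {"delay": 4, "loss": 0.02}},
            "C": {"D": {"delay": 3, "loss": 0.20}},
        }

        def flood_search(src, dst, max_delay, max_loss):
            """Breadth-first flooding of a setup message: a link is used only
            if it meets the requested per-link metric bounds; the first message
            to reach dst fixes the path (no global metric database needed)."""
            queue = deque([(src, [src])])
            visited = {src}
            while queue:
                node, path = queue.popleft()
                if node == dst:
                    return path
                for nxt, m in graph.get(node, {}).items():
                    if nxt in visited:
                        continue
                    if m["delay"] <= max_delay and m["loss"] <= max_loss:
                        visited.add(nxt)
                        queue.append((nxt, path + [nxt]))
            return None  # blocked: no path satisfies the requested metrics

        print(flood_search("A", "D", max_delay=5, max_loss=0.05))  # ['A', 'B', 'D']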

  2. Complex problems require complex solutions: the utility of social quality theory for addressing the Social Determinants of Health

    Directory of Open Access Journals (Sweden)

    Ward Paul R

    2011-08-01

    Background: In order to improve the health of the most vulnerable groups in society, the WHO Commission on Social Determinants of Health (CSDH) called for multi-sectoral action, which requires research and policy on the multiple and inter-linking factors shaping health outcomes. Most conceptual tools available to researchers tend to focus on singular and specific social determinants of health (SDH) (e.g. social capital, empowerment, social inclusion). However, a new and innovative conceptual framework, known as social quality theory, facilitates a more complex and complete understanding of the SDH, with its focus on four domains: social cohesion, social inclusion, social empowerment and socioeconomic security, all within the same conceptual framework. This paper provides both an overview of social quality theory and findings from a national survey of social quality in Australia, as a means of demonstrating the operationalisation of the theory. Methods: Data were collected using a national random postal survey of 1044 respondents in September, 2009. Multivariate logistic regression analysis was conducted. Results: Statistical analysis revealed that people on lower incomes (less than $45,000) experience worse social quality across all four domains: lower socio-economic security, lower levels of membership of organisations (lower social cohesion), higher levels of discrimination and less political action (lower social inclusion), and lower social empowerment. The findings were mixed in terms of age, with people over 65 years experiencing lower socio-economic security but having higher levels of social cohesion, experiencing lower levels of discrimination (higher social inclusion) and engaging in more political action (higher social empowerment). In terms of gender, women had higher social cohesion than men, although they also experienced more discrimination (lower social inclusion). Conclusions: Applying social quality theory allows

  3. The Utility of the OMI HCHO and NO2 Data Products in Air Quality Decision- Making Activities

    Science.gov (United States)

    Duncan, Bryan N.

    2010-01-01

    We will present three related air quality applications of the OMI HCHO (formaldehyde) and NO2 (nitrogen dioxide) data products, which we use to support mission planning of an OMI-like instrument for the proposed GEO-CAPE satellite, which has as one of its objectives the study of air quality from space. First, we will discuss a novel and practical application of the data products to the "weight of evidence" in the air quality decision-making process (e.g., a State Implementation Plan (SIP)) by which a city, region, or state demonstrates that it is making progress toward attainment of the National Ambient Air Quality Standard (NAAQS) for ozone. Any trend, or lack thereof, in the observed OMI HCHO/NO2 ratio, which we use as an air quality indicator, may support the case that an emission control strategy implemented to reduce ozone is or is not working for a metropolitan area. Second, we will discuss how we use variations in the OMI HCHO product as a proxy for variability in the biogenic hydrocarbon isoprene, which is an important player in the formation of high levels of ozone and the dominant source of HCHO in the eastern U.S. Third, we will discuss the variability of NO2 in the U.S. as indicated by the OMI NO2 product. In addition, we will show the impact of the 2005 hurricanes on pollutant emissions, including those associated with the intensive oil extraction and refining activities, in the Gulf of Mexico region using the OMI NO2 product. The variability of HCHO and NO2 as indicated by OMI helps us to understand changes in the OMI HCHO/NO2 ratio and the implications for ozone formation.

  4. Three journal similarity metrics and their application to biomedical journals.

    Science.gov (United States)

    D'Souza, Jennifer L; Smalheiser, Neil R

    2014-01-01

    In the present paper, we have created several novel journal similarity metrics. The MeSH odds ratio measures the topical similarity of any pair of journals, based on the major MeSH headings assigned to articles in MEDLINE. The second metric employed the 2009 Author-ity author name disambiguation dataset as a gold standard for estimating the author odds ratio. This gives a straightforward, intuitive answer to the question: Given two articles in PubMed that share the same author name (lastname, first initial), how does knowing only the identity of the journals (in which the articles were published) predict the relative likelihood that they are written by the same person vs. different persons? The article pair odds ratio detects the tendency of authors to publish repeatedly in the same journal, as well as in specific pairs of journals. The metrics can be applied not only to estimate the similarity of a pair of journals, but to provide novel profiles of individual journals as well. For example, for each journal, one can define the MeSH cloud as the number of other journals that are topically more similar to it than expected by chance, and the author cloud as the number of other journals that share more authors than expected by chance. These metrics for journal pairs and individual journals have been provided in the form of public datasets that can be readily studied and utilized by others.
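
    The record defines its similarity measures as odds ratios but does not spell out the contingency tables here; the generic computation is the standard 2×2 odds ratio. A toy sketch with invented counts (the Haldane-Anscombe correction is an assumption, used only to keep the ratio finite):

        def odds_ratio(a, b, c, d, correction=0.5):
            """Odds ratio for a 2x2 contingency table
                [[a, b],
                 [c, d]]
            with a Haldane-Anscombe correction to avoid division by zero."""
            if 0 in (a, b, c, d):
                a, b, c, d = (x + correction for x in (a, b, c, d))
            return (a * d) / (b * c)

        # Toy example for a journal pair: rows = MeSH heading shared / not shared,
        # columns = article pair drawn from (same journals / different journals).
        # Counts are invented for illustration only.
        print(round(odds_ratio(120, 40, 300, 900), 2))  # > 1 suggests topical similarity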

  5. Toward utilization of data for program management and evaluation: quality assessment of five years of health management information system data in Rwanda.

    Science.gov (United States)

    Nisingizwe, Marie Paul; Iyer, Hari S; Gashayija, Modeste; Hirschhorn, Lisa R; Amoroso, Cheryl; Wilson, Randy; Rubyutsa, Eric; Gaju, Eric; Basinga, Paulin; Muhire, Andrew; Binagwaho, Agnès; Hedt-Gauthier, Bethany

    2014-01-01

    Health data can be useful for effective service delivery, decision making, and evaluating existing programs in order to maintain high quality of healthcare. Studies have shown variability in data quality from national health management information systems (HMISs) in sub-Saharan Africa, which threatens the utility of these data as a tool to improve health systems. The purpose of this study is to assess the quality of Rwanda's HMIS data over a 5-year period. The World Health Organization (WHO) data quality report card framework was used to assess the quality of HMIS data captured from 2008 to 2012 in a census of all 495 publicly funded health facilities in Rwanda. Factors assessed included completeness and internal consistency of 10 indicators selected based on WHO recommendations and priority areas for the Rwanda national health sector. Completeness was measured as the percentage of non-missing reports. Consistency was measured as the absence of extreme outliers, internal consistency between related indicators, and consistency of indicators over time. These assessments were done at the district and national levels. Nationally, the average monthly district reporting completeness rate was 98% across 10 key indicators from 2008 to 2012. Completeness of indicator data increased over time: 2008, 88%; 2009, 91%; 2010, 89%; 2011, 90%; and 2012, 95% (p...); service output increased from 3% (2011) to 9% (2012). Eighty-three percent of districts reported ratios between related indicators (ANC/DTP1, DTP1/DTP3) consistent with HMIS national ratios. Conclusion and policy implications: Our findings suggest that HMIS data quality in Rwanda has been improving over time. We recommend maintaining these assessments to identify remaining gaps in data quality, and sharing the results publicly to support increased use of HMIS data.
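
    As a simple illustration of the completeness dimension used above (the percentage of non-missing reports), here is a minimal sketch; the table, column names, and values are invented, and the WHO report-card framework assesses more dimensions than this one:

        import pandas as pd

        # Toy monthly reporting table: one row per (district, month), with NaN
        # where an indicator report was not submitted. Values are invented.
        reports = pd.DataFrame({
            "district": ["Gasabo", "Gasabo", "Huye", "Huye"],
            "month": ["2012-01", "2012-02", "2012-01", "2012-02"],
            "anc_visits": [150.0, None, 90.0, 95.0],
        })

        # Completeness = share of expected reports actually submitted.
        completeness = reports["anc_visits"].notna().mean() * 100
        print(f"ANC reporting completeness: {completeness:.0f}%")  # 75%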

  6. Assessing utility where short measures are required: development of the short Assessment of Quality of Life-8 (AQoL-8) instrument.

    Science.gov (United States)

    Hawthorne, Graeme

    2009-09-01

    As researchers seek to capture clinical outcomes and the health-related quality of life (HRQoL) of participants while meeting the demands of economic evaluation, they are confronted with collecting disparate outcome data where parsimony is imperative. This study addressed this through construction of a short HRQoL measure, the Assessment of Quality of Life (AQoL)-8, from the original AQoL. Data from the AQoL validation database (N = 996) were reanalyzed using item response theory (IRT) to identify the least-fitting items, which were removed. The standard AQoL scoring algorithm and weights were applied. Validity, reliability, and sensitivity tests were carried out using the 2004 South Australian Health Omnibus Survey (N = 3015), including direct comparisons with other short utility measures, the EQ5D and SF6D. The IRT analysis showed that the AQoL was a weak scale (Loevinger H = 0.36) but reliable (Mokken rho = 0.84). Removal of the four weakest items led to an 8-item instrument with two items per subscale, the AQoL-8. The AQoL-8 Loevinger H = 0.38 and Mokken rho = 0.80 suggested psychometric properties similar to those of the AQoL. It correlated (intraclass correlation coefficient) 0.95 (or 90% of shared variance) with the AQoL. The AQoL-8 was as sensitive to six common health conditions as the AQoL, EQ5D, and SF6D. The utility scores fall on the same life-death scale as those of the AQoL. Where parsimony is imperative, researchers may consider use of the AQoL-8 to collect participant self-reported HRQoL data suitable for use either as reported outcomes or for the calculation of quality-adjusted life-years for cost-utility analysis.
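
    The quality-adjusted life-year calculation mentioned at the end rests on the standard construction: QALYs are the time integral of the utility score, so for piecewise-constant utilities QALY = sum of u_i * t_i. A toy sketch (the numbers are illustrative, not from the study):

        def qalys(utilities, years):
            """QALYs for piecewise-constant utility scores u_i held for t_i years
            (utilities on the usual 0 = death, 1 = full health scale)."""
            return sum(u * t for u, t in zip(utilities, years))

        # Example: utility 0.69 for 2 years, then 0.56 for 1 further year.
        print(qalys([0.69, 0.56], [2.0, 1.0]))  # 1.94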

  8. Assessment of the Log-Euclidean Metric Performance in Diffusion Tensor Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mostafa Charmi

    2010-06-01

    Introduction: Appropriate definition of the distance measure between diffusion tensors has a deep impact on Diffusion Tensor Image (DTI) segmentation results. The geodesic metric is the best distance measure since it yields high-quality segmentation results. However, the important problem with the geodesic metric is the high computational cost of the algorithms based on it. The main goal of this paper is to assess the possible substitution of the geodesic metric with the Log-Euclidean one to reduce the computational cost of a statistical surface evolution algorithm. Materials and Methods: We incorporated the Log-Euclidean metric in the statistical surface evolution algorithm framework. To achieve this goal, the statistics and gradients of diffusion tensor images were defined using the Log-Euclidean metric. Numerical implementation of the segmentation algorithm was performed in MATLAB using finite difference techniques. Results: In the statistical surface evolution framework, the Log-Euclidean metric was able to discriminate the torus and helix patterns in synthetic datasets, and rat spinal cords in biological phantom datasets, from the background better than the Euclidean and J-divergence metrics. In addition, similar results were obtained with the geodesic metric. However, the main advantage of the Log-Euclidean metric over the geodesic metric was the dramatic reduction of the computational cost of the segmentation algorithm, by a factor of at least 70. Discussion and Conclusion: The qualitative and quantitative results show that the Log-Euclidean metric is a good substitute for the geodesic metric when using a statistical surface evolution algorithm in DTI segmentation.
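    The two tensor distances being compared can be written in a few lines for a single pair of 3×3 symmetric positive-definite tensors, as sketched below. The practical speed advantage comes from the Log-Euclidean form needing only one matrix logarithm per tensor, which can be precomputed over the whole image, whereas the affine-invariant (geodesic) distance couples every pair of tensors.

```python
import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def log_euclidean_distance(T1, T2):
    """d(T1, T2) = || log(T1) - log(T2) ||_F"""
    return np.linalg.norm(logm(T1) - logm(T2), "fro")

def geodesic_distance(T1, T2):
    """Affine-invariant distance: || log(T1^-1/2 T2 T1^-1/2) ||_F"""
    T1_inv_sqrt = fractional_matrix_power(T1, -0.5)
    return np.linalg.norm(logm(T1_inv_sqrt @ T2 @ T1_inv_sqrt), "fro")
```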

  9. Novel computed tomographic chest metrics to detect pulmonary hypertension

    International Nuclear Information System (INIS)

    Chan, Andrew L; Juarez, Maya M; Shelton, David K; MacDonald, Taylor; Li, Chin-Shang; Lin, Tzu-Chun; Albertson, Timothy E

    2011-01-01

    Early diagnosis of pulmonary hypertension (PH) can potentially improve survival and quality of life. Detecting PH using echocardiography is often insensitive in subjects with lung fibrosis or hyperinflation. Right heart catheterization (RHC) for the diagnosis of PH adds risk and expense due to its invasive nature. Pre-defined measurements utilizing computed tomography (CT) of the chest may be an alternative non-invasive method of detecting PH. This study retrospectively reviewed 101 acutely hospitalized inpatients with heterogeneous diagnoses, who consecutively underwent chest CT and RHC during the same admission. Two separate teams, each consisting of a radiologist and pulmonologist, blinded to clinical and RHC data, individually reviewed the chest CTs. Multiple regression analyses controlling for age, sex, ascending aortic diameter, body surface area, thoracic diameter and pulmonary wedge pressure showed that a main pulmonary artery (PA) diameter ≥29 mm (odds ratio (OR) = 4.8), right descending PA diameter ≥19 mm (OR = 7.0), true right descending PA diameter ≥16 mm (OR = 4.1), true left descending PA diameter ≥21 mm (OR = 15.5), right ventricular (RV) free wall ≥6 mm (OR = 30.5), RV wall/left ventricular (LV) wall ratio ≥0.32 (OR = 8.8), RV/LV lumen ratio ≥1.28 (OR = 28.8), main PA/ascending aorta ratio ≥0.84 (OR = 6.0) and main PA/descending aorta ratio ≥1.29 (OR = 5.7) were significant predictors of PH in this population of hospitalized patients. This combination of easily measured CT-based metrics may, upon confirmatory studies, aid in the non-invasive detection of PH and hence in the determination of RHC candidacy in acutely hospitalized patients.
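    Taken at face value, the reported cut-offs amount to a set of threshold flags that are trivial to compute from the measurements. The function below is purely illustrative (argument names and the flag subset are assumptions), not a validated decision rule.

```python
def ct_ph_flags(main_pa_mm, asc_aorta_mm, rv_wall_mm, lv_wall_mm,
                rv_lumen_mm, lv_lumen_mm):
    """Threshold flags from the study's reported cut-offs (all in mm)."""
    return {
        "main_PA_diameter_ge_29mm": main_pa_mm >= 29,
        "RV_free_wall_ge_6mm": rv_wall_mm >= 6,
        "RV_LV_wall_ratio_ge_0.32": rv_wall_mm / lv_wall_mm >= 0.32,
        "RV_LV_lumen_ratio_ge_1.28": rv_lumen_mm / lv_lumen_mm >= 1.28,
        "PA_ascending_aorta_ratio_ge_0.84": main_pa_mm / asc_aorta_mm >= 0.84,
    }
```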

  10. Ethnographic Exploration of Elderly Residents' Perceptions and Utilization of Health Care to Improve Their Quality of Life

    OpenAIRE

    Seyed Ziya Tabatabaei; Azimi Bin Hj Hamzah; Fatemeh Ebrahimi

    2016-01-01

    The increase in the proportion of older people in Malaysia has led to a significant growth of health care demands. The aim of this study is to explore how perceived health care needs influence quality of life among elderly Malay residents who reside in a Malaysian residential home. This study employed a method known as ethnographic research from May 2011 to January 2012. Four data collection strategies were selected as the main data-collecting tools, including participant observation, field not...

  11. SU-G-BRB-16: Vulnerabilities in the Gamma Metric

    International Nuclear Information System (INIS)

    Neal, B; Siebers, J

    2016-01-01

    Purpose: To explore vulnerabilities in the gamma index metric that undermine its wide use as a radiation therapy quality assurance tool. Methods: 2D test field pairs (images) are created specifically to achieve high gamma passing rates, but to also include gross errors by exploiting the distance-to-agreement and percent-passing components of the metric. The first set has no requirement of clinical practicality, but is intended to expose vulnerabilities. The second set exposes clinically realistic vulnerabilities. To circumvent limitations inherent to user-specific tuning of prediction algorithms to match measurements, digital test cases are manually constructed, thereby mimicking high-quality image prediction. Results: With a 3 mm distance-to-agreement metric, changing field size by ±6 mm results in a gamma passing rate over 99%. For a uniform field, a lattice of passing points spaced 5 mm apart results in a passing rate of 100%. Exploiting the percent-passing component, a 10×10 cm² field can have a 95% passing rate when an 8 cm² (2.8×2.8 cm) highly out-of-tolerance (e.g., zero dose) square is missing from the comparison image. For clinically realistic vulnerabilities, an arc plan for which a 2D image is created can have a >95% passing rate solely due to agreement in the lateral spillage, with the failing 5% in the critical target region. A field with an integrated boost (e.g., whole brain plus small metastases) could neglect the metastases entirely, yet still pass with a 95% threshold. All the failure modes described would be visually apparent on a gamma-map image. Conclusion: The %gamma<1 metric has significant vulnerabilities. High passing rates can obscure critical faults in hypothetical and delivered radiation doses. Great caution should be used with gamma as a QA metric; users should inspect the gamma-map. Visual analysis of gamma-maps may be impractical for cine acquisition.
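    For context, the metric being probed combines a dose-difference term with a distance-to-agreement (DTA) term and takes the minimum over comparison points. A minimal 1D version with global normalization and a 3%/3 mm criterion might look like the sketch below; clinical tools interpolate and work in 2D/3D, but the percent-passing and DTA weaknesses exploited above are already visible in this form.

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, x, dose_tol=0.03, dta_mm=3.0):
    """Gamma value at each reference point (same grid x for both fields)."""
    dd = (dose_eval[None, :] - dose_ref[:, None]) / (dose_tol * dose_ref.max())
    dta = (x[None, :] - x[:, None]) / dta_mm
    return np.sqrt((dd**2 + dta**2).min(axis=1))

# The criticized summary statistic: percent of points with gamma <= 1.
# passing_rate = 100 * np.mean(gamma_index(ref, ev, x) <= 1.0)
```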

  12. A new approach for power quality improvement of DFIG based wind farms connected to weak utility grid

    Directory of Open Access Journals (Sweden)

    Hossein Mahvash

    2017-09-01

    Most power quality problems for grid-connected doubly fed induction generators (DFIGs) with wind turbines involve flicker, variations of the voltage RMS profile, and injected harmonics due to switching in the DFIG converters. The flicker phenomenon is the most important problem in wind power systems. This paper describes an effective method for mitigating flicker emission and improving power quality for a fairly weak grid connected to a wind farm with DFIGs. The method was applied in the rotor side converter (RSC) of the DFIG to control the output reactive power. The q-axis reference current was derived directly from the mathematical relation between the rotor q-axis current and the DFIG output reactive power, without using a PI controller. To extract the reference reactive power, a stator voltage control loop with a droop coefficient was proposed to regulate the grid voltage level in each operating condition. The DFIG output active power was separately controlled in the d-axis considering stator voltage orientation control (SVOC). Different simulations were carried out on the test system and the short-term flicker severity index (Pst) was calculated for each case study using the discrete flickermeter model according to the IEC 61400 standard. The obtained results validated flicker mitigation and power quality enhancement for the grid.
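    The direct derivation mentioned here can be illustrated with the textbook stator-voltage-orientation approximation (stator resistance neglected); axis and sign conventions vary between authors, so the following is a commonly quoted form rather than the paper's exact equation:

```latex
Q_s = \frac{3 V_s}{2 L_s}\left(\frac{V_s}{\omega_s} - L_m\, i_{qr}\right)
\quad\Longrightarrow\quad
i_{qr}^{\mathrm{ref}} = \frac{1}{L_m}\left(\frac{V_s}{\omega_s}
  - \frac{2 L_s\, Q_s^{\mathrm{ref}}}{3 V_s}\right),
```

    so the rotor side converter can track a reactive power setpoint by commanding the q-axis rotor current directly, with no intermediate PI loop.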

  13. Metric approach to quantum constraints

    International Nuclear Information System (INIS)

    Brody, Dorje C; Hughston, Lane P; Gustavsson, Anna C T

    2009-01-01

    A framework for deriving equations of motion for constrained quantum systems is introduced and a procedure for its implementation is outlined. In special cases, the proposed new method, which takes advantage of the fact that the space of pure states in quantum mechanics has both a symplectic structure and a metric structure, reduces to a quantum analogue of the Dirac theory of constraints in classical mechanics. Explicit examples involving spin-1/2 particles are worked out in detail: in the first example, our approach coincides with a quantum version of the Dirac formalism, while the second example illustrates how a situation that cannot be treated by Dirac's approach can nevertheless be dealt with in the present scheme.

  14. Metrics for Business Process Models

    Science.gov (United States)

    Mendling, Jan

    Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument would imply that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores to what extent certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.
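    Two of the simplest structural metrics studied as error determinants are size and density; the toy sketch below computes them for a process graph represented as an adjacency dictionary, an assumed, simplified encoding of a process model.

```python
def size(model: dict) -> int:
    """Number of nodes in the process model."""
    return len(model)

def density(model: dict) -> float:
    """Arcs relative to the maximum possible number of arcs."""
    n = len(model)
    arcs = sum(len(succ) for succ in model.values())
    return arcs / (n * (n - 1)) if n > 1 else 0.0

# A small sequence with one XOR split/join.
model = {"start": ["split"], "split": ["a", "b"], "a": ["join"],
         "b": ["join"], "join": ["end"], "end": []}
print(size(model), round(density(model), 3))  # 6 nodes, density 0.2
```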

  15. Quality

    International Nuclear Information System (INIS)

    Burnett, N.; Jeffries, J.; Mach, J.; Robson, M.; Pajot, D.; Harrigan, J.; Lebsack, T.; Mullen, D.; Rat, F.; Theys, P.

    1993-01-01

    What is quality? How do you achieve it? How do you keep it once you have it? The answer for industry at large is the three-step hierarchy of quality control, quality assurance and Total Quality Management. An overview is given of the history of the quality movement, illustrated with examples from Schlumberger operations, as well as the oil industry's approach to quality. An introduction to Schlumberger's quality-associated ClientLink program is presented. 15 figs., 4 ills., 16 refs

  16. Can working with the private for-profit sector improve utilization of quality health services by the poor? A systematic review of the literature

    Science.gov (United States)

    Patouillard, Edith; Goodman, Catherine A; Hanson, Kara G; Mills, Anne J

    2007-01-01

    Background: There has been a growing interest in the role of the private for-profit sector in health service provision in low- and middle-income countries. The private sector represents an important source of care for all socioeconomic groups, including the poorest, and substantial concerns have been raised about the quality of care it provides. Interventions have been developed to address these technical failures and simultaneously take advantage of the potential for involving private providers to achieve public health goals. Limited information is available on the extent to which these interventions have successfully expanded access to quality health services for poor and disadvantaged populations. This paper addresses this knowledge gap by presenting the results of a systematic literature review on the effectiveness of working with private for-profit providers to reach the poor. Methods: The search topic of the systematic literature review was the effectiveness of interventions working with the private for-profit sector to improve utilization of quality health services by the poor. Interventions included social marketing, use of vouchers, pre-packaging of drugs, franchising, training, regulation, accreditation and contracting-out. The search for published literature used a series of electronic databases including PubMed, Popline, HMIC and CabHealth Global Health. The search for grey and unpublished literature used documents available on the World Wide Web. We focused on studies which evaluated the impact of interventions on utilization and/or quality of services and which provided information on the socioeconomic status of the beneficiary populations. Results: A total of 2483 references were retrieved, of which 52 qualified as impact evaluations. Data were available on the average socioeconomic status of recipient communities for 5 interventions, and on the distribution of benefits across socioeconomic groups for 5 interventions. Conclusion: Few studies provided

  17. Can working with the private for-profit sector improve utilization of quality health services by the poor? A systematic review of the literature

    Directory of Open Access Journals (Sweden)

    Hanson Kara G

    2007-11-01

    Background: There has been a growing interest in the role of the private for-profit sector in health service provision in low- and middle-income countries. The private sector represents an important source of care for all socioeconomic groups, including the poorest, and substantial concerns have been raised about the quality of care it provides. Interventions have been developed to address these technical failures and simultaneously take advantage of the potential for involving private providers to achieve public health goals. Limited information is available on the extent to which these interventions have successfully expanded access to quality health services for poor and disadvantaged populations. This paper addresses this knowledge gap by presenting the results of a systematic literature review on the effectiveness of working with private for-profit providers to reach the poor. Methods: The search topic of the systematic literature review was the effectiveness of interventions working with the private for-profit sector to improve utilization of quality health services by the poor. Interventions included social marketing, use of vouchers, pre-packaging of drugs, franchising, training, regulation, accreditation and contracting-out. The search for published literature used a series of electronic databases including PubMed, Popline, HMIC and CabHealth Global Health. The search for grey and unpublished literature used documents available on the World Wide Web. We focused on studies which evaluated the impact of interventions on utilization and/or quality of services and which provided information on the socioeconomic status of the beneficiary populations. Results: A total of 2483 references were retrieved, of which 52 qualified as impact evaluations. Data were available on the average socioeconomic status of recipient communities for 5 interventions, and on the distribution of benefits across socioeconomic groups for 5 interventions

  18. Can working with the private for-profit sector improve utilization of quality health services by the poor? A systematic review of the literature.

    Science.gov (United States)

    Patouillard, Edith; Goodman, Catherine A; Hanson, Kara G; Mills, Anne J

    2007-11-07

    There has been a growing interest in the role of the private for-profit sector in health service provision in low- and middle-income countries. The private sector represents an important source of care for all socioeconomic groups, including the poorest, and substantial concerns have been raised about the quality of care it provides. Interventions have been developed to address these technical failures and simultaneously take advantage of the potential for involving private providers to achieve public health goals. Limited information is available on the extent to which these interventions have successfully expanded access to quality health services for poor and disadvantaged populations. This paper addresses this knowledge gap by presenting the results of a systematic literature review on the effectiveness of working with private for-profit providers to reach the poor. The search topic of the systematic literature review was the effectiveness of interventions working with the private for-profit sector to improve utilization of quality health services by the poor. Interventions included social marketing, use of vouchers, pre-packaging of drugs, franchising, training, regulation, accreditation and contracting-out. The search for published literature used a series of electronic databases including PubMed, Popline, HMIC and CabHealth Global Health. The search for grey and unpublished literature used documents available on the World Wide Web. We focused on studies which evaluated the impact of interventions on utilization and/or quality of services and which provided information on the socioeconomic status of the beneficiary populations. A total of 2483 references were retrieved, of which 52 qualified as impact evaluations. Data were available on the average socioeconomic status of recipient communities for 5 interventions, and on the distribution of benefits across socioeconomic groups for 5 interventions. Few studies provided evidence on the impact of private sector

  19. Active Metric Learning for Supervised Classification

    OpenAIRE

    Kumaran, Krishnan; Papageorgiou, Dimitri; Chang, Yutong; Li, Minhan; Takáč, Martin

    2018-01-01

    Clustering and classification critically rely on distance metrics that provide meaningful comparisons between data points. We present mixed-integer optimization approaches to find optimal distance metrics that generalize the Mahalanobis metric extensively studied in the literature. Additionally, we generalize and improve upon leading methods by removing reliance on pre-designated "target neighbors," "triplets," and "similarity pairs." Another salient feature of our method is its ability to en...
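    The Mahalanobis family these methods generalize is d_M(x, y)^2 = (x - y)^T M (x - y) with M positive semidefinite; parametrizing M = L^T L keeps any learned matrix valid. The sketch below shows the distance itself, not the paper's mixed-integer learning procedure.

```python
import numpy as np

def mahalanobis(x: np.ndarray, y: np.ndarray, L: np.ndarray) -> float:
    """Distance under M = L^T L, valid for any real matrix L."""
    diff = L @ (x - y)
    return float(np.sqrt(diff @ diff))

rng = np.random.default_rng(0)
L = rng.standard_normal((3, 3))   # a learned L would come from training
print(mahalanobis(np.ones(3), np.zeros(3), L))
```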

  20. On Nakhleh's metric for reduced phylogenetic networks

    OpenAIRE

    Cardona, Gabriel; Llabrés, Mercè; Rosselló, Francesc; Valiente Feruglio, Gabriel Alejandro

    2009-01-01

    We prove that Nakhleh’s metric for reduced phylogenetic networks is also a metric on the classes of tree-child phylogenetic networks, semibinary tree-sibling time consistent phylogenetic networks, and multilabeled phylogenetic trees. We also prove that it separates distinguishable phylogenetic networks. In this way, it becomes the strongest dissimilarity measure for phylogenetic networks available so far. Furthermore, we propose a generalization of that metric that separates arbitrary phyl...

  1. Generalized tolerance sensitivity and DEA metric sensitivity

    OpenAIRE

    Neralić, Luka; E. Wendell, Richard

    2015-01-01

    This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.

  2. The definitive guide to IT service metrics

    CERN Document Server

    McWhirter, Kurt

    2012-01-01

    Used just as they are, the metrics in this book will bring many benefits to both the IT department and the business as a whole. Details of the attributes of each metric are given, enabling you to make the right choices for your business. You may prefer to design and create your own metrics to bring even more value to your business; this book will show you how to do that, too.

  3. Generalized tolerance sensitivity and DEA metric sensitivity

    Directory of Open Access Journals (Sweden)

    Luka Neralić

    2015-03-01

    This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.

  4. Common Metrics for Human-Robot Interaction

    Science.gov (United States)

    Steinfeld, Aaron; Lewis, Michael; Fong, Terrence; Scholtz, Jean; Schultz, Alan; Kaber, David; Goodrich, Michael

    2006-01-01

    This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress.

  5. Chaotic inflation with metric and matter perturbations

    International Nuclear Information System (INIS)

    Feldman, H.A.; Brandenberger, R.H.

    1989-01-01

    A perturbative scheme to analyze the evolution of both metric and scalar field perturbations in an expanding universe is developed. The scheme is applied to study chaotic inflation with initial metric and scalar field perturbations present. It is shown that initial gravitational perturbations with wavelength smaller than the Hubble radius rapidly decay. The metric simultaneously picks up small perturbations determined by the matter inhomogeneities. Both are frozen in once the wavelength exceeds the Hubble radius. (orig.)
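    The freeze-out criterion invoked here is the standard comparison of a mode's physical wavelength with the Hubble radius; up to factors of 2π, a comoving mode k is frozen once

```latex
\lambda_{\mathrm{phys}} \sim \frac{a}{k} > H^{-1}
\quad\Longleftrightarrow\quad
k < aH .
```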

  6. Gravitational lensing in metric theories of gravity

    International Nuclear Information System (INIS)

    Sereno, Mauro

    2003-01-01

    Gravitational lensing in metric theories of gravity is discussed. I introduce a generalized approximate metric element, inclusive of both post-post-Newtonian contributions and a gravitomagnetic field. Following Fermat's principle and standard hypotheses, I derive the time delay function and deflection angle caused by an isolated mass distribution. Several astrophysical systems are considered. In most of the cases, the gravitomagnetic correction offers the best perspectives for an observational detection. Actual measurements distinguish only marginally different metric theories from each other
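    For orientation, the lowest-order deflection of light passing an isolated mass M at impact parameter b in a PPN metric theory is the standard result

```latex
\hat{\alpha} = \frac{2\,(1+\gamma)}{c^{2}}\,\frac{GM}{b},
```

    where γ is the PPN parameter, equal to 1 in general relativity (recovering Einstein's 4GM/(c²b)); the post-post-Newtonian and gravitomagnetic terms discussed in the paper correct this expression.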

  7. A global survey of clinicians' awareness, accessibility, utilization of e-continuous education, and quality of clinical blood use: policy considerations

    Directory of Open Access Journals (Sweden)

    Smit Sibinga CT

    2017-07-01

    Cees Th Smit Sibinga,1 Maruff A Oladejo,2 Olamide Hakeem Adejumo,3 Quentin Eichbaum,4 Midori Kumagawa,5 Shuichi Kino,5 Sima Zolfaghari,6 Silvano Wendel,7 Gordana Rasovic,8 Namjil Erdenebayar,9 Maya Makhmudova,10 Loyiso Mpuntsha,11 Charlotte Ingram,11 Bakyt B Kharabaev,12 Isaac Kajja,13 Zainab Mukhtar Hussain Sanji,14 Maria M M Satti15 1IQM Consulting for International Development of Quality Management in Transfusion Medicine, University of Groningen, Groningen, the Netherlands; 2Department of Educational Management, University of Lagos, Lagos, 3Olabisi Onabanjo University Teaching Hospital, Sagamu, Nigeria; 4Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA; 5Japanese Red Cross Hokkaido Block Blood Center, Japan; 6IBTO, Tehran, Iran; 7Blood Bank, Hospital Sirio Libanês, Sao Paulo, Brazil; 8Montenegro National Blood Transfusion Center, Podgorica, Montenegro; 9National Center for Transfusion Medicine, Ulaanbaatar, Mongolia; 10Consultant IQM Consulting, Tashkent, Uzbekistan; 11South Africa National Blood Transfusion Service, Johannesburg, South Africa; 12National Blood Transfusion Service, Bishkek, Kyrgyzstan; 13Department of Orthopedics, Mulago Hospital, Makerere University, Uganda; 14Consultant, Dow University of Health Sciences, Karachi, Pakistan; 15National Blood Transfusion Service, Khartoum, Sudan Introduction: Clinical use of blood has been shown to be the least developed part of the vein-to-vein transfusion chain. This global study was carried out in order to investigate the level of awareness, accessibility and utilization of continuous e-learning and education, and the quality of blood use among blood prescribing clinicians and nurses. Approach: Descriptive ex post facto survey design. Methods: A total of 264 purposively selected blood prescribing clinicians and nurses from the four Human Development Index (HDI) groups of countries (low, medium, high, and very high) participated in this study

  8. MOL-Eye: A New Metric for the Performance Evaluation of a Molecular Signal

    OpenAIRE

    Turan, Meric; Kuran, Mehmet Sukru; Yilmaz, H. Birkan; Chae, Chan-Byoung; Tugcu, Tuna

    2017-01-01

    Inspired by the eye diagram in classical radio frequency (RF) based communications, the MOL-Eye diagram is proposed for the performance evaluation of a molecular signal within the context of molecular communication. Utilizing various features of this diagram, three new metrics for the performance evaluation of a molecular signal, namely the maximum eye height, standard deviation of received molecules, and counting SNR (CSNR) are introduced. The applicability of these performance metrics in th...
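    The record does not define the three metrics, so the sketch below uses plausible readings (worst-case eye opening, pooled spread, and a mean-difference-over-variance SNR) purely as an illustration; the paper's exact definitions may differ.

```python
import numpy as np

def mol_eye_metrics(counts_bit1: np.ndarray, counts_bit0: np.ndarray):
    """counts_bitX: received-molecule counts for symbols carrying bit X."""
    eye_height = counts_bit1.min() - counts_bit0.max()   # worst-case opening
    spread = np.std(np.concatenate([counts_bit1, counts_bit0]))
    csnr = (counts_bit1.mean() - counts_bit0.mean()) ** 2 / (
        counts_bit1.var() + counts_bit0.var())
    return eye_height, spread, csnr
```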

  9. About the possibility of a generalized metric

    International Nuclear Information System (INIS)

    Lukacs, B.; Ladik, J.

    1991-10-01

    The metric (the structure of the space-time) may depend on the properties of the object measuring it. The case of size dependence of the metric was examined. For this dependence, the simplest possible form of the metric tensor has been constructed which fulfils the following requirements: there be two extremal characteristic scales; the metric be unique and coincide with the usual one between them; the change be sudden in the neighbourhood of these scales; and the size of the human body appear as a parameter (postulated on the basis of some philosophical arguments). Estimates have been made for the two extremal length scales according to existing observations. (author) 19 refs

  10. Utility-preserving anonymization for health data publishing.

    Science.gov (United States)

    Lee, Hyukki; Kim, Soohyung; Kim, Jong Wook; Chung, Yon Dohn

    2017-07-11

    Publishing raw electronic health records (EHRs) may be considered a breach of the privacy of individuals because they usually contain sensitive information. A common practice for privacy-preserving data publishing is to anonymize the data before publishing, and thus satisfy privacy models such as k-anonymity. Among various anonymization techniques, generalization is the most commonly used in medical/health data processing. Generalization inevitably causes information loss, and thus various methods have been proposed to reduce it. However, existing generalization-based data anonymization methods cannot avoid excessive information loss and so fail to preserve data utility. We propose a utility-preserving anonymization method for privacy-preserving data publishing (PPDP). To preserve data utility, the proposed method comprises three parts: (1) a utility-preserving model, (2) counterfeit record insertion, and (3) a catalog of the counterfeit records. We also propose an anonymization algorithm using the proposed method, which applies a full-domain generalization algorithm. We evaluate our method in comparison with an existing method on two aspects: information loss, measured through various quality metrics, and the error rate of analysis results. Across all quality metrics, the proposed method shows lower information loss than the existing method. In the real-world EHR analysis, the results show only a small error between data anonymized by the proposed method and the original data. Through experiments on various datasets, we show that the utility of EHRs anonymized by the proposed method is significantly better than that of EHRs anonymized by previous approaches.
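    As a minimal illustration of the generalization step (the counterfeit-record machinery that distinguishes the proposed method is omitted), the sketch below coarsens two assumed quasi-identifiers and checks k-anonymity.

```python
from collections import Counter

def generalize(record, age_level=1, zip_level=2):
    """Coarsen (age, zip): ages into 10*age_level-year bins,
    ZIP codes by masking the last zip_level digits."""
    age, zipcode = record
    width = 10 * age_level
    age_gen = f"{age - age % width}-{age - age % width + width - 1}" \
        if age_level else str(age)
    zip_gen = zipcode[: len(zipcode) - zip_level] + "*" * zip_level
    return (age_gen, zip_gen)

def is_k_anonymous(records, k, age_level=1, zip_level=2):
    classes = Counter(generalize(r, age_level, zip_level) for r in records)
    return all(count >= k for count in classes.values())

data = [(34, "13053"), (36, "13068"), (31, "13053"), (38, "13068")]
print(is_k_anonymous(data, k=2))  # True at this generalization level
```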

  11. Determination of a Screening Metric for High Diversity DNA Libraries.

    Science.gov (United States)

    Guido, Nicholas J; Handerson, Steven; Joseph, Elaine M; Leake, Devin; Kung, Li A

    2016-01-01

    The fields of antibody engineering, enzyme optimization and pathway construction rely increasingly on screening complex variant DNA libraries. These highly diverse libraries allow researchers to sample a maximized sequence space; and therefore, more rapidly identify proteins with significantly improved activity. The current state of the art in synthetic biology allows for libraries with billions of variants, pushing the limits of researchers' ability to qualify libraries for screening by measuring the traditional quality metrics of fidelity and diversity of variants. Instead, when screening variant libraries, researchers typically use a generic, and often insufficient, oversampling rate based on a common rule-of-thumb. We have developed methods to calculate a library-specific oversampling metric, based on fidelity, diversity, and representation of variants, which informs researchers, prior to screening the library, of the amount of oversampling required to ensure that the desired fraction of variant molecules will be sampled. To derive this oversampling metric, we developed a novel alignment tool to efficiently measure frequency counts of individual nucleotide variant positions using next-generation sequencing data. Next, we apply a method based on the "coupon collector" probability theory to construct a curve of upper bound estimates of the sampling size required for any desired variant coverage. The calculated oversampling metric will guide researchers to maximize their efficiency in using highly variant libraries.
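    The coupon-collector bound behind the metric is easy to evaluate: for a library of n equally likely variants, the expected number of clones screened before a fraction f of the variants has been seen is n(H_n - H_{n-m}) with m = ceil(f*n), which approaches n*ln(1/(1-f)) for large n. Real libraries are non-uniform, which is why the paper weights this with measured variant frequencies; the sketch below uses the uniform assumption.

```python
import math

def expected_draws(n: int, f: float) -> float:
    """Expected screens to observe a fraction f of n uniform variants."""
    m = math.ceil(f * n)
    return n * sum(1.0 / (n - i) for i in range(m))  # n * (H_n - H_{n-m})

n = 1_000_000
for f in (0.90, 0.95, 0.99):
    print(f, expected_draws(n, f) / n)  # oversampling factor ~ ln(1/(1-f))
```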

  12. Determination of a Screening Metric for High Diversity DNA Libraries.

    Directory of Open Access Journals (Sweden)

    Nicholas J Guido

    The fields of antibody engineering, enzyme optimization and pathway construction rely increasingly on screening complex variant DNA libraries. These highly diverse libraries allow researchers to sample a maximized sequence space; and therefore, more rapidly identify proteins with significantly improved activity. The current state of the art in synthetic biology allows for libraries with billions of variants, pushing the limits of researchers' ability to qualify libraries for screening by measuring the traditional quality metrics of fidelity and diversity of variants. Instead, when screening variant libraries, researchers typically use a generic, and often insufficient, oversampling rate based on a common rule-of-thumb. We have developed methods to calculate a library-specific oversampling metric, based on fidelity, diversity, and representation of variants, which informs researchers, prior to screening the library, of the amount of oversampling required to ensure that the desired fraction of variant molecules will be sampled. To derive this oversampling metric, we developed a novel alignment tool to efficiently measure frequency counts of individual nucleotide variant positions using next-generation sequencing data. Next, we apply a method based on the "coupon collector" probability theory to construct a curve of upper bound estimates of the sampling size required for any desired variant coverage. The calculated oversampling metric will guide researchers to maximize their efficiency in using highly variant libraries.

  13. ISS Logistics Hardware Disposition and Metrics Validation

    Science.gov (United States)

    Rogers, Toneka R.

    2010-01-01

    I was assigned to the Logistics Division of the International Space Station (ISS)/Spacecraft Processing Directorate. The Division consists of eight NASA engineers and specialists that oversee the logistics portion of the Checkout, Assembly, and Payload Processing Services (CAPPS) contract. Boeing, its sub-contractors, and the Boeing Prime contract out of Johnson Space Center provide the Integrated Logistics Support for the ISS activities at Kennedy Space Center. Essentially they ensure that spares are available to support flight hardware processing and the associated ground support equipment (GSE). Boeing maintains a Depot for electrical, mechanical and structural modifications and/or repair capability as required. My assigned task was to learn project management techniques utilized by NASA and its contractors to provide an efficient and effective logistics support infrastructure to the ISS program. Within the Space Station Processing Facility (SSPF) I was exposed to Logistics support components, such as the NASA Spacecraft Services Depot (NSSD) capabilities, Mission Processing tools, techniques and Warehouse support issues, required for integrating Space Station elements at the Kennedy Space Center. I also supported the identification of near-term ISS Hardware and Ground Support Equipment (GSE) candidates for excessing/disposition prior to October 2010; and the validation of several Logistics Metrics used by the contractor to measure logistics support effectiveness.

  14. Maternal and Neonatal Health Knowledge, Service Quality and Utilization: Findings from a Community Based Quasi-experimental Trial in Arghakhanchi District of Nepal.

    Science.gov (United States)

    Shrestha, J R; Manandhar, D S; Manandhar, S R; Adhikari, D; Rai, C; Rana, H; Poudel, M; Pradhan, A

    2015-01-01

    As part of the Partnership for Maternal and Newborn Health Project (PMNH), HealthRight International collaborated with Mother and Infant Research Activities (MIRA) to conduct operations research in Arghakhanchi district of Nepal to explore the impact of strengthening health facilities and improving community-facility linkages, along with the Community Based Newborn Care Program (CB-NCP), on Maternal Neonatal Care (MNC) service quality, utilization, knowledge and care-seeking behavior. This was a quasi-experimental study. Siddahara, Pokharathok, Subarnakhal and Narpani Health Posts (HPs) and Thada Primary Health Care Center (PHCC) in Electoral Constituency-2 were selected as intervention sites, and Arghatosh, Argha, Khana and Hansapur HPs and Balkot PHCC in Electoral Constituency-1 were chosen as controls. The intervention started in February 2011 and was evaluated in August 2013. To compare MNC knowledge and practice in the community, mothers of children aged 0-23 months were selected from the corresponding Village Development Committees (VDCs) by a two-stage cluster sampling design during both baseline (July 2010) and endline (August 2013) assessments. A difference-in-difference analysis was used to understand the intervention impact. Local resource mobilization for MNC, knowledge about MNC and service utilization increased in intervention sites. Though there were improvements, many effects were not significant. Extensive trainings followed by reviews and quality monitoring visits increased the knowledge, improved the skills and fostered the motivation of health facility workers for better MNC service delivery. MNC indicators showed an upsurge in numbers due to the synergistic effects of many interventions.

  15. Clinical Utility and Psychometric Properties of the Traumatic Brain Injury Quality of Life Scale (TBI-QOL) in US Military Service Members.

    Science.gov (United States)

    Lange, Rael T; Brickell, Tracey A; Bailie, Jason M; Tulsky, David S; French, Louis M

    2016-01-01

    To examine the clinical utility and psychometric properties of the Traumatic Brain Injury Quality of Life (TBI-QOL) scale in a US military population. One hundred fifty-two US military service members (age: M = 34.3, SD = 9.4; 89.5% men) were prospectively enrolled from the Walter Reed National Military Medical Center and other nationwide community outreach initiatives. Participants included 99 service members who had sustained a mild traumatic brain injury (TBI) and 53 injured or noninjured controls without TBI (n = 29 and n = 24, respectively). Participants completed the TBI-QOL scale and 5 other behavioral measures, on average, 33.8 months postinjury (SD = 37.9). Measures included 14 TBI-QOL subscales; the Neurobehavioral Symptom Inventory; the Posttraumatic Stress Disorder Checklist-Civilian version; the Alcohol Use Disorders Identification Test; and the Combat Exposure Scale. The internal consistency reliability of the TBI-QOL scales ranged from α = .91 to α = .98. The convergent and discriminant validity of the 14 TBI-QOL subscales was high. The mild TBI group had significantly worse scores than the control group on 10 of the 14 TBI-QOL subscales, supporting the scale as a measure of health-related quality of life in a mild TBI military sample. Additional research is recommended to further evaluate the clinical utility of the TBI-QOL scale in both military and civilian settings.

  16. THE UTILIZATION OF Fe(III) WASTE OF THE ETCHING INDUSTRY AS A QUALITY ENHANCEMENT MATERIAL IN CERAMIC ROOFTILE SYNTHESIS

    Directory of Open Access Journals (Sweden)

    Eva Vaulina Yulistia Delsy

    2015-11-01

    Waste is produced by various industrial activities. In this study, FeCl3 etching waste was used as a quality-enhancing additive in the synthesis of ceramic rooftiles from Kalijaran village clay, Purbalingga. The FeCl3 etching waste was contacted with the clay, with the waste applied either diluted or undiluted and the clay grain size varied over 60, 80, 100, 140, and 230 mesh. Clay and waste were contacted for 30-100 minutes. The results showed that the optimum combination was clay with 80 mesh grain size contacted for 70 minutes. The physical properties of the Fe-containing rooftiles met all ISO standards, and the best quality was obtained with clay contacted with waste diluted 1000 times. Stripping tests of Fe(III) by rain water and sea water showed average Fe stripping rates from tile bodies made with diluted waste of 0.068 ppm/day and 0.055 ppm/day, respectively, while for tile bodies made with undiluted waste the rates were 0.0722 ppm/day and 0.0560 ppm/day.

  17. Lyapunov exponent as a metric for assessing the dynamic content and predictability of large-eddy simulations

    Science.gov (United States)

    Nastac, Gabriel; Labahn, Jeffrey W.; Magri, Luca; Ihme, Matthias

    2017-09-01

    Metrics used to assess the quality of large-eddy simulations commonly rely on a statistical assessment of the solution. While these metrics are valuable, a dynamic measure is desirable to further characterize the ability of a numerical simulation to capture dynamic processes inherent in turbulent flows. To address this issue, a dynamic metric based on the Lyapunov exponent is proposed which assesses the growth rate of the solution separation. This metric is applied to two turbulent flow configurations: forced homogeneous isotropic turbulence and a turbulent jet diffusion flame. First, it is shown that, despite the direct numerical simulation (DNS) and large-eddy simulation (LES) being high-dimensional dynamical systems with O(10⁷) degrees of freedom, the separation growth rate qualitatively behaves like a lower-dimensional dynamical system, in which the dimension of the Lyapunov system is substantially smaller than that of the discretized dynamical system. Second, a grid refinement analysis of each configuration demonstrates that as the LES filter width approaches the smallest scales of the system the Lyapunov exponent asymptotically approaches a plateau. Third, a small perturbation is superimposed onto the initial conditions of each configuration, and the Lyapunov exponent is used to estimate the time required for divergence, thereby providing a direct assessment of the predictability time of simulations. By comparing inert and reacting flows, it is shown that combustion increases the predictability of the turbulent simulation as a result of the dilatation and increased viscosity by heat release. The predictability time is found to scale with the integral time scale in both the reacting and inert jet flows. Fourth, an analysis of the local Lyapunov exponent is performed to demonstrate that this metric can also determine flow-dependent properties, such as regions that are sensitive to small perturbations or conditions of large turbulence within the flow field. Finally
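    The separation-growth estimate at the core of the proposed metric reduces to fitting the slope of ln ||delta(t)|| over the exponential-growth window. The sketch below assumes the separation norms between a base and a perturbed simulation have already been extracted.

```python
import numpy as np

def lyapunov_exponent(t: np.ndarray, separation: np.ndarray) -> float:
    """Least-squares slope of ln ||delta(t)|| versus t."""
    slope, _intercept = np.polyfit(t, np.log(separation), 1)
    return slope

def predictability_time(lam: float, d0: float, tol: float) -> float:
    """Time for an initial error d0 to grow to tolerance tol."""
    return np.log(tol / d0) / lam
```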

  18. The effects of coal quality on NO{sub x} emissions and carbon burnout in pulverised coal-fired utility boilers

    Energy Technology Data Exchange (ETDEWEB)

    O'Connor, M. [National Power plc, Swindon (United Kingdom)]

    1999-04-01

    A comprehensive study is reported on the impact of coal quality on nitrogen oxides emissions and carbon burnout in utility boilers, with the aim of assessing their relationship and developing predictive tools for assessing coals. Experimental work was carried out on various laboratory-scale apparatus and on single burner test facilities ranging from 160 kW{sub th} to 40 MW{sub th} in size, and measurements were obtained from full-scale 500 MW{sub e} utility boiler trials. These data and basic coal data were then used to develop mathematical models to predict full-scale boiler performance with respect to NO{sub x} emissions and carbon burnout. Power station trials demonstrated that coal quality affects nitrogen oxides and burnout. Variability in boiler conditions also affected these factors. Lower nitrogen and higher volatile coals generally produced less NO{sub x}. Volatile content was the most important generic coal property for predicting burnout. Modelling, using data from advanced laboratory-scale tests, was found to be just as successful as rig testing for predicting the NO{sub x} performance of different coals. Laboratory-scale tests were found to be successful in providing accurate predictions of burnout for the coals studied. Mathematical models, however, were found to be less successful in this area and further work to develop them is required. A major achievement was CFD solutions of full-scale utility boiler furnaces in a single mesh. 32 refs., 15 figs., 33 tabs., 2 apps.

  19. The effect of phytobiotics, organic acids and humic acids on the utility and egg quality of laying hens

    Directory of Open Access Journals (Sweden)

    Henrieta Arpášová

    2017-11-01

    The aim of this study was to assess the influence of a dietary herbal additive in combination with organic acids, supplied in the feed mixture or drinking water of laying hens, on performance parameters and egg quality. Lohmann Brown Lite laying hens (n = 30) were divided into 3 groups (n = 10 each) and fed ad libitum for 20 weeks with complete feed mixtures (CFM). Hens in the control group received the CFM and drinking water without any supplements. In the first experimental group, hens received CFM without supplements, but phytobiotics (bergamot oil (Citrus bergamia), thyme (Thymus vulgaris), clove (Syzygium aromaticum) and pepper (Piper nigrum)) in combination with fumaric acid and citric acid at 60 mg per 1 liter of water were added to their drinking water. In the second experimental group, the CFM was enriched with humic acids at a concentration of 0.5%, and phytobiotics with organic acids at the same dose as in the first experimental group were added to the drinking water. Monitored parameters: body weight (g), egg production (%), weight of all produced eggs (g), egg albumen weight (g), egg albumen index, Haugh unit (HU), egg yolk weight (g), egg yolk index, egg yolk colour (°HLR), egg shell weight (g) and egg shell strength (N.cm-2). The results showed no significant differences between the experimental groups and the control group in hen body weight (P>0.05). The highest average body weight was found in hens from the second experimental group (values in group order: 1792.22 ± 80.85, 1768.42 ± 55.55, 1820.12 ± 78.56 g ± S.D.). We observed a positive trend of increasing egg production with the supplements, especially in the second experimental group with the addition of humic acids, although with no statistically significant difference compared to the control group (P>0.05). Mean laying intensity, in group order: 90.42, 91.16, 91.56%. We observed statistically

  20. Computer modelling of the combined effects of plant conditions and coal quality on burnout in utility furnaces

    Energy Technology Data Exchange (ETDEWEB)

    P. Stephenson [RWE npower Engineering, Swindon (United Kingdom)]

    2007-09-15

    The aim of this paper is to describe the latest steps in the development of a computer model to predict the combined effects of plant conditions and coal quality on burnout. The work was conducted as part of RWE's contribution to the recent ECSC project 'Development of a carbon-in-ash notification system (CARNO)'. A burnout predictor code has been developed and validated; it includes both coal and plant effects and incorporates a burnout model based closely on CBK8. The agreement between predicted C-in-ash and plant data is encouraging, but further improvements are still desirable. The predictions obtained from the burnout predictor show that the calculated sensitivities to changes in plant condition can be very dependent on the state of the plant. 7 refs., 7 figs., 1 tab.