WorldWideScience

Sample records for metric optimization evaluation

  1. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Science.gov (United States)

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  2. The SPAtial EFficiency metric (SPAEF): multiple-component evaluation of spatial patterns for optimization of hydrological models

    Science.gov (United States)

    Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon

    2018-05-01

The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the wide availability of spatial observations or with the sophisticated model codes that simulate the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous in order to achieve the complex task of comparing spatial patterns. SPAEF, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics, which further allow for a comparison of variables which are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
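The published SPAEF formula combines the three components as SPAEF = 1 − √((α−1)² + (β−1)² + (γ−1)²), where α is the Pearson correlation, β the ratio of the coefficients of variation, and γ the overlap of the z-scored histograms. A minimal pure-Python sketch (plain lists rather than gridded fields, and a simple fixed-width binning that the paper does not prescribe):

```python
import math

def pearson(a, b):
    # Pearson correlation coefficient between two equal-length lists
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def coeff_var(v):
    # coefficient of variation: population std dev divided by mean
    m = sum(v) / len(v)
    return math.sqrt(sum((x - m) ** 2 for x in v) / len(v)) / m

def hist_overlap(a, b, bins=10):
    # histogram intersection of z-scored values, in [0, 1]
    def zscore(v):
        m = sum(v) / len(v)
        s = math.sqrt(sum((x - m) ** 2 for x in v) / len(v))
        return [(x - m) / s for x in v]
    za, zb = zscore(a), zscore(b)
    lo, hi = min(za + zb), max(za + zb)
    w = (hi - lo) / bins or 1.0
    ha, hb = [0] * bins, [0] * bins
    for x in za:
        ha[min(int((x - lo) / w), bins - 1)] += 1
    for x in zb:
        hb[min(int((x - lo) / w), bins - 1)] += 1
    return sum(min(p, q) for p, q in zip(ha, hb)) / len(a)

def spaef(obs, sim, bins=10):
    alpha = pearson(obs, sim)                      # correlation
    beta = coeff_var(sim) / coeff_var(obs)         # CV ratio
    gamma = hist_overlap(obs, sim, bins)           # histogram overlap
    return 1 - math.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
```

A perfect match of all three components gives SPAEF = 1; each component's departure from 1 pulls the score down.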

  3. Thermodynamic metrics and optimal paths.

    Science.gov (United States)

    Sivak, David A; Crooks, Gavin E

    2012-05-11

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
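In the notation common to the thermodynamic-length literature (the symbols below are illustrative and not necessarily the paper's), the friction tensor enters the linear-response dissipation roughly as:

```latex
% Linear-response excess power along a control protocol \lambda(t):
P_{\mathrm{ex}}(t) \;\approx\; \frac{d\lambda^{T}}{dt}\,\zeta\bigl(\lambda(t)\bigr)\,\frac{d\lambda}{dt},
% where \zeta(\lambda) is the friction tensor, acting as a Riemannian
% metric on the space of thermodynamic states. The thermodynamic length
% of a path is
\mathcal{L} \;=\; \int_{0}^{\tau}\sqrt{\frac{d\lambda^{T}}{dt}\,\zeta(\lambda)\,\frac{d\lambda}{dt}}\;dt,
% and minimally dissipative finite-time protocols follow geodesics of \zeta.
```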

  4. Evaluating and Estimating the WCET Criticality Metric

    DEFF Research Database (Denmark)

    Jordan, Alexander

    2014-01-01

    a programmer (or compiler) from targeting optimizations the right way. A possible resort is to use a metric that targets WCET and which can be efficiently computed for all code parts of a program. Similar to dynamic profiling techniques, which execute code with input that is typically expected...... for the application, based on WCET analysis we can indicate how critical a code fragment is, in relation to the worst-case bound. Computing such a metric on top of static analysis, incurs a certain overhead though, which increases with the complexity of the underlying WCET analysis. We present our approach...... to estimate the Criticality metric, by relaxing the precision of WCET analysis. Through this, we can reduce analysis time by orders of magnitude, while only introducing minor error. To evaluate our estimation approach and share our garnered experience using the metric, we evaluate real-time programs, which...

  5. Human Performance Optimization Metrics: Consensus Findings, Gaps, and Recommendations for Future Research.

    Science.gov (United States)

    Nindl, Bradley C; Jaffin, Dianna P; Dretsch, Michael N; Cheuvront, Samuel N; Wesensten, Nancy J; Kent, Michael L; Grunberg, Neil E; Pierce, Joseph R; Barry, Erin S; Scott, Jonathan M; Young, Andrew J; OʼConnor, Francis G; Deuster, Patricia A

    2015-11-01

Human performance optimization (HPO) is defined as "the process of applying knowledge, skills and emerging technologies to improve and preserve the capabilities of military members, and organizations to execute essential tasks." The lack of consensus for operationally relevant and standardized metrics that meet joint military requirements has been identified as the single most important gap for research and application of HPO. In 2013, the Consortium for Health and Military Performance hosted a meeting to develop a toolkit of standardized HPO metrics for use in military and civilian research, and potentially for field applications by commanders, units, and organizations. Performance was considered from a holistic perspective as being influenced by various behaviors and barriers. To accomplish the goal of developing a standardized toolkit, key metrics were identified and evaluated across a spectrum of domains that contribute to HPO: physical performance, nutritional status, psychological status, cognitive performance, environmental challenges, sleep, and pain. These domains were chosen based on relevant data with regard to performance enhancers and degraders. The specific objectives at this meeting were to (a) identify and evaluate current metrics for assessing human performance within selected domains; (b) prioritize metrics within each domain to establish a human performance assessment toolkit; and (c) identify scientific gaps and the needed research to more effectively assess human performance across domains. This article provides a summary of 150 total HPO metrics across multiple domains that can be used as a starting point, the beginning of an HPO toolkit: physical fitness (29 metrics), nutrition (24 metrics), psychological status (36 metrics), cognitive performance (35 metrics), environment (12 metrics), sleep (9 metrics), and pain (5 metrics). These metrics can be particularly valuable as the military emphasizes a renewed interest in Human Dimension efforts.

  6. Evaluating Application-Layer Traffic Optimization Cost Metrics for P2P Multimedia Streaming

    DEFF Research Database (Denmark)

    Poderys, Justas; Soler, José

    2017-01-01

To help users of P2P communication systems perform better-than-random selection of communication peers, the Internet Engineering Task Force standardized the Application Layer Traffic Optimization (ALTO) protocol. The ALTO-provided data-routing cost metric can be used to rank peers in P2P communicati...
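As a hedged illustration of how an ALTO cost map might feed a peer-selection routine (the dictionary layout below is a simplification for illustration, not the protocol's literal JSON cost-map format):

```python
def rank_peers(candidates, cost_map, default_cost=float("inf")):
    """Rank candidate peers by ALTO routing cost, lower cost first.

    cost_map: hypothetical {peer_id: numeric cost} derived from an ALTO
    server's cost-map resource for the local network location; peers
    absent from the map are ranked last.
    """
    return sorted(candidates, key=lambda p: cost_map.get(p, default_cost))
```

For example, `rank_peers(["a", "b", "c"], {"a": 10, "b": 1})` prefers the cheap peer "b", then "a", and places the unknown peer "c" last.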

  7. A novel spatial performance metric for robust pattern optimization of distributed hydrological models

    Science.gov (United States)

    Stisen, S.; Demirel, C.; Koch, J.

    2017-12-01

Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. The hydrological modelling community has a comprehensive and well-tested toolbox of metrics to assess temporal model performance. In contrast, experience in evaluating spatial performance has not kept pace with the wide availability of spatial observations or with the sophisticated model codes that simulate the spatial variability of complex hydrological processes. This study aims at making a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. SPAEF, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are tested in a spatial-pattern-oriented model calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three SPAEF components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics, which allow comparing variables that are related but may differ in unit, in order to optimally exploit spatial observations made available by remote sensing.

  8. Next-Generation Metrics: Responsible Metrics & Evaluation for Open Science

    Energy Technology Data Exchange (ETDEWEB)

    Wilsdon, J.; Bar-Ilan, J.; Peters, I.; Wouters, P.

    2016-07-01

Metrics evoke a mixed reaction from the research community. A commitment to using data to inform decisions makes some enthusiastic about the prospect of granular, real-time analysis of research and its wider impacts. Yet we only have to look at the blunt use of metrics such as journal impact factors, h-indices and grant income targets to be reminded of the pitfalls. Some of the most precious qualities of academic culture resist simple quantification, and individual indicators often struggle to do justice to the richness and plurality of research. Too often, poorly designed evaluation criteria are “dominating minds, distorting behaviour and determining careers” (Lawrence, 2007). Metrics hold real power: they are constitutive of values, identities and livelihoods. How to exercise that power to more positive ends has been the focus of several recent and complementary initiatives, including the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto and The Metric Tide (a UK government review of the role of metrics in research management and assessment). Building on these initiatives, the European Commission, under its new Open Science Policy Platform, is now looking to develop a framework for responsible metrics for research management and evaluation, which can be incorporated into the successor framework to Horizon 2020. (Author)

  9. Metrics for Evaluation of Student Models

    Science.gov (United States)

    Pelanek, Radek

    2015-01-01

    Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…
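Typical metrics from this literature, such as RMSE, log-likelihood, and AUC for probabilistic predictions of binary student responses, can be sketched in a few lines of plain Python (a toy illustration, not the paper's code):

```python
import math

def rmse(y, p):
    # root-mean-square error of probabilities p against 0/1 outcomes y
    return math.sqrt(sum((yi - pi) ** 2 for yi, pi in zip(y, p)) / len(y))

def log_likelihood(y, p):
    # log-likelihood of the observed outcomes under predicted probabilities
    return sum(math.log(pi if yi else 1 - pi) for yi, pi in zip(y, p))

def auc(y, p):
    # probability that a random positive is scored above a random negative
    pos = [pi for yi, pi in zip(y, p) if yi]
    neg = [pi for yi, pi in zip(y, p) if not yi]
    wins = sum((a > b) + 0.5 * (a == b) for a in pos for b in neg)
    return wins / (len(pos) * len(neg))
```

As the paper discusses, these metrics emphasize different properties: RMSE and log-likelihood reward calibrated probabilities, while AUC rewards only the ranking of predictions.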

  10. Robustness of climate metrics under climate policy ambiguity

    International Nuclear Information System (INIS)

    Ekholm, Tommi; Lindroos, Tomi J.; Savolainen, Ilkka

    2013-01-01

Highlights: • We assess the economic impacts of using different climate metrics. • The setting is cost-efficient scenarios for three interpretations of the 2 °C target. • With each target setting, the optimal metric is different. • Therefore policy ambiguity prevents the selection of an optimal metric. • Robust metric values that perform well with multiple policy targets however exist. -- Abstract: A wide array of alternatives has been proposed as the common metrics with which to compare the climate impacts of different emission types. Different physical and economic metrics and their parameterizations give diverse weights between e.g. CH4 and CO2, and fixing the metric from one perspective makes it sub-optimal from another. As the aims of global climate policy involve some degree of ambiguity, it is not possible to determine a metric that would be optimal and consistent with all policy aims. This paper evaluates the cost implications of using predetermined metrics in cost-efficient mitigation scenarios. Three formulations of the 2 °C target, including both deterministic and stochastic approaches, shared a wide range of metric values for CH4 with which the mitigation costs are only slightly above the cost-optimal levels. Therefore, although ambiguity in current policy might prevent us from selecting an optimal metric, it can be possible to select robust metric values that perform well with multiple policy targets
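The weighting such metrics perform can be illustrated with a toy CO2-equivalent aggregation (the gas names and weights below are illustrative placeholders; for instance, GWP-100 values for CH4 are commonly quoted in the high-20s to low-30s):

```python
def co2_equivalent(emissions, metric_values):
    """Aggregate a multi-gas emission basket into CO2-equivalents.

    emissions: {gas: mass emitted}
    metric_values: {gas: weight relative to CO2} -- the choice of metric
    (GWP, GTP, economically derived values, ...) changes these weights,
    which is exactly the sensitivity the paper studies.
    """
    return sum(mass * metric_values.get(gas, 0.0)
               for gas, mass in emissions.items())
```

For example, `co2_equivalent({"CO2": 100.0, "CH4": 2.0}, {"CO2": 1.0, "CH4": 28.0})` yields 156.0; switching to a different CH4 weight shifts the total, and hence the implied mitigation priorities.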

  11. Microservice scaling optimization based on metric collection in Kubernetes

    OpenAIRE

    Blažej, Aljaž

    2017-01-01

    As web applications become more complex and the number of internet users rises, so does the need to optimize the use of hardware supporting these applications. Optimization can be achieved with microservices, as they offer several advantages compared to the monolithic approach, such as better utilization of resources, scalability and isolation of different parts of an application. Another important part is collecting metrics, since they can be used for analysis and debugging as well as the ba...
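For concreteness, the scaling rule that collected metrics typically feed in Kubernetes, the Horizontal Pod Autoscaler's documented formula, can be sketched as follows (a simplification that ignores tolerance windows and stabilization delays):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """Kubernetes-HPA-style rule:
    desired = ceil(current * currentMetric / targetMetric), clamped to
    the configured replica bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))
```

With 3 replicas, an observed metric of 200 against a target of 100 doubles the deployment to 6 replicas, while an observed 50 shrinks it to 2.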

  12. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    Science.gov (United States)

    Hernandez, Monica

    2017-12-01

This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on Chambolle and Pock's primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers liable to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented for running on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study in the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of the objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.

  13. Retrospective group fusion similarity search based on eROCE evaluation metric.

    Science.gov (United States)

    Avram, Sorin I; Crisan, Luminita; Bora, Alina; Pacureanu, Liliana M; Avram, Stefana; Kurunczi, Ludovic

    2013-03-01

In this study, a simple evaluation metric, denoted as eROCE, was proposed to measure the early enrichment of predictive methods. We demonstrated the superior robustness of eROCE compared to other known metrics across several active-to-inactive ratios ranging from 1:10 to 1:1000. Group fusion similarity search was investigated by varying 16 similarity coefficients, five molecular representations (binary and non-binary) and two group fusion rules using two reference structure set sizes. We used a dataset of 3478 active and 43,938 inactive molecules, and the enrichment was analyzed by means of eROCE. This retrospective study provides optimal similarity search parameters in the case of ALDH1A1 inhibitors. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.

    Science.gov (United States)

    Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen

    2017-06-01

The article proposes a set of metrics for evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, in reference to whether the evaluation employs the raw measurements of patient-performed motions, or whether the evaluation is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity in human-performed therapy assessment, and it can increase adherence to prescribed therapy plans and reduce healthcare costs.
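The simplest of the reviewed model-less metrics, the root-mean-square distance between two captured motion sequences, might look like this (the frame and joint layout is an assumption for illustration, not the article's data format):

```python
import math

def rms_distance(seq_a, seq_b):
    """Root-mean-square distance between two equal-length motion
    sequences, each a list of per-frame joint-coordinate tuples."""
    assert len(seq_a) == len(seq_b), "sequences must be time-aligned"
    total = 0.0
    for frame_a, frame_b in zip(seq_a, seq_b):
        # squared Euclidean distance between corresponding coordinates
        total += sum((a - b) ** 2 for a, b in zip(frame_a, frame_b))
    return math.sqrt(total / len(seq_a))
```

In practice the sequences would first be time-aligned (e.g. by dynamic time warping) before such a frame-wise distance is meaningful.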

  15. Optimal recovery of linear operators in non-Euclidean metrics

    Energy Technology Data Exchange (ETDEWEB)

    Osipenko, K Yu [Moscow State Aviation Technological University, Moscow (Russian Federation)

    2014-10-31

    The paper looks at problems concerning the recovery of operators from noisy information in non-Euclidean metrics. A number of general theorems are proved and applied to recovery problems for functions and their derivatives from the noisy Fourier transform. In some cases, a family of optimal methods is found, from which the methods requiring the least amount of original information are singled out. Bibliography: 25 titles.

  16. Fisher information metrics for binary classifier evaluation and training

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Different evaluation metrics for binary classifiers are appropriate to different scientific domains and even to different problems within the same domain. This presentation focuses on the optimisation of event selection to minimise statistical errors in HEP parameter estimation, a problem that is best analysed in terms of the maximisation of Fisher information about the measured parameters. After describing a general formalism to derive evaluation metrics based on Fisher information, three more specific metrics are introduced for the measurements of signal cross sections in counting experiments (FIP1) or distribution fits (FIP2) and for the measurements of other parameters from distribution fits (FIP3). The FIP2 metric is particularly interesting because it can be derived from any ROC curve, provided that prevalence is also known. In addition to its relation to measurement errors when used as an evaluation criterion (which makes it more interesting that the ROC AUC), a further advantage of the FIP2 metric is ...

  17. A condition metric for Eucalyptus woodland derived from expert evaluations.

    Science.gov (United States)

    Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D

    2018-02-01

    The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
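The spirit of the approach, an ensemble of bagged regression trees trained on expert quality scores, can be sketched with toy one-split trees in plain Python (the paper used an ensemble of 30 full regression trees over 13 site variables; everything below is an illustrative simplification):

```python
import random

def fit_stump(X, y):
    # one-split regression tree: pick the feature/threshold minimizing SSE
    best = None
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[j] <= t]
            right = [yi for row, yi in zip(X, y) if row[j] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            if best is None or sse < best[0]:
                best = (sse, j, t, ml, mr)
    if best is None:              # no valid split: predict the mean
        m = sum(y) / len(y)
        return lambda row: m
    _, j, t, ml, mr = best
    return lambda row: ml if row[j] <= t else mr

def bagged_ensemble(X, y, n_trees=30, seed=0):
    # bagging: fit each tree on a bootstrap resample, average predictions
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        stumps.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda row: sum(s(row) for s in stumps) / len(stumps)
```

Averaging over bootstrap resamples smooths the individual trees' step functions, which is what lets such an ensemble reproduce graded expert judgments rather than hard cutoffs.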

  18. An analytical modeling framework to evaluate converged networks through business-oriented metrics

    International Nuclear Information System (INIS)

    Guimarães, Almir P.; Maciel, Paulo R.M.; Matias, Rivalino

    2013-01-01

Nowadays, society increasingly relies on converged networks as an essential means for individuals, businesses, and governments. Strategies, methods, models and techniques for preventing and handling hardware or software failures, as well as for avoiding performance degradation, are thus fundamental for prevailing in business. Issues such as operational costs, revenues and their relationship to key performance and dependability metrics are central for defining the required system infrastructure. Our work aims to provide system performance and dependability models for supporting the optimization of infrastructure design with respect to business-oriented metrics. In addition, a methodology is adopted to support both the modeling and the evaluation process. The results showed that the proposed methodology can significantly reduce the complexity of infrastructure design as well as improve the relationship between business and infrastructure aspects.

  19. Evaluation metrics for biostatistical and epidemiological collaborations.

    Science.gov (United States)

    Rubio, Doris McGartland; Del Junco, Deborah J; Bhore, Rafia; Lindsell, Christopher J; Oster, Robert A; Wittkowski, Knut M; Welty, Leah J; Li, Yi-Ju; Demets, Dave

    2011-10-15

    Increasing demands for evidence-based medicine and for the translation of biomedical research into individual and public health benefit have been accompanied by the proliferation of special units that offer expertise in biostatistics, epidemiology, and research design (BERD) within academic health centers. Objective metrics that can be used to evaluate, track, and improve the performance of these BERD units are critical to their successful establishment and sustainable future. To develop a set of reliable but versatile metrics that can be adapted easily to different environments and evolving needs, we consulted with members of BERD units from the consortium of academic health centers funded by the Clinical and Translational Science Award Program of the National Institutes of Health. Through a systematic process of consensus building and document drafting, we formulated metrics that covered the three identified domains of BERD practices: the development and maintenance of collaborations with clinical and translational science investigators, the application of BERD-related methods to clinical and translational research, and the discovery of novel BERD-related methodologies. In this article, we describe the set of metrics and advocate their use for evaluating BERD practices. The routine application, comparison of findings across diverse BERD units, and ongoing refinement of the metrics will identify trends, facilitate meaningful changes, and ultimately enhance the contribution of BERD activities to biomedical research. Copyright © 2011 John Wiley & Sons, Ltd.

  20. Conceptual Soundness, Metric Development, Benchmarking, and Targeting for PATH Subprogram Evaluation

    Energy Technology Data Exchange (ETDEWEB)

Mosey, G.; Doris, E.; Coggeshall, C.; Antes, M.; Ruch, J.; Mortensen, J.

    2009-01-01

    The objective of this study is to evaluate the conceptual soundness of the U.S. Department of Housing and Urban Development (HUD) Partnership for Advancing Technology in Housing (PATH) program's revised goals and establish and apply a framework to identify and recommend metrics that are the most useful for measuring PATH's progress. This report provides an evaluative review of PATH's revised goals, outlines a structured method for identifying and selecting metrics, proposes metrics and benchmarks for a sampling of individual PATH programs, and discusses other metrics that potentially could be developed that may add value to the evaluation process. The framework and individual program metrics can be used for ongoing management improvement efforts and to inform broader program-level metrics for government reporting requirements.

  1. Riemannian metric optimization on surfaces (RMOS) for intrinsic brain mapping in the Laplace-Beltrami embedding space.

    Science.gov (United States)

    Gahm, Jin Kyu; Shi, Yonggang

    2018-05-01

    Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer's disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics

    Directory of Open Access Journals (Sweden)

    Bernardin Keni

    2008-01-01

Simultaneous tracking of multiple persons in real-world environments is an active research field and several approaches have been proposed, based on a variety of features and algorithms. Recently, there has been a growing interest in organizing systematic evaluations to compare the various techniques. Unfortunately, the lack of common metrics for measuring the performance of multiple object trackers still makes it hard to compare their results. In this work, we introduce two intuitive and general metrics to allow for objective comparison of tracker characteristics, focusing on their precision in estimating object locations, their accuracy in recognizing object configurations and their ability to consistently label objects over time. These metrics have been extensively used in two large-scale international evaluations, the 2006 and 2007 CLEAR evaluations, to measure and compare the performance of multiple object trackers for a wide variety of tracking tasks. Selected performance results are presented and the advantages and drawbacks of the presented metrics are discussed based on the experience gained during the evaluations.
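The two metrics introduced here are commonly reported as MOTP (precision of matched object locations) and MOTA (accuracy combining misses, false positives, and identity mismatches). A sketch from per-frame counts (the input layout is an assumption for illustration):

```python
def clear_mot(frames):
    """Compute (MOTA, MOTP) from per-frame tracking counts.

    frames: list of dicts with keys 'gt' (ground-truth objects),
    'misses', 'false_positives', 'mismatches' (identity switches),
    'matches', and 'total_distance' (summed distance over matched pairs).
    """
    errors = sum(f["misses"] + f["false_positives"] + f["mismatches"]
                 for f in frames)
    gt = sum(f["gt"] for f in frames)
    matches = sum(f["matches"] for f in frames)
    dist = sum(f["total_distance"] for f in frames)
    mota = 1.0 - errors / gt            # 1 minus the total error rate
    motp = dist / matches if matches else 0.0  # mean matched-pair distance
    return mota, motp
```

Note that MOTA can be negative when the tracker commits more errors than there are ground-truth objects, a property the CLEAR evaluations explicitly allow.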

Metric Evaluation Pipeline for 3D Modeling of Urban Scenes

    Directory of Open Access Journals (Sweden)

    M. Bosch

    2017-05-01

Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state of the art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline developed as publicly available open source software to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software is made publicly available to enable further research and planned benchmarking activities.

  4. Metric Evaluation Pipeline for 3d Modeling of Urban Scenes

    Science.gov (United States)

    Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.

    2017-05-01

    Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state of the art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline developed as publicly available open source software to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software is made publicly available to enable further research and planned benchmarking activities.
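Completeness and correctness for point clouds are often computed as reciprocal coverage fractions under a distance threshold; a brute-force sketch (the exact thresholds and matching rules of the released pipeline are not reproduced here):

```python
def coverage_fraction(points, reference, threshold):
    """Fraction of `points` lying within `threshold` of some point in
    `reference` (brute force; points are coordinate tuples).

    Completeness = coverage of the ground truth by the model;
    correctness = coverage of the model by the ground truth.
    """
    def near(p):
        return any(sum((a - b) ** 2 for a, b in zip(p, q)) <= threshold ** 2
                   for q in reference)
    return sum(near(p) for p in points) / len(points)
```

For real point clouds a spatial index (k-d tree, voxel grid) replaces the quadratic inner loop, but the reported fractions are the same.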

  5. Optimizing the fMRI data-processing pipeline using prediction and reproducibility performance metrics: I. A preliminary group analysis

    DEFF Research Database (Denmark)

    Strother, Stephen C.; Conte, Stephen La; Hansen, Lars Kai

    2004-01-01

    We argue that published results demonstrate that new insights into human brain function may be obscured by poor and/or limited choices in the data-processing pipeline, and review the work on performance metrics for optimizing pipelines: prediction, reproducibility, and related empirical Receiver Operating Characteristic (ROC) curves […], temporal detrending, and between-subject alignment) in a group analysis of BOLD-fMRI scans from 16 subjects performing a block-design, parametric-static-force task. Large-scale brain networks were detected using a multivariate linear discriminant analysis (canonical variates analysis, CVA) that was tuned […] of baseline scans have constant, equal means, and this assumption was assessed with prediction metrics. Higher-order polynomial warps compared to affine alignment had only a minor impact on the performance metrics. We found that both prediction and reproducibility metrics were required for optimizing […]

  6. Use of plan quality degradation to evaluate tradeoffs in delivery efficiency and clinical plan metrics arising from IMRT optimizer and sequencer compromises

    Science.gov (United States)

    Wilkie, Joel R.; Matuszak, Martha M.; Feng, Mary; Moran, Jean M.; Fraass, Benedick A.

    2013-01-01

    Purpose: Plan degradation resulting from compromises made to enhance delivery efficiency is an important consideration for intensity modulated radiation therapy (IMRT) treatment plans. IMRT optimization and/or multileaf collimator (MLC) sequencing schemes can be modified to generate more efficient treatment delivery, but the effect those modifications have on plan quality is often difficult to quantify. In this work, the authors present a method for quantitative assessment of overall plan quality degradation due to tradeoffs between delivery efficiency and treatment plan quality, illustrated using comparisons between plans developed allowing different numbers of intensity levels in IMRT optimization and/or MLC sequencing for static segmental MLC IMRT plans. Methods: A plan quality degradation method to evaluate delivery efficiency and plan quality tradeoffs was developed and used to assess planning for 14 prostate and 12 head and neck patients treated with static IMRT. Plan quality was evaluated using a physician's predetermined “quality degradation” factors for relevant clinical plan metrics associated with the plan optimization strategy. Delivery efficiency and plan quality were assessed for a range of optimization and sequencing limitations. The “optimal” (baseline) plan for each case was derived using a clinical cost function with an unlimited number of intensity levels. These plans were sequenced with a clinical MLC leaf sequencer that uses >100 segments, ensuring delivered intensities are within 1% of the optimized intensity pattern. Each patient's optimal plan was also sequenced while limiting the number of intensity levels (20, 10, and 5), and then separately optimized with these same numbers of intensity levels. Delivery time was measured for all plans, and direct evaluation of the tradeoffs between delivery time and plan degradation was performed. Results: When considering tradeoffs, the optimal number of intensity levels depends on the treatment

  7. [Applicability of traditional landscape metrics in evaluating urban heat island effect].

    Science.gov (United States)

    Chen, Ai-Lian; Sun, Ran-Hao; Chen, Li-Ding

    2012-08-01

    By using 24 landscape metrics, this paper evaluated the urban heat island effect in parts of the Beijing downtown area. QuickBird (QB) images were used to extract landscape type information, and the thermal bands from Landsat Enhanced Thematic Mapper Plus (ETM+) images were used to extract the land surface temperature (LST) in four seasons of the same year. The 24 landscape pattern metrics were calculated at landscape and class levels in a fixed window of 120 m × 120 m, and the applicability of these traditional landscape metrics in evaluating the urban heat island effect was examined. Among the 24 landscape metrics, only the percentage composition of landscape (PLAND), patch density (PD), largest patch index (LPI), coefficient of Euclidean nearest-neighbor distance variance (ENN_CV), and landscape division index (DIVISION) at the landscape level were significantly correlated with the LST in March, May, and November, while the PLAND, LPI, DIVISION, percentage of like adjacencies, and interspersion and juxtaposition index at the class level showed significant correlations with the LST in March, May, July, and December, especially in July. Some metrics, such as PD, edge density, clumpiness index, patch cohesion index, effective mesh size, splitting index, aggregation index, and normalized landscape shape index, showed varying correlations with the LST at different class levels. The traditional landscape metrics were not appropriate for evaluating the effects of the river on LST, while some of the metrics could be useful in characterizing urban LST and analyzing the urban heat island effect; however, the metrics should first be screened and examined.
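
The window-level correlation analysis described above reduces to a Pearson coefficient between each landscape metric and LST; the per-window PLAND and LST values below are hypothetical stand-ins for the extracted data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equally sized samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# hypothetical 120 m x 120 m windows: impervious-surface PLAND (%) vs LST (deg C)
pland = [10, 25, 40, 55, 70, 85]
lst = [28.1, 29.0, 30.2, 31.5, 32.1, 33.4]
r = pearson_r(pland, lst)  # strong positive correlation
```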

  8. Quality Evaluation in Wireless Imaging Using Feature-Based Objective Metrics

    OpenAIRE

    Engelke, Ulrich; Zepernick, Hans-Jürgen

    2007-01-01

    This paper addresses the evaluation of image quality in the context of wireless systems using feature-based objective metrics. The considered metrics comprise a weighted combination of feature values that are used to quantify the extent to which the related artifacts are present in a processed image. In view of imaging applications in mobile radio and wireless communication systems, reduced-reference objective quality metrics are investigated for quantifying user-perceived quality. The exa...

  9. Evaluation Metrics for Simulations of Tropical South America

    Science.gov (United States)

    Gallup, S.; Baker, I. T.; Denning, A. S.; Cheeseman, M.; Haynes, K. D.; Phillips, M.

    2017-12-01

    The evergreen broadleaf forest of the Amazon Basin is the largest rainforest on Earth, and has teleconnections to global climate and carbon cycle characteristics. This region defies simple characterization, spanning large gradients in total rainfall and seasonal variability. Broadly, the region can be thought of as trending from light-limited in its wettest areas to water-limited near the ecotone, with individual landscapes possibly exhibiting the characteristics of either (or both) limitations during an annual cycle. A basin-scale classification of mean behavior has been elusive, and ecosystem response to seasonal cycles and anomalous drought events has resulted in some disagreement in the literature, to say the least. However, new observational platforms and instruments make characterization of the heterogeneity and variability more feasible. To evaluate simulations of ecophysiological function, we develop metrics that correlate various observational products with meteorological variables such as precipitation and radiation. Observations include eddy covariance fluxes, Solar Induced Fluorescence (SIF, from GOME2 and OCO2), biomass, and vegetation indices. We find that the modest correlation between SIF and precipitation decreases with increasing annual precipitation, although the relationship is not consistent between products. Biomass increases with increasing precipitation. Although vegetation indices are generally correlated with biomass and precipitation, they can saturate or experience retrieval issues during cloudy periods. Using these observational products and relationships, we develop a set of model evaluation metrics. These metrics are designed to call attention to models that get "the right answer only if it's for the right reason," and provide an opportunity for more critical evaluation of model physics. These metrics represent a testbed that can be applied to multiple models as a means to evaluate their performance in tropical South America.

  10. Contribution to the evaluation and to the improvement of multi-objective optimization methods: application to the optimization of nuclear fuel reloading pattern

    International Nuclear Information System (INIS)

    Collette, Y.

    2002-01-01

    In this thesis, we study the general problem of selecting a multi-objective optimization method, and then of improving it so as to solve a problem efficiently. The pertinent selection of a method presumes the existence of a methodology: we have built tools for evaluating performance and we propose an original method for classifying known optimization methods. Our approach has been applied to the elaboration of new methods for solving a very difficult problem: nuclear core reload pattern optimization. First, we looked for an unusual approach to performance measurement: we 'measured' the behavior of a method. To reach this goal, we introduced several metrics. We proposed to evaluate the 'aesthetics' of a distribution of solutions by defining two new metrics: a 'spacing metric' and a metric that measures the size of the biggest hole in the distribution of solutions. Then, we studied the convergence of multi-objective optimization methods using metrics defined in the scientific literature and by proposing further metrics, such as the 'Pareto ratio', which computes a rate of solution production. We also defined new metrics intended to better apprehend the behavior of optimization methods: the 'speed metric', which computes a speed profile, and a 'distribution metric', which computes the statistical distribution of solutions along the Pareto frontier. Next, we studied transformations of a multi-objective problem and defined new methods: the modified Tchebychev method and the penalized weighted sum of objective functions. We elaborated new techniques for choosing the initial point, which produce initial points closer and closer to the Pareto frontier and, thanks to the 'proximal optimality concept', allow dramatic improvements in the convergence of a multi-objective optimization method.
Lastly, we have defined new vectorial multi-objective optimization
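
The 'spacing metric' and the biggest-hole measure mentioned above can be sketched for a finite set of non-dominated solutions; the formulations below (Schott's spacing and the largest nearest-neighbor gap) are generic textbook versions, assumed here to approximate the thesis's definitions:

```python
import numpy as np

def _nearest_neighbor_dists(front):
    """L1 distance from each solution to its nearest neighbor on the front."""
    f = np.asarray(front, float)
    d = np.abs(f[:, None, :] - f[None, :, :]).sum(axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

def spacing_metric(front):
    """Schott's spacing: 0 for a perfectly evenly spread front."""
    nn = _nearest_neighbor_dists(front)
    return float(np.sqrt(((nn - nn.mean()) ** 2).sum() / (len(nn) - 1)))

def biggest_hole(front):
    """Largest nearest-neighbor gap: a crude 'biggest hole' measure."""
    return float(_nearest_neighbor_dists(front).max())

front = [[0.0, 3.0], [1.0, 2.0], [2.0, 1.0], [3.0, 0.0]]  # evenly spread
s, h = spacing_metric(front), biggest_hole(front)  # 0.0, 2.0
```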

  11. Evaluative Usage-Based Metrics for the Selection of E-Journals.

    Science.gov (United States)

    Hahn, Karla L.; Faulkner, Lila A.

    2002-01-01

    Explores electronic journal usage statistics and develops three metrics and three benchmarks based on those metrics. Topics include earlier work that assessed the value of print journals and was modified for the electronic format; the evaluation of potential purchases; and implications for standards development, including the need for content…

  12. Utility of different glycemic control metrics for optimizing management of diabetes.

    Science.gov (United States)

    Kohnert, Klaus-Dieter; Heinke, Peter; Vogt, Lutz; Salzsieder, Eckhard

    2015-02-15

    The benchmark for assessing quality of long-term glycemic control and adjustment of therapy is currently glycated hemoglobin (HbA1c). Despite its importance as an indicator for the development of diabetic complications, recent studies have revealed that this metric has some limitations; it conveys a rather complex message, which has to be taken into consideration for diabetes screening and treatment. On the basis of recent clinical trials, the relationship between HbA1c and cardiovascular outcomes in long-standing diabetes has been called into question. It becomes obvious that other surrogate markers and biomarkers are needed to better predict cardiovascular diabetes complications and assess efficiency of therapy. Glycated albumin, fructosamine, and 1,5-anhydroglucitol have received growing interest as alternative markers of glycemic control. In addition to measures of hyperglycemia, advanced glucose monitoring methods have become available. An indispensable adjunct to HbA1c in routine diabetes care is self-monitoring of blood glucose. This monitoring method is now widely used, as it provides immediate feedback to patients on short-term changes, involving fasting, preprandial, and postprandial glucose levels. Beyond the traditional metrics, glycemic variability has been identified as a predictor of hypoglycemia, and it might also be implicated in the pathogenesis of vascular diabetes complications. Assessment of glycemic variability is thus important, but exact quantification requires frequently sampled glucose measurements. In order to optimize diabetes treatment, there is a need both for key metrics of glycemic control on a day-to-day basis and for more advanced, user-friendly monitoring methods. In addition to traditional discontinuous glucose testing, continuous glucose sensing has become a useful tool to reveal insufficient glycemic management. This new technology is particularly effective in patients with complicated diabetes and provides the opportunity to characterize
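
As a sketch of the variability measures discussed above, three common summaries (standard deviation, %CV, and time-in-range) can be computed from frequently sampled glucose; the readings below are hypothetical, and the 70-180 mg/dL range is a commonly cited default rather than a value taken from this article:

```python
import numpy as np

def glycemic_variability(glucose_mgdl, lo=70.0, hi=180.0):
    """Return (SD, %CV, %time-in-range) for sampled glucose in mg/dL."""
    g = np.asarray(glucose_mgdl, float)
    sd = float(g.std(ddof=1))             # sample standard deviation
    cv = float(100.0 * sd / g.mean())     # %CV; <36% is often cited as stable
    tir = float(100.0 * np.mean((g >= lo) & (g <= hi)))
    return sd, cv, tir

samples = [95, 110, 150, 190, 160, 120, 85, 65]  # hypothetical CGM readings
sd, cv, tir = glycemic_variability(samples)      # tir = 75.0 (6 of 8 in range)
```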

  13. An Evaluation of the IntelliMetric[SM] Essay Scoring System

    Science.gov (United States)

    Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine

    2006-01-01

    This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…

  14. Active Metric Learning for Supervised Classification

    OpenAIRE

    Kumaran, Krishnan; Papageorgiou, Dimitri; Chang, Yutong; Li, Minhan; Takáč, Martin

    2018-01-01

    Clustering and classification critically rely on distance metrics that provide meaningful comparisons between data points. We present mixed-integer optimization approaches to find optimal distance metrics that generalize the Mahalanobis metric extensively studied in the literature. Additionally, we generalize and improve upon leading methods by removing reliance on pre-designated "target neighbors," "triplets," and "similarity pairs." Another salient feature of our method is its ability to en...

  15. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of a previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
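
One way to fold quality and speed into a single benchmarking score, as the paper proposes, is to normalize each family of metrics and take a weighted mean; the 1/(1+t) mapping for timings, the weights, and the metric names below are illustrative assumptions, not the paper's actual formula:

```python
import numpy as np

def combined_score(quality, speed, w_quality=0.5, w_speed=0.5):
    """quality: metric -> value already scaled to [0, 1], higher is better.
    speed: metric -> seconds, lower is better, mapped to (0, 1] via 1/(1+t)."""
    q = float(np.mean(list(quality.values())))
    s = float(np.mean([1.0 / (1.0 + t) for t in speed.values()]))
    return w_quality * q + w_speed * s

# hypothetical per-camera measurements
quality = {"sharpness": 0.8, "visual_noise": 0.6, "color_accuracy": 0.7}
speed = {"shot_to_shot_s": 1.0, "autofocus_s": 0.5}
score = combined_score(quality, speed)
```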

  16. Evaluation of Subjective and Objective Performance Metrics for Haptically Controlled Robotic Systems

    Directory of Open Access Journals (Sweden)

    Cong Dung Pham

    2014-07-01

    Full Text Available This paper studies in detail how different evaluation methods perform when it comes to describing the performance of haptically controlled mobile manipulators. Particularly, we investigate how well subjective metrics perform compared to objective metrics. To find the best metrics to describe the performance of a control scheme is challenging when human operators are involved; how the user perceives the performance of the controller does not necessarily correspond to the directly measurable metrics normally used in controller evaluation. It is therefore important to study whether there is any correspondence between how the user perceives the performance of a controller, and how it performs in terms of directly measurable metrics such as the time used to perform a task, number of errors, accuracy, and so on. To perform these tests we choose a system that consists of a mobile manipulator that is controlled by an operator through a haptic device. This is a good system for studying different performance metrics as the performance can be determined by subjective metrics based on feedback from the users, and also as objective and directly measurable metrics. The system consists of a robotic arm which provides for interaction and manipulation, which is mounted on a mobile base which extends the workspace of the arm. The operator thus needs to perform both interaction and locomotion using a single haptic device. While the position of the on-board camera is determined by the base motion, the principal control objective is the motion of the manipulator arm. This calls for intelligent control allocation between the base and the manipulator arm in order to obtain intuitive control of both the camera and the arm. We implement three different approaches to the control allocation problem, i.e., whether the vehicle or manipulator arm actuation is applied to generate the desired motion. The performance of the different control schemes is evaluated, and our

  17. Analysis on the Metrics used in Optimizing Electronic Business based on Learning Techniques

    Directory of Open Access Journals (Sweden)

    Irina-Steliana STAN

    2014-09-01

    The present paper proposes a methodology for analyzing the metrics related to electronic business. The drafts of the optimizing models include KPIs that can highlight the business specifics, provided they are integrated using learning-based techniques. Having identified the most important, high-impact elements of the business, the models should ultimately capture the links between them by automating business flows. Human staff will find themselves collaborating more and more with the optimizing models, which will translate into high-quality decisions followed by increased profitability.

  18. Evaluating which plan quality metrics are appropriate for use in lung SBRT.

    Science.gov (United States)

    Yaparpalvi, Ravindra; Garg, Madhur K; Shen, Jin; Bodner, William R; Mynampati, Dinesh K; Gafar, Aleiya; Kuo, Hsiang-Chi; Basavatia, Amar K; Ohri, Nitin; Hong, Linda X; Kalnicki, Shalom; Tome, Wolfgang A

    2018-02-01

    Several dose metrics in the categories of homogeneity, coverage, conformity, and gradient have been proposed in the literature for evaluating treatment plan quality. In this study, we applied these metrics to characterize and identify the plan quality metrics that would merit plan quality assessment in lung stereotactic body radiation therapy (SBRT) dose distributions. Treatment plans of 90 lung SBRT patients, comprising 91 targets, treated in our institution were retrospectively reviewed. Dose calculations were performed using the anisotropic analytical algorithm (AAA) with heterogeneity correction. A literature review on published plan quality metrics in the categories of coverage, homogeneity, conformity, and gradient was performed. For each patient, using dose-volume histogram data, plan quality metric values were quantified and analysed. For the study, the Radiation Therapy Oncology Group (RTOG)-defined plan quality metrics were: coverage (0.90 ± 0.08); homogeneity (1.27 ± 0.07); conformity (1.03 ± 0.07) and gradient (4.40 ± 0.80). Geometric conformity strongly correlated with conformity index (p […] plan quality guidelines: coverage % (ICRU 62), conformity (CN or CI_Paddick) and gradient (R50%). Furthermore, we strongly recommend that RTOG lung SBRT protocols adopt either CN or CI_Paddick in place of the prescription isodose to target volume ratio for conformity index evaluation. Advances in knowledge: Our study metrics are valuable tools for establishing lung SBRT plan quality guidelines.
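
For reference, the conformity and gradient metrics recommended above have standard closed forms (the van't Riet CN / Paddick CI and R50%); the volumes below are illustrative numbers, not values from the study:

```python
def paddick_ci(tv, piv, tv_piv):
    """CN / CI_Paddick = TV_PIV^2 / (TV * PIV), where TV is the target volume,
    PIV the prescription isodose volume, and TV_PIV their overlap (all in cc)."""
    return tv_piv ** 2 / (tv * piv)

def r50(v50, tv):
    """Gradient metric R50%: volume of the 50% isodose over the target volume."""
    return v50 / tv

ci = paddick_ci(tv=30.0, piv=33.0, tv_piv=29.0)   # ~0.85 (1.0 is ideal)
gradient = r50(v50=130.0, tv=30.0)                # ~4.33
```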

  19. A Cross-Domain Survey of Metrics for Modelling and Evaluating Collisions

    Directory of Open Access Journals (Sweden)

    Jeremy A. Marvel

    2014-09-01

    This paper provides a brief survey of the metrics for measuring probability, degree, and severity of collisions as applied to autonomous and intelligent systems. Though not exhaustive, this survey evaluates the state of the art in collision metrics, and assesses which are likely to aid in the establishment and support of autonomous system collision modelling. The survey includes metrics for (1) robot arms; (2) mobile robot platforms; (3) nonholonomic physical systems such as ground vehicles, aircraft, and naval vessels; and (4) virtual and mathematical models.

  20. Video Analytics Evaluation: Survey of Datasets, Performance Metrics and Approaches

    Science.gov (United States)

    2014-09-01

    people with different ethnicity and gender. Currently we have four subjects, but more can be added in the future. • Lighting Variations. We consider […] is, however, not a proper distance, as the triangle inequality condition is not met. For this reason, the next metric should be preferred. • the […] and Alan F. Smeaton and Georges Quenot, An Overview of the Goals, Tasks, Data, Evaluation Mechanisms and Metrics, Proceedings of TRECVID 2011, NIST, USA

  1. Classification in medical images using adaptive metric k-NN

    Science.gov (United States)

    Chen, C.; Chernoff, K.; Karemore, G.; Lo, P.; Nielsen, M.; Lauze, F.

    2010-03-01

    The performance of the k-nearest neighbors (k-NN) classifier is highly dependent on the distance metric used to identify the k nearest neighbors of the query points. The standard Euclidean distance is commonly used in practice. This paper investigates the performance of the k-NN classifier with respect to different adaptive metrics in the context of medical imaging. We propose using adaptive metrics such that the structure of the data is better described, introducing some unsupervised learning knowledge into k-NN. Four different metrics are estimated: a theoretical metric based on the assumption that images are drawn from the Brownian Image Model (BIM); a normalized metric based on the variance of the data; an empirical metric based on the empirical covariance matrix of the unlabeled data; and an optimized metric obtained by minimizing the classification error. The spectral structure of the empirical covariance also leads to Principal Component Analysis (PCA) being performed on it, which results in subspace metrics. The metrics are evaluated on two data sets: lateral X-rays of the lumbar aortic/spine region, where we use k-NN for performing abdominal aorta calcification detection; and mammograms, where we use k-NN for breast cancer risk assessment. The results show that an appropriate choice of metric can improve classification.
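
A minimal sketch of k-NN under a quadratic-form metric d(x, y)^2 = (x - y)^T M (x - y), with M the (regularized) inverse empirical covariance as in the Mahalanobis case; the toy data and the regularization constant are illustrative:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3, metric_inv=None):
    """k-NN with metric d(x, y)^2 = (x - y)^T M (x - y).
    metric_inv: M, e.g. the inverse empirical covariance (Mahalanobis);
    defaults to the identity, i.e. plain Euclidean distance."""
    if metric_inv is None:
        metric_inv = np.eye(X_train.shape[1])
    preds = []
    for x in X_test:
        diff = X_train - x
        d2 = np.einsum("ij,jk,ik->i", diff, metric_inv, diff)  # squared distances
        nn = np.argsort(d2)[:k]
        vals, counts = np.unique(y_train[nn], return_counts=True)
        preds.append(vals[np.argmax(counts)])   # majority vote among neighbors
    return np.array(preds)

# toy data: two classes separated along an elongated feature
X = np.array([[0, 0], [0.2, 0], [1, 0], [1.2, 0]], float)
y = np.array([0, 0, 1, 1])
M = np.linalg.inv(np.cov(X.T) + 1e-6 * np.eye(2))  # regularized Mahalanobis
pred = knn_predict(X, y, np.array([[0.1, 0.0]]), k=3, metric_inv=M)
```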

  2. Evaluating Modeled Impact Metrics for Human Health, Agriculture Growth, and Near-Term Climate

    Science.gov (United States)

    Seltzer, K. M.; Shindell, D. T.; Faluvegi, G.; Murray, L. T.

    2017-12-01

    Simulated metrics that assess impacts on human health, agriculture growth, and near-term climate were evaluated using ground-based and satellite observations. The NASA GISS ModelE2 and GEOS-Chem models were used to simulate the near-present chemistry of the atmosphere. A suite of simulations that varied by model, meteorology, horizontal resolution, emissions inventory, and emissions year was performed, enabling an analysis of metric sensitivities to various model components. All simulations utilized consistent anthropogenic global emissions inventories (ECLIPSE V5a or CEDS), and an evaluation of simulated results was carried out for 2004-2006 and 2009-2011 over the United States and 2014-2015 over China. Results for O3- and PM2.5-based metrics featured minor differences due to the model resolutions considered here (2.0° × 2.5° and 0.5° × 0.666°), while model, meteorology, and emissions inventory each played larger roles in the variance. Surface metrics related to O3 were consistently high-biased, though to varying degrees, demonstrating the need to evaluate particular modeling frameworks before O3 impacts are quantified. Surface metrics related to PM2.5 were diverse, indicating that a multimodel mean with robust results is a valuable tool for predicting PM2.5-related impacts. Oftentimes, the configuration that best captured the change of a metric over time differed from the configuration that best captured the magnitude of the same metric, demonstrating the challenge in skillfully simulating impacts. These results highlight the strengths and weaknesses of these models in simulating impact metrics related to air quality and near-term climate. With such information, the reliability of historical and future simulations can be better understood.

  3. Multi-objective optimization for generating a weighted multi-model ensemble

    Science.gov (United States)

    Lee, H.

    2017-12-01

    Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, it faces a major challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. 
Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic
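
A simple non-Pareto baseline for the weighting problem discussed above is to rescale each metric's error across models and set weights proportional to inverse mean rescaled error; the errors below are hypothetical, and this is only a baseline to contrast with the study's multi-objective optimization, not the study's method:

```python
import numpy as np

def skill_weights(errors):
    """errors: (n_models, n_metrics), lower is better. Rescale each metric
    to [0, 1] across models, then weight by inverse mean rescaled error."""
    e = np.asarray(errors, float)
    span = e.max(axis=0) - e.min(axis=0)
    e = (e - e.min(axis=0)) / (span + 1e-12)   # per-metric rescale
    inv = 1.0 / (e.mean(axis=1) + 1e-12)
    return inv / inv.sum()                     # weights sum to 1

# hypothetical RMSE-like errors for 3 models under 2 conflicting metrics
w = skill_weights([[1.0, 3.0],
                   [2.0, 1.0],
                   [3.0, 2.0]])
```

Note how model 2, which is never the worst on either metric, ends up with the largest weight even though it wins on only one of them.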

  4. A composite efficiency metrics for evaluation of resource and energy utilization

    International Nuclear Information System (INIS)

    Yang, Siyu; Yang, Qingchun; Qian, Yu

    2013-01-01

    Polygeneration systems are commonly found in the chemical and energy industries. These systems often involve chemical conversions and energy conversions. Studies of these systems are interdisciplinary, mainly involving the fields of chemical engineering, energy engineering, environmental science, and economics. Each of these fields has developed an isolated index system different from the others. Analyses of polygeneration systems are therefore very likely to yield biased results when only the indexes from one field are used. Motivated by this problem, this paper develops a new composite efficiency metric for polygeneration systems. This new metric is based on the second law of thermodynamics, i.e., exergy theory. We introduce the exergy cost of waste treatment as an energy penalty into conventional exergy efficiency. Using this new metric avoids the situation of spending too much energy to increase production, or sacrificing production capacity to save energy consumption. The composite metric is studied on a simplified co-production process, syngas to methanol and electricity. The advantage of the new efficiency metric is demonstrated by comparison with carbon element efficiency, energy efficiency, and exergy efficiency. Results show that the new metric gives a more rational analysis than the other indexes. - Highlights: • The composite efficiency metric gives a balanced evaluation of resource utilization and energy utilization. • This efficiency uses the exergy for waste treatment as the energy penalty. • This efficiency is applied on a simplified co-production process. • Results show that the composite metric is better than energy efficiencies and resource efficiencies
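
The energy-penalty idea reduces to adding the exergy consumed by waste treatment to the denominator of a conventional exergy efficiency; a minimal sketch with made-up plant numbers:

```python
def composite_exergy_efficiency(product_exergy, input_exergy, waste_treatment_exergy):
    """Composite efficiency: sum of product exergies over input exergy plus
    the exergy cost of waste treatment (the energy penalty), all in MW."""
    return sum(product_exergy) / (input_exergy + waste_treatment_exergy)

# hypothetical co-production plant: methanol + electricity from syngas
eff_plain = composite_exergy_efficiency([220.0, 80.0], input_exergy=500.0,
                                        waste_treatment_exergy=0.0)     # 0.6
eff_penalized = composite_exergy_efficiency([220.0, 80.0], input_exergy=500.0,
                                            waste_treatment_exergy=25.0)
```

With the penalty included, schemes that boost output at the cost of heavy waste treatment score lower, which is the balancing behavior the abstract describes.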

  5. Multimetric indices: How many metrics?

    Science.gov (United States)

    Multimetric indices (MMIs) often include 5 to 15 metrics, each representing a different attribute of assemblage condition, such as species diversity, tolerant taxa, and nonnative taxa. Is there an optimal number of metrics for MMIs? To explore this question, I created 1000 9-met...

  6. An Innovative Metric to Evaluate Satellite Precipitation's Spatial Distribution

    Science.gov (United States)

    Liu, H.; Chu, W.; Gao, X.; Sorooshian, S.

    2011-12-01

    Thanks to their capability to cover mountains, where ground measurement instruments cannot reach, satellites provide a good means of estimating precipitation over mountainous regions. In regions with complex terrain, accurate information on the high-resolution spatial distribution of precipitation is critical for many important issues, such as flood/landslide warning, reservoir operation, and water system planning. Therefore, in order to be useful in many practical applications, satellite precipitation products should possess high quality in characterizing spatial distribution. However, most existing validation metrics, which are based on point/grid comparison using simple statistics, cannot effectively measure a satellite's skill in capturing the spatial patterns of precipitation fields. This deficiency results from the fact that point/grid-wise comparison does not take into account the spatial coherence of precipitation fields. Furthermore, another weakness of many metrics is that they can barely provide information on why satellite products perform well or poorly. Motivated by our recent findings of the consistent spatial patterns of the precipitation field over the western U.S., we developed a new metric utilizing EOF analysis and Shannon entropy. The metric can be derived in two steps: 1) capture the dominant spatial patterns of precipitation fields from both satellite products and reference data through EOF analysis, and 2) compute the similarities between the corresponding dominant patterns using a mutual information measure defined with Shannon entropy. Instead of individual points/grids, the new metric treats the entire precipitation field simultaneously, naturally taking advantage of spatial dependence. Since the dominant spatial patterns are shaped by physical processes, the new metric can shed light on why a satellite product can or cannot capture the spatial patterns. For demonstration, an experiment was carried out to evaluate a satellite
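
The two-step metric can be sketched with an SVD-based EOF analysis followed by a histogram estimate of mutual information; the synthetic fields below stand in for the satellite and reference precipitation, and the bin count is an arbitrary choice:

```python
import numpy as np

def leading_eof(field):
    """field: (time, space) anomalies; returns the leading spatial EOF."""
    f = field - field.mean(axis=0)
    _, _, vt = np.linalg.svd(f, full_matrices=False)
    return vt[0]

def mutual_information(x, y, bins=8):
    """Histogram (plug-in) estimate of MI, in nats, between two patterns."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(0)
base = rng.normal(size=(24, 50))               # 24 months x 50 grid cells
sat = base + 0.1 * rng.normal(size=(24, 50))   # satellite ~ reference + noise
mi_self = mutual_information(leading_eof(base), leading_eof(base))   # upper bound
mi_cross = mutual_information(leading_eof(base), leading_eof(sat))
```

Because mutual information is invariant to the sign ambiguity of EOFs, the comparison does not depend on the arbitrary orientation returned by the SVD.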

  7. Metric diffusion along foliations

    CERN Document Server

    Walczak, Szymon M

    2017-01-01

    Up-to-date research in metric diffusion along compact foliations is presented in this book. Beginning with fundamentals from optimal transportation theory and the theory of foliations, the book moves on to cover the Wasserstein distance, the Kantorovich duality theorem, and the metrization of the weak topology by the Wasserstein distance. Metric diffusion is defined, the topology of the metric space is studied, and the limits of diffused metrics along compact foliations are discussed. Essentials on foliations, holonomy, heat diffusion, and compact foliations are detailed, and vital technical lemmas are proved to aid understanding. Graduate students and researchers in the geometry, topology, and dynamics of foliations and laminations will find this supplement useful, as it presents facts about metric diffusion along non-compact foliations and provides a full description of the limit for metrics diffused along a foliation with at least one compact leaf in dimension two.
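As a small numerical aside (not from the book), the 1-Wasserstein distance that underlies this construction has a closed form for two empirical measures on the real line with equally many atoms: optimal transport simply matches sorted samples.

```python
import numpy as np

def w1(u, v):
    """1-Wasserstein (Kantorovich) distance between two empirical measures
    on the real line with the same number of atoms and uniform weights."""
    u, v = np.sort(np.asarray(u, float)), np.sort(np.asarray(v, float))
    return float(np.mean(np.abs(u - v)))

# shifting a measure by 0.5 costs exactly 0.5 per unit of mass
print(w1([0.0, 1.0, 2.0], [0.5, 1.5, 2.5]))  # 0.5
```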

  8. MOL-Eye: A New Metric for the Performance Evaluation of a Molecular Signal

    OpenAIRE

    Turan, Meric; Kuran, Mehmet Sukru; Yilmaz, H. Birkan; Chae, Chan-Byoung; Tugcu, Tuna

    2017-01-01

    Inspired by the eye diagram in classical radio frequency (RF) based communications, the MOL-Eye diagram is proposed for the performance evaluation of a molecular signal within the context of molecular communication. Utilizing various features of this diagram, three new metrics for the performance evaluation of a molecular signal, namely the maximum eye height, standard deviation of received molecules, and counting SNR (CSNR) are introduced. The applicability of these performance metrics in th...

  9. RNA-SeQC: RNA-seq metrics for quality control and process optimization.

    Science.gov (United States)

    DeLuca, David S; Levin, Joshua Z; Sivachenko, Andrey; Fennell, Timothy; Nazaire, Marc-Danie; Williams, Chris; Reich, Michael; Winckler, Wendy; Getz, Gad

    2012-06-01

    RNA-seq, the application of next-generation sequencing to RNA, provides transcriptome-wide characterization of cellular activity. Assessment of sequencing performance and library quality is critical to the interpretation of RNA-seq data, yet few tools exist to address this issue. We introduce RNA-SeQC, a program which provides key measures of data quality. These metrics include yield, alignment and duplication rates; GC bias, rRNA content, regions of alignment (exon, intron and intragenic), continuity of coverage, 3'/5' bias and count of detectable transcripts, among others. The software provides multi-sample evaluation of library construction protocols, input materials and other experimental parameters. The modularity of the software enables pipeline integration and the routine monitoring of key measures of data quality such as the number of alignable reads, duplication rates and rRNA contamination. RNA-SeQC allows investigators to make informed decisions about sample inclusion in downstream analysis. In summary, RNA-SeQC provides quality control measures critical to experiment design, process optimization and downstream computational analysis. See www.genepattern.org to run online, or www.broadinstitute.org/rna-seqc/ for a command line tool.

  10. A lighting metric for quantitative evaluation of accent lighting systems

    Science.gov (United States)

    Acholo, Cyril O.; Connor, Kenneth A.; Radke, Richard J.

    2014-09-01

    Accent lighting is critical for artwork and sculpture lighting in museums, and for subject lighting on stage and in film and television. The research problem of designing effective lighting in such settings has been revived recently with the rise of light-emitting-diode-based solid-state lighting. In this work, we propose an easy-to-apply quantitative measure of a scene's visual quality as perceived by human viewers. We consider a well-accent-lit scene as one which maximizes the information about the scene (in an information-theoretic sense) available to the viewer. We propose a metric based on the entropy of the distribution of colors, which are extracted from an image of the scene from the viewer's perspective. We demonstrate that optimizing the metric as a function of illumination configuration (i.e., position, orientation, and spectral composition) results in natural, pleasing accent lighting. We use a photorealistic simulation tool to validate the functionality of our proposed approach, showing its successful application to two- and three-dimensional scenes.
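A minimal sketch of such a color-entropy metric (illustrative only; the quantization and color space here are assumptions, not the authors' choices) could look like:

```python
import numpy as np

def color_entropy(image, bins=8):
    """Shannon entropy (bits) of the joint RGB color distribution of an image.

    image: (H, W, 3) array with values in [0, 1].
    Higher entropy = more color information available to the viewer.
    """
    # quantize each channel and build a joint color histogram
    q = np.clip((image * bins).astype(int), 0, bins - 1)
    codes = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    counts = np.bincount(codes.ravel(), minlength=bins**3)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

An optimizer would then vary the light positions, orientations, and spectra in the simulation and keep the configuration that maximizes this score.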

  11. Semantic metrics

    OpenAIRE

    Hu, Bo; Kalfoglou, Yannis; Dupplaw, David; Alani, Harith; Lewis, Paul; Shadbolt, Nigel

    2006-01-01

    In the context of the Semantic Web, many ontology-related operations, e.g. ontology ranking, segmentation, alignment, articulation, reuse, evaluation, can be boiled down to one fundamental operation: computing the similarity and/or dissimilarity among ontological entities, and in some cases among ontologies themselves. In this paper, we review standard metrics for computing distance measures and we propose a series of semantic metrics. We give a formal account of semantic metrics drawn from a...

  12. Scientist impact factor (SIF): a new metric for improving scientists' evaluation?

    Science.gov (United States)

    Lippi, Giuseppe; Mattiuzzi, Camilla

    2017-08-01

    The publication of scientific research is the mainstay of knowledge dissemination, but is also an essential criterion in scientists' evaluation for recruiting funds and career progression. Although the most widespread approach for evaluating scientists is currently based on the H-index, the total impact factor (IF) and the overall number of citations, these metrics are plagued by some well-known drawbacks. Therefore, with the aim of improving the process of scientists' evaluation, we developed a new and potentially useful indicator of recent scientific output. The new metric, the scientist impact factor (SIF), was calculated as all citations of articles published in the two years following the publication year of the articles, divided by the overall number of articles published in that year. The metric was then tested by analyzing data of the 40 top scientists of the local University. No correlation was found between SIF and H-index (r=0.15; P=0.367) or 2-year H-index (r=-0.01; P=0.933), whereas the H-index and 2-year H-index values were found to be highly correlated (r=0.57), as were the number of articles published in one year and the total number of citations to these articles in the two following years (r=0.62). The SIF may therefore complement conventional metrics when evaluating scientists, wherein the SIF reflects the scientific output over the past two years, thus increasing the chances of recently productive scientists to apply for and obtain competitive funding.
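The SIF definition in the abstract reduces to a one-line computation. The following sketch (function name and data layout are hypothetical) shows it:

```python
def scientist_impact_factor(pubs, citations, year):
    """SIF for a given publication year, per the abstract's definition:
    citations received in the two years following `year` by articles
    published in `year`, divided by the number of articles of `year`.

    pubs: dict year -> number of articles published that year
    citations: dict (pub_year, citing_year) -> citations received
    """
    cites = citations.get((year, year + 1), 0) + citations.get((year, year + 2), 0)
    return cites / pubs[year]

# 5 articles in 2014, cited 12 times in 2015 and 18 times in 2016 -> SIF = 6.0
print(scientist_impact_factor({2014: 5}, {(2014, 2015): 12, (2014, 2016): 18}, 2014))
```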

  13. Designing Industrial Networks Using Ecological Food Web Metrics.

    Science.gov (United States)

    Layton, Astrid; Bras, Bert; Weissburg, Marc

    2016-10-18

    Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) traditionally has focused on the unit-operation and single-product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost- and emissions-based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics were also superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization generally ranged from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters together provided correlations nearly the same as those obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and represent guidelines for designing sustainable industry networks.
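For orientation (not from the paper, which does not specify which parameters these are), two commonly used structural food web parameters are connectance and linkage density, both computable directly from the network's adjacency matrix:

```python
import numpy as np

def connectance(adj):
    """Connectance C = L / S^2: realized links over possible links
    in a directed material-flow network with S actors."""
    s = adj.shape[0]
    return adj.astype(bool).sum() / s**2

def link_density(adj):
    """Linkage density L / S: links per actor."""
    s = adj.shape[0]
    return adj.astype(bool).sum() / s
```

In a design optimization, such metrics would be evaluated on each candidate industrial network and driven toward values typical of real food webs.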

  14. Performance evaluation of no-reference image quality metrics for face biometric images

    Science.gov (United States)

    Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick

    2018-03-01

    The accuracy of face recognition systems is significantly affected by the quality of face sample images. Recently established standardization proposes several important aspects for the assessment of face sample quality. Many existing no-reference image quality metrics (IQMs) are able to assess natural image quality by taking into account image-based quality attributes similar to those introduced in the standardization. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality in accordance with system performance. We also analyze the strengths and weaknesses of different IQMs, as well as why some of them fail to assess face sample quality. Retraining an original IQM on a face database can improve the performance of such a metric. In addition, the contribution of this paper can be used for the evaluation of IQMs on other biometric modalities; furthermore, it can be used for the development of multimodal biometric IQMs.

  15. Overview of journal metrics

    Directory of Open Access Journals (Sweden)

    Kihong Kim

    2018-02-01

    Various kinds of metrics used for the quantitative evaluation of scholarly journals are reviewed. The impact factor and related metrics, including the immediacy index and the aggregate impact factor, which are provided by the Journal Citation Reports, are explained in detail. The Eigenfactor score and the article influence score are also reviewed. In addition, journal metrics such as CiteScore, Source Normalized Impact per Paper, SCImago Journal Rank, h-index, and g-index are discussed. Limitations and problems of these metrics are pointed out. We should be cautious not to rely too heavily on these quantitative measures when evaluating journals or researchers.
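Two of the reviewed metrics have simple standard definitions that can be made concrete in code (illustrative sketch; function names and data layout are assumptions):

```python
def impact_factor(citations, articles, year):
    """2-year journal impact factor for `year`: citations received in `year`
    by items published in the two preceding years, divided by the number
    of citable items published in those two years.

    citations: dict (citing_year, cited_pub_year) -> citation count
    articles:  dict year -> number of citable items
    """
    c = citations[(year, year - 1)] + citations[(year, year - 2)]
    return c / (articles[year - 1] + articles[year - 2])

def immediacy_index(same_year_citations, same_year_articles):
    """Citations in a year to articles published that same year, per article."""
    return same_year_citations / same_year_articles
```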

  16. A framework for quantification of groundwater dynamics - redundancy and transferability of hydro(geo-)logical metrics

    Science.gov (United States)

    Heudorfer, Benedikt; Haaf, Ezra; Barthel, Roland; Stahl, Kerstin

    2017-04-01

    A new framework for quantification of groundwater dynamics has been proposed in a companion study (Haaf et al., 2017). In this framework, a number of conceptual aspects of dynamics, such as seasonality, regularity, flashiness or inter-annual forcing, are described and then linked to quantitative metrics. A large number of possible metrics is readily available from the literature, such as Pardé coefficients, Colwell's predictability indices or the Base Flow Index. In the present work, we focus on finding multicollinearity, and in consequence redundancy, among the metrics representing different patterns of dynamics found in groundwater hydrographs. This also serves to verify the categories of dynamics aspects suggested by Haaf et al., 2017. To determine the optimal set of metrics we need to balance the desired minimum number of metrics against the desired maximum descriptive power of the metrics. To do this, a substantial number of candidate metrics are applied to a diverse set of groundwater hydrographs from France, Germany and Austria within the northern alpine and peri-alpine region. By applying Principal Component Analysis (PCA) to the correlation matrix of the metrics, we determine a limited number of relevant metrics that describe the majority of variation in the dataset. The resulting reduced set of metrics comprises an optimized set that can be used to describe the aspects of dynamics that were identified within the groundwater dynamics framework. For some aspects of dynamics a single significant metric could be attributed. Other aspects have a fuzzier quality that can only be described by an ensemble of metrics and are re-evaluated. The PCA is furthermore applied to groups of groundwater hydrographs containing regimes of similar behaviour in order to explore transferability when applying the metric-based characterization framework to groups of hydrographs from diverse groundwater systems. In conclusion, we identify an optimal number of metrics
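The redundancy-detection step described above can be sketched in a few lines (illustrative only; the actual candidate metrics and selection criteria are in the study): PCA of the metrics' correlation matrix, where duplicated information shows up as near-zero eigenvalues.

```python
import numpy as np

def metric_pca(metric_table):
    """PCA of a (hydrographs x metrics) table via its correlation matrix.

    Returns explained-variance ratios (descending) and loadings; redundant,
    highly correlated metrics collapse onto shared components.
    """
    corr = np.corrcoef(metric_table, rowvar=False)
    eigval, eigvec = np.linalg.eigh(corr)
    order = np.argsort(eigval)[::-1]
    eigval = eigval[order]
    return eigval / eigval.sum(), eigvec[:, order]
```

A reduced metric set can then be chosen by keeping one representative metric per component with a large explained-variance ratio.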

  17. Generalized tolerance sensitivity and DEA metric sensitivity

    OpenAIRE

    Neralić, Luka; E. Wendell, Richard

    2015-01-01

    This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.

  18. Generalized tolerance sensitivity and DEA metric sensitivity

    Directory of Open Access Journals (Sweden)

    Luka Neralić

    2015-03-01

    This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.

  19. Optimization of the alpha image reconstruction. An iterative CT-image reconstruction with well-defined image quality metrics

    Energy Technology Data Exchange (ETDEWEB)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelriess, Marc [German Cancer Research Center, Heidelberg (Germany).

    2017-10-01

    Optimization of the AIR algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. the point spread function (PSF) or modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm alternates between optimization of raw-data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR, which used a straightforward gradient descent implementation. The regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function of the AIR image. The optimized algorithm computes similar images in a shorter time than the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters, which can be narrowed down to a relatively simple framework for computing high-quality images. The AIR images, for instance, can have at least a 50% lower noise level than the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6 while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their non-linearity.
A simple set of parameters for the algorithm is discussed that provides

  20. Optimization of the alpha image reconstruction. An iterative CT-image reconstruction with well-defined image quality metrics

    International Nuclear Information System (INIS)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelriess, Marc

    2017-01-01

    Optimization of the AIR algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. the point spread function (PSF) or modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm alternates between optimization of raw-data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR, which used a straightforward gradient descent implementation. The regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function of the AIR image. The optimized algorithm computes similar images in a shorter time than the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters, which can be narrowed down to a relatively simple framework for computing high-quality images. The AIR images, for instance, can have at least a 50% lower noise level than the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6 while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their non-linearity.
A simple set of parameters for the algorithm is discussed that provides

  1. Individuality evaluation for paper based artifact-metrics using transmitted light image

    Science.gov (United States)

    Yamakoshi, Manabu; Tanaka, Junichi; Furuie, Makoto; Hirabayashi, Masashi; Matsumoto, Tsutomu

    2008-02-01

    Artifact-metrics is an automated method of authenticating artifacts based on a measurable intrinsic characteristic. Intrinsic characteristics, such as microscopic random patterns made during the manufacturing process, are very difficult to copy. A transmitted light image of the fiber distribution can be used for artifact-metrics, since the fiber distribution of paper is random. Little is known about the individuality of the transmitted light image, although it is an important requirement for intrinsic-characteristic artifact-metrics. Individuality requires that the intrinsic characteristic of each artifact differs significantly, so having sufficient individuality can make an artifact-metric system highly resistant to brute-force attacks. Here we investigate the influence of paper category, matching size of the sample, and image resolution on the individuality of a transmitted light image of paper through a matching test using those images. More concretely, we evaluate FMR/FNMR curves by calculating similarity scores, with matches using correlation coefficients between pairs of scanner input images, and the individuality of paper by way of the estimated EER with a probabilistic measure, through a matching method based on line segments, which can localize the influence of rotation gaps of a sample in the case of a large matching size. As a result, we found that the transmitted light image of paper has sufficient individuality.
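The FMR/FNMR/EER evaluation described above can be sketched as follows (an illustrative approximation, not the paper's line-segment method): given correlation-based similarity scores for genuine pairs and impostor pairs, the EER is the operating point where the two error rates cross.

```python
import numpy as np

def eer(genuine, impostor):
    """Approximate equal error rate from similarity scores.

    FNMR(t): fraction of genuine scores below threshold t (false non-match).
    FMR(t):  fraction of impostor scores at or above t (false match).
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    fnmr = np.array([(genuine < t).mean() for t in thresholds])
    fmr = np.array([(impostor >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(fnmr - fmr))
    return float((fnmr[i] + fmr[i]) / 2)
```

Well-separated score distributions (high individuality) drive the EER toward zero; overlapping distributions push it up.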

  2. Metrics with vanishing quantum corrections

    International Nuclear Information System (INIS)

    Coley, A A; Hervik, S; Gibbons, G W; Pope, C N

    2008-01-01

    We investigate solutions of the classical Einstein or supergravity equations that solve any set of quantum-corrected Einstein equations in which the Einstein tensor plus a multiple of the metric is equated to a symmetric conserved tensor T_μν(g_αβ, ∂_τ g_αβ, ∂_τ ∂_σ g_αβ, ...) constructed from sums of terms involving contractions of the metric and powers of arbitrary covariant derivatives of the curvature tensor. A classical solution, such as an Einstein metric, is called universal if, when evaluated on that Einstein metric, T_μν is a multiple of the metric. A Ricci-flat classical solution is called strongly universal if, when evaluated on that Ricci-flat metric, T_μν vanishes. It is well known that pp-waves in four spacetime dimensions are strongly universal. We focus attention on a natural generalization: Einstein metrics with holonomy Sim(n - 2) in which all scalar invariants are zero or constant. In four dimensions we demonstrate that the generalized Ghanam-Thompson metric is weakly universal and that the Goldberg-Kerr metric is strongly universal; indeed, we show that universality extends to all four-dimensional Sim(2) Einstein metrics. We also discuss generalizations to higher dimensions

  3. Comparison of SOAP and REST Based Web Services Using Software Evaluation Metrics

    Directory of Open Access Journals (Sweden)

    Tihomirovs Juris

    2016-12-01

    The usage of Web services has recently increased, so it is important to select the right type of Web service at the project design stage. The most common implementations are based on the SOAP (Simple Object Access Protocol) and REST (Representational State Transfer) styles. Maintainability of REST and SOAP Web services has become an important issue as the popularity of Web services increases. Choosing the right approach is not an easy decision, since it is influenced by development requirements and maintenance considerations. In the present research, we present the comparison of SOAP- and REST-based Web services using software evaluation metrics. To achieve this aim, a systematic literature review is made to compare REST and SOAP Web services in terms of software evaluation metrics.

  4. ARM Data-Oriented Metrics and Diagnostics Package for Climate Model Evaluation Value-Added Product

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Chengzhu [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Xie, Shaocheng [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-10-15

    A Python-based metrics and diagnostics package is currently being developed by the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Infrastructure Team at Lawrence Livermore National Laboratory (LLNL) to facilitate the use of long-term, high-frequency measurements from the ARM Facility in evaluating the regional climate simulation of clouds, radiation, and precipitation. This metrics and diagnostics package computes climatological means of targeted climate model simulations and generates tables and plots for comparing the model simulations with ARM observational data. The Coupled Model Intercomparison Project (CMIP) model data sets are also included in the package to enable model intercomparison, as demonstrated in Zhang et al. (2017). The mean of the CMIP models can serve as a reference for individual models. Basic performance metrics are computed to measure the accuracy of the mean state and variability of climate models. The evaluated physical quantities include cloud fraction, temperature, relative humidity, cloud liquid water path, total column water vapor, precipitation, sensible and latent heat fluxes, and radiative fluxes, with plans to extend to more fields, such as aerosol and microphysics properties. Process-oriented diagnostics focusing on individual cloud- and precipitation-related phenomena are also being developed for the evaluation and development of specific model physical parameterizations. The version 1.0 package is designed based on data collected at ARM’s Southern Great Plains (SGP) Research Facility, with plans to extend to other ARM sites. The metrics and diagnostics package is currently built upon standard Python libraries and additional Python packages developed by DOE (such as CDMS and CDAT). The ARM metrics and diagnostics package is available publicly with the hope that it can serve as an easy entry point for climate modelers to compare their models with ARM data. In this report, we first present the input data, which

  5. Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures

    Science.gov (United States)

    2016-06-01

    c_i = unit cost for item i; f_i = demand forecast for item i; a_i = actual demand for item i. A close look at the fCIMIP metric reveals a... (MBA professional report, Naval Postgraduate School, Monterey, California, June 2016: "Demand Forecasting: An Evaluation of DOD's Accuracy Metric and Navy's Procedures".)

  6. Incorporating big data into treatment plan evaluation: Development of statistical DVH metrics and visualization dashboards.

    Science.gov (United States)

    Mayo, Charles S; Yao, John; Eisbruch, Avraham; Balter, James M; Litzenberg, Dale W; Matuszak, Martha M; Kessler, Marc L; Weyburn, Grant; Anderson, Carlos J; Owen, Dawn; Jackson, William C; Haken, Randall Ten

    2017-01-01

    To develop statistical dose-volume histogram (DVH)-based metrics and a visualization method to quantify the comparison of treatment plans with historical experience and among different institutions. The descriptive statistical summary (i.e., median, first and third quartiles, and 95% confidence intervals) of volume-normalized DVH curve sets of past experience was visualized through the creation of statistical DVH plots. Detailed distribution parameters were calculated and stored in JavaScript Object Notation files to facilitate management, including transfer and potential multi-institutional comparisons. In the treatment plan evaluation, structure DVH curves were scored against computed statistical DVHs and weighted experience scores (WESs). Individual, clinically used, DVH-based metrics were integrated into a generalized evaluation metric (GEM) as a priority-weighted sum of normalized incomplete gamma functions. Historical treatment plans for 351 patients with head and neck cancer, 104 with prostate cancer who were treated with conventional fractionation, and 94 with liver cancer who were treated with stereotactic body radiation therapy were analyzed to demonstrate the usage of statistical DVH, WES, and GEM in plan evaluation. A shareable dashboard plugin was created to display statistical DVHs and integrate GEM and WES scores into a clinical plan evaluation within the treatment planning system. Benchmarking with normal tissue complication probability scores was carried out to compare the behavior of GEM and WES scores. DVH curves from historical treatment plans were characterized and presented, with difficult-to-spare structures (i.e., frequently compromised organs at risk) identified. Quantitative evaluations by GEM and/or WES compared favorably with the normal tissue complication probability Lyman-Kutcher-Burman model, transforming a set of discrete threshold-priority limits into a continuous model reflecting physician objectives and historical experience.
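A purely schematic sketch of a GEM-style score (the actual weighting and normalization are defined in the paper; the shape parameter and scaling here are assumptions): each DVH metric contributes a regularized incomplete gamma function of its value relative to its limit, weighted by clinical priority.

```python
import math

def reg_inc_gamma(k, x):
    """Regularized lower incomplete gamma P(k, x) for integer shape k."""
    return 1.0 - math.exp(-x) * sum(x**n / math.factorial(n) for n in range(k))

def gem(values, limits, priorities, shape=4):
    """Hypothetical GEM-like score in [0, 1]: a priority-weighted sum of
    normalized incomplete gamma functions. Near 0 when every metric is far
    below its limit; approaches 1 as metrics approach or exceed limits."""
    total = sum(p * reg_inc_gamma(shape, shape * v / l)
                for v, l, p in zip(values, limits, priorities))
    return total / sum(priorities)
```

The soft gamma ramp is what turns discrete pass/fail threshold-priority limits into a continuous score.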

  7. Volume-based quantitative FDG PET/CT metrics and their association with optimal debulking and progression-free survival in patients with recurrent ovarian cancer undergoing secondary cytoreductive surgery

    International Nuclear Information System (INIS)

    Vargas, H.A.; Burger, I.A.; Micco, M.; Sosa, R.E.; Weber, W.; Hricak, H.; Sala, E.; Goldman, D.A.; Chi, D.S.

    2015-01-01

    Our aim was to evaluate the associations between quantitative 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) uptake metrics, optimal debulking (OD) and progression-free survival (PFS) in patients with recurrent ovarian cancer undergoing secondary cytoreductive surgery. Fifty-five patients with recurrent ovarian cancer underwent FDG-PET/CT within 90 days prior to surgery. Maximum standardized uptake values (SUVmax), metabolically active tumour volumes (MTV), and total lesion glycolysis (TLG) were measured on PET. Exact logistic regression, Kaplan-Meier curves and the log-rank test were used to assess associations between imaging metrics, OD and PFS. MTV (p = 0.0025) and TLG (p = 0.0043) were associated with OD; however, there was no significant association between SUVmax and debulking status (p = 0.83). Patients with an MTV above 7.52 mL and/or a TLG above 35.94 g had significantly shorter PFS (p = 0.0191 for MTV and p = 0.0069 for TLG). SUVmax was not significantly related to PFS (p = 0.10). PFS estimates at 3.5 years after surgery were 0.42 for patients with an MTV ≤ 7.52 mL and 0.19 for patients with an MTV > 7.52 mL; 0.46 for patients with a TLG ≤ 35.94 g and 0.15 for patients with a TLG > 35.94 g. FDG-PET metrics that reflect metabolic tumour burden are associated with optimal secondary cytoreductive surgery and progression-free survival in patients with recurrent ovarian cancer. (orig.)
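The three uptake metrics have standard definitions that can be sketched directly (illustrative only; the SUV threshold and voxel size are assumptions, and clinical segmentation is more involved):

```python
import numpy as np

def pet_metrics(suv, threshold=2.5, voxel_ml=0.1):
    """SUVmax, MTV and TLG from a PET SUV volume.

    MTV: volume (mL) of voxels with SUV above `threshold`.
    TLG: mean SUV within that volume times MTV (g).
    """
    mask = suv > threshold
    mtv = mask.sum() * voxel_ml
    suv_mean = suv[mask].mean() if mask.any() else 0.0
    return float(suv.max()), float(mtv), float(suv_mean * mtv)
```

MTV and TLG aggregate uptake over the whole metabolically active volume, which is why they track tumour burden better than the single-voxel SUVmax.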

  8. Reference-free ground truth metric for metal artifact evaluation in CT images

    International Nuclear Information System (INIS)

    Kratz, Baerbel; Ens, Svitlana; Mueller, Jan; Buzug, Thorsten M.

    2011-01-01

    Purpose: In computed tomography (CT), metal objects in the region of interest introduce data inconsistencies during acquisition. Reconstructing these data results in an image with star-shaped artifacts induced by the metal inconsistencies. To enhance image quality, the influence of the metal objects can be reduced by different metal artifact reduction (MAR) strategies. For an adequate evaluation of new MAR approaches a ground truth reference data set is needed. In technical evaluations, where phantoms can be measured with and without metal inserts, ground truth data can easily be obtained by a second reference data acquisition. Obviously, this is not possible for clinical data. Here, an alternative evaluation method is presented without the need for an additionally acquired reference data set. Methods: The proposed metric is based on an inherent ground truth for metal artifact as well as MAR method comparison, where no reference information in terms of a second acquisition is needed. The method is based on the forward projection of a reconstructed image, which is compared to the actually measured projection data. Results: The new evaluation technique is performed on phantom and on clinical CT data with and without MAR. The metric results are then compared with methods using a reference data set as well as an expert-based classification. It is shown that the new approach is an adequate quantification technique for artifact strength in reconstructed metal or MAR CT images. Conclusions: The presented method works solely on the original projection data itself, which yields some advantages compared to distance measures in the image domain using two data sets. Besides this, no parameters have to be chosen manually. The new metric is a useful evaluation alternative when no reference data are available.
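The core idea, re-projecting the reconstruction and comparing against the measured sinogram, can be sketched with a toy parallel-beam projector (illustrative only; a real implementation would use the scanner's actual forward model, and the paper's similarity measure may differ from the RMS difference used here):

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, n_angles=8):
    """Toy parallel-beam forward projection: column sums of rotated copies."""
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

def consistency_metric(reco, measured_sino):
    """Reference-free artifact measure: RMS difference between the forward
    projection of the reconstruction and the measured projection data."""
    diff = forward_project(reco, measured_sino.shape[0]) - measured_sino
    return float(np.sqrt(np.mean(diff**2)))
```

A reconstruction that is consistent with the raw data scores near zero; metal artifacts (or an over-aggressive MAR) increase the discrepancy.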

  9. WE-AB-209-07: Explicit and Convex Optimization of Plan Quality Metrics in Intensity-Modulated Radiation Therapy Treatment Planning

    International Nuclear Information System (INIS)

    Engberg, L; Eriksson, K; Hardemark, B; Forsgren, A

    2016-01-01

Purpose: To formulate objective functions of a multicriteria fluence map optimization model that correlate well with plan quality metrics, and to solve this multicriteria model by convex approximation. Methods: In this study, objectives of a multicriteria model are formulated to explicitly either minimize or maximize a dose-at-volume measure. Given the widespread agreement that dose-at-volume levels play important roles in plan quality assessment, these objectives correlate well with plan quality metrics. This is in contrast to the conventional objectives, which are to maximize clinical goal achievement by relating to deviations from given dose-at-volume thresholds: while balancing the new objectives means explicitly balancing dose-at-volume levels, balancing the conventional objectives effectively means balancing deviations. Constituted by the inherently non-convex dose-at-volume measure, the new objectives are approximated by the convex mean-tail-dose measure (CVaR measure), yielding a convex approximation of the multicriteria model. Results: Advantages of using the convex approximation are investigated through juxtaposition with the conventional objectives in a computational study of two patient cases. Clinical goals of each case respectively point out three ROI dose-at-volume measures to be considered for plan quality assessment. This is translated in the convex approximation into minimizing three mean-tail-dose measures. Evaluations of the three ROI dose-at-volume measures on Pareto optimal plans are used to represent plan quality of the Pareto sets. Besides providing increased accuracy in terms of feasibility of solutions, the convex approximation generates Pareto sets with overall improved plan quality. In one case, the Pareto set generated by the convex approximation entirely dominates that generated with the conventional objectives. Conclusion: The initial computational study indicates that the convex approximation outperforms the conventional objectives.
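
The mean-tail-dose (CVaR) surrogate for dose-at-volume can be sketched as follows (a simplified illustration assuming uniform voxel volumes; not the authors' implementation):

```python
def mean_upper_tail_dose(voxel_doses, v):
    """Upper mean-tail-dose (CVaR): the mean dose over the hottest fraction
    `v` of voxels. It is a convex surrogate for the non-convex dose-at-volume
    D_v and always upper-bounds it, so minimizing it conservatively
    controls D_v. Sketch assuming equal voxel volumes."""
    d = sorted(voxel_doses, reverse=True)   # hottest voxels first
    k = max(1, round(v * len(d)))           # number of voxels in the tail
    return sum(d[:k]) / k
```

For example, minimizing the mean dose of the hottest 5% of an organ-at-risk is a convex stand-in for constraining D5%.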

  10. Incorporating big data into treatment plan evaluation: Development of statistical DVH metrics and visualization dashboards

    Directory of Open Access Journals (Sweden)

    Charles S. Mayo, PhD

    2017-07-01

    Conclusions: Statistical DVH offers an easy-to-read, detailed, and comprehensive way to visualize the quantitative comparison with historical experiences and among institutions. WES and GEM metrics offer a flexible means of incorporating discrete threshold-prioritizations and historic context into a set of standardized scoring metrics. Together, they provide a practical approach for incorporating big data into clinical practice for treatment plan evaluations.

  11. Performance metrics for the evaluation of hyperspectral chemical identification systems

    Science.gov (United States)

    Truslow, Eric; Golowich, Steven; Manolakis, Dimitris; Ingle, Vinay

    2016-02-01

    Remote sensing of chemical vapor plumes is a difficult but important task for many military and civilian applications. Hyperspectral sensors operating in the long-wave infrared regime have well-demonstrated detection capabilities. However, the identification of a plume's chemical constituents, based on a chemical library, is a multiple hypothesis testing problem which standard detection metrics do not fully describe. We propose using an additional performance metric for identification based on the so-called Dice index. Our approach partitions and weights a confusion matrix to develop both the standard detection metrics and identification metric. Using the proposed metrics, we demonstrate that the intuitive system design of a detector bank followed by an identifier is indeed justified when incorporating performance information beyond the standard detection metrics.
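
A set-based form of the Dice index for identification can be sketched as follows (hypothetical chemical names; the paper derives the metric from a partitioned, weighted confusion matrix rather than raw sets):

```python
def dice_index(true_set, predicted_set):
    """Dice index between the true set of chemicals present in a plume and
    the set reported by the identifier: 2*TP / (2*TP + FP + FN).
    1.0 means a perfect identification, 0.0 means no overlap."""
    tp = len(true_set & predicted_set)   # correctly identified chemicals
    fp = len(predicted_set - true_set)   # spurious identifications
    fn = len(true_set - predicted_set)   # missed chemicals
    if tp + fp + fn == 0:
        return 1.0                       # nothing to find, nothing reported
    return 2 * tp / (2 * tp + fp + fn)
```

Unlike a per-pixel detection rate, this score degrades both when the identifier misses a constituent and when it over-reports library entries.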

  12. Web metrics for library and information professionals

    CERN Document Server

    Stuart, David

    2014-01-01

    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional.Th...

  13. Evaluation of Vehicle-Based Crash Severity Metrics.

    Science.gov (United States)

    Tsoi, Ada H; Gabler, Hampton C

    2015-01-01

Vehicle change in velocity (delta-v) is a widely used crash severity metric for estimating occupant injury risk. Despite its widespread use, delta-v has several limitations. Of most concern, delta-v is a vehicle-based metric which does not consider the crash pulse or the performance of occupant restraints, e.g., seatbelts and airbags. Such criticisms have prompted the search for alternative impact severity metrics based upon vehicle kinematics. The purpose of this study was to assess the ability of the occupant impact velocity (OIV), acceleration severity index (ASI), vehicle pulse index (VPI), and maximum delta-v (delta-v) to predict serious injury in real world crashes. The study was based on the analysis of event data recorders (EDRs) downloaded from the National Automotive Sampling System / Crashworthiness Data System (NASS-CDS) 2000-2013 cases. All vehicles in the sample were GM passenger cars and light trucks involved in a frontal collision. Rollover crashes were excluded. Vehicles were restricted to single-event crashes that caused an airbag deployment. All EDR data were checked for a successful, complete recording of the event, including a complete crash pulse. The maximum abbreviated injury scale (MAIS) was used to describe occupant injury outcome. Drivers were categorized into either a non-seriously injured group (MAIS2-) or a seriously injured group (MAIS3+), based on the severity of any injuries to the thorax, abdomen, and spine. ASI and OIV were calculated according to the Manual for Assessing Safety Hardware. VPI was calculated according to ISO/TR 12353-3, with vehicle-specific parameters determined from U.S. New Car Assessment Program crash tests. Using binary logistic regression, the cumulative probability of injury risk was determined for each metric and assessed for statistical significance, goodness-of-fit, and prediction accuracy. The dataset included 102,744 vehicles. A Wald chi-square test showed each vehicle-based crash severity metric
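
The final step, a binary logistic model of serious-injury probability as a function of a severity metric, has this general form (the coefficients b0 and b1 are illustrative placeholders, not values fitted in the study):

```python
import math

def injury_risk(metric_value, b0, b1):
    """Binary logistic model: probability of a MAIS3+ injury given a crash
    severity metric (delta-v, OIV, ASI, or VPI). Coefficients b0 (intercept)
    and b1 (slope) would be estimated from field data; any numbers used
    here are purely illustrative."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * metric_value)))
```

Fitting one such curve per candidate metric and comparing goodness-of-fit and prediction accuracy is what allows the metrics to be ranked against each other.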

  14. Sigma Routing Metric for RPL Protocol

    Directory of Open Access Journals (Sweden)

    Paul Sanmartin

    2018-04-01

Full Text Available This paper presents the adaptation of a specific metric for the RPL protocol in the objective function MRHOF. Among the functions standardized by IETF, we find OF0, which is based on the minimum hop count, as well as MRHOF, which is based on the Expected Transmission Count (ETX). However, when the network becomes denser or the number of nodes increases, both OF0 and MRHOF introduce long hops, which can generate a bottleneck that restricts the network. The adaptation is proposed to optimize both OFs through a new routing metric. To solve the above problem, the metrics of the minimum number of hops and the ETX are combined by designing a new routing metric called SIGMA-ETX, in which the best route is calculated using the standard deviation of ETX values between each node, as opposed to working with the ETX average along the route. This method ensures better routing performance in dense sensor networks. The simulations are done in the Cooja simulator, based on the Contiki operating system. The simulations showed that the proposed optimization outperforms both OF0 and MRHOF by a high margin in terms of network latency, packet delivery ratio, lifetime, and power consumption.
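
A minimal sketch of the SIGMA-ETX idea, scoring a candidate route by the dispersion of its per-hop ETX values rather than their mean (illustrative only; the paper's exact formulation and tie-breaking may differ):

```python
import statistics

def sigma_etx(per_hop_etx):
    """SIGMA-ETX route cost (sketch): the standard deviation of the per-hop
    ETX values along a candidate route. Routes made of uniformly good links
    score low; routes that mix short hops with one long, lossy hop score
    high, even if their average ETX is similar."""
    if len(per_hop_etx) < 2:
        return 0.0
    return statistics.pstdev(per_hop_etx)
```

Route selection then becomes `best = min(candidate_routes, key=sigma_etx)`, steering traffic away from the bottleneck-prone long hops that OF0 and MRHOF admit in dense networks.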

  15. Energy functionals for Calabi-Yau metrics

    International Nuclear Information System (INIS)

    Headrick, M; Nassar, A

    2013-01-01

We identify a set of "energy" functionals on the space of metrics in a given Kähler class on a Calabi-Yau manifold, which are bounded below and minimized uniquely on the Ricci-flat metric in that class. Using these functionals, we recast the problem of numerically solving the Einstein equation as an optimization problem. We apply this strategy, using the "algebraic" metrics (metrics for which the Kähler potential is given in terms of a polynomial in the projective coordinates), to the Fermat quartic and to a one-parameter family of quintics that includes the Fermat and conifold quintics. We show that this method yields approximations to the Ricci-flat metric that are exponentially accurate in the degree of the polynomial (except at the conifold point, where the convergence is polynomial), and therefore orders of magnitude more accurate than the balanced metrics, previously studied as approximations to the Ricci-flat metric. The method is relatively fast and easy to implement. On the theoretical side, we also show that the functionals can be used to give a heuristic proof of Yau's theorem.

  16. Volume-based quantitative FDG PET/CT metrics and their association with optimal debulking and progression-free survival in patients with recurrent ovarian cancer undergoing secondary cytoreductive surgery

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, H.A.; Burger, I.A.; Micco, M.; Sosa, R.E.; Weber, W.; Hricak, H.; Sala, E. [Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY (United States); Goldman, D.A. [Memorial Sloan Kettering Cancer Center, Department of Epidemiology and Biostatistics, New York, NY (United States); Chi, D.S. [Memorial Sloan Kettering Cancer Center, Department of Surgery, New York, NY (United States)

    2015-11-15

    Our aim was to evaluate the associations between quantitative {sup 18}F-fluorodeoxyglucose positron-emission tomography (FDG-PET) uptake metrics, optimal debulking (OD) and progression-free survival (PFS) in patients with recurrent ovarian cancer undergoing secondary cytoreductive surgery. Fifty-five patients with recurrent ovarian cancer underwent FDG-PET/CT within 90 days prior to surgery. Standardized uptake values (SUV{sub max}), metabolically active tumour volumes (MTV), and total lesion glycolysis (TLG) were measured on PET. Exact logistic regression, Kaplan-Meier curves and the log-rank test were used to assess associations between imaging metrics, OD and PFS. MTV (p = 0.0025) and TLG (p = 0.0043) were associated with OD; however, there was no significant association between SUV{sub max} and debulking status (p = 0.83). Patients with an MTV above 7.52 mL and/or a TLG above 35.94 g had significantly shorter PFS (p = 0.0191 for MTV and p = 0.0069 for TLG). SUV{sub max} was not significantly related to PFS (p = 0.10). PFS estimates at 3.5 years after surgery were 0.42 for patients with an MTV ≤ 7.52 mL and 0.19 for patients with an MTV > 7.52 mL; 0.46 for patients with a TLG ≤ 35.94 g and 0.15 for patients with a TLG > 35.94 g. FDG-PET metrics that reflect metabolic tumour burden are associated with optimal secondary cytoreductive surgery and progression-free survival in patients with recurrent ovarian cancer. (orig.)
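
The volume-based metrics relate as TLG = MTV × SUVmean, and the reported cut-points can be used to dichotomize patients; a sketch using the study's thresholds of 7.52 mL and 35.94 g (the risk-group labels are illustrative shorthand):

```python
def total_lesion_glycolysis(mtv_ml, suv_mean):
    """TLG = metabolically active tumour volume (mL) x mean SUV
    (standard definition of total lesion glycolysis)."""
    return mtv_ml * suv_mean

def pfs_risk_group(mtv_ml, tlg_g, mtv_cut=7.52, tlg_cut=35.94):
    """Dichotomize a patient by the study's reported cut-points: exceeding
    either threshold placed patients in the significantly shorter-PFS group."""
    return "shorter-PFS" if (mtv_ml > mtv_cut or tlg_g > tlg_cut) else "longer-PFS"
```

Note that SUVmax carries no volume information, which is consistent with the finding that only the volume-weighted metrics (MTV, TLG) were associated with debulking status and PFS.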

  17. Process-level model evaluation: a snow and heat transfer metric

    Science.gov (United States)

    Slater, Andrew G.; Lawrence, David M.; Koven, Charles D.

    2017-04-01

    Land models require evaluation in order to understand results and guide future development. Examining functional relationships between model variables can provide insight into the ability of models to capture fundamental processes and aid in minimizing uncertainties or deficiencies in model forcing. This study quantifies the proficiency of land models to appropriately transfer heat from the soil through a snowpack to the atmosphere during the cooling season (Northern Hemisphere: October-March). Using the basic physics of heat diffusion, we investigate the relationship between seasonal amplitudes of soil versus air temperatures due to insulation from seasonal snow. Observations demonstrate the anticipated exponential relationship of attenuated soil temperature amplitude with increasing snow depth and indicate that the marginal influence of snow insulation diminishes beyond an effective snow depth of about 50 cm. A snow and heat transfer metric (SHTM) is developed to quantify model skill compared to observations. Land models within the CMIP5 experiment vary widely in SHTM scores, and deficiencies can often be traced to model structural weaknesses. The SHTM value for individual models is stable over 150 years of climate, 1850-2005, indicating that the metric is insensitive to climate forcing and can be used to evaluate each model's representation of the insulation process.
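
The heat-diffusion argument implies an exponential attenuation of the soil-temperature amplitude with snow depth, roughly A_soil/A_air ≈ exp(-d/d0); a sketch with an illustrative e-folding depth (not a value fitted in the study):

```python
import math

def amplitude_attenuation(snow_depth_cm, e_folding_cm=20.0):
    """Ratio of seasonal soil-temperature amplitude to air-temperature
    amplitude as a function of snow depth, per the exponential form expected
    from heat diffusion through an insulating snowpack. The e-folding depth
    is an illustrative placeholder, not a fitted parameter."""
    return math.exp(-snow_depth_cm / e_folding_cm)
```

Because the curve is convex, each added centimetre of snow insulates less than the last, which is the "diminishing marginal influence" the observations show beyond roughly 50 cm of effective snow depth.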

  18. Light Water Reactor Sustainability Program Operator Performance Metrics for Control Room Modernization: A Practical Guide for Early Design Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Ronald Boring; Roger Lew; Thomas Ulrich; Jeffrey Joe

    2014-03-01

    As control rooms are modernized with new digital systems at nuclear power plants, it is necessary to evaluate the operator performance using these systems as part of a verification and validation process. There are no standard, predefined metrics available for assessing what is satisfactory operator interaction with new systems, especially during the early design stages of a new system. This report identifies the process and metrics for evaluating human system interfaces as part of control room modernization. The report includes background information on design and evaluation, a thorough discussion of human performance measures, and a practical example of how the process and metrics have been used as part of a turbine control system upgrade during the formative stages of design. The process and metrics are geared toward generalizability to other applications and serve as a template for utilities undertaking their own control room modernization activities.

  19. Deep Transfer Metric Learning.

    Science.gov (United States)

    Junlin Hu; Jiwen Lu; Yap-Peng Tan; Jie Zhou

    2016-12-01

    Conventional metric learning methods usually assume that the training and test samples are captured in similar scenarios so that their distributions are assumed to be the same. This assumption does not hold in many real visual recognition applications, especially when samples are captured across different data sets. In this paper, we propose a new deep transfer metric learning (DTML) method to learn a set of hierarchical nonlinear transformations for cross-domain visual recognition by transferring discriminative knowledge from the labeled source domain to the unlabeled target domain. Specifically, our DTML learns a deep metric network by maximizing the inter-class variations and minimizing the intra-class variations, and minimizing the distribution divergence between the source domain and the target domain at the top layer of the network. To better exploit the discriminative information from the source domain, we further develop a deeply supervised transfer metric learning (DSTML) method by including an additional objective on DTML, where the output of both the hidden layers and the top layer are optimized jointly. To preserve the local manifold of input data points in the metric space, we present two new methods, DTML with autoencoder regularization and DSTML with autoencoder regularization. Experimental results on face verification, person re-identification, and handwritten digit recognition validate the effectiveness of the proposed methods.

  20. New Metrics for Economic Evaluation in the Presence of Heterogeneity: Focusing on Evaluating Policy Alternatives Rather than Treatment Alternatives.

    Science.gov (United States)

    Kim, David D; Basu, Anirban

    2017-11-01

    Cost-effectiveness analysis (CEA) methods fail to acknowledge that where cost-effectiveness differs across subgroups, there may be differential adoption of technology. Also, current CEA methods are not amenable to incorporating the impact of policy alternatives that potentially influence the adoption behavior. Unless CEA methods are extended to allow for a comparison of policies rather than simply treatments, their usefulness to decision makers may be limited. We conceptualize new metrics, which estimate the realized value of technology from policy alternatives, through introducing subgroup-specific adoption parameters into existing metrics, incremental cost-effectiveness ratios (ICERs) and Incremental Net Monetary Benefits (NMBs). We also provide the Loss with respect to Efficient Diffusion (LED) metrics, which link with existing value of information metrics but take a policy evaluation perspective. We illustrate these metrics using policies on treatment with combination therapy with a statin plus a fibrate v. statin monotherapy for patients with diabetes and mixed dyslipidemia. Under the traditional approach, the population-level ICER of combination v. monotherapy was $46,000/QALY. However, after accounting for differential rates of adoption of the combination therapy (7.2% among males and 4.3% among females), the modified ICER was $41,733/QALY, due to the higher rate of adoption in the more cost-effective subgroup (male). The LED metrics showed that an education program to increase the uptake of combination therapy among males would provide the largest economic returns due to the significant underutilization of the combination therapy among males under the current policy. This framework may have the potential to improve the decision-making process by producing metrics that are better aligned with the specific policy decisions under consideration for a specific technology.
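
The modification amounts to weighting each subgroup's incremental costs and effects by its adoption rate before forming the ratio; a sketch with illustrative numbers (not the study's data):

```python
def adoption_weighted_icer(subgroups):
    """Population-level ICER with subgroup-specific adoption (sketch).
    Each subgroup is (population share, adoption rate, incremental cost,
    incremental QALYs); only adopters realize the increments."""
    d_cost = sum(share * adopt * dc for share, adopt, dc, dq in subgroups)
    d_qaly = sum(share * adopt * dq for share, adopt, dc, dq in subgroups)
    return d_cost / d_qaly
```

When adoption is higher in the more cost-effective subgroup, the weighted ICER falls below the equal-adoption ICER, which is the direction of the combination-therapy result reported above.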

  1. Metrics Evolution in an Energy Research and Development Program

    International Nuclear Information System (INIS)

    Dixon, Brent

    2011-01-01

    All technology programs progress through three phases: Discovery, Definition, and Deployment. The form and application of program metrics needs to evolve with each phase. During the discovery phase, the program determines what is achievable. A set of tools is needed to define program goals, to analyze credible technical options, and to ensure that the options are compatible and meet the program objectives. A metrics system that scores the potential performance of technical options is part of this system of tools, supporting screening of concepts and aiding in the overall definition of objectives. During the definition phase, the program defines what specifically is wanted. What is achievable is translated into specific systems and specific technical options are selected and optimized. A metrics system can help with the identification of options for optimization and the selection of the option for deployment. During the deployment phase, the program shows that the selected system works. Demonstration projects are established and classical systems engineering is employed. During this phase, the metrics communicate system performance. This paper discusses an approach to metrics evolution within the Department of Energy's Nuclear Fuel Cycle R and D Program, which is working to improve the sustainability of nuclear energy.

  2. Evaluating hydrological model performance using information theory-based metrics

    Science.gov (United States)

Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...

  3. Supplier selection using different metric functions

    Directory of Open Access Journals (Sweden)

    Omosigho S.E.

    2015-01-01

Full Text Available Supplier selection is an important component of supply chain management in today’s global competitive environment. Hence, the evaluation and selection of suppliers have received considerable attention in the literature. Many attributes of suppliers, other than cost, are considered in the evaluation and selection process. Therefore, the process of evaluation and selection of suppliers is a multi-criteria decision making process. The methodology adopted to solve the supplier selection problem is intuitionistic fuzzy TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution). Generally, TOPSIS is based on the concept of minimum distance from the positive ideal solution and maximum distance from the negative ideal solution. We examine the deficiencies of using only one metric function in TOPSIS and propose the use of spherical metric function in addition to the commonly used metric functions. For empirical supplier selection problems, more than one metric function should be used.
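
The TOPSIS closeness coefficient with a pluggable metric function can be sketched as follows (Euclidean distance shown; a Manhattan or spherical metric can be substituted for `dist`, which is the paper's point):

```python
import math

def topsis_closeness(alternative, ideal, anti_ideal, dist):
    """TOPSIS relative closeness coefficient: d(anti-ideal) /
    (d(ideal) + d(anti-ideal)). Ranges from 0 (worst) to 1 (best);
    `dist` is any metric function over the weighted, normalized scores."""
    d_pos = dist(alternative, ideal)       # distance to positive ideal
    d_neg = dist(alternative, anti_ideal)  # distance to negative ideal
    return d_neg / (d_pos + d_neg)

def euclidean(u, v):
    """Commonly used metric; one of several candidates the paper compares."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
```

Ranking suppliers under two or more choices of `dist` and checking whether the orderings agree is one way to act on the paper's recommendation to use more than one metric function.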

  4. Turbine Airfoil Optimization Using Quasi-3D Analysis Codes

    Directory of Open Access Journals (Sweden)

    Sanjay Goel

    2009-01-01

    Full Text Available A new approach to optimize the geometry of a turbine airfoil by simultaneously designing multiple 2D sections of the airfoil is presented in this paper. The complexity of 3D geometry modeling is circumvented by generating multiple 2D airfoil sections and constraining their geometry in the radial direction using first- and second-order polynomials that ensure smoothness in the radial direction. The flow fields of candidate geometries obtained during optimization are evaluated using a quasi-3D, inviscid, CFD analysis code. An inviscid flow solver is used to reduce the execution time of the analysis. Multiple evaluation criteria based on the Mach number profile obtained from the analysis of each airfoil cross-section are used for computing a quality metric. A key contribution of the paper is the development of metrics that emulate the perception of the human designer in visually evaluating the Mach Number distribution. A mathematical representation of the evaluation criteria coupled with a parametric geometry generator enables the use of formal optimization techniques in the design. The proposed approach is implemented in the optimal design of a low-pressure turbine nozzle.

  5. The metrics of science and technology

    CERN Document Server

    Geisler, Eliezer

    2000-01-01

    Dr. Geisler's far-reaching, unique book provides an encyclopedic compilation of the key metrics to measure and evaluate the impact of science and technology on academia, industry, and government. Focusing on such items as economic measures, patents, peer review, and other criteria, and supported by an extensive review of the literature, Dr. Geisler gives a thorough analysis of the strengths and weaknesses inherent in metric design, and in the use of the specific metrics he cites. His book has already received prepublication attention, and will prove especially valuable for academics in technology management, engineering, and science policy; industrial R&D executives and policymakers; government science and technology policymakers; and scientists and managers in government research and technology institutions. Geisler maintains that the application of metrics to evaluate science and technology at all levels illustrates the variety of tools we currently possess. Each metric has its own unique strengths and...

  6. Performance evaluation of routing metrics for wireless mesh networks

    CSIR Research Space (South Africa)

    Nxumalo, SL

    2009-08-01

Full Text Available for WMN. The routing metrics have not been compared with QoS parameters. This paper presents work in progress on a project comparing the performance of different routing metrics in WMN using a wireless test bed.

  7. Optimization of a simplified automobile finite element model using time varying injury metrics.

    Science.gov (United States)

    Gaewsky, James P; Danelson, Kerry A; Weaver, Caitlin M; Stitzel, Joel D

    2014-01-01

In 2011, frontal crashes resulted in 55% of passenger car injuries with 10,277 fatalities and 866,000 injuries in the United States. To better understand frontal crash injury mechanisms, human body finite element models (FEMs) can be used to reconstruct Crash Injury Research and Engineering Network (CIREN) cases. A limitation of this method is the paucity of vehicle FEMs; therefore, we developed a functionally equivalent simplified vehicle model. The New Car Assessment Program (NCAP) data for our selected vehicle was from a frontal collision with a Hybrid III (H3) Anthropomorphic Test Device (ATD) occupant. From NCAP test reports, the vehicle geometry was created and the H3 ATD was positioned. The material and component properties optimized using a variation study process were: steering column shear bolt fracture force and stroke resistance, seatbelt pretensioner force, frontal and knee bolster airbag stiffness, and belt friction through the D-ring. These parameters were varied using three successive Latin Hypercube Designs of Experiments with 130-200 simulations each. The H3 injury response was compared to the reported NCAP frontal test results for the head, chest and pelvis accelerations, and seat belt and femur forces. The phase, magnitude, and comprehensive error factors from a Sprague and Geers analysis were calculated for each injury metric and then combined to determine the simulations with the best match to the crash test. The Sprague and Geers analyses typically yield error factors ranging from 0 to 1, with lower scores indicating a closer match. The total body injury response error factor for the most optimized simulation from each round of the variation study decreased from 0.466 to 0.395 to 0.360. This procedure to optimize vehicle FEMs is a valuable tool to conduct future CIREN case reconstructions in a variety of vehicles.
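
The Sprague and Geers comparison yields magnitude (M), phase (P), and combined (C = sqrt(M^2 + P^2)) error factors; a minimal sketch for two curves sampled at the same time points (standard textbook form, not the authors' code):

```python
import math

def sprague_geers(measured, computed):
    """Sprague & Geers error factors between a measured (test) curve and a
    computed (simulation) curve sampled at the same times. Returns
    (M, P, C): magnitude error, phase error, and combined error
    C = sqrt(M^2 + P^2), where 0 means a perfect match."""
    mm = sum(m * m for m in measured)   # integral of measured^2 (discrete)
    cc = sum(c * c for c in computed)   # integral of computed^2
    mc = sum(m * c for m, c in zip(measured, computed))
    M = math.sqrt(cc / mm) - 1.0
    # clamp for floating-point safety before acos
    P = math.acos(max(-1.0, min(1.0, mc / math.sqrt(mm * cc)))) / math.pi
    return M, P, math.sqrt(M * M + P * P)
```

M isolates amplitude disagreement (a uniformly scaled curve changes only M), while P isolates timing disagreement, which is why the two are reported separately before being combined.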

  8. Using research metrics to evaluate the International Atomic Energy Agency guidelines on quality assurance for R&D

    Energy Technology Data Exchange (ETDEWEB)

    Bodnarczuk, M.

    1994-06-01

    The objective of the International Atomic Energy Agency (IAEA) Guidelines on Quality Assurance for R&D is to provide guidance for developing quality assurance (QA) programs for R&D work on items, services, and processes important to safety, and to support the siting, design, construction, commissioning, operation, and decommissioning of nuclear facilities. The standard approach to writing papers describing new quality guidelines documents is to present a descriptive overview of the contents of the document. I will depart from this approach. Instead, I will first discuss a conceptual framework of metrics for evaluating and improving basic and applied experimental science as well as the associated role that quality management should play in understanding and implementing these metrics. I will conclude by evaluating how well the IAEA document addresses the metrics from this conceptual framework and the broader principles of quality management.

  9. Metrics for Polyphonic Sound Event Detection

    Directory of Open Access Journals (Sweden)

    Annamaria Mesaros

    2016-05-01

    Full Text Available This paper presents and discusses various metrics proposed for evaluation of polyphonic sound event detection systems used in realistic situations where there are typically multiple sound sources active simultaneously. The system output in this case contains overlapping events, marked as multiple sounds detected as being active at the same time. The polyphonic system output requires a suitable procedure for evaluation against a reference. Metrics from neighboring fields such as speech recognition and speaker diarization can be used, but they need to be partially redefined to deal with the overlapping events. We present a review of the most common metrics in the field and the way they are adapted and interpreted in the polyphonic case. We discuss segment-based and event-based definitions of each metric and explain the consequences of instance-based and class-based averaging using a case study. In parallel, we provide a toolbox containing implementations of presented metrics.
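
A segment-based F-score of the kind discussed can be sketched as follows (segments are fixed-length time slices; overlapping events are handled naturally because each segment holds a set of active classes — a simplified illustration, not the toolbox implementation):

```python
def segment_based_f1(reference, estimated):
    """Segment-based F-score for polyphonic sound event detection (sketch).
    `reference` and `estimated` map a segment index to the set of event
    classes active in that segment; TP/FP/FN are accumulated over segments,
    so simultaneously active (overlapping) events are counted per class."""
    tp = fp = fn = 0
    for seg in set(reference) | set(estimated):
        ref = reference.get(seg, set())
        est = estimated.get(seg, set())
        tp += len(ref & est)   # correctly detected active classes
        fp += len(est - ref)   # classes reported but not active
        fn += len(ref - est)   # active classes that were missed
    if tp + fp + fn == 0:
        return 1.0
    return 2 * tp / (2 * tp + fp + fn)
```

An event-based variant would instead match whole events by onset (and optionally offset) tolerance, which is the segment-based/event-based distinction the paper reviews.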

  10. Evaluation of Deposited Sediment and Macroinvertebrate Metrics Used to Quantify Biological Response to Excessive Sedimentation in Agricultural Streams

    Science.gov (United States)

    Sutherland, Andrew B.; Culp, Joseph M.; Benoy, Glenn A.

    2012-07-01

    The objective of this study was to evaluate which macroinvertebrate and deposited sediment metrics are best for determining effects of excessive sedimentation on stream integrity. Fifteen instream sediment metrics, with the strongest relationship to land cover, were compared to riffle macroinvertebrate metrics in streams ranging across a gradient of land disturbance. Six deposited sediment metrics were strongly related to the relative abundance of Ephemeroptera, Plecoptera and Trichoptera and six were strongly related to the modified family biotic index (MFBI). Few functional feeding groups and habit groups were significantly related to deposited sediment, and this may be related to the focus on riffle, rather than reach-wide macroinvertebrates, as reach-wide sediment metrics were more closely related to human land use. Our results suggest that the coarse-level deposited sediment metric, visual estimate of fines, and the coarse-level biological index, MFBI, may be useful in biomonitoring efforts aimed at determining the impact of anthropogenic sedimentation on stream biotic integrity.

  11. Prognostic Performance Metrics

    Data.gov (United States)

    National Aeronautics and Space Administration — This chapter presents several performance metrics for offline evaluation of prognostics algorithms. A brief overview of different methods employed for performance...

  12. Degraded visual environment image/video quality metrics

    Science.gov (United States)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.

  13. A suite of standard post-tagging evaluation metrics can help assess tag retention for field-based fish telemetry research

    Science.gov (United States)

    Gerber, Kayla M.; Mather, Martha E.; Smith, Joseph M.

    2017-01-01

    Telemetry can inform many scientific and research questions if a context exists for integrating individual studies into the larger body of literature. Creating cumulative distributions of post-tagging evaluation metrics would allow individual researchers to relate their telemetry data to other studies. Widespread reporting of standard metrics is a precursor to the calculation of benchmarks for these distributions (e.g., mean, SD, 95% CI). Here we illustrate five types of standard post-tagging evaluation metrics using acoustically tagged Blue Catfish (Ictalurus furcatus) released into a Kansas reservoir. These metrics included: (1) percent of tagged fish detected overall, (2) percent of tagged fish detected daily using abacus plot data, (3) average number of (and percent of available) receiver sites visited, (4) date of last movement between receiver sites (and percent of tagged fish moving during that time period), and (5) number (and percent) of fish that egressed through exit gates. These metrics were calculated for one to three time periods: early in the study and at the end of the study (5 months). Over three-quarters of our tagged fish were detected early (85%) and at the end (85%) of the study. Using abacus plot data, all tagged fish (100%) were detected at least one day and 96% were detected for > 5 days early in the study. On average, tagged Blue Catfish visited 9 (50%) and 13 (72%) of 18 within-reservoir receivers early and at the end of the study, respectively. At the end of the study, 73% of all tagged fish were detected moving between receivers. Creating statistical benchmarks for individual metrics can provide useful reference points. In addition, combining multiple metrics can inform ecology and research design. Consequently, individual researchers and the field of telemetry research can benefit from widespread, detailed, and standard reporting of post-tagging detection metrics.
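    A minimal sketch of how two of these standard metrics, (1) percent of tagged fish detected overall and (3) average number (and percent of available) receiver sites visited, could be computed from raw detection records. The helper name and the (fish, receiver) tuple layout are illustrative assumptions, not taken from the study:

```python
def post_tagging_metrics(detections, tagged_ids, receiver_ids):
    """Sketch of two standard post-tagging metrics.

    detections   -- iterable of (fish_id, receiver_id) detection events
    tagged_ids   -- all fish that were tagged and released
    receiver_ids -- all available receiver sites
    """
    # Group the receiver sites each fish was seen at.
    by_fish = {}
    for fish, receiver in detections:
        by_fish.setdefault(fish, set()).add(receiver)

    # Metric (1): percent of tagged fish detected at all.
    pct_detected = 100.0 * len(by_fish) / len(tagged_ids)

    # Metric (3): average number (and percent) of receiver sites visited,
    # computed over the fish that were detected.
    visits = [len(sites) for sites in by_fish.values()]
    avg_sites = sum(visits) / len(visits) if visits else 0.0
    pct_sites = 100.0 * avg_sites / len(receiver_ids)
    return pct_detected, avg_sites, pct_sites
```

With standard reporting of this kind, each study's values could be placed on the cumulative distributions the authors propose.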

  14. Optimization of VPSC Model Parameters for Two-Phase Titanium Alloys: Flow Stress Vs Orientation Distribution Function Metrics

    Science.gov (United States)

    Miller, V. M.; Semiatin, S. L.; Szczepanski, C.; Pilchak, A. L.

    2018-06-01

    The ability to predict the evolution of crystallographic texture during hot work of titanium alloys in the α + β temperature regime is of great significance to numerous engineering disciplines; however, research efforts are complicated by the rapid changes in phase volume fractions and flow stresses with temperature in addition to topological considerations. The viscoplastic self-consistent (VPSC) polycrystal plasticity model is employed to simulate deformation in the two-phase field. Newly developed parameter selection schemes utilizing automated optimization based on two different error metrics are considered. In the first optimization scheme, which is commonly used in the literature, the VPSC parameters are selected based on the quality of fit between experimental and simulated flow curves at six hot-working temperatures. Under the second, newly developed scheme, parameters are selected to minimize the difference between the simulated and experimentally measured α textures after accounting for the β → α transformation upon cooling. It is demonstrated that both methods result in good qualitative matches for the experimental α phase texture, but texture-based optimization results in a substantially better quantitative orientation distribution function match.
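    The first scheme scores candidate VPSC parameter sets by the misfit between simulated and measured flow curves; a common choice for such a misfit is a root-mean-square error, sketched below. The function name and flat list-of-stresses layout are assumptions for illustration, not the paper's exact formulation:

```python
def flow_curve_rms(simulated, measured):
    """Root-mean-square misfit between a simulated and a measured flow
    curve, sampled at the same strain points (one value per point)."""
    if len(simulated) != len(measured):
        raise ValueError("curves must be sampled at the same points")
    n = len(measured)
    return (sum((s - m) ** 2 for s, m in zip(simulated, measured)) / n) ** 0.5
```

An optimizer would evaluate this misfit (summed over the six hot-working temperatures) for each candidate parameter set and keep the minimizer; the texture-based scheme replaces this cost with an orientation-distribution-function difference.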

  15. Metric Learning for Hyperspectral Image Segmentation

    Science.gov (United States)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This transform defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
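    The core idea, a multiclass LDA transform whose image space defines a distance metric, can be sketched in pure NumPy. This is a generic LDA sketch under stated assumptions (a small ridge term regularizes a possibly singular scatter matrix), not the authors' implementation:

```python
import numpy as np

def lda_transform(X, y, n_components=1):
    """Fit a multiclass LDA transform: directions maximizing between-class
    scatter relative to within-class scatter on labeled training spectra."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    Sw += 1e-6 * np.eye(d)  # ridge term: keeps Sw invertible (assumption)
    evals, evecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_components]]

def learned_distance(W, a, b):
    """Distance between two spectra in the LDA-transformed space; this is
    the learned, task-specific similarity measure."""
    return float(np.linalg.norm((a - b) @ W))
```

A graph-based segmenter would then use `learned_distance` in place of a task-agnostic Euclidean distance when weighting edges between neighboring pixels.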

  16. Use of social media in health promotion: purposes, key performance indicators, and evaluation metrics.

    Science.gov (United States)

    Neiger, Brad L; Thackeray, Rosemary; Van Wagenen, Sarah A; Hanson, Carl L; West, Joshua H; Barnes, Michael D; Fagen, Michael C

    2012-03-01

    Despite the expanding use of social media, little has been published about its appropriate role in health promotion, and even less has been written about evaluation. The purpose of this article is threefold: (a) outline purposes for social media in health promotion, (b) identify potential key performance indicators associated with these purposes, and (c) propose evaluation metrics for social media related to the key performance indicators. Process evaluation is presented in this article as an overarching evaluation strategy for social media.

  17. Clinical Outcome Metrics for Optimization of Robust Training

    Science.gov (United States)

    Ebert, D.; Byrne, V. E.; McGuire, K. M.; Hurst, V. W., IV; Kerstman, E. L.; Cole, R. W.; Sargsyan, A. E.; Garcia, K. M.; Reyes, D.; Young, M.

    2016-01-01

    (pre-IMM analysis) and overall mitigation of the mission medical impact (IMM analysis); 2) refine the procedure outcome and clinical outcome metrics themselves; 3) refine or develop innovative medical training products and solutions to maximize CMO performance; and 4) validate the methods and products of this experiment for operational use in the planning, execution, and quality assurance of the CMO training process. The team has finalized training protocols and developed a software training/testing tool in collaboration with Butler Graphics (Detroit, MI). In addition to the "hands on" medical procedure modules, the software includes a differential diagnosis exercise (limited clinical decision support tool) to evaluate the diagnostic skills of participants. Human subject testing will occur over the next year.

  18. Metrics for energy resilience

    International Nuclear Information System (INIS)

    Roege, Paul E.; Collier, Zachary A.; Mancillas, James; McDonagh, John A.; Linkov, Igor

    2014-01-01

    Energy lies at the backbone of any advanced society and constitutes an essential prerequisite for economic growth, social order and national defense. However, there is an Achilles' heel in today's energy and technology relationship: namely, a precarious intimacy between energy and the fiscal, social, and technical systems it supports. Recently, widespread and persistent disruptions in energy systems have highlighted the extent of this dependence and the vulnerability of increasingly optimized systems to changing conditions. Resilience is an emerging concept that offers to reconcile considerations of performance under dynamic environments and across multiple time frames by supplementing traditionally static system performance measures to consider behaviors under changing conditions and complex interactions among physical, information and human domains. This paper identifies metrics useful to implement guidance for energy-related planning, design, investment, and operation. Recommendations are presented using a matrix format to provide a structured and comprehensive framework of metrics relevant to a system's energy resilience. The study synthesizes previously proposed metrics and emergent resilience literature to provide a multi-dimensional model intended for use by leaders and practitioners as they transform our energy posture from one of stasis and reaction to one that is proactive and which fosters sustainable growth. - Highlights: • Resilience is the ability of a system to recover from adversity. • There is a need for methods to quantify and measure system resilience. • We developed a matrix-based approach to generate energy resilience metrics. • These metrics can be used in energy planning, system design, and operations

  19. Metrics Are Needed for Collaborative Software Development

    Directory of Open Access Journals (Sweden)

    Mojgan Mohtashami

    2011-10-01

    Full Text Available There is a need for metrics for inter-organizational collaborative software development projects, encompassing management and technical concerns. In particular, metrics are needed that are aimed at the collaborative aspect itself, such as readiness for collaboration, the quality and/or the costs and benefits of collaboration in a specific ongoing project. We suggest questions and directions for such metrics, spanning the full lifespan of a collaborative project, from considering the suitability of collaboration through evaluating ongoing projects to final evaluation of the collaboration.

  20. Predicting class testability using object-oriented metrics

    OpenAIRE

    Bruntink, Magiel; Deursen, Arie

    2004-01-01

    textabstractIn this paper we investigate factors of the testability of object-oriented software systems. The starting point is given by a study of the literature to obtain both an initial model of testability and existing OO metrics related to testability. Subsequently, these metrics are evaluated by means of two case studies of large Java systems for which JUnit test cases exist. The goal of this paper is to define and evaluate a set of metrics that can be used to assess the testability of t...

  1. Comparison of luminance based metrics in different lighting conditions

    DEFF Research Database (Denmark)

    Wienold, J.; Kuhn, T.E.; Christoffersen, J.

    In this study, we evaluate established and newly developed metrics for predicting glare using data from three different research studies. The evaluation covers two different targets: 1. How well does the user’s perception of glare magnitude correlate with the predictions of the glare metrics? 2. How well...... do the glare metrics describe the subjects’ disturbance by glare? We applied Spearman correlations, logistic regressions and an accuracy evaluation, based on an ROC-analysis. The results show that five of the twelve investigated metrics are failing at least one of the statistical tests. The other...... seven metrics CGI, modified DGI, DGP, Ev, average luminance of the image Lavg, UGP and UGR are passing all statistical tests. DGP, CGI, DGI_mod and UGP have the largest AUC and might be slightly more robust. The accuracy of the predictions of the aforementioned seven metrics for the disturbance by glare lies...

  2. A matrix-algebraic algorithm for the Riemannian logarithm on the Stiefel manifold under the canonical metric

    DEFF Research Database (Denmark)

    Zimmermann, Ralf

    2017-01-01

    We derive a numerical algorithm for evaluating the Riemannian logarithm on the Stiefel manifold with respect to the canonical metric. In contrast to the optimization-based approach known from the literature, we work from a purely matrix-algebraic perspective. Moreover, we prove that the algorithm...... converges locally and exhibits a linear rate of convergence....

  3. A matrix-algebraic algorithm for the Riemannian logarithm on the Stiefel manifold under the canonical metric

    OpenAIRE

    Zimmermann, Ralf

    2016-01-01

    We derive a numerical algorithm for evaluating the Riemannian logarithm on the Stiefel manifold with respect to the canonical metric. In contrast to the optimization-based approach known from the literature, we work from a purely matrix-algebraic perspective. Moreover, we prove that the algorithm converges locally and exhibits a linear rate of convergence.
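    For context, the canonical metric on the Stiefel manifold St(n, p) that these two records refer to is usually taken to be the following; this is the standard definition from the differential-geometry literature (e.g. Edelman, Arias and Smith), stated here as background since the record itself does not give it:

```latex
\langle \Delta_1, \Delta_2 \rangle_X
  = \operatorname{tr}\!\Bigl(\Delta_1^{\top}\bigl(I - \tfrac{1}{2}\,X X^{\top}\bigr)\Delta_2\Bigr),
\qquad X \in \mathrm{St}(n,p),\quad \Delta_1, \Delta_2 \in T_X \mathrm{St}(n,p).
```

The Riemannian logarithm Log_X(Y) is then the tangent vector Δ with Exp_X(Δ) = Y under this metric, which the cited algorithm evaluates by matrix-algebraic iteration rather than by numerical optimization.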

  4. Daylight metrics and energy savings

    Energy Technology Data Exchange (ETDEWEB)

    Mardaljevic, John; Heschong, Lisa; Lee, Eleanor

    2009-12-31

    The drive towards sustainable, low-energy buildings has increased the need for simple, yet accurate methods to evaluate whether a daylit building meets minimum standards for energy and human comfort performance. Current metrics do not account for the temporal and spatial aspects of daylight, nor of occupants' comfort or interventions. This paper reviews the historical basis of current compliance methods for achieving daylit buildings, proposes a technical basis for development of better metrics, and provides two case study examples to stimulate dialogue on how metrics can be applied in a practical, real-world context.

  5. Self-organizing weights for Internet AS-graphs and surprisingly simple routing metrics

    DEFF Research Database (Denmark)

    Scholz, Jan Carsten; Greiner, Martin

    The transport capacity of Internet-like communication networks and hence their efficiency may be improved by a factor of 5-10 through the use of highly optimized routing metrics, as demonstrated previously. Numerical determination of such routing metrics can be computationally demanding...... metrics. The new metrics have negligible computational cost and result in an approximately 5-fold performance increase, providing distinguished competitiveness with the computationally costly counterparts. They are applicable to very large networks and easy to implement in today's Internet routing...

  6. Network Community Detection on Metric Space

    Directory of Open Access Journals (Sweden)

    Suman Saha

    2015-08-01

    Full Text Available Community detection in a complex network is an important problem of much interest in recent years. In general, a community detection algorithm chooses an objective function and captures the communities of the network by optimizing the objective function, and then, one uses various heuristics to solve the optimization problem to extract the interesting communities for the user. In this article, we demonstrate the procedure to transform a graph into points of a metric space and develop the methods of community detection with the help of a metric defined for a pair of points. We have also studied and analyzed the community structure of the network therein. The results obtained with our approach are very competitive with most of the well-known algorithms in the literature, as demonstrated on a large collection of datasets. Moreover, the running time of our algorithm is considerably lower than that of other methods, in agreement with the theoretical findings.
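    One simple way to realize "transform a graph into points of a metric space" is to embed each node as its vector of shortest-path distances and then group nodes whose points are close. The embedding choice, the L1 metric, and the merge threshold below are illustrative assumptions, not the paper's specific construction:

```python
from collections import deque

def shortest_path_embedding(adj):
    """Embed node s as its vector of BFS shortest-path distances to all
    nodes -- one simple graph-to-metric-space transform."""
    n = len(adj)
    emb = []
    for s in range(n):
        dist = [float("inf")] * n
        dist[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[v] == float("inf"):
                    dist[v] = dist[u] + 1
                    queue.append(v)
        emb.append(dist)
    return emb

def communities(adj, threshold=5):
    """Toy detection step: merge adjacent nodes whose embedded points are
    within `threshold` in the L1 metric (union-find over endpoints)."""
    emb = shortest_path_embedding(adj)
    parent = list(range(len(adj)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for u in range(len(adj)):
        for v in adj[u]:
            if sum(abs(a - b) for a, b in zip(emb[u], emb[v])) <= threshold:
                parent[find(u)] = find(v)
    groups = {}
    for x in range(len(adj)):
        groups.setdefault(find(x), set()).add(x)
    return sorted(groups.values(), key=min)
```

On two triangles joined by a single bridge edge, the bridge endpoints have dissimilar embeddings, so the two triangles emerge as separate communities.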

  7. Energy-Based Metrics for Arthroscopic Skills Assessment.

    Science.gov (United States)

    Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa

    2017-08-05

    Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
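    The two ingredients described here, metrics normalized by an ideal performance and leave-one-subject-out cross-validation, can be sketched as follows. The study used SVM and NN classifiers; a nearest-centroid classifier is substituted below as a simplified stand-in, and the data layout is a hypothetical one:

```python
import numpy as np

def normalized_energy_metrics(measured, ideal):
    """Divide each measured energy/work value by the corresponding value
    of an ideal performance, giving unit-free, normalized metrics."""
    return np.asarray(measured, float) / float(ideal)

def loso_accuracy(features, labels, subjects):
    """Leave-one-subject-out cross-validation: hold out all trials of one
    subject, train on the rest, and score the held-out predictions.
    Nearest-centroid stands in for the SVM/NN classifiers of the study."""
    features = np.asarray(features, float)
    labels = np.asarray(labels)
    subjects = np.asarray(subjects)
    correct = 0
    for s in np.unique(subjects):
        train = subjects != s
        centroids = {c: features[train & (labels == c)].mean(axis=0)
                     for c in np.unique(labels[train])}
        for x, true in zip(features[~train], labels[~train]):
            pred = min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
            correct += int(pred == true)
    return correct / len(labels)
```

With well-separated normalized metrics (e.g. novices expending roughly twice the ideal energy, experts close to it), even this simple classifier separates the two experience levels.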

  8. A suite of standard post-tagging evaluation metrics can help assess tag retention for field-based fish telemetry research

    Science.gov (United States)

    Gerber, Kayla M.; Mather, Martha E.; Smith, Joseph M.

    2017-01-01

    Telemetry can inform many scientific and research questions if a context exists for integrating individual studies into the larger body of literature. Creating cumulative distributions of post-tagging evaluation metrics would allow individual researchers to relate their telemetry data to other studies. Widespread reporting of standard metrics is a precursor to the calculation of benchmarks for these distributions (e.g., mean, SD, 95% CI). Here we illustrate five types of standard post-tagging evaluation metrics using acoustically tagged Blue Catfish (Ictalurus furcatus) released into a Kansas reservoir. These metrics included: (1) percent of tagged fish detected overall, (2) percent of tagged fish detected daily using abacus plot data, (3) average number of (and percent of available) receiver sites visited, (4) date of last movement between receiver sites (and percent of tagged fish moving during that time period), and (5) number (and percent) of fish that egressed through exit gates. These metrics were calculated for one to three time periods: early in the study and at the end of the study (5 months). Over three-quarters of our tagged fish were detected early (85%) and at the end (85%) of the study. Using abacus plot data, all tagged fish (100%) were detected at least one day and 96% were detected for > 5 days early in the study. On average, tagged Blue Catfish visited 9 (50%) and 13 (72%) of 18 within-reservoir receivers early and at the end of the study, respectively. At the end of the study, 73% of all tagged fish were detected moving between receivers. Creating statistical benchmarks for individual metrics can provide useful reference points. In addition, combining multiple metrics can inform ecology and research design. Consequently, individual researchers and the field of telemetry research can benefit from widespread, detailed, and standard reporting of post-tagging detection metrics.

  9. Robust optimization based upon statistical theory.

    Science.gov (United States)

    Sobotta, B; Söhn, M; Alber, M

    2010-08-01

    Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as basis for the algorithm. Thus, it can generate dose
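    The central construction, sampling geometry instances from a motion model and evaluating a dose metric on each to obtain an "outcome distribution" with a mean and variance, can be sketched with a deliberately simplified one-dimensional toy (Gaussian target shifts, a linear dose falloff outside a treated margin); none of the specifics below come from the paper:

```python
import random
import statistics

def outcome_distribution(dose_metric, sample_geometry, n=500, seed=1):
    """Evaluate a dose metric on n geometry instances drawn from a
    patient-specific motion model; the values form the outcome distribution."""
    rng = random.Random(seed)
    return [dose_metric(sample_geometry(rng)) for _ in range(n)]

def target_dose(shift, margin):
    """Toy dose metric: full dose while the shifted target stays inside the
    treated margin, falling off linearly outside it."""
    miss = abs(shift) - margin
    return 1.0 if miss <= 0 else max(0.0, 1.0 - miss)

def evaluate_margin(margin, sigma=0.5, n=500, seed=1):
    """Mean and spread of the outcome distribution for a candidate margin;
    an optimizer would trade these against OAR dose, which grows with margin."""
    dist = outcome_distribution(lambda s: target_dose(s, margin),
                                lambda rng: rng.gauss(0.0, sigma), n, seed)
    return statistics.mean(dist), statistics.stdev(dist)
```

A larger margin raises the expected target dose and shrinks the residual uncertainty, which is exactly the mean/variance trade-off the proposed method feeds into the optimizer, balanced against normal-tissue sparing.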

  10. SU-E-T-436: Fluence-Based Trajectory Optimization for Non-Coplanar VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Smyth, G; Bamber, JC; Bedford, JL [Joint Department of Physics at The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London (United Kingdom); Evans, PM [Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford (United Kingdom); Saran, FH; Mandeville, HC [The Royal Marsden NHS Foundation Trust, Sutton (United Kingdom)

    2015-06-15

    Purpose: To investigate a fluence-based trajectory optimization technique for non-coplanar VMAT for brain cancer. Methods: Single-arc non-coplanar VMAT trajectories were determined using a heuristic technique for five patients. Organ at risk (OAR) volume intersected during raytracing was minimized for two cases: absolute volume and the sum of relative volumes weighted by OAR importance. These trajectories and coplanar VMAT formed starting points for the fluence-based optimization method. Iterative least squares optimization was performed on control points 24° apart in gantry rotation. Optimization minimized the root-mean-square (RMS) deviation of PTV dose from the prescription (relative importance 100), maximum dose to the brainstem (10), optic chiasm (5), globes (5) and optic nerves (5), plus mean dose to the lenses (5), hippocampi (3), temporal lobes (2), cochleae (1) and brain excluding other regions of interest (1). Control point couch rotations were varied in steps of up to 10° and accepted if the cost function improved. Final treatment plans were optimized with the same objectives in an in-house planning system and evaluated using a composite metric - the sum of optimization metrics weighted by importance. Results: The composite metric decreased with fluence-based optimization in 14 of the 15 plans. In the remaining case its overall value, and the PTV and OAR components, were unchanged but the balance of OAR sparing differed. PTV RMS deviation was improved in 13 cases and unchanged in two. The OAR component was reduced in 13 plans. In one case the OAR component increased but the composite metric decreased - a 4 Gy increase in OAR metrics was balanced by a reduction in PTV RMS deviation from 2.8% to 2.6%. Conclusion: Fluence-based trajectory optimization improved plan quality as defined by the composite metric. While dose differences were case specific, fluence-based optimization improved both PTV and OAR dosimetry in 80% of cases.
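    The composite metric used to evaluate the final plans is simply the sum of the individual optimization metrics weighted by their relative importance; a minimal sketch (the metric names below are hypothetical placeholders, not the plan objectives themselves):

```python
def composite_metric(metrics, weights):
    """Weighted sum of individual plan metrics, lower is better: e.g.
    PTV RMS deviation weighted 100, an OAR maximum dose weighted 10."""
    return sum(weights[name] * value for name, value in metrics.items())
```

Comparing this single scalar before and after fluence-based trajectory optimization is how the abstract judges that 14 of the 15 plans improved.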

  11. Software architecture analysis tool : software architecture metrics collection

    NARCIS (Netherlands)

    Muskens, J.; Chaudron, M.R.V.; Westgeest, R.

    2002-01-01

    The Software Engineering discipline lacks the ability to evaluate software architectures. Here we describe a tool for software architecture analysis that is based on metrics. Metrics can be used to detect possible problems and bottlenecks in software architectures. Even though metrics do not give a

  12. Translating glucose variability metrics into the clinic via Continuous Glucose Monitoring: a Graphical User Interface for Diabetes Evaluation (CGM-GUIDE©).

    Science.gov (United States)

    Rawlings, Renata A; Shi, Hang; Yuan, Lo-Hua; Brehm, William; Pop-Busui, Rodica; Nelson, Patrick W

    2011-12-01

    Several metrics of glucose variability have been proposed to date, but an integrated approach that provides a complete and consistent assessment of glycemic variation is missing. As a consequence, and because of the tedious coding necessary during quantification, most investigators and clinicians have not yet adopted the use of multiple glucose variability metrics to evaluate glycemic variation. We compiled the most extensively used statistical techniques and glucose variability metrics, with adjustable hyper- and hypoglycemic limits and metric parameters, to create a user-friendly Continuous Glucose Monitoring Graphical User Interface for Diabetes Evaluation (CGM-GUIDE©). In addition, we introduce and demonstrate a novel transition density profile that emphasizes the dynamics of transitions between defined glucose states. Our combined dashboard of numerical statistics and graphical plots support the task of providing an integrated approach to describing glycemic variability. We integrated existing metrics, such as SD, area under the curve, and mean amplitude of glycemic excursion, with novel metrics such as the slopes across critical transitions and the transition density profile to assess the severity and frequency of glucose transitions per day as they move between critical glycemic zones. By presenting the above-mentioned metrics and graphics in a concise aggregate format, CGM-GUIDE provides an easy to use tool to compare quantitative measures of glucose variability. This tool can be used by researchers and clinicians to develop new algorithms of insulin delivery for patients with diabetes and to better explore the link between glucose variability and chronic diabetes complications.
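    The transition density idea, counting how often consecutive CGM readings cross between defined glucose states, can be sketched as follows. The zone thresholds are adjustable, as in CGM-GUIDE; the function names and the three-zone simplification are illustrative assumptions:

```python
def glucose_zone(value, hypo=70, hyper=180):
    """Classify a CGM reading (mg/dL) into a hypo / target / hyper zone;
    the limits are adjustable parameters, as in CGM-GUIDE."""
    if value < hypo:
        return "hypo"
    if value > hyper:
        return "hyper"
    return "target"

def transition_counts(readings, hypo=70, hyper=180):
    """Count zone-to-zone transitions across consecutive readings -- a
    simplified, discrete sketch of the transition density profile."""
    zones = [glucose_zone(v, hypo, hyper) for v in readings]
    counts = {}
    for a, b in zip(zones, zones[1:]):
        if a != b:
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts
```

Dividing these counts by the number of monitored days would give a per-day transition density, emphasizing the dynamics of movement between critical glycemic zones rather than only their occupancy.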

  13. Prototypic Development and Evaluation of a Medium Format Metric Camera

    Science.gov (United States)

    Hastedt, H.; Rofallski, R.; Luhmann, T.; Rosenbauer, R.; Ochsner, D.; Rieke-Zapp, D.

    2018-05-01

    Engineering applications require high-precision 3D measurement techniques for object sizes that vary between small volumes (2-3 m in each direction) and large volumes (around 20 x 20 x 1-10 m). The requested precision in object space (1σ RMS) is defined to be within 0.1-0.2 mm for large volumes and less than 0.01 mm for small volumes. In particular, focusing on large-volume applications, the availability of a metric camera would have different advantages for several reasons: 1) high-quality optical components and stabilisations allow for a stable interior geometry of the camera itself, 2) a stable geometry leads to a stable interior orientation that enables for an a priori camera calibration, 3) a higher resulting precision can be expected. With this article the development and accuracy evaluation of a new metric camera, the ALPA 12 FPS add|metric, will be presented. Its general accuracy potential is tested against calibrated lengths in a small volume test environment based on the German Guideline VDI/VDE 2634.1 (2002). Maximum length measurement errors of less than 0.025 mm are achieved with different scenarios having been tested. The accuracy potential for large volumes is estimated within a feasibility study on the application of photogrammetric measurements for the deformation estimation on a large wooden shipwreck in the German Maritime Museum. An accuracy of 0.2 mm-0.4 mm is reached for a length of 28 m (given by a distance from a lasertracker network measurement). All analyses have proven high stabilities of the interior orientation of the camera and indicate the applicability for a priori camera calibration for subsequent 3D measurements.
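    The acceptance quantity behind the VDI/VDE 2634.1 test mentioned here is the maximum length measurement error against calibrated reference lengths; a one-line sketch (the helper name and data layout are assumptions, not from the guideline):

```python
def max_length_measurement_error(measured, calibrated):
    """Maximum absolute deviation of photogrammetrically measured lengths
    from their calibrated reference values (the LME of VDI/VDE 2634.1)."""
    return max(abs(m - c) for m, c in zip(measured, calibrated))
```

The camera's small-volume result corresponds to this quantity staying below 0.025 mm across the tested scenarios.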

  14. PROTOTYPIC DEVELOPMENT AND EVALUATION OF A MEDIUM FORMAT METRIC CAMERA

    Directory of Open Access Journals (Sweden)

    H. Hastedt

    2018-05-01

    Full Text Available Engineering applications require high-precision 3D measurement techniques for object sizes that vary between small volumes (2–3 m in each direction) and large volumes (around 20 x 20 x 1–10 m). The requested precision in object space (1σ RMS) is defined to be within 0.1–0.2 mm for large volumes and less than 0.01 mm for small volumes. In particular, focusing on large-volume applications, the availability of a metric camera would have different advantages for several reasons: 1) high-quality optical components and stabilisations allow for a stable interior geometry of the camera itself, 2) a stable geometry leads to a stable interior orientation that enables for an a priori camera calibration, 3) a higher resulting precision can be expected. With this article the development and accuracy evaluation of a new metric camera, the ALPA 12 FPS add|metric, will be presented. Its general accuracy potential is tested against calibrated lengths in a small volume test environment based on the German Guideline VDI/VDE 2634.1 (2002). Maximum length measurement errors of less than 0.025 mm are achieved with different scenarios having been tested. The accuracy potential for large volumes is estimated within a feasibility study on the application of photogrammetric measurements for the deformation estimation on a large wooden shipwreck in the German Maritime Museum. An accuracy of 0.2 mm–0.4 mm is reached for a length of 28 m (given by a distance from a lasertracker network measurement). All analyses have proven high stabilities of the interior orientation of the camera and indicate the applicability for a priori camera calibration for subsequent 3D measurements.

  15. Metrics for building performance assurance

    Energy Technology Data Exchange (ETDEWEB)

    Koles, G.; Hitchcock, R.; Sherman, M.

    1996-07-01

    This report documents part of the work performed in phase I of a Laboratory Directed Research and Development (LDRD) funded project entitled Building Performance Assurances (BPA). The focus of the BPA effort is to transform the way buildings are built and operated in order to improve building performance by facilitating or providing tools, infrastructure, and information. The efforts described herein focus on the development of metrics with which to evaluate building performance and for which information and optimization tools need to be developed. The classes of building performance metrics reviewed are (1) Building Services, (2) First Costs, (3) Operating Costs, (4) Maintenance Costs, and (5) Energy and Environmental Factors. The first category defines the direct benefits associated with buildings; the next three are different kinds of costs associated with providing those benefits; the last category includes concerns that are broader than direct costs and benefits to the building owner and building occupants. The level of detail of the various issues reflects the current state of knowledge in those scientific areas and the ability of the authors to determine that state of knowledge, rather than directly reflecting the importance of these issues; it intentionally does not specifically focus on energy issues. The report describes work in progress and is intended as a resource that can be used to indicate the areas needing more investigation. Other reports on BPA activities are also available.

  16. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics for Scientific Data and Analysis

    Data.gov (United States)

    National Aeronautics and Space Administration — We will construct SciSpark, a scalable system for interactive model evaluation and for the rapid development of climate metrics and analyses. SciSpark directly...

  17. Recursive form of general limited memory variable metric methods

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Vlček, Jan

    2013-01-01

    Roč. 49, č. 2 (2013), s. 224-235 ISSN 0023-5954 Institutional support: RVO:67985807 Keywords : unconstrained optimization * large scale optimization * limited memory methods * variable metric updates * recursive matrix formulation * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.563, year: 2013 http://dml.cz/handle/10338.dmlcz/143365

  18. Measures and Metrics for Feasibility of Proof-of-Concept Studies With Human Immunodeficiency Virus Rapid Point-of-Care Technologies

    Science.gov (United States)

    Pant Pai, Nitika; Chiavegatti, Tiago; Vijh, Rohit; Karatzas, Nicolaos; Daher, Jana; Smallwood, Megan; Wong, Tom; Engel, Nora

    2017-01-01

    Objective Pilot (feasibility) studies form a vast majority of diagnostic studies with point-of-care technologies but often lack use of clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to (a) catalog feasibility measures/metrics and (b) propose a framework. Methods For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation centered and patient centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. Findings We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Although we observed poorly defined measures and metrics for feasibility, preference, and patient experience, in contrast, acceptability measure was the best defined. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified and reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. Conclusions Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization. PMID:29333105

  19. Evaluation of the performance of a micromethod for measuring urinary iodine by using six sigma quality metrics.

    Science.gov (United States)

    Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud

    2013-09-01

    The urinary iodine micromethod (UIMM) is a modification of the conventional method, and its performance needs evaluation. UIMM performance was evaluated using method validation data and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM met various method acceptability test criteria, with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed Sigma metrics of 2.75, 1.80, and 3.80 at 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc) below the TEa at all concentrations. Only one laboratory had TEcalc...
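    The Sigma metric referenced in this record is conventionally computed from the total allowable error, bias, and imprecision of the assay. A minimal sketch (standard Westgard-style formula; the numeric values in the comment are illustrative, not taken from the study):

    ```python
    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Six Sigma quality metric: how many standard deviations of
        analytical imprecision (CV) fit inside the error budget that
        remains after bias is subtracted from the total allowable error."""
        if cv_pct <= 0:
            raise ValueError("CV must be positive")
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Illustrative example: TEa 30%, bias 5%, CV 9% -> sigma ~ 2.78
    ```

    A sigma of 6 or more is usually read as world-class performance, while values below 3 (as several of the results above) indicate a method needing tighter quality control.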

  20. Translating Glucose Variability Metrics into the Clinic via Continuous Glucose Monitoring: A Graphical User Interface for Diabetes Evaluation (CGM-GUIDE©)

    Science.gov (United States)

    Rawlings, Renata A.; Shi, Hang; Yuan, Lo-Hua; Brehm, William; Pop-Busui, Rodica

    2011-01-01

    Background Several metrics of glucose variability have been proposed to date, but an integrated approach that provides a complete and consistent assessment of glycemic variation is missing. As a consequence, and because of the tedious coding necessary during quantification, most investigators and clinicians have not yet adopted the use of multiple glucose variability metrics to evaluate glycemic variation. Methods We compiled the most extensively used statistical techniques and glucose variability metrics, with adjustable hyper- and hypoglycemic limits and metric parameters, to create a user-friendly Continuous Glucose Monitoring Graphical User Interface for Diabetes Evaluation (CGM-GUIDE©). In addition, we introduce and demonstrate a novel transition density profile that emphasizes the dynamics of transitions between defined glucose states. Results Our combined dashboard of numerical statistics and graphical plots supports the task of providing an integrated approach to describing glycemic variability. We integrated existing metrics, such as SD, area under the curve, and mean amplitude of glycemic excursion, with novel metrics such as the slopes across critical transitions and the transition density profile to assess the severity and frequency of glucose transitions per day as they move between critical glycemic zones. Conclusions By presenting the above-mentioned metrics and graphics in a concise aggregate format, CGM-GUIDE provides an easy-to-use tool to compare quantitative measures of glucose variability. This tool can be used by researchers and clinicians to develop new algorithms of insulin delivery for patients with diabetes and to better explore the link between glucose variability and chronic diabetes complications. PMID:21932986
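    The combination of a dispersion statistic with zone-transition counting described in this record can be sketched as follows. This is an illustrative simplification, not the CGM-GUIDE implementation: the zone edges (70 and 180 mg/dL) are common clinical thresholds, and the transition count is a crude stand-in for the paper's transition density profile.

    ```python
    import statistics

    # Hypothetical glycemic zones (mg/dL): hypo < 70, target 70-180, hyper > 180
    def zone(glucose):
        return 0 if glucose < 70 else (1 if glucose <= 180 else 2)

    def variability_summary(readings):
        """SD of the glucose trace plus a count of transitions between
        glycemic zones over the monitoring period."""
        zones = [zone(g) for g in readings]
        transitions = sum(1 for a, b in zip(zones, zones[1:]) if a != b)
        return {"sd": statistics.stdev(readings),
                "zone_transitions": transitions}
    ```

    A real tool would add the remaining metrics (AUC, MAGE, transition slopes) behind the same kind of interface, which is the integration gap the record describes.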

  1. Optimized Evaluation System to Athletic Food Safety

    OpenAIRE

    Shanshan Li

    2015-01-01

    This study presented a new method of optimizing the evaluation function in athletic food safety information programming by particle swarm optimization. The parameters of the food information evaluation function are adjusted automatically by a self-optimizing method accomplished through competition, in which a food information system plays against itself with different evaluation functions. The results show that the particle swarm optimization is successfully app...
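    The particle swarm optimization underlying this record can be sketched generically. This is a minimal textbook PSO minimizing an arbitrary objective, not the paper's self-play scheme; the inertia and acceleration constants (0.7, 1.5, 1.5) are conventional illustrative choices.

    ```python
    import random

    def pso(objective, dim, n_particles=20, iters=100, seed=0):
        """Minimal particle swarm optimizer: each particle is pulled toward
        its personal best and the swarm's global best position."""
        rng = random.Random(seed)
        pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [objective(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]
        w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                val = objective(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val
    ```

    In the study's setting, `objective` would score an evaluation function's parameter vector by its performance in self-play games rather than by a closed-form function.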

  2. A comparison of metrics to evaluate the effects of hydro-facility passage stressors on fish

    Energy Technology Data Exchange (ETDEWEB)

    Colotelo, Alison H.; Goldman, Amy E.; Wagner, Katie A.; Brown, Richard S.; Deng, Z. Daniel; Richmond, Marshall C.

    2017-03-01

    Hydropower is the most common form of renewable energy, and countries worldwide are considering expanding hydropower to new areas. One of the challenges of hydropower deployment is mitigation of the environmental impacts including water quality, habitat alterations, and ecosystem connectivity. For fish species that inhabit river systems with hydropower facilities, passage through the facility to access spawning and rearing habitats can be particularly challenging. Fish moving downstream through a hydro-facility can be exposed to a number of stressors (e.g., rapid decompression, shear forces, blade strike and collision, and turbulence), which can all affect fish survival in direct and indirect ways. Many studies have investigated the effects of hydro-turbine passage on fish; however, the comparability among studies is limited by variation in the metrics and biological endpoints used. Future studies investigating the effects of hydro-turbine passage should focus on using metrics and endpoints that are easily comparable. This review summarizes four categories of metrics that are used in fisheries research and have application to hydro-turbine passage (i.e., mortality, injury, molecular metrics, behavior) and evaluates them based on several criteria (i.e., resources needed, invasiveness, comparability among stressors and species, and diagnostic properties). Additionally, these comparisons are put into context of study setting (i.e., laboratory vs. field). Overall, injury and molecular metrics are ideal for studies in which there is a need to understand the mechanisms of effect, whereas behavior and mortality metrics provide information on the whole body response of the fish. The study setting strongly influences the comparability among studies. In laboratory-based studies, stressors can be controlled by both type and magnitude, allowing for easy comparisons among studies. In contrast, field studies expose fish to realistic passage environments but the comparability is...

  3. Information-theoretic semi-supervised metric learning via entropy regularization.

    Science.gov (United States)

    Niu, Gang; Dai, Bo; Yamada, Makoto; Sugiyama, Masashi

    2014-08-01

    We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH could be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even under noisy environments.
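    The Mahalanobis distance at the heart of metric-learning methods such as SERAPH can be sketched directly. This is the standard parameterized distance only, not the SERAPH optimization itself; the example matrix `M` is hand-picked for illustration, whereas SERAPH would learn it from labeled and unlabeled data.

    ```python
    # Squared Mahalanobis distance: d_M(x, y)^2 = (x - y)^T M (x - y),
    # where M is a learned positive semi-definite matrix. With M = identity
    # this reduces to the squared Euclidean distance.
    def mahalanobis_sq(x, y, M):
        diff = [a - b for a, b in zip(x, y)]
        n = len(diff)
        return sum(diff[i] * M[i][j] * diff[j]
                   for i in range(n) for j in range(n))
    ```

    Metric learning then amounts to choosing `M` so that same-class pairs come out close and different-class pairs come out far, with regularizers (entropy and trace norm in SERAPH's case) shaping the solution.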

  4. INVESTIGATION AND EVALUATION OF SPATIAL PATTERNS IN TABRIZ PARKS USING LANDSCAPE METRICS

    Directory of Open Access Journals (Sweden)

    Ali Majnouni Toutakhane

    2016-01-01

    Full Text Available Nowadays, the green spaces in cities and especially metropolises have adopted a variety of functions. In addition to improving the environmental conditions, they are suitable places for spending free times and mitigating nervous pressures of the machinery life based on their distribution and dispersion in the cities. In this research, in order to study the spatial distribution and composition of the parks and green spaces in Tabriz metropolis, the map of Parks prepared using the digital atlas of Tabriz parks and Arc Map and IDRISI softwares. Then, quantitative information of spatial patterns of Tabriz parks provided using Fragstats software and a selection of landscape metrics including: the area of class, patch density, percentage of landscape, average patch size, average patch area, largest patch index, landscape shape index, average Euclidean distance of the nearest neighborhood and average index of patch shape. Then the spatial distribution, composition, extent and continuity of the parks was evaluated. Overall, only 8.5 percent of the landscape is assigned to the parks, and they are studied in three classes of neighborhood, district and regional parks. Neighborhood parks and green spaces have a better spatial distribution pattern compared to the other classes and the studied metrics showed better results for this class. In contrast, the quantitative results of the metrics calculated for regional parks, showed the most unfavorable spatial status for this class of parks among the three classes studied in Tabriz city.

  5. National evaluation of multidisciplinary quality metrics for head and neck cancer.

    Science.gov (United States)

    Cramer, John D; Speedy, Sedona E; Ferris, Robert L; Rademaker, Alfred W; Patel, Urjeet A; Samant, Sandeep

    2017-11-15

    The National Quality Forum has endorsed quality-improvement measures for multiple cancer types that are being developed into actionable tools to improve cancer care. No nationally endorsed quality metrics currently exist for head and neck cancer. The authors identified patients with surgically treated, invasive, head and neck squamous cell carcinoma in the National Cancer Data Base from 2004 to 2014 and compared the rate of adherence to 5 different quality metrics and whether compliance with these quality metrics impacted overall survival. The metrics examined included negative surgical margins, neck dissection lymph node (LN) yield ≥ 18, appropriate adjuvant radiation, appropriate adjuvant chemoradiation, adjuvant therapy within 6 weeks, as well as overall quality. In total, 76,853 eligible patients were identified. There was substantial variability in patient-level adherence, which was 80% for negative surgical margins, 73.1% for neck dissection LN yield, 69% for adjuvant radiation, 42.6% for adjuvant chemoradiation, and 44.5% for adjuvant therapy within 6 weeks. Risk-adjusted Cox proportional-hazard models indicated that all metrics were associated with a reduced risk of death: negative margins (hazard ratio [HR] 0.73; 95% confidence interval [CI], 0.71-0.76), LN yield ≥ 18 (HR, 0.93; 95% CI, 0.89-0.96), adjuvant radiation (HR, 0.67; 95% CI, 0.64-0.70), adjuvant chemoradiation (HR, 0.84; 95% CI, 0.79-0.88), and adjuvant therapy ≤6 weeks (HR, 0.92; 95% CI, 0.89-0.96). Patients who received high-quality care had a 19% reduced adjusted hazard of mortality (HR, 0.81; 95% CI, 0.79-0.83). Five head and neck cancer quality metrics were identified that have substantial variability in adherence and meaningfully impact overall survival. These metrics are appropriate candidates for national adoption. Cancer 2017;123:4372-81. © 2017 American Cancer Society.

  6. Towards a consensus on datasets and evaluation metrics for developing B-cell epitope prediction tools

    DEFF Research Database (Denmark)

    Greenbaum, Jason A.; Andersen, Pernille; Blythe, Martin

    2007-01-01

    and immunology communities. Improving the accuracy of B-cell epitope prediction methods depends on a community consensus on the data and metrics utilized to develop and evaluate such tools. A workshop, sponsored by the National Institute of Allergy and Infectious Disease (NIAID), was recently held in Washington...

  7. Performance evaluation of objective quality metrics for HDR image compression

    Science.gov (United States)

    Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic

    2014-09-01

    Due to the much larger luminance and contrast characteristics of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists of computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values, but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim at providing a better comprehension of the limits and the potential of this approach, by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
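    The PSNR half of the "simpler approach" in this record can be sketched as follows. This is the standard PSNR definition; for HDR content the inputs would first be passed through a perceptual encoding (e.g. a PU or log-style transfer), which is the step the paper evaluates. SSIM is omitted here for brevity.

    ```python
    import math

    def psnr(ref, test, peak=255.0):
        """Peak signal-to-noise ratio (dB) between two equally sized images,
        given here as flat sequences of (perceptually encoded) luminance
        values. Higher is better; identical images give +inf."""
        mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
        if mse == 0:
            return float("inf")
        return 10.0 * math.log10(peak * peak / mse)
    ```

    The `peak` parameter must match the range of the encoded values; 255 is the usual choice for 8-bit encodings.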

  8. Engineering performance metrics

    Science.gov (United States)

    Delozier, R.; Snyder, N.

    1993-03-01

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved normal systems design phases including conceptual design, detailed design, implementation, and integration. The lessons learned from this effort will be explored in this paper. These lessons learned may provide a starting point for other large engineering organizations seeking to institute a performance measurement system. To facilitate this effort, a team was chartered to assist in the development of the metrics system. This team, consisting of customers and Engineering staff members, was utilized to ensure that the needs and views of the customers were considered in the development of performance measurements. The development of a system of metrics is no different than the development of any type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  9. Social Media Metrics Importance and Usage Frequency in Latvia

    Directory of Open Access Journals (Sweden)

    Ronalds Skulme

    2017-12-01

    Full Text Available Purpose of the article: The purpose of this paper was to explore which social media marketing metrics are most often used and are most important for marketing experts in Latvia and can be used to evaluate marketing campaign effectiveness. Methodology/methods: In order to achieve the aim of this paper, several theoretical and practical research methods were used, such as theoretical literature analysis, surveying and grouping. First of all, theoretical research about social media metrics was conducted. The authors collected information about social media metric grouping methods and the most frequently mentioned social media metrics in the literature. The collected information was used as the foundation for the expert surveys. The expert surveys were used to collect information from Latvian marketing professionals to determine which social media metrics are used most often and which social media metrics are most important in Latvia. Scientific aim: The scientific aim of this paper was to identify whether the importance of social media metrics varies depending on the consumer purchase decision stage. Findings: Information about the most important and most often used social media marketing metrics in Latvia was collected. A new social media grouping framework is proposed. Conclusions: The main conclusion is that the importance and the usage frequency of social media metrics change depending on the consumer purchase decision stage the metric is used to evaluate.

  10. Predicting class testability using object-oriented metrics

    NARCIS (Netherlands)

    M. Bruntink (Magiel); A. van Deursen (Arie)

    2004-01-01

    textabstractIn this paper we investigate factors of the testability of object-oriented software systems. The starting point is given by a study of the literature to obtain both an initial model of testability and existing OO metrics related to testability. Subsequently, these metrics are evaluated

  11. NASA Aviation Safety Program Systems Analysis/Program Assessment Metrics Review

    Science.gov (United States)

    Louis, Garrick E.; Anderson, Katherine; Ahmad, Tisan; Bouabid, Ali; Siriwardana, Maya; Guilbaud, Patrick

    2003-01-01

    The goal of this project is to evaluate the metrics and processes used by NASA's Aviation Safety Program in assessing technologies that contribute to NASA's aviation safety goals. There were three objectives for reaching this goal. First, NASA's main objectives for aviation safety were documented and their consistency was checked against the main objectives of the Aviation Safety Program. Next, the metrics used for technology investment by the Program Assessment function of AvSP were evaluated. Finally, other metrics that could be used by the Program Assessment Team (PAT) were identified and evaluated. This investigation revealed that the objectives are in fact consistent across organizational levels at NASA and with the FAA. Some of the major issues discussed in this study that should be further investigated are the removal of the Cost and Return-on-Investment metrics, the lack of metrics to measure the balance of investment and technology, the interdependencies between some of the metric risk driver categories, and the conflict between 'fatal accident rate' and 'accident rate' in the language of the Aviation Safety goal as stated in different sources.

  12. A Metric for Secrecy-Energy Efficiency Tradeoff Evaluation in 3GPP Cellular Networks

    Directory of Open Access Journals (Sweden)

    Fabio Ciabini

    2016-10-01

    Full Text Available Physical-layer security is now being considered for information protection in future wireless communications. However, a better understanding of the inherent secrecy of wireless systems under more realistic conditions, with specific attention to the relative energy consumption costs, has to be pursued. This paper aims at proposing new analysis tools and investigating the relation between secrecy capacity and energy consumption in a 3rd Generation Partnership Project (3GPP) cellular network, by focusing on secure and energy efficient communications. New metrics that bind together the secure area in the Base Station (BS) sectors, the afforded data rate, and the power spent by the BS to obtain it are proposed, permitting evaluation of the tradeoff between these aspects. The results show that these metrics are useful in identifying the optimum transmit power level for the BS, so that the maximum secure area can be obtained while minimizing the energy consumption.
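    The secrecy capacity that such metrics build on has a simple closed form for the Gaussian wiretap channel. A sketch (standard information-theoretic formula, not the paper's composite secrecy-energy metric; SNR values are linear, not dB):

    ```python
    import math

    def secrecy_capacity(snr_legit, snr_eaves):
        """Gaussian wiretap secrecy capacity in bits/s/Hz: the positive part
        of the legitimate link's Shannon capacity minus the eavesdropper's.
        Zero when the eavesdropper's channel is at least as good."""
        return max(0.0, math.log2(1 + snr_legit) - math.log2(1 + snr_eaves))
    ```

    A secrecy-energy tradeoff metric of the kind proposed in the record would then relate this quantity (and the geographic area over which it is positive) to the BS transmit power that produces it.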

  13. Survey of source code metrics for evaluating testability of object oriented systems

    OpenAIRE

    Shaheen , Muhammad Rabee; Du Bousquet , Lydie

    2010-01-01

    Software testing is costly in terms of time and funds. Testability is a software characteristic that aims at producing systems easy to test. Several metrics have been proposed to identify the testability weaknesses. But it is sometimes difficult to be convinced that those metrics are really related with testability. This article is a critical survey of the source-code based metrics proposed in the literature for object-oriented software testability. It underlines the necessity to provide test...

  14. Field installation versus local integration of photovoltaic systems and their effect on energy evaluation metrics

    International Nuclear Information System (INIS)

    Halasah, Suleiman A.; Pearlmutter, David; Feuermann, Daniel

    2013-01-01

    In this study we employ Life-Cycle Assessment to evaluate the energy-related impacts of photovoltaic systems at different scales of integration, in an arid region with especially high solar irradiation. Based on the electrical output and embodied energy of a selection of fixed and tracking systems, including concentrator photovoltaic (CPV) and varying cell technology, we calculate a number of energy evaluation metrics, including the energy payback time (EPBT), energy return factor (ERF), and life-cycle CO2 emissions offset per unit aperture and land area. Studying these metrics in the context of a regionally limited setting, it was found that utilizing existing infrastructure such as existing building roofs and shade structures does significantly reduce the embodied energy requirements (by 20–40%) and in turn the EPBT of flat-plate PV systems, due to the avoidance of energy-intensive balance of systems (BOS) components like foundations. Still, high-efficiency CPV field installations were found to yield the shortest EPBT, the highest ERF and the largest life-cycle CO2 offsets—under the condition that land availability is not a limitation. A greater life-cycle energy return and carbon offset per unit land area is yielded by locally-integrated non-concentrating systems, despite their lower efficiency per unit module area. - Highlights: ► We evaluate life-cycle energy impacts of PV systems at different scales. ► We calculate the energy payback time, return factor and CO2 emissions offset. ► Utilizing existing structures significantly improves metrics of flat-plate PV. ► High-efficiency CPV installations yield best return and offset per aperture area. ► Locally-integrated flat-plate systems yield best return and offset per land area.
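    The two headline metrics in this record have simple standard definitions, which can be sketched as follows. The numeric values in the test are illustrative, not the study's results.

    ```python
    def epbt_years(embodied_energy_mj, annual_energy_yield_mj):
        """Energy payback time (EPBT): years of system operation needed to
        recover the life-cycle (embodied) energy invested in it."""
        return embodied_energy_mj / annual_energy_yield_mj

    def energy_return_factor(lifetime_years, epbt):
        """Energy return factor (ERF): how many times the system repays its
        embodied energy over its service lifetime."""
        return lifetime_years / epbt
    ```

    The study's finding follows directly from these ratios: avoiding foundations and other BOS components lowers the numerator of the EPBT, while high-efficiency CPV raises the annual yield in the denominator.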

  15. [Clinical trial data management and quality metrics system].

    Science.gov (United States)

    Chen, Zhao-hua; Huang, Qin; Deng, Ya-zhong; Zhang, Yue; Xu, Yu; Yu, Hao; Liu, Zong-fan

    2015-11-01

    Data quality management system is essential to ensure accurate, complete, consistent, and reliable data collection in clinical research. This paper is devoted to various choices of data quality metrics. They are categorized by study status, e.g. study start-up, conduct, and close-out. In each category, metrics for different purposes are listed according to ALCOA+ principles such as completeness, accuracy, timeliness, traceability, etc. Some general quality metrics frequently used are also introduced. This paper provides as much detailed information as possible for each metric, including definition, purpose, evaluation, referenced benchmark, and recommended targets, in favor of real practice. It is important that sponsors and data management service providers establish a robust integrated clinical trial data quality management system to ensure sustainable high quality of clinical trial deliverables. It will also support enterprise-level data evaluation and benchmarking of data quality across projects, sponsors, and data management service providers by using objective metrics from real clinical trials. We hope this will be a significant input to accelerate the improvement of clinical trial data quality in the industry.

  16. Self-organizing weights for Internet AS-graphs and surprisingly simple routing metrics

    DEFF Research Database (Denmark)

    Scholz, Jan Carsten; Greiner, Martin

    2011-01-01

    The transport capacity of Internet-like communication networks and hence their efficiency may be improved by a factor of 5–10 through the use of highly optimized routing metrics, as demonstrated previously. The numerical determination of such routing metrics can be computationally demanding to an extent that prohibits both investigation of and application to very large networks. In an attempt to find a numerically less expensive way of constructing a metric with a comparable performance increase, we propose a local, self-organizing iteration scheme and find two surprisingly simple and efficient metrics. The new metrics have negligible computational cost and result in an approximately 5-fold performance increase, providing distinguished competitiveness with the computationally costly counterparts. They are applicable to very large networks and easy to implement in today's Internet routing...

  17. Measurable Control System Security through Ideal Driven Technical Metrics

    Energy Technology Data Exchange (ETDEWEB)

    Miles McQueen; Wayne Boyer; Sean McBride; Marie Farrar; Zachary Tudor

    2008-01-01

    The Department of Homeland Security National Cyber Security Division supported development of a small set of security ideals as a framework to establish measurable control systems security. Based on these ideals, a draft set of proposed technical metrics was developed to allow control systems owner-operators to track improvements or degradations in their individual control systems security posture. The technical metrics development effort included review and evaluation of over thirty metrics-related documents. On the basis of complexity, ambiguity, or misleading and distorting effects, the metrics identified during the reviews were determined to be weaker than necessary to aid defense against the myriad threats posed by cyber-terrorism to human safety, as well as to economic prosperity. Using the results of our metrics review and the set of security ideals as a starting point for metrics development, we identified thirteen potential technical metrics - with at least one metric supporting each ideal. Two case study applications of the ideals and thirteen metrics to control systems were then performed to establish potential difficulties in applying both the ideals and the metrics. The case studies resulted in no changes to the ideals, and only a few deletions and refinements to the thirteen potential metrics. This led to a final proposed set of ten core technical metrics. To further validate the security ideals, the modifications made to the original thirteen potential metrics, and the final proposed set of ten core metrics, seven separate control systems security assessments performed over the past three years were reviewed for findings and recommended mitigations. These findings and mitigations were then mapped to the security ideals and metrics to assess gaps in their coverage. The mappings indicated that there are no gaps in the security ideals and that the ten core technical metrics provide significant coverage of standard security issues, with 87% coverage. Based...

  18. Evaluation of metrics and baselines for tracking greenhouse gas emissions trends: Recommendations for the California climate action registry

    Energy Technology Data Exchange (ETDEWEB)

    Price, Lynn; Murtishaw, Scott; Worrell, Ernst

    2003-06-01

    Lawrence Berkeley National Laboratory (Berkeley Lab) was asked to provide technical assistance to the California Energy Commission (Energy Commission) related to the Registry in three areas: (1) assessing the availability and usefulness of industry-specific metrics, (2) evaluating various methods for establishing baselines for calculating GHG emissions reductions related to specific actions taken by Registry participants, and (3) establishing methods for calculating electricity CO2 emission factors. The third area of research was completed in 2002 and is documented in Estimating Carbon Dioxide Emissions Factors for the California Electric Power Sector (Marnay et al., 2002). This report documents our findings related to the first two areas of research. For the first area of research, the overall objective was to evaluate the metrics, such as emissions per economic unit or emissions per unit of production, that can be used to report GHG emissions trends for potential Registry participants. This research began with an effort to identify methodologies, benchmarking programs, inventories, protocols, and registries that use industry-specific metrics to track trends in energy use or GHG emissions, in order to determine what types of metrics have already been developed. The next step in developing industry-specific metrics was to assess the availability of data needed to determine metric development priorities. Berkeley Lab also determined the relative importance of different potential Registry participant categories in order to assess the availability of sectoral or industry-specific metrics and then identified industry-specific metrics in use around the world. While a plethora of metrics was identified, no one metric that adequately tracks trends in GHG emissions while maintaining confidentiality of data was identified. As a result of this review, Berkeley Lab recommends the development of a GHG intensity index as a new metric for reporting and tracking GHG emissions trends. Such an index could provide an...

  19. Sharp metric obstructions for quasi-Einstein metrics

    Science.gov (United States)

    Case, Jeffrey S.

    2013-02-01

    Using the tractor calculus to study smooth metric measure spaces, we adapt results of Gover and Nurowski to give sharp metric obstructions to the existence of quasi-Einstein metrics on suitably generic manifolds. We do this by introducing an analogue of the Weyl tractor W to the setting of smooth metric measure spaces. The obstructions we obtain can be realized as tensorial invariants which are polynomial in the Riemann curvature tensor and its divergence. By taking suitable limits of their tensorial forms, we then find obstructions to the existence of static potentials, generalizing to higher dimensions a result of Bartnik and Tod, and to the existence of potentials for gradient Ricci solitons.

  20. Comparative Study of Trace Metrics between Bibliometrics and Patentometrics

    Directory of Open Access Journals (Sweden)

    Fred Y. Ye

    2016-06-01

    Purpose: To comprehensively evaluate the overall performance of a group or an individual in both bibliometrics and patentometrics. Design/methodology/approach: Trace metrics were applied to the top 30 universities in computer science in the 2014 Academic Ranking of World Universities (ARWU), the top 30 ESI highly cited papers in the computer sciences field in 2014, as well as the top 30 assignees and the top 30 most cited patents in the National Bureau of Economic Research (NBER) computer hardware and software category. Findings: We found that, by applying trace metrics, the research or marketing impact efficiency, at both group and individual levels, was clearly observed. Furthermore, trace metrics were more sensitive to the different publication-citation distributions than the average citation and h-index were. Research limitations: Trace metrics considered publications with zero citations as negative contributions. One should clarify how he/she evaluates a zero-citation paper or patent before applying trace metrics. Practical implications: Decision makers could regularly examine the performance of their university/company by applying trace metrics and adjust their policies accordingly. Originality/value: Trace metrics could be applied both in bibliometrics and patentometrics and provide a comprehensive view. Moreover, the high sensitivity and unique impact-efficiency view provided by trace metrics can facilitate decision makers in examining and adjusting their policies.

  1. Measures and Metrics for Feasibility of Proof-of-Concept Studies With Human Immunodeficiency Virus Rapid Point-of-Care Technologies: The Evidence and the Framework.

    Science.gov (United States)

    Pant Pai, Nitika; Chiavegatti, Tiago; Vijh, Rohit; Karatzas, Nicolaos; Daher, Jana; Smallwood, Megan; Wong, Tom; Engel, Nora

    2017-12-01

    Pilot (feasibility) studies form a vast majority of diagnostic studies with point-of-care technologies but often lack use of clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to ( a ) catalog feasibility measures/metrics and ( b ) propose a framework. For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation centered and patient centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Although we observed poorly defined measures and metrics for feasibility, preference, and patient experience, in contrast, acceptability measure was the best defined. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified and reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization.

  2. Development of a perceptually calibrated objective metric of noise

    Science.gov (United States)

    Keelan, Brian W.; Jin, Elaine W.; Prokushkin, Sergey

    2011-01-01

    A system simulation model was used to create scene-dependent noise masks that reflect current performance of mobile phone cameras. Stimuli with different overall magnitudes of noise and with varying mixtures of red, green, blue, and luminance noises were included in the study. Eleven treatments in each of ten pictorial scenes were evaluated by twenty observers using the softcopy ruler method. In addition to determining the quality loss function in just noticeable differences (JNDs) for the average observer and scene, transformations for different combinations of observer sensitivity and scene susceptibility were derived. The psychophysical results were used to optimize an objective metric of isotropic noise based on system noise power spectra (NPS), which were integrated over a visual frequency weighting function to yield perceptually relevant variances and covariances in CIE L*a*b* space. Because the frequency weighting function is expressed in terms of cycles per degree at the retina, it accounts for display pixel size and viewing distance effects, so application-specific predictions can be made. Excellent results were obtained using only L* and a* variances and L*a* covariance, with relative weights of 100, 5, and 12, respectively. The positive a* weight suggests that the luminance (photopic) weighting is slightly narrow on the long wavelength side for predicting perceived noisiness. The L*a* covariance term, which is normally negative, reflects masking between L* and a* noise, as confirmed in informal evaluations. Test targets in linear sRGB and rendered L*a*b* spaces for each treatment are available at http://www.aptina.com/ImArch/ to enable other researchers to test metrics of their own design and calibrate them to JNDs of quality loss without performing additional observer experiments. Such JND-calibrated noise metrics are particularly valuable for comparing the impact of noise and other attributes, and for computing overall image quality.
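The final combination step described above reduces to a weighted sum of the perceptually weighted variances and covariance. A minimal sketch, using the relative weights of 100, 5, and 12 reported in the abstract (the function name and interface are illustrative, not from the paper):

```python
def noise_metric(var_L, var_a, cov_La, w_L=100.0, w_a=5.0, w_La=12.0):
    """Combine visually weighted CIE L*a*b* noise statistics into a
    single objective noise value.

    var_L, var_a : visually frequency-weighted variances of L* and a* noise
    cov_La       : visually weighted L*a* covariance (normally negative,
                   reflecting masking between L* and a* noise)
    """
    return w_L * var_L + w_a * var_a + w_La * cov_La
```

Because the covariance term is typically negative, correlated L* and a* noise lowers the predicted noisiness, consistent with the masking effect the authors describe.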

  3. $\\eta$-metric structures

    OpenAIRE

    Gaba, Yaé Ulrich

    2017-01-01

    In this paper, we discuss recent results about generalized metric spaces and fixed point theory. We introduce the notion of $\\eta$-cone metric spaces, give some topological properties and prove some fixed point theorems for contractive type maps on these spaces. In particular, we show that these $\\eta$-cone metric spaces are natural generalizations of both cone metric spaces and metric type spaces.

  4. A Novel Riemannian Metric Based on Riemannian Structure and Scaling Information for Fixed Low-Rank Matrix Completion.

    Science.gov (United States)

    Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit

    2017-05-01

    Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometry structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric can effectively make a tradeoff between the Riemannian geometry structure and the scaling information. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm that can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to the state-of-the-art methods in convergence efficiency and numerical performance.

  5. IDENTIFYING MARKETING EFFECTIVENESS METRICS (Case study: East Azerbaijan`s industrial units)

    OpenAIRE

    Faridyahyaie, Reza; Faryabi, Mohammad; Bodaghi Khajeh Noubar, Hossein

    2012-01-01

    The paper attempts to identify marketing effectiveness metrics in industrial units. The metrics investigated in this study are completely applicable and comprehensive, and consequently they can evaluate marketing effectiveness in various industries. The metrics studied include: Market Share, Profitability, Sales Growth, Customer Numbers, Customer Satisfaction and Customer Loyalty. The findings indicate that these six metrics are impressive when measuring marketing effectiveness. Data was ge...

  6. Survey of consumer attitudes and awareness of the metric conversion of distilled spirits containers: A policy and planning evaluation

    Science.gov (United States)

    Simpson, J. A.; Barsby, S. L.

    1981-12-01

    The survey was conducted as part of a policy and planning evaluation study. The overall study was an examination of a completed private sector conversion to the metric system, in the light of the US Metric Board's planning guidelines and procedures. The conversion of distilled spirits containers took place prior to the establishment of the USMB. The study's objective was to use the completed conversion to determine whether the guidelines and related procedures were adequate to help the conversion process. If they were not, the study was designed to provide suggestions for improvement.

  7. The Optimizer Topology Characteristics in Seismic Hazards

    Science.gov (United States)

    Sengor, T.

    2015-12-01

    The characteristic data of natural phenomena are examined in a topological space approach to determine whether an underlying algorithm brings the physics of these phenomena to optimized states, even when they are hazards. The optimized code designing the hazard on a topological structure meshes with the metric of the phenomena. Deviations in the metric of different phenomena push and/or pull the folds of other suitable phenomena; for example, the metric of a specific phenomenon A may fit the metric of another specific phenomenon B after variation processes generated by the deviation of the metric of phenomenon A. Manifold processes covering the metric characteristics of every phenomenon can be defined for all physical events, i.e., natural hazards. Within those manifold groups there are suitable folds such that each subfold fits the metric characteristics of at least one natural hazard category. Variation algorithms on those metric structures prepare a gauge effect that accounts for the long-term stability of the Earth over large time scales. The realization of that stability depends on specific conditions, called optimized codes. The analytical basics of processes in topological structures are developed in [1]. The codes are generated according to the structures in [2]. Some optimized codes are derived for the seismicity of the NAF, beginning from the earthquakes of 1999. References: 1. Taner SENGOR, "Topological theory and analytical configuration for a universal community model," Procedia - Social and Behavioral Sciences, Vol. 81, pp. 188-194, 28 June 2013. 2. Taner SENGOR, "Seismic-Climatic-Hazardous Events Estimation Processes via the Coupling Structures in Conserving Energy Topologies of the Earth," The 2014 AGU Fall Meeting, Abstract no. 31374, USA.

  8. Disaster Metrics: A Comprehensive Framework for Disaster Evaluation Typologies.

    Science.gov (United States)

    Wong, Diana F; Spencer, Caroline; Boyd, Lee; Burkle, Frederick M; Archer, Frank

    2017-10-01

    framework. This unique, unifying framework has relevance at an international level and is expected to benefit the disaster, humanitarian, and development sectors. The next step is to undertake a validation process that will include international leaders with experience in evaluation, in general, and disasters specifically. This work promotes an environment for constructive dialogue on evaluations in the disaster setting to strengthen the evidence base for interventions across the disaster spectrum. It remains a work in progress. Wong DF , Spencer C , Boyd L , Burkle FM Jr. , Archer F . Disaster metrics: a comprehensive framework for disaster evaluation typologies. Prehosp Disaster Med. 2017;32(5):501-514.

  9. Relevance of motion-related assessment metrics in laparoscopic surgery.

    Science.gov (United States)

    Oropesa, Ignacio; Chmarra, Magdalena K; Sánchez-González, Patricia; Lamata, Pablo; Rodrigues, Sharon P; Enciso, Silvia; Sánchez-Margallo, Francisco M; Jansen, Frank-Willem; Dankelman, Jenny; Gómez, Enrique J

    2013-06-01

    Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation processes of basic psychomotor laparoscopic skills and their correlation with the different abilities sought to measure. A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, 3 novel tasks for surgical assessment were designed. Face and construct validation was performed, with focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents, and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. Time, path length, and depth showed construct validity for all 3 tasks. Motion smoothness and idle time also showed validity for tasks involving bimanual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed a high internal consistency, proving them to be the most task-independent of all the metrics analyzed. Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight on the relevance of the results shown in this study.

  10. Best Proximity Point Results in Complex Valued Metric Spaces

    Directory of Open Access Journals (Sweden)

    Binayak S. Choudhury

    2014-01-01

    complex valued metric spaces. We treat the problem as that of finding the global optimal solution of a fixed point equation although the exact solution does not in general exist. We also define and use the concept of P-property in such spaces. Our results are illustrated with examples.

  11. WE-B-304-02: Treatment Planning Evaluation and Optimization Should Be Biologically and Not Dose/volume Based

    International Nuclear Information System (INIS)

    Deasy, J.

    2015-01-01

    The ultimate goal of radiotherapy treatment planning is to find a treatment that will yield a high tumor control probability (TCP) with an acceptable normal tissue complication probability (NTCP). Yet most treatment planning today is not based upon optimization of TCPs and NTCPs, but rather upon meeting physical dose and volume constraints defined by the planner. It has been suggested that treatment planning evaluation and optimization would be more effective if they were biologically and not dose/volume based, and this is the claim debated in this month's Point/Counterpoint. After a brief overview of biologically and DVH based treatment planning by the Moderator Colin Orton, Joseph Deasy (for biological planning) and Charles Mayo (against biological planning) will begin the debate. Some of the arguments in support of biological planning include: (1) this will result in more effective dose distributions for many patients; (2) DVH-based measures of plan quality are known to have little predictive value; (3) there is little evidence that either D95 or D98 of the PTV is a good predictor of tumor control; and (4) sufficient validated outcome prediction models are now becoming available and should be used to drive planning and optimization. Some of the arguments against biological planning include: (1) several decades of experience with DVH-based planning should not be discarded; (2) we do not know enough about the reliability and errors associated with biological models; (3) the radiotherapy community in general has little direct experience with side-by-side comparisons of DVH vs biological metrics and outcomes; and (4) it is unlikely that a clinician would accept extremely cold regions in a CTV or hot regions in a PTV, despite having acceptable TCP values. Learning Objectives: (1) to understand dose/volume based treatment planning and its potential limitations; (2) to understand biological metrics such as EUD, TCP, and NTCP; (3) to understand biologically based treatment planning and its potential limitations.

  12. A review of the trunk surface metrics used as Scoliosis and other deformities evaluation indices

    Directory of Open Access Journals (Sweden)

    Aggouris Costas

    2010-06-01

    Background: Although scoliosis is characterized by lateral deviation of the spine, a 3D deformation is actually responsible for geometric and morphologic changes in the trunk and rib cage. In a vast related medical literature, one can find quite a few scoliosis evaluation indices, which are based on back surface data and are generally measured along three planes. Regardless of the large number of such indices, the literature lacks a coherent presentation of the underlying metrics, the involved anatomic surface landmarks, the definition of planes and the definition of the related body axes. In addition, the long list of proposed scoliotic indices is rarely presented in cross-reference to each other. This creates a possibility of misunderstandings and sometimes irrational or even wrong use of these indices by the medical society. Materials and methods: It is hoped that the current work contributes to clearing up the issue and gives rise to innovative ideas on how to assess the surface metrics in scoliosis. In particular, this paper presents a thorough study of the scoliosis evaluation indices proposed by the medical society. Results: More specifically, the referred indices are classified according to the type of asymmetry they measure, according to the plane they refer to, and according to the importance, relevance or level of scientific consensus they enjoy. Conclusions: Surface metrics have very little correlation to Cobb angle measurements. Indices measured on different planes do not correlate to each other. Different indices exhibit quite diverging characteristics in terms of observer-induced errors, accuracy, sensitivity and specificity. Complicated positioning of the patient and ambiguous anatomical landmarks are the major error sources, which cause observer variations. Principles that should be followed when an index is proposed are presented.

  13. Evaluation of alternate categorical tumor metrics and cut points for response categorization using the RECIST 1.1 data warehouse.

    Science.gov (United States)

    Mandrekar, Sumithra J; An, Ming-Wen; Meyers, Jeffrey; Grothey, Axel; Bogaerts, Jan; Sargent, Daniel J

    2014-03-10

    We sought to test and validate the predictive utility of trichotomous tumor response (TriTR; complete response [CR] or partial response [PR] v stable disease [SD] v progressive disease [PD]), disease control rate (DCR; CR/PR/SD v PD), and dichotomous tumor response (DiTR; CR/PR v others) metrics using alternate cut points for PR and PD. The data warehouse assembled to guide the Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1 was used. Data from 13 trials (5,480 patients with metastatic breast cancer, non-small-cell lung cancer, or colorectal cancer) were randomly split (60:40) into training and validation data sets. In all, 27 pairs of cut points for PR and PD were considered: PR (10% to 50% decrease by 5% increments) and PD (10% to 20% increase by 5% increments), for which 30% and 20% correspond to the RECIST categorization. Cox proportional hazards models with landmark analyses at 12 and 24 weeks stratified by study and number of lesions (fewer than three v three or more) and adjusted for average baseline tumor size were used to assess the impact of each metric on overall survival (OS). Model discrimination was assessed by using the concordance index (c-index). Standard RECIST cut points demonstrated predictive ability similar to the alternate PR and PD cut points. Regardless of tumor type, the TriTR, DiTR, and DCR metrics had similar predictive performance. The 24-week metrics (albeit with higher c-index point estimate) were not meaningfully better than the 12-week metrics. None of the metrics did particularly well for breast cancer. Alternative cut points to RECIST standards provided no meaningful improvement in OS prediction. Metrics assessed at 12 weeks have good predictive performance.
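The cut-point logic that the study varies is straightforward to express. A hedged sketch, taking the percent change in tumor burden as input (function names are illustrative; CR determination involves additional criteria and is not modeled here; the standard RECIST cut points of a 30% decrease for PR and a 20% increase for PD are the defaults):

```python
def categorize(change_pct, pr_cut=-30.0, pd_cut=20.0):
    """Classify a percent change in tumor burden (negative = shrinkage)
    into response categories. pr_cut and pd_cut can be varied to explore
    alternate cut points, as in the study (PR: 10-50% decrease,
    PD: 10-20% increase)."""
    if change_pct <= pr_cut:
        return "PR"   # partial response
    if change_pct >= pd_cut:
        return "PD"   # progressive disease
    return "SD"       # stable disease

def tritr(cat):
    """Trichotomous tumor response: CR/PR v SD v PD."""
    return {"CR": "CR/PR", "PR": "CR/PR", "SD": "SD", "PD": "PD"}[cat]

def dcr(cat):
    """Disease control rate grouping: CR/PR/SD v PD."""
    return "PD" if cat == "PD" else "CR/PR/SD"

def ditr(cat):
    """Dichotomous tumor response: CR/PR v others."""
    return "CR/PR" if cat in ("CR", "PR") else "other"
```

For example, a 15% decrease is SD under standard RECIST but becomes PR under an alternate cut point of a 10% decrease, which is exactly the kind of reclassification whose prognostic impact the study evaluated.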

  14. New metrics for evaluating channel networks extracted in grid digital elevation models

    Science.gov (United States)

    Orlandini, S.; Moretti, G.

    2017-12-01

    Channel networks are critical components of drainage basins and delta regions. Despite the important role played by these systems in hydrology and geomorphology, there are at present no well-defined methods to evaluate numerically how geometrically far apart two complex channel networks are. The present study introduces new metrics for numerically evaluating channel networks extracted from grid digital elevation models with respect to a reference channel network (see the figure below). Streams of the evaluated network (EN) are delineated as in the Horton ordering system and examined through a priority climbing algorithm based on the triple index (ID1,ID2,ID3), where ID1 is a stream identifier that increases as the elevation of the lower end of the stream increases, ID2 indicates the ID1 of the draining stream, and ID3 is the ID1 of the corresponding stream in the reference network (RN). Streams of the RN are identified by the double index (ID1,ID2). Streams of the EN are processed in order of increasing ID1 (plots a-l in the figure below). For each processed stream of the EN, the closest stream of the RN is sought by considering all the streams of the RN sharing the same ID2. This ID2 in the RN is equal in the EN to the ID3 of the stream draining the processed stream, the one having ID1 equal to the ID2 of the processed stream. The mean stream planar distance (MSPD) and the mean stream elevation drop (MSED) are computed as the mean distance and drop, respectively, between corresponding streams. The MSPD is shown to be useful for evaluating slope direction methods and thresholds for channel initiation, whereas the MSED is shown to indicate the ability of grid coarsening strategies to retain the profiles of observed channels. The developed metrics fill a gap in the existing literature by allowing hydrologists and geomorphologists to compare descriptions of a fixed physical system obtained by using different terrain analysis methods, or different physical systems
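Once corresponding streams have been matched, the two summary metrics reduce to simple averages. A schematic sketch, assuming each stream has been reduced to a representative planar point and an elevation value and that the (ID1,ID2,ID3) matching has already been performed (this data layout is an assumption for illustration, not taken from the abstract):

```python
from math import hypot

def mspd(eval_streams, ref_streams):
    """Mean stream planar distance: average planar distance between
    corresponding streams of the evaluated and reference networks.
    Each stream is represented here by a single (x, y) point."""
    dists = [hypot(ex - rx, ey - ry)
             for (ex, ey), (rx, ry) in zip(eval_streams, ref_streams)]
    return sum(dists) / len(dists)

def msed(eval_elev, ref_elev):
    """Mean stream elevation drop: average absolute elevation
    difference between corresponding streams."""
    drops = [abs(e - r) for e, r in zip(eval_elev, ref_elev)]
    return sum(drops) / len(drops)
```

In practice the distance between two matched streams would be computed from their full polyline geometries rather than single points; the sketch only shows the averaging structure of the metrics.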

  15. Modelling the B2C Marketplace: Evaluation of a Reputation Metric for e-Commerce

    Science.gov (United States)

    Gutowska, Anna; Sloane, Andrew

    This paper evaluates a recently developed, novel and comprehensive reputation metric designed for a distributed multi-agent reputation system for Business-to-Consumer (B2C) e-commerce applications. To do that, an agent-based simulation framework was implemented that models different types of behaviour in the marketplace. The trustworthiness of different types of providers is investigated to establish whether the simulation models the behaviour of B2C e-commerce systems as they are expected to behave in real life.

  16. Environmental cost of using poor decision metrics to prioritize environmental projects.

    Science.gov (United States)

    Pannell, David J; Gibson, Fiona L

    2016-04-01

    Conservation decision makers commonly use project-scoring metrics that are inconsistent with theory on the optimal ranking of projects. As a result, there may often be a loss of environmental benefits. We estimated the magnitudes of these losses for various metrics that deviate from theory in ways that are common in practice. These metrics included cases where relevant variables were omitted from the benefits metric, project costs were omitted, and benefits were calculated using a faulty functional form. We estimated distributions of parameters from 129 environmental projects from Australia, New Zealand, and Italy for which detailed analyses had been completed previously. The cost of using poor prioritization metrics (in terms of lost environmental values) was often high, up to 80% in the scenarios we examined. The cost in percentage terms was greater when the budget was smaller. The most costly errors were omitting information about environmental values (up to 31% loss of environmental values), omitting project costs (up to 35% loss), omitting the effectiveness of management actions (up to 9% loss), and using a weighted-additive decision metric for variables that should be multiplied (up to 23% loss). The latter 3 are errors that occur commonly in real-world decision metrics, in combination often reducing the potential benefits from conservation investments by 30-50%. Uncertainty about parameter values also reduced the benefits from investments in conservation projects, but often not by as much as faulty prioritization metrics. © 2016 Society for Conservation Biology.
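The weighted-additive error described above is easy to demonstrate: an additive score and the theoretically consistent multiplicative benefit-cost score can rank the same two projects in opposite orders. A toy sketch (the variables and weights are illustrative, not the paper's):

```python
def score_multiplicative(value, effectiveness, prob_success, cost):
    """Theory-consistent ranking metric: expected environmental benefit
    (value x effectiveness x probability of success) per unit cost."""
    return value * effectiveness * prob_success / cost

def score_weighted_additive(value, effectiveness, prob_success, cost,
                            w=(0.4, 0.3, 0.2, 0.1)):
    """The faulty form the paper warns against: a weighted sum over
    variables that should be multiplied, with cost as just another
    additive term."""
    return (w[0] * value + w[1] * effectiveness
            + w[2] * prob_success - w[3] * cost)
```

Consider project A (value 10, effectiveness 0.1) versus project B (value 2, effectiveness 0.9), both with probability of success 1 and cost 1: the multiplicative score correctly prefers B (1.8 v 1.0), while the additive score prefers A because its large value term swamps its near-zero effectiveness.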

  17. Standardised metrics for global surgical surveillance.

    Science.gov (United States)

    Weiser, Thomas G; Makary, Martin A; Haynes, Alex B; Dziekan, Gerald; Berry, William R; Gawande, Atul A

    2009-09-26

    Public health surveillance relies on standardised metrics to evaluate disease burden and health system performance. Such metrics have not been developed for surgical services despite increasing volume, substantial cost, and high rates of death and disability associated with surgery. The Safe Surgery Saves Lives initiative of WHO's Patient Safety Programme has developed standardised public health metrics for surgical care that are applicable worldwide. We assembled an international panel of experts to develop and define metrics for measuring the magnitude and effect of surgical care in a population, while taking into account economic feasibility and practicability. This panel recommended six measures for assessing surgical services at a national level: number of operating rooms, number of operations, number of accredited surgeons, number of accredited anaesthesia professionals, day-of-surgery death ratio, and postoperative in-hospital death ratio. We assessed the feasibility of gathering such statistics at eight diverse hospitals in eight countries and incorporated them into the WHO Guidelines for Safe Surgery, in which methods for data collection, analysis, and reporting are outlined.

  18. Green Chemistry Metrics with Special Reference to Green Analytical Chemistry

    Directory of Open Access Journals (Sweden)

    Marek Tobiszewski

    2015-06-01

    The concept of green chemistry is widely recognized in chemical laboratories. To properly measure an environmental impact of chemical processes, dedicated assessment tools are required. This paper summarizes the current state of knowledge in the field of development of green chemistry and green analytical chemistry metrics. The diverse methods used for evaluation of the greenness of organic synthesis, such as eco-footprint, E-Factor, EATOS, and Eco-Scale are described. Both the well-established and recently developed green analytical chemistry metrics, including NEMI labeling and analytical Eco-scale, are presented. Additionally, this paper focuses on the possibility of the use of multivariate statistics in evaluation of environmental impact of analytical procedures. All the above metrics are compared and discussed in terms of their advantages and disadvantages. The current needs and future perspectives in green chemistry metrics are also discussed.

  19. Green Chemistry Metrics with Special Reference to Green Analytical Chemistry.

    Science.gov (United States)

    Tobiszewski, Marek; Marć, Mariusz; Gałuszka, Agnieszka; Namieśnik, Jacek

    2015-06-12

    The concept of green chemistry is widely recognized in chemical laboratories. To properly measure an environmental impact of chemical processes, dedicated assessment tools are required. This paper summarizes the current state of knowledge in the field of development of green chemistry and green analytical chemistry metrics. The diverse methods used for evaluation of the greenness of organic synthesis, such as eco-footprint, E-Factor, EATOS, and Eco-Scale are described. Both the well-established and recently developed green analytical chemistry metrics, including NEMI labeling and analytical Eco-scale, are presented. Additionally, this paper focuses on the possibility of the use of multivariate statistics in evaluation of environmental impact of analytical procedures. All the above metrics are compared and discussed in terms of their advantages and disadvantages. The current needs and future perspectives in green chemistry metrics are also discussed.

  20. SU-F-J-38: Dose Rates and Preliminary Evaluation of Contouring Similarity Metrics Using 4D Cone Beam CT

    Energy Technology Data Exchange (ETDEWEB)

    Santoso, A [Wayne State University School of Medicine, Detroit, Michigan (United States); Song, K; Qin, Y; Gardner, S; Liu, C; Cattaneo, R; Chetty, I; Movsas, B; Aljouni, M; Wen, N [Henry Ford Health System, Detroit, MI (United States)

    2016-06-15

    Purpose: 4D imaging modalities require detailed characterization for clinical optimization. The On-Board Imager mounted on the linear accelerator was used to investigate dose rates in a tissue-mimicking phantom using 4D-CBCT and to assess the variability of contouring similarity metrics between 4D-CT and 4D-CBCT retrospective reconstructions. Methods: A 125 kVp thoracic protocol was used. A phantom placed on a motion platform simulated a patient's breathing cycle. An ion chamber was affixed inside the phantom's tissue-mimicking cavities (i.e. bone, lung, and soft tissue). A sinusoidal motion waveform was executed with a five second period and superior-inferior motion. Dose rates were measured at six ion chamber positions. A preliminary workflow for contouring similarity between 4D-CT and 4D-CBCT was established using a single lung SBRT patient's historical data. Average intensity projection (Ave-IP) and maximum intensity projection (MIP) reconstructions generated offline were compared between the 4D modalities. Similarity metrics included the Dice similarity coefficient (DSC), Hausdorff distance, and center of mass (COM) deviation. Two isolated lesions were evaluated in the patient's scans: one located in the right lower lobe (ITVRLL) and one located in the left lower lobe (ITVLLL). Results: Dose rates ranged from 2.30 × 10⁻³ cGy/mAs (lung) to 5.18 × 10⁻³ cGy/mAs (bone). For fixed acquisition parameters, cumulative dose is inversely proportional to gantry speed. For ITVRLL, DSC were 0.70 and 0.68, Hausdorff distances were 6.11 and 5.69 mm, and COM deviations were 1.24 and 4.77 mm, for Ave-IP and MIP respectively. For ITVLLL, DSC were 0.64 and 0.75, Hausdorff distances were 10.74 and 8.00 mm, and COM deviations were 7.55 and 4.3 mm, for Ave-IP and MIP respectively. Conclusion: While the dosimetric output of 4D-CBCT is low, characterization is necessary to assure clinical optimization. A basic workflow for comparison of simulation and treatment 4D image-based contours was established
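The contouring similarity metrics reported here have compact definitions. A minimal sketch for the Dice similarity coefficient and the center of mass, assuming contours are represented as sets of voxel coordinates (this representation is an assumption for illustration; Hausdorff distance is omitted for brevity):

```python
def dice(a, b):
    """Dice similarity coefficient between two contours given as sets
    of voxel coordinates: 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    if not a and not b:
        return 1.0  # two empty contours are identical by convention
    return 2 * len(a & b) / (len(a) + len(b))

def com(voxels):
    """Center of mass of a contour: the unweighted centroid of its
    voxel coordinates. COM deviation is the distance between the
    centroids of two contours."""
    n = len(voxels)
    return tuple(sum(coord) / n for coord in zip(*voxels))
```

A DSC of 0.70 as reported for ITVRLL means the overlap volume is 70% of the average of the two contour volumes; values above roughly 0.7 are commonly treated as good agreement in contouring studies.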

  1. Metrics for evaluating patient navigation during cancer diagnosis and treatment: crafting a policy-relevant research agenda for patient navigation in cancer care.

    Science.gov (United States)

    Guadagnolo, B Ashleigh; Dohan, Daniel; Raich, Peter

    2011-08-01

    Racial and ethnic minorities as well as other vulnerable populations experience disparate cancer-related health outcomes. Patient navigation is an emerging health care delivery innovation that offers promise in improving quality of cancer care delivery to these patients who experience unique health-access barriers. Metrics are needed to evaluate whether patient navigation can improve quality of care delivery, health outcomes, and overall value in health care during diagnosis and treatment of cancer. Information regarding the current state of the science examining patient navigation interventions was gathered via search of the published scientific literature. A focus group of providers, patient navigators, and health-policy experts was convened as part of the Patient Navigation Leadership Summit sponsored by the American Cancer Society. Key metrics were identified for assessing the efficacy of patient navigation in cancer diagnosis and treatment. Patient navigation data exist for all stages of cancer care; however, the literature is more robust for its implementation during prevention, screening, and early diagnostic workup of cancer. Relatively fewer data are reported for outcomes and efficacy of patient navigation during cancer treatment. Metrics are proposed for a policy-relevant research agenda to evaluate the efficacy of patient navigation in cancer diagnosis and treatment. Patient navigation is understudied with respect to its use in cancer diagnosis and treatment. Core metrics are defined to evaluate its efficacy in improving outcomes and mitigating health-access barriers. Copyright © 2011 American Cancer Society.

  2. 3D-2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch

    Science.gov (United States)

    De Silva, T.; Uneri, A.; Ketcha, M. D.; Reaungamornrat, S.; Kleinszig, G.; Vogt, S.; Aygun, N.; Lo, S.-F.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-04-01

    In image-guided spine surgery, robust three-dimensional to two-dimensional (3D-2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D-2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multistart optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE > 30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures, low median PDE and interquartile range (IQR), and a median runtime of 84 s (plus upwards of 1-2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE = 5.5 mm, 2.6 mm IQR) without manual masking and with an improved runtime (29.3 s). The GO metric
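
    Of the four metrics, gradient correlation (GC) has a particularly compact common formulation: the mean normalized cross-correlation of the two images' gradients along each axis. A hedged sketch of that textbook form (not necessarily the exact implementation evaluated in the study):

```python
import numpy as np

def ncc(u: np.ndarray, v: np.ndarray) -> float:
    """Normalized cross-correlation of two equally sized arrays."""
    u = u - u.mean()
    v = v - v.mean()
    return float((u * v).sum() / np.sqrt((u ** 2).sum() * (v ** 2).sum()))

def gradient_correlation(fixed: np.ndarray, moving: np.ndarray) -> float:
    """GC: average NCC of the row-wise and column-wise image gradients."""
    gx_f, gy_f = np.gradient(fixed)
    gx_m, gy_m = np.gradient(moving)
    return 0.5 * (ncc(gx_f, gx_m) + ncc(gy_f, gy_m))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(gradient_correlation(img, img))  # ~1.0 for identical images
```

In a 3D-2D pipeline, `fixed` would be the intraoperative radiograph and `moving` a digitally reconstructed radiograph of the CT; the optimizer searches the pose that maximizes GC.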

  3. Metric modular spaces

    CERN Document Server

    Chistyakov, Vyacheslav

    2015-01-01

    Aimed toward researchers and graduate students familiar with elements of functional analysis, linear algebra, and general topology, this book contains a general study of modulars, modular spaces, and metric modular spaces. Modulars may be thought of as generalized velocity fields and serve two important purposes: they generate metric spaces in a unified manner and provide a weaker convergence, the modular convergence, whose topology is non-metrizable in general. Metric modular spaces are extensions of metric spaces, metric linear spaces, and classical modular linear spaces. The topics covered include the classification of modulars, metrizability of modular spaces, modular transforms and duality between modular spaces, and metric and modular topologies. Applications illustrated in this book include: the description of superposition operators acting in modular spaces, the existence of regular selections of set-valued mappings, new interpretations of spaces of Lipschitzian and absolutely continuous mappings, the existe...

  4. Software Quality Assurance Metrics

    Science.gov (United States)

    McRae, Kalindra A.

    2004-01-01

    Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and achieving a degree of excellence and refinement in a project or product. Software quality is a set of attributes by which a software product's quality is described and evaluated; the set includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product. The process is measured to improve it, and the product is measured to increase quality throughout the software life cycle. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce it, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of software development can be measured. If software metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of the defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects but are not currently being used by the SA team, and to report them to the Software Assurance team to see whether any could be implemented in their software assurance life cycle process.

  5. Relevance as a metric for evaluating machine learning algorithms

    NARCIS (Netherlands)

    Kota Gopalakrishna, A.; Ozcelebi, T.; Liotta, A.; Lukkien, J.J.

    2013-01-01

    In machine learning, the choice of a learning algorithm that is suitable for the application domain is critical. The performance metric used to compare different algorithms must also reflect the concerns of users in the application domain under consideration. In this work, we propose a novel

  6. Parameter-space metric of semicoherent searches for continuous gravitational waves

    International Nuclear Information System (INIS)

    Pletsch, Holger J.

    2010-01-01

    Continuous gravitational-wave (CW) signals, such as those emitted by spinning neutron stars, are an important target class for current detectors. However, the enormous computational demand prohibits fully coherent broadband all-sky searches for previously unknown CW sources over wide ranges of parameter space and for yearlong observation times. More efficient hierarchical "semicoherent" search strategies divide the data into segments much shorter than one year, which are analyzed coherently; then detection statistics from different segments are combined incoherently. To optimally perform the incoherent combination, an understanding of the underlying parameter-space structure is required. This problem is addressed here by using new coordinates on the parameter space, which yield the first analytical parameter-space metric for the incoherent combination step. This semicoherent metric applies to broadband all-sky surveys (also embedding directed searches at fixed sky position) for isolated CW sources. Furthermore, the additional metric resolution attained through the combination of segments is studied. Of the search parameters (sky position, frequency, and frequency derivatives), solely the metric resolution in the frequency derivatives is found to significantly increase with the number of segments.

  7. An Evaluation of iMetric Studies through the Scholarly Influence Model

    Directory of Open Access Journals (Sweden)

    Faramarz Soheili

    2016-12-01

    Full Text Available Among the topics studied in the context of scientometrics, the issue of scholarly influence is of special interest. This study tests the components of the scholarly influence model on iMetrics studies and explores potential relations among these components. The study uses a bibliometric methodology. Since the researchers aim to determine the relationship between variables, this research is of the correlational type. The initial data of this study, comprising 5944 records in the field of iMetrics during 1978-2014, were retrieved from Web of Science. To calculate most of the measures involved in each kind of influence, the researchers used UCINet and BibExcel software; some indices were calculated manually using Excel. After calculating all measures included in the three types of influence, the researchers used Smart PLS to test both the model and the research hypotheses. The results of the data analysis confirmed the scholarly influence model and indicated significant correlations between the variables in the model. More precisely, the findings uncovered that social influence is associated with both ideational and venue influence. Moreover, venue influence is associated with ideational influence. If researchers test the scholarly influence model in other areas with positive outcomes, it is hoped that policy-makers will use a combination of the variables involved in the model as a measure to evaluate the scholarly influence of researchers and to inform decisions related to purposes such as promotion and recruitment.

  8. Impact and alternative metrics for medical publishing: our experience with International Orthopaedics.

    Science.gov (United States)

    Scarlat, Marius M; Mavrogenis, Andreas F; Pećina, Marko; Niculescu, Marius

    2015-08-01

    This paper compares the traditional tools of calculation for a journal's efficacy and visibility with the new tools that have arrived from the Internet, social media and search engines. The examples concern publications of orthopaedic surgery and in particular International Orthopaedics. Until recently, the prestige of publications, authors or journals was evaluated by the number of citations using the traditional citation metrics, most commonly the impact factor. Over the last few years, scientific medical literature has developed exponentially. The Internet has dramatically changed the way of sharing and the speed of flow of medical information. New tools have allowed readers from all over the world to access information and record their experience. Web platforms such as Facebook® and Twitter® have allowed for inputs from the general public. Professional sites such as LinkedIn® and more specialised sites such as ResearchGate®, BioMed Central® and OrthoEvidence® have provided specific information on defined fields of science. Scientific and professional blogs provide free access quality information. Therefore, in this new era of advanced wireless technology and online medical communication, the prestige of a paper should also be evaluated by alternative metrics (altmetrics) that measure the visibility of the scientific information by collecting Internet citations, number of downloads, number of hits on the Internet, number of tweets and likes of scholarly articles by newspapers, blogs, social media and other sources of data. This article provides insights into altmetrics and informs the reader about current tools for optimal visibility and citation of their work. It also includes useful information about the performance of International Orthopaedics and the bias between traditional publication metrics and the new alternatives.

  9. Quantum anomalies for generalized Euclidean Taub-NUT metrics

    International Nuclear Information System (INIS)

    Cotaescu, Ion I; Moroianu, Sergiu; Visinescu, Mihai

    2005-01-01

    The generalized Taub-NUT metrics exhibit, in general, gravitational anomalies. In contrast, the original Taub-NUT metric does not exhibit gravitational anomalies, a consequence of the fact that it admits Killing-Yano tensors forming Staeckel-Killing tensors as products. We have found that for axial anomalies, interpreted as the index of the Dirac operator, the presence of Killing-Yano tensors is irrelevant. In order to evaluate the axial anomalies, we compute the index of the Dirac operator with the APS boundary condition on balls and on annular domains. The result is an explicit number-theoretic quantity depending on the radii of the domain. This quantity is 0 for metrics close to the original Taub-NUT metric, but it does not vanish in general.

  10. Robust Design Impact Metrics: Measuring the effect of implementing and using Robust Design

    DEFF Research Database (Denmark)

    Ebro, Martin; Olesen, Jesper; Howard, Thomas J.

    2014-01-01

    Measuring the performance of an organisation’s product development process can be challenging due to the limited use of metrics in R&D. An organisation considering whether to use Robust Design as an integrated part of their development process may find it difficult to define whether it is relevant, and afterwards to measure the effect of having implemented it. This publication identifies and evaluates Robust Design-related metrics and finds that two metrics are especially useful: 1) the relative amount of R&D resources spent after Design Verification and 2) the number of ‘change notes’ after Design Verification. The metrics have been applied in a case company to test the assumptions made during the evaluation. It is concluded that the metrics are useful and relevant, but further work is necessary to make a proper overview and categorisation of different types of robustness-related metrics.

  11. Important LiDAR metrics for discriminating forest tree species in Central Europe

    Science.gov (United States)

    Shi, Yifang; Wang, Tiejun; Skidmore, Andrew K.; Heurich, Marco

    2018-03-01

    Numerous airborne LiDAR-derived metrics have been proposed for classifying tree species. Yet an in-depth ecological and biological understanding of the significance of these metrics for tree species mapping remains largely unexplored. In this paper, we evaluated the performance of 37 frequently used LiDAR metrics derived under leaf-on and leaf-off conditions, respectively, for discriminating six different tree species in a natural forest in Germany. We first assessed the correlation between these metrics. Then we applied a Random Forest algorithm to classify the tree species and evaluated the importance of the LiDAR metrics. Finally, we identified the most important LiDAR metrics and tested their robustness and transferability. Our results indicated that about 60% of LiDAR metrics were highly correlated with each other (|r| > 0.7). There was no statistically significant difference in tree species mapping accuracy between the use of leaf-on and leaf-off LiDAR metrics. However, combining leaf-on and leaf-off LiDAR metrics significantly increased the overall accuracy from 58.2% (leaf-on) and 62.0% (leaf-off) to 66.5%, as well as the kappa coefficient from 0.47 (leaf-on) and 0.51 (leaf-off) to 0.58. Radiometric features, especially intensity-related metrics, provided more consistent and significant contributions than geometric features for tree species discrimination. Specifically, the mean intensity of first-or-single returns as well as the mean value of echo width were identified as the most robust LiDAR metrics for tree species discrimination. These results indicate that metrics derived from airborne LiDAR data, especially radiometric metrics, can aid in discriminating tree species in a mixed temperate forest, and represent candidate metrics for tree species classification and monitoring in Central Europe.
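
    The correlation screening step described above (flagging metric pairs with |r| > 0.7) can be sketched with plain NumPy; the metric names and synthetic data below are hypothetical, not from the study:

```python
import numpy as np

def highly_correlated_pairs(X: np.ndarray, names: list, threshold: float = 0.7):
    """Flag metric pairs whose |Pearson r| exceeds the threshold.

    X: (n_samples, n_metrics) matrix, one column per LiDAR metric.
    """
    r = np.corrcoef(X, rowvar=False)
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(r[i, j]) > threshold:
                pairs.append((names[i], names[j], float(r[i, j])))
    return pairs

# Hypothetical metrics: two strongly related height statistics and one
# unrelated radiometric metric, over 100 synthetic tree crowns.
rng = np.random.default_rng(1)
h_mean = rng.random(100)
h_p95 = h_mean * 0.9 + rng.random(100) * 0.05
intensity = rng.random(100)
X = np.column_stack([h_mean, h_p95, intensity])
print(highly_correlated_pairs(X, ["h_mean", "h_p95", "intensity"]))  # flags the (h_mean, h_p95) pair
```

Dropping one metric from each flagged pair is a common pre-filter before Random Forest importance ranking, since correlated predictors share (and dilute) importance.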

  12. Productivity in Pediatric Palliative Care: Measuring and Monitoring an Elusive Metric.

    Science.gov (United States)

    Kaye, Erica C; Abramson, Zachary R; Snaman, Jennifer M; Friebert, Sarah E; Baker, Justin N

    2017-05-01

    Workforce productivity is poorly defined in health care. Particularly in the field of pediatric palliative care (PPC), the absence of consensus metrics impedes aggregation and analysis of data to track workforce efficiency and effectiveness. Lack of uniformly measured data also compromises the development of innovative strategies to improve productivity and hinders investigation of the link between productivity and quality of care, which are interrelated but not interchangeable. To review the literature regarding the definition and measurement of productivity in PPC; to identify barriers to productivity within traditional PPC models; and to recommend novel metrics to study productivity as a component of quality care in PPC. PubMed ® and Cochrane Database of Systematic Reviews searches for scholarly literature were performed using key words (pediatric palliative care, palliative care, team, workforce, workflow, productivity, algorithm, quality care, quality improvement, quality metric, inpatient, hospital, consultation, model) for articles published between 2000 and 2016. Organizational searches of Center to Advance Palliative Care, National Hospice and Palliative Care Organization, National Association for Home Care & Hospice, American Academy of Hospice and Palliative Medicine, Hospice and Palliative Nurses Association, National Quality Forum, and National Consensus Project for Quality Palliative Care were also performed. Additional semistructured interviews were conducted with directors from seven prominent PPC programs across the U.S. to review standard operating procedures for PPC team workflow and productivity. Little consensus exists in the PPC field regarding optimal ways to define, measure, and analyze provider and program productivity. Barriers to accurate monitoring of productivity include difficulties with identification, measurement, and interpretation of metrics applicable to an interdisciplinary care paradigm. In the context of inefficiencies

  13. Accuracy and precision in the calculation of phenology metrics

    DEFF Research Database (Denmark)

    Ferreira, Ana Sofia; Visser, Andre; MacKenzie, Brian

    2014-01-01

    Phytoplankton phenology (the timing of seasonal events) is a commonly used indicator for evaluating responses of marine ecosystems to climate change. However, phenological metrics are vulnerable to observation-related (bloom amplitude, missing data, and observational noise) and analysis-related (temporal...) factors: a phenology metric is first determined from a noise- and gap-free time series, and again once it has been modified. We show that precision is a greater concern than accuracy for many of these metrics, an important point that has hitherto been overlooked in the literature. The variability in precision between phenology metrics is substantial, but it can be improved by the use of preprocessing techniques (e.g., gap-filling or smoothing). Furthermore, there are important differences in the inherent variability of the metrics that may be crucial in the interpretation of studies based upon them. Of the considered...

  14. Decision Analysis for Metric Selection on a Clinical Quality Scorecard.

    Science.gov (United States)

    Guth, Rebecca M; Storey, Patricia E; Vitale, Michael; Markan-Aurora, Sumita; Gordon, Randolph; Prevost, Traci Q; Dunagan, Wm Claiborne; Woeltje, Keith F

    2016-09-01

    Clinical quality scorecards are used by health care institutions to monitor clinical performance and drive quality improvement. Because of the rapid proliferation of quality metrics in health care, BJC HealthCare found it increasingly difficult to select the most impactful scorecard metrics while still monitoring metrics for regulatory purposes. A 7-step measure selection process was implemented incorporating Kepner-Tregoe Decision Analysis, which is a systematic process that considers key criteria that must be satisfied in order to make the best decision. The decision analysis process evaluates what metrics will most appropriately fulfill these criteria, as well as identifies potential risks associated with a particular metric in order to identify threats to its implementation. Using this process, a list of 750 potential metrics was narrowed to 25 that were selected for scorecard inclusion. This decision analysis process created a more transparent, reproducible approach for selecting quality metrics for clinical quality scorecards. © The Author(s) 2015.
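
    A Kepner-Tregoe-style selection step can be illustrated as a weighted-criteria scoring pass over candidate metrics. The criteria, weights, and candidates below are invented for illustration and are not from the BJC HealthCare process:

```python
# Hypothetical weighted-criteria scoring in the spirit of Kepner-Tregoe
# decision analysis: each candidate metric is scored 1-10 against each
# weighted "want" criterion, and candidates are ranked by total weighted score.
criteria = {"patient impact": 10, "data availability": 7, "regulatory need": 5}

candidates = {  # scores are illustrative, not from the study
    "30-day readmission rate": {"patient impact": 9, "data availability": 8, "regulatory need": 9},
    "door-to-needle time":     {"patient impact": 8, "data availability": 5, "regulatory need": 4},
    "chart completion lag":    {"patient impact": 3, "data availability": 9, "regulatory need": 6},
}

def weighted_score(scores: dict) -> int:
    """Sum of criterion weight times candidate score."""
    return sum(weight * scores[name] for name, weight in criteria.items())

ranked = sorted(candidates, key=lambda c: weighted_score(candidates[c]), reverse=True)
for c in ranked:
    print(f"{weighted_score(candidates[c]):4d}  {c}")
```

In the full Kepner-Tregoe process this scoring of "wants" is preceded by screening against mandatory "must" criteria and followed by an adverse-consequence (risk) review of the top candidates.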

  15. Anisotropic rectangular metric for polygonal surface remeshing

    KAUST Repository

    Pellenard, Bertrand

    2013-06-18

    We propose a new method for anisotropic polygonal surface remeshing. Our algorithm takes as input a surface triangle mesh. An anisotropic rectangular metric, defined at each triangle facet of the input mesh, is derived from both a user-specified normal-based tolerance error and the requirement to favor rectangle-shaped polygons. Our algorithm uses a greedy optimization procedure that adds, deletes and relocates generators so as to match two criteria related to partitioning and conformity.

  16. Anisotropic rectangular metric for polygonal surface remeshing

    KAUST Repository

    Pellenard, Bertrand; Morvan, Jean-Marie; Alliez, Pierre

    2013-01-01

    We propose a new method for anisotropic polygonal surface remeshing. Our algorithm takes as input a surface triangle mesh. An anisotropic rectangular metric, defined at each triangle facet of the input mesh, is derived from both a user-specified normal-based tolerance error and the requirement to favor rectangle-shaped polygons. Our algorithm uses a greedy optimization procedure that adds, deletes and relocates generators so as to match two criteria related to partitioning and conformity.

  17. Optimization of planar self-collimating photonic crystals.

    Science.gov (United States)

    Rumpf, Raymond C; Pazos, Javier J

    2013-07-01

    Self-collimation in photonic crystals has received a lot of attention in the literature, partly due to recent interest in silicon photonics, yet no performance metrics have been proposed. This paper proposes a figure of merit (FOM) for self-collimation and outlines a methodical approach for calculating it. Performance metrics include bandwidth, angular acceptance, strength, and an overall FOM. Two key contributions of this work include the performance metrics and identifying that the optimum frequency for self-collimation is not at the inflection point. The FOM is used to optimize a planar photonic crystal composed of a square array of cylinders. Conclusions are drawn about how the refractive indices and fill fraction of the lattice impact each of the performance metrics. The optimization is demonstrated by simulating two spatially variant self-collimating photonic crystals, where one has a high FOM and the other has a low FOM. This work gives optical designers tremendous insight into how to design and optimize robust self-collimating photonic crystals, which promises many applications in silicon photonics and integrated optics.

  18. Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates

    International Nuclear Information System (INIS)

    Perfetti, Christopher M.; Rearden, Bradley T.

    2015-01-01

    This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies and their potential for identifying undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the potential to accurately predict the behavior of undersampling biases in the responses examined in this study.

  19. Baby universe metric equivalent to an interior black-hole metric

    International Nuclear Information System (INIS)

    Gonzalez-Diaz, P.F.

    1991-01-01

    It is shown that the maximally extended metric corresponding to a large wormhole is the unique possible wormhole metric whose baby universe sector is conformally equivalent ot the maximal inextendible Kruskal metric corresponding to the interior region of a Schwarzschild black hole whose gravitational radius is half the wormhole neck radius. The physical implications of this result in the black hole evaporation process are discussed. (orig.)

  20. Technique of experimental evaluation of cloud environment attacks detection accuracy

    Directory of Open Access Journals (Sweden)

    Sergey A. Klimachev

    2018-05-01

    Full Text Available The article is devoted to research on the efficiency evaluation of IDS used to guard computing platforms with a dynamic and complex organizational and technical structure, whose components have a set of heterogeneous parameters. Analysis of existing IDS evaluation techniques revealed shortcomings in the justification of the quantitative metrics that describe IDS efficiency and reliability, which makes it difficult to validate an IDS evaluation technique. The purpose of the study is to increase the objectivity of IDS evaluation. To achieve this purpose it is necessary to develop a correct technique, tools, and an experimental stand. The article presents the results of the development and approbation of a technique for IDS efficiency evaluation and software for it. The technique is based on defining an optimal set of attack detection accuracy scores. The technique and the software allow solving problems of comparative analysis of IDSs that have similar functionality. As a result of the research, a number of tasks have been solved, including the selection of universal quantitative metrics for attack detection accuracy evaluation; the definition of a summarised attack detection accuracy metric based on a Pareto-optimal set of scores that ensure the confidentiality, integrity and accessibility of cloud environment information and information resources; and the development of a functional model, a functional scheme and software for cloud environment IDS research.
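
    The attack-detection accuracy scores and Pareto-optimal selection mentioned above can be sketched as follows; the IDS names and confusion counts are hypothetical, and the two-objective (TPR up, FPR down) front is a simplification of the multi-score set described in the article:

```python
def detection_scores(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard attack-detection accuracy metrics from confusion counts."""
    tpr = tp / (tp + fn)   # detection (true positive) rate
    fpr = fp / (fp + tn)   # false alarm rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * tpr / (precision + tpr)
    return {"TPR": tpr, "FPR": fpr, "precision": precision, "F1": f1}

def pareto_front(ids_results: dict) -> list:
    """Keep IDSs not dominated on (higher TPR, lower FPR) by any other IDS."""
    front = []
    for name, s in ids_results.items():
        dominated = any(
            o["TPR"] >= s["TPR"] and o["FPR"] <= s["FPR"]
            and (o["TPR"] > s["TPR"] or o["FPR"] < s["FPR"])
            for other, o in ids_results.items() if other != name
        )
        if not dominated:
            front.append(name)
    return front

results = {  # hypothetical IDSs evaluated on the same attack traffic
    "ids_a": detection_scores(tp=90, fp=10, tn=90, fn=10),
    "ids_b": detection_scores(tp=95, fp=30, tn=70, fn=5),
    "ids_c": detection_scores(tp=80, fp=20, tn=80, fn=20),
}
print(pareto_front(results))  # ['ids_a', 'ids_b']: ids_c is dominated by ids_a
```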

  1. Adaptive metric learning with deep neural networks for video-based facial expression recognition

    Science.gov (United States)

    Liu, Xiaofeng; Ge, Yubin; Yang, Chao; Jia, Ping

    2018-01-01

    Video-based facial expression recognition has become increasingly important for many applications in the real world. Although numerous efforts have been made for single sequences, balancing the complex distribution of intra- and interclass variations between sequences has remained a great difficulty in this area. We propose the adaptive (N+M)-tuplet clusters loss function and optimize it with the softmax loss simultaneously in the training phase. The variations introduced by personal attributes are alleviated using similarity measurements of multiple samples in the feature space, with many fewer comparisons than conventional deep metric learning approaches, which enables metric calculations for large data applications (e.g., videos). Both the spatial and temporal relations are well explored by a unified framework that consists of an Inception-ResNet network with long short-term memory and a two-fully-connected-layer branch structure. Our proposed method has been evaluated on three well-known databases, and the experimental results show that it outperforms many state-of-the-art approaches.

  2. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  3. Content-based retrieval of brain tumor in contrast-enhanced MRI images using tumor margin information and learned distance metric.

    Science.gov (United States)

    Yang, Wei; Feng, Qianjin; Yu, Mei; Lu, Zhentai; Gao, Yang; Xu, Yikai; Chen, Wufan

    2012-11-01

    A content-based image retrieval (CBIR) method for T1-weighted contrast-enhanced MRI (CE-MRI) images of brain tumors is presented for diagnosis aid. The method is thoroughly evaluated on a large image dataset. Using the tumor region as a query, the authors' CBIR system attempts to retrieve tumors of the same pathological category. Aside from commonly used features such as intensity, texture, and shape features, the authors use a margin information descriptor (MID), which is capable of describing the characteristics of tissue surrounding a tumor, for representing image contents. In addition, the authors designed a distance metric learning algorithm called Maximum mean average Precision Projection (MPP) to maximize the smooth approximated mean average precision (mAP) to optimize retrieval performance. The effectiveness of MID and MPP algorithms was evaluated using a brain CE-MRI dataset consisting of 3108 2D scans acquired from 235 patients with three categories of brain tumors (meningioma, glioma, and pituitary tumor). By combining MID and other features, the mAP of retrieval increased by more than 6% with the learned distance metrics. The distance metric learned by MPP significantly outperformed the other two existing distance metric learning methods in terms of mAP. The CBIR system using the proposed strategies achieved a mAP of 87.3% and a precision of 89.3% when top 10 images were returned by the system. Compared with scale-invariant feature transform, the MID, which uses the intensity profile as descriptor, achieves better retrieval performance. Incorporating tumor margin information represented by MID with the distance metric learned by the MPP algorithm can substantially improve the retrieval performance for brain tumors in CE-MRI.
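
    Mean average precision (mAP), the retrieval objective that the MPP algorithm maximizes, is straightforward to compute from ranked relevance flags; a minimal sketch:

```python
def average_precision(relevance: list) -> float:
    """AP for one query: relevance is a list of 1/0 flags in ranked order."""
    hits, precisions = 0, []
    for k, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)  # precision at each relevant rank
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(rankings: list) -> float:
    """mAP: mean of per-query average precisions."""
    return sum(average_precision(r) for r in rankings) / len(rankings)

# Two toy queries: relevant tumors retrieved at ranks (1, 3) and (2,).
print(mean_average_precision([[1, 0, 1], [0, 1, 0]]))  # ~0.667
```

Because AP is a step function of the ranking, MPP-style metric learning optimizes a smooth approximation of it rather than this exact quantity.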

  4. A neurophysiological training evaluation metric for air traffic management.

    Science.gov (United States)

    Borghini, G; Aricò, P; Ferri, F; Graziani, I; Pozzi, S; Napoletano, L; Imbert, J P; Granger, G; Benhacene, R; Babiloni, F

    2014-01-01

The aim of this work was to analyze the possibility of applying neuroelectrical cognitive metrics for the evaluation of the training level of subjects during the learning of a task employed by Air Traffic Controllers (ATCos). In particular, the Electroencephalogram (EEG), the Electrocardiogram (ECG) and the Electrooculogram (EOG) signals were gathered from a group of students during the execution of an Air Traffic Management (ATM) task, proposed at three different levels of difficulty. The neuroelectrical results were compared with the subjective perception of the task difficulty obtained by the NASA-TLX questionnaires. From these analyses, we suggest that the integration of information derived from the power spectral density (PSD) of the EEG signals, the heart rate (HR) and the eye-blink rate (EBR) returns important quantitative information about the training level of the subjects. In particular, by focusing the analysis on the direct and inverse correlation of the frontal PSD theta (4-7 Hz) and HR, and of the parietal PSD alpha (10-12 Hz) and EBR, respectively, with the degree of mental and emotive engagement, it is possible to obtain useful information about the training improvement across the training sessions.

  5. Properties of C-metric spaces

    Science.gov (United States)

    Croitoru, Anca; Apreutesei, Gabriela; Mastorakis, Nikos E.

    2017-09-01

The subject of this paper belongs to the theory of approximate metrics [23]. An approximate metric on X is a real-valued mapping defined on X × X that satisfies only a part of the metric axioms. In a recent paper [23], we introduced a new type of approximate metric, named C-metric, which is a mapping that satisfies only two metric axioms: symmetry and the triangle inequality. The remarkable fact about a C-metric space is that a topological structure induced by the C-metric can be defined. The innovative idea of this paper is that we obtain some convergence properties of a C-metric space in the absence of a metric. In this paper we investigate C-metric spaces. The paper is divided into four sections. Section 1 is the Introduction. In Section 2 we recall some concepts and preliminary results. In Section 3 we present some properties of C-metric spaces, such as convergence properties, a canonical decomposition and a C-fixed point theorem. Finally, in Section 4 some conclusions are highlighted.
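The two retained axioms are easy to check computationally on a finite point set. The following hypothetical sketch verifies symmetry and the triangle inequality while deliberately not requiring non-negativity or d(x, x) = 0:

```python
from itertools import product

def is_c_metric(points, d, tol=1e-12):
    """Check only the two C-metric axioms on a finite set:
    symmetry and the triangle inequality."""
    if any(d(x, y) != d(y, x) for x, y in product(points, repeat=2)):
        return False
    return all(d(x, z) <= d(x, y) + d(y, z) + tol
               for x, y, z in product(points, repeat=3))

# (x - y)^2 is symmetric but violates the triangle inequality:
# d(0, 2) = 4 > d(0, 1) + d(1, 2) = 2, so it is not a C-metric.
squared = is_c_metric([0, 1, 2], lambda x, y: (x - y) ** 2)
# |x - y| satisfies both axioms (it is in fact a full metric).
absolute = is_c_metric([0, 1, 2], lambda x, y: abs(x - y))
```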

  6. Learning Low-Dimensional Metrics

    OpenAIRE

    Jain, Lalit; Mason, Blake; Nowak, Robert

    2017-01-01

This paper investigates the theoretical foundations of metric learning, focused on four key questions that are not fully addressed in prior work: 1) we consider learning general low-dimensional (low-rank) metrics as well as sparse metrics; 2) we develop upper and lower (minimax) bounds on the generalization error; 3) we quantify the sample complexity of metric learning in terms of the dimension of the feature space and the dimension/rank of the underlying metric; 4) we also bound the accuracy ...
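A low-rank metric of the kind studied here is commonly parameterized as M = LᵀL with L of rank r, so distances are ordinary Euclidean distances after the linear map L. A minimal NumPy sketch under that standard parameterization (the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 5, 2                      # feature dimension, rank of the metric
L = rng.standard_normal((r, d))  # M = L.T @ L then has rank <= r

def low_rank_dist(x, y):
    """Mahalanobis-style distance induced by the low-rank metric M = L^T L."""
    z = L @ (x - y)
    return float(np.sqrt(z @ z))

x, y = rng.standard_normal(d), rng.standard_normal(d)
dist = low_rank_dist(x, y)
```

Mapping through L first means only r numbers per point need to be stored and compared, which is the source of the sample-complexity savings the paper quantifies.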

  7. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    Directory of Open Access Journals (Sweden)

    Mark N Read

    2016-09-01

    Full Text Available The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto
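As a hedged illustration of the model class being compared (not the authors' implementation), a 2D correlated random walk and its meandering index (net displacement divided by path length) can be simulated in a few lines:

```python
import math
import random

def crw_meandering_index(steps, speed, turn_sd, seed=0):
    """Simulate a 2D correlated random walk: each step turns by a
    zero-mean Gaussian angle, so heading persists between steps.
    Returns the meandering index = net displacement / path length."""
    rng = random.Random(seed)
    x = y = 0.0
    heading = rng.uniform(-math.pi, math.pi)
    for _ in range(steps):
        heading += rng.gauss(0.0, turn_sd)
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return math.hypot(x, y) / (steps * speed)

persistent = crw_meandering_index(500, 1.0, 0.05)  # strong persistence
brownian = crw_meandering_index(500, 1.0, 2.0)     # nearly uncorrelated
```

Small turn standard deviations give straighter paths (index near 1), while large ones approach Brownian motion (index near 0), which is exactly the axis on which the paper discriminates the candidate models.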

  8. Operator-based metric for nuclear operations automation assessment

    Energy Technology Data Exchange (ETDEWEB)

Zacharias, G.L.; Miao, A.X.; Kalkan, A. [Charles River Analytics Inc., Cambridge, MA (United States)] [and others]

    1995-04-01

Continuing advances in real-time computational capabilities will support enhanced levels of smart automation and AI-based decision-aiding systems in the nuclear power plant (NPP) control room of the future. To support development of these aids, we describe in this paper a research tool, and more specifically, a quantitative metric, to assess the impact of proposed automation/aiding concepts in a manner that can account for a number of interlinked factors in the control room environment. In particular, we describe a cognitive operator/plant model that serves as a framework for integrating the operator's information-processing capabilities with his procedural knowledge, to provide insight as to how situations are assessed by the operator, decisions made, procedures executed, and communications conducted. Our focus is on the situation assessment (SA) behavior of the operator, the development of a quantitative metric reflecting overall operator awareness, and the use of this metric in evaluating automation/aiding options. We describe the results of a model-based simulation of a selected emergency scenario, and metric-based evaluation of a range of contemplated NPP control room automation/aiding options. The results demonstrate the feasibility of model-based analysis of contemplated control room enhancements, and highlight the need for empirical validation.

  9. Scalar-metric and scalar-metric-torsion gravitational theories

    International Nuclear Information System (INIS)

    Aldersley, S.J.

    1977-01-01

    The techniques of dimensional analysis and of the theory of tensorial concomitants are employed to study field equations in gravitational theories which incorporate scalar fields of the Brans-Dicke type. Within the context of scalar-metric gravitational theories, a uniqueness theorem for the geometric (or gravitational) part of the field equations is proven and a Lagrangian is determined which is uniquely specified by dimensional analysis. Within the context of scalar-metric-torsion gravitational theories a uniqueness theorem for field Lagrangians is presented and the corresponding Euler-Lagrange equations are given. Finally, an example of a scalar-metric-torsion theory is presented which is similar in many respects to the Brans-Dicke theory and the Einstein-Cartan theory

  10. Metrics of quantum states

    International Nuclear Information System (INIS)

    Ma Zhihao; Chen Jingling

    2011-01-01

    In this work we study metrics of quantum states, which are natural generalizations of the usual trace metric and Bures metric. Some useful properties of the metrics are proved, such as the joint convexity and contractivity under quantum operations. Our result has a potential application in studying the geometry of quantum states as well as the entanglement detection.
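For concreteness, the usual trace metric mentioned here, D(ρ, σ) = ½ Tr|ρ − σ|, can be computed from the eigenvalues of the Hermitian difference. A small NumPy sketch on a qubit example:

```python
import numpy as np

def trace_distance(rho, sigma):
    """Trace metric D(rho, sigma) = (1/2) Tr|rho - sigma|, computed
    from the eigenvalues of the Hermitian difference."""
    eigvals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * float(np.sum(np.abs(eigvals)))

# Qubit example: |0><0| versus the maximally mixed state I/2.
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
sigma = np.eye(2) / 2
dist = trace_distance(rho, sigma)  # eigenvalues are +-1/2, so D = 1/2
```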

  11. Classification and Evaluation of Mobility Metrics for Mobility Model Movement Patterns in Mobile Ad-Hoc Networks

    OpenAIRE

Santosh Kumar; S C Sharma; Bhupendra Suman

    2011-01-01

A mobile ad hoc network is a collection of self-configuring mobile devices connected by wireless links, forming an arbitrary topology with multihop wireless connectivity and without the use of existing infrastructure. It requires an efficient dynamic routing protocol to determine routes, following a set of rules that enables two or more devices to communicate with each other. This paper basically classifies and evaluates the mobility metrics into two categories-...

  12. WE-E-213CD-11: A New Automatically Generated Metric for Evaluating the Spatial Precision of Deformable Image Registrations: The Distance Discordance Metric.

    Science.gov (United States)

    Saleh, Z; Apte, A; Sharp, G; Deasy, J

    2012-06-01

    We propose a new metric called Distance Discordance (DD), which is defined as the distance between two anatomic points from two moving images, which are co-located on some reference image, when deformed onto another reference image. To demonstrate the concept of DD, we created a reference software phantom which contains two objects. The first object (1) consists of a hollow box with a fixed size core and variable wall thickness. The second object (2) consists of a solid box of fixed size and arbitrary location. 7 different variations of the fixed phantom were created. Each phantom was deformed onto every other phantom using two B-Spline DIR algorithms available in Elastix and Plastimatch. Voxels were sampled from the reference phantom [1], which were also deformed from moving phantoms [2…6], and we find the differences in their corresponding location on phantom [7]. Each voxel results in a distribution of DD values, which we call distance discordance histogram (DDH). We also demonstrate this concept in 8 Head & Neck patients. The two image registration algorithms produced two different DD results for the same phantom image set. The mean values of the DDH were slightly lower for Elastix (0-1.28 cm) as compared to the values produced by Plastimatch (0-1.43 cm). The combined DDH for the H&N patients followed a lognormal distribution with a mean of 0.45 cm and std. deviation of 0.42 cm. The proposed distance discordance (DD) metric is an easily interpretable, quantitative tool that can be used to evaluate the effect of inter-patient variability on the goodness of the registration in different parts of the patient anatomy. Therefore, it can be utilized to exclude certain images based on their DDH characteristics. In addition, this metric does not rely on 'ground truth' or the presence of contoured structures. Partially supported by NIH grant R01 CA85181. © 2012 American Association of Physicists in Medicine.
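The DD idea can be caricatured in a few lines: several moving images carry the same reference point onto a target image via their deformations, and DD is the scatter among the mapped locations (ideally zero). A toy sketch, where the deformation functions are hypothetical stand-ins for DIR output:

```python
import math

def distance_discordance(ref_points, deformations_to_target):
    """Toy sketch of the DD concept: each deformation maps a shared
    reference point onto the target image; the DD values are the
    pairwise distances between those mapped locations."""
    dds = []
    for p in ref_points:
        mapped = [f(p) for f in deformations_to_target]
        for i in range(len(mapped)):
            for j in range(i + 1, len(mapped)):
                dds.append(math.dist(mapped[i], mapped[j]))
    return dds  # the distribution of these values forms the DDH

# Three hypothetical deformations of the same points onto a target image.
f1 = lambda p: (p[0] + 0.1, p[1])
f2 = lambda p: (p[0], p[1] + 0.1)
f3 = lambda p: (p[0], p[1])
ddh = distance_discordance([(0.0, 0.0), (1.0, 1.0)], [f1, f2, f3])
```

Because only mapped point locations are compared, no ground-truth deformation or contoured structure is needed, matching the property the abstract highlights.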

  13. MESUR: USAGE-BASED METRICS OF SCHOLARLY IMPACT

    Energy Technology Data Exchange (ETDEWEB)

    BOLLEN, JOHAN [Los Alamos National Laboratory; RODRIGUEZ, MARKO A. [Los Alamos National Laboratory; VAN DE SOMPEL, HERBERT [Los Alamos National Laboratory

    2007-01-30

The evaluation of scholarly communication items is now largely a matter of expert opinion or metrics derived from citation data. Both approaches can fail to take into account the myriad of factors that shape scholarly impact. Usage data has emerged as a promising complement to existing methods of assessment, but the formal groundwork to reliably and validly apply usage-based metrics of scholarly impact is lacking. The Andrew W. Mellon Foundation-funded MESUR project constitutes a systematic effort to define, validate and cross-validate a range of usage-based metrics of scholarly impact by creating a semantic model of the scholarly communication process. The constructed model will serve as the basis for creating a large-scale semantic network that seamlessly relates citation, bibliographic and usage data from a variety of sources. A subsequent program that uses the established semantic network as a reference data set will determine the characteristics and semantics of a variety of usage-based metrics of scholarly impact. This paper outlines the architecture and methodology adopted by the MESUR project and its future direction.

  14. METRIC context unit architecture

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, R.O.

    1988-01-01

    METRIC is an architecture for a simple but powerful Reduced Instruction Set Computer (RISC). Its speed comes from the simultaneous processing of several instruction streams, with instructions from the various streams being dispatched into METRIC's execution pipeline as they become available for execution. The pipeline is thus kept full, with a mix of instructions for several contexts in execution at the same time. True parallel programming is supported within a single execution unit, the METRIC Context Unit. METRIC's architecture provides for expansion through the addition of multiple Context Units and of specialized Functional Units. The architecture thus spans a range of size and performance from a single-chip microcomputer up through large and powerful multiprocessors. This research concentrates on the specification of the METRIC Context Unit at the architectural level. Performance tradeoffs made during METRIC's design are discussed, and projections of METRIC's performance are made based on simulation studies.

  15. Introduction to the Special Collection of Papers on the San Luis Basin Sustainability Metrics Project: A Methodology for Evaluating Regional Sustainability

    Science.gov (United States)

    This paper introduces a collection of four articles describing the San Luis Basin Sustainability Metrics Project. The Project developed a methodology for evaluating regional sustainability. This introduction provides the necessary background information for the project, descripti...

  16. Evaluation of Daily Evapotranspiration Over Orchards Using METRIC Approach and Landsat Satellite Observations

    Science.gov (United States)

    He, R.; Jin, Y.; Daniele, Z.; Kandelous, M. M.; Kent, E. R.

    2016-12-01

The pistachio and almond acreage in California has been rapidly growing in the past 10 years, raising concerns about competition for limited water resources in California. A robust and cost-effective mapping of crop water use, mostly evapotranspiration (ET), by orchards is needed for improved farm-level irrigation management and regional water planning. METRIC™, a satellite-based surface energy balance approach, has been widely used to map field-scale crop ET, mostly over row crops. We here aim to apply METRIC with Landsat satellite observations over California's orchards and evaluate the ET estimates by comparing with field measurements in the South San Joaquin Valley, California. Reference ET of grass (ETo) from California Irrigation Management Information System (CIMIS) stations was used to estimate daily ET of commercial almond and pistachio orchards. Our comparisons showed that METRIC-Landsat daily ET estimates agreed well with ET measured by the eddy covariance and surface renewal stations, with an RMSE of 1.25 and a correlation coefficient of 0.84 for the pistachio orchard. A slight high bias of satellite-based ET estimates was found for both pistachio and almond orchards. We also found the time series of NDVI was highly correlated with ET temporal dynamics within each field, but the correlation was reduced to 0.56 when all fields were pooled together. Net radiation, however, remained highly correlated with ET across all the fields. The METRIC ET was able to distinguish the differences in ET between salt-affected and non-salt-affected pistachio orchards, e.g., mean daily ET during the growing season in salt-affected orchards was lower than that of the non-salt-affected one by 0.87 mm/day. The remote-sensing-based ET estimate will support a variety of state and local interests in water use and management, for both planning and regulatory/compliance purposes, and provide farmers observation-based guidance for site-specific and time-sensitive irrigation management.
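The agreement statistics reported here (RMSE and correlation coefficient) are straightforward to reproduce. The daily ET values below are made-up placeholders, not data from the study:

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between paired estimates and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def pearson_r(pred, obs):
    """Pearson correlation coefficient between two paired series."""
    n = len(obs)
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (sp * so)

# Hypothetical daily ET (mm/day): satellite estimates vs. field measurements.
est = [4.1, 5.0, 6.2, 5.6, 4.8]
meas = [3.8, 4.9, 5.7, 5.5, 4.4]
```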

  17. Introduction to the special collection of papers on the San Luis Basin Sustainability Metrics Project: a methodology for evaluating regional sustainability.

    Science.gov (United States)

    Heberling, Matthew T; Hopton, Matthew E

    2012-11-30

    This paper introduces a collection of four articles describing the San Luis Basin Sustainability Metrics Project. The Project developed a methodology for evaluating regional sustainability. This introduction provides the necessary background information for the project, description of the region, overview of the methods, and summary of the results. Although there are a multitude of scientifically based sustainability metrics, many are data intensive, difficult to calculate, and fail to capture all aspects of a system. We wanted to see if we could develop an approach that decision-makers could use to understand if their system was moving toward or away from sustainability. The goal was to produce a scientifically defensible, but straightforward and inexpensive methodology to measure and monitor environmental quality within a regional system. We initiated an interdisciplinary pilot project in the San Luis Basin, south-central Colorado, to test the methodology. The objectives were: 1) determine the applicability of using existing datasets to estimate metrics of sustainability at a regional scale; 2) calculate metrics through time from 1980 to 2005; and 3) compare and contrast the results to determine if the system was moving toward or away from sustainability. The sustainability metrics, chosen to represent major components of the system, were: 1) Ecological Footprint to capture the impact and human burden on the system; 2) Green Net Regional Product to represent economic welfare; 3) Emergy to capture the quality-normalized flow of energy through the system; and 4) Fisher information to capture the overall dynamic order and to look for possible regime changes. The methodology, data, and results of each metric are presented in the remaining four papers of the special collection. Based on the results of each metric and our criteria for understanding the sustainability trends, we find that the San Luis Basin is moving away from sustainability. Although we understand

  18. Strength Pareto particle swarm optimization and hybrid EA-PSO for multi-objective optimization.

    Science.gov (United States)

    Elhossini, Ahmed; Areibi, Shawki; Dony, Robert

    2010-01-01

    This paper proposes an efficient particle swarm optimization (PSO) technique that can handle multi-objective optimization problems. It is based on the strength Pareto approach originally used in evolutionary algorithms (EA). The proposed modified particle swarm algorithm is used to build three hybrid EA-PSO algorithms to solve different multi-objective optimization problems. This algorithm and its hybrid forms are tested using seven benchmarks from the literature and the results are compared to the strength Pareto evolutionary algorithm (SPEA2) and a competitive multi-objective PSO using several metrics. The proposed algorithm shows a slower convergence, compared to the other algorithms, but requires less CPU time. Combining PSO and evolutionary algorithms leads to superior hybrid algorithms that outperform SPEA2, the competitive multi-objective PSO (MO-PSO), and the proposed strength Pareto PSO based on different metrics.
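The "strength Pareto" family of algorithms rests on Pareto dominance. A minimal sketch of the dominance test and non-dominated filtering for a minimization problem, illustrative only (the actual SPEA2/PSO machinery is much richer):

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization):
    no worse in every objective, strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = pareto_front(pts)  # (3, 3) is dominated by (2, 2)
```

Strength Pareto methods additionally assign each solution a fitness derived from how many archive members it dominates, which guides the swarm toward a well-spread front.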

  19. Observable traces of non-metricity: New constraints on metric-affine gravity

    Science.gov (United States)

    Delhom-Latorre, Adrià; Olmo, Gonzalo J.; Ronco, Michele

    2018-05-01

Relaxing the Riemannian condition to incorporate geometric quantities such as torsion and non-metricity may allow us to explore new physics associated with defects in a hypothetical space-time microstructure. Here we show that non-metricity produces observable effects in quantum fields in the form of 4-fermion contact interactions, thereby allowing us to constrain the scale of non-metricity to be greater than 1 TeV by using results on Bhabha scattering. Our analysis is carried out in the framework of a wide class of theories of gravity in the metric-affine approach. The bound obtained represents an improvement of several orders of magnitude over previous experimental constraints.

  20. Information theoretic methods for image processing algorithm optimization

    Science.gov (United States)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable with manual calibration; thus an automated approach is a must. We will discuss an information theory based metric for evaluation of algorithm adaptive characteristics ("adaptivity criterion") using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve a better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).

  1. Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.

    Science.gov (United States)

    Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng

    2017-12-01

How do we retrieve images accurately? Also, how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers who aim to build a novel image search engine. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated with each other. However, semantic gaps always exist between images' visual features and their semantics. Therefore, we utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set. Multimodal features, including click and visual features, are collected with these images. Next, a group of autoencoders is applied to obtain an initial distance metric in different visual spaces, and an MDML method is used to assign optimal weights for the different modalities. Next, we conduct alternating optimization to train the ranking model, which is used for the ranking of new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model to use multimodal features, including click features and visual features, in DML. We conducted experiments to analyze the proposed Deep-MDML on two benchmark data sets, and the results validate the effectiveness of the method.

  2. 3D–2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch

    International Nuclear Information System (INIS)

    De Silva, T; Ketcha, M D; Siewerdsen, J H; Uneri, A; Reaungamornrat, S; Kleinszig, G; Vogt, S; Aygun, N; Lo, S-F; Wolinsky, J-P

    2016-01-01

In image-guided spine surgery, robust three-dimensional to two-dimensional (3D–2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D–2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE > 30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE < 6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1–2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of > 14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE = 5.5 mm, 2.6 mm IQR) without manual masking and with an improved
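Among the four metrics, gradient correlation (GC) is the easiest to sketch: normalized cross-correlation applied to the gradient images. A simplified 2D illustration (the clinical implementation operates on simulated radiographs versus real ones, not raw arrays):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two same-shaped arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def gradient_correlation(img1, img2):
    """GC: average NCC of the row- and column-gradient images."""
    gy1, gx1 = np.gradient(img1)
    gy2, gx2 = np.gradient(img2)
    return 0.5 * (ncc(gx1, gx2) + ncc(gy1, gy2))

rng = np.random.default_rng(0)
img = rng.random((16, 16))
self_gc = gradient_correlation(img, img)                      # perfect match
shift_gc = gradient_correlation(img, np.roll(img, 4, axis=0)) # misaligned
```

Working on gradients rather than intensities is what gives these metrics some tolerance to content mismatch such as implants, since mismatched regions contribute mainly uncorrelated gradient energy.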

  3. Metrics Feedback Cycle: measuring and improving user engagement in gamified eLearning systems

    Directory of Open Access Journals (Sweden)

    Adam Atkins

    2017-12-01

Full Text Available This paper presents the identification, design and implementation of a set of metrics of user engagement in a gamified eLearning application. The 'Metrics Feedback Cycle' (MFC) is introduced as a formal process prescribing the iterative evaluation and improvement of application-wide engagement, using data collected from metrics as input to improve related engagement features. This framework was showcased using a gamified eLearning application as a case study. In this paper, we designed a prototype and tested it with thirty-six (N=36) students to validate the effectiveness of the MFC. The analysis and interpretation of metrics data shows that the gamification features had a positive effect on user engagement, and helped identify areas in which this could be improved. We conclude that the MFC has applications in gamified systems that seek to maximise engagement by iteratively evaluating implemented features against a set of evolving metrics.

  4. Developing a Metric for the Cost of Green House Gas Abatement

    Science.gov (United States)

    2017-02-28

    The authors introduce the levelized cost of carbon (LCC), a metric that can be used to evaluate MassDOT CO2 abatement projects in terms of their cost-effectiveness. The study presents ways in which the metric can be used to rank projects. The data ar...

  5. SU-C-BRB-05: Determining the Adequacy of Auto-Contouring Via Probabilistic Assessment of Ensuing Treatment Plan Metrics in Comparison with Manual Contours

    International Nuclear Information System (INIS)

    Nourzadeh, H; Watkins, W; Siebers, J; Ahmad, M

    2016-01-01

Purpose: To determine if auto-contour and manual-contour-based plans differ when evaluated with respect to probabilistic coverage metrics and biological model endpoints for prostate IMRT. Methods: Manual and auto-contours were created for 149 CT image sets acquired from 16 unique prostate patients. A single physician manually contoured all images. Auto-contouring was completed utilizing Pinnacle’s Smart Probabilistic Image Contouring Engine (SPICE). For each CT, three different 78 Gy/39 fraction 7-beam IMRT plans are created; PD with drawn ROIs, PAS with auto-contoured ROIs, and PM with auto-contoured OARs with the manually drawn target. For each plan, 1000 virtual treatment simulations with different sampled systematic errors for each simulation and a different sampled random error for each fraction were performed using our in-house GPU-accelerated robustness analyzer tool which reports the statistical probability of achieving dose-volume metrics, NTCP, TCP, and the probability of achieving the optimization criteria for both auto-contoured (AS) and manually drawn (D) ROIs. Metrics are reported for all possible cross-evaluation pairs of ROI types (AS,D) and planning scenarios (PD,PAS,PM). Bhattacharyya coefficient (BC) is calculated to measure the PDF similarities for the dose-volume metric, NTCP, TCP, and objectives with respect to the manually drawn contour evaluated on base plan (D-PD). Results: We observe high BC values (BC≥0.94) for all OAR objectives. BC values of max dose objective on CTV also signify high resemblance (BC≥0.93) between the distributions. On the other hand, BC values for CTV’s D95 and Dmin objectives are small for AS-PM, AS-PD. NTCP distributions are similar across all evaluation pairs, while TCP distributions of AS-PM, AS-PD sustain variations up to 6% compared to other evaluated pairs. Conclusion: No significant probabilistic differences are observed in the metrics when auto-contoured OARs are used. The prostate auto-contour needs
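The Bhattacharyya coefficient used here to compare metric distributions has a one-line definition over discrete, normalized histograms. A small illustrative sketch with made-up histograms:

```python
import math

def bhattacharyya(p, q):
    """BC = sum_i sqrt(p_i * q_i) for two normalized histograms:
    1.0 for identical distributions, 0.0 for disjoint support."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

identical = bhattacharyya([0.2, 0.5, 0.3], [0.2, 0.5, 0.3])  # exactly 1.0
shifted = bhattacharyya([0.2, 0.5, 0.3], [0.6, 0.3, 0.1])    # below 1.0
```

Values near 1 (such as the BC ≥ 0.94 reported above) therefore indicate that the two plan-metric distributions overlap almost completely.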

  6. SU-C-BRB-05: Determining the Adequacy of Auto-Contouring Via Probabilistic Assessment of Ensuing Treatment Plan Metrics in Comparison with Manual Contours

    Energy Technology Data Exchange (ETDEWEB)

    Nourzadeh, H; Watkins, W; Siebers, J; Ahmad, M [University of Virginia Health Systems, Charlottesville, VA (United States)

    2016-06-15

Purpose: To determine if auto-contour and manual-contour-based plans differ when evaluated with respect to probabilistic coverage metrics and biological model endpoints for prostate IMRT. Methods: Manual and auto-contours were created for 149 CT image sets acquired from 16 unique prostate patients. A single physician manually contoured all images. Auto-contouring was completed utilizing Pinnacle’s Smart Probabilistic Image Contouring Engine (SPICE). For each CT, three different 78 Gy/39 fraction 7-beam IMRT plans are created; PD with drawn ROIs, PAS with auto-contoured ROIs, and PM with auto-contoured OARs with the manually drawn target. For each plan, 1000 virtual treatment simulations with different sampled systematic errors for each simulation and a different sampled random error for each fraction were performed using our in-house GPU-accelerated robustness analyzer tool which reports the statistical probability of achieving dose-volume metrics, NTCP, TCP, and the probability of achieving the optimization criteria for both auto-contoured (AS) and manually drawn (D) ROIs. Metrics are reported for all possible cross-evaluation pairs of ROI types (AS,D) and planning scenarios (PD,PAS,PM). Bhattacharyya coefficient (BC) is calculated to measure the PDF similarities for the dose-volume metric, NTCP, TCP, and objectives with respect to the manually drawn contour evaluated on base plan (D-PD). Results: We observe high BC values (BC≥0.94) for all OAR objectives. BC values of max dose objective on CTV also signify high resemblance (BC≥0.93) between the distributions. On the other hand, BC values for CTV’s D95 and Dmin objectives are small for AS-PM, AS-PD. NTCP distributions are similar across all evaluation pairs, while TCP distributions of AS-PM, AS-PD sustain variations up to 6% compared to other evaluated pairs. Conclusion: No significant probabilistic differences are observed in the metrics when auto-contoured OARs are used. The prostate auto-contour needs

  7. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities, including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research from the Laboratories for the 21st Century (Labs21) program, sponsored by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
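    As a hedged illustration of the kind of whole-building metric such guides describe, site energy use intensity (EUI) in kBtu/ft2/yr can be computed from metered data as follows (the function name and the single-meter simplification are assumptions, not taken from the guide):

```python
KWH_TO_KBTU = 3.412  # site energy conversion factor, kWh -> kBtu

def site_eui_kbtu_per_ft2(annual_site_kwh, gross_area_ft2):
    """Whole-building site energy use intensity (kBtu/ft2/yr), a common
    first-pass benchmarking metric for laboratory buildings."""
    return annual_site_kwh * KWH_TO_KBTU / gross_area_ft2
```

    The computed value would then be compared against benchmarks for laboratories of a similar type and climate.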

  8. Fault Management Metrics

    Science.gov (United States)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops, being divided into two major classes: state estimation and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops in preserving the relevant system goals that they are intended to protect.
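    One simple reading of "probabilistically summed" is an occurrence-probability-weighted combination of per-failure detection and response probabilities. The following sketch is an illustrative assumption, not the paper's actual formulation:

```python
def fault_management_effectiveness(failure_modes):
    """Occurrence-weighted probability that a failure is both detected and
    successfully responded to. Each entry is a tuple:
    (p_occurrence, p_detection, p_successful_response)."""
    total = sum(p for p, _, _ in failure_modes)
    return sum(p * pd * pr for p, pd, pr in failure_modes) / total

# Hypothetical failure modes for illustration only
modes = [
    (0.7, 0.95, 0.90),  # frequent failure, well covered
    (0.3, 0.50, 0.80),  # rarer failure, poor detection coverage
]
effectiveness = fault_management_effectiveness(modes)
```

    A real system would compute such terms per control loop and per protected goal, as the abstract describes.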

  9. Completion of a Dislocated Metric Space

    Directory of Open Access Journals (Sweden)

    P. Sumati Kumari

    2015-01-01

    We provide a construction for the completion of a dislocated metric space (abbreviated d-metric space); we also prove that the completion of the metric associated with a d-metric coincides with the metric associated with the completion of the d-metric.

  10. Better Metrics to Automatically Predict the Quality of a Text Summary

    Directory of Open Access Journals (Sweden)

    Judith D. Schlesinger

    2012-09-01

    In this paper we demonstrate a family of metrics for estimating the quality of a text summary relative to one or more human-generated summaries. The improved metrics are based on features automatically computed from the summaries to measure content and linguistic quality. The features are combined using one of three methods: robust regression, non-negative least squares, or canonical correlation (an eigenvalue method). The new metrics significantly outperform the previous standard for automatic text summarization evaluation, ROUGE.
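    Of the three combination methods, non-negative least squares is the easiest to sketch. The multiplicative-update solver below is a numpy-only stand-in for a library NNLS routine (illustrative; the paper does not specify its solver, and the feature matrix here is a toy):

```python
import numpy as np

def nnls_multiplicative(A, b, iters=5000):
    """Non-negative least squares via multiplicative updates, a simple
    numpy-only stand-in for scipy.optimize.nnls. Assumes the feature
    matrix A and target scores b are non-negative."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.ones(A.shape[1])       # start from uniform positive weights
    Atb = A.T @ b
    AtA = A.T @ A
    for _ in range(iters):
        x = x * Atb / (AtA @ x + 1e-12)  # update keeps x >= 0
    return x

# Toy example: two automatically computed features, three summaries
# scored by humans; recover non-negative combination weights.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([0.5, 0.3, 0.8])
w = nnls_multiplicative(A, b)
```

    Non-negativity is attractive here because each feature is expected to contribute positively (or not at all) to predicted summary quality.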

  11. Multi-Metric Sustainability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cowlin, Shannon [National Renewable Energy Lab. (NREL), Golden, CO (United States); Heimiller, Donna [National Renewable Energy Lab. (NREL), Golden, CO (United States); Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Pless, Jacquelyn [National Renewable Energy Lab. (NREL), Golden, CO (United States); Munoz, David [Colorado School of Mines, Golden, CO (United States)

    2014-12-01

    A readily accessible framework that allows for evaluating impacts and comparing tradeoffs among factors in energy policy, expansion planning, and investment decision making is lacking. Recognizing this, the Joint Institute for Strategic Energy Analysis (JISEA) funded an exploration of multi-metric sustainability analysis (MMSA) to provide energy decision makers with a means to make more comprehensive comparisons of energy technologies. The resulting MMSA tool lets decision makers simultaneously compare technologies and potential deployment locations.

  12. Remarks on G-Metric Spaces

    Directory of Open Access Journals (Sweden)

    Bessem Samet

    2013-01-01

    In 2005, Mustafa and Sims (2006) introduced and studied a new class of generalized metric spaces, which are called G-metric spaces, as a generalization of metric spaces. We establish some useful propositions to show that many fixed point theorems on (nonsymmetric) G-metric spaces given recently by many authors follow directly from well-known theorems on metric spaces. Our technique can be easily extended to other results, as shown in the application.

  13. An evaluation of non-metric cranial traits used to estimate ancestry in a South African sample.

    Science.gov (United States)

    L'Abbé, E N; Van Rooyen, C; Nawrocki, S P; Becker, P J

    2011-06-15

    Establishing ancestry from a skeleton for forensic purposes has been shown to be difficult. The purpose of this paper is to address the application of thirteen non-metric traits to estimate ancestry in three South African groups, namely White, Black and "Coloured". In doing so, the frequency distributions of thirteen non-metric traits among South Africans are presented; the relationships of these non-metric traits with ancestry, sex and age at death are evaluated; and Kappa statistics are utilized to assess inter- and intra-rater reliability. Crania of 520 known individuals were obtained from four skeletal samples in South Africa: the Pretoria Bone Collection, the Raymond A. Dart Collection, the Kirsten Collection and the Student Bone Collection from the University of the Free State. Average age at death was 51, with an age range between 18 and 90. Thirteen commonly used non-metric traits from the face and jaw were scored; definitions and illustrations were taken from Hefner, Bass, and Hauser and De Stephano. Frequency distributions, ordinal regression and Cohen's Kappa statistics were performed as a means to assess population variation and repeatability. Frequency distributions were highly variable among South Africans. Twelve of the 13 variables had a statistically significant relationship with ancestry. Sex significantly affected only one variable, inter-orbital breadth, and age at death affected two (anterior nasal spine and alveolar prognathism). The interaction of ancestry and sex independently affected three variables (nasal bone contour, nasal breadth, and interorbital breadth). Seven traits had moderate to excellent repeatability, while poor scoring consistency was noted for six variables. Difficulties in repeating several of the trait scores may indicate either a need for refinement of the definitions, or that these character states do not adequately describe the observable morphology in the population. The application of the traditional experience-based approach

  14. Towards Optimal Transport Networks

    Directory of Open Access Journals (Sweden)

    Erik P. Vargo

    2010-08-01

    Our ultimate goal is to design transportation networks whose dynamic performance metrics (e.g. passenger throughput, passenger delay, and insensitivity to weather disturbances) are optimized. Here the focus is on optimizing static features of the network that are known to directly affect the network dynamics. First, we present simulation results which support a connection between maximizing the first non-trivial eigenvalue of a network's Laplacian and superior airport network performance. Then, we explore the effectiveness of a tabu search heuristic for optimizing this metric by comparing experimental results to theoretical upper bounds. We also consider generating upper bounds on a network's algebraic connectivity via the solution of semidefinite programming (SDP) relaxations. A modification of an existing subgraph extraction algorithm is implemented to explore the underlying regional structures in the U.S. airport network, with the hope that the resulting localized structures can be optimized independently and reconnected via a "backbone" network to achieve superior network performance.
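    The metric being optimized, the first non-trivial (second-smallest) eigenvalue of the graph Laplacian, can be computed directly for a small network. A minimal sketch (the example graphs are illustrative, not airport data):

```python
import numpy as np

def algebraic_connectivity(adjacency):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A
    (the first non-trivial Laplacian eigenvalue; larger values
    indicate a better-connected network)."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A     # degree matrix minus adjacency
    return float(np.sort(np.linalg.eigvalsh(L))[1])

# A 3-node path is less connected than a 3-node complete graph.
path3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```

    For a connected graph the smallest eigenvalue is always 0, so the second-smallest is the first one that carries connectivity information.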

  15. Metric-adjusted skew information

    DEFF Research Database (Denmark)

    Liang, Cai; Hansen, Frank

    2010-01-01

    We give a truly elementary proof of the convexity of metric-adjusted skew information following an idea of Effros. We extend earlier results of weak forms of superadditivity to general metric-adjusted skew information. Recently, Luo and Zhang introduced the notion of semi-quantum states on a bipartite system and proved superadditivity of the Wigner-Yanase-Dyson skew informations for such states. We extend this result to the general metric-adjusted skew information. We finally show that a recently introduced extension to parameter values 1 ... of (unbounded) metric-adjusted skew information.

  16. METRIC CHARACTERISTICS OF SOME TESTS FOR EVALUATION OF AEROBIC AND ANAEROBIC CAPACITIES

    Directory of Open Access Journals (Sweden)

    Slobodan Stojiljković

    2006-06-01

    This research was aimed at checking the metric characteristics of some specific functional tests often used in practice for the evaluation of aerobic and anaerobic capacities and muscular capabilities. Changes in the functional abilities were tracked on the basis of several repeated measurements of the same test on a sample of 110 examinees, students of the nursing school "Dr Milenko Hadzic" in Nis, 17 years of age (± 6 months), regularly attending physical education classes. Two measuring instruments were tested: the MARGARIA TEST and the HARVARD STEP TEST. The reliability of these tests was evaluated on the basis of five successive measurements using the Spearman-Brown method, based on determining the coefficients of determination of all measurements and of the first main component H1. The outcome revealed high reliability of the results of most of the measurements and of the first main component H1: the obtained reliability was 91.2% for the MARGARIA TEST (anaerobic capacity) and 93.4% for the HARVARD STEP TEST (aerobic capacity).
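    The Spearman-Brown prophecy formula underlying this kind of reliability analysis predicts the reliability of a measure lengthened (repeated) k times from the reliability of a single measurement. A small sketch (the paper's exact computation is not shown, so this is the textbook form, not necessarily theirs):

```python
def spearman_brown(r_single, k):
    """Spearman-Brown prophecy formula: predicted reliability of a
    measurement repeated k times, given the reliability r_single of
    a single measurement."""
    return k * r_single / (1 + (k - 1) * r_single)
```

    The formula shows why repeated measurements raise reliability: each added trial averages out trial-specific error.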

  17. H-Metric: Characterizing Image Datasets via Homogenization Based on KNN-Queries

    Directory of Open Access Journals (Sweden)

    Welington M da Silva

    2012-01-01

    Precision-Recall is one of the main metrics for evaluating content-based image retrieval techniques. However, it does not provide an ample perception of the properties of an image dataset immersed in a metric space. In this work, we describe an alternative metric named H-Metric, which is determined along a sequence of controlled modifications in the image dataset. The process is named homogenization and works by altering the homogeneity characteristics of the classes of the images. The result is a process that measures how hard it is to deal with a set of images with respect to content-based retrieval, offering support in the task of analyzing configurations of distance functions and feature extractors.

  18. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    Science.gov (United States)

    Post, J. V.

    1981-01-01

    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  19. Evaluation of performance metrics of leagile supply chain through fuzzy MCDM

    Directory of Open Access Journals (Sweden)

    D. Venkata Ramana

    2013-07-01

    Leagile supply chain management has emerged as a proactive approach for improving the business value of companies. Companies that face volatile and unpredictable market demand for their products must pioneer leagile supply chain strategies to remain competitive and meet the various demands of customers. There are many approaches to performance metrics for supply chains in general, yet little investigation has identified the reliability and validity of such approaches, particularly for leagile supply chains. This study examines the consistency of these approaches by confirmatory factor analysis, which determines the adoption of performance dimensions. The prioritization of performance enablers under these dimensions of the leagile supply chain in small and medium enterprises is determined through the fuzzy logarithmic least square method (LLSM). The study developed a generic hierarchy model for decision-makers who can prioritize supply chain metrics under the performance dimensions of a leagile supply chain.

  20. Metric Accuracy Evaluation of Dense Matching Algorithms in Archeological Applications

    Directory of Open Access Journals (Sweden)

    C. Re

    2011-12-01

    In the cultural heritage field, the recording and documentation of small and medium size objects with very detailed Digital Surface Models (DSMs) is readily possible through the use of high resolution and high precision triangulation laser scanners. 3D surface recording of archaeological objects can be easily achieved in museums; however, this type of record can be quite expensive. In many cases photogrammetry can provide a viable alternative for the generation of DSMs. The photogrammetric procedure has some benefits with respect to laser survey. The research described in this paper sets out to verify the reconstruction accuracy of DSMs of some archaeological artifacts obtained by photogrammetric survey. The experimentation was carried out on objects preserved in the Petrie Museum of Egyptian Archaeology at University College London (UCL). DSMs produced by two photogrammetric software packages are compared with the digital 3D model obtained by a state-of-the-art triangulation color laser scanner. Intercomparison between the generated DSMs has allowed an evaluation of the metric accuracy of the photogrammetric approach applied to archaeological documentation and of the precision performance of the two software packages.

  1. A contest of sensors in close range 3D imaging: performance evaluation with a new metric test object

    Directory of Open Access Journals (Sweden)

    M. Hess

    2014-06-01

    An independent means of 3D image quality assessment is introduced, addressing non-professional users of sensors and freeware, a domain largely characterized by closed-source software and by the absence of quality metrics for processing steps such as alignment. A performance evaluation of commercially available, state-of-the-art close range 3D imaging technologies is demonstrated with the help of a newly developed Portable Metric Test Artefact. The use of this test object provides quality control by a quantitative assessment of 3D imaging sensors. It will enable users to give precise specifications of the spatial resolution and geometry recording they expect as the outcome of their 3D digitizing process. This will lead to the creation of high-quality 3D digital surrogates and 3D digital assets. The paper is presented in the form of a competition of teams, and a possible winner will emerge.

  2. Development of quality metrics for ambulatory pediatric cardiology: Infection prevention.

    Science.gov (United States)

    Johnson, Jonathan N; Barrett, Cindy S; Franklin, Wayne H; Graham, Eric M; Halnon, Nancy J; Hattendorf, Brandy A; Krawczeski, Catherine D; McGovern, James J; O'Connor, Matthew J; Schultz, Amy H; Vinocur, Jeffrey M; Chowdhury, Devyani; Anderson, Jeffrey B

    2017-12-01

    In 2012, the American College of Cardiology's (ACC) Adult Congenital and Pediatric Cardiology Council established a program to develop quality metrics to guide ambulatory practices for pediatric cardiology. The council chose five areas on which to focus their efforts: chest pain, Kawasaki Disease, tetralogy of Fallot, transposition of the great arteries after arterial switch, and infection prevention. Here, we describe the process, evaluation, and results of the Infection Prevention Committee's metric design effort. The infection prevention metrics team consisted of 12 members from 11 institutions in North America. The group agreed to work on specific infection prevention topics including antibiotic prophylaxis for endocarditis, rheumatic fever, and asplenia/hyposplenism; influenza vaccination and respiratory syncytial virus prophylaxis (palivizumab); preoperative methods to reduce intraoperative infections; vaccinations after cardiopulmonary bypass; hand hygiene; and testing to identify splenic function in patients with heterotaxy. An extensive literature review was performed. When available, previously published guidelines were used fully in determining metrics. The committee chose eight metrics to submit to the ACC Quality Metric Expert Panel for review. Ultimately, metrics regarding hand hygiene and influenza vaccination recommendations for patients did not pass the RAND analysis. Both endocarditis prophylaxis metrics and the RSV/palivizumab metric passed the RAND analysis but fell out during the open comment period. Three metrics passed all analyses: those for antibiotic prophylaxis in patients with heterotaxy/asplenia, for influenza vaccination compliance in healthcare personnel, and for adherence to recommended regimens of secondary prevention of rheumatic fever. The lack of convincing data to guide quality improvement initiatives in pediatric cardiology is widespread, particularly in infection prevention. 
Despite this, three metrics were

  3. The metric system: An introduction

    Science.gov (United States)

    Lumley, Susan M.

    On 13 Jul. 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on 25 Jul. 1991, President George Bush signed Executive Order 12770 which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first we examine the reasons behind the nation's conversion to the metric system. The second part of this report is on applying the metric system.

  4. The metric system: An introduction

    Energy Technology Data Exchange (ETDEWEB)

    Lumley, S.M.

    1995-05-01

    On July 13, 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on July 25, 1991, President George Bush signed Executive Order 12770 which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first we examine the reasons behind the nation's conversion to the metric system. The second part of this report is on applying the metric system.

  5. Social Media Metrics Importance and Usage Frequency in Latvia

    OpenAIRE

    Ronalds Skulme

    2017-01-01

    Purpose of the article: The purpose of this paper was to explore which social media marketing metrics are most often used by, and are most important to, marketing experts in Latvia, and which can be used to evaluate marketing campaign effectiveness. Methodology/methods: In order to achieve the aim of this paper, several theoretical and practical research methods were used, such as theoretical literature analysis, surveying and grouping. First of all, theoretical research about social media metrics was...

  6. A new universal colour image fidelity metric

    NARCIS (Netherlands)

    Toet, A.; Lucassen, M.P.

    2003-01-01

    We extend a recently introduced universal grayscale image quality index to a newly developed perceptually decorrelated colour space. The resulting colour image fidelity metric quantifies the distortion of a processed colour image relative to its original version. We evaluated the new colour image

  7. Attack-Resistant Trust Metrics

    Science.gov (United States)

    Levien, Raph

    The Internet is an amazingly powerful tool for connecting people together, unmatched in human history. Yet, with that power comes great potential for spam and abuse. Trust metrics are an attempt to compute which people are trustworthy and which are likely attackers. This chapter presents two specific trust metrics developed and deployed on the Advogato website, a community blog for free software developers. This real-world experience demonstrates that the trust metrics fulfilled their goals, but that for good results, it is important to match the assumptions of the abstract trust metric computation to the real-world implementation.

  8. Research on quality metrics of wireless adaptive video streaming

    Science.gov (United States)

    Li, Xuefei

    2018-04-01

    With the development of wireless networks and intelligent terminals, video traffic has increased dramatically. Adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, a good QoS (Quality of Service) of the wireless network does not always guarantee that all customers have a good experience. Thus, new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate the quality metrics of wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and multi-rate video generator is established. Based on this platform, a PSNR model, SSIM model and Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC and RMSE), which are used to compare subjective and predicted MOS (Mean Opinion Score), are calculated. From these performance metrics, the monotonicity, linearity and accuracy of the quality metrics can be observed.
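    The three performance metrics can be computed with plain numpy. A minimal sketch (the rank-based SROCC below ignores tie handling, which full implementations such as scipy.stats.spearmanr provide):

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient (linearity)."""
    return float(np.corrcoef(x, y)[0, 1])

def srocc(x, y):
    """Spearman rank-order correlation (monotonicity), computed as the
    PLCC of the ranks; no tie handling."""
    rank = lambda v: np.argsort(np.argsort(v))
    return plcc(rank(np.asarray(x)), rank(np.asarray(y)))

def rmse(x, y):
    """Root-mean-square error between predicted and subjective MOS (accuracy)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.sqrt(np.mean((x - y) ** 2)))
```

    SROCC captures monotonicity, PLCC linearity, and RMSE accuracy, matching the three properties the abstract names.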

  9. The influence of soil properties and nutrients on conifer forest growth in Sweden, and the first steps in developing a nutrient availability metric

    Science.gov (United States)

    Van Sundert, Kevin; Horemans, Joanna A.; Stendahl, Johan; Vicca, Sara

    2018-06-01

    ... However, the IIASA metric was unrelated to normalized forest productivity across Sweden (R2 = 0.00-0.01) because the soil factors under consideration were not optimally implemented according to the Swedish data, and because the soil C : N ratio was not included. Using two methods (each one based on a different way of normalizing productivity for climate), we adjusted this metric by incorporating soil C : N and modifying the relationship between SOC and nutrient availability in view of the observed relationships across our database. In contrast to the IIASA metric, the adjusted metrics explained some variation in normalized productivity in the database (R2 = 0.03-0.21, depending on the applied method). A test for five manually selected local fertility gradients in our database revealed a significant and stronger relationship between the adjusted metrics and productivity for each of the gradients (R2 = 0.09-0.38). This study thus shows for the first time how nutrient availability metrics can be evaluated and adjusted for a particular ecosystem type, using a large-scale database.

  10. Symmetries of the dual metrics

    International Nuclear Information System (INIS)

    Baleanu, D.

    1998-01-01

    The geometric duality between the metric gμν and a Killing tensor Kμν is studied. The conditions were found under which the symmetries of the metric gμν and the dual metric Kμν are the same. Dual spinning space was constructed without introduction of torsion. The general results are applied to the case of the Kerr-Newman metric.

  11. Science as Knowledge, Practice, and Map Making: The Challenge of Defining Metrics for Evaluating and Improving DOE-Funded Basic Experimental Science

    Energy Technology Data Exchange (ETDEWEB)

    Bodnarczuk, M.

    1993-03-01

    Industrial R&D laboratories have been surprisingly successful in developing performance objectives and metrics that convincingly show that planning, management, and improvement techniques can add value to the actual output of R&D organizations. In this paper, I will discuss the more difficult case of developing analogous constructs for DOE-funded non-nuclear, non-weapons basic research, or, as I will refer to it, basic experimental science. Unlike most industrial R&D or the bulk of applied science performed at the National Renewable Energy Laboratory (NREL), the purpose of basic experimental science is producing new knowledge (usually published in professional journals) that has no immediate application to the first link (the R) of a planned R&D chain. Consequently, performance objectives and metrics are far more difficult to define. My claim is that if one can successfully define metrics for evaluating and improving DOE-funded basic experimental science (which is the most difficult case), then defining such constructs for DOE-funded applied science should be much less problematic. With the publication of the DOE Standard - Implementation Guide for Quality Assurance Programs for Basic and Applied Research (DOE-ER-STD-6001-92) and the development of a conceptual framework for integrating all the DOE orders, we need to move aggressively toward the threefold next phase: (1) focusing the management elements found in DOE-ER-STD-6001-92 on the main output of national laboratories, the experimental science itself; (2) developing clearer definitions of basic experimental science as practice, not just knowledge; and (3) understanding the relationship between the metrics that scientists use for evaluating the performance of DOE-funded basic experimental science, the management elements of DOE-ER-STD-6001-92, and the notion of continuous improvement.

  12. Metric properties of the Utrecht Scale for Evaluation of Rehabilitation-Participation (USER-Participation) in persons with spinal cord injury living in Switzerland

    NARCIS (Netherlands)

    Mader, Luzius; Post, Marcel W M; Ballert, Carolina S; Michel, Gisela; Stucki, Gerold; Brinkhof, Martin W G

    2016-01-01

    OBJECTIVE: To examine the metric properties of the Utrecht Scale for Evaluation of Rehabilitation-Participation (USER-Participation) in persons with spinal cord injury in Switzerland from a classical and item response theory perspective. DESIGN: Cross-sectional survey. SUBJECTS: Persons with spinal

  13. Determination of a Screening Metric for High Diversity DNA Libraries.

    Science.gov (United States)

    Guido, Nicholas J; Handerson, Steven; Joseph, Elaine M; Leake, Devin; Kung, Li A

    2016-01-01

    The fields of antibody engineering, enzyme optimization and pathway construction rely increasingly on screening complex variant DNA libraries. These highly diverse libraries allow researchers to sample a maximized sequence space; and therefore, more rapidly identify proteins with significantly improved activity. The current state of the art in synthetic biology allows for libraries with billions of variants, pushing the limits of researchers' ability to qualify libraries for screening by measuring the traditional quality metrics of fidelity and diversity of variants. Instead, when screening variant libraries, researchers typically use a generic, and often insufficient, oversampling rate based on a common rule-of-thumb. We have developed methods to calculate a library-specific oversampling metric, based on fidelity, diversity, and representation of variants, which informs researchers, prior to screening the library, of the amount of oversampling required to ensure that the desired fraction of variant molecules will be sampled. To derive this oversampling metric, we developed a novel alignment tool to efficiently measure frequency counts of individual nucleotide variant positions using next-generation sequencing data. Next, we apply a method based on the "coupon collector" probability theory to construct a curve of upper bound estimates of the sampling size required for any desired variant coverage. The calculated oversampling metric will guide researchers to maximize their efficiency in using highly variant libraries.

  14. Determination of a Screening Metric for High Diversity DNA Libraries.

    Directory of Open Access Journals (Sweden)

    Nicholas J Guido

    The fields of antibody engineering, enzyme optimization and pathway construction rely increasingly on screening complex variant DNA libraries. These highly diverse libraries allow researchers to sample a maximized sequence space; and therefore, more rapidly identify proteins with significantly improved activity. The current state of the art in synthetic biology allows for libraries with billions of variants, pushing the limits of researchers' ability to qualify libraries for screening by measuring the traditional quality metrics of fidelity and diversity of variants. Instead, when screening variant libraries, researchers typically use a generic, and often insufficient, oversampling rate based on a common rule-of-thumb. We have developed methods to calculate a library-specific oversampling metric, based on fidelity, diversity, and representation of variants, which informs researchers, prior to screening the library, of the amount of oversampling required to ensure that the desired fraction of variant molecules will be sampled. To derive this oversampling metric, we developed a novel alignment tool to efficiently measure frequency counts of individual nucleotide variant positions using next-generation sequencing data. Next, we apply a method based on the "coupon collector" probability theory to construct a curve of upper bound estimates of the sampling size required for any desired variant coverage. The calculated oversampling metric will guide researchers to maximize their efficiency in using highly variant libraries.
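    The coupon-collector model behind this oversampling metric gives a quick estimate of the sampling size needed for a target variant coverage. A hedged sketch for the idealized equimolar case only (the paper's actual metric also accounts for fidelity and representation bias, which this ignores):

```python
import math

def expected_draws_for_coverage(n_variants, coverage):
    """Approximate expected number of uniform random clone picks needed
    to observe a given fraction of n equally likely variants, from the
    coupon-collector model: n * ln(1 / (1 - coverage))."""
    if not 0 <= coverage < 1:
        raise ValueError("coverage must be in [0, 1)")
    return n_variants * math.log(1.0 / (1.0 - coverage))

# e.g. seeing 95% of a 1000-variant library takes roughly 3x oversampling
draws_95 = expected_draws_for_coverage(1000, 0.95)
```

    The logarithm explains why the last few percent of coverage dominate the screening budget: the required draws diverge as coverage approaches 100%.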

  15. Metrics for comparing plasma mass filters

    Energy Technology Data Exchange (ETDEWEB)

    Fetterman, Abraham J.; Fisch, Nathaniel J. [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08540 (United States)

    2011-10-15

    High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.
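As a hedged aside, the separative power referred to above has a conventional form in isotope-separation theory (this is standard background, an assumption here rather than a quantity defined in the abstract). It is built on the Dirac value function:

```latex
V(x) = (2x - 1)\,\ln\!\frac{x}{1-x}, \qquad
\delta U = L\left[\theta\,V(x_p) + (1-\theta)\,V(x_t) - V(x_f)\right],
```

where L is the feed rate, θ the cut, and x_f, x_p, x_t the feed, product and tails fractions of the species of interest. The metric proposed in the paper then normalizes the separative power δU by the device volume so that filters of different sizes can be compared.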

  16. Metrics for comparing plasma mass filters

    International Nuclear Information System (INIS)

    Fetterman, Abraham J.; Fisch, Nathaniel J.

    2011-01-01

    High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.

  17. Impact of region contouring variability on image-based focal therapy evaluation

    Science.gov (United States)

    Gibson, Eli; Donaldson, Ian A.; Shah, Taimur T.; Hu, Yipeng; Ahmed, Hashim U.; Barratt, Dean C.

    2016-03-01

    Motivation: Focal therapy is an emerging low-morbidity treatment option for low-intermediate risk prostate cancer; however, challenges remain in accurately delivering treatment to specified targets and determining treatment success. Registered multi-parametric magnetic resonance imaging (MPMRI) acquired before and after treatment can support focal therapy evaluation and optimization; however, contouring variability, when defining the prostate, the clinical target volume (CTV) and the ablation region in images, reduces the precision of quantitative image-based focal therapy evaluation metrics. To inform the interpretation and clarify the limitations of such metrics, we investigated inter-observer contouring variability and its impact on four metrics. Methods: Pre-therapy and 2-week-post-therapy standard-of-care MPMRI were acquired from 5 focal cryotherapy patients. Two clinicians independently contoured, on each slice, the prostate (pre- and post-treatment) and the dominant index lesion CTV (pre-treatment) in the T2-weighted MRI, and the ablated region (post-treatment) in the dynamic-contrast-enhanced MRI. For each combination of clinician contours, post-treatment images were registered to pre-treatment images using a 3D biomechanical-model-based registration of prostate surfaces, and four metrics were computed: the proportion of the target tissue region that was ablated and the target:ablated region volume ratio for each of two targets (the CTV and an expanded planning target volume). Variance components analysis was used to measure the contribution of each type of contour to the variance in the therapy evaluation metrics. Conclusions: 14-23% of evaluation metric variance was attributable to contouring variability (including 6-12% from ablation region contouring); reducing this variability could improve the precision of focal therapy evaluation metrics.
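The variance components analysis used in this record can be sketched for the simplest balanced design. The numbers below are invented for illustration (they are not the study's data), and the simple two-factor decomposition stands in for whatever mixed-effects model the authors actually fitted:

```python
import numpy as np

# Hypothetical evaluation-metric values (e.g., fraction of CTV ablated) for
# each combination of prostate-contour observer (rows) and ablation-contour
# observer (columns); two observers each, mirroring the study design.
m = np.array([[0.82, 0.74],
              [0.88, 0.79]])

grand = m.mean()
var_prostate = ((m.mean(axis=1) - grand) ** 2).mean()  # row main effect
var_ablation = ((m.mean(axis=0) - grand) ** 2).mean()  # column main effect
var_total = ((m - grand) ** 2).mean()
var_resid = var_total - var_prostate - var_ablation    # interaction/residual

for name, v in [("prostate contour", var_prostate),
                ("ablation contour", var_ablation),
                ("interaction/residual", var_resid)]:
    print(f"{name}: {100 * v / var_total:.1f}% of metric variance")
```

The fraction attributable to each contour type is what lets a study report statements like "6-12% of variance came from ablation region contouring."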

  18. A PEG Construction of LDPC Codes Based on the Betweenness Centrality Metric

    Directory of Open Access Journals (Sweden)

    BHURTAH-SEEWOOSUNGKUR, I.

    2016-05-01

    Full Text Available Progressive Edge Growth (PEG) constructions are usually based on optimizing the distance metric by using various methods. In this work however, the distance metric is replaced by a different one, namely the betweenness centrality metric, which was shown to enhance routing performance in wireless mesh networks. A new type of PEG construction for Low-Density Parity-Check (LDPC) codes is introduced based on the betweenness centrality metric borrowed from social networks terminology, given that the bipartite graph describing the LDPC code is analogous to a network of nodes. The algorithm is very efficient in filling edges on the bipartite graph by adding its connections in an edge-by-edge manner. The smallest graph size the new code can construct surpasses that obtained from a modified PEG algorithm, the RandPEG algorithm. To the best of the authors' knowledge, this paper produces the best regular LDPC column-weight-two graphs. In addition, the technique proves to be competitive in terms of error-correcting performance. When compared to MacKay, PEG and other recent modified-PEG codes, the algorithm gives better performance at high SNR due to its particular edge and local graph properties.
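Betweenness centrality itself, the metric borrowed from social network analysis, can be computed with Brandes' algorithm. The sketch below is a generic illustration on a toy bipartite (variable/check) graph, not the authors' PEG implementation:

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for unweighted betweenness centrality.
    adj: dict mapping node -> list of neighbours."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, tracking shortest-path counts and predecessors.
        dist = {v: -1 for v in adj}
        sigma = {v: 0 for v in adj}
        pred = {v: [] for v in adj}
        dist[s], sigma[s] = 0, 1
        order, q = [], deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Back-propagate path dependencies.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc  # divide by 2 for undirected interpretation

# Toy bipartite graph v1-c1-v2-c2-v3: v2 bridges the two check nodes,
# so it lies on the most shortest paths.
g = {"v1": ["c1"], "v2": ["c1", "c2"], "v3": ["c2"],
     "c1": ["v1", "v2"], "c2": ["v2", "v3"]}
scores = betweenness(g)
print(max(scores, key=scores.get))  # -> v2
```

In a PEG-style construction, such scores could rank candidate check nodes when placing each new edge.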

  19. Development and design optimization of water hydraulic manipulator for ITER

    International Nuclear Information System (INIS)

    Kekaelaeinen, Teemu; Mattila, Jouni; Virvalo, Tapio

    2009-01-01

    This paper describes one of the research projects carried out in The Preparation of Remote Handling Engineers for ITER (PREFIT) program within the European Fusion Training Scheme (EFTS). This research project focuses on the design and optimization of water hydraulic manipulators used to test several remote handling tasks of ITER at Divertor Test Platform 2 (DTP2), Tampere, Finland, and later in ITER. In this project, a water hydraulic manipulator designed and built by the Department of Intelligent Hydraulics and Automation at Tampere University of Technology, Finland (TUT/IHA) is further optimized as a case study for a given manipulator requirement specification, in order to illustrate and verify the developed comprehensive design guidelines and performance metrics. Without meaningful manipulator performance parameters, the evaluation of alternative robot manipulator designs remains ad hoc at best. Therefore, more comprehensive design guidelines and performance metrics are needed for comparing and improving existing manipulators against task requirements, or for comparing different digital prototypes at the early design phase of manipulators. In this paper the description of the project, its background and developments are presented and discussed.

  20. Holographic Spherically Symmetric Metrics

    Science.gov (United States)

    Petri, Michael

    The holographic principle (HP) conjectures that the maximum number of degrees of freedom of any realistic physical system is proportional to the system's boundary area. The HP has its roots in the study of black holes and has recently been applied to cosmological solutions. In this article we apply the HP to spherically symmetric static space-times. We find that any regular spherically symmetric object saturating the HP is subject to tight constraints on the (interior) metric, energy-density, temperature and entropy-density. Whenever gravity can be described by a metric theory, gravity is macroscopically scale invariant and the laws of thermodynamics hold locally and globally, the (interior) metric of a regular holographic object is uniquely determined up to a constant factor and the interior matter-state must follow well defined scaling relations. When the metric theory of gravity is general relativity, the interior matter has an overall string equation of state (EOS) and a unique total energy-density. Thus the holographic metric derived in this article can serve as a simple interior 4D realization of Mathur's string fuzzball proposal. Some properties of the holographic metric and its possible experimental verification are discussed. The geodesics of the holographic metric describe an isotropically expanding (or contracting) universe with a nearly homogeneous matter-distribution within the local Hubble volume. Due to the overall string EOS the active gravitational mass-density is zero, resulting in a coasting expansion with Ht = 1, which is compatible with the recent GRB-data.

  1. An optimization-based framework for anisotropic simplex mesh adaptation

    Science.gov (United States)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

    We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.
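To make the mesh-metric duality concrete (standard notation, assumed here rather than quoted from the paper): a symmetric positive-definite tensor field M(x) prescribes element sizes through the metric length of an edge e,

```latex
\ell_M(e) \;=\; \int_0^1 \sqrt{\, e^{\top} M\big(x(t)\big)\, e \,}\; dt ,
```

and a mesh is ideally adapted when every edge has unit metric length. The affine-invariant manipulation works with the matrix logarithm of M, so a gradient step of the form M ← exp(log M − η ∇E) stays inside the cone of symmetric positive-definite tensors.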

  2. A comparison of evaluation metrics for biomedical journals, articles, and websites in terms of sensitivity to topic.

    Science.gov (United States)

    Fu, Lawrence D; Aphinyanaphongs, Yindalon; Wang, Lily; Aliferis, Constantin F

    2011-08-01

    Evaluating the quality of the biomedical literature and of health-related websites involves challenging information retrieval tasks. Current commonly used methods include impact factor for journals, PubMed's clinical query filters and machine learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering the topic, and it is unknown how performance varies for specific topics or focused searches. Clinicians, researchers, and users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods for a variety of topics. Impact factor, clinical query filters, and PageRank vary widely across different topics while a topic-specific impact factor and machine learning-based filter models are more stable. The results demonstrate that a method may perform excellently on average but struggle when used on a number of narrower topics. Topic-adjusted metrics and other topic robust methods have an advantage in such situations. Users of traditional topic-sensitive metrics should be aware of their limitations. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. A Comparison of Evaluation Metrics for Biomedical Journals, Articles, and Websites in Terms of Sensitivity to Topic

    Science.gov (United States)

    Fu, Lawrence D.; Aphinyanaphongs, Yindalon; Wang, Lily; Aliferis, Constantin F.

    2011-01-01

    Evaluating the quality of the biomedical literature and of health-related websites involves challenging information retrieval tasks. Current commonly used methods include impact factor for journals, PubMed's clinical query filters and machine learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering the topic, and it is unknown how performance varies for specific topics or focused searches. Clinicians, researchers, and users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods for a variety of topics. Impact factor, clinical query filters, and PageRank vary widely across different topics while a topic-specific impact factor and machine learning-based filter models are more stable. The results demonstrate that a method may perform excellently on average but struggle when used on a number of narrower topics. Topic-adjusted metrics and other topic robust methods have an advantage in such situations. Users of traditional topic-sensitive metrics should be aware of their limitations. PMID:21419864

  4. Evaluation of SEBS, SEBAL, and METRIC models in estimation of the evaporation from the freshwater lakes (Case study: Amirkabir dam, Iran)

    Science.gov (United States)

    Zamani Losgedaragh, Saeideh; Rahimzadegan, Majid

    2018-06-01

    Evapotranspiration (ET) estimation is of great importance due to its key role in water resource management. Surface energy modeling tools such as the Surface Energy Balance Algorithm for Land (SEBAL), Mapping Evapotranspiration with Internalized Calibration (METRIC), and the Surface Energy Balance System (SEBS) can estimate the amount of evapotranspiration for every pixel of a satellite image. The main objective of this research is to investigate evaporation from freshwater bodies using SEBAL, METRIC, and SEBS. For this purpose, the Amirkabir dam reservoir and its nearby agricultural lands, in a semi-arid climate, were selected and studied from 2011 to 2017. The study was implemented on 16 Landsat TM5 and OLI satellite images, to which SEBAL, METRIC, and SEBS were applied. The corresponding pan evaporation measurements on the reservoir bank were used as the ground truth data. According to the results, SEBAL is not a reliable method for evaluating freshwater evaporation, with a coefficient of determination (R2) of 0.36 and a Root Mean Square Error (RMSE) of 5.1 mm. On the other hand, METRIC, with an R2 of 0.57 and an RMSE of 2.02 mm, and SEBS, with an R2 of 0.93 and an RMSE of 0.62 mm, demonstrated a relatively good performance.
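The two goodness-of-fit statistics used in this record are standard. A minimal sketch follows, with invented pan/model values; R2 is taken here as the squared Pearson correlation, which is an assumption, since papers vary in how they define the coefficient of determination:

```python
import numpy as np

def rmse(obs, sim):
    """Root Mean Square Error between observations and model estimates."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def r_squared(obs, sim):
    """Coefficient of determination as squared Pearson correlation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    return float(r * r)

# Illustrative pan-evaporation observations vs. model estimates (mm/day).
pan = [5.1, 6.3, 7.0, 4.8, 5.9]
model = [4.9, 6.0, 7.4, 5.1, 5.6]
print(rmse(pan, model), r_squared(pan, model))
```

A low RMSE with a high R2, as reported for SEBS, indicates both small absolute errors and a strong linear agreement with the pan measurements.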

  5. Toward a better comprehension of Lean metrics for research and product development management

    DEFF Research Database (Denmark)

    da Costa, Janaina Mascarenhas Hornos; Oehmen, Josef; Rebentisch, Eric

    2014-01-01

    This paper presents a compilation and empirical survey-based evaluation of the metrics most commonly used by program managers during product development management. This work is part of a larger project involving MIT, PMI and INCOSE. Three methodological procedures were applied: systematic literature review, focus-group discussions, and a survey. The survey results indicate the metrics considered to be the most and least useful for managing lean engineering programs, and reveal a shift of interest towards qualitative metrics, especially the ones that address the achievement of stakeholder values, and the absence of useful metrics regarding the lean principles People and Pull.

  6. Connection Setup Signaling Scheme with Flooding-Based Path Searching for Diverse-Metric Network

    Science.gov (United States)

    Kikuta, Ko; Ishii, Daisuke; Okamoto, Satoru; Oki, Eiji; Yamanaka, Naoaki

    Connection setup on various computer networks is now achieved by GMPLS. This technology is based on the source-routing approach, which requires the source node to store metric information for the entire network prior to computing a route. Thus all metric information must be distributed to all network nodes and kept up-to-date. However, as metric information becomes more diverse and generalized, it is hard to keep all information updated due to the huge update overhead. Emerging network services and applications require the network to support diverse metrics for achieving various communication qualities. Increasing the number of metrics supported by the network causes excessive processing of metric update messages. To reduce the number of metric update messages, another scheme is required. This paper proposes a connection setup scheme that uses flooding-based signaling rather than the distribution of metric information. The proposed scheme requires only the flooding of signaling messages carrying the requested metric information; no routing protocol is required. Evaluations confirm that the proposed scheme achieves connection establishment without excessive overhead. Our analysis shows that the proposed scheme greatly reduces the number of control messages compared to the conventional scheme, while their blocking probabilities are comparable.

  7. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
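Of the information metrics listed, relative entropy has a closed form when both ensembles are summarized as Gaussians. The sketch below, with a synthetic ensemble, illustrates that form; it is a generic illustration of the metric, not the SEOD code itself:

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Relative entropy KL(N0 || N1) between two multivariate Gaussians,
    e.g. an EnKF posterior (N0) versus prior (N1) fitted to ensembles."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, size=(500, 2))   # prior parameter ensemble
post = 0.4 * prior + 0.2                      # tighter, shifted posterior
mu_p, cov_p = prior.mean(0), np.cov(prior.T)
mu_q, cov_q = post.mean(0), np.cov(post.T)
# Larger KL(posterior || prior) = more information gained by the design.
print(gaussian_kl(mu_q, cov_q, mu_p, cov_p))
```

In a sequential design loop, candidate measurement locations would be ranked by the information gain their assimilation yields, and the largest gain selected.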

  8. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  9. Context-dependent ATC complexity metric

    NARCIS (Netherlands)

    Mercado Velasco, G.A.; Borst, C.

    2015-01-01

    Several studies have investigated Air Traffic Control (ATC) complexity metrics in a search for a metric that could best capture workload. These studies have shown how daunting the search for a universal workload metric (one that could be applied in different contexts: sectors, traffic patterns,

  10. Metrics for evaluation of the author's writing styles: who is the best?

    Science.gov (United States)

    Darooneh, Amir H; Shariati, Ashrafosadat

    2014-09-01

    Studying the complexity of language has recently attracted physicists' attention. Methods borrowed from statistical mechanics, namely complex network theory, can be used for exploring regularities as a characteristic of the complexity of language. In this paper, we focus on authorship identification using the complex network approach. We introduce three metrics which enable us to compare authors' writing styles. This approach was previously used by us to find the author of an unknown book among a collection of thirty-six books written by five Persian poets. Here, we select a collection of one hundred and one books by nine English writers and quantify their writing styles according to our metrics. In our experiment, Shakespeare appears as the best author, following a unique writing style in all of his works.

  11. Metrics for evaluation of the author's writing styles: Who is the best?

    Science.gov (United States)

    Darooneh, Amir H.; Shariati, Ashrafosadat

    2014-09-01

    Studying the complexity of language has recently attracted physicists' attention. Methods borrowed from statistical mechanics, namely complex network theory, can be used for exploring regularities as a characteristic of the complexity of language. In this paper, we focus on authorship identification using the complex network approach. We introduce three metrics which enable us to compare authors' writing styles. This approach was previously used by us to find the author of an unknown book among a collection of thirty-six books written by five Persian poets. Here, we select a collection of one hundred and one books by nine English writers and quantify their writing styles according to our metrics. In our experiment, Shakespeare appears as the best author, following a unique writing style in all of his works.
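The abstract does not spell out the three metrics, so as a purely hypothetical stand-in, the sketch below compares two texts by the Jaccard overlap of their word-adjacency networks, the kind of network such studies build from consecutive words:

```python
def word_network(text):
    """Edge set of the word-adjacency network of a text: an undirected
    edge joins each pair of consecutive (distinct) words."""
    words = text.lower().split()
    return {frozenset(p) for p in zip(words, words[1:]) if p[0] != p[1]}

def style_similarity(a, b):
    """Jaccard overlap of two texts' adjacency networks (a hypothetical
    stand-in for the paper's unspecified style metrics)."""
    ea, eb = word_network(a), word_network(b)
    return len(ea & eb) / len(ea | eb)

s1 = "to be or not to be that is the question"
s2 = "to be or not to be that is the answer"
s3 = "all the world is a stage"
# Texts sharing phrasing yield more overlapping network structure.
print(style_similarity(s1, s2) > style_similarity(s1, s3))  # -> True
```

Real stylometric metrics would normalize for text length and vocabulary size; this toy version only illustrates the network-comparison idea.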

  12. DLA Energy Biofuel Feedstock Metrics Study

    Science.gov (United States)

    2012-12-11

    Metric 1: State invasiveness ranking (moderately/highly invasive). Metric 2: Genetically modified organism (GMO) hazard, Yes/No and hazard category. Metric 3: Species hybridization. ... Stage 4: biofuel distribution; Stage 5: biofuel use. Metric 1 (state invasiveness ranking) and Metric 2 (GMO hazard) are scored per stage (Yes/No/Minimal). ... Feedstocks may utilize GMO microbial or microalgae species across the applicable biofuel life cycles (stages 1-3). The following consequence metrics 4-6 then

  13. Optimal networks of future gravitational-wave telescopes

    Science.gov (United States)

    Raffai, Péter; Gondán, László; Heng, Ik Siong; Kelecsényi, Nándor; Logue, Josh; Márka, Zsuzsa; Márka, Szabolcs

    2013-08-01

    We aim to find the optimal site locations for a hypothetical network of 1-3 triangular gravitational-wave telescopes. We define the following N-telescope figures of merit (FoMs) and construct three corresponding metrics: (a) capability of reconstructing the signal polarization; (b) accuracy in source localization; and (c) accuracy in reconstructing the parameters of a standard binary source. We also define a combined metric that takes into account the three FoMs with practically equal weight. After constructing a geomap of possible telescope sites, we give the optimal 2-telescope networks for the four FoMs separately in example cases where the location of the first telescope has been predetermined. We found that, based on the combined metric, placing the first telescope in Australia provides the most options for optimal site selection when extending the network with a second instrument. We suggest geographical regions where a potential second and third telescope could be placed to get optimal network performance in terms of our FoMs. Additionally, we use a similar approach to find the optimal location and orientation for the proposed LIGO-India detector within a five-detector network with Advanced LIGO (Hanford), Advanced LIGO (Livingston), Advanced Virgo, and KAGRA. We found that the FoMs do not change greatly across sites within India, though the network can suffer a significant loss in reconstructing signal polarizations if the orientation angle of an L-shaped LIGO-India is not set to the optimal value of ~58.2° (+ k × 90°), measured counterclockwise from East to the bisector of the arms.

  14. A Practitioners’ Perspective on Developmental Models, Metrics and Community

    Directory of Open Access Journals (Sweden)

    Chad Stewart

    2009-12-01

    Full Text Available This article builds on a paper by Stein and Heikkinen (2009) and suggests ways to expand and improve our measurement of the quality of developmental models, metrics and instruments and the results we get in collaborating with clients. We suggest that this dialogue needs to be about more than stage development measured by (even calibrated) stage-development-focused, linguistic-based, developmental psychology metrics that produce lead indicators and are shown to be reliable and valid by psychometric qualities alone. The article first provides a brief overview of our background and biases, and an applied version of Ken Wilber's Integral Operating System that has provided increased development, client satisfaction, and contribution to our communities measured by verifiable, tangible results (as well as intangible results such as increased ability to cope with complex surroundings, reduced stress and growth in developmental stages to better fit the environment in which our clients were engaged at that time). It then addresses four key points raised by Stein and Heikkinen (the need for quality control, defining and deciding on appropriate metrics, building a system to evaluate models and metrics, and clarifying and increasing the reliability and validity of the models and metrics we use) by providing initial concrete steps to:
• Adopt a systemic value-chain approach
• Measure results in addition to language
• Build on the evaluation system for instruments, models and metrics suggested by Stein & Heikkinen
• Clarify and improve the reliability and validity of the instruments, models and metrics we use
We complete the article with an echoing call for the community of Applied Developmental Theory suggested by Ross (2008) and Stein and Heikkinen, a brief description of that community (from our perspective), and a table that builds on Table 2 proposed by Stein and Heikkinen.

  15. Image Correlation Pattern Optimization for Micro-Scale In-Situ Strain Measurements

    Science.gov (United States)

    Bomarito, G. F.; Hochhalter, J. D.; Cannon, A. H.

    2016-01-01

    -matched shape functions. An important implication, as discussed by Sutton et al., is that in the presence of highly localized deformations (e.g., crack fronts), error can be reduced by minimizing the subset size. In other words, smaller subsets allow the more accurate resolution of localized deformations. Contrarily, the choice of optimal subset size has been widely studied and a general consensus is that larger subsets with more information content are less prone to random error. Thus, an optimal subset size balances the systematic error from under matched deformations with random error from measurement noise. The alternative approach pursued in the current work is to choose a small subset size and optimize the information content within (i.e., optimizing an applied DIC pattern), rather than finding an optimal subset size. In the literature, many pattern quality metrics have been proposed, e.g., sum of square intensity gradient (SSSIG), mean subset fluctuation, gray level co-occurrence, autocorrelation-based metrics, and speckle-based metrics. The majority of these metrics were developed to quantify the quality of common pseudo-random patterns after they have been applied, and were not created with the intent of pattern generation. As such, it is found that none of the metrics examined in this study are fit to be the objective function of a pattern generation optimization. In some cases, such as with speckle-based metrics, application to pixel by pixel patterns is ill-conditioned and requires somewhat arbitrary extensions. In other cases, such as with the SSSIG, it is shown that trivial solutions exist for the optimum of the metric which are ill-suited for DIC (such as a checkerboard pattern). In the current work, a multi-metric optimization method is proposed whereby quality is viewed as a combination of individual quality metrics. Specifically, SSSIG and two auto-correlation metrics are used which have generally competitive objectives. 
Thus, each metric could be viewed as a
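The SSSIG measure discussed in this record can be sketched numerically. Combining both gradient directions into a single score is an assumption here (SSSIG is often reported per direction), and the subsets below are synthetic:

```python
import numpy as np

def sssig(subset):
    """Sum of square of subset intensity gradients (SSSIG), a common DIC
    pattern-quality measure: larger values mean more matchable texture."""
    gy, gx = np.gradient(subset.astype(float))
    return float(np.sum(gx ** 2) + np.sum(gy ** 2))

rng = np.random.default_rng(1)
flat = np.full((21, 21), 128.0)             # featureless subset: no texture
speckle = rng.integers(0, 256, (21, 21))    # high-contrast random speckle
print(sssig(flat), "<", sssig(speckle))
```

A featureless subset scores zero, which is why generated patterns aim to pack intensity gradients into every small subset.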

  16. An Opportunistic Routing Mechanism Combined with Long-Term and Short-Term Metrics for WMN

    Directory of Open Access Journals (Sweden)

    Weifeng Sun

    2014-01-01

    Full Text Available WMN (wireless mesh network) is a useful wireless multihop network with tremendous research value. The routing strategy decides the performance of the network and the quality of transmission. A good routing algorithm will use the whole bandwidth of the network and assure the quality of service of traffic. Since the routing metric ETX (expected transmission count) does not assure good quality of wireless links, to improve the routing performance, an opportunistic routing mechanism combined with long-term and short-term metrics for WMN based on OLSR (optimized link state routing) and ETX is proposed in this paper. This mechanism always chooses the highest-throughput links to improve the performance of routing over WMN and then reduces the energy consumption of mesh routers. The simulations and analyses show that the opportunistic routing mechanism is better than the mechanism with the metric of ETX.

  17. An opportunistic routing mechanism combined with long-term and short-term metrics for WMN.

    Science.gov (United States)

    Sun, Weifeng; Wang, Haotian; Piao, Xianglan; Qiu, Tie

    2014-01-01

    WMN (wireless mesh network) is a useful wireless multihop network with tremendous research value. The routing strategy decides the performance of the network and the quality of transmission. A good routing algorithm will use the whole bandwidth of the network and assure the quality of service of traffic. Since the routing metric ETX (expected transmission count) does not assure good quality of wireless links, to improve the routing performance, an opportunistic routing mechanism combined with long-term and short-term metrics for WMN based on OLSR (optimized link state routing) and ETX is proposed in this paper. This mechanism always chooses the highest-throughput links to improve the performance of routing over WMN and then reduces the energy consumption of mesh routers. The simulations and analyses show that the opportunistic routing mechanism is better than the mechanism with the metric of ETX.
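The ETX metric criticized in this record has a simple closed form. A minimal sketch follows, with invented delivery ratios; it shows why a path of reliable links can beat a single lossy link:

```python
def link_etx(df: float, dr: float) -> float:
    """ETX of one link: expected transmissions for a successful delivery
    plus acknowledgement, 1 / (df * dr), where df and dr are the measured
    forward and reverse delivery ratios."""
    return 1.0 / (df * dr)

def path_etx(links):
    """Route metric: the ETX of a path is the sum of its link ETX values."""
    return sum(link_etx(df, dr) for df, dr in links)

# A two-hop path of good links beats one lossy direct link.
direct = [(0.45, 0.45)]
two_hop = [(0.95, 0.95), (0.90, 0.95)]
print(path_etx(direct), path_etx(two_hop))
```

ETX captures expected transmission count but not instantaneous link quality, which is the gap the long-term/short-term combination above aims to close.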

  18. Building healthy communities: establishing health and wellness metrics for use within the real estate industry.

    Science.gov (United States)

    Trowbridge, Matthew J; Pickell, Sarah Gauche; Pyke, Christopher R; Jutte, Douglas P

    2014-11-01

    It is increasingly well recognized that the design and operation of the communities in which people live, work, learn, and play significantly influence their health. However, within the real estate industry, the health impacts of transportation, community development, and other construction projects, both positive and negative, continue to operate largely as economic externalities: unmeasured, unregulated, and for the most part unconsidered. This lack of transparency limits communities' ability to efficiently advocate for real estate investment that best promotes their health and well-being. It also limits market incentives for innovation within the real estate industry by making it more difficult for developers that successfully target health behaviors and outcomes in their projects to differentiate themselves competitively. In this article we outline the need for actionable, community-relevant, practical, and valuable metrics jointly developed by the health care and real estate sectors to better evaluate and optimize the "performance" of real estate development projects from a population health perspective. Potential templates for implementation, including the successful introduction of sustainability metrics by the green building movement, and preliminary data from selected case-study projects are also discussed. Project HOPE—The People-to-People Health Foundation, Inc.

  19. Symmetries of Taub-NUT dual metrics

    International Nuclear Information System (INIS)

    Baleanu, D.; Codoban, S.

    1998-01-01

    Recently, geometric duality was analyzed for a metric which admits Killing tensors. An interesting example arises when the manifold has Killing-Yano tensors. The symmetries of the dual metrics in the case of the Taub-NUT metric are investigated. Generic and non-generic symmetries of the dual Taub-NUT metric are analyzed.

  20. Perceptual Dominant Color Extraction by Multidimensional Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Moncef Gabbouj

    2009-01-01

    Full Text Available Color is the major source of information widely used in image analysis and content-based retrieval. Extracting dominant colors that are prominent in a visual scenery is of utmost importance since the human visual system primarily uses them for perception and similarity judgment. In this paper, we address dominant color extraction as a dynamic clustering problem and use techniques based on Particle Swarm Optimization (PSO) for finding the optimal (number of) dominant colors in a given color space, distance metric and a proper validity index function. The first technique, the so-called Multidimensional (MD) PSO, can seek both positional and dimensional optima. Nevertheless, MD PSO is still susceptible to premature convergence due to lack of divergence. To address this problem we then apply the Fractional Global Best Formation (FGBF) technique. In order to extract perceptually important colors and to further improve the discrimination factor for a better clustering performance, an efficient color distance metric, which uses a fuzzy model for computing color (dis)similarities over the HSV (or HSL) color space, is proposed. The comparative evaluations against the MPEG-7 dominant color descriptor show the superiority of the proposed technique.
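    The record frames dominant color extraction as a clustering problem over pixel colors. As a rough illustration of the clustering step only — plain k-means in RGB on toy data, not the MD PSO/FGBF search or the fuzzy HSV distance the paper actually proposes:

    ```python
    import random

    def dist2(a, b):
        """Squared Euclidean distance between two colors."""
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def mean(pts):
        n = len(pts)
        return tuple(sum(p[i] for p in pts) / n for i in range(3))

    def kmeans(points, k, iters=20, seed=0):
        """Plain k-means: a simple stand-in for the dynamic clustering that
        the paper performs with MD PSO (which also optimizes k itself)."""
        rng = random.Random(seed)
        centers = rng.sample(sorted(set(points)), k)  # distinct initial colors
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:
                clusters[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
            centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
        return centers

    # Toy "image": pixels drawn from a red-ish and a blue-ish dominant color.
    pixels = [(250 - i % 5, 10, 10) for i in range(50)] + \
             [(10 + i % 5, 10, 250) for i in range(50)]
    centers = kmeans(pixels, 2)
    print(sorted(round(c[0]) for c in centers))  # [12, 248]
    ```

    The two recovered centers sit at the mean red and mean blue pixel values; the paper's MD PSO additionally searches over the number of clusters using a validity index.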

  1. Metric learning

    CERN Document Server

    Bellet, Aurelien; Sebban, Marc

    2015-01-01

    Similarity between objects plays an important role in both human cognitive processes and artificial systems for recognition and categorization. How to appropriately measure such similarities for a given task is crucial to the performance of many machine learning, pattern recognition and data mining methods. This book is devoted to metric learning, a set of techniques to automatically learn similarity and distance functions from data that has attracted a lot of interest in machine learning and related fields in the past ten years. In this book, we provide a thorough review of the metric learning ...

  2. The Relationship between the Level and Modality of HRM Metrics, Quality of HRM Practice and Organizational Performance

    OpenAIRE

    Nina Pološki Vokić

    2011-01-01

    The paper explores the relationship between the way organizations measure HRM and overall quality of HRM activities, as well as the relationship between HRM metrics used and financial performance of an organization. In the theoretical part of the paper modalities of HRM metrics are grouped into five groups (evaluating HRM using accounting principles, evaluating HRM using management techniques, evaluating individual HRM activities, aggregate evaluation of HRM, and evaluating HRM de...

  3. Technical Privacy Metrics: a Systematic Survey

    OpenAIRE

    Wagner, Isabel; Eckhoff, David

    2018-01-01

    The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature makes an informed choice of metrics challenging. As a result, instead of using existing metrics, n...

  4. San Luis Basin Sustainability Metrics Project: A Methodology for Evaluating Regional Sustainability

    Science.gov (United States)

    Although there are several scientifically-based sustainability metrics, many are data intensive, difficult to calculate, and fail to capture all aspects of a system. To address these issues, we produced a scientifically-defensible, but straightforward and inexpensive, methodolog...

  5. Optimization of Bolt Stress

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2013-01-01

    The state of stress in bolts and nuts with ISO metric thread design is examined and optimized. The assumed failure mode is fatigue, so the applied preload and the load amplitude together with the stress concentrations define the connection strength. Maximum stress in the bolt is found at the fillet under the head, at the thread start or at the thread root. To minimize the stress concentration, shape optimization is applied.

  6. Image characterization metrics for muon tomography

    Science.gov (United States)

    Luo, Weidong; Lehovich, Andre; Anashkin, Edward; Bai, Chuanyong; Kindem, Joel; Sossong, Michael; Steiger, Matt

    2014-05-01

    Muon tomography uses naturally occurring cosmic rays to detect nuclear threats in containers. Currently there are no systematic image characterization metrics for muon tomography. We propose a set of image characterization methods to quantify the imaging performance of muon tomography. These methods include tests of spatial resolution, uniformity, contrast, signal-to-noise ratio (SNR) and vertical smearing. Simulated phantom data and analysis methods were developed to evaluate metric applicability. Spatial resolution was determined as the FWHM of the point spread functions along the X, Y and Z axes for 2.5 cm tungsten cubes. Uniformity was measured by drawing a volume of interest (VOI) within a large water phantom and defined as the standard deviation of voxel values divided by the mean voxel value. Contrast was defined as the peak signals of a set of tungsten cubes divided by the mean voxel value of the water background. SNR was defined as the peak signals of cubes divided by the standard deviation (noise) of the water background. Vertical smearing, i.e. vertical thickness blurring along the zenith axis for a set of 2 cm thick tungsten plates, was defined as the FWHM of the vertical spread function for the plate. These image metrics provided a useful tool to quantify the basic imaging properties for muon tomography.
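    The uniformity, contrast and SNR definitions in this record translate directly into code. A sketch on made-up voxel values (the helper names and the choice of averaging the cube peaks are ours, not from the paper):

    ```python
    import statistics

    def uniformity(voi):
        """Std. dev. of VOI voxel values divided by their mean (lower is more uniform)."""
        return statistics.pstdev(voi) / statistics.fmean(voi)

    def contrast(peak_signals, background):
        """Peak cube signals divided by the mean background voxel value."""
        return statistics.fmean(peak_signals) / statistics.fmean(background)

    def snr(peak_signals, background):
        """Peak cube signals divided by the background noise (std. dev.)."""
        return statistics.fmean(peak_signals) / statistics.pstdev(background)

    water = [10.0, 10.5, 9.5, 10.2, 9.8]   # VOI inside the water phantom
    cubes = [50.0, 52.0, 48.0]             # peak values at the tungsten cubes
    print(round(uniformity(water), 3))
    print(round(contrast(cubes, water), 2))  # 5.0
    ```

    Note that contrast and SNR share the same numerator and differ only in normalizing by the background mean versus the background noise, which is why a noisy but well-calibrated image can score high on contrast and low on SNR.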

  7. On Information Metrics for Spatial Coding.

    Science.gov (United States)

    Souza, Bryan C; Pavão, Rodrigo; Belchior, Hindiael; Tort, Adriano B L

    2018-04-01

    The hippocampal formation is involved in navigation, and its neuronal activity exhibits a variety of spatial correlates (e.g., place cells, grid cells). The quantification of the information encoded by spikes has been standard procedure to identify which cells have spatial correlates. For place cells, most of the established metrics derive from Shannon's mutual information (Shannon, 1948), and convey information rate in bits/s or bits/spike (Skaggs et al., 1993, 1996). Despite their widespread use, the performance of these metrics in relation to the original mutual information metric has never been investigated. In this work, using simulated and real data, we find that the current information metrics correlate less with the accuracy of spatial decoding than the original mutual information metric. We also find that the top informative cells may differ among metrics, and show a surrogate-based normalization that yields comparable spatial information estimates. Since different information metrics may identify different neuronal populations, we discuss current and alternative definitions of spatially informative cells, which affect the metric choice. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
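    The bits/spike metric of Skaggs et al. cited in this record has a compact form: I = Σᵢ pᵢ (rᵢ/R) log₂(rᵢ/R), where pᵢ is the occupancy probability of spatial bin i, rᵢ the firing rate in that bin, and R the overall mean rate. A minimal sketch:

    ```python
    import math

    def skaggs_information(occupancy, rates):
        """Spatial information in bits/spike (Skaggs et al. form):
        I = sum_i p_i * (r_i / R) * log2(r_i / R), with p_i the occupancy
        probability of bin i, r_i the rate in bin i, R the mean rate."""
        total = sum(occupancy)
        p = [o / total for o in occupancy]
        R = sum(pi * ri for pi, ri in zip(p, rates))
        info = 0.0
        for pi, ri in zip(p, rates):
            if ri > 0:
                info += pi * (ri / R) * math.log2(ri / R)
        return info

    # A cell firing only in one of four equally visited bins carries 2 bits/spike.
    print(skaggs_information([1, 1, 1, 1], [8.0, 0.0, 0.0, 0.0]))  # 2.0
    ```

    The paper's point is that this and related derived metrics can rank "informative" cells differently from the original Shannon mutual information, so the metric choice matters.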

  8. SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.

    Science.gov (United States)

    Nik, S J; Thing, R S; Watts, R; Meyer, J

    2012-06-01

    To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-ray imaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This resulted in an m-dimensional confidence region of material quantities, e.g. thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For the given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data of the m materials was generated and material decomposition was performed for combinations of iodine, calcium and water by minimizing the z-score between the expected spectrum and binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations. In the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. The MSE was dominated by the variance for the given material combinations, which demonstrates accurate material quantification. The BEAM simulations revealed that the optimization of energy bins was accurate to within 1 keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher for the BEAM simulations due to the inclusion of scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins and adequate prediction of image SNR. The BEAM code system is suitable for generating spectral x-ray imaging simulations.

  9. Generalized Painleve-Gullstrand metrics

    Energy Technology Data Exchange (ETDEWEB)

    Lin Chunyu [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China)], E-mail: l2891112@mail.ncku.edu.tw; Soo Chopin [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China)], E-mail: cpsoo@mail.ncku.edu.tw

    2009-02-02

    An obstruction to the implementation of spatially flat Painleve-Gullstrand (PG) slicings is demonstrated, and explicitly discussed for Reissner-Nordstroem and Schwarzschild-anti-deSitter spacetimes. Generalizations of PG slicings which are not spatially flat but which remain regular at the horizons are introduced. These metrics can be obtained from standard spherically symmetric metrics by physical Lorentz boosts. With these generalized PG metrics, problematic contributions to the imaginary part of the action in the Parikh-Wilczek derivation of Hawking radiation due to the obstruction can be avoided.

  10. Constructing a no-reference H.264/AVC bitstream-based video quality metric using genetic programming-based symbolic regression

    OpenAIRE

    Staelens, Nicolas; Deschrijver, Dirk; Vladislavleva, E; Vermeulen, Brecht; Dhaene, Tom; Demeester, Piet

    2013-01-01

    In order to ensure optimal quality of experience toward end users during video streaming, automatic video quality assessment becomes an important field-of-interest to video service providers. Objective video quality metrics try to estimate perceived quality with high accuracy and in an automated manner. In traditional approaches, these metrics model the complex properties of the human visual system. More recently, however, it has been shown that machine learning approaches can also yield comp...

  11. Kerr metric in the deSitter background

    International Nuclear Information System (INIS)

    Vaidya, P.C.

    1984-01-01

    In addition to the Kerr metric with cosmological constant Λ, several other metrics are presented giving a Kerr-like solution of Einstein's equations in the background of the deSitter universe. A new metric, of what may be termed rotating deSitter space-time, devoid of matter but containing a null fluid with twisting null rays, has been presented. This metric reduces to the standard deSitter metric when the twist in the rays vanishes. The Kerr metric in this background is the immediate generalization of Schwarzschild's exterior metric with cosmological constant. (author)

  12. Kerr metric in cosmological background

    Energy Technology Data Exchange (ETDEWEB)

    Vaidya, P C [Gujarat Univ., Ahmedabad (India). Dept. of Mathematics

    1977-06-01

    A metric satisfying Einstein's equation is given which in the vicinity of the source reduces to the well-known Kerr metric and which at large distances reduces to the Robertson-Walker metric of a homogeneous cosmological model. The radius of the event horizon of the Kerr black hole in the cosmological background is found.

  13. Load Index Metrics for an Optimized Management of Web Services: A Systematic Evaluation

    Science.gov (United States)

    Souza, Paulo S. L.; Santana, Regina H. C.; Santana, Marcos J.; Zaluska, Ed; Faical, Bruno S.; Estrella, Julio C.

    2013-01-01

    The lack of precision to predict service performance through load indices may lead to wrong decisions regarding the use of web services, compromising service performance and raising platform cost unnecessarily. This paper presents experimental studies to qualify the behaviour of load indices in the web service context. The experiments consider three services that generate controlled and significant server demands, four levels of workload for each service and six distinct execution scenarios. The evaluation considers three relevant perspectives: the capability for representing recent workloads, the capability for predicting near-future performance and finally stability. Eight different load indices were analysed, including the JMX Average Time index (proposed in this paper) specifically designed to address the limitations of the other indices. A systematic approach is applied to evaluate the different load indices, considering a multiple linear regression model based on the stepwise-AIC method. The results show that the load indices studied represent the workload to some extent; however, in contrast to expectations, most of them do not exhibit a coherent correlation with service performance and this can result in stability problems. The JMX Average Time index is an exception, showing a stable behaviour which is tightly-coupled to the service runtime for all executions. Load indices are used to predict the service runtime and therefore their inappropriate use can lead to decisions that will impact negatively on both service performance and execution cost. PMID:23874776
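    The systematic evaluation in this record relies on fitting linear models of service runtime against candidate load indices and comparing them, with the stepwise procedure driven by AIC. A toy sketch of the core comparison, with one predictor and the common AIC form n·ln(RSS/n) + 2k (data and variable names are hypothetical):

    ```python
    import math

    def ols_aic(x, y):
        """Fit y = a + b*x by least squares and return the AIC
        (n*ln(RSS/n) + 2k with k = 2 parameters), the criterion used
        by stepwise-AIC model selection."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
        a = my - b * mx
        rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
        return n * math.log(rss / n) + 2 * 2

    # Hypothetical data: service runtime vs. two candidate load indices;
    # the index with the lower AIC is the better runtime predictor.
    runtime = [1.0, 2.1, 2.9, 4.2, 5.0]
    index_a = [1.0, 2.0, 3.0, 4.0, 5.0]   # tracks runtime closely
    index_b = [3.0, 1.0, 4.0, 1.0, 5.0]   # noisy, weakly related
    print(ols_aic(index_a, runtime) < ols_aic(index_b, runtime))  # True
    ```

    The full study uses multiple linear regression over several indices at once; the stepwise-AIC method adds or drops predictors while this criterion keeps improving.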

  14. Two classes of metric spaces

    Directory of Open Access Journals (Sweden)

    Isabel Garrido

    2016-04-01

    Full Text Available The class of metric spaces (X,d) known as small-determined spaces, introduced by Garrido and Jaramillo, are properly defined by means of some type of real-valued Lipschitz functions on X. On the other hand, B-simple metric spaces introduced by Hejcman are defined in terms of some kind of bornologies of bounded subsets of X. In this note we present a common framework where both classes of metric spaces can be studied, which allows us not only to see the relationships between them but also to obtain new internal characterizations of these metric properties.

  15. Identification and optimization problems in plasma physics

    International Nuclear Information System (INIS)

    Gilbert, J.C.

    1986-06-01

    Parameter identification of the current in a tokamak plasma is studied. Plasma equilibrium in a vacuum container with a diaphragm is analyzed. A variable metric method for reduced optimization with nonlinear equality constraints, and a quasi-Newton reduced optimization method with constraints giving priority to restoration, are presented. (in French)
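    Variable metric (quasi-Newton) methods like those named in this record avoid computing second derivatives by building a curvature estimate from successive gradients. A deliberately simple one-dimensional illustration of that secant idea — not the constrained reduced-optimization algorithms of the report:

    ```python
    def secant_quasi_newton(grad, x0, x1, tol=1e-10, max_iter=50):
        """1-D quasi-Newton iteration: curvature is approximated from
        successive gradient values (the secant condition at the heart of
        variable metric methods), so no second derivative is needed."""
        g0, g1 = grad(x0), grad(x1)
        for _ in range(max_iter):
            if abs(g1) < tol:
                break
            h = (g1 - g0) / (x1 - x0)        # secant curvature estimate
            x0, g0, x1 = x1, g1, x1 - g1 / h  # quasi-Newton step
            g1 = grad(x1)
        return x1

    # Minimize f(x) = (x - 3)^2 + 1 by driving f'(x) = 2(x - 3) to zero.
    print(round(secant_quasi_newton(lambda x: 2 * (x - 3), 0.0, 1.0), 6))  # 3.0
    ```

    In n dimensions the same secant information updates an approximate inverse Hessian (the "variable metric", as in BFGS) rather than a scalar curvature.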

  16. A Metric for Heterotic Moduli

    Science.gov (United States)

    Candelas, Philip; de la Ossa, Xenia; McOrist, Jock

    2017-12-01

    Heterotic vacua of string theory are realised, at large radius, by a compact threefold with vanishing first Chern class together with a choice of stable holomorphic vector bundle. These form a wide class of potentially realistic four-dimensional vacua of string theory. Despite all their phenomenological promise, there is little understanding of the metric on the moduli space of these. What is sought is the analogue of special geometry for these vacua. The metric on the moduli space is important in phenomenology as it normalises D-terms and Yukawa couplings. It is also of interest in mathematics, since it generalises the metric, first found by Kobayashi, on the space of gauge field connections, to a more general context. Here we construct this metric, correct to first order in α′, in two ways: first by postulating a metric that is invariant under background gauge transformations of the gauge field, and also by dimensionally reducing heterotic supergravity. These methods agree and the resulting metric is Kähler, as is required by supersymmetry. Checking the metric is Kähler is intricate and the anomaly cancellation equation for the H field plays an essential role. The Kähler potential nevertheless takes a remarkably simple form: it is the Kähler potential of special geometry with the Kähler form replaced by the α′-corrected hermitian form.

  17. Cross-layer protocol design for QoS optimization in real-time wireless sensor networks

    Science.gov (United States)

    Hortos, William S.

    2010-04-01

    The metrics of quality of service (QoS) for each sensor type in a wireless sensor network can be associated with metrics for multimedia that describe the quality of fused information, e.g., throughput, delay, jitter, packet error rate, information correlation, etc. These QoS metrics are typically set at the highest, or application, layer of the protocol stack to ensure that performance requirements for each type of sensor data are satisfied. Application-layer metrics, in turn, depend on the support of the lower protocol layers: session, transport, network, data link (MAC), and physical. The dependencies of the QoS metrics on the performance of the higher layers of the Open System Interconnection (OSI) reference model of the WSN protocol, together with that of the lower three layers, are the basis for a comprehensive approach to QoS optimization for multiple sensor types in a general WSN model. The cross-layer design accounts for the distributed power consumption along energy-constrained routes and their constituent nodes. Following the author's previous work, the cross-layer interactions in the WSN protocol are represented by a set of concatenated protocol parameters and enabling resource levels. The "best" cross-layer designs to achieve optimal QoS are established by applying the general theory of martingale representations to the parameterized multivariate point processes (MVPPs) for discrete random events occurring in the WSN. Adaptive control of network behavior through the cross-layer design is realized through the parametric factorization of the stochastic conditional rates of the MVPPs. The cross-layer protocol parameters for optimal QoS are determined in terms of solutions to stochastic dynamic programming conditions derived from models of transient flows for heterogeneous sensor data and aggregate information over a finite time horizon. Markov state processes, embedded within the complex combinatorial history of WSN events, are more computationally

  18. Singular value decomposition metrics show limitations of detector design in diffuse fluorescence tomography.

    Science.gov (United States)

    Leblond, Frederic; Tichauer, Kenneth M; Pogue, Brian W

    2010-11-29

    The spatial resolution and recovered contrast of images reconstructed from diffuse fluorescence tomography data are limited by the high scattering properties of light propagation in biological tissue. As a result, the image reconstruction process can be exceedingly vulnerable to inaccurate prior knowledge of tissue optical properties and stochastic noise. In light of these limitations, the optimal source-detector geometry for a fluorescence tomography system is non-trivial, requiring analytical methods to guide design. Analysis of the singular value decomposition of the matrix to be inverted for image reconstruction is one potential approach, providing key quantitative metrics, such as singular image mode spatial resolution and singular data mode frequency as a function of singular mode. In the present study, these metrics are used to analyze the effects of different sources of noise and model errors as related to image quality in the form of spatial resolution and contrast recovery. The image quality is demonstrated to be inherently noise-limited even when detection geometries were increased in complexity to allow maximal tissue sampling, suggesting that detection noise characteristics outweigh detection geometry for achieving optimal reconstructions.
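    The SVD analysis in this record inspects the singular value spectrum of the forward matrix: modes whose singular values fall below the noise floor cannot be recovered, which is what makes the reconstruction noise-limited. A toy sketch using the closed-form singular values of a 2×2 matrix (the matrix and threshold are illustrative, not from the paper):

    ```python
    import math

    def singular_values_2x2(a, b, c, d):
        """Closed-form singular values of [[a, b], [c, d]]
        (square roots of the eigenvalues of A^T A)."""
        s1 = a * a + b * b + c * c + d * d
        s2 = math.hypot(a * a + b * b - c * c - d * d, 2 * (a * c + b * d))
        hi = math.sqrt((s1 + s2) / 2)
        lo = math.sqrt(max((s1 - s2) / 2, 0.0))
        return hi, lo

    def usable_modes(sigmas, noise):
        """Singular modes above the noise floor contribute to the
        reconstruction; the rest are noise-limited and must be truncated."""
        return sum(1 for s in sigmas if s > noise)

    # A nearly rank-deficient sensitivity matrix: two detectors see almost
    # the same thing, so the second mode is tiny and drowns in noise.
    hi, lo = singular_values_2x2(1.0, 1.0, 1.0, 1.01)
    print(round(hi, 3), round(lo, 4))
    print(usable_modes((hi, lo), noise=0.01))  # 1
    ```

    Adding detectors that sample the tissue in nearly redundant ways inflates the matrix without adding large singular values, which is the paper's observation that detection noise outweighs detection geometry.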

  19. Optimal networks of future gravitational-wave telescopes

    International Nuclear Information System (INIS)

    Raffai, Péter; Márka, Zsuzsa; Márka, Szabolcs; Gondán, László; Kelecsényi, Nándor; Heng, Ik Siong; Logue, Josh

    2013-01-01

    We aim to find the optimal site locations for a hypothetical network of 1–3 triangular gravitational-wave telescopes. We define the following N-telescope figures of merit (FoMs) and construct three corresponding metrics: (a) capability of reconstructing the signal polarization; (b) accuracy in source localization; and (c) accuracy in reconstructing the parameters of a standard binary source. We also define a combined metric that takes into account the three FoMs with practically equal weight. After constructing a geomap of possible telescope sites, we give the optimal 2-telescope networks for the four FoMs separately in example cases where the location of the first telescope has been predetermined. We found that based on the combined metric, placing the first telescope to Australia provides the most options for optimal site selection when extending the network with a second instrument. We suggest geographical regions where a potential second and third telescope could be placed to get optimal network performance in terms of our FoMs. Additionally, we use a similar approach to find the optimal location and orientation for the proposed LIGO-India detector within a five-detector network with Advanced LIGO (Hanford), Advanced LIGO (Livingston), Advanced Virgo, and KAGRA. We found that the FoMs do not change greatly in sites within India, though the network can suffer a significant loss in reconstructing signal polarizations if the orientation angle of an L-shaped LIGO-India is not set to the optimal value of ∼58.2°( + k × 90°) (measured counterclockwise from East to the bisector of the arms). (paper)

  20. On characterizations of quasi-metric completeness

    Energy Technology Data Exchange (ETDEWEB)

    Dag, H.; Romaguera, S.; Tirado, P.

    2017-07-01

    Hu proved in [4] that a metric space (X, d) is complete if and only if for any closed subspace C of (X, d), every Banach contraction on C has a fixed point. Since then several authors have investigated the problem of characterizing metric completeness by means of fixed point theorems. Recently this problem has been studied in the more general context of quasi-metric spaces for different notions of completeness. Here we present a characterization of a kind of completeness for quasi-metric spaces by means of a quasi-metric version of Hu's theorem. (Author)

  1. Comparative evaluation of various optimization methods and the development of an optimization code system SCOOP

    International Nuclear Information System (INIS)

    Suzuki, Tadakazu

    1979-11-01

    Thirty-two programs for linear and nonlinear optimization problems with or without constraints have been developed or incorporated, and their stability, convergence and efficiency have been examined. On the basis of these evaluations, the first version of the optimization code system SCOOP-I has been completed. SCOOP-I is designed to be an efficient, reliable, useful and also flexible system for general applications. The system enables one to find the global optimum for a wide class of problems by selecting the most appropriate optimization method built into it. (author)

  2. Performance of different metrics proposed to CIE TC 1-91

    Directory of Open Access Journals (Sweden)

    Pramod Bhusal

    2017-12-01

    Full Text Available The main aim of the article is to find out the performance of different metrics proposed to CIE TC 1-91. Currently, six different indexes have been proposed to CIE TC 1-91: Colour Quality Scale (CQS), Feeling of Contrast Index (FCI), Memory colour rendering index (MCRI), Preference of skin (PS), Relative gamut area index (RGAI) and the Illuminating Engineering Society method for evaluating light source colour rendition (IES TM-30). The evaluation and analysis are based on a previously conducted experiment in a lighting booth. The analysis showed that the area-based metric FCI was a good subjective preference indicator. The subjective preference was measured in terms of the naturalness of objects, the colourfulness of a colour checker chart, and the visual appearance of the lit scene in the booth.

  3. Towards enhancement of performance of K-means clustering using nature-inspired optimization algorithms.

    Science.gov (United States)

    Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.
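    The local-optima problem described in this record can be made concrete with a minimal sketch: several "search agents" run K-means from different random initializations and the best solution by the K-means objective (SSE) wins. This is a deliberately simple stand-in for the cooperative Bat/Cuckoo/Firefly-style search of the paper, shown here on 1-D toy data:

    ```python
    import random

    def sse(points, centers):
        """Sum of squared distances to the nearest center (K-means objective)."""
        return sum(min((p - c) ** 2 for c in centers) for p in points)

    def kmeans_once(points, k, rng, iters=15):
        centers = rng.sample(points, k)
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for p in points:
                groups[min(range(k), key=lambda i: (p - centers[i]) ** 2)].append(p)
            centers = [sum(g) / len(g) if g else centers[i]
                       for i, g in enumerate(groups)]
        return centers

    def multi_agent_kmeans(points, k, agents=20, seed=1):
        """Many agents explore different initializations; the lowest-SSE
        result wins, reducing the chance of ending in a local optimum."""
        rng = random.Random(seed)
        return min((kmeans_once(points, k, rng) for _ in range(agents)),
                   key=lambda c: sse(points, c))

    data = [0.8, 1.0, 1.2, 9.8, 10.0, 10.2, 19.8, 20.0, 20.2]
    best = multi_agent_kmeans(data, 3)
    print(sorted(best))  # centers near 1, 10 and 20
    ```

    The nature-inspired hybrids in the paper go further: agents share information during the search (swarming) instead of running independently, which tends to reach the global optimum with fewer evaluations.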

  4. Automated selection of the optimal cardiac phase for single-beat coronary CT angiography reconstruction

    International Nuclear Information System (INIS)

    Stassi, D.; Ma, H.; Schmidt, T. G.; Dutta, S.; Soderman, A.; Pazzani, D.; Gros, E.; Okerlund, D.

    2016-01-01

    Purpose: Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. Methods: An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three
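    The per-slice image quality metric in this record scores the circularity of through-plane vessels. The abstract does not give the exact formula; a plausible stand-in is the isoperimetric circularity 4πA/P², which is 1 for a perfect disk and smaller for motion-blurred, irregular cross-sections:

    ```python
    import math

    def circularity(area, perimeter):
        """Isoperimetric circularity 4*pi*A / P^2: 1.0 for a perfect disk,
        lower for irregular shapes. Illustrative only -- the paper's exact
        vessel-circularity term is not specified in the abstract."""
        return 4 * math.pi * area / perimeter ** 2

    r = 2.0
    print(round(circularity(math.pi * r ** 2, 2 * math.pi * r), 3))  # 1.0 (disk)
    print(round(circularity(1.0, 4.0), 3))  # unit square scores pi/4 ~ 0.785
    ```

    Aggregating such a score across slices, while discarding slices without through-plane vessels, yields a per-phase quality number whose maximum over phases selects the reconstruction phase.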

  5. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    Science.gov (United States)

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
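    Conductance, the stand-alone metric this study found most indicative, has a simple definition for an unweighted graph: the number of cut edges leaving a cluster S divided by min(vol(S), vol(V∖S)), where the volume is the sum of degrees. A minimal sketch on an edge list:

    ```python
    def conductance(edges, cluster):
        """Conductance of a cluster S: cut(S) / min(vol(S), vol(V - S)).
        Lower is better -- the cluster is well separated from the rest."""
        cut = sum(1 for u, v in edges if (u in cluster) != (v in cluster))
        vol_s = sum((u in cluster) + (v in cluster) for u, v in edges)
        vol_rest = 2 * len(edges) - vol_s
        return cut / min(vol_s, vol_rest)

    # Two triangles joined by a single bridge edge.
    edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
    print(conductance(edges, {0, 1, 2}))  # 1/7 ~ 0.143
    ```

    A well-defined community (one bridge edge against volume 7) scores low, as here; a random vertex subset of the same size would cut many more edges and score close to 1.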

  6. Brand metrics that matter

    NARCIS (Netherlands)

    Muntinga, D.; Bernritter, S.

    2017-01-01

The brand is increasingly central to the organization. It is therefore essential to measure the brand's health, performance and development. Selecting the right brand metrics, however, is a challenge: an enormous number of metrics compete for brand managers' attention. But which

  7. Privacy Metrics and Boundaries

    NARCIS (Netherlands)

    L-F. Pau (Louis-François)

    2005-01-01

This paper aims at defining a set of privacy metrics (quantitative and qualitative) for the relation between a privacy protector and an information gatherer. The aims of such metrics are: to allow assessing and comparing different user scenarios and their differences; for

  8. Optimization of an In silico Cardiac Cell Model for Proarrhythmia Risk Assessment

    Directory of Open Access Journals (Sweden)

    Sara Dutta

    2017-08-01

Drug-induced Torsade-de-Pointes (TdP) has been responsible for the withdrawal of many drugs from the market and is therefore of major concern to global regulatory agencies and the pharmaceutical industry. The Comprehensive in vitro Proarrhythmia Assay (CiPA) was proposed to improve prediction of TdP risk, using in silico models and in vitro multi-channel pharmacology data as integral parts of this initiative. Previously, we reported that combining dynamic interactions between drugs and the rapid delayed rectifier potassium current (IKr) with multi-channel pharmacology is important for TdP risk classification, and we modified the original O'Hara-Rudy ventricular cell mathematical model to include a Markov model of IKr to represent dynamic drug-IKr interactions (IKr-dynamic ORd model). We also developed a novel metric that could separate drugs with different TdP liabilities at high concentrations based on total electronic charge carried by the major inward ionic currents during the action potential. In this study, we further optimized the IKr-dynamic ORd model by refining model parameters using published human cardiomyocyte experimental data under control and drug block conditions. Using this optimized model and manual patch clamp data, we developed an updated version of the metric that quantifies the net electronic charge carried by major inward and outward ionic currents during the steady state action potential, which could classify the level of drug-induced TdP risk across a wide range of concentrations and pacing rates. We also established a framework to quantitatively evaluate a system's robustness against the induction of early afterdepolarizations (EADs), and demonstrated that the new metric is correlated with the cell's robustness to the pro-EAD perturbation of IKr conductance reduction. In summary, in this work we present an optimized model that is more consistent with experimental data, an improved metric that can classify drugs at
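The net-charge metric described here reduces, numerically, to a time integral of summed ionic currents over a beat. The toy sketch below shows only that integration step; the current traces are synthetic placeholders, not outputs of the IKr-dynamic ORd model.

```python
import numpy as np

# Toy net-charge metric: integrate the sum of an inward and an outward
# current over one paced beat. Traces are synthetic, for illustration only.
t = np.linspace(0.0, 0.5, 5001)             # s, one beat at 2 Hz pacing
inward = -2.0 * np.exp(-t / 0.05)           # ICaL-like inward current (A/F)
outward = 1.5 * (1.0 - np.exp(-t / 0.1))    # IKr-like outward current (A/F)
net_current = inward + outward

# Trapezoidal rule: net electronic charge carried during the beat (C/F).
net_charge = float(((net_current[1:] + net_current[:-1]) / 2 * np.diff(t)).sum())
print(net_charge)   # positive here: outward charge dominates for these traces
```

In the actual assay, the sign and magnitude of this quantity under drug block is what separates the TdP risk classes.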

  9. Urban Landscape Metrics for Climate and Sustainability Assessments

    Science.gov (United States)

    Cochran, F. V.; Brunsell, N. A.

    2014-12-01

To test metrics for rapid identification of urban classes and sustainable urban forms, we examine the configuration of urban landscapes using satellite remote sensing data. We adopt principles from landscape ecology and urban planning to evaluate urban heterogeneity and design themes that may constitute more sustainable urban forms, including compactness (connectivity), density, mixed land uses, diversity, and greening. Using 2-D wavelet and multi-resolution analysis, landscape metrics, and satellite-derived indices of vegetation fraction and impervious surface, the spatial variability of Landsat and MODIS data from the metropolitan areas of Manaus and São Paulo, Brazil is investigated. Landscape metrics for density, connectivity, and diversity, like the Shannon Diversity Index, are used to assess the diversity of urban buildings, geographic extent, and connectedness. Rapid detection of urban classes for low density, medium density, high density, and tall building district at the 1-km scale is needed for use in climate models. If the complexity of finer-scale urban characteristics can be related to the neighborhood scale, both climate and sustainability assessments may be more attainable across urban areas.
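The Shannon Diversity Index mentioned in this record is straightforward to compute from class proportions. The sketch below uses a synthetic four-class raster patch with made-up class codes, not the Landsat/MODIS data of the study.

```python
import numpy as np

def shannon_diversity(class_map):
    """Shannon Diversity Index (SHDI) of a categorical land-cover raster:
    -sum(p_i * ln(p_i)) over the observed class proportions p_i."""
    _, counts = np.unique(class_map, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

# Toy 100x100 urban patch (hypothetical class codes: 0=low, 1=medium,
# 2=high density, 3=tall building district).
rng = np.random.default_rng(0)
patch = rng.integers(0, 4, size=(100, 100))
print(shannon_diversity(patch))  # near ln(4) ≈ 1.386 for even proportions
```

A single-class patch scores 0; the index grows as classes become more numerous and more evenly mixed, which is why it works as a heterogeneity proxy.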

  10. Energy metrics analysis of hybrid - photovoltaic (PV) modules

    Energy Technology Data Exchange (ETDEWEB)

    Tiwari, Arvind [Department of Electronics and Communication, Krishna Institute of Engineering and Technology, 13 k.m. stone, Ghaziabad - Meerut Road, Ghaziabad 201 206, UP (India); Barnwal, P.; Sandhu, G.S.; Sodha, M.S. [Centre for Energy Studies, Indian Institute of Technology Delhi, Hauz Khas, New Delhi 110 016 (India)

    2009-12-15

In this paper, energy metrics (energy payback time (EPBT), electricity production factor (EPF) and life cycle conversion efficiency (LCCE)) of hybrid photovoltaic (PV) modules have been analyzed and presented for the composite climate of New Delhi, India. For this purpose, it is necessary to calculate (1) the energy consumption in making different components of the PV modules and (2) the annual energy (electrical and thermal) available from the hybrid-PV modules. A set of mathematical relations has been reformulated for computation of the energy metrics. The manufacturing energy, material production energy, energy use and distribution energy of the system have been taken into account to determine the embodied energy for the hybrid-PV modules. The embodied energy and annual energy outputs have been used for evaluation of the energy metrics. For the hybrid PV module, it has been observed that the EPBT is significantly reduced by taking into account the increase in annual energy availability of the thermal energy in addition to the electrical energy. The values of EPF and LCCE of the hybrid PV module become higher, as expected. (author)
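The three metrics have simple standard definitions in terms of embodied energy and annual output. The sketch below uses the common textbook formulations (the paper's reformulated relations may differ), with made-up input values.

```python
def energy_metrics(embodied_kwh, annual_electrical_kwh, annual_thermal_kwh,
                   annual_solar_input_kwh, lifetime_years):
    """Illustrative energy metrics for a hybrid PV module, using standard
    definitions (assumed here, not taken from the paper itself)."""
    annual_output = annual_electrical_kwh + annual_thermal_kwh
    # EPBT: years of output needed to repay the embodied energy.
    epbt = embodied_kwh / annual_output
    # EPF: annual output per unit embodied energy (per year).
    epf = annual_output / embodied_kwh
    # LCCE: lifetime net output per unit lifetime solar input.
    lcce = (annual_output * lifetime_years - embodied_kwh) / (
        annual_solar_input_kwh * lifetime_years)
    return epbt, epf, lcce

epbt, epf, lcce = energy_metrics(1200, 150, 250, 5000, 30)
print(f"EPBT={epbt:.1f} yr  EPF={epf:.2f}/yr  LCCE={lcce:.3f}")
```

Counting the thermal output in `annual_output`, as the record describes, is what drives the EPBT down relative to an electricity-only accounting.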

  11. Revision and extension of Eco-LCA metrics for sustainability assessment of the energy and chemical processes.

    Science.gov (United States)

    Yang, Shiying; Yang, Siyu; Kraslawski, Andrzej; Qian, Yu

    2013-12-17

    Ecologically based life cycle assessment (Eco-LCA) is an appealing approach for the evaluation of resources utilization and environmental impacts of the process industries from an ecological scale. However, the aggregated metrics of Eco-LCA suffer from some drawbacks: the environmental impact metric has limited applicability; the resource utilization metric ignores indirect consumption; the renewability metric fails to address the quantitative distinction of resources availability; the productivity metric seems self-contradictory. In this paper, the existing Eco-LCA metrics are revised and extended for sustainability assessment of the energy and chemical processes. A new Eco-LCA metrics system is proposed, including four independent dimensions: environmental impact, resource utilization, resource availability, and economic effectiveness. An illustrative example of comparing assessment between a gas boiler and a solar boiler process provides insight into the features of the proposed approach.

  12. Evaluation of Frameworks for HSCT Design Optimization

    Science.gov (United States)

    Krishnan, Ramki

    1998-01-01

    This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.

  13. State of the art metrics for aspect oriented programming

    Science.gov (United States)

    Ghareb, Mazen Ismaeel; Allen, Gary

    2018-04-01

The quality evaluation of software, e.g., defect measurement, gains significance as the use of software applications grows. Metric measurements are considered the primary indicator of imperfection prediction and software maintenance in various empirical studies of software products. However, there is no agreement on which metrics are compelling quality indicators for novel development approaches such as Aspect Oriented Programming (AOP). AOP intends to enhance programming quality by providing new and novel constructs for the development of systems, for example point cuts, advice and inter-type relationships. Hence, it is not evident whether quality indicators for AOP can be derived as direct extensions of traditional OO measurements. On the other hand, investigations of AOP regularly depend on established coupling measurements. Despite the late adoption of AOP in empirical studies, coupling measurements have been adopted as useful markers of fault proneness in this context. In this paper we investigate the state of the art metrics for measurement of Aspect Oriented systems development.

  14. Cyber threat metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Frye, Jason Neal; Veitch, Cynthia K.; Mateski, Mark Elliot; Michalski, John T.; Harris, James Mark; Trevino, Cassandra M.; Maruoka, Scott

    2012-03-01

    Threats are generally much easier to list than to describe, and much easier to describe than to measure. As a result, many organizations list threats. Fewer describe them in useful terms, and still fewer measure them in meaningful ways. This is particularly true in the dynamic and nebulous domain of cyber threats - a domain that tends to resist easy measurement and, in some cases, appears to defy any measurement. We believe the problem is tractable. In this report we describe threat metrics and models for characterizing threats consistently and unambiguously. The purpose of this report is to support the Operational Threat Assessment (OTA) phase of risk and vulnerability assessment. To this end, we focus on the task of characterizing cyber threats using consistent threat metrics and models. In particular, we address threat metrics and models for describing malicious cyber threats to US FCEB agencies and systems.

  15. Fixed point theory in metric type spaces

    CERN Document Server

    Agarwal, Ravi P; O’Regan, Donal; Roldán-López-de-Hierro, Antonio Francisco

    2015-01-01

    Written by a team of leading experts in the field, this volume presents a self-contained account of the theory, techniques and results in metric type spaces (in particular in G-metric spaces); that is, the text approaches this important area of fixed point analysis beginning from the basic ideas of metric space topology. The text is structured so that it leads the reader from preliminaries and historical notes on metric spaces (in particular G-metric spaces) and on mappings, to Banach type contraction theorems in metric type spaces, fixed point theory in partially ordered G-metric spaces, fixed point theory for expansive mappings in metric type spaces, generalizations, present results and techniques in a very general abstract setting and framework. Fixed point theory is one of the major research areas in nonlinear analysis. This is partly due to the fact that in many real world problems fixed point theory is the basic mathematical tool used to establish the existence of solutions to problems which arise natur...
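The fixed point theory the volume builds toward generalizes the classical Banach contraction principle for ordinary metric spaces, which (stated here from standard sources, not from the book itself) reads:

```latex
\textbf{Theorem (Banach).} Let $(X,d)$ be a complete metric space and let
$T\colon X\to X$ be a contraction, i.e.\ there exists $k\in[0,1)$ such that
\[
  d(Tx,Ty)\le k\,d(x,y) \qquad \text{for all } x,y\in X.
\]
Then $T$ has a unique fixed point $x^\ast\in X$, and for every $x_0\in X$
the Picard iterates $x_{n+1}=Tx_n$ converge to $x^\ast$, with the a priori
error bound
\[
  d(x_n,x^\ast)\le \frac{k^n}{1-k}\,d(x_1,x_0).
\]
```

The G-metric results surveyed in the book replace $(X,d)$ with a generalized distance on triples of points while preserving this contraction-iteration structure.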

  16. Measuring Success: Metrics that Link Supply Chain Management to Aircraft Readiness

    National Research Council Canada - National Science Library

    Balestreri, William

    2002-01-01

This thesis evaluates and analyzes current strategic management planning methods that develop performance metrics linking supply chain management to aircraft readiness. Our primary focus is the Marine...

  17. Instrument Motion Metrics for Laparoscopic Skills Assessment in Virtual Reality and Augmented Reality.

    Science.gov (United States)

    Fransson, Boel A; Chen, Chi-Ya; Noyes, Julie A; Ragle, Claude A

    2016-11-01

    To determine the construct and concurrent validity of instrument motion metrics for laparoscopic skills assessment in virtual reality and augmented reality simulators. Evaluation study. Veterinarian students (novice, n = 14) and veterinarians (experienced, n = 11) with no or variable laparoscopic experience. Participants' minimally invasive surgery (MIS) experience was determined by hospital records of MIS procedures performed in the Teaching Hospital. Basic laparoscopic skills were assessed by 5 tasks using a physical box trainer. Each participant completed 2 tasks for assessments in each type of simulator (virtual reality: bowel handling and cutting; augmented reality: object positioning and a pericardial window model). Motion metrics such as instrument path length, angle or drift, and economy of motion of each simulator were recorded. None of the motion metrics in a virtual reality simulator showed correlation with experience, or to the basic laparoscopic skills score. All metrics in augmented reality were significantly correlated with experience (time, instrument path, and economy of movement), except for the hand dominance metric. The basic laparoscopic skills score was correlated to all performance metrics in augmented reality. The augmented reality motion metrics differed between American College of Veterinary Surgeons diplomates and residents, whereas basic laparoscopic skills score and virtual reality metrics did not. Our results provide construct validity and concurrent validity for motion analysis metrics for an augmented reality system, whereas a virtual reality system was validated only for the time score. © Copyright 2016 by The American College of Veterinary Surgeons.
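The motion metrics named in this record (instrument path length, economy of movement) have conventional definitions over sampled tip positions. The sketch below uses those common definitions, assumed for illustration; the simulators in the study compute their own variants.

```python
import numpy as np

def motion_metrics(tip_xyz, task_time_s):
    """Path length, economy of motion, and average speed from sampled
    instrument tip positions (N x 3 array), per the usual definitions
    in the laparoscopic skills-assessment literature."""
    steps = np.diff(tip_xyz, axis=0)
    path_length = float(np.linalg.norm(steps, axis=1).sum())
    straight_line = float(np.linalg.norm(tip_xyz[-1] - tip_xyz[0]))
    # Economy of motion: 1.0 for a perfectly straight move, lower otherwise.
    economy = straight_line / path_length if path_length > 0 else 0.0
    return path_length, economy, path_length / task_time_s

# A straight 10 cm move sampled at 50 points: economy should be exactly 1.0.
track = np.column_stack([np.linspace(0, 10, 50), np.zeros(50), np.zeros(50)])
pl, econ, speed = motion_metrics(track, 5.0)
print(pl, econ, speed)
```

Experienced operators typically show shorter path lengths and higher economy for the same task, which is the correlation with experience the study tests.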

  18. Development and Implementation of a Design Metric for Systems Containing Long-Term Fluid Loops

    Science.gov (United States)

    Steele, John W.

    2016-01-01

    John Steele, a chemist and technical fellow from United Technologies Corporation, provided a water quality module to assist engineers and scientists with a metric tool to evaluate risks associated with the design of space systems with fluid loops. This design metric is a methodical, quantitative, lessons-learned based means to evaluate the robustness of a long-term fluid loop system design. The tool was developed by a cross-section of engineering disciplines who had decades of experience and problem resolution.

  19. A Metric and Workflow for Quality Control in the Analysis of Heterogeneity in Phenotypic Profiles and Screens

    Science.gov (United States)

    Gough, Albert; Shun, Tongying; Taylor, D. Lansing; Schurdak, Mark

    2016-01-01

Heterogeneity is well recognized as a common property of cellular systems that impacts biomedical research and the development of therapeutics and diagnostics. Several studies have shown that analysis of heterogeneity gives insight into mechanisms of action of perturbagens, can be used to predict optimal combination therapies, and can quantify heterogeneity in tumors, where heterogeneity is believed to be associated with adaptation and resistance. Cytometry methods including high content screening (HCS), high throughput microscopy, flow cytometry, mass spectrometry imaging and digital pathology capture cell level data for populations of cells. However, it is often assumed that the population response is normally distributed and therefore that the average adequately describes the results. A deeper understanding of the results of the measurements and more effective comparison of perturbagen effects requires analysis that takes into account the distribution of the measurements, i.e. the heterogeneity. However, the reproducibility of heterogeneous data collected on different days, and in different plates/slides, has not previously been evaluated. Here we show that conventional assay quality metrics alone are not adequate for quality control of the heterogeneity in the data. To address this need, we demonstrate the use of the Kolmogorov-Smirnov statistic as a metric for monitoring the reproducibility of heterogeneity in an SAR screen, and describe a workflow for quality control in heterogeneity analysis. One major challenge in high throughput biology is the evaluation and interpretation of heterogeneity in thousands of samples, such as compounds in a cell-based screen. In this study we also demonstrate that three previously reported heterogeneity indices capture the shapes of the distributions and provide a means to filter and browse big data sets of cellular distributions in order to compare and identify distributions of interest. These metrics and methods are presented as a
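The Kolmogorov-Smirnov statistic compares whole distributions rather than means, which is why it works as a reproducibility metric here. The sketch below assumes `scipy` is available and uses synthetic per-cell intensities, not the screen data of the study.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical per-cell intensities from control wells on two days, plus a
# third sample whose distribution has drifted. A small KS statistic means
# the cell-level distributions (not just their averages) are reproducible.
day1 = rng.lognormal(mean=1.0, sigma=0.5, size=2000)
day2 = rng.lognormal(mean=1.0, sigma=0.5, size=2000)
drifted = rng.lognormal(mean=1.3, sigma=0.5, size=2000)

ks_same = ks_2samp(day1, day2).statistic
ks_drift = ks_2samp(day1, drifted).statistic
print(f"replicate KS={ks_same:.3f}  drifted KS={ks_drift:.3f}")
```

Tracking this statistic across plates and days flags runs whose heterogeneity, not just whose mean, has shifted, exactly the failure mode conventional assay metrics miss.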

  20. Comparison of Two Probabilistic Fatigue Damage Assessment Approaches Using Prognostic Performance Metrics

    Directory of Open Access Journals (Sweden)

    Xuefei Guan

    2011-01-01

In this paper, two probabilistic prognosis updating schemes are compared. One is based on the classical Bayesian approach and the other is based on the newly developed maximum relative entropy (MRE) approach. The algorithm performance of the two models is evaluated using a set of recently developed prognostics-based metrics. Various uncertainties from measurements, modeling, and parameter estimations are integrated into the prognosis framework as random input variables for fatigue damage of materials. Measures of response variables are then used to update the statistical distributions of random variables and the prognosis results are updated using posterior distributions. The Markov Chain Monte Carlo (MCMC) technique is employed to provide the posterior samples for model updating in the framework. Experimental data are used to demonstrate the operation of the proposed probabilistic prognosis methodology. A set of prognostics-based metrics are employed to quantitatively evaluate the prognosis performance and compare the proposed entropy method with the classical Bayesian updating algorithm. In particular, model accuracy, precision, robustness and convergence are rigorously evaluated in addition to the qualitative visual comparison. Following this, potential development and improvement for the prognostics-based metrics are discussed in detail.
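The Bayesian updating step the record describes — revising a parameter's distribution after each measurement — can be shown in closed form for a conjugate case. The sketch below is a minimal normal-normal update with made-up prior and data; the paper itself uses MCMC over a fatigue damage model, not this shortcut.

```python
import numpy as np

# Conjugate normal-normal Bayesian update for a hypothetical damage-rate
# parameter: known observation variance, normal prior on the mean.
prior_mu, prior_var = 1.0, 0.25        # prior belief about the parameter
obs = np.array([1.3, 1.2, 1.4])        # new response measurements
obs_var = 0.1                          # assumed measurement variance
n = obs.size

# Posterior precision is the sum of prior and data precisions.
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mu = post_var * (prior_mu / prior_var + obs.sum() / obs_var)
print(post_mu, post_var)               # posterior tightens toward the data
```

When no conjugate form exists, as in the paper's fatigue model, MCMC supplies the posterior samples that this formula provides analytically here.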

  1. Exergy metrication of radiant panel heating and cooling with heat pumps

    International Nuclear Information System (INIS)

    Kilkis, Birol

    2012-01-01

Highlights: ► Rational Exergy Management Model analytically relates heat pumps and radiant panels. ► Heat pumps driven by wind energy perform better with radiant panels. ► Better CO2 mitigation is possible with wind turbine, heat pump, radiant panel combination. ► Energy savings and thermo-mechanical performance are directly linked to CO2 emissions. - Abstract: Radiant panels are known to be energy efficient sensible heating and cooling systems and a suitable fit for low-exergy buildings. This paper points out the little known fact that this may not necessarily be true unless their low-exergy demand is matched with low-exergy waste and alternative energy resources. In order to further investigate and metricate this condition and shed more light on this issue for different types of energy resources and energy conversion systems coupled to radiant panels, a new engineering metric was developed. Using this metric, which is based on the Rational Exergy Management Model, the true potential and benefits of radiant panels coupled to ground-source heat pumps were analyzed. Results provide a new perspective in identifying the actual benefits of heat pump technology in curbing CO2 emissions and also refer to IEA Annex 49 findings for low-exergy buildings. Case studies regarding different scenarios are compared with a base case, which comprises a radiant panel system connected to a natural gas-fired condensing boiler in heating and a grid power-driven chiller in cooling. Results show that there is a substantial CO2 emission reduction potential if radiant panels are optimally operated with ground-source heat pumps driven by renewable energy sources, or optimally matched with combined heat and power systems, preferably running on alternative fuels.

  2. Defining quality metrics and improving safety and outcome in allergy care.

    Science.gov (United States)

    Lee, Stella; Stachler, Robert J; Ferguson, Berrylin J

    2014-04-01

The delivery of allergy immunotherapy in the otolaryngology office is variable and lacks standardization. Quality metrics encompass the measurement of factors associated with good patient-centered care. These factors have yet to be defined in the delivery of allergy immunotherapy. We developed and applied quality metrics to 6 allergy practices affiliated with an academic otolaryngic allergy center. This work was conducted at a tertiary academic center providing care to over 1500 patients. We evaluated methods and variability between 6 sites. Tracking of errors and anaphylaxis was initiated across all sites. A nationwide survey of academic and private allergists was used to collect data on current practice and use of quality metrics. The most common types of errors recorded were patient identification errors (n = 4), followed by vial mixing errors (n = 3), and dosing errors (n = 2). There were 7 episodes of anaphylaxis of which 2 were secondary to dosing errors for a rate of 0.01% or 1 in every 10,000 injection visits/year. Site visits showed that 86% of key safety measures were followed. Analysis of nationwide survey responses revealed that quality metrics are still not well defined by either medical or otolaryngic allergy practices. Academic practices were statistically more likely to use quality metrics (p = 0.021) and perform systems reviews and audits in comparison to private practices (p = 0.005). Quality metrics in allergy delivery can help improve safety and quality care. These metrics need to be further defined by otolaryngic allergists in the changing health care environment. © 2014 ARS-AAOA, LLC.

  3. Machine Learning for ATLAS DDM Network Metrics

    CERN Document Server

    Lassnig, Mario; The ATLAS collaboration; Vamosi, Ralf

    2016-01-01

    The increasing volume of physics data is posing a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from our ongoing automation efforts. First, we describe our framework for distributed data management and network metrics, automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.
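The forecasting step described here — training a regression model on historical metrics to predict future values — can be sketched with scikit-learn. Everything below is synthetic and illustrative (hypothetical lagged-throughput features, not ATLAS DDM data), and the model choice is an assumption, not the collaboration's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy stand-in for forecasting a network metric from its recent history.
rng = np.random.default_rng(7)
lagged = rng.uniform(0, 100, size=(500, 4))           # last 4 throughput samples
target = lagged.mean(axis=1) + rng.normal(0, 2, 500)  # next-interval throughput

# Train on the first 400 intervals, evaluate on the held-out 100.
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(lagged[:400], target[:400])
pred = model.predict(lagged[400:])
mae = float(np.abs(pred - target[400:]).mean())
print(f"holdout mean absolute error: {mae:.1f}")
```

Scoring models on held-out data this way is the "score the resulting models and parameters" step the record mentions; a forecast is only useful for job scheduling if its holdout error is small relative to the metric's natural variability.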

  4. Regge calculus from discontinuous metrics

    International Nuclear Information System (INIS)

    Khatsymovsky, V.M.

    2003-01-01

Regge calculus is considered as a particular case of the more general system where the link lengths of any two neighbouring 4-tetrahedra do not necessarily coincide on their common face. This system is treated as the one described by a metric that is discontinuous on the faces. In the superspace of all discontinuous metrics the Regge calculus metrics form some hypersurface defined by continuity conditions. Quantum theory of the discontinuous metric system is assumed to be fixed somehow in the form of a quantum measure on (the space of functionals on) the superspace. The problem of reducing this measure to the Regge hypersurface is addressed. The quantum Regge calculus measure is defined from a discontinuous metric measure by inserting the δ-function-like phase factor. The requirement that continuity conditions be imposed in a 'face-independent' way fixes this factor uniquely. The term 'face-independent' means that this factor depends only on the (hyper)plane spanned by the face, not on its form and size. This requirement seems to be natural from the viewpoint of existence of the well-defined continuum limit maximally free of lattice artefacts

  5. Exergoeconomic analysis and optimization of a model cogeneration system; Analise exergoeconomica e otimizacao de um modelo de sistema de cogeracao

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Leonardo S.R. [Centro de Pesquisas de Energia Eletrica, Rio de Janeiro, RJ (Brazil). Area de Conhecimento de Materiais e Mecanica]. E-mail: lsrv@cepel.br; Donatelli, Joao L.M. [Espirito Santo Univ., Vitoria, ES (Brazil). Dept. de Engenharia Mecanica]. E-mail: donatelli@lttc.com.ufrj.br; Cruz, Manuel E.C. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Dept. de Engenharia Mecanica]. E-mail: manuel@serv.com.ufrj.br

    2000-07-01

In this paper we perform exergetic and exergoeconomic analyses, a mathematical optimization and an exergoeconomic optimization of a gas turbine-heat recovery boiler cogeneration system with fixed electricity and steam production rates. The exergy balance is calculated with the IPSE pro thermal system simulation program. In the exergetic analysis, exergy destruction rates, exergetic efficiencies and structural bond coefficients for each component are evaluated as functions of the decision variables of the optimization problem. In the exergoeconomic analysis the cost of each exergetic flow is determined through cost balance equations and additional auxiliary equations from cost partition criteria. Mathematical optimization is performed by the variable metric method (software EES - Engineering Equation Solver) and by successive quadratic programming (IMSL library - Fortran Power Station). The exergoeconomic optimization is performed on the basis of the exergoeconomic variables. System optimization is also performed by evaluating the derivative of the objective function through finite differences. This paper concludes with a comparison between the four optimization techniques employed. (author)

  6. Numerical Calabi-Yau metrics

    International Nuclear Information System (INIS)

    Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, Rene

    2008-01-01

    We develop numerical methods for approximating Ricci flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one parameter family of quintics. We also suggest several ways to extend our results

  7. Ranking metrics in gene set enrichment analysis: do they matter?

    Science.gov (United States)

    Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna

    2017-05-12

There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which could affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using the k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established, i.e. the absolute value of the Moderated Welch Test statistic, the Minimum Significant Difference, the absolute value of the Signal-To-Noise ratio and the Baumgartner-Weiss-Schindler test statistic. For false positive rate estimation, all selected ranking metrics were robust with respect to sample size. For sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-To-Noise ratio gave stable results, while Baumgartner-Weiss-Schindler and Minimum Significant Difference showed better results for larger sample sizes. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA. Choosing a ranking metric in Gene Set Enrichment Analysis has critical impact on the results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using Baumgartner
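Two of the ranking metrics compared in this record are easy to state per gene: the signal-to-noise ratio and the Welch t statistic. The sketch below implements both over a synthetic two-group expression matrix (using the common definitions, which may differ in detail from the paper's implementations).

```python
import numpy as np
from scipy import stats

def rank_genes(expr_a, expr_b, metric="s2n"):
    """Rank genes (rows) of two expression matrices (genes x samples) by the
    absolute value of a per-gene score: signal-to-noise or Welch t."""
    ma, mb = expr_a.mean(axis=1), expr_b.mean(axis=1)
    sa, sb = expr_a.std(axis=1, ddof=1), expr_b.std(axis=1, ddof=1)
    if metric == "s2n":
        scores = (ma - mb) / (sa + sb)          # signal-to-noise ratio
    elif metric == "welch":
        scores = stats.ttest_ind(expr_a, expr_b, axis=1,
                                 equal_var=False).statistic
    else:
        raise ValueError(metric)
    return np.argsort(-np.abs(scores)), scores

rng = np.random.default_rng(0)
a = rng.normal(0, 1, size=(100, 10))
b = rng.normal(0, 1, size=(100, 10))
a[0] += 3.0                                     # one truly differential gene
order, scores = rank_genes(a, b, "s2n")
print(order[:5])
```

Different metrics can put different genes at the top of the ranked list that GSEA walks, which is how the metric choice propagates into the enrichment results.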

  8. Scaling-Laws of Flow Entropy with Topological Metrics of Water Distribution Networks

    Directory of Open Access Journals (Sweden)

    Giovanni Francesco Santonastaso

    2018-01-01

Robustness of water distribution networks is related to their connectivity and topological structure, which also affect their reliability. Flow entropy, based on Shannon's informational entropy, has been proposed as a measure of network redundancy and adopted as a proxy of reliability in optimal network design procedures. In this paper, the scaling properties of flow entropy of water distribution networks with their size and other topological metrics are studied. To this aim, flow entropy, maximum flow entropy, link density and average path length have been evaluated for a set of 22 networks, both real and synthetic, with different size and topology. The obtained results led to the identification of suitable scaling laws of flow entropy and maximum flow entropy with water distribution network size, in the form of power laws. The obtained relationships allow comparing the flow entropy of water distribution networks with different size, and provide an easy tool to define the maximum achievable entropy of a specific water distribution network. An example of application of the obtained relationships to the design of a water distribution network is provided, showing how, with a constrained multi-objective optimization procedure, a tradeoff between network cost and robustness is easily identified.
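At its core, flow entropy applies Shannon's formula to how flow splits among a node's outgoing pipes. The sketch below shows only that nodal term, simplified; the full Tanyimboh-style network entropy weights and sums these terms over all nodes.

```python
import numpy as np

def node_flow_entropy(flows):
    """Shannon entropy of the flow split at one node: the more evenly demand
    can be re-routed among pipes, the higher the entropy, which is why flow
    entropy serves as a redundancy (and hence robustness) proxy."""
    q = np.asarray(flows, dtype=float)
    p = q / q.sum()
    p = p[p > 0]                      # 0*log(0) is taken as 0
    return float(-(p * np.log(p)).sum())

print(node_flow_entropy([5.0, 5.0]))    # even split: ln(2) ≈ 0.693
print(node_flow_entropy([9.9, 0.1]))    # nearly a single path: close to 0
```

A network that funnels all demand through one path scores near zero, matching the intuition that it has no redundancy to fall back on after a pipe failure.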

  9. Evaluation of Risk Metrics for KHNP Reference Plants Using the Latest Plant Specific Data

    International Nuclear Information System (INIS)

    Jeon, Ho Jun; Hwang, Seok Won; Ghi, Moon Goo

    2010-01-01

    As Risk-Informed Applications (RIAs) are actively implemented in the nuclear industry, an issue associated with the technical adequacy of the Probabilistic Safety Assessment (PSA) arises in its data sources. The American Society of Mechanical Engineers (ASME) PRA standard suggests the use of component failure data that represent the as-built and as-operated plant conditions. Furthermore, the peer reviews for the KHNP reference plants stated that the component failure data should be updated to reflect the latest plant specific data available. For ensuring the technical adequacy in PSA data elements, we try to update component failure data to reflect the as-operated plant conditions, and a trend analysis of the failure data is implemented. In addition, by applying the updated failure data to the PSA models of the KHNP reference plants, the risk metrics of Core Damage Frequency (CDF) and Large Early Release Frequency (LERF) are evaluated

  10. SIMPATIQCO: a server-based software suite which facilitates monitoring the time course of LC-MS performance metrics on Orbitrap instruments.

    Science.gov (United States)

    Pichler, Peter; Mazanek, Michael; Dusberger, Frederico; Weilnböck, Lisa; Huber, Christian G; Stingl, Christoph; Luider, Theo M; Straube, Werner L; Köcher, Thomas; Mechtler, Karl

    2012-11-02

    While the performance of liquid chromatography (LC) and mass spectrometry (MS) instrumentation continues to increase, applications such as analyses of complete or near-complete proteomes and quantitative studies require constant and optimal system performance. For this reason, research laboratories and core facilities alike are recommended to implement quality control (QC) measures as part of their routine workflows. Many laboratories perform sporadic quality control checks. However, successive and systematic longitudinal monitoring of system performance would be facilitated by dedicated automatic or semiautomatic software solutions that aid an effortless analysis and display of QC metrics over time. We present the software package SIMPATIQCO (SIMPle AuTomatIc Quality COntrol) designed for evaluation of data from LTQ Orbitrap, Q-Exactive, LTQ FT, and LTQ instruments. A centralized SIMPATIQCO server can process QC data from multiple instruments. The software calculates QC metrics supervising every step of data acquisition from LC and electrospray to MS. For each QC metric the software learns the range indicating adequate system performance from the uploaded data using robust statistics. Results are stored in a database and can be displayed in a comfortable manner from any computer in the laboratory via a web browser. QC data can be monitored for individual LC runs as well as plotted over time. SIMPATIQCO thus assists the longitudinal monitoring of important QC metrics such as peptide elution times, peak widths, intensities, total ion current (TIC) as well as sensitivity, and overall LC-MS system performance; in this way the software also helps identify potential problems. The SIMPATIQCO software package is available free of charge.
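    A minimal sketch of how a tool might "learn the range indicating adequate system performance" from uploaded runs "using robust statistics". The median/MAD rule and the constant k = 3 below are our assumptions for illustration, not SIMPATIQCO's actual algorithm.

```python
import statistics

def robust_limits(values, k=3.0):
    """Learn an acceptable range for a QC metric from historical runs.

    Uses median +/- k * scaled MAD, which tolerates the occasional bad
    run far better than mean/standard deviation would.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    spread = 1.4826 * mad  # scale factor makes MAD consistent with sigma
    return med - k * spread, med + k * spread

# Chromatographic peak widths (s) from recent runs; one run was bad.
peak_widths = [12.1, 11.8, 12.4, 12.0, 25.0, 11.9, 12.2]
lo, hi = robust_limits(peak_widths)
print(lo <= 12.0 <= hi, lo <= 25.0 <= hi)  # True False
```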

  11. Metric to quantify white matter damage on brain magnetic resonance images

    International Nuclear Information System (INIS)

    Valdes Hernandez, Maria del C.; Munoz Maniega, Susana; Anblagan, Devasuda; Bastin, Mark E.; Wardlaw, Joanna M.; Chappell, Francesca M.; Morris, Zoe; Sakka, Eleni; Dickie, David Alexander; Royle, Natalie A.; Armitage, Paul A.; Deary, Ian J.

    2017-01-01

    Quantitative assessment of white matter hyperintensities (WMH) on structural Magnetic Resonance Imaging (MRI) is challenging. It is important to harmonise results from different software tools, considering not only the volume but also the signal intensity. Here we propose and evaluate a metric of white matter (WM) damage that addresses this need. We obtained WMH and normal-appearing white matter (NAWM) volumes from brain structural MRI of community-dwelling older individuals and stroke patients enrolled in three different studies, using two automatic methods followed by manual editing by two to four observers blind to each other. We calculated the average intensity values on brain structural fluid-attenuation inversion recovery (FLAIR) MRI for the NAWM and WMH. The white matter damage metric is calculated as the proportion of WMH in brain tissue weighted by the relative image contrast of the WMH-to-NAWM. The new metric was evaluated using tissue microstructure parameters and visual ratings of small vessel disease burden and WMH: the Fazekas score for WMH burden and the Prins scale for WMH change. The WM damage metric correlated strongly with the visual rating scores (Spearman ρ = 0.72 to 0.74, p < 0.0001). The repeatability of the WM damage metric was better than that of WM volume (average median difference between measurements 3.26% (IQR 2.76%) and 5.88% (IQR 5.32%), respectively). The follow-up WM damage was highly related to total Prins score even when adjusted for baseline WM damage (ANCOVA, p < 0.0001), which was not always the case for WMH volume, as total Prins was highly associated with the change in the intense WMH volume (p = 0.0079, increase of 4.42 ml per unit change in total Prins, 95%CI [1.17 7.67]), but not with the change in less-intense, subtle WMH, which determined the volumetric change. The new metric is practical and simple to calculate. It is robust to variations in image processing methods and scanning protocols, and sensitive to both subtle and severe white matter damage.
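    The abstract defines the metric as the proportion of WMH in brain tissue weighted by the WMH-to-NAWM image contrast. A minimal sketch follows; the exact contrast definition used here (relative intensity difference) and all numbers are our assumptions, not taken from the paper.

```python
def wm_damage(v_wmh, v_brain, i_wmh, i_nawm):
    """WMH volume fraction weighted by WMH-to-NAWM contrast (sketch).

    `v_*` are volumes (e.g. ml), `i_*` mean FLAIR intensities. The
    contrast term here is an assumed relative intensity difference.
    """
    volume_fraction = v_wmh / v_brain
    contrast = (i_wmh - i_nawm) / i_nawm
    return volume_fraction * contrast

# Hypothetical case: 15 ml of WMH in a 1100 ml brain, FLAIR intensities
# 210 (WMH) versus 120 (NAWM).
print(f"{wm_damage(15.0, 1100.0, 210.0, 120.0):.4f}")
```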

  12. Normalized Point Source Sensitivity for Off-Axis Optical Performance Evaluation of the Thirty Meter Telescope

    Science.gov (United States)

    Seo, Byoung-Joon; Nissly, Carl; Troy, Mitchell; Angeli, George

    2010-01-01

    The Normalized Point Source Sensitivity (PSSN) has previously been defined and analyzed as an On-Axis seeing-limited telescope performance metric. In this paper, we expand the scope of the PSSN definition to include Off-Axis field of view (FoV) points and apply this generalized metric for performance evaluation of the Thirty Meter Telescope (TMT). We first propose various possible choices for the PSSN definition and select one as our baseline. We show that our baseline metric has useful properties including the multiplicative feature even when considering Off-Axis FoV points, which has proven to be useful for optimizing the telescope error budget. Various TMT optical errors are considered for the performance evaluation including segment alignment and phasing, segment surface figures, temperature, and gravity, whose On-Axis PSSN values have previously been published by our group.
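    The multiplicative feature mentioned above can be illustrated with a toy model: for Gaussian point-spread functions, independent small error terms add in variance, and the resulting peak ratios combine approximately multiplicatively. This stand-in is ours for illustration only; it is not the PSSN computation used for TMT.

```python
import math

def pssn_gaussian(sigma0, *error_sigmas):
    """Peak ratio of a Gaussian PSF broadened by independent error terms.

    Toy stand-in for PSSN: convolving Gaussians adds variances, and the
    on-axis peak scales as 1/sigma, so the ratio to the error-free PSF
    is sigma0 / sqrt(sigma0^2 + sum(s_i^2)).
    """
    var = sigma0 ** 2 + sum(s ** 2 for s in error_sigmas)
    return sigma0 / math.sqrt(var)

sigma_seeing = 1.0  # arbitrary units for the nominal (seeing) PSF width
combined = pssn_gaussian(sigma_seeing, 0.1, 0.1)  # both errors together
product = pssn_gaussian(sigma_seeing, 0.1) * pssn_gaussian(sigma_seeing, 0.1)
print(abs(combined - product) < 1e-3)  # approximately multiplicative: True
```

This approximate factorisation is what makes a multiplicative metric convenient for splitting a telescope error budget across independent error sources.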

  13. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Chan, Maria F. [Memorial Sloan-Kettering Cancer Center, Basking Ridge, New Jersey 07920 (United States); Jarry, Geneviève; Lemire, Matthieu [Hôpital Maisonneuve-Rosemont, Montréal, QC H1T 2M4 (Canada); Lowden, John [Indiana University Health - Goshen Hospital, Goshen, Indiana 46526 (United States); Hampton, Carnell [Levine Cancer Institute/Carolinas Medical Center, Concord, North Carolina 28025 (United States); Feygelman, Vladimir [Moffitt Cancer Center, Tampa, Florida 33612 (United States)

    2013-11-15

    Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating the accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS.
Most of the errors were
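    The gamma passing rates discussed above can be made concrete with a minimal 1D implementation (a sketch, not clinical software; real tools search a finely interpolated measurement grid). Note how a 3% dose error at the profile peak still scores gamma = 1.0 and "passes", which is exactly the kind of insensitivity the study warns about.

```python
import math

def gamma_1d(ref, meas, positions, dose_tol, dist_tol):
    """Per-point 1D gamma index with global dose normalisation."""
    norm = max(ref)
    gammas = []
    for x_r, d_r in zip(positions, ref):
        best = math.inf
        for x_m, d_m in zip(positions, meas):
            dose_term = ((d_m - d_r) / (dose_tol * norm)) ** 2
            dist_term = ((x_m - x_r) / dist_tol) ** 2
            best = min(best, math.sqrt(dose_term + dist_term))
        gammas.append(best)
    return gammas

positions = [0.0, 1.0, 2.0, 3.0, 4.0]   # mm
ref = [10.0, 50.0, 100.0, 50.0, 10.0]   # reference dose profile
meas = [10.0, 52.0, 103.0, 49.0, 10.0]  # measured profile, 3% peak error
g = gamma_1d(ref, meas, positions, dose_tol=0.03, dist_tol=3.0)
passing = sum(v <= 1.0 for v in g) / len(g)
print(f"3%/3 mm gamma passing rate: {passing:.0%}")  # 100%
```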

  14. SU-E-T-452: Impact of Respiratory Motion On Robustly-Optimized Intensity-Modulated Proton Therapy to Treat Lung Cancers

    International Nuclear Information System (INIS)

    Liu, W; Schild, S; Bues, M; Liao, Z; Sahoo, N; Park, P; Li, H; Li, Y; Li, X; Shen, J; Anand, A; Dong, L; Zhu, X; Mohan, R

    2014-01-01

    Purpose: We compared conventionally optimized intensity-modulated proton therapy (IMPT) treatment plans against worst-case robustly optimized treatment plans for lung cancer. The comparison of the two IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. Methods: For each of the 9 lung cancer cases, two treatment plans were created, accounting for treatment uncertainties in two different ways: the first used the conventional method, delivering the prescribed dose to the planning target volume (PTV) that is geometrically expanded from the internal target volume (ITV); the second employed the worst-case robust optimization scheme that addressed set-up and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of the changes in patient anatomy due to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phases and the absolute differences between these phases. The mean plan evaluation metrics of the two groups were compared using two-sided paired t-tests. Results: Without respiratory motion considered, we affirmed that worst-case robust optimization is superior to PTV-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, robust optimization still leads to dose distributions more robust to respiratory motion for targets and comparable or even better plan optimality [D95% ITV: 96.6% versus 96.1% (p=0.26), D5% - D95% ITV: 10.0% versus 12.3% (p=0.082), D1% spinal cord: 31.8% versus 36.5% (p=0.035)]. Conclusion: Worst-case robust optimization led to superior solutions for lung IMPT. Despite the fact that robust optimization did not explicitly
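    The plan evaluation metrics quoted above (D95%, the homogeneity gap D5% - D95%) are dose-volume histogram quantities. A minimal sketch of their computation from a list of voxel doses follows; the dose values are invented and this is not vendor code.

```python
def dvh_dose_at_volume(doses, volume_pct):
    """Dx%: the minimum dose received by the hottest x% of the structure.

    E.g. D95% is the dose that at least 95% of voxels receive.
    """
    s = sorted(doses, reverse=True)  # hottest voxels first
    idx = max(0, int(round(volume_pct / 100.0 * len(s))) - 1)
    return s[idx]

# Hypothetical ITV voxel doses in Gy.
itv_doses = [60.1, 61.0, 59.5, 62.3, 60.8, 58.9, 61.7, 60.0, 59.8, 61.2]
d95 = dvh_dose_at_volume(itv_doses, 95)
d5 = dvh_dose_at_volume(itv_doses, 5)
print(f"D95% = {d95} Gy, homogeneity D5% - D95% = {round(d5 - d95, 2)} Gy")
```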

  15. A Metrics Approach for Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Cristian CIUREA

    2009-01-01

    Full Text Available This article presents different types of collaborative systems, their structure, and their classification. This paper defines the concept of a virtual campus as a collaborative system. It builds an architecture for a virtual campus oriented on collaborative training processes. It analyses the quality characteristics of collaborative systems and proposes techniques for metrics construction and validation in order to evaluate them. The article analyzes different ways to increase the efficiency and the performance level in collaborative banking systems.

  16. A Tale of Three District Energy Systems: Metrics and Future Opportunities

    Energy Technology Data Exchange (ETDEWEB)

    Pass, Rebecca Zarin; Wetter, Michael; Piette, Mary Ann

    2017-08-01

    Improving the sustainability of cities is crucial for meeting climate goals in the next several decades. One way this is being tackled is through innovation in district energy systems, which can take advantage of local resources and economies of scale to improve the performance of whole neighborhoods in ways infeasible for individual buildings. These systems vary in physical size, end use services, primary energy resources, and sophistication of control. They also vary enormously in their choice of optimization metrics while all under the umbrella-goal of improved sustainability. This paper explores the implications of choice of metric on district energy systems using three case studies: Stanford University, the University of California at Merced, and the Richmond Bay campus of the University of California at Berkeley. They each have a centralized authority to implement large-scale projects quickly, while maintaining data records, which makes them relatively effective at achieving their respective goals. Comparing the systems using several common energy metrics reveals significant differences in relative system merit. Additionally, a novel bidirectional heating and cooling system is presented. This system is highly energy-efficient, and while more analysis is required, may be the basis of the next generation of district energy systems.

  17. Comparison of continuous versus categorical tumor measurement-based metrics to predict overall survival in cancer treatment trials

    Science.gov (United States)

    An, Ming-Wen; Mandrekar, Sumithra J.; Branda, Megan E.; Hillman, Shauna L.; Adjei, Alex A.; Pitot, Henry; Goldberg, Richard M.; Sargent, Daniel J.

    2011-01-01

    Purpose The categorical definition of response assessed via the Response Evaluation Criteria in Solid Tumors has documented limitations. We sought to identify alternative metrics for tumor response that improve prediction of overall survival. Experimental Design Individual patient data from three North Central Cancer Treatment Group trials (N0026, n=117; N9741, n=1109; N9841, n=332) were used. Continuous metrics of tumor size based on longitudinal tumor measurements were considered in addition to a trichotomized response (TriTR: Response vs. Stable vs. Progression). Cox proportional hazards models, adjusted for treatment arm and baseline tumor burden, were used to assess the impact of the metrics on subsequent overall survival, using a landmark analysis approach at 12, 16, and 24 weeks post-baseline. Model discrimination was evaluated using the concordance (c) index. Results The overall best response rates for the three trials were 26%, 45%, and 25%, respectively. While nearly all metrics were statistically significantly associated with overall survival at the different landmark time points, the c-indices ranged from 0.59 to 0.65 for the traditional response metrics, from 0.60 to 0.66 for the continuous metrics, and from 0.64 to 0.69 for the TriTR metrics. The c-indices for TriTR at 12 weeks were comparable to those at 16 and 24 weeks. Conclusions Continuous tumor-measurement-based metrics provided no predictive improvement over traditional response-based metrics or TriTR; TriTR at 12 weeks had better predictive ability than best TriTR or confirmed response. If confirmed, TriTR represents a promising endpoint for future Phase II trials. PMID:21880789
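    The concordance (c) index used above measures how often a model ranks patients correctly by outcome. A sketch of the basic idea follows, ignoring censoring for simplicity (real survival analyses must handle censored follow-up); the risk scores and survival times are invented.

```python
from itertools import combinations

def c_index(risk_scores, survival_times):
    """Concordance index sketch, no censoring.

    Fraction of comparable patient pairs in which the patient with the
    higher risk score dies earlier; tied risk scores contribute 0.5.
    """
    concordant, usable = 0.0, 0
    for i, j in combinations(range(len(risk_scores)), 2):
        if survival_times[i] == survival_times[j]:
            continue  # not a comparable pair
        usable += 1
        if risk_scores[i] == risk_scores[j]:
            concordant += 0.5
            continue
        hi_risk = i if risk_scores[i] > risk_scores[j] else j
        early = i if survival_times[i] < survival_times[j] else j
        if hi_risk == early:
            concordant += 1.0
    return concordant / usable

risk = [0.9, 0.7, 0.4, 0.2]      # e.g. tumor-metric-based risk scores
months = [5.0, 9.0, 14.0, 30.0]  # overall survival
print(c_index(risk, months))  # prints 1.0 (perfectly concordant)
```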

  18. Scholarly metrics under the microscope from citation analysis to academic auditing

    CERN Document Server

    Sugimoto, Cassidy R

    2015-01-01

    Interest in bibliometrics (the quantitative analysis of publications, authors, bibliographic references, and related concepts) has never been greater, as universities, research councils, national governments, and corporations seek to identify robust indicators of research effectiveness. In Scholarly Metrics Under the Microscope, editors Blaise Cronin and Cassidy R. Sugimoto bring together and expertly annotate a wealth of previously published papers, harvested from a wide range of journals and disciplines, that provide critical commentary on the use of metrics, both established and emerging, to assess the quality of scholarship and the impact of research. The expansive overview and analysis presented in this remarkable volume will be welcomed by any scholar or researcher who seeks a deeper understanding of the role and significance of performance metrics in higher education, research evaluation, and science policy.

  19. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
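    The debate over the geometric mean can be made concrete with a small example (the query times below are invented): the geometric mean dampens a single pathological query far more than the arithmetic mean does, which is precisely why the choice of aggregate shaped benchmark behaviour.

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # exp of the mean log; equivalent to the nth root of the product
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Query times (s) for a decision-support run with one outlier query.
times = [10.0, 10.0, 10.0, 10.0, 1000.0]
print(round(arithmetic_mean(times), 1))  # 208.0
print(round(geometric_mean(times), 1))   # 25.1
```

A vendor optimising for the geometric mean can afford one catastrophically slow query; under the arithmetic mean, that same query dominates the score.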

  20. Partial rectangular metric spaces and fixed point theorems.

    Science.gov (United States)

    Shukla, Satish

    2014-01-01

    The purpose of this paper is to introduce the concept of partial rectangular metric spaces as a generalization of rectangular metric and partial metric spaces. Some properties of partial rectangular metric spaces and some fixed point results for quasitype contraction in partial rectangular metric spaces are proved. Some examples are given to illustrate the observed results.
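    For reference, the defining axioms of a partial rectangular metric can be sketched as follows (reconstructed from the standard literature on partial and Branciari rectangular metrics, not quoted from the paper; (P4) is the rectangular analogue of the partial-metric triangle inequality, taken over intermediate points u, v distinct from each other and from x, y):

```latex
% p : X \times X \to [0,\infty), for all x, y \in X and all distinct
% u, v \in X \setminus \{x, y\}:
\begin{align*}
&\text{(P1)}\quad x = y \iff p(x,x) = p(x,y) = p(y,y), \\
&\text{(P2)}\quad p(x,x) \le p(x,y), \\
&\text{(P3)}\quad p(x,y) = p(y,x), \\
&\text{(P4)}\quad p(x,y) \le p(x,u) + p(u,v) + p(v,y) - p(u,u) - p(v,v).
\end{align*}
% Setting p(x,x) = 0 for all x recovers a rectangular (Branciari) metric;
% replacing (P4) with the three-point inequality recovers a partial metric.
```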

  1. A Kerr-NUT metric

    International Nuclear Information System (INIS)

    Vaidya, P.C.; Patel, L.K.; Bhatt, P.V.

    1976-01-01

    Using Galilean time and retarded distance as coordinates, the usual Kerr metric is expressed in a form similar to the Newman-Unti-Tamburino (NUT) metric. The combined Kerr-NUT metric is then investigated. In addition to the Kerr and NUT solutions of Einstein's equations, three other types of solutions are derived. These are (i) the radiating Kerr solution, (ii) the radiating NUT solution satisfying R_ik = σ ξ_i ξ_k, ξ_i ξ^i = 0, and (iii) the associated Kerr solution satisfying R_ik = 0. Solution (i) is distinct from and simpler than the one reported earlier by Vaidya and Patel (Phys. Rev. D7:3590 (1973)). Solutions (ii) and (iii) gave line elements which have the axis of symmetry as a singular line. (author)

  2. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2014-01-01

    Full Text Available Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.
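    The hybridisation idea above can be sketched in miniature: a population of candidate centroid seedings competes on inertia, and the winner initialises Lloyd's iterations, avoiding the bad random starts that trap plain K-means. The toy "best-of-population" search below is our simplified stand-in for the bat/cuckoo/firefly mechanics, not the paper's algorithms.

```python
import random

def inertia(points, centroids):
    """Sum of squared distances from each point to its nearest centroid."""
    return sum(min(sum((p - c) ** 2 for p, c in zip(pt, ct))
                   for ct in centroids)
               for pt in points)

def assign(points, centroids):
    return [min(range(len(centroids)),
                key=lambda k: sum((p - c) ** 2
                                  for p, c in zip(pt, centroids[k])))
            for pt in points]

def kmeans_seeded(points, k, agents=10, iters=20, seed=1):
    """K-means whose initial centroids are picked by a population search."""
    rng = random.Random(seed)
    best = min((rng.sample(points, k) for _ in range(agents)),
               key=lambda c: inertia(points, c))
    centroids = [list(c) for c in best]
    for _ in range(iters):  # standard Lloyd refinement
        labels = assign(points, centroids)
        for j in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = [sum(col) / len(members)
                                for col in zip(*members)]
    return centroids, assign(points, centroids)

pts = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3),
       (5.0, 5.0), (5.2, 5.1), (4.9, 5.3)]
cents, labels = kmeans_seeded(pts, k=2)
print(labels)  # first three points share one label, last three the other
```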

  3. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    Science.gov (United States)

    Deb, Suash; Yang, Xin-She

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario. PMID:25202730

  4. Estimating fish swimming metrics and metabolic rates with accelerometers: the influence of sampling frequency.

    Science.gov (United States)

    Brownscombe, J W; Lennox, R J; Danylchuk, A J; Cooke, S J

    2018-06-21

    Accelerometry is growing in popularity for remotely measuring fish swimming metrics, but appropriate sampling frequencies for accurately measuring these metrics are not well studied. This research examined the influence of sampling frequency (1-25 Hz) with tri-axial accelerometer biologgers on estimates of overall dynamic body acceleration (ODBA), tail-beat frequency, swimming speed and metabolic rate of bonefish Albula vulpes in a swim-tunnel respirometer and free-swimming in a wetland mesocosm. In the swim tunnel, sampling frequencies of ≥ 5 Hz were sufficient to establish strong relationships between ODBA, swimming speed and metabolic rate. However, in free-swimming bonefish, estimates of metabolic rate were more variable below 10 Hz. Sampling frequencies should be at least twice the maximum tail-beat frequency to estimate this metric effectively, which is generally higher than those required to estimate ODBA, swimming speed and metabolic rate. While optimal sampling frequency probably varies among species due to tail-beat frequency and swimming style, this study provides a reference point with a medium body-sized sub-carangiform teleost fish, enabling researchers to measure these metrics effectively and maximize study duration. This article is protected by copyright. All rights reserved.
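    The twice-the-tail-beat-frequency rule above is the usual Nyquist argument. ODBA itself is computed by removing the static (postural/gravity) component from each axis and summing the absolute dynamic remainders; the sketch below follows that standard recipe, with the 2 s smoothing window being a typical choice of ours, not the paper's.

```python
def odba(ax, ay, az, fs, window_s=2.0):
    """Overall dynamic body acceleration from tri-axial samples (in g).

    Static acceleration is estimated per axis with a centred running
    mean over `window_s` seconds; ODBA is the mean per-sample sum of
    absolute dynamic components.
    """
    n = max(1, int(window_s * fs))

    def running_mean(xs):
        out = []
        for i in range(len(xs)):
            lo = max(0, i - n // 2)
            hi = min(len(xs), i + n // 2 + 1)
            out.append(sum(xs[lo:hi]) / (hi - lo))
        return out

    dyn = []
    for axis in (ax, ay, az):
        static = running_mean(axis)
        dyn.append([abs(v - s) for v, s in zip(axis, static)])
    per_sample = [x + y + z for x, y, z in zip(*dyn)]
    return sum(per_sample) / len(per_sample)

# A motionless logger (gravity entirely on z) has ODBA near zero.
fs = 10  # Hz
ax = [0.0] * 50; ay = [0.0] * 50; az = [1.0] * 50
print(odba(ax, ay, az, fs) < 1e-9)  # True
```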

  5. Background metric in supergravity theories

    International Nuclear Information System (INIS)

    Yoneya, T.

    1978-01-01

    In supergravity theories, we investigate the conformal anomaly of the path-integral determinant and the problem of fermion zero modes in the presence of a nontrivial background metric. Except in SO(3)-invariant supergravity, there are nonvanishing conformal anomalies. As a consequence, amplitudes around the nontrivial background metric contain unpredictable arbitrariness. The fermion zero modes which are explicitly constructed for the Euclidean Schwarzschild metric are interpreted as an indication of the supersymmetric multiplet structure of a black hole. The degree of degeneracy of a black hole is 2^(4n) in SO(n) supergravity.

  6. Performance indices and evaluation of algorithms in building energy efficient design optimization

    International Nuclear Information System (INIS)

    Si, Binghui; Tian, Zhichao; Jin, Xing; Zhou, Xin; Tang, Peng; Shi, Xing

    2016-01-01

    Building energy efficient design optimization is an emerging technique that is increasingly being used to design buildings with better overall performance and a particular emphasis on energy efficiency. To achieve building energy efficient design optimization, algorithms are vital to generate new designs and thus drive the design optimization process. Therefore, the performance of algorithms is crucial to achieving effective energy efficient design techniques. This study evaluates algorithms used for building energy efficient design optimization. A set of performance indices, namely, stability, robustness, validity, speed, coverage, and locality, is proposed to evaluate the overall performance of algorithms. A benchmark building and a design optimization problem are also developed. Hooke–Jeeves algorithm, Multi-Objective Genetic Algorithm II, and Multi-Objective Particle Swarm Optimization algorithm are evaluated by using the proposed performance indices and benchmark design problem. Results indicate that no algorithm performs best in all six areas. Therefore, when facing an energy efficient design problem, the algorithm must be carefully selected based on the nature of the problem and the performance indices that matter the most. - Highlights: • Six indices of algorithm performance in building energy optimization are developed. • For each index, its concept is defined and the calculation formulas are proposed. • A benchmark building and benchmark energy efficient design problem are proposed. • The performance of three selected algorithms are evaluated.

  7. National Quality Forum Colon Cancer Quality Metric Performance: How Are Hospitals Measuring Up?

    Science.gov (United States)

    Mason, Meredith C; Chang, George J; Petersen, Laura A; Sada, Yvonne H; Tran Cao, Hop S; Chai, Christy; Berger, David H; Massarweh, Nader N

    2017-12-01

    To evaluate the impact of care at high-performing hospitals on the National Quality Forum (NQF) colon cancer metrics. The NQF endorses evaluating ≥12 lymph nodes (LNs), adjuvant chemotherapy (AC) for stage III patients, and AC within 4 months of diagnosis as colon cancer quality indicators. Data on hospital-level metric performance and the association with survival are unclear. Retrospective cohort study of 218,186 patients with resected stage I to III colon cancer in the National Cancer Data Base (2004-2012). High-performing hospitals (>75% achievement) were identified by the proportion of patients achieving each measure. The association between hospital performance and survival was evaluated using Cox shared frailty modeling. Only hospital LN performance improved (15.8% in 2004 vs 80.7% in 2012; trend test). Risk of death decreased in a stepwise fashion with the number of metrics on which a hospital performed well [0 metrics, reference; 1, hazard ratio (HR) 0.96 (0.89-1.03); 2, HR 0.92 (0.87-0.98); 3, HR 0.85 (0.80-0.90); 2 vs 1, HR 0.96 (0.91-1.01); 3 vs 1, HR 0.89 (0.84-0.93); 3 vs 2, HR 0.95 (0.89-0.95)]. Performance on metrics in combination was associated with lower risk of death [LN + AC, HR 0.86 (0.78-0.95); AC + timely AC, HR 0.92 (0.87-0.98); LN + AC + timely AC, HR 0.85 (0.80-0.90)], whereas individual measures were not [LN, HR 0.95 (0.88-1.04); AC, HR 0.95 (0.87-1.05)]. Less than half of hospitals perform well on these NQF colon cancer metrics concurrently, and high performance on individual measures is not associated with improved survival. Quality improvement efforts should shift focus from individual measures to defining composite measures encompassing the overall multimodal care pathway and capturing successful transitions from one care modality to another.

  8. An approach for multi-objective optimization of vehicle suspension system

    Science.gov (United States)

    Koulocheris, D.; Papaioannou, G.; Christodoulou, D.

    2017-10-01

    In this paper, a half-car model with nonlinear suspension systems is selected in order to study the vertical vibrations and optimize its suspension system with respect to ride comfort and road holding. A road bump was used as the road profile. First, the optimization problem is solved with the use of Genetic Algorithms with respect to 6 optimization targets. Then the k - ɛ optimality method was implemented to locate one optimum solution. Furthermore, an alternative approach is presented in this work: the previous optimization targets are separated into main and supplementary ones, depending on their importance in the analysis. The supplementary targets are not crucial to the optimization, but they could enhance the main objectives. Thus, the problem was solved again using Genetic Algorithms with respect to the 3 main targets of the optimization. Having obtained the Pareto set of solutions, the k - ɛ optimality method was implemented for the 3 main targets and the supplementary ones, evaluated by the simulation of the vehicle model. The results of both cases are presented and discussed in terms of convergence of the optimization and computational time. The optimum solutions acquired from both cases are compared based on performance metrics as well.
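    The Pareto filtering step that precedes selecting a single compromise solution can be sketched as below (objective vectors are invented; the k - ɛ selection itself is not reproduced here):

```python
def dominates(a, b):
    """a dominates b (minimisation): no worse in every objective and
    strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors,
    e.g. (ride discomfort, dynamic tyre load) for candidate suspension
    settings.
    """
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
print(pareto_front(candidates))  # [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0)]
```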

  9. Balanced metrics for vector bundles and polarised manifolds

    DEFF Research Database (Denmark)

    Garcia Fernandez, Mario; Ross, Julius

    2012-01-01

    We consider a notion of balanced metrics for triples (X, L, E) which depend on a parameter α, where X is a smooth complex manifold with an ample line bundle L and E is a holomorphic vector bundle over X. For generic choice of α, we prove that the limit of a convergent sequence of balanced metrics leads to a Hermitian-Einstein metric on E and a constant scalar curvature Kähler metric in c_1(L). For special values of α, limits of balanced metrics are solutions of a system of coupled equations relating a Hermitian-Einstein metric on E and a Kähler metric in c_1(L). For this, we compute the top two...

  10. Scaling-Laws of Flow Entropy with Topological Metrics of Water Distribution Networks

    OpenAIRE

    Giovanni Francesco Santonastaso; Armando Di Nardo; Michele Di Natale; Carlo Giudicianni; Roberto Greco

    2018-01-01

    Robustness of water distribution networks is related to their connectivity and topological structure, which also affect their reliability. Flow entropy, based on Shannon’s informational entropy, has been proposed as a measure of network redundancy and adopted as a proxy of reliability in optimal network design procedures. In this paper, the scaling properties of flow entropy of water distribution networks with their size and other topological metrics are studied. To such aim, flow entropy, ma...

  11. Cat swarm optimization based evolutionary framework for multi document summarization

    Science.gov (United States)

    Rautray, Rasmita; Balabantaray, Rakesh Chandra

    2017-07-01

    Today, the World Wide Web has brought us an enormous quantity of on-line information. As a result, extracting relevant information from massive data has become a challenging issue. In the recent past, text summarization has been recognized as one solution for extracting useful information from vast numbers of documents. Based on the number of documents considered for summarization, it is categorized as single-document or multi-document summarization. Multi-document summarization is more challenging than single-document summarization, as an accurate summary must be found across multiple documents. Hence, in this study, a novel Cat Swarm Optimization (CSO) based multi-document summarizer is proposed to address the problem of multi-document summarization. The proposed CSO based model is also compared with two other nature-inspired summarizers: a Harmony Search (HS) based summarizer and a Particle Swarm Optimization (PSO) based summarizer. With respect to the benchmark Document Understanding Conference (DUC) datasets, the performance of all algorithms is compared in terms of different evaluation metrics, such as ROUGE score, F score, sensitivity, positive predictive value, summary accuracy, inter-sentence similarity, and a readability metric, to validate the non-redundancy, cohesiveness, and readability of the summaries, respectively. The experimental analysis clearly reveals that the proposed approach outperforms the other summarizers included in the study.
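    Of the evaluation metrics listed, ROUGE-1 is the simplest to illustrate: unigram overlap between a candidate summary and a reference, with clipped counts. The sketch below is a minimal version of that idea, not the official ROUGE toolkit (which also handles stemming, stopwords, and n-grams).

```python
from collections import Counter

def rouge1(candidate, reference):
    """ROUGE-1 recall, precision, and F1 on clipped unigram overlap."""
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum(min(c[w], r[w]) for w in c)
    recall = overlap / sum(r.values())
    precision = overlap / sum(c.values())
    f1 = 0.0 if overlap == 0 else 2 * precision * recall / (precision + recall)
    return recall, precision, f1

reference = "the cat sat on the mat"
candidate = "the cat lay on the mat"
recall, precision, f1 = rouge1(candidate, reference)
print(round(recall, 3), round(precision, 3), round(f1, 3))
```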

  12. A systematic approach towards the objective evaluation of low-contrast performance in MDCT: Combination of a full-reference image fidelity metric and a software phantom

    International Nuclear Information System (INIS)

    Falck, Christian von; Rodt, Thomas; Waldeck, Stephan; Hartung, Dagmar; Meyer, Bernhard; Wacker, Frank; Shin, Hoen-oh

    2012-01-01

    Objectives: To assess the feasibility of an objective approach for the evaluation of low-contrast detectability in multidetector computed-tomography (MDCT) by combining a virtual phantom containing simulated lesions with an image quality metric. Materials and methods: A low-contrast phantom containing hypodense spheric lesions (−20 HU) was scanned on a 64-slice MDCT scanner at 4 different dose levels (25, 50, 100, 200 mAs). In addition, virtual round hypodense low-contrast lesions (20 HU object contrast) based on real CT data were inserted into the lesion-free section of the datasets. The sliding-thin-slab algorithm was applied to the image data with an increasing slice-thickness from 1 to 15 slices. For each dataset containing simulated lesions a lesion-free counterpart was reconstructed and post-processed in the same manner. The low-contrast performance of all datasets containing virtual lesions was determined using a full-reference image quality metric (modified multiscale structural similarity index, MS-SSIM*). The results were validated against a reader-study of the real lesions. Results: For all dose levels and lesion sizes there was no statistically significant difference between the low-contrast performance as determined by the image quality metric when compared to the reader study (p < 0.05). The intraclass correlation coefficient was 0.72, 0.82, 0.90 and 0.84 for lesion diameters of 4 mm, 5 mm, 8 mm and 10 mm, respectively. The use of the sliding-thin-slab algorithm improves lesion detectability by a factor ranging from 1.15 to 2.69 when compared with the original axial slice (0.625 mm). Conclusion: The combination of a virtual phantom and a full-reference image quality metric enables a systematic, automated and objective evaluation of low-contrast detectability in MDCT datasets and correlates well with the judgment of human readers.

  13. Extending cosmology: the metric approach

    OpenAIRE

    Mendoza, S.

    2012-01-01

    Comment: 2012, Extending Cosmology: The Metric Approach, Open Questions in Cosmology; Review article for an Intech "Open questions in cosmology" book chapter (19 pages, 3 figures). Available from: http://www.intechopen.com/books/open-questions-in-cosmology/extending-cosmology-the-metric-approach

  14. Metrics, Media and Advertisers: Discussing Relationship

    Directory of Open Access Journals (Sweden)

    Marco Aurelio de Souza Rodrigues

    2014-11-01

    Full Text Available This study investigates how Brazilian advertisers are adapting to new media and their attention metrics. In-depth interviews were conducted with advertisers in 2009 and 2011. In 2009, new media and their metrics were celebrated as innovations that would increase the overall efficiency of advertising campaigns. By 2011, this perception had changed: the profusion of metrics for new media, once seen as an advantage, had started to compromise their ease of use and adoption. Among its findings, this study argues that there is an opportunity for media groups willing to shift from a product-focused strategy towards a customer-centric one, through the creation of new, simple and integrative metrics.

  15. Measuring Information Security: Guidelines to Build Metrics

    Science.gov (United States)

    von Faber, Eberhard

    Measuring information security is a genuine interest of security managers. With metrics they can develop their security organization's visibility and standing within the enterprise or public authority as a whole. Organizations using information technology need to use security metrics. Despite the clear demand and advantages, security metrics are often poorly developed, or ineffective parameters are collected and analysed. This paper describes best practices for the development of security metrics. Attention is first drawn to motivation, showing both requirements and benefits. The main body of the paper lists things which need to be observed (characteristics of metrics), things which can be measured (how measurements can be conducted) and steps for the development and implementation of metrics (procedures and planning). Analysis and communication are also key when using security metrics. Examples are given in order to develop a better understanding. The author aims to resume, continue and develop the discussion about a topic which is, or increasingly will be, a critical success factor for security managers in larger organizations.

  16. Protein Structure Refinement by Optimization

    DEFF Research Database (Denmark)

    Carlsen, Martin

    on whether the three-dimensional structure of a homologous sequence is known. Whether or not a protein model can be used for industrial purposes depends on the quality of the predicted structure. A model can be used to design a drug when the quality is high. The overall goal of this project is to assess...... that correlates maximally to a native-decoy distance. The main contribution of this thesis is methods developed for analyzing the performance of metrically trained knowledge-based potentials and for optimizing their performance while making them less dependent on the decoy set used to define them. We focus...... being at-least a local minimum of the potential. To address how far the current functional form of the potential is from an ideal potential we present two methods for finding the optimal metrically trained potential that simultaneous has a number of native structures as a local minimum. Our results...

  17. Comparative Simulation Study of Glucose Control Methods Designed for Use in the Intensive Care Unit Setting via a Novel Controller Scoring Metric.

    Science.gov (United States)

    DeJournett, Jeremy; DeJournett, Leon

    2017-11-01

    Effective glucose control in the intensive care unit (ICU) setting has the potential to decrease morbidity and mortality rates and thereby decrease health care expenditures. To evaluate what constitutes effective glucose control, typically several metrics are reported, including time in range, time in mild and severe hypoglycemia, coefficient of variation, and others. To date, there is no one metric that combines all of these individual metrics to give a number indicative of overall performance. We proposed a composite metric that combines 5 commonly reported metrics, and we used this composite metric to compare 6 glucose controllers. We evaluated the following controllers: Ideal Medical Technologies (IMT) artificial-intelligence-based controller, Yale protocol, Glucommander, Wintergerst et al PID controller, GRIP, and NICE-SUGAR. We evaluated each controller across 80 simulated patients, 4 clinically relevant exogenous dextrose infusions, and one nonclinical infusion as a test of the controller's ability to handle difficult situations. This gave a total of 2400 5-day simulations, and 585,604 individual glucose values for analysis. We used a random walk sensor error model that gave a 10% MARD. For each controller, we calculated severe hypoglycemia (<40 mg/dL), hyperglycemia (>140 mg/dL), and coefficient of variation (CV), as well as our novel controller metric. For the controllers tested, we achieved the following median values for our novel controller scoring metric: IMT: 88.1, YALE: 46.7, GLUC: 47.2, PID: 50, GRIP: 48.2, NICE: 46.4. The novel scoring metric employed in this study shows promise as a means for evaluating new and existing ICU-based glucose controllers, and it could be used in the future to compare results of glucose control studies in critical care. The IMT AI-based glucose controller demonstrated the most consistent performance results based on this new metric.
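
    The abstract does not state the composite formula, so the following is only an illustrative sketch of how several commonly reported glucose-control metrics could be folded into a single 0-100 score; the weights and the clipping to [0, 100] are assumptions for illustration, not the study's definition:

```python
def controller_score(time_in_range, severe_hypo, mild_hypo, hyper, cv):
    """Illustrative composite glucose-controller score (all inputs in percent).

    Rewards time in range and penalizes hypo/hyperglycemia and glycemic
    variability, with severe hypoglycemia weighted most heavily. The weights
    below are assumptions, not the published metric.
    """
    score = (time_in_range
             - 4.0 * severe_hypo   # most dangerous: heaviest penalty
             - 2.0 * mild_hypo
             - 1.0 * hyper
             - 0.5 * cv)           # coefficient of variation
    return max(0.0, min(100.0, score))

print(controller_score(time_in_range=90, severe_hypo=0, mild_hypo=2, hyper=5, cv=20))
```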

  18. The use of the kurtosis metric in the evaluation of occupational hearing loss in workers in China: Implications for hearing risk assessment

    Directory of Open Access Journals (Sweden)

    Robert I Davis

    2012-01-01

    Full Text Available This study examined: (1) the value of using the statistical metric, kurtosis [β(t)], along with an energy metric to determine the hazard to hearing from high level industrial noise environments, and (2) the accuracy of the International Standard Organization (ISO-1999:1990) model for median noise-induced permanent threshold shift (NIPTS) estimates with actual recent epidemiological data obtained on 240 highly screened workers exposed to high-level industrial noise in China. A cross-sectional approach was used in this study. Shift-long temporal waveforms of the noise that workers were exposed to for evaluation of noise exposures and audiometric threshold measures were obtained on all selected subjects. The subjects were exposed to only one occupational noise exposure without the use of hearing protection devices. The results suggest that: (1) the kurtosis metric is an important variable in determining the hazards to hearing posed by a high-level industrial noise environment for hearing conservation purposes, i.e., the kurtosis differentiated between the hazardous effects produced by Gaussian and non-Gaussian noise environments, (2) the ISO-1999 predictive model does not accurately estimate the degree of median NIPTS incurred to high level kurtosis industrial noise, and (3) the inherent large variability in NIPTS among subjects emphasize the need to develop and analyze a larger database of workers with well-documented exposures to better understand the effect of kurtosis on NIPTS incurred from high level industrial noise exposures. A better understanding of the role of the kurtosis metric may lead to its incorporation into a new generation of more predictive hearing risk assessment for occupational noise exposure.

  19. The use of the kurtosis metric in the evaluation of occupational hearing loss in workers in China: implications for hearing risk assessment.

    Science.gov (United States)

    Davis, Robert I; Qiu, Wei; Heyer, Nicholas J; Zhao, Yiming; Qiuling Yang, M S; Li, Nan; Tao, Liyuan; Zhu, Liangliang; Zeng, Lin; Yao, Daohua

    2012-01-01

    This study examined: (1) the value of using the statistical metric, kurtosis [β(t)], along with an energy metric to determine the hazard to hearing from high level industrial noise environments, and (2) the accuracy of the International Standard Organization (ISO-1999:1990) model for median noise-induced permanent threshold shift (NIPTS) estimates with actual recent epidemiological data obtained on 240 highly screened workers exposed to high-level industrial noise in China. A cross-sectional approach was used in this study. Shift-long temporal waveforms of the noise that workers were exposed to for evaluation of noise exposures and audiometric threshold measures were obtained on all selected subjects. The subjects were exposed to only one occupational noise exposure without the use of hearing protection devices. The results suggest that: (1) the kurtosis metric is an important variable in determining the hazards to hearing posed by a high-level industrial noise environment for hearing conservation purposes, i.e., the kurtosis differentiated between the hazardous effects produced by Gaussian and non-Gaussian noise environments, (2) the ISO-1999 predictive model does not accurately estimate the degree of median NIPTS incurred to high level kurtosis industrial noise, and (3) the inherent large variability in NIPTS among subjects emphasize the need to develop and analyze a larger database of workers with well-documented exposures to better understand the effect of kurtosis on NIPTS incurred from high level industrial noise exposures. A better understanding of the role of the kurtosis metric may lead to its incorporation into a new generation of more predictive hearing risk assessment for occupational noise exposure.
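
    The kurtosis metric β(t) used in these studies is the standard fourth standardized moment, which is approximately 3 for Gaussian noise and grows sharply for impulsive, non-Gaussian noise; a brief numerical sketch (the shift-long windowed averaging used in the papers is omitted):

```python
import numpy as np

def kurtosis(x: np.ndarray) -> float:
    """Sample kurtosis beta = E[(x - mu)^4] / sigma^4 (Gaussian -> ~3)."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma2 = x.var()
    return float(np.mean((x - mu) ** 4) / sigma2 ** 2)

rng = np.random.default_rng(0)
gaussian = rng.normal(size=200_000)                    # steady Gaussian noise
impulsive = gaussian * (rng.random(200_000) < 0.01)    # rare high-level impacts
print(kurtosis(gaussian), kurtosis(impulsive))         # ~3 vs. >> 3
```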

  20. Pareto-Ranking Based Quantum-Behaved Particle Swarm Optimization for Multiobjective Optimization

    Directory of Open Access Journals (Sweden)

    Na Tian

    2015-01-01

    Full Text Available A study on Pareto-ranking based quantum-behaved particle swarm optimization (QPSO) for multiobjective optimization problems is presented in this paper. During the iteration, an external repository is maintained to remember the nondominated solutions, from which the global best position is chosen. A comparison between different elitist selection strategies (preference order, sigma value, and random selection) is performed on four benchmark functions and two metrics. The results demonstrate that QPSO with preference order has performance comparable to that of the sigma-value strategy, depending on the number of objectives. Finally, QPSO with sigma value is applied to solve multiobjective flexible job-shop scheduling problems.
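
    The external repository of nondominated solutions described above can be sketched with a plain Pareto-dominance test (minimization is assumed; the QPSO position update itself is omitted):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert candidate and keep only nondominated solutions."""
    if any(dominates(a, candidate) for a in archive):
        return archive                                   # candidate dominated: discard
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

archive = []
for point in [(3, 4), (2, 5), (1, 1), (2, 2)]:           # two-objective values
    archive = update_archive(archive, point)
print(archive)  # [(1, 1)]
```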

  1. Automatic generation of 3D statistical shape models with optimal landmark distributions.

    Science.gov (United States)

    Heimann, T; Wolf, I; Meinzer, H-P

    2007-01-01

    To point out the problem of non-uniform landmark placement in statistical shape modeling, to present an improved method for generating landmarks in the 3D case and to propose an unbiased evaluation metric to determine model quality. Our approach minimizes a cost function based on the minimum description length (MDL) of the shape model to optimize landmark correspondences over the training set. In addition to the standard technique, we employ an extended remeshing method to change the landmark distribution without losing correspondences, thus ensuring a uniform distribution over all training samples. To break the dependency of the established evaluation measures generalization and specificity from the landmark distribution, we change the internal metric from landmark distance to volumetric overlap. Redistributing landmarks to an equally spaced distribution during the model construction phase improves the quality of the resulting models significantly if the shapes feature prominent bulges or other complex geometry. The distribution of landmarks on the training shapes is -- beyond the correspondence issue -- a crucial point in model construction.
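
    The volumetric-overlap measure that the authors switch to internally is typically the Dice coefficient; a minimal sketch under that assumption:

```python
import numpy as np

def dice_overlap(a: np.ndarray, b: np.ndarray) -> float:
    """Volumetric (Dice) overlap between two binary segmentations."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return float(2.0 * np.logical_and(a, b).sum() / denom) if denom else 1.0

# two 4x4x4 cubes offset by one voxel in each direction
a = np.zeros((10, 10, 10), dtype=bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((10, 10, 10), dtype=bool); b[3:7, 3:7, 3:7] = True
print(dice_overlap(a, b))  # 2*27/128 = 0.421875
```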

  2. Liver fibrosis: in vivo evaluation using intravoxel incoherent motion-derived histogram metrics with histopathologic findings at 3.0 T.

    Science.gov (United States)

    Hu, Fubi; Yang, Ru; Huang, Zixing; Wang, Min; Zhang, Hanmei; Yan, Xu; Song, Bin

    2017-12-01

    To retrospectively determine the feasibility of intravoxel incoherent motion (IVIM) imaging based on histogram analysis for the staging of liver fibrosis (LF), using histopathologic findings as the reference standard. 56 consecutive patients (14 men, 42 women; age range, 15-76 years) with chronic liver diseases (CLDs) were studied using IVIM-DWI with 9 b-values (0, 25, 50, 75, 100, 150, 200, 500, 800 s/mm²) at 3.0 T. Fibrosis stage was evaluated using the METAVIR scoring system. Histogram metrics including mean, standard deviation (Std), skewness, kurtosis, minimum (Min), maximum (Max), range, interquartile (Iq) range, and percentiles (10, 25, 50, 75, 90th) were extracted from apparent diffusion coefficient (ADC), true diffusion coefficient (D), pseudo-diffusion coefficient (D*), and perfusion fraction (f) maps. All histogram metrics among different fibrosis groups were compared using one-way analysis of variance or the nonparametric Kruskal-Wallis test. For significant parameters, receiver operating characteristic (ROC) curve analyses were further performed for the staging of LF. Based on their METAVIR stage, the 56 patients were reclassified into three groups as follows: F0-1 group (n = 25), F2-3 group (n = 21), and F4 group (n = 10). The mean, Iq range and percentiles (50, 75, and 90th) of the D* maps showed significant differences between the groups (all P < 0.05), whereas the histogram metrics of ADC, D, and f maps demonstrated no significant difference among the groups (all P > 0.05). Histogram analysis of the D* map derived from IVIM can be used to stage liver fibrosis in patients with CLDs and provide more quantitative information beyond the mean value.
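
    The first-order histogram metrics listed above can be computed directly from a parameter map with NumPy; a generic sketch (nothing here is specific to the authors' software):

```python
import numpy as np

def histogram_metrics(param_map: np.ndarray) -> dict:
    """First-order histogram metrics of an IVIM parameter map (e.g. D*)."""
    v = np.asarray(param_map, dtype=float).ravel()
    v = v[np.isfinite(v)]                                   # drop NaN/Inf voxels
    p10, p25, p50, p75, p90 = np.percentile(v, [10, 25, 50, 75, 90])
    mu, sd = v.mean(), v.std()
    return {
        "mean": mu, "std": sd,
        "min": v.min(), "max": v.max(), "range": v.max() - v.min(),
        "iq_range": p75 - p25,
        "p10": p10, "p25": p25, "p50": p50, "p75": p75, "p90": p90,
        "skewness": float(np.mean((v - mu) ** 3) / sd ** 3),
        "kurtosis": float(np.mean((v - mu) ** 4) / sd ** 4),
    }

print(histogram_metrics(np.arange(1, 101))["p50"])  # 50.5
```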

  3. Investigation of in-vehicle speech intelligibility metrics for normal hearing and hearing impaired listeners

    Science.gov (United States)

    Samardzic, Nikolina

    The effectiveness of in-vehicle speech communication can be a good indicator of the perception of the overall vehicle quality and customer satisfaction. Currently available speech intelligibility metrics do not account in their procedures for essential parameters needed for a complete and accurate evaluation of in-vehicle speech intelligibility. These include the directivity and the distance of the talker with respect to the listener, binaural listening, hearing profile of the listener, vocal effort, and multisensory hearing. In the first part of this research the effectiveness of in-vehicle application of these metrics is investigated in a series of studies to reveal their shortcomings, including a wide range of scores resulting from each of the metrics for a given measurement configuration and vehicle operating condition. In addition, the nature of a possible correlation between the scores obtained from each metric is unknown. The metrics and the subjective perception of speech intelligibility using, for example, the same speech material have not been compared in literature. As a result, in the second part of this research, an alternative method for speech intelligibility evaluation is proposed for use in the automotive industry by utilizing a virtual reality driving environment for ultimately setting targets, including the associated statistical variability, for future in-vehicle speech intelligibility evaluation. The Speech Intelligibility Index (SII) was evaluated at the sentence Speech Receptions Threshold (sSRT) for various listening situations and hearing profiles using acoustic perception jury testing and a variety of talker and listener configurations and background noise. In addition, the effect of individual sources and transfer paths of sound in an operating vehicle to the vehicle interior sound, specifically their effect on speech intelligibility was quantified, in the framework of the newly developed speech intelligibility evaluation method. 

  4. Integrated Metrics for Improving the Life Cycle Approach to Assessing Product System Sustainability

    Directory of Open Access Journals (Sweden)

    Wesley Ingwersen

    2014-03-01

    Full Text Available Life cycle approaches are critical for identifying and reducing environmental burdens of products. While these methods can indicate potential environmental impacts of a product, current Life Cycle Assessment (LCA) methods fail to integrate the multiple impacts of a system into unified measures of social, economic or environmental performance related to sustainability. Integrated metrics that combine multiple aspects of system performance based on a common scientific or economic principle have proven to be valuable for sustainability evaluation. In this work, we propose methods of adapting four integrated metrics for use with LCAs of product systems: ecological footprint, emergy, green net value added, and Fisher information. These metrics provide information on the full product system in land, energy, monetary equivalents, and as a unitless information index; each bundled with one or more indicators for reporting. When used together and for relative comparison, integrated metrics provide a broader coverage of sustainability aspects from multiple theoretical perspectives that is more likely to illuminate potential issues than individual impact indicators. These integrated metrics are recommended for use in combination with traditional indicators used in LCA. Future work will test and demonstrate the value of using these integrated metrics and combinations to assess product system sustainability.

  5. Robustness Metrics: Consolidating the multiple approaches to quantify Robustness

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.

    2016-01-01

    robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics...

  6. Common Metrics for Human-Robot Interaction

    Science.gov (United States)

    Steinfeld, Aaron; Lewis, Michael; Fong, Terrence; Scholtz, Jean; Schultz, Alan; Kaber, David; Goodrich, Michael

    2006-01-01

    This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress.

  7. Exploring s-CIELAB as a scanner metric for print uniformity

    Science.gov (United States)

    Hertel, Dirk W.

    2005-01-01

    The s-CIELAB color difference metric combines the standard CIELAB metric for perceived color difference with spatial contrast sensitivity filtering. When studying the performance of digital image processing algorithms, maps of spatial color difference between 'before' and 'after' images are a measure of perceived image difference. A general image quality metric can be obtained by modeling the perceived difference from an ideal image. This paper explores the s-CIELAB concept for evaluating the quality of digital prints. Prints present the challenge that the 'ideal print', which should serve as the reference when calculating the delta E* error map, is unknown and must therefore be estimated from the scanned print. A reasonable estimate of what the ideal print 'should have been' is possible at least for images of known content, such as flat fields or continuous wedges, where the error map can be calculated against a global or local mean. While such maps showing the perceived error at each pixel are extremely useful when analyzing print defects, it is desirable to statistically reduce them to a more manageable dataset. Examples of digital print uniformity are given, and the effect of specific print defects on the s-CIELAB delta E* metric is discussed.
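
    Calculating the error map against a global mean, as described for flat fields, can be sketched as follows (plain CIELAB ΔE*ab is used here; the spatial contrast-sensitivity filtering step that makes the metric "s-CIELAB" is omitted):

```python
import numpy as np

def delta_e_map(lab_image: np.ndarray) -> np.ndarray:
    """Per-pixel CIELAB delta-E*ab against the field mean, as a stand-in
    for the unknown 'ideal' flat print. Input: H x W x 3 array of L*, a*, b*."""
    ideal = lab_image.reshape(-1, 3).mean(axis=0)      # global mean colour
    diff = lab_image - ideal
    return np.sqrt((diff ** 2).sum(axis=-1))           # Euclidean distance in Lab

# uniform grey field with one defective pixel
field = np.tile(np.array([50.0, 0.0, 0.0]), (4, 4, 1))
field[0, 0] = [53.0, 4.0, 0.0]
print(delta_e_map(field).round(2))                     # defect stands out clearly
```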

  8. An Implementable First-Order Primal-Dual Algorithm for Structured Convex Optimization

    Directory of Open Access Journals (Sweden)

    Feng Ma

    2014-01-01

    Full Text Available Many application problems of practical interest can be posed as structured convex optimization models. In this paper, we study a new first-order primal-dual algorithm. The method is easily implementable, provided that the resolvent operators of the component objective functions are simple to evaluate. We show that the proposed method can be interpreted as a proximal point algorithm with a customized metric proximal parameter. Its convergence is established under the analytic contraction framework. Finally, we verify the efficiency of the algorithm by solving the stable principal component pursuit problem.

  9. Narrowing the Gap Between QoS Metrics and Web QoE Using Above-the-fold Metrics

    OpenAIRE

    da Hora, Diego Neves; Asrese, Alemnew; Christophides, Vassilis; Teixeira, Renata; Rossi, Dario

    2018-01-01

    International audience; Page load time (PLT) is still the most common application Quality of Service (QoS) metric to estimate the Quality of Experience (QoE) of Web users. Yet, recent literature abounds with proposals for alternative metrics (e.g., Above The Fold, SpeedIndex and variants) that aim at better estimating user QoE. The main purpose of this work is thus to thoroughly investigate a mapping between established and recently proposed objective metrics and user QoE. We obtain ground tr...

  10. Honorary authorship epidemic in scholarly publications? How the current use of citation-based evaluative metrics make (pseudo)honorary authors from honest contributors of every multi-author article.

    Science.gov (United States)

    Kovacs, Jozsef

    2013-08-01

    The current use of citation-based metrics to evaluate the research output of individual researchers is highly discriminatory because they are uniformly applied to authors of single-author articles as well as contributors of multi-author papers. In the latter case, these quantitative measures are counted, as if each contributor were the single author of the full article. In this way, each and every contributor is assigned the full impact-factor score and all the citations that the article has received. This has a multiplication effect on each contributor's citation-based evaluative metrics of multi-author articles, because the more contributors an article has, the more undeserved credit is assigned to each of them. In this paper, I argue that this unfair system could be made fairer by requesting the contributors of multi-author articles to describe the nature of their contribution, and to assign a numerical value to their degree of relative contribution. In this way, we could create a contribution-specific index of each contributor for each citation metric. This would be a strong disincentive against honorary authorship and publication cartels, because it would transform the current win-win strategy of accepting honorary authors in the byline into a zero-sum game for each contributor.
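
    The proposed contribution-specific index amounts to splitting each article's credit among co-authors by their declared shares, instead of granting every co-author full credit; a minimal sketch (the rule that shares must sum to one is an assumption consistent with the zero-sum game the author describes):

```python
def contribution_credit(citations: int, shares: list) -> list:
    """Split an article's citation count by declared contribution shares.

    Each co-author receives citations * share rather than the full count,
    making author-list inflation a zero-sum game.
    """
    if abs(sum(shares) - 1.0) > 1e-9:
        raise ValueError("contribution shares must sum to 1")
    return [citations * s for s in shares]

# 120 citations, three authors with declared 50/30/20 % contributions
print(contribution_credit(120, [0.5, 0.3, 0.2]))
```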

  11. A GOAL QUESTION METRIC (GQM) APPROACH FOR EVALUATING INTERACTION DESIGN PATTERNS IN DRAWING GAMES FOR PRESCHOOL CHILDREN

    Directory of Open Access Journals (Sweden)

    Dana Sulistiyo Kusumo

    2017-06-01

    Full Text Available In recent years, there has been increasing interest in using drawing games on smart devices for educational benefit. In Indonesia, the government classifies children aged four to six years as preschool children. Not all preschool children can use drawing games easily, and drawing games may not cover all of the drawing competencies expected of Indonesian preschool children. This research proposes using the Goal Question Metric (GQM) approach to investigate and evaluate the interaction design patterns of preschool children against those drawing competencies in two Android-based drawing games: Belajar Menggambar (in English: Learn to Draw) and Coret: Belajar Menggambar (in English: Scratch: Learn to Draw). We collected data from nine students of a preschool in a user study. The results show that GQM can assist in evaluating interaction design patterns against the drawing competencies. Our approach can also yield interaction design patterns by comparing the patterns observed in the two drawing games.

  12. Factor structure of the Tomimatsu-Sato metrics

    International Nuclear Information System (INIS)

    Perjes, Z.

    1989-02-01

    Based on an earlier result stating that δ = 3 Tomimatsu-Sato (TS) metrics can be factored over the field of integers, an analogous representation for higher TS metrics was sought. It is shown that the factoring property of TS metrics follows from the structure of special Hankel determinants. A set of linear algebraic equations determining the factors was defined, and the factors of the first five TS metrics were tabulated, together with their primitive factors. (R.P.) 4 refs.; 2 tabs

  13. ST-intuitionistic fuzzy metric space with properties

    Science.gov (United States)

    Arora, Sahil; Kumar, Tanuj

    2017-07-01

    In this paper, we define ST-intuitionistic fuzzy metric space, and the notions of convergence and completeness of Cauchy sequences in it are studied. Further, we prove some properties of ST-intuitionistic fuzzy metric spaces. Finally, we introduce the concept of a symmetric ST-intuitionistic fuzzy metric space.

  14. Primal Interior-Point Method for Large Sparse Minimax Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 45, č. 5 (2009), s. 841-864 ISSN 0023-5954 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * minimax optimization * nonsmooth optimization * interior-point methods * modified Newton methods * variable metric methods * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.445, year: 2009 http://dml.cz/handle/10338.dmlcz/140034

  15. Metric to quantify white matter damage on brain magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Valdes Hernandez, Maria del C.; Munoz Maniega, Susana; Anblagan, Devasuda; Bastin, Mark E.; Wardlaw, Joanna M. [University of Edinburgh, Department of Neuroimaging Sciences, Centre for Clinical Brain Sciences, Edinburgh (United Kingdom); University of Edinburgh, Centre for Cognitive Ageing and Cognitive Epidemiology, Edinburgh (United Kingdom); UK Dementia Research Institute, Edinburgh Dementia Research Centre, London (United Kingdom); Chappell, Francesca M.; Morris, Zoe; Sakka, Eleni [University of Edinburgh, Department of Neuroimaging Sciences, Centre for Clinical Brain Sciences, Edinburgh (United Kingdom); UK Dementia Research Institute, Edinburgh Dementia Research Centre, London (United Kingdom); Dickie, David Alexander; Royle, Natalie A. [University of Edinburgh, Department of Neuroimaging Sciences, Centre for Clinical Brain Sciences, Edinburgh (United Kingdom); University of Edinburgh, Centre for Cognitive Ageing and Cognitive Epidemiology, Edinburgh (United Kingdom); Armitage, Paul A. [University of Sheffield, Department of Cardiovascular Sciences, Sheffield (United Kingdom); Deary, Ian J. [University of Edinburgh, Centre for Cognitive Ageing and Cognitive Epidemiology, Edinburgh (United Kingdom); University of Edinburgh, Department of Psychology, Edinburgh (United Kingdom)

    2017-10-15

    Quantitative assessment of white matter hyperintensities (WMH) on structural Magnetic Resonance Imaging (MRI) is challenging. It is important to harmonise results from different software tools considering not only the volume but also the signal intensity. Here we propose and evaluate a metric of white matter (WM) damage that addresses this need. We obtained WMH and normal-appearing white matter (NAWM) volumes from brain structural MRI from community dwelling older individuals and stroke patients enrolled in three different studies, using two automatic methods followed by manual editing by two to four observers blind to each other. We calculated the average intensity values on brain structural fluid-attenuation inversion recovery (FLAIR) MRI for the NAWM and WMH. The white matter damage metric is calculated as the proportion of WMH in brain tissue weighted by the relative image contrast of the WMH-to-NAWM. The new metric was evaluated using tissue microstructure parameters and visual ratings of small vessel disease burden and WMH: Fazekas score for WMH burden and Prins scale for WMH change. The correlation between the WM damage metric and the visual rating scores (Spearman ρ ≥ 0.74, p < 0.0001) was slightly stronger than between the latter and WMH volumes (Spearman ρ ≥ 0.72, p < 0.0001). The repeatability of the WM damage metric was better than WM volume (average median difference between measurements 3.26% (IQR 2.76%) and 5.88% (IQR 5.32%), respectively). The follow-up WM damage was highly related to total Prins score even when adjusted for baseline WM damage (ANCOVA, p < 0.0001), which was not always the case for WMH volume, as total Prins was highly associated with the change in the intense WMH volume (p = 0.0079, increase of 4.42 ml per unit change in total Prins, 95% CI [1.17, 7.67]), but not with the change in less-intense, subtle WMH, which determined the volumetric change. The new metric is practical and simple to calculate. It is robust to variations in
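
    Reading the metric as stated, the proportion of WMH in brain tissue weighted by the WMH-to-NAWM image contrast, gives a one-line sketch; the exact contrast definition (a simple intensity ratio here) is an assumption, not the authors' published formula:

```python
def wm_damage(wmh_vol, brain_vol, wmh_intensity, nawm_intensity):
    """Illustrative WM damage metric: WMH tissue fraction weighted by the
    relative FLAIR contrast of WMH to NAWM (intensity ratio is assumed)."""
    return (wmh_vol / brain_vol) * (wmh_intensity / nawm_intensity)

# e.g. 15 ml WMH in 1200 ml brain tissue, mean FLAIR intensities 180 vs 120
print(wm_damage(15.0, 1200.0, 180.0, 120.0))
```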

  16. Analysis of Skeletal Muscle Metrics as Predictors of Functional Task Performance

    Science.gov (United States)

    Ryder, Jeffrey W.; Buxton, Roxanne E.; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle J.; Fiedler, James; Ploutz-Snyder, Robert J.; Bloomberg, Jacob J.; Ploutz-Snyder, Lori L.

    2010-01-01

    PURPOSE: The ability to predict task performance using physiological performance metrics is vital to ensure that astronauts can execute their jobs safely and effectively. This investigation used a weighted suit to evaluate task performance at various ratios of strength, power, and endurance to body weight. METHODS: Twenty subjects completed muscle performance tests and functional tasks representative of those that would be required of astronauts during planetary exploration (see table for specific tests/tasks). Subjects performed functional tasks while wearing a weighted suit with additional loads ranging from 0-120% of initial body weight. Performance metrics were time to completion for all tasks except hatch opening, which consisted of total work. Task performance metrics were plotted against muscle metrics normalized to "body weight" (subject weight + external load; BW) for each trial. Fractional polynomial regression was used to model the relationship between muscle and task performance. CONCLUSION: LPMIF/BW is the best predictor of performance for predominantly lower-body tasks that are ambulatory and of short duration. LPMIF/BW is a very practical predictor of occupational task performance as it is quick and relatively safe to perform. Accordingly, bench press work best predicts hatch-opening work performance.

  17. Metrics for Assessment of Smart Grid Data Integrity Attacks

    Energy Technology Data Exchange (ETDEWEB)

    Annarita Giani; Miles McQueen; Russell Bent; Kameshwar Poolla; Mark Hinrichs

    2012-07-01

    There is an emerging consensus that the nation’s electricity grid is vulnerable to cyber attacks. This vulnerability arises from the increasing reliance on remote measurements, transmitted over legacy data networks to system operators who make critical decisions based on available data. Data integrity attacks are a class of cyber attacks that involve a compromise of information that is processed by the grid operator. This information can include meter readings of injected power at remote generators, power flows on transmission lines, and relay states. These data integrity attacks have consequences only when the system operator responds to compromised data by redispatching generation under normal or contingency protocols. These consequences include (a) financial losses from sub-optimal economic dispatch to service loads, (b) robustness/resiliency losses from placing the grid at operating points that are at greater risk from contingencies, and (c) systemic losses resulting from cascading failures induced by poor operational choices. This paper is focused on understanding the connections between grid operational procedures and cyber attacks. We first offer two examples to illustrate how data integrity attacks can cause economic and physical damage by misleading operators into making inappropriate decisions. We then focus on unobservable data integrity attacks involving power meter data. These are coordinated attacks where the compromised data are consistent with the physics of power flow, and therefore pass any bad-data detection algorithm. We develop metrics to assess the economic impact of these attacks under re-dispatch decisions using optimal power flow methods. These metrics can be used to prioritize the adoption of appropriate countermeasures, including PMU placement, encryption, hardware upgrades, and advanced attack detection algorithms.
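    The notion of an unobservable attack in this abstract can be illustrated with a toy DC state-estimation example (a sketch under simplifying assumptions, not the authors' models): if the injected error lies in the column space of the measurement matrix, the least-squares residual used for bad-data detection is unchanged while the estimated state shifts.

```python
import numpy as np

# Toy DC state estimation: z = H x + e; operators flag bad data via the LS residual.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])        # hypothetical measurement matrix (3 meters, 2 states)
x_true = np.array([0.5, -0.2])
z = H @ x_true                     # noise-free measurements, for clarity

def residual_norm(z_meas, H):
    """Norm of the least-squares measurement residual (identity weights)."""
    x_hat, *_ = np.linalg.lstsq(H, z_meas, rcond=None)
    return np.linalg.norm(z_meas - H @ x_hat)

c = np.array([0.3, 0.1])           # attacker's intended shift of the state estimate
a = H @ c                          # unobservable attack: lies in the column space of H

r_clean = residual_norm(z, H)
r_attacked = residual_norm(z + a, H)
# Both residuals are numerically zero: the residual test cannot flag the attack,
# yet the operator's state estimate is shifted by exactly c.
```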

  18. Global optimization based on noisy evaluations: An empirical study of two statistical approaches

    International Nuclear Information System (INIS)

    Vazquez, Emmanuel; Villemonteix, Julien; Sidorkiewicz, Maryan; Walter, Eric

    2008-01-01

    The optimization of the output of complex computer codes often has to be achieved with a small budget of evaluations. Algorithms dedicated to such problems have been developed and compared, such as the Expected Improvement algorithm (EI) or the Informational Approach to Global Optimization (IAGO). However, the influence of noisy evaluation results on the outcome of these comparisons has often been neglected, despite its frequent appearance in industrial problems. In this paper, empirical convergence rates for EI and IAGO are compared when an additive noise corrupts the result of an evaluation. IAGO appears more efficient than EI and various modifications of EI designed to deal with noisy evaluations. Keywords: global optimization; computer simulations; kriging; Gaussian process; noisy evaluations.
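    The Expected Improvement criterion compared in this study has a well-known closed form under a Gaussian process posterior with mean mu and standard deviation sigma at a candidate point; a minimal sketch for minimization (not the paper's implementation):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimization under a Gaussian posterior N(mu, sigma^2).

    mu, sigma: GP posterior mean and standard deviation at the candidate point.
    f_best: best (lowest) objective value observed so far.
    """
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)               # deterministic prediction
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal PDF
    return (f_best - mu) * Phi + sigma * phi
```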

  19. Mapping Rubber Plantations and Natural Forests in Xishuangbanna (Southwest China Using Multi-Spectral Phenological Metrics from MODIS Time Series

    Directory of Open Access Journals (Sweden)

    Sebastian van der Linden

    2013-05-01

    Full Text Available We developed and evaluated a new approach for mapping rubber plantations and natural forests in one of Southeast Asia’s biodiversity hot spots, Xishuangbanna in China. We used a one-year time series of Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) and short-wave infrared (SWIR) reflectance data to develop phenological metrics. These phenological metrics were used to classify rubber plantations and forests with the Random Forest classification algorithm. We evaluated which key phenological characteristics were important for discriminating rubber plantations and natural forests by estimating the influence of each metric on the classification accuracy. As a benchmark, we compared the best classification with a classification based on the full, fitted time series data. Overall classification accuracies derived from EVI and SWIR time series alone were 64.4% and 67.9%, respectively. Combining the phenological metrics from the EVI and SWIR time series improved the accuracy to 73.5%. Using the full, smoothed time series data instead of metrics derived from the time series improved the overall accuracy only slightly (1.3%), indicating that the phenological metrics were sufficient to explain the seasonal changes captured by the MODIS time series. The results demonstrate the promising utility of phenological metrics for mapping and monitoring rubber expansion with MODIS.
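    The kind of phenological metrics used in this record can be illustrated on a synthetic one-year EVI curve (an illustrative sketch only; the metric definitions in the study may differ, and all names here are hypothetical):

```python
import numpy as np

# Synthetic one-year EVI series sampled like MODIS 8-day composites (46 values).
doy = np.arange(0, 368, 8)                              # acquisition day of year
evi = 0.3 + 0.25 * np.exp(-((doy - 180) / 60.0) ** 2)   # single seasonal peak

def phenological_metrics(doy, evi, threshold_frac=0.5):
    """Simple seasonal metrics: amplitude, peak timing, season length."""
    base, peak = evi.min(), evi.max()
    amplitude = peak - base
    peak_doy = doy[np.argmax(evi)]
    thresh = base + threshold_frac * amplitude          # mid-amplitude threshold
    above = doy[evi >= thresh]
    season_length = above.max() - above.min()           # days spent above threshold
    return {"amplitude": amplitude, "peak_doy": peak_doy,
            "season_length": season_length}

metrics = phenological_metrics(doy, evi)
```

Metrics like these, computed per pixel, would then form the feature vector handed to a classifier such as Random Forest.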

  20. Citation metrics of excellence in sports biomechanics research.

    Science.gov (United States)

    Knudson, Duane

    2017-11-13

    This study extended research on key citation metrics of winners of two career scholar awards in sports biomechanics. Google Scholar (GS) was searched using Harzing's Publish or Perish software for the 13 most recent winners of the ISBS Geoffrey Dyson Award and the ASB Jim Hay Memorial Award. Returned records were corrected for author, and publications were excluded except for peer-reviewed journal articles, proceedings articles, chapters and books in English. These recent award winners had published about 150 publications that had been cited typically 4,082 and 6,648 times over a 26- and 28-year period before receiving these career awards for sports biomechanics research. Estimated median citations at the time of their awards were 2,927 and 4,907 for the Dyson and Hay awards, respectively. Award winners had mean Hirsch indexes (h) of 32-45 and mean hI of 19-28. Their mean g indexes (59-84) and their numerous citation classics (C > 100) indicated that they had many influential publications. The citation metrics of these scholars were outstanding and consistent with recent studies of top scholars in biomechanics and kinesiology/exercise science. Careful searching, cleaning and interpretation of several scholar-level citation metrics may provide useful confirmatory evidence for evaluations by awards committees.
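    The h- and g-indexes reported in this record have simple standard definitions, which can be sketched directly from a list of per-publication citation counts:

```python
def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the top g publications together have >= g^2 citations
    (bounded here by the number of publications)."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g
```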

  1. Methodology to Calculate the ACE and HPQ Metrics Used in the Wave Energy Prize

    Energy Technology Data Exchange (ETDEWEB)

    Driscoll, Frederick R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Weber, Jochem W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jenne, Dale S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Thresher, Robert W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Fingersh, Lee J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bull, Dianna [Sandia National Laboratories; Dallman, Ann [Sandia National Laboratories; Gunawan, Budi [Sandia National Laboratories; Ruehl, Kelley [Sandia National Laboratories; Newborn, David [Naval Surface Warfare Center, Carderock Division; Quintero, Miguel [Naval Surface Warfare Center, Carderock Division; LaBonte, Alison [U.S. Department of Energy; Karwat, Darshan [U.S. Department of Energy; Beatty, Scott [Cascadia Coast Research Ltd.

    2018-03-08

    The U.S. Department of Energy's Wave Energy Prize Competition encouraged the development of innovative deep-water wave energy conversion technologies that at least doubled device performance above the 2014 state of the art. Because levelized cost of energy (LCOE) metrics are challenging to apply equitably to new technologies where significant uncertainty exists in design and operation, the prize technical team developed a reduced metric as a proxy for LCOE, which provides an equitable comparison of low technology readiness level wave energy converter (WEC) concepts. The metric is called 'ACE', short for the ratio of the average climate capture width to the characteristic capital expenditure. The methodology and application of the ACE metric used to evaluate the performance of the technologies that competed in the Wave Energy Prize are explained in this report.
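    In outline, ACE is the ratio of a climate-averaged capture width to a characteristic capital expenditure. A hypothetical sketch of that ratio (the prize's actual averaging over wave climates and its CapEx characterization are more detailed; see the report for the real methodology):

```python
def average_climate_capture_width(capture_widths_m, climate_weights):
    """Weighted average of capture width over representative wave climates."""
    assert abs(sum(climate_weights) - 1.0) < 1e-9  # weights must sum to 1
    return sum(w * cw for cw, w in zip(capture_widths_m, climate_weights))

def ace(capture_widths_m, climate_weights, characteristic_capex_usd):
    """ACE: average climate capture width per dollar of characteristic CapEx."""
    avg_width = average_climate_capture_width(capture_widths_m, climate_weights)
    return avg_width / characteristic_capex_usd

# Hypothetical device: 10 m and 20 m capture widths in two equally weighted climates,
# with a characteristic CapEx of $3M.
score = ace([10.0, 20.0], [0.5, 0.5], 3.0e6)
```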

  2. Pragmatic security metrics applying metametrics to information security

    CERN Document Server

    Brotby, W Krag

    2013-01-01

    Other books on information security metrics discuss number theory and statistics in academic terms. Light on mathematics and heavy on utility, PRAGMATIC Security Metrics: Applying Metametrics to Information Security breaks the mold. This is the ultimate how-to-do-it guide for security metrics.Packed with time-saving tips, the book offers easy-to-follow guidance for those struggling with security metrics. Step by step, it clearly explains how to specify, develop, use, and maintain an information security measurement system (a comprehensive suite of metrics) to

  3. Defining a Progress Metric for CERT RMM Improvement

    Science.gov (United States)

    2017-09-14

    Defining a Progress Metric for CERT-RMM Improvement Gregory Crabb Nader Mehravari David Tobar September 2017 TECHNICAL ...fendable resource allocation decisions. Technical metrics measure aspects of controls implemented through technology (systems, software, hardware...implementation metric would be the percentage of users who have received anti-phishing training. • Effectiveness/efficiency metrics measure whether

  4. IT Project Management Metrics

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Many software and IT projects fail to meet their objectives for a variety of causes, among which project management carries a high weight. For projects to succeed, lessons learned have to be applied, historical data collected, and metrics and indicators computed and compared with those of past projects so that failure can be avoided. This paper presents some metrics that can be used for IT project management.

  5. Mass Customization Measurements Metrics

    DEFF Research Database (Denmark)

    Nielsen, Kjeld; Brunø, Thomas Ditlev; Jørgensen, Kaj Asbjørn

    2014-01-01

    A recent survey has indicated that 17 % of companies have ceased mass customizing less than 1 year after initiating the effort. This paper presents measurements of a company’s mass customization performance, utilizing metrics within the three fundamental capabilities: robust process design, choice navigation, and solution space development. A mass customizer assessing performance with these metrics can identify the areas in which improvement would increase competitiveness the most, enabling a more efficient transition to mass customization.

  6. Test and Evaluation Metrics of Crew Decision-Making And Aircraft Attitude and Energy State Awareness

    Science.gov (United States)

    Bailey, Randall E.; Ellis, Kyle K. E.; Stephens, Chad L.

    2013-01-01

    NASA has established a technical challenge, under the Aviation Safety Program, Vehicle Systems Safety Technologies project, to improve crew decision-making and response in complex situations. The specific objective of this challenge is to develop data and technologies which may increase a pilot's (crew's) ability to avoid, detect, and recover from adverse events that could otherwise result in accidents/incidents. Within this technical challenge, a cooperative industry-government research program has been established to develop innovative flight deck-based countermeasures that can improve the crew's ability to avoid, detect, mitigate, and recover from unsafe loss of aircraft state awareness - specifically, the loss of attitude awareness (i.e., Spatial Disorientation, SD) or the loss of energy state awareness (LESA). A critical component of this research is to develop specific and quantifiable metrics which identify decision-making and the decision-making influences during simulation and flight testing. This paper reviews existing metrics and methods for SD testing and criteria for establishing visual dominance. The development of Crew State Monitoring technologies - eye tracking and other psychophysiological measures - is also discussed, as well as emerging new metrics for identifying channelized attention and excessive pilot workload, both of which have been shown to contribute to SD/LESA accidents or incidents.

  7. New exposure-based metric approach for evaluating O3 risk to North American aspen forests

    International Nuclear Information System (INIS)

    Percy, K.E.; Nosal, M.; Heilman, W.; Dann, T.; Sober, J.; Legge, A.H.; Karnosky, D.F.

    2007-01-01

    The United States and Canada currently use exposure-based metrics to protect vegetation from O3. Using 5 years (1999-2003) of co-measured O3, meteorology and growth response, we have developed exposure-based regression models that predict Populus tremuloides growth change within the North American ambient air quality context. The models comprised the growing season fourth-highest daily maximum 8-h average O3 concentration, growing degree days, and wind speed. They had high statistical significance, high goodness of fit, include 95% confidence intervals for tree growth change, and are simple to use. Averaged across a wide range of clonal sensitivity, historical 2001-2003 growth change over most of the 26 Mha P. tremuloides distribution was estimated to have ranged from no impact (0%) to strong negative impacts (-31%). With four aspen clones responding negatively (one responded positively) to O3, the growing season fourth-highest daily maximum 8-h average O3 concentration performed much better than the growing season SUM06, AOT40 or maximum 1-h average O3 concentration metrics as a single indicator of aspen stem cross-sectional area growth. - A new exposure-based metric approach to predict O3 risk to North American aspen forests has been developed

  8. Metrical Phonology: German Sound System.

    Science.gov (United States)

    Tice, Bradley S.

    Metrical phonology, a linguistic process of phonological stress assessment and diagrammatic simplification of sentence and word stress, is discussed as it is found in the English and German languages. The objective is to promote use of metrical phonology as a tool for enhancing instruction in stress patterns in words and sentences, particularly in…

  9. Optimization-Based Image Segmentation by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Rosenberger C

    2008-01-01

    Full Text Available Many works in the literature focus on the definition of evaluation metrics and criteria that make it possible to quantify the performance of an image processing algorithm. These evaluation criteria can be used to define new image processing algorithms by optimizing them. In this paper, we propose a general scheme for segmenting images with a genetic algorithm. The developed method uses an evaluation criterion which quantifies the quality of an image segmentation result. The proposed segmentation method can integrate a local ground truth, when available, in order to set the desired level of precision of the final result. A genetic algorithm is then used to determine the best combination of the information extracted by the selected criterion. We then show that this approach can be applied to gray-level or multicomponent images, in either a supervised or an unsupervised context. Finally, we show the efficiency of the proposed method through experimental results on several gray-level and multicomponent images.

  10. Optimization-Based Image Segmentation by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    H. Laurent

    2008-05-01

    Full Text Available Many works in the literature focus on the definition of evaluation metrics and criteria that make it possible to quantify the performance of an image processing algorithm. These evaluation criteria can be used to define new image processing algorithms by optimizing them. In this paper, we propose a general scheme for segmenting images with a genetic algorithm. The developed method uses an evaluation criterion which quantifies the quality of an image segmentation result. The proposed segmentation method can integrate a local ground truth, when available, in order to set the desired level of precision of the final result. A genetic algorithm is then used to determine the best combination of the information extracted by the selected criterion. We then show that this approach can be applied to gray-level or multicomponent images, in either a supervised or an unsupervised context. Finally, we show the efficiency of the proposed method through experimental results on several gray-level and multicomponent images.
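    The general scheme described in these two records, optimizing a segmentation evaluation criterion with a genetic algorithm, can be sketched on a toy thresholding problem (an illustrative reduction, not the authors' method; the criterion here is an Otsu-style between-class variance, and all names are hypothetical):

```python
import random

random.seed(0)

# Synthetic "image": two gray-level populations (dark objects ~60, bright background ~180).
pixels = ([random.gauss(60, 10) for _ in range(500)] +
          [random.gauss(180, 10) for _ in range(500)])

def between_class_variance(threshold):
    """Segmentation quality criterion for a single threshold (higher is better)."""
    fg = [p for p in pixels if p < threshold]
    bg = [p for p in pixels if p >= threshold]
    if not fg or not bg:
        return 0.0
    w0, w1 = len(fg) / len(pixels), len(bg) / len(pixels)
    m0, m1 = sum(fg) / len(fg), sum(bg) / len(bg)
    return w0 * w1 * (m0 - m1) ** 2

def ga_optimize(fitness, lo=0.0, hi=255.0, pop_size=20, generations=40):
    """Tiny genetic algorithm: truncation selection, averaging crossover, Gaussian mutation."""
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2 + random.gauss(0, 5)    # crossover + mutation
            children.append(min(max(child, lo), hi))
        pop = parents + children
    return max(pop, key=fitness)

best_threshold = ga_optimize(between_class_variance)
```

With the two populations centred at 60 and 180, the GA should settle on a threshold between the modes, where the criterion is maximal.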

  11. Construction of Einstein-Sasaki metrics in D≥7

    International Nuclear Information System (INIS)

    Lue, H.; Pope, C. N.; Vazquez-Poritz, J. F.

    2007-01-01

    We construct explicit Einstein-Kaehler metrics in all even dimensions D=2n+4≥6, in terms of a 2n-dimensional Einstein-Kaehler base metric. These are cohomogeneity 2 metrics which have the new feature of including a NUT-type parameter, or gravomagnetic charge, in addition to mass and rotation parameters. Using a canonical construction, these metrics all yield Einstein-Sasaki metrics in dimensions D=2n+5≥7. As is commonly the case in this type of construction, for suitable choices of the free parameters the Einstein-Sasaki metrics can extend smoothly onto complete and nonsingular manifolds, even though the underlying Einstein-Kaehler metric has conical singularities. We discuss some explicit examples in the case of seven-dimensional Einstein-Sasaki spaces. These new spaces can provide supersymmetric backgrounds in M theory, which play a role in the AdS4/CFT3 correspondence

  12. National Metrical Types in Nineteenth Century Art Song

    Directory of Open Access Journals (Sweden)

    Leigh VanHandel

    2010-01-01

    Full Text Available William Rothstein’s article “National metrical types in music of the eighteenth and early nineteenth centuries” (2008 proposes a distinction between the metrical habits of 18th and early 19th century German music and those of Italian and French music of that period. Based on theoretical treatises and compositional practice, he outlines these national metrical types and discusses the characteristics of each type. This paper presents the results of a study designed to determine whether, and to what degree, Rothstein’s characterizations of national metrical types are present in 19th century French and German art song. Studying metrical habits in this genre may provide a lens into changing metrical conceptions of 19th century theorists and composers, as well as to the metrical habits and compositional style of individual 19th century French and German art song composers.

  13. A Metric on Phylogenetic Tree Shapes.

    Science.gov (United States)

    Colijn, C; Plazzotta, G

    2018-01-01

    The shapes of evolutionary trees are influenced by the nature of the evolutionary process but comparisons of trees from different processes are hindered by the challenge of completely describing tree shape. We present a full characterization of the shapes of rooted branching trees in a form that lends itself to natural tree comparisons. We use this characterization to define a metric, in the sense of a true distance function, on tree shapes. The metric distinguishes trees from random models known to produce different tree shapes. It separates trees derived from tropical versus USA influenza A sequences, which reflect the differing epidemiology of tropical and seasonal flu. We describe several metrics based on the same core characterization, and illustrate how to extend the metric to incorporate trees' branch lengths or other features such as overall imbalance. Our approach allows us to construct addition and multiplication on trees, and to create a convex metric on tree shapes which formally allows computation of average tree shapes. © The Author(s) 2017. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  14. SERVICE PROVISIONING IN MANETS USING SERVICE PROVIDER’S METRICS

    Directory of Open Access Journals (Sweden)

    K. Ponmozhi

    2012-09-01

    Full Text Available Service discovery technologies enable services to advertise their existence in a dynamic way so that they can be discovered, configured and used by other devices with a minimum of manual effort. Automatic service discovery plays an important role in future network scenarios. Service discovery in a distributed environment is difficult, especially when the availability information of the services cannot be held at a centralized node. The complexity increases even further in MANETs, where there is no central intelligence and the nodes involved may be on the move. The mobility issue leads to uncertainty about the service availability of the service provider. In this paper we propose a decentralized discovery mechanism. The basic idea is to distribute service information, along with availability metrics, to the nodes. The metrics provide the information needed to evaluate the goodness of each service provider. Every node forms multi-layered overlays of service providers sorted by these metrics. When a query is sent, each node identifies the service provider with the best metric among the available providers (i.e., the one in the first position in the overlay). We define the message structures and methods needed for this proposal. The simulation results show that even in a highly mobile environment we achieve better convergence. We believe that the architecture presented here is a necessary component of any service provision framework.

  15. Using community-level metrics to monitor the effects of marine protected areas on biodiversity.

    Science.gov (United States)

    Soykan, Candan U; Lewison, Rebecca L

    2015-06-01

    Marine protected areas (MPAs) are used to protect species, communities, and their associated habitats, among other goals. Measuring MPA efficacy can be challenging, however, particularly when considering responses at the community level. We gathered 36 abundance and 14 biomass data sets on fish assemblages and used meta-analysis to evaluate the ability of 22 distinct community diversity metrics to detect differences in community structure between MPAs and nearby control sites. We also considered the effects of six covariates (MPA size, MPA age, the interaction of MPA size and age, latitude, total species richness, and level of protection) on each metric. Some common metrics, such as species richness and Shannon diversity, did not differ consistently between MPA and control sites, whereas other metrics, such as total abundance and biomass, were consistently different across studies. Metric responses derived from the biomass data sets were more consistent than those based on the abundance data sets, suggesting that community-level biomass differs more predictably than abundance between MPA and control sites. Covariate analyses indicated that level of protection, latitude, MPA size, and the interaction between MPA size and age affect metric performance. These results highlight a handful of metrics, several of which are little known, that could be used to meet the increasing demand for community-level indicators of MPA effectiveness. © 2015 Society for Conservation Biology.

  16. Quantitative application of sigma metrics in medical biochemistry.

    Science.gov (United States)

    Nanda, Sunil Kumar; Ray, Lopamudra

    2013-12-01

    Laboratory errors are the result of a poorly designed quality system in the laboratory. Six Sigma is an error reduction methodology that has been successfully applied at Motorola and General Electric. Sigma (σ) is the mathematical symbol for standard deviation (SD). Sigma methodology can be applied wherever an outcome of a process has to be measured. A poor outcome is counted as an error or defect, quantified as defects per million (DPM). A Six Sigma process is one in which 99.999666% of the products manufactured are statistically expected to be free of defects. Six Sigma concentrates on regulating a process to 6 SDs, which represents 3.4 DPM opportunities. It can be inferred that as sigma increases, the consistency and steadiness of the test improve, thereby reducing operating costs. We aimed to gauge the performance of our laboratory parameters by sigma metrics, evaluating sigma metrics in the interpretation of parameter performance in clinical biochemistry. Six months of internal QC data (October 2012 to March 2013) and EQAS (external quality assurance scheme) data were extracted for the parameters glucose, urea, creatinine, total bilirubin, total protein, albumin, uric acid, total cholesterol, triglycerides, chloride, SGOT, SGPT and ALP. Coefficients of variation (CV) were calculated from the internal QC for these parameters. Percentage bias for these parameters was calculated from the EQAS. Total allowable errors were taken as per Clinical Laboratory Improvement Amendments (CLIA) guidelines. Sigma metrics were calculated from the CV, percentage bias and total allowable error for the above-mentioned parameters. For total bilirubin, uric acid, SGOT, SGPT and ALP, the sigma values were found to be more than 6. For glucose, creatinine, triglycerides and urea, the sigma values were found to be between 3 and 6. For total protein, albumin, cholesterol and chloride, the sigma values were found to be less than 3. ALP was the best
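    The sigma values in this record follow the standard laboratory formula: sigma = (total allowable error − |bias|) / CV, with all three quantities expressed as percentages (TEa from CLIA, bias from EQAS, CV from internal QC). A minimal sketch:

```python
def sigma_metric(tea_percent, bias_percent, cv_percent):
    """Sigma = (total allowable error - |bias|) / CV, all in percent.

    tea_percent: total allowable error (e.g., from CLIA guidelines)
    bias_percent: bias estimated from an external quality assurance scheme
    cv_percent: coefficient of variation from internal QC
    """
    return (tea_percent - abs(bias_percent)) / cv_percent

# Example: TEa 10%, bias 2%, CV 2%  ->  sigma of 4 (between 3 and 6: acceptable)
sigma = sigma_metric(10.0, 2.0, 2.0)
```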

  17. Wheeling rates evaluation using optimal power flows

    International Nuclear Information System (INIS)

    Muchayi, M.; El-Hawary, M. E.

    1998-01-01

    Wheeling is the transmission of electrical power and reactive power from a seller to a buyer through a transmission network owned by a third party. The wheeling rates are then the prices charged by the third party for the use of its network. This paper proposes and evaluates a strategy for pricing wheeling power using a pricing algorithm that, in addition to the fuel cost for generation, incorporates the optimal allocation of the transmission system operating cost, based on time-of-use pricing. The algorithm is implemented for the IEEE standard 14- and 30-bus test systems, which involves solving a modified optimal power flow problem iteratively. The proposed algorithm is based on hourly spot prices. The analysis spans a total time period of 24 hours. Unlike other algorithms that use DC models, the proposed model captures wheeling rates of both real and reactive power. Based on the evaluation, it was concluded that the model has the potential for wide application in calculating wheeling rates in a deregulated competitive power transmission environment. 9 refs., 3 tabs

  18. Knowledge metrics of Brand Equity; critical measure of Brand Attachment

    OpenAIRE

    Arslan Rafi (Corresponding Author); Arslan Ali; Sidra Waris; Dr. Kashif-ur-Rehman

    2011-01-01

    Brand creation through an effective marketing strategy is necessary for the creation of unique associations in customers' memory. Customers' attitudes, awareness and associations towards the brand are the primary focus when evaluating a brand's performance, before designing marketing strategies and subsequently evaluating progress. In this research, the literature establishes a direct and significant effect of the knowledge metrics of brand equity, i.e. Brand Awareness and Brand Associatio...

  19. Evaluating Consumer Product Life Cycle Sustainability with Integrated Metrics: A Paper Towel Case Study

    Science.gov (United States)

    Integrated sustainability metrics provide an enriched set of information to inform decision-making. However, such approaches are rarely used to assess product supply chains. In this work, four integrated metrics—presented in terms of land, resources, value added, and stability—ar...

  20. Evaluation and optimization of feed-in tariffs

    International Nuclear Information System (INIS)

    Kim, Kyoung-Kuk; Lee, Chi-Guhn

    2012-01-01

    A feed-in tariff (FIT) program is an incentive plan that provides investors with a set payment for electricity generated from renewable energy sources and fed into the power grid. As of today, FIT is being used by over 75 jurisdictions around the world and offers a number of design options to achieve policy goals. The objective of this paper is to propose a quantitative model by which a specific FIT program can be evaluated and hence optimized. We focus on the payoff structure, which has a direct impact on the net present value of the investment, and on other parameters relevant to investor reaction and electricity prices. We combine cost modeling, option valuation, and consumer choice so as to simulate the performance of a FIT program of interest in various scenarios. The model is used to define an optimization problem from the perspective of a policy maker who wants to increase the contribution of renewable energy to the overall energy supply while keeping the total burden on ratepayers under control. Numerical studies shed light on the interactions among design options, program parameters, and the performance of a FIT program. - Highlights: ► A quantitative model to evaluate and optimize feed-in tariff policies. ► Net present value of investment on renewable energy under a given feed-in tariff policy. ► Analysis of the interactions of policy options and relevant parameters. ► Recommendations for how to set policy options for feed-in tariff program.
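    The payoff structure's effect on the net present value of a FIT investment, which this abstract highlights, can be illustrated with a bare-bones discounted cash flow sketch (hypothetical parameter names and structure; the paper's model additionally covers option valuation and consumer choice):

```python
def fit_npv(capacity_kw, annual_kwh_per_kw, tariff_per_kwh,
            capex_per_kw, opex_per_kw_year, contract_years, discount_rate):
    """NPV of a renewable installation under a fixed-payment feed-in tariff."""
    capex = capacity_kw * capex_per_kw
    annual_cash = capacity_kw * (annual_kwh_per_kw * tariff_per_kwh
                                 - opex_per_kw_year)
    pv = sum(annual_cash / (1.0 + discount_rate) ** t
             for t in range(1, contract_years + 1))
    return pv - capex
```

A higher tariff or a longer contract raises the NPV and hence investor uptake; the policy maker's problem is to pick these design options without overburdening ratepayers.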

  1. The Jacobi metric for timelike geodesics in static spacetimes

    Science.gov (United States)

    Gibbons, G. W.

    2016-01-01

    It is shown that the free motion of massive particles moving in static spacetimes is given by the geodesics of an energy-dependent Riemannian metric on the spatial sections, analogous to Jacobi's metric in classical dynamics. In the massless limit Jacobi's metric coincides with the energy-independent Fermat or optical metric. For stationary metrics, it is known that the motion of massless particles is given by the geodesics of an energy-independent Finslerian metric of Randers type. The motion of massive particles is governed by neither a Riemannian nor a Finslerian metric. The properties of the Jacobi metric for massive particles moving outside the horizon of a Schwarzschild black hole are described. By contrast with the massless case, the Gaussian curvature of the equatorial sections is not always negative.
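    For a static spacetime, the energy-dependent Jacobi metric described in this abstract takes the following form (sketched from the abstract's description and standard presentations; readers should check the paper itself for conventions):

```latex
% Static spacetime, with spatial metric h_{ij} on the constant-t sections:
ds^2 = -V^2\,dt^2 + h_{ij}\,dx^i\,dx^j
% Timelike geodesics of a particle of mass m and conserved energy E are
% geodesics of the energy-dependent Jacobi metric
j_{ij} = \bigl(E^2 - m^2 V^2\bigr)\,\frac{h_{ij}}{V^2}
% In the massless limit m \to 0 this reduces, up to the constant factor E^2,
% to the energy-independent Fermat (optical) metric h_{ij}/V^2.
```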

  2. Evaluating social media's capacity to develop engaged audiences in health promotion settings: use of Twitter metrics as a case study.

    Science.gov (United States)

    Neiger, Brad L; Thackeray, Rosemary; Burton, Scott H; Giraud-Carrier, Christophe G; Fagen, Michael C

    2013-03-01

    Use of social media in health promotion and public health continues to grow in popularity, though most of what is reported in the literature represents one-way messaging devoid of the attributes associated with engagement, a core attribute, if not the central purpose, of social media. This article defines engagement, describes its value in maximizing the potential of social media in health promotion, proposes an evaluation hierarchy for social media engagement, and uses Twitter as a case study to illustrate how the hierarchy might function in practice. Partnership and participation are proposed as culminating outcomes for social media use in health promotion. As use of social media in health promotion moves toward this end, evaluation metrics that verify progress and inform subsequent strategies will become increasingly important.

  3. Relaxed metrics and indistinguishability operators: the relationship

    Energy Technology Data Exchange (ETDEWEB)

    Martin, J.

    2017-07-01

    In 1982, the notion of an indistinguishability operator was introduced by E. Trillas in order to fuzzify the crisp notion of an equivalence relation [Trillas]. In the study of this class of operators, an outstanding property must be pointed out: there exists a duality relationship between indistinguishability operators and metrics. This relationship was studied in depth by several authors, who introduced techniques to generate metrics from indistinguishability operators and vice versa (see, for instance, [BaetsMesiar, BaetsMesiar2]). In recent years a new generalization of the metric notion has been introduced in the literature with the purpose of developing mathematical tools for quantitative models in Computer Science and Artificial Intelligence [BKMatthews, Ma]. These generalized metrics are known as relaxed metrics. The main target of this talk is to present a study of the duality relationship between indistinguishability operators and relaxed metrics, in such a way that the aforementioned classical techniques to generate each concept from the other can be extended to the new framework. (Author)

  4. Concordance-based Kendall's Correlation for Computationally-Light vs. Computationally-Heavy Centrality Metrics: Lower Bound for Correlation

    Directory of Open Access Journals (Sweden)

    Natarajan Meghanathan

    2017-01-01

    Full Text Available We identify three different levels of correlation (pair-wise relative ordering, network-wide ranking and linear regression) that could be assessed between a computationally-light centrality metric and a computationally-heavy centrality metric for real-world networks. The Kendall's concordance-based correlation measure could be used to quantitatively assess how well we could consider the relative ordering of two vertices vi and vj with respect to a computationally-light centrality metric as the relative ordering of the same two vertices with respect to a computationally-heavy centrality metric. We hypothesize that the pair-wise relative ordering (concordance-based) assessment of the correlation between centrality metrics is the strictest of the three levels of correlation, and claim that the Kendall's concordance-based correlation coefficient will be lower than the correlation coefficients observed with the more relaxed levels of correlation (the linear regression-based Pearson's product-moment correlation coefficient and the network-wide ranking-based Spearman's correlation coefficient). We validate our hypothesis by evaluating the three correlation coefficients between two sets of centrality metrics: the computationally-light degree and local clustering coefficient complement-based degree centrality metrics, and the computationally-heavy eigenvector centrality, betweenness centrality and closeness centrality metrics, for a diverse collection of 50 real-world networks.
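
    The three correlation levels can be sketched with plain NumPy; the two five-element centrality vectors below are invented toy data (standing in for a light and a heavy metric), not the paper's networks. Note how one swapped pair costs Kendall's concordance measure more than it costs the rank- and value-based coefficients.

```python
import numpy as np

def kendall_tau(a, b):
    """Concordance-based Kendall correlation: (#concordant - #discordant) / #pairs."""
    n = len(a)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
    return s / (n * (n - 1) / 2)

def ranks(v):
    # 1-based rank positions, assuming no ties for simplicity
    order = np.argsort(v)
    r = np.empty(len(v))
    r[order] = np.arange(1, len(v) + 1)
    return r

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def spearman(a, b):
    # network-wide ranking level: Pearson correlation of the rank vectors
    return pearson(ranks(a), ranks(b))

light = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # e.g. degree (toy values)
heavy = np.array([1.0, 2.0, 3.0, 5.0, 4.0])   # e.g. betweenness, one swap

print(kendall_tau(light, heavy), spearman(light, heavy), pearson(light, heavy))
# Kendall = 0.8 while Spearman and Pearson are both 0.9 for this toy pair.
```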

  5. Multi-objective based on parallel vector evaluated particle swarm optimization for optimal steady-state performance of power systems

    DEFF Research Database (Denmark)

    Vlachogiannis, Ioannis (John); Lee, K Y

    2009-01-01

    In this paper the state-of-the-art extended particle swarm optimization (PSO) methods for solving multi-objective optimization problems are presented. We emphasize the co-evolution technique of the parallel vector evaluated PSO (VEPSO), analysed and applied in a multi-objective problem...

  6. Overall bolt stress optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2013-01-01

    The state of stress in bolts and nuts with International Organization for Standardization metric thread design is examined and optimized. The assumed failure mode is fatigue, so the applied preload and the load amplitude, together with the stress concentrations, define the connection strength.... Maximum stress in the bolt is found at the fillet under the head, at the thread start, or at the thread root. To minimize the stress concentration, shape optimization is applied. Nut shape optimization also has a positive effect on the maximum stress. The optimization results show that designing a nut..., which results in a more even distribution of load along the engaged thread, has a limited influence on the maximum stress due to the stress concentration at the first thread root. To further reduce the maximum stress, the transition from bolt shank to thread must be optimized. Stress reduction...

  7. Assessing precision, bias and sigma-metrics of 53 measurands of the Alinity ci system.

    Science.gov (United States)

    Westgard, Sten; Petrides, Victoria; Schneider, Sharon; Berman, Marvin; Herzogenrath, Jörg; Orzechowski, Anthony

    2017-12-01

    Assay performance is dependent on the accuracy and precision of a given method. These attributes can be combined into an analytical Sigma-metric, providing a simple value for laboratorians to use in evaluating a test method's capability to meet its analytical quality requirements. Sigma-metrics were determined for 37 clinical chemistry assays, 13 immunoassays, and 3 ICT methods on the Alinity ci system. Analytical Performance Specifications were defined for the assays, following a rationale of using CLIA goals first, then Ricos Desirable goals when CLIA did not regulate the method, and then other sources if the Ricos Desirable goal was unrealistic. A precision study was conducted at Abbott on each assay using the Alinity ci system following the CLSI EP05-A2 protocol. Bias was estimated following the CLSI EP09-A3 protocol using samples with concentrations spanning the assay's measuring interval, tested in duplicate on the Alinity ci system and the ARCHITECT c8000 and i2000 SR systems; this testing was also performed at Abbott. Using the regression model, the %bias was estimated at an important medical decision point. The Sigma-metric was then estimated for each assay and plotted on a method decision chart. The Sigma-metric was calculated using the equation: Sigma-metric = (%TEa - |%bias|)/%CV. The Sigma-metrics and Normalized Method Decision charts demonstrate that a majority of the Alinity assays perform at five Sigma or higher, at or near critical medical decision levels. More than 90% of the assays performed at five or six Sigma. None performed below three Sigma. Sigma-metrics plotted on Normalized Method Decision charts provide useful evaluations of performance. The majority of Alinity ci system assays had Sigma values >5, and thus laboratories can expect excellent or world-class performance. Laboratorians can use these tools as aids in choosing high-quality products, further contributing to the delivery of excellent-quality healthcare for patients.
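
    The Sigma-metric equation quoted in the abstract is a one-liner; the assay values below are made-up illustrations, not Alinity data.

```python
# Sigma-metric from the abstract: Sigma = (%TEa - |%bias|) / %CV,
# where %TEa is the allowable total error specification.

def sigma_metric(tea_pct, bias_pct, cv_pct):
    return (tea_pct - abs(bias_pct)) / cv_pct

# A hypothetical assay with TEa 10%, bias 1%, CV 1.5%:
print(sigma_metric(10.0, 1.0, 1.5))  # 6.0 -> "six Sigma" performance
```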

  8. Experiential space is hardly metric

    Czech Academy of Sciences Publication Activity Database

    Šikl, Radovan; Šimeček, Michal; Lukavský, Jiří

    2008-01-01

    Roč. 2008, č. 37 (2008), s. 58-58 ISSN 0301-0066. [European Conference on Visual Perception. 24.08-28.08.2008, Utrecht] R&D Projects: GA ČR GA406/07/1676 Institutional research plan: CEZ:AV0Z70250504 Keywords : visual space perception * metric and non-metric perceptual judgments * ecological validity Subject RIV: AN - Psychology

  9. High resolution metric imaging payload

    Science.gov (United States)

    Delclaud, Y.

    2017-11-01

    Alcatel Space Industries has become Europe's leader in the field of high and very high resolution optical payloads, in the framework of earth observation systems able to provide military and government users with metric images from space. This leadership allowed ALCATEL to propose for the export market, within a French collaboration framework, a complete space-based system for metric observation.

  10. Evaluation and improvement of dynamic optimality in electrochemical reactors

    International Nuclear Information System (INIS)

    Vijayasekaran, B.; Basha, C. Ahmed

    2005-01-01

    A systematic approach to the statement of dynamic optimization problems, aimed at improving dynamic optimality in electrochemical reactors, is presented in this paper. The formulation takes account of the diffusion phenomenon at the electrode/electrolyte interface. To demonstrate the present methodology, the optimal time-varying electrode potential for a coupled chemical-electrochemical reaction scheme that maximizes the production of the desired product in a batch electrochemical reactor with/without recirculation is determined. The dynamic optimization problem statement, based upon this approach, is a nonlinear differential-algebraic system, and its solution provides information about the optimal policy. The optimal control policy at different conditions is evaluated using the well-known Pontryagin maximum principle. The two-point boundary value problem resulting from the application of the maximum principle is then solved using the control vector iteration technique. These optimal time-varying profiles of electrode potential are then compared to the best uniform operation through the relative improvement of the performance index. The application of the proposed approach to two electrochemical systems, described by ordinary differential equations, shows that the existing electrochemical process control strategy could be improved considerably when the proposed method is incorporated.
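
    The flavor of a discretized control-profile optimization can be sketched with a toy model. This is not the paper's electrochemical system: the reaction A → B → C, the control-dependent rates, and the finite-difference hill climbing below are all invented stand-ins for the Pontryagin/control-vector-iteration machinery, which would instead use the adjoint equations of the Hamiltonian.

```python
import numpy as np

# Toy batch reaction A -> B -> C, where a piecewise-constant control u(t)
# (a stand-in for electrode potential) sets the rates k1 = u, k2 = u**2.
# We maximize the desired intermediate B at final time T.

N, T = 20, 1.0
dt = T / N

def objective(u):
    a, b = 1.0, 0.0
    for k in range(N):                       # explicit Euler integration
        k1, k2 = u[k], u[k] ** 2
        a, b = a + dt * (-k1 * a), b + dt * (k1 * a - k2 * b)
    return b                                 # amount of product B at t = T

def optimize(u, iters=200, step=0.5, eps=1e-6):
    u = u.copy()
    best = objective(u)
    for _ in range(iters):
        base = objective(u)
        grad = np.array([(objective(u + eps * e) - base) / eps
                         for e in np.eye(N)])
        trial = np.clip(u + step * grad, 0.0, 1.0)   # respect control bounds
        f_trial = objective(trial)
        if f_trial > best:
            u, best = trial, f_trial
        else:
            step *= 0.5        # only keep improvements; shrink step otherwise
    return u, best

u0 = np.full(N, 0.5)           # "best uniform operation" as the baseline
f0 = objective(u0)
u_opt, f_opt = optimize(u0)
print(f0, f_opt)
```

    The comparison of `f_opt` against the uniform-control baseline `f0` mirrors the abstract's relative-improvement comparison between the time-varying profile and the best uniform operation.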

  11. Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction

    Science.gov (United States)

    Zang, Y.; Yang, B.

    2018-04-01

    3D laser technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points based on a fixed scale. However, the geometric features of a 3D object come from various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply a perception metric, Just-Noticeable-Difference, to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction of objects.

  12. OPTIMAL INFORMATION EXTRACTION OF LASER SCANNING DATASET BY SCALE-ADAPTIVE REDUCTION

    Directory of Open Access Journals (Sweden)

    Y. Zang

    2018-04-01

    Full Text Available 3D laser technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points based on a fixed scale. However, the geometric features of a 3D object come from various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply a perception metric, Just-Noticeable-Difference, to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction of objects.

  13. The Association between Four Citation Metrics and Peer Rankings of Research Influence of Australian Researchers in Six Fields of Public Health

    Science.gov (United States)

    Derrick, Gemma Elizabeth; Haynes, Abby; Chapman, Simon; Hall, Wayne D.

    2011-01-01

    Doubt about the relevance, appropriateness and transparency of peer review has promoted the use of citation metrics as a viable adjunct or alternative in the assessment of research impact. It is also commonly acknowledged that research metrics will not replace peer review unless they are shown to correspond with the assessment of peers. This paper evaluates the relationship between researchers' influence as evaluated by their peers and various citation metrics representing different aspects of research output in 6 fields of public health in Australia. For four fields, the results showed a modest positive correlation between different research metrics and peer assessments of research influence. However, for two fields, tobacco and injury, negative or no correlations were found. This suggests a peer understanding of research influence within these fields differed from visibility in the mainstream, peer-reviewed scientific literature. This research therefore recommends the use of both peer review and metrics in a combined approach in assessing research influence. Future research evaluation frameworks intent on incorporating metrics should first analyse each field closely to determine what measures of research influence are valued highly by members of that research community. This will aid the development of comprehensive and relevant frameworks with which to fairly and transparently distribute research funds or approve promotion applications. PMID:21494691

  14. The association between four citation metrics and peer rankings of research influence of Australian researchers in six fields of public health.

    Directory of Open Access Journals (Sweden)

    Gemma Elizabeth Derrick

    Full Text Available Doubt about the relevance, appropriateness and transparency of peer review has promoted the use of citation metrics as a viable adjunct or alternative in the assessment of research impact. It is also commonly acknowledged that research metrics will not replace peer review unless they are shown to correspond with the assessment of peers. This paper evaluates the relationship between researchers' influence as evaluated by their peers and various citation metrics representing different aspects of research output in 6 fields of public health in Australia. For four fields, the results showed a modest positive correlation between different research metrics and peer assessments of research influence. However, for two fields, tobacco and injury, negative or no correlations were found. This suggests a peer understanding of research influence within these fields differed from visibility in the mainstream, peer-reviewed scientific literature. This research therefore recommends the use of both peer review and metrics in a combined approach in assessing research influence. Future research evaluation frameworks intent on incorporating metrics should first analyse each field closely to determine what measures of research influence are valued highly by members of that research community. This will aid the development of comprehensive and relevant frameworks with which to fairly and transparently distribute research funds or approve promotion applications.

  15. The association between four citation metrics and peer rankings of research influence of Australian researchers in six fields of public health.

    Science.gov (United States)

    Derrick, Gemma Elizabeth; Haynes, Abby; Chapman, Simon; Hall, Wayne D

    2011-04-06

    Doubt about the relevance, appropriateness and transparency of peer review has promoted the use of citation metrics as a viable adjunct or alternative in the assessment of research impact. It is also commonly acknowledged that research metrics will not replace peer review unless they are shown to correspond with the assessment of peers. This paper evaluates the relationship between researchers' influence as evaluated by their peers and various citation metrics representing different aspects of research output in 6 fields of public health in Australia. For four fields, the results showed a modest positive correlation between different research metrics and peer assessments of research influence. However, for two fields, tobacco and injury, negative or no correlations were found. This suggests a peer understanding of research influence within these fields differed from visibility in the mainstream, peer-reviewed scientific literature. This research therefore recommends the use of both peer review and metrics in a combined approach in assessing research influence. Future research evaluation frameworks intent on incorporating metrics should first analyse each field closely to determine what measures of research influence are valued highly by members of that research community. This will aid the development of comprehensive and relevant frameworks with which to fairly and transparently distribute research funds or approve promotion applications.

  16. Image-guided radiotherapy quality control: Statistical process control using image similarity metrics.

    Science.gov (United States)

    Shiraishi, Satomi; Grams, Michael P; Fong de Los Santos, Luis E

    2018-05-01

    Patient-specific control charts using NCC evaluated daily variation and identified statistically significant deviations. This study also showed that subjective evaluations of the images were not always consistent. Population control charts identified a patient whose tracking metrics were significantly lower than those of other patients. The patient-specific action limits identified registrations that warranted immediate evaluation by an expert. When effective displacements in the anterior-posterior direction were compared to 3DoF couch displacements, the agreement was ±1 mm for seven of 10 patients for both C-spine and mandible RTVs. Qualitative review of IGRT images alone can result in inconsistent feedback to the IGRT process. Registration tracking using NCC objectively identifies statistically significant deviations. When used in conjunction with the current image review process, this tool can assist in improving the safety and consistency of the IGRT process. © 2018 American Association of Physicists in Medicine.
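
    The statistical-process-control idea behind such charts is straightforward to sketch: track an image similarity score per fraction and flag values outside control limits derived from a baseline period. The NCC values below are invented for illustration, and mean ± 3σ limits are one common control-chart convention, not necessarily the authors' exact rule.

```python
import numpy as np

# Baseline registrations (e.g. normalized cross-correlation, NCC, per
# fraction) establish the center line and control limits.
baseline = np.array([0.97, 0.96, 0.98, 0.97, 0.96, 0.97, 0.98, 0.96])
center, sigma = baseline.mean(), baseline.std(ddof=1)
lcl, ucl = center - 3 * sigma, center + 3 * sigma

# New daily values: the 0.80 is a simulated misregistration that should
# fall below the lower control limit and trigger expert review.
daily = np.array([0.97, 0.95, 0.96, 0.80, 0.97])
flags = (daily < lcl) | (daily > ucl)
print(flags.tolist())
```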

  17. Smart Grid Status and Metrics Report Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Balducci, Patrick J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Antonopoulos, Chrissi A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Clements, Samuel L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gorrissen, Willy J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kirkham, Harold [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ruiz, Kathleen A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Smith, David L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Weimar, Mark R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gardner, Chris [APQC, Houston, TX (United States); Varney, Jeff [APQC, Houston, TX (United States)

    2014-07-01

    A smart grid uses digital power control and communication technology to improve the reliability, security, flexibility, and efficiency of the electric system, from large generation through the delivery systems to electricity consumers and a growing number of distributed generation and storage resources. To convey progress made in achieving the vision of a smart grid, this report uses a set of six characteristics derived from the National Energy Technology Laboratory Modern Grid Strategy. The Smart Grid Status and Metrics Report defines and examines 21 metrics that collectively provide insight into the grid’s capacity to embody these characteristics. This appendix presents papers covering each of the 21 metrics identified in Section 2.1 of the Smart Grid Status and Metrics Report. These metric papers were prepared in advance of the main body of the report and collectively form its informational backbone.

  18. Implications of Metric Choice for Common Applications of Readmission Metrics

    OpenAIRE

    Davies, Sheryl; Saynina, Olga; Schultz, Ellen; McDonald, Kathryn M; Baker, Laurence C

    2013-01-01

    Objective. To quantify the differential impact on hospital performance of three readmission metrics: all-cause readmission (ACR), 3M Potential Preventable Readmission (PPR), and Centers for Medicare and Medicaid 30-day readmission (CMS).

  19. Evaluation of dose-volume metrics for microbeam radiation therapy dose distributions in head phantoms of various sizes using Monte Carlo simulations

    Science.gov (United States)

    Anderson, Danielle; Siegbahn, E. Albert; Fallone, B. Gino; Serduc, Raphael; Warkentin, Brad

    2012-05-01

    This work evaluates four dose-volume metrics applied to microbeam radiation therapy (MRT) using simulated dosimetric data as input. We seek to improve upon the most frequently used MRT metric, the peak-to-valley dose ratio (PVDR), by analyzing MRT dose distributions from a more volumetric perspective. Monte Carlo simulations were used to calculate dose distributions in three cubic head phantoms: a 2 cm mouse head, an 8 cm cat head and a 16 cm dog head. The dose distribution was calculated for a 4 × 4 mm² microbeam array in each phantom, as well as a 16 × 16 mm² array in the 8 cm cat head, and a 32 × 32 mm² array in the 16 cm dog head. Microbeam widths of 25, 50 and 75 µm and center-to-center spacings of 100, 200 and 400 µm were considered. The metrics calculated for each simulation were the conventional PVDR, the peak-to-mean valley dose ratio (PMVDR), the mean dose and the percentage volume below a threshold dose. The PVDR ranged between 3 and 230 for the 2 cm mouse phantom, and between 2 and 186 for the 16 cm dog phantom depending on geometry. The corresponding ranges for the PMVDR were much smaller, being 2-49 (mouse) and 2-46 (dog), and showed a slightly weaker dependence on phantom size and array size. The ratio of the PMVDR to the PVDR varied from 0.21 to 0.79 for the different collimation configurations, indicating a difference between the geometric dependence on outcome that would be predicted by these two metrics. For unidirectional irradiation, the mean lesion dose was 102%, 79% and 42% of the mean skin dose for the 2 cm mouse, 8 cm cat and 16 cm dog head phantoms, respectively. However, the mean lesion dose recovered to 83% of the mean skin dose in the 16 cm dog phantom in intersecting cross-firing regions. The percentage volume below a 10% dose threshold was highly dependent on geometry, with ranges for the different collimation configurations of 2-87% and 33-96% for the 2 cm mouse and 16 cm dog heads, respectively. The results of this study
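
    The two ratio metrics can be sketched on a synthetic 1-D microbeam dose profile (invented numbers, not Monte Carlo data). Definitions vary in the literature; here PVDR is taken as peak dose over minimum valley dose and PMVDR as mean peak dose over mean valley dose, which is one plausible reading of the abstract.

```python
import numpy as np

# Synthetic dose profile: peaks at the microbeam centres, valleys at the
# midpoints between beams. Values are arbitrary illustration units.
dose = np.array([100.0, 8.0, 4.0, 8.0, 100.0, 10.0, 6.0, 10.0, 100.0])
peak_idx = [0, 4, 8]     # microbeam centres
valley_idx = [2, 6]      # midpoints between beams

pvdr = dose[peak_idx].max() / dose[valley_idx].min()     # peak-to-valley
pmvdr = dose[peak_idx].mean() / dose[valley_idx].mean()  # peak-to-mean-valley
print(pvdr, pmvdr)  # 25.0 20.0
```

    As in the abstract's results, the mean-valley variant is the less extreme of the two, since a single deep valley point inflates the PVDR but is averaged out in the PMVDR.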

  20. Evaluation of dose-volume metrics for microbeam radiation therapy dose distributions in head phantoms of various sizes using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Anderson, Danielle; Fallone, B Gino; Warkentin, Brad; Siegbahn, E Albert; Serduc, Raphael

    2012-01-01

    This work evaluates four dose-volume metrics applied to microbeam radiation therapy (MRT) using simulated dosimetric data as input. We seek to improve upon the most frequently used MRT metric, the peak-to-valley dose ratio (PVDR), by analyzing MRT dose distributions from a more volumetric perspective. Monte Carlo simulations were used to calculate dose distributions in three cubic head phantoms: a 2 cm mouse head, an 8 cm cat head and a 16 cm dog head. The dose distribution was calculated for a 4 × 4 mm² microbeam array in each phantom, as well as a 16 × 16 mm² array in the 8 cm cat head, and a 32 × 32 mm² array in the 16 cm dog head. Microbeam widths of 25, 50 and 75 µm and center-to-center spacings of 100, 200 and 400 µm were considered. The metrics calculated for each simulation were the conventional PVDR, the peak-to-mean valley dose ratio (PMVDR), the mean dose and the percentage volume below a threshold dose. The PVDR ranged between 3 and 230 for the 2 cm mouse phantom, and between 2 and 186 for the 16 cm dog phantom depending on geometry. The corresponding ranges for the PMVDR were much smaller, being 2–49 (mouse) and 2–46 (dog), and showed a slightly weaker dependence on phantom size and array size. The ratio of the PMVDR to the PVDR varied from 0.21 to 0.79 for the different collimation configurations, indicating a difference between the geometric dependence on outcome that would be predicted by these two metrics. For unidirectional irradiation, the mean lesion dose was 102%, 79% and 42% of the mean skin dose for the 2 cm mouse, 8 cm cat and 16 cm dog head phantoms, respectively. However, the mean lesion dose recovered to 83% of the mean skin dose in the 16 cm dog phantom in intersecting cross-firing regions. The percentage volume below a 10% dose threshold was highly dependent on geometry, with ranges for the different collimation configurations of 2–87% and 33–96% for the 2 cm mouse and 16 cm dog heads, respectively. The results of this

  1. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate... regression by minimising a cross-validation estimate of the generalisation error. This allows us to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...
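
    The idea can be sketched as follows; this is not the authors' algorithm, but a minimal Nadaraya-Watson regression with a diagonal input metric, where per-dimension bandwidths are chosen by minimising leave-one-out squared error over a small grid. The data (one relevant input dimension, one irrelevant) are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(80, 2))
y = np.sin(3 * X[:, 0])       # only dimension 0 matters; dimension 1 is noise input

def loo_error(X, y, h):
    """Leave-one-out MSE of a Gaussian Nadaraya-Watson smoother with
    per-dimension bandwidths h (a diagonal input metric)."""
    d2 = ((X[:, None, :] - X[None, :, :]) / h) ** 2   # scaled squared distances
    w = np.exp(-0.5 * d2.sum(-1))
    np.fill_diagonal(w, 0.0)                          # exclude each point itself
    pred = w @ y / w.sum(1)
    return float(np.mean((pred - y) ** 2))

# Fixed isotropic metric vs. a metric adapted per dimension by grid search.
iso = loo_error(X, y, np.array([0.2, 0.2]))
adapted = min((loo_error(X, y, np.array([h0, h1])), (h0, h1))
              for h0 in (0.1, 0.2, 0.4) for h1 in (0.2, 1.0, 5.0))
print(iso, adapted)
```

    A large bandwidth on the irrelevant dimension effectively removes it from the distance computation, which is the sense in which adapting the metric performs automatic variable selection.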

  2. Quality assurance for high dose rate brachytherapy treatment planning optimization: using a simple optimization to verify a complex optimization

    International Nuclear Information System (INIS)

    Deufel, Christopher L; Furutani, Keith M

    2014-01-01

    As dose optimization for high dose rate brachytherapy becomes more complex, it becomes increasingly important to have a means of verifying that optimization results are reasonable. A method is presented for using a simple optimization as quality assurance for the more complex optimization algorithms typically found in commercial brachytherapy treatment planning systems. Quality assurance tests may be performed during commissioning, at regular intervals, and/or on a patient-specific basis. A simple optimization method is provided that optimizes conformal target coverage using an exact, variance-based, algebraic approach. Metrics such as the dose volume histogram, conformality index, and total reference air kerma agree closely between the simple and complex optimizations for breast, cervix, prostate, and planar applicators. The simple optimization is shown to be a sensitive measure for identifying failures in a commercial treatment planning system that are possibly due to operator error or weaknesses in the planning system's optimization algorithms. Results from the simple optimization are surprisingly similar to the results from a more complex, commercial optimization for several clinical applications. This suggests that there are only modest gains to be made from making brachytherapy optimization more complex. The improvements expected from sophisticated linear optimizations, such as PARETO methods, will largely be in making systems more user friendly and efficient, rather than in finding dramatically better source strength distributions. (paper)
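
    The "simple optimization as QA" idea can be sketched with an algebraic least-squares solve for dwell times; this is a stand-in for the paper's exact variance-based method, and all geometry and dose-rate numbers below are invented.

```python
import numpy as np

# Toy QA check: solve for dwell times that deliver a uniform prescription
# dose at a set of target points, then inspect the achieved doses. A large
# disagreement with a commercial plan's doses would warrant investigation.

rng = np.random.default_rng(1)
n_dwell, n_points = 6, 12
# Dose-rate kernel: dose at point j per second of dwell i, built from
# invented source-to-point distances with an inverse-square falloff.
r = rng.uniform(0.5, 3.0, size=(n_points, n_dwell))
A = 1.0 / r**2
prescription = np.ones(n_points)   # normalized target dose at each point

t, *_ = np.linalg.lstsq(A, prescription, rcond=None)
t = np.clip(t, 0.0, None)          # dwell times must be nonnegative
achieved = A @ t
print(np.round(achieved, 2))
```

    Because the solve is exact and reproducible, its output is a stable reference against which a commercial optimizer's dose metrics can be compared at commissioning or per patient.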

  3. Climate Classification is an Important Factor in ­Assessing Hospital Performance Metrics

    Science.gov (United States)

    Boland, M. R.; Parhi, P.; Gentine, P.; Tatonetti, N. P.

    2017-12-01

    Context/Purpose: Climate is a known modulator of disease, but its impact on hospital performance metrics remains unstudied. Methods: We assess the relationship between Köppen-Geiger climate classification and hospital performance metrics, specifically 30-day mortality, as reported in Hospital Compare and collected for the period July 2013 through June 2014 (7/1/2013 - 06/30/2014). A hospital-level multivariate linear regression analysis was performed, controlling for known socioeconomic factors, to explore the relationship between all-cause mortality and climate. Hospital performance scores were obtained from 4,524 hospitals belonging to 15 distinct Köppen-Geiger climates and 2,373 unique counties. Results: Model results revealed that hospital performance metrics for mortality showed significant climate dependence even after adjustment for socioeconomic factors. Interpretation: Currently, hospitals are reimbursed by governmental agencies using 30-day mortality rates along with 30-day readmission rates. These metrics allow government agencies to rank hospitals according to their 'performance' along these metrics. Various socioeconomic factors are taken into consideration when determining an individual hospital's performance. However, no climate-based adjustment is made within the existing framework. Our results indicate that climate-based variability in 30-day mortality rates does exist even after socioeconomic confounder adjustment. Use of standardized high-level climate classification systems (such as Köppen-Geiger) would be useful to incorporate in future metrics. Conclusion: Climate is a significant factor in evaluating hospital 30-day mortality rates. These results demonstrate that climate classification is an important factor when comparing hospital performance across the United States.

  4. Optimizing aspects of pedestrian traffic in building designs

    KAUST Repository

    Rodriguez, Samuel

    2013-11-01

    In this work, we investigate aspects of building design that can be optimized. Architectural features that we explore include pillar placement in simple corridors, doorway placement in buildings, and agent placement for information dispersal in an evacuation. The metrics utilized are tuned to the specific scenarios we study, which include continuous flow pedestrian movement and building evacuation. We use Multidimensional Direct Search (MDS) optimization with an extreme barrier criterion to find optimal placements while enforcing building constraints. © 2013 IEEE.
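
    The extreme-barrier idea is easy to sketch: any candidate point violating the constraints scores +inf, so a derivative-free search simply never moves there. The compass-style pattern search below is a simple stand-in for the MDS method named in the abstract, and the 1-D "pillar placement" objective is invented.

```python
import numpy as np

def barrier_objective(x, objective, feasible):
    # Extreme barrier: infeasible points get +inf and are never accepted.
    return objective(x) if feasible(x) else float("inf")

def pattern_search(x0, objective, feasible, step=1.0, tol=1e-3):
    x = np.asarray(x0, dtype=float)
    fx = barrier_objective(x, objective, feasible)
    while step > tol:
        improved = False
        # Poll the 2n coordinate directions at the current mesh size.
        for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):
            trial = x + step * d
            ft = barrier_objective(trial, objective, feasible)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5    # shrink the mesh when no direction improves
    return x, fx

# Toy scenario: place a pillar along a corridor [0, 10]; pedestrian flow is
# (by assumption) best near position 2.5.
obj = lambda x: (x[0] - 2.5) ** 2
feas = lambda x: 0.0 <= x[0] <= 10.0
x_best, f_best = pattern_search([8.0], obj, feas)
print(x_best, f_best)
```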

  5. Optimizing aspects of pedestrian traffic in building designs

    KAUST Repository

    Rodriguez, Samuel; Yinghua Zhang,; Gans, Nicholas; Amato, Nancy M.

    2013-01-01

    In this work, we investigate aspects of building design that can be optimized. Architectural features that we explore include pillar placement in simple corridors, doorway placement in buildings, and agent placement for information dispersal in an evacuation. The metrics utilized are tuned to the specific scenarios we study, which include continuous flow pedestrian movement and building evacuation. We use Multidimensional Direct Search (MDS) optimization with an extreme barrier criterion to find optimal placements while enforcing building constraints. © 2013 IEEE.

  6. Eckart frame vibration-rotation Hamiltonians: Contravariant metric tensor

    International Nuclear Information System (INIS)

    Pesonen, Janne

    2014-01-01

    The Eckart frame is a unique embedding in the theory of molecular vibrations and rotations. It is defined by the condition that the Coriolis coupling of the reference structure of the molecule is zero for every choice of the shape coordinates. It is far from trivial to set up Eckart kinetic energy operators (KEOs) when the shape of the molecule is described by curvilinear coordinates. In order to obtain the KEO, one needs to set up the corresponding contravariant metric tensor. Here, I derive explicitly the Eckart frame rotational measuring vectors. Their inner products with themselves give the rotational elements, and their inner products with the vibrational measuring vectors (which, in the absence of constraints, are the mass-weighted gradients of the shape coordinates) give the Coriolis elements of the contravariant metric tensor. The vibrational elements are given as the inner products of the vibrational measuring vectors with themselves, and these elements do not depend on the choice of the body frame. The present approach has the advantage that it does not depend on any particular choice of the shape coordinates; it can be used in conjunction with all shape coordinates. Furthermore, it does not involve evaluation of covariant metric tensors, chain rules of derivation, or numerical differentiation, and it can be easily modified if there are constraints on the shape of the molecule. Both planar and non-planar reference structures are accounted for. The present method is particularly suitable for numerical work. Its computational implementation is outlined in an example, where I discuss how to evaluate vibration-rotation energies and eigenfunctions of a general N-atomic molecule whose shape is described by a set of local polyspherical coordinates.

  7. Metric qualities of the cognitive behavioral assessment for outcome evaluation to estimate psychological treatment effects.

    Science.gov (United States)

    Bertolotti, Giorgio; Michielin, Paolo; Vidotto, Giulio; Sanavio, Ezio; Bottesi, Gioia; Bettinardi, Ornella; Zotti, Anna Maria

    2015-01-01

    The Cognitive Behavioral Assessment for Outcome Evaluation was developed to evaluate psychological treatment interventions, especially counseling and psychotherapy. It is made up of 80 items and five scales: anxiety, well-being, perception of positive change, depression, and psychological distress. The aim of the study was to present the metric qualities and to show the validity and reliability of the five constructs of the questionnaire in both nonclinical and clinical subjects. Four steps were completed to assess reliability and factor structure: criterion-related and concurrent validity, responsiveness, and convergent-divergent validity. A nonclinical group of 269 subjects was enrolled, as was a clinical group comprising 168 adults undergoing psychotherapy and psychological counseling provided by the Italian public health service. Cronbach's alphas were between 0.80 and 0.91 for the clinical sample and between 0.74 and 0.91 for the nonclinical one. We observed excellent structural validity for the five interrelated dimensions. The clinical group showed higher scores on the anxiety, depression, and psychological distress scales, as well as lower scores on the well-being and perception of positive change scales, than those observed in the nonclinical group. Responsiveness was large for the anxiety, well-being, and depression scales; the psychological distress and perception of positive change scales showed a moderate effect. The questionnaire showed excellent psychometric properties, demonstrating that it is a good evaluative instrument for assessing pre- and post-treatment outcomes.
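
    The reported reliabilities (Cronbach's alphas of 0.74-0.91) follow the standard formula α = k/(k-1) · (1 − Σσᵢ²/σₜ²), where the σᵢ² are item variances and σₜ² the variance of the total score. A minimal sketch with invented item scores, not the questionnaire's actual data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from per-subject item scores.
    `items` is a list of rows, one per subject, each a list of item scores."""
    k = len(items[0])
    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in items]) for i in range(k)]
    total_var = var([sum(row) for row in items])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 4-item scale answered by 5 subjects
scores = [[3, 4, 3, 4],
          [2, 2, 3, 2],
          [4, 5, 4, 5],
          [1, 2, 1, 2],
          [3, 3, 4, 3]]
alpha = cronbach_alpha(scores)  # items move together, so alpha is high
```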

  8. Principle of space existence and De Sitter metric

    International Nuclear Information System (INIS)

    Mal'tsev, V.K.

    1990-01-01

    The selection principle for the solutions of the Einstein equations suggested in a series of papers implies the existence of space (g ik ≠ 0) only in the presence of matter (T ik ≠ 0). This selection principle (principle of space existence, in the Markov terminology) implies, in the general case, the absence of the cosmological solution with the De Sitter metric. On the other hand, the De Sitter metric is necessary for describing both inflation and deflation periods of the Universe. It is shown that the De Sitter metric is also allowed by the selection principle under discussion if the metric evolves into the Friedmann metric.

  9. What can article-level metrics do for you?

    Science.gov (United States)

    Fenner, Martin

    2013-10-01

    Article-level metrics (ALMs) provide a wide range of metrics about the uptake of an individual journal article by the scientific community after publication. They include citations, usage statistics, discussions in online comments and social media, social bookmarking, and recommendations. In this essay, we describe why article-level metrics are an important extension of traditional citation-based journal metrics and provide a number of examples from ALM data collected for PLOS Biology.

  10. About the possibility of a generalized metric

    International Nuclear Information System (INIS)

    Lukacs, B.; Ladik, J.

    1991-10-01

    The metric (the structure of space-time) may depend on the properties of the object measuring it. The case of size dependence of the metric was examined. For this dependence, the simplest possible form of the metric tensor has been constructed which fulfils the following requirements: there be two extremal characteristic scales; the metric be unique and coincide with the usual one between them; the change be sudden in the neighbourhood of these scales; and the size of the human body appear as a parameter (postulated on the basis of some philosophical arguments). Estimates have been made for the two extremal length scales according to existing observations. (author) 19 refs

  11. MJO simulation in CMIP5 climate models: MJO skill metrics and process-oriented diagnosis

    Science.gov (United States)

    Ahn, Min-Seop; Kim, Daehyun; Sperber, Kenneth R.; Kang, In-Sik; Maloney, Eric; Waliser, Duane; Hendon, Harry

    2017-12-01

    The Madden-Julian Oscillation (MJO) simulation diagnostics developed by the MJO Working Group and the process-oriented MJO simulation diagnostics developed by the MJO Task Force are applied to 37 Coupled Model Intercomparison Project phase 5 (CMIP5) models in order to assess model skill in representing the amplitude, period, and coherent eastward propagation of the MJO, and to establish a link between MJO simulation skill and parameterized physical processes. Process-oriented diagnostics include the Relative Humidity Composite based on Precipitation (RHCP), Normalized Gross Moist Stability (NGMS), and the Greenhouse Enhancement Factor (GEF). Numerous scalar metrics are developed to quantify the results. Most CMIP5 models underestimate MJO amplitude, especially when outgoing longwave radiation (OLR) is used in the evaluation, and exhibit too-fast phase speed while lacking coherence between the eastward propagation of precipitation/convection and the wind field. The RHCP metric, indicative of the sensitivity of simulated convection to low-level environmental moisture, and the NGMS metric, indicative of the efficiency of a convective atmosphere in exporting moist static energy out of the column, show robust correlations with a large number of MJO skill metrics. The GEF metric, indicative of the strength of the column-integrated longwave radiative heating due to cloud-radiation interaction, is also correlated with the MJO skill metrics, but shows relatively lower correlations than the RHCP and NGMS metrics. Our results suggest that modifications to the processes associated with moisture-convection coupling and gross moist stability might be the most fruitful for improving simulations of the MJO. Though the GEF metric exhibits lower correlations with the MJO skill metrics, the longwave radiation feedback is highly relevant for simulating the weak-precipitation-anomaly regime that may be important for the establishment of shallow convection and the transition to deep convection.

  12. SU-G-201-14: Is Maximum Skin Dose a Reliable Metric for Accelerated Partial Breast Irradiation with Brachytherapy?

    International Nuclear Information System (INIS)

    Park, S; Ragab, O; Patel, S; Demanes, J; Kamrava, M; Kim, Y

    2016-01-01

    Purpose: To evaluate the reliability of the maximum point dose (Dmax) to the skin surface as a dosimetric constraint, we investigated the correlation between Dmax at the skin surface and dose metrics at various definitions of skin thickness. Methods: 42 patients treated with APBI using a Strut Adjusted Volume Implant (SAVI) applicator between 2010 and 2014 were retrospectively reviewed. Target (PTV-EVAL) and organs at risk (OARs: skin, lung, and ribs) were delineated on a CT following NSABP B-39 guidelines. Six skin structures were contoured: a rind 3cm external to the body surface and 1, 2, 3, 4, and 5mm thick rinds deep to the body surface. Inverse planning simulated annealing optimization was used to deliver 32–34Gy in 8-10 fractions to the target while minimizing OAR doses. Dmax, D0.1cc, D1.0cc, and D2.0cc to the various skin structures were calculated. Linear regressions between the metrics were evaluated using the coefficient of determination (R²). Results: The average±SD PTV-EVAL volume and cavity-to-skin distances were 71.1±28.5cc and 6.9±5.0mm. The target V90 and V95 were 97.3±2.3% and 95.1±3.2%. The Dmax to the skin structures were 78.7±10.2% (skin surface), 82.2±10.7% (skin-1mm), 89.4±12.6% (skin-2mm), 97.9±15.4% (skin-3mm), 114.1±32.5% (skin-4mm), and 157.0±85.3% (skin-5mm). Linear regression analysis showed D1.0cc and D2.0cc to the skin-1mm and Dmax to the skin-4mm and skin-5mm were poorly correlated with other metrics (R²=0.413±0.204). Dmax to the skin surface was well correlated (R²=0.910±0.047) and D1.0cc to the skin-3mm was strongly correlated with all subsurface skin layers (R²=0.935±0.050). Conclusion: Dmax to the skin surface is a relevant metric for breast skin dose. Contouring discontinuities in the skin with a 1mm subsurface rind and the active dwells in the skin-4mm and skin-5mm structures introduced significant variations in skin DVH. D0.1cc, D1.0cc, and D2.0cc to a 3mm skin rind are more robust metrics in breast brachytherapy.
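
    The dose metrics compared in this abstract (Dmax, D0.1cc, D1.0cc, D2.0cc) are all read off a structure's dose-volume histogram: DVcc is the minimum dose received by the hottest V cc of the structure. A simplified sketch, assuming a flat list of per-voxel doses and a uniform voxel volume (both invented here, not the study's planning-system data):

```python
def dvh_metrics(doses, voxel_volume_cc):
    """Dose metrics from a structure's per-voxel dose list.
    Dmax is the single hottest voxel; DVcc is the minimum dose
    received by the hottest V cc of the structure."""
    ds = sorted(doses, reverse=True)
    def d_at_volume(v_cc):
        n = max(1, round(v_cc / voxel_volume_cc))  # voxels making up v_cc
        return ds[min(n, len(ds)) - 1]
    return {"Dmax": ds[0],
            "D0.1cc": d_at_volume(0.1),
            "D1.0cc": d_at_volume(1.0),
            "D2.0cc": d_at_volume(2.0)}

# Hypothetical 3 cc skin structure of 0.01 cc voxels,
# doses in % of prescription ranging from 50% to 79.9%
doses = [50 + 0.1 * i for i in range(300)]
m = dvh_metrics(doses, 0.01)
```

    The point the abstract makes is visible in this formulation: Dmax depends on a single voxel and is therefore sensitive to contouring discontinuities, while D0.1cc-D2.0cc average over many voxels and are more robust.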

  13. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
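
    The idea of adapting the input metric can be sketched as a Nadaraya-Watson kernel regressor whose per-dimension weights are chosen to minimise a leave-one-out cross-validation error. This is a toy illustration, not the authors' algorithm: they minimise the cross-validation estimate directly, whereas the crude grid search below merely demonstrates the effect on a variable-selection-style task (all data invented):

```python
import math

def nw_predict(x, X, y, w):
    """Nadaraya-Watson estimate at x with a diagonal input metric w:
    squared distance = sum_d w[d] * (x[d] - xi[d])**2."""
    ks = [math.exp(-sum(wd * (a - b) ** 2 for wd, a, b in zip(w, x, xi)))
          for xi in X]
    return sum(k * yi for k, yi in zip(ks, y)) / sum(ks)

def loo_error(X, y, w):
    """Leave-one-out cross-validation estimate of the generalisation error."""
    err = 0.0
    for i in range(len(X)):
        Xi = X[:i] + X[i + 1:]
        yi = y[:i] + y[i + 1:]
        err += (nw_predict(X[i], Xi, yi, w) - y[i]) ** 2
    return err / len(X)

# Target depends only on the first input; the second is irrelevant noise.
X = [[i / 10.0, ((7 * i) % 10) / 10.0] for i in range(20)]
y = [xi[0] ** 2 for xi in X]

# Crude metric adaptation: grid-search the per-dimension weights.
grid = [0.0, 1.0, 10.0, 100.0]
best_w = min(([a, b] for a in grid for b in grid if a or b),
             key=lambda w: loo_error(X, y, w))
```

    The adapted metric down-weights the irrelevant dimension, which is exactly the automatic adjustment of dimension importance the abstract describes.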

  14. Conversion Rate Optimization : Visual Neuro Programming Principles

    OpenAIRE

    Berezhnaya, Anastasia

    2016-01-01

    The influence of the World Wide Web has spread to every business. Consequently, it has become crucial to develop a strong online presence and offer a quality user experience to website visitors. Website optimization has undeniably proved its importance in the recent decade. This research was conducted in order to study the practical application and structure of the stages of the CRO (Conversion Rate Optimization) framework that focuses on the most representative website metric – c...

  15. Ideal Based Cyber Security Technical Metrics for Control Systems

    Energy Technology Data Exchange (ETDEWEB)

    W. F. Boyer; M. A. McQueen

    2007-10-01

    Much of the world's critical infrastructure is at risk from attack through electronic networks connected to control systems. Security metrics are important because they provide the basis for management decisions that affect the protection of the infrastructure. A cyber security technical metric is the security-relevant output from an explicit mathematical model that makes use of objective measurements of a technical object. A specific set of technical security metrics is proposed for use by the operators of control systems. Our proposed metrics are based on seven security ideals associated with seven corresponding abstract dimensions of security. We have defined at least one metric for each of the seven ideals; each metric is a measure of how nearly the associated ideal has been achieved. These seven ideals provide a useful structure for further metrics development. A case study shows how the proposed metrics can be applied to an operational control system.

  16. THE ROLE OF ARTICLE LEVEL METRICS IN SCIENTIFIC PUBLISHING

    Directory of Open Access Journals (Sweden)

    Vladimir TRAJKOVSKI

    2016-04-01

    Emerging article-level metrics do not exclude traditional citation-based journal metrics; they complement them. Article-level metrics (ALMs) provide a wide range of metrics about the uptake of an individual journal article by the scientific community after publication. They include citations, usage statistics, discussions in online comments and social media, social bookmarking, and recommendations. In this editorial, the role of article-level metrics in publishing scientific papers is described. ALMs are rapidly emerging as important tools to quantify how individual articles are being discussed, shared, and used. Data sources depend on the tool, but they include classic citation-based indicators, academic social networks (Mendeley, CiteULike, Delicious) and social media (Facebook, Twitter, blogs, and YouTube). The most popular tools that apply these new metrics are: Public Library of Science Article-Level Metrics, Altmetric, Impactstory and Plum Analytics. The Journal Impact Factor (JIF) does not consider impact or influence beyond citation counts, and those counts are reflected only through Thomson Reuters' Web of Science® database. The JIF provides an indicator related to the journal, but not to an individual published paper. Thus, altmetrics are becoming an alternative for performance assessment of individual scientists and their scholarly publications. Macedonian scholarly publishers have to work on implementing article-level metrics in their e-journals. It is the way to increase their visibility and impact in the world of science.

  17. Quantitative Metrics for Generative Justice: Graphing the Value of Diversity

    Directory of Open Access Journals (Sweden)

    Brian Robert Callahan

    2016-12-01

    Scholarship utilizing the Generative Justice framework has focused primarily on qualitative data collection and analysis for its insights. This paper introduces a quantitative measurement, contributory diversity, which can be used to enhance the analysis of the ethical dimensions of value production under the Generative Justice lens. It is well known that the identity of contributors (gender, ethnicity, and other categories) is a key issue for social justice in general. Using the example of Open Source Software communities, we note that typical diversity measures, focusing exclusively on workforce demographics, can fail to fully illuminate issues in value generation. Using Shannon's entropy measure, we offer an alternative metric which combines the traditional assessment of demographics with a measure of value generation. This mapping allows previously unacknowledged contributions to be recognized, and can avoid some of the ways in which exclusionary practices are obscured. We offer contributory diversity not as the single optimal metric, but rather as a call for others to begin investigating the possibilities for quantitative measurements of the communities and value flows that are studied using the Generative Justice framework.
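
    The Shannon entropy measure referred to above can be computed directly from counts. A hypothetical example contrasting headcount-based diversity with a contribution-weighted (contributory) view of the same groups; the project, groups, and numbers are invented:

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (bits) of the distribution given by raw counts."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in ps)

# Hypothetical open-source project: headcount per demographic group
# versus lines of code contributed by each group.
headcount = [40, 40, 20]       # who is present
contribution = [90, 8, 2]      # who actually generates value
presence_diversity = shannon_entropy(headcount)
contributory_diversity = shannon_entropy(contribution)
```

    In this invented example the workforce looks fairly diverse, but the entropy of the value-generation distribution is much lower, which is the kind of gap between demographic presence and contribution that the metric is designed to expose.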

  18. Machine learning classifier using abnormal brain network topological metrics in major depressive disorder.

    Science.gov (United States)

    Guo, Hao; Cao, Xiaohua; Liu, Zhifen; Li, Haifang; Chen, Junjie; Zhang, Kerang

    2012-12-05

    Resting state functional brain networks have been widely studied in brain disease research. However, it is currently unclear whether abnormal resting state functional brain network metrics can be used with machine learning for the classification of brain diseases. Resting state functional brain networks were constructed for 28 healthy controls and 38 major depressive disorder patients by thresholding partial correlation matrices of 90 regions. Three nodal metrics were calculated using graph theory-based approaches. Nonparametric permutation tests were then used for group comparisons of topological metrics, which were used as classification features in six different algorithms. We used statistical significance as the threshold for selecting features and measured the accuracies of the six classifiers with different numbers of features. A sensitivity analysis method was used to evaluate the importance of different features. The results indicated that some regions exhibited significantly abnormal nodal centralities, including the limbic system, basal ganglia, medial temporal, and prefrontal regions. The support vector machine with radial basis kernel function and the neural network algorithm exhibited the highest average accuracy (79.27% and 78.22%, respectively) with 28 features. These results suggest that major depressive disorder is associated with abnormal functional brain network topological metrics and that statistically significant nodal metrics can be successfully used for feature selection in classification algorithms.
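
    As an illustration of the kind of nodal metric used here, the sketch below computes normalised degree centrality from a thresholded correlation matrix. The 4-region matrix is a toy; the study used 90 regions, partial correlations, and three nodal metrics:

```python
def degree_centrality(corr, threshold):
    """Nodal degree centrality of a functional brain network obtained by
    thresholding a correlation matrix: regions i and j are connected
    when |corr[i][j]| exceeds the threshold."""
    n = len(corr)
    degree = [sum(1 for j in range(n)
                  if j != i and abs(corr[i][j]) > threshold)
              for i in range(n)]
    return [d / (n - 1) for d in degree]  # normalised to [0, 1]

# Toy 4-region correlation matrix (symmetric, unit diagonal)
corr = [[1.0, 0.8, 0.1, 0.6],
        [0.8, 1.0, 0.2, 0.7],
        [0.1, 0.2, 1.0, 0.3],
        [0.6, 0.7, 0.3, 1.0]]
centrality = degree_centrality(corr, threshold=0.5)
```

    A vector of such per-region values, one entry per node, is what gets fed to the classifiers as features after the permutation-test screening step.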

  19. A New Metric for Land-Atmosphere Coupling Strength: Applications on Observations and Modeling

    Science.gov (United States)

    Tang, Q.; Xie, S.; Zhang, Y.; Phillips, T. J.; Santanello, J. A., Jr.; Cook, D. R.; Riihimaki, L.; Gaustad, K.

    2017-12-01

    A new metric is proposed to quantify land-atmosphere (LA) coupling strength; it is elaborated by correlating the surface evaporative fraction with impacting land and atmosphere variables (e.g., soil moisture, vegetation, and radiation). Based upon multiple linear regression, this approach simultaneously considers multiple factors and thus represents complex LA coupling mechanisms better than existing single-variable metrics. The standardized regression coefficients quantify the relative contributions of individual drivers in a consistent manner, avoiding the potential inconsistency in relative influence of conventional metrics. Moreover, the expandable nature of the new method allows us to verify and explore potentially important coupling mechanisms. Our observation-based application of the new metric shows moderate coupling, with large spatial variations, at the U.S. Southern Great Plains. The relative importance of soil moisture vs. vegetation varies by location. We also show that LA coupling strength is generally underestimated by single-variable methods due to their incompleteness. Finally, we apply this new metric to evaluate the representation of LA coupling in the Accelerated Climate Modeling for Energy (ACME) V1 Contiguous United States (CONUS) regionally refined model (RRM). This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-734201
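
    The core of the proposed metric, multiple linear regression with standardized coefficients, can be sketched as follows. The drivers, data, and their relationship are invented for illustration; the point is that standardizing puts soil moisture, vegetation, and radiation on a common, unitless scale so their coefficients are directly comparable:

```python
def standardize(xs):
    m = sum(xs) / len(xs)
    s = (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
    return [(x - m) / s for x in xs]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * ac for a, ac in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def standardized_coefficients(predictors, response):
    """Multiple-regression coefficients on standardized variables: each
    predictor's relative contribution on a common, unitless scale."""
    Z = [standardize(p) for p in predictors]
    y = standardize(response)
    n = len(Z)
    A = [[sum(a * b for a, b in zip(Z[i], Z[j])) for j in range(n)]
         for i in range(n)]
    rhs = [sum(a * b for a, b in zip(Z[i], y)) for i in range(n)]
    return solve(A, rhs)

# Hypothetical drivers of evaporative fraction: soil moisture dominates,
# vegetation matters less, radiation is irrelevant in this invented data.
soil = [0.1, 0.3, 0.2, 0.5, 0.4, 0.6, 0.7, 0.8]
veg  = [0.5, 0.4, 0.6, 0.5, 0.7, 0.6, 0.8, 0.7]
rad  = [300, 310, 305, 295, 320, 315, 290, 300]
ef   = [2 * s + 0.5 * v for s, v in zip(soil, veg)]
beta = standardized_coefficients([soil, veg, rad], ef)
```

    The fitted betas rank soil moisture above vegetation and assign (near-)zero weight to radiation, mirroring how the metric attributes relative influence to each driver.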

  20. Characterising risk - aggregated metrics: radiation and noise

    International Nuclear Information System (INIS)

    Passchier, W.

    1998-01-01

    The characterisation of risk is an important phase in the risk assessment - risk management process. From the multitude of risk attributes, a few have to be selected to obtain a risk characteristic or profile that is useful for risk management decisions and the implementation of protective measures. One way to reduce the number of attributes is aggregation. In the field of radiation protection such an aggregated metric is firmly established: effective dose. For protection against environmental noise, the Health Council of the Netherlands recently proposed a set of aggregated metrics for noise annoyance and sleep disturbance. The presentation will discuss similarities and differences between these two metrics and their practical limitations. The effective dose has proven its usefulness in designing radiation protection measures related to the level of risk associated with the radiation practice in question, given that implicit judgements on radiation-induced health effects are accepted. However, as the metric does not take into account the nature of the radiation practice, it is less useful in policy discussions on the benefits and harm of radiation practices. With respect to the noise exposure metric, only one effect is targeted (annoyance), and the differences between sources are explicitly taken into account. This should make the metric useful in policy discussions with respect to physical planning and siting problems. The metric proposed has significance only at a population level, and cannot be used as a predictor of individual risk. (author)
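
    The aggregation underlying effective dose is a weighted sum of equivalent organ doses, E = Σ_T w_T H_T, with tissue weighting factors that sum to one. A sketch with purely illustrative (not regulatory) weights and doses:

```python
def effective_dose(organ_doses, tissue_weights):
    """Effective dose E = sum over tissues T of w_T * H_T:
    equivalent organ doses aggregated with tissue weighting factors."""
    assert abs(sum(tissue_weights.values()) - 1.0) < 1e-9  # weights sum to 1
    return sum(tissue_weights[t] * organ_doses.get(t, 0.0)
               for t in tissue_weights)

# Illustrative weights for three tissues only, renormalised to sum to one;
# real tissue weighting factors are set by radiation-protection bodies.
weights = {"lung": 0.4, "stomach": 0.4, "skin": 0.2}
doses_mSv = {"lung": 2.0, "stomach": 1.0, "skin": 5.0}
E = effective_dose(doses_mSv, weights)  # 0.4*2 + 0.4*1 + 0.2*5 = 2.2 mSv
```

    The collapse of many organ doses into one number is exactly the aggregation step discussed above: convenient for protection decisions, but it discards the nature of the exposure.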

  1. 77 FR 12832 - Non-RTO/ISO Performance Metrics; Commission Staff Request Comments on Performance Metrics for...

    Science.gov (United States)

    2012-03-02

    ... Performance Metrics; Commission Staff Request Comments on Performance Metrics for Regions Outside of RTOs and... performance communicate about the benefits of RTOs and, where appropriate, (2) changes that need to be made to... common set of performance measures for markets both within and outside of ISOs/RTOs. As recommended by...

  2. Regional Sustainability: The San Luis Basin Metrics Project

    Science.gov (United States)

    There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute. Moreover, individual metrics may not capture all aspects of a system that are relevant to sust...

  3. Usability Metrics for Gamified E-learning Course: A Multilevel Approach

    Directory of Open Access Journals (Sweden)

    Aleksandra Sobodić

    2018-04-01

    This paper discusses the effect of a gamified learning system on students of the master's course on Web Design and Programming at the Faculty of Organization and Informatics. A new set of usability metrics was derived from the web-based learning usability, user experience, and instructional design literature and incorporated into a questionnaire consisting of three main categories: Usability, Educational Usability, and User Experience. The main contribution of this paper is the development and validation of a questionnaire for measuring the usability of a gamified e-learning course from the students' perspective. Usability practitioners can use the developed metrics with confidence when evaluating the design of a gamified e-learning course in order to improve students' engagement and motivation.

  4. Pazarlama Performans Ölçütleri: Bir Literatür Taraması (Marketing Metrics: A Literature Review)

    Directory of Open Access Journals (Sweden)

    Güngör HACIOĞLU

    2012-01-01

    Marketing's inability to measure its contribution to firm performance has led to a loss of status within the firm; consequently, the marketing function has recently come under increasing pressure to evaluate its performance and be accountable. In this context, the choice of appropriate metrics to measure marketing performance is debated by both marketing practitioners and scholars. The aim of this study is to review the literature on the metrics used to measure marketing performance and the importance attached to these metrics. In addition, the forces elevating the importance of marketing metrics, along with the difficulties of and criticisms levelled at measuring marketing performance, are explicated. Managerial applications and future research opportunities are also presented.

  5. Probabilistic metric spaces

    CERN Document Server

    Schweizer, B

    2005-01-01

    Topics include special classes of probabilistic metric spaces, topologies, and several related structures, such as probabilistic normed and inner-product spaces. 1983 edition, updated with 3 new appendixes. Includes 17 illustrations.

  6. Optimal colour quality of LED clusters based on memory colours.

    Science.gov (United States)

    Smet, Kevin; Ryckaert, Wouter R; Pointer, Michael R; Deconinck, Geert; Hanselaer, Peter

    2011-03-28

    The spectral power distributions of tri- and tetrachromatic clusters of Light-Emitting-Diodes, composed of simulated and commercially available LEDs, were optimized with a genetic algorithm to maximize the luminous efficacy of radiation and the colour quality as assessed by the memory colour quality metric developed by the authors. The trade-off between the colour quality as assessed by the memory colour metric and the luminous efficacy of radiation was investigated by calculating the Pareto optimal front using the NSGA-II genetic algorithm. Optimal peak wavelengths and spectral widths of the LEDs were derived, and over half of them were found to be close to Thornton's prime colours. The Pareto optimal fronts of real LED clusters were always found to be smaller than those of the simulated clusters. The effect of binning on designing a real LED cluster was investigated and was found to be quite large. Finally, a real LED cluster of commercially available AlGaInP, InGaN and phosphor white LEDs was optimized to obtain a higher score on the memory colour quality scale than its corresponding CIE reference illuminant.
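
    The trade-off analysis described here rests on Pareto optimality: a design stays on the front only if no other design is at least as good in both objectives. A minimal sketch with invented (efficacy, quality) pairs standing in for the LED-cluster designs:

```python
def pareto_front(points):
    """Non-dominated subset when both objectives are maximised
    (e.g. luminous efficacy of radiation vs. colour quality score)."""
    def dominated(p, q):  # True when q weakly dominates p
        return q != p and q[0] >= p[0] and q[1] >= p[1]
    return [p for p in points
            if not any(dominated(p, q) for q in points)]

# Hypothetical LED-cluster designs: (efficacy in lm/W, quality score)
designs = [(300, 60), (280, 75), (250, 82), (290, 70), (240, 80)]
front = pareto_front(designs)
```

    NSGA-II evolves a population towards exactly this kind of non-dominated set; the filter above just makes the dominance criterion concrete.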

  7. Uncertainty in BMP evaluation and optimization for watershed management

    Science.gov (United States)

    Chaubey, I.; Cibin, R.; Sudheer, K.; Her, Y.

    2012-12-01

    The use of computer simulation models has increased substantially to make watershed management decisions and to develop strategies for water quality improvements. These models are often used to evaluate the potential benefits of various best management practices (BMPs) for reducing losses of pollutants from source areas into receiving waterbodies. Similarly, the use of simulation models in optimizing the selection and placement of best management practices under single (maximization of crop production or minimization of pollutant transport) and multiple objective functions has increased recently. One limitation of the currently available assessment and optimization approaches is that the BMP strategies are considered deterministic. Uncertainties in input data (e.g. precipitation, streamflow, measured sediment, nutrient and pesticide losses, land use) and model parameters may result in considerable uncertainty in watershed response under various BMP options. We have developed and evaluated options to include uncertainty in BMP evaluation and optimization for watershed management. We have also applied these methods to evaluate uncertainty in ecosystem services from mixed land use watersheds. In this presentation, we will discuss methods to quantify uncertainties in BMP assessment and optimization solutions due to uncertainties in model inputs and parameters. We used a watershed model (the Soil and Water Assessment Tool, or SWAT) to simulate the hydrology and water quality in a mixed land use watershed located in the Midwest USA. The SWAT model was also used to represent the various BMPs in the watershed needed to improve water quality. SWAT model parameters, land use change parameters, and climate change parameters were considered uncertain. It was observed that model parameters, land use and climate changes resulted in considerable uncertainties in BMP performance in reducing P, N, and sediment loads. In addition, climate change scenarios also affected uncertainties in SWAT
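
    One common way to include such input and parameter uncertainty is Monte Carlo propagation: sample the uncertain quantities, run the model for each sample, and summarise the spread of the output. A toy sketch; the stand-in "model" and the distributions are invented, not SWAT:

```python
import random

def bmp_load_reduction(params):
    """Hypothetical stand-in for a watershed model run: fraction of the
    pollutant load removed by a BMP, as a function of uncertain inputs."""
    efficiency, rainfall = params
    return max(0.0, min(1.0, efficiency * (1.0 - 0.3 * rainfall)))

def monte_carlo(n=5000, seed=42):
    """Propagate input/parameter uncertainty through the model and
    summarise the spread of the BMP performance estimate."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n):
        efficiency = rng.gauss(0.6, 0.05)  # uncertain BMP efficiency
        rainfall = rng.uniform(0.5, 1.5)   # uncertain climate forcing
        outcomes.append(bmp_load_reduction((efficiency, rainfall)))
    outcomes.sort()
    # 5th percentile, median, 95th percentile of BMP performance
    return outcomes[int(0.05 * n)], outcomes[n // 2], outcomes[int(0.95 * n)]

low, median, high = monte_carlo()
```

    Reporting the interval (low, high) rather than a single deterministic number is the shift the abstract argues for: BMP performance becomes a distribution, not a point estimate.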

  8. Metric solution of a spinning mass

    International Nuclear Information System (INIS)

    Sato, H.

    1982-01-01

    Studies on a particular class of asymptotically flat and stationary metric solutions called the Kerr-Tomimatsu-Sato class are reviewed about its derivation and properties. For a further study, an almost complete list of the papers worked on the Tomimatsu-Sato metrics is given. (Auth.)

  9. On Nakhleh's metric for reduced phylogenetic networks

    OpenAIRE

    Cardona, Gabriel; Llabrés, Mercè; Rosselló, Francesc; Valiente Feruglio, Gabriel Alejandro

    2009-01-01

    We prove that Nakhleh’s metric for reduced phylogenetic networks is also a metric on the classes of tree-child phylogenetic networks, semibinary tree-sibling time consistent phylogenetic networks, and multilabeled phylogenetic trees. We also prove that it separates distinguishable phylogenetic networks. In this way, it becomes the strongest dissimilarity measure for phylogenetic networks available so far. Furthermore, we propose a generalization of that metric that separates arbitrary phyl...

  10. Beam angle optimization for intensity-modulated radiation therapy using a guided pattern search method

    International Nuclear Information System (INIS)

    Rocha, Humberto; Dias, Joana M; Ferreira, Brígida C; Lopes, Maria C

    2013-01-01

    Generally, the inverse planning of radiation therapy consists mainly of the fluence optimization. The beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of the IMRT plans, both to enhance organ sparing and to improve tumor coverage. However, in clinical practice, most of the time, beam directions continue to be manually selected by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam’s-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require a few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures the convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search since it allows searches away from the neighborhood of the current iterate. Beam’s-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem. (paper)
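
    The poll-step structure described above can be sketched in a few lines. Here an a priori score (playing the role of the beam's-eye-view dose metric) orders the poll directions so higher-scoring candidates are evaluated first, and the mesh is refined after an unsuccessful poll. Everything below is an invented toy, not the clinical BAO setup:

```python
def poll_step(f, x, fx, step, scores):
    """Opportunistic poll of the mesh neighbourhood: candidate moves are
    tried in decreasing order of an a priori score, and the first
    improving move is accepted."""
    moves = []
    for i in range(len(x)):
        for d in (step, -step):
            trial = x[:]
            trial[i] += d
            moves.append((scores(trial), trial))
    for _, trial in sorted(moves, key=lambda m: -m[0]):
        ft = f(trial)
        if ft < fx:
            return trial, ft, True
    return x, fx, False  # unsuccessful poll

def pattern_search(f, x0, scores, step=8.0, tol=0.5):
    x, fx = list(x0), f(x0)
    while step >= tol:
        x, fx, ok = poll_step(f, x, fx, step, scores)
        if not ok:
            step /= 2.0  # refine the mesh after an unsuccessful poll
    return x, fx

# Toy BAO-like problem: choose two beam angles (degrees) minimising an
# invented non-convex objective; the score prefers spread-out beams.
f = lambda a: ((a[0] - 40) ** 2 + (a[1] - 120) ** 2
               + 50 * abs((a[0] - a[1]) % 10 - 5))
score = lambda a: abs(a[0] - a[1])
angles, val = pattern_search(f, [0.0, 90.0], score)
```

    Trying high-score candidates first does not change what the poll step converges to, but it tends to find an improving direction with fewer expensive objective evaluations, which is the motivation for injecting the dosimetric scores.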

  11. Growth Modeling of Human Mandibles using Non-Euclidean Metrics

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen; Larsen, Rasmus; Wrobel, Mark

    2003-01-01

    From a set of 31 three-dimensional CT scans we model the temporal shape and size of the human mandible. Each anatomical structure is represented using 14851 semi-landmarks, and mapped into Procrustes tangent space. Exploratory subspace analyses are performed leading to linear models of mandible...... shape evolution in Procrustes space. The traditional variance analysis results in a one-dimensional growth model. However, working in a non-Euclidean metric results in a multimodal model with uncorrelated modes of biological variation. The applied non-Euclidean metric is governed by the correlation...... structure of the estimated noise in the data. The generative models are compared, and evaluated on the basis of a cross validation study. The new non-Euclidean analysis is completely data driven. It not only gives comparable results w.r.t. previous studies of the mean modelling error, but in addition...

  12. A comparison theorem of the Kobayashi metric and the Bergman metric on a class of Reinhardt domains

    International Nuclear Information System (INIS)

    Weiping Yin.

    1990-03-01

    A comparison theorem for the Kobayashi and Bergman metrics is given on a class of Reinhardt domains in C^n. In addition, we obtain a class of complete invariant Kaehler metrics for special cases of these domains. (author). 5 refs

  13. Using Activity Metrics for DEVS Simulation Profiling

    Directory of Open Access Journals (Sweden)

    Muzy A.

    2014-01-01

    Full Text Available Activity metrics can be used to profile DEVS models before and during the simulation. It is critical to get good activity metrics of models before and during their simulation. Having a means to compute the a priori activity of components (analytic activity) may be worthwhile when simulating a model (or parts of it) for the first time. Afterwards, during the simulation, the analytic activity can be corrected using the dynamic one. In this paper, we introduce the McCabe cyclomatic complexity metric (MCA) to compute analytic activity. Both static and simulation activity metrics have been implemented through a plug-in of the DEVSimPy (DEVS Simulator in Python) environment and applied to DEVS models.
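
A rough sketch of computing McCabe cyclomatic complexity from source code, here for Python via its `ast` module. The set of node types counted as decision points is a modelling choice of this sketch, not the paper's exact definition, and the `transition` function is an invented example.

```python
import ast

def cyclomatic_complexity(source):
    """McCabe cyclomatic complexity: 1 + number of decision points."""
    decision_nodes = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                      ast.ExceptHandler, ast.IfExp)
    tree = ast.parse(source)
    # ast.walk visits every node, including boolean operators inside tests.
    return 1 + sum(isinstance(n, decision_nodes) for n in ast.walk(tree))

src = """
def transition(state, event):
    if state == 'idle' and event == 'start':
        return 'running'
    for _ in range(3):
        if event == 'tick':
            return 'waiting'
    return state
"""
score = cyclomatic_complexity(src)
```

Here the two `if` statements, the `and`, and the `for` give 4 decision points, so the score is 5.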

  14. Tracker Performance Metric

    National Research Council Canada - National Science Library

    Olson, Teresa; Lee, Harry; Sanders, Johnnie

    2002-01-01

    .... We have developed the Tracker Performance Metric (TPM) specifically for this purpose. It was designed to measure the output performance, on a frame-by-frame basis, using its output position and quality...

  15. Metrication: An economic wake-up call for US industry

    Science.gov (United States)

    Carver, G. P.

    1993-03-01

    As the international standard of measurement, the metric system is one key to success in the global marketplace. International standards have become an important factor in international economic competition. Non-metric products are becoming increasingly unacceptable in world markets that favor metric products. Procurement is the primary federal tool for encouraging and helping U.S. industry to convert voluntarily to the metric system. Besides the perceived unwillingness of the customer, certain regulatory language, and certain legal definitions in some states, there are no major impediments to conversion of the remaining non-metric industries to metric usage. Instead, there are good reasons for changing, including an opportunity to rethink many industry standards and to take advantage of size standardization. Also, when the remaining industries adopt the metric system, they will come into conformance with federal agencies engaged in similar activities.

  16. Conformal and related changes of metric on the product of two almost contact metric manifolds.

    OpenAIRE

    Blair, D. E.

    1990-01-01

    This paper studies conformal and related changes of the product metric on the product of two almost contact metric manifolds. It is shown that if one factor is Sasakian, the other is not, but that locally the second factor is of the type studied by Kenmotsu. The results are more general and given in terms of trans-Sasakian, α-Sasakian and β-Kenmotsu structures.

  17. Extremal limits of the C metric: Nariai, Bertotti-Robinson, and anti-Nariai C metrics

    International Nuclear Information System (INIS)

    Dias, Oscar J.C.; Lemos, Jose P.S.

    2003-01-01

    In two previous papers we have analyzed the C metric in a background with a cosmological constant Λ, namely, the de Sitter (dS) C metric (Λ>0) and the anti-de Sitter (AdS) C metric (Λ<0). In this paper we analyze the extremal limits of these C metrics for Λ>0, Λ=0, and Λ<0. In the Nariai C metric (with topology dS 2 x S-tilde 2), to each point in the deformed two-sphere S-tilde 2 corresponds a dS 2 spacetime, except for one point which corresponds to a dS 2 spacetime with an infinite straight strut or string. There are other important new features that appear. One expects that the solutions found in this paper are unstable and decay into a slightly nonextreme black hole pair accelerated by a strut or by strings. Moreover, the Euclidean version of these solutions mediates the quantum process of black hole pair creation that accompanies the decay of the dS and AdS spaces.

  18. Graev metrics on free products and HNN extensions

    DEFF Research Database (Denmark)

    Slutsky, Konstantin

    2014-01-01

    We give a construction of two-sided invariant metrics on free products (possibly with amalgamation) of groups with two-sided invariant metrics and, under certain conditions, on HNN extensions of such groups. Our approach is similar to Graev's construction of metrics on free groups over pointed...

  19. Validation of Metrics for Collaborative Systems

    OpenAIRE

    Ion IVAN; Cristian CIUREA

    2008-01-01

    This paper describes the new concepts of collaborative systems metrics validation. It defines the quality characteristics of collaborative systems, proposes a metric to estimate the quality level of collaborative systems, and reports measurements of collaborative systems quality performed using specially designed software.

  20. g-Weak Contraction in Ordered Cone Rectangular Metric Spaces

    Directory of Open Access Journals (Sweden)

    S. K. Malhotra

    2013-01-01

    Full Text Available We prove some common fixed-point theorems for the ordered g-weak contractions in cone rectangular metric spaces without assuming the normality of cone. Our results generalize some recent results from cone metric and cone rectangular metric spaces into ordered cone rectangular metric spaces. Examples are provided which illustrate the results.

  1. SU-E-J-155: Automatic Quantitative Decision Making Metric for 4DCT Image Quality

    Energy Technology Data Exchange (ETDEWEB)

    Kiely, J Blanco; Olszanski, A; Both, S; White, B [University of Pennsylvania, Philadelphia, PA (United States); Low, D [Deparment of Radiation Oncology, University of California Los Angeles, Los Angeles, CA (United States)

    2015-06-15

    Purpose: To develop a quantitative decision making metric for automatically detecting irregular breathing using a large patient population that received phase-sorted 4DCT. Methods: This study employed two patient cohorts. Cohort#1 contained 256 patients who received a phase-sorted 4DCT. Cohort#2 contained 86 patients who received three weekly phase-sorted 4DCT scans. A previously published technique used a single abdominal surrogate to calculate the ratio of extreme inhalation tidal volume to normal inhalation tidal volume, referred to as the κ metric. Since a single surrogate is standard for phase-sorted 4DCT in radiation oncology clinical practice, tidal volume was not quantified. Without tidal volume, the absolute κ metric could not be determined, so a relative κ (κrel) metric was defined based on the measured surrogate amplitude instead of tidal volume. Receiver operating characteristic (ROC) curves were used to quantitatively determine the optimal cutoff value (jk) and efficiency cutoff value (τk) of κrel to automatically identify irregular breathing that would reduce the image quality of phase-sorted 4DCT. Discriminatory accuracy (area under the ROC curve) of κrel was calculated by a trapezoidal numeric integration technique. Results: The discriminatory accuracy of κrel was found to be 0.746. The key values of jk and τk were calculated to be 1.45 and 1.72 respectively. For values of κrel such that jk≤κrel≤τk, the decision to reacquire the 4DCT would be at the discretion of the physician. This accounted for only 11.9% of the patients in this study. The magnitude of κrel remained consistent over 3 weeks for 73% of the patients in cohort#2. Conclusion: The decision making metric, κrel, was shown to be an accurate classifier of irregular breathing patients in a large patient population. This work provided an automatic quantitative decision making metric to quickly and accurately assess the extent to which irregular breathing is occurring during phase-sorted 4DCT.
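
The ROC construction and trapezoidal AUC estimation used in this record can be sketched as follows; the scores and labels are invented toy values, not study data.

```python
def roc_auc(scores, labels):
    """ROC curve via a threshold sweep; AUC by trapezoidal integration."""
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    pos = sum(labels)
    neg = len(labels) - pos
    tpr, fpr, tp, fp = [0.0], [0.0], 0, 0
    for _, y in pairs:          # lower the threshold one case at a time
        if y:
            tp += 1
        else:
            fp += 1
        tpr.append(tp / pos)
        fpr.append(fp / neg)
    # Trapezoidal numeric integration of TPR over FPR.
    return sum((fpr[i + 1] - fpr[i]) * (tpr[i + 1] + tpr[i]) / 2
               for i in range(len(fpr) - 1))

# Toy surrogate-amplitude ratios: irregular breathers (label 1) tend high.
scores = [1.9, 1.7, 1.6, 1.5, 1.3, 1.2, 1.1, 1.0]
labels = [1,   1,   0,   1,   0,   0,   1,   0]
auc = roc_auc(scores, labels)
```

Cutoffs such as jk and τk would then be chosen from the same sweep, e.g. by maximizing a sensitivity/specificity trade-off.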

  2. SU-E-J-155: Automatic Quantitative Decision Making Metric for 4DCT Image Quality

    International Nuclear Information System (INIS)

    Kiely, J Blanco; Olszanski, A; Both, S; White, B; Low, D

    2015-01-01

    Purpose: To develop a quantitative decision making metric for automatically detecting irregular breathing using a large patient population that received phase-sorted 4DCT. Methods: This study employed two patient cohorts. Cohort#1 contained 256 patients who received a phase-sorted 4DCT. Cohort#2 contained 86 patients who received three weekly phase-sorted 4DCT scans. A previously published technique used a single abdominal surrogate to calculate the ratio of extreme inhalation tidal volume to normal inhalation tidal volume, referred to as the κ metric. Since a single surrogate is standard for phase-sorted 4DCT in radiation oncology clinical practice, tidal volume was not quantified. Without tidal volume, the absolute κ metric could not be determined, so a relative κ (κrel) metric was defined based on the measured surrogate amplitude instead of tidal volume. Receiver operating characteristic (ROC) curves were used to quantitatively determine the optimal cutoff value (jk) and efficiency cutoff value (τk) of κrel to automatically identify irregular breathing that would reduce the image quality of phase-sorted 4DCT. Discriminatory accuracy (area under the ROC curve) of κrel was calculated by a trapezoidal numeric integration technique. Results: The discriminatory accuracy of κrel was found to be 0.746. The key values of jk and τk were calculated to be 1.45 and 1.72 respectively. For values of κrel such that jk≤κrel≤τk, the decision to reacquire the 4DCT would be at the discretion of the physician. This accounted for only 11.9% of the patients in this study. The magnitude of κrel remained consistent over 3 weeks for 73% of the patients in cohort#2. Conclusion: The decision making metric, κrel, was shown to be an accurate classifier of irregular breathing patients in a large patient population. This work provided an automatic quantitative decision making metric to quickly and accurately assess the extent to which irregular breathing is occurring during phase-sorted 4DCT.

  3. The dynamics of metric-affine gravity

    International Nuclear Information System (INIS)

    Vitagliano, Vincenzo; Sotiriou, Thomas P.; Liberati, Stefano

    2011-01-01

    Highlights: → The role and the dynamics of the connection in metric-affine theories is explored. → The most general second order action does not lead to a dynamical connection. → Including higher order invariants excites new degrees of freedom in the connection. → f(R) actions are also discussed and shown to be a non-representative class. - Abstract: Metric-affine theories of gravity provide an interesting alternative to general relativity: in such an approach, the metric and the affine (not necessarily symmetric) connection are independent quantities. Furthermore, the action should include covariant derivatives of the matter fields, with the covariant derivative naturally defined using the independent connection. As a result, in metric-affine theories a direct coupling involving matter and connection is also present. The role and the dynamics of the connection in such theories is explored. We employ power counting in order to construct the action and search for the minimal requirements it should satisfy for the connection to be dynamical. We find that for the most general action containing lower order invariants of the curvature and the torsion the independent connection does not carry any dynamics. It actually reduces to the role of an auxiliary field and can be completely eliminated algebraically in favour of the metric and the matter field, introducing extra interactions with respect to general relativity. However, we also show that including higher order terms in the action radically changes this picture and excites new degrees of freedom in the connection, making it (or parts of it) dynamical. Constructing actions that constitute exceptions to this rule requires significant fine-tuning and/or extra a priori constraints on the connection. We also consider f(R) actions as a particular example in order to show that they constitute a distinct class of metric-affine theories with special properties, and as such they cannot be used as representative toy models.

  4. The definitive guide to IT service metrics

    CERN Document Server

    McWhirter, Kurt

    2012-01-01

    Used just as they are, the metrics in this book will bring many benefits to both the IT department and the business as a whole. Details of the attributes of each metric are given, enabling you to make the right choices for your business. You may prefer and are encouraged to design and create your own metrics to bring even more value to your business - this book will show you how to do this, too.

  5. Physiologically based pharmacokinetic rat model for methyl tertiary-butyl ether; comparison of selected dose metrics following various MTBE exposure scenarios used for toxicity and carcinogenicity evaluation

    International Nuclear Information System (INIS)

    Borghoff, Susan J.; Parkinson, Horace; Leavens, Teresa L.

    2010-01-01

    There are a number of cancer and toxicity studies that have been carried out to assess the hazard from methyl tertiary-butyl ether (MTBE) exposure via inhalation and oral administration. MTBE has been detected in surface as well as ground water supplies, which emphasizes the need to assess the risk from exposure via drinking water contamination. Recently, an updated rat physiologically based pharmacokinetic (PBPK) model was published that relied on a description of MTBE and its metabolite tertiary-butyl alcohol (TBA) binding to α2u-globulin, a male rat-specific protein. This model was used to predict concentrations of MTBE and TBA in the kidney, a target tissue in the male rat. The model can now be used not only to evaluate route-to-route extrapolation issues concerning MTBE exposures but also as a means of comparing potential dose metrics that may provide insight into differences in biological responses observed in rats following different routes of MTBE exposure. The objective of this study was to use this model to evaluate the dosimetry of MTBE and TBA in rats following different exposure scenarios used to evaluate the toxicity and carcinogenicity of MTBE, and to compare various dose metrics under these different conditions. Model simulations suggested that although inhalation and drinking water exposures show a similar pattern of MTBE and TBA exposure in the blood and kidney (i.e. concentration-time profiles), the total blood and kidney levels following exposure to 7.5 mg/ml MTBE in the drinking water for 90 days are in the same range as those following administration of an oral dose of 1000 mg/kg MTBE. Evaluation of the dose metrics also supports that a high oral bolus dose (i.e. 1000 mg/kg MTBE) results in a greater percentage of the dose exhaled as MTBE, with a lower percentage metabolized to TBA, as compared to a dose of MTBE delivered over a longer period of time, as in the case of drinking water.
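
A toy one-compartment sketch (emphatically not the published PBPK model) of why a bolus and a slow, drinking-water-like intake of the same total dose can give similar AUC but very different peak concentrations; the elimination rate and dose values are arbitrary.

```python
def simulate(dose_rate, hours, ke=0.3, dt=0.01):
    """One-compartment kinetics dC/dt = input(t) - ke*C (unit volume).
    Forward-Euler integration; returns the dose metrics (Cmax, AUC)."""
    c, cmax, auc, t = 0.0, 0.0, 0.0, 0.0
    while t < hours:
        c += (dose_rate(t) - ke * c) * dt
        cmax = max(cmax, c)
        auc += c * dt
        t += dt
    return cmax, auc

total = 100.0  # same total dose in both scenarios (arbitrary units)
# Bolus: the whole dose enters in the first time step.
bolus = simulate(lambda t: total / 0.01 if t < 0.01 else 0.0, 48.0)
# Drinking-water-like: the dose trickles in over 12 hours.
sipped = simulate(lambda t: total / 12.0 if t < 12.0 else 0.0, 48.0)
```

Both runs end with AUC near dose/ke, while Cmax differs severalfold, which is the qualitative point the abstract makes about bolus versus drinking water dosing.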

  6. NASA education briefs for the classroom. Metrics in space

    Science.gov (United States)

    The use of metric measurement in space is summarized for classroom use. Advantages of the metric system over the English measurement system are described. Some common metric units are defined, as are special units for astronomical study. International system unit prefixes and a conversion table of metric/English units are presented. Questions and activities for the classroom are recommended.

  7. Enhancing Authentication Models Characteristic Metrics via ...

    African Journals Online (AJOL)

    In this work, we derive the universal characteristic metrics set for authentication models based on security, usability and design issues. We then compute the probability of the occurrence of each characteristic metrics in some single factor and multifactor authentication models in order to determine the effectiveness of these ...

  8. Validation of Metrics for Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2008-01-01

    Full Text Available This paper describes the new concepts of collaborative systems metrics validation. It defines the quality characteristics of collaborative systems, proposes a metric to estimate the quality level of collaborative systems, and reports measurements of collaborative systems quality performed using specially designed software.

  9. Effective dose efficiency: an application-specific metric of quality and dose for digital radiography

    Energy Technology Data Exchange (ETDEWEB)

    Samei, Ehsan; Ranger, Nicole T; Dobbins, James T III; Ravin, Carl E, E-mail: samei@duke.edu [Carl E Ravin Advanced Imaging Laboratories, Department of Radiology (United States)

    2011-08-21

    The detective quantum efficiency (DQE) and the effective DQE (eDQE) are relevant metrics of image quality for digital radiography detectors and systems, respectively. The current study further extends the eDQE methodology to technique optimization using a new metric of the effective dose efficiency (eDE), reflecting both the image quality as well as the effective dose (ED) attributes of the imaging system. Using phantoms representing pediatric, adult and large adult body habitus, image quality measurements were made at 80, 100, 120 and 140 kVp using the standard eDQE protocol and exposures. ED was computed using Monte Carlo methods. The eDE was then computed as a ratio of image quality to ED for each of the phantom/spectral conditions. The eDQE and eDE results showed the same trends across tube potential, with 80 kVp yielding the highest values and 120 kVp yielding the lowest. The eDE results for the pediatric phantom were markedly lower than the results for the adult phantom at spatial frequencies lower than 1.2-1.7 mm^-1, primarily due to a correspondingly higher value of ED per entrance exposure. The relative performance for the adult and large adult phantoms was generally comparable but affected by kVp. The eDE results for the large adult configuration were lower than the eDE results for the adult phantom, across all spatial frequencies (120 and 140 kVp) and at spatial frequencies greater than 1.0 mm^-1 (80 and 100 kVp). Demonstrated for chest radiography, the eDE shows promise as an application-specific metric of imaging performance, reflective of body habitus and radiographic technique, with utility for radiography protocol assessment and optimization.
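
The eDE is, in essence, image quality per unit effective dose. A schematic calculation with invented eDQE and ED numbers (chosen only to mimic the reported trend that 80 kVp scores highest):

```python
# Hypothetical eDQE curves (vs spatial frequency) and effective doses.
freqs = [0.5, 1.0, 1.5, 2.0]                 # spatial frequency, mm^-1
eDQE = {80:  [0.30, 0.22, 0.15, 0.09],
        120: [0.28, 0.20, 0.13, 0.08]}       # per kVp, illustrative only
ED = {80: 0.040, 120: 0.055}                 # mSv per acquisition, illustrative

def ede(kvp):
    """Effective dose efficiency: image quality per unit effective dose."""
    return [q / ED[kvp] for q in eDQE[kvp]]

# Pick the technique with the highest low-frequency eDE.
best_kvp = max(ED, key=lambda k: ede(k)[0])
```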

  10. Evaluating and optimizing the NERSC workload on Knights Landing

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, T; Cook, B; Deslippe, J; Doerfler, D; Friesen, B; He, Y; Kurth, T; Koskela, T; Lobet, M; Malas, T; Oliker, L; Ovsyannikov, A; Sarje, A; Vay, JL; Vincenti, H; Williams, S; Carrier, P; Wichmann, N; Wagner, M; Kent, P; Kerr, C; Dennis, J

    2017-01-30

    NERSC has partnered with 20 representative application teams to evaluate performance on the Xeon-Phi Knights Landing architecture and develop an application-optimization strategy for the greater NERSC workload on the recently installed Cori system. In this article, we present early case studies and summarized results from a subset of the 20 applications highlighting the impact of important architecture differences between the Xeon-Phi and traditional Xeon processors. We summarize the status of the applications and describe the greater optimization strategy that has formed.

  11. Understanding Acceptance of Software Metrics--A Developer Perspective

    Science.gov (United States)

    Umarji, Medha

    2009-01-01

    Software metrics are measures of software products and processes. Metrics are widely used by software organizations to help manage projects, improve product quality and increase efficiency of the software development process. However, metrics programs tend to have a high failure rate in organizations, and developer pushback is one of the sources…

  12. Emission metrics for quantifying regional climate impacts of aviation

    Directory of Open Access Journals (Sweden)

    M. T. Lund

    2017-07-01

    Full Text Available This study examines the impacts of emissions from aviation in six source regions on global and regional temperatures. We consider the NOx-induced impacts on ozone and methane, aerosols and contrail-cirrus formation, and calculate the global and regional emission metrics: global warming potential (GWP), global temperature change potential (GTP) and absolute regional temperature change potential (ARTP). The GWPs and GTPs vary by a factor of 2-4 between source regions. We find the highest aviation aerosol metric values for South Asian emissions, while contrail-cirrus metrics are higher for Europe and North America, where contrail formation is prevalent, and South America plus Africa, where the optical depth is large once contrails form. The ARTPs illustrate important differences in the latitudinal patterns of radiative forcing (RF) and temperature response: the temperature response in a given latitude band can be considerably stronger than suggested by the RF in that band, also emphasizing the importance of large-scale circulation impacts. To place our metrics in context, we quantify temperature change in four broad latitude bands following 1 year of emissions from present-day aviation, including CO2. Aviation over North America and Europe causes the largest net warming impact in all latitude bands, reflecting the higher air traffic activity in these regions. Contrail cirrus gives the largest warming contribution in the short term, but remains important at about 15 % of the CO2 impact in several regions even after 100 years. Our results also illustrate both the short- and long-term impacts of CO2: while CO2 becomes dominant on longer timescales, it also gives a notable warming contribution already 20 years after the emission. Our emission metrics can be further used to estimate regional temperature change under alternative aviation emission scenarios. A first evaluation of the ARTP in the context of aviation suggests that further work to account
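
For a species with a single exponential lifetime, the absolute GWP underlying metrics like these has a simple closed form, AGWP(H) = A·τ·(1 − e^(−H/τ)). The two species below are hypothetical, with made-up radiative efficiencies and lifetimes; CO2, whose atmospheric decay is multi-exponential, is deliberately not modelled here.

```python
import math

def agwp(A, tau, H):
    """Absolute global warming potential of a pulse emission for a species
    with radiative efficiency A (W m-2 kg-1) and exponential lifetime tau
    (years), integrated over the time horizon H (years)."""
    return A * tau * (1.0 - math.exp(-H / tau))

# Two hypothetical species; a GWP-like metric is the ratio of their AGWPs.
gwp_like = (agwp(A=1.3e-4, tau=12.0, H=100.0) /
            agwp(A=1.0e-5, tau=120.0, H=100.0))
```

Note how the short-lived species' AGWP saturates well before H = 100 years, while the long-lived one is still accumulating, the same short- versus long-term contrast the abstract draws between contrail cirrus and CO2.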

  13. Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics

    Directory of Open Access Journals (Sweden)

    Kang Rui

    2016-06-01

    Full Text Available In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate for the conservatism in the estimations of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom; otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.
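
The duality issue can be made concrete with a two-state Dempster-Shafer example (the mass values are illustrative): belief in an event and belief in its complement need not sum to one, unlike probability.

```python
def belief(event, masses):
    """Dempster-Shafer belief: total mass of focal sets contained in event."""
    return sum(m for focal, m in masses.items() if focal <= event)

# Frame of discernment for a component: {'up', 'down'}.
masses = {frozenset({'up'}): 0.6,
          frozenset({'down'}): 0.1,
          frozenset({'up', 'down'}): 0.3}   # mass left uncommitted

bel_up = belief(frozenset({'up'}), masses)
bel_down = belief(frozenset({'down'}), masses)
duality_gap = 1.0 - (bel_up + bel_down)     # > 0: duality axiom not satisfied
```

A metric satisfying duality (e.g. a probability measure, where the uncommitted mass is zero) would make this gap vanish.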

  14. Construction of self-dual codes in the Rosenbloom-Tsfasman metric

    Science.gov (United States)

    Krisnawati, Vira Hari; Nisa, Anzi Lina Ukhtin

    2017-12-01

    Linear codes are very basic and useful in coding theory. Generally, a linear code is a code over a finite field in the Hamming metric. Among the most interesting families of codes, the family of self-dual codes is a very important one, because they are among the best known error-correcting codes. The concept of the Hamming metric has been developed into the Rosenbloom-Tsfasman metric (RT-metric). The inner product in the RT-metric is different from the Euclidean inner product that is used to define duality in the Hamming metric. Most of the codes which are self-dual in the Hamming metric are not so in the RT-metric. Moreover, the generator matrix is very important for constructing a code because it contains a basis of the code. Therefore, in this paper, we give some theorems and methods to construct self-dual codes in the RT-metric by considering properties of the inner product and the generator matrix. We also illustrate some examples for every kind of construction.
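
A minimal sketch of how the RT metric differs from the Hamming metric at the level of weights (the duality construction itself is not reproduced here): the RT weight of a vector is the largest position holding a nonzero coordinate, not the count of nonzero coordinates.

```python
def rt_weight(v):
    """Rosenbloom-Tsfasman weight: the largest index (1-based) holding a
    nonzero coordinate; 0 for the zero vector."""
    return max((i + 1 for i, x in enumerate(v) if x != 0), default=0)

def hamming_weight(v):
    """Hamming weight: the number of nonzero coordinates."""
    return sum(1 for x in v if x != 0)

# Two binary vectors; distance = weight of their GF(2) difference.
u, w = [1, 0, 1, 0, 0], [1, 0, 0, 0, 0]
diff = [(a - b) % 2 for a, b in zip(u, w)]
rt_d = rt_weight(diff)        # RT distance between u and w
ham_d = hamming_weight(diff)  # Hamming distance, for comparison
```

Here the vectors differ in a single coordinate (Hamming distance 1), yet the RT distance is 3 because that coordinate sits in position 3.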

  15. Chaotic inflation with metric and matter perturbations

    International Nuclear Information System (INIS)

    Feldman, H.A.; Brandenberger, R.H.

    1989-01-01

    A perturbative scheme to analyze the evolution of both metric and scalar field perturbations in an expanding universe is developed. The scheme is applied to study chaotic inflation with initial metric and scalar field perturbations present. It is shown that initial gravitational perturbations with wavelength smaller than the Hubble radius rapidly decay. The metric simultaneously picks up small perturbations determined by the matter inhomogeneities. Both are frozen in once the wavelength exceeds the Hubble radius. (orig.)

  16. Phantom metrics with Killing spinors

    Directory of Open Access Journals (Sweden)

    W.A. Sabra

    2015-11-01

    Full Text Available We study metric solutions of Einstein–anti-Maxwell theory admitting Killing spinors. The analogue of the IWP metric which admits a space-like Killing vector is found and is expressed in terms of a complex function satisfying the wave equation in flat (2+1)-dimensional space–time. As examples, electric and magnetic Kasner spaces are constructed by allowing the solution to depend only on the time coordinate. Euclidean solutions are also presented.

  17. Metric-based method of software requirements correctness improvement

    Directory of Open Access Journals (Sweden)

    Yaremchuk Svitlana

    2017-01-01

    Full Text Available The work highlights the most important principles of software reliability management (SRM). The SRM concept forms a basis for developing a method of requirements correctness improvement. The method assumes that complicated requirements contain more actual and potential design faults/defects. The method applies a new metric to evaluate requirements complexity and a double sorting technique that evaluates the priority and complexity of each particular requirement. The method improves requirements correctness by identifying a higher number of defects with restricted resources. Practical application of the proposed method in the course of requirements review yielded a tangible technical and economic effect.
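
The double sorting step might look like the sketch below; the requirement IDs, priorities and complexity scores are invented, and the complexity metric itself is assumed to have been computed elsewhere.

```python
# Hypothetical requirements with a priority (1 = highest) and a complexity
# score from some requirements-complexity metric.
reqs = [
    {"id": "R1", "priority": 2, "complexity": 14},
    {"id": "R2", "priority": 1, "complexity": 9},
    {"id": "R3", "priority": 1, "complexity": 21},
    {"id": "R4", "priority": 3, "complexity": 30},
]

# Double sorting: highest priority first; within a priority, most complex
# first, since complicated requirements are assumed to hide more defects.
review_order = sorted(reqs, key=lambda r: (r["priority"], -r["complexity"]))
order_ids = [r["id"] for r in review_order]
```

With a restricted review budget, one simply reviews a prefix of `review_order`.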

  18. Evaluation and optimization of LWR fuel cycles

    International Nuclear Information System (INIS)

    Akbas, T.; Zabunoglu, O.; Tombakoglu, M.

    2001-01-01

    There are several options in the back-end of the nuclear fuel cycle. Discharge burn-up, the length of the interim storage period, the choice of direct disposal or recycling, and the method of reprocessing in case of recycling define the fuel cycle scenarios. These options have been evaluated from the viewpoint of some tangible factors (fuel cycle cost, natural uranium requirement, decay heat of high level waste, radiological ingestion and inhalation hazards) and intangible factors (technological feasibility, nonproliferation aspect, etc.). Neutronic parameters are calculated using the versatile fuel depletion code ORIGEN2.1. A program is developed for the calculation of cost-related parameters. The analytic hierarchy process is used to transform the intangible factors into tangible ones. All these tangible and intangible factors are then incorporated into a form that is suitable for goal programming, a linear optimization technique used to determine the optimal option among alternatives. According to the specified objective function and constraints, the optimal fuel cycle scenario is determined using GPSYS (a linear programming software) as a goal programming tool. In addition, a sensitivity analysis is performed for some selected important parameters.
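
A toy weighted-goal-programming selection over invented scenario scores (the study itself uses ORIGEN2.1 outputs and GPSYS; nothing below reproduces those numbers). Each scenario is penalised by the weighted amount it exceeds each goal.

```python
# Illustrative back-end scenarios scored on two tangible factors
# (normalised cost index and natural-uranium requirement).
scenarios = {
    "direct disposal":  {"cost": 1.00, "natU": 1.00},
    "single recycle":   {"cost": 1.08, "natU": 0.85},
    "multiple recycle": {"cost": 1.25, "natU": 0.70},
}
goals = {"cost": 1.00, "natU": 0.80}      # target levels for each factor
weights = {"cost": 1.0, "natU": 2.0}      # relative importance of goals

def deviation(score):
    # Goal programming penalises (here) only over-goal deviations.
    return sum(weights[k] * max(0.0, score[k] - goals[k]) for k in goals)

best = min(scenarios, key=lambda s: deviation(scenarios[s]))
```

A real goal program would solve for continuous decision variables with an LP solver; enumerating a handful of discrete scenarios, as here, is the degenerate case.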

  19. An MILP-Based Cross-Layer Optimization for a Multi-Reader Arbitration in the UHF RFID System

    Science.gov (United States)

    Choi, Jinchul; Lee, Chaewoo

    2011-01-01

    In RFID systems, the performance of each reader such as interrogation range and tag recognition rate may suffer from interferences from other readers. Since the reader interference can be mitigated by output signal power control, spectral and/or temporal separation among readers, the system performance depends on how to adapt the various reader arbitration metrics such as time, frequency, and output power to the system environment. However, complexity and difficulty of the optimization problem increase with respect to the variety of the arbitration metrics. Thus, most proposals in previous study have been suggested to primarily prevent the reader collision with consideration of one or two arbitration metrics. In this paper, we propose a novel cross-layer optimization design based on the concept of combining time division, frequency division, and power control not only to solve the reader interference problem, but also to achieve the multiple objectives such as minimum interrogation delay, maximum reader utilization, and energy efficiency. Based on the priority of the multiple objectives, our cross-layer design optimizes the system sequentially by means of the mixed-integer linear programming. In spite of the multi-stage optimization, the optimization design is formulated as a concise single mathematical form by properly assigning a weight to each objective. Numerical results demonstrate the effectiveness of the proposed optimization design. PMID:22163743

  20. An MILP-Based Cross-Layer Optimization for a Multi-Reader Arbitration in the UHF RFID System

    Directory of Open Access Journals (Sweden)

    Chaewoo Lee

    2011-02-01

    Full Text Available In RFID systems, the performance of each reader such as interrogation range and tag recognition rate may suffer from interferences from other readers. Since the reader interference can be mitigated by output signal power control, spectral and/or temporal separation among readers, the system performance depends on how to adapt the various reader arbitration metrics such as time, frequency, and output power to the system environment. However, complexity and difficulty of the optimization problem increase with respect to the variety of the arbitration metrics. Thus, most proposals in previous study have been suggested to primarily prevent the reader collision with consideration of one or two arbitration metrics. In this paper, we propose a novel cross-layer optimization design based on the concept of combining time division, frequency division, and power control not only to solve the reader interference problem, but also to achieve the multiple objectives such as minimum interrogation delay, maximum reader utilization, and energy efficiency. Based on the priority of the multiple objectives, our cross-layer design optimizes the system sequentially by means of the mixed-integer linear programming. In spite of the multi-stage optimization, the optimization design is formulated as a concise single mathematical form by properly assigning a weight to each objective. Numerical results demonstrate the effectiveness of the proposed optimization design.
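
The "concise single mathematical form" idea, folding prioritised objectives into one weighted expression so that a higher-priority objective can never be traded away for a lower-priority one, can be sketched with invented candidate reader schedules (the actual MILP over time, frequency and power is far richer).

```python
# Candidate reader schedules with (interrogation delay, reader utilisation,
# energy use) -- illustrative values only.
candidates = [
    {"delay": 12, "util": 0.70, "energy": 5.0},
    {"delay": 10, "util": 0.60, "energy": 4.0},
    {"delay": 10, "util": 0.65, "energy": 6.0},
]

# Weights separated by orders of magnitude emulate sequential optimisation:
# minimise delay first, then maximise utilisation, then minimise energy.
def objective(c):
    return 1e6 * c["delay"] - 1e3 * c["util"] + 1.0 * c["energy"]

best = min(candidates, key=objective)
```

Both delay-10 candidates beat the delay-12 one regardless of their other scores; between them, the higher utilisation wins even at higher energy cost.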

  1. Multipeak Mean Based Optimized Histogram Modification Framework Using Swarm Intelligence for Image Contrast Enhancement

    Directory of Open Access Journals (Sweden)

    P. Babu

    2015-01-01

    Full Text Available A novel approach, the multipeak mean based optimized histogram modification framework (MMOHM), is introduced for enhancing the contrast while preserving essential details of any given grayscale or colour image. The basic idea of this technique is the calculation of multiple peaks (local maxima) from the original histogram. The mean value of the multiple peaks is computed and the input image’s histogram is segmented into two subhistograms based on this multipeak mean (mmean) value. Then, a bicriteria optimization problem is formulated and the subhistograms are modified by selecting optimal contrast enhancement parameters. While formulating the enhancement parameters, particle swarm optimization is employed to find their optimal values. Finally, the union of the modified subhistograms produces a contrast-enhanced and detail-preserved output image. This mechanism enhances the contrast of the input image better than existing contemporary HE methods. The performance of the proposed method is well supported by contrast enhancement quantitative metrics such as discrete entropy, the natural image quality evaluator, and absolute mean brightness error.
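
The multipeak-mean split can be sketched directly (toy 16-bin histogram; the PSO search for the enhancement parameters is omitted): find the local maxima, average their gray levels, and cut the histogram there.

```python
def multipeak_mean(hist):
    """Mean gray level of the local maxima (peaks) of a histogram."""
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]]
    return sum(peaks) / len(peaks)

# Toy 16-bin histogram with peaks at bins 3, 8 and 12.
hist = [1, 2, 5, 9, 4, 2, 3, 6, 8, 5, 4, 6, 7, 3, 2, 1]
m = multipeak_mean(hist)
lower = hist[: int(m) + 1]   # sub-histogram of gray levels <= mmean
upper = hist[int(m) + 1 :]   # sub-histogram of gray levels  > mmean
```

Each sub-histogram would then be equalized or modified with its own optimized parameters before recombination.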

  2. Improving alignment in Tract-based spatial statistics: evaluation and optimization of image registration.

    Science.gov (United States)

    de Groot, Marius; Vernooij, Meike W; Klein, Stefan; Ikram, M Arfan; Vos, Frans M; Smith, Stephen M; Niessen, Wiro J; Andersson, Jesper L R

    2013-08-01

Anatomical alignment in neuroimaging studies is of such importance that considerable effort is put into improving the registration used to establish spatial correspondence. Tract-based spatial statistics (TBSS) is a popular method for comparing diffusion characteristics across subjects. TBSS establishes spatial correspondence using a combination of nonlinear registration and a "skeleton projection" that may break topological consistency of the transformed brain images. We therefore investigated the feasibility of replacing the two-stage registration-projection procedure in TBSS with a single, regularized, high-dimensional registration. To optimize registration parameters and to evaluate registration performance in diffusion MRI, we designed an evaluation framework that uses native space probabilistic tractography for 23 white matter tracts, and quantifies tract similarity across subjects in standard space. We optimized parameters for two registration algorithms on two diffusion datasets of different quality. We investigated reproducibility of the evaluation framework, and of the optimized registration algorithms. Next, we compared registration performance of the regularized registration methods and TBSS. Finally, feasibility and effect of incorporating the improved registration in TBSS were evaluated in an example study. The evaluation framework was highly reproducible for both algorithms (R² = 0.993; 0.931). The optimal registration parameters depended on the quality of the dataset in a graded and predictable manner. At optimal parameters, both algorithms outperformed the registration of TBSS, showing feasibility of adopting such approaches in TBSS. This was further confirmed in the example experiment. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Software metrics to improve software quality in HEP

    International Nuclear Information System (INIS)

    Lancon, E.

    1996-01-01

The ALEPH reconstruction program's maintainability has been evaluated with a CASE tool implementing an ISO-standard methodology based on software metrics. It has been found that the overall quality of the program is good and has shown improvement over the past five years. Frequently modified routines exhibit lower quality; most bugs were located in routines with particularly low quality. Implementing quality criteria from the beginning could have avoided the time lost on bug corrections. (author)

  4. On Optimal Geographical Caching in Heterogeneous Cellular Networks

    NARCIS (Netherlands)

    Serbetci, Berksan; Goseling, Jasper

    2017-01-01

    In this work we investigate optimal geographical caching in heterogeneous cellular networks where different types of base stations (BSs) have different cache capacities. Users request files from a content library according to a known probability distribution. The performance metric is the total hit

  5. Invariant metric for nonlinear symplectic maps

    Indian Academy of Sciences (India)

    In this paper, we construct an invariant metric in the space of homogeneous polynomials of a given degree (≥ 3). The homogeneous polynomials specify a nonlinear symplectic map which in turn represents a Hamiltonian system. By minimizing the norm constructed out of this metric as a function of system parameters, we ...

  6. Sulcal set optimization for cortical surface registration.

    Science.gov (United States)

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

Flat mapping based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter-subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time-consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimating an optimal subset of size N_C from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N_C curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N_C sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N_C constraint curves. The optimal subsets of sulci are presented and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
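
    The selection criterion described here, choosing the constraint curves that minimize the conditional error variance of the remaining curves under a multivariate Gaussian model, corresponds to minimizing the trace of a Schur complement of the error covariance. The sketch below is an illustrative reading of the abstract (brute-force search over subsets), not the authors' implementation.

    ```python
    import itertools
    import numpy as np

    def conditional_error(cov, constraints):
        """Total residual variance of the unconstrained curves given that
        the curves in `constraints` are fixed: trace of the Schur
        complement of the error covariance matrix."""
        n = cov.shape[0]
        s = list(constraints)
        u = [i for i in range(n) if i not in s]
        S = cov[np.ix_(s, s)]
        cross = cov[np.ix_(u, s)]
        schur = cov[np.ix_(u, u)] - cross @ np.linalg.solve(S, cross.T)
        return float(np.trace(schur))

    def best_subset(cov, nc):
        """Exhaustively pick the nc constraint curves that minimize the
        conditional error of the remaining curves (fine for small N)."""
        n = cov.shape[0]
        return min(itertools.combinations(range(n), nc),
                   key=lambda s: conditional_error(cov, s))
    ```

    With a covariance in which curve 0 is strongly correlated with the others, constraining curve 0 removes the most residual variance, so it is the best single constraint.
    
    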

  7. Optimal Topology of Aircraft Rib and Spar Structures under Aeroelastic Loads

    Science.gov (United States)

    Stanford, Bret K.; Dunning, Peter D.

    2014-01-01

    Several topology optimization problems are conducted within the ribs and spars of a wing box. It is desired to locate the best position of lightening holes, truss/cross-bracing, etc. A variety of aeroelastic metrics are isolated for each of these problems: elastic wing compliance under trim loads and taxi loads, stress distribution, and crushing loads. Aileron effectiveness under a constant roll rate is considered, as are dynamic metrics: natural vibration frequency and flutter. This approach helps uncover the relationship between topology and aeroelasticity in subsonic transport wings, and can therefore aid in understanding the complex aircraft design process which must eventually consider all these metrics and load cases simultaneously.

  8. Two-dimensional manifolds with metrics of revolution

    International Nuclear Information System (INIS)

    Sabitov, I Kh

    2000-01-01

This is a study of the topological and metric structure of two-dimensional manifolds with a metric that is locally a metric of revolution. In the case of compact manifolds this problem can be thoroughly investigated, and in particular it is explained why there are no closed analytic surfaces of revolution in R^3 other than a sphere and a torus (moreover, in the smoothness class C^∞ such surfaces, understood in a certain generalized sense, exist in any topological class)

  9. Gravitational lensing in metric theories of gravity

    International Nuclear Information System (INIS)

    Sereno, Mauro

    2003-01-01

    Gravitational lensing in metric theories of gravity is discussed. I introduce a generalized approximate metric element, inclusive of both post-post-Newtonian contributions and a gravitomagnetic field. Following Fermat's principle and standard hypotheses, I derive the time delay function and deflection angle caused by an isolated mass distribution. Several astrophysical systems are considered. In most of the cases, the gravitomagnetic correction offers the best perspectives for an observational detection. Actual measurements distinguish only marginally different metric theories from each other

  10. The uniqueness of the Fisher metric as information metric

    Czech Academy of Sciences Publication Activity Database

    Le, Hong-Van

    2017-01-01

    Roč. 69, č. 4 (2017), s. 879-896 ISSN 0020-3157 Institutional support: RVO:67985840 Keywords : Chentsov’s theorem * mixed topology * monotonicity of the Fisher metric Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.049, year: 2016 https://link.springer.com/article/10.1007%2Fs10463-016-0562-0

  11. Hybrid metric-Palatini stars

    Science.gov (United States)

    Danilǎ, Bogdan; Harko, Tiberiu; Lobo, Francisco S. N.; Mak, M. K.

    2017-02-01

We consider the internal structure and the physical properties of specific classes of neutron, quark and Bose-Einstein condensate stars in the recently proposed hybrid metric-Palatini gravity theory, which is a combination of the metric and Palatini f(R) formalisms. It turns out that the theory is very successful in accounting for the observed phenomenology, since it unifies local constraints at the Solar System level and the late-time cosmic acceleration, even if the scalar field is very light. In this paper, we derive the equilibrium equations for a spherically symmetric configuration (mass continuity and Tolman-Oppenheimer-Volkoff) in the framework of the scalar-tensor representation of the hybrid metric-Palatini theory, and we investigate their solutions numerically for different equations of state of neutron and quark matter, by adopting for the scalar field potential a Higgs-type form. It turns out that the scalar-tensor definition of the potential can be represented as a Clairaut differential equation, and provides an explicit form for f(R) given by f(R) ~ R + Λ_eff, where Λ_eff is an effective cosmological constant. Furthermore, stellar models, described by the stiff fluid, radiation-like, bag model and the Bose-Einstein condensate equations of state are explicitly constructed in both general relativity and hybrid metric-Palatini gravity, thus allowing an in-depth comparison between the predictions of these two gravitational theories. As a general result it turns out that for all the considered equations of state, hybrid gravity stars are more massive than their general relativistic counterparts. Furthermore, two classes of stellar models corresponding to two particular choices of the functional form of the scalar field (constant value, and logarithmic form, respectively) are also investigated. Interestingly enough, in the case of a constant scalar field the equation of state of the matter takes the form of the bag model equation of state describing

  12. The universal connection and metrics on moduli spaces

    International Nuclear Information System (INIS)

    Massamba, Fortune; Thompson, George

    2003-11-01

We introduce a class of metrics on gauge theoretic moduli spaces. These metrics are made out of the universal matrix that appears in the universal connection construction of M. S. Narasimhan and S. Ramanan. As an example we construct metrics on the c_2 = 1 SU(2) moduli space of instantons on R^4 for various universal matrices. (author)

  13. Reproducibility of graph metrics in fMRI networks

    Directory of Open Access Journals (Sweden)

    Qawi K Telesford

    2010-12-01

Full Text Available The reliability of graph metrics calculated in network analysis is essential to the interpretation of complex network organization. These graph metrics are used to deduce the small-world properties of networks. In this study, we investigated the test-retest reliability of graph metrics from functional magnetic resonance imaging (fMRI) data collected for two runs in 45 healthy older adults. Graph metrics were calculated on data for both runs and compared using intraclass correlation coefficient (ICC) statistics and Bland-Altman (BA) plots. ICC scores describe the level of absolute agreement between two measurements and provide a measure of reproducibility. For mean graph metrics, ICC scores were high for clustering coefficient (ICC=0.86), global efficiency (ICC=0.83), path length (ICC=0.79), and local efficiency (ICC=0.75); the ICC score for degree was found to be low (ICC=0.29). ICC scores were also used to generate reproducibility maps in brain space to test voxel-wise reproducibility for unsmoothed and smoothed data. Reproducibility was uniform across the brain for global efficiency and path length, but was only high in network hubs for clustering coefficient, local efficiency and degree. BA plots were used to test the measurement repeatability of all graph metrics. All graph metrics fell within the limits for repeatability. Together, these results suggest that, with the exception of degree, mean graph metrics are reproducible and suitable for clinical studies. Further exploration is warranted to better understand reproducibility across the brain on a voxel-wise basis.
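
    The ICC used above quantifies absolute agreement between the two runs. The abstract does not say which ICC variant was computed, so the sketch below uses a one-way random-effects ICC(1,1) purely as an illustrative choice for an (n_subjects, k_runs) array of metric values.

    ```python
    import numpy as np

    def icc_oneway(ratings):
        """One-way random-effects ICC(1,1) for an (n_subjects, k_runs)
        array. Illustrative only; the study's exact ICC variant is not
        stated in the record."""
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)
        # Between-subjects and within-subject mean squares
        msb = k * np.sum((row_means - grand) ** 2) / (n - 1)
        msw = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)
    ```

    Perfectly repeated measurements give an ICC of 1, while measurements that disagree as much as subjects differ give a value near or below zero.
    
    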

  14. Use of two population metrics clarifies biodiversity dynamics in large-scale monitoring: the case of trees in Japanese old-growth forests: the need for multiple population metrics in large-scale monitoring.

    Science.gov (United States)

    Ogawa, Mifuyu; Yamaura, Yuichi; Abe, Shin; Hoshino, Daisuke; Hoshizaki, Kazuhiko; Iida, Shigeo; Katsuki, Toshio; Masaki, Takashi; Niiyama, Kaoru; Saito, Satoshi; Sakai, Takeshi; Sugita, Hisashi; Tanouchi, Hiroyuki; Amano, Tatsuya; Taki, Hisatomo; Okabe, Kimiko

    2011-07-01

Many indicators/indices provide information on whether the 2010 biodiversity target of reducing declines in biodiversity has been achieved. The strengths and limitations of the various measures used to assess progress toward this target are now being discussed. Biodiversity dynamics are often evaluated by a single biological population metric, such as the abundance of each species. Here we examined tree population dynamics of 52 families (192 species) at 11 research sites (three vegetation zones) of Japanese old-growth forests using two population metrics: number of stems and basal area. We calculated indices that track the rate of change in all species of tree by taking the geometric mean of changes in population metrics between the 1990s and the 2000s at the national level and at the levels of the vegetation zone and family. We specifically focused on whether indices based on these two metrics behaved similarly. The indices showed that (1) the number of stems declined, whereas basal area did not change at the national level and (2) the degree of change in the indices varied by vegetation zone and family. These results suggest that Japanese old-growth forests have not degraded and may even be developing in some vegetation zones, and indicate that the use of a single population metric (or indicator/index) may be insufficient to precisely understand the state of biodiversity. It is therefore important to incorporate more metrics into monitoring schemes to overcome the risk of misunderstanding or misrepresenting biodiversity dynamics.
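
    The index construction described above, the geometric mean of per-species changes in a population metric between two survey periods, can be sketched in a few lines. This is an illustrative rendering of the general technique, not the authors' analysis code.

    ```python
    import numpy as np

    def geometric_mean_index(before, after):
        """Multi-species index: geometric mean of per-species ratios of a
        population metric (e.g. stem count or basal area) between two
        surveys. A value of 1.0 means no net change."""
        before = np.asarray(before, dtype=float)
        after = np.asarray(after, dtype=float)
        ratios = after / before
        return float(np.exp(np.mean(np.log(ratios))))
    ```

    A useful property of the geometric mean: one species halving while another doubles yields an index of exactly 1.0, so gains and losses of equal proportional size cancel.
    
    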

  15. Metrics for image segmentation

    Science.gov (United States)

    Rees, Gareth; Greenway, Phil; Morray, Denise

    1998-07-01

    An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state of the art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.
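
    The record describes its segmentation-comparison metric only as "information theoretic" and does not give its form, so the sketch below uses the variation of information, a standard information-theoretic distance between two segmentations, purely as an illustration of the idea.

    ```python
    import numpy as np

    def variation_of_information(seg_a, seg_b):
        """Variation of information between two label maps:
        VI = H(A) + H(B) - 2 I(A; B). Zero iff the segmentations agree
        up to relabelling. Illustrative; not the paper's exact metric."""
        a = np.asarray(seg_a).ravel()
        b = np.asarray(seg_b).ravel()
        n = a.size
        # Joint and marginal label counts
        joint = {}
        for x, y in zip(a, b):
            joint[(x, y)] = joint.get((x, y), 0) + 1
        pa, pb = {}, {}
        for (x, y), c in joint.items():
            pa[x] = pa.get(x, 0) + c
            pb[y] = pb.get(y, 0) + c
        h_a = -sum((c / n) * np.log2(c / n) for c in pa.values())
        h_b = -sum((c / n) * np.log2(c / n) for c in pb.values())
        h_ab = -sum((c / n) * np.log2(c / n) for c in joint.values())
        mi = h_a + h_b - h_ab
        return h_a + h_b - 2 * mi
    ```

    Identical segmentations (even with permuted labels) score 0, while statistically independent segmentations score H(A) + H(B).
    
    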

  16. Complexity Metrics for Workflow Nets

    DEFF Research Database (Denmark)

    Lassen, Kristian Bisgaard; van der Aalst, Wil M.P.

    2009-01-01

analysts have difficulties grasping the dynamics implied by a process model. Recent empirical studies show that people make numerous errors when modeling complex business processes, e.g., about 20 percent of the EPCs in the SAP reference model have design flaws resulting in potential deadlocks, livelocks, etc. It seems obvious that the complexity of the model contributes to design errors and a lack of understanding. It is not easy to measure complexity, however. This paper presents three complexity metrics that have been implemented in the process analysis tool ProM. The metrics are defined for a subclass of Petri nets named Workflow nets, but the results can easily be applied to other languages. To demonstrate the applicability of these metrics, we have applied our approach and tool to 262 relatively complex Protos models made in the context of various student projects. This allows us to validate

  17. Medical Image Registration by means of a Bio-Inspired Optimization Strategy

    Directory of Open Access Journals (Sweden)

    Hariton Costin

    2012-07-01

    Full Text Available Medical imaging mainly treats and processes missing, ambiguous, complementary, redundant and distorted data. Biomedical image registration is the process of geometric overlaying or alignment of two or more 2D/3D images of the same scene, taken at different time slots, from different angles, and/or by different acquisition systems. In medical practice, it is becoming increasingly important in diagnosis, treatment planning, functional studies, computer-guided therapies, and in biomedical research. Technically, image registration implies a complex optimization of different parameters, performed at local or/and global levels. Local optimization methods frequently fail because functions of the involved metrics with respect to transformation parameters are generally nonconvex and irregular. Therefore, global methods are often required, at least at the beginning of the procedure. In this paper, a new evolutionary and bio-inspired approach -- bacterial foraging optimization -- is adapted for single-slice to 3-D PET and CT multimodal image registration. Preliminary results of optimizing the normalized mutual information similarity metric validated the efficacy of the proposed method by using a freely available medical image database.
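
    The similarity metric named at the end of this record, normalized mutual information, is commonly computed from the joint intensity histogram of the two images as NMI = (H(A) + H(B)) / H(A, B). A minimal sketch (the bin count and implementation details are this sketch's own choices, not the paper's):

    ```python
    import numpy as np

    def normalized_mutual_information(img_a, img_b, bins=32):
        """Normalized mutual information from the joint intensity
        histogram: NMI = (H(A) + H(B)) / H(A, B)."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1)   # marginal of image A
        py = pxy.sum(axis=0)   # marginal of image B
        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))
        return float((entropy(px) + entropy(py)) / entropy(pxy.ravel()))
    ```

    Perfectly aligned identical images give NMI = 2, and a completely uninformative (constant) image gives NMI = 1, which is why registration drives toward higher values.
    
    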

  18. Sustainability Metrics: The San Luis Basin Project

    Science.gov (United States)

    Sustainability is about promoting humanly desirable dynamic regimes of the environment. Metrics: ecological footprint, net regional product, exergy, emergy, and Fisher Information. Adaptive management: (1) metrics assess problem, (2) specific problem identified, and (3) managemen...

  19. Goedel-type metrics in various dimensions

    International Nuclear Information System (INIS)

    Guerses, Metin; Karasu, Atalay; Sarioglu, Oezguer

    2005-01-01

    Goedel-type metrics are introduced and used in producing charged dust solutions in various dimensions. The key ingredient is a (D - 1)-dimensional Riemannian geometry which is then employed in constructing solutions to the Einstein-Maxwell field equations with a dust distribution in D dimensions. The only essential field equation in the procedure turns out to be the source-free Maxwell's equation in the relevant background. Similarly the geodesics of this type of metric are described by the Lorentz force equation for a charged particle in the lower dimensional geometry. It is explicitly shown with several examples that Goedel-type metrics can be used in obtaining exact solutions to various supergravity theories and in constructing spacetimes that contain both closed timelike and closed null curves and that contain neither of these. Among the solutions that can be established using non-flat backgrounds, such as the Tangherlini metrics in (D - 1)-dimensions, there exists a class which can be interpreted as describing black-hole-type objects in a Goedel-like universe

  20. SU-F-T-600: Influence of Acuros XB and AAA Dose Calculation Algorithms On Plan Quality Metrics and Normal Lung Doses in Lung SBRT

    International Nuclear Information System (INIS)

    Yaparpalvi, R; Mynampati, D; Kuo, H; Garg, M; Tome, W; Kalnicki, S

    2016-01-01

Purpose: To study the influence of the superposition-beam model (AAA) and determinant-photon transport-solver (Acuros XB) dose calculation algorithms on treatment plan quality metrics and on normal lung dose in Lung SBRT. Methods: Treatment plans of 10 Lung SBRT patients were randomly selected. Patients were prescribed a total dose of 50-54 Gy in 3–5 fractions (10 Gy × 5 or 18 Gy × 3). Plans were optimized with 6-MV beams using two arcs (VMAT). Doses were calculated using the AAA algorithm with heterogeneity correction. For each plan, plan quality metrics in the categories of coverage, homogeneity, conformity and gradient were quantified. Repeat dosimetry for these AAA treatment plans was performed using the AXB algorithm with heterogeneity correction for the same beam and MU parameters. Plan quality metrics were again evaluated and compared with the AAA plan metrics. For normal lung dose, V20 and V5 of (Total lung - GTV) were evaluated. Results: The results are summarized in Supplemental Table 1. PTV volume was mean 11.4 (±3.3) cm³. Comparing RTOG 0813 protocol criteria for conformality, AXB plans yielded on average a similar PITV ratio (individual PITV ratio differences varied from −9 to +15%), reduced target coverage (−1.6%) and increased R50% (+2.6%). Comparing normal lung doses, the lung V20 (+3.1%) and V5 (+1.5%) were slightly higher for AXB plans compared to AAA plans. High-dose spillage ((V105%PD - PTV)/ PTV) was slightly lower for AXB plans but the low-dose spillage (D2cm) was similar between the two calculation algorithms. Conclusion: The AAA algorithm overestimates lung target dose. Routinely adapting AXB for dose calculations in Lung SBRT planning may improve dose calculation accuracy, as AXB-based calculations have been shown to be closer to Monte Carlo based dose predictions in accuracy and with relatively faster computational time. For clinical practice, revisiting dose-fractionation in Lung SBRT to correct for dose overestimates attributable to algorithm

  1. SU-F-T-600: Influence of Acuros XB and AAA Dose Calculation Algorithms On Plan Quality Metrics and Normal Lung Doses in Lung SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Yaparpalvi, R; Mynampati, D; Kuo, H; Garg, M; Tome, W; Kalnicki, S [Montefiore Medical Center, Bronx, NY (United States)

    2016-06-15

Purpose: To study the influence of the superposition-beam model (AAA) and determinant-photon transport-solver (Acuros XB) dose calculation algorithms on treatment plan quality metrics and on normal lung dose in Lung SBRT. Methods: Treatment plans of 10 Lung SBRT patients were randomly selected. Patients were prescribed a total dose of 50-54 Gy in 3–5 fractions (10 Gy × 5 or 18 Gy × 3). Plans were optimized with 6-MV beams using two arcs (VMAT). Doses were calculated using the AAA algorithm with heterogeneity correction. For each plan, plan quality metrics in the categories of coverage, homogeneity, conformity and gradient were quantified. Repeat dosimetry for these AAA treatment plans was performed using the AXB algorithm with heterogeneity correction for the same beam and MU parameters. Plan quality metrics were again evaluated and compared with the AAA plan metrics. For normal lung dose, V20 and V5 of (Total lung - GTV) were evaluated. Results: The results are summarized in Supplemental Table 1. PTV volume was mean 11.4 (±3.3) cm³. Comparing RTOG 0813 protocol criteria for conformality, AXB plans yielded on average a similar PITV ratio (individual PITV ratio differences varied from −9 to +15%), reduced target coverage (−1.6%) and increased R50% (+2.6%). Comparing normal lung doses, the lung V20 (+3.1%) and V5 (+1.5%) were slightly higher for AXB plans compared to AAA plans. High-dose spillage ((V105%PD - PTV)/ PTV) was slightly lower for AXB plans but the low-dose spillage (D2cm) was similar between the two calculation algorithms. Conclusion: The AAA algorithm overestimates lung target dose. Routinely adapting AXB for dose calculations in Lung SBRT planning may improve dose calculation accuracy, as AXB-based calculations have been shown to be closer to Monte Carlo based dose predictions in accuracy and with relatively faster computational time. For clinical practice, revisiting dose-fractionation in Lung SBRT to correct for dose overestimates
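
    The V20 and V5 lung metrics used in this record are simple dose-volume statistics: the percentage of a structure's volume receiving at least the threshold dose. A minimal sketch on a voxelized dose grid (assuming uniform voxel volume; not the planning system's implementation):

    ```python
    import numpy as np

    def v_dose(dose, structure_mask, threshold_gy):
        """Percentage of a structure's volume receiving at least
        `threshold_gy`, e.g. V20 of (total lung - GTV). Assumes a
        non-empty mask and uniform voxel volume."""
        voxels = dose[structure_mask]
        return 100.0 * float(np.count_nonzero(voxels >= threshold_gy)) / voxels.size
    ```

    For example, with lung-minus-GTV voxel doses of 5, 10, 25 and 30 Gy, V20 is 50% and V5 is 100%.
    
    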

  2. Developing a Security Metrics Scorecard for Healthcare Organizations.

    Science.gov (United States)

    Elrefaey, Heba; Borycki, Elizabeth; Kushniruk, Andrea

    2015-01-01

In healthcare, information security is a key aspect of protecting a patient's privacy and ensuring system availability to support patient care. Security managers need to measure the performance of security systems and this can be achieved by using evidence-based metrics. In this paper, we describe the development of an evidence-based security metrics scorecard specific to healthcare organizations. Study participants were asked to comment on the usability and usefulness of a prototype of a security metrics scorecard that was developed based on current research in the area of general security metrics. Study findings revealed that scorecards need to be customized for the healthcare setting in order for the security information to be useful and usable in healthcare organizations. The study findings resulted in the development of a security metrics scorecard that matches the healthcare security experts' information requirements.

  3. Systematic study of source mask optimization and verification flows

    Science.gov (United States)

    Ben, Yu; Latypov, Azat; Chua, Gek Soon; Zou, Yi

    2012-06-01

Source mask optimization (SMO) has emerged as a powerful resolution enhancement technique (RET) for advanced technology nodes. However, there is a plethora of flows and verification metrics in the field, confounding the end user of the technique. A systematic study of the different flows, and of their possible unification, has been missing. This contribution is intended to reveal the pros and cons of different SMO approaches and verification metrics, understand their commonalities and differences, and provide a generic guideline for RET selection via SMO. The paper discusses three types of variation that commonly arise in SMO, namely pattern preparation and selection, availability of a relevant OPC recipe for a freeform source, and finally the metrics used in source verification. Several pattern selection algorithms are compared and the advantages of systematic pattern selection algorithms are discussed. In the absence of a full resist model for SMO, an alternative SMO flow without a full resist model is reviewed. A preferred verification flow with the quality metrics of DOF and MEEF is examined.

  4. Landscape pattern metrics and regional assessment

    Science.gov (United States)

    O'Neill, R. V.; Riitters, K.H.; Wickham, J.D.; Jones, K.B.

    1999-01-01

The combination of remote imagery data, geographic information systems software, and landscape ecology theory provides a unique basis for monitoring and assessing large-scale ecological systems. The unique feature of the work has been the need to develop and interpret quantitative measures of spatial pattern-the landscape indices. This article reviews what is known about the statistical properties of these pattern metrics and suggests some additional metrics based on island biogeography, percolation theory, hierarchy theory, and economic geography. Assessment applications of this approach have required interpreting the pattern metrics in terms of specific environmental endpoints, such as wildlife and water quality, and research into how to represent synergistic effects of many overlapping sources of stress.
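
    One of the simplest landscape pattern metrics of the kind this record surveys is the number of patches of a given cover class in a categorical raster. A minimal flood-fill sketch with 4-connectivity, offered only as an illustration of the class of metrics discussed (the article itself covers far richer indices):

    ```python
    import numpy as np

    def patch_count(grid, cover_class):
        """Number of 4-connected patches of `cover_class` in a raster."""
        grid = np.asarray(grid)
        seen = np.zeros(grid.shape, dtype=bool)
        rows, cols = grid.shape
        patches = 0
        for i in range(rows):
            for j in range(cols):
                if grid[i, j] == cover_class and not seen[i, j]:
                    patches += 1
                    stack = [(i, j)]  # flood-fill this patch
                    while stack:
                        r, c = stack.pop()
                        if (0 <= r < rows and 0 <= c < cols
                                and grid[r, c] == cover_class
                                and not seen[r, c]):
                            seen[r, c] = True
                            stack.extend([(r + 1, c), (r - 1, c),
                                          (r, c + 1), (r, c - 1)])
        return patches
    ```

    On a 3×3 raster with forest (1) in two opposite corners, the metric reports two forest patches but a single connected matrix of the other class.
    
    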

  5. Sing to the Lord a New Song: John Calvin and the Spiritual Discipline of Metrical Psalmody

    Directory of Open Access Journals (Sweden)

    Brandon J. Bellanti

    2014-11-01

    Full Text Available The purpose of this essay is to evaluate the way that psalmody - specifically metrical psalmody - serves as a sort of spiritual discipline. In other words, this essay seeks to demonstrate how the singing of psalms can be a tool to aid in spiritual growth. Much of the research for this essay focuses on the theological writings of the Protestant reformer John Calvin, as well as the way in which he incorporated metrical psalmody into his liturgical framework. The research also comprises primary writings from Aristotle, Plato, Saint John Chrysostom, Saint Basil, and Saint Augustine - all of whom influenced Calvin’s own philosophy regarding the use of art, music, and psalmody in worship. Additional areas examined in this research include the historical musical development of psalmody and the collection and arrangement of metrical psalms into psalters. For reference, specific examples of metrical psalms and psalters have been added. These additional areas and examples help to give a more holistic understanding of the nature of metrical psalmody, and they help to show how it may accurately be considered a spiritual discipline.

  6. WE-B-304-00: Point/Counterpoint: Biological Dose Optimization

    International Nuclear Information System (INIS)

    2015-01-01

The ultimate goal of radiotherapy treatment planning is to find a treatment that will yield a high tumor control probability (TCP) with an acceptable normal tissue complication probability (NTCP). Yet most treatment planning today is not based upon optimization of TCPs and NTCPs, but rather upon meeting physical dose and volume constraints defined by the planner. It has been suggested that treatment planning evaluation and optimization would be more effective if they were biologically and not dose/volume based, and this is the claim debated in this month’s Point/Counterpoint. After a brief overview of biologically and DVH based treatment planning by the Moderator Colin Orton, Joseph Deasy (for biological planning) and Charles Mayo (against biological planning) will begin the debate. Some of the arguments in support of biological planning include: (1) this will result in more effective dose distributions for many patients; (2) DVH-based measures of plan quality are known to have little predictive value; (3) there is little evidence that either D95 or D98 of the PTV is a good predictor of tumor control; (4) sufficient validated outcome prediction models are now becoming available and should be used to drive planning and optimization. Some of the arguments against biological planning include: (1) several decades of experience with DVH-based planning should not be discarded; (2) we do not know enough about the reliability and errors associated with biological models; (3) the radiotherapy community in general has little direct experience with side-by-side comparisons of DVH vs biological metrics and outcomes; (4) it is unlikely that a clinician would accept extremely cold regions in a CTV or hot regions in a PTV, despite having acceptable TCP values. Learning Objectives: (1) to understand dose/volume based treatment planning and its potential limitations; (2) to understand biological metrics such as EUD, TCP, and NTCP; (3) to understand biologically based treatment planning and its potential limitations.
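
    The TCP concept debated here is often modeled, in its simplest textbook form, as a Poisson probability that no clonogenic cell survives linear-quadratic cell killing. The sketch below uses that standard form with illustrative parameter values (alpha, beta, initial clonogen number n0 are placeholders, not clinical recommendations and not part of this record):

    ```python
    import math

    def poisson_tcp(dose_gy, n_fractions, alpha=0.3, beta=0.03, n0=1e7):
        """Poisson TCP under the linear-quadratic model: probability that
        zero clonogens survive n fractions of d Gy each.
        Parameter values are illustrative only."""
        d = dose_gy / n_fractions
        surviving = n0 * math.exp(-n_fractions * (alpha * d + beta * d * d))
        return math.exp(-surviving)
    ```

    As expected, the modeled TCP rises monotonically with prescribed dose at fixed dose per fraction, which is the quantity biological optimization would trade off against NTCP.
    
    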

  7. Capital Cost Optimization for Prefabrication: A Factor Analysis Evaluation Model

    Directory of Open Access Journals (Sweden)

    Hong Xue

    2018-01-01

Full Text Available High capital cost is a significant hindrance to the promotion of prefabrication. In order to optimize cost management and reduce capital cost, this study aims to explore the latent factors and a factor analysis evaluation model. Semi-structured interviews were conducted to explore potential variables, and then a questionnaire survey was employed to collect professionals’ views on their effects. After data collection, exploratory factor analysis was adopted to explore the latent factors. Seven latent factors were identified, including “Management Index”, “Construction Dissipation Index”, “Productivity Index”, “Design Efficiency Index”, “Transport Dissipation Index”, “Material Increment Index” and “Depreciation Amortization Index”. With these latent factors, a factor analysis evaluation model (FAEM), divided into a factor analysis model (FAM) and a comprehensive evaluation model (CEM), was established. The FAM was used to explore the effect of observed variables on the high capital cost of prefabrication, while the CEM was used to evaluate the comprehensive cost management level of prefabrication projects. Case studies were conducted to verify the models. The results revealed that collaborative management had a positive effect on the capital cost of prefabrication. Material increment costs and labor costs had significant impacts on production cost. This study demonstrated the potential of on-site management and standardization of design to reduce capital cost. Hence, collaborative management is necessary for cost management of prefabrication. Innovation and detailed design are needed to improve cost performance. New forms of precast component factories can be explored to reduce transportation cost. Meanwhile, targeted strategies can be adopted for different prefabrication projects. The findings optimized the capital cost and improved cost performance by providing an evaluation and optimization model, which helps managers to

  8. Software metrics a rigorous and practical approach

    CERN Document Server

    Fenton, Norman

    2014-01-01

    A Framework for Managing, Measuring, and Predicting Attributes of Software Development Products and ProcessesReflecting the immense progress in the development and use of software metrics in the past decades, Software Metrics: A Rigorous and Practical Approach, Third Edition provides an up-to-date, accessible, and comprehensive introduction to software metrics. Like its popular predecessors, this third edition discusses important issues, explains essential concepts, and offers new approaches for tackling long-standing problems.New to the Third EditionThis edition contains new material relevant

  9. Hermitian-Einstein metrics on parabolic stable bundles

    International Nuclear Information System (INIS)

    Li Jiayu; Narasimhan, M.S.

    1995-12-01

Let M-bar be a compact complex manifold of complex dimension two with a smooth Kaehler metric and D a smooth divisor on M-bar. If E is a rank 2 holomorphic vector bundle on M-bar with a stable parabolic structure along D, we prove the existence of a metric on E' = E restricted to M-bar\D (compatible with the parabolic structure) which is Hermitian-Einstein with respect to the restriction of the Kaehler metric to M-bar\D. A converse is also proved. (author). 24 refs

  10. Comparison of image sharpness metrics and real-time sharpening methods with GPU implementations

    CSIR Research Space (South Africa)

    De Villiers, Johan P

    2010-06-01

Full Text Available , and not in trying to adjust the image to some fixed sharpness value. With the advent of the increased programmability of Graphics Processing Units (GPUs) and their seemingly ever increasing number of processor cores (the dual-GPU NVidia GTX295 has 480 cores...), GPU specifications such as the following were considered:

        GPU               Cores   Core clock (MHz)   Bus width (bits)   Memory clock (MHz)
        Quadro NVS 140M      16                400                 64                  700
        ATI HD 2400XT        40                800                 64                  700
        NVidia 9600GT        64                650                256                  900
        NVidia GTX280       240                602                512                 1107

    Metric descriptions: Three metrics are used to evaluate images for sharpness. The first two are a measure of how much information...

  11. Evaluation of different set-up error corrections on dose-volume metrics in prostate IMRT using CBCT images

    International Nuclear Information System (INIS)

    Hirose, Yoshinori; Tomita, Tsuneyuki; Kitsuda, Kenji; Notogawa, Takuya; Miki, Katsuhito; Nakamura, Mitsuhiro; Nakamura, Kiyonao; Ishigaki, Takashi

    2014-01-01

We investigated the effect of different set-up error corrections on dose-volume metrics in intensity-modulated radiotherapy (IMRT) for prostate cancer under different planning target volume (PTV) margin settings using cone-beam computed tomography (CBCT) images. A total of 30 consecutive patients who underwent IMRT for prostate cancer were retrospectively analysed, and 7-14 CBCT datasets were acquired per patient. Interfractional variations in dose-volume metrics were evaluated under six different set-up error corrections, including tattoo, bony anatomy, and four different target matching groups. Set-up errors were incorporated into the planning isocenter position, and dose distributions were recalculated on the CBCT images. These processes were repeated under two different PTV margin settings. In the on-line bony anatomy matching groups, systematic error (Σ) was 0.3 mm, 1.4 mm, and 0.3 mm in the left-right, anterior-posterior (AP), and superior-inferior directions, respectively. Σ in three successive off-line target matchings was finally comparable with that in the on-line bony anatomy matching in the AP direction. Although doses to the rectum and bladder wall were reduced for a small PTV margin, averaged reductions in the volume receiving 100% of the prescription dose from planning were within 2.5% under all PTV margin settings for all correction groups, with the exception of the tattoo set-up error correction (≥ 5.0%). Analysis of variance showed no significant difference between on-line bony anatomy matching and target matching. While variations between the planned and delivered doses were smallest when target matching was applied, the use of bony anatomy matching still ensured the planned doses. (author)

  12. Characterizing uncertainty when evaluating risk management metrics: risk assessment modeling of Listeria monocytogenes contamination in ready-to-eat deli meats.

    Science.gov (United States)

    Gallagher, Daniel; Ebel, Eric D; Gallagher, Owen; Labarre, David; Williams, Michael S; Golden, Neal J; Pouillot, Régis; Dearfield, Kerry L; Kause, Janell

    2013-04-01

    This report illustrates how the uncertainty about food safety metrics may influence the selection of a performance objective (PO). To accomplish this goal, we developed a model concerning Listeria monocytogenes in ready-to-eat (RTE) deli meats. This application used a second order Monte Carlo model that simulates L. monocytogenes concentrations through a series of steps: the food-processing establishment, transport, retail, the consumer's home and consumption. The model accounted for growth inhibitor use, retail cross contamination, and applied an FAO/WHO dose response model for evaluating the probability of illness. An appropriate level of protection (ALOP) risk metric was selected as the average risk of illness per serving across all consumed servings-per-annum and the model was used to solve for the corresponding performance objective (PO) risk metric as the maximum allowable L. monocytogenes concentration (cfu/g) at the processing establishment where regulatory monitoring would occur. Given uncertainty about model inputs, an uncertainty distribution of the PO was estimated. Additionally, we considered how RTE deli meats contaminated at levels above the PO would be handled by the industry using three alternative approaches. Points on the PO distribution represent the probability that - if the industry complies with a particular PO - the resulting risk-per-serving is less than or equal to the target ALOP. For example, assuming (1) a target ALOP of -6.41 log10 risk of illness per serving, (2) industry concentrations above the PO that are re-distributed throughout the remaining concentration distribution and (3) no dose response uncertainty, establishment PO's of -4.98 and -4.39 log10 cfu/g would be required for 90% and 75% confidence that the target ALOP is met, respectively. 
The PO concentrations from this example scenario are more stringent than the current typical monitoring level of an absence in 25 g (i.e., -1.40 log10 cfu/g) or a stricter criterion of absence

  13. Optimizing area under the ROC curve using semi-supervised learning.

    Science.gov (United States)

    Wang, Shijun; Li, Diana; Petrick, Nicholas; Sahiner, Berkman; Linguraru, Marius George; Summers, Ronald M

    2015-01-01

    Receiver operating characteristic (ROC) analysis is a standard methodology to evaluate the performance of a binary classification system. The area under the ROC curve (AUC) is a performance metric that summarizes how well a classifier separates two classes. Traditional AUC optimization techniques are supervised learning methods that utilize only labeled data (i.e., the true class is known for all data) to train the classifiers. In this work, inspired by semi-supervised and transductive learning, we propose two new AUC optimization algorithms hereby referred to as semi-supervised learning receiver operating characteristic (SSLROC) algorithms, which utilize unlabeled test samples in classifier training to maximize AUC. Unlabeled samples are incorporated into the AUC optimization process, and their ranking relationships to labeled positive and negative training samples are considered as optimization constraints. The introduced test samples will cause the learned decision boundary in a multidimensional feature space to adapt not only to the distribution of labeled training data, but also to the distribution of unlabeled test data. We formulate the semi-supervised AUC optimization problem as a semi-definite programming problem based on the margin maximization theory. The proposed methods SSLROC1 (1-norm) and SSLROC2 (2-norm) were evaluated using 34 (determined by power analysis) randomly selected datasets from the University of California, Irvine machine learning repository. Wilcoxon signed rank tests showed that the proposed methods achieved significant improvement compared with state-of-the-art methods. The proposed methods were also applied to a CT colonography dataset for colonic polyp classification and showed promising results.
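As background for the AUC criterion that the SSLROC algorithms optimize, a minimal sketch of computing AUC from ranks via the Mann-Whitney U statistic (this is the standard rank-based AUC identity, not the authors' semi-definite programming method; ties are not averaged in this simplified version):

```python
import numpy as np

def auc_from_scores(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly by the classifier."""
    all_scores = np.concatenate([scores_pos, scores_neg])
    # Rank scores (rank 1 = smallest); this sketch does not average ties
    order = np.argsort(all_scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(all_scores) + 1)
    n_pos, n_neg = len(scores_pos), len(scores_neg)
    rank_sum_pos = ranks[:n_pos].sum()
    u = rank_sum_pos - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# Perfectly separated classes give AUC = 1.0
print(auc_from_scores(np.array([0.9, 0.8, 0.7]), np.array([0.3, 0.2, 0.1])))
```

Because AUC depends only on pairwise rankings, adding ranking constraints on unlabeled samples, as the abstract describes, directly shapes this quantity.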

  14. A Simple Metric for Determining Resolution in Optical, Ion, and Electron Microscope Images.

    Science.gov (United States)

    Curtin, Alexandra E; Skinner, Ryan; Sanders, Aric W

    2015-06-01

    A resolution metric intended for resolution analysis of arbitrary spatially calibrated images is presented. By fitting a simple sigmoidal function to pixel intensities across slices of an image taken perpendicular to light-dark edges, the mean distance over which the light-dark transition occurs can be determined. A fixed multiple of this characteristic distance is then reported as the image resolution. The prefactor is determined by analysis of scanning transmission electron microscope high-angle annular dark field images of Si. This metric has been applied to optical, scanning electron microscope, and helium ion microscope images. This method provides quantitative feedback about image resolution, independent of the tool on which the data were collected. In addition, our analysis provides a nonarbitrary and self-consistent framework that any end user can utilize to evaluate the resolution of multiple microscopes from any vendor using the same metric.
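The edge-transition idea in this abstract can be illustrated with a toy sketch (synthetic data; the prefactor k below is an arbitrary placeholder, not the STEM-calibrated value from the paper): fit a sigmoid to a normalized intensity profile across a light-dark edge and report a fixed multiple of the transition scale as the resolution.

```python
import numpy as np

def edge_resolution(x, intensity, k=2.0):
    """Fit a logistic sigmoid I(x) = 1/(1 + exp(-(x - x0)/w)) to a
    normalized edge profile by linearizing: logit(I) = (x - x0)/w.
    Returns k * w as the resolution estimate (k is a placeholder)."""
    I = (intensity - intensity.min()) / (intensity.max() - intensity.min())
    mask = (I > 0.01) & (I < 0.99)            # keep points where logit is stable
    logit = np.log(I[mask] / (1 - I[mask]))   # linear in x for a logistic edge
    slope, _ = np.polyfit(x[mask], logit, 1)  # slope = 1/w
    return k / slope

# Synthetic edge profile with a 0.5-unit transition scale
x = np.linspace(-5, 5, 201)
intensity = 1 / (1 + np.exp(-(x - 0.3) / 0.5))
resolution = edge_resolution(x, intensity)
```

Because the fit only needs a spatially calibrated intensity profile, the same procedure applies to optical, SEM, and helium ion images alike, which is the tool-independence the abstract emphasizes.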

  15. Coverage Metrics for Model Checking

    Science.gov (United States)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.

  16. Future of the PCI Readmission Metric.

    Science.gov (United States)

    Wasfy, Jason H; Yeh, Robert W

    2016-03-01

Between 2013 and 2014, the Centers for Medicare and Medicaid Services and the National Cardiovascular Data Registry publicly reported risk-adjusted 30-day readmission rates after percutaneous coronary intervention (PCI) as a pilot project. A key strength of this public reporting effort included risk adjustment with clinical rather than administrative data. Furthermore, because readmission after PCI is common, expensive, and preventable, this metric has substantial potential to improve quality and value in American cardiology care. Despite this, concerns about the metric exist. For example, few PCI readmissions are caused by procedural complications, limiting the extent to which improved procedural technique can reduce readmissions. Also, similar to other readmission measures, PCI readmission is associated with socioeconomic status and race. Accordingly, the metric may unfairly penalize hospitals that care for underserved patients. Perhaps in the context of these limitations, Centers for Medicare and Medicaid Services has not yet included PCI readmission among metrics that determine Medicare financial penalties. Nevertheless, provider organizations may still wish to focus on this metric to improve value for cardiology patients. PCI readmission is associated with low-risk chest discomfort and patient anxiety. Therefore, patient education, improved triage mechanisms, and improved care coordination offer opportunities to minimize PCI readmissions. Because PCI readmission is common and costly, reducing PCI readmission offers provider organizations a compelling target to improve the quality of care, and also performance in contracts involving shared financial risk. © 2016 American Heart Association, Inc.

  17. Model assessment using a multi-metric ranking technique

    Science.gov (United States)

    Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.

    2017-12-01

Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and identifying adeptness in extreme events, yet must maintain simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's Tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as root mean square difference and rank correlation, were also explored, but removed when the information was discovered to be generally duplicative of other metrics. While equal weights are applied, weights could be altered depending on preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts instead of distance, along-track, and cross-track errors is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but were found useful in an independent context, and will be briefly reported.
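The tally-and-consolidation idea can be sketched generically (hypothetical model names and metric values, not the authors' datasets): rank each model on each metric, then combine the per-metric ranks with weights and pick the lowest total.

```python
import numpy as np

# Rows = models, columns = metrics (hypothetical values).
# For each metric we declare whether higher or lower is better.
models = ["modelA", "modelB", "modelC"]
scores = np.array([
    [1.2, 0.85, 0.30],   # modelA: abs error, correlation, bias
    [0.9, 0.80, 0.10],   # modelB
    [1.5, 0.90, 0.50],   # modelC
])
higher_is_better = [False, True, False]
weights = np.array([1.0, 1.0, 1.0])      # equal weights, as in the abstract

ranks = np.zeros_like(scores)
for j, hib in enumerate(higher_is_better):
    col = -scores[:, j] if hib else scores[:, j]
    # rank 1 = best model on this metric
    ranks[np.argsort(col), j] = np.arange(1, len(models) + 1)

tally = ranks @ weights                   # weighted tally of per-metric ranks
best = models[int(np.argmin(tally))]
print(dict(zip(models, tally)), best)
```

Ranking before weighting keeps metrics with different units (errors, correlations, biases) on a common scale, which is what makes the consolidation simple enough for management decisions.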

  18. Effects of subsampling of passive acoustic recordings on acoustic metrics.

    Science.gov (United States)

    Thomisch, Karolin; Boebel, Olaf; Zitterbart, Daniel P; Samaran, Flore; Van Parijs, Sofie; Van Opzeeland, Ilse

    2015-07-01

    Passive acoustic monitoring is an important tool in marine mammal studies. However, logistics and finances frequently constrain the number and servicing schedules of acoustic recorders, requiring a trade-off between deployment periods and sampling continuity, i.e., the implementation of a subsampling scheme. Optimizing such schemes to each project's specific research questions is desirable. This study investigates the impact of subsampling on the accuracy of two common metrics, acoustic presence and call rate, for different vocalization patterns (regimes) of baleen whales: (1) variable vocal activity, (2) vocalizations organized in song bouts, and (3) vocal activity with diel patterns. To this end, above metrics are compared for continuous and subsampled data subject to different sampling strategies, covering duty cycles between 50% and 2%. The results show that a reduction of the duty cycle impacts negatively on the accuracy of both acoustic presence and call rate estimates. For a given duty cycle, frequent short listening periods improve accuracy of daily acoustic presence estimates over few long listening periods. Overall, subsampling effects are most pronounced for low and/or temporally clustered vocal activity. These findings illustrate the importance of informed decisions when applying subsampling strategies to passive acoustic recordings or analyses for a given target species.
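The duty-cycle trade-off described above can be explored with a toy simulation (synthetic call times and assumed bout parameters, not the study's recordings): at the same 10% duty cycle, compare daily acoustic presence detected with frequent short listening windows versus few long ones when vocal activity is temporally clustered.

```python
import numpy as np

def daily_presence(call_times, scheme_period_h, listen_h, day_hours=24):
    """True if any call falls inside a listening window; windows of length
    listen_h start every scheme_period_h hours (duty cycle = listen_h /
    scheme_period_h)."""
    phase = call_times % scheme_period_h
    return bool(np.any((phase < listen_h) & (call_times < day_hours)))

rng = np.random.default_rng(42)
n_days, hits_short, hits_long = 200, 0, 0
for _ in range(n_days):
    # Temporally clustered vocal activity: a single 2-hour bout per day
    bout_start = rng.uniform(0, 22)
    calls = bout_start + rng.uniform(0, 2, size=20)
    hits_short += daily_presence(calls, scheme_period_h=1.0, listen_h=0.1)
    hits_long += daily_presence(calls, scheme_period_h=12.0, listen_h=1.2)

print(f"short windows: {hits_short/n_days:.2f}, long windows: {hits_long/n_days:.2f}")
```

With every day truly vocal, the short-window scheme detects presence far more often than the long-window scheme at identical duty cycle, consistent with the finding that frequent short listening periods improve daily presence estimates for clustered activity.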

  19. Metric learning for DNA microarray data analysis

    International Nuclear Information System (INIS)

    Takeuchi, Ichiro; Nakagawa, Masao; Seto, Masao

    2009-01-01

In many microarray studies, gene set selection is an important preliminary step for subsequent main tasks such as tumor classification, cancer subtype identification, etc. In this paper, we investigate the possibility of using metric learning as an alternative to gene set selection. We develop a simple metric learning algorithm with the aim of using it for microarray data analysis. Exploiting a property of the algorithm, we introduce a novel approach for extending the metric learning to be adaptive. We apply the algorithm to previously studied microarray data on malignant lymphoma subtype identification.
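As a simple illustration of replacing explicit gene selection with a learned metric (a generic Fisher-style diagonal weighting on synthetic data, not the authors' algorithm): weight each feature by its between-class versus within-class variance ratio, then classify with weighted nearest-neighbour distance so uninformative genes are down-weighted rather than discarded.

```python
import numpy as np

def learn_diagonal_metric(X, y):
    """Weight each feature by between-class variance of the class means
    over pooled within-class variance (a diagonal metric)."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    within = np.mean([X[y == c].var(axis=0) for c in classes], axis=0)
    between = means.var(axis=0)
    return between / (within + 1e-9)

def weighted_nn_predict(X_train, y_train, x, w):
    d2 = ((X_train - x) ** 2 * w).sum(axis=1)   # weighted squared distance
    return y_train[np.argmin(d2)]

rng = np.random.default_rng(1)
# 50 "genes": only gene 0 separates the two classes; the rest are noise
n = 100
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, 50))
X[:, 0] += 3 * y                                 # informative feature
w = learn_diagonal_metric(X, y)
pred = weighted_nn_predict(X, y, X[0] + 0.1, w)
```

The learned weights concentrate on the informative gene, achieving the effect of gene selection implicitly through the metric.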

  20. Running from features: Optimized evaluation of inflationary power spectra

    Science.gov (United States)

    Motohashi, Hayato; Hu, Wayne

    2015-08-01

In models like axion monodromy, temporal features during inflation which are not associated with its ending can produce scalar, and to a lesser extent, tensor power spectra where deviations from scale-free power-law spectra can be as large as the deviations from scale invariance itself. Here the standard slow-roll approach breaks down, since its parameters evolve on an e-folding scale ΔN much smaller than the e-folds to the end of inflation. Using the generalized slow-roll approach, we show that the expansion of observables in a hierarchy of potential or Hubble evolution parameters comes from a Taylor expansion of the features around an evaluation point that can be optimized. Optimization of the leading-order expression provides a sufficiently accurate approximation for current data as long as the power spectrum can be described over the well-observed few e-folds by the local tilt and running. Standard second-order approaches, often used in the literature, are ironically worse than leading-order approaches due to inconsistent evaluation of observables. We develop a new optimized next-order approach which predicts observables to 10^-3 even for ΔN ~ 1, where all parameters in the infinite hierarchy are of comparable magnitude. For models with ΔN ≪ 1, the generalized slow-roll approach provides integral expressions that are accurate to second order in the deviation from scale invariance. Their evaluation in the monodromy model provides highly accurate explicit relations between the running oscillation amplitude, frequency, and phase in the curvature spectrum and parameters of the potential.
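The optimized-evaluation-point idea can be illustrated schematically (generic notation for an observable O(N); this is a textbook Taylor-expansion argument, not the paper's exact generalized slow-roll expressions):

```latex
% Taylor expansion of an observable about the evaluation point N_*
O(N) = O(N_*) + O'(N_*)\,(N - N_*) + \tfrac{1}{2}\,O''(N_*)\,(N - N_*)^2 + \dots
% Choosing N_* so that the average of (N - N_*) over the window of
% well-observed e-folds vanishes removes the linear term's contribution,
% leaving the leading-order evaluation accurate to second order:
\langle N - N_* \rangle = 0 \quad\Rightarrow\quad
\langle O \rangle \approx O(N_*) + \mathcal{O}\big((N - N_*)^2\big)
```

Evaluating all observables consistently at the same optimized N_* is what the abstract identifies as the failure of standard second-order treatments.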