WorldWideScience

Sample records for metrically measuring liver

  1. Measuring Information Security: Guidelines to Build Metrics

    Science.gov (United States)

    von Faber, Eberhard

    Measuring information security is a genuine interest of security managers. With metrics they can develop their security organization's visibility and standing within the enterprise or public authority as a whole. Organizations using information technology need to use security metrics. Despite the clear demands and advantages, security metrics are often poorly developed, or ineffective parameters are collected and analysed. This paper describes best practices for the development of security metrics. First, attention is drawn to motivation, showing both requirements and benefits. The main body of this paper lists things which need to be observed (characteristics of metrics), things which can be measured (how measurements can be conducted) and steps for the development and implementation of metrics (procedures and planning). Analysis and communication are also key when using security metrics. Examples are also given in order to develop a better understanding. The author wants to resume, continue and develop the discussion about a topic which is, or increasingly will be, a critical success factor for any security manager in larger organizations.

  2. Mass Customization Measurements Metrics

    DEFF Research Database (Denmark)

    Nielsen, Kjeld; Brunø, Thomas Ditlev; Jørgensen, Kaj Asbjørn

    2014-01-01

    A recent survey has indicated that 17 % of companies have ceased mass customizing less than 1 year after initiating the effort. This paper presents measurements for a company’s mass customization performance, utilizing metrics within the three fundamental capabilities: robust process design, choice navigation, and solution space development. A mass customizer assessing performance with these metrics can identify within which areas improvement would increase competitiveness the most and enable a more efficient transition to mass customization.

  3. 22 CFR 226.15 - Metric system of measurement.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Metric system of measurement. 226.15 Section 226.15 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT ADMINISTRATION OF ASSISTANCE AWARDS TO U.S. NON-GOVERNMENTAL ORGANIZATIONS Pre-award Requirements § 226.15 Metric system of measurement. (a...

  4. 20 CFR 435.15 - Metric system of measurement.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Metric system of measurement. 435.15 Section 435.15 Employees' Benefits SOCIAL SECURITY ADMINISTRATION UNIFORM ADMINISTRATIVE REQUIREMENTS FOR... metric system is the preferred measurement system for U.S. trade and commerce. The Act requires each...

  5. Measures of agreement between computation and experiment: validation metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Barone, Matthew Franklin; Oberkampf, William Louis

    2005-08-01

    With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables and sharpen assessment of computational accuracy. This type of measure has been recently referred to as a validation metric. We discuss various features that we believe should be incorporated in a validation metric and also features that should be excluded. We develop a new validation metric that is based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpretable for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment.
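
    As a rough sketch of the confidence-interval idea described above (not the authors' exact formulation), the estimated model error at one setting of the control variable can be reported together with a t-based confidence half-width computed from experimental replicates; the function name and all numbers below are illustrative placeholders.

        import numpy as np
        from scipy import stats

        def validation_metric(y_model, y_exp, confidence=0.90):
            """Estimated model error and its confidence half-width at one setting."""
            y_exp = np.asarray(y_exp, dtype=float)
            n = y_exp.size
            error = y_model - y_exp.mean()                    # estimated model error
            half_width = (stats.t.ppf(0.5 + confidence / 2, n - 1)
                          * y_exp.std(ddof=1) / np.sqrt(n))   # CI on the experimental mean
            return error, half_width

        err, hw = validation_metric(412.0, [405.2, 408.9, 410.1, 403.7])
        print(f"model error {err:+.2f} +/- {hw:.2f} (90% confidence)")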

  6. Liver fibrosis: in vivo evaluation using intravoxel incoherent motion-derived histogram metrics with histopathologic findings at 3.0 T.

    Science.gov (United States)

    Hu, Fubi; Yang, Ru; Huang, Zixing; Wang, Min; Zhang, Hanmei; Yan, Xu; Song, Bin

    2017-12-01

    To retrospectively determine the feasibility of intravoxel incoherent motion (IVIM) imaging based on histogram analysis for the staging of liver fibrosis (LF) using histopathologic findings as the reference standard. 56 consecutive patients (14 men, 42 women; age range, 15-76 years) with chronic liver diseases (CLDs) were studied using IVIM-DWI with 9 b-values (0, 25, 50, 75, 100, 150, 200, 500, 800 s/mm²) at 3.0 T. Fibrosis stage was evaluated using the METAVIR scoring system. Histogram metrics including mean, standard deviation (Std), skewness, kurtosis, minimum (Min), maximum (Max), range, interquartile (Iq) range, and percentiles (10, 25, 50, 75, 90th) were extracted from apparent diffusion coefficient (ADC), true diffusion coefficient (D), pseudo-diffusion coefficient (D*), and perfusion fraction (f) maps. All histogram metrics among different fibrosis groups were compared using one-way analysis of variance or the nonparametric Kruskal-Wallis test. For significant parameters, receiver operating characteristic (ROC) curve analyses were further performed for the staging of LF. Based on their METAVIR stage, the 56 patients were reclassified into three groups as follows: F0-1 group (n = 25), F2-3 group (n = 21), and F4 group (n = 10). The mean, Iq range, and percentiles (50, 75, and 90th) of the D* maps differed significantly between the groups (all P < 0.05), whereas the histogram metrics of ADC, D, and f maps demonstrated no significant difference among the groups (all P > 0.05). Histogram analysis of the D* map derived from IVIM can be used to stage liver fibrosis in patients with CLDs and provide more quantitative information beyond the mean value.
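
    For reference, the histogram metrics named in this abstract are all standard descriptive statistics; a minimal sketch of how they might be computed from the voxel values of one IVIM parameter map (e.g., a D* map) within a liver ROI, assuming numpy/scipy:

        import numpy as np
        from scipy import stats

        def histogram_metrics(map_values):
            """Standard histogram metrics of a parameter map inside an ROI."""
            v = np.asarray(map_values, dtype=float).ravel()
            p10, p25, p50, p75, p90 = np.percentile(v, [10, 25, 50, 75, 90])
            return {
                "mean": v.mean(), "std": v.std(ddof=1),
                "skewness": stats.skew(v),
                "kurtosis": stats.kurtosis(v),   # excess kurtosis (Fisher definition)
                "min": v.min(), "max": v.max(), "range": v.max() - v.min(),
                "iq_range": p75 - p25,
                "p10": p10, "p25": p25, "p50": p50, "p75": p75, "p90": p90,
            }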

  7. Path integral measure for first-order and metric gravities

    International Nuclear Information System (INIS)

    Aros, Rodrigo; Contreras, Mauricio; Zanelli, Jorge

    2003-01-01

    The equivalence between the path integrals for first-order gravity and the standard torsion-free, metric gravity in 3 + 1 dimensions is analysed. Starting with the path integral for first-order gravity, the correct measure for the path integral of the metric theory is obtained

  8. Metrics for measuring net-centric data strategy implementation

    Science.gov (United States)

    Kroculick, Joseph B.

    2010-04-01

    An enterprise data strategy outlines an organization's vision and objectives for improved collection and use of data. We propose generic metrics and quantifiable measures for each of the DoD Net-Centric Data Strategy (NCDS) data goals. Data strategy metrics can be adapted to the business processes of an enterprise and the needs of stakeholders in leveraging the organization's data assets to provide for more effective decision making. Generic metrics are applied to a specific application where logistics supply and transportation data is integrated across multiple functional groups. A dashboard presents a multidimensional view of the current progress to a state where logistics data is shared in a timely and seamless manner among users, applications, and systems.

  9. Measurable Control System Security through Ideal Driven Technical Metrics

    Energy Technology Data Exchange (ETDEWEB)

    Miles McQueen; Wayne Boyer; Sean McBride; Marie Farrar; Zachary Tudor

    2008-01-01

    The Department of Homeland Security National Cyber Security Division supported development of a small set of security ideals as a framework to establish measurable control systems security. Based on these ideals, a draft set of proposed technical metrics was developed to allow control systems owner-operators to track improvements or degradations in their individual control systems security posture. The technical metrics development effort included review and evaluation of over thirty metrics-related documents. On the basis of complexity, ambiguity, or misleading and distorting effects, the metrics identified during the reviews were determined to be weaker than necessary to aid defense against the myriad threats posed by cyber-terrorism to human safety, as well as to economic prosperity. Using the results of our metrics review and the set of security ideals as a starting point for metrics development, we identified thirteen potential technical metrics - with at least one metric supporting each ideal. Two case study applications of the ideals and thirteen metrics to control systems were then performed to establish potential difficulties in applying both the ideals and the metrics. The case studies resulted in no changes to the ideals, and only a few deletions and refinements to the thirteen potential metrics. This led to a final proposed set of ten core technical metrics. To further validate the security ideals, the modifications made to the original thirteen potential metrics, and the final proposed set of ten core metrics, seven separate control systems security assessments performed over the past three years were reviewed for findings and recommended mitigations. These findings and mitigations were then mapped to the security ideals and metrics to assess gaps in their coverage. The mappings indicated that there are no gaps in the security ideals and that the ten core technical metrics provide significant coverage of standard security issues with 87% coverage. Based

  10. INFORMATIVE ENERGY METRIC FOR SIMILARITY MEASURE IN REPRODUCING KERNEL HILBERT SPACES

    Directory of Open Access Journals (Sweden)

    Songhua Liu

    2012-02-01

    Full Text Available In this paper, an information energy metric (IEM) is obtained by similarity computing for high-dimensional samples in a reproducing kernel Hilbert space (RKHS). Firstly, similar/dissimilar subsets and their corresponding informative energy functions are defined. Secondly, IEM is proposed as a similarity measure for those subsets, which converts the non-metric distances into metric ones. Finally, applications of this metric are introduced, such as to classification problems. Experimental results validate the effectiveness of the proposed method.
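
    The abstract does not give the IEM construction itself; as background on how an RKHS turns similarities into a proper metric, the standard kernel-induced distance is d(x, y) = sqrt(k(x, x) − 2k(x, y) + k(y, y)). A minimal sketch with a Gaussian kernel (an illustrative choice, not necessarily the paper's):

        import numpy as np

        def gaussian_kernel(x, y, sigma=1.0):
            diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
            return np.exp(-diff.dot(diff) / (2.0 * sigma**2))

        def rkhs_distance(x, y, kernel=gaussian_kernel):
            # Distance between the feature-space images of x and y; this satisfies
            # the metric axioms for any positive-definite kernel.
            return np.sqrt(kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y))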

  11. Metrics for measuring distances in configuration spaces

    International Nuclear Information System (INIS)

    Sadeghi, Ali; Ghasemi, S. Alireza; Schaefer, Bastian; Mohr, Stephan; Goedecker, Stefan; Lill, Markus A.

    2013-01-01

    In order to characterize molecular structures we introduce configurational fingerprint vectors which are counterparts of quantities used experimentally to identify structures. The Euclidean distance between the configurational fingerprint vectors satisfies the properties of a metric and can therefore safely be used to measure dissimilarities between configurations in the high dimensional configuration space. In particular we show that these metrics are a perfect and computationally cheap replacement for the root-mean-square distance (RMSD) when one has to decide whether two noise contaminated configurations are identical or not. We introduce a Monte Carlo approach to obtain the global minimum of the RMSD between configurations, which is obtained from a global minimization over all translations, rotations, and permutations of atomic indices
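
    A minimal sketch of the fingerprint idea, here using the sorted eigenvalue spectrum of the interatomic distance matrix as the vector (the paper builds its fingerprints from related matrices; this simplified variant is likewise invariant to translation, rotation, and atom permutation):

        import numpy as np

        def fingerprint(coords):
            """Sorted eigenvalues of the pairwise-distance matrix of a configuration."""
            coords = np.asarray(coords, dtype=float)        # shape (n_atoms, 3)
            dmat = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            return np.sort(np.linalg.eigvalsh(dmat))

        def configuration_distance(coords_a, coords_b):
            # Euclidean distance between fingerprint vectors is a true metric.
            return np.linalg.norm(fingerprint(coords_a) - fingerprint(coords_b))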

  12. Contrasting Various Metrics for Measuring Tropical Cyclone Activity

    Directory of Open Access Journals (Sweden)

    Jia-Yuh Yu, Ping-Gin Chiu

    2012-01-01

    Full Text Available Popular metrics used for measuring tropical cyclone (TC) activity, including NTC (number of tropical cyclones), TCD (tropical cyclone days), ACE (accumulated cyclone energy), and PDI (power dissipation index), along with two newly proposed indices, RACE (revised accumulated cyclone energy) and RPDI (revised power dissipation index), are compared using the JTWC (Joint Typhoon Warning Center) best-track data of TC over the western North Pacific basin. Our study shows that, while the above metrics demonstrate various degrees of discrepancy, in practical terms they are all able to produce meaningful temporal and spatial changes in response to climate variability. Compared with the conventional ACE and PDI, RACE and RPDI seem to provide a more precise estimate of the total TC activity, especially in projecting the upswing trend of TC activity over the past few decades, simply because of a better approach in estimating TC wind energy. However, we would argue that there is still no need to find a “universal” or “best” metric for TC activity because different metrics are designed to stratify different aspects of TC activity, and whether the selected metric is appropriate or not should be determined solely by the purpose of study. Except for magnitude difference, the analysis results seem insensitive to the choice of the best-track datasets.
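
    For reference, the two conventional indices compared above have simple closed forms: ACE sums the squares, and PDI the cubes, of the 6-hourly maximum sustained winds at tropical-storm strength or higher. A sketch (RACE and RPDI are the paper's revisions and are not reproduced here):

        import numpy as np

        def ace(vmax_kt):
            """Accumulated cyclone energy from 6-hourly max sustained winds (knots)."""
            v = np.asarray(vmax_kt, dtype=float)
            v = v[v >= 35.0]                  # tropical-storm strength or higher
            return 1e-4 * np.sum(v**2)        # conventional units: 10^4 kt^2

        def pdi(vmax_kt):
            """Power dissipation index from the same wind records."""
            v = np.asarray(vmax_kt, dtype=float)
            v = v[v >= 35.0]
            return np.sum(v**3)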

  13. Robust Design Impact Metrics: Measuring the effect of implementing and using Robust Design

    DEFF Research Database (Denmark)

    Ebro, Martin; Olesen, Jesper; Howard, Thomas J.

    2014-01-01

    Measuring the performance of an organisation’s product development process can be challenging due to the limited use of metrics in R&D. An organisation considering whether to use Robust Design as an integrated part of their development process may find it difficult to define whether it is relevant, and afterwards measure the effect of having implemented it. This publication identifies and evaluates Robust Design-related metrics and finds that 2 metrics are especially useful: 1) Relative amount of R&D Resources spent after Design Verification and 2) Number of ‘change notes’ after Design Verification. The metrics have been applied in a case company to test the assumptions made during the evaluation. It is concluded that the metrics are useful and relevant, but further work is necessary to make a proper overview and categorisation of different types of robustness related metrics.

  14. Diffuse Reflectance Spectroscopy for Surface Measurement of Liver Pathology.

    Science.gov (United States)

    Nilsson, Jan H; Reistad, Nina; Brange, Hannes; Öberg, Carl-Fredrik; Sturesson, Christian

    2017-01-01

    Liver parenchymal injuries such as steatosis, steatohepatitis, fibrosis, and sinusoidal obstruction syndrome can lead to increased morbidity and liver failure after liver resection. Diffuse reflectance spectroscopy (DRS) is an optical measuring method that is fast, convenient, and established. DRS has previously been used on the liver with an invasive technique consisting of a needle that is inserted into the parenchyma. We developed a DRS system with a hand-held probe that is applied to the liver surface. In this study, we investigated the impact of the liver capsule on DRS measurements and whether liver surface measurements are representative of the whole liver. We also wanted to confirm that we could discriminate between tumor and liver parenchyma by DRS. The instrumentation setup consisted of a light source, a fiber-optic contact probe, and two spectrometers connected to a computer. Patients scheduled for liver resection due to hepatic malignancy were included, and DRS measurements were performed on the excised liver part with and without the liver capsule and alongside a newly cut surface. To estimate the scattering parameters and tissue chromophore volume fractions, including blood, bile, and fat, the measured diffuse reflectance spectra were applied to an analytical model. In total, 960 DRS spectra from the excised liver tissue of 18 patients were analyzed. All factors analyzed regarding tumor versus liver tissue were significantly different. When measuring through the capsule, the blood volume fraction was found to be 8.4 ± 3.5%, the lipid volume fraction was 9.9 ± 4.7%, and the bile volume fraction was 8.2 ± 4.6%. No differences could be found between surface measurements and cross-sectional measurements. In measurements with/without the liver capsule, the differences in volume fraction were 1.63% (0.75-2.77), -0.54% (-2.97 to 0.32), and -0.15% (-1.06 to 1.24) for blood, lipid, and bile, respectively. This study shows that it is possible to manage DRS

  15. Quality measurement and improvement in liver transplantation.

    Science.gov (United States)

    Mathur, Amit K; Talwalkar, Jayant

    2018-06-01

    There is growing interest in the quality of health care delivery in liver transplantation. Multiple stakeholders, including patients, transplant providers and their hospitals, payers, and regulatory bodies have an interest in measuring and monitoring quality in the liver transplant process, and understanding differences in quality across centres. This article aims to provide an overview of quality measurement and regulatory issues in liver transplantation performed within the United States. We review how broader definitions of health care quality should be applied to liver transplant care models. We outline the status quo including the current regulatory agencies, public reporting mechanisms, and requirements around quality assurance and performance improvement (QAPI) activities. Additionally, we further discuss unintended consequences and opportunities for growth in quality measurement. Quality measurement and the integration of quality improvement strategies into liver transplant programmes hold significant promise, but multiple challenges to successful implementation must be addressed to optimise value. Copyright © 2018 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.

  16. Acoustic radiation force impulse elastography of the liver. Can fat deposition in the liver affect the measurement of liver stiffness?

    International Nuclear Information System (INIS)

    Motosugi, Utaroh; Ichikawa, Tomoaki; Araki, Tsutomu; Niitsuma, Yoshibumi

    2011-01-01

    The aim of this study was to compare acoustic radiation force impulse (ARFI) results between livers with and without fat deposition. We studied 200 consecutive healthy individuals who underwent health checkups at our institution. The subjects were divided into three groups according to the echogenicity of the liver on ultrasonography (US) and the liver-spleen attenuation ratio index (LSR) on computed tomography: normal liver group (n=121, no evidence of bright liver on US and LSR >1); fatty liver group (n=46, bright liver on US and LSR <1); subjects ... 5 days a week (n=18) were excluded from the analysis. The velocities measured by ARFI in the normal and fatty liver groups were compared using the two one-sided tests procedure. The mean (SD) velocities measured in the normal and fatty liver groups were 1.03 (0.12) m/s and 1.02 (0.12) m/s, respectively. The ARFI results of the fatty liver group were similar to those of the normal liver group (P<0.0001). This study suggested that fat deposition in the liver does not affect the liver stiffness measurement determined by ARFI. (author)
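
    The "two one-sided tests" (TOST) procedure referenced above declares two groups equivalent when their mean difference lies significantly inside ±delta. A minimal sketch; the margin delta = 0.1 m/s below is a hypothetical choice, not the study's:

        import numpy as np
        from scipy import stats

        def tost_p(a, b, delta=0.1):
            """Larger of the two one-sided p-values; a small value supports equivalence."""
            a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
            diff = a.mean() - b.mean()
            se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
            df = a.size + b.size - 2                              # simple pooled approximation
            p_lower = 1.0 - stats.t.cdf((diff + delta) / se, df)  # H0: diff <= -delta
            p_upper = stats.t.cdf((diff - delta) / se, df)        # H0: diff >= +delta
            return max(p_lower, p_upper)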

  17. Age dependence of rat liver function measurements

    DEFF Research Database (Denmark)

    Fischer-Nielsen, A; Poulsen, H E; Hansen, B A

    1989-01-01

    Changes in the galactose elimination capacity, the capacity of urea-N synthesis and antipyrine clearance were studied in male Wistar rats at the age of 8, 20 and 44 weeks. Further, liver tissue concentrations of microsomal cytochrome P-450, microsomal protein and glutathione were measured. All liver function measurements increased from the age of 8 to 44 weeks when expressed in absolute values. In relation to body weight, these function measurements were unchanged or reduced from week 8 to week 20. At week 44, galactose elimination capacity and capacity of urea-N synthesis related to body weight were increased by 10% and 36%, respectively, and antipyrine plasma clearance was reduced to 50%. Liver tissue concentrations of microsomal cytochrome P-450 and microsomal protein increased with age when expressed in absolute values, but were unchanged per g liver, i.e., closely related to liver...

  18. Measurement of liver volume by emission computed tomography

    International Nuclear Information System (INIS)

    Kan, M.K.; Hopkins, G.B.

    1979-01-01

    In 22 volunteers without clinical or laboratory evidence of liver disease, liver volume was determined using single-photon emission computed tomography (ECT). This technique provided excellent object contrast between the liver and its surroundings and permitted calculation of liver volume without geometric assumptions about the liver's configuration. Reproducibility of results was satisfactory, with a root-mean-square error of less than 6% between duplicate measurements in 15 individuals. The volume measurements were validated by the use of phantoms

  19. On the differential structure of metric measure spaces and applications

    CERN Document Server

    Gigli, Nicola

    2015-01-01

    The main goals of this paper are: (i) To develop an abstract differential calculus on metric measure spaces by investigating the duality relations between differentials and gradients of Sobolev functions. This will be achieved without calling into play any sort of analysis in charts, our assumptions being: the metric space is complete and separable and the measure is Radon and non-negative. (ii) To employ these notions of calculus to provide, via integration by parts, a general definition of distributional Laplacian, thus giving a meaning to an expression like \\Delta g=\\mu, where g is a functi

  20. Productivity in Pediatric Palliative Care: Measuring and Monitoring an Elusive Metric.

    Science.gov (United States)

    Kaye, Erica C; Abramson, Zachary R; Snaman, Jennifer M; Friebert, Sarah E; Baker, Justin N

    2017-05-01

    Workforce productivity is poorly defined in health care. Particularly in the field of pediatric palliative care (PPC), the absence of consensus metrics impedes aggregation and analysis of data to track workforce efficiency and effectiveness. Lack of uniformly measured data also compromises the development of innovative strategies to improve productivity and hinders investigation of the link between productivity and quality of care, which are interrelated but not interchangeable. To review the literature regarding the definition and measurement of productivity in PPC; to identify barriers to productivity within traditional PPC models; and to recommend novel metrics to study productivity as a component of quality care in PPC. PubMed® and Cochrane Database of Systematic Reviews searches for scholarly literature were performed using key words (pediatric palliative care, palliative care, team, workforce, workflow, productivity, algorithm, quality care, quality improvement, quality metric, inpatient, hospital, consultation, model) for articles published between 2000 and 2016. Organizational searches of Center to Advance Palliative Care, National Hospice and Palliative Care Organization, National Association for Home Care & Hospice, American Academy of Hospice and Palliative Medicine, Hospice and Palliative Nurses Association, National Quality Forum, and National Consensus Project for Quality Palliative Care were also performed. Additional semistructured interviews were conducted with directors from seven prominent PPC programs across the U.S. to review standard operating procedures for PPC team workflow and productivity. Little consensus exists in the PPC field regarding optimal ways to define, measure, and analyze provider and program productivity. Barriers to accurate monitoring of productivity include difficulties with identification, measurement, and interpretation of metrics applicable to an interdisciplinary care paradigm. In the context of inefficiencies

  1. Information Entropy-Based Metrics for Measuring Emergences in Artificial Societies

    Directory of Open Access Journals (Sweden)

    Mingsheng Tang

    2014-08-01

    Full Text Available Emergence is a common phenomenon, and it is also a general and important concept in complex dynamic systems like artificial societies. Usually, artificial societies are used for assisting in resolving several complex social issues (e.g., emergency management, intelligent transportation systems) with the aid of computer science. The levels of an emergence may have an effect on decision making, and the occurrence and degree of an emergence are generally perceived by human observers. However, due to the ambiguity and inaccuracy of human observers, proposing a quantitative method to measure emergences in artificial societies is a meaningful and challenging task. This article mainly concentrates upon three kinds of emergences in artificial societies, including emergence of attribution, emergence of behavior, and emergence of structure. Based on information entropy, three metrics have been proposed to measure emergences in a quantitative way. Meanwhile, the correctness of these metrics has been verified through three case studies (the spread of an infectious influenza, a dynamic microblog network, and a flock of birds) with several experimental simulations on the Netlogo platform. These experimental results confirm that these metrics increase with the rising degree of emergences. In addition, this article also discusses the limitations and extended applications of these metrics.
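
    The basic ingredient of all three metrics is Shannon entropy over a population-level distribution; the exact metric definitions are in the paper. A minimal sketch for one agent attribute:

        import numpy as np
        from collections import Counter

        def attribute_entropy(values):
            """Shannon entropy (bits) of the distribution of one attribute."""
            counts = np.array(list(Counter(values).values()), dtype=float)
            p = counts / counts.sum()
            return float(-np.sum(p * np.log2(p)))

        # Entropy is maximal for a uniform distribution and falls as agents align.
        print(attribute_entropy(["infected"] * 90 + ["healthy"] * 10))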

  2. Sharp metric obstructions for quasi-Einstein metrics

    Science.gov (United States)

    Case, Jeffrey S.

    2013-02-01

    Using the tractor calculus to study smooth metric measure spaces, we adapt results of Gover and Nurowski to give sharp metric obstructions to the existence of quasi-Einstein metrics on suitably generic manifolds. We do this by introducing an analogue of the Weyl tractor W to the setting of smooth metric measure spaces. The obstructions we obtain can be realized as tensorial invariants which are polynomial in the Riemann curvature tensor and its divergence. By taking suitable limits of their tensorial forms, we then find obstructions to the existence of static potentials, generalizing to higher dimensions a result of Bartnik and Tod, and to the existence of potentials for gradient Ricci solitons.

  3. Impact of liver fibrosis and fatty liver on T1rho measurements: A prospective study

    International Nuclear Information System (INIS)

    Xie, Shuang Shuang; Li, Qing; Cheng, Yue; Shen, Wen; Zhang, Yu; Zhuo, Zhi Zheng; Zhao, Guiming

    2017-01-01

    To investigate the liver T1rho values for detecting fibrosis, and the potential impact of fatty liver on T1rho measurements. This study included 18 healthy subjects, 18 patients with fatty liver, and 18 patients with liver fibrosis, who underwent T1rho MRI and mDIXON collections. Liver T1rho, proton density fat fraction (PDFF) and T2* values were measured and compared among the three groups. Receiver operating characteristic (ROC) curve analysis was performed to evaluate the T1rho values for detecting liver fibrosis. Liver T1rho values were correlated with PDFF, T2* values and clinical data. Liver T1rho and PDFF values were significantly different (p < 0.001), whereas the T2* values were similar (p = 0.766), among the three groups. Mean liver T1rho values in the fibrotic group (52.6 ± 6.8 ms) were significantly higher than those of healthy subjects (44.9 ± 2.8 ms, p < 0.001) and the fatty liver group (45.0 ± 3.5 ms, p < 0.001), performed well for detecting fibrosis at a threshold of 49.5 ms (area under the ROC curve, 0.855), and showed no correlation with PDFF, T2* values, subject age, or body mass index (p > 0.05). T1rho MRI is useful for noninvasive detection of liver fibrosis, and may not be affected by the presence of fatty liver.

  4. An Introduction to the SI Metric System. Inservice Guide for Teaching Measurement, Kindergarten Through Grade Eight.

    Science.gov (United States)

    California State Dept. of Education, Sacramento.

    This handbook was designed to serve as a reference for teacher workshops that: (1) introduce the metric system and help teachers gain confidence with metric measurement, and (2) develop classroom measurement activities. One chapter presents the history and basic features of SI metrics. A second chapter presents a model for the measurement program.…

  5. 41 CFR 105-72.205 - Metric system of measurement.

    Science.gov (United States)

    2010-07-01

    ... Management Regulations System (Continued) GENERAL SERVICES ADMINISTRATION Regional Offices-General Services Administration 72-UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act...

  6. Study of liver volume measurement and its clinical application for liver transplantation using multiple-slice spiral CT

    International Nuclear Information System (INIS)

    Peng Zhiyi; Yu Zhefeng; Kuang Pingding; Xiao Shengxiang; Huang Dongsheng; Zheng Shusen; Wu Jian

    2004-01-01

    Objective: To study the accuracy of liver volume measurement using MSCT and its application in liver transplantation. Methods: (1) Experimental study. Ten pig livers were scanned using MSCT with two collimations (3.2 mm and 6.5 mm) and pitch 1.25. A semi-automatic method was used to reconstruct 3D liver models and measure the liver volume. (2) Clinical study. Twenty-three patients received MSCT scans with a collimation of 6.5 mm before liver transplantation. The same method was used to calculate the liver volume, and the measurement was repeated by the same observer after 1 month. Results: (1) Experimental study. Actual liver volumes were (1134.1 ± 288.0) ml. Liver volumes by MSCT with the two collimations were (1125.0 ± 282.5) ml (3.2 mm) and (1101.6 ± 277.6) ml (6.5 mm). The accuracy was (99.5 ± 0.8)% and (97.4 ± 0.8)%, respectively. Both showed equally good agreement with actual liver volume: r=0.999, P<0.01. (2) Clinical study. Actual liver volumes were (1455.7±730.0) ml. Liver volume by MSCT was (1462.7 ± 774.1) ml. The accuracy was (99.5±9.6)%, r=0.986, P<0.01. Liver volume measured again was (1449.4 ± 768.9) ml, r=0.991 (P<0.01). Conclusion: MSCT can assess liver volume accurately, and could be used as a routine step in evaluation before liver transplantation.
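
    Once the liver has been segmented (the semi-automatic step above), the volume computation itself reduces to counting voxels: volume = voxel count × single-voxel volume. A sketch, assuming a binary mask and voxel spacing in millimetres; the function name and example spacing are illustrative.

        import numpy as np

        def liver_volume_ml(mask, spacing_mm):
            """Volume in ml of a binary segmentation with given voxel spacing (mm)."""
            voxel_mm3 = float(np.prod(spacing_mm))               # volume of one voxel
            return np.count_nonzero(mask) * voxel_mm3 / 1000.0   # mm^3 -> ml

        # e.g. liver_volume_ml(mask, spacing_mm=(0.7, 0.7, 1.25))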

  7. Measures and Metrics for Feasibility of Proof-of-Concept Studies With Human Immunodeficiency Virus Rapid Point-of-Care Technologies

    Science.gov (United States)

    Pant Pai, Nitika; Chiavegatti, Tiago; Vijh, Rohit; Karatzas, Nicolaos; Daher, Jana; Smallwood, Megan; Wong, Tom; Engel, Nora

    2017-01-01

    Objective: Pilot (feasibility) studies form a vast majority of diagnostic studies with point-of-care technologies but often lack use of clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to (a) catalog feasibility measures/metrics and (b) propose a framework. Methods: For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation centered and patient centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. Findings: We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Although we observed poorly defined measures and metrics for feasibility, preference, and patient experience, in contrast, acceptability measure was the best defined. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified and reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. Conclusions: Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization. PMID:29333105

  8. National Quality Forum Colon Cancer Quality Metric Performance: How Are Hospitals Measuring Up?

    Science.gov (United States)

    Mason, Meredith C; Chang, George J; Petersen, Laura A; Sada, Yvonne H; Tran Cao, Hop S; Chai, Christy; Berger, David H; Massarweh, Nader N

    2017-12-01

    To evaluate the impact of care at high-performing hospitals on the National Quality Forum (NQF) colon cancer metrics. The NQF endorses evaluating ≥12 lymph nodes (LNs), adjuvant chemotherapy (AC) for stage III patients, and AC within 4 months of diagnosis as colon cancer quality indicators. Data on hospital-level metric performance and the association with survival are unclear. Retrospective cohort study of 218,186 patients with resected stage I to III colon cancer in the National Cancer Data Base (2004-2012). High-performing hospitals (>75% achievement) were identified by the proportion of patients achieving each measure. The association between hospital performance and survival was evaluated using Cox shared frailty modeling. Only hospital LN performance improved (15.8% in 2004 vs 80.7% in 2012; trend test, P < 0.001), and risk of death decreased with the number of metrics achieved in a stepwise fashion [0 metrics, reference; 1, hazard ratio (HR) 0.96 (0.89-1.03); 2, HR 0.92 (0.87-0.98); 3, HR 0.85 (0.80-0.90); 2 vs 1, HR 0.96 (0.91-1.01); 3 vs 1, HR 0.89 (0.84-0.93); 3 vs 2, HR 0.95 (0.89-0.95)]. Performance on metrics in combination was associated with lower risk of death [LN + AC, HR 0.86 (0.78-0.95); AC + timely AC, HR 0.92 (0.87-0.98); LN + AC + timely AC, HR 0.85 (0.80-0.90)], whereas individual measures were not [LN, HR 0.95 (0.88-1.04); AC, HR 0.95 (0.87-1.05)]. Less than half of hospitals perform well on these NQF colon cancer metrics concurrently, and high performance on individual measures is not associated with improved survival. Quality improvement efforts should shift focus from individual measures to defining composite measures encompassing the overall multimodal care pathway and capturing successful transitions from one care modality to another.

  9. Measuring floodplain spatial patterns using continuous surface metrics at multiple scales

    Science.gov (United States)

    Scown, Murray W.; Thoms, Martin C.; DeJager, Nathan R.

    2015-01-01

    Interactions between fluvial processes and floodplain ecosystems occur upon a floodplain surface that is often physically complex. Spatial patterns in floodplain topography have only recently been quantified over multiple scales, and discrepancies exist in how floodplain surfaces are perceived to be spatially organised. We measured spatial patterns in floodplain topography for pool 9 of the Upper Mississippi River, USA, using moving window analyses of eight surface metrics applied to a 1 × 1 m² DEM over multiple scales. The metrics used were Range, SD, Skewness, Kurtosis, CV, SDCURV, Rugosity, and Vol:Area, and window sizes ranged from 10 to 1000 m in radius. Surface metric values were highly variable across the floodplain and revealed a high degree of spatial organisation in floodplain topography. Moran's I correlograms fit to the landscape of each metric at each window size revealed that patchiness existed at nearly all window sizes, but the strength and scale of patchiness changed within window size, suggesting that multiple scales of patchiness and patch structure exist in the topography of this floodplain. Scale thresholds in the spatial patterns were observed, particularly between the 50 and 100 m window sizes for all surface metrics and between the 500 and 750 m window sizes for most metrics. These threshold scales are ~ 15–20% and 150% of the main channel width (1–2% and 10–15% of the floodplain width), respectively. These thresholds may be related to structuring processes operating across distinct scale ranges. By coupling surface metrics, multi-scale analyses, and correlograms, quantifying floodplain topographic complexity is possible in ways that should assist in clarifying how floodplain ecosystems are structured.
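
    A minimal sketch of one such moving-window metric (the local Range) over a DEM with a circular window, assuming scipy; the study's other metrics substitute different window statistics.

        import numpy as np
        from scipy.ndimage import generic_filter

        def local_range(dem, radius_px):
            """Max-minus-min elevation within a circular window at every cell."""
            y, x = np.ogrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
            footprint = (x**2 + y**2) <= radius_px**2
            return generic_filter(np.asarray(dem, dtype=float),
                                  lambda w: w.max() - w.min(),
                                  footprint=footprint)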

  10. Impact of liver fibrosis and fatty liver on T1rho measurements: A prospective study

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Shuang Shuang; Li, Qing; Cheng, Yue; Shen, Wen [Dept. of Radiology, Tianjin First Center Hospital, Tianjin (China); Zhang, Yu; Zhuo, Zhi Zheng [Clinical Science, Philips Healthcare, Beijing (China); Zhao, Guiming [Dept. of Hepatology, Tianjin Second People's Hospital, Tianjin (China)

    2017-11-15

    To investigate the liver T1rho values for detecting fibrosis, and the potential impact of fatty liver on T1rho measurements. This study included 18 healthy subjects, 18 patients with fatty liver, and 18 patients with liver fibrosis, who underwent T1rho MRI and mDIXON collections. Liver T1rho, proton density fat fraction (PDFF) and T2* values were measured and compared among the three groups. Receiver operating characteristic (ROC) curve analysis was performed to evaluate the T1rho values for detecting liver fibrosis. Liver T1rho values were correlated with PDFF, T2* values and clinical data. Liver T1rho and PDFF values were significantly different (p < 0.001), whereas the T2* (p = 0.766) values were similar, among the three groups. Mean liver T1rho values in the fibrotic group (52.6 ± 6.8 ms) were significantly higher than those of healthy subjects (44.9 ± 2.8 ms, p < 0.001) and fatty liver group (45.0 ± 3.5 ms, p < 0.001). Mean liver T1rho values were similar between healthy subjects and fatty liver group (p = 0.999). PDFF values in the fatty liver group (16.07 ± 10.59%) were significantly higher than those of healthy subjects (1.43 ± 1.36%, p < 0.001) and fibrosis group (1.07 ± 1.06%, p < 0.001). PDFF values were similar in healthy subjects and fibrosis group (p = 0.984). Mean T1rho values performed well to detect fibrosis at a threshold of 49.5 ms (area under the ROC curve, 0.855), had a moderate correlation with liver stiffness (r = 0.671, p = 0.012), and no correlation with PDFF, T2* values, subject age, or body mass index (p > 0.05). T1rho MRI is useful for noninvasive detection of liver fibrosis, and may not be affected by the presence of fatty liver.
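
    The ROC step reported above (threshold 49.5 ms, AUC 0.855) is standard; a sketch of computing the AUC and a Youden-optimal cutoff with scikit-learn, where the inputs are per-subject T1rho values and binary fibrosis labels (the function name is illustrative):

        import numpy as np
        from sklearn.metrics import roc_curve, auc

        def t1rho_cutoff(t1rho_ms, has_fibrosis):
            """ROC AUC and the Youden-optimal threshold for a continuous marker."""
            fpr, tpr, thresholds = roc_curve(has_fibrosis, t1rho_ms)
            j = int(np.argmax(tpr - fpr))   # Youden's J = sensitivity + specificity - 1
            return auc(fpr, tpr), thresholds[j]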

  11. 43 CFR 12.915 - Metric system of measurement.

    Science.gov (United States)

    2010-10-01

    ... procurements, grants, and other business-related activities. Metric implementation may take longer where the... recipient, such as when foreign competitors are producing competing products in non-metric units. (End of...

  12. Semantic metrics

    OpenAIRE

    Hu, Bo; Kalfoglou, Yannis; Dupplaw, David; Alani, Harith; Lewis, Paul; Shadbolt, Nigel

    2006-01-01

    In the context of the Semantic Web, many ontology-related operations, e.g. ontology ranking, segmentation, alignment, articulation, reuse, evaluation, can be boiled down to one fundamental operation: computing the similarity and/or dissimilarity among ontological entities, and in some cases among ontologies themselves. In this paper, we review standard metrics for computing distance measures and we propose a series of semantic metrics. We give a formal account of semantic metrics drawn from a...

  13. Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics

    Directory of Open Access Journals (Sweden)

    Kang Rui

    2016-06-01

    Full Text Available In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability), and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate the conservatism in the estimations of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom, otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.

  14. Lean manufacturing measurement: the relationship between lean activities and lean metrics

    Directory of Open Access Journals (Sweden)

    Manotas Duque Diego Fernando

    2007-10-01

    Full Text Available Lean Manufacturing was developed by Toyota Motor Company to address their specific needs in a restricted market in times of economic trouble. These concepts have been studied and proven to be transferrable and applicable to a wide variety of industries. This paper aims to integrate a set of metrics that have been proposed by different authors in such a way that they are consistent with the different stages and elements of Lean Manufacturing implementations. To achieve this, two frameworks for Lean implementations are presented, and then the main factors for success are used as the basis to propose metrics that measure the advance in these factors. A tabular display of the impact of “Lean activities” on the metrics is presented, suggesting that many a priori assumptions about the benefits at different levels of improvement should be accurate. Finally, some ideas for future research and extension of the applications proposed in this paper are presented as closing points.

  15. Using Complexity Metrics With R-R Intervals and BPM Heart Rate Measures

    DEFF Research Database (Denmark)

    Wallot, Sebastian; Fusaroli, Riccardo; Tylén, Kristian

    2013-01-01

    Lately, growing attention in the health sciences has been paid to the dynamics of heart rate as indicator of impending failures and for prognoses. Likewise, in social and cognitive sciences, heart rate is increasingly employed as a measure of arousal, emotional engagement and as a marker of interpersonal coordination. However, there is no consensus about which measurements and analytical tools are most appropriate in mapping the temporal dynamics of heart rate and quite different metrics are reported in the literature. As complexity metrics of heart rate variability depend critically
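
    As one concrete example of the measurement choices at stake, two standard time-domain heart-rate-variability statistics computed directly from R-R intervals (not the chapter's complexity metrics themselves):

        import numpy as np

        def sdnn(rr_ms):
            """Standard deviation of R-R intervals (ms)."""
            return float(np.std(np.asarray(rr_ms, dtype=float), ddof=1))

        def rmssd(rr_ms):
            """Root mean square of successive R-R interval differences (ms)."""
            d = np.diff(np.asarray(rr_ms, dtype=float))
            return float(np.sqrt(np.mean(d**2)))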

  16. The spleen-liver uptake ratio in liver scan: review of its measurement and correlation between hemodynamical changes of the liver in portal hypertension

    International Nuclear Information System (INIS)

    Lee, S. Y.; Chung, Y. A.; Chung, H. S.; Lee, H. G.; Kim, S. H.; Chung, S. K.

    1999-01-01

    We analyzed the correlation between changes of the Spleen-Liver Ratio in liver scintigrams and hemodynamic changes of the liver across overall grades of portal hypertension by a non-invasive, scintigraphic method. The methods for measurement of the Spleen-Liver Ratio were also reviewed. Hepatic scintiangiograms for 120 seconds with 250-333 MBq of 99mTc-Sn-phytate, followed by liver scintigrams, were performed in a 62-patient group consisting of clinically proven normal subjects and patients with various diffuse hepatocellular diseases. Hepatic Perfusion Indices were calculated from the Time-Activity Curves of the hepatic scintiangiograms. The Spleen-Liver Ratios of maximum, average and total counts within ROIs of the liver and spleen from both anterior and posterior liver scintigrams, and their geometrical means, were calculated. Linear correlations between each Spleen-Liver Ratio and the Hepatic Perfusion Index were evaluated. There was a strong correlation (y = 0.0002x² - 0.0049x + 0.2746, R = 0.8790, p < 0.0001) between Hepatic Perfusion Indices and Spleen-Liver Ratios calculated from posterior maximum counts of the liver scintigrams. Weaker correlations with either geometrical means of the maximum and average count methods (R = 0.8101, 0.7268, p < 0.0001) or average counts of both posterior and anterior views (R = 0.8134, 0.6200, p < 0.0001) were noted. We reconfirmed that changes of the Spleen-Liver Ratio in liver scintigrams represent hemodynamic changes in portal hypertension of diffuse hepatocellular diseases. Among them, the posterior Spleen-Liver Ratio measured by maximum counts will give the best information, and matching with the Hepatic Perfusion Index will be another useful index to evaluate the characteristic splenic extraction coefficient of a given radiocolloid for liver scintigraphy.
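
    The ratio arithmetic described above is simple once ROI counts are in hand; a sketch, with the geometric mean of the anterior and posterior ratios used to dampen depth-dependent attenuation (the study found the posterior maximum-count ratio most informative):

        import numpy as np

        def spleen_liver_ratios(spleen_ant, liver_ant, spleen_post, liver_post):
            """Spleen-to-liver count ratios from anterior/posterior ROI counts."""
            r_ant = spleen_ant / liver_ant
            r_post = spleen_post / liver_post
            return {"anterior": r_ant,
                    "posterior": r_post,
                    "geometric_mean": float(np.sqrt(r_ant * r_post))}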

  17. Virtual reality, ultrasound-guided liver biopsy simulator: development and performance discrimination

    Science.gov (United States)

    Johnson, S J; Hunt, C M; Woolnough, H M; Crawshaw, M; Kilkenny, C; Gould, D A; England, A; Sinha, A; Villard, P F

    2012-01-01

    Objectives: The aim of this article was to identify and prospectively investigate simulated ultrasound-guided targeted liver biopsy performance metrics as differentiators between levels of expertise in interventional radiology. Methods: Task analysis produced detailed procedural step documentation allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Consultant (n=14; male=11, female=3) and trainee (n=26; male=19, female=7) scores on the performance metrics were compared. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Independent t-tests and analysis of variance (ANOVA) investigated differences between groups. Results: Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=−2.487 (−2.040 to −0.207); probe usage time, p=0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=−2.272 (−0.028 to −0.002). ANOVA reported significant differences across years of experience (0–1, 1–2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better performance scores on all 19 performance metrics. Conclusion: It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required. PMID:21304005
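
    The statistics used above are routine; a sketch of both tests with scipy on one performance metric, where all scores below are hypothetical placeholders:

        from scipy import stats

        consultants = [0.82, 0.91, 0.78, 0.88]            # placeholder scores
        trainees = [0.60, 0.71, 0.55, 0.66, 0.69]
        t, p = stats.ttest_ind(consultants, trainees)     # consultant vs. trainee

        yr0_1 = [0.58, 0.62]                              # experience bands (placeholders)
        yr1_2 = [0.66, 0.70, 0.71]
        yr3p = [0.80, 0.85, 0.88]
        f, p_anova = stats.f_oneway(yr0_1, yr1_2, yr3p)   # one-way ANOVA across bands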

  18. Engineering performance metrics

    Science.gov (United States)

    Delozier, R.; Snyder, N.

    1993-03-01

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved normal systems design phases including conceptual design, detailed design, implementation, and integration. The lessons learned from this effort will be explored in this paper. These lessons learned may provide a starting point for other large engineering organizations seeking to institute a performance measurement system. To facilitate this effort, a team was chartered to assist in the development of the metrics system. This team, consisting of customers and Engineering staff members, was utilized to ensure that the needs and views of the customers were considered in the development of performance measurements. The development of a system of metrics is no different than the development of any type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  19. Risk Metrics and Measures for an Extended PSA

    International Nuclear Information System (INIS)

    Wielenberg, A.; Loeffler, H.; Hasnaoui, C.; Burgazzi, L.; Cazzoli, E.; Jan, P.; La Rovere, S.; Siklossy, T.; Vitazkova, J.; Raimond, E.

    2016-01-01

    This report provides a review of the main risk measures used for Level 1 and Level 2 PSA. It depicts their advantages, limitations and disadvantages, and develops some more precise risk measures relevant for extended PSAs and helpful for decision-making. This report does not recommend or suggest any quantitative value for the risk measures, nor does it discuss decision-making based on PSA results in detail. The choice of one appropriate risk measure or a set of risk measures depends on the decision-making approach as well as on the issue to be decided. The general approach for decision making aims at a multi-attribute evaluation. This can include the use of several risk measures as appropriate. Section 5 provides some recommendations on the main risk metrics to be used for an extended PSA. For Level 1 PSA, Fuel Damage Frequency and Radionuclide Mobilization Frequency are recommended. For Level 2 PSA, the characterization of loss of containment function and a total risk measure based on the aggregated activity releases of all sequences rated by their frequencies is proposed. (authors)
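
    The proposed Level 2 total risk measure is, in essence, a frequency-weighted aggregation; a minimal sketch, assuming each sequence is given as a (frequency per year, activity release) pair and that the function name is illustrative:

        def total_release_risk(sequences):
            """Sum of activity releases over all sequences, weighted by frequency."""
            # sequences: iterable of (frequency_per_year, release_activity) pairs
            return sum(freq * release for freq, release in sequences)

        # e.g. total_release_risk([(1e-5, 500.0), (2e-7, 30000.0)])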

  20. 10 CFR 600.306 - Metric system of measurement.

    Science.gov (United States)

    2010-01-01

    ... cause significant inefficiencies or loss of markets to United States firms. (b) Recipients are... Requirements for Grants and Cooperative Agreements With For-Profit Organizations General § 600.306 Metric... Competitiveness Act of 1988 (15 U.S.C. 205) and implemented by Executive Order 12770, states that: (1) The metric...

  1. Fault Management Metrics

    Science.gov (United States)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.

  2. Assessment of Performance Measures for Security of the Maritime Transportation Network, Port Security Metrics : Proposed Measurement of Deterrence Capability

    Science.gov (United States)

    2007-01-03

    This report is the third in a series describing the development of performance measures pertaining to the security of the maritime transportation network (port security metrics). The development of measures to guide improvements in maritime security ...

  3. An uncertainty importance measure using a distance metric for the change in a cumulative distribution function

    International Nuclear Information System (INIS)

    Chun, Moon-Hyun; Han, Seok-Jung; Tak, Nam-IL

    2000-01-01

    A simple measure of uncertainty importance using the entire change of cumulative distribution functions (CDFs) has been developed for use in probabilistic safety assessments (PSAs). The entire change of CDFs is quantified in terms of the metric distance between two CDFs. The metric distance measure developed in this study reflects the relative impact of distributional changes of inputs on the change of an output distribution, while most of the existing uncertainty importance measures reflect the magnitude of relative contribution of input uncertainties to the output uncertainty. The present measure has been evaluated analytically for various analytical distributions to examine its characteristics. To illustrate the applicability and strength of the present measure, two examples are provided. The first example is an application of the present measure to a typical problem of a system fault tree analysis and the second one is for a hypothetical non-linear model. Comparisons of the present result with those obtained by existing uncertainty importance measures show that the metric distance measure is a useful tool to express the measure of uncertainty importance in terms of the relative impact of distributional changes of inputs on the change of an output distribution
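
    A minimal sketch of a metric distance between two CDFs, here the L1 distance between empirical CDFs evaluated on a common grid (the paper's exact metric may differ in detail):

        import numpy as np

        def cdf_l1_distance(samples_a, samples_b, n_grid=1000):
            """Approximate integral of |F_a - F_b| over the pooled sample range."""
            a = np.sort(np.asarray(samples_a, dtype=float))
            b = np.sort(np.asarray(samples_b, dtype=float))
            grid = np.linspace(min(a[0], b[0]), max(a[-1], b[-1]), n_grid)
            cdf_a = np.searchsorted(a, grid, side="right") / a.size
            cdf_b = np.searchsorted(b, grid, side="right") / b.size
            # Uniform grid, so the integral is the mean gap times the span.
            return float(np.mean(np.abs(cdf_a - cdf_b)) * (grid[-1] - grid[0]))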

  4. Comparison of continuous versus categorical tumor measurement-based metrics to predict overall survival in cancer treatment trials

    Science.gov (United States)

    An, Ming-Wen; Mandrekar, Sumithra J.; Branda, Megan E.; Hillman, Shauna L.; Adjei, Alex A.; Pitot, Henry; Goldberg, Richard M.; Sargent, Daniel J.

    2011-01-01

    Purpose: The categorical definition of response assessed via the Response Evaluation Criteria in Solid Tumors has documented limitations. We sought to identify alternative metrics for tumor response that improve prediction of overall survival. Experimental Design: Individual patient data from three North Central Cancer Treatment Group trials (N0026, n=117; N9741, n=1109; N9841, n=332) were used. Continuous metrics of tumor size based on longitudinal tumor measurements were considered in addition to a trichotomized response (TriTR: Response vs. Stable vs. Progression). Cox proportional hazards models, adjusted for treatment arm and baseline tumor burden, were used to assess the impact of the metrics on subsequent overall survival, using a landmark analysis approach at 12, 16, and 24 weeks post baseline. Model discrimination was evaluated using the concordance (c) index. Results: The overall best response rates for the three trials were 26%, 45%, and 25%, respectively. While nearly all metrics were statistically significantly associated with overall survival at the different landmark time points, the c-indices for the traditional response metrics ranged from 0.59-0.65; for the continuous metrics from 0.60-0.66; and for the TriTR metrics from 0.64-0.69. The c-indices for TriTR at 12 weeks were comparable to those at 16 and 24 weeks. Conclusions: Continuous tumor-measurement-based metrics provided no predictive improvement over traditional response-based metrics or TriTR; TriTR had better predictive ability than best TriTR or confirmed response. If confirmed, TriTR represents a promising endpoint for future Phase II trials. PMID:21880789
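
    The concordance (c) index used above measures how often a model ranks risks in the order in which events actually occur; a minimal sketch that handles right-censoring in the simplest way (censored subjects are never counted as failing first):

        import numpy as np

        def c_index(risk, time, event):
            """Fraction of comparable pairs ordered correctly by the risk score."""
            risk, time, event = (np.asarray(x) for x in (risk, time, event))
            concordant, comparable = 0.0, 0
            for i in range(len(time)):
                if not event[i]:
                    continue                      # censored: never observed to fail first
                for j in range(len(time)):
                    if time[i] < time[j]:         # i fails before j: a comparable pair
                        comparable += 1
                        if risk[i] > risk[j]:
                            concordant += 1.0
                        elif risk[i] == risk[j]:
                            concordant += 0.5
            return concordant / comparable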

  5. Measuring scientific impact beyond academia: An assessment of existing impact metrics and proposed improvements.

    Science.gov (United States)

    Ravenscroft, James; Liakata, Maria; Clare, Amanda; Duma, Daniel

    2017-01-01

    How does scientific research affect the world around us? Being able to answer this question is of great importance in order to appropriately channel efforts and resources in science. The impact by scientists in academia is currently measured by citation based metrics such as h-index, i-index and citation counts. These academic metrics aim to represent the dissemination of knowledge among scientists rather than the impact of the research on the wider world. In this work we are interested in measuring scientific impact beyond academia, on the economy, society, health and legislation (comprehensive impact). Indeed scientists are asked to demonstrate evidence of such comprehensive impact by authoring case studies in the context of the Research Excellence Framework (REF). We first investigate the extent to which existing citation based metrics can be indicative of comprehensive impact. We have collected all recent REF impact case studies from 2014 and we have linked these to papers in citation networks that we constructed and derived from CiteSeerX, arXiv and PubMed Central using a number of text processing and information retrieval techniques. We have demonstrated that existing citation-based metrics for impact measurement do not correlate well with REF impact results. We also consider metrics of online attention surrounding scientific works, such as those provided by the Altmetric API. We argue that in order to be able to evaluate wider non-academic impact we need to mine information from a much wider set of resources, including social media posts, press releases, news articles and political debates stemming from academic work. We also provide our data as a free and reusable collection for further analysis, including the PubMed citation network and the correspondence between REF case studies, grant applications and the academic literature.
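
    For reference, the h-index mentioned above has a one-line definition: the largest h such that h of the author's papers have at least h citations each. A sketch:

        def h_index(citations):
            """Largest h such that h papers have >= h citations each."""
            ranked = sorted(citations, reverse=True)
            return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

        # e.g. h_index([10, 8, 5, 4, 3]) == 4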

  6. Measuring scientific impact beyond academia: An assessment of existing impact metrics and proposed improvements.

    Directory of Open Access Journals (Sweden)

    James Ravenscroft

    Full Text Available How does scientific research affect the world around us? Being able to answer this question is of great importance in order to appropriately channel efforts and resources in science. The impact of scientists in academia is currently measured by citation-based metrics such as the h-index, i-index, and citation counts. These academic metrics aim to represent the dissemination of knowledge among scientists rather than the impact of the research on the wider world. In this work we are interested in measuring scientific impact beyond academia, on the economy, society, health and legislation (comprehensive impact). Indeed, scientists are asked to demonstrate evidence of such comprehensive impact by authoring case studies in the context of the Research Excellence Framework (REF). We first investigate the extent to which existing citation-based metrics can be indicative of comprehensive impact. We have collected all recent REF impact case studies from 2014 and we have linked these to papers in citation networks that we constructed and derived from CiteSeerX, arXiv and PubMed Central using a number of text processing and information retrieval techniques. We have demonstrated that existing citation-based metrics for impact measurement do not correlate well with REF impact results. We also consider metrics of online attention surrounding scientific works, such as those provided by the Altmetric API. We argue that in order to be able to evaluate wider non-academic impact we need to mine information from a much wider set of resources, including social media posts, press releases, news articles and political debates stemming from academic work. We also provide our data as a free and reusable collection for further analysis, including the PubMed citation network and the correspondence between REF case studies, grant applications and the academic literature.

  7. Cyber threat metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Frye, Jason Neal; Veitch, Cynthia K.; Mateski, Mark Elliot; Michalski, John T.; Harris, James Mark; Trevino, Cassandra M.; Maruoka, Scott

    2012-03-01

    Threats are generally much easier to list than to describe, and much easier to describe than to measure. As a result, many organizations list threats. Fewer describe them in useful terms, and still fewer measure them in meaningful ways. This is particularly true in the dynamic and nebulous domain of cyber threats - a domain that tends to resist easy measurement and, in some cases, appears to defy any measurement. We believe the problem is tractable. In this report we describe threat metrics and models for characterizing threats consistently and unambiguously. The purpose of this report is to support the Operational Threat Assessment (OTA) phase of risk and vulnerability assessment. To this end, we focus on the task of characterizing cyber threats using consistent threat metrics and models. In particular, we address threat metrics and models for describing malicious cyber threats to US FCEB agencies and systems.

  8. Measuring the user experience collecting, analyzing, and presenting usability metrics

    CERN Document Server

    Tullis, Thomas

    2013-01-01

    Measuring the User Experience was the first book that focused on how to quantify the user experience. Now in the second edition, the authors include new material on how recent technologies have made it easier and more effective to collect a broader range of data about the user experience. As more UX and web professionals need to justify their design decisions with solid, reliable data, Measuring the User Experience provides the quantitative analysis training that these professionals need. The second edition presents new metrics such as emotional engagement, personas, k

  9. Measurement of liver and spleen volume by computed tomography using point counting technique in chronic liver disease

    International Nuclear Information System (INIS)

    Sato, Hiroyuki

    1983-01-01

    Liver and spleen volume were measured by computed tomography (CT) using a point counting technique. This method is very simple and applicable to any kind of CT scanner. The volumes of the livers and spleens estimated by this method correlated with the weights of the corresponding organs measured at autopsy or surgical operation, indicating the accuracy and usefulness of this method. Hepatic and splenic volumes were estimated by this method in 48 patients with chronic liver disease and 13 subjects with non-hepatobiliary disease. The mean hepatic volume in non-alcoholic liver cirrhosis, but not in alcoholic cirrhosis, was significantly smaller than those in non-hepatobiliary disease and other chronic liver diseases. Alcoholic cirrhosis showed significantly larger liver volume than non-alcoholic cirrhosis. In alcoholic fibrosis, the mean hepatic volume was significantly larger than in non-hepatobiliary disease. The mean splenic volumes both in alcoholic and non-alcoholic cirrhosis were significantly larger than in other diseases. A significantly positive correlation between hepatic and splenic volumes was found in alcoholic cirrhosis but not in non-alcoholic cirrhosis. These results indicate that estimation of hepatic and splenic volumes by this method is useful for the analysis of the pathophysiology of chronic liver disease. (author)
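
    Point counting is an application of the Cavalieri principle: a regular grid is overlaid on each CT slice, the points falling on the organ are counted, and the volume is the total count times the area each point represents times the slice spacing. A toy sketch (grid spacing, slice thickness, and counts are illustrative, not the study's):

```python
def volume_by_point_counting(points_per_slice, grid_spacing_cm, slice_thickness_cm):
    """Cavalieri estimate: V = (sum of points) * area per point * slice spacing."""
    area_per_point = grid_spacing_cm ** 2      # each point represents one grid cell
    return sum(points_per_slice) * area_per_point * slice_thickness_cm

# Hypothetical liver: points counted on 10 consecutive slices,
# 1 cm grid and 1 cm slice spacing -> volume in cm^3 (mL).
counts = [90, 140, 180, 200, 210, 205, 185, 150, 100, 40]
print(volume_by_point_counting(counts, grid_spacing_cm=1.0, slice_thickness_cm=1.0))
```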

  10. Measures and Metrics for Feasibility of Proof-of-Concept Studies With Human Immunodeficiency Virus Rapid Point-of-Care Technologies: The Evidence and the Framework.

    Science.gov (United States)

    Pant Pai, Nitika; Chiavegatti, Tiago; Vijh, Rohit; Karatzas, Nicolaos; Daher, Jana; Smallwood, Megan; Wong, Tom; Engel, Nora

    2017-12-01

    Pilot (feasibility) studies form a vast majority of diagnostic studies with point-of-care technologies but often lack use of clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to (a) catalog feasibility measures/metrics and (b) propose a framework. For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation centered and patient centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Although we observed poorly defined measures and metrics for feasibility, preference, and patient experience, the acceptability measure was the best defined. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified and reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization.

  11. Regge calculus from discontinuous metrics

    International Nuclear Information System (INIS)

    Khatsymovsky, V.M.

    2003-01-01

    Regge calculus is considered as a particular case of a more general system in which the link lengths of two neighbouring 4-tetrahedra do not necessarily coincide on their common face. This system is treated as one described by a metric that is discontinuous on the faces. In the superspace of all discontinuous metrics, the Regge calculus metrics form a hypersurface defined by continuity conditions. The quantum theory of the discontinuous metric system is assumed to be fixed somehow in the form of a quantum measure on (the space of functionals on) the superspace. The problem of reducing this measure to the Regge hypersurface is addressed. The quantum Regge calculus measure is defined from a discontinuous metric measure by inserting a δ-function-like phase factor. The requirement that continuity conditions be imposed in a 'face-independent' way fixes this factor uniquely. The term 'face-independent' means that the factor depends only on the (hyper)plane spanned by the face, not on its form and size. This requirement seems natural from the viewpoint of the existence of a well-defined continuum limit maximally free of lattice artefacts

  12. Particle image velocimetry correlation signal-to-noise ratio metrics and measurement uncertainty quantification

    International Nuclear Information System (INIS)

    Xue, Zhenyu; Charonko, John J; Vlachos, Pavlos P

    2014-01-01

    In particle image velocimetry (PIV) the measurement signal is contained in the recorded intensity of the particle image pattern superimposed on a variety of noise sources. The signal-to-noise-ratio (SNR) strength governs the resulting PIV cross correlation and ultimately the accuracy and uncertainty of the resulting PIV measurement. Hence we posit that correlation SNR metrics calculated from the correlation plane can be used to quantify the quality of the correlation and the resulting uncertainty of an individual measurement. In this paper we extend the original work by Charonko and Vlachos and present a framework for evaluating the correlation SNR using a set of different metrics, which in turn are used to develop models for uncertainty estimation. Several corrections have been applied in this work. The SNR metrics and corresponding models presented herein are expanded to be applicable to both standard and filtered correlations by applying a subtraction of the minimum correlation value to remove the effect of the background image noise. In addition, the notion of a ‘valid’ measurement is redefined with respect to the correlation peak width in order to be consistent with uncertainty quantification principles and distinct from an ‘outlier’ measurement. Finally the type and significance of the error distribution function is investigated. These advancements lead to more robust and reliable uncertainty estimation models compared with the original work by Charonko and Vlachos. The models are tested against both synthetic benchmark data as well as experimental measurements. In this work, U_68.5 uncertainties are estimated at the 68.5% confidence level while U_95 uncertainties are estimated at the 95% confidence level. For all cases the resulting calculated coverage factors approximate the expected theoretical confidence intervals, thus demonstrating the applicability of these new models for estimation of uncertainty for individual PIV measurements. (paper)
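
    One member of this family of correlation-plane SNR metrics is the primary peak ratio (PPR): the height of the tallest correlation peak over the second tallest, computed here after the minimum subtraction described in the abstract. A rough sketch on a synthetic correlation plane (the peak-exclusion radius is an illustrative choice, not taken from the paper):

```python
import numpy as np

def primary_peak_ratio(corr_plane, exclusion_radius=3):
    """PPR = primary peak / secondary peak, after subtracting the plane
    minimum to suppress background noise (the minimum-subtraction correction)."""
    plane = corr_plane - corr_plane.min()
    primary = plane.max()
    i, j = np.unravel_index(plane.argmax(), plane.shape)
    # Mask a neighborhood around the primary peak, then find the next peak.
    masked = plane.copy()
    masked[max(0, i - exclusion_radius): i + exclusion_radius + 1,
           max(0, j - exclusion_radius): j + exclusion_radius + 1] = 0.0
    secondary = masked.max()
    return primary / secondary if secondary > 0 else np.inf

rng = np.random.default_rng(0)
plane = rng.random((64, 64)) * 0.2    # background noise
plane[40, 25] = 1.0                   # synthetic displacement peak
print(primary_peak_ratio(plane))      # high PPR -> trustworthy vector
```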

  13. Particle image velocimetry correlation signal-to-noise ratio metrics and measurement uncertainty quantification

    Science.gov (United States)

    Xue, Zhenyu; Charonko, John J.; Vlachos, Pavlos P.

    2014-11-01

    In particle image velocimetry (PIV) the measurement signal is contained in the recorded intensity of the particle image pattern superimposed on a variety of noise sources. The signal-to-noise-ratio (SNR) strength governs the resulting PIV cross correlation and ultimately the accuracy and uncertainty of the resulting PIV measurement. Hence we posit that correlation SNR metrics calculated from the correlation plane can be used to quantify the quality of the correlation and the resulting uncertainty of an individual measurement. In this paper we extend the original work by Charonko and Vlachos and present a framework for evaluating the correlation SNR using a set of different metrics, which in turn are used to develop models for uncertainty estimation. Several corrections have been applied in this work. The SNR metrics and corresponding models presented herein are expanded to be applicable to both standard and filtered correlations by applying a subtraction of the minimum correlation value to remove the effect of the background image noise. In addition, the notion of a ‘valid’ measurement is redefined with respect to the correlation peak width in order to be consistent with uncertainty quantification principles and distinct from an ‘outlier’ measurement. Finally the type and significance of the error distribution function is investigated. These advancements lead to more robust and reliable uncertainty estimation models compared with the original work by Charonko and Vlachos. The models are tested against both synthetic benchmark data as well as experimental measurements. In this work, U_68.5 uncertainties are estimated at the 68.5% confidence level while U_95 uncertainties are estimated at the 95% confidence level. For all cases the resulting calculated coverage factors approximate the expected theoretical confidence intervals, thus demonstrating the applicability of these new models for estimation of uncertainty for individual PIV measurements.

  14. Rice by Weight, Other Produce by Bulk, and Snared Iguanas at So Much Per One. A Talk on Measurement Standards and on Metric Conversion.

    Science.gov (United States)

    Allen, Harold Don

    This script for a short radio broadcast on measurement standards and metric conversion begins by tracing the rise of the metric system in the international marketplace. Metric units are identified and briefly explained. Arguments for conversion to metric measures are presented. The history of the development and acceptance of the metric system is…

  15. The development of a practical and uncomplicated predictive equation to determine liver volume from simple linear ultrasound measurements of the liver

    International Nuclear Information System (INIS)

    Childs, Jessie T.; Thoirs, Kerry A.; Esterman, Adrian J.

    2016-01-01

    This study sought to develop a practical and uncomplicated predictive equation that could accurately calculate liver volumes, using multiple simple linear ultrasound measurements combined with measurements of body size. Penalized (lasso) regression was used to develop a new model and compare it to the ultrasonic linear measurements currently used clinically. A Bland–Altman analysis showed that the large limits of agreement of the new model render it too inaccurate to be of clinical use for estimating liver volume per se, but it holds value in tracking disease progress or response to treatment over time in individuals, and is certainly substantially better as an indicator of overall liver size than the ultrasonic linear measurements currently being used clinically. - Highlights: • A new model to calculate liver volumes from simple linear ultrasound measurements. • This model was compared to the linear measurements currently used clinically. • The new model holds value in tracking disease progress or response to treatment. • This model is better as an indicator of overall liver size.
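
    Penalized (lasso) regression of the kind used to build the model drives uninformative coefficients to zero, so only the most predictive measurements survive. A generic scikit-learn sketch (the predictors and data are hypothetical, not the study's equation):

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Hypothetical predictors: simple linear ultrasound measurements plus body size.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 4))   # columns: e.g. liver length, depth, width, BMI
true_volume = 1200 + X @ np.array([180.0, 90.0, 0.0, 40.0])  # one useless predictor
y = true_volume + rng.normal(scale=50.0, size=60)             # measurement noise

model = LassoCV(cv=5).fit(X, y)   # penalty strength chosen by cross-validation
print(model.coef_)                # the uninformative coefficient shrinks toward 0
```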

  16. Principles in selecting human capital measurements and metrics

    Directory of Open Access Journals (Sweden)

    Pharny D. Chrysler-Fox

    2014-09-01

    Research purpose: The study explored principles in selecting human capital measurements, drawing on the views and recommendations of human resource management professionals, all experts in human capital measurement. Motivation for the study: The motivation was to advance the understanding of selecting appropriate and strategic valid measurements, in order for human resource practitioners to contribute to creating value and driving strategic change. Research design, approach and method: A qualitative approach, with purposively selected cases from a selected panel of human capital measurement experts, generated a dataset through unstructured interviews, which were analysed thematically. Main findings: Nineteen themes were found. They represent a process that considers the centrality of the business strategy and a systemic integration across multiple value chains in the organisation through business partnering, in order to select measurements and generate management level-appropriate information. Practical/managerial implications: Measurement practitioners, in partnership with management from other functions, should integrate the business strategy across multiple value chains in order to select measurements. Analytics becomes critical in discovering relationships and formulating hypotheses to understand value creation. Higher education institutions should produce graduates able to deal with systems thinking and to operate within complexity. Contribution: This study identified principles to select measurements and metrics. Noticeable is the move away from the interrelated scorecard perspectives to a systemic view of the organisation in order to understand value creation. In addition, the findings may help to position the human resource management function as a strategic asset.

  17. Morphology and morphometry of the caudate lobe of the liver in two populations.

    Science.gov (United States)

    Sagoo, Mandeep Gill; Aland, R Claire; Gosden, Edward

    2018-01-01

    The caudate lobe of the liver has portal blood supply and hepatic vein drainage independent of the remainder of the liver and may be differentially affected in liver pathologies. Ultrasonographic measurement of the caudate lobe can be used to generate hepatic indices that may indicate cirrhosis. This study investigated the relationship of metrics of the caudate lobe and other morphological features of human livers from a northwest Indian Punjabi population (n = 50) and a UK Caucasian population (n = 25), which may affect the calculation of hepatic indices. The width of the right lobe of the liver was significantly smaller, while the anteroposterior diameter of the caudate lobe and both Harbin's Index and the Hess Index scores were significantly larger in NWI livers than in UKC livers. The Hess Index score, in particular, is much larger in the NWI population (265 %, p < …) than in the UKC liver. These differences may affect the calculation of hepatic indices, resulting in a greater percentage of false positives of cirrhosis in the NWI population. Population-specific data are required to correctly determine normal ranges.
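
    For orientation, Harbin's Index is conventionally the transverse width of the caudate lobe divided by that of the right lobe, with values above roughly 0.65 read as suggestive of cirrhosis; the study's point is that such cut-offs are population dependent. A sketch of the ratio (the 0.65 cut-off is the commonly cited convention, assumed here rather than taken from this paper):

```python
def harbin_index(caudate_width_mm, right_lobe_width_mm, cutoff=0.65):
    """Caudate-to-right-lobe width ratio; values above the cut-off are
    conventionally read as suggestive of cirrhosis."""
    ratio = caudate_width_mm / right_lobe_width_mm
    return ratio, ratio > cutoff

# A smaller right lobe (as reported for the NWI livers) inflates the ratio,
# raising false-positive rates if one cut-off is applied to all populations.
print(harbin_index(35.0, 60.0))   # -> (0.583..., False)
print(harbin_index(35.0, 48.0))   # -> (0.729..., True)
```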

  18. Quantifying, Measuring, and Strategizing Energy Security: Determining the Most Meaningful Dimensions and Metrics

    DEFF Research Database (Denmark)

    Ren, Jingzheng; Sovacool, Benjamin

    2014-01-01

    subjective concepts of energy security into more objective criteria, to investigate the cause-effect relationships among these different metrics, and to provide some recommendations for the stakeholders to draft efficacious measures for enhancing energy security. To accomplish this feat, the study utilizes...

  19. Measuring distance “as the horse runs”: Cross-scale comparison of terrain-based metrics

    Science.gov (United States)

    Buttenfield, Barbara P.; Ghandehari, M; Leyk, S; Stanislawski, Larry V.; Brantley, M E; Qiang, Yi

    2016-01-01

    Distance metrics play significant roles in spatial modeling tasks, such as flood inundation (Tucker and Hancock 2010), stream extraction (Stanislawski et al. 2015), power line routing (Kiessling et al. 2003) and analysis of surface pollutants such as nitrogen (Harms et al. 2009). Avalanche risk is based on slope, aspect, and curvature, all directly computed from distance metrics (Gutiérrez 2012). Distance metrics anchor variogram analysis, kernel estimation, and spatial interpolation (Cressie 1993). Several approaches are employed to measure distance. Planar metrics measure straight-line distance between two points (“as the crow flies”) and are simple and intuitive, but suffer from uncertainties. Planar metrics assume that Digital Elevation Model (DEM) pixels are rigid and flat, as tiny facets of ceramic tile approximating a continuous terrain surface. In truth, terrain can bend, twist and undulate within each pixel. Work with Light Detection and Ranging (lidar) data or High Resolution Topography to achieve precise measurements presents challenges, as filtering can eliminate or distort significant features (Passalacqua et al. 2015). The current availability of lidar data is far from comprehensive in developed nations, and non-existent in many rural and undeveloped regions. Notwithstanding computational advances, distance estimation on DEMs has never been systematically assessed, due to assumptions that improvements are so small that surface adjustment is unwarranted. For individual pixels, inaccuracies may be small, but additive effects can propagate dramatically, especially in regional models (e.g., disaster evacuation) or global models (e.g., sea level rise) where pixels span dozens to hundreds of kilometers (Usery et al. 2003). Such models are increasingly common, lending compelling reasons to understand shortcomings in the use of planar distance metrics. Researchers have studied curvature-based terrain modeling. Jenny et al. (2011) use curvature to generate
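
    The planar-versus-terrain contrast is easy to make concrete on a DEM profile: summing the slant distances between adjacent cells captures the elevation change a planar metric ignores. A toy sketch (the profile values are illustrative):

```python
import math

def planar_distance(n_cells, cell_size_m):
    """'As the crow flies': path length ignoring relief."""
    return n_cells * cell_size_m

def surface_distance(elevations_m, cell_size_m):
    """'As the horse runs': sum slant distances between adjacent DEM cells."""
    return sum(math.hypot(cell_size_m, elevations_m[k + 1] - elevations_m[k])
               for k in range(len(elevations_m) - 1))

profile = [100, 112, 131, 150, 143, 160]   # elevations along a 5-cell path, 30 m cells
print(planar_distance(5, 30.0))            # 150.0 m
print(surface_distance(profile, 30.0))     # ~168.6 m; the gap grows with relief
```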

  20. Metric-independent measures for supersymmetric extended object theories on curved backgrounds

    International Nuclear Information System (INIS)

    Nishino, Hitoshi; Rajpoot, Subhash

    2014-01-01

    For the Green–Schwarz superstring σ-model on curved backgrounds, we introduce a non-metric measure $\Phi \equiv \epsilon^{ij}\epsilon_{IJ}(\partial_i\varphi^I)(\partial_j\varphi^J)$ built from two scalars $\varphi^I$ ($I = 1, 2$), as used in ‘Two-Measure Theory’ (TMT). As in the flat-background case, the string tension $T = (2\pi\alpha')^{-1}$ emerges as an integration constant of the $A_i$-field equation. This mechanism is further generalized to supermembrane theory, and to super-p-brane theory, both on general curved backgrounds. This shows the universal applicability of the dynamical measure of TMT to general supersymmetric extended objects on general curved backgrounds

  1. Defining a Progress Metric for CERT RMM Improvement

    Science.gov (United States)

    2017-09-14

    Defining a Progress Metric for CERT-RMM Improvement. Gregory Crabb, Nader Mehravari, David Tobar. September 2017. TECHNICAL ...fendable resource allocation decisions. Technical metrics measure aspects of controls implemented through technology (systems, software, hardware...implementation metric would be the percentage of users who have received anti-phishing training. • Effectiveness/efficiency metrics measure whether

  2. Quantitative dual energy CT measurements in rabbit VX2 liver tumors: Comparison to perfusion CT measurements and histopathological findings

    International Nuclear Information System (INIS)

    Zhang, Long Jiang; Wu, Shengyong; Wang, Mei; Lu, Li; Chen, Bo; Jin, Lixin; Wang, Jiandong; Larson, Andrew C.; Lu, Guang Ming

    2012-01-01

    Purpose: To evaluate the correlation between quantitative dual energy CT and perfusion CT measurements in rabbit VX2 liver tumors. Materials and methods: This study was approved by the institutional animal care and use committee at our institution. Nine rabbits with VX2 liver tumors underwent contrast-enhanced dual energy CT and perfusion CT. CT attenuation for the tumors and normal liver parenchyma and tumor-to-liver ratio were obtained at the 140 kVp, 80 kVp, average weighted images and dual energy CT iodine maps. Quantitative parameters for the viable tumor and adjacent liver were measured with perfusion CT. The correlation between the enhancement values of the tumor in iodine maps and perfusion CT parameters of each tumor was analyzed. Radiation dose from dual energy CT and perfusion CT was measured. Results: Enhancement values for the tumor were higher than that for normal liver parenchyma at the hepatic arterial phase (P < 0.05). The highest tumor-to-liver ratio was obtained in hepatic arterial phase iodine map. Hepatic blood flow of the tumor was higher than that for adjacent liver (P < 0.05). Enhancement values of hepatic tumors in the iodine maps positively correlated with permeability of capillary vessel surface (r = 0.913, P < 0.001), hepatic blood flow (r = 0.512, P = 0.010), and hepatic blood volume (r = 0.464, P = 0.022) at the hepatic arterial phases. The effective radiation dose from perfusion CT was higher than that from DECT (P < 0.001). Conclusions: The enhancement values for viable tumor tissues measured in iodine maps were well correlated to perfusion CT measurements in rabbit VX2 liver tumors. Compared with perfusion CT, dual energy CT of the liver required a lower radiation dose.
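
    The reported r and P values summarize per-tumor correlations between iodine-map enhancement and perfusion parameters. A generic sketch of such an analysis with SciPy (Pearson correlation is assumed here, and the arrays are invented for illustration, not the paper's data):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-tumor values (the study had n = 9 rabbits; data invented).
iodine_enhancement = np.array([42, 55, 61, 48, 70, 66, 52, 58, 63])  # HU
permeability_ps = np.array([18, 24, 29, 20, 35, 31, 22, 26, 30])     # mL/100 mL/min

r, p = pearsonr(iodine_enhancement, permeability_ps)
print(f"r = {r:.3f}, P = {p:.4f}")   # a strong positive correlation, as reported
```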

  3. Overview of journal metrics

    Directory of Open Access Journals (Sweden)

    Kihong Kim

    2018-02-01

    Full Text Available Various kinds of metrics used for the quantitative evaluation of scholarly journals are reviewed. The impact factor and related metrics, including the immediacy index and the aggregate impact factor, which are provided by the Journal Citation Reports, are explained in detail. The Eigenfactor score and the article influence score are also reviewed. In addition, journal metrics such as CiteScore, Source Normalized Impact per Paper, SCImago Journal Rank, h-index, and g-index are discussed. The limitations and problems of these metrics are pointed out. We should be cautious about relying too heavily on these quantitative measures when evaluating journals or researchers.
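
    The two-year impact factor described here is simply a ratio: citations received in year Y to items published in years Y−1 and Y−2, divided by the citable items published in those two years. A minimal sketch:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """JCR-style two-year impact factor."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 460 citations in 2018 to items from 2016-2017,
# which together contained 200 citable items.
print(impact_factor(460, 200))   # 2.3
```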

  4. Technical Privacy Metrics: a Systematic Survey

    OpenAIRE

    Wagner, Isabel; Eckhoff, David

    2018-01-01

    The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature makes an informed choice of metrics challenging. As a result, instead of using existing metrics, n...

  5. Using measures of information content and complexity of time series as hydrologic metrics

    Science.gov (United States)

    Information theory has previously been used to develop metrics that characterize temporal patterns in soil moisture dynamics, and to evaluate and compare the performance of soil water flow models. The objective of this study was to apply information and complexity measures to characte...
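
    Information-content metrics of this kind typically start from the Shannon entropy of the discretized series: bin the observations, estimate bin probabilities, and sum −p·log₂ p. A small sketch (the binning choice is illustrative):

```python
import numpy as np

def shannon_entropy(series, n_bins=10):
    """Shannon entropy (bits) of a time series after equal-width binning."""
    counts, _ = np.histogram(series, bins=n_bins)
    p = counts[counts > 0] / counts.sum()   # drop empty bins, normalize
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(42)
noisy = rng.random(500)          # unstructured series -> high entropy
steady = np.full(500, 0.31)      # constant series -> zero entropy
print(shannon_entropy(noisy), shannon_entropy(steady))
```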

  6. Metric learning

    CERN Document Server

    Bellet, Aurelien; Sebban, Marc

    2015-01-01

    Similarity between objects plays an important role in both human cognitive processes and artificial systems for recognition and categorization. How to appropriately measure such similarities for a given task is crucial to the performance of many machine learning, pattern recognition and data mining methods. This book is devoted to metric learning, a set of techniques to automatically learn similarity and distance functions from data that has attracted a lot of interest in machine learning and related fields in the past ten years. In this book, we provide a thorough review of the metric learnin
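
    Most methods surveyed in this literature learn a Mahalanobis-type distance d_M(x, y) = sqrt((x − y)ᵀ M (x − y)) with M positive semidefinite, reshaping the feature space so that similar pairs move closer together. A bare-bones sketch of evaluating such a distance (M is fixed by hand here, not learned):

```python
import numpy as np

def mahalanobis_distance(x, y, M):
    """d_M(x, y) = sqrt((x - y)^T M (x - y)); M must be positive semidefinite."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(d @ M @ d))

x, y = [1.0, 2.0], [2.0, 0.5]
identity = np.eye(2)                         # reduces to Euclidean distance
weighted = np.diag([4.0, 0.25])              # stretches axis 0, shrinks axis 1
print(mahalanobis_distance(x, y, identity))  # ~1.80
print(mahalanobis_distance(x, y, weighted))  # ~2.14: changing M changes the geometry
```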

  7. Software Quality Assurance Metrics

    Science.gov (United States)

    McRae, Kalindra A.

    2004-01-01

    Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and a degree of excellence and refinement of a project or product. Software Quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software Metrics help us understand the technical process that is used to develop a product. The process is measured to improve it and the product is measured to increase quality throughout the life cycle of software. Software Metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of the software development can be measured. If Software Metrics are implemented in software development, they can save time and money, and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research and collect, compile, and analyze SQA Metrics that have been used in other projects that are not currently being used by the SA team and report them to the Software Assurance team to see if any metrics can be implemented in their software assurance life cycle process.

  8. NASA education briefs for the classroom. Metrics in space

    Science.gov (United States)

    The use of metric measurement in space is summarized for classroom use. Advantages of the metric system over the English measurement system are described. Some common metric units are defined, as are special units for astronomical study. International system unit prefixes and a conversion table of metric/English units are presented. Questions and activities for the classroom are recommended.

  9. The metrics of science and technology

    CERN Document Server

    Geisler, Eliezer

    2000-01-01

    Dr. Geisler's far-reaching, unique book provides an encyclopedic compilation of the key metrics to measure and evaluate the impact of science and technology on academia, industry, and government. Focusing on such items as economic measures, patents, peer review, and other criteria, and supported by an extensive review of the literature, Dr. Geisler gives a thorough analysis of the strengths and weaknesses inherent in metric design, and in the use of the specific metrics he cites. His book has already received prepublication attention, and will prove especially valuable for academics in technology management, engineering, and science policy; industrial R&D executives and policymakers; government science and technology policymakers; and scientists and managers in government research and technology institutions. Geisler maintains that the application of metrics to evaluate science and technology at all levels illustrates the variety of tools we currently possess. Each metric has its own unique strengths and...

  10. 41 CFR 101-29.102 - Use of metric system of measurement in Federal product descriptions.

    Science.gov (United States)

    2010-07-01

    ... PROCUREMENT 29-FEDERAL PRODUCT DESCRIPTIONS 29.1-General § 101-29.102 Use of metric system of measurement in... measurement in Federal product descriptions. 101-29.102 Section 101-29.102 Public Contracts and Property... Federal agencies to: (a) Maintain close liaison with other Federal agencies, State and local governments...

  11. Liver stiffness measurement-based scoring system for significant inflammation related to chronic hepatitis B.

    Directory of Open Access Journals (Sweden)

    Mei-Zhu Hong

    Full Text Available Liver biopsy is indispensable because liver stiffness measurement alone cannot provide information on intrahepatic inflammation. However, the presence of fibrosis highly correlates with inflammation. We constructed a noninvasive model to determine significant inflammation in chronic hepatitis B patients by using liver stiffness measurement and serum markers. The training set included chronic hepatitis B patients (n = 327), and the validation set included 106 patients; liver biopsies were performed, liver histology was scored, and serum markers were investigated. All patients underwent liver stiffness measurement. An inflammation activity scoring system for significant inflammation was constructed. In the training set, the area under the curve, sensitivity, and specificity of the fibrosis-based activity score were 0.964, 91.9%, and 90.8% in the HBeAg(+) patients and 0.978, 85.0%, and 94.0% in the HBeAg(−) patients, respectively. In the validation set, the area under the curve, sensitivity, and specificity of the fibrosis-based activity score were 0.971, 90.5%, and 92.5% in the HBeAg(+) patients and 0.977, 95.2%, and 95.8% in the HBeAg(−) patients. The liver stiffness measurement-based activity score was comparable to that of the fibrosis-based activity score in both HBeAg(+) and HBeAg(−) patients for recognizing significant inflammation (G ≥ 3). Significant inflammation can be accurately predicted by this novel method. The liver stiffness measurement-based scoring system can be used without the aid of computers and provides a noninvasive alternative for the prediction of chronic hepatitis B-related significant inflammation.
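
    The reported areas under the curve summarize how well an activity score separates patients with and without significant inflammation across all thresholds, while sensitivity and specificity describe one chosen cut-off. A generic scikit-learn sketch (labels and scores invented for illustration, not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: 1 = significant inflammation (G >= 3) on biopsy, 0 = not;
# scores from a stiffness-based activity score (higher = more inflammation).
y_true = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1])
score = np.array([1.2, 2.0, 2.8, 3.7, 3.5, 2.4, 4.2, 4.8, 1.9, 3.9, 5.1, 4.5])

print(roc_auc_score(y_true, score))   # ~0.97 here; 1.0 = perfect, 0.5 = chance
```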

  12. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection

    Science.gov (United States)

    DeWeber, Jefferson T.; Wagner, Tyler

    2018-01-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30‐day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species’ distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold‐water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid‐century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as 0.2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation

  13. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection.

    Science.gov (United States)

    DeWeber, Jefferson T; Wagner, Tyler

    2018-06-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30-day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species' distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold-water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid-century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as 0.2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions. Our

  14. Measures and metrics of sustainable diets with a focus on milk, yogurt, and dairy products

    Science.gov (United States)

    Drewnowski, Adam

    2018-01-01

    The 4 domains of sustainable diets are nutrition, economics, society, and the environment. To be sustainable, foods and food patterns need to be nutrient-rich, affordable, culturally acceptable, and sparing of natural resources and the environment. Each sustainability domain has its own measures and metrics. Nutrient density of foods has been assessed through nutrient profiling models, such as the Nutrient-Rich Foods family of scores. The Food Affordability Index, applied to different food groups, has measured both calories and nutrients per penny (kcal/$). Cultural acceptance measures have been based on relative food consumption frequencies across population groups. Environmental impact of individual foods and composite food patterns has been measured in terms of land, water, and energy use. Greenhouse gas emissions assess the carbon footprint of agricultural food production, processing, and retail. Based on multiple sustainability metrics, milk, yogurt, and other dairy products can be described as nutrient-rich, affordable, acceptable, and appealing. The environmental impact of dairy farming needs to be weighed against the high nutrient density of milk, yogurt, and cheese as compared with some plant-based alternatives. PMID:29206982

  15. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark, but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric, which used the much-debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives, our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
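
    The debate is easy to see numerically: a geometric mean damps a single extreme query time that would dominate the arithmetic mean, so the two metrics can rank systems differently. A quick illustration:

```python
from math import prod

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    return prod(xs) ** (1 / len(xs))

# Two hypothetical systems' per-query times (seconds) on a 5-query stream.
system_a = [2, 2, 2, 2, 200]     # one pathological query
system_b = [10, 10, 10, 10, 10]  # uniformly mediocre

for name, times in [("A", system_a), ("B", system_b)]:
    print(name, round(arithmetic_mean(times), 1), round(geometric_mean(times), 1))
# A: arithmetic 41.6 but geometric ~5.0; B: both 10.0.
# The two means disagree on the winner, which is the heart of the debate.
```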

  16. Tracker Performance Metric

    National Research Council Canada - National Science Library

    Olson, Teresa; Lee, Harry; Sanders, Johnnie

    2002-01-01

    .... We have developed the Tracker Performance Metric (TPM) specifically for this purpose. It was designed to measure the output performance, on a frame-by-frame basis, using its output position and quality...

  17. Otherwise Engaged : Social Media from Vanity Metrics to Critical Analytics

    NARCIS (Netherlands)

    Rogers, R.

    2018-01-01

    Vanity metrics is a term that captures the measurement and display of how well one is doing in the “success theater” of social media. The notion of vanity metrics implies a critique of metrics concerning both the object of measurement as well as their capacity to measure unobtrusively or only to

  18. Impact of Different Creatinine Measurement Methods on Liver Transplant Allocation

    Science.gov (United States)

    Kaiser, Thorsten; Kinny-Köster, Benedict; Bartels, Michael; Parthaune, Tanja; Schmidt, Michael; Thiery, Joachim

    2014-01-01

    Introduction The model for end-stage liver disease (MELD) score is used in many countries to prioritize organ allocation for the majority of patients who require orthotopic liver transplantation. This score is calculated based on the following laboratory parameters: creatinine, bilirubin and the international normalized ratio (INR). Consequently, high measurement accuracy is essential for equitable and fair organ allocation. For serum creatinine measurements, the Jaffé method and enzymatic detection are well-established routine diagnostic tests. Methods A total of 1,013 samples from 445 patients on the waiting list or in evaluation for liver transplantation were measured using both creatinine methods from November 2012 to September 2013 at the University Hospital Leipzig, Germany. The measurements were performed in parallel according to the manufacturer’s instructions after the samples arrived at the Institute of Laboratory Medicine. Patients who had required renal replacement therapy twice in the previous week were excluded from analyses. Results Despite the good correlation between the results of both creatinine quantification methods, relevant differences were observed, which led to different MELD scores. The Jaffé measurement led to a greater MELD score in 163/1,013 (16.1%) samples, with differences of up to 4 points in one patient, whereas differences of up to 2 points were identified in 15/1,013 (1.5%) samples using the enzymatic assay. Overall, 50/152 (32.9%) patients with MELD scores >20 had higher scores when the Jaffé method was used. Discussion Using the Jaffé method to measure creatinine levels in samples from patients who require liver transplantation may lead to a systematic preference in organ allocation. In this study, the differences were particularly pronounced in samples with MELD scores >20, which has clinical relevance in the context of urgency of transplantation. These data suggest that official recommendations are needed to determine which
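
    For reference, the classic UNOS-style MELD calculation takes natural logarithms of the three laboratory values, with inputs floored at 1.0 and creatinine commonly capped at 4.0 mg/dL; allocation rules vary by program, so the sketch below is illustrative rather than an allocation authority's formula. It also shows how a small creatinine shift between assays can move the score:

```python
import math

def meld_score(creatinine_mg_dl, bilirubin_mg_dl, inr):
    """Classic UNOS-style MELD; inputs floored at 1.0, creatinine capped at 4.0."""
    crea = min(max(creatinine_mg_dl, 1.0), 4.0)
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    score = (9.57 * math.log(crea)
             + 3.78 * math.log(bili)
             + 11.2 * math.log(inr)
             + 6.43)
    return round(score)

# The same patient measured with two assays that disagree on creatinine:
print(meld_score(1.3, 4.0, 1.8))   # -> 21
print(meld_score(1.5, 4.0, 1.8))   # -> 22: one point higher, same patient
```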

  19. Measurement of liver and spleen volume by computed tomography using point counting technique

    International Nuclear Information System (INIS)

    Matsuda, Yoshiro; Sato, Hiroyuki; Nei, Jinichi; Takada, Akira

    1982-01-01

    We devised a new method for measurement of liver and spleen volume by computed tomography using point counting technique. This method is very simple and applicable to any kind of CT scanner. The volumes of the livers and spleens estimated by this method were significantly correlated with the weights of the corresponding organs measured on autopsy or surgical operation, indicating clinical usefulness of this method. Hepatic and splenic volumes were estimated by this method in 43 patients with chronic liver disease and 9 subjects with non-hepatobiliary disease. The mean hepatic volume in non-alcoholic liver cirrhosis was significantly smaller than those in non-hepatobiliary disease and other chronic liver diseases. The mean hepatic volume in alcoholic cirrhosis and alcoholic fibrosis tended to be slightly larger than that in non-hepatobiliary disease. The mean splenic volume in liver cirrhosis was significantly larger than those in non-hepatobiliary disease and other chronic liver diseases. However, there was no significant difference of the mean splenic volume between alcoholic and non-alcoholic cirrhosis. Significantly positive correlation between hepatic and splenic volumes was found in alcoholic cirrhosis, but not in non-alcoholic cirrhosis. These results indicate that estimation of hepatic and splenic volumes by this method is useful for the analysis of the pathophysiological condition of chronic liver diseases. (author)

  20. Metrics for energy resilience

    International Nuclear Information System (INIS)

    Roege, Paul E.; Collier, Zachary A.; Mancillas, James; McDonagh, John A.; Linkov, Igor

    2014-01-01

    Energy lies at the backbone of any advanced society and constitutes an essential prerequisite for economic growth, social order and national defense. However, there is an Achilles heel to today's energy and technology relationship: namely, a precarious intimacy between energy and the fiscal, social, and technical systems it supports. Recently, widespread and persistent disruptions in energy systems have highlighted the extent of this dependence and the vulnerability of increasingly optimized systems to changing conditions. Resilience is an emerging concept that offers to reconcile considerations of performance under dynamic environments and across multiple time frames by supplementing traditionally static system performance measures to consider behaviors under changing conditions and complex interactions among physical, information and human domains. This paper identifies metrics useful to implement guidance for energy-related planning, design, investment, and operation. Recommendations are presented using a matrix format to provide a structured and comprehensive framework of metrics relevant to a system's energy resilience. The study synthesizes previously proposed metrics and emergent resilience literature to provide a multi-dimensional model intended for use by leaders and practitioners as they transform our energy posture from one of stasis and reaction to one that is proactive and which fosters sustainable growth. - Highlights: • Resilience is the ability of a system to recover from adversity. • There is a need for methods to quantify and measure system resilience. • We developed a matrix-based approach to generate energy resilience metrics. • These metrics can be used in energy planning, system design, and operations

  1. Validating new software for semiautomated liver volumetry. Better than manual measurement?

    Energy Technology Data Exchange (ETDEWEB)

    Noschinski, L.E.; Maiwald, B.; Voigt, P.; Kahn, T.; Stumpp, P. [University Hospital Leipzig (Germany). Dept. of Diagnostic and Interventional Radiology; Wiltberger, G. [University Hospital Leipzig (Germany). Dept. of Visceral, Transplantation, Thoracic and Vascular Surgery

    2015-09-15

    This prospective study compared a manual program for liver volumetry with semiautomated software. The hypothesis was that the semiautomated software would be faster, more accurate and less dependent on the evaluator's experience. Ten patients undergoing hemihepatectomy were included in this IRB approved study after written informed consent. All patients underwent a preoperative abdominal 3-phase CT scan, which was used for whole liver volumetry and volume prediction for the liver part to be resected. Two different types of software were used: 1) manual method: borders of the liver had to be defined per slice by the user; 2) semiautomated software: automatic identification of liver volume with manual assistance for definition of Couinaud segments. Measurements were done by six observers with different experience levels. Water displacement volumetry immediately after partial liver resection served as the gold standard. The resected part was examined with a CT scan after displacement volumetry. Volumetry of the resected liver scan showed excellent correlation to water displacement volumetry (manual: ρ = 0.997; semiautomated software: ρ = 0.995). The difference between the predicted volume and the real volume was significantly smaller with the semiautomated software than with the manual method (33 % vs. 57 %, p = 0.002). The semiautomated software was almost four times faster for volumetry of the whole liver (manual: 6:59 ± 3:04 min; semiautomated: 1:47 ± 1:11 min). Both methods for liver volumetry give an estimated liver volume close to the real one. The tested semiautomated software is faster, more accurate in predicting the volume of the resected liver part, gives more reproducible results and is less dependent on the user's experience.
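
    The agreement analysis used here against water displacement volumetry reduces to a bias (the mean difference between methods) and 95% limits of agreement (bias ± 1.96 × SD of the differences). A generic Bland–Altman sketch with invented volumes:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Return bias and 95% limits of agreement between two measurement methods."""
    diffs = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diffs.mean()
    spread = 1.96 * diffs.std(ddof=1)   # sample SD of the differences
    return bias, (bias - spread, bias + spread)

# Hypothetical resected-liver volumes (mL): software vs. water displacement.
software = [812, 640, 955, 701, 880, 760, 690, 840, 915, 730]
displacement = [800, 655, 940, 695, 870, 775, 705, 830, 900, 745]

bias, limits = bland_altman(software, displacement)
print(f"bias = {bias:.1f} mL, limits of agreement = ({limits[0]:.1f}, {limits[1]:.1f})")
```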

  2. Validating new software for semiautomated liver volumetry. Better than manual measurement?

    International Nuclear Information System (INIS)

    Noschinski, L.E.; Maiwald, B.; Voigt, P.; Kahn, T.; Stumpp, P.; Wiltberger, G.

    2015-01-01

    This prospective study compared a manual program for liver volumetry with semiautomated software. The hypothesis was that the semiautomated software would be faster, more accurate and less dependent on the evaluator's experience. Ten patients undergoing hemihepatectomy were included in this IRB approved study after written informed consent. All patients underwent a preoperative abdominal 3-phase CT scan, which was used for whole liver volumetry and volume prediction for the liver part to be resected. Two different types of software were used: 1) manual method: borders of the liver had to be defined per slice by the user; 2) semiautomated software: automatic identification of liver volume with manual assistance for definition of Couinaud segments. Measurements were done by six observers with different experience levels. Water displacement volumetry immediately after partial liver resection served as the gold standard. The resected part was examined with a CT scan after displacement volumetry. Volumetry of the resected liver scan showed excellent correlation to water displacement volumetry (manual: ρ = 0.997; semiautomated software: ρ = 0.995). The difference between the predicted volume and the real volume was significantly smaller with the semiautomated software than with the manual method (33 % vs. 57 %, p = 0.002). The semiautomated software was almost four times faster for volumetry of the whole liver (manual: 6:59 ± 3:04 min; semiautomated: 1:47 ± 1:11 min). Both methods for liver volumetry give an estimated liver volume close to the real one. The tested semiautomated software is faster, more accurate in predicting the volume of the resected liver part, gives more reproducible results and is less dependent on the user's experience.

  3. Assessment of impact factors on shear wave based liver stiffness measurement

    Energy Technology Data Exchange (ETDEWEB)

    Ling, Wenwu, E-mail: lingwenwubing@163.com [Department of Ultrasound, West China Hospital of Sichuan University, Chengdu 610041 (China); Lu, Qiang, E-mail: wsluqiang@126.com [Department of Ultrasound, West China Hospital of Sichuan University, Chengdu 610041 (China); Quan, Jierong, E-mail: quanjierong@163.com [Department of Ultrasound, West China Hospital of Sichuan University, Chengdu 610041 (China); Ma, Lin, E-mail: malin2010US@163.com [Department of Ultrasound, West China Hospital of Sichuan University, Chengdu 610041 (China); Luo, Yan, E-mail: huaxiluoyan@gmail.com [Department of Ultrasound, West China Hospital of Sichuan University, Chengdu 610041 (China)

    2013-02-15

    Shear wave based ultrasound elastographies have been implemented as non-invasive methods for quantitative assessment of liver stiffness. Nonetheless, there are only a few studies that have investigated impact factors on liver stiffness measurement (LSM). Moreover, standard examination protocols for LSM are still lacking in clinical practice. Our study aimed to assess the impact factors on LSM to establish its standard examination protocols in clinical practice. We applied shear wave based elastography point quantification (ElastPQ) in 21 healthy individuals to determine the impact of liver location (segments I–VIII), breathing phase (end-inspiration and end-expiration), probe position (sub-costal and inter-costal position) and examiner on LSM. Additional studies in 175 healthy individuals were also performed to determine the influence of gender and age on liver stiffness. We found significant impact of liver location on LSM, while the liver segment V displayed the lowest coefficient of variation (CV 21%). The liver stiffness at the end-expiration was significantly higher than that at the end-inspiration (P = 2.1E−05). The liver stiffness was 8% higher in men than in women (3.8 ± 0.7 kPa vs. 3.5 ± 0.4 kPa, P = 0.0168). In contrast, the liver stiffness was comparable in the different probe positions, examiners and age groups (P > 0.05). In conclusion, this study reveals significant impact from liver location, breathing phase and gender on LSM, while furthermore strengthening the necessity for the development of standard examination protocols on LSM.

  4. Ideal Based Cyber Security Technical Metrics for Control Systems

    Energy Technology Data Exchange (ETDEWEB)

    W. F. Boyer; M. A. McQueen

    2007-10-01

    Much of the world's critical infrastructure is at risk from attack through electronic networks connected to control systems. Security metrics are important because they provide the basis for management decisions that affect the protection of the infrastructure. A cyber security technical metric is the security relevant output from an explicit mathematical model that makes use of objective measurements of a technical object. A specific set of technical security metrics are proposed for use by the operators of control systems. Our proposed metrics are based on seven security ideals associated with seven corresponding abstract dimensions of security. We have defined at least one metric for each of the seven ideals. Each metric is a measure of how nearly the associated ideal has been achieved. These seven ideals provide a useful structure for further metrics development. A case study shows how the proposed metrics can be applied to an operational control system.

  5. Liver Stiffness Measurement among Patients with Chronic Hepatitis B and C

    DEFF Research Database (Denmark)

    Christiansen, Karen M; Mössner, Belinda K; Hansen, Janne F

    2014-01-01

    Liver stiffness measurement (LSM) is widely used to evaluate liver fibrosis, but longitudinal studies are rare. The current study was aimed to monitor LSM during follow-up, and to evaluate the association of LSM data with mortality and liver-related outcomes. We included all patients with chronic viral hepatitis and valid LSM using Fibroscan. Information about liver biopsy, antiviral treatment, and clinical outcome was obtained from medical records and national registers. The study included 845 patients: 597 (71%) with hepatitis C virus (HCV), 235 (28%) with hepatitis B virus (HBV) and 13 (2%) with dual infection. The initial LSM distribution (patients with initial LSM values of 7-9.9 kPa, 60% of HCV patients and 83% of HBV patients showed LSM values of 20% and >2 kPa increase

  6. 77 FR 12832 - Non-RTO/ISO Performance Metrics; Commission Staff Request Comments on Performance Metrics for...

    Science.gov (United States)

    2012-03-02

    ... Performance Metrics; Commission Staff Request Comments on Performance Metrics for Regions Outside of RTOs and... performance communicate about the benefits of RTOs and, where appropriate, (2) changes that need to be made to... common set of performance measures for markets both within and outside of ISOs/RTOs. As recommended by...

  7. Validation of Metrics for Collaborative Systems

    OpenAIRE

    Ion IVAN; Cristian CIUREA

    2008-01-01

    This paper describes the new concepts of collaborative systems metrics validation. The paper defines the quality characteristics of collaborative systems. A metric is proposed to estimate the quality level of collaborative systems. Measurements of collaborative systems quality are performed using specially designed software.

  8. Fisher information metrics for binary classifier evaluation and training

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Different evaluation metrics for binary classifiers are appropriate to different scientific domains and even to different problems within the same domain. This presentation focuses on the optimisation of event selection to minimise statistical errors in HEP parameter estimation, a problem that is best analysed in terms of the maximisation of Fisher information about the measured parameters. After describing a general formalism to derive evaluation metrics based on Fisher information, three more specific metrics are introduced for the measurements of signal cross sections in counting experiments (FIP1) or distribution fits (FIP2) and for the measurements of other parameters from distribution fits (FIP3). The FIP2 metric is particularly interesting because it can be derived from any ROC curve, provided that prevalence is also known. In addition to its relation to measurement errors when used as an evaluation criterion (which makes it more interesting than the ROC AUC), a further advantage of the FIP2 metric is ...

  9. Using Publication Metrics to Highlight Academic Productivity and Research Impact

    Science.gov (United States)

    Carpenter, Christopher R.; Cone, David C.; Sarli, Cathy C.

    2016-01-01

    This article provides a broad overview of widely available measures of academic productivity and impact using publication data and highlights uses of these metrics for various purposes. Metrics based on publication data include measures such as number of publications, number of citations, the journal impact factor score, and the h-index, as well as emerging document-level metrics. Publication metrics can be used for a variety of purposes, including tenure and promotion, grant applications and renewal reports, benchmarking, recruiting efforts, and administrative purposes for departmental or university performance reports. The authors also highlight practical applications of measuring and reporting academic productivity and impact to emphasize and promote individual investigators, grant applications, or department output. PMID:25308141
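
    Of the metrics listed above, the h-index is simple enough to compute directly from a citation list. As a minimal illustration (not code from the article itself): the h-index is the largest h such that at least h of an author's papers have at least h citations each.

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has at
    least h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Example: six papers with these citation counts give an h-index of 3.
print(h_index([10, 8, 5, 3, 0, 0]))  # -> 3
```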

  10. Marketing communication metrics for social media

    OpenAIRE

    Töllinen, Aarne; Karjaluoto, Heikki

    2011-01-01

    The objective of this paper is to develop a conceptual framework for measuring the effectiveness of social media marketing communications. Specifically, we study whether the existing marketing communications performance metrics are still valid in the changing digitalised communications landscape, or whether it is time to rethink them, or even to devise entirely new metrics. Recent advances in information technology and marketing bring a need to re-examine measurement models. We combine two im...

  11. Pragmatic security metrics applying metametrics to information security

    CERN Document Server

    Brotby, W Krag

    2013-01-01

    Other books on information security metrics discuss number theory and statistics in academic terms. Light on mathematics and heavy on utility, PRAGMATIC Security Metrics: Applying Metametrics to Information Security breaks the mold. This is the ultimate how-to-do-it guide for security metrics. Packed with time-saving tips, the book offers easy-to-follow guidance for those struggling with security metrics. Step by step, it clearly explains how to specify, develop, use, and maintain an information security measurement system (a comprehensive suite of metrics) to

  12. Measuring and managing radiologist productivity, part 1: clinical metrics and benchmarks.

    Science.gov (United States)

    Duszak, Richard; Muroff, Lawrence R

    2010-06-01

    Physician productivity disparities are not uncommonly debated within radiology groups, sometimes in a contentious manner. Attempts to measure productivity, identify and motivate outliers, and develop equitable management policies can present challenges to private and academic practices alike but are often necessary for a variety of professional, financial, and personnel reasons. This is the first of a two-part series that will detail metrics for evaluating radiologist productivity and review published benchmarks, focusing primarily on clinical work. Issues and limitations that may prevent successful implementation of measurement systems are explored. Part 2 will expand that discussion to evaluating nonclinical administrative and academic activities, outlining advantages and disadvantages of addressing differential productivity, and introducing potential models for practices seeking to motivate physicians on the basis of both clinical and nonclinical work.

  13. Measurement of innovation in South Africa: An analysis of survey metrics and recommendations

    Directory of Open Access Journals (Sweden)

    Sibusiso T. Manzini

    2015-11-01

    Full Text Available The National System of Innovation (NSI is an important construct in South Africa’s policy discourse as illustrated in key national planning initiatives, such as the National Development Plan. The country’s capacity to innovate is linked to the prospects for industrial development leading to social and economic growth. Proper measurement of innovation activity is therefore crucial for policymaking. In this study, a constructive analytical critique of the innovation surveys that are conducted in South Africa is presented, the case for broadening current perspectives of innovation in the national policy discourse is reinforced, the significance of a broad perspective of innovation is demonstrated and new metrics for use in the measurement of the performance of the NSI are proposed. Current NSI survey instruments lack definition of non-technological innovation. They emphasise inputs rather than outputs, lack regional and sectoral analyses, give limited attention to innovation diffusion and are susceptible to respondent interpretation. Furthermore, there are gaps regarding the wider conditions of innovation and system linkages and learning. In order to obtain a comprehensive assessment of innovation in South Africa, there is a need to sharpen the metrics for measuring non-technological innovation and to define, account for and accurately measure the ‘hidden’ innovations that drive the realisation of value in management, the arts, public service and society in general. The new proposed indicators, which are mostly focused on innovation outputs, can be used as a basis for plugging the gaps identified in the existing surveys.

  14. Validation of Metrics for Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2008-01-01

    Full Text Available This paper describes new concepts for the validation of collaborative systems metrics. It defines the quality characteristics of collaborative systems, proposes a metric to estimate the quality level of collaborative systems, and reports measurements of collaborative systems quality performed using specially designed software.

  15. Metrics Feedback Cycle: measuring and improving user engagement in gamified eLearning systems

    Directory of Open Access Journals (Sweden)

    Adam Atkins

    2017-12-01

    Full Text Available This paper presents the identification, design and implementation of a set of metrics of user engagement in a gamified eLearning application. The 'Metrics Feedback Cycle' (MFC) is introduced as a formal process prescribing the iterative evaluation and improvement of application-wide engagement, using data collected from metrics as input to improve related engagement features. This framework was showcased using a gamified eLearning application as a case study. In this paper, we designed a prototype and tested it with thirty-six (N=36) students to validate the effectiveness of the MFC. The analysis and interpretation of metrics data shows that the gamification features had a positive effect on user engagement, and helped identify areas in which this could be improved. We conclude that the MFC has applications in gamified systems that seek to maximise engagement by iteratively evaluating implemented features against a set of evolving metrics.

  16. Understanding Acceptance of Software Metrics--A Developer Perspective

    Science.gov (United States)

    Umarji, Medha

    2009-01-01

    Software metrics are measures of software products and processes. Metrics are widely used by software organizations to help manage projects, improve product quality and increase efficiency of the software development process. However, metrics programs tend to have a high failure rate in organizations, and developer pushback is one of the sources…

  17. Relationship of liver stiffness and controlled attenuation parameter measured by transient elastography with diabetes mellitus in patients with chronic liver disease.

    Science.gov (United States)

    Ahn, Jem Ma; Paik, Yong-Han; Kim, So Hyun; Lee, Jun Hee; Cho, Ju Yeon; Sohn, Won; Gwak, Geum-Youn; Choi, Moon Seok; Lee, Joon Hyeok; Koh, Kwang Cheol; Paik, Seung Woon; Yoo, Byung Chul

    2014-08-01

    High prevalence of diabetes mellitus in patients with liver cirrhosis has been reported in many studies. The aim of our study was to evaluate the relationship of hepatic fibrosis and steatosis assessed by transient elastography with diabetes in patients with chronic liver disease. The study population consisted of 979 chronic liver disease patients. Liver fibrosis and steatosis were assessed by liver stiffness measurement (LSM) and controlled attenuation parameter (CAP) on transient elastography. Diabetes was diagnosed in 165 (16.9%) of 979 patients. The prevalence of diabetes differed significantly among the etiologies of chronic liver disease. Higher degrees of liver fibrosis and steatosis, assessed by LSM and CAP score, showed higher prevalence of diabetes (F0/1 [14%], F2/3 [18%], F4 [31%]; P<0.001). Factors independently associated with diabetes were hypertension (OR, 1.98; P=0.001), LSM F4 (OR, 1.86; P=0.010), male gender (OR, 1.60; P=0.027), and age >50 yr (OR, 1.52; P=0.046). The degree of hepatic fibrosis, but not steatosis, assessed by transient elastography has a significant relationship with the prevalence of diabetes in patients with chronic liver disease.

  18. Reproducibility of graph metrics in fMRI networks

    Directory of Open Access Journals (Sweden)

    Qawi K Telesford

    2010-12-01

    Full Text Available The reliability of graph metrics calculated in network analysis is essential to the interpretation of complex network organization. These graph metrics are used to deduce the small-world properties in networks. In this study, we investigated the test-retest reliability of graph metrics from functional magnetic resonance imaging (fMRI) data collected for two runs in 45 healthy older adults. Graph metrics were calculated on data for both runs and compared using intraclass correlation coefficient (ICC) statistics and Bland-Altman (BA) plots. ICC scores describe the level of absolute agreement between two measurements and provide a measure of reproducibility. For mean graph metrics, ICC scores were high for clustering coefficient (ICC=0.86), global efficiency (ICC=0.83), path length (ICC=0.79), and local efficiency (ICC=0.75); the ICC score for degree was found to be low (ICC=0.29). ICC scores were also used to generate reproducibility maps in brain space to test voxel-wise reproducibility for unsmoothed and smoothed data. Reproducibility was uniform across the brain for global efficiency and path length, but was only high in network hubs for clustering coefficient, local efficiency and degree. BA plots were used to test the measurement repeatability of all graph metrics. All graph metrics fell within the limits for repeatability. Together, these results suggest that with exception of degree, mean graph metrics are reproducible and suitable for clinical studies. Further exploration is warranted to better understand reproducibility across the brain on a voxel-wise basis.
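
    The ICC scores above quantify absolute agreement between the two runs. As an illustration only, the sketch below computes a one-way random-effects ICC(1,1) for two repeated measurements per subject; the study itself may have used a different ICC variant.

```python
import numpy as np

def icc_oneway(run1, run2):
    """One-way random-effects ICC(1,1) for two repeated measurements
    per subject: (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    x = np.column_stack([run1, run2])   # subjects x 2 runs
    n, k = x.shape
    grand = x.mean()
    subj_means = x.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)         # between subjects
    msw = ((x - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

# Toy example: clustering coefficients from two fMRI runs for 5 subjects.
run1 = [0.42, 0.51, 0.39, 0.47, 0.55]
run2 = [0.44, 0.50, 0.41, 0.45, 0.57]
print(round(icc_oneway(run1, run2), 2))
```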

  19. Validating New Software for Semiautomated Liver Volumetry--Better than Manual Measurement?

    Science.gov (United States)

    Noschinski, L E; Maiwald, B; Voigt, P; Wiltberger, G; Kahn, T; Stumpp, P

    2015-09-01

    This prospective study compared a manual program for liver volumetry with semiautomated software. The hypothesis was that the semiautomated software would be faster, more accurate and less dependent on the evaluator's experience. Ten patients undergoing hemihepatectomy were included in this IRB approved study after written informed consent. All patients underwent a preoperative abdominal 3-phase CT scan, which was used for whole liver volumetry and volume prediction for the liver part to be resected. Two different types of software were used: 1) manual method: borders of the liver had to be defined per slice by the user; 2) semiautomated software: automatic identification of liver volume with manual assistance for definition of Couinaud segments. Measurements were done by six observers with different experience levels. Water displacement volumetry immediately after partial liver resection served as the gold standard. The resected part was examined with a CT scan after displacement volumetry. Volumetry of the resected liver scan showed excellent correlation to water displacement volumetry (manual: ρ = 0.997; semiautomated software: ρ = 0.995). The difference between the predicted volume and the real volume was significantly smaller with the semiautomated software than with the manual method (33% vs. 57%, p = 0.002). The semiautomated software was almost four times faster for volumetry of the whole liver (manual: 6:59 ± 3:04 min; semiautomated: 1:47 ± 1:11 min). Both methods for liver volumetry give an estimated liver volume close to the real one. The tested semiautomated software is faster, more accurate in predicting the volume of the resected liver part, gives more reproducible results and is less dependent on the user's experience. Both tested types of software allow exact volumetry of resected liver parts. Preoperative prediction can be performed more accurately with the semiautomated software. The semiautomated software is nearly four times faster than the

  20. Estimates for Parameter Littlewood-Paley gκ⁎ Functions on Nonhomogeneous Metric Measure Spaces

    Directory of Open Access Journals (Sweden)

    Guanghui Lu

    2016-01-01

    Full Text Available Let (X,d,μ) be a metric measure space which satisfies the geometrically doubling measure and the upper doubling measure conditions. In this paper, the authors prove that, under the assumption that the kernel of Mκ⁎ satisfies a certain Hörmander-type condition, Mκ⁎,ρ is bounded from Lebesgue spaces Lp(μ) to Lebesgue spaces Lp(μ) for p≥2 and is bounded from L1(μ) into L1,∞(μ). As a corollary, Mκ⁎,ρ is bounded on Lp(μ) for 1<p<2.

  1. On Nakhleh's metric for reduced phylogenetic networks

    OpenAIRE

    Cardona, Gabriel; Llabrés, Mercè; Rosselló, Francesc; Valiente Feruglio, Gabriel Alejandro

    2009-01-01

    We prove that Nakhleh’s metric for reduced phylogenetic networks is also a metric on the classes of tree-child phylogenetic networks, semibinary tree-sibling time consistent phylogenetic networks, and multilabeled phylogenetic trees. We also prove that it separates distinguishable phylogenetic networks. In this way, it becomes the strongest dissimilarity measure for phylogenetic networks available so far. Furthermore, we propose a generalization of that metric that separates arbitrary phyl...

  2. Liver stiffness measurement by transient elastography predicts late posthepatectomy outcomes in patients undergoing resection for hepatocellular carcinoma.

    Science.gov (United States)

    Rajakannu, Muthukumarassamy; Cherqui, Daniel; Ciacio, Oriana; Golse, Nicolas; Pittau, Gabriella; Allard, Marc Antoine; Antonini, Teresa Maria; Coilly, Audrey; Sa Cunha, Antonio; Castaing, Denis; Samuel, Didier; Guettier, Catherine; Adam, René; Vibert, Eric

    2017-10-01

    Postoperative hepatic decompensation is a serious complication of liver resection in patients undergoing hepatectomy for hepatocellular carcinoma. Liver fibrosis and clinical significant portal hypertension are well-known risk factors for hepatic decompensation. Liver stiffness measurement is a noninvasive method of evaluating hepatic venous pressure gradient and functional hepatic reserve by estimating hepatic fibrosis. Effectiveness of liver stiffness measurement in predicting persistent postoperative hepatic decompensation has not been investigated. Consecutive patients with resectable hepatocellular carcinoma were recruited prospectively and liver stiffness measurement of nontumoral liver was measured using FibroScan. Hepatic venous pressure gradient was measured intraoperatively by direct puncture of portal vein and inferior vena cava. Hepatic venous pressure gradient ≥10 mm Hg was defined as clinically significant portal hypertension. Primary outcome was persistent hepatic decompensation defined as the presence of at least one of the following: unresolved ascites, jaundice, and/or encephalopathy >3 months after hepatectomy. One hundred and six hepatectomies, including 22 right hepatectomy (20.8%), 3 central hepatectomy (2.8%), 12 left hepatectomy (11.3%), 11 bisegmentectomy (10.4%), 30 unisegmentectomy (28.3%), and 28 partial hepatectomy (26.4%) were performed in patients for hepatocellular carcinoma (84 men and 22 women with median age of 67.5 years; median model for end-stage liver disease score of 8). Ninety-day mortality was 4.7%. Nine patients (8.5%) developed postoperative hepatic decompensation. Multivariate logistic regression bootstrapped at 1,000 identified liver stiffness measurement (P = .001) as the only preoperative predictor of postoperative hepatic decompensation. Area under receiver operating characteristic curve for liver stiffness measurement and hepatic venous pressure gradient was 0.81 (95% confidence interval, 0.506-0.907) and 0

  3. RAAK PRO project: measuring safety in aviation : concept for the design of new metrics

    NARCIS (Netherlands)

    Karanikas, Nektarios; Kaspers, Steffen; Roelen, Alfred; Piric, Selma; van Aalst, Robbert; de Boer, Robert

    2017-01-01

    Following the completion of the 1st phase of the RAAK PRO project Aviation Safety Metrics, during which the researchers mapped the current practice in safety metrics and explored the validity of monotonic relationships of SMS, activity and demographic metrics with safety outcomes, this report

  4. Is Time the Best Metric to Measure Carbon-Related Climate Change Potential and Tune the Economy Toward Reduced Fossil Carbon Extraction?

    Science.gov (United States)

    DeGroff, F. A.

    2016-12-01

    Anthropogenic changes to non-anthropogenic carbon fluxes are a primary driver of climate change. There currently exists no comprehensive metric to measure and value anthropogenic changes in carbon flux between all states of carbon. Focusing on atmospheric carbon emissions as a measure of anthropogenic activity on the environment ignores the fungible characteristics of carbon that are crucial in both the biosphere and the worldwide economy. Focusing on a single form of inorganic carbon as a proxy metric for the plethora of anthropogenic activity and carbon compounds will prove inadequate, convoluted, and unmanageable. A broader, more basic metric is needed to capture the entirety of carbon activity, particularly in an economic, profit-driven environment. We propose a new metric to measure changes in the temporal distance of any form or state of carbon from one state to another. Such a metric would be especially useful to measure the temporal distance of carbon from sinks such as the atmosphere or oceans. The effect of changes in carbon flux as a result of any human activity can be measured by the difference between the anthropogenic and non-anthropogenic temporal distance. The change in the temporal distance is a measure of the climate change potential, much like voltage is a measure of electrical potential. The integral of the climate change potential is proportional to the anthropogenic climate change. We also propose a logarithmic vector scale for carbon quality, cq, as a measure of anthropogenic changes in carbon flux. The distance between the cq vector starting and ending temporal distances represents the change in cq. A base-10 logarithmic scale would allow the addition and subtraction of exponents to calculate changes in cq. As anthropogenic activity changes the temporal distance of carbon, the change in cq is measured as: cq = β ( log10 [mean carbon temporal distance] ), where β represents the carbon price coefficient for a particular country. For any
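
    The proposed cq metric lends itself to a one-line implementation. A hedged sketch, in which the β coefficient and the example temporal distances are placeholders rather than values from the abstract:

```python
import math

def carbon_quality(mean_temporal_distance_years, beta=1.0):
    """Proposed cq metric: beta * log10(mean carbon temporal distance).
    `beta` is the country-specific carbon price coefficient; the default
    of 1.0 is a placeholder, not a value from the abstract."""
    return beta * math.log10(mean_temporal_distance_years)

# Moving carbon from a ~100-year pool to a ~10-year pool changes cq by
# -1 * beta on the proposed base-10 logarithmic scale.
delta_cq = carbon_quality(10) - carbon_quality(100)
print(delta_cq)  # -> -1.0
```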

  5. Web metrics for library and information professionals

    CERN Document Server

    Stuart, David

    2014-01-01

    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional. Th...

  6. Assessing Software Quality Through Visualised Cohesion Metrics

    Directory of Open Access Journals (Sweden)

    Timothy Shih

    2001-05-01

    Full Text Available Cohesion is one of the most important factors for software quality as well as maintainability, reliability and reusability. Module cohesion is defined as a quality attribute that seeks to measure the singleness of purpose of a module. A module of poor quality can be a serious obstacle to system quality. In order to design software of good quality, software managers and engineers need to introduce cohesion metrics to measure and produce desirable software. Highly cohesive software is thought to be a desirable construction. In this paper, we propose a function-oriented cohesion metric based on the analysis of live variables, live span and the visualization of a processing-element dependency graph. We measure six typical cohesion examples as experiments and justification. A well-defined, well-normalized, well-visualized and well-tested cohesion metric is thus proposed to indicate and thereby enhance software cohesion strength. Furthermore, this cohesion metric can easily be incorporated into software CASE tools to help software engineers improve software quality.
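
    As a rough illustration of a live-variable-based cohesion measure, see the toy sketch below; it scores a module by how much of its length its variables remain live, and is a stand-in rather than the paper's actual formula.

```python
def live_spans(var_lines):
    """var_lines maps each variable to the line numbers where it is
    defined or used; its live span is [first use, last use]."""
    return {v: (min(ls), max(ls)) for v, ls in var_lines.items()}

def cohesion(var_lines, n_lines):
    """A toy cohesion score in [0, 1]: average live span length divided
    by module length.  Illustrative only, not the paper's metric."""
    spans = live_spans(var_lines)
    avg_span = sum(hi - lo + 1 for lo, hi in spans.values()) / len(spans)
    return avg_span / n_lines

# A 10-line function whose variables stay live almost throughout scores
# higher than one whose variables live in disjoint halves.
print(cohesion({"x": [1, 9], "y": [2, 10]}, 10))  # ~0.90, high cohesion
print(cohesion({"x": [1, 4], "y": [6, 10]}, 10))  # ~0.45, lower cohesion
```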

  7. Multi-slice CT three dimensional volume measurement of tumors and livers in hepatocellular carcinoma

    International Nuclear Information System (INIS)

    Yu Yuanlong; Li Liangcai; Tang Binghang; Hu Zemin

    2004-01-01

    Objective: To examine the accuracy of multi-slice CT (MSCT) three-dimensional (3D) volume measurement of tumors and livers in hepatocellular carcinoma cases, using the immersion method as the standard. Methods: (1) The volume of 25 porkling livers was measured in vitro using the immersion method. Models were then built according to Matsumoto's method, and CT scanning with special software was used to measure the volume of the livers. (2) The volume of the tumors in 25 cases of hepatocellular carcinoma was measured using the diameter measurement method and special volume measurement software (tissue measurements). Two of the tumors were also measured using MSCT 3D measurement and diameter measurement before the operation and the immersion method after the operation. The data of the two groups were compared using paired t tests. Results: (1) The volume range of the 25 porkling livers was 68.50-1150.10 ml by the immersion method and 69.78-1069.97 ml by MSCT 3D measurement, with no significant difference between the two groups (t=1.427, P>0.05). (2) The volume range of the 25 hepatocellular tumors was 395.16-2747.7 ml by diameter measurement and 203.10-1463.19 ml by MSCT 3D measurement before the operation, a significant difference (t=7.689, P<0.001). Of the 2 ablated tumors, one had a volume of (21.75±0.60) ml by MSCT 3D measurement and 33.73 ml by diameter measurement before the operation, and 21.50 ml by immersion measurement after the operation; the other had a volume of (696.13±5.30) ml by MSCT 3D measurement and 1323.51 ml by diameter measurement before the operation, and 685.50 ml by immersion measurement after the operation. Conclusion: MSCT 3D volume measurement can accurately measure the volume of tumor and liver and has important clinical application value. There is no significant difference between MSCT 3D volume measurement and the immersion method

  8. Standardised metrics for global surgical surveillance.

    Science.gov (United States)

    Weiser, Thomas G; Makary, Martin A; Haynes, Alex B; Dziekan, Gerald; Berry, William R; Gawande, Atul A

    2009-09-26

    Public health surveillance relies on standardised metrics to evaluate disease burden and health system performance. Such metrics have not been developed for surgical services despite increasing volume, substantial cost, and high rates of death and disability associated with surgery. The Safe Surgery Saves Lives initiative of WHO's Patient Safety Programme has developed standardised public health metrics for surgical care that are applicable worldwide. We assembled an international panel of experts to develop and define metrics for measuring the magnitude and effect of surgical care in a population, while taking into account economic feasibility and practicability. This panel recommended six measures for assessing surgical services at a national level: number of operating rooms, number of operations, number of accredited surgeons, number of accredited anaesthesia professionals, day-of-surgery death ratio, and postoperative in-hospital death ratio. We assessed the feasibility of gathering such statistics at eight diverse hospitals in eight countries and incorporated them into the WHO Guidelines for Safe Surgery, in which methods for data collection, analysis, and reporting are outlined.
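
    The two death-ratio measures in the recommended set are straightforward rates. A minimal sketch, with simplified denominator handling that the WHO guidelines may define differently:

```python
def surgical_metrics(n_operations, deaths_day_of_surgery, deaths_postop_in_hospital):
    """Two of the six recommended national measures, computed as crude
    death ratios per operation.  Denominator conventions here are a
    simplifying assumption, not the WHO definition verbatim."""
    return {
        "day_of_surgery_death_ratio": deaths_day_of_surgery / n_operations,
        "postop_in_hospital_death_ratio": deaths_postop_in_hospital / n_operations,
    }

print(surgical_metrics(n_operations=12_000,
                       deaths_day_of_surgery=18,
                       deaths_postop_in_hospital=96))
```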

  9. Metric Learning for Hyperspectral Image Segmentation

    Science.gov (United States)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This transform defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
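
    The approach described, Multiclass Linear Discriminant Analysis producing a transform that acts as a learned distance metric, can be sketched with scikit-learn on synthetic stand-in "spectra"; the CRISM data and the paper's exact pipeline are not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy stand-in for labeled spectra: 3 classes, 20-band "spectra".
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(30, 20)) for m in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 30)

# Fit LDA; its linear transform defines a task-specific distance metric.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)

def learned_distance(a, b):
    """Euclidean distance in the LDA-projected space, i.e. a
    Mahalanobis-style metric in the original spectral space."""
    return float(np.linalg.norm(lda.transform([a]) - lda.transform([b])))

# Same-class pixels should be closer than different-class pixels.
print(learned_distance(X[0], X[1]), learned_distance(X[0], X[-1]))
```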

  10. Characterising risk - aggregated metrics: radiation and noise

    International Nuclear Information System (INIS)

    Passchier, W.

    1998-01-01

    The characterisation of risk is an important phase in the risk assessment - risk management process. From the multitude of risk attributes a few have to be selected to obtain a risk characteristic or profile that is useful for risk management decisions and implementation of protective measures. One way to reduce the number of attributes is aggregation. In the field of radiation protection such an aggregated metric is firmly established: effective dose. For protection against environmental noise, the Health Council of the Netherlands recently proposed a set of aggregated metrics for noise annoyance and sleep disturbance. The presentation will discuss similarities and differences between these two metrics, as well as practical limitations. The effective dose has proven its usefulness in designing radiation protection measures, which are related to the level of risk associated with the radiation practice in question, given that implicit judgements on radiation-induced health effects are accepted. However, as the metric does not take into account the nature of the radiation practice, it is less useful in policy discussions on the benefits and harm of radiation practices. With respect to the noise exposure metric, only one effect is targeted (annoyance), and the differences between sources are explicitly taken into account. This should make the metric useful in policy discussions with respect to physical planning and siting problems. The metric proposed is only meaningful at the population level and cannot be used as a predictor of individual risk. (author)

  11. Development and application of a rat PBPK model to elucidate kidney and liver effects induced by ETBE and tert-butanol

    International Nuclear Information System (INIS)

    Salazar, Keith D.; Brinkerhoff, Christopher J.; Lee, Janice S.; Chiu, Weihsueh A.

    2015-01-01

    Subchronic and chronic studies in rats of the gasoline oxygenates ethyl tert-butyl ether (ETBE) and tert-butanol (TBA) report similar noncancer kidney and liver effects but differing results with respect to kidney and liver tumors. Because TBA is a major metabolite of ETBE, it is possible that TBA is the active toxic moiety in all these studies, with reported differences due simply to differences in the internal dose. To test this hypothesis, a physiologically-based pharmacokinetic (PBPK) model was developed for ETBE and TBA to calculate internal dosimetrics of TBA following either TBA or ETBE exposure. This model, based on earlier PBPK models of methyl tert-butyl ether (MTBE), was used to evaluate whether kidney and liver effects are consistent across routes of exposure, as well as between ETBE and TBA studies, on the basis of estimated internal dose. The results demonstrate that noncancer kidney effects, including kidney weight changes, urothelial hyperplasia, and chronic progressive nephropathy (CPN), yielded consistent dose–response relationships across routes of exposure and across ETBE and TBA studies using TBA blood concentration as the dose metric. Relative liver weights were also consistent across studies on the basis of TBA metabolism, which is proportional to TBA liver concentrations. However, kidney and liver tumors were not consistent using any dose metric. These results support the hypothesis that TBA mediates the noncancer kidney and liver effects following ETBE administration; however, additional factors besides internal dose are necessary to explain the induction of liver and kidney tumors. - Highlights: • We model two metabolically-related fuel oxygenates to address toxicity data gaps. • Kidney and liver effects are compared on an internal dose basis. • Noncancer kidney effects are consistent using TBA blood concentration. • Liver weight changes are consistent using TBA metabolic rate. • Kidney and liver tumors are not consistent using
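
    To illustrate the kind of internal dose metric such a model produces (for example, a blood-concentration AUC), here is a deliberately minimal one-compartment sketch with arbitrary placeholder parameters; the actual PBPK model has many more physiological compartments.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal one-compartment stand-in for a full PBPK model, only to show
# how an internal dose metric (blood AUC) is derived.  All parameter
# values are arbitrary placeholders, not from the paper.
ka, ke, vd = 1.2, 0.3, 8.0   # absorption 1/h, elimination 1/h, volume L

def model(t, z):
    gut, conc = z                       # amount in gut (mg), blood conc (mg/L)
    return [-ka * gut, ka * gut / vd - ke * conc]

sol = solve_ivp(model, (0, 24), [100.0, 0.0], dense_output=True, max_step=0.1)
t = np.linspace(0, 24, 500)
conc = sol.sol(t)[1]
auc = np.trapz(conc, t)                 # internal dose metric: blood AUC
print(f"Cmax = {conc.max():.2f} mg/L, AUC = {auc:.1f} mg*h/L")
```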

  13. Empirical Information Metrics for Prediction Power and Experiment Planning

    Directory of Open Access Journals (Sweden)

    Christopher Lee

    2011-01-01

    Full Text Available In principle, information theory could provide useful metrics for statistical inference. In practice this is impeded by divergent assumptions: Information theory assumes the joint distribution of variables of interest is known, whereas in statistical inference it is hidden and is the goal of inference. To integrate these approaches we note a common theme they share, namely the measurement of prediction power. We generalize this concept as an information metric, subject to several requirements: Calculation of the metric must be objective or model-free; unbiased; convergent; probabilistically bounded; and low in computational complexity. Unfortunately, widely used model selection metrics such as Maximum Likelihood, the Akaike Information Criterion and the Bayesian Information Criterion do not necessarily meet all these requirements. We define four distinct empirical information metrics measured via sampling, with explicit Law of Large Numbers convergence guarantees, which meet these requirements: Ie, the empirical information, a measure of average prediction power; Ib, the overfitting bias information, which measures selection bias in the modeling procedure; Ip, the potential information, which measures the total remaining information in the observations not yet discovered by the model; and Im, the model information, which measures the model's extrapolation prediction power. Finally, we show that Ip + Ie, Ip + Im, and Ie − Im are fixed constants for a given observed dataset (i.e. prediction target), independent of the model, and thus represent a fundamental subdivision of the total information contained in the observations. We discuss the application of these metrics to modeling and experiment planning.
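
    As one concrete reading of Ie, the sketch below estimates average prediction power in bits per observation from held-out samples. It follows the sampling-based spirit of the paper but is not its exact definition.

```python
import numpy as np

def empirical_information(model_probs, baseline_probs):
    """One simple reading of 'empirical information' Ie: the average
    prediction-power gain of a model over a baseline, in bits per
    observation, estimated from held-out samples."""
    gains = np.log2(np.asarray(model_probs) / np.asarray(baseline_probs))
    return gains.mean()

# Probabilities each model assigned to the outcomes actually observed:
model_p    = [0.60, 0.70, 0.55, 0.80, 0.65]
baseline_p = [0.25, 0.25, 0.25, 0.25, 0.25]   # uniform over 4 outcomes
print(f"Ie ~ {empirical_information(model_p, baseline_p):.2f} bits/observation")
```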

  14. Liver regeneration and restoration of liver function after partial hepatectomy in patients with liver tumors

    International Nuclear Information System (INIS)

    Jansen, P.L.M.; Chamuleau, R.A.F.; Leeuwen, D.J. van; Schippor, H.G.; Busemann-Sokole, E.; Heyde, M.N. van der

    1990-01-01

    Liver regeneration and restoration of liver function were studied in six patients who underwent partial hepatectomy with removal of 30-70% of the liver. Liver volume and liver regeneration were studied by single photon computed tomography (SPECT), using 99m Tc-colloid as tracer. The method was assessed in 11 patients by comparing the pre- and post-operative volume measurement with the volume of the resected liver mass. Liver function was determined by measuring the galactose elimination capacity and the caffeine clearance. After a postoperative follow-up period of 50 days, the liver had regenerated maximally to a volume of 75 ± 2% of the preoperative liver mass. Maximal restoration of liver function was achieved 120 days after operation and amounted to 75 ± 10% for the caffeine clearance and to 100 ± 25% for the galactose elimination capacity. This study shows that SPECT is a useful method for assessing liver regeneration in patients after partial hepatectomy. The study furthermore shows that caffeine clearance correlates well with total liver volume, whereas the galactose elimination capacity overestimates total liver volume after partial hepatectomy. 22 refs

  15. Software Power Metric Model: An Implementation | Akwukwuma ...

    African Journals Online (AJOL)

    ... and the execution time (TIME) in each case was recorded. We then obtained the application's function point count. Our results show that the proposed metric is computable, consistent in its use of units, and programming-language independent. Keywords: Software attributes, Software power, measurement, Software metric, ...

  16. About the possibility of a generalized metric

    International Nuclear Information System (INIS)

    Lukacs, B.; Ladik, J.

    1991-10-01

    The metric (the structure of the space-time) may be dependent on the properties of the object measuring it. The case of size dependence of the metric was examined. For this dependence the simplest possible form of the metric tensor has been constructed which fulfils the following requirements: there be two extremal characteristic scales; the metric be unique and coincide with the usual one between them; the change be sudden in the neighbourhood of these scales; and the size of the human body appear as a parameter (postulated on the basis of some philosophical arguments). Estimates have been made for the two extremal length scales according to existing observations. (author) 19 refs

  17. In vivo measurements of relaxation process in the human liver by MRI. The role of respiratory gating/triggering

    DEFF Research Database (Denmark)

    Thomsen, C; Henriksen, O; Ring, P

    1988-01-01

    In vivo estimation of relaxation processes in the liver by magnetic resonance imaging (MRI) may be helpful for characterization of various pathological conditions in the liver. However, such measurements may be significantly hampered by movement of the liver with respiration. The effect of synchronization of data acquisition to the respiratory cycle on measured T1- and T2-relaxation curves was studied in normal subjects, patients with diffuse liver disease, and patients with focal liver pathology. Multi spin echo sequences with five different repetition times were used. The measurements were carried out with and without respiratory gating/triggering. In the healthy subjects as well as in the patients with diffuse liver diseases, respiratory synchronization did not alter the obtained relaxation curves. However, in the patients with focal pathology the relaxation curves were significantly...

  18. Temporal variability of daily personal magnetic field exposure metrics in pregnant women.

    Science.gov (United States)

    Lewis, Ryan C; Evenson, Kelly R; Savitz, David A; Meeker, John D

    2015-01-01

    Recent epidemiology studies of power-frequency magnetic fields and reproductive health have characterized exposures using data collected from personal exposure monitors over a single day, possibly resulting in exposure misclassification due to temporal variability in daily personal magnetic field exposure metrics, but relevant data in adults are limited. We assessed the temporal variability of daily central tendency (time-weighted average, median) and peak (upper percentiles, maximum) personal magnetic field exposure metrics over 7 consecutive days in 100 pregnant women. When exposure was modeled as a continuous variable, central tendency metrics had substantial reliability, whereas peak metrics had fair (maximum) to moderate (upper percentiles) reliability. The predictive ability of a single-day metric to accurately classify participants into exposure categories based on a weeklong metric depended on the selected exposure threshold, with sensitivity decreasing with increasing exposure threshold. Consistent with the continuous measures analysis, sensitivity was higher for central tendency metrics than for peak metrics. If there is interest in peak metrics, more than 1 day of measurement is needed over the window of disease susceptibility to minimize measurement error, but 1 day may be sufficient for central tendency metrics.
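
    The central tendency and peak metrics discussed above are simple summaries of a day of monitor samples. A sketch, assuming a fixed 4-second sampling interval (so the time-weighted average reduces to the arithmetic mean):

```python
import numpy as np

def daily_exposure_metrics(samples):
    """Summarise one day of personal magnetic-field samples (microtesla).
    With a fixed sampling interval, the time-weighted average is simply
    the arithmetic mean; the interval itself is an assumption here."""
    x = np.asarray(samples)
    return {
        "twa": x.mean(),
        "median": np.median(x),
        "p95": np.percentile(x, 95),
        "p99": np.percentile(x, 99),
        "max": x.max(),
    }

rng = np.random.default_rng(1)
day = rng.lognormal(mean=-2.0, sigma=0.8, size=21_600)  # ~24 h at 4-s samples
print(daily_exposure_metrics(day))
```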

  19. η-metric structures

    OpenAIRE

    Gaba, Yaé Ulrich

    2017-01-01

    In this paper, we discuss recent results about generalized metric spaces and fixed point theory. We introduce the notion of η-cone metric spaces, give some topological properties and prove some fixed point theorems for contractive type maps on these spaces. In particular we show that these η-cone metric spaces are natural generalizations of both cone metric spaces and metric type spaces.

  20. Gravitational lensing in metric theories of gravity

    International Nuclear Information System (INIS)

    Sereno, Mauro

    2003-01-01

    Gravitational lensing in metric theories of gravity is discussed. I introduce a generalized approximate metric element, inclusive of both post-post-Newtonian contributions and a gravitomagnetic field. Following Fermat's principle and standard hypotheses, I derive the time delay function and deflection angle caused by an isolated mass distribution. Several astrophysical systems are considered. In most of the cases, the gravitomagnetic correction offers the best perspectives for an observational detection. Actual measurements can distinguish different metric theories from each other only marginally.

  1. Hemodynamic changes with liver fibrosis measured by dynamic contrast-enhanced MRI in the rat

    International Nuclear Information System (INIS)

    Kubo, Hitoshi; Harada, Masafumi; Ishikawa, Makoto; Nishitani, Hiromu

    2006-01-01

    The purpose of this study was to evaluate the hemodynamic changes of liver cirrhosis in the rat and investigate the relationship between hemodynamic changes and properties of fibrotic change in the liver. Three rats with cirrhosis induced by thioacetamide (TAA), three with disease induced by carbon tetrachloride (CCl4), and three with no treatment were measured on dynamic MRI using a 1.5T scanner. Compartment and moment analysis were used to quantitate hemodynamic changes. Compartment model analysis showed that increased transition speed from vessels to the liver correlated with grade of liver fibrosis. Moment analysis demonstrated that decrease of area under the curve (AUC), mean residence time (MRT), variance of residence time (VRT), half life (T1/2) and increased total clearance (CL) correlated with grade of liver fibrosis. Hemodynamic changes in injured fibrotic liver may be influenced by the grade of fibrosis. Compartment model and moment analysis may be useful for evaluating hemodynamic changes in injured liver. (author)
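
    The moment-analysis quantities named above (AUC, MRT, VRT) are statistical moments of the concentration-time curve. A sketch using trapezoidal integration over the sampled window only, without the tail extrapolation a full analysis would add:

```python
import numpy as np

def moment_analysis(t, c):
    """Statistical-moment metrics from a concentration-time curve:
    AUC, mean residence time (MRT) and variance of residence time (VRT),
    via trapezoidal integration over the sampled window."""
    auc = np.trapz(c, t)
    aumc = np.trapz(t * c, t)                    # first moment
    mrt = aumc / auc
    vrt = np.trapz((t - mrt) ** 2 * c, t) / auc  # second central moment
    return auc, mrt, vrt

t = np.linspace(0, 10, 200)                      # minutes
c = 5.0 * (np.exp(-0.4 * t) - np.exp(-2.0 * t))  # toy enhancement curve
auc, mrt, vrt = moment_analysis(t, c)
print(f"AUC={auc:.2f}, MRT={mrt:.2f} min, VRT={vrt:.2f} min^2")
```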

  2. Improved noninvasive prediction of liver fibrosis by liver stiffness measurement in patients with nonalcoholic fatty liver disease accounting for controlled attenuation parameter values.

    Science.gov (United States)

    Petta, Salvatore; Wong, Vincent Wai-Sun; Cammà, Calogero; Hiriart, Jean-Baptiste; Wong, Grace Lai-Hung; Marra, Fabio; Vergniol, Julien; Chan, Anthony Wing-Hung; Di Marco, Vito; Merrouche, Wassil; Chan, Henry Lik-Yuen; Barbara, Marco; Le-Bail, Brigitte; Arena, Umberto; Craxì, Antonio; de Ledinghen, Victor

    2017-04-01

    Liver stiffness measurement (LSM) frequently overestimates the severity of liver fibrosis in nonalcoholic fatty liver disease (NAFLD). Controlled attenuation parameter (CAP) is a new parameter provided by the same machine used for LSM and associated with both steatosis and body mass index, the two factors mostly affecting LSM performance in NAFLD. We aimed to determine whether prediction of liver fibrosis by LSM in NAFLD patients is affected by CAP values. Patients (n = 324) were assessed by clinical and histological (Kleiner score) features. LSM and CAP were performed using the M probe. CAP values were grouped by tertiles (lower 132-298, middle 299-338, higher 339-400 dB/m). Among patients with F0-F2 fibrosis, mean LSM values, expressed in kilopascals, increased according to CAP tertiles (6.8 versus 8.6 versus 9.4, P = 0.001), and along this line the area under the curve of LSM for the diagnosis of F3-F4 fibrosis was progressively reduced from lower to middle and further to higher CAP tertiles (0.915, 0.848-0.982; 0.830, 0.753-0.908; 0.806, 0.723-0.890). As a consequence, in subjects with F0-F2 fibrosis, the rates of false-positive LSM results for F3-F4 fibrosis increased according to CAP tertiles (7.2% in lower versus 16.6% in middle versus 18.1% in higher). Consistent with this, a decisional flowchart for predicting fibrosis was suggested by combining both LSM and CAP values. In patients with NAFLD, CAP values should always be taken into account in order to avoid overestimations of liver fibrosis assessed by transient elastography. (Hepatology 2017;65:1145-1155). © 2016 by the American Association for the Study of Liver Diseases.

  3. Coverage Metrics for Model Checking

    Science.gov (United States)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.

  4. Software metrics a rigorous and practical approach

    CERN Document Server

    Fenton, Norman

    2014-01-01

    A Framework for Managing, Measuring, and Predicting Attributes of Software Development Products and Processes. Reflecting the immense progress in the development and use of software metrics in the past decades, Software Metrics: A Rigorous and Practical Approach, Third Edition provides an up-to-date, accessible, and comprehensive introduction to software metrics. Like its popular predecessors, this third edition discusses important issues, explains essential concepts, and offers new approaches for tackling long-standing problems. New to the Third Edition: This edition contains new material relevant

  5. Geographic inequities in liver allograft supply and demand: does it affect patient outcomes?

    Science.gov (United States)

    Rana, Abbas; Kaplan, Bruce; Riaz, Irbaz B; Porubsky, Marian; Habib, Shahid; Rilo, Horacio; Gruessner, Angelika C; Gruessner, Rainer W G

    2015-03-01

    Significant geographic inequities mar the distribution of liver allografts for transplantation. We analyzed the effect of geographic inequities on patient outcomes. During our study period (January 1 through December 31, 2010), 11,244 adult candidates were listed for liver transplantation: 5,285 adult liver allografts became available, and 5,471 adult recipients underwent transplantation. We obtained population data from the 2010 United States Census. To determine the effect of regional supply and demand disparities on patient outcomes, we performed linear regression and multivariate Cox regression analyses. Our proposed disparity metric, the ratio of listed candidates to liver allografts available, varied from 1.3 (region 11) to 3.4 (region 1). When that ratio was used as the explanatory variable, the R2 values for outcome measures were as follows: 1-year waitlist mortality, 0.23; 1-year posttransplant survival, 0.27. According to our multivariate analysis, the ratio of listed candidates to liver allografts available had a significant effect on waitlist survival (hazards ratio, 1.21; 95% confidence interval, 1.04-1.40) but was not a significant risk factor for posttransplant survival. We found significant differences in liver allograft supply and demand, but these differences had only a modest effect on patient outcomes. Redistricting and allocation-sharing schemes should seek to equalize regional supply and demand rather than attempting to equalize patient outcomes.
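
    The disparity metric itself is a simple ratio, and the reported R2 values come from univariate linear fits against it. A sketch with illustrative regional numbers, chosen only so the extremes match the reported 1.3 and 3.4:

```python
import numpy as np

# Region-level disparity metric from the study: listed candidates per
# available liver allograft.  All counts below are illustrative.
candidates = np.array([1190, 820, 950, 1020, 760, 890, 540, 600, 700, 810, 660])
allografts = np.array([350, 410, 430, 480, 400, 420, 300, 330, 380, 450, 508])
ratio = candidates / allografts    # region 1 -> ~3.4, region 11 -> ~1.3

# R^2 of a univariate linear fit of an outcome on the ratio, as in the
# paper; the synthetic outcome here just demonstrates the computation.
rng = np.random.default_rng(2)
waitlist_mortality = 0.05 + 0.02 * ratio + rng.normal(0, 0.01, ratio.size)
r = np.corrcoef(ratio, waitlist_mortality)[0, 1]
print(f"ratios {ratio.min():.1f}-{ratio.max():.1f}, R^2 = {r**2:.2f}")
```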

  6. A new metric for measuring condition in large predatory sharks.

    Science.gov (United States)

    Irschick, D J; Hammerschlag, N

    2014-09-01

    A simple metric (span condition analysis; SCA) is presented for quantifying the condition of sharks based on four measurements of body girth relative to body length. Data on 104 live sharks from four species that vary in body form, behaviour and habitat use (Carcharhinus leucas, Carcharhinus limbatus, Ginglymostoma cirratum and Galeocerdo cuvier) are given. Condition shows similar levels of variability among individuals within each species. Carcharhinus leucas showed a positive relationship between condition and body size, whereas the other three species showed no relationship. There was little evidence for strong differences in condition between males and females, although more male sharks are needed for some species (e.g. G. cuvier) to verify this finding. SCA is potentially viable for other large marine or terrestrial animals that are captured live and then released. © 2014 The Fisheries Society of the British Isles.
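
    The abstract does not give the exact SCA formula, so the sketch below uses a mean girth-to-length ratio purely as an illustrative stand-in for a condition score built from four girth measurements.

```python
import numpy as np

def span_condition(girths_cm, body_length_cm):
    """Span-condition-style score: summarise four body girths relative
    to body length.  The mean girth-to-length ratio is an illustrative
    stand-in; the published SCA combination may differ."""
    ratios = np.asarray(girths_cm) / body_length_cm
    return ratios.mean()

# Two sharks of the same length; the second is in better condition.
print(span_condition([95, 110, 88, 60], 220.0))   # ~0.40
print(span_condition([105, 124, 99, 68], 220.0))  # ~0.45
```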

  7. A Study on the Measurement of Intrapulmonary Shunt in Liver Diseases by the Radionuclide Method

    International Nuclear Information System (INIS)

    Yun, Sung Chul; Ahn, Jae Hee; Choi, Soo Bong

    1987-01-01

    The fact that the amount of intrapulmonary arteriovenous shunting is increased in patients with liver cirrhosis has been known since 1950, and a method of calculating the shunt amount by the radionuclide method using 99mTc-MAA was introduced in the mid-1970s. We measured the intrapulmonary shunt amount by means of perfusion lung scans using 99mTc-MAA in various types of liver disease, especially chronic liver diseases and acute liver disease. The results were as follows. 1) The amount of arteriovenous intrapulmonary shunt in all cases of liver disease was 9.3±3.9%, versus 4.6±2.1% in the control group. 2) The amount of arteriovenous intrapulmonary shunt in chronic liver disease was 10.8±4.4%, and in acute liver disease 7.2±2.8%. We observed significant differences in shunt amount between the normal control group and the liver disease group, and between the chronic and acute liver disease groups, by the radionuclide method.
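
    The radionuclide shunt calculation rests on the fact that 99mTc-MAA particles that pass through an intrapulmonary shunt lodge in systemic capillary beds instead of the lung. A common formulation (the paper's exact protocol may differ):

```python
def intrapulmonary_shunt(whole_body_counts, lung_counts):
    """Right-to-left shunt fraction from a 99mTc-MAA perfusion study:
    the activity that bypassed the pulmonary capillary bed and lodged
    systemically, as a percentage of whole-body activity."""
    return 100.0 * (whole_body_counts - lung_counts) / whole_body_counts

# ~10.8%, comparable to the chronic liver disease group reported above.
print(intrapulmonary_shunt(whole_body_counts=1_000_000, lung_counts=892_000))
```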

  8. Estimating liver perfusion from free-breathing continuously acquired dynamic gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid-enhanced acquisition with compressed sensing reconstruction.

    Science.gov (United States)

    Chandarana, Hersh; Block, Tobias Kai; Ream, Justin; Mikheev, Artem; Sigal, Samuel H; Otazo, Ricardo; Rusinek, Henry

    2015-02-01

    The purpose of this study was to estimate perfusion metrics in healthy and cirrhotic liver with pharmacokinetic modeling of high-temporal resolution reconstruction of continuously acquired free-breathing gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid-enhanced acquisition in patients undergoing clinically indicated liver magnetic resonance imaging. In this Health Insurance Portability and Accountability Act-compliant prospective study, 9 cirrhotic and 10 noncirrhotic patients underwent clinical magnetic resonance imaging, which included continuously acquired radial stack-of-stars 3-dimensional gradient recalled echo sequence with golden-angle ordering scheme in free breathing during contrast injection. A total of 1904 radial spokes were acquired continuously in 318 to 340 seconds. High-temporal resolution data sets were formed by grouping 13 spokes per frame for temporal resolution of 2.2 to 2.4 seconds, which were reconstructed using the golden-angle radial sparse parallel technique that combines compressed sensing and parallel imaging. High-temporal resolution reconstructions were evaluated by a board-certified radiologist to generate gadolinium concentration-time curves in the aorta (arterial input function), portal vein (venous input function), and liver, which were fitted to a dual-input dual-compartment model to estimate liver perfusion metrics that were compared between cirrhotic and noncirrhotic livers. The cirrhotic livers had significantly lower total plasma flow (70.1 ± 10.1 versus 103.1 ± 24.3 mL/min per 100 mL; P < 0.05). The mean transit time was higher in the cirrhotic livers (24.4 ± 4.7 versus 15.7 ± 3.4 seconds; P < 0.05), and the hepatocellular uptake rate was lower (3.03 ± 2.1 versus 6.53 ± 2.4 100/min; P < 0.05). Liver perfusion metrics can be estimated from free-breathing dynamic acquisition performed for every clinical examination without additional contrast injection or time. This is a novel paradigm for dynamic liver imaging.

  9. Measuring US Army medical evacuation: Metrics for performance improvement.

    Science.gov (United States)

    Galvagno, Samuel M; Mabry, Robert L; Maddry, Joseph; Kharod, Chetan U; Walrath, Benjamin D; Powell, Elizabeth; Shackelford, Stacy

    2018-01-01

    The US Army medical evacuation (MEDEVAC) community has maintained a reputation for high levels of success in transporting casualties from the point of injury to definitive care. This work served as a demonstration project to advance a model of quality assurance surveillance and medical direction for prehospital MEDEVAC providers within the Joint Trauma System. A retrospective interrupted time series analysis using prospectively collected data was performed as a process improvement project. Records were reviewed during two distinct periods: 2009 and 2014 to 2015. MEDEVAC records were matched to outcomes data available in the Department of Defense Trauma Registry. Abstracted deidentified data were reviewed for specific outcomes, procedures, and processes of care. Descriptive statistics were applied as appropriate. A total of 1,008 patients were included in this study. Nine quality assurance metrics were assessed. These metrics were: airway management, management of hypoxemia, compliance with a blood transfusion protocol, interventions for hypotensive patients, quality of battlefield analgesia, temperature measurement and interventions, proportion of traumatic brain injury (TBI) patients with hypoxemia and/or hypotension, proportion of traumatic brain injury patients with an appropriate assessment, and proportion of missing data. Overall survival in the subset of patients with outcomes data available in the Department of Defense Trauma Registry was 97.5%. The data analyzed for this study suggest overall high compliance with established tactical combat casualty care guidelines. In the present study, nearly 7% of patients had at least one documented oxygen saturation of less than 90%, and 13% of these patients had no documentation of any intervention for hypoxemia, indicating a need for training focus on airway management for hypoxemia. Advances in battlefield analgesia continued to evolve over the period when data for this study was collected. Given the inherent high

  10. A new formula for estimation of standard liver volume using computed tomography-measured body thickness.

    Science.gov (United States)

    Ma, Ka Wing; Chok, Kenneth S H; Chan, Albert C Y; Tam, Henry S C; Dai, Wing Chiu; Cheung, Tan To; Fung, James Y Y; Lo, Chung Mau

    2017-09-01

    The objective of this article is to derive a more accurate and easy-to-use formula for finding estimated standard liver volume (ESLV) using novel computed tomography (CT) measurement parameters. New formulas for ESLV have been emerging that aim to improve the accuracy of estimation. However, many of these formulas contain body surface area measurements and logarithms in the equations, which lead to a more complicated calculation. In addition, substantial errors in ESLV using these old formulas have been shown. An improved version of the formula for ESLV is needed. This is a retrospective cohort of consecutive living donor liver transplantations from 2005 to 2016. Donors were randomly assigned to either the formula derivation or validation groups. Total liver volume (TLV) measured by CT was used as the reference for a linear regression analysis against various patient factors. The derived formula was compared with the existing formulas. There were 722 patients (197 from the derivation group, 164 from the validation group, and 361 from the recipient group) involved in the study. The donor's body weight (odds ratio [OR], 10.42; 95% confidence interval [CI], 7.25-13.60; P < 0.001) ... Liver Transplantation 23:1113-1122 2017 AASLD. © 2017 by the American Association for the Study of Liver Diseases.
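
    The derivation step described, regressing CT-measured total liver volume on donor factors such as body weight, can be sketched as follows; the data and the fitted coefficients are synthetic, not the paper's published formula.

```python
import numpy as np

# Sketch of deriving a linear ESLV formula: regress CT-measured total
# liver volume (TLV) on donor body weight.  All values are synthetic.
rng = np.random.default_rng(3)
weight = rng.uniform(45, 90, 197)                           # kg, derivation-group size
tlv = 12.3 * weight + 180 + rng.normal(0, 90, weight.size)  # mL, synthetic TLV

slope, intercept = np.polyfit(weight, tlv, 1)

def eslv(body_weight_kg):
    """ESLV from the fitted (synthetic) linear formula."""
    return slope * body_weight_kg + intercept

print(f"ESLV(70 kg) ~ {eslv(70):.0f} mL")
```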

  11. Correlation study of spleen stiffness measured by FibroTouch with esophageal and gastric varices in patients with liver cirrhosis

    Directory of Open Access Journals (Sweden)

    WEI Yutong

    2015-03-01

    Full Text Available Objective: To explore the correlation of spleen stiffness measured by FibroScan with esophageal and gastric varices in patients with liver cirrhosis. Methods: Spleen and liver stiffness was measured by FibroScan in 72 patients with liver cirrhosis who underwent gastroscopy in our hospital from December 2012 to December 2013. Categorical data were analyzed by the χ2 test, and continuous data were analyzed by the t-test. Pearson's correlation analysis was used to investigate the correlation between the degree of esophageal varices and spleen stiffness. Results: With the increase in the Child-Pugh score, the measurements of liver and spleen stiffness showed a rising trend. Correlation was found between the measurements of spleen and liver stiffness (r=0.367, P<0.05). The differences in measurements of spleen stiffness between patients with Child-Pugh classes A, B, and C were all significant (t=5.149, 7.231, and 6.119, respectively; P=0.031, 0.025, and 0.037, respectively). The measurements of spleen and liver stiffness showed marked increases in patients with moderate and severe esophageal and gastric varices. Receiver operating characteristic (ROC) curve analysis showed that the area under the ROC curve, sensitivity, and specificity for spleen stiffness were significantly higher than those for liver stiffness and the platelet count/spleen thickness ratio. Conclusion: Spleen stiffness measured by FibroScan shows a good correlation with esophageal and gastric varices in patients with liver cirrhosis. FibroScan is safe and noninvasive, and especially useful for patients who are not suitable for gastroscopy.
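    A minimal sketch of the ROC step follows, assuming synthetic stiffness readings and Youden's J statistic for cutoff selection; the abstract reports the ROC analysis but not how its cutoff was chosen, so that choice is an assumption here.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, auc

    rng = np.random.default_rng(0)

    # Synthetic spleen-stiffness readings (kPa): assumed higher in patients
    # with moderate/severe varices (label 1) than in those without (label 0).
    stiffness = np.concatenate([rng.normal(30, 8, 40), rng.normal(50, 10, 32)])
    has_varices = np.concatenate([np.zeros(40), np.ones(32)])

    fpr, tpr, thresholds = roc_curve(has_varices, stiffness)
    print(f"AUC = {auc(fpr, tpr):.3f}")

    # Youden's J picks the threshold maximizing sensitivity + specificity - 1.
    j = np.argmax(tpr - fpr)
    print(f"Suggested cutoff: {thresholds[j]:.1f} kPa "
          f"(sensitivity {tpr[j]:.2f}, specificity {1 - fpr[j]:.2f})")
    ```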

  12. The independence of software metrics taken at different life-cycle stages

    Science.gov (United States)

    Kafura, D.; Canning, J.; Reddy, G.

    1984-01-01

    Over the past few years a large number of software metrics have been proposed and, in varying degrees, a number of these metrics have been subjected to empirical validation which demonstrated the utility of the metrics in the software development process. This study attempts to classify these metrics and to determine whether the metrics in the different classes measure distinct attributes of the software product. Statistical analysis is used to determine the degree of relationship among the metrics.

  13. Metrical results on systems of small linear forms

    DEFF Research Database (Denmark)

    Hussain, M.; Kristensen, Simon

    In this paper the metric theory of Diophantine approximation associated with systems of small linear forms is investigated. Khintchine-Groshev theorems are established along with a Hausdorff measure generalization without the monotonicity assumption on the approximating function.

  14. Metrication: An economic wake-up call for US industry

    Science.gov (United States)

    Carver, G. P.

    1993-03-01

    As the international standard of measurement, the metric system is one key to success in the global marketplace. International standards have become an important factor in international economic competition. Non-metric products are becoming increasingly unacceptable in world markets that favor metric products. Procurement is the primary federal tool for encouraging and helping U.S. industry to convert voluntarily to the metric system. Besides the perceived unwillingness of the customer, certain regulatory language, and certain legal definitions in some states, there are no major impediments to conversion of the remaining non-metric industries to metric usage. Instead, there are good reasons for changing, including an opportunity to rethink many industry standards and to take advantage of size standardization. Also, when the remaining industries adopt the metric system, they will come into conformance with federal agencies engaged in similar activities.

  15. Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact

    Science.gov (United States)

    2011-01-01

    Background Citations in peer-reviewed articles and the impact factor are generally accepted measures of scientific impact. Web 2.0 tools such as Twitter, blogs or social bookmarking tools provide the possibility to construct innovative article-level or journal-level metrics to gauge impact and influence. However, the relationship of these new metrics to traditional metrics such as citations is not known. Objective (1) To explore the feasibility of measuring social impact of and public attention to scholarly articles by analyzing buzz in social media, (2) to explore the dynamics, content, and timing of tweets relative to the publication of a scholarly article, and (3) to explore whether these metrics are sensitive and specific enough to predict highly cited articles. Methods Between July 2008 and November 2011, all tweets containing links to articles in the Journal of Medical Internet Research (JMIR) were mined. For a subset of 1573 tweets about 55 articles published between issues 3/2009 and 2/2010, different metrics of social media impact were calculated and compared against subsequent citation data from Scopus and Google Scholar 17 to 29 months later. A heuristic to predict the top-cited articles in each issue through tweet metrics was validated. Results A total of 4208 tweets cited 286 distinct JMIR articles. The distribution of tweets over the first 30 days after article publication followed a power law (Zipf, Bradford, or Pareto distribution), with most tweets sent on the day when an article was published (1458/3318, 43.94% of all tweets in a 60-day period) or on the following day (528/3318, 15.9%), followed by a rapid decay. The Pearson correlations between tweetations and citations were moderate and statistically significant, with correlation coefficients ranging from .42 to .72 for the log-transformed Google Scholar citations, but were less clear for Scopus citations and rank correlations. A linear multivariate model with time and tweets as significant
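    The core correlation computation is simple to reproduce in outline; the counts below are invented for illustration and are not JMIR data.

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical per-article counts: tweets shortly after publication
    # and Google Scholar citations 17 to 29 months later.
    tweets = np.array([2, 5, 11, 3, 40, 8, 1, 22, 15, 6])
    citations = np.array([3, 7, 20, 4, 95, 12, 1, 38, 30, 9])

    # The paper correlates tweet metrics with log-transformed citations;
    # log1p guards against zero-citation articles.
    r, p = pearsonr(tweets, np.log1p(citations))
    print(f"Pearson r = {r:.2f} (p = {p:.3f})")
    ```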

  16. Clinical evaluation of preoperative measurement of liver volume by CT volumetry

    International Nuclear Information System (INIS)

    Takahashi, Masahiro; Sasaki, Ryoko; Kato, Kenichi

    2003-01-01

    The utility of measuring liver volume by CT volumetry prior to hepatectomy for treatment of hepatobiliary diseases was assessed by investigating the relationship between liver volume and perioperative hepatic function, and some perioperative factors. Both residual liver volume (RLV) and functional residual liver volume rate (%FRLV) had a significant negative correlation with maximum postoperative total bilirubin (T. Bil) (r=-0.318, r=-0.477, respectively). Further, RLV and %FRLV exhibited a negative correlation with length of intensive care unit (ICU) stay (r=-0.297, r=-0.397, respectively). The ratio of patients with maximum postoperative T. Bil≥10 mg/dl among patients with RLV<500 ml was significantly higher than that among patients with RLV≥500 ml (p<0.05). Similarly, the ratio of patients with maximum postoperative T. Bil≥10 mg/dl among patients with %FRLV<40% was significantly higher than that among patients with %FRLV≥40% (p<0.05). Among patients with %FRLV<40%, maximum T. Bil for patients who underwent portal vein embolization (PVE) was significantly lower than that for patients who did not undergo PVE (p<0.05). When performing hepatectomy, the risk of severe postoperative liver failure is low as long as %FRLV and RLV are above 40% and 500 ml, respectively, and PVE is useful for performing extended hepatectomy when %FRLV is <40%. (author)
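    The study's two safety margins reduce to a simple decision rule. The sketch below is purely an illustration of the reported thresholds, not a clinical tool.

    ```python
    def hepatectomy_risk_flag(rlv_ml: float, frlv_pct: float) -> str:
        """Flag a resection plan against the study's empirical margins:
        RLV >= 500 ml and %FRLV >= 40% were associated with low risk of
        severe postoperative liver failure."""
        if rlv_ml >= 500 and frlv_pct >= 40:
            return "low risk"
        if frlv_pct < 40:
            return "high risk: consider portal vein embolization (PVE)"
        return "high risk: residual liver volume below 500 ml"

    print(hepatectomy_risk_flag(620, 45))  # low risk
    print(hepatectomy_risk_flag(480, 35))  # high risk: consider PVE
    ```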

  17. Landscape pattern metrics and regional assessment

    Science.gov (United States)

    O'Neill, R. V.; Riitters, K.H.; Wickham, J.D.; Jones, K.B.

    1999-01-01

    The combination of remote imagery data, geographic information systems software, and landscape ecology theory provides a unique basis for monitoring and assessing large-scale ecological systems. The unique feature of the work has been the need to develop and interpret quantitative measures of spatial pattern: the landscape indices. This article reviews what is known about the statistical properties of these pattern metrics and suggests some additional metrics based on island biogeography, percolation theory, hierarchy theory, and economic geography. Assessment applications of this approach have required interpreting the pattern metrics in terms of specific environmental endpoints, such as wildlife and water quality, and research into how to represent synergistic effects of many overlapping sources of stress.

  18. State of the art metrics for aspect oriented programming

    Science.gov (United States)

    Ghareb, Mazen Ismaeel; Allen, Gary

    2018-04-01

    The quality evaluation of software, e.g., defect measurement, gains significance with the growing use of software applications. Metric measurements are considered the primary indicator for defect prediction and software maintenance in various empirical studies of software products. However, there is no agreement on which metrics are compelling quality indicators for novel development approaches such as Aspect Oriented Programming (AOP). AOP intends to enhance programming quality by providing new and novel constructs for the development of systems, for example, pointcuts, advice, and inter-type relationships. Hence, it is not evident whether quality indicators for AOP can be derived from direct extensions of traditional OO measurements. On the other hand, investigations of AOP do regularly depend on established coupling measurements. Notwithstanding the late adoption of AOP in empirical studies, coupling measurements have been adopted as useful markers of fault proneness in this context. In this paper we will investigate the state of the art metrics for measurement of Aspect Oriented systems development.

  19. Adapting the Surgical Apgar Score for Perioperative Outcome Prediction in Liver Transplantation: A Retrospective Study

    Directory of Open Access Journals (Sweden)

    Amy C. S. Pearson, MD

    2017-11-01

    Conclusions. The SAS-LT utilized simple intraoperative metrics to predict early morbidity and mortality after liver transplant with similar accuracy to other scoring systems at an earlier postoperative time point.

  20. A family of metric gravities

    Science.gov (United States)

    Shuler, Robert

    2018-04-01

    The goal of this paper is to take a completely fresh approach to metric gravity, in which the metric principle is strictly adhered to but its properties in local space-time are derived from conservation principles, not inferred from a global field equation. The global field strength variation then gains some flexibility, but only in the regime of very strong fields (2nd-order terms) whose measurement is now being contemplated. So doing provides a family of similar gravities, differing only in strong fields, which could be developed into meaningful verification targets for strong fields after the manner in which far-field variations were used in the 20th century. General Relativity (GR) is shown to be a member of the family and this is demonstrated by deriving the Schwarzschild metric exactly from a suitable field strength assumption. The method of doing so is interesting in itself because it involves only one differential equation rather than the usual four. Exact static symmetric field solutions are also given for one pedagogical alternative based on potential, and one theoretical alternative based on inertia, and the prospects of experimentally differentiating these are analyzed. Whether the method overturns the conventional wisdom that GR is the only metric theory of gravity and that alternatives must introduce additional interactions and fields is somewhat semantical, depending on whether one views the field strength assumption as a field and whether the assumption that produces GR is considered unique in some way. It is of course possible to have other fields, and the local space-time principle can be applied to field gravities which usually are weak-field approximations having only time dilation, giving them the spatial factor and promoting them to full metric theories. Though usually pedagogical, some of them are interesting from a quantum gravity perspective. Cases are noted where mass measurement errors, or distributions of dark matter, can cause one
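    For reference, the Schwarzschild line element that the paper re-derives from a single field-strength assumption is, in standard coordinates with G = c = 1:

    ```latex
    ds^2 = -\left(1 - \frac{2M}{r}\right) dt^2
           + \left(1 - \frac{2M}{r}\right)^{-1} dr^2
           + r^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right)
    ```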

  1. Metric modular spaces

    CERN Document Server

    Chistyakov, Vyacheslav

    2015-01-01

    Aimed toward researchers and graduate students familiar with elements of functional analysis, linear algebra, and general topology; this book contains a general study of modulars, modular spaces, and metric modular spaces. Modulars may be thought of as generalized velocity fields and serve two important purposes: generate metric spaces in a unified manner and provide a weaker convergence, the modular convergence, whose topology is non-metrizable in general. Metric modular spaces are extensions of metric spaces, metric linear spaces, and classical modular linear spaces. The topics covered include the classification of modulars, metrizability of modular spaces, modular transforms and duality between modular spaces, metric  and modular topologies. Applications illustrated in this book include: the description of superposition operators acting in modular spaces, the existence of regular selections of set-valued mappings, new interpretations of spaces of Lipschitzian and absolutely continuous mappings, the existe...

  2. Developing a Security Metrics Scorecard for Healthcare Organizations.

    Science.gov (United States)

    Elrefaey, Heba; Borycki, Elizabeth; Kushniruk, Andrea

    2015-01-01

    In healthcare, information security is a key aspect of protecting a patient's privacy and ensuring systems availability to support patient care. Security managers need to measure the performance of security systems and this can be achieved by using evidence-based metrics. In this paper, we describe the development of an evidence-based security metrics scorecard specific to healthcare organizations. Study participants were asked to comment on the usability and usefulness of a prototype of a security metrics scorecard that was developed based on current research in the area of general security metrics. Study findings revealed that scorecards need to be customized for the healthcare setting in order for the security information to be useful and usable in healthcare organizations. The study findings resulted in the development of a security metrics scorecard that matches the healthcare security experts' information requirements.

  3. Next-Generation Metrics: Responsible Metrics & Evaluation for Open Science

    Energy Technology Data Exchange (ETDEWEB)

    Wilsdon, J.; Bar-Ilan, J.; Peters, I.; Wouters, P.

    2016-07-01

    Metrics evoke a mixed reaction from the research community. A commitment to using data to inform decisions makes some enthusiastic about the prospect of granular, real-time analysis of research and its wider impacts. Yet we only have to look at the blunt use of metrics such as journal impact factors, h-indices and grant income targets to be reminded of the pitfalls. Some of the most precious qualities of academic culture resist simple quantification, and individual indicators often struggle to do justice to the richness and plurality of research. Too often, poorly designed evaluation criteria are “dominating minds, distorting behaviour and determining careers” (Lawrence, 2007). Metrics hold real power: they are constitutive of values, identities and livelihoods. How to exercise that power to more positive ends has been the focus of several recent and complementary initiatives, including the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto and The Metric Tide (a UK government review of the role of metrics in research management and assessment). Building on these initiatives, the European Commission, under its new Open Science Policy Platform, is now looking to develop a framework for responsible metrics for research management and evaluation, which can be incorporated into the successor framework to Horizon 2020. (Author)

  4. IDENTIFYING MARKETING EFFECTIVENESS METRICS (Case study: East Azerbaijan`s industrial units)

    OpenAIRE

    Faridyahyaie, Reza; Faryabi, Mohammad; Bodaghi Khajeh Noubar, Hossein

    2012-01-01

    The paper attempts to identify marketing effectiveness metrics in industrial units. The metrics investigated in this study are completely applicable and comprehensive, and consequently they can evaluate marketing effectiveness in various industries. The metrics studied include: Market Share, Profitability, Sales Growth, Customer Numbers, Customer Satisfaction and Customer Loyalty. The findings indicate that these six metrics are impressive when measuring marketing effectiveness. Data was ge...

  5. Complexity Metrics for Workflow Nets

    DEFF Research Database (Denmark)

    Lassen, Kristian Bisgaard; van der Aalst, Wil M.P.

    2009-01-01

    analysts have difficulties grasping the dynamics implied by a process model. Recent empirical studies show that people make numerous errors when modeling complex business processes, e.g., about 20 percent of the EPCs in the SAP reference model have design flaws resulting in potential deadlocks, livelocks, etc. It seems obvious that the complexity of the model contributes to design errors and a lack of understanding. It is not easy to measure complexity, however. This paper presents three complexity metrics that have been implemented in the process analysis tool ProM. The metrics are defined for a subclass of Petri nets named Workflow nets, but the results can easily be applied to other languages. To demonstrate the applicability of these metrics, we have applied our approach and tool to 262 relatively complex Protos models made in the context of various student projects. This allows us to validate

  6. Measurement of the Ecological Integrity of Cerrado Streams Using Biological Metrics and the Index of Habitat Integrity

    Directory of Open Access Journals (Sweden)

    Deusiano Florêncio dos Reis

    2017-01-01

    Full Text Available Generally, aquatic communities reflect the effects of anthropogenic changes such as deforestation or organic pollution. The Cerrado stands among the ecosystems most threatened by human activities in Brazil. In order to evaluate the ecological integrity of the streams in a preserved watershed in the Northern Cerrado biome, corresponding to a mosaic of ecosystems in transition to the Amazonia biome in Brazil, biological metrics related to diversity, structure, and sensitivity of aquatic macroinvertebrates were calculated. Sampling included collections along stretches of 200 m of nine streams and measurements of abiotic variables (temperature, electrical conductivity, pH, total dissolved solids, dissolved oxygen, and discharge) and the Index of Habitat Integrity (HII). The values of the abiotic variables and the HII indicated that most of the streams have good ecological integrity, due to high oxygen levels and low concentrations of dissolved solids and low electrical conductivity. Two streams showed altered HII scores, mainly related to small dams for recreational and domestic use, use of Cerrado natural pasture for cattle raising, and spot deforestation in bathing areas. However, this finding is not reflected in the biological metrics that were used. Considering all nine streams, only two showed satisfactory ecological quality (measured by the Biological Monitoring Working Party (BMWP) score, total richness, and EPT (Ephemeroptera, Plecoptera, and Trichoptera) richness), only one of which had a low HII score. These results indicate that punctual measures of abiotic parameters do not reveal the long-term impacts of anthropic activities in these streams, including the related fire management of pasture that annually alters the vegetation matrix and may act as a disturbance for the macroinvertebrate communities. Because of this, biomonitoring of low-order streams in Cerrado ecosystems of Northern Central Brazil by different biotic metrics and also physical attributes of the
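    Of the biological metrics named above, total richness and EPT richness are simple counts, sketched below on an invented taxa list; the BMWP score additionally requires a region-specific tolerance table, which is omitted here.

    ```python
    # Orders conventionally counted toward EPT richness.
    EPT_ORDERS = {"Ephemeroptera", "Plecoptera", "Trichoptera"}

    def richness_metrics(sample: dict[str, str]) -> dict[str, int]:
        """sample maps taxon name -> order; returns total and EPT richness."""
        ept = sum(1 for order in sample.values() if order in EPT_ORDERS)
        return {"total_richness": len(sample), "ept_richness": ept}

    # Hypothetical macroinvertebrate sample from one 200 m stream stretch.
    stream_sample = {
        "Baetis": "Ephemeroptera",
        "Perla": "Plecoptera",
        "Hydropsyche": "Trichoptera",
        "Chironomus": "Diptera",
        "Elmis": "Coleoptera",
    }
    print(richness_metrics(stream_sample))
    # {'total_richness': 5, 'ept_richness': 3}
    ```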

  7. Measuring Success: Metrics that Link Supply Chain Management to Aircraft Readiness

    National Research Council Canada - National Science Library

    Balestreri, William

    2002-01-01

    This thesis evaluates and analyzes current strategic management planning methods that develop performance metrics linking supply chain management to aircraft readiness. Our primary focus is the Marine...

  8. Metrics for Probabilistic Geometries

    DEFF Research Database (Denmark)

    Tosi, Alessandra; Hauberg, Søren; Vellido, Alfredo

    2014-01-01

    the distribution over mappings is given by a Gaussian process. We treat the corresponding latent variable model as a Riemannian manifold and we use the expectation of the metric under the Gaussian process prior to define interpolating paths and measure distance between latent points. We show how distances...

  9. Comparison of routing metrics for wireless mesh networks

    CSIR Research Space (South Africa)

    Nxumalo, SL

    2011-09-01

    Full Text Available in each and every relay node so as to find the next hop for the packet. A routing metric is simply a measure used for selecting the best path, used by a routing protocol. Figure 2 shows the relationship between a routing protocol and the routing... on its QoS-awareness level. The routing metrics that considered QoS the most were selected from each group. This section discusses the four routing metrics that were compared in this paper, which are: hop count (HOP), expected transmission count (ETX...
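    As a concrete example of one metric in the comparison, the expected transmission count of a link is conventionally computed from the forward and reverse probe delivery ratios (ETX = 1/(df * dr), De Couto et al.); the sketch below contrasts it with plain hop count.

    ```python
    def link_etx(df: float, dr: float) -> float:
        """Expected transmission count of one link: ETX = 1 / (df * dr),
        where df and dr are measured forward and reverse probe delivery
        ratios in [0, 1]."""
        return 1.0 / (df * dr)

    def path_etx(links: list[tuple[float, float]]) -> float:
        """The path metric is the sum of per-link ETX values; routing
        prefers the smallest total, unlike plain hop count."""
        return sum(link_etx(df, dr) for df, dr in links)

    # A two-hop path over good links beats a one-hop path over a lossy
    # link, even though hop count would prefer the latter.
    print(f"{path_etx([(0.9, 0.9), (0.95, 0.9)]):.2f}")  # ~2.40
    print(f"{path_etx([(0.5, 0.6)]):.2f}")               # ~3.33
    ```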

  10. Metric inhomogeneous Diophantine approximation in positive characteristic

    DEFF Research Database (Denmark)

    Kristensen, Simon

    2011-01-01

    We obtain asymptotic formulae for the number of solutions to systems of inhomogeneous linear Diophantine inequalities over the field of formal Laurent series with coefficients from a finite field, which are valid for almost every such system. Here `almost every' is with respect to Haar measure of the coefficients of the homogeneous part when the number of variables is at least two (singly metric case), and with respect to the Haar measure of all coefficients for any number of variables (doubly metric case). As consequences, we derive zero-one laws in the spirit of the Khintchine-Groshev Theorem and zero

  11. Metric inhomogeneous Diophantine approximation in positive characteristic

    DEFF Research Database (Denmark)

    Kristensen, S.

    We obtain asymptotic formulae for the number of solutions to systems of inhomogeneous linear Diophantine inequalities over the field of formal Laurent series with coefficients from a finite field, which are valid for almost every such system. Here 'almost every' is with respect to Haar measure of the coefficients of the homogeneous part when the number of variables is at least two (singly metric case), and with respect to the Haar measure of all coefficients for any number of variables (doubly metric case). As consequences, we derive zero-one laws in the spirit of the Khintchine--Groshev Theorem and zero

  12. Model and methods to assess hepatic function from indocyanine green fluorescence dynamical measurements of liver tissue.

    Science.gov (United States)

    Audebert, Chloe; Vignon-Clementel, Irene E

    2018-03-30

    The indocyanine green (ICG) clearance, presented as the plasma disappearance rate, is presently a reliable method to estimate hepatic "function". However, this technique is not instantaneously available and thus cannot be used intra-operatively (during liver surgery). Near-infrared spectroscopy enables assessment of hepatic ICG concentration over time in liver tissue. This article proposes to extract more information from the liver intensity dynamics by interpreting it through a dedicated pharmacokinetics model. In order to account for the different exchanges between the liver tissues, the proposed liver model includes three compartments (sinusoids, hepatocytes, and bile canaliculi). The dependency of the model output on its parameters is studied with sensitivity analysis and by solving an inverse problem on synthetic data. The estimation of model parameters is then performed with in-vivo measurements in rabbits (El-Desoky et al. 1999). Parameters for different liver states are estimated, and their link with liver function is investigated. A non-linear (Michaelis-Menten type) excretion rate from the hepatocytes to the bile canaliculi was necessary to reproduce the measurements for different liver conditions. In the case of bile duct ligation, the model suggests that this rate is reduced and that the ICG is stored in the hepatocytes. Moreover, the level of ICG remains high in the blood following the ligation of the bile duct. The percentage of retention of indocyanine green in blood, which is a common test for hepatic function estimation, is also investigated with the model. The impact of bile duct ligation and of reduced liver inflow on the percentage of ICG retention in blood is studied. The estimation of the pharmacokinetics model parameters may lead to an evaluation of different liver functions. Copyright © 2018 Elsevier B.V. All rights reserved.
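    A minimal sketch of a three-compartment model with saturable excretion is given below. The compartment structure (sinusoids, hepatocytes, bile canaliculi) and the Michaelis-Menten excretion term follow the abstract, but the equations and every rate constant are illustrative assumptions, not the authors' fitted model.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical rate constants (1/min) and Michaelis-Menten parameters.
    k_out, k_up, k_back = 0.10, 0.40, 0.05  # washout, uptake, backflux
    v_max, k_m = 0.60, 2.00                 # saturable hepatocyte -> bile

    def icg_rhs(t, y):
        s, h, b = y                          # sinusoids, hepatocytes, bile
        excretion = v_max * h / (k_m + h)    # Michaelis-Menten excretion
        ds = -k_out * s - k_up * s + k_back * h
        dh = k_up * s - k_back * h - excretion
        db = excretion
        return [ds, dh, db]

    # An ICG bolus arrives in the sinusoidal compartment at t = 0.
    sol = solve_ivp(icg_rhs, (0, 60), [10.0, 0.0, 0.0], dense_output=True)
    for ti in np.linspace(0, 60, 7):
        s, h, b = sol.sol(ti)
        print(f"t={ti:4.0f} min  sinusoids={s:5.2f}  "
              f"hepatocytes={h:5.2f}  bile={b:5.2f}")
    ```

    Reducing v_max in such a model mimics the impaired hepatocyte-to-bile excretion that the paper associates with bile duct ligation: ICG then accumulates in the hepatocyte compartment.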

  13. Evaluating hydrological model performance using information theory-based metrics

    Science.gov (United States)

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can serve as a complementary tool for hydrologic m...

  14. Point shear wave speed measurement in differentiating benign and malignant focal liver lesions.

    Science.gov (United States)

    Dong, Yi; Wang, Wen-Ping; Xu, Yadan; Cao, Jiaying; Mao, Feng; Dietrich, Christoph F

    2017-06-26

    To investigate the value of ElastPQ measurement for the differential diagnosis of benign and malignant focal liver lesions (FLLs), using histologic results as the reference standard. A total of 154 patients were included. ElastPQ measurement was performed for each lesion, and the shear wave speed (SWS) was recorded. The difference in SWS and the SWS ratio of FLL to surrounding liver were evaluated, and the cutoff value was investigated. A receiver operating characteristic (ROC) curve was plotted to evaluate the diagnostic performance. Histology as a gold standard was obtained by surgery in all patients. A total of 154 lesions, including 129 (83.7%) malignant FLLs and 25 (16.3%) benign ones, were analysed. The SWS of malignant and benign FLLs was significantly different, 2.77±0.68 m/s and 1.57±0.55 m/s (p<0.05). The SWS ratio of each FLL to surrounding liver parenchyma was 2.23±0.49 for malignant and 1.14±0.36 for benign FLLs (p<0.05). The cutoff value for differential diagnosis was 2.06 m/s for SWS and 1.67 for the SWS ratio. ElastPQ measurement provides reliable quantitative stiffness information on FLLs and may be helpful in the differential diagnosis between malignant and benign FLLs.
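    Applied as a decision rule, the reported cutoffs reduce to a few lines. This is an illustration of the thresholds only, not a diagnostic tool.

    ```python
    def classify_fll(lesion_sws: float, liver_sws: float) -> str:
        """Apply the study's cutoffs: lesion SWS > 2.06 m/s or a
        lesion-to-liver SWS ratio > 1.67 suggests malignancy."""
        if lesion_sws > 2.06 or lesion_sws / liver_sws > 1.67:
            return "suspicious for malignancy"
        return "favours benign"

    print(classify_fll(2.77, 1.30))  # suspicious for malignancy
    print(classify_fll(1.57, 1.40))  # favours benign
    ```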

  15. Baby universe metric equivalent to an interior black-hole metric

    International Nuclear Information System (INIS)

    Gonzalez-Diaz, P.F.

    1991-01-01

    It is shown that the maximally extended metric corresponding to a large wormhole is the unique possible wormhole metric whose baby universe sector is conformally equivalent to the maximal inextendible Kruskal metric corresponding to the interior region of a Schwarzschild black hole whose gravitational radius is half the wormhole neck radius. The physical implications of this result in the black hole evaporation process are discussed. (orig.)

  16. Cophenetic metrics for phylogenetic trees, after Sokal and Rohlf.

    Science.gov (United States)

    Cardona, Gabriel; Mir, Arnau; Rosselló, Francesc; Rotger, Lucía; Sánchez, David

    2013-01-16

    Phylogenetic tree comparison metrics are an important tool in the study of evolution, and hence the definition of such metrics is an interesting problem in phylogenetics. In a paper in Taxon fifty years ago, Sokal and Rohlf proposed to measure quantitatively the difference between a pair of phylogenetic trees by first encoding them by means of their half-matrices of cophenetic values, and then comparing these matrices. This idea has been used several times since then to define dissimilarity measures between phylogenetic trees but, to our knowledge, no proper metric on weighted phylogenetic trees with nested taxa based on this idea has been formally defined and studied yet. Actually, the cophenetic values of pairs of different taxa alone are not enough to single out phylogenetic trees with weighted arcs or nested taxa. For every (rooted) phylogenetic tree T, let its cophenetic vector φ(T) consist of all pairs of cophenetic values between pairs of taxa in T and all depths of taxa in T. It turns out that these cophenetic vectors single out weighted phylogenetic trees with nested taxa. We then define a family of cophenetic metrics d_{φ,p} by comparing these cophenetic vectors by means of Lp norms, and we study, either analytically or numerically, some of their basic properties: neighbors, diameter, distribution, and their rank correlation with each other and with other metrics. The cophenetic metrics can be safely used on weighted phylogenetic trees with nested taxa and no restriction on degrees, and they can be computed in O(n^2) time, where n stands for the number of taxa. The metrics d_{φ,1} and d_{φ,2} have positively skewed distributions, and they show a low rank correlation with the Robinson-Foulds metric and the nodal metrics, and a very high correlation with each other and with the splitted nodal metrics. The diameter of d_{φ,p}, for p ≥ 1, is in O(n^{(p+2)/p}), and thus for low p they are more discriminative, having a wider range of values.
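    A small sketch of the construction follows, assuming a toy rooted tree encoded as a parent map with edge weights: cophenetic values are depths of lowest common ancestors, and d_{φ,p} is an Lp norm over the resulting vectors.

    ```python
    from itertools import combinations

    def depth(parent, weight, v):
        """Depth of a node: sum of edge weights on the path to the root."""
        return 0.0 if parent[v] is None else weight[v] + depth(parent, weight, parent[v])

    def lca(parent, u, v):
        """Lowest common ancestor via ancestor sets (fine for small trees)."""
        ancestors = set()
        while u is not None:
            ancestors.add(u)
            u = parent[u]
        while v not in ancestors:
            v = parent[v]
        return v

    def cophenetic_vector(parent, weight, taxa):
        """phi(T): cophenetic values of all taxon pairs, then taxon depths."""
        taxa = sorted(taxa)
        pairs = [depth(parent, weight, lca(parent, a, b))
                 for a, b in combinations(taxa, 2)]
        return pairs + [depth(parent, weight, t) for t in taxa]

    def d_phi_p(v1, v2, p=2):
        """Cophenetic metric: Lp norm of the difference of phi-vectors."""
        return sum(abs(x - y) ** p for x, y in zip(v1, v2)) ** (1 / p)

    # Toy tree: root -> x -> {a, b}, root -> c, unit edge weights.
    parent = {"root": None, "x": "root", "a": "x", "b": "x", "c": "root"}
    weight = {"x": 1.0, "a": 1.0, "b": 1.0, "c": 1.0}
    print(cophenetic_vector(parent, weight, ["a", "b", "c"]))
    # [1.0, 0.0, 0.0, 2.0, 2.0, 1.0]
    ```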

  17. Experiences with Software Quality Metrics in the EMI Middleware

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc, a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project t...

  18. Properties of C-metric spaces

    Science.gov (United States)

    Croitoru, Anca; Apreutesei, Gabriela; Mastorakis, Nikos E.

    2017-09-01

    The subject of this paper belongs to the theory of approximate metrics [23]. An approximate metric on X is a real application defined on X × X that satisfies only a part of the metric axioms. In a recent paper [23], we introduced a new type of approximate metric, named C-metric, that is an application which satisfies only two metric axioms: symmetry and triangular inequality. The remarkable fact in a C-metric space is that a topological structure induced by the C-metric can be defined. The innovative idea of this paper is that we obtain some convergence properties of a C-metric space in the absence of a metric. In this paper we investigate C-metric spaces. The paper is divided into four sections. Section 1 is for Introduction. In Section 2 we recall some concepts and preliminary results. In Section 3 we present some properties of C-metric spaces, such as convergence properties, a canonical decomposition and a C-fixed point theorem. Finally, in Section 4 some conclusions are highlighted.

  19. Learning Low-Dimensional Metrics

    OpenAIRE

    Jain, Lalit; Mason, Blake; Nowak, Robert

    2017-01-01

    This paper investigates the theoretical foundations of metric learning, focused on three key questions that are not fully addressed in prior work: 1) we consider learning general low-dimensional (low-rank) metrics as well as sparse metrics; 2) we develop upper and lower (minimax) bounds on the generalization error; 3) we quantify the sample complexity of metric learning in terms of the dimension of the feature space and the dimension/rank of the underlying metric; 4) we also bound the accuracy ...

  20. Metrics for Analyzing Quantifiable Differentiation of Designs with Varying Integrity for Hardware Assurance

    Science.gov (United States)

    2017-03-01

    Keywords: Trojan; integrity; trust; quantify; hardware; assurance; verification; metrics; reference; quality; profile. ... as a framework for benchmarking Trusted Part certifications. Previous work conducted in Trust Metric development has focused on measures at the ... the lowest integrities. Based on the analysis, the DI metric shows measurable differentiation between all five Test Articles.

  1. A software quality model and metrics for risk assessment

    Science.gov (United States)

    Hyatt, L.; Rosenberg, L.

    1996-01-01

    A software quality model and its associated attributes are defined and used as the basis for a discussion of risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.

  2. Enterprise Sustainment Metrics

    Science.gov (United States)

    2015-06-19

    are negatively impacting KPIs” (Parmenter, 2010: 31). In the current state, the Air Force’s AA and PBL metrics are once again split. AA does ... must have the authority to “take immediate action to rectify situations that are negatively impacting KPIs” (Parmenter, 2010: 31). 3. Measuring ... highest profitability and shareholder value for each company” (2014: 273). By systematically diagramming a process, either through a swim lane flowchart

  3. Whole-organ and segmental stiffness measured with liver magnetic resonance elastography in healthy adults: significance of the region of interest.

    Science.gov (United States)

    Rusak, Grażyna; Zawada, Elżbieta; Lemanowicz, Adam; Serafin, Zbigniew

    2015-04-01

    MR elastography (MRE) is a recent non-invasive technique that provides in vivo data on the viscoelasticity of the liver. Since the method is not well established, several different protocols have been proposed that differ in their results. The aim of the study was to analyze the variability of stiffness measurements in different regions of the liver. Twenty healthy adults aged 24-45 years were recruited. The examination was performed using a mechanical excitation of 64 Hz. MRE images were fused with axial T2WI breath-hold images (thickness 10 mm, spacing 10 mm). Stiffness was measured as a mean value of each cross section of the whole liver, on a single largest cross section, in the right lobe, and in ROIs (50 pix.) placed in the center of the left lobe, segments 5/6, 7, 8, and the parahilar region. Whole-liver stiffness ranged from 1.56 to 2.75 kPa. Mean segmental stiffness differed significantly between the tested regions (range from 1.55 ± 0.28 to 2.37 ± 0.32 kPa; P < 0.0001, ANOVA). Within-method variability of measurements ranged from 14% (whole liver and segment 8) to 26% (segment 7). Within-subject variability ranged from 13 to 31%. Results of measurement within segment 8 were closest to the whole-liver method (ICC, 0.84). Stiffness of the liver presented significant variability depending on the region of measurement. The most reproducible method is averaging of cross sections of the whole liver. There was significant variability between stiffness in subjects considered healthy, which requires further investigation.

  4. Scalar-metric and scalar-metric-torsion gravitational theories

    International Nuclear Information System (INIS)

    Aldersley, S.J.

    1977-01-01

    The techniques of dimensional analysis and of the theory of tensorial concomitants are employed to study field equations in gravitational theories which incorporate scalar fields of the Brans-Dicke type. Within the context of scalar-metric gravitational theories, a uniqueness theorem for the geometric (or gravitational) part of the field equations is proven and a Lagrangian is determined which is uniquely specified by dimensional analysis. Within the context of scalar-metric-torsion gravitational theories a uniqueness theorem for field Lagrangians is presented and the corresponding Euler-Lagrange equations are given. Finally, an example of a scalar-metric-torsion theory is presented which is similar in many respects to the Brans-Dicke theory and the Einstein-Cartan theory

  5. Metrics of quantum states

    International Nuclear Information System (INIS)

    Ma Zhihao; Chen Jingling

    2011-01-01

    In this work we study metrics of quantum states, which are natural generalizations of the usual trace metric and Bures metric. Some useful properties of the metrics are proved, such as joint convexity and contractivity under quantum operations. Our result has a potential application in studying the geometry of quantum states as well as in entanglement detection.
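    For context, the two classical metrics being generalized are the trace distance and the Bures distance, the latter defined through the fidelity F:

    ```latex
    D_{\mathrm{tr}}(\rho,\sigma) = \tfrac{1}{2} \operatorname{Tr}\, |\rho - \sigma|,
    \qquad
    D_{B}(\rho,\sigma) = \sqrt{2\bigl(1 - \sqrt{F(\rho,\sigma)}\bigr)},
    \qquad
    F(\rho,\sigma) = \Bigl( \operatorname{Tr} \sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}} \Bigr)^{2}.
    ```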

  6. A New Study of Two Divergence Metrics for Change Detection in Data Streams

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali; Wang, Suojin; Carroll, Raymond; Zhang, Xiangliang

    2014-01-01

    Streaming data are dynamic in nature with frequent changes. To detect such changes, most methods measure the difference between the data distributions in a current time window and a reference window. Divergence metrics and density estimation are required to measure the difference between the data distributions. Our study shows that the Kullback-Leibler (KL) divergence, the most popular metric for comparing distributions, fails to detect certain changes due to its asymmetric property and its dependence on the variance of the data. We thus consider two metrics for detecting changes in univariate data streams: a symmetric KL-divergence and a divergence metric measuring the intersection area of two distributions. The experimental results show that these two metrics lead to more accurate results in change detection than baseline methods such as Change Finder and using conventional KL-divergence.

  7. A New Study of Two Divergence Metrics for Change Detection in Data Streams

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali

    2014-08-01

    Streaming data are dynamic in nature with frequent changes. To detect such changes, most methods measure the difference between the data distributions in a current time window and a reference window. Divergence metrics and density estimation are required to measure the difference between the data distributions. Our study shows that the Kullback-Leibler (KL) divergence, the most popular metric for comparing distributions, fails to detect certain changes due to its asymmetric property and its dependence on the variance of the data. We thus consider two metrics for detecting changes in univariate data streams: a symmetric KL-divergence and a divergence metric measuring the intersection area of two distributions. The experimental results show that these two metrics lead to more accurate results in change detection than baseline methods such as Change Finder and using conventional KL-divergence.
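    A compact sketch of the two compared metrics over binned windows follows; it is illustrative, not the authors' implementation.

    ```python
    import numpy as np

    def symmetric_kl(p, q, eps=1e-12):
        """Symmetrized KL divergence KL(p||q) + KL(q||p) for histograms."""
        p = np.array(p, dtype=float) + eps
        q = np.array(q, dtype=float) + eps
        p /= p.sum()
        q /= q.sum()
        return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

    def intersection_distance(p, q):
        """1 minus the intersection area of two normalized histograms:
        0 for identical distributions, 1 for disjoint support."""
        p = np.array(p, dtype=float) / np.sum(p)
        q = np.array(q, dtype=float) / np.sum(q)
        return float(1.0 - np.minimum(p, q).sum())

    # Reference window vs. current window, binned on a common grid.
    rng = np.random.default_rng(1)
    ref = np.histogram(rng.normal(0, 1, 5000), bins=30, range=(-6, 6))[0]
    cur = np.histogram(rng.normal(1, 1, 5000), bins=30, range=(-6, 6))[0]
    print(symmetric_kl(ref, cur), intersection_distance(ref, cur))
    ```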

  8. Assessment of the Log-Euclidean Metric Performance in Diffusion Tensor Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mostafa Charmi

    2010-06-01

    Full Text Available Introduction: Appropriate definition of the distance measure between diffusion tensors has a deep impact on Diffusion Tensor Image (DTI) segmentation results. The geodesic metric is the best distance measure since it yields high-quality segmentation results. However, the important problem with the geodesic metric is the high computational cost of the algorithms based on it. The main goal of this paper is to assess the possible substitution of the geodesic metric with the Log-Euclidean one to reduce the computational cost of a statistical surface evolution algorithm. Materials and Methods: We incorporated the Log-Euclidean metric in the statistical surface evolution algorithm framework. To achieve this goal, the statistics and gradients of diffusion tensor images were defined using the Log-Euclidean metric. Numerical implementation of the segmentation algorithm was performed in the MATLAB software using finite difference techniques. Results: In the statistical surface evolution framework, the Log-Euclidean metric was able to discriminate the torus and helix patterns in synthetic datasets and rat spinal cords in biological phantom datasets from the background better than the Euclidean and J-divergence metrics. In addition, similar results were obtained with the geodesic metric. However, the main advantage of the Log-Euclidean metric over the geodesic metric was the dramatic reduction of the computational cost of the segmentation algorithm, by a factor of at least 70. Discussion and Conclusion: The qualitative and quantitative results have shown that the Log-Euclidean metric is a good substitute for the geodesic metric when using a statistical surface evolution algorithm in DTI segmentation.
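    The metric itself is easy to state: the Log-Euclidean distance between two symmetric positive definite tensors is the Frobenius norm of the difference of their matrix logarithms. A minimal sketch in Python (the paper's implementation was in MATLAB):

    ```python
    import numpy as np
    from scipy.linalg import logm

    def log_euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
        """d_LE(A, B) = ||log(A) - log(B)||_F for SPD matrices A, B."""
        return float(np.linalg.norm(logm(a) - logm(b), ord="fro"))

    # Two synthetic 3x3 diffusion tensors (units of mm^2/s).
    t1 = np.diag([1.7e-3, 0.4e-3, 0.3e-3])  # anisotropic, fiber-like
    t2 = np.diag([0.8e-3, 0.8e-3, 0.8e-3])  # isotropic
    print(f"d_LE = {log_euclidean_distance(t1, t2):.3f}")
    ```

    Because the logarithms can be precomputed once per tensor, subsequent statistics reduce to Euclidean operations on log-tensors, which is the source of the large speedup over the geodesic metric.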

  9. Successes and Failures of Knowledge Management: An Investigation into Knowledge Management Metrics

    International Nuclear Information System (INIS)

    Liebowitz, J.

    2016-01-01

    Full text: In reviewing the literature and industry reports, a number of organizations have approached KM metrics from a balanced scorecard, intellectual capital (e.g., Skandia’s intellectual capital navigator), activity-based costing, or other approaches borrowed from the accounting and human resources disciplines. Liebowitz, in his edited book Making Cents Out of Knowledge Management (Scarecrow Press, 2008), shows case studies of organizations trying to measure knowledge management success. A few methodologies have examined ways to measure return on knowledge, such as Housel and Bell’s knowledge value-added (KVA) methodology (Housel and Bell, 2001). Liebowitz analyzed over 80 publications on knowledge management metrics, whereby KM metrics can be divided into system measures, output measures, and outcome measures. (author)

  10. Fractional type Marcinkiewicz integrals over non-homogeneous metric measure spaces

    Directory of Open Access Journals (Sweden)

    Guanghui Lu

    2016-10-01

    Full Text Available The main goal of the paper is to establish the boundedness of the fractional type Marcinkiewicz integral $\mathcal{M}_{\beta,\rho,q}$ on a non-homogeneous metric measure space satisfying the upper doubling and geometrically doubling conditions. Under the assumption that the kernel satisfies a certain Hörmander-type condition, the authors prove that $\mathcal{M}_{\beta,\rho,q}$ is bounded from the Lebesgue space $L^{1}(\mu)$ into the weak Lebesgue space $L^{1,\infty}(\mu)$, from the Lebesgue space $L^{\infty}(\mu)$ into the space $\operatorname{RBLO}(\mu)$, and from the atomic Hardy space $H^{1}(\mu)$ into the Lebesgue space $L^{1}(\mu)$. Moreover, the authors also obtain a corollary: $\mathcal{M}_{\beta,\rho,q}$ is bounded on $L^{p}(\mu)$ with $1 < p < \infty$.

  11. Metric and structural equivalence of core cognitive abilities measured with the Wechsler Adult Intelligence Scale-III in the United States and Australia.

    Science.gov (United States)

    Bowden, Stephen C; Lissner, Dianne; McCarthy, Kerri A L; Weiss, Lawrence G; Holdnack, James A

    2007-10-01

    Equivalence of the psychological model underlying Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) scores obtained in the United States and Australia was examined in this study. Examination of metric invariance involves testing the hypothesis that all components of the measurement model relating observed scores to latent variables are numerically equal in different samples. The assumption of metric invariance is necessary for interpretation of scores derived from research studies that seek to generalize patterns of convergent and divergent validity and patterns of deficit or disability. An Australian community volunteer sample was compared to the US standardization data. A pattern of strict metric invariance was observed across samples. In addition, when the effects of different demographic characteristics of the US and Australian samples were included, structural parameters reflecting values of the latent cognitive variables were found not to differ. These results provide important evidence for the equivalence of measurement of core cognitive abilities with the WAIS-III and suggest that latent cognitive abilities in the US and Australia do not differ.

  12. Decision Analysis for Metric Selection on a Clinical Quality Scorecard.

    Science.gov (United States)

    Guth, Rebecca M; Storey, Patricia E; Vitale, Michael; Markan-Aurora, Sumita; Gordon, Randolph; Prevost, Traci Q; Dunagan, Wm Claiborne; Woeltje, Keith F

    2016-09-01

    Clinical quality scorecards are used by health care institutions to monitor clinical performance and drive quality improvement. Because of the rapid proliferation of quality metrics in health care, BJC HealthCare found it increasingly difficult to select the most impactful scorecard metrics while still monitoring metrics for regulatory purposes. A 7-step measure selection process was implemented incorporating Kepner-Tregoe Decision Analysis, which is a systematic process that considers key criteria that must be satisfied in order to make the best decision. The decision analysis process evaluates what metrics will most appropriately fulfill these criteria, as well as identifies potential risks associated with a particular metric in order to identify threats to its implementation. Using this process, a list of 750 potential metrics was narrowed to 25 that were selected for scorecard inclusion. This decision analysis process created a more transparent, reproducible approach for selecting quality metrics for clinical quality scorecards. © The Author(s) 2015.

  13. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from the standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done toward the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution to combine different image quality and speed metrics into a single benchmarking score. A proposal of the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of a previous benchmarking work, expanded with visual noise measurement and updates of the latest mobile phone versions.

  14. METRIC context unit architecture

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, R.O.

    1988-01-01

    METRIC is an architecture for a simple but powerful Reduced Instruction Set Computer (RISC). Its speed comes from the simultaneous processing of several instruction streams, with instructions from the various streams being dispatched into METRIC's execution pipeline as they become available for execution. The pipeline is thus kept full, with a mix of instructions for several contexts in execution at the same time. True parallel programming is supported within a single execution unit, the METRIC Context Unit. METRIC's architecture provides for expansion through the addition of multiple Context Units and of specialized Functional Units. The architecture thus spans a range of size and performance from a single-chip microcomputer up through large and powerful multiprocessors. This research concentrates on the specification of the METRIC Context Unit at the architectural level. Performance tradeoffs made during METRIC's design are discussed, and projections of METRIC's performance are made based on simulation studies.

  15. Magnetic resonance elastography: Feasibility of liver stiffness measurements in healthy volunteers at 3 T

    International Nuclear Information System (INIS)

    Mannelli, L.; Godfrey, E.; Graves, M.J.; Patterson, A.J.; Beddy, P.; Bowden, D.; Joubert, I.; Priest, A.N.; Lomas, D.J.

    2012-01-01

    Aim: To demonstrate the feasibility of obtaining liver stiffness measurements with magnetic resonance elastography (MRE) at 3 T in normal healthy volunteers using the same technique that has been successfully applied at 1.5 T. Methods and materials: The study was approved by the local ethics committee and written informed consent was obtained from all volunteers. Eleven volunteers (mean age 35 ± 9 years) with no history of gastrointestinal, hepatobiliary, or cardiovascular disease were recruited. The magnetic resonance imaging (MRI) protocol included a gradient echo-based MRE sequence using a 60 Hz pneumatic excitation. The MRE images were processed using a local frequency estimation inversion algorithm to provide quantitative stiffness maps. Adequate image quality was assessed subjectively by demonstrating the presence of visible propagating waves within the liver parenchyma underlying the driver location. Liver stiffness values were obtained using manually placed regions of interest (ROI) outlining the liver margins on the gradient echo wave images, which were then mapped onto the corresponding stiffness image. The mean stiffness values from two adjacent sections were recorded. Results: Eleven volunteers underwent MRE. The quality of the MRE images was adequate in all the volunteers. The mean liver stiffness for the group was 2.3 ± 0.38 kPa (ranging from 1.7–2.8 kPa). Conclusions: This preliminary work using MRE at 3 T in healthy volunteers demonstrates the feasibility of liver stiffness evaluation at 3 T without modification of the approach used at 1.5 T. Adequate image quality and normal MRE values were obtained in all volunteers. The obtained stiffness values were in the range of those reported for healthy volunteers in previous studies at 1.5 T. There was good interobserver reproducibility in the stiffness measurements.

  16. Magnetic resonance elastography: Feasibility of liver stiffness measurements in healthy volunteers at 3 T

    Energy Technology Data Exchange (ETDEWEB)

    Mannelli, L., E-mail: mannellilorenzo@yahoo.it [Department of Radiology, Addenbrooke's Hospital and University of Cambridge, Cambridge (United Kingdom); Department of Radiology, University of Washington, Seattle, WA (United States); Godfrey, E.; Graves, M.J.; Patterson, A.J.; Beddy, P.; Bowden, D.; Joubert, I.; Priest, A.N.; Lomas, D.J. [Department of Radiology, Addenbrooke's Hospital and University of Cambridge, Cambridge (United Kingdom)

    2012-03-15

    Aim: To demonstrate the feasibility of obtaining liver stiffness measurements with magnetic resonance elastography (MRE) at 3 T in normal healthy volunteers using the same technique that has been successfully applied at 1.5 T. Methods and materials: The study was approved by the local ethics committee and written informed consent was obtained from all volunteers. Eleven volunteers (mean age 35 ± 9 years) with no history of gastrointestinal, hepatobiliary, or cardiovascular disease were recruited. The magnetic resonance imaging (MRI) protocol included a gradient echo-based MRE sequence using a 60 Hz pneumatic excitation. The MRE images were processed using a local frequency estimation inversion algorithm to provide quantitative stiffness maps. Adequate image quality was assessed subjectively by demonstrating the presence of visible propagating waves within the liver parenchyma underlying the driver location. Liver stiffness values were obtained using manually placed regions of interest (ROI) outlining the liver margins on the gradient echo wave images, which were then mapped onto the corresponding stiffness image. The mean stiffness values from two adjacent sections were recorded. Results: Eleven volunteers underwent MRE. The quality of the MRE images was adequate in all the volunteers. The mean liver stiffness for the group was 2.3 ± 0.38 kPa (ranging from 1.7-2.8 kPa). Conclusions: This preliminary work using MRE at 3 T in healthy volunteers demonstrates the feasibility of liver stiffness evaluation at 3 T without modification of the approach used at 1.5 T. Adequate image quality and normal MRE values were obtained in all volunteers. The obtained stiffness values were in the range of those reported for healthy volunteers in previous studies at 1.5 T. There was good interobserver reproducibility in the stiffness measurements.

  17. Methods of measuring metabolism during surgery in humans: focus on the liver-brain relationship.

    Science.gov (United States)

    Battezzati, Alberto; Bertoli, Simona

    2004-09-01

    The purpose of this work is to review recent advances in setting methods and models for measuring metabolism during surgery in humans. Surgery, especially solid organ transplantation, may offer unique experimental models in which it is ethically acceptable to gain information on difficult problems of amino acid and protein metabolism. Two areas are reviewed: the metabolic study of the anhepatic phase during liver transplantation and brain microdialysis during cerebral surgery. The first model offers an innovative approach to understand the relative role of liver and extrahepatic organs in gluconeogenesis, and to evaluate whether other organs can perform functions believed to be exclusively or almost exclusively performed by the liver. The second model offers an insight to intracerebral metabolism that is closely bound to that of the liver. The recent advances in metabolic research during surgery provide knowledge immediately useful for perioperative patient management and for a better control of surgical stress. The studies during the anhepatic phase of liver transplantation have showed that gluconeogenesis and glutamine metabolism are very active processes outside the liver. One of the critical organs for extrahepatic glutamine metabolism is the brain. Microdialysis studies helped to prove that in humans there is an intense trafficking of glutamine, glutamate and alanine among neurons and astrocytes. This delicate network is influenced by systemic amino acid metabolism. The metabolic dialogue between the liver and the brain is beginning to be understood in this light in order to explain the metabolic events of brain damage during liver failure.

  18. Observable traces of non-metricity: New constraints on metric-affine gravity

    Science.gov (United States)

    Delhom-Latorre, Adrià; Olmo, Gonzalo J.; Ronco, Michele

    2018-05-01

    Relaxing the Riemannian condition to incorporate geometric quantities such as torsion and non-metricity may allow us to explore new physics associated with defects in a hypothetical space-time microstructure. Here we show that non-metricity produces observable effects in quantum fields in the form of 4-fermion contact interactions, thereby allowing us to constrain the scale of non-metricity to be greater than 1 TeV by using results on Bhabha scattering. Our analysis is carried out in the framework of a wide class of theories of gravity in the metric-affine approach. The bound obtained represents an improvement of several orders of magnitude over previous experimental constraints.

  19. An Innovative Metric to Evaluate Satellite Precipitation's Spatial Distribution

    Science.gov (United States)

    Liu, H.; Chu, W.; Gao, X.; Sorooshian, S.

    2011-12-01

    Thanks to their capability to cover mountains, where ground measurement instruments cannot reach, satellites provide a good means of estimating precipitation over mountainous regions. In regions with complex terrain, accurate information on the high-resolution spatial distribution of precipitation is critical for many important issues, such as flood/landslide warning, reservoir operation, water system planning, etc. Therefore, in order to be useful in many practical applications, satellite precipitation products should possess high quality in characterizing spatial distribution. However, most existing validation metrics, which are based on point/grid comparison using simple statistics, cannot effectively measure a satellite's skill at capturing the spatial patterns of precipitation fields. This deficiency results from the fact that point/grid-wise comparison does not take into account the spatial coherence of precipitation fields. Furthermore, another weakness of many metrics is that they can barely provide information on why satellite products perform well or poorly. Motivated by our recent findings of consistent spatial patterns in the precipitation field over the western U.S., we developed a new metric utilizing EOF analysis and Shannon entropy. The metric can be derived through two steps: 1) capture the dominant spatial patterns of precipitation fields from both satellite products and reference data through EOF analysis, and 2) compute the similarities between the corresponding dominant patterns using a mutual information measurement defined with Shannon entropy. Instead of individual points/grids, the new metric treats the entire precipitation field simultaneously, naturally taking advantage of spatial dependence. Since the dominant spatial patterns are shaped by physical processes, the new metric can shed light on why a satellite product can or cannot capture the spatial patterns. For demonstration, an experiment was carried out to evaluate a satellite
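
    A minimal sketch of the two-step metric described above, under stated assumptions: the dominant patterns are taken as the leading EOFs from an SVD of the space-time anomaly matrix, and pattern similarity is scored with a histogram-based mutual information estimate (the number of retained modes, the bin count, and the synthetic input fields are illustrative choices, not values from the paper):

    ```python
    import numpy as np

    def leading_eofs(field, n_modes=3):
        """field: (time, space) matrix; returns the (n_modes, space) leading EOFs."""
        anomalies = field - field.mean(axis=0)
        # Rows of vt are the spatial EOF patterns, ordered by explained variance.
        _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
        return vt[:n_modes]

    def mutual_information(x, y, bins=32):
        """Histogram estimate of I(X;Y) in nats for two spatial patterns."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0  # pxy > 0 implies px > 0 and py > 0, so no division by zero
        return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

    # Hypothetical inputs: satellite and reference precipitation, (time, space).
    rng = np.random.default_rng(0)
    reference = rng.gamma(2.0, 1.0, size=(365, 400))
    satellite = reference + 0.3 * rng.normal(size=(365, 400))

    score = sum(mutual_information(e_ref, e_sat)
                for e_ref, e_sat in zip(leading_eofs(reference), leading_eofs(satellite)))
    print(f"pattern-similarity score: {score:.3f}")  # higher = better spatial agreement
    ```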

  20. Elements in normal and cirrhotic human liver. Potassium, iron, copper, zinc and bromine measured by X-ray fluorescence spectrometry

    DEFF Research Database (Denmark)

    Laursen, J.; Milman, N.; Leth, Peter Mygind

    1990-01-01

    Various elements (K, Fe, Cu, Zn, Br) were measured by X-ray fluorescence spectrometry in cellular and connective tissue fractions of normal and cirrhotic liver samples obtained at autopsy. Normal livers: 32 subjects (16 males, 16 females) median age 69 years. Cirrhotic livers: 14 subjects (13 mal...

  1. Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics

    Directory of Open Access Journals (Sweden)

    Bernardin Keni

    2008-01-01

    Full Text Available Abstract Simultaneous tracking of multiple persons in real-world environments is an active research field and several approaches have been proposed, based on a variety of features and algorithms. Recently, there has been a growing interest in organizing systematic evaluations to compare the various techniques. Unfortunately, the lack of common metrics for measuring the performance of multiple object trackers still makes it hard to compare their results. In this work, we introduce two intuitive and general metrics to allow for objective comparison of tracker characteristics, focusing on their precision in estimating object locations, their accuracy in recognizing object configurations and their ability to consistently label objects over time. These metrics have been extensively used in two large-scale international evaluations, the 2006 and 2007 CLEAR evaluations, to measure and compare the performance of multiple object trackers for a wide variety of tracking tasks. Selected performance results are presented and the advantages and drawbacks of the presented metrics are discussed based on the experience gained during the evaluations.

  2. The error analysis of Lobular and segmental division of right liver by volume measurement.

    Science.gov (United States)

    Zhang, Jianfei; Lin, Weigang; Chi, Yanyan; Zheng, Nan; Xu, Qiang; Zhang, Guowei; Yu, Shengbo; Li, Chan; Wang, Bin; Sui, Hongjin

    2017-07-01

    The aim of this study is to explore the inconsistencies between right liver volume as measured by imaging and the actual anatomical appearance of the right lobe. Five healthy donated livers were studied. The liver slices were obtained with hepatic segments multicolor-infused through the portal vein. In the slices, the lobes were divided by two methods: radiological landmarks and real anatomical boundaries. The areas of the right anterior lobe (RAL) and right posterior lobe (RPL) on each slice were measured using Photoshop CS5 and AutoCAD, and the volumes of the two lobes were calculated. There was no statistically significant difference between the volumes of the RAL or RPL as measured by the radiological landmarks (RL) and anatomical boundaries (AB) methods. However, the curves of the square error value of the RAL and RPL measured using CT showed that the three lowest points were at the cranial, intermediate, and caudal levels. The U- or V-shaped curves of the square error rate of the RAL and RPL revealed that the lowest value is at the intermediate level and the highest at the cranial and caudal levels. On CT images, less accurate landmarks were used to divide the RAL and RPL at the cranial and caudal layers. The measured volumes of hepatic segments VIII and VI would be less than their true values, and the measured volumes of hepatic segments VII and V would be greater than their true values, according to radiological landmarks. Clin. Anat. 30:585-590, 2017. © 2017 Wiley Periodicals, Inc.

  3. Non-invasive evaluation of liver stiffness after splenectomy in rabbits with CCl4-induced liver fibrosis.

    Science.gov (United States)

    Wang, Ming-Jun; Ling, Wen-Wu; Wang, Hong; Meng, Ling-Wei; Cai, He; Peng, Bing

    2016-12-14

    To investigate the diagnostic performance of liver stiffness measurement (LSM) by elastography point quantification (ElastPQ) in animal models and determine the longitudinal changes in liver stiffness by ElastPQ after splenectomy at different stages of fibrosis. Liver stiffness was measured in sixty-eight rabbits with CCl4-induced liver fibrosis at different stages and eight healthy control rabbits by ElastPQ. Liver biopsies and blood samples were obtained at scheduled time points to assess liver function and degree of fibrosis. Thirty-one rabbits with complete data that underwent splenectomy at different stages of liver fibrosis were then included for dynamic monitoring of changes in liver stiffness by ElastPQ and liver function according to blood tests. LSM by ElastPQ was significantly correlated with histologic fibrosis stage (r = 0.85, P < 0.001) […] fibrosis, moderate fibrosis, and cirrhosis, respectively. Longitudinal monitoring of the changes in liver stiffness by ElastPQ showed that early splenectomy (especially F1) may delay liver fibrosis progression. ElastPQ is an available, convenient, objective and non-invasive technique for assessing liver stiffness in rabbits with CCl4-induced liver fibrosis. In addition, liver stiffness measurements using ElastPQ can dynamically monitor the changes in liver stiffness in rabbit models, and in patients, after splenectomy.

  4. Elevated C-reactive protein and hypoalbuminemia measured before resection of colorectal liver metastases predict postoperative survival.

    Science.gov (United States)

    Kobayashi, Takashi; Teruya, Masanori; Kishiki, Tomokazu; Endo, Daisuke; Takenaka, Yoshiharu; Miki, Kenji; Kobayashi, Kaoru; Morita, Koji

    2010-01-01

    Few studies have investigated whether the Glasgow Prognostic Score (GPS), an inflammation-based prognostic score measured before resection of colorectal liver metastasis (CRLM), can predict postoperative survival. Sixty-three consecutive patients who underwent curative resection for CRLM were investigated. GPS was calculated on the basis of admission data as follows: patients with both an elevated C-reactive protein (>10 mg/l) and hypoalbuminemia (<35 g/l) were allocated a GPS score of 2. Patients in whom only one of these biochemical abnormalities was present were allocated a GPS score of 1, and patients with a normal C-reactive protein and albumin were allocated a score of 0. Significant factors concerning survival were the number of liver metastases (p = 0.0044), carcinoembryonic antigen level (p = 0.0191), GPS (p = 0.0029), grade of liver metastasis (p = 0.0033), and the number of lymph node metastases around the primary cancer (p = 0.0087). Multivariate analysis showed two independent prognostic variables: liver metastases ≥3 (relative risk 2.83) and GPS 1/2 (relative risk 3.07). GPS measured before operation and the number of liver metastases may be used as novel predictors of postoperative outcomes in patients who underwent curative resection for CRLM. Copyright 2010 S. Karger AG, Basel.
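
    The scoring rule above reduces to counting two biochemical abnormalities. A direct transcription (the >10 mg/l CRP cut-off is from the abstract; the <35 g/l albumin cut-off is the conventional GPS threshold, restored here where the source text was garbled):

    ```python
    def glasgow_prognostic_score(crp_mg_per_l: float, albumin_g_per_l: float) -> int:
        """GPS: one point for CRP > 10 mg/l, one point for albumin < 35 g/l."""
        return int(crp_mg_per_l > 10.0) + int(albumin_g_per_l < 35.0)

    assert glasgow_prognostic_score(crp_mg_per_l=15.2, albumin_g_per_l=31.0) == 2  # both abnormal
    assert glasgow_prognostic_score(crp_mg_per_l=4.0, albumin_g_per_l=42.0) == 0   # both normal
    ```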

  5. Future of the PCI Readmission Metric.

    Science.gov (United States)

    Wasfy, Jason H; Yeh, Robert W

    2016-03-01

    Between 2013 and 2014, the Centers for Medicare and Medicaid Services and the National Cardiovascular Data Registry publicly reported risk-adjusted 30-day readmission rates after percutaneous coronary intervention (PCI) as a pilot project. A key strength of this public reporting effort included risk adjustment with clinical rather than administrative data. Furthermore, because readmission after PCI is common, expensive, and preventable, this metric has substantial potential to improve quality and value in American cardiology care. Despite this, concerns about the metric exist. For example, few PCI readmissions are caused by procedural complications, limiting the extent to which improved procedural technique can reduce readmissions. Also, similar to other readmission measures, PCI readmission is associated with socioeconomic status and race. Accordingly, the metric may unfairly penalize hospitals that care for underserved patients. Perhaps in the context of these limitations, the Centers for Medicare and Medicaid Services has not yet included PCI readmission among metrics that determine Medicare financial penalties. Nevertheless, provider organizations may still wish to focus on this metric to improve value for cardiology patients. PCI readmission is associated with low-risk chest discomfort and patient anxiety. Therefore, patient education, improved triage mechanisms, and improved care coordination offer opportunities to minimize PCI readmissions. Because PCI readmission is common and costly, reducing PCI readmission offers provider organizations a compelling target to improve the quality of care, and also performance in contracts involving shared financial risk. © 2016 American Heart Association, Inc.

  6. Diagnostic accuracy and prognostic significance of blood fibrosis tests and liver stiffness measurement by FibroScan in non-alcoholic fatty liver disease.

    Science.gov (United States)

    Boursier, Jérôme; Vergniol, Julien; Guillet, Anne; Hiriart, Jean-Baptiste; Lannes, Adrien; Le Bail, Brigitte; Michalak, Sophie; Chermak, Faiza; Bertrais, Sandrine; Foucher, Juliette; Oberti, Frédéric; Charbonnier, Maude; Fouchard-Hubert, Isabelle; Rousselet, Marie-Christine; Calès, Paul; de Lédinghen, Victor

    2016-09-01

    NAFLD is highly prevalent but only a small subset of patients develop advanced liver fibrosis with impaired liver-related prognosis. We aimed to compare blood fibrosis tests and liver stiffness measurement (LSM) by FibroScan for the diagnosis of liver fibrosis and the evaluation of prognosis in NAFLD. Diagnostic accuracy was evaluated in a cross-sectional study including 452 NAFLD patients with liver biopsy (NASH-CRN fibrosis stage), LSM, and eight blood fibrosis tests (BARD, NAFLD fibrosis score, FibroMeter(NAFLD), aspartate aminotransferase to platelet ratio index (APRI), FIB4, FibroTest, Hepascore, FibroMeter(V2G)). Prognostic accuracy was evaluated in a longitudinal study including 360 NAFLD patients. LSM and FibroMeter(V2G) were the two best-performing tests in the cross-sectional study: AUROCs for advanced fibrosis (F3/4) were, respectively, 0.831±0.019 and 0.817±0.020 (p⩽0.041 vs. other tests); rates of patients with ⩾90% negative/positive predictive values for F3/4 were 56.4% and 46.7% (p […] vs. other tests); Obuchowski indexes were 0.834±0.014 and 0.798±0.016 (p⩽0.036 vs. other tests). Two fibrosis classifications were developed to precisely estimate the histological fibrosis stage from LSM or FibroMeter(V2G) results without liver biopsy (diagnostic accuracy, respectively: 80.8% vs. 77.4%, p=0.190). Kaplan-Meier curves in the longitudinal study showed that both classifications categorised NAFLD patients into subgroups with significantly different prognoses (p […]): the higher the fibrosis classification, the worse the prognosis. LSM and FibroMeter(V2G) were the most accurate of nine evaluated tests for the non-invasive diagnosis of liver fibrosis in NAFLD. LSM and FibroMeter(V2G) fibrosis classifications help physicians estimate both fibrosis stage and patient prognosis in clinical practice. The amount of liver fibrosis is the main determinant of the liver-related prognosis in patients with non-alcoholic fatty liver disease (NAFLD). We evaluated eight blood tests and Fibro

  7. Measuring solar reflectance - Part I: Defining a metric that accurately predicts solar heat gain

    Energy Technology Data Exchange (ETDEWEB)

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul [Heat Island Group, Environmental Energy Technologies Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States)

    2010-09-15

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland US latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m⁻², and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool roof net energy savings by as much as 23%. We define clear sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m⁻², and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer. (author)

  8. Measuring solar reflectance Part I: Defining a metric that accurately predicts solar heat gain

    Energy Technology Data Exchange (ETDEWEB)

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul

    2010-05-14

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland U.S. latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m⁻², and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool-roof net energy savings by as much as 23%. We define clear-sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m⁻², and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.

  9. Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.

    Science.gov (United States)

    Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen

    2017-06-01

    The article proposes a set of metrics for evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, in reference to whether the evaluation employs the raw measurements of patient-performed motions, or whether the evaluation is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
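
    Two of the model-less metrics named above are direct functionals of the captured motion data. A sketch assuming equal-length, uniformly sampled sequences from a Kinect-style skeleton stream (array shapes and the synthetic data are illustrative only, not the paper's setup):

    ```python
    import numpy as np
    from scipy.stats import entropy  # computes Kullback-Leibler divergence

    def rms_distance(patient: np.ndarray, reference: np.ndarray) -> float:
        """Root-mean-square distance between two (frames, joints*3) sequences."""
        return float(np.sqrt(np.mean((patient - reference) ** 2)))

    def kl_divergence(patient: np.ndarray, reference: np.ndarray, bins=30) -> float:
        """KL divergence between histograms of one scalar feature (e.g., a joint angle)."""
        lo = min(patient.min(), reference.min())
        hi = max(patient.max(), reference.max())
        p, _ = np.histogram(patient, bins=bins, range=(lo, hi), density=True)
        q, _ = np.histogram(reference, bins=bins, range=(lo, hi), density=True)
        return float(entropy(p + 1e-12, q + 1e-12))  # smoothing avoids division by zero

    rng = np.random.default_rng(1)
    ref = rng.normal(size=(120, 60))              # hypothetical reference exercise
    pat = ref + 0.1 * rng.normal(size=(120, 60))  # patient attempt with small errors
    print(rms_distance(pat, ref), kl_divergence(pat[:, 0], ref[:, 0]))
    ```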

  10. Conceptual Soundness, Metric Development, Benchmarking, and Targeting for PATH Subprogram Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Mosey, G.; Doris, E.; Coggeshall, C.; Antes, M.; Ruch, J.; Mortensen, J.

    2009-01-01

    The objective of this study is to evaluate the conceptual soundness of the U.S. Department of Housing and Urban Development (HUD) Partnership for Advancing Technology in Housing (PATH) program's revised goals and establish and apply a framework to identify and recommend metrics that are the most useful for measuring PATH's progress. This report provides an evaluative review of PATH's revised goals, outlines a structured method for identifying and selecting metrics, proposes metrics and benchmarks for a sampling of individual PATH programs, and discusses other metrics that potentially could be developed that may add value to the evaluation process. The framework and individual program metrics can be used for ongoing management improvement efforts and to inform broader program-level metrics for government reporting requirements.

  11. Metric diffusion along foliations

    CERN Document Server

    Walczak, Szymon M

    2017-01-01

    Up-to-date research in metric diffusion along compact foliations is presented in this book. Beginning with fundamentals from optimal transportation theory and the theory of foliations, the book moves on to cover the Wasserstein distance, the Kantorovich Duality Theorem, and the metrization of the weak topology by the Wasserstein distance. Metric diffusion is defined, the topology of the metric space is studied, and the limits of diffused metrics along compact foliations are discussed. Essentials on foliations, holonomy, heat diffusion, and compact foliations are detailed and vital technical lemmas are proved to aid understanding. Graduate students and researchers in geometry, topology and dynamics of foliations and laminations will find this supplement useful as it presents facts about metric diffusion along non-compact foliations and provides a full description of the limit for metrics diffused along a foliation with at least one compact leaf in two dimensions.

  12. A Practitioners’ Perspective on Developmental Models, Metrics and Community

    Directory of Open Access Journals (Sweden)

    Chad Stewart

    2009-12-01

    Full Text Available This article builds on a paper by Stein and Heikkinen (2009), and suggests ways to expand and improve our measurement of the quality of the developmental models, metrics and instruments and the results we get in collaborating with clients. We suggest that this dialogue needs to be about more than stage development measured by (even calibrated) stage-development-focused, linguistic-based, developmental psychology metrics that produce lead indicators and are shown to be reliable and valid by psychometric qualities alone. The article first provides a brief overview of our background and biases, and an applied version of Ken Wilber’s Integral Operating System that has provided increased development, client satisfaction, and contribution to our communities measured by verifiable, tangible results (as well as intangible results such as increased ability to cope with complex surroundings, reduced stress and growth in developmental stages to better fit the environment in which our clients were engaged at that time). It then addresses four key points raised by Stein and Heikkinen (the need for quality control, defining and deciding on appropriate metrics, building a system to evaluate models and metrics, and clarifying and increasing the reliability and validity of the models and metrics we use) by providing initial concrete steps to: • Adopt a systemic value-chain approach • Measure results in addition to language • Build on the evaluation system for instruments, models and metrics suggested by Stein & Heikkinen • Clarify and improve the reliability and validity of the instruments, models and metrics we use. We complete the article with an echoing call for the community of Applied Developmental Theory suggested by Ross (2008) and Stein and Heikkinen, a brief description of that community (from our perspective), and a table that builds on Table 2 proposed by Stein and Heikkinen.

  13. The relationship between settlement population size and sustainable development measured by two sustainability metrics

    International Nuclear Information System (INIS)

    O'Regan, Bernadette; Morrissey, John; Foley, Walter; Moles, Richard

    2009-01-01

    This paper reports on a study of the relative sustainability of 79 Irish villages, towns and a small city (collectively called 'settlements') classified by population size. Quantitative data on more than 300 economic, social and environmental attributes of each settlement were assembled into a database. Two aggregated metrics were selected to model the relative sustainability of settlements: Ecological Footprint (EF) and Sustainable Development Index (SDI). Subsequently these were aggregated to create a single Combined Sustainable Development Index. Creation of this database meant that metric calculations did not rely on proxies, and were therefore considered to be robust. Methods employed provided values for indicators at various stages of the aggregation process. This allowed both the first reported empirical analysis of the relationship between settlement sustainability and population size, and the elucidation of information provided at different stages of aggregation. At the highest level of aggregation, settlement sustainability increased with population size, but important differences amongst individual settlements were masked by aggregation. EF and SDI metrics ranked settlements in differing orders of relative sustainability. Aggregation of indicators to provide Ecological Footprint values was found to be especially problematic, and this metric was inadequately sensitive to distinguish amongst the relative sustainability achieved by all settlements. Many authors have argued that, for policy makers to be able to inform planning decisions using sustainability indicators, it is necessary that they adopt a toolkit of aggregated indicators. Here it is argued that to interpret correctly each aggregated metric value, policy makers also require a hierarchy of disaggregated component indicator values, each explained fully. Possible implications for urban planning are briefly reviewed

  14. Liver stiffness measured by magnetic resonance elastography as a risk factor for hepatocellular carcinoma: a preliminary case-control study

    Energy Technology Data Exchange (ETDEWEB)

    Motosugi, Utaroh; Ichikawa, Tomoaki; Koshiishi, Tsuyota; Sano, Katsuhiro; Morisaka, Hiroyuki; Ichikawa, Shintaro; Araki, Tsutomu [University of Yamanashi, Department of Radiology, Yamanashi-ken (Japan); Enomoto, Nobuyuki [University of Yamanashi, 1st Department of Internal Medicine, Yamanashi (Japan); Matsuda, Masanori; Fujii, Hideki [University of Yamanashi, 1st Department of Surgery, Yamanashi (Japan)

    2013-01-15

    To examine if liver stiffness measured by magnetic resonance elastography (MRE) is a risk factor for hepatocellular carcinoma (HCC) in patients with chronic liver disease. By reviewing the records of magnetic resonance (MR) examinations performed at our institution, we selected 301 patients with chronic liver disease who did not have a previous medical history of HCC. All patients underwent MRE and gadoxetic acid-enhanced MR imaging. HCC was identified on MR images in 66 of the 301 patients, who were matched to controls from the remaining patients without HCC according to age. MRE images were obtained by visualising elastic waves generated in the liver by pneumatic vibration transferred via a cylindrical passive driver. Risk factors for HCC development were assessed by odds ratios from logistic regression analysis of gender, liver stiffness by MRE, and serum levels of aspartate transferase, alanine transferase, alpha-fetoprotein, and protein induced by vitamin K absence-II. Multivariate analysis revealed that only liver stiffness by MRE was a significant risk factor for HCC, with an odds ratio (95 % confidence interval) of 1.38 (1.05-1.84). Liver stiffness measured by MRE is an independent risk factor for HCC in patients with chronic liver disease. (orig.)

  15. Completion of a Dislocated Metric Space

    Directory of Open Access Journals (Sweden)

    P. Sumati Kumari

    2015-01-01

    Full Text Available We provide a construction for the completion of a dislocated metric space (abbreviated d-metric space); we also prove that the completion of the metric associated with a d-metric coincides with the metric associated with the completion of the d-metric.

  16. Non-Invasive Assessment of Hepatic Fibrosis by Elastic Measurement of Liver Using Magnetic Resonance Tagging Images

    Directory of Open Access Journals (Sweden)

    Xuejun Zhang

    2018-03-01

    Full Text Available To date, the measurement of the stiffness of the liver requires a special vibrational tool that limits its application in many hospitals. In this study, we developed a novel method for automatically assessing the elasticity of the liver without any use of contrast agents or mechanical devices. By calculating the non-rigid deformation of the liver from magnetic resonance (MR) tagging images, the stiffness was quantified as the displacement of grids on the liver image during a forced exhalation cycle. Our method includes two major processes: (1) quantification of the non-rigid deformation as the bending energy (BE), based on the thin-plate spline method, in the spatial domain, and (2) calculation of the difference in the power spectrum from the tagging images, by using the fast Fourier transform, in the frequency domain. By considering 34 cases (17 normal and 17 abnormal liver cases), a remarkable difference between the two groups was found by both methods. The elasticity of the liver was finally analyzed by combining the bending energy and power spectral features obtained through MR tagging images. The result showed that only one abnormal case was misclassified in our dataset, which implies that our method for non-invasive assessment of liver fibrosis has the potential to reduce the need for traditional liver biopsy.
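
    The first of the two processes has a compact numerical form: the thin-plate bending energy of a displacement field sampled at the tag-grid intersections, with integrand f_xx² + 2f_xy² + f_yy². A finite-difference sketch (regular grid and synthetic displacements assumed; this illustrates the quantity, not the authors' implementation):

    ```python
    import numpy as np

    def bending_energy(disp: np.ndarray) -> float:
        """Approximate thin-plate bending energy of one displacement component.

        disp: (rows, cols) displacements of the tag-grid intersections (e.g., in mm).
        Integrand: f_xx^2 + 2*f_xy^2 + f_yy^2, summed over the grid.
        """
        fx = np.gradient(disp, axis=1)   # first derivatives
        fy = np.gradient(disp, axis=0)
        fxx = np.gradient(fx, axis=1)    # second derivatives
        fxy = np.gradient(fx, axis=0)
        fyy = np.gradient(fy, axis=0)
        return float(np.sum(fxx**2 + 2 * fxy**2 + fyy**2))

    # A stiff (fibrotic) liver deforms less during exhalation, so its tag grid
    # stays flatter and yields a lower bending energy than a compliant liver.
    bump = np.sin(np.linspace(0, np.pi, 16))
    soft = 3.0 * np.outer(bump, bump)    # mm, hypothetical compliant deformation
    stiff = 0.3 * soft
    print(bending_energy(soft) > bending_energy(stiff))  # True
    ```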

  17. Strategic Human Resource Metrics: A Perspective of the General Systems Theory

    Directory of Open Access Journals (Sweden)

    Chux Gervase Iwu

    2016-04-01

    Full Text Available Measuring and quantifying strategic human resource outcomes in relation to key performance criteria is essential to developing value-adding metrics. Objectives: This paper posits (using a general systems lens) that strategic human resource metrics should interpret the relationship between attitudinal human resource outcomes and performance criteria such as profitability, quality or customer service. Approach: Using the general systems model as underpinning theory, the study assesses the variation in response to a Likert-type questionnaire with twenty-four (24) items measuring the major attitudinal dispositions of HRM outcomes (employee commitment, satisfaction, engagement and embeddedness). Results: A Chi-square test (Chi-square test statistic = 54.898, p = 0.173) showed that variation in responses to the attitudinal statements occurred due to chance. This was interpreted to mean that attitudinal human resource outcomes influence performance as a unit of system components. The neutral response was found to be more associated with the ‘reject’ response than the ‘acceptance’ response. Value: The study offers suggestions on the determination of strategic HR metrics and recommends the use of systems theory in HRM-related studies. Implications: This study provides another dimension to human resource metrics by arguing that strategic human resource metrics should measure the relationship between attitudinal human resource outcomes and performance using a systems perspective.

  18. A Three-Dimensional Receiver Operator Characteristic Surface Diagnostic Metric

    Science.gov (United States)

    Simon, Donald L.

    2011-01-01

    Receiver Operator Characteristic (ROC) curves are commonly applied as metrics for quantifying the performance of binary fault detection systems. An ROC curve provides a visual representation of a detection system's True Positive Rate versus False Positive Rate sensitivity as the detection threshold is varied. The area under the curve provides a measure of fault detection performance independent of the applied detection threshold. While the standard ROC curve is well suited for quantifying binary fault detection performance, it is not suitable for quantifying the classification performance of multi-fault classification problems. Furthermore, it does not provide a measure of diagnostic latency. To address these shortcomings, a novel three-dimensional receiver operator characteristic (3D ROC) surface metric has been developed. This is done by generating and applying two separate curves: the standard ROC curve reflecting fault detection performance, and a second curve reflecting fault classification performance. A third dimension, diagnostic latency, is added, giving rise to 3D ROC surfaces. Applying numerical integration techniques, the volumes under and between the surfaces are calculated to produce metrics of the diagnostic system's detection and classification performance. This paper will describe the 3D ROC surface metric in detail, and present an example of its application for quantifying the performance of aircraft engine gas path diagnostic methods. Metric limitations and potential enhancements are also discussed.
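
    Once the surface is tabulated, the volume-under-surface computation reduces to nested numerical integration. A schematic sketch (the synthetic surface below stands in for real detection/latency data, and the normalization choice is an assumption, not taken from the paper):

    ```python
    import numpy as np
    from scipy.integrate import trapezoid

    # Hypothetical 3D ROC surface: detection rate as a function of
    # false-positive rate (x-axis) and diagnostic latency (y-axis).
    fpr = np.linspace(0.0, 1.0, 51)
    latency = np.linspace(0.0, 10.0, 41)            # e.g., seconds until diagnosis
    F, L = np.meshgrid(fpr, latency, indexing="ij")
    tpr_surface = np.sqrt(F) * (1.0 - np.exp(-L))   # detection improves with latency

    # Volume under the surface via nested trapezoidal integration; dividing by
    # the latency span maps the score back onto [0, 1], like a standard AUC.
    volume = trapezoid(trapezoid(tpr_surface, latency, axis=1), fpr)
    print(f"normalized volume under surface: {volume / latency[-1]:.3f}")
    ```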

  19. Evaluation of Subjective and Objective Performance Metrics for Haptically Controlled Robotic Systems

    Directory of Open Access Journals (Sweden)

    Cong Dung Pham

    2014-07-01

    Full Text Available This paper studies in detail how different evaluation methods perform when it comes to describing the performance of haptically controlled mobile manipulators. In particular, we investigate how well subjective metrics perform compared to objective metrics. Finding the best metrics to describe the performance of a control scheme is challenging when human operators are involved; how the user perceives the performance of the controller does not necessarily correspond to the directly measurable metrics normally used in controller evaluation. It is therefore important to study whether there is any correspondence between how the user perceives the performance of a controller and how it performs in terms of directly measurable metrics such as the time used to perform a task, number of errors, accuracy, and so on. To perform these tests we choose a system that consists of a mobile manipulator controlled by an operator through a haptic device. This is a good system for studying different performance metrics, as performance can be determined both by subjective metrics based on feedback from the users and by objective, directly measurable metrics. The system consists of a robotic arm, which provides for interaction and manipulation, mounted on a mobile base, which extends the workspace of the arm. The operator thus needs to perform both interaction and locomotion using a single haptic device. While the position of the on-board camera is determined by the base motion, the principal control objective is the motion of the manipulator arm. This calls for intelligent control allocation between the base and the manipulator arm in order to obtain intuitive control of both the camera and the arm. We implement three different approaches to the control allocation problem, i.e., whether the vehicle or manipulator arm actuation is applied to generate the desired motion. The performance of the different control schemes is evaluated, and our

  20. Metrics with vanishing quantum corrections

    International Nuclear Information System (INIS)

    Coley, A A; Hervik, S; Gibbons, G W; Pope, C N

    2008-01-01

    We investigate solutions of the classical Einstein or supergravity equations that solve any set of quantum-corrected Einstein equations in which the Einstein tensor plus a multiple of the metric is equated to a symmetric conserved tensor T_{μν}(g_{αβ}, ∂_τ g_{αβ}, ∂_τ ∂_σ g_{αβ}, …) constructed from sums of terms involving contractions of the metric and powers of arbitrary covariant derivatives of the curvature tensor. A classical solution, such as an Einstein metric, is called universal if, when evaluated on that Einstein metric, T_{μν} is a multiple of the metric. A Ricci-flat classical solution is called strongly universal if, when evaluated on that Ricci-flat metric, T_{μν} vanishes. It is well known that pp-waves in four spacetime dimensions are strongly universal. We focus attention on a natural generalization: Einstein metrics with holonomy Sim(n - 2) in which all scalar invariants are zero or constant. In four dimensions we demonstrate that the generalized Ghanam-Thompson metric is weakly universal and that the Goldberg-Kerr metric is strongly universal; indeed, we show that universality extends to all four-dimensional Sim(2) Einstein metrics. We also discuss generalizations to higher dimensions.

  1. Neurosurgical virtual reality simulation metrics to assess psychomotor skills during brain tumor resection.

    Science.gov (United States)

    Azarnoush, Hamed; Alzhrani, Gmaan; Winkler-Schwartz, Alexander; Alotaibi, Fahad; Gelinas-Phaneuf, Nicholas; Pazos, Valérie; Choudhury, Nusrat; Fares, Jawad; DiRaddo, Robert; Del Maestro, Rolando F

    2015-05-01

    Virtual reality simulator technology together with novel metrics could advance our understanding of expert neurosurgical performance and modify and improve resident training and assessment. This pilot study introduces innovative metrics that can be measured by the state-of-the-art simulator to assess performance. Such metrics cannot be measured in an operating room and have not been used previously to assess performance. Three sets of performance metrics were assessed utilizing the NeuroTouch platform in six scenarios with simulated brain tumors having different visual and tactile characteristics. Tier 1 metrics included percentage of brain tumor resected and volume of simulated "normal" brain tissue removed. Tier 2 metrics included instrument tip path length, time taken to resect the brain tumor, pedal activation frequency, and sum of applied forces. Tier 3 metrics included sum of forces applied to different tumor regions and the force bandwidth derived from the force histogram. The results outlined are from a novice resident in the second year of training and an expert neurosurgeon. The three tiers of metrics obtained from the NeuroTouch simulator do encompass the wide variability of technical performance observed during novice/expert resections of simulated brain tumors and can be employed to quantify the safety, quality, and efficiency of technical performance during simulated brain tumor resection. Tier 3 metrics derived from force pyramids and force histograms may be particularly useful in assessing simulated brain tumor resections. Our pilot study demonstrates that the safety, quality, and efficiency of novice and expert operators can be measured using metrics derived from the NeuroTouch platform, helping to understand how specific operator performance is dependent on both psychomotor ability and cognitive input during multiple virtual reality brain tumor resections.
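
    The Tier 2 measures above are simple functionals of the recorded tool-tip trajectory and force signal. A sketch assuming uniformly sampled data (the arrays and sampling rate are hypothetical; this is not the NeuroTouch platform's API):

    ```python
    import numpy as np

    def tier2_metrics(tip_xyz: np.ndarray, force_n: np.ndarray, dt: float) -> dict:
        """tip_xyz: (samples, 3) tool-tip positions in mm; force_n: (samples,) forces in N."""
        steps = np.linalg.norm(np.diff(tip_xyz, axis=0), axis=1)  # per-sample displacements
        return {
            "path_length_mm": float(steps.sum()),
            "task_time_s": (len(tip_xyz) - 1) * dt,
            "sum_of_forces_N": float(force_n.sum()),
        }

    rng = np.random.default_rng(2)
    tip = np.cumsum(rng.normal(scale=0.2, size=(500, 3)), axis=0)  # jittery tool path
    forces = np.abs(rng.normal(loc=0.3, scale=0.1, size=500))      # contact forces
    print(tier2_metrics(tip, forces, dt=0.01))
    ```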

  2. Revisiting measurement invariance in intelligence testing in aging research: Evidence for almost complete metric invariance across age groups.

    Science.gov (United States)

    Sprague, Briana N; Hyun, Jinshil; Molenaar, Peter C M

    2017-01-01

    Invariance of intelligence across age is often assumed but infrequently explicitly tested. Horn and McArdle (1992) tested measurement invariance of intelligence, reporting adequate model fit, but their approach might not consider all relevant aspects such as sub-test differences. The goal of the current paper is to explore age-related invariance of the WAIS-R using an alternative model that allows direct tests of age on WAIS-R subtests. Cross-sectional data on 940 participants aged 16-75 from the WAIS-R normative values were used. Subtests examined were information, comprehension, similarities, vocabulary, picture completion, block design, picture arrangement, and object assembly. The two intelligence factors considered were fluid and crystallized intelligence. Self-reported ages were divided into young (16-22, n = 300), adult (29-39, n = 275), middle (40-60, n = 205), and older (61-75, n = 160) adult groups. Results suggested that partial metric invariance holds. Although most of the subtests reflected fluid and crystallized intelligence similarly across different ages, invariance did not hold for block design on fluid intelligence and picture arrangement on crystallized intelligence for older adults. Additionally, there was evidence of a correlated residual between information and vocabulary for the young adults only. This partial metric invariance model yielded acceptable model fit compared to previously proposed invariance models of Horn and McArdle (1992). Almost complete metric invariance holds for a two-factor model of intelligence. Most of the subtests were invariant across age groups, suggesting little evidence for age-related bias in the WAIS-R. However, we did find unique relationships between two subtests and intelligence. Future studies should examine age-related differences in subtests when testing measurement invariance in intelligence.

  3. Remarks on G-Metric Spaces

    Directory of Open Access Journals (Sweden)

    Bessem Samet

    2013-01-01

    Full Text Available In 2005, Mustafa and Sims (2006) introduced and studied a new class of generalized metric spaces, which are called G-metric spaces, as a generalization of metric spaces. We establish some useful propositions to show that many fixed point theorems on (nonsymmetric) G-metric spaces given recently by many authors follow directly from well-known theorems on metric spaces. Our technique can be easily extended to other results, as shown in application.

  4. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as "gold standard". Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less

  5. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    International Nuclear Information System (INIS)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-01-01

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as "gold standard". Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time
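
    The pipeline described in these two records maps naturally onto standard ITK components. A hedged sketch using SimpleITK (the filter chain mirrors the description, but every parameter, the seed point, and the file name are placeholder assumptions, not the authors' values):

    ```python
    import numpy as np
    import SimpleITK as sitk

    # Load the portal-venous-phase CT (hypothetical file name).
    img = sitk.Cast(sitk.ReadImage("portal_venous_ct.nii.gz"), sitk.sitkFloat32)

    # 1) Edge-preserving noise reduction; 2) boundary enhancement at one scale.
    smoothed = sitk.CurvatureAnisotropicDiffusion(img, timeStep=0.0625,
                                                  numberOfIterations=5)
    gradient = sitk.GradientMagnitudeRecursiveGaussian(smoothed, sigma=1.5)

    # 3) Speed function: near zero on strong edges, high inside the parenchyma.
    speed = sitk.BoundedReciprocal(gradient)

    # 4) Fast marching from a seed inside the liver gives a rough initial contour.
    fm = sitk.FastMarchingImageFilter()
    fm.SetTrialPoints([(256, 200, 40)])  # hypothetical (x, y, z) seed index
    fm.SetStoppingValue(300.0)
    arrival = fm.Execute(speed)
    initial_level_set = sitk.Cast(arrival, sitk.sitkFloat32) - 150.0  # inside < 0

    # 5) Geodesic active contour refines the boundary on the same speed image.
    gac = sitk.GeodesicActiveContourLevelSetImageFilter()
    gac.SetPropagationScaling(1.0)
    gac.SetCurvatureScaling(0.5)
    gac.SetMaximumRMSError(0.005)
    gac.SetNumberOfIterations(500)
    liver_mask = gac.Execute(initial_level_set, speed) < 0  # binary liver mask

    # Volume = voxel count * voxel volume (mm^3 -> cc).
    voxel_cc = float(np.prod(img.GetSpacing())) / 1000.0
    volume_cc = sitk.GetArrayFromImage(liver_mask).sum() * voxel_cc
    print(f"estimated liver volume: {volume_cc:.0f} cc")
    ```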

  6. First results from a combined analysis of CERN computing infrastructure metrics

    Science.gov (United States)

    Duellmann, Dirk; Nieke, Christian

    2017-10-01

    The IT Analysis Working Group (AWG) has been formed at CERN across individual computing units and the experiments to attempt a cross-cutting analysis of computing infrastructure and application metrics. In this presentation we will describe the first results obtained using medium/long-term data (1 month to 1 year) correlating box-level metrics, job-level metrics from LSF and HTCondor, IO metrics from the physics analysis disk pools (EOS), and networking and application-level metrics from the experiment dashboards. We will cover in particular the measurement of hardware performance and prediction of job duration, the latency sensitivity of different job types and a search for bottlenecks with the production job mix in the current infrastructure. The presentation will conclude with the proposal of a small set of metrics to simplify drawing conclusions also in the more constrained environment of public cloud deployments.

  7. Prioritizing Urban Habitats for Connectivity Conservation: Integrating Centrality and Ecological Metrics.

    Science.gov (United States)

    Poodat, Fatemeh; Arrowsmith, Colin; Fraser, David; Gordon, Ascelin

    2015-09-01

    Connectivity among fragmented areas of habitat has long been acknowledged as important for the viability of biological conservation, especially within highly modified landscapes. Identifying important habitat patches in ecological connectivity is a priority for many conservation strategies, and the application of 'graph theory' has been shown to provide useful information on connectivity. Despite the large number of metrics for connectivity derived from graph theory, only a small number have been compared in terms of the importance they assign to nodes in a network. This paper presents a study that aims to define a new set of metrics and compares these with traditional graph-based metrics used in the prioritization of habitat patches for ecological connectivity. The metrics measured consist of "topological" metrics, "ecological" metrics, and "integrated" metrics; integrated metrics are a combination of topological and ecological metrics. Eight metrics were applied to the habitat network for the fat-tailed dunnart within Greater Melbourne, Australia. A non-directional network was developed in which nodes were linked to adjacent nodes. These links were then weighted by the effective distance between patches. By applying each of the eight metrics to the study network, nodes were ranked according to their contribution to the overall network connectivity. The structured comparison revealed the similarities and differences in the way the habitat for the fat-tailed dunnart was ranked based on different classes of metrics. Due to the differences in the way the metrics operate, a suitable metric should be chosen that best meets the objectives established by the decision maker.
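
    Once the weighted network is built, ranking patches takes only a few lines with a graph library. A toy illustration (hypothetical patch network and habitat-quality scores, not the Melbourne dunnart data; the "integrated" metric here is one plausible combination, not the paper's definition):

    ```python
    import networkx as nx

    # Toy habitat network: nodes are patches, edge weights are effective
    # distances between adjacent patches (lower = better connected).
    edges = [("A", "B", 1.0), ("B", "C", 2.5), ("C", "D", 1.2),
             ("B", "D", 3.0), ("D", "E", 0.8)]
    G = nx.Graph()
    G.add_weighted_edges_from(edges)

    # Topological importance: betweenness centrality, treating weights as distances.
    betweenness = nx.betweenness_centrality(G, weight="weight")

    # A simple "integrated" metric: scale topological importance by patch
    # quality (hypothetical habitat scores standing in for ecological data).
    quality = {"A": 0.9, "B": 0.4, "C": 0.7, "D": 0.8, "E": 0.5}
    integrated = {node: betweenness[node] * quality[node] for node in G}

    for node in sorted(integrated, key=integrated.get, reverse=True):
        print(node, round(integrated[node], 3))  # patches ranked by contribution
    ```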

  8. Project management metrics, KPIs, and dashboards a guide to measuring and monitoring project performance

    CERN Document Server

    Kerzner, Harold

    2013-01-01

    Today, with the growth of complex projects, stakeholder involvement in projects, and advances in computer technology for dashboard designs, metrics and key performance indicators for project management have become an important focus. This Second Edition of the bestselling book walks readers through everything from the basics of project management metrics and key performance indicators to establishing targets and using dashboards to monitor performance. The content is aligned with PMI's PMBOK Guide and stresses "value" as the main focal point.

  9. The AGIS metric and time of test: A replication study

    OpenAIRE

    Counsell, S; Swift, S; Tucker, A

    2016-01-01

    Visual Field (VF) tests and corresponding data are commonly used in clinical practice to manage glaucoma. The standard metric used to measure glaucoma severity is the Advanced Glaucoma Intervention Studies (AGIS) metric. We know that the time of day when VF tests are applied can influence a patient's AGIS metric value; a previous study showed that this was the case for a data set of 160 patients. In this paper, we replicate that study using data from 2468 patients obtained from Moorfields Eye Ho...

  10. Non-invasive evaluation of liver stiffness after splenectomy in rabbits with CCl4-induced liver fibrosis

    OpenAIRE

    Wang, Ming-Jun; Ling, Wen-Wu; Wang, Hong; Meng, Ling-Wei; Cai, He; Peng, Bing

    2016-01-01

    AIM To investigate the diagnostic performance of liver stiffness measurement (LSM) by elastography point quantification (ElastPQ) in animal models and determine the longitudinal changes in liver stiffness by ElastPQ after splenectomy at different stages of fibrosis. METHODS Liver stiffness was measured in sixty-eight rabbits with CCl4-induced liver fibrosis at different stages and eight healthy control rabbits by ElastPQ. Liver biopsies and blood samples were obtained at scheduled time points...

  11. Metric-adjusted skew information

    DEFF Research Database (Denmark)

    Liang, Cai; Hansen, Frank

    2010-01-01

    We give a truly elementary proof of the convexity of metric-adjusted skew information following an idea of Effros. We extend earlier results of weak forms of superadditivity to general metric-adjusted skew information. Recently, Luo and Zhang introduced the notion of semi-quantum states on a bipartite system and proved superadditivity of the Wigner-Yanase-Dyson skew informations for such states. We extend this result to the general metric-adjusted skew information. We finally show that a recently introduced extension to parameter values 1 < p ≤ 2 is a special case of (unbounded) metric-adjusted skew information.

  12. PQSM-based RR and NR video quality metrics

    Science.gov (United States)

    Lu, Zhongkang; Lin, Weisi; Ong, Eeping; Yang, Xiaokang; Yao, Susu

    2003-06-01

    This paper presents a new and general concept, PQSM (Perceptual Quality Significance Map), to be used in measuring visual distortion. It makes use of the selectivity characteristic of the HVS (Human Visual System), which pays more attention to certain areas/regions of a visual signal due to one or more of the following factors: salient features in the image/video, cues from domain knowledge, and association of other media (e.g., speech or audio). A PQSM is an array whose elements represent the relative perceptual-quality significance levels of the corresponding areas/regions of an image or video. Due to its generality, the PQSM can be incorporated into any visual distortion metric: to improve the effectiveness or/and efficiency of perceptual metrics, or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed in this paper, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show the scheme can improve the performance of current image/video distortion metrics.
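
    The simplest way to fold a PQSM into an existing measure is as a per-pixel weight on squared error, which is one reading of "enhancing a PSNR-based metric" above. A sketch (the normalization convention and the synthetic significance map are assumptions, not the paper's three-stage estimator):

    ```python
    import numpy as np

    def pqsm_weighted_psnr(ref: np.ndarray, test: np.ndarray,
                           pqsm: np.ndarray, peak: float = 255.0) -> float:
        """PSNR with squared errors weighted by perceptual significance."""
        w = pqsm * ref.size / pqsm.sum()  # normalize so the mean weight is 1
        wmse = np.mean(w * (ref.astype(float) - test.astype(float)) ** 2)
        return 10.0 * np.log10(peak**2 / wmse)

    rng = np.random.default_rng(3)
    ref = rng.integers(0, 256, size=(64, 64)).astype(float)
    test = ref + rng.normal(scale=4.0, size=ref.shape)
    pqsm = np.ones_like(ref)
    pqsm[16:48, 16:48] = 4.0  # hypothetical salient region (e.g., a face)
    print(f"{pqsm_weighted_psnr(ref, test, pqsm):.2f} dB")
    ```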

  13. Influence of Musical Enculturation on Brain Responses to Metric Deviants

    Directory of Open Access Journals (Sweden)

    Niels T. Haumann

    2018-04-01

    Full Text Available The ability to recognize metric accents is fundamental in both music and language perception. It has been suggested that music listeners prefer rhythms that follow simple binary meters, which are common in Western music. This means that listeners expect odd-numbered beats to be strong and even-numbered beats to be weak. In support of this, studies have shown that listeners exposed to Western music show stronger novelty and incongruity related P3 and irregularity detection related mismatch negativity (MMN) brain responses to attenuated odd- than attenuated even-numbered metric positions. Furthermore, behavioral evidence suggests that music listeners' preferences can be changed by long-term exposure to non-Western rhythms and meters, e.g., by listening to African or Balkan music. In our study, we investigated whether it might be possible to measure effects of music enculturation on neural responses to attenuated tones on specific metric positions. We compared the magnetic mismatch negativity (MMNm) to attenuated beats in a "Western group" of listeners (n = 12) mainly exposed to Western music and a "Bicultural group" of listeners (n = 13) exposed for at least 1 year to Sub-Saharan African music in addition to Western music. We found that in the "Western group" the MMNm was higher in amplitude to deviant tones on odd compared to even metric positions, but not in the "Bicultural group." In support of this finding, there was also a trend of the "Western group" to rate omitted beats as more surprising on odd than even metric positions, whereas the "Bicultural group" seemed to discriminate less between metric positions in terms of surprise ratings. Also, we observed that the overall latency of the MMNm was significantly shorter in the Bicultural group compared to the Western group. These effects were not biased by possible differences in rhythm perception ability or music training, measured with the Musical Ear Test (MET)

  14. Influence of Musical Enculturation on Brain Responses to Metric Deviants.

    Science.gov (United States)

    Haumann, Niels T; Vuust, Peter; Bertelsen, Freja; Garza-Villarreal, Eduardo A

    2018-01-01

    The ability to recognize metric accents is fundamental in both music and language perception. It has been suggested that music listeners prefer rhythms that follow simple binary meters, which are common in Western music. This means that listeners expect odd-numbered beats to be strong and even-numbered beats to be weak. In support of this, studies have shown that listeners exposed to Western music show stronger novelty and incongruity related P3 and irregularity detection related mismatch negativity (MMN) brain responses to attenuated odd- than attenuated even-numbered metric positions. Furthermore, behavioral evidence suggests that music listeners' preferences can be changed by long-term exposure to non-Western rhythms and meters, e.g., by listening to African or Balkan music. In our study, we investigated whether it might be possible to measure effects of music enculturation on neural responses to attenuated tones on specific metric positions. We compared the magnetic mismatch negativity (MMNm) to attenuated beats in a "Western group" of listeners ( n = 12) mainly exposed to Western music and a "Bicultural group" of listeners ( n = 13) exposed for at least 1 year to Sub-Saharan African music in addition to Western music. We found that in the "Western group" the MMNm was higher in amplitude to deviant tones on odd compared to even metric positions, but not in the "Bicultural group." In support of this finding, there was also a trend of the "Western group" to rate omitted beats as more surprising on odd than even metric positions, whereas the "Bicultural group" seemed to discriminate less between metric positions in terms of surprise ratings. Also, we observed that the overall latency of the MMNm was significantly shorter in the Bicultural group compared to the Western group. These effects were not biased by possible differences in rhythm perception ability or music training, measured with the Musical Ear Test (MET). Furthermore, source localization analyses

  15. The Finsler spacetime framework. Backgrounds for physics beyond metric geometry

    International Nuclear Information System (INIS)

    Pfeifer, Christian

    2013-11-01

    The fundamental structure on which physics is described is the geometric spacetime background provided by a four-dimensional manifold equipped with a Lorentzian metric. Most importantly, the spacetime manifold does not only provide the stage for physical field theories: its geometry encodes causality, observers and their measurements, and gravity simultaneously. This threefold role of the Lorentzian metric geometry of spacetime is one of the key insights of general relativity. In this thesis we extend the background geometry for physics from the metric framework of general relativity to our Finsler spacetime framework and ensure that the threefold role of the geometry of spacetime in physics is not changed. The geometry of Finsler spacetimes is determined by a function on the tangent bundle and includes metric geometry. In contrast to the standard formulation of Finsler geometry, our Finsler spacetime framework overcomes the differentiability and existence problems of the geometric objects in earlier attempts to use Finsler geometry as an extension of Lorentzian metric geometry. The development of our non-metric geometric framework which encodes causality is one central achievement of this thesis. On the basis of our well-defined Finsler spacetime geometry we are able to derive dynamics for the non-metric Finslerian geometry of spacetime from an action principle, obtained from the Einstein-Hilbert action, for the first time. We can complete the dynamics to a non-metric description of gravity by coupling matter fields, also formulated via an action principle, to the geometry of our Finsler spacetimes. We prove that the combined dynamics of the fields and the geometry are consistent with general relativity. Furthermore, we demonstrate how to define observers and their measurements solely through the non-metric spacetime geometry. Physical consequences derived on the basis of our Finsler spacetimes are: a possible solution to the fly-by anomaly in the solar system; the
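
    For orientation, the textbook definition behind this framework can be stated in two lines (standard Finsler geometry, not notation quoted from the thesis): the Finsler function F on the tangent bundle induces a velocity-dependent metric,

        g_{ab}(x,\dot{x}) \;=\; \frac{1}{2}\,
        \frac{\partial^{2} F^{2}(x,\dot{x})}{\partial \dot{x}^{a}\,\partial \dot{x}^{b}},
        \qquad
        F(x,\lambda\dot{x}) = \lambda\, F(x,\dot{x}) \quad (\lambda > 0),

    and the framework reduces to Lorentzian metric geometry in the special case F^2 = |g_{ab}(x)\dot{x}^{a}\dot{x}^{b}|, which is the sense in which Finsler spacetimes "include metric geometry".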

  16. The Finsler spacetime framework. Backgrounds for physics beyond metric geometry

    Energy Technology Data Exchange (ETDEWEB)

    Pfeifer, Christian

    2013-11-15

    The fundamental structure on which physics is described is the geometric spacetime background provided by a four-dimensional manifold equipped with a Lorentzian metric. Most importantly, the spacetime manifold does not only provide the stage for physical field theories: its geometry encodes causality, observers and their measurements, and gravity simultaneously. This threefold role of the Lorentzian metric geometry of spacetime is one of the key insights of general relativity. In this thesis we extend the background geometry for physics from the metric framework of general relativity to our Finsler spacetime framework and ensure that the threefold role of the geometry of spacetime in physics is not changed. The geometry of Finsler spacetimes is determined by a function on the tangent bundle and includes metric geometry. In contrast to the standard formulation of Finsler geometry, our Finsler spacetime framework overcomes the differentiability and existence problems of the geometric objects in earlier attempts to use Finsler geometry as an extension of Lorentzian metric geometry. The development of our non-metric geometric framework which encodes causality is one central achievement of this thesis. On the basis of our well-defined Finsler spacetime geometry we are able to derive dynamics for the non-metric Finslerian geometry of spacetime from an action principle, obtained from the Einstein-Hilbert action, for the first time. We can complete the dynamics to a non-metric description of gravity by coupling matter fields, also formulated via an action principle, to the geometry of our Finsler spacetimes. We prove that the combined dynamics of the fields and the geometry are consistent with general relativity. Furthermore, we demonstrate how to define observers and their measurements solely through the non-metric spacetime geometry. Physical consequences derived on the basis of our Finsler spacetimes are: a possible solution to the fly-by anomaly in the solar system; the

  17. Landscape metrics for three-dimension urban pattern recognition

    Science.gov (United States)

    Liu, M.; Hu, Y.; Zhang, W.; Li, C.

    2017-12-01

    Understanding how landscape pattern determines population or ecosystem dynamics is crucial for managing our landscapes. Urban areas are becoming increasingly dominant social-ecological systems, so it is important to understand patterns of urbanization. Most studies of urban landscape pattern examine land-use maps in two dimensions because the acquisition of 3-dimensional information is difficult. We used Brista software, based on Quickbird images and aerial photos, to interpret the heights of buildings, thus incorporating a 3-dimensional approach. We evaluated the feasibility and accuracy of this approach. A total of 164,345 buildings in the Liaoning central urban agglomeration of China, which included seven cities, were measured. Twelve landscape metrics were proposed or chosen to describe the urban landscape patterns in 2 and 3 dimensions. The ecological and social meanings of the landscape metrics were analyzed with multiple correlation analysis. The results showed that classification accuracy compared with field surveys was 87.6%, which means this method for interpreting building height was acceptable. The metrics effectively reflected the urban architecture in relation to number of buildings, area, height, 3-D shape and diversity aspects. We were able to describe the urban characteristics of each city with these metrics. The metrics also captured ecological and social meanings. The proposed landscape metrics provided a new method for urban landscape analysis in three dimensions.
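
    As a concrete illustration of the kind of height-aware metric such a 3-D analysis can use, the sketch below computes a Shannon diversity index over binned building heights; the metric choice and bin edges are assumptions for illustration, not the twelve metrics actually used in the study.

        import numpy as np

        def height_diversity(heights_m, bins=(0, 10, 25, 50, 100, 500)):
            """Shannon diversity of binned building heights (a simple 3-D landscape metric)."""
            counts, _ = np.histogram(heights_m, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return -np.sum(p * np.log(p))

        # Hypothetical building heights (metres) for one city district:
        print(height_diversity(np.array([8, 12, 30, 30, 45, 80, 6, 15])))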

  18. Measurement of peroxisomal enzyme activities in the liver of brown trout (Salmo trutta), using spectrophotometric methods

    Directory of Open Access Journals (Sweden)

    Resende Albina D

    2003-03-01

    Full Text Available Abstract Background This study was aimed primarily at testing in the liver of brown trout (Salmo trutta) spectrophotometric methods previously used to measure the activities of catalase and hydrogen peroxide-producing oxidases in mammals. The second objective was to evaluate the influence of temperature on the activities of those peroxisomal enzymes. A third goal of this work was the study of enzyme distribution in crude cell fractions of brown trout liver. Results The assays revealed a linear increase in the activity of all peroxisomal enzymes as the temperature rose from 10° to 37°C. However, while the activities of hydrogen peroxide-producing oxidases were strongly influenced by temperature, catalase activity was only slightly affected. A crude fraction enriched with peroxisomes was obtained by differential centrifugation of liver homogenates, and the contamination by other organelles was evaluated by the activities of marker enzymes for mitochondria (succinate dehydrogenase), lysosomes (aryl sulphatase) and microsomes (NADPH cytochrome c reductase). For peroxisomal enzymes, the activities per mg of protein (specific activity) in liver homogenates were strongly correlated with the activities per g of liver and with the total activities per liver. These correlations were not obtained with crude peroxisomal fractions. Conclusions The spectrophotometric protocols originally used to quantify the activity of mammalian peroxisomal enzymes can be successfully applied to the study of those enzymes in brown trout. Because the activity of all studied peroxisomal enzymes rose linearly with temperature, their activities can be correctly measured between 10° and 37°C. Probably due to contamination by other organelles and losses of soluble matrix enzymes during homogenisation, enzyme activities in crude peroxisomal fractions do not correlate with the activities in liver homogenates. Thus, total homogenates will be used in future seasonal and
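
    The spectrophotometric activities reported here rest on the Beer-Lambert relation; a minimal sketch of the usual calculation, using the commonly cited extinction coefficient for H2O2 at 240 nm in a catalase assay (the assay volumes are invented), is:

        def enzyme_activity(dA_per_min, epsilon_mM, path_cm, assay_vol_ml, sample_vol_ml):
            """Absorbance slope -> activity (micromol/min per ml of sample).

            Beer-Lambert: dC/dt = (dA/dt) / (epsilon * l), then scale by dilution.
            """
            dC_mM_per_min = dA_per_min / (epsilon_mM * path_cm)
            return dC_mM_per_min * assay_vol_ml / sample_vol_ml

        # Hypothetical catalase assay; epsilon(H2O2, 240 nm) ~ 0.0436 mM^-1 cm^-1:
        print(enzyme_activity(dA_per_min=0.05, epsilon_mM=0.0436,
                              path_cm=1.0, assay_vol_ml=3.0, sample_vol_ml=0.1))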

  19. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    Science.gov (United States)

    Post, J. V.

    1981-01-01

    Software quality metrics was extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  20. Hepatoscintigraphy with 99mTc-colloid in children with liver cirrhosis

    Energy Technology Data Exchange (ETDEWEB)

    Mironov, S P

    1987-03-01

    Children aged 2 to 14 years with the initial, formed and terminal stages of liver cirrhosis were examined by radionuclide scintigraphy with 99mTc-colloid. A set of indices characterizing the function of the reticuloendothelial system (RES), the effective hepatic blood flow, and metric parameters of the liver and spleen was obtained from an analysis of the time-activity curves over the heart, liver and spleen areas, and from digital imaging of the liver with the costal arch marked. It was shown that at the initial stage of the disease the indices of the time course of the radioactive colloid were of a compensated nature. Spleen function was elevated; liver and spleen sizes were increased. The formed stage was characterized by signs of subcompensation of liver function: changes in the indices of RP retention in the blood and a decrease in the indices of total and hepatic radioactive colloid uptake. The terminal stage was characterized by marked disorder of liver RES function which was not compensated for by a high splenic uptake, image deformation and focal RP distribution. Irrespective of the stage of disease, the syndrome of portal hypertension was shown to manifest itself in splenomegaly and an increase in the radioactive colloid uptake by the liver of over 15%. The accuracy of the set of signs was 90%.

  1. Knowledge metrics of Brand Equity; critical measure of Brand Attachment

    OpenAIRE

    Arslan Rafi (Corresponding Author); Arslan Ali; Sidra Waris; Dr. Kashif-ur-Rehman

    2011-01-01

    Brand creation through an effective marketing strategy is necessary for the creation of unique associations in the customers' memory. Customers' attitudes, awareness and associations towards the brand are the primary focus when evaluating the performance of a brand, before designing marketing strategies and subsequently evaluating progress. In this research, the literature establishes a direct and significant effect of the knowledge metrics of brand equity, i.e. Brand Awareness and Brand Associatio...

  2. The metric system: An introduction

    Science.gov (United States)

    Lumley, Susan M.

    On 13 Jul. 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on 25 Jul. 1991, President George Bush signed Executive Order 12770 which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first we examine the reasons behind the nation's conversion to the metric system. The second part of this report is on applying the metric system.

  3. The metric system: An introduction

    Energy Technology Data Exchange (ETDEWEB)

    Lumley, S.M.

    1995-05-01

    On July 13, 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on July 25, 1991, President George Bush signed Executive Order 12770 which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first we examine the reasons behind the nation's conversion to the metric system. The second part of this report is on applying the metric system.

  4. MR elastography of the liver at 3.0 T in diagnosing liver fibrosis grades; preliminary clinical experience.

    Science.gov (United States)

    Yoshimitsu, Kengo; Mitsufuji, Toshimichi; Shinagawa, Yoshinobu; Fujimitsu, Ritsuko; Morita, Ayako; Urakawa, Hiroshi; Hayashi, Hiroyuki; Takano, Koichi

    2016-03-01

    To clarify the usefulness of 3.0-T MR elastography (MRE) in diagnosing the histological grades of liver fibrosis using preliminary clinical data. Between November 2012 and March 2014, MRE was applied to all patients who underwent liver MR study at a 3.0-T clinical unit. Among them, those who had pathological evaluation of liver tissue within 3 months of the MR examination were retrospectively recruited, and the liver stiffness measured by MRE was correlated with the histological results. The institutional review board approved this study, waiving informed consent. There were 70 patients who met the inclusion criteria. Liver stiffness showed significant correlation with the pathological grades of liver fibrosis (rho = 0.89), and 3.0-T clinical MRE was suggested to be sufficiently useful in assessing the grades of liver fibrosis. MR elastography may help clinicians assess patients with chronic liver diseases. The usefulness of 3.0-T MR elastography has rarely been reported. Measured liver stiffness correlated well with the histological grades of liver fibrosis. Measured liver stiffness was also affected by necroinflammation, but to a lesser degree. 3.0-T MRE could be a non-invasive alternative to liver biopsy.
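
    The reported rho = 0.89 is a rank (Spearman) correlation; computing it from paired stiffness/grade data is a one-liner, sketched here with invented numbers:

        from scipy.stats import spearmanr

        stiffness_kpa = [2.1, 2.8, 3.5, 4.9, 6.2, 7.8]   # hypothetical MRE readings
        fibrosis_grade = [0, 1, 1, 2, 3, 4]              # matched histological grades
        rho, p = spearmanr(stiffness_kpa, fibrosis_grade)
        print(f"rho = {rho:.2f}, p = {p:.4f}")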

  5. Analyses Of Two End-User Software Vulnerability Exposure Metrics

    Energy Technology Data Exchange (ETDEWEB)

    Jason L. Wright; Miles McQueen; Lawrence Wellman

    2012-08-01

    The risk due to software vulnerabilities will not be completely resolved in the near future. Instead, it is important to put reliable vulnerability measures into the hands of end-users so that informed decisions can be made regarding the relative security exposure incurred by choosing one software package over another. To that end, we propose two new security metrics: average active vulnerabilities (AAV) and vulnerability-free days (VFD). These metrics capture both the speed with which new vulnerabilities are reported to vendors and the rate at which software vendors fix them. We then examine how the metrics are computed using currently available datasets and demonstrate their estimation in a simulation experiment using four different browsers as a case study. Finally, we discuss how the metrics may be used by the various stakeholders of software to inform software usage decisions.
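
    A hedged reading of the two metrics, since their exact definitions are not reproduced in this abstract: AAV averages the daily count of disclosed-but-unpatched vulnerabilities over a window, and VFD counts the days on which that count is zero. A sketch under those assumptions:

        from datetime import date, timedelta

        def aav_and_vfd(vulns, start, end):
            """vulns: list of (disclosed, patched_or_None) date pairs."""
            days = (end - start).days + 1
            active = [sum(1 for opened, closed in vulns
                          if opened <= start + timedelta(days=i)
                          and (closed is None or closed > start + timedelta(days=i)))
                      for i in range(days)]
            return sum(active) / days, sum(1 for a in active if a == 0)

        vulns = [(date(2012, 1, 3), date(2012, 1, 10)),   # hypothetical CVE dates
                 (date(2012, 1, 8), None)]
        print(aav_and_vfd(vulns, date(2012, 1, 1), date(2012, 1, 31)))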

  6. Energy Metrics for State Government Buildings

    Science.gov (United States)

    Michael, Trevor

    Measuring true progress towards energy conservation goals requires the accurate reporting and accounting of energy consumption. An accurate energy metrics framework is also a critical element for verifiable Greenhouse Gas Inventories. Energy conservation in government can reduce expenditures on energy costs, leaving more funds available for public services. In addition to monetary savings, conserving energy can help to promote energy security, air quality, and a reduction of carbon footprint. With energy consumption/GHG inventories recently produced at the Federal level, state and local governments are beginning to produce their own energy metrics systems. In recent years, many states have passed laws and executive orders which require their agencies to reduce energy consumption. In June 2008, SC state government established a law to achieve a 20% energy usage reduction in state buildings by 2020. This study examines case studies from other states that have established similar goals to uncover the methods used to establish an energy metrics system. Direct energy consumption in state government primarily comes from buildings and mobile sources. This study will focus exclusively on measuring energy consumption in state buildings. The case studies reveal that many states, including SC, are having issues gathering the data needed to accurately measure energy consumption across all state buildings. Common problems found include a lack of enforcement and incentives that encourage state agencies to participate in any reporting system. The case studies are aimed at finding the leverage used to gather the needed data. The various approaches to coercing participation will hopefully reveal methods that SC can use to establish the accurate metrics system needed to measure progress towards its 20% by 2020 energy reduction goal. Among the strongest incentives found in the case studies is the potential for monetary savings through energy efficiency. Framing energy conservation

  7. Attack-Resistant Trust Metrics

    Science.gov (United States)

    Levien, Raph

    The Internet is an amazingly powerful tool for connecting people together, unmatched in human history. Yet, with that power comes great potential for spam and abuse. Trust metrics are an attempt to compute which people are trustworthy and which are likely attackers. This chapter presents two specific trust metrics developed and deployed on the Advogato Website, which is a community blog for free software developers. This real-world experience demonstrates that the trust metrics fulfilled their goals, but that for good results, it is important to match the assumptions of the abstract trust metric computation to the real-world implementation.
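
    The Advogato metric itself is a network-flow computation over the certification graph. As a much simpler stand-in that conveys the core idea (bounding how many accounts any one member can admit, so that a few bad certifications cannot flood the trusted set), consider this illustrative sketch, which is not Levien's actual algorithm:

        from collections import deque

        def trusted_set(cert_graph, seed, capacity):
            """Breadth-first trust propagation where each accepted member may
            certify at most `capacity` others; this caps the damage done by
            any single compromised certifier."""
            accepted, queue = {seed}, deque([seed])
            while queue:
                member = queue.popleft()
                for peer in cert_graph.get(member, [])[:capacity]:
                    if peer not in accepted:
                        accepted.add(peer)
                        queue.append(peer)
            return accepted

        cert_graph = {"alice": ["bob", "carol", "mallory"],   # hypothetical certs
                      "bob": ["dave"], "mallory": ["sock1", "sock2"]}
        print(trusted_set(cert_graph, "alice", capacity=2))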

  8. NASA Aviation Safety Program Systems Analysis/Program Assessment Metrics Review

    Science.gov (United States)

    Louis, Garrick E.; Anderson, Katherine; Ahmad, Tisan; Bouabid, Ali; Siriwardana, Maya; Guilbaud, Patrick

    2003-01-01

    The goal of this project is to evaluate the metrics and processes used by NASA's Aviation Safety Program in assessing technologies that contribute to NASA's aviation safety goals. There were three objectives for reaching this goal. First, NASA's main objectives for aviation safety were documented and their consistency was checked against the main objectives of the Aviation Safety Program. Next, the metrics used for technology investment by the Program Assessment function of AvSP were evaluated. Finally, other metrics that could be used by the Program Assessment Team (PAT) were identified and evaluated. This investigation revealed that the objectives are in fact consistent across organizational levels at NASA and with the FAA. Some of the major issues discussed in this study, which should be further investigated, are the removal of the Cost and Return-on-Investment metrics, the lack of metrics to measure the balance of investment and technology, the interdependencies between some of the metric risk driver categories, and the conflict between 'fatal accident rate' and 'accident rate' in the language of the Aviation Safety goal as stated in different sources.

  9. Symmetries of the dual metrics

    International Nuclear Information System (INIS)

    Baleanu, D.

    1998-01-01

    The geometric duality between the metric g_{μν} and a Killing tensor K_{μν} is studied. Conditions were found under which the symmetries of the metric g_{μν} and the dual metric K_{μν} are the same. The dual spinning space was constructed without the introduction of torsion. The general results are applied to the case of the Kerr-Newman metric.
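
    The standard definitions behind this abstract, stated for reference (textbook material, not quoted from the paper): a Killing vector ξ and a rank-2 Killing tensor K satisfy

        \nabla_{(\mu}\,\xi_{\nu)} = 0,
        \qquad
        \nabla_{(\mu} K_{\nu\rho)} = 0,

    and along any geodesic with tangent \dot{x}^{\mu} the quantity K = K_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu} is conserved; the "dual metric" construction treats K_{\mu\nu}, when non-degenerate, as a metric in its own right.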

  10. An Observability Metric for Underwater Vehicle Localization Using Range Measurements

    Directory of Open Access Journals (Sweden)

    Filippo Arrichiello

    2013-11-01

    Full Text Available The paper addresses observability issues related to the general problem of single and multiple Autonomous Underwater Vehicle (AUV) localization using only range measurements. While an AUV is submerged, localization devices such as Global Navigation Satellite Systems are ineffective due to the attenuation of electromagnetic waves. AUV localization based on dead reckoning techniques and the use of affordable motion sensor units is also not practical, due to divergence caused by sensor bias and drift. For these reasons, localization systems often build on trilateration algorithms that rely on the measurements of the ranges between an AUV and a set of fixed transponders using acoustic devices. Still, such solutions are often expensive, require cumbersome calibration procedures and only allow for AUV localization in an area that is defined by the geometrical arrangement of the transponders. A viable alternative for AUV localization that has recently come to the fore exploits the use of complementary information on the distance from the AUV to a single transponder, together with information provided by on-board resident motion sensors such as depth, velocity and acceleration measurements. This concept can be extended to address the problem of relative localization between two AUVs equipped with acoustic sensors for inter-vehicle range measurements. Motivated by these developments, in this paper we show that both the problem of absolute localization of a single vehicle and that of relative localization of multiple vehicles can be treated using the same mathematical framework, and, tailoring concepts of observability derived for nonlinear systems, we analyze how the performance in localization depends on the types of motion imparted to the AUVs. To this end, we propose a well-defined observability metric and validate its usefulness, both in simulation and by carrying out experimental tests with a real marine vehicle during which the
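
    One generic way to turn this into a scalar, in the spirit of (though not necessarily identical to) the metric proposed in the paper: stack the unit line-of-sight vectors from the trajectory to the transponder into a Gramian and score its smallest eigenvalue. A near-zero value flags uninformative motions, such as driving straight at the beacon.

        import numpy as np

        def observability_metric(traj, transponder):
            """Smallest eigenvalue of the range-measurement Gramian for a
            single fixed transponder; traj is an (N, 2) array of vehicle
            positions known up to the offset we are trying to observe."""
            u = traj - transponder
            u /= np.linalg.norm(u, axis=1, keepdims=True)
            return np.linalg.eigvalsh(u.T @ u)[0]

        beacon = np.array([0.0, 0.0])
        radial = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])  # straight at beacon
        curved = np.array([[1.0, 0.0], [2.0, 1.0], [1.0, 2.0]])  # excites both axes
        print(observability_metric(radial, beacon))  # ~0: poorly observable
        print(observability_metric(curved, beacon))  # > 0: observable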

  11. Smart Grid Status and Metrics Report

    Energy Technology Data Exchange (ETDEWEB)

    Balducci, Patrick J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Weimar, Mark R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kirkham, Harold [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-07-01

    To convey progress made in achieving the vision of a smart grid, this report uses a set of six characteristics derived from the National Energy Technology Laboratory Modern Grid Strategy. It uses 21 metrics to provide insight into the grid’s capacity to embody these characteristics. This report looks across a spectrum of smart grid concerns to measure the status of smart grid deployment and impacts.

  12. Effects of Metric Change on Workers’ Tools and Training.

    Science.gov (United States)

    1981-07-01

    understanding of the metric system, and particularly a lack of fluency in converting customary measurements to metric measurements, may increase the... assembly, installing, and repairing occupations; 84 Painting, plastering, waterproofing, cementing, and related occupations; 85 Excavating, grading, paving, and related occupations; 86 Construction occupations, n.e.c.; 89 Structural work

  13. Reproducibility of graph metrics of human brain functional networks.

    Science.gov (United States)

    Deuker, Lorena; Bullmore, Edward T; Smith, Marie; Christensen, Soren; Nathan, Pradeep J; Rockstroh, Brigitte; Bassett, Danielle S

    2009-10-01

    Graph theory provides many metrics of complex network organization that can be applied to analysis of brain networks derived from neuroimaging data. Here we investigated the test-retest reliability of graph metrics of functional networks derived from magnetoencephalography (MEG) data recorded in two sessions from 16 healthy volunteers who were studied at rest and during performance of the n-back working memory task in each session. For each subject's data at each session, we used a wavelet filter to estimate the mutual information (MI) between each pair of MEG sensors in each of the classical frequency intervals from gamma to low delta in the overall range 1-60 Hz. Undirected binary graphs were generated by thresholding the MI matrix and 8 global network metrics were estimated: the clustering coefficient, path length, small-worldness, efficiency, cost-efficiency, assortativity, hierarchy, and synchronizability. Reliability of each graph metric was assessed using the intraclass correlation (ICC). Good reliability was demonstrated for most metrics applied to the n-back data (mean ICC=0.62). Reliability was greater for metrics in lower frequency networks. Higher frequency gamma- and beta-band networks were less reliable at a global level but demonstrated high reliability of nodal metrics in frontal and parietal regions. Performance of the n-back task was associated with greater reliability than measurements on resting state data. Task practice was also associated with greater reliability. Collectively these results suggest that graph metrics are sufficiently reliable to be considered for future longitudinal studies of functional brain network changes.
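
    The pipeline described (threshold an MI matrix into a binary graph, then read off global metrics) looks roughly as follows; networkx is used for illustration and the wavelet-based MI estimation is not reproduced:

        import numpy as np
        import networkx as nx

        def global_metrics(mi, threshold):
            """Binary graph from a mutual-information matrix, plus two of the
            eight global metrics named in the study."""
            adj = (mi > threshold).astype(int)
            np.fill_diagonal(adj, 0)
            g = nx.from_numpy_array(adj)
            if not nx.is_connected(g):                     # guard for sparse graphs
                g = g.subgraph(max(nx.connected_components(g), key=len))
            return {"clustering": nx.average_clustering(g),
                    "path_length": nx.average_shortest_path_length(g)}

        rng = np.random.default_rng(0)
        mi = rng.random((16, 16)); mi = (mi + mi.T) / 2    # hypothetical MI matrix
        print(global_metrics(mi, threshold=0.5))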

  14. Regional metabolic liver function measured in patients with cirrhosis by 2-[¹⁸F]fluoro-2-deoxy-D-galactose PET/CT.

    Science.gov (United States)

    Sørensen, Michael; Mikkelsen, Kasper S; Frisch, Kim; Villadsen, Gerda E; Keiding, Susanne

    2013-06-01

    There is a clinical need for methods that can quantify regional hepatic function non-invasively in patients with cirrhosis. Here we validate the use of 2-[(18)F]fluoro-2-deoxy-D-galactose (FDGal) PET/CT for measuring regional metabolic function to this purpose, and apply the method to test the hypothesis of increased intrahepatic metabolic heterogeneity in cirrhosis. Nine cirrhotic patients underwent dynamic liver FDGal PET/CT with blood samples from a radial artery and a liver vein. Hepatic blood flow was measured by indocyanine green infusion/Fick's principle. From blood measurements, hepatic systemic clearance (Ksyst, L blood/min) and hepatic intrinsic clearance (Vmax/Km, L blood/min) of FDGal were calculated. From PET data, hepatic systemic clearance of FDGal in liver parenchyma (Kmet, mL blood/mL liver tissue/min) was calculated. Intrahepatic metabolic heterogeneity was evaluated in terms of the coefficient of variation (CoV, %) using parametric images of Kmet. On average, Ksyst was 86% of Vmax/Km, which validates the use of FDGal as a PET tracer of hepatic metabolic function. Mean Kmet was 0.157 mL blood/mL liver tissue/min, which was lower than the 0.274 mL blood/mL liver tissue/min previously found in healthy subjects. In conclusion, dynamic FDGal PET/CT with arterial sampling provides an accurate measure of regional hepatic metabolic function in patients with cirrhosis. This is likely to have clinical implications for the assessment of patients with liver disease as well as treatment planning and monitoring. Copyright © 2013 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
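
    The blood-side clearance rests on Fick's principle; a minimal sketch of the arithmetic, with invented steady-state concentrations, is:

        def hepatic_systemic_clearance(flow_l_per_min, conc_arterial, conc_hepatic_vein):
            """Fick's principle: removal rate / arterial concentration
            = Q * (Ca - Cv) / Ca."""
            return flow_l_per_min * (conc_arterial - conc_hepatic_vein) / conc_arterial

        print(hepatic_systemic_clearance(flow_l_per_min=1.2,
                                         conc_arterial=10.0,
                                         conc_hepatic_vein=7.5))   # -> 0.3 L blood/min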

  15. Comparing alternative and traditional dissemination metrics in medical education.

    Science.gov (United States)

    Amath, Aysah; Ambacher, Kristin; Leddy, John J; Wood, Timothy J; Ramnanan, Christopher J

    2017-09-01

    The impact of academic scholarship has traditionally been measured using citation-based metrics. However, citations may not be the only measure of impact. In recent years, other platforms (e.g. Twitter) have provided new tools for promoting scholarship to both academic and non-academic audiences. Alternative metrics (altmetrics) can capture non-traditional dissemination data such as attention generated on social media platforms. The aims of this exploratory study were to characterise the relationships among altmetrics, access counts and citations in an international and pre-eminent medical education journal, and to clarify the roles of these metrics in assessing the impact of medical education academic scholarship. A database study was performed (September 2015) for all papers published in Medical Education in 2012 (n = 236) and 2013 (n = 246). Citation, altmetric and access (HTML views and PDF downloads) data were obtained from Scopus, the Altmetric Bookmarklet tool and the journal Medical Education, respectively. Pearson coefficients (r-values) between metrics of interest were then determined. Twitter and Mendeley (an academic bibliography tool) were the only altmetric-tracked platforms frequently (> 50%) utilised in the dissemination of articles. Altmetric scores (composite measures of all online attention) were driven by Twitter mentions. For short and full-length articles in 2012 and 2013, both access counts and citation counts were most strongly correlated with one another, as well as with Mendeley downloads. By comparison, Twitter metrics and altmetric scores demonstrated weak to moderate correlations with both access and citation counts. Whereas most altmetrics showed limited correlations with readership (access counts) and impact (citations), Mendeley downloads correlated strongly with both readership and impact indices for articles published in the journal Medical Education and may therefore have potential use that is complementary to that of citations in

  16. Quantitative MRI for hepatic fat fraction and T2* measurement in pediatric patients with non-alcoholic fatty liver disease.

    Science.gov (United States)

    Deng, Jie; Fishbein, Mark H; Rigsby, Cynthia K; Zhang, Gang; Schoeneman, Samantha E; Donaldson, James S

    2014-11-01

    Non-alcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease in children. The gold standard for diagnosis is liver biopsy. MRI is a non-invasive imaging method to provide quantitative measurement of hepatic fat content. The methodology is particularly appealing for the pediatric population because of its rapidity and radiation-free imaging techniques. To develop a multi-point Dixon MRI method with multi-interference models (multi-fat-peak modeling and bi-exponential T2* correction) for accurate hepatic fat fraction (FF) and T2* measurements in pediatric patients with NAFLD. A phantom study was first performed to validate the accuracy of the MRI fat fraction measurement by comparing it with the chemical fat composition of the ex-vivo pork liver-fat homogenate. The most accurate model determined from the phantom study was used for fat fraction and T2* measurements in 52 children and young adults referred from the pediatric hepatology clinic with suspected or identified NAFLD. Separate T2* values of water (T2*W) and fat (T2*F) components derived from the bi-exponential fitting were evaluated and plotted as a function of fat fraction. In ten patients undergoing liver biopsy, we compared histological analysis of liver fat fraction with MRI fat fraction. In the phantom study the 6-point Dixon with 5-fat-peak, bi-exponential T2* modeling demonstrated the best precision and accuracy in fat fraction measurements compared with other methods. This model was further calibrated with chemical fat fraction and applied in patients, where similar patterns were observed as in the phantom study: conventional 2-point and 3-point Dixon methods underestimated fat fraction compared to the calibrated 6-point 5-fat-peak bi-exponential model (P < 0.0001). With increasing fat fraction, T2*W (27.9 ± 3.5 ms) decreased, whereas T2*F (20.3 ± 5.5 ms) increased; and T2*W and T2*F became increasingly more similar when fat fraction was higher than 15-20%. Histological fat
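
    For reference, the simplest member of the Dixon family that the paper benchmarks against, the 2-point magnitude method, separates water and fat from in-phase and opposed-phase images; the multi-peak, T2*-corrected model used here generalizes this. A minimal sketch:

        import numpy as np

        def two_point_dixon_ff(in_phase, opposed_phase):
            """W = (IP + OP) / 2, F = (IP - OP) / 2, FF = F / (W + F)."""
            water = (in_phase + opposed_phase) / 2.0
            fat = (in_phase - opposed_phase) / 2.0
            return fat / (water + fat)

        ip = np.array([100.0, 120.0])        # hypothetical voxel intensities
        op = np.array([80.0, 60.0])
        print(two_point_dixon_ff(ip, op))    # -> [0.1  0.25]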

  17. Quantitative MRI for hepatic fat fraction and T2* measurement in pediatric patients with non-alcoholic fatty liver disease

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Jie; Rigsby, Cynthia K.; Donaldson, James S. [Ann and Robert H. Lurie Children's Hospital of Chicago, Department of Medical Imaging, Chicago, IL (United States); Northwestern University, Department of Radiology, Feinberg School of Medicine, Chicago, IL (United States); Fishbein, Mark H. [Ann and Robert H. Lurie Children's Hospital of Chicago, Division of Gastroenterology, Hepatology, and Nutrition, Chicago, IL (United States); Zhang, Gang [Ann and Robert H. Lurie Children's Hospital of Chicago, Biostatistics Research Core, Chicago, IL (United States); Schoeneman, Samantha E. [Ann and Robert H. Lurie Children's Hospital of Chicago, Department of Medical Imaging, Chicago, IL (United States)

    2014-11-15

    Non-alcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease in children. The gold standard for diagnosis is liver biopsy. MRI is a non-invasive imaging method to provide quantitative measurement of hepatic fat content. The methodology is particularly appealing for the pediatric population because of its rapidity and radiation-free imaging techniques. To develop a multi-point Dixon MRI method with multi-interference models (multi-fat-peak modeling and bi-exponential T2* correction) for accurate hepatic fat fraction (FF) and T2* measurements in pediatric patients with NAFLD. A phantom study was first performed to validate the accuracy of the MRI fat fraction measurement by comparing it with the chemical fat composition of the ex-vivo pork liver-fat homogenate. The most accurate model determined from the phantom study was used for fat fraction and T2* measurements in 52 children and young adults referred from the pediatric hepatology clinic with suspected or identified NAFLD. Separate T2* values of water (T2*W) and fat (T2*F) components derived from the bi-exponential fitting were evaluated and plotted as a function of fat fraction. In ten patients undergoing liver biopsy, we compared histological analysis of liver fat fraction with MRI fat fraction. In the phantom study the 6-point Dixon with 5-fat-peak, bi-exponential T2* modeling demonstrated the best precision and accuracy in fat fraction measurements compared with other methods. This model was further calibrated with chemical fat fraction and applied in patients, where similar patterns were observed as in the phantom study: conventional 2-point and 3-point Dixon methods underestimated fat fraction compared to the calibrated 6-point 5-fat-peak bi-exponential model (P < 0.0001). With increasing fat fraction, T2*W (27.9 ± 3.5 ms) decreased, whereas T2*F (20.3 ± 5.5 ms) increased; and T2*W and T2*F became increasingly more similar when fat

  18. Better Metrics to Automatically Predict the Quality of a Text Summary

    Directory of Open Access Journals (Sweden)

    Judith D. Schlesinger

    2012-09-01

    Full Text Available In this paper we demonstrate a family of metrics for estimating the quality of a text summary relative to one or more human-generated summaries. The improved metrics are based on features automatically computed from the summaries to measure content and linguistic quality. The features are combined using one of three methods—robust regression, non-negative least squares, or canonical correlation, an eigenvalue method. The new metrics significantly outperform the previous standard for automatic text summarization evaluation, ROUGE.
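
    Of the three combination methods, non-negative least squares is the easiest to sketch: fit non-negative weights mapping automatically computed features to human quality scores. The feature values and ratings below are invented placeholders:

        import numpy as np
        from scipy.optimize import nnls

        # Rows: summaries; columns: hypothetical content/linguistic-quality features.
        X = np.array([[0.61, 0.30, 0.75],
                      [0.42, 0.55, 0.40],
                      [0.80, 0.20, 0.90],
                      [0.35, 0.60, 0.30]])
        y = np.array([3.8, 2.9, 4.5, 2.5])   # human quality ratings

        weights, residual = nnls(X, y)
        print("weights:", weights)
        print("predicted quality:", X @ weights)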

  19. Cardiovascular health metrics and accelerometer-measured physical activity levels: National Health and Nutrition Examination Survey, 2003-2006.

    Science.gov (United States)

    Barreira, Tiago V; Harrington, Deirdre M; Katzmarzyk, Peter T

    2014-01-01

    To determine whether relationships exist between accelerometer-measured moderate-to-vigorous physical activity (MVPA) and other cardiovascular (CV) health metrics in a large sample. Data from the 2003-2006 National Health and Nutrition Examination Survey (NHANES), collected from January 1, 2003, through December 31, 2006, were used. Overall, 3454 nonpregnant adults 20 years or older who fasted for 6 hours or longer, with valid accelerometer data and with CV health metrics, were included in the study. Blood pressure (BP), body mass index (BMI), smoking status, diet, fasting plasma glucose level, and total cholesterol level were defined as ideal, intermediate, and poor on the basis of American Heart Association criteria. Results were weighted to account for sampling design, oversampling, and nonresponse. Significant increasing linear trends in mean daily MVPA were observed across CV health levels for BMI, BP, and fasting plasma glucose, supporting the inclusion of physical activity in the overall definition of ideal CV health. Copyright © 2014 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  20. Defining quality metrics and improving safety and outcome in allergy care.

    Science.gov (United States)

    Lee, Stella; Stachler, Robert J; Ferguson, Berrylin J

    2014-04-01

    The delivery of allergy immunotherapy in the otolaryngology office is variable and lacks standardization. Quality metrics encompass the measurement of factors associated with good patient-centered care. These factors have yet to be defined in the delivery of allergy immunotherapy. We developed and applied quality metrics to 6 allergy practices affiliated with an academic otolaryngic allergy center. This work was conducted at a tertiary academic center providing care to over 1500 patients. We evaluated methods and variability between the 6 sites. Tracking of errors and anaphylaxis was initiated across all sites. A nationwide survey of academic and private allergists was used to collect data on current practice and use of quality metrics. The most common types of errors recorded were patient identification errors (n = 4), followed by vial mixing errors (n = 3) and dosing errors (n = 2). There were 7 episodes of anaphylaxis, of which 2 were secondary to dosing errors, for a rate of 0.01%, or 1 in every 10,000 injection visits/year. Site visits showed that 86% of key safety measures were followed. Analysis of nationwide survey responses revealed that quality metrics are still not well defined by either medical or otolaryngic allergy practices. Academic practices were statistically more likely to use quality metrics (p = 0.021) and to perform systems reviews and audits than private practices (p = 0.005). Quality metrics in allergy delivery can help improve safety and quality of care. These metrics need to be further defined by otolaryngic allergists in the changing health care environment. © 2014 ARS-AAOA, LLC.

  1. Automated measurement of uptake in cerebellum, liver, and aortic arch in full-body FDG PET/CT scans.

    Science.gov (United States)

    Bauer, Christian; Sun, Shanhui; Sun, Wenqing; Otis, Justin; Wallace, Audrey; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M; Beichel, Reinhard R

    2012-06-01

    The purpose of this work was to develop and validate fully automated methods for uptake measurement of cerebellum, liver, and aortic arch in full-body PET/CT scans. Such measurements are of interest in the context of uptake normalization for quantitative assessment of metabolic activity and/or automated image quality control. Cerebellum, liver, and aortic arch regions were segmented with different automated approaches. Cerebella were segmented in PET volumes by means of a robust active shape model (ASM) based method. For liver segmentation, a largest possible hyperellipsoid was fitted to the liver in PET scans. The aortic arch was first segmented in CT images of a PET/CT scan by a tubular structure analysis approach, and the segmented result was then mapped to the corresponding PET scan. For each of the segmented structures, the average standardized uptake value (SUV) was calculated. To generate an independent reference standard for method validation, expert image analysts were asked to segment several cross sections of each of the three structures in 134 F-18 fluorodeoxyglucose (FDG) PET/CT scans. For each case, the true average SUV was estimated by utilizing statistical models and served as the independent reference standard. For automated aorta and liver SUV measurements, no statistically significant scale or shift differences were observed between automated results and the independent standard. In the case of the cerebellum, the scale and shift were not significantly different, if measured in the same cross sections that were utilized for generating the reference. In contrast, automated results were scaled 5% lower on average although not shifted, if FDG uptake was calculated from the whole segmented cerebellum volume. The estimated reduction in total SUV measurement error ranged between 54.7% and 99.2%, and the reduction was found to be statistically significant for cerebellum and aortic arch. With the proposed methods, the authors have demonstrated that
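
    The standardized uptake value behind these measurements is a simple normalization; the segmentation is the hard part and is not reproduced here. A sketch of the SUV arithmetic with invented numbers:

        import numpy as np

        def mean_suv(roi_bq_per_ml, injected_dose_bq, body_weight_g):
            """SUV = tissue concentration / (injected dose / body weight);
            with dose in Bq and weight in g the result is unitless
            (1 ml of tissue is taken as ~1 g)."""
            return np.mean(roi_bq_per_ml) / (injected_dose_bq / body_weight_g)

        roi = np.array([12000.0, 13500.0, 11800.0])   # hypothetical voxel values
        print(mean_suv(roi, injected_dose_bq=3.7e8, body_weight_g=75000.0))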

  2. Holographic Spherically Symmetric Metrics

    Science.gov (United States)

    Petri, Michael

    The holographic principle (HP) conjectures that the maximum number of degrees of freedom of any realistic physical system is proportional to the system's boundary area. The HP has its roots in the study of black holes. It has recently been applied to cosmological solutions. In this article we apply the HP to spherically symmetric static space-times. We find that any regular spherically symmetric object saturating the HP is subject to tight constraints on the (interior) metric, energy density, temperature and entropy density. Whenever gravity can be described by a metric theory, gravity is macroscopically scale invariant, and the laws of thermodynamics hold locally and globally, the (interior) metric of a regular holographic object is uniquely determined up to a constant factor and the interior matter state must follow well-defined scaling relations. When the metric theory of gravity is general relativity, the interior matter has an overall string equation of state (EOS) and a unique total energy density. Thus the holographic metric derived in this article can serve as a simple interior 4D realization of Mathur's string fuzzball proposal. Some properties of the holographic metric and its possible experimental verification are discussed. The geodesics of the holographic metric describe an isotropically expanding (or contracting) universe with a nearly homogeneous matter distribution within the local Hubble volume. Due to the overall string EOS, the active gravitational mass density is zero, resulting in a coasting expansion with Ht = 1, which is compatible with the recent GRB data.
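
    The bound asserted by the holographic principle can be stated in one line (the standard form, not a formula specific to this article):

        N_{\mathrm{dof}} \;\lesssim\; \frac{A}{4\, l_{P}^{2}},
        \qquad
        l_{P}^{2} = \frac{G\hbar}{c^{3}},

    so a "holographic object" in the article's sense is one whose interior degrees of freedom saturate this area bound.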

  3. Fatty liver diagnostic from medical examination to analyze the accuracy between the abdominal ultrasonography and liver hounsfield units

    International Nuclear Information System (INIS)

    Oh, Wang Kyun; Kim, Sang Hyun

    2017-01-01

    In abdominal Ultrasonography, fatty liver is diagnosed through increased hepatic parenchymal echogenicity, increased parenchymal density and unclear blood vessel boundaries. According to many studies, abdominal Ultrasonography has 60 ∼ 90% sensitivity and 84 ∼ 95% specificity in the diagnosis of fatty liver, but the result of Ultrasonography is dependent on the operator, so results can differ among operators, and quantitative measurement of fatty infiltration is impossible. Among examinees who received abdominal Ultrasonography and chest computed tomography (CT) on the same day, patients who were diagnosed with a fatty liver on Ultrasonography had liver Hounsfield Units (HU) measured on the chest CT images to analyze the accuracy of the fatty liver diagnosis. Of 720 examinees, 448 (62.2%) were diagnosed with a fatty liver by family physicians on abdominal Ultrasonography. Liver HU measurement on the chest CT images showed that 175 of the 720 (24.3%) had values below 40 HU, and 173 of these 175 were among the 448 diagnosed by Ultrasonography, a correspondence of 98.9%. This indicates that the operator's subjective ability has a great impact on the Ultrasonographic diagnosis of a fatty liver, and that a measured liver value below 40 HU on check-up chest CT can be used as reference material in the diagnosis of a fatty liver.

  4. Fatty liver diagnostic from medical examination to analyze the accuracy between the abdominal ultrasonography and liver hounsfield units

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Wang Kyun [Dept. of Radiology, Cheongju Medical Center, Cheongju (Korea, Republic of); Kim, Sang Hyun [Dept. of Radiological Science, Shinhan University, Uijeongbu (Korea, Republic of)

    2017-06-15

    In abdominal Ultrasonography, fatty liver is diagnosed through increased hepatic parenchymal echogenicity, increased parenchymal density and unclear blood vessel boundaries. According to many studies, abdominal Ultrasonography has 60 ∼ 90% sensitivity and 84 ∼ 95% specificity in the diagnosis of fatty liver, but the result of Ultrasonography is dependent on the operator, so results can differ among operators, and quantitative measurement of fatty infiltration is impossible. Among examinees who received abdominal Ultrasonography and chest computed tomography (CT) on the same day, patients who were diagnosed with a fatty liver on Ultrasonography had liver Hounsfield Units (HU) measured on the chest CT images to analyze the accuracy of the fatty liver diagnosis. Of 720 examinees, 448 (62.2%) were diagnosed with a fatty liver by family physicians on abdominal Ultrasonography. Liver HU measurement on the chest CT images showed that 175 of the 720 (24.3%) had values below 40 HU, and 173 of these 175 were among the 448 diagnosed by Ultrasonography, a correspondence of 98.9%. This indicates that the operator's subjective ability has a great impact on the Ultrasonographic diagnosis of a fatty liver, and that a measured liver value below 40 HU on check-up chest CT can be used as reference material in the diagnosis of a fatty liver.

  5. Experiences with Software Quality Metrics in the EMI middleware

    International Nuclear Information System (INIS)

    Alandes, M; Meneses, D; Pucciani, G; Kenny, E M

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality standard to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc., a set of associated metrics and KPIs (Key Performance Indicators) is identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project to extract “code metrics” on the status of the software products and “process metrics” related to the quality of the development and support process, such as reaction time to critical bugs, requirements tracking and delays in product releases.

  6. Fatty Liver

    International Nuclear Information System (INIS)

    Filippone, A.; Digiovandomenico, V.; Digiovandomenico, E.; Genovesi, N.; Bonomo, L.

    1991-01-01

    The authors report their experience with the combined use of US and CT in the study of diffuse and subtotal fatty infiltration of the liver. An apparent disagreement was initially found between the two examinations in the study of fatty infiltration. Fifty-five patients were studied with US and CT of the upper abdomen, as suggested by clinics. US showed normal liver echogenicity in 30 patients and diffusely increased echogenicity (bright liver) in 25 cases. In 5 patients with bright liver, US demonstrated a solitary hypoechoic area, appearing as a 'skip area', in the quadrate lobe. In 2 patients with bright liver, the hypoechoic area was seen in the right lobe and exhibited no typical US features of a 'skip area'. Bright liver was quantified by measuring the CT density of both liver and spleen. The relative attenuation values of spleen and liver were compared on plain and enhanced CT scans. In 5 cases with a hypoechoic area in the right lobe, CT findings were suggestive of hemangioma. A good correlation was found between bright liver and CT attenuation values, which decrease with increasing fat content of the liver. Moreover, CT attenuation values confirmed US findings in the study of typical 'skip areas', by demonstrating normal density - which suggests that CT can characterize normal tissue in atypical 'skip areas'.

  7. Critical concentrations of cadmium in human liver and kidney measured by prompt-gamma neutron activation

    International Nuclear Information System (INIS)

    Cohn, S.H.; Vartsky, D.; Yasumura, S.; Zanzi, I.; Ellis, K.J.

    1979-01-01

    Few data exist on Cd metabolism in human beings. In particular, data are needed on the role of parameters such as age, sex, weight, diet, smoking habits, and state of health. Prompt-gamma neutron activation analysis (PGNAA) provides the only currently available means for measuring in vivo levels of liver and kidney Cd. The method employs an 85 Ci 238Pu,Be neutron source and a gamma-ray detection system consisting of two Ge(Li) detectors. The dose delivered to the liver and left kidney is 666 mrem (the detection limit is 1.4 μg/g Cd in the liver and 2.0 mg Cd for one kidney). Absolute levels of Cd in the kidney and concentrations of Cd in the liver were measured in vivo in twenty healthy adult males using 238Pu,Be neutron sources. Organ Cd levels of smokers were significantly elevated above those of nonsmokers. The biological half-time for Cd in the body was estimated to be 15.7 yr. Cigarette smoking was estimated to result in the absorption of 1.9 μg of Cd per pack. No relationship was found between body stores of Cd (liver and kidney) and Cd or β2-microglobulin levels in urine and blood. Currently, the above neutron activation facility is being mounted on a 34-ft mobile trailer unit. This unit will be used to monitor levels of Cd in industrial workers. It is anticipated that critically important data, particularly on industrially exposed workers, will provide a better basis for determining critical concentrations and for the setting or revision of standards for industrial and environmental Cd pollution.

  8. Relevance of motion-related assessment metrics in laparoscopic surgery.

    Science.gov (United States)

    Oropesa, Ignacio; Chmarra, Magdalena K; Sánchez-González, Patricia; Lamata, Pablo; Rodrigues, Sharon P; Enciso, Silvia; Sánchez-Margallo, Francisco M; Jansen, Frank-Willem; Dankelman, Jenny; Gómez, Enrique J

    2013-06-01

    Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation of basic psychomotor laparoscopic skills and their correlation with the different abilities they seek to measure. A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, 3 novel tasks for surgical assessment were designed. Face and construct validation was performed, with a focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents, and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. Time, path length, and depth showed construct validity for all 3 tasks. Motion smoothness and idle time also showed validity for tasks involving bimanual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed high internal consistency, proving them to be the most task-independent of all the metrics analyzed. Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight into the relevance of the results shown in this study.
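
    Two of the validated metrics are easy to make concrete. Given a sampled tip trajectory, path length is the summed displacement and motion (un)smoothness is commonly scored from jerk; the jerk-based form below is a standard choice assumed for illustration, not quoted from the paper.

        import numpy as np

        def path_length(traj):
            """Total distance travelled by the instrument tip; traj is (N, 3)."""
            return np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))

        def mean_squared_jerk(traj, dt):
            """Jerk-based (un)smoothness: squared third derivative of position."""
            jerk = np.diff(traj, n=3, axis=0) / dt**3
            return np.mean(np.sum(jerk**2, axis=1))

        t = np.linspace(0.0, 1.0, 50)[:, None]
        traj = np.hstack([t, t**2, np.sin(3 * t)])   # hypothetical tip positions
        print(path_length(traj), mean_squared_jerk(traj, dt=1 / 49))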

  9. Metrical presentation boosts implicit learning of artificial grammar.

    Science.gov (United States)

    Selchenkova, Tatiana; François, Clément; Schön, Daniele; Corneyllie, Alexandra; Perrin, Fabien; Tillmann, Barbara

    2014-01-01

    The present study investigated whether a temporal hierarchical structure favors implicit learning. An artificial pitch grammar implemented with a set of tones was presented in two different temporal contexts, notably with either a strongly metrical structure or an isochronous structure. According to the Dynamic Attending Theory, external temporal regularities can entrain internal oscillators that guide attention over time, allowing for temporal expectations that influence the perception of future events. Based on this framework, it was hypothesized that the metrical structure provides a benefit for artificial grammar learning in comparison to an isochronous presentation. Our study combined behavioral and event-related potential measurements. Behavioral results demonstrated similar learning in both participant groups. By contrast, analyses of event-related potentials showed a larger P300 component and an earlier N2 component for the strongly metrical group during the exposure phase and the test phase, respectively. These findings suggest that the temporal expectations in the strongly metrical condition helped listeners to better process the pitch dimension, leading to improved learning of the artificial grammar.

  10. Sustainability, Health and Environmental Metrics: Impact on Ranking and Associations with Socioeconomic Measures for 50 U.S. Cities

    Directory of Open Access Journals (Sweden)

    Timothy Wade

    2013-02-01

    Full Text Available Waste and materials management, land use planning, transportation and infrastructure including water and energy can have indirect or direct beneficial impacts on the environment and public health. The potential for impact, however, is rarely viewed in an integrated fashion. To facilitate such an integrated view in support of community-based policy decision making, we catalogued and evaluated associations between common, publicly available Environmental (E), Health (H), and Sustainability (S) metrics and sociodemographic measurements (n = 10) for 50 populous U.S. cities. E, H, and S indices combined from two sources were derived from the component E, H, and S metrics for each city. A composite EHS Index was derived to reflect the integration across the E, H, and S indices. The rank order of high-performing cities was highly dependent on the E, H and S indices considered. When viewed together with sociodemographic measurements, our analyses further the understanding of the interplay between these broad categories and reveal significant sociodemographic disparities (e.g., race, education, income) associated with low-performing cities. Our analyses demonstrate how publicly available environmental, health, sustainability and socioeconomic data sets can be used to better understand interconnections between these diverse domains for more holistic community assessments.

  11. H-Metric: Characterizing Image Datasets via Homogenization Based on KNN-Queries

    Directory of Open Access Journals (Sweden)

    Welington M da Silva

    2012-01-01

    Precision-Recall is one of the main metrics for evaluating content-based image retrieval techniques. However, it does not provide a full picture of the properties of an image dataset immersed in a metric space. In this work, we describe an alternative metric named H-Metric, which is determined along a sequence of controlled modifications of the image dataset. The process is named homogenization and works by altering the homogeneity characteristics of the classes of images. The result is a measure of how hard it is to deal with a set of images with respect to content-based retrieval, offering support for analyzing configurations of distance functions and feature extractors.

  12. Clinical value of combined measurement of serum alpha-fetoprotein, alpha-L-fucosidase and ferritin levels in the diagnosis of primary liver cancer

    International Nuclear Information System (INIS)

    Zhang Aimin; Chai Xiaohong; Jin Ying; Dong Xuemei

    2005-01-01

    Objective: To investigate the clinical value of combined measurement of serum alpha-fetoprotein (AFP), alpha-L-fucosidase (AFU) and ferritin (SF) levels in the diagnosis of primary liver cancer. Methods: Serum AFP, AFU (with RIA) and SF (with a biochemical method) were determined in 52 patients with primary liver cancer and 40 controls. Results: The positive rates of AFP, AFU and SF in patients with liver cancer were 82.7%, 86.6% and 76.9%, respectively. The positive rates with combined measurement of AFP plus AFU, AFP plus SF, and AFP plus AFU plus SF were 94.2%, 90.4% and 98.1%, respectively. Conclusion: Combined measurement of AFP, AFU and SF can significantly increase the positive rate in the diagnosis of primary liver cancer. (authors)
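
    As a rough illustration of the combined-marker logic reported above, the sketch below scores a panel positive when any single marker exceeds its cut-off; the per-patient flags are simulated stand-ins, not the study data.

        import numpy as np

        # Hypothetical per-patient positivity flags (True = above the assay
        # cut-off) for AFP, AFU and SF; rates roughly mimic those reported.
        rng = np.random.default_rng(0)
        afp = rng.random(52) < 0.827
        afu = rng.random(52) < 0.866
        sf = rng.random(52) < 0.769

        def positive_rate(*markers):
            # A panel is positive when any of its markers is positive.
            return float(np.logical_or.reduce(markers).mean())

        print(f"AFP+AFU    : {positive_rate(afp, afu):.1%}")
        print(f"AFP+SF     : {positive_rate(afp, sf):.1%}")
        print(f"AFP+AFU+SF : {positive_rate(afp, afu, sf):.1%}")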

  13. Networks and centroid metrics for understanding football

    African Journals Online (AJOL)

    Gonçalo Dias

    games. However, it seems that the centroid metric, supported only by the position of players in the field ...... the strategy adopted by the coach (Gama et al., 2014). ... centroid distance as measures of team's tactical performance in youth football.

  14. A metric and frameworks for resilience analysis of engineered and infrastructure systems

    International Nuclear Information System (INIS)

    Francis, Royce; Bekera, Behailu

    2014-01-01

    In this paper, we have reviewed various approaches to defining resilience and the assessment of resilience. We have seen that while resilience is a useful concept, its diversity in usage complicates its interpretation and measurement. In this paper, we have proposed a resilience analysis framework and a metric for measuring resilience. Our analysis framework consists of system identification, resilience objective setting, vulnerability analysis, and stakeholder engagement. The implementation of this framework is focused on the achievement of three resilience capacities: adaptive capacity, absorptive capacity, and recoverability. These three capacities also form the basis of our proposed resilience factor and uncertainty-weighted resilience metric. We have also identified two important unresolved discussions emerging in the literature: the idea of resilience as an epistemological versus inherent property of the system, and design for ecological versus engineered resilience in socio-technical systems. While we have not resolved this tension, we have shown that our framework and metric promote the development of methodologies for investigating “deep” uncertainties in resilience assessment while retaining the use of probability for expressing uncertainties about highly uncertain, unforeseeable, or unknowable hazards in design and management activities. - Highlights: • While resilience is a useful concept, its diversity in usage complicates its interpretation and measurement. • We proposed a resilience analysis framework whose implementation is encapsulated within a resilience metric incorporating absorptive, adaptive, and restorative capacities. • We have shown that our framework and metric can support the investigation of “deep” uncertainties in resilience assessment or analysis. • We have discussed the role of quantitative metrics in design for ecological versus engineered resilience in socio-technical systems. • Our resilience metric supports
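
    The record describes the metric only qualitatively. As a loudly hypothetical sketch of how three such capacities might be combined into a single resilience factor, consider the product form below; the function name, arguments, and the product form itself are assumptions for illustration, not the authors' published formula.

        def resilience_factor(f_original, f_disrupted, f_recovered, speed_factor):
            """Hypothetical product-form resilience factor.

            f_original   -- system performance before disruption
            f_disrupted  -- performance just after disruption (absorptive capacity)
            f_recovered  -- performance in the new stable state (restorative capacity)
            speed_factor -- dimensionless recovery-speed term in (0, 1]
            """
            absorptive = f_disrupted / f_original
            restorative = f_recovered / f_original
            return speed_factor * absorptive * restorative

        # A system that drops to 60% of nominal performance and recovers to 95%,
        # with a recovery-speed term of 0.8:
        print(resilience_factor(100.0, 60.0, 95.0, 0.8))  # 0.456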

  15. Scintigraphic assessment of liver function in patients requiring liver surgery

    NARCIS (Netherlands)

    Cieślak, K.P.

    2018-01-01

    This thesis addresses various aspects of assessment of liver function using a quantitative liver function test, 99mTc-mebrofenin hepatobiliary scintigraphy (HBS). HBS enables direct measurement of at least one of the liver’s true processes with minimal external interference and offers the

  16. Project management metrics, KPIs, and dashboards a guide to measuring and monitoring project performance

    CERN Document Server

    Kerzner, Harold

    2017-01-01

    With the growth of complex projects, stakeholder involvement, and advancements in visual-based technology, metrics and KPIs (key performance indicators) are key factors in evaluating project performance. Dashboard reporting systems provide accessible project performance data, and sharing this vital data in a concise and consistent manner is a key communication responsibility of all project managers. This 3rd edition of Kerzner’s groundbreaking work includes the following updates: new sections on processing dashboard information, portfolio management PMO and metrics, and BI tool flexibility. PPT decks by chapter and a test bank will be available for use in seminar presentations and courses.

  17. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  18. Context-dependent ATC complexity metric

    NARCIS (Netherlands)

    Mercado Velasco, G.A.; Borst, C.

    2015-01-01

    Several studies have investigated Air Traffic Control (ATC) complexity metrics in a search for a metric that could best capture workload. These studies have shown how daunting the search for a universal workload metric (one that could be applied in different contexts: sectors, traffic patterns,

  19. Evidence-based Metrics Toolkit for Measuring Safety and Efficiency in Human-Automation Systems

    Data.gov (United States)

    National Aeronautics and Space Administration — APRIL 2016 NOTE: Principal Investigator moved to Rice University in mid-2015. Project continues at Rice with the same title (Evidence-based Metrics Toolkit for...

  20. DLA Energy Biofuel Feedstock Metrics Study

    Science.gov (United States)

    2012-12-11

    [Fragment of the feedstock hazard-metrics table; only the metric definitions are recoverable] Metric 1: state invasiveness ranking (moderately/highly invasive); Metric 2: genetically modified organism (GMO) hazard, Yes/No and hazard category; Metric 3: species hybridization. The metrics are scored across the biofuel life-cycle stages (stage 4: biofuel distribution; stage 5: biofuel use). ... may utilize GMO microbial or microalgae species across the applicable biofuel life cycles (stages 1-3). The following consequence Metrics 4-6 then

  1. A Cross-Domain Survey of Metrics for Modelling and Evaluating Collisions

    Directory of Open Access Journals (Sweden)

    Jeremy A. Marvel

    2014-09-01

    This paper provides a brief survey of the metrics for measuring probability, degree, and severity of collisions as applied to autonomous and intelligent systems. Though not exhaustive, this survey evaluates the state of the art of collision metrics, and assesses which are likely to aid in the establishment and support of autonomous system collision modelling. The survey includes metrics for (1) robot arms; (2) mobile robot platforms; (3) nonholonomic physical systems such as ground vehicles, aircraft, and naval vessels; and (4) virtual and mathematical models.

  2. Assessment of everyday extremely low frequency (ELF) electromagnetic fields (50-60 Hz) exposure: which metrics?

    International Nuclear Information System (INIS)

    Verrier, A.; Magne, I.; Souqes, M.; Lambrozo, J.

    2006-01-01

    Because electricity is encountered at every moment of the day, at home with household appliances or in every type of transportation, people are exposed most of the time, and in various ways, to extremely low frequency (ELF) electromagnetic fields (50-60 Hz). Due to a lack of knowledge about the biological mechanisms of 50 Hz magnetic fields, studies seeking to identify health effects of exposure use central tendency metrics. The objective of our study is to provide better information about these exposure measurements from three categories of metrics. We calculated exposure metrics from data series (79 everyday-exposed subjects) made up of approximately 20,000 recordings of magnetic fields, measured every 30 seconds for 7 days with an EMDEX II dosimeter. These indicators were divided into three categories: central tendency metrics, dispersion metrics and variability metrics. We used Principal Component Analysis (PCA), a multidimensional technique, to examine the relations between the different exposure metrics for a group of subjects. PCA captured 71.7% of the variance in the first two components: the first component (42.7%) was characterized by central tendency, and the second (29.0%) was composed of dispersion characteristics. The third component (17.2%) was composed of variability characteristics. This study confirms the need to improve exposure measurements by using at least two dimensions: intensity and dispersion. (authors)
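
    A minimal sketch of the kind of analysis described above, assuming standardized exposure metrics and simulated recordings in place of the study data (Python with scikit-learn; column meanings are illustrative):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Rows are subjects, columns are exposure metrics (e.g., mean field,
        # median, 95th percentile, standard deviation, rate of change).
        rng = np.random.default_rng(1)
        X = rng.lognormal(mean=0.0, sigma=0.5, size=(79, 5))

        # Metrics on different scales should be standardized before PCA.
        pca = PCA(n_components=3)
        pca.fit(StandardScaler().fit_transform(X))
        print("explained variance ratios:", pca.explained_variance_ratio_)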

  3. Individuality evaluation for paper based artifact-metrics using transmitted light image

    Science.gov (United States)

    Yamakoshi, Manabu; Tanaka, Junichi; Furuie, Makoto; Hirabayashi, Masashi; Matsumoto, Tsutomu

    2008-02-01

    Artifact-metrics is an automated method of authenticating artifacts based on a measurable intrinsic characteristic. Intrinsic characteristics, such as microscopic random patterns created during the manufacturing process, are very difficult to copy. Since the fiber distribution of paper is random, a transmitted light image of that distribution can be used for artifact-metrics. Little is known about the individuality of the transmitted light image, although it is an important requirement for intrinsic-characteristic artifact-metrics. Measuring individuality requires that the intrinsic characteristic of each artifact differs significantly; sufficient individuality can make an artifact-metric system highly resistant to brute-force attack. Here we investigate the influence of paper category, matching size of sample, and image resolution on the individuality of a transmitted light image of paper through a matching test using those images. More concretely, we evaluate FMR/FNMR curves by calculating similarity scores, using correlation coefficients between pairs of scanner input images, and estimate the EER as a probabilistic measure of the individuality of paper, using a matching method based on line segments that can localize the influence of rotation gaps of a sample when the matching size is large. As a result, we found that the transmitted light image of paper has sufficient individuality.
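
    A minimal sketch of the matching logic described above, assuming correlation-coefficient similarity scores and a simple threshold sweep to approximate the EER; the genuine/impostor score distributions are simulated, not measured.

        import numpy as np

        def similarity(a, b):
            # Correlation coefficient between two equally sized image patches.
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            return float(np.mean(a * b))

        def estimate_eer(genuine, impostor):
            # Sweep thresholds; FNMR = genuine scores below t,
            # FMR = impostor scores at or above t; the EER is where they meet.
            best_gap, eer = np.inf, None
            for t in np.sort(np.concatenate([genuine, impostor])):
                fnmr = float(np.mean(genuine < t))
                fmr = float(np.mean(impostor >= t))
                if abs(fnmr - fmr) < best_gap:
                    best_gap, eer = abs(fnmr - fmr), (fnmr + fmr) / 2
            return eer

        rng = np.random.default_rng(2)
        genuine = rng.normal(0.8, 0.05, 500)   # same sheet, re-scanned
        impostor = rng.normal(0.1, 0.10, 500)  # different sheets
        print(f"estimated EER: {estimate_eer(genuine, impostor):.4f}")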

  4. Green Chemistry Metrics with Special Reference to Green Analytical Chemistry

    Directory of Open Access Journals (Sweden)

    Marek Tobiszewski

    2015-06-01

    The concept of green chemistry is widely recognized in chemical laboratories. To properly measure an environmental impact of chemical processes, dedicated assessment tools are required. This paper summarizes the current state of knowledge in the field of development of green chemistry and green analytical chemistry metrics. The diverse methods used for evaluation of the greenness of organic synthesis, such as eco-footprint, E-Factor, EATOS, and Eco-Scale are described. Both the well-established and recently developed green analytical chemistry metrics, including NEMI labeling and analytical Eco-scale, are presented. Additionally, this paper focuses on the possibility of the use of multivariate statistics in evaluation of environmental impact of analytical procedures. All the above metrics are compared and discussed in terms of their advantages and disadvantages. The current needs and future perspectives in green chemistry metrics are also discussed.

  5. Green Chemistry Metrics with Special Reference to Green Analytical Chemistry.

    Science.gov (United States)

    Tobiszewski, Marek; Marć, Mariusz; Gałuszka, Agnieszka; Namieśnik, Jacek

    2015-06-12

    The concept of green chemistry is widely recognized in chemical laboratories. To properly measure an environmental impact of chemical processes, dedicated assessment tools are required. This paper summarizes the current state of knowledge in the field of development of green chemistry and green analytical chemistry metrics. The diverse methods used for evaluation of the greenness of organic synthesis, such as eco-footprint, E-Factor, EATOS, and Eco-Scale are described. Both the well-established and recently developed green analytical chemistry metrics, including NEMI labeling and analytical Eco-scale, are presented. Additionally, this paper focuses on the possibility of the use of multivariate statistics in evaluation of environmental impact of analytical procedures. All the above metrics are compared and discussed in terms of their advantages and disadvantages. The current needs and future perspectives in green chemistry metrics are also discussed.

  6. Analysis of Subjects' Vulnerability in a Touch Screen Game Using Behavioral Metrics.

    Science.gov (United States)

    Parsinejad, Payam; Sipahi, Rifat

    2017-12-01

    In this article, we report results of an experimental study conducted with volunteer subjects playing a touch-screen game with two unique difficulty levels. Subjects have knowledge of the rules of both game levels, but sufficient playing experience only with the easy level, making them vulnerable at the difficult level. Several behavioral metrics associated with subjects' playing the game are studied in order to assess subjects' mental-workload changes induced by their vulnerability. Specifically, these metrics are calculated from subjects' finger kinematics and decision-making times, which are then compared with baseline metrics, namely performance metrics pertaining to how well the game is played and a physiological metric called pNN50 extracted from heart-rate measurements. In balanced experiments, and supported by comparisons with the baseline metrics, it is found that some of the studied behavioral metrics have the potential to be used to infer subjects' mental-workload changes through different levels of the game. These metrics, which are decoupled from task specifics, relate to subjects' ability to develop strategies to play the game, and hence have the advantage of offering insight into subjects' task-load and vulnerability assessment across various experimental settings.
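
    The physiological baseline mentioned here, pNN50, has a standard definition: the proportion of successive inter-beat (RR) interval differences exceeding 50 ms. A minimal sketch with illustrative RR values:

        import numpy as np

        def pnn50(rr_ms):
            # rr_ms: 1-D sequence of inter-beat (RR) intervals in milliseconds.
            diffs = np.abs(np.diff(np.asarray(rr_ms, dtype=float)))
            return float(np.mean(diffs > 50.0))

        # Higher pNN50 means more beat-to-beat variability, often read as
        # lower mental workload.
        rr = [812, 790, 845, 801, 870, 795, 860, 815]
        print(f"pNN50 = {pnn50(rr):.2f}")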

  7. DIGITAL MARKETING: SUCCESS METRICS, FUTURE TRENDS

    OpenAIRE

    Preeti Kaushik

    2017-01-01

    Business marketing is one of the areas that has been tremendously affected by the digital world in the last few years. Digital marketing refers to advertising through digital channels. This paper provides a detailed study of the metrics used to measure the success of digital marketing platforms and a glimpse of where these technologies are headed by 2020.

  8. Symmetries of Taub-NUT dual metrics

    International Nuclear Information System (INIS)

    Baleanu, D.; Codoban, S.

    1998-01-01

    Recently, geometric duality was analyzed for a metric which admits Killing tensors. An interesting example arises when the manifold has Killing-Yano tensors. The symmetries of the dual metrics in the case of the Taub-NUT metric are investigated. Generic and non-generic symmetries of the dual Taub-NUT metric are analyzed

  9. Shear wave elastography results correlate with liver fibrosis histology and liver function reserve.

    Science.gov (United States)

    Feng, Yan-Hong; Hu, Xiang-Dong; Zhai, Lin; Liu, Ji-Bin; Qiu, Lan-Yan; Zu, Yuan; Liang, Si; Gui, Yu; Qian, Lin-Xue

    2016-05-07

    To evaluate the correlation of shear wave elastography (SWE) results with liver fibrosis histology and quantitative liver function reserve. Weekly subcutaneous injections of 60% carbon tetrachloride (1.5 mL/kg) were given to 12 canines for 24 wk to induce experimental liver fibrosis, with olive oil given to 2 control canines. At 24 wk, liver condition was evaluated using clinical biochemistry assays, SWE imaging, the lidocaine metabolite monoethylglycine-xylidide (MEGX) test, and histologic fibrosis grading. Clinical biochemistry assays were performed at the institutional central laboratory for routine liver function evaluation. Liver stiffness was measured in triplicate from three different intercostal spaces and expressed as mean liver stiffness modulus (LSM). Plasma concentrations of lidocaine and its metabolite MEGX were determined using high-performance liquid chromatography, repeated in duplicate. Liver biopsy samples were fixed in 10% formaldehyde, and liver fibrosis was graded using the modified histological activity index Knodell score (F0-F4). Correlations among histologic grading, LSM, and MEGX measures were analyzed with the Pearson linear correlation coefficient. At 24 wk, liver fibrosis histologic grading was as follows: F0, n = 2 (control); F1, n = 0; F2, n = 3; F3, n = 7; and F4, n = 2. SWE LSM was positively correlated with histologic grading (r = 0.835, P < 0.001). SWE results thus correlated with liver fibrosis histology and quantitative liver function reserve in experimental severe fibrosis and cirrhosis.

  10. Volume measurement variability in three-dimensional high-frequency ultrasound images of murine liver metastases

    International Nuclear Information System (INIS)

    Wirtzfeld, L A; Graham, K C; Groom, A C; MacDonald, I C; Chambers, A F; Fenster, A; Lacefield, J C

    2006-01-01

    The identification and quantification of tumour volume measurement variability is imperative for proper study design of longitudinal non-invasive imaging of pre-clinical mouse models of cancer. Measurement variability dictates the minimum detectable volume change, which in turn influences the scheduling of imaging sessions and the interpretation of observed changes in tumour volume. In this paper, variability is quantified for tumour volume measurements from 3D high-frequency ultrasound images of murine liver metastases. Experimental B16F1 liver metastases were analysed in different size ranges: less than 1 mm³, 1-4 mm³, 4-8 mm³ and 8-70 mm³. The intra- and inter-observer repeatability was high over a large range of tumour volumes, but the coefficients of variation (COV) varied over the volume ranges. The minimum and maximum intra-observer COV were 4% and 14% for the 1-4 mm³ and <1 mm³ tumours, respectively. For tumour volumes measured by segmenting parallel planes, the maximum inter-slice distance that maintained acceptable measurement variability increased from 100 to 600 μm as tumour volume increased. Comparison of free-breathing versus ventilated animals demonstrated that respiratory motion did not significantly change the measured volume. These results enable the design of more efficient imaging studies by using the measured variability to estimate the time required to observe a significant change in tumour volume
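
    The intra-observer coefficient of variation used here has a standard form: the standard deviation of the repeated volume measurements of a tumour divided by their mean. A minimal sketch with illustrative volumes:

        import numpy as np

        def coefficient_of_variation(repeated_volumes):
            v = np.asarray(repeated_volumes, dtype=float)
            return float(v.std(ddof=1) / v.mean())

        # Three hypothetical repeated segmentations of one metastasis (mm^3):
        print(f"COV = {coefficient_of_variation([2.10, 2.25, 1.98]):.1%}")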

  11. Pazarlama Performans Ölçütleri: Bir Literatür Taraması (Marketing Metrics: A Literature Review)

    Directory of Open Access Journals (Sweden)

    Güngör HACIOĞLU

    2012-01-01

    Marketing's inability to measure its contribution to firm performance has led to a loss of status within the firm, and therefore the marketing function has recently come under increasing pressure to evaluate its performance and be accountable. In this context, determining appropriate metrics to measure marketing performance is debated by both marketing practitioners and scholars. The aim of this study is to review the literature on the marketing metrics used to measure marketing performance and the importance attached to these metrics. In addition, the forces elevating the importance of marketing metrics, as well as the difficulties of and criticisms around measuring marketing performance, are explicated. Managerial applications and future research opportunities are also presented.

  12. Shared liver-like transcriptional characteristics in liver metastases and corresponding primary colorectal tumors.

    Science.gov (United States)

    Cheng, Jun; Song, Xuekun; Ao, Lu; Chen, Rou; Chi, Meirong; Guo, You; Zhang, Jiahui; Li, Hongdong; Zhao, Wenyuan; Guo, Zheng; Wang, Xianlong

    2018-01-01

    Background & Aims: Primary tumors of colorectal carcinoma (CRC) with liver metastasis might gain some liver-specific characteristics to adapt to the liver micro-environment. This study aims to reveal potential liver-like transcriptional characteristics associated with liver metastasis in primary colorectal carcinoma. Methods: Among the genes up-regulated in normal liver tissues versus normal colorectal tissues, we identified "liver-specific" genes whose expression levels ranked among the bottom 10% ("unexpressed") of all measured genes in both normal colorectal tissues and primary colorectal tumors without metastasis. These liver-specific genes were investigated for their expression in both the primary tumors and the corresponding liver metastases of seven primary CRC patients with liver metastasis, using microdissected samples. Results: Among the 3958 genes detected to be up-regulated in normal liver tissues versus normal colorectal tissues, we identified 12 liver-specific genes and found that two of them, ANGPTL3 and CFHR5, were unexpressed in microdissected primary colorectal tumors without metastasis but expressed in both microdissected liver metastases and corresponding primary colorectal tumors (Fisher's exact test, P < 0.05). Conclusions: Primary colorectal tumors may express some liver-specific genes which may help the tumor cells adapt to the liver micro-environment.

  13. Comparison of Employer Productivity Metrics to Lost Productivity Estimated by Commonly Used Questionnaires.

    Science.gov (United States)

    Gardner, Bethany T; Dale, Ann Marie; Buckner-Petty, Skye; Van Dillen, Linda; Amick, Benjamin C; Evanoff, Bradley

    2016-02-01

    The aim of the study was to assess construct and discriminant validity of four health-related work productivity loss questionnaires in relation to employer productivity metrics, and to describe variation in economic estimates of productivity loss provided by the questionnaires in healthy workers. Fifty-eight billing office workers completed surveys including health information and four productivity loss questionnaires. Employer productivity metrics and work hours were also obtained. Productivity loss questionnaires were weakly to moderately correlated with employer productivity metrics. Workers with more health complaints reported greater health-related productivity loss than healthier workers, but showed no loss on employer productivity metrics. Economic estimates of productivity loss showed wide variation among questionnaires, yet no loss of actual productivity. Additional studies are needed comparing questionnaires with objective measures in larger samples and other industries, to improve measurement methods for health-related productivity loss.

  14. Comparison of employer productivity metrics to lost productivity estimated by commonly used questionnaires

    Science.gov (United States)

    Gardner, Bethany T.; Dale, Ann Marie; Buckner-Petty, Skye; Van Dillen, Linda; Amick, Benjamin C.; Evanoff, Bradley

    2016-01-01

    Objective To assess construct and discriminant validity of four health-related work productivity loss questionnaires in relation to employer productivity metrics, and to describe variation in economic estimates of productivity loss provided by the questionnaires in healthy workers. Methods 58 billing office workers completed surveys including health information and four productivity loss questionnaires. Employer productivity metrics and work hours were also obtained. Results Productivity loss questionnaires were weakly to moderately correlated with employer productivity metrics. Workers with more health complaints reported greater health-related productivity loss than healthier workers, but showed no loss on employer productivity metrics. Economic estimates of productivity loss showed wide variation among questionnaires, yet no loss of actual productivity. Conclusions Additional studies are needed comparing questionnaires with objective measures in larger samples and other industries, to improve measurement methods for health-related productivity loss. PMID:26849261

  15. Technology transfer metrics: Measurement and verification of data/reusable launch vehicle business analysis

    Science.gov (United States)

    Trivoli, George W.

    1996-01-01

    Congress and the Executive Branch have mandated that all branches of the Federal Government exert a concentrated effort to transfer appropriate government and government contractor-developed technology to industrial use in the U.S. economy. For many years, NASA has had a formal technology transfer program to transmit information about new technologies developed for space applications into the industrial or commercial sector. Marshall Space Flight Center (MSFC) has been at the forefront of the development of U.S. industrial assistance programs using technologies developed at the Center. During 1992-93, MSFC initiated a technology transfer metrics study, the first of its kind among the various NASA centers. The metrics study is a continuing process, with periodic updates that reflect ongoing technology transfer activities.

  16. Performance metrics for Inertial Confinement Fusion implosions: aspects of the technical framework for measuring progress in the National Ignition Campaign

    International Nuclear Information System (INIS)

    Spears, B.K.; Glenzer, S.; Edwards, M.J.; Brandon, S.; Clark, D.; Town, R.; Cerjan, C.; Dylla-Spears, R.; Mapoles, E.; Munro, D.; Salmonson, J.; Sepke, S.; Weber, S.; Hatchett, S.; Haan, S.; Springer, P.; Moses, E.

    2011-01-01

    The National Ignition Campaign (NIC) uses non-igniting 'THD' capsules to study and optimize the hydrodynamic assembly of the fuel without burn. These capsules are designed to simultaneously reduce DT neutron yield and to maintain hydrodynamic similarity with the DT ignition capsule. We will discuss nominal THD performance and the associated experimental observables. We will show the results of large ensembles of numerical simulations of THD and DT implosions and their simulated diagnostic outputs. These simulations cover a broad range of both nominal and off nominal implosions. We will focus on the development of an experimental implosion performance metric called the experimental ignition threshold factor (ITFX). We will discuss the relationship between ITFX and other integrated performance metrics, including the ignition threshold factor (ITF), the generalized Lawson criterion (GLC), and the hot spot pressure (HSP). We will then consider the experimental results of the recent NIC THD campaign. We will show that we can observe the key quantities for producing a measured ITFX and for inferring the other performance metrics. We will discuss trends in the experimental data, improvement in ITFX, and briefly the upcoming tuning campaign aimed at taking the next steps in performance improvement on the path to ignition on NIF.

  17. Performance metrics for Inertial Confinement Fusion implosions: aspects of the technical framework for measuring progress in the National Ignition Campaign

    Energy Technology Data Exchange (ETDEWEB)

    Spears, B K; Glenzer, S; Edwards, M J; Brandon, S; Clark, D; Town, R; Cerjan, C; Dylla-Spears, R; Mapoles, E; Munro, D; Salmonson, J; Sepke, S; Weber, S; Hatchett, S; Haan, S; Springer, P; Moses, E

    2011-12-16

    The National Ignition Campaign (NIC) uses non-igniting 'THD' capsules to study and optimize the hydrodynamic assembly of the fuel without burn. These capsules are designed to simultaneously reduce DT neutron yield and to maintain hydrodynamic similarity with the DT ignition capsule. We will discuss nominal THD performance and the associated experimental observables. We will show the results of large ensembles of numerical simulations of THD and DT implosions and their simulated diagnostic outputs. These simulations cover a broad range of both nominal and off nominal implosions. We will focus on the development of an experimental implosion performance metric called the experimental ignition threshold factor (ITFX). We will discuss the relationship between ITFX and other integrated performance metrics, including the ignition threshold factor (ITF), the generalized Lawson criterion (GLC), and the hot spot pressure (HSP). We will then consider the experimental results of the recent NIC THD campaign. We will show that we can observe the key quantities for producing a measured ITFX and for inferring the other performance metrics. We will discuss trends in the experimental data, improvement in ITFX, and briefly the upcoming tuning campaign aimed at taking the next steps in performance improvement on the path to ignition on NIF.

  18. Magnetic resonance elastography is as accurate as liver biopsy for liver fibrosis staging.

    Science.gov (United States)

    Morisaka, Hiroyuki; Motosugi, Utaroh; Ichikawa, Shintaro; Nakazawa, Tadao; Kondo, Tetsuo; Funayama, Satoshi; Matsuda, Masanori; Ichikawa, Tomoaki; Onishi, Hiroshi

    2018-05-01

    Liver MR elastography (MRE) is available for the noninvasive assessment of liver fibrosis; however, no previous studies have compared the diagnostic ability of MRE with that of liver biopsy. To compare the diagnostic accuracy of liver fibrosis staging between MRE-based methods and liver biopsy, using resected liver specimens as the reference standard. A retrospective study at a single institution. In all, 200 patients who underwent preoperative MRE and subsequent surgical liver resection were included in this study. Data from 80 patients were used to estimate cutoff and distributions of liver stiffness values measured by MRE for each liver fibrosis stage (F0-F4, METAVIR system). In the remaining 120 patients, liver biopsy specimens were obtained from the resected liver tissues using a standard biopsy needle. 2D liver MRE with a gradient-echo-based sequence on a 1.5 or 3T scanner was used. Two radiologists independently measured the liver stiffness value on MRE, and two types of MRE-based methods (threshold and Bayesian prediction method) were applied. Two pathologists evaluated all biopsy samples independently to stage liver fibrosis. Surgically resected whole tissue specimens were used as the reference standard. The accuracy for liver fibrosis staging was compared between liver biopsy and MRE-based methods with a modified McNemar's test. Accurate fibrosis staging was achieved in 53.3% (64/120) and 59.1% (71/120) of patients using MRE with threshold and Bayesian methods, respectively, and in 51.6% (62/120) with liver biopsy. Accuracies of MRE-based methods for diagnoses of ≥F2 (90-91% [108-109/120]), ≥F3 (79-81% [95-97/120]), and F4 (82-85% [98-102/120]) were statistically equivalent to those of liver biopsy (≥F2, 79% [95/120], P ≤ 0.01; ≥F3, 88% [105/120], P ≤ 0.006; and F4, 82% [99/120], P ≤ 0.017). MRE can be an alternative to liver biopsy for fibrosis staging. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:1268-1275.

  19. Which are the cut-off values of 2D-Shear Wave Elastography (2D-SWE) liver stiffness measurements predicting different stages of liver fibrosis, considering Transient Elastography (TE) as the reference method?

    Energy Technology Data Exchange (ETDEWEB)

    Sporea, Ioan, E-mail: isporea@umft.ro; Bota, Simona, E-mail: bota_simona1982@yahoo.com; Gradinaru-Taşcău, Oana, E-mail: bluonmyown@yahoo.com; Şirli, Roxana, E-mail: roxanasirli@gmail.com; Popescu, Alina, E-mail: alinamircea.popescu@gmail.com; Jurchiş, Ana, E-mail: ana.jurchis@yahoo.com

    2014-03-15

    Introduction: To identify liver stiffness (LS) cut-off values assessed by means of 2D-Shear Wave Elastography (2D-SWE) for predicting different stages of liver fibrosis, considering Transient Elastography (TE) as the reference method. Methods: Our prospective study included 383 consecutive subjects, with or without hepatopathies, in whom LS was evaluated by means of TE and 2D-SWE. To discriminate between the stages of fibrosis by TE we used the following LS cut-offs (kPa): F1: 6, F2: 7.2, F3: 9.6 and F4: 14.5. Results: The rate of reliable LS measurements was similar for TE and 2D-SWE: 73.9% vs. 79.9%, p = 0.06. Older age and higher BMI were associated, for both TE and 2D-SWE, with the impossibility of obtaining reliable LS measurements. Reliable LS measurements by both elastographic methods were obtained in 65.2% of patients. A significant correlation was found between TE and 2D-SWE measurements (r = 0.68). The best LS cut-off values assessed by 2D-SWE for predicting different stages of liver fibrosis were: F ≥ 1: >7.1 kPa (AUROC = 0.825); F ≥ 2: >7.8 kPa (AUROC = 0.859); F ≥ 3: >8 kPa (AUROC = 0.897) and F = 4: >11.5 kPa (AUROC = 0.914). Conclusions: 2D-SWE is a reliable method for the non-invasive evaluation of liver fibrosis, considering TE as the reference method. The accuracy of 2D-SWE measurements increased with the severity of liver fibrosis.
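
    A minimal sketch of how such stiffness cut-offs can be derived, assuming simulated stiffness values and Youden's J statistic for cut-off selection (the study's actual selection criterion is not stated in this record):

        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        # Simulated 2D-SWE stiffness (kPa) with a binary label for
        # fibrosis >= F2 taken from the reference method (TE).
        rng = np.random.default_rng(3)
        stiffness = np.concatenate([rng.normal(6.5, 1.2, 120),    # < F2
                                    rng.normal(10.0, 2.5, 130)])  # >= F2
        labels = np.concatenate([np.zeros(120), np.ones(130)])

        fpr, tpr, thresholds = roc_curve(labels, stiffness)
        youden_j = tpr - fpr  # sensitivity + specificity - 1
        cutoff = thresholds[np.argmax(youden_j)]
        print(f"AUROC = {roc_auc_score(labels, stiffness):.3f}, "
              f"cut-off > {cutoff:.1f} kPa")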

  20. Interrelationships Among Several Variables Reflecting Quantitative Thinking in Elementary School Children with Particular Emphasis upon Those Measures Involving Metric and Decimal Skills

    Science.gov (United States)

    Selman, Delon; And Others

    1976-01-01

    The relationships among measures of quantitative thinking in first through fifth grade children assigned either to an experimental math program emphasizing tactile, manipulative, or individual activity in learning metric and decimal concepts, or to a control group, were examined. Tables are presented and conclusions discussed. (Author/JKS)

  1. On Information Metrics for Spatial Coding.

    Science.gov (United States)

    Souza, Bryan C; Pavão, Rodrigo; Belchior, Hindiael; Tort, Adriano B L

    2018-04-01

    The hippocampal formation is involved in navigation, and its neuronal activity exhibits a variety of spatial correlates (e.g., place cells, grid cells). The quantification of the information encoded by spikes has been standard procedure to identify which cells have spatial correlates. For place cells, most of the established metrics derive from Shannon's mutual information (Shannon, 1948), and convey information rate in bits/s or bits/spike (Skaggs et al., 1993, 1996). Despite their widespread use, the performance of these metrics in relation to the original mutual information metric has never been investigated. In this work, using simulated and real data, we find that the current information metrics correlate less with the accuracy of spatial decoding than the original mutual information metric. We also find that the top informative cells may differ among metrics, and show a surrogate-based normalization that yields comparable spatial information estimates. Since different information metrics may identify different neuronal populations, we discuss current and alternative definitions of spatially informative cells, which affect the metric choice. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
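
    The bits-per-spike measure referenced above (Skaggs et al., 1993) has a standard closed form, I = Σᵢ pᵢ (λᵢ/λ̄) log₂(λᵢ/λ̄) with λ̄ = Σᵢ pᵢλᵢ; a minimal sketch:

        import numpy as np

        def skaggs_information(occupancy_p, rate_map):
            # occupancy_p: probability of the animal being in each spatial bin
            # rate_map: mean firing rate (Hz) of the cell in each bin
            p = np.asarray(occupancy_p, dtype=float)
            lam = np.asarray(rate_map, dtype=float)
            mean_rate = float(np.sum(p * lam))
            nz = lam > 0  # 0 * log(0) is treated as 0
            ratio = lam[nz] / mean_rate
            bits_per_spike = float(np.sum(p[nz] * ratio * np.log2(ratio)))
            return bits_per_spike, bits_per_spike * mean_rate  # /spike, /s

        p = np.full(4, 0.25)                     # uniform occupancy
        rates = np.array([8.0, 0.5, 0.5, 0.5])   # fires mainly in one bin
        bps, bits_per_s = skaggs_information(p, rates)
        print(f"{bps:.2f} bits/spike, {bits_per_s:.2f} bits/s")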

  2. Integrating Metrics across the Marketing Curriculum: The Digital and Social Media Opportunity

    Science.gov (United States)

    Spiller, Lisa; Tuten, Tracy

    2015-01-01

    Modern digital and social media formats have revolutionized marketing measurement, producing an abundance of data, meaningful metrics, new tools, and methodologies. This increased emphasis on metrics in the marketing industry signifies the need for increased quantitative and critical thinking content in our marketing coursework if we are to…

  3. Generalized Painleve-Gullstrand metrics

    Energy Technology Data Exchange (ETDEWEB)

    Lin Chunyu [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China)], E-mail: l2891112@mail.ncku.edu.tw; Soo Chopin [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China)], E-mail: cpsoo@mail.ncku.edu.tw

    2009-02-02

    An obstruction to the implementation of spatially flat Painleve-Gullstrand (PG) slicings is demonstrated, and explicitly discussed for Reissner-Nordstroem and Schwarzschild-anti-deSitter spacetimes. Generalizations of PG slicings which are not spatially flat but which remain regular at the horizons are introduced. These metrics can be obtained from standard spherically symmetric metrics by physical Lorentz boosts. With these generalized PG metrics, problematic contributions to the imaginary part of the action in the Parikh-Wilczek derivation of Hawking radiation due to the obstruction can be avoided.

  4. Kerr metric in the deSitter background

    International Nuclear Information System (INIS)

    Vaidya, P.C.

    1984-01-01

    In addition to the Kerr metric with cosmological constant Λ, several other metrics are presented giving a Kerr-like solution of Einstein's equations in the background of a deSitter universe. A new metric is presented for what may be termed a rotating deSitter space-time: devoid of matter but containing a null fluid with twisting null rays. This metric reduces to the standard deSitter metric when the twist in the rays vanishes. The Kerr metric in this background is the immediate generalization of Schwarzschild's exterior metric with cosmological constant. (author)

  5. Multi-linear model set design based on the nonlinearity measure and H-gap metric.

    Science.gov (United States)

    Shaghaghi, Davood; Fatehi, Alireza; Khaki-Sedigh, Ali

    2017-05-01

    This paper proposes a model bank selection method for a large class of nonlinear systems with wide operating ranges. In particular, the nonlinearity measure and the H-gap metric are used to provide an effective algorithm for designing a model bank for the system. The proposed model bank is then accompanied by model predictive controllers to design a high-performance advanced process controller. The advantage of this method is the reduction of excessive switching between models and of the computational complexity of the controller bank, which can lead to performance improvement of the control system. The effectiveness of the method is verified by simulations as well as experimental studies on a pH neutralization laboratory apparatus, which confirm the efficiency of the proposed algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Kerr metric in cosmological background

    Energy Technology Data Exchange (ETDEWEB)

    Vaidya, P C [Gujarat Univ., Ahmedabad (India). Dept. of Mathematics

    1977-06-01

    A metric satisfying Einstein's equation is given which in the vicinity of the source reduces to the well-known Kerr metric and which at large distances reduces to the Robertson-Walker metric of a homogeneous cosmological model. The radius of the event horizon of the Kerr black hole in the cosmological background is determined.

  7. Is There a Need for New Marketing Communications Performance Metrics for Social Media?

    OpenAIRE

    Töllinen, Aarne; Karjaluoto, Heikki

    2011-01-01

    The objective of this paper is to develop a conceptual framework for measuring the effectiveness of social media marketing communications. With recent advances in information and communications technology, especially in social collaboration technologies, both academics and practitioners rethink whether the existing marketing communications performance metrics are still valid in the changing communications landscape, or is it time to devise entirely new metrics for measuring mar...

  8. Reliability, Validity, Comparability and Practical Utility of Cybercrime-Related Data, Metrics, and Information

    OpenAIRE

    Nir Kshetri

    2013-01-01

    With an increasing pervasiveness, prevalence and severity of cybercrimes, various metrics, measures and statistics have been developed and used to measure various aspects of this phenomenon. Cybercrime-related data, metrics, and information, however, pose important and difficult dilemmas regarding the issues of reliability, validity, comparability and practical utility. While many of the issues of the cybercrime economy are similar to other underground and underworld industries, this economy ...

  9. Global Surgery System Strengthening: It Is All About the Right Metrics.

    Science.gov (United States)

    Watters, David A; Guest, Glenn D; Tangi, Viliami; Shrime, Mark G; Meara, John G

    2018-04-01

    Progress in achieving "universal access to safe, affordable surgery, and anesthesia care when needed" is dependent on consensus not only about the key messages but also on what metrics should be used to set goals and measure progress. The Lancet Commission on Global Surgery not only achieved consensus on key messages but also recommended 6 key metrics to inform national surgical plans and monitor scale-up toward 2030. These metrics measure access to surgery, as well as its timeliness, safety, and affordability: (1) Two-hour access to the 3 Bellwether procedures (cesarean delivery, emergency laparotomy, and management of an open fracture); (2) Surgeon, Anesthetist, and Obstetrician workforce >20/100,000; (3) Surgical volume of 5000 procedures/100,000; (4) Reporting of perioperative mortality rate; and (5 and 6) Risk rates of catastrophic expenditure and impoverishment when requiring surgery. This article discusses the definition, validity, feasibility, relevance, and progress with each of these metrics. The authors share their experience of introducing the metrics in the Pacific and sub-Saharan Africa. We identify appropriate messages for each potential stakeholder-the patients, practitioners, providers (health services and hospitals), public (community), politicians, policymakers, and payers. We discuss progress toward the metrics being included in core indicator lists by the World Health Organization and the World Bank and how they have been, or may be, used to inform National Surgical Plans in low- and middle-income countries to scale-up the delivery of safe, affordable, and timely surgical and anesthesia care to all who need it.

  10. Standardized reporting of functioning information on ICF-based common metrics.

    Science.gov (United States)

    Prodinger, Birgit; Tennant, Alan; Stucki, Gerold

    2018-02-01

    In clinical practice and research, a variety of clinical data collection tools are used to collect information on people's functioning for clinical practice, research, and national health information systems. Reporting on ICF-based common metrics enables standardized documentation of functioning information in national health information systems. The objective of this methodological note on applying the ICF in rehabilitation is to demonstrate how to report functioning information collected with a data collection tool on ICF-based common metrics. We first specify the requirements for the standardized reporting of functioning information. Secondly, we introduce the methods needed for transforming functioning data to ICF-based common metrics. Finally, we provide an example. The requirements for standardized reporting are as follows: 1) having a common conceptual framework to enable content comparability between any health information; and 2) a measurement framework so that scores between two or more clinical data collection tools can be directly compared. The methods needed to achieve these requirements are the ICF Linking Rules and the Rasch measurement model. Using data collected incorporating the 36-item Short Form Health Survey (SF-36), the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0), and the Stroke Impact Scale 3.0 (SIS 3.0), the application of the standardized reporting based on common metrics is demonstrated. A subset of items from the three tools linked to common chapters of the ICF (d4 Mobility, d5 Self-care and d6 Domestic life) were entered as "super items" into the Rasch model. Good fit was achieved with no residual local dependency and a unidimensional metric. A transformation table allows for comparison between scales, and between a scale and the reporting common metric. Being able to report functioning information collected with commonly used clinical data collection tools with ICF-based common metrics enables clinicians

  11. Investigation on liver fat metabolism with CT

    International Nuclear Information System (INIS)

    Huebener, K.H.; Schmitt, W.G.H.

    1981-01-01

    Measurements of the density of normal and diffusely diseased liver parenchyma show a significant difference only in fatty liver. A linear relationship between fat content and physical density has been demonstrated. Computed tomographic densitometry of liver tissue correlates well with physical in vitro measurements of fat content and is sufficiently accurate for clinical use. Other types of liver disease cannot be differentiated by densitometry. Lipolysis in fatty liver during alcohol withdrawal in chronic alcoholism has been investigated; the fatty degeneration of the liver was found to decrease at a rate of about 1 percent per day. Fatty degeneration of the liver in acute pancreatitis and other diseases has also been investigated. CT densitometry of the liver should be considered a useful routine clinical method for determining the fat content of the liver. (author)

  12. Investigation on liver fat metabolism with CT

    Energy Technology Data Exchange (ETDEWEB)

    Huebener, K.H.; Schmitt, W.G.H. (Heidelberg Univ. (Germany, F.R.). Pathologisches Inst.)

    1981-01-01

    Measurements of the density of normal and diffusely diseased liver parenchyma show a significant difference only in fatty liver. A linear relationship between fat content and physical density has been demonstrated. Computed tomographic densitometry of liver tissue correlates well with physical in vitro measurements of fat content and is sufficiently accurate for clinical use. Other types of liver disease cannot be differentiated by densitometry. Lipolysis in fatty liver during alcohol withdrawal in chronic alcoholism has been investigated; the fatty degeneration of the liver was found to decrease at a rate of about 1 percent per day. Fatty degeneration of the liver in acute pancreatitis and other diseases has also been investigated. CT densitometry of the liver should be considered a useful routine clinical method for determining the fat content of the liver.

  13. Objectively Quantifying Radiation Esophagitis With Novel Computed Tomography–Based Metrics

    Energy Technology Data Exchange (ETDEWEB)

    Niedzielski, Joshua S., E-mail: jsniedzielski@mdanderson.org [Department of Radiation Physics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); University of Texas Houston Graduate School of Biomedical Science, Houston, Texas (United States); Yang, Jinzhong [Department of Radiation Physics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); University of Texas Houston Graduate School of Biomedical Science, Houston, Texas (United States); Stingo, Francesco [Department of Biostatistics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); Martel, Mary K.; Mohan, Radhe [Department of Radiation Physics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); University of Texas Houston Graduate School of Biomedical Science, Houston, Texas (United States); Gomez, Daniel R. [Department of Radiation Oncology, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); Briere, Tina M. [Department of Radiation Physics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); University of Texas Houston Graduate School of Biomedical Science, Houston, Texas (United States); Liao, Zhongxing [Department of Radiation Oncology, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); Court, Laurence E. [Department of Radiation Physics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); University of Texas Houston Graduate School of Biomedical Science, Houston, Texas (United States)

    2016-02-01

    Purpose: To study radiation-induced esophageal expansion as an objective measure of radiation esophagitis in patients with non-small cell lung cancer (NSCLC) treated with intensity modulated radiation therapy. Methods and Materials: Eighty-five patients had weekly intra-treatment CT imaging and esophagitis scoring according to Common Terminology Criteria for Adverse Events 4.0 (24 grade 0, 45 grade 2, and 16 grade 3). Nineteen esophageal expansion metrics based on mean, maximum, spatial length, and volume of expansion were calculated as voxel-based relative volume change, using the Jacobian determinant from deformable image registration between the planning and weekly CTs. An anatomic variability correction method was validated and applied to these metrics to reduce uncertainty. An analysis of expansion metrics and radiation esophagitis grade was conducted using normal tissue complication probability from univariate logistic regression and Spearman rank for grade 2 and grade 3 esophagitis endpoints, as well as the timing of expansion and esophagitis grade. The metrics' performance in classifying esophagitis was tested with receiver operating characteristic analysis. Results: Expansion increased with esophagitis grade. Thirteen of 19 expansion metrics had receiver operating characteristic area under the curve values >0.80 for both grade 2 and grade 3 esophagitis endpoints, with the highest performance from maximum axial expansion (MaxExp1) and esophageal length with axial expansion ≥30% (LenExp30%), with area under the curve values of 0.93 and 0.91 for grade 2 and 0.90 and 0.90 for grade 3 esophagitis, respectively. Conclusions: Esophageal expansion may be a suitable objective measure of esophagitis, particularly maximum axial esophageal expansion and esophageal length with axial expansion ≥30%, with a Jacobian value of 2.1 and 98.6 mm as the metric values for 50% probability of grade 3 esophagitis. The uncertainty in esophageal Jacobian calculations can be reduced
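
    A minimal sketch of the voxel-wise Jacobian-determinant computation that underlies these expansion metrics, assuming a dense displacement field in voxel units (a generic implementation, not the authors' registration pipeline):

        import numpy as np

        def jacobian_determinant(disp):
            # disp has shape (3, Z, Y, X); the deformation is phi(x) = x + disp(x),
            # so J = det(I + grad(disp)). Values > 1 mean local expansion.
            grads = np.array([np.gradient(disp[i]) for i in range(3)])  # (3,3,Z,Y,X)
            jac = grads + np.eye(3).reshape(3, 3, 1, 1, 1)
            return (jac[0, 0] * (jac[1, 1] * jac[2, 2] - jac[1, 2] * jac[2, 1])
                  - jac[0, 1] * (jac[1, 0] * jac[2, 2] - jac[1, 2] * jac[2, 0])
                  + jac[0, 2] * (jac[1, 0] * jac[2, 1] - jac[1, 1] * jac[2, 0]))

        # Synthetic field: uniform 10% expansion along x only.
        disp = np.zeros((3, 8, 8, 8))
        disp[2] = 0.1 * np.arange(8).reshape(1, 1, 8)
        print(jacobian_determinant(disp).mean())  # ~1.1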

  14. Objectively Quantifying Radiation Esophagitis With Novel Computed Tomography–Based Metrics

    International Nuclear Information System (INIS)

    Niedzielski, Joshua S.; Yang, Jinzhong; Stingo, Francesco; Martel, Mary K.; Mohan, Radhe; Gomez, Daniel R.; Briere, Tina M.; Liao, Zhongxing; Court, Laurence E.

    2016-01-01

    Purpose: To study radiation-induced esophageal expansion as an objective measure of radiation esophagitis in patients with non-small cell lung cancer (NSCLC) treated with intensity modulated radiation therapy. Methods and Materials: Eighty-five patients had weekly intra-treatment CT imaging and esophagitis scoring according to Common Terminology Criteria for Adverse Events 4.0 (24 grade 0, 45 grade 2, and 16 grade 3). Nineteen esophageal expansion metrics based on mean, maximum, spatial length, and volume of expansion were calculated as voxel-based relative volume change, using the Jacobian determinant from deformable image registration between the planning and weekly CTs. An anatomic variability correction method was validated and applied to these metrics to reduce uncertainty. An analysis of expansion metrics and radiation esophagitis grade was conducted using normal tissue complication probability from univariate logistic regression and Spearman rank for grade 2 and grade 3 esophagitis endpoints, as well as the timing of expansion and esophagitis grade. The metrics' performance in classifying esophagitis was tested with receiver operating characteristic analysis. Results: Expansion increased with esophagitis grade. Thirteen of 19 expansion metrics had receiver operating characteristic area under the curve values >0.80 for both grade 2 and grade 3 esophagitis endpoints, with the highest performance from maximum axial expansion (MaxExp1) and esophageal length with axial expansion ≥30% (LenExp30%), with area under the curve values of 0.93 and 0.91 for grade 2 and 0.90 and 0.90 for grade 3 esophagitis, respectively. Conclusions: Esophageal expansion may be a suitable objective measure of esophagitis, particularly maximum axial esophageal expansion and esophageal length with axial expansion ≥30%, with a Jacobian value of 2.1 and 98.6 mm as the metric values for 50% probability of grade 3 esophagitis. The uncertainty in esophageal Jacobian calculations can be reduced

  15. Two classes of metric spaces

    Directory of Open Access Journals (Sweden)

    Isabel Garrido

    2016-04-01

    The class of metric spaces (X,d) known as small-determined spaces, introduced by Garrido and Jaramillo, is properly defined by means of some type of real-valued Lipschitz functions on X. On the other hand, B-simple metric spaces introduced by Hejcman are defined in terms of some kind of bornologies of bounded subsets of X. In this note we present a common framework where both classes of metric spaces can be studied, which allows us not only to see the relationships between them but also to obtain new internal characterizations of these metric properties.

  16. CT head-scan dosimetry in an anthropomorphic phantom and associated measurement of ACR accreditation-phantom imaging metrics under clinically representative scan conditions

    Energy Technology Data Exchange (ETDEWEB)

    Brunner, Claudia C.; Stern, Stanley H.; Chakrabarti, Kish [U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993 (United States); Minniti, Ronaldo [National Institute of Standards and Technology, 100 Bureau Drive, Gaithersburg, Maryland 20899 (United States); Parry, Marie I. [Walter Reed National Military Medical Center, 8901 Rockville Pike, Bethesda, Maryland 20889 (United States); Skopec, Marlene [National Institutes of Health, 9000 Rockville Pike, Bethesda, Maryland 20892 (United States)

    2013-08-15

    Purpose: To measure radiation absorbed dose and its distribution in an anthropomorphic head phantom under clinically representative scan conditions in three widely used computed tomography (CT) scanners, and to relate those dose values to metrics such as high-contrast resolution, noise, and contrast-to-noise ratio (CNR) in the American College of Radiology CT accreditation phantom.Methods: By inserting optically stimulated luminescence dosimeters (OSLDs) in the head of an anthropomorphic phantom specially developed for CT dosimetry (University of Florida, Gainesville), we measured dose with three commonly used scanners (GE Discovery CT750 HD, Siemens Definition, Philips Brilliance 64) at two different clinical sites (Walter Reed National Military Medical Center, National Institutes of Health). The scanners were set to operate with the same data-acquisition and image-reconstruction protocols as used clinically for typical head scans, respective of the practices of each facility for each scanner. We also analyzed images of the ACR CT accreditation phantom with the corresponding protocols. While the Siemens Definition and the Philips Brilliance protocols utilized only conventional, filtered back-projection (FBP) image-reconstruction methods, the GE Discovery also employed its particular version of an adaptive statistical iterative reconstruction (ASIR) algorithm that can be blended in desired proportions with the FBP algorithm. We did an objective image-metrics analysis evaluating the modulation transfer function (MTF), noise power spectrum (NPS), and CNR for images reconstructed with FBP. For images reconstructed with ASIR, we only analyzed the CNR, since MTF and NPS results are expected to depend on the object for iterative reconstruction algorithms.Results: The OSLD measurements showed that the Siemens Definition and the Philips Brilliance scanners (located at two different clinical facilities) yield average absorbed doses in tissue of 42.6 and 43.1 m
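
    Of the image metrics evaluated here, the contrast-to-noise ratio has the simplest form; a minimal sketch from two ROI means and a background noise estimate (all values illustrative, not phantom measurements):

        import numpy as np

        def cnr(roi_a, roi_b, background):
            # |difference of ROI means| divided by the background noise (sd).
            return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(background)

        rng = np.random.default_rng(4)
        roi_a = rng.normal(90.0, 5.0, 400)        # e.g., low-contrast insert
        roi_b = rng.normal(100.0, 5.0, 400)       # e.g., surrounding material
        background = rng.normal(100.0, 5.0, 400)  # uniform noise region
        print(f"CNR = {cnr(roi_a, roi_b, background):.2f}")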

  17. Extraction of liver volumetry based on blood vessel from the portal phase CT dataset

    Science.gov (United States)

    Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Utsunomiya, Tohru; Shimada, Mitsuo

    2012-02-01

    At the liver surgery planning stage, liver volumetry is essential for surgeons. The main problem in liver extraction is the wide variability of livers in shape and size. Since the hepatic blood vessel structure varies from person to person and covers the liver region, the present method uses that information to extract the liver in two stages. The first stage extracts abdominal blood vessels in the form of hepatic and non-hepatic blood vessels. In the second stage, the extracted vessels are used to control extraction of the liver region automatically. Contrast-enhanced CT datasets of 50 cases, acquired only at the portal phase, were used. These data include 30 abnormal livers. A reference for all cases was produced by comparing the labeling results of two experts and correcting their inter-reader variability. Results of the proposed method agree with the reference at an average rate of 97.8%. Applying the metrics specified at the MICCAI workshop for liver segmentation gives: a volume overlap error of 4.4%, a volume difference of 0.3%, an average symmetric distance of 0.7 mm, a root mean square symmetric distance of 0.8 mm, and a maximum distance of 15.8 mm. These results represent the average over all data and show improved accuracy compared with current liver segmentation methods. The method appears promising for extracting liver volumetry across various shapes and sizes.
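
    The first two MICCAI evaluation metrics quoted above are purely volumetric and easy to state in code; a minimal sketch over binary masks follows (the three surface-distance metrics additionally require surface-voxel extraction and physical voxel spacing, omitted here):

        import numpy as np

        def volumetric_overlap_metrics(seg: np.ndarray, ref: np.ndarray) -> dict:
            # seg, ref: binary 3-D masks of the segmented and reference livers.
            seg, ref = seg.astype(bool), ref.astype(bool)
            intersection = float(np.logical_and(seg, ref).sum())
            union = float(np.logical_or(seg, ref).sum())
            return {
                "volume_overlap_error_pct": 100.0 * (1.0 - intersection / union),
                "volume_difference_pct": 100.0 * (float(seg.sum()) - float(ref.sum())) / float(ref.sum()),
            }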

  18. A Metric for Heterotic Moduli

    Science.gov (United States)

    Candelas, Philip; de la Ossa, Xenia; McOrist, Jock

    2017-12-01

    Heterotic vacua of string theory are realised, at large radius, by a compact threefold with vanishing first Chern class together with a choice of stable holomorphic vector bundle. These form a wide class of potentially realistic four-dimensional vacua of string theory. Despite all their phenomenological promise, there is little understanding of the metric on the moduli space of these vacua. What is sought is the analogue of special geometry for these vacua. The metric on the moduli space is important in phenomenology as it normalises D-terms and Yukawa couplings. It is also of interest in mathematics, since it generalises the metric, first found by Kobayashi, on the space of gauge field connections, to a more general context. Here we construct this metric, correct to first order in α′, in two ways: first by postulating a metric that is invariant under background gauge transformations of the gauge field, and also by dimensionally reducing heterotic supergravity. These methods agree and the resulting metric is Kähler, as is required by supersymmetry. Checking the metric is Kähler is intricate and the anomaly cancellation equation for the H field plays an essential role. The Kähler potential nevertheless takes a remarkably simple form: it is the Kähler potential of special geometry with the Kähler form replaced by the α′-corrected hermitian form.

  19. Application of localized 31P MRS saturation transfer at 7 T for measurement of ATP metabolism in the liver: reproducibility and initial clinical application in patients with non-alcoholic fatty liver disease

    International Nuclear Information System (INIS)

    Valkovic, Ladislav; Gajdosik, Martin; Chmelik, Marek; Trattnig, Siegfried; Traussnigg, Stefan; Kienbacher, Christian; Trauner, Michael; Wolf, Peter; Krebs, Michael; Bogner, Wolfgang; Krssak, Martin

    2014-01-01

    Saturation transfer (ST) phosphorus MR spectroscopy (31P MRS) enables in vivo insight into energy metabolism and thus could identify liver conditions currently diagnosed only by biopsy. This study assesses the reproducibility of localized 31P MRS ST in the liver at 7 T and tests its potential for noninvasive differentiation of non-alcoholic fatty liver (NAFL) and steatohepatitis (NASH). After ethics committee approval, the reproducibility of localized 31P MRS ST at 7 T and the biological variation of the acquired hepato-metabolic parameters were assessed in healthy volunteers. Subsequently, 16 suspected NAFL/NASH patients underwent MRS measurements and diagnostic liver biopsy. The Pi-to-ATP exchange parameters were compared between the groups by a Mann-Whitney U test and related to the liver fat content estimated by single-voxel proton (1H) MRS, measured at 3 T. The mean exchange rate constant (k) in healthy volunteers was 0.31 ± 0.03 s⁻¹ with a coefficient of variation of 9.0 %. Significantly lower exchange rates were found in NASH patients when compared to healthy volunteers and NAFL patients (k = 0.30 ± 0.05 s⁻¹). A significant correlation was found between the k value and the liver fat content (r = 0.824). The localized 31P MRS ST technique provides a tool for gaining insight into hepatic ATP metabolism and could contribute to the differentiation of NAFL and NASH. (orig.)

  20. Association of adult weight gain and nonalcoholic fatty liver in a cross-sectional study in Wan Song Community, China

    Directory of Open Access Journals (Sweden)

    W.-J. Zhang

    2014-02-01

    Full Text Available Our objective was to examine associations between adult weight gain and nonalcoholic fatty liver disease (NAFLD). Cross-sectional interview data from 844 residents of Wan Song Community, collected from October 2009 to April 2010, were analyzed in multivariate logistic regression models to examine odds ratios (OR) and 95% confidence intervals (CI) between NAFLD and weight change from age 20. Questionnaires, physical examinations, laboratory examinations, and ultrasonographic examination of the liver were carried out. Maximum rate of weight gain, body mass index, waist circumference, waist-to-hip ratio, systolic blood pressure, diastolic blood pressure, fasting blood glucose, cholesterol, triglycerides, uric acid, and alanine transaminase were higher in the NAFLD group than in the control group. HDL-C in the NAFLD group was lower than in the control group. As weight gain increased (measured as the difference between current weight and weight at age 20 years), the OR of NAFLD increased in multivariate models. NAFLD OR rose with increasing weight gain as follows: the OR (95%CI) for NAFLD associated with weight gain of 20+ kg compared to stable weight (change <5 kg) was 4.23 (2.49-7.09). Significantly increased NAFLD ORs were observed even for weight gains of 5-9.9 kg. For the “age 20 to highest lifetime weight” metric, the OR of NAFLD also increased as weight gain increased. For both the “age 20 to highest lifetime weight” metric and the “age 20 to current weight” metric, the insulin resistance index (HOMA-IR) increased as weight gain increased (P<0.001). In a stepwise multivariate regression analysis, a significant association was observed between adult weight gain and NAFLD (OR=1.027, 95%CI=1.002-1.055, P=0.025). We conclude that adult weight gain is strongly associated with NAFLD.

  1. Equilibrium thermodynamics and neutrino decoupling in quasi-metric cosmology

    Science.gov (United States)

    Østvang, Dag

    2018-05-01

    The laws of thermodynamics in the expanding universe are formulated within the quasi-metric framework. The quasi-metric cosmic expansion does not directly influence momenta of material particles, so the expansion directly cools null particles only (e.g., photons). Therefore, said laws differ substantially from their counterparts in standard cosmology. Consequently, all non-null neutrino mass eigenstates are predicted to have the same energy today as they had just after neutrino decoupling in the early universe. This indicates that the predicted relic neutrino background is strongly inconsistent with detection rates measured in solar neutrino detectors (Borexino in particular). Thus quasi-metric cosmology is in violent conflict with experiment unless some exotic property of neutrinos makes the relic neutrino background essentially undetectable (e.g., if all massive mass eigenstates decay into "invisible" particles over cosmic time scales). But in absence of hard evidence in favour of the necessary exotic neutrino physics needed to resolve said conflict, the current status of quasi-metric relativity has been changed to non-viable.

  2. On characterizations of quasi-metric completeness

    Energy Technology Data Exchange (ETDEWEB)

    Dag, H.; Romaguera, S.; Tirado, P.

    2017-07-01

    Hu proved in [4] that a metric space (X, d) is complete if and only if, for any closed subspace C of (X, d), every Banach contraction on C has a fixed point. Since then several authors have investigated the problem of characterizing metric completeness by means of fixed point theorems. Recently this problem has been studied in the more general context of quasi-metric spaces for different notions of completeness. Here we present a characterization of a kind of completeness for quasi-metric spaces by means of a quasi-metric version of Hu's theorem. (Author)

  3. Performance-Based Measures Associate With Frailty in Patients With End-Stage Liver Disease.

    Science.gov (United States)

    Lai, Jennifer C; Volk, Michael L; Strasburg, Debra; Alexander, Neil

    2016-12-01

    Physical frailty, as measured by the Fried Frailty Index, is increasingly recognized as a critical determinant of outcomes in patients with cirrhosis. However, its utility is limited by the inclusion of self-reported components. We aimed to identify performance-based measures associated with frailty in patients with cirrhosis. Patients with cirrhosis, aged 50 years or older, underwent: 6-minute walk test (cardiopulmonary endurance), chair stands in 30 seconds (muscle endurance), isometric knee extension (lower extremity strength), unipedal stance time (static balance), and maximal step length (dynamic balance/coordination). Linear regression associated each physical performance test with frailty. Principal components exploratory factor analysis evaluated the interrelatedness of frailty and the 5 physical performance tests. Of the 40 patients with cirrhosis, with a median age of 64 years and a median Model for End-stage Liver Disease (MELD) score of 12, 10 (25%) were frail by Fried Frailty Index ≥3. Frail patients with cirrhosis had poorer performance in 6-minute walk test distance (231 vs 338 m), 30-second chair stands (7 vs 10), isometric knee extension (86 vs 122 Newton meters), and maximal step length (22 vs 27 in.) (P ≤ 0.02 for each). Each physical performance test was significantly associated with frailty, and the factor analysis loaded frailty and the physical performance tests onto a single factor. Frailty in cirrhosis is a multidimensional construct that is distinct from liver dysfunction and incorporates endurance, strength, and balance. Our data provide specific targets for prehabilitation interventions aimed at reducing frailty in patients with cirrhosis in preparation for liver transplantation.

  4. Liver volume in thalassaemia major: relationship with body weight, serum ferritin, and liver function

    Energy Technology Data Exchange (ETDEWEB)

    Chan Yuleung; Law Manyee; Howard, Robert [Chinese University of Hong Kong, Department of Diagnostic Radiology and Organ Imaging, Prince of Wales Hospital, Hong Kong (China); Li Chikong; Chik Kiwai [Chinese University of Hong Kong, Department of Paediatrics, Prince of Wales Hospital, Hong Kong (China)

    2005-02-01

    It is not known whether body weight alone can adjust for the volume of liver in the calculation of the chelating dose in β-thalassaemia major patients, who frequently have iron overload and hepatitis. The hypothesis is that liver volume in children and adolescents suffering from β-thalassaemia major is affected by ferritin level and liver function. Thirty-five β-thalassaemia major patients aged 7-18 years and 35 age- and sex-matched controls had liver volume measured by MRI. Serum alanine aminotransferase (ALT) and ferritin levels were obtained in the thalassaemia major patients. Body weight explained 65 and 86% of the change in liver volume in β-thalassaemia major patients and age-matched control subjects, respectively. Liver volume/kilogram body weight was significantly higher (P<0.001) in thalassaemia major patients than in control subjects. There was a significant correlation between ALT level and liver volume/kilogram body weight (r=0.55, P=0.001). Patients with elevated ALT had significantly higher liver volume/kilogram body weight (mean 42.9±12 cm³/kg) than control subjects (mean 23.4±3.6 cm³/kg) and patients with normal ALT levels (mean 27.4±3.6 cm³/kg). Body weight is the most important single factor for liver-volume changes in thalassaemia major patients, but elevated ALT also has a significant role. Direct liver volume measurement for chelation dose adjustment may be advantageous in patients with elevated ALT. (orig.)

  5. Metrics for supporting the use of Modularisation in IPD

    DEFF Research Database (Denmark)

    Riitahuhta, Asko; Andreasen, Mogens Myrup

    1998-01-01

    Measuring Modularisation is a relatively new subject. Because Modularisation is rapidly gaining importance, it is necessary to create measurement systems for it. In the paper we present the theory base for metrics: business relations, the view of the Modularisation stage...

  6. Feasibility of Commercially Available, Fully Automated Hepatic CT Volumetry for Assessing Both Total and Territorial Liver Volumes in Liver Transplantation

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Cheong Il; Kim, Se Hyung; Rhim, Jung Hyo; Yi, Nam Joon; Suh, Kyung Suk; Lee, Jeong Min; Han, Joon Koo; Choi, Byung Ihn [Seoul National University Hospital, Seoul (Korea, Republic of)

    2013-02-15

    To assess the feasibility of commercially available, fully automated hepatic CT volumetry for measuring both total and territorial liver volumes, by comparison with interactive manual volumetry and measured ex-vivo liver volume. For the assessment of total and territorial liver volume, portal phase CT images of 77 recipients and 107 donors who donated the right hemiliver were used. Liver volume was measured using both the fully automated and interactive manual methods with Advanced Liver Analysis software. The quality of the automated segmentation was graded on a 4-point scale. Grading was performed by two radiologists in consensus. For the cases with excellent-to-good quality, the accuracy of automated volumetry was compared with interactive manual volumetry and measured ex-vivo liver volume, which was converted from weight, using the analysis of variance test and Pearson or Spearman correlation tests. Processing time for the automated and interactive manual methods was also compared. Excellent-to-good quality of automated segmentation for the total liver and right hemiliver was achieved in 57.1% (44/77) and 17.8% (19/107), respectively. For both total and right hemiliver volumes, there were no significant differences among automated, manual, and ex-vivo volumes except between automated volume and manual volume of the total liver (p = 0.011). There were good correlations between automated volume and ex-vivo liver volume (γ = 0.637 for total liver and γ = 0.767 for right hemiliver). Both correlation coefficients were higher than those of the manual method. Fully automated volumetry required significantly less time than the interactive manual method (total liver: 48.6 sec vs. 53.2 sec; right hemiliver: 182 sec vs. 244.5 sec). Fully automated hepatic CT volumetry is feasible and time-efficient for total liver volume measurement. However, its usefulness for territorial liver volumetry needs to be improved.

  7. Feasibility of Commercially Available, Fully Automated Hepatic CT Volumetry for Assessing Both Total and Territorial Liver Volumes in Liver Transplantation

    International Nuclear Information System (INIS)

    Shin, Cheong Il; Kim, Se Hyung; Rhim, Jung Hyo; Yi, Nam Joon; Suh, Kyung Suk; Lee, Jeong Min; Han, Joon Koo; Choi, Byung Ihn

    2013-01-01

    To assess the feasibility of commercially available, fully automated hepatic CT volumetry for measuring both total and territorial liver volumes, by comparison with interactive manual volumetry and measured ex-vivo liver volume. For the assessment of total and territorial liver volume, portal phase CT images of 77 recipients and 107 donors who donated the right hemiliver were used. Liver volume was measured using both the fully automated and interactive manual methods with Advanced Liver Analysis software. The quality of the automated segmentation was graded on a 4-point scale. Grading was performed by two radiologists in consensus. For the cases with excellent-to-good quality, the accuracy of automated volumetry was compared with interactive manual volumetry and measured ex-vivo liver volume, which was converted from weight, using the analysis of variance test and Pearson or Spearman correlation tests. Processing time for the automated and interactive manual methods was also compared. Excellent-to-good quality of automated segmentation for the total liver and right hemiliver was achieved in 57.1% (44/77) and 17.8% (19/107), respectively. For both total and right hemiliver volumes, there were no significant differences among automated, manual, and ex-vivo volumes except between automated volume and manual volume of the total liver (p = 0.011). There were good correlations between automated volume and ex-vivo liver volume (γ = 0.637 for total liver and γ = 0.767 for right hemiliver). Both correlation coefficients were higher than those of the manual method. Fully automated volumetry required significantly less time than the interactive manual method (total liver: 48.6 sec vs. 53.2 sec; right hemiliver: 182 sec vs. 244.5 sec). Fully automated hepatic CT volumetry is feasible and time-efficient for total liver volume measurement. However, its usefulness for territorial liver volumetry needs to be improved.

  8. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
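
    The two recommended measures have compact standard definitions in terms of the accuracy ratio Q = predicted/observed, which the sketch below assumes (both require strictly positive predictions and observations):

        import numpy as np

        def median_log_accuracy_ratio(predicted, observed):
            # Bias measure: 0 means no systematic bias; positive values
            # indicate systematic over-prediction.
            q = np.asarray(predicted, dtype=float) / np.asarray(observed, dtype=float)
            return float(np.median(np.log(q)))

        def median_symmetric_accuracy(predicted, observed):
            # Accuracy measure: a percentage error that penalizes over- and
            # under-prediction by the same factor symmetrically.
            q = np.asarray(predicted, dtype=float) / np.asarray(observed, dtype=float)
            return float(100.0 * (np.exp(np.median(np.abs(np.log(q)))) - 1.0))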

  9. The PROMIS Physical Function item bank was calibrated to a standardized metric and shown to improve measurement efficiency

    DEFF Research Database (Denmark)

    Rose, Matthias; Bjørner, Jakob; Gandek, Barbara

    2014-01-01

    OBJECTIVE: To document the development and psychometric evaluation of the Patient-Reported Outcomes Measurement Information System (PROMIS) Physical Function (PF) item bank and static instruments. STUDY DESIGN AND SETTING: The items were evaluated using qualitative and quantitative methods. A total...... response model was used to estimate item parameters, which were normed to a mean of 50 (standard deviation [SD]=10) in a US general population sample. RESULTS: The final bank consists of 124 PROMIS items covering upper, central, and lower extremity functions and instrumental activities of daily living...... to identify differences between age and disease groups. CONCLUSION: The item bank provides a common metric and can improve the measurement of PF by facilitating the standardization of patient-reported outcome measures and implementation of CATs for more efficient PF assessments over a larger range....

  10. Liver collagen in cirrhosis correlates with portal hypertension and liver dysfunction

    DEFF Research Database (Denmark)

    Nielsen, Kåre; Clemmesen, Jens Otto; Vassiliadis, Efstathios

    2014-01-01

    livers. In 20 of the livers, collagen proportionate area (CPA) was measured in more than one tissue sample. CPA showed significant correlations with HVPG and with various surrogate markers of hepatic dysfunction including albumin, bilirubin, INR, MELD score and Child-Pugh score. CPA reliably discriminated HVPG ≥10 mmHg, termed clinically significant portal hypertension...

  11. Conversion factors: SI metric and U.S. customary units

    Science.gov (United States)

    ,

    1977-01-01

    The policy of the U.S. Geological Survey is to foster use of the International System of Units (SI) which was defined by the 11th General Conference of Weights and Measures in 1960. This modernized metric system constitutes an international "language" by means of which communications throughout the world's scientific and economic communities may be improved. This publication is designed to familiarize the reader with the SI units of measurement that correspond to the common units frequently used in programs of the Geological Survey. In the near future, SI units will be used exclusively in most publications of the Survey; the conversion factors provided herein will help readers to obtain a "feel" for each unit and to "think metric."

  12. CT- and MRI-based volumetry of resected liver specimen: Comparison to intraoperative volume and weight measurements and calculation of conversion factors

    International Nuclear Information System (INIS)

    Karlo, C.; Reiner, C.S.; Stolzmann, P.; Breitenstein, S.; Marincek, B.; Weishaupt, D.; Frauenfelder, T.

    2010-01-01

    Objective: To compare virtual volume to intraoperative volume and weight measurements of resected liver specimens and to calculate appropriate conversion factors for better correlation. Methods: Preoperative CT (CT-group, n = 30) or MRI (MRI-group, n = 30) and postoperative MRI (n = 60) imaging was performed in 60 patients undergoing partial liver resection. Intraoperative volume and weight of the resected liver specimen were measured. Virtual volume measurements were performed by two readers (R1, R2) using dedicated software. Conversion factors were calculated. Results: Mean intraoperative resection weight/volume: CT: 855 g/852 mL; MRI: 872 g/860 mL. Virtual resection volume: CT: 960 mL (R1), 982 mL (R2); MRI: 1112 mL (R1), 1115 mL (R2). There was a strong positive correlation for both readers between intraoperative and virtual measurements (mean of both readers: CT: R = 0.88 (volume), R = 0.89 (weight); MRI: R = 0.95 (volume), R = 0.92 (weight)). Conversion factors: 0.85 (CT), 0.78 (MRI). Conclusion: CT- or MRI-based volumetry of resected liver specimens is accurate and recommended for preoperative planning. A conversion of the result is necessary to improve the correlation between intraoperative and virtual measurements. We found 0.85 for CT- and 0.78 for MRI-based volumetry to be the most appropriate conversion factors.
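
    As a worked example of applying the reported conversion factors (a small sketch; the function name is illustrative):

        def intraoperative_volume_estimate(virtual_volume_ml: float, modality: str) -> float:
            # Conversion factors reported in the study: 0.85 (CT), 0.78 (MRI).
            factors = {"CT": 0.85, "MRI": 0.78}
            return virtual_volume_ml * factors[modality]

        # The mean CT-based virtual volume of 960 mL (R1) maps to 960 * 0.85 = 816 mL,
        # much closer to the mean intraoperative volume of 852 mL than the raw estimate.
        print(intraoperative_volume_estimate(960.0, "CT"))  # 816.0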

  13. Factors influencing reliability of liver stiffness measurements using transient elastography (M-probe)—Monocentric experience

    Energy Technology Data Exchange (ETDEWEB)

    Şirli, Roxana, E-mail: roxanasirli@gmail.com; Sporea, Ioan, E-mail: isporea@umft.ro; Bota, Simona, E-mail: bota_simona1982@yahoo.com; Jurchiş, Ana, E-mail: ana.jurchis@yahoo.com

    2013-08-15

    Aim: To retrospectively assess the feasibility of transient elastography (TE) and the factors associated with failed and unreliable liver stiffness measurements (LSMs) in patients with chronic liver diseases. Material and methods: Our retrospective study included 8218 consecutive adult patients with suspected chronic liver diseases. In each patient, LSMs were performed with a FibroScan® device (Echosens, France) with the M probe. A TE measurement was defined as failed if no valid measurement was obtained after at least 10 shots, and as unreliable if fewer than 10 valid shots were obtained, the success rate (SR) was <60%, and/or the interquartile range/median value (IQR/Med) was ≥30%. Results: Of the 8218 patients, failed and unreliable LSMs were observed in 29.2% of cases. In univariate analysis, the following risk factors were associated with failed and unreliable measurements: age over 50 years (OR 2.04; 95%CI 1.84–2.26), female gender (OR 1.32; 95%CI 1.20–1.45), BMI > 27.7 kg/m² (OR 2.89; 95%CI 2.62–3.19), weight > 77 kg (OR 2.17; 95%CI 1.97–2.40) and height < 162 cm (OR 1.26; 95%CI 1.14–1.40). In multivariate analysis all the factors mentioned above were independently associated with the risk of failed and unreliable measurements. If all the negative predictive factors were present (a woman older than 50 years, with BMI > 27.7 kg/m², heavier than 77 kg and shorter than 162 cm), the rate of failed and unreliable measurements was 58.5%. In obese patients (BMI ≥ 30 kg/m²), the rate of failed and unreliable measurements was 49.5%. Conclusion: Failed and unreliable LSMs were observed in 29.2% of patients. Female gender, older age, higher BMI, higher weight and smaller height were significantly associated with failed and unreliable LSMs.

  14. Syntactic Complexity Metrics and the Readability of Programs in a Functional Computer Language

    NARCIS (Netherlands)

    van den Berg, Klaas; Engel, F.L.; Bouwhuis, D.G.; Bosser, T.; d'Ydewalle, G.

    This article reports on the definition and the measurement of the software complexity metrics of Halstead and McCabe for programs written in the functional programming language Miranda. An automated measurement of these metrics is described. In a case study, the correlation is established between the

  15. Brand metrics that matter

    NARCIS (Netherlands)

    Muntinga, D.; Bernritter, S.

    2017-01-01

    The brand is increasingly central to the organization. It is therefore essential to measure the brand's health, performance, and development. Selecting the right brand metrics, however, is a challenge. An enormous number of metrics compete for brand managers' attention. But which

  16. Privacy Metrics and Boundaries

    NARCIS (Netherlands)

    L-F. Pau (Louis-François)

    2005-01-01

    This paper aims at defining a set of privacy metrics (quantitative and qualitative) for the relation between a privacy protector and an information gatherer. The aims of such metrics are: to allow the assessment and comparison of different user scenarios and their differences; for

  17. Correlation of liver stiffness measured by FibroScan with sex and age in healthy adults undergoing physical examination

    Directory of Open Access Journals (Sweden)

    ZHAO Chongshan

    2016-04-01

    Full Text Available Objective: To determine the reference range of liver stiffness in a healthy population, and to investigate the influence of age and sex on liver stiffness. Methods: A total of 1794 healthy subjects who underwent physical examination in China National Petroleum Corporation Central Hospital from October 1, 2012 to October 31, 2014 were enrolled, and FibroScan was used to perform liver stiffness measurement (LSM). Since the LSM value was not normally distributed, the Wilcoxon rank sum test was used to compare LSM values between male and female subjects, the Kruskal-Wallis test was used to compare LSM values between different age groups, and Spearman's rank correlation analysis was used to analyze the correlation between LSM value and age. The one-sided percentile method was used to determine the range of normal reference values in male and female subjects and in different age groups. Results: LSM was successfully performed in 1590 subjects, a success rate of 88.63%. A total of 107 subjects were excluded due to abnormal liver enzymes. The analysis showed that the LSM value differed significantly between male and female subjects (Z=-4.980, P<0.001), as well as between different age groups (χ2=16.983, P=0.001). Age was positively correlated with LSM value (r=0.087, P=0.001). The reference range was estimated to be ≤7.1 kPa in adults, ≤7.0 kPa in females, and ≤7.2 kPa in males. By age, the reference range was estimated to be ≤6.8 kPa in persons aged 20-29 years, ≤6.7 kPa in persons aged 30-44 years, ≤7.8 kPa in persons aged 45-59 years, and ≤8.8 kPa in persons aged 60-74 years. Conclusion: Liver stiffness values are influenced by sex and age, which should be taken into account when performing liver stiffness measurement in healthy subjects.
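
    The one-sided percentile method mentioned above reduces to a single call; the sketch below assumes the conventional 95th percentile, since the abstract does not state which percentile was used:

        import numpy as np

        def upper_reference_limit(lsm_kpa, percentile=95.0):
            # One-sided reference range: the upper limit at or below which
            # the stated percentage of the healthy sample falls.
            return float(np.percentile(np.asarray(lsm_kpa, dtype=float), percentile))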

  18. Image characterization metrics for muon tomography

    Science.gov (United States)

    Luo, Weidong; Lehovich, Andre; Anashkin, Edward; Bai, Chuanyong; Kindem, Joel; Sossong, Michael; Steiger, Matt

    2014-05-01

    Muon tomography uses naturally occurring cosmic rays to detect nuclear threats in containers. Currently there are no systematic image characterization metrics for muon tomography. We propose a set of image characterization methods to quantify the imaging performance of muon tomography. These methods include tests of spatial resolution, uniformity, contrast, signal-to-noise ratio (SNR) and vertical smearing. Simulated phantom data and analysis methods were developed to evaluate metric applicability. Spatial resolution was determined as the FWHM of the point spread functions along the X, Y, and Z axes for 2.5 cm tungsten cubes. Uniformity was measured by drawing a volume of interest (VOI) within a large water phantom and defined as the standard deviation of voxel values divided by the mean voxel value. Contrast was defined as the peak signals of a set of tungsten cubes divided by the mean voxel value of the water background. SNR was defined as the peak signals of cubes divided by the standard deviation (noise) of the water background. Vertical smearing, i.e., vertical thickness blurring along the zenith axis for a set of 2 cm thick tungsten plates, was defined as the FWHM of the vertical spread function for the plates. These image metrics provide a useful tool to quantify the basic imaging properties of muon tomography.
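
    The uniformity, contrast, and SNR definitions above translate directly into code; a minimal sketch over the simulated voxel data (array names are illustrative):

        import numpy as np

        def uniformity(voi: np.ndarray) -> float:
            # Standard deviation of voxel values divided by the mean voxel value.
            return float(voi.std(ddof=1) / voi.mean())

        def contrast(cube_peaks: np.ndarray, water_background: np.ndarray) -> float:
            # Peak signals of the tungsten cubes over the mean background value.
            return float(np.mean(cube_peaks) / water_background.mean())

        def snr(cube_peaks: np.ndarray, water_background: np.ndarray) -> float:
            # Peak signals of the cubes over the background noise.
            return float(np.mean(cube_peaks) / water_background.std(ddof=1))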

  19. Adding Liver Stiffness Measurement to the Routine Evaluation of Hepatocellular Carcinoma Resectability Can Optimize Clinical Outcome.

    Science.gov (United States)

    Cucchetti, Alessandro; Cescon, Matteo; Colecchia, Antonio; Neri, Flavia; Cappelli, Alberta; Ravaioli, Matteo; Mazzotti, Federico; Ercolani, Giorgio; Festi, Davide; Pinna, Antonio Daniele

    2017-10-01

    Purpose  Liver stiffness (LS) has been shown to be of use in patients with chronic liver disease, but its utility in surgical judgment still needs to be proven. A decision-making approach was applied to evaluate whether LS measurement before surgery for hepatocellular carcinoma (HCC) can be useful in avoiding post-hepatectomy liver failure (PHLF). Materials and Methods  Decision curve analysis (DCA) was applied to 202 HCC patients (2008-14) with LS measurement prior to hepatectomy to verify whether the occurrence of PHLF grades B/C could be reduced through a decision-making approach with LS. Results  Within 90 days of surgery, 4 patients died (2%) and grade B/C PHLF occurred in 29.7% of cases. Ascites and/or pleural effusion, treatable with medical therapy, were the most frequent complications. DCA showed that, applying the "expected utility theory", LS measurement can prevent up to 39% of PHLF cases without excluding from surgery any patient who would have had an uncomplicated postoperative course. LS measurement does not add any information to normal clinical judgment for patients with low "expected utility theory" fulfilment. However, the degree of PHLF can be minor, and "risk seeking" individuals can accept such a risk on the basis of surgical benefits. © Georg Thieme Verlag KG Stuttgart · New York.
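
    The abstract does not spell out the DCA computation; the quantity normally plotted in a decision curve is the net benefit at each threshold probability, sketched below under that standard (Vickers-style) formulation, which may differ in detail from the paper's utility analysis:

        def net_benefit(true_pos: int, false_pos: int, n: int, p_threshold: float) -> float:
            # Net benefit of acting on a model at a given threshold probability:
            # gain from true positives minus harm from false positives,
            # weighted by the odds of the threshold probability.
            return true_pos / n - (false_pos / n) * (p_threshold / (1.0 - p_threshold))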

  20. Assessment of liver volume with spiral computerized tomography scanning: predicting liver volume by age and height

    OpenAIRE

    Madhu Sharma; Abhishek Singh; Shewtank Goel; Setu Satani; Kavita Mudgil

    2016-01-01

    Background: Estimation of liver size has critical clinical implications. Precise knowledge of liver dimensions and volume is a prerequisite for clinical assessment of liver disorders. Liver span as measured by palpation and USG is prone to inter-observer variability and poor repeatability. The aim was to assess the normal liver volume of healthy adults using spiral computed tomography scans and to observe its relationship with various body indices. Methods: In this prospective study, all the...

  1. Validation of network communicability metrics for the analysis of brain structural networks.

    Directory of Open Access Journals (Sweden)

    Jennifer Andreotti

    Full Text Available Computational network analysis provides new methods to analyze the brain's structural organization based on diffusion imaging tractography data. Networks are characterized by global and local metrics that have recently given promising insights into diagnosis and the further understanding of psychiatric and neurologic disorders. Most of these metrics are based on the idea that information in a network flows along the shortest paths. In contrast to this notion, communicability is a broader measure of connectivity which assumes that information could flow along all possible paths between two nodes. In our work, the features of network metrics related to communicability were explored for the first time in the healthy structural brain network. In addition, the sensitivity of such metrics was analysed using simulated lesions to specific nodes and network connections. Results showed advantages of communicability over conventional metrics in detecting densely connected nodes as well as subsets of nodes vulnerable to lesions. In addition, communicability centrality was shown to be widely affected by the lesions, and the changes were negatively correlated with the distance from the lesion site. In summary, our analysis suggests that communicability metrics may provide an insight into the integrative properties of the structural brain network and that these metrics may be useful for the analysis of brain networks in the presence of lesions. Nevertheless, the interpretation of communicability is not straightforward; hence these metrics should be used as a supplement to the more standard connectivity network metrics.
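
    Communicability has a compact closed form under its standard (Estrada-Hatano) definition, which the sketch below assumes; the study's exact node-level variants may differ:

        import numpy as np
        from scipy.linalg import expm

        def communicability(adjacency: np.ndarray) -> np.ndarray:
            # G = exp(A): entry (p, q) counts walks of every length k between
            # p and q, down-weighted by 1/k!, instead of only shortest paths.
            return expm(adjacency)

        def communicability_centrality(adjacency: np.ndarray) -> np.ndarray:
            # Row sums of G (excluding self-communicability on the diagonal)
            # give one node-level communicability measure.
            G = expm(adjacency)
            return G.sum(axis=1) - np.diag(G)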

  2. Concordance-based Kendall's Correlation for Computationally-Light vs. Computationally-Heavy Centrality Metrics: Lower Bound for Correlation

    Directory of Open Access Journals (Sweden)

    Natarajan Meghanathan

    2017-01-01

    Full Text Available We identify three different levels of correlation (pair-wise relative ordering, network-wide ranking and linear regression) that could be assessed between a computationally-light centrality metric and a computationally-heavy centrality metric for real-world networks. The Kendall's concordance-based correlation measure can be used to quantitatively assess how well the relative ordering of two vertices vi and vj with respect to a computationally-light centrality metric matches their relative ordering with respect to a computationally-heavy centrality metric. We hypothesize that the pair-wise relative ordering (concordance-based) assessment of the correlation between centrality metrics is the strictest of the three levels of correlation, and claim that the Kendall concordance-based correlation coefficient will be lower than the correlation coefficients observed with the more relaxed levels of correlation (the linear regression-based Pearson's product-moment correlation coefficient and the network-wide ranking-based Spearman's correlation coefficient). We validate our hypothesis by evaluating the three correlation coefficients between two sets of centrality metrics: the computationally-light degree and local clustering coefficient complement-based degree centrality metrics, and the computationally-heavy eigenvector centrality, betweenness centrality and closeness centrality metrics, for a diverse collection of 50 real-world networks.
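
    The three-level comparison is easy to reproduce on any graph; the sketch below uses a random stand-in network rather than the study's 50 real-world networks:

        import networkx as nx
        from scipy.stats import kendalltau, pearsonr, spearmanr

        G = nx.erdos_renyi_graph(200, 0.05, seed=1)     # illustrative network
        nodes = list(G.nodes())

        light = [G.degree(v) for v in nodes]            # computationally light
        bc = nx.betweenness_centrality(G)               # computationally heavy
        heavy = [bc[v] for v in nodes]

        # Per the hypothesis, Kendall's tau is typically the lowest of the three.
        for name, fn in (("Kendall", kendalltau),
                         ("Spearman", spearmanr),
                         ("Pearson", pearsonr)):
            print(name, round(fn(light, heavy)[0], 3))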

  3. A new liver function test using the asialoglycoprotein-receptor system on the liver cell membrane, 2

    International Nuclear Information System (INIS)

    Kawa, Soukichi; Hazama, Hiroshi; Kojima, Michimasa

    1986-01-01

    We produced a labeled neoglycoprotein (GHSA) that is physiologically equivalent to ASGP, and quantitatively examined whether its uptake by the liver is dose-related using the following methods: 1) a binding assay between GHSA and ASGP receptors, 2) measurement of the liver extraction ratio in the initial circulation following administration into the portal vein, and 3) measurement of clearance in normal rats and rats with galactosamine-induced acute liver disorder. The binding assay showed a linear relationship between the concentration of 125I-GHSA and the amount of ASGP receptors obtained from the rat liver. A membrane assay using 125I-GHSA and the liver cell membrane revealed similar results. The liver extraction ratio in the initial circulation following administration into the portal vein of normal rabbits was highly dose-dependent (r = -0.95 in the range of 5-100 μg GHSA). Serial imaging of 99mTc-GHSA during the two-hour period after administration into the peripheral blood showed specific accumulation in the liver beginning immediately after the intravenous injection, with subsequent transport mainly via the biliary system into the small intestine in the normal rat and mainly into the urine in the bile-duct-ligated rat. As a dynamic model of 99mTc-GHSA, its circulation through the heart and liver and its inactivated release from the liver were used, and a two-compartment analysis was applied to measurement curves of the heart and liver to obtain clearance parameters. The concentration of administered 99mTc-GHSA (50-100 μg/100 g body weight) showed a positive linear relationship with clearance. Administration of 50 μg/100 g body weight of 99mTc-GHSA revealed a significant correlation (p < 0.001) between clearance and ASGP receptor activity in normal rats and rats with galactosamine-induced acute liver disorder. (J.P.N.)

  4. Determination of a Screening Metric for High Diversity DNA Libraries.

    Science.gov (United States)

    Guido, Nicholas J; Handerson, Steven; Joseph, Elaine M; Leake, Devin; Kung, Li A

    2016-01-01

    The fields of antibody engineering, enzyme optimization and pathway construction rely increasingly on screening complex variant DNA libraries. These highly diverse libraries allow researchers to sample a maximized sequence space; and therefore, more rapidly identify proteins with significantly improved activity. The current state of the art in synthetic biology allows for libraries with billions of variants, pushing the limits of researchers' ability to qualify libraries for screening by measuring the traditional quality metrics of fidelity and diversity of variants. Instead, when screening variant libraries, researchers typically use a generic, and often insufficient, oversampling rate based on a common rule-of-thumb. We have developed methods to calculate a library-specific oversampling metric, based on fidelity, diversity, and representation of variants, which informs researchers, prior to screening the library, of the amount of oversampling required to ensure that the desired fraction of variant molecules will be sampled. To derive this oversampling metric, we developed a novel alignment tool to efficiently measure frequency counts of individual nucleotide variant positions using next-generation sequencing data. Next, we apply a method based on the "coupon collector" probability theory to construct a curve of upper bound estimates of the sampling size required for any desired variant coverage. The calculated oversampling metric will guide researchers to maximize their efficiency in using highly variant libraries.
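
    The "coupon collector" idea referenced above gives a quick expected-sampling estimate; the sketch below assumes equally represented variants, whereas the paper's metric additionally corrects for fidelity and skewed representation:

        import numpy as np

        def expected_screening_size(n_variants: int, coverage: float) -> float:
            # Coupon-collector estimate of the expected number of clones to
            # screen before observing `coverage` of n equally likely variants:
            # n * (H_n - H_{n-k}), where k = ceil(coverage * n) and H_m is
            # the m-th harmonic number.
            k = int(np.ceil(coverage * n_variants))
            H = lambda m: float(np.sum(1.0 / np.arange(1, m + 1)))
            return n_variants * (H(n_variants) - H(n_variants - k))

        # e.g. covering 95% of 10,000 variants takes ~30,000 draws (~3x oversampling)
        print(expected_screening_size(10_000, 0.95))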

  5. Determination of a Screening Metric for High Diversity DNA Libraries.

    Directory of Open Access Journals (Sweden)

    Nicholas J Guido

    Full Text Available The fields of antibody engineering, enzyme optimization and pathway construction rely increasingly on screening complex variant DNA libraries. These highly diverse libraries allow researchers to sample a maximized sequence space; and therefore, more rapidly identify proteins with significantly improved activity. The current state of the art in synthetic biology allows for libraries with billions of variants, pushing the limits of researchers' ability to qualify libraries for screening by measuring the traditional quality metrics of fidelity and diversity of variants. Instead, when screening variant libraries, researchers typically use a generic, and often insufficient, oversampling rate based on a common rule-of-thumb. We have developed methods to calculate a library-specific oversampling metric, based on fidelity, diversity, and representation of variants, which informs researchers, prior to screening the library, of the amount of oversampling required to ensure that the desired fraction of variant molecules will be sampled. To derive this oversampling metric, we developed a novel alignment tool to efficiently measure frequency counts of individual nucleotide variant positions using next-generation sequencing data. Next, we apply a method based on the "coupon collector" probability theory to construct a curve of upper bound estimates of the sampling size required for any desired variant coverage. The calculated oversampling metric will guide researchers to maximize their efficiency in using highly variant libraries.

  6. [Combination of NAFLD Fibrosis Score and liver stiffness measurement for identification of moderate fibrosis stages (II & III) in non-alcoholic fatty liver disease].

    Science.gov (United States)

    Drolz, Andreas; Wehmeyer, Malte; Diedrich, Tom; Piecha, Felix; Schulze Zur Wiesch, Julian; Kluwe, Johannes

    2018-01-01

    Non-alcoholic fatty liver disease (NAFLD) has become one of the most frequent causes of chronic liver disease. Currently, therapeutic options for NAFLD patients are limited, but new pharmacologic agents are being investigated in the course of clinical trials. Because most of these studies focus on patients with fibrosis stages II and III (according to Kleiner), non-invasive identification of patients with intermediate fibrosis stages (II and III) is of increasing interest. The aim was to evaluate the NAFLD Fibrosis Score (NFS) and liver stiffness measurement (LSM) for prediction of fibrosis stages II/III. Patients with a histologically confirmed NAFLD diagnosis were included in the study. All patients underwent a clinical and laboratory examination as well as LSM prior to liver biopsy. The predictive value of NFS and LSM with respect to identification of fibrosis stages II/III was assessed. 134 NAFLD patients were included and analyzed. Median age was 53 (IQR 36-60) years, and 55 patients (41%) were female. 82% of our patients were overweight/obese with typical aspects of metabolic syndrome. 84 patients (66%) had liver fibrosis, 42 (50%) advanced fibrosis. LSM and NFS correlated with fibrosis stage (r = 0.696 and r = 0.685, respectively) and were used as criteria for identifying fibrosis stages II/III. If both criteria were met, the probability of fibrosis stage II/III was 61%. If neither criterion was met, the chance of fibrosis stage II/III was only 6% (negative predictive value 94%). The combination of LSM and NFS enables identification of patients with a significant probability of fibrosis stage II/III. Accordingly, these tests, especially in combination, may be a suitable screening tool for fibrosis stages II/III in NAFLD. The use of these non-invasive methods might also help to avoid unnecessary biopsies. © Georg Thieme Verlag KG Stuttgart · New York.

  7. Fixed point theory in metric type spaces

    CERN Document Server

    Agarwal, Ravi P; O’Regan, Donal; Roldán-López-de-Hierro, Antonio Francisco

    2015-01-01

    Written by a team of leading experts in the field, this volume presents a self-contained account of the theory, techniques and results in metric type spaces (in particular in G-metric spaces); that is, the text approaches this important area of fixed point analysis beginning from the basic ideas of metric space topology. The text is structured so that it leads the reader from preliminaries and historical notes on metric spaces (in particular G-metric spaces) and on mappings, to Banach type contraction theorems in metric type spaces, fixed point theory in partially ordered G-metric spaces, fixed point theory for expansive mappings in metric type spaces, generalizations, present results and techniques in a very general abstract setting and framework. Fixed point theory is one of the major research areas in nonlinear analysis. This is partly due to the fact that in many real world problems fixed point theory is the basic mathematical tool used to establish the existence of solutions to problems which arise natur...

  8. A computer-simulated liver phantom (virtual liver phantom) for multidetector computed tomography evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Funama, Yoshinori [Kumamoto University, Department of Radiological Sciences, School of Health Sciences, Kumamoto (Japan); Awai, Kazuo; Nakayama, Yoshiharu; Liu, Da; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Miyazaki, Osamu; Goto, Taiga [Hitachi Medical Corporation, Tokyo (Japan); Hori, Shinichi [Gate Tower Institute of Image Guided Therapy, Osaka (Japan)

    2006-04-15

    The purpose of this study was to develop a computer-simulated liver phantom for hepatic CT studies. A computer-simulated liver phantom was mathematically constructed on a computer workstation. The computer-simulated phantom was calibrated using real CT images acquired by an actual four-detector CT. We added an inhomogeneous texture to the simulated liver by referring to CT images of chronically damaged human livers. The mean CT number of the simulated liver was 60 HU, and we added numerous 5- to 10-mm structures with 60±10 HU/mm. To mimic liver tumors we added nodules measuring 8, 10, and 12 mm in diameter with CT numbers of 60±10, 60±15, and 60±20 HU. Five radiologists visually evaluated the similarity of the texture of the computer-simulated liver phantom and a real human liver to confirm the appropriateness of the virtual liver images using a five-point scale. The total score was 44 for two radiologists, and 42, 41, and 39 for one radiologist each. They judged the textures of the virtual liver to be comparable to those of the human liver. Our computer-simulated liver phantom is a promising tool for the evaluation of the image quality and diagnostic performance of hepatic CT imaging. (orig.)

  9. Deep Transfer Metric Learning.

    Science.gov (United States)

    Junlin Hu; Jiwen Lu; Yap-Peng Tan; Jie Zhou

    2016-12-01

    Conventional metric learning methods usually assume that the training and test samples are captured in similar scenarios so that their distributions are assumed to be the same. This assumption does not hold in many real visual recognition applications, especially when samples are captured across different data sets. In this paper, we propose a new deep transfer metric learning (DTML) method to learn a set of hierarchical nonlinear transformations for cross-domain visual recognition by transferring discriminative knowledge from the labeled source domain to the unlabeled target domain. Specifically, our DTML learns a deep metric network by maximizing the inter-class variations and minimizing the intra-class variations, and minimizing the distribution divergence between the source domain and the target domain at the top layer of the network. To better exploit the discriminative information from the source domain, we further develop a deeply supervised transfer metric learning (DSTML) method by including an additional objective on DTML, where the output of both the hidden layers and the top layer are optimized jointly. To preserve the local manifold of input data points in the metric space, we present two new methods, DTML with autoencoder regularization and DSTML with autoencoder regularization. Experimental results on face verification, person re-identification, and handwritten digit recognition validate the effectiveness of the proposed methods.
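
    A schematic of the kind of objective described above, with hypothetical tensor names; the published DTML formulation uses k-NN neighbourhoods, margins and a kernel MMD at the network's top layer, so this is only a simplified sketch of the idea:

        import torch

        def dtml_style_objective(f_src, y_src, f_tgt, alpha=1.0, beta=1.0):
            # f_src, f_tgt: source/target features from the top layer of the
            # deep metric network; y_src: source labels (1-D tensor).
            d2 = torch.cdist(f_src, f_src) ** 2
            same = y_src[:, None] == y_src[None, :]   # self-pairs included for brevity
            intra = d2[same].mean()                   # minimize intra-class variation
            inter = d2[~same].mean()                  # maximize inter-class variation
            # Simplest mean-embedding form of the source/target distribution gap.
            mmd = ((f_src.mean(0) - f_tgt.mean(0)) ** 2).sum()
            return intra - alpha * inter + beta * mmd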

  10. Energy functionals for Calabi-Yau metrics

    International Nuclear Information System (INIS)

    Headrick, M; Nassar, A

    2013-01-01

    We identify a set of "energy" functionals on the space of metrics in a given Kähler class on a Calabi-Yau manifold, which are bounded below and minimized uniquely on the Ricci-flat metric in that class. Using these functionals, we recast the problem of numerically solving the Einstein equation as an optimization problem. We apply this strategy, using the "algebraic" metrics (metrics for which the Kähler potential is given in terms of a polynomial in the projective coordinates), to the Fermat quartic and to a one-parameter family of quintics that includes the Fermat and conifold quintics. We show that this method yields approximations to the Ricci-flat metric that are exponentially accurate in the degree of the polynomial (except at the conifold point, where the convergence is polynomial), and therefore orders of magnitude more accurate than the balanced metrics previously studied as approximations to the Ricci-flat metric. The method is relatively fast and easy to implement. On the theoretical side, we also show that the functionals can be used to give a heuristic proof of Yau's theorem.

  11. An Experimental Study to Measure the Mechanical Properties of the Human Liver.

    Science.gov (United States)

    Karimi, Alireza; Shojaei, Ahmad

    2018-01-01

    Since the liver is one of the most important organs of the body and can be injured during trauma, for instance in accidents like car crashes, understanding its mechanical properties is of great interest. Experimental data are needed to characterize the mechanical properties of the liver for a variety of applications, such as numerical simulations for medical purposes, including virtual reality simulators, trauma research, diagnostic objectives, and injury biomechanics. However, data on the mechanical properties of the liver capsule are limited to animal models or confined to tensile/compressive loading in a single direction. Therefore, this study aimed to experimentally measure the axial and transversal mechanical properties of the human liver capsule under both tensile and compressive loadings. To do that, 20 human cadavers were autopsied and their liver capsules were excised and histologically analyzed to extract the mean angle of a large fiber population (bundles of fine collagen fibers). Thereafter, the samples were cut and subjected to a series of axial and transversal tensile/compressive loadings. The results revealed tensile elastic moduli of 12.16 ± 1.20 (mean ± SD) and 7.17 ± 0.85 kPa under axial and transversal loadings, respectively. Correspondingly, compressive elastic moduli of 196.54 ± 13.15 and 112.41 ± 8.98 kPa were observed under axial and transversal loadings, respectively. The compressive axial and transversal maximum/failure stresses of the capsule were 32.54 and 37.30 times higher than the tensile ones, respectively. The capsule showed stiffer behavior under compressive load than under tensile load. In addition, the axial elastic modulus of the capsule was found to be higher than the transversal one. The findings of the current study have implications not only for understanding the mechanical properties of the human capsule tissue under tensile

  12. Non-invasive measurement of liver and pancreas fibrosis in patients with cystic fibrosis.

    Science.gov (United States)

    Friedrich-Rust, Mireen; Schlueter, Nina; Smaczny, Christina; Eickmeier, Olaf; Rosewich, Martin; Feifel, Kirstin; Herrmann, Eva; Poynard, Thierry; Gleiber, Wolfgang; Lais, Christoph; Zielen, Stefan; Wagner, Thomas O F; Zeuzem, Stefan; Bojunga, Joerg

    2013-09-01

    Patients with cystic fibrosis (CF) have relevant morbidity and mortality caused by CF-related liver disease. While transient elastography (TE) is an established elastography method in hepatology centers, Acoustic Radiation Force Impulse (ARFI) imaging is a novel ultrasound-based elastography method integrated into a conventional ultrasound system. The aim of the present study was to evaluate the prevalence of liver fibrosis in patients with CF using TE, ARFI imaging and fibrosis blood tests. 106 patients with CF were prospectively included in the present study and received ARFI imaging of the left and right liver lobes, ARFI of the pancreas, TE of the liver, and laboratory evaluation. The prevalence of liver fibrosis according to recently published best practice guidelines for CFLD was 22.6%. The prevalence of significant liver fibrosis assessed by TE, ARFI of the right liver lobe, ARFI of the left liver lobe, Fibrotest, and Fibrotest corrected by haptoglobin was 17%, 24%, 40%, 7%, and 16%, respectively. The best agreement was found for TE, ARFI of the right liver lobe and Fibrotest corrected by haptoglobin. Patients with pancreatic insufficiency had significantly lower pancreas ARFI values than patients without. ARFI imaging and TE seem to be promising non-invasive methods for the detection of liver fibrosis in patients with CF. Copyright © 2013 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.

  13. FACTORS AND METRICS THAT INFLUENCE FRANCHISEE PERFORMANCE: AN APPROACH BASED ON BRAZILIAN FRANCHISES

    OpenAIRE

    Aguiar, Helder de Souza; Consoni, Flavia

    2017-01-01

    The article seeks to map managers' decisions in order to understand how franchisors choose franchisees with respect to their characteristics, and which metrics are adopted to measure performance. Through 15 interviews with Brazilian franchises, it was confirmed that revenue is the main metric used by national franchises to measure performance, although other indicators are also used in a complementary way. In addition, two other factors were cited by the interviewees a...

  14. Noninvasive measurement of liver iron concentration at MRI in children with acute leukemia: initial results

    Energy Technology Data Exchange (ETDEWEB)

    Vag, Tibor; Krumbein, Ines; Reichenbach, Juergen R.; Lopatta, Eric; Stenzel, Martin; Kaiser, Werner A.; Mentzel, Hans-Joachim [Friedrich Schiller University Jena, Institute of Diagnostic and Interventional Radiology, Jena (Germany); Kentouche, Karim; Beck, James [Friedrich Schiller University Jena, Department of Pediatrics, Jena (Germany); Renz, Diane M. [Charite University Medicine Berlin, Department of Radiology, Campus Virchow Clinic, Berlin (Germany)

    2011-08-15

    Routine assessment of body iron load in patients with acute leukemia is usually done by serum ferritin (SF) assay; however, its sensitivity is impaired by different conditions including inflammation and malignancy. To estimate, using MRI, the extent of liver iron overload in children with acute leukemia receiving blood transfusions, and to examine the association between the degree of hepatic iron overload and clinical parameters including SF and the transfusion iron load (TIL). A total of 25 MRI measurements of the liver were performed in 15 children with acute leukemia (mean age 9.75 years) using gradient-echo sequences. Signal intensity ratios between the liver and the vertebral muscle (L/M ratio) were calculated and compared with SF levels. TIL was estimated from the cumulative blood volume received, assuming an amount of 200 mg iron per transfused red blood cell unit. Statistical analysis revealed a good correlation between the L/M SI ratio and TIL (r = -0.67, P = 0.002, 95% confidence interval [CI] -0.83 to -0.34) in patients with acute leukemia, as well as between the L/M SI ratio and SF (r = -0.76, P = 0.0003, 95% CI -0.89 to -0.52). SF may reliably reflect liver iron stores as a routine marker in patients suffering from acute leukemia. (orig.)

  15. Quantification of liver fat: A comprehensive review.

    Science.gov (United States)

    Goceri, Evgin; Shah, Zarine K; Layman, Rick; Jiang, Xia; Gurcan, Metin N

    2016-04-01

    Fat accumulation in the liver causes metabolic diseases such as obesity, hypertension, diabetes or dyslipidemia by affecting insulin resistance, and increasing the risk of cardiac complications and cardiovascular disease mortality. Fatty liver diseases are often reversible in their early stage; therefore, there is a recognized need to detect their presence and to assess its severity to recognize fat-related functional abnormalities in the liver. This is crucial in evaluating living liver donors prior to transplantation because fat content in the liver can change liver regeneration in the recipient and donor. There are several methods to diagnose fatty liver, measure the amount of fat, and to classify and stage liver diseases (e.g. hepatic steatosis, steatohepatitis, fibrosis and cirrhosis): biopsy (the gold-standard procedure), clinical (medical physics based) and image analysis (semi or fully automated approaches). Liver biopsy has many drawbacks: it is invasive, inappropriate for monitoring (i.e., repeated evaluation), and assessment of steatosis is somewhat subjective. Qualitative biomarkers are mostly insufficient for accurate detection since fat has to be quantified by a varying threshold to measure disease severity. Therefore, a quantitative biomarker is required for detection of steatosis, accurate measurement of severity of diseases, clinical decision-making, prognosis and longitudinal monitoring of therapy. This study presents a comprehensive review of both clinical and automated image analysis based approaches to quantify liver fat and evaluate fatty liver diseases from different medical imaging modalities. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Numerical Calabi-Yau metrics

    International Nuclear Information System (INIS)

    Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, Rene

    2008-01-01

    We develop numerical methods for approximating Ricci flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one parameter family of quintics. We also suggest several ways to extend our results.

  17. Commutators of Littlewood-Paley $g_{\kappa}^{*}$-functions on non-homogeneous metric measure spaces

    Directory of Open Access Journals (Sweden)

    Lu Guanghui

    2017-11-01

    Full Text Available The main purpose of this paper is to prove the boundedness of the commutator $\mathcal{M}_{\kappa,b}^{*}$ generated by the Littlewood-Paley operator $\mathcal{M}_{\kappa}^{*}$ and an RBMO($\mu$) function on non-homogeneous metric measure spaces satisfying the upper doubling and the geometrically doubling conditions. Under the assumption that the kernel of $\mathcal{M}_{\kappa}^{*}$ satisfies a certain Hörmander-type condition, the authors prove that $\mathcal{M}_{\kappa,b}^{*}$ is bounded on the Lebesgue spaces $L^{p}(\mu)$ for $1 < p < \infty$, bounded from the space $L\log L(\mu)$ to the weak Lebesgue space $L^{1,\infty}(\mu)$, and bounded from the atomic Hardy space $H^{1}(\mu)$ to the weak Lebesgue space $L^{1,\infty}(\mu)$.

  18. Metrics for Evaluation of Student Models

    Science.gov (United States)

    Pelanek, Radek

    2015-01-01

    Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…
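
    As context for the metrics this overview discusses, a minimal sketch of three commonly used ones (RMSE, AUC, and log loss) applied to hypothetical predicted correctness probabilities:

        # Hypothetical student-model predictions; metrics via numpy/scikit-learn.
        import numpy as np
        from sklearn.metrics import roc_auc_score, log_loss

        y_true = np.array([1, 0, 1, 1, 0, 1])                 # observed answer correctness
        y_prob = np.array([0.9, 0.3, 0.6, 0.8, 0.4, 0.55])    # model-predicted probabilities

        rmse = np.sqrt(np.mean((y_true - y_prob) ** 2))
        auc = roc_auc_score(y_true, y_prob)
        ll = log_loss(y_true, y_prob)                         # mean negative log-likelihood
        print(f"RMSE={rmse:.3f}, AUC={auc:.3f}, log-loss={ll:.3f}")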

  19. 75 FR 51272 - Proposed Collection; Comment Request; STAR METRICS-Science and Technology in America's...

    Science.gov (United States)

    2010-08-19

    ... on Innovation, Competitiveness and Science SUMMARY: In compliance with the requirement of Section... Technology for America's Reinvestment: Measuring the Effects of Research on Innovation, Competitiveness and... aim of STAR METRICS is twofold. The initial goal of STAR METRICS is to provide mechanisms that will...

  20. Eyetracking Metrics in Young Onset Alzheimer's Disease: A Window into Cognitive Visual Functions.

    Science.gov (United States)

    Pavisic, Ivanna M; Firth, Nicholas C; Parsons, Samuel; Rego, David Martinez; Shakespeare, Timothy J; Yong, Keir X X; Slattery, Catherine F; Paterson, Ross W; Foulkes, Alexander J M; Macpherson, Kirsty; Carton, Amelia M; Alexander, Daniel C; Shawe-Taylor, John; Fox, Nick C; Schott, Jonathan M; Crutch, Sebastian J; Primativo, Silvia

    2017-01-01

    Young onset Alzheimer's disease (YOAD) is defined as symptom onset before the age of 65 years and is particularly associated with phenotypic heterogeneity. Atypical presentations, such as the clinico-radiological visual syndrome posterior cortical atrophy (PCA), often lead to delays in accurate diagnosis. Eyetracking has been used to demonstrate basic oculomotor impairments in individuals with dementia. In the present study, we aim to explore the relationship between eyetracking metrics and standard tests of visual cognition in individuals with YOAD. Fifty-seven participants were included: 36 individuals with YOAD (n = 26 typical AD; n = 10 PCA) and 21 age-matched healthy controls. Participants completed three eyetracking experiments: fixation, pro-saccade, and smooth pursuit tasks. Summary metrics were used as outcome measures and their predictive value explored looking at correlations with visuoperceptual and visuospatial metrics. Significant correlations between eyetracking metrics and standard visual cognitive estimates are reported. A machine-learning approach using a classification method based on the smooth pursuit raw eyetracking data discriminates patients from controls with approximately 95% accuracy in cross-validation tests. Results suggest that eyetracking paradigms of a relatively simple and specific nature provide measures not only reflecting basic oculomotor characteristics but also predicting higher order visuospatial and visuoperceptual impairments. Eyetracking measures can represent extremely useful markers during the diagnostic phase and may be exploited as potential outcome measures for clinical trials.
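
    The classification step described above can be sketched as follows. This is a schematic stand-in, not the authors' pipeline: the features, model choice (a random forest) and data are hypothetical, and the paper worked from raw smooth pursuit data rather than summary features.

        # Cross-validated patient-vs-control classification on synthetic features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(57, 4))        # summary features, e.g. fixation stability, saccade gain
        y = np.array([1] * 36 + [0] * 21)   # 36 YOAD patients, 21 controls, as in the study

        scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
        print("mean cross-validated accuracy:", scores.mean())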

  1. MO-A-16A-01: QA Procedures and Metrics: In Search of QA Usability

    International Nuclear Information System (INIS)

    Sathiaseelan, V; Thomadsen, B

    2014-01-01

    Radiation therapy has undergone considerable changes in the past two decades, with a surge of new technology and treatment delivery methods. The complexity of radiation therapy treatments has increased, and there has been increased awareness of and publicity about the associated risks. In response, there has been a proliferation of guidelines for medical physicists to adopt to ensure that treatments are delivered safely. Task Group recommendations are copious, and clinical physicists' hours are longer, stretched to various degrees between site planning and management, IT support, physics QA, and treatment planning responsibilities. Radiation oncology has many quality control practices in place to ensure the delivery of high-quality, safe treatments. Incident reporting systems have been developed to collect statistics about near-miss events at many radiation oncology centers. However, tools are lacking to assess the impact of these various control measures. A recent effort to address this shortcoming is the work of Ford et al (2012), who published a methodology for quality control quantification to measure the effectiveness of safety barriers. Over 4000 near-miss incidents reported from 2 academic radiation oncology clinics were analyzed using quality control quantification, and a profile of the most effective quality control measures (metrics) was identified. There is a critical need to identify QA metrics that help busy clinical physicists focus their limited time and resources most effectively in order to minimize or eliminate errors in the radiation treatment delivery process. In this symposium, the usefulness of workflows and QA metrics to assure safe and high-quality patient care will be explored. Two presentations will be given: Quality Metrics and Risk Management with High-Risk Radiation Oncology Procedures; and Strategies and Metrics for Quality Management in the TG-100 Era. Learning Objectives: Provide an overview of the need for QA usability.

  3. Language Games: University Responses to Ranking Metrics

    Science.gov (United States)

    Heffernan, Troy A.; Heffernan, Amanda

    2018-01-01

    League tables of universities that measure performance in various ways are now commonplace, with numerous bodies providing their own rankings of how institutions throughout the world are seen to be performing on a range of metrics. This paper uses Lyotard's notion of language games to theorise that universities are regaining some power over being…

  4. The frequency and determinants of liver stiffness measurement failure: a retrospective study of "real-life" 38,464 examinations.

    Directory of Open Access Journals (Sweden)

    Dong Ji

    Full Text Available To investigate the frequency and determinants of liver stiffness measurement (LSM) failure by means of FibroScan in "real-life" Chinese patients, a total of 38,464 patients seen at the 302 Military Hospital of China throughout 2013, including asymptomatic carriers and patients with chronic hepatitis B, chronic hepatitis C, liver cirrhosis (LC), alcoholic liver disease, autoimmune liver disease, hepatocellular carcinoma (HCC) and other conditions, were enrolled, and their clinical and biological parameters were retrospectively investigated. Liver fibrosis was evaluated by FibroScan. The S probe (for children with height less than 1.20 m) and the M probe (for adults) were used. LSM failure was defined as zero valid shots (unsuccessful LSM), or a ratio of the interquartile range to the median of 10 measurements (IQR/M) greater than 0.30 together with a median LSM greater than or equal to 7.1 kPa (unreliable LSM). LSM failure occurred in 3.34% of all examinations (1,286 patients out of 38,464); among them, there were 958 cases (2.49%) with unsuccessful LSM and 328 patients (0.85%) with unreliable LSM. Statistical analyses showed that LSM failure was independently associated with body mass index (BMI) greater than 30 kg/m², female sex, age greater than 50 years, intercostal space (IS) less than 9 mm, decompensated liver cirrhosis and HCC. There were no significant differences among the other diseases. When another skilled operator repeated the examination, success was achieved in 301 of the 1,286 cases, which reduced the failure rate to 2.56%; the decrease was significant (P<0.0001). The principal reasons for LSM failure are ascites, obesity and narrow IS. The failure rates in HCC, decompensated LC, elderly and female patients are higher. These results emphasize the need for adequate operator training, technological improvements and optimized criteria for specific patient subpopulations.
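
    The failure criteria quoted in this record translate directly into code; a minimal sketch, assuming the valid shots of one session are given in kPa:

        # Classify a FibroScan session using the criteria stated in the abstract:
        # zero valid shots -> unsuccessful; IQR/median > 0.30 with median >= 7.1 kPa -> unreliable.
        import numpy as np

        def lsm_failure(shots_kpa):
            shots = np.asarray(shots_kpa, dtype=float)
            if shots.size == 0:
                return "unsuccessful"
            median = np.median(shots)
            iqr = np.percentile(shots, 75) - np.percentile(shots, 25)
            if iqr / median > 0.30 and median >= 7.1:
                return "unreliable"
            return "valid"

        print(lsm_failure([]))                                                    # unsuccessful
        print(lsm_failure([8.2, 7.9, 12.5, 6.1, 15.0, 7.0, 9.8, 11.2, 8.8, 14.1]))  # unreliable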

  5. Advanced Metrics for Assessing Holistic Care: The "Epidaurus 2" Project.

    Science.gov (United States)

    Foote, Frederick O; Benson, Herbert; Berger, Ann; Berman, Brian; DeLeo, James; Deuster, Patricia A; Lary, David J; Silverman, Marni N; Sternberg, Esther M

    2018-01-01

    In response to the challenge of military traumatic brain injury and posttraumatic stress disorder, the US military developed a wide range of holistic care modalities at the new Walter Reed National Military Medical Center, Bethesda, MD, from 2001 to 2017, guided by civilian expert consultation via the Epidaurus Project. These projects spanned a range from healing buildings to wellness initiatives and healing through nature, spirituality, and the arts. The next challenge was to develop whole-body metrics to guide the use of these therapies in clinical care. Under the "Epidaurus 2" Project, a national search produced 5 advanced metrics for measuring whole-body therapeutic effects: genomics, integrated stress biomarkers, language analysis, machine learning, and "Star Glyphs." This article describes the metrics, their current use in guiding holistic care at Walter Reed, and their potential for operationalizing personalized care, patient self-management, and the improvement of public health. Development of these metrics allows the scientific integration of holistic therapies with organ-system-based care, expanding the powers of medicine.

  6. Non-invasive Markers of Liver Fibrosis: Adjuncts or Alternatives to Liver Biopsy?

    Science.gov (United States)

    Chin, Jun L.; Pavlides, Michael; Moolla, Ahmad; Ryan, John D.

    2016-01-01

    Liver fibrosis reflects sustained liver injury often from multiple, simultaneous factors. Whilst the presence of mild fibrosis on biopsy can be a reassuring finding, the identification of advanced fibrosis is critical to the management of patients with chronic liver disease. This necessity has led to a reliance on liver biopsy, which itself is an imperfect test and poorly accepted by patients. The development of robust tools to non-invasively assess liver fibrosis has dramatically enhanced clinical decision making in patients with chronic liver disease, allowing a rapid and informed judgment of disease stage and prognosis. Should a liver biopsy be required, the appropriateness is clearer and the diagnostic yield is greater with the use of these adjuncts. While a number of non-invasive liver fibrosis markers are now used in routine practice, a steady stream of innovative approaches exists. With improvement in the reliability, reproducibility and feasibility of these markers, their potential role in disease management is increasing. Moreover, their adoption into clinical trials as outcome measures reflects their validity and dynamic nature. This review will summarize and appraise the current and novel non-invasive markers of liver fibrosis, both blood and imaging based, and look at their prospective application in everyday clinical care. PMID:27378924
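
    The abstract does not name specific scores, but two widely used blood-based fibrosis indices, FIB-4 and APRI, illustrate the kind of marker discussed. The formulas below are the standard published ones; the input values are hypothetical.

        # FIB-4 = (age x AST) / (platelets x sqrt(ALT)); APRI = (AST/ULN) / platelets x 100.
        import math

        def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
            return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

        def apri(ast_u_l, ast_upper_limit_u_l, platelets_10e9_l):
            return (ast_u_l / ast_upper_limit_u_l) / platelets_10e9_l * 100

        print(fib4(55, 48, 40, 180))   # ~2.32 for this hypothetical patient
        print(apri(48, 40, 180))       # ~0.67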

  7. Assessment of Liver Viscoelasticity for the Diagnosis of Early Stage Fatty Liver Disease Using Transient Elastography

    Science.gov (United States)

    Remenieras, Jean-Pierre; Dejobert, Maelle; Bastard, Cécile; Miette, Véronique; Perarnau, Jean-Marc; Patat, Frédéric

    Nonalcoholic fatty liver disease (NAFLD) is characterized by accumulation of fat within the liver. The main objectives of this work are (1) to evaluate the feasibility of measuring in vivo in the liver the shear wave phase velocity dispersion cs(ω) between 20 Hz and 90 Hz using vibration-controlled transient elastography (VCTE); (2) to estimate, through the rheological Kelvin-Voigt model, the shear elastic modulus μ and shear viscosity modulus η; (3) to correlate the evolution of these viscoelastic parameters in two patients at Tours Hospital with the hepatic fat percentage measured with a T1-weighted gradient-echo in- and out-of-phase MRI sequence. For the first volunteer, who has 2% fat in the liver, we obtained μ = 1233 ± 133 Pa and η = 0.5 ± 0.4 Pa.s. For the patient with 22% fat, we measured μ = 964 ± 91 Pa and η = 1.77 ± 0.3 Pa.s. In conclusion, this novel method showed itself to be sensitive in characterizing the viscoelastic properties of fatty liver.
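
    A sketch of how μ and η can be fitted from measured dispersion under the Kelvin-Voigt model follows. The dispersion relation is the standard Kelvin-Voigt phase-velocity formula; the data points, density and starting values are assumptions, not values from the paper.

        # Fit Kelvin-Voigt parameters to phase-velocity dispersion c_s(omega):
        # c_s(w) = sqrt( 2*(mu^2 + w^2*eta^2) / (rho*(mu + sqrt(mu^2 + w^2*eta^2))) )
        import numpy as np
        from scipy.optimize import curve_fit

        RHO = 1000.0  # tissue density in kg/m^3 (a common assumption)

        def kv_phase_velocity(omega, mu, eta):
            s = np.sqrt(mu**2 + (omega * eta)**2)
            return np.sqrt(2.0 * s**2 / (RHO * (mu + s)))

        freq_hz = np.array([20, 40, 60, 80, 90], dtype=float)   # VCTE band used in the paper
        c_meas = np.array([1.10, 1.15, 1.22, 1.30, 1.34])       # m/s, hypothetical measurements

        (mu_fit, eta_fit), _ = curve_fit(kv_phase_velocity, 2 * np.pi * freq_hz, c_meas, p0=[1000.0, 1.0])
        print(f"mu = {mu_fit:.0f} Pa, eta = {eta_fit:.2f} Pa.s")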

  8. Factors influencing liver and spleen volume changes after donor hepatectomy for living donor liver transplantation

    International Nuclear Information System (INIS)

    Bae, Ji Hee; Ryeom, Hunku; Song, Jung Hup

    2013-01-01

    To define the changes in liver and spleen volumes in the early postoperative period after partial liver donation for living-donor liver transplantation (LDLT) and to determine factors that influence liver and spleen volume changes, 27 donors who underwent partial hepatectomy for LDLT were included in this study. The rates of liver and spleen volume change, measured with CT volumetry, were correlated with several factors. The analyzed factors included the indocyanine green (ICG) retention rate at 15 minutes after ICG administration, preoperative platelet count, preoperative liver and splenic volumes, resected liver volume, the resected-to-whole liver volume ratio (LV_R/LV_W), the ratio of resected liver volume to the sum of whole liver and spleen volumes [LV_R/(LV_W + SV_0)], and pre- and post-hepatectomy portal venous pressures. In all hepatectomy donors, the volumes of the remnant liver and spleen were increased (increase rates, 59.5 ± 50.5% and 47.9 ± 22.6%, respectively). The increase rate of the remnant liver volume showed a positive correlation with LV_R/LV_W (r = 0.759). LV_R/LV_W influences the increase rate of the remnant liver volume.

  9. Dr. Liver: A preoperative planning system of liver graft volumetry for living donor liver transplantation.

    Science.gov (United States)

    Yang, Xiaopeng; Yang, Jae Do; Yu, Hee Chul; Choi, Younggeun; Yang, Kwangho; Lee, Tae Beom; Hwang, Hong Pil; Ahn, Sungwoo; You, Heecheon

    2018-05-01

    Manual tracing of the right and left liver lobes from computed tomography (CT) images for graft volumetry in preoperative surgery planning of living donor liver transplantation (LDLT) is common at most medical centers. This study aims to develop an automatic system with advanced image processing algorithms and user-friendly interfaces for liver graft volumetry and evaluate its accuracy and efficiency in comparison with a manual tracing method. The proposed system provides a sequential procedure consisting of (1) liver segmentation, (2) blood vessel segmentation, and (3) virtual liver resection for liver graft volumetry. Automatic segmentation algorithms using histogram analysis, hybrid level-set methods, and a customized region growing method were developed. User-friendly interfaces such as sequential and hierarchical user menus, context-sensitive on-screen hotkey menus, and real-time sound and visual feedback were implemented. Blood vessels were excluded from the liver for accurate liver graft volumetry. A large sphere-based interactive method was developed for dividing the liver into left and right lobes with a customized cutting plane. The proposed system was evaluated using 50 CT datasets in terms of graft weight estimation accuracy and task completion time through comparison to the manual tracing method. The accuracy of liver graft weight estimation was assessed by absolute difference (AD) and percentage of AD (%AD) between preoperatively estimated graft weight and intraoperatively measured graft weight. Intra- and inter-observer agreements of liver graft weight estimation were assessed by intraclass correlation coefficients (ICCs) using ten cases randomly selected. The proposed system showed significantly higher accuracy and efficiency in liver graft weight estimation (AD = 21.0 ± 18.4 g; %AD = 3.1% ± 2.8%; percentage of %AD > 10% = none; task completion time = 7.3 ± 1.4 min) than the manual tracing method (AD = 70
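
    The accuracy measures named above (AD and %AD between estimated and intraoperatively measured graft weight) are straightforward; a minimal sketch with hypothetical weights:

        # Absolute difference (AD) and percentage AD between estimate and measurement.
        import numpy as np

        est_g = np.array([710.0, 655.0, 820.0])    # preoperative graft weight estimates (g), hypothetical
        meas_g = np.array([695.0, 690.0, 800.0])   # intraoperative measurements (g), hypothetical

        ad = np.abs(est_g - meas_g)
        pct_ad = 100.0 * ad / meas_g
        print(f"mean AD = {ad.mean():.1f} g, mean %AD = {pct_ad.mean():.1f}%")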

  10. Electrical conductivity measurement of excised human metastatic liver tumours before and after thermal ablation.

    Science.gov (United States)

    Haemmerich, Dieter; Schutt, David J; Wright, Andrew W; Webster, John G; Mahvi, David M

    2009-05-01

    We measured the ex vivo electrical conductivity of eight human metastatic liver tumours and six normal liver tissue samples from six patients using the four-electrode method over the frequency range 10 Hz to 1 MHz. In addition, in a single patient we measured the electrical conductivity before and after thermal ablation of normal and tumour tissue. The average conductivity of tumour tissue was significantly higher than that of normal tissue over the entire frequency range (from 4.11 versus 0.75 mS cm⁻¹ at 10 Hz, to 5.33 versus 2.88 mS cm⁻¹ at 1 MHz). We found no significant correlation between tumour size and measured electrical conductivity. While before ablation tumour tissue had considerably higher conductivity than normal tissue, the two had similar conductivity throughout the frequency range after ablation. Tumour tissue conductivity changed by +25% and -7% at 10 Hz and 1 MHz after ablation (0.23 to 0.29 at 10 Hz, and 0.43 to 0.40 at 1 MHz), while normal tissue conductivity increased by +270% and +10% at 10 Hz and 1 MHz (0.09 to 0.32 at 10 Hz, and 0.37 to 0.41 at 1 MHz). These data can potentially be used to differentiate tumour from normal tissue diagnostically.

  11. Evaluation of Vehicle-Based Crash Severity Metrics.

    Science.gov (United States)

    Tsoi, Ada H; Gabler, Hampton C

    2015-01-01

    estimate to be a significant predictor in the model (p feasible to improve injury prediction if we consider adding restraint performance to classic measures, e.g. delta-v. Applications, such as advanced automatic crash notification, should consider the use of different metrics for belted versus unbelted occupants.

  12. Plasma clearance of (99m)Tc-N-(2,4-dimethylacetanilido)-iminodiacetate complex as a measure of parenchymal liver damage

    International Nuclear Information System (INIS)

    Studniarek, M.; Durski, K.; Liniecki, J.; Akademia Medyczna, Lodz

    1983-01-01

    Fifty-two patients with various diseases affecting the liver parenchyma were studied. Disorders of bile transport were excluded on the basis of dynamic liver scintigraphy using the intravenously injected (99m)Tc N-(2,4-dimethylacetanilido)-iminodiacetate complex (HEPIDA). The activity concentration of (99m)Tc-HEPIDA in plasma was measured from 5 through 60 min post injection. Clearance of the substance (Cl(B)) was calculated from the plasma disappearance curves and compared with the results of 13 laboratory tests used conventionally for assessment of liver damage and functional capacity; age and body weight were also included in the analysis. Statistical relations were studied using two-variable linear regression analysis, multiple regression analysis, and multidimensional analysis of variance. It was demonstrated that (99m)Tc-HEPIDA clearance is a simple, accurate and repeatable measure of liver parenchyma damage. In males, values of Cl(B) above 245 ml·min⁻¹/1.73 m² exclude hepatic damage with high probability; values below 195 ml·min⁻¹/1.73 m² indicate evident impairment of liver parenchyma function. (orig.) [de]
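
    The abstract does not state the kinetic model used, but as one plausible sketch, a plasma clearance can be obtained from a mono-exponential fit of the 5-60 min disappearance curve via Cl = Dose/AUC; all numbers below are hypothetical.

        # Mono-exponential fit C(t) = C0*exp(-lambda*t); AUC = C0/lambda; Cl = Dose/AUC.
        import numpy as np

        t = np.array([5, 10, 20, 30, 45, 60], dtype=float)   # sampling times (min)
        c = np.array([4.1, 3.4, 2.4, 1.7, 1.0, 0.6])         # plasma activity (kBq/ml), hypothetical

        slope, intercept = np.polyfit(t, np.log(c), 1)
        lam = -slope                                         # elimination constant (1/min)
        auc = np.exp(intercept) / lam                        # area under C(t), 0 to infinity
        dose_kbq = 37000.0                                   # hypothetical injected dose
        print(f"lambda = {lam:.3f} 1/min, Cl = {dose_kbq / auc:.0f} ml/min")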

  13. Liver stiffness by transient elastography predicts liver-related complications and mortality in patients with chronic liver disease.

    Directory of Open Access Journals (Sweden)

    Jack X Q Pang

    Full Text Available Liver stiffness measurement (LSM) by transient elastography (TE, FibroScan) is a validated method for noninvasively staging liver fibrosis. Most hepatic complications occur in patients with advanced fibrosis. Our objective was to determine the ability of LSM by TE to predict hepatic complications and mortality in a large cohort of patients with chronic liver disease. In consecutive adults who underwent LSM by TE between July 2008 and June 2011, we used Cox regression to determine the independent association between liver stiffness and death or hepatic complications (decompensation, hepatocellular carcinoma, and liver transplantation). The performance of LSM to predict complications was determined using the c-statistic. Among 2,052 patients (median age 51 years, 65% with hepatitis B or C), 87 patients (4.2%) died or developed a hepatic complication during a median follow-up period of 15.6 months (interquartile range, 11.0-23.5 months). Patients with complications had higher median liver stiffness than those without complications (13.5 vs. 6.0 kPa; P<0.00005). The 2-year incidence rates of death or hepatic complications were 2.6%, 9%, 19%, and 34% in patients with liver stiffness <10, 10-19.9, 20-39.9, and ≥40 kPa, respectively (P<0.00005). After adjustment for potential confounders, liver stiffness by TE was an independent predictor of complications (hazard ratio [HR] 1.05 per kPa; 95% confidence interval [CI] 1.03-1.06). The c-statistic of liver stiffness for predicting complications was 0.80 (95% CI 0.75-0.85). A liver stiffness below 20 kPa effectively excluded complications (specificity 93%, negative predictive value 97%); however, the positive predictive value of higher results was sub-optimal (20%). Liver stiffness by TE accurately predicts the risk of death or hepatic complications in patients with chronic liver disease. TE may facilitate the estimation of prognosis and guide management of these patients.
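
    The type of analysis described, a Cox regression of complications on liver stiffness with discrimination summarized by the c-statistic, can be sketched with the lifelines package; the dataframe, column names and values below are hypothetical.

        # Cox proportional-hazards sketch: hazard of complications per kPa of stiffness.
        import pandas as pd
        from lifelines import CoxPHFitter

        df = pd.DataFrame({
            "time_months": [12.0, 15.6, 8.2, 23.5, 11.0, 30.1],
            "event": [0, 1, 1, 0, 0, 1],              # death or hepatic complication
            "stiffness_kpa": [6.0, 13.5, 41.0, 5.2, 9.8, 22.3],
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="time_months", event_col="event")
        print(cph.hazard_ratios_)       # HR per kPa of liver stiffness
        print(cph.concordance_index_)   # c-statistic of the fitted model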

  14. Robustness of climate metrics under climate policy ambiguity

    International Nuclear Information System (INIS)

    Ekholm, Tommi; Lindroos, Tomi J.; Savolainen, Ilkka

    2013-01-01

    Highlights: • We assess the economic impacts of using different climate metrics. • The setting is cost-efficient scenarios for three interpretations of the 2C target. • With each target setting, the optimal metric is different. • Therefore policy ambiguity prevents the selection of an optimal metric. • Robust metric values that perform well with multiple policy targets however exist. -- Abstract: A wide array of alternatives has been proposed as the common metrics with which to compare the climate impacts of different emission types. Different physical and economic metrics and their parameterizations give diverse weights between e.g. CH4 and CO2, and fixing the metric from one perspective makes it sub-optimal from another. As the aims of global climate policy involve some degree of ambiguity, it is not possible to determine a metric that would be optimal and consistent with all policy aims. This paper evaluates the cost implications of using predetermined metrics in cost-efficient mitigation scenarios. Three formulations of the 2 °C target, including both deterministic and stochastic approaches, shared a wide range of metric values for CH4 with which the mitigation costs are only slightly above the cost-optimal levels. Therefore, although ambiguity in current policy might prevent us from selecting an optimal metric, it is possible to select robust metric values that perform well with multiple policy targets.

  15. A new metric method-improved structural holes researches on software networks

    Science.gov (United States)

    Li, Bo; Zhao, Hai; Cai, Wei; Li, Dazhou; Li, Hui

    2013-03-01

    The scale of software systems increases quickly with the rapid development of software technologies. Hence, how to understand, measure, manage and control software structure is a great challenge for software engineering. There has been considerable research on software network metrics, such as C&K, MOOD and McCabe; the aim of this paper is to propose a new and better method for measuring software networks. The structural-holes metric is first introduced in this paper; it cannot be applied directly because of the modular characteristics of software networks. Hence, structural holes are redefined and improved in this paper, and the calculation process and results are described in detail. The results show that the new method better reflects the bridging role of vertices in a software network and that there is a significant correlation between degree and the improved structural-holes measure. Finally, a hydropower simulation system is taken as an example to show the validity of the new metric method.
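
    For context, Burt's classical structural-hole quantities are available in networkx, as sketched below on a toy dependency graph; the paper's redefined variant is not public and is not reproduced here.

        # Classical structural-hole measures on a small, hypothetical dependency graph.
        import networkx as nx

        G = nx.Graph([("core", "ui"), ("core", "db"), ("core", "net"),
                      ("ui", "db"), ("net", "log")])

        print(nx.effective_size(G))   # larger value -> vertex spans more structural holes
        print(nx.constraint(G))       # smaller value -> vertex is less constrained (more bridging)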

  16. Partial rectangular metric spaces and fixed point theorems.

    Science.gov (United States)

    Shukla, Satish

    2014-01-01

    The purpose of this paper is to introduce the concept of partial rectangular metric spaces as a generalization of rectangular metric and partial metric spaces. Some properties of partial rectangular metric spaces are studied, and some fixed point results for quasi-type contractions in partial rectangular metric spaces are proved. Some examples are given to illustrate the observed results.

  17. MO-G-BRE-06: Metrics of Success: Measuring Participation and Attitudes Related to Near-Miss Incident Learning Systems

    Energy Technology Data Exchange (ETDEWEB)

    Nyflot, MJ; Kusano, AS; Zeng, J; Carlson, JC; Novak, A; Sponseller, P; Jordan, L; Kane, G; Ford, EC [University of Washington, Seattle, WA (United States)

    2014-06-15

    Purpose: Interest in incident learning systems (ILS) for improving safety and quality in radiation oncology is growing, as evidenced by the upcoming release of the national ILS. However, an institution implementing such a system would benefit from quantitative metrics to evaluate performance and impact. We developed metrics to measure volume of reporting, severity of reported incidents, and changes in staff attitudes over time from implementation of our institutional ILS. Methods: We analyzed 2023 incidents from our departmental ILS from 2/2012–2/2014. Incidents were prospectively assigned a near-miss severity index (NMSI) at multidisciplinary review to evaluate the potential for error, ranging from 0 to 4 (no harm to critical). Total incidents reported, unique users reporting, and average NMSI were evaluated over time. Additionally, departmental safety attitudes were assessed through a 26-point survey adapted from the AHRQ Hospital Survey on Patient Safety Culture before, 12 months after, and 24 months after implementation of the incident learning system. Results: Participation in the ILS increased, as demonstrated by total reports (approximately 2.12 additional reports/month) and unique users reporting (0.51 additional users reporting/month). Also, the average NMSI of reports trended lower over time, significantly decreasing after 12 months of reporting (p<0.001) but with no significant change at months 18 or 24. In survey data, significant improvements were noted in many dimensions, including perceived barriers to reporting incidents such as concern of embarrassment (37% to 18%; p=0.02), as well as knowledge of what incidents to report, how to report them, and confidence that these reports were used to improve safety processes. Conclusion: Over a two-year period, our departmental ILS was used more frequently, incidents became less severe, and staff confidence in the system improved. The metrics used here may be useful for other institutions seeking to create or evaluate their own incident learning systems.

  19. A Kerr-NUT metric

    International Nuclear Information System (INIS)

    Vaidya, P.C.; Patel, L.K.; Bhatt, P.V.

    1976-01-01

    Using Galilean time and retarded distance as coordinates, the usual Kerr metric is expressed in a form similar to the Newman-Unti-Tamburino (NUT) metric. The combined Kerr-NUT metric is then investigated. In addition to the Kerr and NUT solutions of Einstein's equations, three other types of solutions are derived. These are (i) the radiating Kerr solution, (ii) the radiating NUT solution satisfying R_ik = σξ_iξ_k, ξ_iξ^i = 0, and (iii) the associated Kerr solution satisfying R_ik = 0. Solution (i) is distinct from and simpler than the one reported earlier by Vaidya and Patel (Phys. Rev. D7:3590 (1973)). Solutions (ii) and (iii) give line elements that have the axis of symmetry as a singular line. (author)

  20. CT-based liver volumetry in a porcine model: impact on clinical volumetry prior to living donated liver transplantation

    International Nuclear Information System (INIS)

    Frericks, B.B.J.; Kiene, T.; Stamm, G.; Shin, H.; Galanski, M.

    2004-01-01

    Purpose: Exact preoperative determination of the liver volume is of great importance prior to hepatobiliary surgery, especially in living donated liver transplantation (LDLT). In the current literature, a strong correlation between preoperatively calculated and intraoperatively measured liver volumes has been described. Such accuracy seems questionable, primarily due to a difference in the perfusion state of the liver in situ versus after explantation. The purpose of the study was to assess the influence of the perfusion state on liver volume and the validity of preoperative liver volumetry prior to LDLT. Methods: In an experimental study, 20 porcine livers were examined. The livers were weighed and their volumes were determined by water displacement prior to and after fluid infusion to achieve a pressure physiologically found in the liver veins. The liver volumes in the different perfusion states were calculated based on CT data. The calculated values were compared with the volume measured by water displacement and the weight of the livers. Results: Assessment of calculated CT volumes and water displacements at identical perfusion states showed a tight correlation and differed on average by 4 ± 5%. However, livers before and after fluid infusion showed a 33 ± 8% (350 ± 150 ml) difference in volume. Conclusion: CT volumetry acquires highly accurate data, as confirmed by water displacement studies. However, the perfusion state has a major impact on liver volume, which has to be accounted for in clinical use. (orig.) [de]

  1. Metabolic liver function measured in vivo by dynamic (18)F-FDGal PET/CT without arterial blood sampling.

    Science.gov (United States)

    Horsager, Jacob; Munk, Ole Lajord; Sørensen, Michael

    2015-01-01

    Metabolic liver function can be measured by dynamic PET/CT with the radio-labelled galactose-analogue 2-[(18)F]fluoro-2-deoxy-D-galactose ((18)F-FDGal) in terms of hepatic systemic clearance of (18)F-FDGal (K, ml blood/ml liver tissue/min). The method requires arterial blood sampling from a radial artery (arterial input function), and the aim of this study was to develop a method for extracting an image-derived, non-invasive input function from a volume of interest (VOI). Dynamic (18)F-FDGal PET/CT data from 16 subjects without liver disease (healthy subjects) and 16 patients with liver cirrhosis were included in the study. Five different input VOIs were tested: four in the abdominal aorta and one in the left ventricle of the heart. Arterial input function from manual blood sampling was available for all subjects. K*-values were calculated using time-activity curves (TACs) from each VOI as input and compared to the K-value calculated using arterial blood samples as input. Each input VOI was tested on PET data reconstructed with and without resolution modelling. All five image-derived input VOIs yielded K*-values that correlated significantly with K calculated using arterial blood samples. Furthermore, TACs from two different VOIs yielded K*-values that did not statistically deviate from K calculated using arterial blood samples. A semicircle drawn in the posterior part of the abdominal aorta was the only VOI that was successful for both healthy subjects and patients as well as for PET data reconstructed with and without resolution modelling. Metabolic liver function using (18)F-FDGal PET/CT can be measured without arterial blood samples by using input data from a semicircle VOI drawn in the posterior part of the abdominal aorta.

  2. Background metric in supergravity theories

    International Nuclear Information System (INIS)

    Yoneya, T.

    1978-01-01

    In supergravity theories, we investigate the conformal anomaly of the path-integral determinant and the problem of fermion zero modes in the presence of a nontrivial background metric. Except in SO(3)-invariant supergravity, there are nonvanishing conformal anomalies. As a consequence, amplitudes around the nontrivial background metric contain unpredictable arbitrariness. The fermion zero modes, which are explicitly constructed for the Euclidean Schwarzschild metric, are interpreted as an indication of the supersymmetric multiplet structure of a black hole. The degree of degeneracy of a black hole is 2^(4n) in SO(n) supergravity.

  3. Daylight metrics and energy savings

    Energy Technology Data Exchange (ETDEWEB)

    Mardaljevic, John; Heschong, Lisa; Lee, Eleanor

    2009-12-31

    The drive towards sustainable, low-energy buildings has increased the need for simple, yet accurate methods to evaluate whether a daylit building meets minimum standards for energy and human comfort performance. Current metrics account for neither the temporal and spatial aspects of daylight, nor occupants' comfort or interventions. This paper reviews the historical basis of current compliance methods for achieving daylit buildings, proposes a technical basis for the development of better metrics, and provides two case-study examples to stimulate dialogue on how metrics can be applied in a practical, real-world context.

  4. Changes in liver and spleen volume in various types of compensated liver cirrhosis

    International Nuclear Information System (INIS)

    Hoshino, Hiroshi; Tarao, Kazuo; Ito, Yoshihiko

    1988-01-01

    Liver and spleen volumes were measured by computed tomography in 8 healthy volunteers and in 21 patients with various types of compensated liver cirrhosis. Patients were divided into 3 groups (8 viral, 6 alcoholic and 6 combined) according to the histopathological findings of the liver and to the history of blood transfusion, HBs-Ag status and alcohol drinking habit. Each volume was calculated by adding together the area measurements obtained from successive transverse abdominal scans. In liver volume (mean±S.E.), the alcoholic group (1381±86 cm³) was significantly larger than the healthy volunteers (1027±27 cm³) (p<…); […] (p<…); […] was significantly larger than the healthy volunteers (84±4.7 cm³) (p<…); […] was significantly larger than the viral group (302±10 µm²) (p<…); […] was also significantly larger than the viral group (p<0.04). (author)
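
    The volumetry described, summing per-slice areas over successive scans, amounts to the following; the slice areas and spacing are hypothetical.

        # CT volumetry: volume = sum of per-slice areas x slice spacing.
        import numpy as np

        slice_areas_cm2 = np.array([0.0, 18.5, 42.0, 61.3, 58.7, 35.2, 9.1, 0.0])  # hypothetical
        slice_spacing_cm = 1.0

        volume_cm3 = slice_areas_cm2.sum() * slice_spacing_cm
        print(f"organ volume ~ {volume_cm3:.0f} cm^3")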

  5. Perioperative liver and spleen elastography in patients without chronic liver disease.

    Science.gov (United States)

    Eriksson, Sam; Borsiin, Hanna; Öberg, Carl-Fredrik; Brange, Hannes; Mijovic, Zoran; Sturesson, Christian

    2018-02-27

    To investigate changes in hepatic and splenic stiffness in patients without chronic liver disease during liver resection for hepatic tumors. Patients scheduled for liver resection for hepatic tumors were considered for enrollment. Tissue stiffness measurements on liver and spleen were conducted before and two days after liver resection using point shear-wave elastography. Histological analysis of the resected liver specimen was conducted in all patients, and patients with marked liver fibrosis were excluded from further study analysis. Patients were divided into groups depending on the size of resection and whether or not they had received preoperative chemotherapy. The relation between tissue stiffness and postoperative biochemistry was investigated. Results are presented as median (interquartile range). Thirty-five patients were included. The liver stiffness increased in patients undergoing a major resection from 1.41 (1.24-1.63) m/s to 2.20 (1.72-2.44) m/s (P = 0.001). No change in liver stiffness was found in patients undergoing a minor resection [1.31 (1.15-1.52) m/s vs 1.37 (1.12-1.77) m/s, P = 0.438]. A major resection resulted in a 16% (7%-33%) increase in spleen stiffness, more (P = 0.047) than after a minor resection [2% (-1% to 13%)]. Patients who underwent preoperative chemotherapy (n = 20) did not differ from the others in preoperative right liver lobe [1.31 (1.16-1.50) vs 1.38 (1.12-1.56) m/s, P = 0.569] or spleen [2.79 (2.33-3.11) vs 2.71 (2.37-2.86) m/s, P = 0.515] stiffness. Remnant liver stiffness on the second postoperative day did not show strong correlations with the maximum postoperative increase in bilirubin (R² = 0.154, Pearson's r = 0.392, P = 0.032) and international normalized ratio (R² = 0.285, Pearson's r = 0.534, P = 0.003). Liver and spleen stiffness increase after a major liver resection for hepatic tumors in patients without chronic liver disease.

  6. Liver Hypertension: Treatment in Infancy !

    Indian Academy of Sciences (India)

    Liver Disease > Heart. No good non-invasive method. Repeated measurements problematic. Drug efficacy 50% at best. No predictors of response. We Need YOU !!

  7. Relative Citation Ratio of Top Twenty Macedonian Biomedical Scientists in PubMed: A New Metric that Uses Citation Rates to Measure Influence at the Article Level

    Directory of Open Access Journals (Sweden)

    Mirko Spiroski

    2016-06-01

    Conclusion: The top twenty Macedonian biomedical scientists should be taken as an example of a new metric that uses citation rates to measure influence at the article level, rather than as a ranking of the best Macedonian biomedical scientists.

  8. Balanced metrics for vector bundles and polarised manifolds

    DEFF Research Database (Denmark)

    Garcia Fernandez, Mario; Ross, Julius

    2012-01-01

    We consider a notion of balanced metrics for triples (X, L, E) which depend on a parameter α, where X is a smooth complex manifold with an ample line bundle L and E is a holomorphic vector bundle over X. For a generic choice of α, we prove that the limit of a convergent sequence of balanced metrics leads to a Hermitian-Einstein metric on E and a constant scalar curvature Kähler metric in c1(L). For special values of α, limits of balanced metrics are solutions of a system of coupled equations relating a Hermitian-Einstein metric on E and a Kähler metric in c1(L). For this, we compute the top two…

  9. Distribution of water, fat, and metals in normal liver and in liver metastases influencing attenuation on computed tomography

    International Nuclear Information System (INIS)

    Ueda, J.; Kobayashi, Y.; Kenko, Y.; Koike, H.; Kubo, T.; Takano, Y.; Hara, K.; Sumitomo Hospital, Osaka; Osaka National Hospital

    1988-01-01

    The quantity of water, lipid and some metals was measured in autopsy specimens of 8 normal livers, 9 livers with fatty change, and 12 livers with metastases of various origins. These parameters contribute to the CT number measured in the liver. Water played a major role in the demonstration of liver metastases as low-density areas on CT. Other contributory factors include iron, magnesium and zinc. Lipid and calcium had no influence in this respect. Heavy accumulation of calcium in a metastatic lesion gives a high-density area on CT. However, even when a metastatic lesion was perceived on CT as a low-density area, the calcium content of the lesion was not always lower than that of the non-tumour region. (orig.)

  10. A condition metric for Eucalyptus woodland derived from expert evaluations.

    Science.gov (United States)

    Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D

    2018-02-01

    The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
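
    The model described, an ensemble of 30 bagged regression trees mapping 13 site variables to expert quality scores, can be sketched with scikit-learn; the training data below are synthetic stand-ins, not the authors' expert dataset.

        # Bagged regression-tree ensemble predicting quality scores from site variables.
        import numpy as np
        from sklearn.ensemble import BaggingRegressor
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(1)
        X = rng.random((200, 13))        # 13 site variables, e.g. shrub cover, native forb richness
        y = 10 * X[:, :4].mean(axis=1)   # synthetic stand-in for expert quality scores (0-10)

        model = BaggingRegressor(estimator=DecisionTreeRegressor(),  # 'estimator' in scikit-learn >= 1.2
                                 n_estimators=30, random_state=1)
        model.fit(X, y)
        print(model.predict(X[:3]))      # predicted quality for three sites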

  11. A City and National Metric measuring Isolation from the Global Market for Food Security Assessment

    Science.gov (United States)

    Brown, Molly E.; Silver, Kirk Coleman; Rajagopalan, Krishnan

    2013-01-01

    The World Bank has invested in infrastructure in developing countries for decades. This investment aims to reduce the isolation of markets, reducing both seasonality and variability in food availability and food prices. Here we combine city market price data, global distance to port, and country infrastructure data to create a new Isolation Index for countries and cities around the world. Our index quantifies the isolation of a city from the global market. We demonstrate that an index built at the country level can be applied at a sub-national level to quantify city isolation. In doing so, we offer policy makers an alternative metric to assess food insecurity. We compare our Isolation Index with other indices and economic data found in the literature. We show that our index measures economic isolation regardless of economic stability, using correlation analysis.
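
    One simple way to combine such inputs into a single isolation score is via standardized components, as sketched below; the variables and the equal weighting are illustrative assumptions, since the abstract does not give the authors' exact construction.

        # Toy isolation score: standardized distance-to-port minus standardized road density.
        import numpy as np

        dist_to_port_km = np.array([120.0, 850.0, 40.0, 1500.0])   # hypothetical cities
        road_density = np.array([0.8, 0.2, 1.1, 0.1])              # km of road per km^2, hypothetical

        def zscore(v):
            return (v - v.mean()) / v.std()

        isolation = zscore(dist_to_port_km) - zscore(road_density)  # higher -> more isolated
        print(isolation)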

  12. Measurement of the capability of DNA synthesis of human fetal liver cells by the assay of 3H-TdR incorporation

    International Nuclear Information System (INIS)

    Wang Tao; Ma Xiangrui; Wang Hongyun; Cao Xia

    1987-01-01

    The fetal liver is one of the major sites of hematopoiesis during gestation. Under erythropoietin (EPO) stimulation, erythroid precursor cells of the fetal liver proliferate and differentiate, and their metabolic function is enhanced. The technique of 3H-TdR incorporation was used to measure fetal liver cellular DNA synthesis. At EPO concentrations in the range of approximately 20-100 mU/ml, the counts of 3H-TdR incorporation into fetal liver cells increased. As the concentration of EPO increased further, however, the incorporation counts remained lower than those in the bone marrow of either the fetus or the adult. This suggests that erythroid precursors of the fetal liver have differentiated to later phases with a progressive accumulation of mature cells; therefore, both proliferation and metabolic function are somewhat decreased. Under EPO stimulation, however, erythroid precursors of the fetal liver can greatly increase DNA synthesis.

  13. Extending cosmology: the metric approach

    OpenAIRE

    Mendoza, S.

    2012-01-01

    Comment: 2012, Extending Cosmology: The Metric Approach, Open Questions in Cosmology; Review article for an Intech "Open questions in cosmology" book chapter (19 pages, 3 figures). Available from: http://www.intechopen.com/books/open-questions-in-cosmology/extending-cosmology-the-metric-approach

  14. Metrics, Media and Advertisers: Discussing Relationship

    Directory of Open Access Journals (Sweden)

    Marco Aurelio de Souza Rodrigues

    2014-11-01

    Full Text Available This study investigates how Brazilian advertisers are adapting to new media and its attention metrics. In-depth interviews were conducted with advertisers in 2009 and 2011. In 2009, new media and its metrics were celebrated as innovations that would increase advertising campaigns' overall efficiency. In 2011, this perception had changed: new media's profusion of metrics, once seen as an advantage, had started to compromise its ease of use and adoption. Among its findings, this study argues that there is an opportunity for media groups willing to shift from a product-focused strategy towards a customer-centric one, through the creation of new, simple and integrative metrics.

  15. Measurement of the coagulation dynamics of bovine liver using the modified microscopic Beer-Lambert law.

    Science.gov (United States)

    Terenji, Albert; Willmann, Stefan; Osterholz, Jens; Hering, Peter; Schwarzmaier, Hans-Joachim

    2005-06-01

    During heating, the optical properties of biological tissues change with the coagulation state. In this study, we propose a technique which uses these changes to monitor the coagulation process during laser-induced interstitial thermotherapy (LITT). Untreated and coagulated (water bath, temperatures between 35 °C and 90 °C for 20 minutes) samples of bovine liver tissue were examined using a Nd:YAG (λ = 1064 nm) frequency-domain reflectance spectrometer. We determined the time-integrated intensities (I(DC)) and the phase shifts (Φ) of the photon density waves after migration through the tissue. From these measured quantities, the time of flight (TOF) of the photons and the absorption coefficients of the samples were derived using the modified microscopic Beer-Lambert law. The absorption coefficients of the liver samples decreased significantly with temperature in the range between 50 °C and 70 °C. At the same time, the TOF of the investigated photons was found to be increased, indicating increased scattering. The coagulation dynamics could be well described using the Arrhenius formalism with an activation energy of 106 kJ/mol and a frequency factor of 1.59 × 10¹³ s⁻¹. Frequency-domain reflectance spectroscopy in combination with the modified microscopic Beer-Lambert (MBL) law is suitable for measuring heat-induced changes in the absorption and scattering properties of bovine liver in vitro. The technique may be used to monitor the coagulation dynamics during local thermo-coagulation in vivo. Copyright 2005 Wiley-Liss, Inc.
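
    Using the reported Arrhenius parameters (Ea = 106 kJ/mol, A = 1.59 × 10¹³ s⁻¹), the thermal damage integral for a given temperature history can be sketched as follows; the constant 65 °C, 20-minute exposure is a hypothetical example, not a protocol from the paper.

        # Arrhenius damage integral Omega = integral of A*exp(-Ea/(R*T(t))) dt.
        import numpy as np

        R, EA, A = 8.314, 106e3, 1.59e13      # J/(mol K), J/mol, 1/s (EA and A from the abstract)
        dt = 1.0                              # time step (s)
        T = np.full(1200, 273.15 + 65.0)      # hypothetical constant 65 degC, 20-minute bath

        rate = A * np.exp(-EA / (R * T))      # instantaneous damage rate (1/s)
        omega = rate.sum() * dt               # damage integral
        print(f"Omega = {omega:.2f}")         # Omega ~ 1 is commonly taken to mark coagulation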

  16. Evaluation of the performance of a micromethod for measuring urinary iodine by using six sigma quality metrics.

    Science.gov (United States)

    Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud

    2013-09-01

    The urinary iodine micromethod (UIMM) is a modification of the conventional method and its performance needs evaluation. UIMM performance was evaluated using the method validation and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM obeyed various method acceptability test criteria with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed that the Sigma metrics were at 2.75, 1.80, and 3.80 for 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc)Sigma metrics at all concentrations. Only one laboratory had TEcalc
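
    The Sigma metric used in this evaluation has the standard form Sigma = (TEa − |bias|)/CV, with all terms in percent; a minimal sketch with hypothetical inputs:

        # Standard Sigma quality metric from total allowable error, bias and CV (all in %).
        def sigma_metric(tea_pct, bias_pct, cv_pct):
            return (tea_pct - abs(bias_pct)) / cv_pct

        print(sigma_metric(tea_pct=26.0, bias_pct=4.0, cv_pct=8.0))  # -> 2.75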

  17. Application of localized {sup 31}P MRS saturation transfer at 7 T for measurement of ATP metabolism in the liver: reproducibility and initial clinical application in patients with non-alcoholic fatty liver disease

    Energy Technology Data Exchange (ETDEWEB)

    Valkovic, Ladislav [Medical University of Vienna, High Field MR Centre, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria); Slovak Academy of Sciences, Department of Imaging Methods, Institute of Measurement Science, Bratislava (Slovakia); Gajdosik, Martin; Chmelik, Marek; Trattnig, Siegfried [Medical University of Vienna, High Field MR Centre, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria); Traussnigg, Stefan; Kienbacher, Christian; Trauner, Michael [Medical University of Vienna, Division of Gastroenterology and Hepatology, Department of Internal Medicine III, Vienna (Austria); Wolf, Peter; Krebs, Michael [Medical University of Vienna, Division of Endocrinology and Metabolism, Department of Internal Medicine III, Vienna (Austria); Bogner, Wolfgang [Medical University of Vienna, High Field MR Centre, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria); Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA (United States); Krssak, Martin [Medical University of Vienna, High Field MR Centre, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria); Medical University of Vienna, Division of Endocrinology and Metabolism, Department of Internal Medicine III, Vienna (Austria)

    2014-07-15

    Saturation transfer (ST) phosphorus MR spectroscopy ({sup 31}P MRS) enables in vivo insight into energy metabolism and thus could identify liver conditions currently diagnosed only by biopsy. This study assesses the reproducibility of the localized {sup 31}P MRS ST in liver at 7 T and tests its potential for noninvasive differentiation of non-alcoholic fatty liver (NAFL) and steatohepatitis (NASH). After the ethics committee approval, reproducibility of the localized {sup 31}P MRS ST at 7 T and the biological variation of acquired hepato-metabolic parameters were assessed in healthy volunteers. Subsequently, 16 suspected NAFL/NASH patients underwent MRS measurements and diagnostic liver biopsy. The Pi-to-ATP exchange parameters were compared between the groups by a Mann-Whitney U test and related to the liver fat content estimated by a single-voxel proton ({sup 1}H) MRS, measured at 3 T. The mean exchange rate constant (k) in healthy volunteers was 0.31 ± 0.03 s{sup -1} with a coefficient of variation of 9.0 %. Significantly lower exchange rates (p < 0.01) were found in NASH patients (k = 0.17 ± 0.04 s{sup -1}) when compared to healthy volunteers, and NAFL patients (k = 0.30 ± 0.05 s{sup -1}). Significant correlation was found between the k value and the liver fat content (r = 0.824, p < 0.01). Our data suggest that the {sup 31}P MRS ST technique provides a tool for gaining insight into hepatic ATP metabolism and could contribute to the differentiation of NAFL and NASH. (orig.)

  18. Active Metric Learning for Supervised Classification

    OpenAIRE

    Kumaran, Krishnan; Papageorgiou, Dimitri; Chang, Yutong; Li, Minhan; Takáč, Martin

    2018-01-01

    Clustering and classification critically rely on distance metrics that provide meaningful comparisons between data points. We present mixed-integer optimization approaches to find optimal distance metrics that generalize the Mahalanobis metric extensively studied in the literature. Additionally, we generalize and improve upon leading methods by removing reliance on pre-designated "target neighbors," "triplets," and "similarity pairs." Another salient feature of our method is its ability to en...

  19. A complete metric in the set of mixing transformations

    International Nuclear Information System (INIS)

    Tikhonov, Sergei V

    2007-01-01

    A metric in the set of mixing measure-preserving transformations is introduced, making it a complete separable metric space. Dense and massive subsets of this space are investigated. A generic mixing transformation is proved to have simple singular spectrum and to be a mixing of arbitrary order; all its powers are disjoint. The convolution powers of the maximal spectral type for such transformations are mutually singular if the ratio of the corresponding exponents is greater than 2. It is shown that the conjugates of a generic mixing transformation are dense, as are also the conjugates of an arbitrary fixed Cartesian product. Bibliography: 28 titles.

  20. Age-related changes in liver, kidney, and spleen stiffness in healthy children measured with acoustic radiation force impulse imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Mi-Jung, E-mail: mjl1213@yuhs.ac [Department of Radiology and Research Institute of Radiological Science, Severance Children' s Hospital, Yonsei University, College of Medicine, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-752 (Korea, Republic of); Kim, Myung-Joon, E-mail: mjkim@yuhs.ac [Department of Radiology and Research Institute of Radiological Science, Severance Children' s Hospital, Yonsei University, College of Medicine, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-752 (Korea, Republic of); Han, Kyung Hwa, E-mail: khhan@yuhs.ac [Biostatistics Collaboration Unit, Yonsei University, College of Medicine, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-752 (Korea, Republic of); Yoon, Choon Sik, E-mail: yooncs58@yuhs.ac [Department of Radiology, Gangnam Severance Hospital, Yonsei University, College of Medicine, 211 Unjoo-ro, Gangnam-gu, Seoul (Korea, Republic of)

    2013-06-15

    Objectives: To evaluate the feasibility and age-related changes of shear wave velocity (SWV) in normal livers, kidneys, and spleens of children using acoustic radiation force impulse (ARFI) imaging. Materials and methods: Healthy pediatric volunteers prospectively underwent abdominal ultrasonography and ARFI. The subjects were divided into three groups according to age: group 1: <5 years old; group 2: 5–10 years old; and group 3: >10 years old. The SWV was measured using a 4–9 MHz linear probe for group 1 and a 1–4 MHz convex probe for groups 2 and 3. Three valid SWV measurements were acquired for each organ. Results: Two hundred and two children (92 male, 110 female) with an average age of 8.1 years (±4.7) were included in this study, with a successful measurement rate of 97% (196/202). The mean SWVs were 1.12 m/s for the liver, 2.19 m/s for the right kidney, 2.33 m/s for the left kidney, and 2.25 m/s for the spleen. The SWVs for the right and left kidneys and the spleen showed age-related changes in all children (p < 0.001). The SWVs for the kidneys increased with age in group 1, and those for the liver changed with age in group 3. Conclusions: ARFI measurements are feasible for solid abdominal organs in children using high or low frequency probes. The mean ARFI SWV for the kidneys increased with age in children less than 5 years of age, while in the liver it changed with age in children over 10.

  1. Use of two population metrics clarifies biodiversity dynamics in large-scale monitoring: the case of trees in Japanese old-growth forests: the need for multiple population metrics in large-scale monitoring.

    Science.gov (United States)

    Ogawa, Mifuyu; Yamaura, Yuichi; Abe, Shin; Hoshino, Daisuke; Hoshizaki, Kazuhiko; Iida, Shigeo; Katsuki, Toshio; Masaki, Takashi; Niiyama, Kaoru; Saito, Satoshi; Sakai, Takeshi; Sugita, Hisashi; Tanouchi, Hiroyuki; Amano, Tatsuya; Taki, Hisatomo; Okabe, Kimiko

    2011-07-01

    Many indicators/indices provide information on whether the 2010 biodiversity target of reducing declines in biodiversity has been achieved. The strengths and limitations of the various measures used to assess progress toward this target are now being discussed. Biodiversity dynamics are often evaluated by a single biological population metric, such as the abundance of each species. Here we examined tree population dynamics of 52 families (192 species) at 11 research sites (three vegetation zones) of Japanese old-growth forests using two population metrics: number of stems and basal area. We calculated indices that track the rate of change in all tree species by taking the geometric mean of changes in population metrics between the 1990s and the 2000s at the national level and at the levels of the vegetation zone and family. We specifically focused on whether indices based on these two metrics behaved similarly. The indices showed that (1) the number of stems declined, whereas basal area did not change at the national level and (2) the degree of change in the indices varied by vegetation zone and family. These results suggest that Japanese old-growth forests have not degraded and may even be developing in some vegetation zones, and indicate that the use of a single population metric (or indicator/index) may be insufficient to precisely understand the state of biodiversity. It is therefore important to incorporate more metrics into monitoring schemes to overcome the risk of misunderstanding or misrepresenting biodiversity dynamics.
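
    To make the index construction concrete, here is a minimal Python sketch (hypothetical data and function name; the study's exact weighting scheme is not given in the record) of a geometric-mean index of change in a population metric across species:

    import math

    def geometric_mean_index(before, after):
        """Geometric mean, across species, of the ratio of a population
        metric (e.g., stem count or basal area) between two censuses."""
        ratios = [after[sp] / before[sp] for sp in before
                  if before[sp] > 0 and after.get(sp, 0) > 0]
        return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

    # Hypothetical stem counts in the 1990s and 2000s
    stems_1990s = {"Fagus": 120, "Quercus": 80, "Abies": 45}
    stems_2000s = {"Fagus": 110, "Quercus": 78, "Abies": 47}
    print(geometric_mean_index(stems_1990s, stems_2000s))  # < 1 indicates decline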

  2. Multimetric indices: How many metrics?

    Science.gov (United States)

    Multimetric indices (MMIs) often include 5 to 15 metrics, each representing a different attribute of assemblage condition, such as species diversity, tolerant taxa, and nonnative taxa. Is there an optimal number of metrics for MMIs? To explore this question, I created 1000 9-met...

  3. Association of liver enzymes and computed tomography markers of liver steatosis with familial longevity.

    Directory of Open Access Journals (Sweden)

    Michiel Sala

    Full Text Available OBJECTIVE: Familial longevity is marked by enhanced peripheral but not hepatic insulin sensitivity. The liver has a critical role in the pathogenesis of hepatic insulin resistance. Therefore we hypothesized that the extent of liver steatosis would be similar between offspring of long-lived siblings and control subjects. To test our hypothesis, we investigated the extent of liver steatosis in non-diabetic offspring of long-lived siblings and age-matched controls by measuring liver enzymes in plasma and liver fat by computed tomography (CT). RESEARCH DESIGN AND METHODS: We measured nonfasting alanine transaminase (ALT), aspartate aminotransferase (AST), and γ-glutamyl transferase (GGT) in 1625 subjects (736 men, mean age 59.1 years) from the Leiden Longevity Study, comprising offspring of long-lived siblings and partners thereof. In a random subgroup, fasting serum samples (n = 230) were evaluated and CT was performed (n = 268) for assessment of liver-spleen (L/S) ratio and the prevalence of moderate-to-severe non-alcoholic fatty liver disease (NAFLD). Linear mixed model analysis was performed adjusting for age, gender, body mass index, smoking, use of alcohol and hepatotoxic medication, and correlation of sibling relationship. RESULTS: Offspring of long-lived siblings had higher nonfasting ALT levels as compared to control subjects (24.3 mmol/L versus 23.2 mmol/L, p = 0.03), while AST and GGT levels were similar between the two groups. All fasting liver enzyme levels were similar between the two groups. CT L/S ratio and prevalence of moderate-to-severe NAFLD was similar between groups (1.12 vs 1.14, p = 0.25 and 8% versus 8%, p = 0.91, respectively). CONCLUSIONS: Except for nonfasting levels of ALT, which were slightly higher in the offspring of long-lived siblings compared to controls, no differences were found between groups in the extent of liver steatosis, as assessed with liver biochemical tests and CT. Thus, our data indicate that the extent of liver

  4. Association of liver enzymes and computed tomography markers of liver steatosis with familial longevity.

    Science.gov (United States)

    Sala, Michiel; Kroft, Lucia J M; Röell, Boudewijn; van der Grond, Jeroen; Slagboom, P Eline; Mooijaart, Simon P; de Roos, Albert; van Heemst, Diana

    2014-01-01

    Familial longevity is marked by enhanced peripheral but not hepatic insulin sensitivity. The liver has a critical role in the pathogenesis of hepatic insulin resistance. Therefore we hypothesized that the extent of liver steatosis would be similar between offspring of long-lived siblings and control subjects. To test our hypothesis, we investigated the extent of liver steatosis in non-diabetic offspring of long-lived siblings and age-matched controls by measuring liver enzymes in plasma and liver fat by computed tomography (CT). We measured nonfasting alanine transaminase (ALT), aspartate aminotransferase (AST), and γ-glutamyl transferase (GGT) in 1625 subjects (736 men, mean age 59.1 years) from the Leiden Longevity Study, comprising offspring of long-lived siblings and partners thereof. In a random subgroup, fasting serum samples (n = 230) were evaluated and CT was performed (n = 268) for assessment of liver-spleen (L/S) ratio and the prevalence of moderate-to-severe non-alcoholic fatty liver disease (NAFLD). Linear mixed model analysis was performed adjusting for age, gender, body mass index, smoking, use of alcohol and hepatotoxic medication, and correlation of sibling relationship. Offspring of long-lived siblings had higher nonfasting ALT levels as compared to control subjects (24.3 mmol/L versus 23.2 mmol/L, p = 0.03), while AST and GGT levels were similar between the two groups. All fasting liver enzyme levels were similar between the two groups. CT L/S ratio and prevalence of moderate-to-severe NAFLD was similar between groups (1.12 vs 1.14, p = 0.25 and 8% versus 8%, p = 0.91, respectively). Except for nonfasting levels of ALT, which were slightly higher in the offspring of long-lived siblings compared to controls, no differences were found between groups in the extent of liver steatosis, as assessed with liver biochemical tests and CT. Thus, our data indicate that the extent of liver steatosis is similar between offspring of long-lived siblings and

  5. Factors influencing liver and spleen volume changes after donor hepatectomy for living donor liver transplantation

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Ji Hee; Ryeom, Hunku; Song, Jung Hup [Kyungpook National University Hospital, Daegu (Korea, Republic of)

    2013-11-15

    To define the changes in liver and spleen volumes in the early postoperative period after partial liver donation for living-donor liver transplantation (LDLT) and to determine factors that influence liver and spleen volume changes. Twenty-seven donors who underwent partial hepatectomy for LDLT were included in this study. The rates of liver and spleen volume change, measured with CT volumetry, were correlated with several factors. The analyzed factors included the indocyanine green (ICG) retention rate at 15 minutes after ICG administration, preoperative platelet count, preoperative liver and splenic volumes, resected liver volume, resected-to-whole liver volume ratio (LV{sub R}/LV{sub W}), resected liver volume to the sum of whole liver and spleen volume ratio [LV{sub R}/(LV{sub W} + SV{sub 0})], and pre- and post-hepatectomy portal venous pressures. In all hepatectomy donors, the volumes of the remnant liver and spleen were increased (increase rates: 59.5 ± 50.5% and 47.9 ± 22.6%, respectively). The increment rate of the remnant liver volume revealed a positive correlation with LV{sub R}/LV{sub W} (r = 0.759, p < 0.01). The other analyzed factors showed no correlation with changes in liver and spleen volumes. The spleen and remnant liver volumes were increased at CT volumetry performed 2 weeks after partial liver donation. Among the various analyzed factors, LV{sub R}/LV{sub W} influences the increment rate of the remnant liver volume.

  6. Eyetracking Metrics in Young Onset Alzheimer’s Disease: A Window into Cognitive Visual Functions

    Science.gov (United States)

    Pavisic, Ivanna M.; Firth, Nicholas C.; Parsons, Samuel; Rego, David Martinez; Shakespeare, Timothy J.; Yong, Keir X. X.; Slattery, Catherine F.; Paterson, Ross W.; Foulkes, Alexander J. M.; Macpherson, Kirsty; Carton, Amelia M.; Alexander, Daniel C.; Shawe-Taylor, John; Fox, Nick C.; Schott, Jonathan M.; Crutch, Sebastian J.; Primativo, Silvia

    2017-01-01

    Young onset Alzheimer’s disease (YOAD) is defined as symptom onset before the age of 65 years and is particularly associated with phenotypic heterogeneity. Atypical presentations, such as the clinico-radiological visual syndrome posterior cortical atrophy (PCA), often lead to delays in accurate diagnosis. Eyetracking has been used to demonstrate basic oculomotor impairments in individuals with dementia. In the present study, we aim to explore the relationship between eyetracking metrics and standard tests of visual cognition in individuals with YOAD. Fifty-seven participants were included: 36 individuals with YOAD (n = 26 typical AD; n = 10 PCA) and 21 age-matched healthy controls. Participants completed three eyetracking experiments: fixation, pro-saccade, and smooth pursuit tasks. Summary metrics were used as outcome measures and their predictive value explored by looking at correlations with visuoperceptual and visuospatial metrics. Significant correlations between eyetracking metrics and standard visual cognitive estimates are reported. A machine-learning approach using a classification method based on the smooth pursuit raw eyetracking data discriminates patients from controls with approximately 95% accuracy in cross-validation tests. Results suggest that eyetracking paradigms of a relatively simple and specific nature provide measures not only reflecting basic oculomotor characteristics but also predicting higher order visuospatial and visuoperceptual impairments. Eyetracking measures can represent extremely useful markers during the diagnostic phase and may be exploited as potential outcome measures for clinical trials. PMID:28824534

  7. Asset Decommissioning Risk Metrics for Floating Structures in the Gulf of Mexico.

    Science.gov (United States)

    Kaiser, Mark J

    2015-08-01

    Public companies in the United States are required to report standardized values of their proved reserves and asset retirement obligations on an annual basis. When compared, these two measures provide an aggregate indicator of corporate decommissioning risk but, because of their consolidated nature, cannot readily be decomposed at a more granular level. The purpose of this article is to introduce a decommissioning risk metric defined in terms of the ratio of the expected value of an asset's reserves to its expected cost of decommissioning. Asset decommissioning risk (ADR) is more difficult to compute than a consolidated corporate risk measure, but can be used to quantify the decommissioning risk of structures and to perform regional comparisons, and also provides market signals of future decommissioning activity. We formalize two risk metrics for decommissioning and apply the ADR metric to the deepwater Gulf of Mexico (GOM) floater inventory. Deepwater oil and gas structures are expensive to construct, and at the end of their useful life, will be expensive to decommission. The value of proved reserves for the 42 floating structures in the GOM circa January 2013 is estimated to range between $37 and $80 billion for future oil prices between 60 and 120 $/bbl, which is about 10 to 20 times greater than the estimated $4.3 billion to decommission the inventory. Eni's Allegheny and MC Offshore's Jolliet tension leg platforms have ADR metrics less than one and are approaching the end of their useful life. Application of the proposed metrics in the regulatory review of supplemental bonding requirements in the U.S. Outer Continental Shelf is suggested to complement the current suite of financial metrics employed. © 2015 Society for Risk Analysis.
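
    The core ratio is simple to compute. A minimal Python sketch with hypothetical numbers (the article's valuation of reserves under different price scenarios is considerably more involved):

    def asset_decommissioning_risk(reserve_value, decommissioning_cost):
        """ADR = expected value of remaining reserves divided by expected
        decommissioning cost; values below 1 flag assets whose reserves
        no longer cover their removal."""
        return reserve_value / decommissioning_cost

    # Hypothetical floater: $1.2 billion in reserves, $150 million to decommission
    print(asset_decommissioning_risk(1.2e9, 1.5e8))  # ADR = 8.0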

  8. Metrics for Polyphonic Sound Event Detection

    Directory of Open Access Journals (Sweden)

    Annamaria Mesaros

    2016-05-01

    Full Text Available This paper presents and discusses various metrics proposed for evaluation of polyphonic sound event detection systems used in realistic situations where there are typically multiple sound sources active simultaneously. The system output in this case contains overlapping events, marked as multiple sounds detected as being active at the same time. The polyphonic system output requires a suitable procedure for evaluation against a reference. Metrics from neighboring fields such as speech recognition and speaker diarization can be used, but they need to be partially redefined to deal with the overlapping events. We present a review of the most common metrics in the field and the way they are adapted and interpreted in the polyphonic case. We discuss segment-based and event-based definitions of each metric and explain the consequences of instance-based and class-based averaging using a case study. In parallel, we provide a toolbox containing implementations of presented metrics.
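
    For instance, a segment-based F-score can be computed directly from binary activity matrices, which handle overlapping events naturally. A minimal Python sketch (the matrix layout is an assumption; the paper's toolbox covers many more metrics, including the event-based variants):

    import numpy as np

    def segment_based_f1(reference, estimated):
        """Segment-based F1: reference and estimated are (n_segments,
        n_classes) binary activity matrices."""
        reference = np.asarray(reference)
        estimated = np.asarray(estimated)
        tp = np.logical_and(reference == 1, estimated == 1).sum()
        fp = np.logical_and(reference == 0, estimated == 1).sum()
        fn = np.logical_and(reference == 1, estimated == 0).sum()
        denom = 2 * tp + fp + fn
        return 2 * tp / denom if denom else 0.0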

  9. Building Cost and Performance Metrics: Data Collection Protocol, Revision 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, Kimberly M.; Solana, Amy E.; Spees, Kathleen L.

    2005-09-29

    This technical report describes the process for selecting and applying the building cost and performance metrics for measuring sustainably designed buildings in comparison to traditionally designed buildings.

  10. Robustness Metrics: Consolidating the multiple approaches to quantify Robustness

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.

    2016-01-01

    …robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners in understanding the different types of robustness metrics…

  11. Common Metrics for Human-Robot Interaction

    Science.gov (United States)

    Steinfeld, Aaron; Lewis, Michael; Fong, Terrence; Scholtz, Jean; Schultz, Alan; Kaber, David; Goodrich, Michael

    2006-01-01

    This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress.

  12. Defining normal liver stiffness range in a normal healthy Chinese population without liver disease.

    Directory of Open Access Journals (Sweden)

    James Fung

    Full Text Available BACKGROUND: For patients with chronic liver disease, different optimal liver stiffness cut-off values correspond to different stages of fibrosis, which are specific for the underlying liver disease and population. AIMS: To establish the normal ranges of liver stiffness in the healthy Chinese population without underlying liver disease. METHODS: This is a prospective cross sectional study of 2,528 healthy volunteers recruited from the general population and the Red Cross Transfusion Center in Hong Kong. All participants underwent a comprehensive questionnaire survey and measurement of weight, height, and blood pressure. Fasting liver function tests, glucose, and cholesterol were performed. Abdominal ultrasound and transient elastography were performed on all participants. RESULTS: Of the 2,528 subjects, 1,998 were excluded with either abnormal liver parenchyma on ultrasound, chronic medical condition, abnormal blood tests including liver enzymes, fasting glucose, fasting cholesterol, high body mass index, high blood pressure, or invalid liver stiffness scan. The reference range for the 530 subjects without known liver disease was 2.3 to 5.9 kPa (mean 4.1, SD 0.89). The median liver stiffness was higher in males compared with females (4.3 vs 4.0 kPa, respectively) and lower in subjects older than 55 years (p = 0.001). CONCLUSIONS: The healthy reference range for liver stiffness in the Chinese population is 2.3 to 5.9 kPa. Female gender and older age were associated with a lower median liver stiffness.
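
    The quoted limits are consistent with a mean ± 2 SD reference interval: 4.1 − 2 × 0.89 = 2.32 ≈ 2.3 kPa and 4.1 + 2 × 0.89 = 5.88 ≈ 5.9 kPa.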

  13. Incorporating big data into treatment plan evaluation: Development of statistical DVH metrics and visualization dashboards.

    Science.gov (United States)

    Mayo, Charles S; Yao, John; Eisbruch, Avraham; Balter, James M; Litzenberg, Dale W; Matuszak, Martha M; Kessler, Marc L; Weyburn, Grant; Anderson, Carlos J; Owen, Dawn; Jackson, William C; Haken, Randall Ten

    2017-01-01

    To develop statistical dose-volume histogram (DVH)-based metrics and a visualization method to quantify the comparison of treatment plans with historical experience and among different institutions. The descriptive statistical summary (ie, median, first and third quartiles, and 95% confidence intervals) of volume-normalized DVH curve sets of past experiences was visualized through the creation of statistical DVH plots. Detailed distribution parameters were calculated and stored in JavaScript Object Notation files to facilitate management, including transfer and potential multi-institutional comparisons. In the treatment plan evaluation, structure DVH curves were scored against computed statistical DVHs and weighted experience scores (WESs). Individual, clinically used, DVH-based metrics were integrated into a generalized evaluation metric (GEM) as a priority-weighted sum of normalized incomplete gamma functions. Historical treatment plans for 351 patients with head and neck cancer, 104 with prostate cancer who were treated with conventional fractionation, and 94 with liver cancer who were treated with stereotactic body radiation therapy were analyzed to demonstrate the usage of statistical DVH, WES, and GEM in a plan evaluation. A shareable dashboard plugin was created to display statistical DVHs and integrate GEM and WES scores into a clinical plan evaluation within the treatment planning system. Benchmarking with normal tissue complication probability scores was carried out to compare the behavior of GEM and WES scores. DVH curves from historical treatment plans were characterized and presented, with difficult-to-spare structures (ie, frequently compromised organs at risk) identified. Quantitative evaluations by GEM and/or WES compared favorably with the normal tissue complication probability Lyman-Kutcher-Burman model, transforming a set of discrete threshold-priority limits into a continuous model reflecting physician objectives and historical experience
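
    As a sketch of the scoring idea only (the record does not give the exact parameterization of the generalized evaluation metric, so the shape parameter and normalization below are assumptions), each DVH metric can be scored against its clinical limit with a regularized incomplete gamma function and the scores combined as a priority-weighted sum:

    import numpy as np
    from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

    def gem(metric_values, limits, priorities, shape=4.0):
        """Priority-weighted sum of normalized incomplete gamma scores:
        each score is near 0 well under its limit and approaches 1 past it."""
        v = np.asarray(metric_values, dtype=float)
        lim = np.asarray(limits, dtype=float)
        w = np.asarray(priorities, dtype=float)
        scores = gammainc(shape, shape * v / lim)
        return float(np.sum(w * scores) / np.sum(w))

    # Hypothetical: two OAR metrics at 60% and 105% of their limits, equal priority
    print(gem([30.0, 52.5], [50.0, 50.0], [1.0, 1.0]))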

  14. Metric adjusted skew information

    DEFF Research Database (Denmark)

    Hansen, Frank

    2008-01-01

    We extend the concept of Wigner-Yanase-Dyson skew information to something we call "metric adjusted skew information" (of a state with respect to a conserved observable). This "skew information" is intended to be a non-negative quantity bounded by the variance (of an observable in a state) that vanishes for observables commuting with the state. We show that the skew information is a convex function on the manifold of states. It also satisfies other requirements, proposed by Wigner and Yanase, for an effective measure-of-information content of a state relative to a conserved observable. We establish a connection between the geometrical formulation of quantum statistics as proposed by Chentsov and Morozova and measures of quantum information as introduced by Wigner and Yanase and extended in this article. We show that the set of normalized Morozova-Chentsov functions describing the possible…

  15. Narrowing the Gap Between QoS Metrics and Web QoE Using Above-the-fold Metrics

    OpenAIRE

    da Hora, Diego Neves; Asrese, Alemnew; Christophides, Vassilis; Teixeira, Renata; Rossi, Dario

    2018-01-01

    International audience; Page load time (PLT) is still the most common application Quality of Service (QoS) metric to estimate the Quality of Experience (QoE) of Web users. Yet, recent literature abounds with proposals for alternative metrics (e.g., Above The Fold, SpeedIndex and variants) that aim at better estimating user QoE. The main purpose of this work is thus to thoroughly investigate a mapping between established and recently proposed objective metrics and user QoE. We obtain ground tr...

  16. Examination of High Resolution Channel Topography to Determine Suitable Metrics to Characterize Morphological Complexity

    Science.gov (United States)

    Stewart, R. L.; Gaeuman, D.

    2015-12-01

    Complex bed morphology is deemed necessary to restore salmonid habitats, yet quantifiable metrics that capture channel complexity have remained elusive. This work utilizes high resolution topographic data from 40 miles of the Trinity River in northern California to determine a suitable metric for characterizing morphological complexity at the reach scale. The study area is segregated into reaches defined by individual riffle pool units or aggregates of several consecutive units. Potential measures of complexity include rugosity and depth statistics such as standard deviation and interquartile range, yet previous research has shown these metrics are scale dependent and subject to sampling density-based bias. The effect of sampling density on the present analysis has been reduced by representing the high resolution topographic data as a 3' x 3' raster so that all areas are equally sampled. Standard rugosity, defined as the three-dimensional surface area divided by projected area, has been shown to be dependent on average depth. We therefore define R*, an empirically depth-corrected rugosity metric in which rugosity is corrected using an empirical relationship, based on linear regression, between the standard rugosity metric and average depth. By removing the dependence on depth using a regression based on the study reach, R* provides a measure of reach-scale complexity relative to the entire study area. The interquartile range of depths is also depth-dependent, so we defined a non-dimensional metric (IQR*) as the interquartile range divided by median depth. These metrics are calculated to develop rankings of channel complexity, which are found to agree closely with perceived channel complexity observed in the field. Current efforts combine these measures of morphological complexity with salmonid habitat suitability to evaluate the effects of channel complexity on the various life stages of salmonids. Future work will investigate the downstream sequencing of channel
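
    A minimal Python sketch of the two metrics as described (the function names and the gridded-raster input are assumptions; the study fits its regression over its own reaches):

    import numpy as np

    def rugosity(z, cell):
        """Standard rugosity: 3-D surface area of a gridded bed elevation
        raster divided by its projected (planimetric) area."""
        dzdy, dzdx = np.gradient(z, cell)
        surface = np.sum(np.sqrt(1.0 + dzdx**2 + dzdy**2)) * cell**2
        return surface / (z.size * cell**2)

    def r_star(r, mean_depth):
        """R*: residuals of reach rugosity about a linear regression on
        mean depth, removing the depth dependence empirically."""
        slope, intercept = np.polyfit(mean_depth, r, 1)
        return r - (slope * mean_depth + intercept)

    def iqr_star(depths):
        """IQR*: interquartile range of depths divided by median depth."""
        q1, q3 = np.percentile(depths, [25, 75])
        return (q3 - q1) / np.median(depths)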

  17. Liver Hypertension: Causes, Consequences and Prevention

    Indian Academy of Sciences (India)

    Table of contents. Liver Hypertension: Causes, Consequences and Prevention · Heart Pressure: Blood Pressure · If you continue to have high BP · Doctor Measures Blood Pressure (BP): Medicines to Decrease BP · LIVER ~ ~ LIFE Rightists vs. Leftists · Liver Spleen - Splanchnic ...

  18. Usefulness of acoustic radiation force impulse and Fibrotest in liver fibrosis assessment after liver transplant.

    Science.gov (United States)

    Bignulin, Sara; Falleti, Edmondo; Cmet, Sara; Cappello, Dario; Cussigh, Annarosa; Lenisa, Ilaria; Dissegna, Denis; Pugliese, Fabio; Vivarelli, Cinzia; Fabris, Carlo; Fabris, Carlo; Toniutto, Pierluigi

    2016-01-01

    Background and rationale. Acoustic radiation force impulse (ARFI) is a non-invasive tool used in the evaluation of liver fibrosis in HCV-positive immune-competent patients. This study aimed to assess the accuracy of ARFI in discriminating liver transplanted patients with different graft fibrosis severity and to verify whether ARFI, possibly combined with non-invasive biochemical tests, could spare liver biopsies. This prospective study included 51 HCV-positive liver transplanted patients who consecutively underwent annual liver biopsy concomitantly with ARFI and the blood chemistry tests needed to calculate several non-invasive liver fibrosis tests. Overall, ARFI showed an AUC of 0.885 in discriminating between patients without or with significant fibrosis (Ishak score 0-2 vs. 3-6). Using a cut-off of 1.365 m/s, ARFI possesses a negative predictive value of 100% in identifying patients without significant fibrosis. The AUC for Fibrotest was 0.848 in discriminating patients with Ishak fibrosis score 0-2 vs. 3-6. The combined assessment of ARFI and Fibrotest did not improve the results obtained by ARFI alone. ARFI measurement in HCV-positive liver transplanted patients can be considered an easy and accurate non-invasive tool for identifying patients with a benign course of HCV recurrence.

  19. A study of metrics of distance and correlation between ranked lists for compositionality detection

    DEFF Research Database (Denmark)

    Lioma, Christina; Hansen, Niels Dalum

    2017-01-01

    …affects the measurement of semantic similarity. We propose a new compositionality detection method that represents phrases as ranked lists of term weights. Our method approximates the semantic similarity between two ranked list representations using a range of well-known distance and correlation metrics… of compositionality using any of the distance and correlation metrics considered.

  20. Bioartificial liver and liver transplantation: new modalities for the treatment of liver failure

    Directory of Open Access Journals (Sweden)

    DING Yitao

    2017-09-01

    Full Text Available The main features of liver failure are extensive necrosis of hepatocytes, rapid disease progression, and poor prognosis, and at present there are no effective drugs and methods for the treatment of liver failure. This article summarizes four treatment methods for liver failure, i.e., medical treatment, cell transplantation, liver transplantation, and artificial liver support therapy, and elaborates on the existing treatment methods. The current medical treatment regimen should be optimized; cell transplantation has not been used in clinical practice; liver transplantation is the most effective method, but it is limited by donor liver shortage and high costs; artificial liver can effectively remove toxic substances from the human body. Therefore, this article puts forward artificial liver as a bridge to liver transplantation; artificial liver can buy time for liver regeneration or liver transplantation and prolong patients' survival time, and thus has a promising future. The new treatment modality of bioartificial liver combined with liver transplantation may bring good news to patients with liver failure.

  1. Factor structure of the Tomimatsu-Sato metrics

    International Nuclear Information System (INIS)

    Perjes, Z.

    1989-02-01

    Based on an earlier result stating that δ = 3 Tomimatsu-Sato (TS) metrics can be factored over the field of integers, an analogous representation for higher TS metrics was sought. It is shown that the factoring property of TS metrics follows from the structure of special Hankel determinants. A set of linear algebraic equations determining the factors was defined, and the factors of the first five TS metrics were tabulated, together with their primitive factors. (R.P.) 4 refs.; 2 tabs

  2. Estimating fish swimming metrics and metabolic rates with accelerometers: the influence of sampling frequency.

    Science.gov (United States)

    Brownscombe, J W; Lennox, R J; Danylchuk, A J; Cooke, S J

    2018-06-21

    Accelerometry is growing in popularity for remotely measuring fish swimming metrics, but appropriate sampling frequencies for accurately measuring these metrics are not well studied. This research examined the influence of sampling frequency (1-25 Hz) with tri-axial accelerometer biologgers on estimates of overall dynamic body acceleration (ODBA), tail-beat frequency, swimming speed and metabolic rate of bonefish Albula vulpes in a swim-tunnel respirometer and free-swimming in a wetland mesocosm. In the swim tunnel, sampling frequencies of ≥ 5 Hz were sufficient to establish strong relationships between ODBA, swimming speed and metabolic rate. However, in free-swimming bonefish, estimates of metabolic rate were more variable below 10 Hz. Sampling frequencies should be at least twice the maximum tail-beat frequency to estimate this metric effectively, which is generally higher than those required to estimate ODBA, swimming speed and metabolic rate. While optimal sampling frequency probably varies among species due to tail-beat frequency and swimming style, this study provides a reference point with a medium body-sized sub-carangiform teleost fish, enabling researchers to measure these metrics effectively and maximize study duration. This article is protected by copyright. All rights reserved.
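
    ODBA itself is straightforward to compute once the static (gravitational) component is separated out. A minimal Python sketch (the 2 s smoothing window is an assumed, commonly used choice, not taken from the study):

    import numpy as np

    def odba(ax, ay, az, fs=25, window_s=2.0):
        """Overall dynamic body acceleration: subtract a running-mean
        estimate of the static component on each axis, then sum the
        absolute dynamic components."""
        n = max(1, int(fs * window_s))
        kernel = np.ones(n) / n
        total = 0.0
        for a in (np.asarray(ax, float), np.asarray(ay, float), np.asarray(az, float)):
            static = np.convolve(a, kernel, mode="same")
            total = total + np.abs(a - static)
        return total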

  3. ST-intuitionistic fuzzy metric space with properties

    Science.gov (United States)

    Arora, Sahil; Kumar, Tanuj

    2017-07-01

    In this paper, we define ST-intuitionistic fuzzy metric space, and the notions of convergence and completeness of Cauchy sequences are studied. Further, we prove some properties of ST-intuitionistic fuzzy metric spaces. Finally, we introduce the concept of symmetric ST-intuitionistic fuzzy metric space.

  4. Comparison of Tc-99m labeled liver and liver pate as markers for solid-phase gastric emptying

    International Nuclear Information System (INIS)

    Christian, P.E.; Moore, J.G.; Datz, F.L.

    1984-01-01

    A radionuclide marker for studies of solid-phase gastric emptying should have a high labeling efficiency and remain relatively stable during gastric emptying. The availability of materials and the ease of preparation are also considerations in selecting radionuclide markers. The stability of intracellularly labeled chicken liver, surface-labeled chicken liver, and labeled pureed meat (liver pate) incubated with hydrochloric acid solution or gastric juice have been compared. Intracellularly labeled chicken liver and labeled liver pate were also compared in gastric emptying studies in humans. In vitro results demonstrated labeling efficiencies greater than 92% for both intracellularly labeled liver and labeled liver pate. The pate labeled with Tc-99m sulfur colloid was more stable than Tc-99m surface-labeled liver in vitro and its preparation was easier than with the intracellular labeling technique. Gastric emptying studies on normal subjects demonstrated equal performance of the intracellularly labeled liver and the labeled liver pate. Labeled liver pate is thus an alternative to intracellularly labeled chicken liver in measuring solid-phase gastric emptying

  5. Some Metric Properties of Planar Gaussian Free Field

    Science.gov (United States)

    Goswami, Subhajit

    In this thesis we study the properties of some metrics arising from two-dimensional Gaussian free field (GFF), namely the Liouville first-passage percolation (Liouville FPP), the Liouville graph distance and an effective resistance metric. In Chapter 1, we define these metrics as well as discuss the motivations for studying them. Roughly speaking, Liouville FPP is the shortest path metric in a planar domain D where the length of a path P is given by ∫_P e^{γh(z)} |dz|, where h is the GFF on D and γ > 0. In Chapter 2, we present an upper bound on the expected Liouville FPP distance between two typical points for small values of γ (the near-Euclidean regime). A similar upper bound is derived in Chapter 3 for the Liouville graph distance which is, roughly, the minimal number of Euclidean balls with comparable Liouville quantum gravity (LQG) measure whose union contains a continuous path between two endpoints. Our bounds seem to be in disagreement with Watabiki's prediction (1993) on the random metric of Liouville quantum gravity in this regime. The contents of these two chapters are based on joint work with Jian Ding. In Chapter 4, we derive some asymptotic estimates for effective resistances on a random network which is defined as follows. Given any γ > 0, and with η = {η_v}_{v∈Z²} denoting a sample of the two-dimensional discrete Gaussian free field on Z² pinned at the origin, we equip the edge (u, v) with conductance e^{γ(η_u + η_v)}. The metric structure of effective resistance plays a crucial role in our proof of the main result in Chapter 4. The primary motivation behind this metric is to understand the random walk on Z² where the edge (u, v) has weight e^{γ(η_u + η_v)}. Using the estimates from Chapter 4, we show in Chapter 5 that for almost every η this random walk is recurrent and that, with probability tending to 1 as T → ∞, the return probability at time 2T decays as T^{−1+o(1)}. In addition, we prove a version of subdiffusive

  6. A guide to calculating habitat-quality metrics to inform conservation of highly mobile species

    Science.gov (United States)

    Bieri, Joanna A.; Sample, Christine; Thogmartin, Wayne E.; Diffendorfer, James E.; Earl, Julia E.; Erickson, Richard A.; Federico, Paula; Flockhart, D. T. Tyler; Nicol, Sam; Semmens, Darius J.; Skraber, T.; Wiederholt, Ruscena; Mattsson, Brady J.

    2018-01-01

    Many metrics exist for quantifying the relative value of habitats and pathways used by highly mobile species. Properly selecting and applying such metrics requires substantial background in mathematics and understanding the relevant management arena. To address this multidimensional challenge, we demonstrate and compare three measurements of habitat quality: graph-, occupancy-, and demographic-based metrics. Each metric provides insights into system dynamics, at the expense of increasing amounts and complexity of data and models. Our descriptions and comparisons of diverse habitat-quality metrics provide means for practitioners to overcome the modeling challenges associated with management or conservation of such highly mobile species. Whereas previous guidance for applying habitat-quality metrics has been scattered in diversified tracks of literature, we have brought this information together into an approachable format including accessible descriptions and a modeling case study for a typical example that conservation professionals can adapt for their own decision contexts and focal populations. Considerations for resource managers: Management objectives, proposed actions, data availability and quality, and model assumptions are all relevant considerations when applying and interpreting habitat-quality metrics. Graph-based metrics answer questions related to habitat centrality and connectivity, are suitable for populations with any movement pattern, quantify basic spatial and temporal patterns of occupancy and movement, and require the least data. Occupancy-based metrics answer questions about likelihood of persistence or colonization, are suitable for populations that undergo localized extinctions, quantify spatial and temporal patterns of occupancy and movement, and require a moderate amount of data. Demographic-based metrics answer questions about relative or absolute population size, are suitable for populations with any movement pattern, quantify demographic

  7. MR and magnetisation transfer imaging in cirrhotic and fatty livers

    International Nuclear Information System (INIS)

    Alanen, A.; Komu, M.; Leino, R.; Toikkanen, S.

    1998-01-01

    Purpose: To determine whether low-field MR fat/water separation and magnetisation transfer (MT) techniques are useful in studying the livers of patients with parenchymal liver diseases in vivo. Material and Methods: MR and MT imaging of the liver in 33 patients (14 with primary biliary cirrhosis, 15 with alcohol-induced liver disease, and 4 with fatty liver) was performed by means of the fat/water separation technique at 0.1 T. The relaxation time T1 and the MT contrast (MTC) parameter of liver and spleen tissue were measured, and the relative proton density fat content N(%) and MTC of the liver were calculated from the separate fat and water images. The value of N(%) was also compared with the percentage of fatty hepatocytes at histology. Results: The relaxation rate R1 of liver measured from the magnitude image, and the difference in the value of MTC measured from the water image compared with the one measured from the fat and water magnitude image, both depended linearly on the value of N(%). The value of N(%) correlated significantly with the percentage of fatty hepatocytes. In in vivo fatty tissue, fat infiltration increased both the observed relaxation rate R1 and the measured magnetisation ratio (the steady-state magnetisation M_s divided by the equilibrium magnetisation M_o, M_s/M_o) and consequently decreased the MT efficiency measured in a magnitude MR image. The amount of liver fibrosis did not correlate with the value of MTC measured after fat separation. Conclusion: Our results in studying fatty livers with MR imaging and the MT method show that the fat/water separation gives more reliable parametric results. Characterisation of liver cirrhosis by means of the MTC parameter is not reliable, even after fat separation. (orig.)

  8. Galactically inertial space probes for the direct measurement of the metric expansion of the universe

    International Nuclear Information System (INIS)

    Cagnani, Ivan

    2011-01-01

    Astrometric data from the future GAIA and OBSS missions will allow a more precise calculation of the local galactic circular speed, and better measurements of galactic movements relative to the CMB will be obtained by post-WMAP missions (e.g., Planck). Contemporary development of high specific impulse electric propulsion systems (e.g., VASIMR) will enable the development of space probes able to properly compensate the galactic circular speed as well as the resulting attraction towards the centre of our galaxy. The probes would appear immobile to an ideal observer fixed at the centre of the galaxy, in contrast to every other galactic object, which would appear to move according to its local galactic circular speed and proper motion. Arranging at least three of these galactically static probes in an extended formation and measuring the reciprocal distances of the probes over time with large angle laser ranging could allow a direct measurement of the metric expansion of the universe. Free-drifting laser-ranged targets released by the spacecraft could also be used to measure and compensate local perturbations induced by the solar system. To further reduce local effects and increase the accuracy of the results, the distance between the probes should be maximized and the probes should be located as far as possible from the Sun and any massive object (e.g., Jupiter, Saturn). Gravitational waves could also induce random errors, but data from GW observatories like the planned LISA could be used to correct them.

  9. Comparing emergy accounting with well-known sustainability metrics: The case of Southern Cone Common Market, Mercosur

    International Nuclear Information System (INIS)

    Giannetti, B.F.; Almeida, C.M.V.B.; Bonilla, S.H.

    2010-01-01

    The quality and the power of human activities affect the external environment in different ways that can be measured and evaluated by means of several approaches and indicators. While the scientific community has been publishing several proposals for sustainable development indicators, there is still no consensus regarding the best approach to the use of these indicators and their reliability to measure sustainability. It is important, therefore, to question the effectiveness of sustainable development indicators in an effort to continue in the search for sustainability. This paper compares the results obtained with emergy accounting with five global Sustainability Metrics (SMs) proposed in the literature to verify if metrics are communicating coherent and similar information to guide decision makers towards sustainable development. Results obtained using emergy indices are discussed with the aid of emergy ternary diagrams. Metrics are confronted with emergy results, and the degree of variability among them is analyzed using a correlation matrix created for the Mercosur nations. The contrast of results clearly shows that metrics arrive at different interpretations about the sustainability of the nations studied, but also that some metrics may be grouped and used more prudently. Mercosur is presented as a case study to highlight and explain the discrepancies and similarities among Sustainability Metrics, and to expose the extent of emergy accounting.
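
    The variability analysis itself is a small computation. A Python sketch with hypothetical scores (nations as rows, metrics as columns), assuming Pearson correlation is the measure of agreement among metrics:

    import numpy as np

    # Hypothetical metric scores for four Mercosur nations and three metrics
    scores = np.array([[0.62, 0.55, 0.70],
                       [0.48, 0.60, 0.52],
                       [0.71, 0.66, 0.75],
                       [0.39, 0.45, 0.41]])
    corr = np.corrcoef(scores.T)  # metric-by-metric correlation matrix
    print(np.round(corr, 2))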

  10. Epstein-Barr viral load before a liver transplant in children with chronic liver disease.

    Science.gov (United States)

    Shakibazad, Nader; Honar, Naser; Dehghani, Seyed Mohsen; Alborzi, Abdolvahab

    2014-12-01

    Many children with chronic liver disease require a liver transplant. These patients are prone to various infections, including Epstein-Barr virus infection. This study sought to measure the Epstein-Barr viral load by polymerase chain reaction before a liver transplant. This cross-sectional study was done at the Shiraz University of Medical Sciences, Shiraz, Iran, in 2011. All patients were aged younger than 18 years with chronic liver disease and were candidates for a liver transplant at the Shiraz Nemazee Hospital Organ Transplant Center. They were investigated regarding their demographic characteristics, underlying disease, laboratory findings, and Epstein-Barr viral load by real-time TaqMan polymerase chain reaction. Ninety-eight patients were studied; the mean age was 6.5 ± 5.9 years. Cryptogenic cirrhosis was the most prevalent reason for liver transplant, and the death rate before a transplant was 15%. Among the study subjects, 6 had a measurable Epstein-Barr viral load by polymerase chain reaction before the transplant, and 4 of them had considerably higher Epstein-Barr viral loads (more than 1000 copies/mL). Given the similar rates of posttransplant lymphoproliferative disease (6%) and of high pretransplant Epstein-Barr viral load (4%), a high pretransplant Epstein-Barr viral load can be considered a risk factor for posttransplant lymphoproliferative disorder.

  11. Eckart frame vibration-rotation Hamiltonians: Contravariant metric tensor

    International Nuclear Information System (INIS)

    Pesonen, Janne

    2014-01-01

    Eckart frame is a unique embedding in the theory of molecular vibrations and rotations. It is defined by the condition that the Coriolis coupling of the reference structure of the molecule is zero for every choice of the shape coordinates. It is far from trivial to set up Eckart kinetic energy operators (KEOs), when the shape of the molecule is described by curvilinear coordinates. In order to obtain the KEO, one needs to set up the corresponding contravariant metric tensor. Here, I derive explicitly the Eckart frame rotational measuring vectors. Their inner products with themselves give the rotational elements, and their inner products with the vibrational measuring vectors (which, in the absence of constraints, are the mass-weighted gradients of the shape coordinates) give the Coriolis elements of the contravariant metric tensor. The vibrational elements are given as the inner products of the vibrational measuring vectors with themselves, and these elements do not depend on the choice of the body-frame. The present approach has the advantage that it does not depend on any particular choice of the shape coordinates, but it can be used in conjunction with all shape coordinates. Furthermore, it does not involve evaluation of covariant metric tensors, chain rules of derivation, or numerical differentiation, and it can be easily modified if there are constraints on the shape of the molecule. Both the planar and non-planar reference structures are accounted for. The present method is particularly suitable for numerical work. Its computational implementation is outlined in an example, where I discuss how to evaluate vibration-rotation energies and eigenfunctions of a general N-atomic molecule, the shape of which is described by a set of local polyspherical coordinates.

  12. Attenuation-based size metric for estimating organ dose to patients undergoing tube current modulated CT exams

    Energy Technology Data Exchange (ETDEWEB)

    Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Lu, Peiyun; Kim, Hyun J.; Cagnon, Chris H.; McNitt-Gray, Michael F. [Departments of Biomedical Physics and Radiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90024 (United States); DeMarco, John J. [Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)

    2015-02-15

    Purpose: Task Group 204 introduced effective diameter (ED) as the patient size metric used to correlate size-specific dose estimates. However, this size metric fails to account for patient attenuation properties and has been suggested to be replaced by an attenuation-based size metric, water equivalent diameter (D{sub W}). The purpose of this study is to investigate different size metrics, effective diameter, and water equivalent diameter, in combination with regional descriptions of scanner output to establish the most appropriate size metric to be used as a predictor for organ dose in tube current modulated CT exams. Methods: 101 thoracic and 82 abdomen/pelvis scans from clinically indicated CT exams were collected retrospectively from a multidetector row CT (Sensation 64, Siemens Healthcare) with Institutional Review Board approval to generate voxelized patient models. Fully irradiated organs (lung and breasts in thoracic scans and liver, kidneys, and spleen in abdominal scans) were segmented and used as tally regions in Monte Carlo simulations for reporting organ dose. Along with image data, raw projection data were collected to obtain tube current information for simulating tube current modulation scans using Monte Carlo methods. Additionally, previously described patient size metrics [ED, D{sub W}, and approximated water equivalent diameter (D{sub Wa})] were calculated for each patient and reported in three different ways: a single value averaged over the entire scan, a single value averaged over the region of interest, and a single value from a location in the middle of the scan volume. Organ doses were normalized by an appropriate mAs weighted CTDI{sub vol} to reflect regional variation of tube current. Linear regression analysis was used to evaluate the correlations between normalized organ doses and each size metric. Results: For the abdominal organs, the correlations between normalized organ dose and size metric were overall slightly higher for all three
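
    For reference, the water equivalent diameter can be computed per axial slice from the CT numbers (cf. AAPM Report 220). A minimal Python sketch, with an assumed air threshold standing in for a proper patient-region segmentation:

    import numpy as np

    def water_equivalent_diameter(slice_hu, pixel_area_mm2):
        """D_W = 2 * sqrt(A_W / pi), where the water-equivalent area is
        A_W = sum(HU/1000 + 1) * pixel area over the patient region."""
        hu = slice_hu[slice_hu > -900]  # crude air threshold (assumed)
        a_w = np.sum(hu / 1000.0 + 1.0) * pixel_area_mm2
        return 2.0 * np.sqrt(a_w / np.pi) / 10.0  # mm -> cm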

  13. Resin 90Y microsphere activity measurements for liver brachytherapy

    International Nuclear Information System (INIS)

    Dezarn, William A.; Kennedy, Andrew S.

    2007-01-01

    The measurement of the radioactivity administered to the patient is one of the major components of 90Y microsphere liver brachytherapy. The activity of 90Y microspheres in a glass delivery vial was measured in a dose calibrator. The calibration value to use for 90Y in the dose calibrator was verified using an activity calibration standard provided by the microsphere manufacturer. This method allowed for the determination of a consistent, reproducible local activity standard. Additional measurements were made to determine some of the factors that could affect activity measurement. The axial response of the dose calibrator was determined by the ratio of activity measurements at the bottom and center of the dose calibrator. The axial response was 0.964 for a glass shipping vial, 1.001 for a glass V-vial, and 0.988 for a polycarbonate V-vial. Comparisons between activity measurements in the dose calibrator and those using a radiation survey meter were found to agree within 10%. It was determined that the dose calibrator method was superior to the survey meter method because the former allowed better defined measurement geometry and traceability of the activity standard back to the manufacturer. Part of the preparation of resin 90Y microspheres for patient delivery is to draw out a predetermined activity from a shipping vial and place it into a V-vial for delivery to the patient. If the drawn activity was placed in a glass V-vial, the activity measured in the dose calibrator with a glass V-vial was 4% higher than the drawn activity from the shipping vial standard. If the drawn activity was placed in a polycarbonate V-vial, the activity measured in the dose calibrator with a polycarbonate V-vial was 20% higher than the drawn activity from the shipping vial standard. Careful characterization of the local activity measurement standard is recommended instead of simply accepting the calibration value of the dose calibrator manufacturer.

  14. Visible Contrast Energy Metrics for Detection and Discrimination

    Science.gov (United States)

    Ahumada, Albert; Watson, Andrew

    2013-01-01

    Contrast energy was proposed by Watson, Robson, & Barlow as a useful metric for representing luminance contrast target stimuli because it represents the detectability of the stimulus in photon noise for an ideal observer. Like the eye, the ear is a complex transducer system, but relatively simple sound level meters are used to characterize sounds. These meters provide a range of frequency sensitivity functions and integration times depending on the intended use. We propose here the use of a range of contrast energy measures with different spatial frequency contrast sensitivity weightings, eccentricity sensitivity weightings, and temporal integration times. When detection thresholds are plotted using such measures, the results show what the eye sees best when these variables are taken into account in a standard way. The suggested weighting functions revise the Standard Spatial Observer for luminance contrast detection and extend it into the near periphery. Under the assumption that detection is limited only by internal noise, discrimination performance can be predicted by metrics based on the visible energy of the difference images.
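
    For reference, contrast energy is commonly defined as the squared contrast waveform integrated over space and time, E = ∫∫∫ c(x, y, t)² dx dy dt, with units of deg²·s; the weighted measures proposed here apply spatial-frequency, eccentricity, and temporal weightings to c before integration.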

  15. IT Project Management Metrics

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Many software and IT projects fail to meet their objectives for a variety of causes, among which poor project management carries considerable weight. To run projects successfully, lessons learned must be applied, historical data collected, and metrics and indicators computed and compared with those of past projects to avoid repeating failures. This paper presents some metrics that can be used for IT project management.

  16. ELF-test less accurately identifies liver cirrhosis diagnosed by liver stiffness measurement in non-Asian women with chronic hepatitis B

    NARCIS (Netherlands)

    Harkisoen, S.; Boland, G. J.; van den Hoek, J. A. R.; van Erpecum, K. J.; Hoepelman, A. I. M.; Arends, J. E.

    2014-01-01

    The enhanced liver fibrosis test (ELF-test) has been validated for several hepatic diseases. However, its performance in chronic hepatitis B virus (CHB) infected patients is uncertain. This study investigates the diagnostic value of the ELF test for cirrhosis identified by liver stiffness measurement.

  17. Metrical Phonology: German Sound System.

    Science.gov (United States)

    Tice, Bradley S.

    Metrical phonology, a linguistic process of phonological stress assessment and diagrammatic simplification of sentence and word stress, is discussed as it is found in the English and German languages. The objective is to promote use of metrical phonology as a tool for enhancing instruction in stress patterns in words and sentences, particularly in…

  18. Radiological evaluation of a liver simulator in comparison to a human real liver

    International Nuclear Information System (INIS)

    Toledo, Janine M.; Campos, Tarcisio P.R. de

    2009-01-01

    The present study evaluates the radiological features of a real healthy human liver and reproduces its characteristics in a developed liver simulator. The radiological evaluation was performed through radiological methods such as CT and X-ray imaging, density and weight measurements, as well as reproduction of the coloration and texture. According to the literature, the liver is the heaviest organ and gland of the body, weighing approximately 1.5 kg. In the liver, the nutrients absorbed from the digestive tract are processed and stored for future use by other organs. The liver is also responsible for the neutralization and elimination of various toxic substances. Thus, it is an interface between the digestive system and the blood. Besides, this organ is the principal source of plasmatic proteins such as albumin, which transports fatty acids. Due to these properties, the liver retains a large amount of radionuclides on any uptake from an external source. The liver simulator was designed to have the same density, weight and corresponding shape. The radiographic image was produced by a conventional X-ray machine, in which the applied radiographic parameters were the same as those applied to the abdomen. The radiographic and CT images demonstrate radiological equivalence between the simulator and the real human liver. The Hounsfield number of the synthetic liver tissue was found to be in the range of human livers. Therefore, due to its similar shape, chemical composition and radiological response, the liver simulator can be used to investigate ionizing radiation procedures during radiation therapy interventions. (author)

  19. Construction of Einstein-Sasaki metrics in D≥7

    International Nuclear Information System (INIS)

    Lue, H.; Pope, C. N.; Vazquez-Poritz, J. F.

    2007-01-01

    We construct explicit Einstein-Kaehler metrics in all even dimensions D=2n+4≥6, in terms of a 2n-dimensional Einstein-Kaehler base metric. These are cohomogeneity 2 metrics which have the new feature of including a NUT-type parameter, or gravomagnetic charge, in addition to mass and rotation parameters. Using a canonical construction, these metrics all yield Einstein-Sasaki metrics in dimensions D=2n+5≥7. As is commonly the case in this type of construction, for suitable choices of the free parameters the Einstein-Sasaki metrics can extend smoothly onto complete and nonsingular manifolds, even though the underlying Einstein-Kaehler metric has conical singularities. We discuss some explicit examples in the case of seven-dimensional Einstein-Sasaki spaces. These new spaces can provide supersymmetric backgrounds in M theory, which play a role in the AdS 4 /CFT 3 correspondence.

  20. National Metrical Types in Nineteenth Century Art Song

    Directory of Open Access Journals (Sweden)

    Leigh VanHandel

    2010-01-01

    Full Text Available William Rothstein’s article “National metrical types in music of the eighteenth and early nineteenth centuries” (2008 proposes a distinction between the metrical habits of 18th and early 19th century German music and those of Italian and French music of that period. Based on theoretical treatises and compositional practice, he outlines these national metrical types and discusses the characteristics of each type. This paper presents the results of a study designed to determine whether, and to what degree, Rothstein’s characterizations of national metrical types are present in 19th century French and German art song. Studying metrical habits in this genre may provide a lens into changing metrical conceptions of 19th century theorists and composers, as well as to the metrical habits and compositional style of individual 19th century French and German art song composers.

  1. A Metric on Phylogenetic Tree Shapes.

    Science.gov (United States)

    Colijn, C; Plazzotta, G

    2018-01-01

    The shapes of evolutionary trees are influenced by the nature of the evolutionary process but comparisons of trees from different processes are hindered by the challenge of completely describing tree shape. We present a full characterization of the shapes of rooted branching trees in a form that lends itself to natural tree comparisons. We use this characterization to define a metric, in the sense of a true distance function, on tree shapes. The metric distinguishes trees from random models known to produce different tree shapes. It separates trees derived from tropical versus USA influenza A sequences, which reflect the differing epidemiology of tropical and seasonal flu. We describe several metrics based on the same core characterization, and illustrate how to extend the metric to incorporate trees' branch lengths or other features such as overall imbalance. Our approach allows us to construct addition and multiplication on trees, and to create a convex metric on tree shapes which formally allows computation of average tree shapes. © The Author(s) 2017. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  2. Top 10 metrics for life science software good practices.

    Science.gov (United States)

    Artaza, Haydee; Chue Hong, Neil; Corpas, Manuel; Corpuz, Angel; Hooft, Rob; Jimenez, Rafael C; Leskošek, Brane; Olivier, Brett G; Stourac, Jan; Svobodová Vařeková, Radka; Van Parys, Thomas; Vaughan, Daniel

    2016-01-01

    Metrics for assessing adoption of good development practices are a useful way to ensure that software is sustainable, reusable and functional. Sustainability means that the software used today will be available - and continue to be improved and supported - in the future. We report here an initial set of metrics that measure good practices in software development. This initiative differs from previously developed efforts in being a community-driven grassroots approach where experts from different organisations propose good software practices that have reasonable potential to be adopted by the communities they represent. We not only focus our efforts on understanding and prioritising good practices, but also assess their feasibility for implementation and publish them here.

  3. Measurement of binding of adenine nucleotides and phosphate to cytosolic proteins in permeabilized rat-liver cells

    NARCIS (Netherlands)

    Gankema, H. S.; Groen, A. K.; Wanders, R. J.; Tager, J. M.

    1983-01-01

    1. A method is described for measuring the binding of metabolites to cytosolic proteins in situ in isolated rat-liver cells treated with filipin to render the plasma membrane permeable to compounds of low molecular weight. 2. There is no binding of ATP or inorganic phosphate to cytosolic proteins,

  4. Natural metrics and least-committed priors for articulated tracking

    DEFF Research Database (Denmark)

    Hauberg, Søren; Sommer, Stefan Horst; Pedersen, Kim Steenstrup

    2012-01-01

    of joint positions, which is embedded in a high dimensional Euclidean space. This Riemannian manifold inherits the metric from the embedding space, such that distances are measured as the combined physical length that joints travel during movements. We then develop a least-committed Brownian motion model...

  5. Evaluation of Real-time Measurement Liver Tumor's Movement and SynchronyTM System's Accuracy of Radiosurgery using a Robot CyberKnife

    International Nuclear Information System (INIS)

    Kim, Gha Jung; Shim, Su Jung; Kim, Jeong Ho; Min, Chul Kee; Chung, Weon Kuu

    2008-01-01

    This study aimed to quantitatively measure tumor movement in real time and to evaluate treatment accuracy in liver tumor patients who underwent radiosurgery with the Synchrony™ respiratory motion tracking system of the robotic CyberKnife. Materials and Methods: The study subjects included 24 liver tumor patients who underwent CyberKnife treatment, comprising 64 treatment sessions with the Synchrony™ respiratory motion tracking system. In all patients, 4 to 6 acupuncture needles were inserted into the vicinity of the liver tumor under ultrasonographic guidance. A treatment plan was set up using CT images acquired for treatment planning. The position of the acupuncture needles was identified at every treatment session by comparing digitally reconstructed radiographs (DRR) prepared at the time of treatment planning with X-ray images acquired in real time. Subsequent results were stored through a Motion Tracking System (MTS) using the Mtsmain.log treatment file. In this way, movement of the tumor was measured. In addition, the accuracy of radiosurgery using the CyberKnife was evaluated by the correlation errors between the real-time positions of the acupuncture needles and the predicted coordinates. Results: The maximum and average translational movements of the liver tumor were 23.5 mm and 13.9±5.5 mm in the superior-inferior direction, 3.9 mm and 1.9±0.9 mm in the left-right direction, and 8.3 mm and 4.9±1.9 mm in the anterior-posterior direction, respectively. The maximum and average rotational movements of the liver tumor were 3.3° and 2.6±1.3° for X (left-right) axis rotation, 4.8° and 2.3±1.0° for Y (cranio-caudal) axis rotation, and 3.9° and 2.8±1.1° for Z (anterior-posterior) axis rotation, respectively. In addition, the average correlation error, which represents the treatment accuracy, was 1.1±0.7 mm. Conclusion

  6. Outcomes of Technical Variant Liver Transplantation versus Whole Liver Transplantation for Pediatric Patients: A Meta-Analysis.

    Science.gov (United States)

    Ye, Hui; Zhao, Qiang; Wang, Yufang; Wang, Dongping; Zheng, Zhouying; Schroder, Paul Michael; Lu, Yao; Kong, Yuan; Liang, Wenhua; Shang, Yushu; Guo, Zhiyong; He, Xiaoshun

    2015-01-01

    To overcome the shortage of appropriate-sized whole liver grafts for children, technical variant liver transplantation has been practiced for decades. We perform a meta-analysis to compare the survival rates and incidence of surgical complications between pediatric whole liver transplantation and technical variant liver transplantation. To identify relevant studies up to January 2014, we searched PubMed/Medline, Embase, and Cochrane library databases. The primary outcomes measured were patient and graft survival rates, and the secondary outcomes were the incidence of surgical complications. The outcomes were pooled using a fixed-effects model or random-effects model. The one-year, three-year, five-year patient survival rates and one-year, three-year graft survival rates were significantly higher in whole liver transplantation (OR = 1.62, 1.90, 1.65, 1.78, and 1.62, respectively; p < 0.05) compared with technical variant liver transplantation. Continuing efforts should be made to minimize surgical complications to improve the outcomes of technical variant liver transplantation.

  7. Degraded visual environment image/video quality metrics

    Science.gov (United States)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.

  8. Eyetracking Metrics in Young Onset Alzheimer’s Disease: A Window into Cognitive Visual Functions

    Directory of Open Access Journals (Sweden)

    Ivanna M. Pavisic

    2017-08-01

    Full Text Available Young onset Alzheimer’s disease (YOAD) is defined as symptom onset before the age of 65 years and is particularly associated with phenotypic heterogeneity. Atypical presentations, such as the clinico-radiological visual syndrome posterior cortical atrophy (PCA), often lead to delays in accurate diagnosis. Eyetracking has been used to demonstrate basic oculomotor impairments in individuals with dementia. In the present study, we aim to explore the relationship between eyetracking metrics and standard tests of visual cognition in individuals with YOAD. Fifty-seven participants were included: 36 individuals with YOAD (n = 26 typical AD; n = 10 PCA) and 21 age-matched healthy controls. Participants completed three eyetracking experiments: fixation, pro-saccade, and smooth pursuit tasks. Summary metrics were used as outcome measures and their predictive value explored looking at correlations with visuoperceptual and visuospatial metrics. Significant correlations between eyetracking metrics and standard visual cognitive estimates are reported. A machine-learning approach using a classification method based on the smooth pursuit raw eyetracking data discriminates with approximately 95% accuracy patients and controls in cross-validation tests. Results suggest that eyetracking paradigms of a relatively simple and specific nature provide measures not only reflecting basic oculomotor characteristics but also predicting higher order visuospatial and visuoperceptual impairments. Eyetracking measures can represent extremely useful markers during the diagnostic phase and may be exploited as potential outcome measures for clinical trials.

  9. Quantitative Metrics for Generative Justice: Graphing the Value of Diversity

    Directory of Open Access Journals (Sweden)

    Brian Robert Callahan

    2016-12-01

    Full Text Available Scholarship utilizing the Generative Justice framework has focused primarily on qualitative data collection and analysis for its insights. This paper introduces a quantitative data measurement, contributory diversity, which can be used to enhance the analysis of ethical dimensions of value production under the Generative Justice lens. It is well known that the identity of contributors—gender, ethnicity, and other categories—is a key issue for social justice in general. Using the example of Open Source Software communities, we note that typical diversity measures, focusing exclusively on workforce demographics, can fail to fully illuminate issues in value generation. Using Shannon’s entropy measure, we offer an alternative metric which combines the traditional assessment of demographics with a measure of value generation. This mapping allows for previously unacknowledged contributions to be recognized, and can avoid some of the ways in which exclusionary practices are obscured. We offer contributory diversity not as the single optimal metric, but rather as a call for others to begin investigating the possibilities for quantitative measurements of the communities and value flows that are studied using the Generative Justice framework.
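    A minimal sketch of an entropy-based contributory-diversity measure in the spirit described above, assuming hypothetical (group, value) contribution records; this is an illustration, not the authors' exact formulation.

```python
import math
from collections import defaultdict

# Weight each demographic group by the value its members generate (e.g.,
# commits) rather than by head count, then take the Shannon entropy of the
# resulting distribution. Data and labels below are hypothetical.

def contributory_diversity(contributions):
    """contributions: iterable of (group_label, value_generated) pairs."""
    totals = defaultdict(float)
    for group, value in contributions:
        totals[group] += value
    grand = sum(totals.values())
    probs = [v / grand for v in totals.values() if v > 0]
    return -sum(p * math.log2(p) for p in probs)   # entropy in bits

commits = [("A", 120), ("B", 40), ("B", 35), ("C", 5)]
print(round(contributory_diversity(commits), 3))
```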

  10. The Jacobi metric for timelike geodesics in static spacetimes

    Science.gov (United States)

    Gibbons, G. W.

    2016-01-01

    It is shown that the free motion of massive particles moving in static spacetimes is given by the geodesics of an energy-dependent Riemannian metric on the spatial sections analogous to Jacobi's metric in classical dynamics. In the massless limit Jacobi's metric coincides with the energy-independent Fermat or optical metric. For stationary metrics, it is known that the motion of massless particles is given by the geodesics of an energy-independent Finslerian metric of Randers type. The motion of massive particles is governed by neither a Riemannian nor a Finslerian metric. The properties of the Jacobi metric for massive particles moving outside the horizon of a Schwarzschild black hole are described. By contrast with the massless case, the Gaussian curvature of the equatorial sections is not always negative.
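    For concreteness, a brief summary of the construction in the notation conventionally used for static metrics; the form below follows standard conventions and is offered as a sketch, not a quotation from the paper.

```latex
% Assuming a static spacetime written as
%   ds^2 = -V(x)\,dt^2 + g_{ij}(x)\,dx^i dx^j   (c = 1),
% a particle of mass m and conserved energy E traces geodesics of the
% energy-dependent Jacobi metric
\[
  j_{ij} \;=\; \frac{E^{2} - m^{2}\,V}{V}\, g_{ij},
\]
% and letting m -> 0 recovers, up to the constant factor E^2, the
% energy-independent optical (Fermat) metric g_{ij}/V.
```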

  11. Relaxed metrics and indistinguishability operators: the relationship

    Energy Technology Data Exchange (ETDEWEB)

    Martin, J.

    2017-07-01

    In 1982, the notion of indistinguishability operator was introduced by E. Trillas in order to fuzzify the crisp notion of equivalence relation (\cite{Trillas}). In the study of such a class of operators, an outstanding property must be pointed out. Concretely, there exists a duality relationship between indistinguishability operators and metrics. The aforesaid relationship was deeply studied by several authors who introduced a few techniques to generate metrics from indistinguishability operators and vice versa (see, for instance, \cite{BaetsMesiar,BaetsMesiar2}). In recent years a new generalization of the metric notion has been introduced in the literature with the purpose of developing mathematical tools for quantitative models in Computer Science and Artificial Intelligence (\cite{BKMatthews,Ma}). The aforementioned generalized metrics are known as relaxed metrics. The main target of this talk is to present a study of the duality relationship between indistinguishability operators and relaxed metrics in such a way that the aforementioned classical techniques to generate both concepts, one from the other, can be extended to the new framework. (Author)

  12. A study of human liver ferritin and chicken liver and spleen using Moessbauer spectroscopy with high velocity resolution

    Energy Technology Data Exchange (ETDEWEB)

    Oshtrakh, M. I., E-mail: oshtrakh@mail.utnet.ru [Ural State Technical University-UPI, Faculty of Physical Techniques and Devices for Quality Control (Russian Federation); Milder, O. B.; Semionkin, V. A. [Ural State Technical University-UPI, Faculty of Experimental Physics (Russian Federation)

    2008-01-15

    Lyophilized samples of human liver ferritin and chicken liver and spleen were measured at room temperature using Moessbauer spectroscopy with high velocity resolution. An increase in the velocity resolution of Moessbauer spectroscopy permitted us to increase accuracy and decrease experimental error in determining the hyperfine parameters of human liver ferritin and chicken liver and spleen. Moessbauer spectroscopy with high velocity resolution may be very useful for revealing small differences in hyperfine parameters during biomedical research.

  13. Classification in medical images using adaptive metric k-NN

    Science.gov (United States)

    Chen, C.; Chernoff, K.; Karemore, G.; Lo, P.; Nielsen, M.; Lauze, F.

    2010-03-01

    The performance of the k-nearest neighbors (k-NN) classifier is highly dependent on the distance metric used to identify the k nearest neighbors of the query points. The standard Euclidean distance is commonly used in practice. This paper investigates the performance of the k-NN classifier with respect to different adaptive metrics in the context of medical imaging. We propose using adaptive metrics such that the structure of the data is better described, introducing some unsupervised learning knowledge into k-NN. Four different metrics are estimated: a theoretical metric based on the assumption that images are drawn from the Brownian Image Model (BIM), a normalized metric based on the variance of the data, an empirical metric based on the empirical covariance matrix of the unlabeled data, and an optimized metric obtained by minimizing the classification error. The spectral structure of the empirical covariance also lends itself to Principal Component Analysis (PCA), which yields the subspace metrics. The metrics are evaluated on two data sets: lateral X-rays of the lumbar aortic/spine region, where we use k-NN for performing abdominal aorta calcification detection; and mammograms, where we use k-NN for breast cancer risk assessment. The results show that an appropriate choice of metric can improve classification.
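    As an illustration of the empirical-covariance variant, the sketch below plugs a Mahalanobis-type metric built from unlabeled data into an off-the-shelf k-NN classifier; the synthetic data and parameter choices are assumptions for demonstration only.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Replace the standard Euclidean distance with a Mahalanobis-type distance
# whose inverse covariance is estimated from unlabeled data, so the metric
# adapts to the structure (here, anisotropic scaling) of the data.

rng = np.random.default_rng(0)
scale = np.array([1, 1, 1, 5, 5])
X_unlabeled = rng.normal(size=(500, 5)) * scale
X_train = rng.normal(size=(100, 5)) * scale
y_train = (X_train[:, 0] > 0).astype(int)

VI = np.linalg.inv(np.cov(X_unlabeled, rowvar=False))  # inverse covariance
knn = KNeighborsClassifier(n_neighbors=5, metric="mahalanobis",
                           metric_params={"VI": VI}, algorithm="brute")
knn.fit(X_train, y_train)
print(knn.predict(X_train[:3]))
```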

  14. Circulating lipocalin 2 is neither related to liver steatosis in patients with non-alcoholic fatty liver disease nor to residual liver function in cirrhosis.

    Science.gov (United States)

    Meier, Elisabeth M; Pohl, Rebekka; Rein-Fischboeck, Lisa; Schacherer, Doris; Eisinger, Kristina; Wiest, Reiner; Krautbauer, Sabrina; Buechler, Christa

    2016-09-01

    Lipocalin 2 (LCN2) is induced in the injured liver and associated with inflammation. The aim of the present study was to evaluate whether serum LCN2 is a non-invasive marker to assess hepatic steatosis in patients with non-alcoholic fatty liver disease (NAFLD) or residual liver function in patients with liver cirrhosis. Therefore, LCN2 was measured by ELISA in serum of 32 randomly selected patients without fatty liver (controls), 24 patients with ultrasound-diagnosed NAFLD and 42 patients with liver cirrhosis mainly due to alcohol. Systemic LCN2 was comparable in patients with liver steatosis, those with liver cirrhosis and controls. LCN2 negatively correlated with bilirubin in both cohorts. In cirrhosis, LCN2 was not associated with more advanced liver injury as defined by the CHILD-PUGH score and the model for end-stage liver disease score. Resistin, but not C-reactive protein or chemerin, positively correlated with LCN2. LCN2 levels were not increased in patients with ascites or patients with esophageal varices. Consequently, reduction of portal pressure by transjugular intrahepatic portosystemic shunt did not affect LCN2 levels. Hepatic venous (HVS), portal venous and systemic venous blood levels of LCN2 were similar. HVS LCN2 was unchanged in patients with end-stage liver cirrhosis compared to those with well-compensated disease, arguing against increased hepatic release. The current data exclude serum LCN2 as a marker of steatosis in patients with NAFLD or as an indicator of liver function in patients with alcoholic liver cirrhosis. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. National evaluation of multidisciplinary quality metrics for head and neck cancer.

    Science.gov (United States)

    Cramer, John D; Speedy, Sedona E; Ferris, Robert L; Rademaker, Alfred W; Patel, Urjeet A; Samant, Sandeep

    2017-11-15

    The National Quality Forum has endorsed quality-improvement measures for multiple cancer types that are being developed into actionable tools to improve cancer care. No nationally endorsed quality metrics currently exist for head and neck cancer. The authors identified patients with surgically treated, invasive, head and neck squamous cell carcinoma in the National Cancer Data Base from 2004 to 2014 and compared the rate of adherence to 5 different quality metrics and whether compliance with these quality metrics impacted overall survival. The metrics examined included negative surgical margins, neck dissection lymph node (LN) yield ≥ 18, appropriate adjuvant radiation, appropriate adjuvant chemoradiation, adjuvant therapy within 6 weeks, as well as overall quality. In total, 76,853 eligible patients were identified. There was substantial variability in patient-level adherence, which was 80% for negative surgical margins, 73.1% for neck dissection LN yield, 69% for adjuvant radiation, 42.6% for adjuvant chemoradiation, and 44.5% for adjuvant therapy within 6 weeks. Risk-adjusted Cox proportional-hazard models indicated that all metrics were associated with a reduced risk of death: negative margins (hazard ratio [HR] 0.73; 95% confidence interval [CI], 0.71-0.76), LN yield ≥ 18 (HR, 0.93; 95% CI, 0.89-0.96), adjuvant radiation (HR, 0.67; 95% CI, 0.64-0.70), adjuvant chemoradiation (HR, 0.84; 95% CI, 0.79-0.88), and adjuvant therapy ≤6 weeks (HR, 0.92; 95% CI, 0.89-0.96). Patients who received high-quality care had a 19% reduced adjusted hazard of mortality (HR, 0.81; 95% CI, 0.79-0.83). Five head and neck cancer quality metrics were identified that have substantial variability in adherence and meaningfully impact overall survival. These metrics are appropriate candidates for national adoption. Cancer 2017;123:4372-81. © 2017 American Cancer Society.

  16. Experiential space is hardly metric

    Czech Academy of Sciences Publication Activity Database

    Šikl, Radovan; Šimeček, Michal; Lukavský, Jiří

    2008-01-01

    Roč. 2008, č. 37 (2008), s. 58-58 ISSN 0301-0066. [European Conference on Visual Perception. 24.08-28.08.2008, Utrecht] R&D Projects: GA ČR GA406/07/1676 Institutional research plan: CEZ:AV0Z70250504 Keywords : visual space perception * metric and non-metric perceptual judgments * ecological validity Subject RIV: AN - Psychology

  17. High resolution metric imaging payload

    Science.gov (United States)

    Delclaud, Y.

    2017-11-01

    Alcatel Space Industries has become Europe's leader in the field of high and very high resolution optical payloads, within the framework of Earth observation systems able to provide military and government users with metric images from space. This leadership has allowed Alcatel to propose for the export market, within a French collaboration framework, a complete space-based system for metric observation.

  18. Integrated Metrics for Improving the Life Cycle Approach to Assessing Product System Sustainability

    Directory of Open Access Journals (Sweden)

    Wesley Ingwersen

    2014-03-01

    Full Text Available Life cycle approaches are critical for identifying and reducing environmental burdens of products. While these methods can indicate potential environmental impacts of a product, current Life Cycle Assessment (LCA) methods fail to integrate the multiple impacts of a system into unified measures of social, economic or environmental performance related to sustainability. Integrated metrics that combine multiple aspects of system performance based on a common scientific or economic principle have proven to be valuable for sustainability evaluation. In this work, we propose methods of adapting four integrated metrics for use with LCAs of product systems: ecological footprint, emergy, green net value added, and Fisher information. These metrics provide information on the full product system in land, energy, monetary equivalents, and as a unitless information index; each bundled with one or more indicators for reporting. When used together and for relative comparison, integrated metrics provide a broader coverage of sustainability aspects from multiple theoretical perspectives that is more likely to illuminate potential issues than individual impact indicators. These integrated metrics are recommended for use in combination with traditional indicators used in LCA. Future work will test and demonstrate the value of using these integrated metrics and combinations to assess product system sustainability.

  19. A new liver function test using the asialoglycoprotein-receptor system on the liver cell membrane, 3

    International Nuclear Information System (INIS)

    Hazama, Hiroshi; Kawa, Soukichi; Kubota, Yoshitsugu

    1986-01-01

    We evaluated the validity of a new liver function test using liver scintigraphy based on the asialoglycoprotein (ASGP) receptor system on the liver cell membrane in rats with galactosamine-induced acute liver disorder and those with carbon tetrachloride-induced chronic liver disorder. Neoglycoprotein (GHSA) produced by combining human serum albumin with 32 galactose units was labeled with 99m Tc and administered (50 μg/100 g body weight) to rats with acute or chronic liver disorder. Clearance curves were produced based on liver scintigrams and analysed using the two-compartment model to obtain parameters. In acute liver disorder, the prolongation of 99m Tc-GHSA clearance and the decrease in ASGP receptor activities correlated well with the increase in serum GOT and the decrease in the esterified to total cholesterol ratio (E/T ratio); in chronic liver disorder, they correlated significantly with the increase in the content of liver hydroxyproline (Hyp), which increased in proportion to the severity of liver fibrosis studied histologically, and with the decrease in the contents of cytochrome P-450 and cytochrome b 5 in liver microsomes. Significant correlation was observed between the prolongation of 99m Tc-GHSA clearance and the decrease in ASGP receptor activities in both acute and chronic liver disorders. These findings indicate that the measurement of 99m Tc-GHSA clearance can be a new liver function test sensitively reflecting the severity of liver damage. (author)
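    The two-compartment analysis amounts to fitting a biexponential to the clearance curve; a hedged sketch on synthetic data is shown below, with parameter names that are illustrative rather than the authors' exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a biexponential two-compartment model to a (synthetic) clearance
# curve and read off the rate constants. Values are illustrative only.

def biexp(t, a1, k1, a2, k2):
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

t = np.linspace(0, 30, 61)                       # minutes
true_curve = biexp(t, 0.7, 0.45, 0.3, 0.05)
counts = true_curve + np.random.default_rng(1).normal(0, 0.01, t.size)

popt, _ = curve_fit(biexp, t, counts, p0=[0.5, 0.5, 0.5, 0.05])
a1, k1, a2, k2 = popt
print(f"fast component: k1={k1:.3f}/min, slow component: k2={k2:.3f}/min")
```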

  20. Liver steatosis is associated with insulin resistance in skeletal muscle rather than in the liver in Japanese patients with non-alcoholic fatty liver disease.

    Science.gov (United States)

    Kato, Ken-Ichiro; Takeshita, Yumie; Misu, Hirofumi; Zen, Yoh; Kaneko, Shuichi; Takamura, Toshinari

    2015-03-01

    To examine the association between liver histological features and organ-specific insulin resistance indices calculated from 75-g oral glucose tolerance test data in patients with non-alcoholic fatty liver disease. Liver biopsy specimens were obtained from 72 patients with non-alcoholic fatty liver disease, and were scored for steatosis, grade and stage. Hepatic and skeletal muscle insulin resistance indices (hepatic insulin resistance index and Matsuda index, respectively) were calculated from 75-g oral glucose tolerance test data, and metabolic clearance rate was measured using the euglycemic hyperinsulinemic clamp method. The degree of hepatic steatosis, and the grade and stage of non-alcoholic steatohepatitis, were significantly correlated with the Matsuda index (steatosis: r = -0.45, P < 0.05), but not with the hepatic insulin resistance index. Multiple regression analyses adjusted for age, sex, body mass index and each histological score showed that the degree of hepatic steatosis (coefficient = -0.22, P < 0.05) was independently associated with the Matsuda index, with a borderline association between steatosis and metabolic clearance rate (coefficient = -0.62, P = 0.059). Liver steatosis is associated with insulin resistance in skeletal muscle rather than in the liver in patients with non-alcoholic fatty liver disease, suggesting a central role of fatty liver in the development of peripheral insulin resistance and the existence of a network between the liver and skeletal muscle.

  1. Exploring s-CIELAB as a scanner metric for print uniformity

    Science.gov (United States)

    Hertel, Dirk W.

    2005-01-01

    The s-CIELAB color difference metric combines the standard CIELAB metric for perceived color difference with spatial contrast sensitivity filtering. When studying the performance of digital image processing algorithms, maps of spatial color difference between 'before' and 'after' images are a measure of perceived image difference. A general image quality metric can be obtained by modeling the perceived difference from an ideal image. This paper explores the s-CIELAB concept for evaluating the quality of digital prints. Prints present the challenge that the 'ideal print', which should serve as the reference when calculating the delta E* error map, is unknown and must thus be estimated from the scanned print. A reasonable estimate of what the ideal print 'should have been' is possible at least for images of known content such as flat fields or continuous wedges, where the error map can be calculated against a global or local mean. While such maps showing the perceived error at each pixel are extremely useful when analyzing print defects, it is desirable to statistically reduce them to a more manageable dataset. Examples of digital print uniformity are given, and the effects of specific print defects on the s-CIELAB delta E* metric are discussed.
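    A rough sketch of the flat-field case described above: smooth a CIELAB scan as a crude stand-in for s-CIELAB's contrast-sensitivity filtering, then map delta E against the global mean as the estimated 'ideal print'. The filter and data are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# For a scanned flat field already converted to CIELAB (lab: H x W x 3),
# low-pass filter each channel, then compute a per-pixel delta E map
# against the global mean, which serves as the "ideal print" estimate.

def uniformity_delta_e(lab, sigma_px=4.0):
    filtered = np.stack([gaussian_filter(lab[..., i], sigma_px)
                         for i in range(3)], axis=-1)
    ideal = filtered.mean(axis=(0, 1))                    # global-mean reference
    return np.sqrt(((filtered - ideal) ** 2).sum(axis=-1))  # CIE76 delta E map

lab = np.random.default_rng(2).normal([50, 0, 0], [1, .3, .3], (64, 64, 3))
print(np.percentile(uniformity_delta_e(lab), 95))  # e.g. summarize as p95
```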

  2. Prototypic Development and Evaluation of a Medium Format Metric Camera

    Science.gov (United States)

    Hastedt, H.; Rofallski, R.; Luhmann, T.; Rosenbauer, R.; Ochsner, D.; Rieke-Zapp, D.

    2018-05-01

    Engineering applications require high-precision 3D measurement techniques for object sizes that vary between small volumes (2-3 m in each direction) and large volumes (around 20 x 20 x 1-10 m). The requested precision in object space (1σ RMS) is defined to be within 0.1-0.2 mm for large volumes and less than 0.01 mm for small volumes. In particular, focusing on large volume applications, the availability of a metric camera would have different advantages for several reasons: 1) high-quality optical components and stabilisations allow for a stable interior geometry of the camera itself, 2) a stable geometry leads to a stable interior orientation that enables an a priori camera calibration, 3) a higher resulting precision can be expected. With this article the development and accuracy evaluation of a new metric camera, the ALPA 12 FPS add|metric, will be presented. Its general accuracy potential is tested against calibrated lengths in a small volume test environment based on the German Guideline VDI/VDE 2634.1 (2002). Maximum length measurement errors of less than 0.025 mm are achieved with different scenarios having been tested. The accuracy potential for large volumes is estimated within a feasibility study on the application of photogrammetric measurements for the deformation estimation on a large wooden shipwreck in the German Maritime Museum. An accuracy of 0.2 mm-0.4 mm is reached for a length of 28 m (given by a distance from a lasertracker network measurement). All analyses have proven high stabilities of the interior orientation of the camera and indicate the applicability for a priori camera calibration for subsequent 3D measurements.

  3. PROTOTYPIC DEVELOPMENT AND EVALUATION OF A MEDIUM FORMAT METRIC CAMERA

    Directory of Open Access Journals (Sweden)

    H. Hastedt

    2018-05-01

    Full Text Available Engineering applications require high-precision 3D measurement techniques for object sizes that vary between small volumes (2–3 m in each direction) and large volumes (around 20 x 20 x 1–10 m). The requested precision in object space (1σ RMS) is defined to be within 0.1–0.2 mm for large volumes and less than 0.01 mm for small volumes. In particular, focusing on large volume applications, the availability of a metric camera would have different advantages for several reasons: 1) high-quality optical components and stabilisations allow for a stable interior geometry of the camera itself, 2) a stable geometry leads to a stable interior orientation that enables an a priori camera calibration, 3) a higher resulting precision can be expected. With this article the development and accuracy evaluation of a new metric camera, the ALPA 12 FPS add|metric, will be presented. Its general accuracy potential is tested against calibrated lengths in a small volume test environment based on the German Guideline VDI/VDE 2634.1 (2002). Maximum length measurement errors of less than 0.025 mm are achieved with different scenarios having been tested. The accuracy potential for large volumes is estimated within a feasibility study on the application of photogrammetric measurements for the deformation estimation on a large wooden shipwreck in the German Maritime Museum. An accuracy of 0.2 mm–0.4 mm is reached for a length of 28 m (given by a distance from a lasertracker network measurement). All analyses have proven high stabilities of the interior orientation of the camera and indicate the applicability for a priori camera calibration for subsequent 3D measurements.

  4. Smart Grid Status and Metrics Report Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Balducci, Patrick J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Antonopoulos, Chrissi A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Clements, Samuel L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gorrissen, Willy J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kirkham, Harold [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ruiz, Kathleen A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Smith, David L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Weimar, Mark R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gardner, Chris [APQC, Houston, TX (United States); Varney, Jeff [APQC, Houston, TX (United States)

    2014-07-01

    A smart grid uses digital power control and communication technology to improve the reliability, security, flexibility, and efficiency of the electric system, from large generation through the delivery systems to electricity consumers and a growing number of distributed generation and storage resources. To convey progress made in achieving the vision of a smart grid, this report uses a set of six characteristics derived from the National Energy Technology Laboratory Modern Grid Strategy. The Smart Grid Status and Metrics Report defines and examines 21 metrics that collectively provide insight into the grid’s capacity to embody these characteristics. This appendix presents papers covering each of the 21 metrics identified in Section 2.1 of the Smart Grid Status and Metrics Report. These metric papers were prepared in advance of the main body of the report and collectively form its informational backbone.

  5. Functional pitch of a liver: fatty liver disease diagnosis with photoacoustic spectrum analysis

    Science.gov (United States)

    Xu, Guan; Meng, Zhuoxian; Lin, Jiandie; Carson, Paul; Wang, Xueding

    2014-03-01

    To provide more information for the classification and assessment of biological tissues, photoacoustic spectrum analysis (PASA) moves beyond the quantification of the intensities of the photoacoustic (PA) signals by using the frequency-domain power distribution, namely the power spectrum, of broadband PA signals. PASA quantifies the linear fit to the power spectrum of the PA signals from a biological tissue with 3 parameters: intercept, midband-fit and slope. Intercept and midband-fit reflect the total optical absorption of the tissue, whereas slope reflects the heterogeneity of the tissue structure. Taking advantage of the optical absorption contrasts contributed by lipid and blood at 1200 and 532 nm, respectively, and the heterogeneous tissue microstructure in fatty liver due to lipid infiltration, we investigate the capability of PASA to identify histological changes of fatty livers in a mouse model. 6 and 9 pairs of normal and fatty liver tissue samples from rat models were examined ex vivo with a conventional rotational PA measurement system. One pair of rat models with normal and fatty livers was examined non-invasively and in situ with our recently developed ultrasound and PA parallel imaging system. The results support our hypotheses that spectrum analysis of PA signals can provide quantitative measures of the differences between normal and fatty liver tissues and that part of the PA power spectrum can suffice for the characterization of microstructures in biological tissues. Experimental results also indicate that the vibrational absorption peak of lipid at 1200 nm could facilitate fatty liver diagnosis.
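    The three PASA parameters reduce to a linear fit of the PA power spectrum in dB; the sketch below shows this on a synthetic signal, with the sampling rate and analysis band as assumptions.

```python
import numpy as np

# Compute the power spectrum of a (synthetic) broadband PA signal, fit a
# line to it in dB over an analysis band, and report slope, intercept and
# midband-fit. The sampling rate and band edges are illustrative.

fs = 100e6                                           # sampling rate, Hz
sig = np.random.default_rng(3).normal(size=2048)     # stand-in PA signal
spec_db = 10 * np.log10(np.abs(np.fft.rfft(sig)) ** 2 + 1e-12)
f = np.fft.rfftfreq(sig.size, d=1 / fs) / 1e6        # frequency axis, MHz

band = (f > 1) & (f < 10)                            # analysis band, MHz
slope, intercept = np.polyfit(f[band], spec_db[band], 1)
midband_fit = slope * f[band].mean() + intercept     # fit value at band center
print(slope, intercept, midband_fit)
```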

  6. Implications of Metric Choice for Common Applications of Readmission Metrics

    OpenAIRE

    Davies, Sheryl; Saynina, Olga; Schultz, Ellen; McDonald, Kathryn M; Baker, Laurence C

    2013-01-01

    Objective. To quantify the differential impact on hospital performance of three readmission metrics: all-cause readmission (ACR), 3M Potential Preventable Readmission (PPR), and Centers for Medicare and Medicaid 30-day readmission (CMS).

  7. Prognostic Performance Metrics

    Data.gov (United States)

    National Aeronautics and Space Administration — This chapter presents several performance metrics for offline evaluation of prognostics algorithms. A brief overview of different methods employed for performance...

  8. Quality Markers in Cardiology. Main Markers to Measure Quality of Results (Outcomes) and Quality Measures Related to Better Results in Clinical Practice (Performance Metrics). INCARDIO (Indicadores de Calidad en Unidades Asistenciales del Área del Corazón): A SEC/SECTCV Consensus Position Paper.

    Science.gov (United States)

    López-Sendón, José; González-Juanatey, José Ramón; Pinto, Fausto; Cuenca Castillo, José; Badimón, Lina; Dalmau, Regina; González Torrecilla, Esteban; López-Mínguez, José Ramón; Maceira, Alicia M; Pascual-Figal, Domingo; Pomar Moya-Prats, José Luis; Sionis, Alessandro; Zamorano, José Luis

    2015-11-01

    Cardiology practice requires complex organization that impacts overall outcomes and may differ substantially among hospitals and communities. The aim of this consensus document is to define quality markers in cardiology, including markers to measure the quality of results (outcomes metrics) and quality measures related to better results in clinical practice (performance metrics). The document is mainly intended for the Spanish health care system and may serve as a basis for similar documents in other countries. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  9. 'In-vivo' measurement of selenium in liver using a cyclic activation method

    Energy Technology Data Exchange (ETDEWEB)

    Nicolaou, G E; Spyrou, N M [Surrey Univ., Guildford (UK). Dept. of Physics; Matthews, I P [UKAEA Atomic Energy Research Establishment, Harwell. Environmental and Medical Sciences Div.; Stephens-Newsham, L G [Alberta Univ., Edmonton (Canada)

    1982-01-01

    In-vivo cyclic neutron activation analysis was used to measure selenium concentrations in the liver by means of 77m Se (17.6 s). The cyclic activation facility incorporates an oscillating 5 Ci Am/Be neutron source while the 'patient' remains stationary during the examination. For a total experimental time of 1800 s and a cyclic period of 26 s, a minimum detection limit of 0.6 ppm may be obtained; however, when comparison is made with in-vitro results, this limit may be significantly lower. The dose for such an investigation was approximately 0.26 × 10⁻² Sv.

  10. CT quantitative diagnosis in fatty liver. An experimental study

    International Nuclear Information System (INIS)

    Zhao Hong; Li Binxiang; Zhang Lizhong; Liang Jianfang

    1997-01-01

    Purpose: To evaluate the relation between liver fat content and CT value in an animal experiment, for the diagnosis and treatment of fatty liver in clinical practice. Materials and methods: A fatty liver model was established in 30 Wistar rats (experimental group), and 5 rats were used as the control group. The 5 rats of the control group, and 5 rats randomly chosen from the experimental group at the end of the first, second, third, and fourth weeks, were measured for the CT number of the total liver. Three pieces of liver specimen from each rat were removed from the left, central and right lobes for histologic examination. The ratio of liver fat content to liver volume (Vv value) was measured by a microscopic image pattern analyzer. Results: A significant linear negative correlation (r = -0.950, t = 12.90, P<0.001) was found between the CT and Vv values. Conclusion: Using CT monitoring, the degree and amount of liver fat can be assessed, and liver biopsy can be obviated in the diagnosis and follow-up during treatment of fatty liver.

  11. Monoparametric family of metrics derived from classical Jensen-Shannon divergence

    Science.gov (United States)

    Osán, Tristán M.; Bussandri, Diego G.; Lamberti, Pedro W.

    2018-04-01

    Jensen-Shannon divergence is a well-known multi-purpose measure of dissimilarity between probability distributions. It has been proven that the square root of this quantity is a true metric in the sense that, in addition to the basic properties of a distance, it also satisfies the triangle inequality. In this work we extend this last result to prove that it is in fact possible to derive a monoparametric family of metrics from the classical Jensen-Shannon divergence. Motivated by our results, an application to the field of symbolic sequence segmentation is explored. Additionally, we analyze the possibility of extending this result into the quantum realm.
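    The baseline fact being generalized can be checked numerically; SciPy's jensenshannon already returns the square root of the divergence, i.e. the distance.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# The square root of the Jensen-Shannon divergence is a metric on
# probability distributions; a quick numerical check of the triangle
# inequality on three example distributions.

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
r = np.array([0.1, 0.1, 0.8])

d_pq, d_qr, d_pr = (jensenshannon(a, b, base=2)
                    for a, b in [(p, q), (q, r), (p, r)])
print(d_pr <= d_pq + d_qr)   # triangle inequality holds: True
```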

  12. In vivo assessment of intracellular redox state in rat liver using hyperpolarized [1-13 C]Alanine.

    Science.gov (United States)

    Park, Jae Mo; Khemtong, Chalermchai; Liu, Shie-Chau; Hurd, Ralph E; Spielman, Daniel M

    2017-05-01

    The intracellular lactate to pyruvate concentration ratio is a commonly used tissue assay biomarker of redox, being proportional to free cytosolic [NADH]/[NAD + ]. In this study, we assessed the use of hyperpolarized [1- 13 C]alanine and the subsequent detection of the intracellular products of [1- 13 C]pyruvate and [1- 13 C]lactate as a useful substrate for assessing redox levels in the liver in vivo. Animal experiments were conducted to measure in vivo metabolism at baseline and after ethanol infusion. A solution of 80-mM hyperpolarized [1- 13 C]alanine was injected intravenously at baseline (n = 8) and 45 min after ethanol infusion (n = 4), immediately followed by the dynamic acquisition of 13 C MRS spectra. In vivo rat liver spectra showed peaks from [1- 13 C]alanine and the products of [1- 13 C]lactate, [1- 13 C]pyruvate, and 13 C-bicarbonate. A significantly increased 13 C-lactate/ 13 C-pyruvate ratio was observed after ethanol infusion (8.46 ± 0.58 at baseline versus 13.58 ± 0.69 after ethanol infusion; P < 0.05). An in vivo assessment of hepatic redox state using hyperpolarized [1- 13 C]alanine is presented, with the validity of the proposed 13 C-lactate/ 13 C-pyruvate metric tested using an ethanol challenge to alter liver redox state. Magn Reson Med 77:1741-1748, 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  13. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows the importance of different dimensions to be adjusted automatically. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  14. Measuring Design Metrics In Websites

    OpenAIRE

    Navarro, Emilio; Fitzpatrick, Ronan

    2011-01-01

    The current state of the World Wide Web demands website designs that engage consumers in order to allow them to consume services or generate leads to maximize revenue. This paper describes a software quality factor to measure the success of websites by analyzing web design structure rather than relying only on website traffic data. It also documents the requirements and architecture needed to build a software tool that measures the criteria for determining Engagibility. A new set of social crit...

  15. Radiographic liver size in Pekingese dogs versus other dog breeds.

    Science.gov (United States)

    Choi, Jihye; Keh, Seoyeon; Kim, Hyunwook; Kim, Junyoung; Yoon, Junghee

    2013-01-01

    Differential diagnoses for canine liver disease are commonly based on radiographic estimates of liver size; however, little has been published on breed variations. The aims of this study were to describe normal radiographic liver size in Pekingese dogs and to compare normal measurements for this breed with other dog breeds and with Pekingese dogs with liver disease. Liver measurements were compared for clinically normal Pekingese (n = 61), normal non-Pekingese brachycephalic (n = 45), normal nonbrachycephalic (n = 71), and Pekingese breed dogs with liver disease (n = 22). For each dog, body weight, liver length, T11 vertebral length, thoracic depth, and thoracic width were measured on right lateral and ventrodorsal abdominal radiographs. Liver volume was calculated using a formula, and the ratios of liver length/T11 vertebral length and liver volume/body weight were determined. Normal Pekingese dogs had a significantly smaller liver volume/body weight ratio (16.73 ± 5.67; P < 0.05) than normal non-Pekingese brachycephalic breed dogs (19.54 ± 5.03) and normal nonbrachycephalic breed dogs (18.72 ± 6.52). The liver length/T11 vertebral length ratio in normal Pekingese (4.64 ± 0.65) was significantly smaller than in normal non-Pekingese brachycephalic breed dogs (5.16 ± 0.74) and normal nonbrachycephalic breed dogs (5.40 ± 0.74). Ratios of liver volume/body weight and liver length/T11 vertebral length in normal Pekingese were significantly different from Pekingese with liver diseases (P < 0.05). Pekingese dogs have a smaller normal radiographic liver size than other breeds. We recommend using 4.64× the length of the T11 vertebra as a radiographic criterion for normal liver length in Pekingese dogs. © 2012 Veterinary Radiology & Ultrasound.

  16. Security camera resolution measurements: Horizontal TV lines versus modulation transfer function measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Birch, Gabriel Carisle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Griffin, John Clark [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    The horizontal television lines (HTVL) metric has been the primary quantity used by division 6000 related to camera resolution for high consequence security systems. This document shows that HTVL measurements are fundamentally insufficient as a metric to determine camera resolution, and proposes a quantitative, standards-based methodology: measuring the camera system modulation transfer function (MTF), the most common and accepted metric of resolution in the optical science community. Because HTVL calculations are easily misinterpreted or poorly defined, we present several scenarios in which HTVL is frequently reported, and discuss their problems. The MTF metric is discussed, and scenarios are presented with calculations showing the application of such a metric.
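    A minimal sketch of the standard MTF workflow implied above: differentiate a measured edge spread function to obtain the line spread function, Fourier transform, and read off a resolution figure. The synthetic Gaussian edge and the MTF10 criterion are assumptions for illustration.

```python
import numpy as np
from scipy.special import erf

# Build a synthetic blurred edge (ESF), differentiate to get the line
# spread function (LSF), and take the normalized Fourier magnitude as the
# system MTF. The 0.1 threshold ("MTF10") is one common resolution figure.

x = np.linspace(-2, 2, 512)                      # mm across the edge
dx = x[1] - x[0]
esf = 0.5 * (1 + erf(x / (np.sqrt(2) * 0.05)))   # Gaussian-blurred edge
lsf = np.gradient(esf, dx)
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                    # normalize to DC
freq = np.fft.rfftfreq(lsf.size, d=dx)           # cycles/mm

limit = freq[np.argmax(mtf < 0.1)]
print(f"MTF falls below 0.1 at {limit:.1f} cycles/mm")
```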

  17. Accuracy of liver lesion assessment using automated measurement and segmentation software in biphasic multislice CT (MSCT)

    International Nuclear Information System (INIS)

    Puesken, M.; Juergens, K.U.; Edenfeld, A.; Buerke, B.; Seifarth, H.; Beyer, F.; Heindel, W.; Wessling, J.; Suehling, M.; Osada, N.

    2009-01-01

    Purpose: To assess the accuracy of liver lesion measurement using automated measurement and segmentation software depending on the vascularization level. Materials and Methods: Arterial and portal venous phase multislice CT (MSCT) was performed for 58 patients. 94 liver lesions were evaluated and classified according to vascularity (hypervascular: 13 hepatocellular carcinomas, 20 hemangiomas; hypovascular: 31 metastases, 3 lymphomas, 4 abscesses; liquid: 23 cysts). The RECIST diameter and volume were obtained using automated measurement and segmentation software and compared to corresponding measurements derived visually by two experienced radiologists as a reference standard. Statistical analysis was performed using the Wilcoxon test and concordance correlation coefficients. Results: Automated measurements revealed no significant difference between the arterial and portal venous phase in hypovascular (mean RECIST diameter: 31.4 vs. 30.2 mm; p = 0.65; κ = 0.875) and liquid lesions (20.4 vs. 20.1 mm; p = 0.1; κ = 0.996). The RECIST diameter and volume of hypervascular lesions were significantly underestimated in the portal venous phase as compared to the arterial phase (30.3 vs. 26.9 mm, p = 0.007, κ = 0.834; 10.7 vs. 7.9 ml, p = 0.0045, κ = 0.752). Automated measurements for hypovascular and liquid lesions in the arterial and portal venous phase were concordant to the reference standard. Hypervascular lesion measurements were in line with the reference standard for the arterial phase (30.3 vs. 32.2 mm, p = 0.66, κ = 0.754), but revealed a significant difference for the portal venous phase (26.9 vs. 32.1 mm; p = 0.041; κ = 0.606). (orig.)

  18. Energy-Based Metrics for Arthroscopic Skills Assessment.

    Science.gov (United States)

    Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa

    2017-08-05

    Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
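    A hedged sketch of the evaluation scheme described above, with synthetic stand-ins for the normalized energy-based features; the subject count follows the abstract, everything else is illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Classify novice/expert from normalized energy-based features with an SVM,
# scored by leave-one-subject-out cross-validation. Features are synthetic.

rng = np.random.default_rng(4)
groups = np.repeat(np.arange(26), 3)                 # subject id, 3 tasks each
y = np.repeat((np.arange(26) >= 13).astype(int), 3)  # 13 novices, 13 experts
X = rng.normal(size=(26 * 3, 4))                     # 4 normalized metrics
X[y == 1] += 1.0                                     # experts separable on average

scores = cross_val_score(SVC(kernel="rbf"), X, y, groups=groups,
                         cv=LeaveOneGroupOut())
print(f"LOSO accuracy: {scores.mean():.2f}")
```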

  19. Principle of space existence and De Sitter metric

    International Nuclear Information System (INIS)

    Mal'tsev, V.K.

    1990-01-01

    The selection principle for the solutions of the Einstein equations suggested in a series of papers implies the existence of space (g ik ≠ 0) only in the presence of matter (T ik ≠0). This selection principle (principle of space existence, in the Markov terminology) implies, in the general case, the absence of the cosmological solution with the De Sitter metric. On the other hand, the De Sitter metric is necessary for describing both inflation and deflation periods of the Universe. It is shown that the De Sitter metric is also allowed by the selection principle under discussion if the metric experiences the evolution into the Friedmann metric

  20. Correlation between Abdominal Fat Amount and Fatty Liver, using Liver to Kidney Echo Ratio on Ultrasound

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yanhg Shin; Lee, Chang Hee; Choi, Kyung Mook; Lee, Jong Mee; Choi, Jae Woong; Kim, Kyeong Ah; Park, Cheol Min [Korea University Guro Hospital, Korea University College of Medicine, Seoul (Korea, Republic of)

    2012-08-15

    It has been generally recognized that fatty liver can often be seen in the obese population. This study was conducted in order to evaluate the association between fatty liver and abdominal fat volume. A total of 105 patients who visited our obesity clinic in the recent three years underwent fat CT scans and abdominal US. Attenuation difference between liver and spleen on CT was considered as a reference standard for the diagnosis of fatty liver. On US, the echogenicity of the liver parenchyma was measured in three different regions of interest (ROI) close to the adjacent right kidney in the same slice, avoiding vessels, bile duct, and calcification. Similar measurements were performed in the right renal cortex. The mean values were calculated automatically on the histogram of the ROI using the PACS program. The hepatorenal echogenicity ratio (HER; mean hepatic echogenicity/ mean renal echogenicity) was then calculated. Abdominal fat volume was measured using a 3 mm slice CT scan at the L4/5 level and was calculated automatically using a workstation. Abdominal fat was classified according to total fat (TF), visceral fat (VF), and subcutaneous fat (SF). We used Pearson's bivariate correlation method for assessment of the correlation between HER and TF, VF, and SF, respectively. Significant correlation was observed between HER and abdominal fat (TF, VF, and SF). HER showed significant correlation with VF and TF (r = 0.491 and 0.402, respectively; p = 0.000). The correlation between HER and SF (r = 0.255, p = 0.009) was less significant than for VF or TF. Fat measurement (HER) by hepatic ultrasound correlated well with the amount of abdominal fat. In particular, the VF was found to show a stronger association with fatty liver than SF.
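    The HER itself is a simple ratio of ROI means; a small sketch follows, assuming the ROI pixel arrays have already been extracted from the ultrasound image upstream.

```python
import numpy as np

# Hepatorenal echogenicity ratio (HER): mean echogenicity of three liver
# ROIs divided by the mean of a renal-cortex ROI from the same slice.
# The synthetic ROI values below are illustrative only.

def her(liver_rois, renal_roi):
    liver_mean = np.mean([roi.mean() for roi in liver_rois])
    return liver_mean / renal_roi.mean()

rng = np.random.default_rng(5)
liver = [rng.normal(120, 5, (20, 20)) for _ in range(3)]   # brighter liver
kidney = rng.normal(80, 5, (20, 20))                        # darker cortex
print(round(her(liver, kidney), 2))                         # ~1.5
```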

  2. What can article-level metrics do for you?

    Science.gov (United States)

    Fenner, Martin

    2013-10-01

    Article-level metrics (ALMs) provide a wide range of metrics about the uptake of an individual journal article by the scientific community after publication. They include citations, usage statistics, discussions in online comments and social media, social bookmarking, and recommendations. In this essay, we describe why article-level metrics are an important extension of traditional citation-based journal metrics and provide a number of examples from ALM data collected for PLOS Biology.

  3. Wireless sensor network performance metrics for building applications

    Energy Technology Data Exchange (ETDEWEB)

    Jang, W.S. [Department of Civil Engineering, Yeungnam University, 214-1 Dae-Dong, Gyeongsan-Si, Gyeongsangbuk-Do 712-749 (South Korea)]; Healy, W.M. [Building and Fire Research Laboratory, 100 Bureau Drive, Gaithersburg, MD 20899-8632 (United States)]

    2010-06-15

    Metrics are investigated to help assess the performance of wireless sensors in buildings. Wireless sensor networks present tremendous opportunities for energy savings and improvement in occupant comfort in buildings by making data about conditions and equipment more readily available. A key barrier to their adoption, however, is the uncertainty among users regarding the reliability of the wireless links through building construction. Tests were carried out that examined three performance metrics as a function of transmitter-receiver separation distance, transmitter power level, and obstruction type. These tests demonstrated, via the packet delivery rate, a clear transition from reliable to unreliable communications at different separation distances. While the packet delivery rate is difficult to measure in actual applications, the received signal strength indication correlated well with the drop in packet delivery rate in the relatively noise-free environment used in these tests. The concept of an equivalent distance was introduced to translate the range of reliability in open field operation to that seen in a typical building, thereby providing wireless system designers a rough estimate of the necessary spacing between sensor nodes in building applications. It is anticipated that the availability of straightforward metrics on the range of wireless sensors in buildings will enable more widespread sensing in buildings for improved control and fault detection. (author)
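
    As one way such metrics could be applied, the sketch below reads off the reliable range from packet-delivery-rate measurements and derates it by an assumed per-obstruction penalty to get an equivalent indoor spacing. The numbers, the 90% reliability threshold, and the 5 m wall penalty are all assumptions for illustration, not values from the paper.

```python
# Estimate a reliable open-field range from packet delivery rate (PDR),
# then translate it to an equivalent indoor spacing. Data are invented.
distances_m = [5, 10, 15, 20, 25, 30]      # transmitter-receiver separation
received = [500, 499, 496, 471, 310, 42]   # packets received out of 500 sent
sent = 500

pdr = [r / sent for r in received]
reliable_range = max(d for d, p in zip(distances_m, pdr) if p >= 0.90)

# Assumed cost of one interior wall, in metres of open-field range:
wall_penalty_m, n_walls = 5, 2
equivalent_indoor_range = reliable_range - wall_penalty_m * n_walls
print(reliable_range, equivalent_indoor_range)   # 20 and 10 metres here
```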

  4. Quantitative application of sigma metrics in medical biochemistry.

    Science.gov (United States)

    Nanda, Sunil Kumar; Ray, Lopamudra

    2013-12-01

    Laboratory errors are the result of a poorly designed quality system in the laboratory. Six Sigma is an error reduction methodology that has been successfully applied at Motorola and General Electric. Sigma (σ) is the mathematical symbol for standard deviation (SD). Sigma methodology can be applied wherever an outcome of a process has to be measured. A poor outcome is counted as an error or defect, quantified as defects per million (DPM). A Six Sigma process is one in which 99.999666% of the products manufactured are statistically expected to be free of defects; regulating a process to 6 SDs corresponds to 3.4 defects per million opportunities. It can be inferred that as sigma increases, the consistency and steadiness of the test improve, thereby reducing operating costs. We aimed to gauge the performance of our laboratory parameters by sigma metrics and to evaluate sigma metrics for interpreting parameter performance in clinical biochemistry. Six months of internal QC data (October 2012 to March 2013) and EQAS (external quality assurance scheme) results were extracted for the parameters glucose, urea, creatinine, total bilirubin, total protein, albumin, uric acid, total cholesterol, triglycerides, chloride, SGOT, SGPT and ALP. Coefficients of variation (CV) were calculated from internal QC for these parameters. Percentage bias for these parameters was calculated from the EQAS. Total allowable errors were taken from Clinical Laboratory Improvement Amendments (CLIA) guidelines. Sigma metrics were calculated from the CV, percentage bias and total allowable error for the above-mentioned parameters. For total bilirubin, uric acid, SGOT, SGPT and ALP, the sigma values were found to be more than 6. For glucose, creatinine, triglycerides and urea, the sigma values were found to be between 3 and 6. For total protein, albumin, cholesterol and chloride, the sigma values were found to be less than 3. ALP was the best
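
    The sigma metric referred to above is conventionally computed as sigma = (TEa − |bias|) / CV, with all three terms expressed in percent. A minimal sketch follows; the analyte figures are illustrative, not the study's data.

```python
# Sigma metric from total allowable error (TEa), bias, and CV (all in %).
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    return (tea_pct - abs(bias_pct)) / cv_pct

analytes = {
    # name: (TEa %, EQAS bias %, internal QC CV %) -- illustrative values
    "glucose": (10.0, 2.1, 1.8),
    "ALP": (30.0, 3.5, 4.0),
}
for name, (tea, bias, cv) in analytes.items():
    print(f"{name}: sigma = {sigma_metric(tea, bias, cv):.1f}")
```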

  5. Alignment of breast cancer screening guidelines, accountability metrics, and practice patterns.

    Science.gov (United States)

    Onega, Tracy; Haas, Jennifer S; Bitton, Asaf; Brackett, Charles; Weiss, Julie; Goodrich, Martha; Harris, Kimberly; Pyle, Steve; Tosteson, Anna N A

    2017-01-01

    Breast cancer screening guidelines and metrics are inconsistent with each other and may differ from breast screening practice patterns in primary care. This study measured breast cancer screening practice patterns in relation to common evidence-based guidelines and accountability metrics. Cohort study using primary data collected from a regional breast cancer screening research network between 2011 and 2014. Using information on women aged 30 to 89 years within 21 primary care practices of 2 large integrated health systems in New England, we measured the proportion of women screened overall and by age using 2 screening definition categories: any mammogram and screening mammogram. Of the 81,352 women in our cohort, 54,903 (67.5%) had at least 1 mammogram during the time period and 48,314 (59.4%) had a screening mammogram. Women aged 50 to 69 years had the highest proportion screened (82.4% any mammogram, 75% with a screening indication); 72.6% of women aged 40 had a screening mammogram, with a median of 70% (range = 54.3%-84.8%) among the practices. Of women aged at least 75 years, 63.3% had a screening mammogram, with a median of 63.9% (range = 37.2%-78.3%) among the practices. Of women who had 2 or more mammograms, 79.5% were screened annually. Primary care practice patterns for breast cancer screening are not well aligned with some evidence-based guidelines and accountability metrics. Metrics and incentives should be designed with more uniformity and should also include shared decision making when the evidence does not clearly support one single conclusion.

  6. ARM Data-Oriented Metrics and Diagnostics Package for Climate Model Evaluation Value-Added Product

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Chengzhu [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Xie, Shaocheng [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-10-15

    A Python-based metrics and diagnostics package is currently being developed by the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Infrastructure Team at Lawrence Livermore National Laboratory (LLNL) to facilitate the use of long-term, high-frequency measurements from the ARM Facility in evaluating the regional climate simulation of clouds, radiation, and precipitation. This metrics and diagnostics package computes climatological means of targeted climate model simulations and generates tables and plots for comparing the model simulation with ARM observational data. The Coupled Model Intercomparison Project (CMIP) model data sets are also included in the package to enable model intercomparison, as demonstrated in Zhang et al. (2017). The CMIP multi-model mean can serve as a reference for individual models. Basic performance metrics are computed to measure the accuracy of the mean state and variability of climate models. The evaluated physical quantities include cloud fraction, temperature, relative humidity, cloud liquid water path, total column water vapor, precipitation, sensible and latent heat fluxes, and radiative fluxes, with plans to extend to more fields, such as aerosol and microphysics properties. Process-oriented diagnostics focusing on individual cloud- and precipitation-related phenomena are also being developed for the evaluation and development of specific model physical parameterizations. The version 1.0 package is designed around data collected at ARM's Southern Great Plains (SGP) Research Facility, with the plan to extend to other ARM sites. The metrics and diagnostics package is currently built upon standard Python libraries and additional Python packages developed by DOE (such as CDMS and CDAT). The ARM metrics and diagnostics package is available publicly with the hope that it can serve as an easy entry point for climate modelers to compare their models with ARM data. In this report, we first present the input data, which
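
    A toy version of the package's baseline comparison (a minimal sketch, not the package's actual API) computes a climatological annual cycle for a model field and an observed field, then reports bias and RMSE:

```python
# Climatological annual cycle, bias and RMSE for synthetic monthly series.
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(120) % 12                      # 10 years of monthly data
obs = 15 + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1, 120)
model = obs + 1.5 + rng.normal(0, 1, 120)         # model with a warm bias

# Mean over years for each calendar month:
obs_clim = np.array([obs[months == m].mean() for m in range(12)])
model_clim = np.array([model[months == m].mean() for m in range(12)])

bias = (model_clim - obs_clim).mean()
rmse = np.sqrt(((model_clim - obs_clim) ** 2).mean())
print(f"annual-mean bias = {bias:.2f}, climatology RMSE = {rmse:.2f}")
```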

  7. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
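
    A toy rendering of the idea, under stated assumptions rather than the paper's exact algorithm: Nadaraya-Watson kernel regression with a diagonal input metric whose per-dimension weights are selected by minimising a leave-one-out error estimate, so that irrelevant dimensions get down-weighted.

```python
# Kernel regression with an adaptive diagonal metric, weights chosen by
# leave-one-out cross-validation on a toy problem.
import numpy as np

def nw_predict(Xtr, ytr, Xte, w):
    # Gaussian kernel with metric d(x, x')^2 = sum_k w_k (x_k - x'_k)^2
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2 * w).sum(-1)
    K = np.exp(-0.5 * d2)
    return (K @ ytr) / K.sum(1)

def loo_error(X, y, w):
    idx = np.arange(len(y))
    return np.mean([(nw_predict(X[idx != i], y[idx != i], X[i:i+1], w)[0] - y[i]) ** 2
                    for i in idx])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=60)   # dimension 2 is irrelevant

grid = [0.01, 0.1, 1.0, 10.0, 100.0]
best = min(((w0, w1) for w0 in grid for w1 in grid),
           key=lambda w: loo_error(X, y, np.array(w)))
print("selected metric weights:", best)   # expect a small weight on dim 2
```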

  8. Comparing exposure zones by different exposure metrics using statistical parameters: contrast and precision.

    Science.gov (United States)

    Park, Ji Young; Ramachandran, Gurumurthy; Raynor, Peter C; Eberly, Lynn E; Olson, Greg

    2010-10-01

    Recently, the appropriateness of using the 'mass concentration' metric for ultrafine particles has been questioned, and surface area (SA) or number concentration metrics have been proposed as alternatives. To assess the abilities of various exposure metrics to distinguish between different exposure zones in workplaces with nanoparticle aerosols, exposure concentrations were measured in preassigned 'high-' and 'low-'exposure zones in a restaurant, an aluminum die-casting factory, and a diesel engine laboratory using SA, number, and mass concentration metrics. Predetermined exposure classifications were compared by each metric using statistical parameters and concentration ratios that were calculated from the different exposure concentrations. In the restaurant, SA and fine particle number concentrations showed significant differences between the high- and low-exposure zones and had higher contrast (the ratio of between-zone variance to the sum of the between-zone and within-zone variances) than mass concentrations. Mass concentrations did not show significant differences. In the die-casting facility, concentrations of all metrics were significantly greater in the high zone than in the low zone. SA and fine particle number concentrations showed larger concentration ratios between the high and low zones and higher contrast than mass concentrations. None of the metrics were significantly different between the high- and low-exposure zones in the diesel engine laboratory. The SA and fine particle number concentrations appeared to be better at differentiating exposure zones and finding the particle generation sources in workplaces generating nanoparticles. Because the choice of an exposure metric has significant implications for epidemiologic studies and industrial hygiene practice, a multimetric sampling approach is recommended for nanoparticle exposure assessment.
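
    The contrast statistic used above follows directly from its definition, the between-zone variance divided by the sum of the between-zone and within-zone variances. A minimal sketch with invented (log-transformed) concentrations:

```python
# Contrast = between-zone variance / (between-zone + within-zone variance).
import numpy as np

high = np.log([320.0, 410.0, 280.0, 390.0, 350.0])   # high-exposure zone
low = np.log([110.0, 95.0, 150.0, 130.0, 120.0])     # low-exposure zone

n = len(high) + len(low)
grand = np.concatenate([high, low]).mean()
between = sum(len(z) * (z.mean() - grand) ** 2 for z in (high, low)) / n
within = np.mean(np.concatenate([high - high.mean(), low - low.mean()]) ** 2)

contrast = between / (between + within)
print(f"contrast = {contrast:.2f}")   # approaches 1 for well-separated zones
```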

  9. THE ROLE OF ARTICLE LEVEL METRICS IN SCIENTIFIC PUBLISHING

    Directory of Open Access Journals (Sweden)

    Vladimir TRAJKOVSKI

    2016-04-01

    Full Text Available Emerging metrics based on the article level do not exclude traditional metrics based on citations to the journal, but complement them. Article-level metrics (ALMs) provide a wide range of metrics about the uptake of an individual journal article by the scientific community after publication. They include citations, usage statistics, discussions in online comments and social media, social bookmarking, and recommendations. In this editorial, the role of article-level metrics in publishing scientific papers is described. ALMs are rapidly emerging as important tools to quantify how individual articles are being discussed, shared, and used. Data sources depend on the tool, but they include classic citation-based indicators, academic social networks (Mendeley, CiteULike, Delicious) and social media (Facebook, Twitter, blogs, and YouTube). The most popular tools used to apply these new metrics are: Public Library of Science - Article-Level Metrics, Altmetric, Impactstory and Plum Analytics. The Journal Impact Factor (JIF) does not consider impact or influence beyond citation counts, as this count is reflected only through Thomson Reuters' Web of Science® database, and it provides an indicator related to the journal, not to an individual published paper. Thus, altmetrics are becoming an alternative for the performance assessment of individual scientists and their scholarly publications. Macedonian scholarly publishers have to work on implementing article-level metrics in their e-journals. It is the way to increase their visibility and impact in the world of science.

  10. [Preoperative imaging/operation planning for liver surgery].

    Science.gov (United States)

    Schoening, W N; Denecke, T; Neumann, U P

    2015-12-01

    The currently established standard for planning liver surgery is multistage contrast media-enhanced multidetector computed tomography (CM-CT), which as a rule enables appropriate resection planning, e.g. precise identification and localization of primary and secondary liver tumors as well as their anatomical relation to extrahepatic and/or intrahepatic vascular and biliary structures. Furthermore, CM-CT enables the measurement of tumor volume, total liver volume and residual liver volume after resection. Given normal liver function, a residual liver volume of 25% is nowadays considered sufficient and safe. Recent studies in patients with liver metastases of colorectal cancer showed a clear staging advantage of contrast media-enhanced magnetic resonance imaging (CM-MRI) over CM-CT. In addition, the most recent data show that the use of liver-specific MRI contrast media further increases the sensitivity and specificity of detection of liver metastases. This imaging technology seems to come closer to the ideal "one-stop shopping" diagnostic tool in preoperative planning of liver resection.

  11. Adipokines in Liver Cirrhosis.

    Science.gov (United States)

    Buechler, Christa; Haberl, Elisabeth M; Rein-Fischboeck, Lisa; Aslanidis, Charalampos

    2017-06-29

    Liver fibrosis can progress to cirrhosis, which is considered a serious disease. The Child-Pugh score and the model for end-stage liver disease (MELD) score have been established to assess residual liver function in patients with liver cirrhosis. The development of portal hypertension contributes to ascites, variceal bleeding and further complications in these patients. A transjugular intrahepatic portosystemic shunt (TIPS) is used to lower portal pressure, which represents a major improvement in the treatment of patients. Adipokines are proteins released from adipose tissue that modulate hepatic fibrogenesis. These proteins affect various biological processes that are involved in liver function, including angiogenesis, vasodilation, inflammation and deposition of extracellular matrix proteins. The best studied adipokines are adiponectin and leptin. Adiponectin protects against hepatic inflammation and fibrogenesis, whereas leptin functions as a profibrogenic factor. These and other adipokines are thought to modulate disease severity in patients with liver cirrhosis. Consequently, circulating levels of these proteins have been analyzed to identify associations with parameters of hepatic function, portal hypertension and its associated complications in patients with liver cirrhosis. This review article briefly addresses the role of adipokines in hepatitis and liver fibrosis. Studies that analyzed these proteins in systemic blood of cirrhotic patients are listed, to identify adipokines that are comparably changed across the different cohorts of patients with liver cirrhosis. Some studies measured these proteins in systemic, hepatic and portal vein blood, or after TIPS, to determine the tissues contributing to circulating levels of these proteins and the effect of portal hypertension, respectively.

  12. Advanced Metrics for Assessing Holistic Care: The “Epidaurus 2” Project

    Science.gov (United States)

    Foote, Frederick O; Benson, Herbert; Berger, Ann; Berman, Brian; DeLeo, James; Deuster, Patricia A.; Lary, David J; Silverman, Marni N.; Sternberg, Esther M

    2018-01-01

    In response to the challenge of military traumatic brain injury and posttraumatic stress disorder, the US military developed a wide range of holistic care modalities at the new Walter Reed National Military Medical Center, Bethesda, MD, from 2001 to 2017, guided by civilian expert consultation via the Epidaurus Project. These projects spanned a range from healing buildings to wellness initiatives and healing through nature, spirituality, and the arts. The next challenge was to develop whole-body metrics to guide the use of these therapies in clinical care. Under the “Epidaurus 2” Project, a national search produced 5 advanced metrics for measuring whole-body therapeutic effects: genomics, integrated stress biomarkers, language analysis, machine learning, and “Star Glyphs.” This article describes the metrics, their current use in guiding holistic care at Walter Reed, and their potential for operationalizing personalized care, patient self-management, and the improvement of public health. Development of these metrics allows the scientific integration of holistic therapies with organ-system-based care, expanding the powers of medicine. PMID:29497586

  13. Using community-level metrics to monitor the effects of marine protected areas on biodiversity.

    Science.gov (United States)

    Soykan, Candan U; Lewison, Rebecca L

    2015-06-01

    Marine protected areas (MPAs) are used to protect species, communities, and their associated habitats, among other goals. Measuring MPA efficacy can be challenging, however, particularly when considering responses at the community level. We gathered 36 abundance and 14 biomass data sets on fish assemblages and used meta-analysis to evaluate the ability of 22 distinct community diversity metrics to detect differences in community structure between MPAs and nearby control sites. We also considered the effects of 6 covariates (MPA size, MPA age, the interaction of MPA size and age, latitude, total species richness, and level of protection) on each metric. Some common metrics, such as species richness and Shannon diversity, did not differ consistently between MPA and control sites, whereas other metrics, such as total abundance and biomass, were consistently different across studies. Metric responses derived from the biomass data sets were more consistent than those based on the abundance data sets, suggesting that community-level biomass differs more predictably than abundance between MPA and control sites. Covariate analyses indicated that level of protection, latitude, MPA size, and the interaction between MPA size and age affect metric performance. These results highlight a handful of metrics, several of which are little known, that could be used to meet the increasing demand for community-level indicators of MPA effectiveness. © 2015 Society for Conservation Biology.
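
    Several of the metrics compared above are simple functions of a per-species abundance vector; a minimal sketch with hypothetical counts:

```python
# Species richness, total abundance, and Shannon diversity from counts.
import numpy as np

abundance = np.array([42, 17, 9, 5, 3, 1, 1])   # individuals per species

richness = int((abundance > 0).sum())
total_abundance = int(abundance.sum())
p = abundance / abundance.sum()
shannon = float(-(p * np.log(p)).sum())          # Shannon index H'

print(richness, total_abundance, round(shannon, 2))
```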

  14. Pazarlama Performans Ölçütleri: Bir Literatür Taraması (Marketing Metrics: A Literature Review)

    OpenAIRE

    Güngör HACIOĞLU

    2012-01-01

    Marketing's inability to measure its contribution to firm performance has led to a loss of status within the firm, and the marketing function has therefore recently come under increasing pressure to evaluate its performance and be accountable. In this context, determining appropriate metrics to measure marketing performance is discussed by both marketing practitioners and scholars. The aim of this study is to review the literature on marketing metrics used to measure marketing performance and importance att...

  15. Supplier selection using different metric functions

    Directory of Open Access Journals (Sweden)

    Omosigho S.E.

    2015-01-01

    Full Text Available Supplier selection is an important component of supply chain management in today's global competitive environment. Hence, the evaluation and selection of suppliers have received considerable attention in the literature. Many attributes of suppliers, other than cost, are considered in the evaluation and selection process. Therefore, the process of evaluation and selection of suppliers is a multi-criteria decision-making process. The methodology adopted to solve the supplier selection problem is intuitionistic fuzzy TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution). Generally, TOPSIS is based on the concept of minimum distance from the positive ideal solution and maximum distance from the negative ideal solution. We examine the deficiencies of using only one metric function in TOPSIS and propose the use of a spherical metric function in addition to the commonly used metric functions. For empirical supplier selection problems, more than one metric function should be used.
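
    To illustrate why the metric function matters, the sketch below runs plain (crisp) TOPSIS on a made-up decision matrix with interchangeable Minkowski distances; the paper's intuitionistic fuzzy variant and its spherical metric are not reproduced here.

```python
# Crisp TOPSIS with a selectable Minkowski-p distance to the ideal solutions.
import numpy as np

def topsis(D, weights, p=2):
    # Vector-normalise columns, weight them, then measure distance to the
    # positive/negative ideal solutions (all criteria treated as benefits).
    V = D / np.sqrt((D ** 2).sum(axis=0)) * weights
    pos, neg = V.max(axis=0), V.min(axis=0)
    d_pos = (np.abs(V - pos) ** p).sum(axis=1) ** (1 / p)
    d_neg = (np.abs(V - neg) ** p).sum(axis=1) ** (1 / p)
    return d_neg / (d_pos + d_neg)     # closeness coefficient, higher is better

D = np.array([[250.0, 16.0, 12.0],     # suppliers x criteria (invented)
              [200.0, 18.0, 8.0],
              [300.0, 14.0, 10.0]])
w = np.array([0.5, 0.3, 0.2])

for p in (1, 2):                       # Manhattan vs Euclidean metric
    print(f"p = {p}:", np.round(topsis(D, w, p), 3))
```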

  16. Regional Sustainability: The San Luis Basin Metrics Project

    Science.gov (United States)

    There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute. Moreover, individual metrics may not capture all aspects of a system that are relevant to sust...

  17. A simple method to approximate liver size on cross-sectional images using living liver models

    International Nuclear Information System (INIS)

    Muggli, D.; Mueller, M.A.; Karlo, C.; Fornaro, J.; Marincek, B.; Frauenfelder, T.

    2009-01-01

    Aim: To assess whether a simple, diameter-based formula applicable to cross-sectional images can be used to calculate the total liver volume. Materials and methods: In 119 cross-sectional examinations (62 computed tomography and 57 magnetic resonance imaging), a simple, formula-based method to approximate the liver volume was evaluated. The total liver volume was approximated by measuring the largest craniocaudal (cc), ventrodorsal (vd), and coronal (cor) diameters by two readers and implementing the equation: Vol_estimated = cc × vd × cor × 0.31. Inter-rater reliability, agreement, and correlation between the liver volume calculation and virtual liver volumetry were analysed. Results: No significant disagreement between the two readers was found. The formula correlated significantly with the volumetric data (r > 0.85, p < 0.0001). In 81% of cases the error of the approximated volume was <10%, and in 92% of cases <15%, compared to the volumetric data. Conclusion: Total liver volume can be accurately estimated on cross-sectional images using a simple, diameter-based equation.
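
    The reported equation is simple enough to wrap in a helper. A minimal sketch follows; the units (diameters in cm, volume in ml) are an assumption, as the abstract does not state them explicitly.

```python
# Diameter-based liver volume estimate from the abstract:
# Vol_estimated = cc x vd x cor x 0.31
def estimated_liver_volume(cc_cm: float, vd_cm: float, cor_cm: float) -> float:
    """Largest craniocaudal, ventrodorsal and coronal diameters (assumed cm)."""
    return cc_cm * vd_cm * cor_cm * 0.31

print(f"{estimated_liver_volume(15.1, 17.3, 14.2):.0f} ml")  # illustrative input
```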

  18. Diagnosis and treatment procedure for intractable liver ascites

    Directory of Open Access Journals (Sweden)

    FAN Zhidong

    2015-03-01

    Full Text Available Ascites is a common complication of liver cirrhosis. Liver ascites may occur repeatedly, which increases the therapeutic difficulty. This paper reviews the definition of intractable liver ascites, general treatment measures, and current treatment of common complications such as spontaneous bacterial peritonitis and hepatorenal syndrome, as well as the advances in conventional, unconventional, and surgical treatment of intractable liver ascites. It is pointed out that abdominocentesis for excessive drainage and active preparation for liver transplantation are the preferred approach to the treatment of intractable liver ascites.

  19. Analysis of fatty liver by CT values in obese children

    International Nuclear Information System (INIS)

    Naganuma, Yoshihiro; Tomizawa, Shuichi; Ikarashi, Kozo; Tohyama, Jun; Ozawa, Kanzi; Uchiyama, Makoto.

    1996-01-01

    Liver attenuation values were measured by CT in 97 obese children aged 3 to 18 years (183 examinations), and a diagnosis of fatty liver was made in 42 subjects. The liver/spleen ratio from CT measurements showed a significant negative correlation with the percentage of standard body weight and with systolic pressure. In children with fatty liver, systolic pressure and serum GOT, GPT, ChE, TC, TG, ApoB and insulin were significantly higher than in children without fatty liver. After a low-calorie dietary regimen and exercise therapy, the liver/spleen ratio and GPT improved in all children. The diagnosis of fatty infiltration (fatty liver) was made with a liver/spleen ratio of less than 1.0 as determined by CT, a reasonable criterion for the diagnosis of fatty liver by CT in children. There were some children with elevated GPT who showed normal CT findings. This may be caused by overnutrition associated with fatty infiltration, since GPT decreased in all these children after treatment. The present study suggests that CT is a useful procedure for diagnosing fatty liver, and for monitoring and determining the efficacy of treatment in obese children. (author)
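
    The CT criterion above amounts to a one-line check; a minimal sketch with invented attenuation values:

```python
# Fatty liver on CT when the liver/spleen attenuation ratio is below 1.0.
def liver_spleen_ratio(liver_hu: float, spleen_hu: float) -> float:
    return liver_hu / spleen_hu

for liver_hu, spleen_hu in [(62.0, 50.0), (38.0, 52.0)]:   # illustrative HU
    ratio = liver_spleen_ratio(liver_hu, spleen_hu)
    print(f"L/S = {ratio:.2f} -> {'fatty liver' if ratio < 1.0 else 'normal'}")
```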

  20. Student Borrowing in America: Metrics, Demographics, Default Aversion Strategies

    Science.gov (United States)

    Kesterman, Frank

    2006-01-01

    The use of Cohort Default Rate (CDR) as the primary measure of student loan defaults among undergraduates was investigated. The study used data extracted from the National Student Loan Data System (NSLDS), quantitative analysis of Likert-scale survey responses from 153 student financial aid professionals on proposed changes to present metrics and…

  1. Probabilistic metric spaces

    CERN Document Server

    Schweizer, B

    2005-01-01

    Topics include special classes of probabilistic metric spaces, topologies, and several related structures, such as probabilistic normed and inner-product spaces. 1983 edition, updated with 3 new appendixes. Includes 17 illustrations.

  2. Metric solution of a spinning mass

    International Nuclear Information System (INIS)

    Sato, H.

    1982-01-01

    Studies on a particular class of asymptotically flat and stationary metric solutions, called the Kerr-Tomimatsu-Sato class, are reviewed with respect to their derivation and properties. For further study, an almost complete list of papers on the Tomimatsu-Sato metrics is given. (Auth.)

  3. Software architecture analysis tool : software architecture metrics collection

    NARCIS (Netherlands)

    Muskens, J.; Chaudron, M.R.V.; Westgeest, R.

    2002-01-01

    The Software Engineering discipline lacks the ability to evaluate software architectures. Here we describe a tool for software architecture analysis that is based on metrics. Metrics can be used to detect possible problems and bottlenecks in software architectures. Even though metrics do not give a

  4. Generalized tolerance sensitivity and DEA metric sensitivity

    OpenAIRE

    Neralić, Luka; E. Wendell, Richard

    2015-01-01

    This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.

  6. Social Media Metrics Importance and Usage Frequency in Latvia

    Directory of Open Access Journals (Sweden)

    Ronalds Skulme

    2017-12-01

    Full Text Available Purpose of the article: The purpose of this paper was to explore which social media marketing metrics are most often used and most important for marketing experts in Latvia, and which can be used to evaluate marketing campaign effectiveness. Methodology/methods: In order to achieve the aim of this paper, several theoretical and practical research methods were used, such as theoretical literature analysis, surveying and grouping. First, theoretical research about social media metrics was conducted. The authors collected information about social media metric grouping methods and the most frequently mentioned social media metrics in the literature. The collected information was used as the foundation for the expert surveys. The expert surveys were used to collect information from Latvian marketing professionals to determine which social media metrics are used most often and which are most important in Latvia. Scientific aim: The scientific aim of this paper was to identify whether the importance of social media metrics varies depending on the consumer purchase decision stage. Findings: Information about the most important and most often used social media marketing metrics in Latvia was collected. A new social media grouping framework is proposed. Conclusions: The main conclusion is that the importance and usage frequency of social media metrics change depending on the consumer purchase decision stage the metric is used to evaluate.

  7. A comparison theorem of the Kobayashi metric and the Bergman metric on a class of Reinhardt domains

    International Nuclear Information System (INIS)

    Weiping Yin.

    1990-03-01

    A comparison theorem for the Kobayashi and Bergman metrics is given on a class of Reinhardt domains in C^n. In the meantime, we obtain a class of complete invariant Kaehler metrics for special cases of these domains. (author). 5 refs

  8. Using Activity Metrics for DEVS Simulation Profiling

    Directory of Open Access Journals (Sweden)

    Muzy A.

    2014-01-01

    Full Text Available Activity metrics can be used to profile DEVS models before and during the simulation, and it is critical to obtain good activity metrics at both stages. Having a means to compute the a priori activity of components (analytic activity) may be worthwhile when simulating a model (or parts of it) for the first time. Afterwards, during the simulation, the analytic activity can be corrected using the dynamic one. In this paper, we introduce the McCabe cyclomatic complexity metric (MCA) to compute analytic activity. Both static and simulation activity metrics have been implemented through a plug-in of the DEVSimPy (DEVS Simulator in Python language) environment and applied to DEVS models.
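
    A crude stand-in for the analytic-activity computation, counting decision points in a Python model's source with the standard ast module to obtain a McCabe-style complexity. This is a sketch of the idea only and does not use DEVSimPy's actual plug-in API.

```python
# McCabe-style cyclomatic complexity: 1 + number of decision points.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

src = """
def ext_transition(state, event):   # hypothetical DEVS transition function
    if event > 0:
        for _ in range(event):
            state += 1
    return state
"""
print(cyclomatic_complexity(src))   # 1 + one 'if' + one 'for' = 3
```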

  9. Biomechanical CT Metrics Are Associated With Patient Outcomes in COPD

    Science.gov (United States)

    Bodduluri, Sandeep; Bhatt, Surya P; Hoffman, Eric A.; Newell, John D.; Martinez, Carlos H.; Dransfield, Mark T.; Han, Meilan K.; Reinhardt, Joseph M.

    2017-01-01

    Background: Traditional metrics of lung disease, such as those derived from spirometry and static single-volume CT images, are used to explain respiratory morbidity in patients with chronic obstructive pulmonary disease (COPD), but are insufficient. We hypothesized that the mean Jacobian determinant, a measure of local lung expansion and contraction with respiration, would contribute independently to clinically relevant functional outcomes. Methods: We applied image registration techniques to paired inspiratory-expiratory CT scans and derived the Jacobian determinant of the deformation field between the two lung volumes to map local volume change with respiration. We analyzed 490 participants with COPD with multivariable regression models to assess strengths of association between traditional CT metrics of disease and the Jacobian determinant with respiratory morbidity, including dyspnea (mMRC), St George's Respiratory Questionnaire (SGRQ) score, six-minute walk distance (6MWD), and the BODE index, as well as all-cause mortality. Results: The Jacobian determinant was significantly associated with SGRQ (adjusted regression coefficient β = −11.75, 95% CI −21.6 to −1.7; p = 0.020) and with 6MWD (β = 321.15, 95% CI 134.1 to 508.1; p < 0.001), independent of age, sex, race, body mass index, FEV1, smoking pack-years, CT emphysema, CT gas trapping, airway wall thickness, and CT scanner protocol. The mean Jacobian determinant was also independently associated with the BODE index (β = −0.41, 95% CI −0.80 to −0.02; p = 0.039) and with mortality on follow-up (adjusted hazard ratio = 4.26, 95% CI 0.93 to 19.23; p = 0.064). Conclusion: Biomechanical metrics representing local lung expansion and contraction improve prediction of respiratory morbidity and mortality and offer additional prognostic information beyond traditional measures of lung function and static single-volume CT metrics. PMID:28044005
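
    The metric itself is straightforward to compute from a displacement field u(x): the deformation is phi(x) = x + u(x) and its Jacobian determinant is J = det(I + grad u), with J > 1 marking local expansion and J < 1 contraction. The sketch below uses a synthetic linear field; in the study the field comes from inspiratory-expiratory CT registration.

```python
# Jacobian determinant of a deformation field on a voxel grid.
import numpy as np

nz, ny, nx = 8, 8, 8
z, y, x = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx), indexing="ij")
u = np.stack([0.03 * z, 0.02 * y, 0.05 * x])   # synthetic displacement field

# grad[i][j] = d u_i / d axis_j (unit voxel spacing assumed)
grad = [np.gradient(u[i]) for i in range(3)]
J = np.empty((nz, ny, nx))
for k in np.ndindex(nz, ny, nx):
    F = np.eye(3) + np.array([[grad[a][b][k] for b in range(3)] for a in range(3)])
    J[k] = np.linalg.det(F)

print(f"mean Jacobian determinant: {J.mean():.3f}")   # ~1.103 = net expansion
```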

  10. Hepatic mitochondrial function analysis using needle liver biopsy samples.

    Directory of Open Access Journals (Sweden)

    Michael J J Chu

    Full Text Available BACKGROUND AND AIM: Current assessment of pre-operative liver function relies upon biochemical blood tests and histology, but these only indirectly measure liver function. Mitochondrial function (MF) analysis allows direct measurement of cellular metabolic function and may provide an additional index of hepatic health. Conventional MF analysis requires substantial tissue samples (>100 mg) obtained at open surgery. Here we report a method to assess MF using <3 mg of tissue obtained by a Tru-cut® biopsy needle, making it suitable for percutaneous application. METHODS: An 18G Bard® Max-core® biopsy instrument was used to collect samples. The optimal Tru-cut® sample weight, stability in ice-cold University of Wisconsin solution, reproducibility and protocol utility were initially evaluated in Wistar rat livers, then confirmed in human samples. MF was measured in saponin-permeabilized samples using high-resolution respirometry. RESULTS: The average mass of a single rat and human liver Tru-cut® biopsy was 5.60±0.30 and 5.16±0.15 mg, respectively (mean; standard error of the mean). Two milligrams of sample was found to be the lowest feasible mass for the MF assay. Tissue MF declined after 1 hour of cold storage. Six replicate measurements within rats and humans (n = 6 each) showed low coefficients of variation (<10%) in measurements of State-III respiration, electron transport chain (ETC) capacity and respiratory control ratio (RCR). Ischemic rat and human liver samples consistently showed lower State-III respiration, ETC capacity and RCR compared to normal perfused liver samples. CONCLUSION: Consistent measurement of liver MF and detection of derangement in a disease state were successfully demonstrated using less than half the tissue from a single Tru-cut® biopsy. Using this technique, outpatient assessment of liver MF is now feasible, providing a new assay for the evaluation of hepatic function.

  11. Conformal and related changes of metric on the product of two almost contact metric manifolds.

    OpenAIRE

    Blair, D. E.

    1990-01-01

    This paper studies conformal and related changes of the product metric on the product of two almost contact metric manifolds. It is shown that if one factor is Sasakian, the other is not, but that locally the second factor is of the type studied by Kenmotsu. The results are more general and given in terms of trans-Sasakian, α-Sasakian and β-Kenmotsu structures.

  12. Extremal limits of the C metric: Nariai, Bertotti-Robinson, and anti-Nariai C metrics

    International Nuclear Information System (INIS)

    Dias, Oscar J.C.; Lemos, Jose P.S.

    2003-01-01

    In two previous papers we have analyzed the C metric in a background with a cosmological constant Λ, namely, the de Sitter (dS) C metric (Λ>0) and the anti-de Sitter (AdS) C metric (Λ<0), … for Λ>0, Λ=0, and Λ<0 … In the dS_2 × S̃^2 case, to each point in the deformed two-sphere S̃^2 corresponds a dS_2 spacetime, except for one point which corresponds to a dS_2 spacetime with an infinite straight strut or string. There are other important new features that appear. One expects that the solutions found in this paper are unstable and decay into a slightly nonextreme black hole pair accelerated by a strut or by strings. Moreover, the Euclidean version of these solutions mediates the quantum process of black hole pair creation that accompanies the decay of the dS and AdS spaces.

  13. Graev metrics on free products and HNN extensions

    DEFF Research Database (Denmark)

    Slutsky, Konstantin

    2014-01-01

    We give a construction of two-sided invariant metrics on free products (possibly with amalgamation) of groups with two-sided invariant metrics and, under certain conditions, on HNN extensions of such groups. Our approach is similar to the Graev's construction of metrics on free groups over pointed...

  14. g-Weak Contraction in Ordered Cone Rectangular Metric Spaces

    Directory of Open Access Journals (Sweden)

    S. K. Malhotra

    2013-01-01

    Full Text Available We prove some common fixed-point theorems for ordered g-weak contractions in cone rectangular metric spaces without assuming the normality of the cone. Our results generalize some recent results from cone metric and cone rectangular metric spaces to ordered cone rectangular metric spaces. Examples are provided to illustrate the results.

  15. Liver transplant for cholestatic liver diseases.

    Science.gov (United States)

    Carrion, Andres F; Bhamidimarri, Kalyan Ram

    2013-05-01

    Cholestatic liver diseases include a group of diverse disorders with different epidemiology, pathophysiology, clinical course, and prognosis. Despite significant advances in the clinical care of patients with cholestatic liver diseases, liver transplant (LT) remains the only definitive therapy for end-stage liver disease, regardless of the underlying cause. As per the United Network for Organ Sharing database, the rate of cadaveric LT for cholestatic liver disease was 18% in 1991, 10% in 2000, and 7.8% in 2008. This review summarizes the available evidence on various common and rare cholestatic liver diseases, disease-specific issues, and pertinent aspects of LT. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Liver transplant

    Science.gov (United States)

    Hepatic transplant; Transplant - liver; Orthotopic liver transplant; Liver failure - liver transplant; Cirrhosis - liver transplant ... The donated liver may be from: A donor who has recently died and has not had liver injury. This type of ...

  17. The dynamics of metric-affine gravity

    International Nuclear Information System (INIS)

    Vitagliano, Vincenzo; Sotiriou, Thomas P.; Liberati, Stefano

    2011-01-01

    Highlights: → The role and the dynamics of the connection in metric-affine theories are explored. → The most general second order action does not lead to a dynamical connection. → Including higher order invariants excites new degrees of freedom in the connection. → f(R) actions are also discussed and shown to be a non-representative class. - Abstract: Metric-affine theories of gravity provide an interesting alternative to general relativity: in such an approach, the metric and the affine (not necessarily symmetric) connection are independent quantities. Furthermore, the action should include covariant derivatives of the matter fields, with the covariant derivative naturally defined using the independent connection. As a result, in metric-affine theories a direct coupling involving matter and connection is also present. The role and the dynamics of the connection in such theories are explored. We employ power counting in order to construct the action and search for the minimal requirements it should satisfy for the connection to be dynamical. We find that for the most general action containing lower order invariants of the curvature and the torsion the independent connection does not carry any dynamics. It actually reduces to the role of an auxiliary field and can be completely eliminated algebraically in favour of the metric and the matter field, introducing extra interactions with respect to general relativity. However, we also show that including higher order terms in the action radically changes this picture and excites new degrees of freedom in the connection, making it (or parts of it) dynamical. Constructing actions that constitute exceptions to this rule requires significant fine-tuning and/or extra a priori constraints on the connection. We also consider f(R) actions as a particular example in order to show that they constitute a distinct class of metric-affine theories with special properties, and as such they cannot be used as representative toy

  18. The definitive guide to IT service metrics

    CERN Document Server

    McWhirter, Kurt

    2012-01-01

    Used just as they are, the metrics in this book will bring many benefits to both the IT department and the business as a whole. Details of the attributes of each metric are given, enabling you to make the right choices for your business. You may prefer, and are encouraged, to design and create your own metrics to bring even more value to your business - this book will show you how to do that, too.

  19. A remodelling metric for angular fibre distributions and its application to diseased carotid bifurcations.

    LENUS (Irish Health Repository)

    Creane, Arthur

    2012-07-01

    Many soft biological tissues contain collagen fibres, which act as major load bearing constituents. The orientation and the dispersion of these fibres influence the macroscopic mechanical properties of the tissue and are therefore of importance in several areas of research including constitutive model development, tissue engineering and mechanobiology. Qualitative comparisons between these fibre architectures can be made using vector plots of mean orientations and contour plots of fibre dispersion, but quantitative comparison cannot be achieved using these methods. We propose a 'remodelling metric' between two angular fibre distributions, which represents the mean rotational effort required to transform one into the other. It is an adaptation of the earth mover's distance, a similarity measure between two histograms/signatures used in image analysis, which represents the minimal cost of transforming one distribution into the other by moving distribution mass around. In this paper, its utility is demonstrated by considering the change in fibre architecture during a period of plaque growth in finite element models of the carotid bifurcation. The fibre architecture is predicted using a strain-based remodelling algorithm. We investigate the remodelling metric's potential as a clinical indicator of plaque vulnerability by comparing results between symptomatic and asymptomatic carotid bifurcations. Fibre remodelling was found to occur at regions of plaque burden. As plaque thickness increased, so did the remodelling metric. A measure of the total predicted fibre remodelling during plaque growth, TRM, was found to be higher in the symptomatic group than in the asymptomatic group. Furthermore, a measure of the total fibre remodelling per plaque size, TRM/TPB, was found to be significantly higher in the symptomatic vessels. The remodelling metric may prove to be a useful tool in other soft tissues and engineered scaffolds where fibre adaptation is also present.
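
    For equal-width angular histograms on a common grid, the earth mover's distance reduces to an L1 difference of cumulative sums scaled by the bin width. The sketch below is a linear (non-circular) version with invented histograms; a full treatment would respect the periodicity of fibre angle.

```python
# 1-D earth mover's distance between two angular fibre histograms.
import numpy as np

# 18 bins of 10 degrees covering -90..90; counts are invented.
before = np.array([1, 2, 4, 8, 12, 16, 12, 8, 4, 2, 1, 0, 0, 0, 0, 0, 0, 0], float)
after = np.roll(before, 3)           # same distribution rotated by 30 degrees

before /= before.sum()               # normalise to probability mass
after /= after.sum()

bin_width_deg = 10.0
emd = np.abs(np.cumsum(before) - np.cumsum(after)).sum() * bin_width_deg
print(f"mean rotational effort ~ {emd:.1f} degrees")   # 30.0 for this shift
```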

  20. Role of marketing metrics in strategic brand management

    Directory of Open Access Journals (Sweden)

    Mamula Tatjana

    2012-01-01

    Full Text Available This paper shows the role and importance of a brand as a strategic instrument of a company, one that ensures the sustainability of the company's market performance over the longer term. To achieve brand competitiveness, it is necessary to manage brand equity, which is presented in this paper as an imperative of everyday business operations. Brand evaluation in strategic management is conducted by measuring brand performance in the market and finding measures and ways to manage the brand successfully in order to increase its equity, using a set of marketing metric indicators.