WorldWideScience

Sample records for metrically measuring liver

  1. Mass Customization Measurements Metrics

    DEFF Research Database (Denmark)

    Nielsen, Kjeld; Brunø, Thomas Ditlev; Jørgensen, Kaj Asbjørn

    2014-01-01

    A recent survey indicated that 17% of companies have ceased mass customizing less than one year after initiating the effort. This paper presents measurements for a company's mass customization performance, utilizing metrics within the three fundamental capabilities: robust process design, choice navigation, and solution space development. By assessing performance with these metrics, a mass customizer can identify within which areas improvement would increase competitiveness the most and enable a more efficient transition to mass customization.

  2. Measuring Information Security: Guidelines to Build Metrics

    Science.gov (United States)

    von Faber, Eberhard

    Measuring information security is a genuine interest of security managers. With metrics they can develop their security organization's visibility and standing within the enterprise or public authority as a whole. Organizations using information technology need to use security metrics. Despite the clear demands and advantages, security metrics are often poorly developed, or ineffective parameters are collected and analysed. This paper describes best practices for the development of security metrics. Attention is first drawn to motivation, showing both requirements and benefits. The main body of the paper lists what needs to be observed (characteristics of metrics), what can be measured (how measurements can be conducted) and the steps for developing and implementing metrics (procedures and planning). Analysis and communication are also key when using security metrics. Examples are given to develop a better understanding. The author aims to resume, continue and develop the discussion of a topic which is, or increasingly will be, a critical success factor for security managers in larger organizations.

  3. Measuring Design Metrics In Websites

    OpenAIRE

    Navarro, Emilio; Fitzpatrick, Ronan

    2011-01-01

    The current state of the World Wide Web demands website designs that engage consumers in order to allow them to consume services or generate leads to maximize revenue. This paper describes a software quality factor to measure the success of websites by analyzing web design structure rather than relying only on website traffic data. It also documents the requirements and architecture for building a software tool that measures criteria for determining Engagibility. A new set of social crit...

  4. Metrics for measuring distances in configuration spaces

    International Nuclear Information System (INIS)

    Sadeghi, Ali; Ghasemi, S. Alireza; Schaefer, Bastian; Mohr, Stephan; Goedecker, Stefan; Lill, Markus A.

    2013-01-01

    In order to characterize molecular structures we introduce configurational fingerprint vectors which are counterparts of quantities used experimentally to identify structures. The Euclidean distance between the configurational fingerprint vectors satisfies the properties of a metric and can therefore safely be used to measure dissimilarities between configurations in the high-dimensional configuration space. In particular, we show that these metrics are a perfect and computationally cheap replacement for the root-mean-square distance (RMSD) when one has to decide whether two noise-contaminated configurations are identical or not. We also introduce a Monte Carlo approach to find the global minimum of the RMSD between configurations, i.e. its minimum over all translations, rotations, and permutations of atomic indices.
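
    As a sketch of the idea (not the paper's exact fingerprint construction), one permutation- and rotation-invariant fingerprint is the sorted eigenvalue spectrum of the interatomic distance matrix; the Euclidean distance between such vectors then behaves as a metric for deciding whether two configurations are the same structure:

```python
import numpy as np

def fingerprint(coords):
    """Illustrative configurational fingerprint: the sorted eigenvalues of the
    interatomic distance matrix. This vector is invariant under rigid
    translations/rotations and under permutations of atomic indices."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.sort(np.linalg.eigvalsh(d))

def fingerprint_distance(a, b):
    """Euclidean distance between fingerprint vectors -- a true metric."""
    return np.linalg.norm(fingerprint(a) - fingerprint(b))

# Same structure, rotated and with atoms relabelled:
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
y = (x @ R.T)[rng.permutation(8)]               # rotate, then permute atoms
print(fingerprint_distance(x, y))                # ~0: recognised as identical
print(fingerprint_distance(x, rng.normal(size=(8, 3))))  # clearly nonzero
```

    Unlike the RMSD, no minimization over rotations or atom orderings is needed before comparing.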

  5. 22 CFR 226.15 - Metric system of measurement.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Metric system of measurement. 226.15 Section 226.15 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT ADMINISTRATION OF ASSISTANCE AWARDS TO U.S. NON-GOVERNMENTAL ORGANIZATIONS Pre-award Requirements § 226.15 Metric system of measurement. (a...

  6. 20 CFR 435.15 - Metric system of measurement.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Metric system of measurement. 435.15 Section 435.15 Employees' Benefits SOCIAL SECURITY ADMINISTRATION UNIFORM ADMINISTRATIVE REQUIREMENTS FOR... metric system is the preferred measurement system for U.S. trade and commerce. The Act requires each...

  7. Path integral measure for first-order and metric gravities

    International Nuclear Information System (INIS)

    Aros, Rodrigo; Contreras, Mauricio; Zanelli, Jorge

    2003-01-01

    The equivalence between the path integrals for first-order gravity and the standard torsion-free, metric gravity in 3 + 1 dimensions is analysed. Starting with the path integral for first-order gravity, the correct measure for the path integral of the metric theory is obtained

  8. Measurable Control System Security through Ideal Driven Technical Metrics

    Energy Technology Data Exchange (ETDEWEB)

    Miles McQueen; Wayne Boyer; Sean McBride; Marie Farrar; Zachary Tudor

    2008-01-01

    The Department of Homeland Security National Cyber Security Division supported development of a small set of security ideals as a framework for establishing measurable control systems security. Based on these ideals, a draft set of proposed technical metrics was developed to allow control systems owner-operators to track improvements or degradations in their individual control systems security posture. The technical metrics development effort included review and evaluation of over thirty metrics-related documents. On the basis of complexity, ambiguity, or misleading and distorting effects, the metrics identified during the reviews were determined to be weaker than necessary to aid defense against the myriad threats posed by cyber-terrorism to human safety as well as to economic prosperity. Using the results of our metrics review and the set of security ideals as a starting point, we identified thirteen potential technical metrics, with at least one metric supporting each ideal. Two case-study applications of the ideals and thirteen metrics to control systems were then performed to establish potential difficulties in applying them. The case studies resulted in no changes to the ideals and only a few deletions and refinements to the thirteen potential metrics, leading to a final proposed set of ten core technical metrics. To further validate the security ideals, the modifications made to the original thirteen metrics, and the final set of ten core metrics, seven separate control systems security assessments performed over the past three years were reviewed for findings and recommended mitigations. These findings and mitigations were then mapped to the security ideals and metrics to assess gaps in their coverage. The mappings indicated no gaps in the security ideals and showed that the ten core technical metrics cover 87% of standard security issues.

  9. Measures of agreement between computation and experiment: validation metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Barone, Matthew Franklin; Oberkampf, William Louis

    2005-08-01

    With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables and sharpen assessment of computational accuracy. This type of measure has been recently referred to as a validation metric. We discuss various features that we believe should be incorporated in a validation metric and also features that should be excluded. We develop a new validation metric that is based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpretable for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment.
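
    The interpolation-based variant can be illustrated with a minimal sketch: form the model-minus-experiment differences at the measured input values and place a confidence interval on their mean (a normal-approximation interval here for brevity; the paper develops t-based intervals and a regression variant):

```python
import statistics
from statistics import NormalDist

def validation_interval(model, experiment, confidence=0.95):
    """Confidence interval on the mean model-minus-experiment difference.
    Uses a z interval for simplicity; the metric in the paper is built from
    t-based intervals on interpolated or regressed experimental data."""
    diffs = [m - e for m, e in zip(model, experiment)]
    n = len(diffs)
    mean = statistics.fmean(diffs)
    sem = statistics.stdev(diffs) / n ** 0.5
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return mean - z * sem, mean + z * sem

model      = [10.2, 11.8, 13.1, 14.9, 16.2]   # hypothetical predictions
experiment = [10.0, 11.5, 13.4, 14.6, 16.5]   # hypothetical measurements
lo, hi = validation_interval(model, experiment)
print(f"mean disagreement lies in [{lo:.3f}, {hi:.3f}] with 95% confidence")
```

    An interval excluding zero would indicate a statistically significant model bias at the chosen confidence level.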

  10. Measurement of vertical stability metrics in KSTAR

    Science.gov (United States)

    Hahn, Sang-Hee; Humphreys, D. A.; Mueller, D.; Bak, J. G.; Eidietis, N. W.; Kim, H.-S.; Ko, J. S.; Walker, M. L.; Kstar Team

    2017-10-01

    The paper summarizes results of multi-year ITPA experiments on measuring the vertical stabilization capability of KSTAR discharges, including the most recent measurements at the highest achievable elongation (κ ≈ 2.0-2.1). The measurements of the open-loop growth rate of the VDE (γz) and the maximum controllable vertical displacement (ΔZmax) are done by the release-and-catch method. The dynamics of the vertical movement of the plasma are verified by both relevant magnetic reconstructions and non-magnetic diagnostics. The measurements of γz and ΔZmax were done for different plasma currents, βp, internal inductances, elongations and different configurations of the vessel conductors that surround the plasma as the first wall. Effects of control design choices and diagnostic noise are discussed, and a comparison with the axisymmetric plasma response model is given to partially account for the measured control capability. This work was supported by the Ministry of Science, ICT, and Future Planning under the KSTAR project.
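
    In a release-and-catch experiment the open-loop growth rate γz is, in essence, the slope of ln z(t) during the release window. A toy sketch with synthetic data (the amplitude, growth rate, and noise level are invented for illustration, not KSTAR values):

```python
import numpy as np

# Synthetic release-and-catch trace: after control is released at t=0, the
# vertical position drifts as z(t) = z0 * exp(gamma_z * t) (open-loop VDE).
gamma_true = 25.0                      # s^-1, hypothetical growth rate
t = np.linspace(0.0, 0.04, 50)         # 40 ms release window
z = 0.002 * np.exp(gamma_true * t)     # metres
z += np.random.default_rng(1).normal(scale=2e-5, size=t.size)  # sensor noise

# gamma_z is the slope of ln z(t); a linear least-squares fit recovers it.
gamma_fit = np.polyfit(t, np.log(z), 1)[0]
print(f"estimated gamma_z = {gamma_fit:.1f} 1/s")
```

    In practice the fit window must end before the "catch", i.e. before the feedback controller re-engages and the growth is no longer open-loop.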

  11. On the differential structure of metric measure spaces and applications

    CERN Document Server

    Gigli, Nicola

    2015-01-01

    The main goals of this paper are: (i) To develop an abstract differential calculus on metric measure spaces by investigating the duality relations between differentials and gradients of Sobolev functions. This will be achieved without calling into play any sort of analysis in charts, our assumptions being: the metric space is complete and separable and the measure is Radon and non-negative. (ii) To employ these notions of calculus to provide, via integration by parts, a general definition of distributional Laplacian, thus giving a meaning to an expression like Δg = μ, where g is a functi

  12. Contrasting Various Metrics for Measuring Tropical Cyclone Activity

    Directory of Open Access Journals (Sweden)

    Jia-Yuh Yu; Ping-Gin Chiu

    2012-01-01

    Popular metrics used for measuring tropical cyclone (TC) activity, including NTC (number of tropical cyclones), TCD (tropical cyclone days), ACE (accumulated cyclone energy) and PDI (power dissipation index), along with two newly proposed indices, RACE (revised accumulated cyclone energy) and RPDI (revised power dissipation index), are compared using the JTWC (Joint Typhoon Warning Center) best-track data of TCs over the western North Pacific basin. Our study shows that, while the above metrics exhibit various degrees of discrepancy, in practical terms they are all able to produce meaningful temporal and spatial changes in response to climate variability. Compared with the conventional ACE and PDI, RACE and RPDI seem to provide a more precise estimate of total TC activity, especially in projecting the upswing trend of TC activity over the past few decades, simply because of a better approach to estimating TC wind energy. However, we would argue that there is still no need to find a "universal" or "best" metric for TC activity, because different metrics are designed to stratify different aspects of TC activity, and whether the selected metric is appropriate should be determined solely by the purpose of the study. Except for magnitude differences, the analysis results seem insensitive to the choice of best-track dataset.
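
    The conventional indices are straightforward to compute from best-track maximum sustained winds. A sketch using the standard definitions (ACE as 10⁻⁴ times the sum of squared 6-hourly winds at tropical-storm strength or above; PDI with the cube); the revised RACE/RPDI formulations from the paper are not reproduced here:

```python
def ace(winds_kt):
    """Accumulated cyclone energy: 1e-4 * sum of squared 6-hourly maximum
    sustained winds (knots), counted while the system is at least tropical
    storm strength (>= 35 kt in 5-kt best-track increments)."""
    return 1e-4 * sum(v ** 2 for v in winds_kt if v >= 35)

def pdi(winds_kt):
    """Power dissipation index: same idea, but with the cube of the wind."""
    return sum(v ** 3 for v in winds_kt if v >= 35)

# Hypothetical 6-hourly best-track winds (kt) over one storm's lifetime:
track = [30, 35, 45, 60, 80, 95, 70, 50, 35, 25]
print(f"ACE = {ace(track):.2f}, PDI = {pdi(track):.0f}")
```

    Summing either index over all storms in a season gives the basin-wide seasonal activity that the paper compares across metrics.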

  13. 41 CFR 105-72.205 - Metric system of measurement.

    Science.gov (United States)

    2010-07-01

    ... Management Regulations System (Continued) GENERAL SERVICES ADMINISTRATION Regional Offices-General Services Administration 72-UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act...

  14. Metrics for measuring net-centric data strategy implementation

    Science.gov (United States)

    Kroculick, Joseph B.

    2010-04-01

    An enterprise data strategy outlines an organization's vision and objectives for improved collection and use of data. We propose generic metrics and quantifiable measures for each of the DoD Net-Centric Data Strategy (NCDS) data goals. Data strategy metrics can be adapted to the business processes of an enterprise and the needs of stakeholders in leveraging the organization's data assets to provide for more effective decision making. The generic metrics are applied to a specific application where logistics supply and transportation data are integrated across multiple functional groups. A dashboard presents a multidimensional view of the current progress toward a state where logistics data are shared in a timely and seamless manner among users, applications, and systems.

  15. Measuring the user experience collecting, analyzing, and presenting usability metrics

    CERN Document Server

    Tullis, Thomas

    2013-01-01

    Measuring the User Experience was the first book that focused on how to quantify the user experience. Now in the second edition, the authors include new material on how recent technologies have made it easier and more effective to collect a broader range of data about the user experience. As more UX and web professionals need to justify their design decisions with solid, reliable data, Measuring the User Experience provides the quantitative analysis training that these professionals need. The second edition presents new metrics such as emotional engagement, personas, k

  16. Age dependence of rat liver function measurements

    DEFF Research Database (Denmark)

    Fischer-Nielsen, A; Poulsen, H E; Hansen, B A

    1989-01-01

    Changes in the galactose elimination capacity, the capacity of urea-N synthesis and antipyrine clearance were studied in male Wistar rats at the age of 8, 20 and 44 weeks. Further, liver tissue concentrations of microsomal cytochrome P-450, microsomal protein and glutathione were measured. All liver function measurements increased from the age of 8 to 44 weeks when expressed in absolute values. In relation to body weight, these function measurements were unchanged or reduced from week 8 to week 20. At week 44, galactose elimination capacity and capacity of urea-N synthesis related to body weight were increased by 10% and 36%, respectively, and antipyrine plasma clearance was reduced to 50%. Liver tissue concentrations of microsomal cytochrome P-450 and microsomal protein increased with age when expressed in absolute values, but were unchanged per g liver, i.e., closely related to liver...

  17. 43 CFR 12.915 - Metric system of measurement.

    Science.gov (United States)

    2010-10-01

    ... procurements, grants, and other business-related activities. Metric implementation may take longer where the... recipient, such as when foreign competitors are producing competing products in non-metric units. (End of...

  18. An Introduction to the SI Metric System. Inservice Guide for Teaching Measurement, Kindergarten Through Grade Eight.

    Science.gov (United States)

    California State Dept. of Education, Sacramento.

    This handbook was designed to serve as a reference for teacher workshops that: (1) introduce the metric system and help teachers gain confidence with metric measurement, and (2) develop classroom measurement activities. One chapter presents the history and basic features of SI metrics. A second chapter presents a model for the measurement program.…

  19. Principles in selecting human capital measurements and metrics

    Directory of Open Access Journals (Sweden)

    Pharny D. Chrysler-Fox

    2014-09-01

    Research purpose: The study explored principles in selecting human capital measurements, drawing on the views and recommendations of human resource management professionals, all experts in human capital measurement. Motivation for the study: The motivation was to advance the understanding of selecting appropriate and strategically valid measurements, in order for human resource practitioners to contribute to creating value and driving strategic change. Research design, approach and method: A qualitative approach, with purposively selected cases from a selected panel of human capital measurement experts, generated a dataset through unstructured interviews, which were analysed thematically. Main findings: Nineteen themes were found. They represent a process that considers the centrality of the business strategy and a systemic integration across multiple value chains in the organisation through business partnering, in order to select measurements and generate management level-appropriate information. Practical/managerial implications: Measurement practitioners, in partnership with management from other functions, should integrate the business strategy across multiple value chains in order to select measurements. Analytics becomes critical in discovering relationships and formulating hypotheses to understand value creation. Higher education institutions should produce graduates able to deal with systems thinking and to operate within complexity. Contribution: This study identified principles to select measurements and metrics. Noticeable is the move away from the interrelated scorecard perspectives to a systemic view of the organisation in order to understand value creation. In addition, the findings may help to position the human resource management function as a strategic asset.

  20. Risk Metrics and Measures for an Extended PSA

    International Nuclear Information System (INIS)

    Wielenberg, A.; Loeffler, H.; Hasnaoui, C.; Burgazzi, L.; Cazzoli, E.; Jan, P.; La Rovere, S.; Siklossy, T.; Vitazkova, J.; Raimond, E.

    2016-01-01

    This report reviews the main risk measures used for Level 1 and Level 2 PSA. It describes their advantages, limitations and disadvantages, and develops some more precise risk measures relevant for extended PSAs and helpful for decision-making. The report does not recommend or suggest any quantitative value for the risk measures, nor does it discuss in detail decision-making based on PSA results. The choice of an appropriate risk measure, or a set of risk measures, depends on the decision-making approach as well as on the issue to be decided. The general approach to decision-making aims at a multi-attribute approach, which can include the use of several risk measures as appropriate. Section 5 provides recommendations on the main risk metrics to be used for an extended PSA. For Level 1 PSA, Fuel Damage Frequency and Radionuclide Mobilization Frequency are recommended. For Level 2 PSA, characterization of the loss of containment function and a total risk measure based on the aggregated activity releases of all sequences weighted by their frequencies are proposed. (authors)
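
    The proposed Level 2 total risk measure reduces to a frequency-weighted sum over release sequences. A minimal sketch (the sequence grouping, numbers, and the activity unit are illustrative assumptions, not values from the report):

```python
def total_release_risk(sequences):
    """Frequency-weighted aggregated activity release: the sum over all
    release sequences of (frequency per reactor-year) * (activity released).
    Sketch of the aggregated Level 2 risk measure described above."""
    return sum(freq * activity for freq, activity in sequences)

# Hypothetical release categories: (frequency / reactor-year, release in TBq)
sequences = [(1e-5, 2.0e3), (5e-7, 1.5e5), (1e-8, 2.0e6)]
print(f"total risk = {total_release_risk(sequences):.3f} TBq per reactor-year")
```

    Note how a rare, large release and a frequent, small one can contribute comparably, which is exactly the aggregation behaviour such a single-number measure is meant to capture.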

  1. Quality measurement and improvement in liver transplantation.

    Science.gov (United States)

    Mathur, Amit K; Talwalkar, Jayant

    2018-06-01

    There is growing interest in the quality of health care delivery in liver transplantation. Multiple stakeholders, including patients, transplant providers and their hospitals, payers, and regulatory bodies have an interest in measuring and monitoring quality in the liver transplant process, and understanding differences in quality across centres. This article aims to provide an overview of quality measurement and regulatory issues in liver transplantation performed within the United States. We review how broader definitions of health care quality should be applied to liver transplant care models. We outline the status quo including the current regulatory agencies, public reporting mechanisms, and requirements around quality assurance and performance improvement (QAPI) activities. Additionally, we further discuss unintended consequences and opportunities for growth in quality measurement. Quality measurement and the integration of quality improvement strategies into liver transplant programmes hold significant promise, but multiple challenges to successful implementation must be addressed to optimise value. Copyright © 2018 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.

  2. A new metric for measuring condition in large predatory sharks.

    Science.gov (United States)

    Irschick, D J; Hammerschlag, N

    2014-09-01

    A simple metric (span condition analysis; SCA) is presented for quantifying the condition of sharks based on four measurements of body girth relative to body length. Data on 104 live sharks from four species that vary in body form, behaviour and habitat use (Carcharhinus leucas, Carcharhinus limbatus, Ginglymostoma cirratum and Galeocerdo cuvier) are given. Condition shows similar levels of variability among individuals within each species. Carcharhinus leucas showed a positive relationship between condition and body size, whereas the other three species showed no relationship. There was little evidence for strong differences in condition between males and females, although more male sharks are needed for some species (e.g. G. cuvier) to verify this finding. SCA is potentially viable for other large marine or terrestrial animals that are captured live and then released. © 2014 The Fisheries Society of the British Isles.

  3. Measuring US Army medical evacuation: Metrics for performance improvement.

    Science.gov (United States)

    Galvagno, Samuel M; Mabry, Robert L; Maddry, Joseph; Kharod, Chetan U; Walrath, Benjamin D; Powell, Elizabeth; Shackelford, Stacy

    2018-01-01

    The US Army medical evacuation (MEDEVAC) community has maintained a reputation for high levels of success in transporting casualties from the point of injury to definitive care. This work served as a demonstration project to advance a model of quality assurance surveillance and medical direction for prehospital MEDEVAC providers within the Joint Trauma System. A retrospective interrupted time series analysis using prospectively collected data was performed as a process improvement project. Records were reviewed during two distinct periods: 2009 and 2014 to 2015. MEDEVAC records were matched to outcomes data available in the Department of Defense Trauma Registry. Abstracted, deidentified data were reviewed for specific outcomes, procedures, and processes of care. Descriptive statistics were applied as appropriate. A total of 1,008 patients were included in this study. Nine quality assurance metrics were assessed: airway management, management of hypoxemia, compliance with a blood transfusion protocol, interventions for hypotensive patients, quality of battlefield analgesia, temperature measurement and interventions, proportion of traumatic brain injury (TBI) patients with hypoxemia and/or hypotension, proportion of TBI patients with an appropriate assessment, and proportion of missing data. Overall survival in the subset of patients with outcomes data available in the Department of Defense Trauma Registry was 97.5%. The data analyzed for this study suggest overall high compliance with established tactical combat casualty care guidelines. In the present study, nearly 7% of patients had at least one documented oxygen saturation of less than 90%, and 13% of these patients had no documentation of any intervention for hypoxemia, indicating a need for training focused on airway management for hypoxemia. Advances in battlefield analgesia continued to evolve over the period when the data for this study were collected. Given the inherent high ...

  4. 10 CFR 600.306 - Metric system of measurement.

    Science.gov (United States)

    2010-01-01

    ... cause significant inefficiencies or loss of markets to United States firms. (b) Recipients are... Requirements for Grants and Cooperative Agreements With For-Profit Organizations General § 600.306 Metric... Competitiveness Act of 1988 (15 U.S.C. 205) and implemented by Executive Order 12770, states that: (1) The metric...

  5. INFORMATIVE ENERGY METRIC FOR SIMILARITY MEASURE IN REPRODUCING KERNEL HILBERT SPACES

    Directory of Open Access Journals (Sweden)

    Songhua Liu

    2012-02-01

    In this paper, an information energy metric (IEM) is obtained by similarity computing for high-dimensional samples in a reproducing kernel Hilbert space (RKHS). First, similar/dissimilar subsets and their corresponding informative energy functions are defined. Second, the IEM is proposed as a similarity measure for those subsets, which converts non-metric distances into metric ones. Finally, applications of this metric, such as classification problems, are introduced. Experimental results validate the effectiveness of the proposed method.
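
    The abstract does not spell out the construction, but the standard way a kernel turns similarities into a proper metric is the RKHS feature-space distance, d(x, y)² = k(x,x) − 2k(x,y) + k(y,y). A sketch with a Gaussian kernel (the kernel choice and bandwidth are illustrative, not the paper's IEM):

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel; its feature map lives in an RKHS."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def rkhs_distance(x, y, kernel=rbf):
    """Distance between the feature-space images of x and y:
    ||phi(x) - phi(y)||^2 = k(x,x) - 2 k(x,y) + k(y,y)."""
    return math.sqrt(kernel(x, x) - 2 * kernel(x, y) + kernel(y, y))

a, b, c = (0.0, 0.0), (1.0, 0.0), (2.0, 1.0)
# The induced distance satisfies the triangle inequality, unlike a raw
# similarity score:
print(rkhs_distance(a, c) <= rkhs_distance(a, b) + rkhs_distance(b, c))
```

    This is the generic mechanism by which non-metric sample similarities become metric distances once mapped into an RKHS.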

  6. An Observability Metric for Underwater Vehicle Localization Using Range Measurements

    Directory of Open Access Journals (Sweden)

    Filippo Arrichiello

    2013-11-01

    The paper addresses observability issues related to the general problem of single and multiple Autonomous Underwater Vehicle (AUV) localization using only range measurements. While an AUV is submerged, localization devices such as Global Navigation Satellite Systems are ineffective, due to the attenuation of electromagnetic waves. AUV localization based on dead reckoning techniques and the use of affordable motion sensor units is also not practical, due to divergence caused by sensor bias and drift. For these reasons, localization systems often build on trilateration algorithms that rely on measurements of the ranges between an AUV and a set of fixed transponders using acoustic devices. Still, such solutions are often expensive, require cumbersome calibration procedures and only allow for AUV localization in an area that is defined by the geometrical arrangement of the transponders. A viable alternative for AUV localization that has recently come to the fore exploits complementary information on the distance from the AUV to a single transponder, together with information provided by on-board resident motion sensors, such as, for example, depth, velocity and acceleration measurements. This concept can be extended to address the problem of relative localization between two AUVs equipped with acoustic sensors for inter-vehicle range measurements. Motivated by these developments, in this paper we show that both the problem of absolute localization of a single vehicle and that of relative localization of multiple vehicles can be treated using the same mathematical framework, and, tailoring concepts of observability derived for nonlinear systems, we analyze how localization performance depends on the types of motion imparted to the AUVs. To this effect, we propose a well-defined observability metric and validate its usefulness, both in simulation and by carrying out experimental tests with a real marine vehicle during which the ...
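
    A common concrete choice for such a metric, used here purely as an illustration and not necessarily the paper's exact formulation, is the smallest eigenvalue of an observability Gramian built from linearized range measurements: it is near zero for trajectories that leave the position unobservable and grows for informative manoeuvres:

```python
import numpy as np

def observability_metric(positions, beacon):
    """Smallest eigenvalue of the range-measurement observability Gramian
    G = sum_k u_k u_k^T, where u_k is the unit vector from the beacon to the
    vehicle at sample k (the Jacobian of the range w.r.t. the unknown
    initial position, assuming known dead-reckoned displacements)."""
    u = positions - beacon
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    return np.linalg.eigvalsh(u.T @ u).min()

beacon = np.array([0.0, 0.0])
t = np.linspace(1.0, 10.0, 40)[:, None]
radial = np.hstack([t, t])                        # straight line through beacon
arc = np.hstack([5 * np.cos(t), 5 * np.sin(t)])   # circling the beacon

print(observability_metric(radial, beacon))  # ~0: range-only is blind here
print(observability_metric(arc, beacon))     # large: motion restores observability
```

    The same computation explains the paper's central point: with a single transponder, what the vehicle can know about its position depends almost entirely on how it moves.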

  7. Robust Design Impact Metrics: Measuring the effect of implementing and using Robust Design

    DEFF Research Database (Denmark)

    Ebro, Martin; Olesen, Jesper; Howard, Thomas J.

    2014-01-01

    Measuring the performance of an organisation's product development process can be challenging due to the limited use of metrics in R&D. An organisation considering whether to use Robust Design as an integrated part of its development process may find it difficult to define whether it is relevant, and afterwards to measure the effect of having implemented it. This publication identifies and evaluates Robust Design-related metrics and finds that two metrics are especially useful: 1) the relative amount of R&D resources spent after Design Verification and 2) the number of 'change notes' after Design Verification. The metrics have been applied in a case company to test the assumptions made during the evaluation. It is concluded that the metrics are useful and relevant, but further work is necessary to make a proper overview and categorisation of different types of robustness-related metrics.

  8. Impact of liver fibrosis and fatty liver on T1rho measurements: A prospective study

    International Nuclear Information System (INIS)

    Xie, Shuang Shuang; Li, Qing; Cheng, Yue; Shen, Wen; Zhang, Yu; Zhuo, Zhi Zheng; Zhao, Guiming

    2017-01-01

    To investigate liver T1rho values for detecting fibrosis, and the potential impact of fatty liver on T1rho measurements. This study included 18 healthy subjects, 18 patients with fatty liver, and 18 patients with liver fibrosis, who underwent T1rho MRI and mDIXON acquisitions. Liver T1rho, proton density fat fraction (PDFF) and T2* values were measured and compared among the three groups. Receiver operating characteristic (ROC) curve analysis was performed to evaluate the T1rho values for detecting liver fibrosis. Liver T1rho values were correlated with PDFF, T2* values and clinical data. Liver T1rho and PDFF values were significantly different among the three groups (p < 0.05). T1rho MRI is useful for noninvasive detection of liver fibrosis, and may not be affected by the presence of fatty liver.

  9. Measurement of liver volume by emission computed tomography

    International Nuclear Information System (INIS)

    Kan, M.K.; Hopkins, G.B.

    1979-01-01

    In 22 volunteers without clinical or laboratory evidence of liver disease, liver volume was determined using single-photon emission computed tomography (ECT). This technique provided excellent object contrast between the liver and its surroundings and permitted calculation of liver volume without geometric assumptions about the liver's configuration. Reproducibility of results was satisfactory, with a root-mean-square error of less than 6% between duplicate measurements in 15 individuals. The volume measurements were validated by the use of phantoms

  10. Knowledge metrics of Brand Equity; critical measure of Brand Attachment

    OpenAIRE

    Arslan Rafi (Corresponding Author); Arslan Ali; Sidra Waris; Dr. Kashif-ur-Rehman

    2011-01-01

    Brand creation through an effective marketing strategy is necessary for the creation of unique associations in the customer's memory. Customers' attitudes, awareness and associations towards the brand are the primary focus when evaluating the performance of a brand, before designing marketing strategies and subsequently evaluating progress. In this research, the literature establishes a direct and significant effect of the knowledge metrics of brand equity, i.e. Brand Awareness and Brand Associatio...

  11. Acoustic radiation force impulse elastography of the liver. Can fat deposition in the liver affect the measurement of liver stiffness?

    International Nuclear Information System (INIS)

    Motosugi, Utaroh; Ichikawa, Tomoaki; Araki, Tsutomu; Niitsuma, Yoshibumi

    2011-01-01

    The aim of this study was to compare acoustic radiation force impulse (ARFI) results between livers with and without fat deposition. We studied 200 consecutive healthy individuals who underwent health checkups at our institution. The subjects were divided into three groups according to the echogenicity of the liver on ultrasonography (US) and the liver-spleen attenuation ratio index (LSR) on computed tomography: a normal liver group (n=121, no evidence of bright liver on US and LSR >1) and a fatty liver group (n=46, bright liver on US and LSR <1); subjects ... >5 days a week (n=18) were excluded from the analysis. The velocities measured by ARFI in the normal and fatty liver groups were compared using the two one-sided tests (TOST) procedure. The mean (SD) velocities measured in the normal and fatty liver groups were 1.03 (0.12) m/s and 1.02 (0.12) m/s, respectively. The ARFI results of the fatty liver group were equivalent to those of the normal liver group (P<0.0001). This study suggests that fat deposition in the liver does not affect the liver stiffness measurement determined by ARFI. (author)

  12. Assessment of Performance Measures for Security of the Maritime Transportation Network, Port Security Metrics : Proposed Measurement of Deterrence Capability

    Science.gov (United States)

    2007-01-03

    This report is the third in a series describing the development of performance measures pertaining to the security of the maritime transportation network (port security metrics). The development of measures to guide improvements in maritime security ...

  13. Diffuse Reflectance Spectroscopy for Surface Measurement of Liver Pathology.

    Science.gov (United States)

    Nilsson, Jan H; Reistad, Nina; Brange, Hannes; Öberg, Carl-Fredrik; Sturesson, Christian

    2017-01-01

    Liver parenchymal injuries such as steatosis, steatohepatitis, fibrosis, and sinusoidal obstruction syndrome can lead to increased morbidity and liver failure after liver resection. Diffuse reflectance spectroscopy (DRS) is an optical measuring method that is fast, convenient, and established. DRS has previously been used on the liver with an invasive technique consisting of a needle that is inserted into the parenchyma. We developed a DRS system with a hand-held probe that is applied to the liver surface. In this study, we investigated the impact of the liver capsule on DRS measurements and whether liver surface measurements are representative of the whole liver. We also wanted to confirm that we could discriminate between tumor and liver parenchyma by DRS. The instrumentation setup consisted of a light source, a fiber-optic contact probe, and two spectrometers connected to a computer. Patients scheduled for liver resection due to hepatic malignancy were included, and DRS measurements were performed on the excised liver part with and without the liver capsule and alongside a newly cut surface. To estimate the scattering parameters and tissue chromophore volume fractions, including blood, bile, and fat, the measured diffuse reflectance spectra were applied to an analytical model. In total, 960 DRS spectra from the excised liver tissue of 18 patients were analyzed. All factors analyzed regarding tumor versus liver tissue were significantly different. When measuring through the capsule, the blood volume fraction was found to be 8.4 ± 3.5%, the lipid volume fraction was 9.9 ± 4.7%, and the bile volume fraction was 8.2 ± 4.6%. No differences could be found between surface measurements and cross-sectional measurements. In measurements with/without the liver capsule, the differences in volume fraction were 1.63% (0.75-2.77), -0.54% (-2.97 to 0.32), and -0.15% (-1.06 to 1.24) for blood, lipid, and bile, respectively. This study shows that it is possible to manage DRS

  14. The Measurement of Negative Creativity: Metrics and Relationships

    Science.gov (United States)

    Kapoor, Hansika; Khan, Azizuddin

    2016-01-01

    Although the dark side of creativity and negative creativity are shaping into legitimate subconstructs, measures to assess the same remain to be validated. To meet this goal, two studies assessed the convergent, predictive, and criterion-related validities of two valence-inclusive creativity measures. One measure assessed the self-report…

  15. Measuring Patient-Reported Outcomes: Key Metrics in Reconstructive Surgery.

    Science.gov (United States)

    Voineskos, Sophocles H; Nelson, Jonas A; Klassen, Anne F; Pusic, Andrea L

    2018-01-29

    Satisfaction and improved quality of life are among the most important outcomes for patients undergoing plastic and reconstructive surgery for a variety of diseases and conditions. Patient-reported outcome measures (PROMs) are essential tools for evaluating the benefits of newly developed surgical techniques. Modern PROMs are being developed with new psychometric approaches, such as Rasch Measurement Theory, and their measurement properties (validity, reliability, responsiveness) are rigorously tested. These advances have resulted in the availability of PROMs that provide clinically meaningful data and effectively measure functional as well as psychosocial outcomes. This article guides the reader through the steps of creating a PROM and highlights the potential research and clinical uses of such instruments. Limitations of PROMs and anticipated future directions in this field are discussed.

  16. Measuring Metrics for Social Media Marketing : Case: Marsaana Communications

    OpenAIRE

    Yli-Pietilä, Heidi

    2016-01-01

    This thesis looks into social media marketing, the relationship public relations has with social media marketing, and brand equity. The challenge in utilizing social media marketing is identifying the right tools for measuring its success or effectiveness. In this thesis I investigate a set of tools a Finnish PR agency could utilize in measuring the effects of their social media marketing efforts on their client's brand equity. This thesis' topics include new media in specifi...

  17. Introducing the Balanced Scorecard: Creating Metrics to Measure Performance

    Science.gov (United States)

    Gumbus, Andra

    2005-01-01

    This experiential exercise presents the concept of the Balanced Scorecard (BSC) and applies it in a university setting. The Balanced Scorecard was developed 12 years ago, has grown in popularity, and is used by more than 50% of the Fortune 500 companies as a performance measurement and strategic management tool. The BSC expands the traditional…

  18. 41 CFR 101-29.102 - Use of metric system of measurement in Federal product descriptions.

    Science.gov (United States)

    2010-07-01

    ... PROCUREMENT 29-FEDERAL PRODUCT DESCRIPTIONS 29.1-General § 101-29.102 Use of metric system of measurement in... measurement in Federal product descriptions. 101-29.102 Section 101-29.102 Public Contracts and Property... Federal agencies to: (a) Maintain close liaison with other Federal agencies, State and local governments...

  19. Rice by Weight, Other Produce by Bulk, and Snared Iguanas at So Much Per One. A Talk on Measurement Standards and on Metric Conversion.

    Science.gov (United States)

    Allen, Harold Don

    This script for a short radio broadcast on measurement standards and metric conversion begins by tracing the rise of the metric system in the international marketplace. Metric units are identified and briefly explained. Arguments for conversion to metric measures are presented. The history of the development and acceptance of the metric system is…

  20. Thermodynamic metrics for measuring the "sustainability" of design for recycling

    Science.gov (United States)

    Reuter, Markus; van Schaik, Antoinette

    2008-08-01

    In this article, exergy is applied as a parameter to measure the “sustainability” of a recycling system in addition to the fundamental prediction of material recycling and energy recovery, summarizing a development of over 20 years by the principal author supported by various co-workers, Ph.D., and M.Sc. students. In order to achieve this, recyclate qualities and particle size distributions throughout the system must be predicted as a function of product design, liberation during shredding, process dynamics, physical separation physics, and metallurgical thermodynamics. This crucial development enables the estimation of the true exergy of a recycling system from its inputs and outputs including all its realistic industrial traits. These models have among others been linked to computer aided design tools of the automotive industry and have been used to evaluate the performance of waste electric and electronic equipment recycling systems in The Netherlands. This paper also suggests that the complete system must be optimized to find a “truer” optimum of the material production system linked to the consumer market.

  1. Productivity in Pediatric Palliative Care: Measuring and Monitoring an Elusive Metric.

    Science.gov (United States)

    Kaye, Erica C; Abramson, Zachary R; Snaman, Jennifer M; Friebert, Sarah E; Baker, Justin N

    2017-05-01

    Workforce productivity is poorly defined in health care. Particularly in the field of pediatric palliative care (PPC), the absence of consensus metrics impedes aggregation and analysis of data to track workforce efficiency and effectiveness. Lack of uniformly measured data also compromises the development of innovative strategies to improve productivity and hinders investigation of the link between productivity and quality of care, which are interrelated but not interchangeable. To review the literature regarding the definition and measurement of productivity in PPC; to identify barriers to productivity within traditional PPC models; and to recommend novel metrics to study productivity as a component of quality care in PPC. PubMed ® and Cochrane Database of Systematic Reviews searches for scholarly literature were performed using key words (pediatric palliative care, palliative care, team, workforce, workflow, productivity, algorithm, quality care, quality improvement, quality metric, inpatient, hospital, consultation, model) for articles published between 2000 and 2016. Organizational searches of Center to Advance Palliative Care, National Hospice and Palliative Care Organization, National Association for Home Care & Hospice, American Academy of Hospice and Palliative Medicine, Hospice and Palliative Nurses Association, National Quality Forum, and National Consensus Project for Quality Palliative Care were also performed. Additional semistructured interviews were conducted with directors from seven prominent PPC programs across the U.S. to review standard operating procedures for PPC team workflow and productivity. Little consensus exists in the PPC field regarding optimal ways to define, measure, and analyze provider and program productivity. Barriers to accurate monitoring of productivity include difficulties with identification, measurement, and interpretation of metrics applicable to an interdisciplinary care paradigm. In the context of inefficiencies

  2. Measuring floodplain spatial patterns using continuous surface metrics at multiple scales

    Science.gov (United States)

    Scown, Murray W.; Thoms, Martin C.; DeJager, Nathan R.

    2015-01-01

    Interactions between fluvial processes and floodplain ecosystems occur upon a floodplain surface that is often physically complex. Spatial patterns in floodplain topography have only recently been quantified over multiple scales, and discrepancies exist in how floodplain surfaces are perceived to be spatially organised. We measured spatial patterns in floodplain topography for pool 9 of the Upper Mississippi River, USA, using moving window analyses of eight surface metrics applied to a 1 × 1 m² DEM over multiple scales. The metrics used were Range, SD, Skewness, Kurtosis, CV, SDCURV, Rugosity, and Vol:Area, and window sizes ranged from 10 to 1000 m in radius. Surface metric values were highly variable across the floodplain and revealed a high degree of spatial organisation in floodplain topography. Moran's I correlograms fit to the landscape of each metric at each window size revealed that patchiness existed at nearly all window sizes, but the strength and scale of patchiness changed with window size, suggesting that multiple scales of patchiness and patch structure exist in the topography of this floodplain. Scale thresholds in the spatial patterns were observed, particularly between the 50 and 100 m window sizes for all surface metrics and between the 500 and 750 m window sizes for most metrics. These threshold scales are ~ 15–20% and 150% of the main channel width (1–2% and 10–15% of the floodplain width), respectively. These thresholds may be related to structuring processes operating across distinct scale ranges. By coupling surface metrics, multi-scale analyses, and correlograms, quantifying floodplain topographic complexity is possible in ways that should assist in clarifying how floodplain ecosystems are structured.
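Two of the surface metrics named in this record (SD and Range) reduce to simple moving-window passes over an elevation grid. The sketch below is illustrative only and assumes a hypothetical `dem` array standing in for the 1 × 1 m elevation raster; the study's own windows are circular and much larger.

```python
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter, minimum_filter

def windowed_sd(dem, size):
    """Standard deviation of elevation within each (size x size) window."""
    mean = uniform_filter(dem, size)
    mean_sq = uniform_filter(dem * dem, size)
    # clip tiny negative variances caused by floating-point cancellation
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def windowed_range(dem, size):
    """Elevation range (max - min) within each (size x size) window."""
    return maximum_filter(dem, size) - minimum_filter(dem, size)

# a perfectly planar "floodplain" has zero local relief in both metrics
flat = np.ones((20, 20))
ramp = np.arange(25.0).reshape(5, 5)  # constant-gradient surface
```

Mapping either output array back onto the floodplain produces the kind of per-window metric landscape to which the authors fit Moran's I correlograms.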

  3. Semantic metrics

    OpenAIRE

    Hu, Bo; Kalfoglou, Yannis; Dupplaw, David; Alani, Harith; Lewis, Paul; Shadbolt, Nigel

    2006-01-01

    In the context of the Semantic Web, many ontology-related operations, e.g. ontology ranking, segmentation, alignment, articulation, reuse, evaluation, can be boiled down to one fundamental operation: computing the similarity and/or dissimilarity among ontological entities, and in some cases among ontologies themselves. In this paper, we review standard metrics for computing distance measures and we propose a series of semantic metrics. We give a formal account of semantic metrics drawn from a...

  4. Impact of liver fibrosis and fatty liver on T1rho measurements: A prospective study

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Shuang Shuang; Li, Qing; Cheng, Yue; Shen, Wen [Dept. of Radiology, Tianjin First Center Hospital, Tianjin (China); Zhang, Yu; Zhuo, Zhi Zheng [Clinical Science, Philips Healthcare, Beijing (China); Zhao, Guiming [Dept. of Hepatology, Tianjin Second People's Hospital, Tianjin (China)

    2017-11-15

    To investigate the liver T1rho values for detecting fibrosis, and the potential impact of fatty liver on T1rho measurements. This study included 18 healthy subjects, 18 patients with fatty liver, and 18 patients with liver fibrosis, who underwent T1rho MRI and mDIXON collections. Liver T1rho, proton density fat fraction (PDFF) and T2* values were measured and compared among the three groups. Receiver operating characteristic (ROC) curve analysis was performed to evaluate the T1rho values for detecting liver fibrosis. Liver T1rho values were correlated with PDFF, T2* values and clinical data. Liver T1rho and PDFF values were significantly different (p < 0.001), whereas the T2* (p = 0.766) values were similar, among the three groups. Mean liver T1rho values in the fibrotic group (52.6 ± 6.8 ms) were significantly higher than those of healthy subjects (44.9 ± 2.8 ms, p < 0.001) and fatty liver group (45.0 ± 3.5 ms, p < 0.001). Mean liver T1rho values were similar between healthy subjects and fatty liver group (p = 0.999). PDFF values in the fatty liver group (16.07 ± 10.59%) were significantly higher than those of healthy subjects (1.43 ± 1.36%, p < 0.001) and fibrosis group (1.07 ± 1.06%, p < 0.001). PDFF values were similar in healthy subjects and fibrosis group (p = 0.984). Mean T1rho values performed well to detect fibrosis at a threshold of 49.5 ms (area under the ROC curve, 0.855), had a moderate correlation with liver stiffness (r = 0.671, p = 0.012), and no correlation with PDFF, T2* values, subject age, or body mass index (p > 0.05). T1rho MRI is useful for noninvasive detection of liver fibrosis, and may not be affected with the presence of fatty liver.

  5. Using measures of information content and complexity of time series as hydrologic metrics

    Science.gov (United States)

    The information theory has been previously used to develop metrics that allowed to characterize temporal patterns in soil moisture dynamics, and to evaluate and to compare performance of soil water flow models. The objective of this study was to apply information and complexity measures to characte...

  6. Quantifying, Measuring, and Strategizing Energy Security: Determining the Most Meaningful Dimensions and Metrics

    DEFF Research Database (Denmark)

    Ren, Jingzheng; Sovacool, Benjamin

    2014-01-01

    subjective concepts of energy security into more objective criteria, to investigate the cause-effect relationships among these different metrics, and to provide some recommendations for the stakeholders to draft efficacious measures for enhancing energy security. To accomplish this feat, the study utilizes...

  7. Lean manufacturing measurement: the relationship between lean activities and lean metrics

    Directory of Open Access Journals (Sweden)

    Manotas Duque Diego Fernando

    2007-10-01

    Full Text Available Lean Manufacturing was developed by the Toyota Motor Company to address its specific needs in a restricted market in times of economic trouble. These concepts have been studied and proven to be transferable and applicable to a wide variety of industries. This paper aims to integrate a set of metrics that have been proposed by different authors in such a way that they are consistent with the different stages and elements of Lean Manufacturing implementations. To achieve this, two frameworks for Lean implementations are presented, and the main factors for success are used as the basis for proposing metrics that measure progress on these factors. A tabular display of the impact of "Lean activities" on the metrics is presented, suggesting that many a priori assumptions about benefits at different levels of improvement are accurate. Finally, some ideas for future research and extensions of the applications proposed in this paper are presented as closing points.

  8. An uncertainty importance measure using a distance metric for the change in a cumulative distribution function

    International Nuclear Information System (INIS)

    Chun, Moon-Hyun; Han, Seok-Jung; Tak, Nam-IL

    2000-01-01

    A simple measure of uncertainty importance using the entire change of cumulative distribution functions (CDFs) has been developed for use in probability safety assessments (PSAs). The entire change of CDFs is quantified in terms of the metric distance between two CDFs. The metric distance measure developed in this study reflects the relative impact of distributional changes of inputs on the change of an output distribution, while most of the existing uncertainty importance measures reflect the magnitude of relative contribution of input uncertainties to the output uncertainty. The present measure has been evaluated analytically for various analytical distributions to examine its characteristics. To illustrate the applicability and strength of the present measure, two examples are provided. The first example is an application of the present measure to a typical problem of a system fault tree analysis and the second one is for a hypothetical non-linear model. Comparisons of the present result with those obtained by existing uncertainty importance measures show that the metric distance measure is a useful tool to express the measure of uncertainty importance in terms of the relative impact of distributional changes of inputs on the change of an output distribution
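The central quantity in this record, a metric distance between two CDFs, can be illustrated with the sup-norm (Kolmogorov) distance between empirical CDFs. This is a sketch under that assumption; the paper's exact metric may differ, and the sample inputs are hypothetical.

```python
import numpy as np

def empirical_cdf(samples, grid):
    """Empirical CDF of `samples`, evaluated at each point of `grid`."""
    s = np.sort(np.asarray(samples, dtype=float))
    return np.searchsorted(s, grid, side="right") / s.size

def cdf_distance(samples_a, samples_b, n_grid=1001):
    """Sup-norm distance sup_x |F_a(x) - F_b(x)| between empirical CDFs."""
    lo = min(np.min(samples_a), np.min(samples_b))
    hi = max(np.max(samples_a), np.max(samples_b))
    grid = np.linspace(lo, hi, n_grid)
    return float(np.max(np.abs(empirical_cdf(samples_a, grid)
                               - empirical_cdf(samples_b, grid))))
```

In the spirit of the record, one would perturb the distribution of a single input, re-propagate samples through the model, and take `cdf_distance` between the baseline and perturbed output CDFs as the importance of that input.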

  9. Information Entropy-Based Metrics for Measuring Emergences in Artificial Societies

    Directory of Open Access Journals (Sweden)

    Mingsheng Tang

    2014-08-01

    Full Text Available Emergence is a common phenomenon, and it is also a general and important concept in complex dynamic systems like artificial societies. Usually, artificial societies are used to assist in resolving complex social issues (e.g., emergency management, intelligent transportation systems) with the aid of computer science. The level of an emergence may have an effect on decision making, and the occurrence and degree of an emergence are generally perceived by human observers. However, due to the ambiguity and inaccuracy of human observers, proposing a quantitative method to measure emergences in artificial societies is a meaningful and challenging task. This article mainly concentrates upon three kinds of emergence in artificial societies: emergence of attribution, emergence of behavior, and emergence of structure. Based on information entropy, three metrics have been proposed to measure emergences in a quantitative way. Meanwhile, the correctness of these metrics has been verified through three case studies (the spread of an infectious influenza, a dynamic microblog network, and a flock of birds) with several experimental simulations on the NetLogo platform. These experimental results confirm that these metrics increase with the rising degree of emergence. In addition, this article also discusses the limitations and extended applications of these metrics.
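The entropy-based idea behind such metrics is easy to sketch: track the Shannon entropy of the empirical distribution of agent attributes (or behaviors, or links) over time, and read a change in entropy as a change in order. The example below is a minimal illustration with invented agent states, not the article's actual metrics.

```python
import math
from collections import Counter

def shannon_entropy(states):
    """Shannon entropy (in bits) of the empirical distribution of `states`."""
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# agents converging on a single behavior drives entropy toward zero
scattered = ["flee", "hide", "seek", "wait"]   # maximally disordered: 2 bits
aligned = ["flee", "flee", "flee", "flee"]     # fully ordered: 0 bits
```

Comparing `shannon_entropy` across simulation ticks gives a scalar trace whose rise or fall can be used to flag an emergence of behavior.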

  10. Measuring scientific impact beyond academia: An assessment of existing impact metrics and proposed improvements.

    Science.gov (United States)

    Ravenscroft, James; Liakata, Maria; Clare, Amanda; Duma, Daniel

    2017-01-01

    How does scientific research affect the world around us? Being able to answer this question is of great importance in order to appropriately channel efforts and resources in science. The impact by scientists in academia is currently measured by citation based metrics such as h-index, i-index and citation counts. These academic metrics aim to represent the dissemination of knowledge among scientists rather than the impact of the research on the wider world. In this work we are interested in measuring scientific impact beyond academia, on the economy, society, health and legislation (comprehensive impact). Indeed scientists are asked to demonstrate evidence of such comprehensive impact by authoring case studies in the context of the Research Excellence Framework (REF). We first investigate the extent to which existing citation based metrics can be indicative of comprehensive impact. We have collected all recent REF impact case studies from 2014 and we have linked these to papers in citation networks that we constructed and derived from CiteSeerX, arXiv and PubMed Central using a number of text processing and information retrieval techniques. We have demonstrated that existing citation-based metrics for impact measurement do not correlate well with REF impact results. We also consider metrics of online attention surrounding scientific works, such as those provided by the Altmetric API. We argue that in order to be able to evaluate wider non-academic impact we need to mine information from a much wider set of resources, including social media posts, press releases, news articles and political debates stemming from academic work. We also provide our data as a free and reusable collection for further analysis, including the PubMed citation network and the correspondence between REF case studies, grant applications and the academic literature.
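Of the citation-based metrics this record contrasts with comprehensive impact (h-index, i-index, citation counts), the h-index is the easiest to make concrete: the largest h such that h of an author's papers have at least h citations each. A small sketch with invented citation counts:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank   # the top `rank` papers all have >= rank citations
        else:
            break
    return h
```

The record's argument is precisely that a scalar like this summarizes dissemination among scientists, not the wider economic, societal, health, or legislative impact assessed by REF case studies.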

  11. Measuring scientific impact beyond academia: An assessment of existing impact metrics and proposed improvements.

    Directory of Open Access Journals (Sweden)

    James Ravenscroft

    Full Text Available How does scientific research affect the world around us? Being able to answer this question is of great importance in order to appropriately channel efforts and resources in science. The impact by scientists in academia is currently measured by citation based metrics such as h-index, i-index and citation counts. These academic metrics aim to represent the dissemination of knowledge among scientists rather than the impact of the research on the wider world. In this work we are interested in measuring scientific impact beyond academia, on the economy, society, health and legislation (comprehensive impact). Indeed scientists are asked to demonstrate evidence of such comprehensive impact by authoring case studies in the context of the Research Excellence Framework (REF). We first investigate the extent to which existing citation based metrics can be indicative of comprehensive impact. We have collected all recent REF impact case studies from 2014 and we have linked these to papers in citation networks that we constructed and derived from CiteSeerX, arXiv and PubMed Central using a number of text processing and information retrieval techniques. We have demonstrated that existing citation-based metrics for impact measurement do not correlate well with REF impact results. We also consider metrics of online attention surrounding scientific works, such as those provided by the Altmetric API. We argue that in order to be able to evaluate wider non-academic impact we need to mine information from a much wider set of resources, including social media posts, press releases, news articles and political debates stemming from academic work. We also provide our data as a free and reusable collection for further analysis, including the PubMed citation network and the correspondence between REF case studies, grant applications and the academic literature.

  12. Using Complexity Metrics With R-R Intervals and BPM Heart Rate Measures

    DEFF Research Database (Denmark)

    Wallot, Sebastian; Fusaroli, Riccardo; Tylén, Kristian

    2013-01-01

    Lately, growing attention in the health sciences has been paid to the dynamics of heart rate as an indicator of impending failures and for prognoses. Likewise, in the social and cognitive sciences, heart rate is increasingly employed as a measure of arousal, emotional engagement, and as a marker of interpersonal coordination. However, there is no consensus about which measurements and analytical tools are most appropriate for mapping the temporal dynamics of heart rate, and quite different metrics are reported in the literature. As complexity metrics of heart rate variability depend critically...

  13. National Quality Forum Colon Cancer Quality Metric Performance: How Are Hospitals Measuring Up?

    Science.gov (United States)

    Mason, Meredith C; Chang, George J; Petersen, Laura A; Sada, Yvonne H; Tran Cao, Hop S; Chai, Christy; Berger, David H; Massarweh, Nader N

    2017-12-01

    To evaluate the impact of care at high-performing hospitals on the National Quality Forum (NQF) colon cancer metrics. The NQF endorses evaluating ≥12 lymph nodes (LNs), adjuvant chemotherapy (AC) for stage III patients, and AC within 4 months of diagnosis as colon cancer quality indicators. Data on hospital-level metric performance and the association with survival are unclear. Retrospective cohort study of 218,186 patients with resected stage I to III colon cancer in the National Cancer Data Base (2004-2012). High-performing hospitals (>75% achievement) were identified by the proportion of patients achieving each measure. The association between hospital performance and survival was evaluated using Cox shared frailty modeling. Only hospital LN performance improved over time (15.8% in 2004 vs 80.7% in 2012; trend test). Care at hospitals performing well on a greater number of metrics was associated with lower risk of death in a stepwise fashion [0 metrics, reference; 1, hazard ratio (HR) 0.96 (0.89-1.03); 2, HR 0.92 (0.87-0.98); 3, HR 0.85 (0.80-0.90); 2 vs 1, HR 0.96 (0.91-1.01); 3 vs 1, HR 0.89 (0.84-0.93); 3 vs 2, HR 0.95 (0.89-0.95)]. Performance on metrics in combination was associated with lower risk of death [LN + AC, HR 0.86 (0.78-0.95); AC + timely AC, HR 0.92 (0.87-0.98); LN + AC + timely AC, HR 0.85 (0.80-0.90)], whereas individual measures were not [LN, HR 0.95 (0.88-1.04); AC, HR 0.95 (0.87-1.05)]. Less than half of hospitals perform well on these NQF colon cancer metrics concurrently, and high performance on individual measures is not associated with improved survival. Quality improvement efforts should shift focus from individual measures to defining composite measures encompassing the overall multimodal care pathway and capturing successful transitions from one care modality to another.

  14. Particle image velocimetry correlation signal-to-noise ratio metrics and measurement uncertainty quantification

    International Nuclear Information System (INIS)

    Xue, Zhenyu; Charonko, John J; Vlachos, Pavlos P

    2014-01-01

    In particle image velocimetry (PIV) the measurement signal is contained in the recorded intensity of the particle image pattern superimposed on a variety of noise sources. The signal-to-noise-ratio (SNR) strength governs the resulting PIV cross correlation and ultimately the accuracy and uncertainty of the resulting PIV measurement. Hence we posit that correlation SNR metrics calculated from the correlation plane can be used to quantify the quality of the correlation and the resulting uncertainty of an individual measurement. In this paper we extend the original work by Charonko and Vlachos and present a framework for evaluating the correlation SNR using a set of different metrics, which in turn are used to develop models for uncertainty estimation. Several corrections have been applied in this work. The SNR metrics and corresponding models presented herein are expanded to be applicable to both standard and filtered correlations by applying a subtraction of the minimum correlation value to remove the effect of the background image noise. In addition, the notion of a ‘valid’ measurement is redefined with respect to the correlation peak width in order to be consistent with uncertainty quantification principles and distinct from an ‘outlier’ measurement. Finally the type and significance of the error distribution function is investigated. These advancements lead to more robust and reliable uncertainty estimation models compared with the original work by Charonko and Vlachos. The models are tested against both synthetic benchmark data as well as experimental measurements. In this work, U 68.5 uncertainties are estimated at the 68.5% confidence level while U 95 uncertainties are estimated at 95% confidence level. For all cases the resulting calculated coverage factors approximate the expected theoretical confidence intervals, thus demonstrating the applicability of these new models for estimation of uncertainty for individual PIV measurements. (paper)
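One widely used correlation-plane SNR metric of the kind this record evaluates is the primary peak ratio (PPR): the height of the tallest correlation peak divided by the next-tallest peak, here after subtracting the plane minimum to suppress the background offset as the record suggests. This is a rough illustrative sketch, not the authors' exact formulation; the 3x3 exclusion window is an assumption.

```python
import numpy as np

def primary_peak_ratio(corr):
    """PPR of a 2-D correlation plane: primary peak height over the
    tallest value outside the primary peak's 3x3 neighborhood."""
    c = np.asarray(corr, dtype=float) - np.min(corr)  # remove background offset
    i, j = np.unravel_index(np.argmax(c), c.shape)
    masked = c.copy()
    masked[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2] = 0.0  # blank primary peak
    second = masked.max()
    return c[i, j] / second if second > 0 else float("inf")
```

A high PPR indicates a clean, unambiguous correlation (low uncertainty), while a PPR near 1 suggests the displacement estimate may be an outlier.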

  15. Particle image velocimetry correlation signal-to-noise ratio metrics and measurement uncertainty quantification

    Science.gov (United States)

    Xue, Zhenyu; Charonko, John J.; Vlachos, Pavlos P.

    2014-11-01

    In particle image velocimetry (PIV) the measurement signal is contained in the recorded intensity of the particle image pattern superimposed on a variety of noise sources. The signal-to-noise-ratio (SNR) strength governs the resulting PIV cross correlation and ultimately the accuracy and uncertainty of the resulting PIV measurement. Hence we posit that correlation SNR metrics calculated from the correlation plane can be used to quantify the quality of the correlation and the resulting uncertainty of an individual measurement. In this paper we extend the original work by Charonko and Vlachos and present a framework for evaluating the correlation SNR using a set of different metrics, which in turn are used to develop models for uncertainty estimation. Several corrections have been applied in this work. The SNR metrics and corresponding models presented herein are expanded to be applicable to both standard and filtered correlations by applying a subtraction of the minimum correlation value to remove the effect of the background image noise. In addition, the notion of a ‘valid’ measurement is redefined with respect to the correlation peak width in order to be consistent with uncertainty quantification principles and distinct from an ‘outlier’ measurement. Finally the type and significance of the error distribution function is investigated. These advancements lead to more robust and reliable uncertainty estimation models compared with the original work by Charonko and Vlachos. The models are tested against both synthetic benchmark data as well as experimental measurements. In this work, U 68.5 uncertainties are estimated at the 68.5% confidence level while U 95 uncertainties are estimated at the 95% confidence level. For all cases the resulting calculated coverage factors approximate the expected theoretical confidence intervals, thus demonstrating the applicability of these new models for estimation of uncertainty for individual PIV measurements.

  16. Liver fibrosis: in vivo evaluation using intravoxel incoherent motion-derived histogram metrics with histopathologic findings at 3.0 T.

    Science.gov (United States)

    Hu, Fubi; Yang, Ru; Huang, Zixing; Wang, Min; Zhang, Hanmei; Yan, Xu; Song, Bin

    2017-12-01

    To retrospectively determine the feasibility of intravoxel incoherent motion (IVIM) imaging based on histogram analysis for the staging of liver fibrosis (LF), using histopathologic findings as the reference standard. 56 consecutive patients (14 men, 42 women; age range, 15-76 years) with chronic liver diseases (CLDs) were studied using IVIM-DWI with 9 b-values (0, 25, 50, 75, 100, 150, 200, 500, 800 s/mm²) at 3.0 T. Fibrosis stage was evaluated using the METAVIR scoring system. Histogram metrics including mean, standard deviation (Std), skewness, kurtosis, minimum (Min), maximum (Max), range, interquartile (Iq) range, and percentiles (10, 25, 50, 75, 90th) were extracted from apparent diffusion coefficient (ADC), true diffusion coefficient (D), pseudo-diffusion coefficient (D*), and perfusion fraction (f) maps. All histogram metrics among the different fibrosis groups were compared using one-way analysis of variance or the nonparametric Kruskal-Wallis test. For significant parameters, receiver operating characteristic (ROC) curve analyses were further performed for the staging of LF. Based on their METAVIR stage, the 56 patients were reclassified into three groups as follows: F0-1 group (n = 25), F2-3 group (n = 21), and F4 group (n = 10). The mean, Iq range, and percentiles (50, 75, and 90th) of the D* maps differed significantly between the groups (all P < 0.05), whereas histogram metrics of the ADC, D, and f maps demonstrated no significant difference among the groups (all P > 0.05). Histogram analysis of the D* map derived from IVIM can be used to stage liver fibrosis in patients with CLDs and provide more quantitative information beyond the mean value.
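
    The histogram metrics named in this abstract are straightforward to compute over the voxel values of a parameter map. A small illustrative sketch follows; the toy D* values and helper names are assumptions for illustration, not data from the study.

```python
# Sketch: histogram metrics (mean, Std, skewness, kurtosis, Min, Max,
# range, interquartile range, percentiles) over one parameter map.
import statistics

def percentile(xs, p):
    """Linear-interpolation percentile (0-100) of a sorted copy of xs."""
    s = sorted(xs)
    k = (len(s) - 1) * p / 100.0
    f, frac = int(k), k - int(k)
    return s[f] if frac == 0 else s[f] + frac * (s[f + 1] - s[f])

def histogram_metrics(voxels):
    mu = statistics.fmean(voxels)
    sd = statistics.pstdev(voxels)
    z3 = sum(((v - mu) / sd) ** 3 for v in voxels) / len(voxels)
    z4 = sum(((v - mu) / sd) ** 4 for v in voxels) / len(voxels)
    return {
        "mean": mu, "std": sd,
        "skewness": z3, "kurtosis": z4 - 3.0,  # excess kurtosis
        "min": min(voxels), "max": max(voxels),
        "range": max(voxels) - min(voxels),
        "iq_range": percentile(voxels, 75) - percentile(voxels, 25),
        **{f"p{p}": percentile(voxels, p) for p in (10, 25, 50, 75, 90)},
    }

# invented D* voxel values (x10^-3 mm^2/s) for one small ROI
dstar = [21.4, 18.9, 35.2, 27.8, 19.5, 41.0, 24.3, 30.1]
m = histogram_metrics(dstar)
print(round(m["mean"], 3), round(m["p50"], 2), round(m["iq_range"], 2))
```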

  17. Measures and metrics of sustainable diets with a focus on milk, yogurt, and dairy products

    Science.gov (United States)

    Drewnowski, Adam

    2018-01-01

    The 4 domains of sustainable diets are nutrition, economics, society, and the environment. To be sustainable, foods and food patterns need to be nutrient-rich, affordable, culturally acceptable, and sparing of natural resources and the environment. Each sustainability domain has its own measures and metrics. Nutrient density of foods has been assessed through nutrient profiling models, such as the Nutrient-Rich Foods family of scores. The Food Affordability Index, applied to different food groups, has measured both calories and nutrients per penny (kcal/$). Cultural acceptance measures have been based on relative food consumption frequencies across population groups. Environmental impact of individual foods and composite food patterns has been measured in terms of land, water, and energy use. Greenhouse gas emissions assess the carbon footprint of agricultural food production, processing, and retail. Based on multiple sustainability metrics, milk, yogurt, and other dairy products can be described as nutrient-rich, affordable, acceptable, and appealing. The environmental impact of dairy farming needs to be weighed against the high nutrient density of milk, yogurt, and cheese as compared with some plant-based alternatives. PMID:29206982
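
    The affordability metric described here reduces to calories or nutrients delivered per unit cost. A toy sketch follows; the food prices and nutrient values are invented illustrative numbers, not data from the study.

```python
# Sketch: calories and nutrients per dollar, the core of an affordability
# index. All numbers below are invented for illustration.
foods = {
    # name: (kcal per serving, protein g per serving, price $ per serving)
    "milk":   (150, 8.0, 0.60),
    "yogurt": (140, 9.0, 1.10),
    "soda":   (140, 0.0, 0.50),
}

def kcal_per_dollar(food):
    kcal, _, price = foods[food]
    return kcal / price

def protein_per_dollar(food):
    _, protein, price = foods[food]
    return protein / price

for f in foods:
    print(f, round(kcal_per_dollar(f)), round(protein_per_dollar(f), 1))
```

    The contrast between the two measures is the point: an energy-dense, nutrient-poor item can score well on kcal/$ while scoring zero on nutrients per dollar.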

  18. Measurement of innovation in South Africa: An analysis of survey metrics and recommendations

    Directory of Open Access Journals (Sweden)

    Sibusiso T. Manzini

    2015-11-01

    The National System of Innovation (NSI) is an important construct in South Africa’s policy discourse as illustrated in key national planning initiatives, such as the National Development Plan. The country’s capacity to innovate is linked to the prospects for industrial development leading to social and economic growth. Proper measurement of innovation activity is therefore crucial for policymaking. In this study, a constructive analytical critique of the innovation surveys that are conducted in South Africa is presented, the case for broadening current perspectives of innovation in the national policy discourse is reinforced, the significance of a broad perspective of innovation is demonstrated and new metrics for use in the measurement of the performance of the NSI are proposed. Current NSI survey instruments lack definition of non-technological innovation. They emphasise inputs rather than outputs, lack regional and sectoral analyses, give limited attention to innovation diffusion and are susceptible to respondent interpretation. Furthermore, there are gaps regarding the wider conditions of innovation and system linkages and learning. In order to obtain a comprehensive assessment of innovation in South Africa, there is a need to sharpen the metrics for measuring non-technological innovation and to define, account for and accurately measure the ‘hidden’ innovations that drive the realisation of value in management, the arts, public service and society in general. The new proposed indicators, which are mostly focused on innovation outputs, can be used as a basis for plugging the gaps identified in the existing surveys.

  19. Study of liver volume measurement and its clinical application for liver transplantation using multiple-slice spiral CT

    International Nuclear Information System (INIS)

    Peng Zhiyi; Yu Zhefeng; Kuang Pingding; Xiao Shengxiang; Huang Dongsheng; Zheng Shusen; Wu Jian

    2004-01-01

    Objective: To study the accuracy of liver volume measurement using MSCT and its application in liver transplantation. Methods: (1) Experimental study. Ten pig livers were scanned using MSCT with two collimations (3.2 mm and 6.5 mm) and a pitch of 1.25. A semi-automatic method was used to reconstruct 3D liver models and measure liver volume. (2) Clinical study. Twenty-three patients received an MSCT scan with a collimation of 6.5 mm before liver transplantation. The same method was used to calculate liver volume, and the measurement was repeated by the same observer after 1 month. Results: (1) Experimental study. Actual liver volumes were (1134.1 ± 288.0) ml. Liver volumes by MSCT with the two collimations were (1125.0 ± 282.5) ml (3.2 mm) and (1101.6 ± 277.6) ml (6.5 mm). The accuracy was (99.5 ± 0.8)% and (97.4 ± 0.8)%, respectively. Both showed good agreement with the actual liver volume: r=0.999, P<0.01. (2) Clinical study. Actual liver volumes were (1455.7 ± 730.0) ml. Liver volume by MSCT was (1462.7 ± 774.1) ml. The accuracy was (99.5 ± 9.6)%, r=0.986, P<0.01. Liver volume measured again was (1449.4 ± 768.9) ml, r=0.991 (P<0.01). Conclusion: MSCT can assess liver volume accurately and could be used as a routine step in evaluation before liver transplantation
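
    CT volumetry of this kind reduces to counting segmented voxels and multiplying by the volume of one voxel. A minimal sketch, with an invented segmentation mask and voxel spacing chosen only to illustrate the arithmetic:

```python
# Sketch: liver volume = (number of voxels labelled liver) x (voxel volume).
def liver_volume_ml(mask, voxel_mm3):
    """mask: 3D nested lists of 0/1 from semi-automatic segmentation."""
    n = sum(v for plane in mask for row in plane for v in row)
    return n * voxel_mm3 / 1000.0  # mm^3 -> ml

# e.g. 0.7 x 0.7 mm in-plane pixels with 3.2 mm slices (illustrative)
voxel_mm3 = 0.7 * 0.7 * 3.2
mask = [[[1, 1, 0], [1, 1, 1]],
        [[1, 0, 0], [1, 1, 0]]]  # toy 2-slice, 2x3 mask: 8 liver voxels
print(liver_volume_ml(mask, voxel_mm3))
```

    Thinner collimation reduces partial-volume effects at the liver boundary, consistent with the higher accuracy the study reports for 3.2 mm collimation.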

  20. Estimates for Parameter Littlewood-Paley gκ⁎ Functions on Nonhomogeneous Metric Measure Spaces

    Directory of Open Access Journals (Sweden)

    Guanghui Lu

    2016-01-01

    Let (X,d,μ) be a metric measure space which satisfies the geometrically doubling measure and the upper doubling measure conditions. In this paper, the authors prove that, under the assumption that the kernel of Mκ⁎ satisfies a certain Hörmander-type condition, Mκ⁎,ρ is bounded from the Lebesgue space Lp(μ) to the Lebesgue space Lp(μ) for p≥2 and is bounded from L1(μ) into L1,∞(μ). As a corollary, Mκ⁎,ρ is bounded on Lp(μ) for 1<p<∞.

  1. Metric-independent measures for supersymmetric extended object theories on curved backgrounds

    International Nuclear Information System (INIS)

    Nishino, Hitoshi; Rajpoot, Subhash

    2014-01-01

    For the Green–Schwarz superstring σ-model on curved backgrounds, we introduce a non-metric measure Φ ≡ ϵ^{ij}ϵ_{IJ}(∂_iφ^I)(∂_jφ^J) with two scalars φ^I (I=1,2) used in ‘Two-Measure Theory’ (TMT). As in the flat-background case, the string tension T=(2πα′)^{-1} emerges as an integration constant for the A_i-field equation. This mechanism is further generalized to supermembrane theory, and to super-p-brane theory, both on general curved backgrounds. This shows the universal applicability of the dynamical measure of TMT to general supersymmetric extended objects on general curved backgrounds

  2. Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics

    Directory of Open Access Journals (Sweden)

    Kang Rui

    2016-06-01

    In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate the conservatism in the estimations of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom, otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.

  3. RAAK PRO project: measuring safety in aviation: concept for the design of new metrics

    NARCIS (Netherlands)

    Karanikas, Nektarios; Kaspers, Steffen; Roelen, Alfred; Piric, Selma; van Aalst, Robbert; de Boer, Robert

    2017-01-01

    Following the completion of the 1st phase of the RAAK PRO project Aviation Safety Metrics, during which the researchers mapped the current practice in safety metrics and explored the validity of monotonic relationships of SMS, activity and demographic metrics with safety outcomes, this report

  4. Comparison of continuous versus categorical tumor measurement-based metrics to predict overall survival in cancer treatment trials

    Science.gov (United States)

    An, Ming-Wen; Mandrekar, Sumithra J.; Branda, Megan E.; Hillman, Shauna L.; Adjei, Alex A.; Pitot, Henry; Goldberg, Richard M.; Sargent, Daniel J.

    2011-01-01

    Purpose The categorical definition of response assessed via the Response Evaluation Criteria in Solid Tumors has documented limitations. We sought to identify alternative metrics for tumor response that improve prediction of overall survival. Experimental Design Individual patient data from three North Central Cancer Treatment Group trials (N0026, n=117; N9741, n=1109; N9841, n=332) were used. Continuous metrics of tumor size based on longitudinal tumor measurements were considered in addition to a trichotomized response (TriTR: Response vs. Stable vs. Progression). Cox proportional hazards models, adjusted for treatment arm and baseline tumor burden, were used to assess the impact of the metrics on subsequent overall survival, using a landmark analysis approach at 12, 16, and 24 weeks post-baseline. Model discrimination was evaluated using the concordance (c) index. Results The overall best response rates for the three trials were 26%, 45%, and 25%, respectively. While nearly all metrics were statistically significantly associated with overall survival at the different landmark time points, the c-indices ranged from 0.59 to 0.65 for the traditional response metrics, from 0.60 to 0.66 for the continuous metrics, and from 0.64 to 0.69 for the TriTR metrics. The c-indices for TriTR at 12 weeks were comparable to those at 16 and 24 weeks. Conclusions Continuous tumor-measurement-based metrics provided no predictive improvement over traditional response-based metrics or TriTR; TriTR had better predictive ability than best TriTR or confirmed response. If confirmed, TriTR represents a promising endpoint for future Phase II trials. PMID:21880789
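
    The discrimination measure used throughout this study, the concordance (c) index, can be sketched in a few lines: it is the fraction of usable patient pairs in which the patient with the higher risk score has the shorter survival. The data below are invented, and this simple version (which handles right-censoring by skipping pairs whose earlier time is censored) is a sketch, not the trials' exact software.

```python
# Sketch: concordance (c) index for survival data with right-censoring.
def c_index(times, events, risks):
    """times: survival times; events: 1=death observed, 0=censored;
    risks: model risk scores (higher = worse predicted outcome)."""
    concordant = ties = usable = 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            # order the pair so `a` has the shorter time
            a, b = (i, j) if times[i] < times[j] else (j, i)
            # pair usable only if the shorter time ended in an observed event
            if times[a] == times[b] or not events[a]:
                continue
            usable += 1
            if risks[a] > risks[b]:
                concordant += 1
            elif risks[a] == risks[b]:
                ties += 1
    return (concordant + 0.5 * ties) / usable

times  = [5, 8, 12, 20, 25]          # months (invented)
events = [1, 1, 0, 1, 0]             # 0 = censored
risks  = [0.9, 0.7, 0.5, 0.4, 0.2]   # perfectly ordered model
print(c_index(times, events, risks))
```

    A c-index of 0.5 is chance-level discrimination; the 0.59-0.69 range reported in the abstract is typical of tumor-response metrics.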

  5. Measuring and managing radiologist productivity, part 1: clinical metrics and benchmarks.

    Science.gov (United States)

    Duszak, Richard; Muroff, Lawrence R

    2010-06-01

    Physician productivity disparities are not uncommonly debated within radiology groups, sometimes in a contentious manner. Attempts to measure productivity, identify and motivate outliers, and develop equitable management policies can present challenges to private and academic practices alike but are often necessary for a variety of professional, financial, and personnel reasons. This is the first of a two-part series that will detail metrics for evaluating radiologist productivity and review published benchmarks, focusing primarily on clinical work. Issues and limitations that may prevent successful implementation of measurement systems are explored. Part 2 will expand that discussion to evaluating nonclinical administrative and academic activities, outlining advantages and disadvantages of addressing differential productivity, and introducing potential models for practices seeking to motivate physicians on the basis of both clinical and nonclinical work.

  6. The spleen-liver uptake ratio in liver scan: review of its measurement and correlation between hemodynamical changes of the liver in portal hypertension

    International Nuclear Information System (INIS)

    Lee, S. Y.; Chung, Y. A.; Chung, H. S.; Lee, H. G.; Kim, S. H.; Chung, S. K.

    1999-01-01

    We analyzed the correlation between changes of the spleen-liver ratio on liver scintigrams and hemodynamic changes of the liver across all grades of portal hypertension by a non-invasive scintigraphic method, and also reviewed the methods for measuring the spleen-liver ratio. Hepatic scintiangiograms for 120 seconds with 250-333 MBq of 99mTc-Sn-phytate, followed by liver scintigrams, were performed in a group of 62 patients comprising clinically proven normal subjects and patients with various diffuse hepatocellular diseases. Hepatic perfusion indices were calculated from the time-activity curves of the hepatic scintiangiograms. Spleen-liver ratios of maximum, average, and total counts within ROIs of the liver and spleen on both anterior and posterior liver scintigrams, and their geometric means, were calculated. Correlations between each spleen-liver ratio and the hepatic perfusion index were evaluated. There was a strong correlation (y = 0.0002x² - 0.0049x + 0.2746, R=0.8790, p<0.0001) between hepatic perfusion indices and spleen-liver ratios calculated from posterior maximum counts on the liver scintigrams. Weaker correlations were noted with either the geometric means of the maximum- and average-count methods (R=0.8101, 0.7268, p<0.0001) or the average counts of the posterior and anterior views (R=0.8134, 0.6200, p<0.0001). We reconfirmed that changes in the spleen-liver ratio on liver scintigrams reflect the hemodynamic changes of portal hypertension in diffuse hepatocellular diseases. Among the ratios, the posterior spleen-liver ratio measured by maximum counts gives the best information, and matching it with the hepatic perfusion index may be a further useful index for evaluating the characteristic splenic extraction coefficient of a given radiocolloid for liver scintigraphy
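
    The ratio itself and the quadratic fit the authors report are simple to express. In the sketch below, the ROI counts are invented illustrative values; only the fitted coefficients (y = 0.0002x² - 0.0049x + 0.2746) come from the abstract.

```python
# Sketch: posterior spleen-liver ratio of maximum ROI counts, plus the
# quadratic relation the study reports between hepatic perfusion index (x)
# and that ratio (y). ROI counts below are invented.
def spleen_liver_ratio(spleen_roi_counts, liver_roi_counts):
    return max(spleen_roi_counts) / max(liver_roi_counts)

def predicted_ratio(hpi):
    """Fitted curve from the abstract: y = 0.0002x^2 - 0.0049x + 0.2746."""
    return 0.0002 * hpi ** 2 - 0.0049 * hpi + 0.2746

posterior_spleen = [310, 402, 388, 395]     # counts per pixel (toy)
posterior_liver  = [980, 1204, 1150, 1178]  # counts per pixel (toy)
print(round(spleen_liver_ratio(posterior_spleen, posterior_liver), 3))
print(round(predicted_ratio(30), 4))
```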

  7. Impact of Different Creatinine Measurement Methods on Liver Transplant Allocation

    Science.gov (United States)

    Kaiser, Thorsten; Kinny-Köster, Benedict; Bartels, Michael; Parthaune, Tanja; Schmidt, Michael; Thiery, Joachim

    2014-01-01

    Introduction The model for end-stage liver disease (MELD) score is used in many countries to prioritize organ allocation for the majority of patients who require orthotopic liver transplantation. This score is calculated from the following laboratory parameters: creatinine, bilirubin and the international normalized ratio (INR). Consequently, high measurement accuracy is essential for equitable and fair organ allocation. For serum creatinine measurements, the Jaffé method and enzymatic detection are well-established routine diagnostic tests. Methods A total of 1,013 samples from 445 patients on the waiting list or in evaluation for liver transplantation were measured using both creatinine methods from November 2012 to September 2013 at University Hospital Leipzig, Germany. The measurements were performed in parallel according to the manufacturer’s instructions after the samples arrived at the institute of laboratory medicine. Patients who had required renal replacement therapy twice in the previous week were excluded from the analyses. Results Despite the good correlation between the results of the two creatinine quantification methods, relevant differences were observed, which led to different MELD scores. The Jaffé measurement led to a greater MELD score in 163/1,013 (16.1%) samples, with differences of up to 4 points in one patient, whereas differences of up to 2 points were identified in 15/1,013 (1.5%) samples using the enzymatic assay. Overall, 50/152 (32.9%) patients with MELD scores >20 had higher scores when the Jaffé method was used. Discussion Using the Jaffé method to measure creatinine levels in samples from patients who require liver transplantation may lead to a systematic preference in organ allocation. In this study, the differences were particularly pronounced in samples with MELD scores >20, which has clinical relevance in the context of urgency of transplantation. These data suggest that official recommendations are needed to determine which
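
    A hedged sketch of the classic UNOS MELD formula the abstract refers to; the coefficients and clamping rules follow the widely published version (labs floored at 1.0, creatinine capped at 4.0, score bounded at 6-40), which may differ in detail from the allocation rules in force at any given time. It illustrates how a small creatinine shift between assay methods can move the score by a clinically relevant amount.

```python
# Hedged sketch of the standard MELD score (pre-2016 UNOS form).
import math

def meld(creatinine, bilirubin, inr, dialysis=False):
    """Creatinine and bilirubin in mg/dL. Each lab value is floored at 1.0;
    creatinine is capped at 4.0 (or set to 4.0 with recent dialysis)."""
    cr = 4.0 if dialysis else min(max(creatinine, 1.0), 4.0)
    bili = max(bilirubin, 1.0)
    r = max(inr, 1.0)
    score = (9.57 * math.log(cr) + 3.78 * math.log(bili)
             + 11.2 * math.log(r) + 6.43)
    return max(6, min(40, round(score)))

# same patient, two illustrative creatinine results from different assays
print(meld(1.3, 4.2, 1.8))  # Jaffé-style creatinine (invented value)
print(meld(1.1, 4.2, 1.8))  # enzymatic-style creatinine (invented value)
```

    Because creatinine enters logarithmically with a large coefficient, a 0.2 mg/dL method difference here already shifts the score by 2 points.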

  8. Metric learning

    CERN Document Server

    Bellet, Aurelien; Sebban, Marc

    2015-01-01

    Similarity between objects plays an important role in both human cognitive processes and artificial systems for recognition and categorization. How to appropriately measure such similarities for a given task is crucial to the performance of many machine learning, pattern recognition and data mining methods. This book is devoted to metric learning, a set of techniques to automatically learn similarity and distance functions from data that has attracted a lot of interest in machine learning and related fields in the past ten years. In this book, we provide a thorough review of metric learning

  9. A City and National Metric measuring Isolation from the Global Market for Food Security Assessment

    Science.gov (United States)

    Brown, Molly E.; Silver, Kirk Coleman; Rajagopalan, Krishnan

    2013-01-01

    The World Bank has invested in infrastructure in developing countries for decades. This investment aims to reduce the isolation of markets, reducing both seasonality and variability in food availability and food prices. Here we combine city market price data, global distance to port, and country infrastructure data to create a new Isolation Index for countries and cities around the world. Our index quantifies the isolation of a city from the global market. We demonstrate that an index built at the country level can be applied at a sub-national level to quantify city isolation. In doing so, we offer policy makers an alternative metric for assessing food insecurity. We compare our Isolation Index with other indices and economic data found in the literature. We show that our index measures economic isolation regardless of economic stability, using correlation analysis

  10. Multi-linear model set design based on the nonlinearity measure and H-gap metric.

    Science.gov (United States)

    Shaghaghi, Davood; Fatehi, Alireza; Khaki-Sedigh, Ali

    2017-05-01

    This paper proposes a model bank selection method for a large class of nonlinear systems with wide operating ranges. In particular, nonlinearity measure and H-gap metric are used to provide an effective algorithm to design a model bank for the system. Then, the proposed model bank is accompanied with model predictive controllers to design a high performance advanced process controller. The advantage of this method is the reduction of excessive switch between models and also decrement of the computational complexity in the controller bank that can lead to performance improvement of the control system. The effectiveness of the method is verified by simulations as well as experimental studies on a pH neutralization laboratory apparatus which confirms the efficiency of the proposed algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Measuring solar reflectance - Part I: Defining a metric that accurately predicts solar heat gain

    Energy Technology Data Exchange (ETDEWEB)

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul [Heat Island Group, Environmental Energy Technologies Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States)

    2010-09-15

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland US latitudes, this metric R{sub E891BN} can underestimate the annual peak solar heat gain of a typical roof or pavement (slope {<=} 5:12 [23°]) by as much as 89 W m{sup -2}, and underestimate its peak surface temperature by up to 5 K. Using R{sub E891BN} to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool roof net energy savings by as much as 23%. We define clear sky air mass one global horizontal ('AM1GH') solar reflectance R{sub g,0}, a simple and easily measured property that more accurately predicts solar heat gain. R{sub g,0} predicts the annual peak solar heat gain of a roof or pavement to within 2 W m{sup -2}, and overestimates N by no more than 3%. R{sub g,0} is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R{sub g,0} can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer. (author)
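
    The abstract's central point is that a solar-weighted reflectance, R = ∫R(λ)I(λ)dλ / ∫I(λ)dλ, depends on the assumed irradiance spectrum I(λ). The sketch below makes that concrete: the wavelengths, spectral reflectances, and both toy irradiance spectra are invented for illustration, not the ASTM E891 or AM1GH data.

```python
# Sketch: solar-spectrum-weighted reflectance via trapezoidal integration.
def weighted_reflectance(wavelengths, reflectance, irradiance):
    num = den = 0.0
    for k in range(len(wavelengths) - 1):
        dw = wavelengths[k + 1] - wavelengths[k]
        num += 0.5 * dw * (reflectance[k] * irradiance[k]
                           + reflectance[k + 1] * irradiance[k + 1])
        den += 0.5 * dw * (irradiance[k] + irradiance[k + 1])
    return num / den

wl   = [400, 700, 1000, 1300]       # nm (coarse toy grid)
refl = [0.20, 0.25, 0.80, 0.85]     # 'cool colored': high NIR reflectance
beam_normal  = [1.0, 1.0, 0.8, 0.6] # toy spectrum richer in NIR
global_horiz = [1.3, 1.2, 0.6, 0.4] # toy spectrum richer in visible
print(round(weighted_reflectance(wl, refl, beam_normal), 3))
print(round(weighted_reflectance(wl, refl, global_horiz), 3))
```

    With the NIR-rich weighting, the same surface scores a higher reflectance, and thus a lower apparent heat gain, which is the direction of bias the paper describes.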

  12. Measuring solar reflectance Part I: Defining a metric that accurately predicts solar heat gain

    Energy Technology Data Exchange (ETDEWEB)

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul

    2010-05-14

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland U.S. latitudes, this metric R{sub E891BN} can underestimate the annual peak solar heat gain of a typical roof or pavement (slope {le} 5:12 [23{sup o}]) by as much as 89 W m{sup -2}, and underestimate its peak surface temperature by up to 5 K. Using R{sub E891BN} to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool-roof net energy savings by as much as 23%. We define clear-sky air mass one global horizontal ('AM1GH') solar reflectance R{sub g,0}, a simple and easily measured property that more accurately predicts solar heat gain. R{sub g,0} predicts the annual peak solar heat gain of a roof or pavement to within 2 W m{sup -2}, and overestimates N by no more than 3%. R{sub g,0} is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R{sub g,0} can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.

  13. Measuring Success: Metrics that Link Supply Chain Management to Aircraft Readiness

    National Research Council Canada - National Science Library

    Balestreri, William

    2002-01-01

    This thesis evaluates and analyzes current strategic management planning methods that develop performance metrics linking supply chain management to aircraft readiness. Our primary focus is the Marine

  14. Galactically inertial space probes for the direct measurement of the metric expansion of the universe

    International Nuclear Information System (INIS)

    Cagnani, Ivan

    2011-01-01

    Astrometric data from the future GAIA and OBSS missions will allow a more precise calculation of the local galactic circular speed, and better measurements of galactic movements relative to the CMB will be obtained by post-WMAP missions (i.e., Planck). Contemporary development of high specific impulse electric propulsion systems (i.e., VASIMR) will enable the development of space probes able to properly compensate for the galactic circular speed as well as the resulting attraction toward the centre of our galaxy. The probes would appear immobile to an ideal observer fixed at the centre of the galaxy, in contrast to every other galactic object, which would appear to move according to its local galactic circular speed and proper motion. Arranging at least three of these galactically static probes in an extended formation and measuring the reciprocal distances of the probes over time with large-angle laser ranging could allow a direct measurement of the metric expansion of the universe. Free-drifting laser-ranged targets released by the spacecraft could also be used to measure and compensate for local perturbations induced by the solar system. To further reduce local effects and increase the accuracy of the results, the distance between the probes should be maximized and the probes should be located as far as possible from the Sun and any massive object (e.g., Jupiter, Saturn). Gravitational waves could also induce random errors, but data from GW observatories such as the planned LISA could be used to correct for them.

  15. Business process performance measurement: a structured literature review of indicators, measures and metrics.

    Science.gov (United States)

    Van Looy, Amy; Shafagatova, Aygun

    2016-01-01

    Measuring the performance of business processes has become a central issue in both academia and business, since organizations are challenged to achieve effective and efficient results. Applying performance measurement models to this purpose ensures alignment with a business strategy, which implies that the choice of performance indicators is organization-dependent. Nonetheless, such measurement models generally suffer from a lack of guidance regarding the performance indicators that exist and how they can be concretized in practice. To fill this gap, we conducted a structured literature review to find patterns or trends in the research on business process performance measurement. The study also documents an extended list of 140 process-related performance indicators in a systematic manner by further categorizing them into 11 performance perspectives in order to gain a holistic view. Managers and scholars can consult the provided list to choose the indicators that are of interest to them, considering each perspective. The structured literature review concludes with avenues for further research.

  16. Metrics Feedback Cycle: measuring and improving user engagement in gamified eLearning systems

    Directory of Open Access Journals (Sweden)

    Adam Atkins

    2017-12-01

    This paper presents the identification, design and implementation of a set of metrics of user engagement in a gamified eLearning application. The 'Metrics Feedback Cycle' (MFC) is introduced as a formal process prescribing the iterative evaluation and improvement of application-wide engagement, using data collected from metrics as input to improve related engagement features. This framework was showcased using a gamified eLearning application as a case study. In this paper, we designed a prototype and tested it with thirty-six (N=36) students to validate the effectiveness of the MFC. The analysis and interpretation of metrics data shows that the gamification features had a positive effect on user engagement, and helped identify areas in which this could be improved. We conclude that the MFC has applications in gamified systems that seek to maximise engagement by iteratively evaluating implemented features against a set of evolving metrics.

  17. Evidence-based Metrics Toolkit for Measuring Safety and Efficiency in Human-Automation Systems

    Data.gov (United States)

    National Aeronautics and Space Administration — APRIL 2016 NOTE: Principal Investigator moved to Rice University in mid-2015. Project continues at Rice with the same title (Evidence-based Metrics Toolkit for...

  18. Project management metrics, KPIs, and dashboards a guide to measuring and monitoring project performance

    CERN Document Server

    Kerzner, Harold

    2013-01-01

    Today, with the growth of complex projects, stakeholder involvement in projects, and advances in computer technology for dashboard design, metrics and key performance indicators for project management have become an important focus. This Second Edition of the bestselling book walks readers through everything from the basics of project management metrics and key performance indicators to establishing targets and using dashboards to monitor performance. The content is aligned with PMI's PMBOK Guide and stresses "value" as the main focal point.

  19. Fractional type Marcinkiewicz integrals over non-homogeneous metric measure spaces

    Directory of Open Access Journals (Sweden)

    Guanghui Lu

    2016-10-01

    The main goal of the paper is to establish the boundedness of the fractional type Marcinkiewicz integral $\mathcal{M}_{\beta,\rho,q}$ on non-homogeneous metric measure spaces, which include the upper doubling and the geometrically doubling conditions. Under the assumption that the kernel satisfies a certain Hörmander-type condition, the authors prove that $\mathcal{M}_{\beta,\rho,q}$ is bounded from the Lebesgue space $L^{1}(\mu)$ into the weak Lebesgue space $L^{1,\infty}(\mu)$, from the Lebesgue space $L^{\infty}(\mu)$ into the space $\operatorname{RBLO}(\mu)$, and from the atomic Hardy space $H^{1}(\mu)$ into the Lebesgue space $L^{1}(\mu)$. Moreover, the authors also get a corollary, that is, $\mathcal{M}_{\beta,\rho,q}$ is bounded on $L^{p}(\mu)$ with $1<p<\infty$.

  20. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection

    Science.gov (United States)

    DeWeber, Jefferson T.; Wagner, Tyler

    2018-01-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30-day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species' distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold-water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid-century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as 0.2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions.

  1. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection.

    Science.gov (United States)

    DeWeber, Jefferson T; Wagner, Tyler

    2018-06-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30-day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species' distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold-water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid-century climate projections, which was similar to the variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as 0.2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions.

  2. Project management metrics, KPIs, and dashboards a guide to measuring and monitoring project performance

    CERN Document Server

    Kerzner, Harold

    2017-01-01

    With the growth of complex projects, stakeholder involvement, and advancements in visual-based technology, metrics and KPIs (key performance indicators) are key factors in evaluating project performance. Dashboard reporting systems provide accessible project performance data, and sharing this vital data in a concise and consistent manner is a key communication responsibility of all project managers. This 3rd edition of Kerzner's groundbreaking work includes new sections on processing dashboard information; on portfolio management, PMO, and metrics; and on BI tool flexibility. PPT decks by chapter and a test bank will be available for use in seminar presentations and courses.

  3. Measurement of liver and spleen volume by computed tomography using point counting technique in chronic liver disease

    International Nuclear Information System (INIS)

    Sato, Hiroyuki

    1983-01-01

    Liver and spleen volumes were measured by computed tomography (CT) using a point counting technique. This method is very simple and applicable to any kind of CT scanner. The volumes of the livers and spleens estimated by this method correlated with the weights of the corresponding organs measured at autopsy or surgery, indicating the accuracy and usefulness of the method. Hepatic and splenic volumes were estimated by this method in 48 patients with chronic liver disease and 13 subjects with non-hepatobiliary disease. The mean hepatic volume in non-alcoholic liver cirrhosis, but not in alcoholic cirrhosis, was significantly smaller than in non-hepatobiliary disease and other chronic liver diseases. Alcoholic cirrhosis showed a significantly larger liver volume than non-alcoholic cirrhosis. In alcoholic fibrosis, the mean hepatic volume was significantly larger than in non-hepatobiliary disease. The mean splenic volumes in both alcoholic and non-alcoholic cirrhosis were significantly larger than in other diseases. A significant positive correlation between hepatic and splenic volumes was found in alcoholic cirrhosis but not in non-alcoholic cirrhosis. These results indicate that estimation of hepatic and splenic volumes by this method is useful for the analysis of the pathophysiology of chronic liver disease. (author)
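
The point counting estimate itself is straightforward arithmetic: each counted grid point represents one grid cell of area on a slice, and each slice contributes that area times the slice thickness to the volume. A minimal sketch (the grid spacing, slice thickness, and counts below are illustrative assumptions, not values from the study):

```python
# Sketch of CT point-counting volumetry (Cavalieri-style estimate).
# Input: per-slice counts of grid points falling inside the organ outline.

def organ_volume_cm3(points_per_slice, grid_spacing_cm=1.0, slice_thickness_cm=1.0):
    """Estimate organ volume: each point stands for grid_spacing^2 of
    cross-sectional area, and each slice contributes area * thickness."""
    area_per_point = grid_spacing_cm ** 2
    return sum(points_per_slice) * area_per_point * slice_thickness_cm

# Example: point counts over five contiguous slices, 1 cm grid, 1 cm slices
volume = organ_volume_cm3([120, 180, 210, 170, 90])
print(volume)  # 770.0 (cm^3)
```

Because only point counts and two scanner-independent constants enter the formula, the method works on any CT scanner, as the abstract notes.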

  4. Resin 90Y microsphere activity measurements for liver brachytherapy

    International Nuclear Information System (INIS)

    Dezarn, William A.; Kennedy, Andrew S.

    2007-01-01

    The measurement of the radioactivity administered to the patient is one of the major components of 90Y microsphere liver brachytherapy. The activity of 90Y microspheres in a glass delivery vial was measured in a dose calibrator. The calibration value to use for 90Y in the dose calibrator was verified using an activity calibration standard provided by the microsphere manufacturer. This method allowed for the determination of a consistent, reproducible local activity standard. Additional measurements were made to determine some of the factors that could affect activity measurement. The axial response of the dose calibrator was determined by the ratio of activity measurements at the bottom and center of the dose calibrator. The axial response was 0.964 for a glass shipping vial, 1.001 for a glass V-vial, and 0.988 for a polycarbonate V-vial. Comparisons between activity measurements in the dose calibrator and those using a radiation survey meter were found to agree within 10%. It was determined that the dose calibrator method was superior to the survey meter method because the former allowed a better defined measurement geometry and traceability of the activity standard back to the manufacturer. Part of the preparation of resin 90Y microspheres for patient delivery is to draw a predetermined activity from a shipping vial and place it into a V-vial for delivery to the patient. If the drawn activity was placed in a glass V-vial, the activity measured in the dose calibrator was 4% higher than the drawn activity from the shipping vial standard. If the drawn activity was placed in a polycarbonate V-vial, the activity measured in the dose calibrator was 20% higher than the drawn activity from the shipping vial standard. Careful characterization of the local activity measurement standard is recommended instead of simply accepting the calibration value of the dose calibrator manufacturer.
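
A routine step behind any such activity measurement is decay correction between the calibration time and the administration time, using the physical half-life of 90Y (about 64.1 hours). This is a generic sketch of that correction, not the authors' specific procedure; the activities and elapsed time are made-up values:

```python
# Decay correction for 90Y between calibration and administration.

Y90_HALF_LIFE_H = 64.1  # physical half-life of 90Y, in hours

def decay_corrected_activity(a0_gbq, elapsed_hours):
    """Activity remaining after elapsed_hours: A = A0 * 2^(-t / T_half)."""
    return a0_gbq * 2 ** (-elapsed_hours / Y90_HALF_LIFE_H)

# Example: 3 GBq measured at calibration, administered 24 h later
remaining = decay_corrected_activity(3.0, 24.0)  # roughly 2.3 GBq
```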

  5. The relationship between settlement population size and sustainable development measured by two sustainability metrics

    International Nuclear Information System (INIS)

    O'Regan, Bernadette; Morrissey, John; Foley, Walter; Moles, Richard

    2009-01-01

    This paper reports on a study of the relative sustainability of 79 Irish villages, towns, and a small city (collectively called 'settlements') classified by population size. Quantitative data on more than 300 economic, social, and environmental attributes of each settlement were assembled into a database. Two aggregated metrics were selected to model the relative sustainability of settlements: Ecological Footprint (EF) and Sustainable Development Index (SDI). Subsequently these were aggregated to create a single Combined Sustainable Development Index. Creation of this database meant that metric calculations did not rely on proxies and were therefore considered to be robust. The methods employed provided values for indicators at various stages of the aggregation process. This allowed both the first reported empirical analysis of the relationship between settlement sustainability and population size, and the elucidation of the information provided at different stages of aggregation. At the highest level of aggregation, settlement sustainability increased with population size, but important differences amongst individual settlements were masked by aggregation. The EF and SDI metrics ranked settlements in differing orders of relative sustainability. Aggregation of indicators to provide Ecological Footprint values was found to be especially problematic, and this metric was not sensitive enough to distinguish amongst the relative sustainability achieved by all settlements. Many authors have argued that, for policy makers to be able to inform planning decisions using sustainability indicators, it is necessary that they adopt a toolkit of aggregated indicators. Here it is argued that to interpret correctly each aggregated metric value, policy makers also require a hierarchy of disaggregated component indicator values, each explained fully. Possible implications for urban planning are briefly reviewed
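
The masking effect of aggregation that the authors describe shows up even in a toy composite index. A minimal sketch, where min-max normalization and equal weighting are illustrative assumptions and not the study's actual EF or SDI methodology:

```python
def min_max_normalize(values):
    """Rescale raw indicator values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(indicators):
    """Equal-weight average of min-max normalized indicators.
    indicators: dict mapping indicator name -> raw values per settlement."""
    normalized = [min_max_normalize(v) for v in indicators.values()]
    n = len(normalized[0])
    return [sum(col[i] for col in normalized) / len(normalized) for i in range(n)]

# Two hypothetical indicators for three settlements: the second and third
# settlements tie on the composite (0.5 each) despite opposite component
# profiles -- aggregation masks the difference between them.
scores = composite_index({"ef": [4.2, 3.1, 5.0], "sdi": [0.6, 0.8, 0.5]})
```

This is exactly why the authors argue that policy makers need the hierarchy of disaggregated component values alongside the aggregated score.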

  6. Measures and Metrics for Feasibility of Proof-of-Concept Studies With Human Immunodeficiency Virus Rapid Point-of-Care Technologies

    Science.gov (United States)

    Pant Pai, Nitika; Chiavegatti, Tiago; Vijh, Rohit; Karatzas, Nicolaos; Daher, Jana; Smallwood, Megan; Wong, Tom; Engel, Nora

    2017-01-01

    Objective: Pilot (feasibility) studies form a vast majority of diagnostic studies with point-of-care technologies but often lack use of clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to (a) catalog feasibility measures/metrics and (b) propose a framework. Methods: For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation centered and patient centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. Findings: We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Although we observed poorly defined measures and metrics for feasibility, preference, and patient experience, in contrast, acceptability measure was the best defined. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified and reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. Conclusions: Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization. PMID:29333105

  7. Measures and Metrics for Feasibility of Proof-of-Concept Studies With Human Immunodeficiency Virus Rapid Point-of-Care Technologies: The Evidence and the Framework.

    Science.gov (United States)

    Pant Pai, Nitika; Chiavegatti, Tiago; Vijh, Rohit; Karatzas, Nicolaos; Daher, Jana; Smallwood, Megan; Wong, Tom; Engel, Nora

    2017-12-01

    Pilot (feasibility) studies form a vast majority of diagnostic studies with point-of-care technologies but often lack use of clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to (a) catalog feasibility measures/metrics and (b) propose a framework. For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation centered and patient centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Although we observed poorly defined measures and metrics for feasibility, preference, and patient experience, in contrast, acceptability measure was the best defined. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified and reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization.

  8. Liver

    International Nuclear Information System (INIS)

    Bernardino, M.E.; Sones, P.J. Jr.; Barton Price, R.; Berkman, W.A.

    1984-01-01

    Evaluation of the liver for focal lesions is extremely important because the liver is one of the most common sites of metastatic disease. Most patients with metastatic deposits in the liver survive only about 6 months, so metastatic disease of the liver carries an extremely grave prognosis. In the past, patients with hepatic lesions had no therapeutic recourse. However, with recent aggressive surgical advances (such as partial hepatectomies) and hepatic artery embolization, the survival of patients with hepatic metastases has increased. It is therefore important for noninvasive imaging not only to detect lesions early in their course, but also to establish their true hepatic involvement and the extent of the neoplastic process elsewhere in the body. Imaging has changed rapidly over the past 5 years, more so in computed tomography (CT) and ultrasound than in radionuclide imaging. Thus, the questions addressed in this chapter are: What is the relationship of hepatic ultrasound to the other current diagnostic modalities in detecting metastatic liver disease and other focal liver lesions? And what is its possible future relationship to nuclear magnetic resonance?

  9. Technology transfer metrics: Measurement and verification of data/reusable launch vehicle business analysis

    Science.gov (United States)

    Trivoli, George W.

    1996-01-01

    Congress and the Executive Branch have mandated that all branches of the Federal Government exert a concentrated effort to transfer appropriate government and government contractor-developed technology to the industrial use in the U.S. economy. For many years, NASA has had a formal technology transfer program to transmit information about new technologies developed for space applications into the industrial or commercial sector. Marshall Space Flight Center (MSFC) has been in the forefront of the development of U.S. industrial assistance programs using technologies developed at the Center. During 1992-93, MSFC initiated a technology transfer metrics study. The MSFC study was the first of its kind among the various NASA centers. The metrics study is a continuing process, with periodic updates that reflect on-going technology transfer activities.

  10. Hemodynamic changes with liver fibrosis measured by dynamic contrast-enhanced MRI in the rat

    International Nuclear Information System (INIS)

    Kubo, Hitoshi; Harada, Masafumi; Ishikawa, Makoto; Nishitani, Hiromu

    2006-01-01

    The purpose of this study was to evaluate the hemodynamic changes of liver cirrhosis in the rat and to investigate the relationship between hemodynamic changes and the properties of fibrotic change in the liver. Three rats with cirrhosis induced by thioacetamide (TAA), three with disease induced by carbon tetrachloride (CCl4), and three untreated controls were examined by dynamic MRI using a 1.5T scanner. Compartment and moment analyses were used to quantitate hemodynamic changes. Compartment model analysis showed that an increased transition speed from vessels to the liver correlated with the grade of liver fibrosis. Moment analysis demonstrated that decreases in the area under the curve (AUC), mean residence time (MRT), variance of residence time (VRT), and half-life (T1/2), and an increase in total clearance (CL), correlated with the grade of liver fibrosis. Hemodynamic changes in injured fibrotic liver may be influenced by the grade of fibrosis. Compartment model and moment analysis may be useful for evaluating hemodynamic changes in the injured liver. (author)
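
Moment analysis of a dynamic contrast-enhanced signal reduces to numerical integration of the concentration-time curve. A minimal sketch with made-up sample values (trapezoidal integration; real analyses would also extrapolate the curve tail beyond the last sample):

```python
def trapz(ys, xs):
    """Trapezoidal-rule integral of samples ys over points xs."""
    return sum((ys[i] + ys[i + 1]) / 2 * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

def moment_analysis(times, conc):
    """AUC, mean residence time, and variance of residence time
    from a sampled concentration-time curve."""
    auc = trapz(conc, times)                                      # zeroth moment
    aumc = trapz([t * c for t, c in zip(times, conc)], times)     # first moment
    mrt = aumc / auc
    m2 = trapz([t * t * c for t, c in zip(times, conc)], times)   # second moment
    vrt = m2 / auc - mrt ** 2
    return auc, mrt, vrt

# Hypothetical signal samples at 0, 1, 2, 4, and 8 minutes
auc, mrt, vrt = moment_analysis([0, 1, 2, 4, 8], [0.0, 5.0, 4.0, 2.0, 0.5])
```

Under this scheme the fibrosis-related trends the abstract reports (lower AUC, MRT, and VRT; higher clearance, since CL is dose/AUC) follow directly from the shape of the measured curve.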

  11. Performance metrics for Inertial Confinement Fusion implosions: aspects of the technical framework for measuring progress in the National Ignition Campaign

    International Nuclear Information System (INIS)

    Spears, B.K.; Glenzer, S.; Edwards, M.J.; Brandon, S.; Clark, D.; Town, R.; Cerjan, C.; Dylla-Spears, R.; Mapoles, E.; Munro, D.; Salmonson, J.; Sepke, S.; Weber, S.; Hatchett, S.; Haan, S.; Springer, P.; Moses, E.

    2011-01-01

    The National Ignition Campaign (NIC) uses non-igniting 'THD' capsules to study and optimize the hydrodynamic assembly of the fuel without burn. These capsules are designed to simultaneously reduce DT neutron yield and to maintain hydrodynamic similarity with the DT ignition capsule. We will discuss nominal THD performance and the associated experimental observables. We will show the results of large ensembles of numerical simulations of THD and DT implosions and their simulated diagnostic outputs. These simulations cover a broad range of both nominal and off nominal implosions. We will focus on the development of an experimental implosion performance metric called the experimental ignition threshold factor (ITFX). We will discuss the relationship between ITFX and other integrated performance metrics, including the ignition threshold factor (ITF), the generalized Lawson criterion (GLC), and the hot spot pressure (HSP). We will then consider the experimental results of the recent NIC THD campaign. We will show that we can observe the key quantities for producing a measured ITFX and for inferring the other performance metrics. We will discuss trends in the experimental data, improvement in ITFX, and briefly the upcoming tuning campaign aimed at taking the next steps in performance improvement on the path to ignition on NIF.

  12. Performance metrics for Inertial Confinement Fusion implosions: aspects of the technical framework for measuring progress in the National Ignition Campaign

    Energy Technology Data Exchange (ETDEWEB)

    Spears, B K; Glenzer, S; Edwards, M J; Brandon, S; Clark, D; Town, R; Cerjan, C; Dylla-Spears, R; Mapoles, E; Munro, D; Salmonson, J; Sepke, S; Weber, S; Hatchett, S; Haan, S; Springer, P; Moses, E

    2011-12-16

    The National Ignition Campaign (NIC) uses non-igniting 'THD' capsules to study and optimize the hydrodynamic assembly of the fuel without burn. These capsules are designed to simultaneously reduce DT neutron yield and to maintain hydrodynamic similarity with the DT ignition capsule. We will discuss nominal THD performance and the associated experimental observables. We will show the results of large ensembles of numerical simulations of THD and DT implosions and their simulated diagnostic outputs. These simulations cover a broad range of both nominal and off nominal implosions. We will focus on the development of an experimental implosion performance metric called the experimental ignition threshold factor (ITFX). We will discuss the relationship between ITFX and other integrated performance metrics, including the ignition threshold factor (ITF), the generalized Lawson criterion (GLC), and the hot spot pressure (HSP). We will then consider the experimental results of the recent NIC THD campaign. We will show that we can observe the key quantities for producing a measured ITFX and for inferring the other performance metrics. We will discuss trends in the experimental data, improvement in ITFX, and briefly the upcoming tuning campaign aimed at taking the next steps in performance improvement on the path to ignition on NIF.

  13. Measuring distance “as the horse runs”: Cross-scale comparison of terrain-based metrics

    Science.gov (United States)

    Buttenfield, Barbara P.; Ghandehari, M; Leyk, S; Stanislawski, Larry V.; Brantley, M E; Qiang, Yi

    2016-01-01

    Distance metrics play significant roles in spatial modeling tasks, such as flood inundation (Tucker and Hancock 2010), stream extraction (Stanislawski et al. 2015), power line routing (Kiessling et al. 2003), and analysis of surface pollutants such as nitrogen (Harms et al. 2009). Avalanche risk is based on slope, aspect, and curvature, all directly computed from distance metrics (Gutiérrez 2012). Distance metrics anchor variogram analysis, kernel estimation, and spatial interpolation (Cressie 1993). Several approaches are employed to measure distance. Planar metrics measure straight-line distance between two points ("as the crow flies") and are simple and intuitive, but suffer from uncertainties. Planar metrics assume that Digital Elevation Model (DEM) pixels are rigid and flat, like tiny facets of ceramic tile approximating a continuous terrain surface. In truth, terrain can bend, twist, and undulate within each pixel. Working with Light Detection and Ranging (lidar) data or High Resolution Topography to achieve precise measurements presents challenges, as filtering can eliminate or distort significant features (Passalacqua et al. 2015). The current availability of lidar data is far from comprehensive in developed nations, and non-existent in many rural and undeveloped regions. Notwithstanding computational advances, distance estimation on DEMs has never been systematically assessed, owing to the assumption that improvements are so small that surface adjustment is unwarranted. For individual pixels, inaccuracies may be small, but additive effects can propagate dramatically, especially in regional models (e.g., disaster evacuation) or global models (e.g., sea level rise) where pixels span dozens to hundreds of kilometers (Usery et al. 2003). Such models are increasingly common, lending compelling reasons to understand the shortcomings of planar distance metrics. Researchers have studied curvature-based terrain modeling. Jenny et al. (2011) use curvature to generate
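
The gap between planar and surface distance is easy to illustrate on a one-dimensional elevation profile. A minimal sketch (the 30 m posting and the elevations are made-up values): each step along the surface is the hypotenuse of the horizontal cell size and the elevation change, so surface distance ("as the horse runs") can never be shorter than planar distance ("as the crow flies").

```python
import math

def planar_distance(profile, cell_size):
    """Straight-line map distance across a profile of n cells."""
    return cell_size * (len(profile) - 1)

def surface_distance(profile, cell_size):
    """Distance along the terrain: each step is the hypotenuse of the
    horizontal cell size and the elevation change between cells."""
    return sum(math.hypot(cell_size, profile[i + 1] - profile[i])
               for i in range(len(profile) - 1))

elev = [100.0, 130.0, 120.0, 160.0]  # elevations (m) at 30 m postings
print(planar_distance(elev, 30.0))   # 90.0
print(surface_distance(elev, 30.0))  # longer than planar in rough terrain
```

The per-step difference is small, but summed over the millions of cells of a regional or global model it is exactly the additive propagation the abstract warns about.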

  14. The role of metrics and measurements in a software intensive total quality management environment

    Science.gov (United States)

    Daniels, Charles B.

    1992-01-01

    Paramax Space Systems began its mission as a member of the Rockwell Space Operations Company (RSOC) team which was the successful bidder on a massive operations consolidation contract for the Mission Operations Directorate (MOD) at JSC. The contract awarded to the team was the Space Transportation System Operations Contract (STSOC). Our initial challenge was to accept responsibility for a very large, highly complex and fragmented collection of software from eleven different contractors and transform it into a coherent, operational baseline. Concurrently, we had to integrate a diverse group of people from eleven different companies into a single, cohesive team. Paramax executives recognized the absolute necessity to develop a business culture based on the concept of employee involvement to execute and improve the complex process of our new environment. Our executives clearly understood that management needed to set the example and lead the way to quality improvement. The total quality management policy and the metrics used in this endeavor are presented.

  15. Measurement of the Ecological Integrity of Cerrado Streams Using Biological Metrics and the Index of Habitat Integrity

    Directory of Open Access Journals (Sweden)

    Deusiano Florêncio dos Reis

    2017-01-01

    Generally, aquatic communities reflect the effects of anthropogenic changes such as deforestation or organic pollution. The Cerrado is among the ecosystems most threatened by human activities in Brazil. In order to evaluate the ecological integrity of the streams in a preserved watershed in the northern Cerrado biome, corresponding to a mosaic of ecosystems in transition to the Amazonia biome in Brazil, biological metrics related to the diversity, structure, and sensitivity of aquatic macroinvertebrates were calculated. Sampling included collections along stretches of 200 m of nine streams and measurements of abiotic variables (temperature, electrical conductivity, pH, total dissolved solids, dissolved oxygen, and discharge) and the Index of Habitat Integrity (HII). The values of the abiotic variables and the HII indicated that most of the streams have good ecological integrity, due to high oxygen levels and low concentrations of dissolved solids and low electrical conductivity. Two streams showed altered HII scores, mainly related to small dams for recreational and domestic use, use of Cerrado natural pasture for cattle raising, and spot deforestation in bathing areas. However, this finding is not reflected in the biological metrics that were used. Considering all nine streams, only two showed satisfactory ecological quality (measured by the Biological Monitoring Working Party (BMWP) score, total richness, and EPT (Ephemeroptera, Plecoptera, and Trichoptera) richness), only one of which had a low HII score. These results indicate that punctual measures of abiotic parameters do not reveal the long-term impacts of anthropic activities in these streams, including the related fire management of pasture that annually alters the vegetation matrix and may act as a disturbance for the macroinvertebrate communities. Due to this, biomonitoring of low-order streams in Cerrado ecosystems of northern Central Brazil by different biotic metrics and also physical attributes of the
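
EPT richness, one of the biological metrics used above, is simply a count of distinct taxa belonging to three pollution-sensitive insect orders (Ephemeroptera, Plecoptera, Trichoptera). A minimal sketch with hypothetical sample records:

```python
EPT_ORDERS = {"Ephemeroptera", "Plecoptera", "Trichoptera"}

def ept_richness(sample):
    """Number of distinct taxa in the EPT orders.
    sample: list of (order, taxon) records from one stream sample."""
    return len({taxon for order, taxon in sample if order in EPT_ORDERS})

sample = [
    ("Ephemeroptera", "Baetidae"),
    ("Ephemeroptera", "Baetidae"),    # duplicate record, counted once
    ("Trichoptera", "Hydropsychidae"),
    ("Diptera", "Chironomidae"),      # not an EPT order, excluded
]
print(ept_richness(sample))  # 2
```

Higher EPT richness is generally read as better water quality, since these orders are sensitive to organic pollution and habitat degradation.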

  16. Overview of journal metrics

    Directory of Open Access Journals (Sweden)

    Kihong Kim

    2018-02-01

    Various kinds of metrics used for the quantitative evaluation of scholarly journals are reviewed. The impact factor and related metrics, including the immediacy index and the aggregate impact factor, which are provided by the Journal Citation Reports, are explained in detail. The Eigenfactor score and the article influence score are also reviewed. In addition, journal metrics such as CiteScore, Source Normalized Impact per Paper, SCImago Journal Rank, h-index, and g-index are discussed. The limitations and problems of these metrics are pointed out. We should be cautious not to rely too heavily on these quantitative measures when evaluating journals or researchers.
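
As a concrete anchor for the review, the two-year impact factor from the Journal Citation Reports divides the citations a journal receives in a given year to its items from the two preceding years by the number of citable items published in those two years. A minimal sketch with made-up counts:

```python
def impact_factor(citations_by_pub_year, citable_items, year):
    """Two-year impact factor for `year`: citations received in `year`
    to items published in the two preceding years, divided by the
    number of citable items published in those years."""
    cites = (citations_by_pub_year.get(year - 1, 0)
             + citations_by_pub_year.get(year - 2, 0))
    items = citable_items.get(year - 1, 0) + citable_items.get(year - 2, 0)
    return cites / items

# Citations received in 2018, keyed by the cited item's publication year
cites = {2016: 150, 2017: 210}
items = {2016: 90, 2017: 110}
print(impact_factor(cites, items, 2018))  # 1.8
```

The formula makes one limitation noted in the review immediately visible: a few highly cited items in the numerator can dominate the score for an otherwise typical journal.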

  17. Relative Citation Ratio of Top Twenty Macedonian Biomedical Scientists in PubMed: A New Metric that Uses Citation Rates to Measure Influence at the Article Level

    Directory of Open Access Journals (Sweden)

    Mirko Spiroski

    2016-06-01

    Conclusion: The top twenty Macedonian biomedical scientists should be accepted as an example of a new metric that uses citation rates to measure influence at the article level, rather than as a ranking of the best Macedonian biomedical scientists.

  18. A Study on the Measurement of Intrapulmonary Shunt in Liver Diseases by the Radionuclide Method

    International Nuclear Information System (INIS)

    Yun, Sung Chul; Ahn, Jae Hee; Choi, Soo Bong

    1987-01-01

    It has been known since the 1950s that intrapulmonary arteriovenous shunting is increased in patients with liver cirrhosis. A method of calculating the shunt fraction by a radionuclide technique using 99mTc-MAA was introduced in the mid-1970s. We measured the intrapulmonary shunt fraction by means of perfusion lung scans using 99mTc-MAA in various types of liver disease, in particular chronic and acute liver disease. The results were as follows. 1) The intrapulmonary arteriovenous shunt fraction across all cases of liver disease was 9.3±3.9%, compared with 4.6±2.1% in the control group. 2) The shunt fraction was 10.8±4.4% in chronic liver disease and 7.2±2.8% in acute liver disease. We observed significant differences in the shunt fraction measured by the radionuclide method between the normal control group and the liver disease group, and between the chronic and acute liver disease groups.
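
The radionuclide estimate rests on the fact that 99mTc-MAA particles are large enough to lodge in the pulmonary capillary bed, so activity reaching the systemic circulation must have passed through a shunt. A minimal sketch (the counts are hypothetical, and clinical protocols define the regions of interest more carefully):

```python
def shunt_fraction_percent(whole_body_counts, lung_counts):
    """Right-to-left shunt estimate with 99mTc-MAA: the share of
    injected activity that bypassed the pulmonary capillary bed."""
    return 100.0 * (whole_body_counts - lung_counts) / whole_body_counts

# Example: most activity is trapped in the lungs; the remainder shunted
print(shunt_fraction_percent(1_000_000, 907_000))  # 9.3 (%)
```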

  19. Revisiting measurement invariance in intelligence testing in aging research: Evidence for almost complete metric invariance across age groups.

    Science.gov (United States)

    Sprague, Briana N; Hyun, Jinshil; Molenaar, Peter C M

    2017-01-01

    Invariance of intelligence across age is often assumed but infrequently explicitly tested. Horn and McArdle (1992) tested measurement invariance of intelligence, reporting adequate model fit, but their model may not have considered all relevant aspects, such as subtest differences. The goal of the current paper is to explore age-related invariance of the WAIS-R using an alternative model that allows direct tests of age on WAIS-R subtests. Cross-sectional data on 940 participants aged 16-75 from the WAIS-R normative values were used. The subtests examined were information, comprehension, similarities, vocabulary, picture completion, block design, picture arrangement, and object assembly. The two intelligence factors considered were fluid and crystallized intelligence. Self-reported ages were divided into young (16-22, n = 300), adult (29-39, n = 275), middle (40-60, n = 205), and older (61-75, n = 160) adult groups. Results suggested that partial metric invariance holds. Although most of the subtests reflected fluid and crystallized intelligence similarly across ages, invariance did not hold for block design on fluid intelligence or for picture arrangement on crystallized intelligence for older adults. Additionally, there was evidence of a correlated residual between information and vocabulary for the young adults only. This partial metric invariance model yielded acceptable model fit compared to previously proposed invariance models of Horn and McArdle (1992). Almost complete metric invariance holds for a two-factor model of intelligence. Most of the subtests were invariant across age groups, suggesting little evidence for age-related bias in the WAIS-R. However, we did find unique relationships between two subtests and intelligence. Future studies should examine age-related differences in subtests when testing measurement invariance in intelligence.

  20. The PROMIS Physical Function item bank was calibrated to a standardized metric and shown to improve measurement efficiency

    DEFF Research Database (Denmark)

    Rose, Matthias; Bjørner, Jakob; Gandek, Barbara

    2014-01-01

    OBJECTIVE: To document the development and psychometric evaluation of the Patient-Reported Outcomes Measurement Information System (PROMIS) Physical Function (PF) item bank and static instruments. STUDY DESIGN AND SETTING: The items were evaluated using qualitative and quantitative methods. A total...... response model was used to estimate item parameters, which were normed to a mean of 50 (standard deviation [SD]=10) in a US general population sample. RESULTS: The final bank consists of 124 PROMIS items covering upper, central, and lower extremity functions and instrumental activities of daily living...... to identify differences between age and disease groups. CONCLUSION: The item bank provides a common metric and can improve the measurement of PF by facilitating the standardization of patient-reported outcome measures and implementation of CATs for more efficient PF assessments over a larger range....
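
The "common metric" referred to here is the T-score scale used by PROMIS, on which 50 is the mean and 10 the standard deviation of the US general population norming sample. A minimal sketch of the transformation from a standardized latent trait estimate (the variable name theta is ours):

```python
def promis_t_score(theta):
    """Map a standardized latent trait estimate (mean 0, SD 1 in the
    norming sample) onto the PROMIS T-score metric (mean 50, SD 10)."""
    return 50.0 + 10.0 * theta

print(promis_t_score(0.0))   # 50.0 (general-population average)
print(promis_t_score(-1.5))  # 35.0 (1.5 SD below average function)
```

Because every PROMIS instrument, full bank or short form or CAT, reports on this same scale, scores remain comparable across instruments and studies.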

  1. Sustainability, Health and Environmental Metrics: Impact on Ranking and Associations with Socioeconomic Measures for 50 U.S. Cities

    Directory of Open Access Journals (Sweden)

    Timothy Wade

    2013-02-01

    Full Text Available Waste and materials management, land use planning, transportation and infrastructure including water and energy can have indirect or direct beneficial impacts on the environment and public health. The potential for impact, however, is rarely viewed in an integrated fashion. To facilitate such an integrated view in support of community-based policy decision making, we catalogued and evaluated associations between common, publicly available Environmental (E), Health (H), and Sustainability (S) metrics and sociodemographic measurements (n = 10) for 50 populous U.S. cities. E, H, and S indices combined from two sources were derived from component E, H, and S metrics for each city. A composite EHS Index was derived to reflect the integration across the E, H, and S indices. Rank order of high-performing cities was highly dependent on which of the E, H, and S indices was considered. When viewed together with sociodemographic measurements, our analyses further the understanding of the interplay between these broad categories and reveal significant sociodemographic disparities (e.g., race, education, income) associated with low-performing cities. Our analyses demonstrate how publicly available environmental, health, sustainability and socioeconomic data sets can be used to better understand interconnections between these diverse domains for more holistic community assessments.
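A composite index of this kind, combining component metrics from different domains into one city-level score, can be sketched by averaging per-metric z-scores. The cities, metric names, and values below are hypothetical, not the study's data:

```python
import statistics

def composite_index(metric_table):
    """Combine per-city metrics into one composite score by averaging
    z-scores across metrics (higher = better performance).
    metric_table: dict of metric_name -> {city: value}."""
    z = {}
    for metric, values in metric_table.items():
        mean = statistics.mean(values.values())
        sd = statistics.stdev(values.values())
        for city, v in values.items():
            z.setdefault(city, []).append((v - mean) / sd)
    return {city: sum(scores) / len(scores) for city, scores in z.items()}

# Hypothetical metrics for three cities
metrics = {
    "air_quality": {"A": 70, "B": 55, "C": 85},
    "life_expectancy": {"A": 79, "B": 76, "C": 81},
}
idx = composite_index(metrics)
ranking = sorted(idx, key=idx.get, reverse=True)
print(ranking)  # city with the highest composite score first
```

As the abstract notes, rank order is sensitive to which component metrics are included, so the choice of `metric_table` drives the result.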

  2. Improved noninvasive prediction of liver fibrosis by liver stiffness measurement in patients with nonalcoholic fatty liver disease accounting for controlled attenuation parameter values.

    Science.gov (United States)

    Petta, Salvatore; Wong, Vincent Wai-Sun; Cammà, Calogero; Hiriart, Jean-Baptiste; Wong, Grace Lai-Hung; Marra, Fabio; Vergniol, Julien; Chan, Anthony Wing-Hung; Di Marco, Vito; Merrouche, Wassil; Chan, Henry Lik-Yuen; Barbara, Marco; Le-Bail, Brigitte; Arena, Umberto; Craxì, Antonio; de Ledinghen, Victor

    2017-04-01

    Liver stiffness measurement (LSM) frequently overestimates the severity of liver fibrosis in nonalcoholic fatty liver disease (NAFLD). Controlled attenuation parameter (CAP) is a new parameter provided by the same machine used for LSM and is associated with both steatosis and body mass index, the two factors that most affect LSM performance in NAFLD. We aimed to determine whether prediction of liver fibrosis by LSM in NAFLD patients is affected by CAP values. Patients (n = 324) were assessed by clinical and histological (Kleiner score) features. LSM and CAP were performed using the M probe. CAP values were grouped by tertiles (lower 132-298, middle 299-338, higher 339-400 dB/m). Among patients with F0-F2 fibrosis, mean LSM values, expressed in kilopascals, increased across CAP tertiles (6.8 versus 8.6 versus 9.4, P = 0.001), and accordingly the area under the curve of LSM for the diagnosis of F3-F4 fibrosis was progressively reduced from the lower to the middle and higher CAP tertiles (0.915, 0.848-0.982; 0.830, 0.753-0.908; 0.806, 0.723-0.890). As a consequence, in subjects with F0-F2 fibrosis, the rate of false-positive LSM results for F3-F4 fibrosis increased across CAP tertiles (7.2% in lower versus 16.6% in middle versus 18.1% in higher). Consistent with this, a decisional flowchart for predicting fibrosis was suggested by combining both LSM and CAP values. In patients with NAFLD, CAP values should always be taken into account in order to avoid overestimation of liver fibrosis assessed by transient elastography. (Hepatology 2017;65:1145-1155). © 2016 by the American Association for the Study of Liver Diseases.
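The per-tertile area under the curve (AUC) reported here can be computed for any continuous marker with the Mann-Whitney formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative case (ties count half). A small sketch with hypothetical LSM values, not the study's data:

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """AUC of a continuous marker (e.g., LSM in kPa) for a binary outcome,
    computed as the Mann-Whitney win probability of positives over
    negatives; ties contribute one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical LSM values (kPa): F3-F4 patients vs F0-F2 patients
print(auc_mann_whitney([12.1, 9.8, 8.0], [6.8, 8.6, 9.4, 10.2]))
```

Running this separately within each CAP tertile reproduces the kind of subgroup AUC comparison the abstract describes.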

  3. Quantitative dual energy CT measurements in rabbit VX2 liver tumors: Comparison to perfusion CT measurements and histopathological findings

    International Nuclear Information System (INIS)

    Zhang, Long Jiang; Wu, Shengyong; Wang, Mei; Lu, Li; Chen, Bo; Jin, Lixin; Wang, Jiandong; Larson, Andrew C.; Lu, Guang Ming

    2012-01-01

    Purpose: To evaluate the correlation between quantitative dual energy CT and perfusion CT measurements in rabbit VX2 liver tumors. Materials and methods: This study was approved by the institutional animal care and use committee at our institution. Nine rabbits with VX2 liver tumors underwent contrast-enhanced dual energy CT and perfusion CT. CT attenuation for the tumors and normal liver parenchyma and the tumor-to-liver ratio were obtained from the 140 kVp, 80 kVp, and average weighted images and from dual energy CT iodine maps. Quantitative parameters for the viable tumor and adjacent liver were measured with perfusion CT. The correlation between the enhancement values of the tumor in iodine maps and the perfusion CT parameters of each tumor was analyzed. Radiation dose from dual energy CT and perfusion CT was measured. Results: Enhancement values for the tumor were higher than those for normal liver parenchyma at the hepatic arterial phase (P < 0.05). The highest tumor-to-liver ratio was obtained in the hepatic arterial phase iodine map. Hepatic blood flow of the tumor was higher than that of adjacent liver (P < 0.05). Enhancement values of hepatic tumors in the iodine maps positively correlated with permeability of the capillary vessel surface (r = 0.913, P < 0.001), hepatic blood flow (r = 0.512, P = 0.010), and hepatic blood volume (r = 0.464, P = 0.022) at the hepatic arterial phase. The effective radiation dose from perfusion CT was higher than that from DECT (P < 0.001). Conclusions: The enhancement values for viable tumor tissues measured in iodine maps were well correlated with perfusion CT measurements in rabbit VX2 liver tumors. Compared with perfusion CT, dual energy CT of the liver required a lower radiation dose.
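Correlations like those reported between iodine-map enhancement and perfusion parameters are ordinary Pearson coefficients over paired per-tumor measurements. A sketch with hypothetical paired values, not the study's data:

```python
from scipy.stats import pearsonr

# Hypothetical paired values per tumor: iodine-map enhancement (HU)
# vs perfusion-CT hepatic blood flow, for illustration only
enhancement = [42.0, 55.3, 38.7, 61.2, 47.5, 52.8]
blood_flow = [118.0, 141.0, 109.0, 162.0, 125.0, 137.0]

# Pearson r quantifies the linear association; p tests r = 0
r, p = pearsonr(enhancement, blood_flow)
print(f"r = {r:.3f}, P = {p:.4f}")
```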

  4. MO-G-BRE-06: Metrics of Success: Measuring Participation and Attitudes Related to Near-Miss Incident Learning Systems

    International Nuclear Information System (INIS)

    Nyflot, MJ; Kusano, AS; Zeng, J; Carlson, JC; Novak, A; Sponseller, P; Jordan, L; Kane, G; Ford, EC

    2014-01-01

    Purpose: Interest in incident learning systems (ILS) for improving safety and quality in radiation oncology is growing, as evidenced by the upcoming release of the national ILS. However, an institution implementing such a system would benefit from quantitative metrics to evaluate performance and impact. We developed metrics to measure volume of reporting, severity of reported incidents, and changes in staff attitudes over time from implementation of our institutional ILS. Methods: We analyzed 2023 incidents from our departmental ILS from 2/2012–2/2014. Incidents were prospectively assigned a near-miss severity index (NMSI) at multidisciplinary review to evaluate the potential for error ranging from 0 to 4 (no harm to critical). Total incidents reported, unique users reporting, and average NMSI were evaluated over time. Additionally, departmental safety attitudes were assessed through a 26 point survey adapted from the AHRQ Hospital Survey on Patient Safety Culture before, 12 months, and 24 months after implementation of the incident learning system. Results: Participation in the ILS increased as demonstrated by total reports (approximately 2.12 additional reports/month) and unique users reporting (0.51 additional users reporting/month). Also, the average NMSI of reports trended lower over time, significantly decreasing after 12 months of reporting (p<0.001) but with no significant change at months 18 or 24. In survey data significant improvements were noted in many dimensions, including perceived barriers to reporting incidents such as concern of embarrassment (37% to 18%; p=0.02) as well as knowledge of what incidents to report, how to report them, and confidence that these reports were used to improve safety processes. Conclusion: Over a two-year period, our departmental ILS was used more frequently, incidents became less severe, and staff confidence in the system improved. 
The metrics used here may be useful for other institutions seeking to create or evaluate
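The reporting-volume metric quoted above (additional reports per month) is a least-squares slope fitted to monthly report counts. A sketch with hypothetical counts, not the study's data:

```python
import numpy as np

# Hypothetical monthly report totals over 12 months of an incident
# learning system; the fitted slope estimates reporting growth
months = np.arange(12)
reports = np.array([40, 43, 44, 48, 49, 53, 55, 58, 60, 62, 65, 67])

# Degree-1 polynomial fit = ordinary least-squares line
slope, intercept = np.polyfit(months, reports, 1)
print(f"{slope:.2f} additional reports per month")
```

The same fit applied to unique reporting users per month, or to mean NMSI per month, yields the other two trend metrics described in the abstract.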

  5. MO-G-BRE-06: Metrics of Success: Measuring Participation and Attitudes Related to Near-Miss Incident Learning Systems

    Energy Technology Data Exchange (ETDEWEB)

    Nyflot, MJ; Kusano, AS; Zeng, J; Carlson, JC; Novak, A; Sponseller, P; Jordan, L; Kane, G; Ford, EC [University of Washington, Seattle, WA (United States)

    2014-06-15

    Purpose: Interest in incident learning systems (ILS) for improving safety and quality in radiation oncology is growing, as evidenced by the upcoming release of the national ILS. However, an institution implementing such a system would benefit from quantitative metrics to evaluate performance and impact. We developed metrics to measure volume of reporting, severity of reported incidents, and changes in staff attitudes over time from implementation of our institutional ILS. Methods: We analyzed 2023 incidents from our departmental ILS from 2/2012–2/2014. Incidents were prospectively assigned a near-miss severity index (NMSI) at multidisciplinary review to evaluate the potential for error ranging from 0 to 4 (no harm to critical). Total incidents reported, unique users reporting, and average NMSI were evaluated over time. Additionally, departmental safety attitudes were assessed through a 26 point survey adapted from the AHRQ Hospital Survey on Patient Safety Culture before, 12 months, and 24 months after implementation of the incident learning system. Results: Participation in the ILS increased as demonstrated by total reports (approximately 2.12 additional reports/month) and unique users reporting (0.51 additional users reporting/month). Also, the average NMSI of reports trended lower over time, significantly decreasing after 12 months of reporting (p<0.001) but with no significant change at months 18 or 24. In survey data significant improvements were noted in many dimensions, including perceived barriers to reporting incidents such as concern of embarrassment (37% to 18%; p=0.02) as well as knowledge of what incidents to report, how to report them, and confidence that these reports were used to improve safety processes. Conclusion: Over a two-year period, our departmental ILS was used more frequently, incidents became less severe, and staff confidence in the system improved. 
The metrics used here may be useful for other institutions seeking to create or evaluate

  6. Use of performance metrics for the measurement of universal coverage for maternal care in Mexico.

    Science.gov (United States)

    Serván-Mori, Edson; Contreras-Loya, David; Gomez-Dantés, Octavio; Nigenda, Gustavo; Sosa-Rubí, Sandra G; Lozano, Rafael

    2017-06-01

    This study provides evidence for those working in the maternal health metrics and health system performance fields, as well as those interested in achieving universal and effective health care coverage. Based on the perspective of continuity of health care and applying quasi-experimental methods to analyse the cross-sectional 2009 National Demographic Dynamics Survey (n = 14 414 women), we estimated the medium-term effects of Mexico's new public health insurance scheme, Seguro Popular de Salud (SPS) (vs women without health insurance), on seven indicators related to maternal health care (according to official guidelines): (a) access to skilled antenatal care (ANC); (b) timely ANC; (c) frequent ANC; (d) adequate content of ANC; (e) institutional delivery; (f) postnatal consultation and (g) access to standardized comprehensive antenatal and postnatal care (or the intersection of the seven process indicators). Our results show that 94% of all pregnancies were attended by trained health personnel. However, comprehensive access to ANC declines steeply in both groups as we move along the maternal healthcare continuum. The percentage of institutional deliveries providing timely, frequent and adequate content of ANC reached 70% among SPS women (vs 64.7% in the uninsured), and only 57.4% of SPS-affiliated women received standardized comprehensive care (vs 53.7% in the uninsured group). In Mexico, access to comprehensive antenatal and postnatal care as defined by Mexican guidelines (in accordance with WHO recommendations) is far from optimal. Even though a positive influence of SPS on maternal care was documented, important challenges still remain. Our results identified key bottlenecks of the maternal healthcare continuum that should be addressed by policy makers through a combination of supply side interventions and interventions directed to social determinants of access to health care. © The Author 2017. Published by Oxford University Press in association with The

  7. Metric Indices for Performance Evaluation of a Mixed Measurement based State Estimator

    Directory of Open Access Journals (Sweden)

    Paula Sofia Vide

    2013-01-01

    Full Text Available With the development of synchronized phasor measurement technology in recent years, there has been great interest in using PMU measurements to improve state estimation performance, owing to their synchronized characteristics and high data transmission speed. The ability of Phasor Measurement Units (PMUs) to directly measure the system state is a key advantage over the SCADA measurement system. PMU measurements are superior to conventional SCADA measurements in terms of resolution and accuracy. Since the majority of measurements in existing estimators come from the conventional SCADA measurement system, which is unlikely to be fully replaced by PMUs in the near future, state estimators that include both phasor and conventional SCADA measurements are being considered. In this paper, a mixed-measurement (SCADA and PMU) state estimator is proposed. Several useful measures for evaluating various aspects of the performance of the mixed-measurement state estimator are proposed and explained. State estimator validity, performance and characteristics of the results on the IEEE 14-bus and IEEE 30-bus test systems are presented.
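In the linear case, a mixed-measurement estimator of this kind reduces to weighted least squares in which PMU rows carry much larger weights (smaller measurement variances) than SCADA rows. A minimal sketch with a hypothetical two-state system and made-up measurement values:

```python
import numpy as np

# Hypothetical measurement model z = H x + noise: two PMU rows
# (direct state measurements) and one SCADA row (a difference)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])
z = np.array([1.02, 0.97, 0.04])          # measured values
sigma = np.array([0.001, 0.001, 0.01])    # PMU rows far more accurate

# Weighted least squares: x_hat = (H^T W H)^-1 H^T W z, W = diag(1/sigma^2)
W = np.diag(1.0 / sigma**2)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print(x_hat)
```

The estimate is pulled almost entirely toward the PMU rows, illustrating why their resolution and accuracy dominate the mixed estimator.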

  8. Validating New Software for Semiautomated Liver Volumetry--Better than Manual Measurement?

    Science.gov (United States)

    Noschinski, L E; Maiwald, B; Voigt, P; Wiltberger, G; Kahn, T; Stumpp, P

    2015-09-01

    This prospective study compared a manual program for liver volumetry with semiautomated software. The hypothesis was that the semiautomated software would be faster, more accurate and less dependent on the evaluator's experience. Ten patients undergoing hemihepatectomy were included in this IRB approved study after written informed consent. All patients underwent a preoperative abdominal 3-phase CT scan, which was used for whole liver volumetry and volume prediction for the liver part to be resected. Two different types of software were used: 1) manual method: borders of the liver had to be defined per slice by the user; 2) semiautomated software: automatic identification of liver volume with manual assistance for definition of Couinaud segments. Measurements were done by six observers with different experience levels. Water displacement volumetry immediately after partial liver resection served as the gold standard. The resected part was examined with a CT scan after displacement volumetry. Volumetry of the resected liver scan showed excellent correlation to water displacement volumetry (manual: ρ = 0.997; semiautomated software: ρ = 0.995). The difference between the predicted volume and the real volume was significantly smaller with the semiautomated software than with the manual method (33% vs. 57%, p = 0.002). The semiautomated software was almost four times faster for volumetry of the whole liver (manual: 6:59 ± 3:04 min; semiautomated: 1:47 ± 1:11 min). Both methods for liver volumetry give an estimated liver volume close to the real one. The tested semiautomated software is faster, more accurate in predicting the volume of the resected liver part, gives more reproducible results and is less dependent on the user's experience. Both tested types of software allow exact volumetry of resected liver parts. Preoperative prediction can be performed more accurately with the semiautomated software. The semiautomated software is nearly four times faster than the

  9. CWT and RWT Metrics Measure the Performance of the Army's Logistics Chain for Spare Parts

    National Research Council Canada - National Science Library

    2003-01-01

    .... As part of its efforts to improve the logistics chain for spare parts, the Army must measure the performance of its supply system in filling orders for materiel. Velocity Management (VM) is a RAND-developed and Army-implemented system that measures such performance and seeks ways to improve it through its Define-Measure-Improve (DMI) methodology. As the term DMI implies, measurement is central to this improvement approach.

  10. Cardiovascular health metrics and accelerometer-measured physical activity levels: National Health and Nutrition Examination Survey, 2003-2006.

    Science.gov (United States)

    Barreira, Tiago V; Harrington, Deirdre M; Katzmarzyk, Peter T

    2014-01-01

    To determine whether relationships exist between accelerometer-measured moderate-to-vigorous physical activity (MVPA) and other cardiovascular (CV) health metrics in a large sample. Data from the 2003-2006 National Health and Nutrition Examination Survey (NHANES) collected from January 1, 2003, through December 31, 2006, were used. Overall, 3454 nonpregnant adults 20 years or older who fasted for 6 hours or longer, with valid accelerometer data and with CV health metrics, were included in the study. Blood pressure (BP), body mass index (BMI), smoking status, diet, fasting plasma glucose level, and total cholesterol level were defined as ideal, intermediate, and poor on the basis of American Heart Association criteria. Results were weighted to account for sampling design, oversampling, and nonresponse. Significant increasing linear trends in mean daily MVPA were observed across CV health levels for BMI, BP, and fasting plasma glucose, supporting the inclusion of physical activity in the overall definition of ideal CV health. Copyright © 2014 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
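A linear trend across ordered health levels, as tested here, can be sketched as a regression of MVPA on the category code (0 = poor, 1 = intermediate, 2 = ideal). The values below are hypothetical, not NHANES data:

```python
from scipy.stats import linregress

# Hypothetical mean daily MVPA (minutes) for subjects at each
# CV-health level of one metric (0 = poor, 1 = intermediate, 2 = ideal)
levels = [0] * 4 + [1] * 4 + [2] * 4
mvpa = [12, 15, 10, 14, 20, 22, 19, 24, 30, 28, 33, 31]

# Slope tests for a linear trend in MVPA across ordered levels
result = linregress(levels, mvpa)
print(f"slope = {result.slope:.1f} min/level, p = {result.pvalue:.2g}")
```

A positive, significant slope corresponds to the "significant increasing linear trend" language in the abstract (the survey weighting used in NHANES is omitted here for simplicity).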

  11. Measurement of liver and spleen volume by computed tomography using point counting technique

    International Nuclear Information System (INIS)

    Matsuda, Yoshiro; Sato, Hiroyuki; Nei, Jinichi; Takada, Akira

    1982-01-01

    We devised a new method for measurement of liver and spleen volume by computed tomography using point counting technique. This method is very simple and applicable to any kind of CT scanner. The volumes of the livers and spleens estimated by this method were significantly correlated with the weights of the corresponding organs measured on autopsy or surgical operation, indicating clinical usefulness of this method. Hepatic and splenic volumes were estimated by this method in 43 patients with chronic liver disease and 9 subjects with non-hepatobiliary disease. The mean hepatic volume in non-alcoholic liver cirrhosis was significantly smaller than those in non-hepatobiliary disease and other chronic liver diseases. The mean hepatic volume in alcoholic cirrhosis and alcoholic fibrosis tended to be slightly larger than that in non-hepatobiliary disease. The mean splenic volume in liver cirrhosis was significantly larger than those in non-hepatobiliary disease and other chronic liver diseases. However, there was no significant difference of the mean splenic volume between alcoholic and non-alcoholic cirrhosis. Significantly positive correlation between hepatic and splenic volumes was found in alcoholic cirrhosis, but not in non-alcoholic cirrhosis. These results indicate that estimation of hepatic and splenic volumes by this method is useful for the analysis of the pathophysiological condition of chronic liver diseases. (author)
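Point counting of this kind (the Cavalieri method) estimates organ volume by multiplying the number of grid points falling on the organ in each CT slice by the area each point represents and by the slice thickness. A sketch with hypothetical counts and grid dimensions:

```python
def cavalieri_volume(points_per_slice, grid_spacing_cm, slice_thickness_cm):
    """Point-counting (Cavalieri) volume estimate: each counted grid
    point on a slice represents grid_spacing^2 of cross-sectional area,
    and each slice contributes area x thickness of volume."""
    area_per_point = grid_spacing_cm ** 2
    return sum(points_per_slice) * area_per_point * slice_thickness_cm

# Hypothetical counts over five CT slices through a liver
volume = cavalieri_volume([110, 180, 210, 160, 90],
                          grid_spacing_cm=1.0, slice_thickness_cm=1.0)
print(volume, "cm^3")  # -> 750.0 cm^3
```

Because only point counts are needed, the method works on any scanner's images, which is the simplicity the abstract emphasizes.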

  12. Tracker Performance Metric

    National Research Council Canada - National Science Library

    Olson, Teresa; Lee, Harry; Sanders, Johnnie

    2002-01-01

    .... We have developed the Tracker Performance Metric (TPM) specifically for this purpose. It was designed to measure the output performance, on a frame-by-frame basis, using its output position and quality...

  13. Validating new software for semiautomated liver volumetry. Better than manual measurement?

    Energy Technology Data Exchange (ETDEWEB)

    Noschinski, L.E.; Maiwald, B.; Voigt, P.; Kahn, T.; Stumpp, P. [University Hospital Leipzig (Germany). Dept. of Diagnostic and Interventional Radiology; Wiltberger, G. [University Hospital Leipzig (Germany). Dept. of Visceral, Transplantation, Thoracic and Vascular Surgery

    2015-09-15

    This prospective study compared a manual program for liver volumetry with semiautomated software. The hypothesis was that the semiautomated software would be faster, more accurate and less dependent on the evaluator's experience. Ten patients undergoing hemihepatectomy were included in this IRB approved study after written informed consent. All patients underwent a preoperative abdominal 3-phase CT scan, which was used for whole liver volumetry and volume prediction for the liver part to be resected. Two different types of software were used: 1) manual method: borders of the liver had to be defined per slice by the user; 2) semiautomated software: automatic identification of liver volume with manual assistance for definition of Couinaud segments. Measurements were done by six observers with different experience levels. Water displacement volumetry immediately after partial liver resection served as the gold standard. The resected part was examined with a CT scan after displacement volumetry. Volumetry of the resected liver scan showed excellent correlation to water displacement volumetry (manual: ρ = 0.997; semiautomated software: ρ = 0.995). The difference between the predicted volume and the real volume was significantly smaller with the semiautomated software than with the manual method (33 % vs. 57 %, p = 0.002). The semiautomated software was almost four times faster for volumetry of the whole liver (manual: 6:59 ± 3:04min; semiautomated: 1:47 ± 1:11 min). Both methods for liver volumetry give an estimated liver volume close to the real one. The tested semiautomated software is faster, more accurate in predicting the volume of the resected liver part, gives more reproducible results and is less dependent on the user's experience.

  14. Validating new software for semiautomated liver volumetry. Better than manual measurement?

    International Nuclear Information System (INIS)

    Noschinski, L.E.; Maiwald, B.; Voigt, P.; Kahn, T.; Stumpp, P.; Wiltberger, G.

    2015-01-01

    This prospective study compared a manual program for liver volumetry with semiautomated software. The hypothesis was that the semiautomated software would be faster, more accurate and less dependent on the evaluator's experience. Ten patients undergoing hemihepatectomy were included in this IRB approved study after written informed consent. All patients underwent a preoperative abdominal 3-phase CT scan, which was used for whole liver volumetry and volume prediction for the liver part to be resected. Two different types of software were used: 1) manual method: borders of the liver had to be defined per slice by the user; 2) semiautomated software: automatic identification of liver volume with manual assistance for definition of Couinaud segments. Measurements were done by six observers with different experience levels. Water displacement volumetry immediately after partial liver resection served as the gold standard. The resected part was examined with a CT scan after displacement volumetry. Volumetry of the resected liver scan showed excellent correlation to water displacement volumetry (manual: ρ = 0.997; semiautomated software: ρ = 0.995). The difference between the predicted volume and the real volume was significantly smaller with the semiautomated software than with the manual method (33 % vs. 57 %, p = 0.002). The semiautomated software was almost four times faster for volumetry of the whole liver (manual: 6:59 ± 3:04min; semiautomated: 1:47 ± 1:11 min). Both methods for liver volumetry give an estimated liver volume close to the real one. The tested semiautomated software is faster, more accurate in predicting the volume of the resected liver part, gives more reproducible results and is less dependent on the user's experience.

  15. Commutators of Littlewood-Paley $g_{\kappa}^{*}$-functions on non-homogeneous metric measure spaces

    Directory of Open Access Journals (Sweden)

    Lu Guanghui

    2017-11-01

    Full Text Available The main purpose of this paper is to prove the boundedness of the commutator $\mathcal{M}_{\kappa,b}^{*}$ generated by the Littlewood-Paley operator $\mathcal{M}_{\kappa}^{*}$ and an RBMO(μ) function on non-homogeneous metric measure spaces satisfying the upper doubling and geometrically doubling conditions. Under the assumption that the kernel of $\mathcal{M}_{\kappa}^{*}$ satisfies a certain Hörmander-type condition, the authors prove that $\mathcal{M}_{\kappa,b}^{*}$ is bounded on the Lebesgue spaces $L^{p}(\mu)$ for 1 < p < ∞, bounded from the space L log L(μ) to the weak Lebesgue space $L^{1,\infty}(\mu)$, and bounded from the atomic Hardy space $H^{1}(\mu)$ to the weak Lebesgue space $L^{1,\infty}(\mu)$.

  16. Model and methods to assess hepatic function from indocyanine green fluorescence dynamical measurements of liver tissue.

    Science.gov (United States)

    Audebert, Chloe; Vignon-Clementel, Irene E

    2018-03-30

    The indocyanine green (ICG) clearance, presented as plasma disappearance rate, is presently a reliable method to estimate hepatic "function". However, this technique is not instantaneously available and thus cannot be used intra-operatively (during liver surgery). Near-infrared spectroscopy enables assessment of hepatic ICG concentration over time in the liver tissue. This article proposes to extract more information from the liver intensity dynamics by interpreting it through a dedicated pharmacokinetics model. To account for the different exchanges between the liver tissues, the proposed model includes three liver compartments (sinusoids, hepatocytes and bile canaliculi). The dependency of the model output on its parameters is studied with sensitivity analysis and by solving an inverse problem on synthetic data. The model parameters are then estimated from in-vivo measurements in rabbits (El-Desoky et al. 1999). Parameters for different liver states are estimated, and their link with liver function is investigated. A non-linear (Michaelis-Menten type) excretion rate from the hepatocytes to the bile canaliculi was necessary to reproduce the measurements for different liver conditions. In the case of bile duct ligation, the model suggests that this rate is reduced and that ICG is stored in the hepatocytes. Moreover, the level of ICG remains high in the blood following ligation of the bile duct. The percentage of retention of indocyanine green in blood, a common test for hepatic function estimation, is also investigated with the model, and the impact of bile duct ligation and reduced liver inflow on the percentage of ICG retention in blood is studied. The estimation of the pharmacokinetics model parameters may lead to an evaluation of different liver functions. Copyright © 2018 Elsevier B.V. All rights reserved.
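The three-compartment structure described (sinusoids, hepatocytes, bile canaliculi, with a Michaelis-Menten excretion term into bile) can be sketched as a small ODE system. All rate constants and the inflow term below are hypothetical placeholders, not the fitted rabbit values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rate constants (arbitrary units), for illustration only
k_in, k_uptake, vmax, km, k_out = 0.5, 0.3, 0.2, 1.0, 0.4

def rhs(t, y):
    """ICG amounts in sinusoids (s), hepatocytes (h), bile canaliculi (b)."""
    s, h, b = y
    uptake = k_uptake * s                 # sinusoid -> hepatocyte transport
    excretion = vmax * h / (km + h)       # saturable hepatocyte -> bile flux
    inflow = k_in * np.exp(-t)            # decaying arterial ICG supply
    return [inflow - uptake - k_out * s,  # sinusoids: in, uptake, washout
            uptake - excretion,           # hepatocytes
            excretion]                    # bile accumulates

sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.0, 0.0])
s, h, b = sol.y[:, -1]
print(f"final bile amount = {b:.3f}")
```

Reducing `vmax` mimics the bile-duct-ligation scenario the abstract describes: excretion falls and ICG accumulates in the hepatocyte compartment instead of the bile.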

  17. Methods of measuring metabolism during surgery in humans: focus on the liver-brain relationship.

    Science.gov (United States)

    Battezzati, Alberto; Bertoli, Simona

    2004-09-01

    The purpose of this work is to review recent advances in methods and models for measuring metabolism during surgery in humans. Surgery, especially solid organ transplantation, may offer unique experimental models in which it is ethically acceptable to gain information on difficult problems of amino acid and protein metabolism. Two areas are reviewed: the metabolic study of the anhepatic phase during liver transplantation and brain microdialysis during cerebral surgery. The first model offers an innovative approach to understanding the relative roles of the liver and extrahepatic organs in gluconeogenesis, and to evaluating whether other organs can perform functions believed to be exclusively or almost exclusively performed by the liver. The second model offers insight into intracerebral metabolism, which is closely bound to that of the liver. The recent advances in metabolic research during surgery provide knowledge immediately useful for perioperative patient management and for better control of surgical stress. The studies during the anhepatic phase of liver transplantation have shown that gluconeogenesis and glutamine metabolism are very active processes outside the liver. One of the critical organs for extrahepatic glutamine metabolism is the brain. Microdialysis studies helped to prove that in humans there is intense trafficking of glutamine, glutamate and alanine among neurons and astrocytes. This delicate network is influenced by systemic amino acid metabolism. The metabolic dialogue between the liver and the brain is beginning to be understood in this light, in order to explain the metabolic events of brain damage during liver failure.

  18. The PROMIS Physical Function item bank was calibrated to a standardized metric and shown to improve measurement efficiency.

    Science.gov (United States)

    Rose, Matthias; Bjorner, Jakob B; Gandek, Barbara; Bruce, Bonnie; Fries, James F; Ware, John E

    2014-05-01

    To document the development and psychometric evaluation of the Patient-Reported Outcomes Measurement Information System (PROMIS) Physical Function (PF) item bank and static instruments. The items were evaluated using qualitative and quantitative methods. A total of 16,065 adults answered item subsets (n>2,200/item) on the Internet, with oversampling of the chronically ill. Classical test and item response theory methods were used to evaluate 149 PROMIS PF items plus 10 Short Form-36 and 20 Health Assessment Questionnaire-Disability Index items. A graded response model was used to estimate item parameters, which were normed to a mean of 50 (standard deviation [SD]=10) in a US general population sample. The final bank consists of 124 PROMIS items covering upper, central, and lower extremity functions and instrumental activities of daily living. In simulations, a 10-item computerized adaptive test (CAT) eliminated floor and decreased ceiling effects, achieving higher measurement precision than any comparable length static tool across four SDs of the measurement range. Improved psychometric properties were transferred to the CAT's superior ability to identify differences between age and disease groups. The item bank provides a common metric and can improve the measurement of PF by facilitating the standardization of patient-reported outcome measures and implementation of CATs for more efficient PF assessments over a larger range. Copyright © 2014. Published by Elsevier Inc.
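The T-score norming used here (mean 50, SD 10 in a US general population sample) is a linear rescaling of the IRT ability estimates against the reference distribution. A sketch with hypothetical theta values, not PROMIS calibration data:

```python
import statistics

def to_t_scores(thetas, ref_mean, ref_sd):
    """Re-express IRT ability estimates on a T-score metric normed to a
    reference population: reference mean maps to 50, one reference SD
    maps to 10 points."""
    return [50 + 10 * (t - ref_mean) / ref_sd for t in thetas]

# Hypothetical theta estimates for a reference sample and two new scores
reference = [-0.2, 0.1, 0.0, 0.3, -0.4, 0.2]
mu, sd = statistics.mean(reference), statistics.stdev(reference)
print(to_t_scores([0.0, 0.5], mu, sd))
```

Because every item in the bank is calibrated to this same metric, scores from different item subsets, including CAT administrations, remain directly comparable.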

  19. Measuring Cognitive Load and Cognition: Metrics for Technology-Enhanced Learning

    Science.gov (United States)

    Martin, Stewart

    2014-01-01

    This critical and reflective literature review examines international research published over the last decade to summarise the different kinds of measures that have been used to explore cognitive load and critiques the strengths and limitations of those focussed on the development of direct empirical approaches. Over the last 40 years, cognitive…

  20. Liver stiffness measurement-based scoring system for significant inflammation related to chronic hepatitis B.

    Directory of Open Access Journals (Sweden)

    Mei-Zhu Hong

    Full Text Available Liver biopsy is indispensable because liver stiffness measurement alone cannot provide information on intrahepatic inflammation. However, the presence of fibrosis highly correlates with inflammation. We constructed a noninvasive model to determine significant inflammation in chronic hepatitis B patients by using liver stiffness measurement and serum markers. The training set included chronic hepatitis B patients (n = 327) and the validation set included 106 patients; liver biopsies were performed, liver histology was scored, and serum markers were investigated. All patients underwent liver stiffness measurement. An inflammation activity scoring system for significant inflammation was constructed. In the training set, the area under the curve, sensitivity, and specificity of the fibrosis-based activity score were 0.964, 91.9%, and 90.8% in the HBeAg(+) patients and 0.978, 85.0%, and 94.0% in the HBeAg(-) patients, respectively. In the validation set, the corresponding values were 0.971, 90.5%, and 92.5% in the HBeAg(+) patients and 0.977, 95.2%, and 95.8% in the HBeAg(-) patients. The liver stiffness measurement-based activity score was comparable to the fibrosis-based activity score in both HBeAg(+) and HBeAg(-) patients for recognizing significant inflammation (G ≥ 3). Significant inflammation can be accurately predicted by this novel method. The liver stiffness measurement-based scoring system can be used without the aid of computers and provides a noninvasive alternative for the prediction of chronic hepatitis B-related significant inflammation.

  1. Square root metric in the analysis of the measurements of high energy physics instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Petrillo, L; Severi, M [Istituto Nazionale di Fisica Nucleare, Rome (Italy); Rome Univ. (Italy). Ist. di Fisica)

    1982-09-15

    A vast category of detectors is characterized by a charge output whose distribution has a variance proportional to the mean value. In all these cases a square-root scale implemented in the hardware is more suitable from the point of view of the resolution of the measurements, the number of ADC channels needed, and preliminary analysis during tuning and checking of the detector.
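The variance-stabilizing effect that motivates a hardware square-root scale can be sketched numerically. The Gaussian charge model (variance equal to the mean, as the abstract states for this detector class) and the sample sizes below are illustrative assumptions:

```python
import math
import random
import statistics

random.seed(0)

# Toy model: charge output approximated as Gaussian with variance = mean.
sqrt_scale_sd = {}
for mean in (100, 1_000, 10_000):
    charges = [random.gauss(mean, math.sqrt(mean)) for _ in range(50_000)]
    # Raw scale: the spread grows like sqrt(mean), so low-amplitude signals
    # are over-resolved and high-amplitude ones under-resolved.
    raw_sd = statistics.stdev(charges)
    # Square-root scale: the spread is nearly constant (~0.5), so
    # equal-width ADC channels give uniform resolution across the range.
    sqrt_scale_sd[mean] = statistics.stdev(math.sqrt(c) for c in charges)
    print(f"mean={mean:6d}  raw SD={raw_sd:8.2f}  "
          f"sqrt-scale SD={sqrt_scale_sd[mean]:.3f}")
```

Because the square-root-scale spread is roughly the same at every amplitude, a fixed number of ADC channels on that scale covers the full dynamic range without wasting resolution, which is the point the abstract makes.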

  2. Assessment of impact factors on shear wave based liver stiffness measurement

    Energy Technology Data Exchange (ETDEWEB)

    Ling, Wenwu, E-mail: lingwenwubing@163.com [Department of Ultrasound, West China Hospital of Sichuan University, Chengdu 610041 (China); Lu, Qiang, E-mail: wsluqiang@126.com [Department of Ultrasound, West China Hospital of Sichuan University, Chengdu 610041 (China); Quan, Jierong, E-mail: quanjierong@163.com [Department of Ultrasound, West China Hospital of Sichuan University, Chengdu 610041 (China); Ma, Lin, E-mail: malin2010US@163.com [Department of Ultrasound, West China Hospital of Sichuan University, Chengdu 610041 (China); Luo, Yan, E-mail: huaxiluoyan@gmail.com [Department of Ultrasound, West China Hospital of Sichuan University, Chengdu 610041 (China)

    2013-02-15

    Shear wave based ultrasound elastography has been implemented as a non-invasive method for quantitative assessment of liver stiffness. Nonetheless, only a few studies have investigated impact factors on liver stiffness measurement (LSM), and standard examination protocols for LSM are still lacking in clinical practice. Our study aimed to assess the impact factors on LSM in order to establish standard examination protocols for clinical practice. We applied shear wave based elastography point quantification (ElastPQ) in 21 healthy individuals to determine the impact of liver location (segments I–VIII), breathing phase (end-inspiration and end-expiration), probe position (sub-costal and inter-costal) and examiner on LSM. Additional studies in 175 healthy individuals were performed to determine the influence of gender and age on liver stiffness. We found a significant impact of liver location on LSM, with liver segment V displaying the lowest coefficient of variation (CV 21%). Liver stiffness at end-expiration was significantly higher than at end-inspiration (P = 2.1E−05), and was 8% higher in men than in women (3.8 ± 0.7 kPa vs. 3.5 ± 0.4 kPa, P = 0.0168). In contrast, liver stiffness was comparable across probe positions, examiners and age groups (P > 0.05). In conclusion, this study reveals a significant impact of liver location, breathing phase and gender on LSM, and underscores the need for standard examination protocols for LSM.

  3. Microcomputer-based tests for repeated-measures: Metric properties and predictive validities

    Science.gov (United States)

    Kennedy, Robert S.; Baltzley, Dennis R.; Dunlap, William P.; Wilkes, Robert L.; Kuntz, Lois-Ann

    1989-01-01

    A menu of psychomotor and mental acuity tests was refined. Field applications of such a battery include, for example, studying the effects of toxic agents or exotic environments on performance readiness, or determining fitness for duty. The key requirement of these tasks is that they be suitable for repeated-measures applications, so questions of stability and reliability are a continuing, central focus of this work. After the initial (practice) session, seven replications of 14 microcomputer-based performance tests (32 measures) were completed by 37 subjects. Each test in the battery had previously been shown to stabilize in less than five 90-second administrations and to possess retest reliabilities greater than r = 0.707 for three minutes of testing. However, the tests had never been administered together as a battery, nor had they been self-administered. To provide predictive validity for intelligence measurement, the Wechsler Adult Intelligence Scale-Revised and the Wonderlic Personnel Test were obtained on the same subjects.

  4. Measuring fitness of Kenyan children with polyparasitic infections using the 20-meter shuttle run test as a morbidity metric.

    Directory of Open Access Journals (Sweden)

    Amaya L Bustinduy

    2011-07-01

    Full Text Available To date, there has been no standardized approach to the assessment of aerobic fitness among children who harbor parasites. In quantifying the disability associated with individual or multiple chronic infections, accurate measures of physical fitness are important metrics. This is because exercise intolerance, as seen with anemia and many other chronic disorders, reflects the body's inability to maintain adequate oxygen supply (VO2 max) to the motor tissues, which is frequently linked to reduced quality-of-life in terms of physical and job performance. The objective of our study was to examine the associations between polyparasitism, anemia, and reduced fitness in a high-risk Kenyan population using a novel implementation of the 20-meter shuttle run test (20 mSRT), a well-standardized, low-technology physical fitness test. Four villages in coastal Kenya were surveyed during 2009-2010. Children 5-18 years were tested for infection with Schistosoma haematobium (Sh), malaria, filaria, and geohelminth infections by standard methods. After anthropometric and hemoglobin testing, fitness was assessed with the 20 mSRT. The 20 mSRT proved easy to perform, requiring only minimal staff training. Parasitology revealed a high prevalence of single and multiple parasitic infections in all villages, with Sh being the most common (25-62%). Anemia prevalence was 45-58%. Using multiply-adjusted linear modeling that accounted for household clustering, decreased aerobic capacity was significantly associated with anemia, stunting, and wasting, with some gender differences. The 20 mSRT, which has excellent correlation with VO2 max, is a highly feasible fitness test for low-resource settings. Our results indicate impaired fitness is common in areas endemic for parasites, where, at least in part, low fitness scores are likely to result from anemia and stunting associated with chronic infection. The 20 mSRT should be used as a common metric to quantify physical fitness and compare sub

  5. Reference Clinical Database for Fixation Stability Metrics in Normal Subjects Measured with the MAIA Microperimeter.

    Science.gov (United States)

    Morales, Marco U; Saker, Saker; Wilde, Craig; Pellizzari, Carlo; Pallikaris, Aristophanes; Notaroberto, Neil; Rubinstein, Martin; Rui, Chiara; Limoli, Paolo; Smolek, Michael K; Amoaku, Winfried M

    2016-11-01

    The purpose of this study was to establish a normal reference database for fixation stability measured with the bivariate contour ellipse area (BCEA) in the Macular Integrity Assessment (MAIA) microperimeter. Subjects were 358 healthy volunteers who had the MAIA examination. Fixation stability was assessed using two BCEA fixation indices (63% and 95% proportional values) and the percentage of fixation points within 1° and 2° from the fovea (P1 and P2). Statistical analysis was performed with linear regression and Pearson's product-moment correlation coefficient. Average areas of 0.80 deg² (min = 0.03, max = 3.90, SD = 0.68) for the index BCEA@63% and 2.40 deg² (min = 0.20, max = 11.70, SD = 2.04) for the index BCEA@95% were found. The average values of P1 and P2 were 95% (min = 76, max = 100, SD = 5.31) and 99% (min = 91, max = 100, SD = 1.42), respectively. Pearson's product-moment test showed an almost perfect correlation, r = 0.999, between BCEA@63% and BCEA@95%. Index P1 showed a very strong correlation with BCEA@63%, r = -0.924, as well as with BCEA@95%, r = -0.925. Index P2 demonstrated a slightly lower correlation with both BCEA@63% and BCEA@95%, r = -0.874 and -0.875, respectively. The single parameter BCEA@95% may be taken as accurately reporting fixation stability and serves as a reference database of normal subjects, with a cutoff area of 2.40 ± 2.04 deg² in the MAIA microperimeter. Fixation stability can be measured with different indices; this study provides reference fixation values for the MAIA using a single fixation index.
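The BCEA indices above have a standard closed form, BCEA = 2kπ·σx·σy·sqrt(1 − ρ²) with the enclosed proportion equal to 1 − e^(−k). A sketch of that computation (the formula is the standard one; the fixation coordinates would in practice come from the microperimeter's eye tracker):

```python
import math

def bcea(x_deg, y_deg, proportion=0.95):
    """Bivariate contour ellipse area (deg^2) enclosing the given
    proportion of fixation points:
    BCEA = 2*k*pi*sx*sy*sqrt(1 - rho^2), with proportion = 1 - exp(-k)."""
    n = len(x_deg)
    mx, my = sum(x_deg) / n, sum(y_deg) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in x_deg) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in y_deg) / (n - 1))
    rho = (sum((x - mx) * (y - my) for x, y in zip(x_deg, y_deg))
           / ((n - 1) * sx * sy))
    k = -math.log(1.0 - proportion)
    return 2.0 * k * math.pi * sx * sy * math.sqrt(1.0 - rho ** 2)
```

Note that σx, σy, and ρ cancel in the ratio of the two indices, so BCEA@95% / BCEA@63% = ln(0.05)/ln(0.37) ≈ 3.01 for every subject. That fixed ratio is consistent with the near-perfect correlation (r = 0.999) between the two indices reported above, and with the study's conclusion that a single index suffices.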

  6. From Fractals to Fractional Vector Calculus: Measurement in the Correct Metric

    Science.gov (United States)

    Wheatcraft, S. W.; Meerschaert, M. M.; Mortensen, J.

    2005-12-01

    Traditional (stationary) stochastic theories have been fairly successful in reproducing transport behavior at relatively homogeneous field sites such as the Borden and Cape Cod sites. However, the highly heterogeneous MADE site has produced tracer data that cannot be adequately explained with traditional stochastic theories. In recent years, considerable attention has been focused on developing more sophisticated theories that can predict or reproduce the behavior of complex sites such as the MADE site. People began to realize that the model for geologic complexity may in many cases be very different from the model required for stochastic theory. Fractal approaches were useful in conceptualizing scale-invariant heterogeneity by demonstrating that scale-dependent transport was just an artifact of our measurement system. Fractal media have dimensions larger than the dimension in which measurement takes place, thus assuring the scale-dependence of parameters such as dispersivity. What was needed was a rigorous way to develop a theory that was consistent with the fractal dimension of the heterogeneity. The fractional advection-dispersion equation (FADE) was developed with this idea in mind. The second derivative in the dispersion term of the advection-dispersion equation is replaced with a fractional derivative. The order of differentiation, α, is fractional, with values in the range 1 < α ≤ 2; when α = 2, the traditional advection-dispersion equation is recovered. The 1-D version of the FADE has been used successfully to back-predict tracer test behavior at several heterogeneous field sites, including the MADE site. It has been hypothesized that the order of differentiation in the FADE is equivalent to (or at least related to) the fractal dimension of the particle tracks (or geologic heterogeneity). With this way of thinking, one can think of the FADE as a governing equation written for the correct dimension, thus eliminating scale-dependent behavior. Before a generalized multi-dimensional form of the FADE can be

  7. A sequence identification measurement model to investigate the implicit learning of metrical temporal patterns.

    Directory of Open Access Journals (Sweden)

    Benjamin G Schultz

    Full Text Available Implicit learning (IL) occurs unconsciously and without intention. Perceptual fluency is the ease of processing elicited by previous exposure to a stimulus. It has been assumed that perceptual fluency is associated with IL. However, the role of perceptual fluency following IL has not been investigated in temporal pattern learning. Two experiments by Schultz, Stevens, Keller, and Tillmann demonstrated the IL of auditory temporal patterns using a serial reaction-time task and a generation task based on the process dissociation procedure. The generation task demonstrated that learning was implicit in both experiments via motor fluency, that is, the inability to suppress learned information. With the aim of disentangling conscious and unconscious processes, we analyze unreported recognition data associated with the Schultz et al. experiments using the sequence identification measurement model. The model assumes that perceptual fluency reflects unconscious processes and IL. For Experiment 1, the model indicated that conscious and unconscious processes contributed to recognition of temporal patterns, but that unconscious processes had a greater influence on recognition than conscious processes. In the model implementation of Experiment 2, there was equal contribution of conscious and unconscious processes in the recognition of temporal patterns. As Schultz et al. demonstrated IL in both experiments using a generation task, and the conditions reported here in Experiments 1 and 2 were identical, two explanations are offered for the discrepancy between model and behavioral results based on the two tasks: (1) perceptual fluency may not be necessary to infer IL, or (2) conscious control over implicitly learned information may vary as a function of perceptual fluency and motor fluency.

  8. Elements in normal and cirrhotic human liver. Potassium, iron, copper, zinc and bromine measured by X-ray fluorescence spectrometry

    DEFF Research Database (Denmark)

    Laursen, J.; Milman, N.; Leth, Peter Mygind

    1990-01-01

    Various elements (K, Fe, Cu, Zn, Br) were measured by X-ray fluorescence spectrometry in cellular and connective tissue fractions of normal and cirrhotic liver samples obtained at autopsy. Normal livers: 32 subjects (16 males, 16 females), median age 69 years. Cirrhotic livers: 14 subjects (13 mal...

  9. Liver Stiffness Measurement among Patients with Chronic Hepatitis B and C

    DEFF Research Database (Denmark)

    Christiansen, Karen M; Mössner, Belinda K; Hansen, Janne F

    2014-01-01

    Liver stiffness measurement (LSM) is widely used to evaluate liver fibrosis, but longitudinal studies are rare. The current study was aimed at monitoring LSM during follow-up and evaluating the association of LSM data with mortality and liver-related outcomes. We included all patients with chronic viral hepatitis and valid LSM using Fibroscan. Information about liver biopsy, antiviral treatment, and clinical outcome was obtained from medical records and national registers. The study included 845 patients: 597 (71%) with hepatitis C virus (HCV), 235 (28%) with hepatitis B virus (HBV) and 13 (2%) with dual infection. The initial LSM distribution (patients with initial LSM values of 7-9.9 kPa, 60% of HCV patients and 83% of HBV patients showed LSM values of 20% and >2 kPa increase...

  10. Assessment of Performance Measures for Security of the Maritime Transportation Network. Port Security Metrics: Proposed Measurement of Deterrence Capability

    National Research Council Canada - National Science Library

    Hoaglund, Robert; Gazda, Walter

    2007-01-01

    The goal of this analysis is to provide ASCO and its customers with a comprehensive approach to the development of quantitative performance measures to assess security improvements to the port system...

  11. Cyber threat metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Frye, Jason Neal; Veitch, Cynthia K.; Mateski, Mark Elliot; Michalski, John T.; Harris, James Mark; Trevino, Cassandra M.; Maruoka, Scott

    2012-03-01

    Threats are generally much easier to list than to describe, and much easier to describe than to measure. As a result, many organizations list threats. Fewer describe them in useful terms, and still fewer measure them in meaningful ways. This is particularly true in the dynamic and nebulous domain of cyber threats - a domain that tends to resist easy measurement and, in some cases, appears to defy any measurement. We believe the problem is tractable. In this report we describe threat metrics and models for characterizing threats consistently and unambiguously. The purpose of this report is to support the Operational Threat Assessment (OTA) phase of risk and vulnerability assessment. To this end, we focus on the task of characterizing cyber threats using consistent threat metrics and models. In particular, we address threat metrics and models for describing malicious cyber threats to US FCEB agencies and systems.

  12. MEASURING THE PERFORMANCE OF GUYANA’S CONSTRUCTION INDUSTRY USING A SET OF PROJECT PERFORMANCE BENCHMARKING METRICS

    Directory of Open Access Journals (Sweden)

    Christopher J. Willis

    2011-10-01

    Full Text Available A study measuring the performance of Guyana’s construction industry using a set of project performance benchmarking metrics was recently completed. The underlying premise of the study was that the aggregated performance of construction projects provides a realistic assessment of the performance of the construction industry, on the basis that construction projects are the mechanism through which the construction industry creates its tangible products. The fact that an influential government agency acted as owner of the study was critical to the data collection phase. The best approach for collecting project performance data in Guyana involves the utilisation of a researcher or team of researchers mining electronic and hard copy project documents. This study analysed approximately 270 construction projects to obtain an indication of the performance of Guyana’s construction industry. It was found that sea defence projects performed the worst, whereas health facility projects performed the best. The main implication of this is that sea defence projects are likely to be the least efficient and, given their critical nature, there is an argument for urgent performance improvement interventions.

  13. Does Objective Quality of Physicians Correlate with Patient Satisfaction Measured by Hospital Compare Metrics in New York State?

    Science.gov (United States)

    Bekelis, Kimon; Missios, Symeon; MacKenzie, Todd A; O'Shaughnessy, Patrick M

    2017-07-01

    It is unclear whether publicly reported benchmarks correlate with quality of physicians and institutions. We investigated the association of patient satisfaction measures from a public reporting platform with performance of neurosurgeons in New York State. This cohort study comprised patients undergoing neurosurgical operations from 2009 to 2013 who were registered in the Statewide Planning and Research Cooperative System database. The cohort was merged with publicly available data from the Centers for Medicare and Medicaid Services Hospital Compare website. Propensity-adjusted regression analysis was used to investigate the association of patient satisfaction metrics with neurosurgeon quality, as measured by the neurosurgeon's individual rate of mortality and average length of stay. During the study period, 166,365 patients underwent neurosurgical procedures. Using propensity-adjusted multivariable regression analysis, we demonstrated that undergoing neurosurgical operations in hospitals with a greater percentage of patient-assigned "high" scores was associated with higher chance of being treated by a physician with superior performance in terms of mortality (odds ratio 1.90, 95% confidence interval 1.86-1.95), and a higher chance of being treated by a physician with superior performance in terms of length of stay (odds ratio 1.24, 95% confidence interval 1.21-1.27). Similar associations were identified for hospitals with a higher percentage of patients who claimed they would recommend these institutions to others. Merging a comprehensive all-payer cohort of neurosurgery patients in New York State with data from the Hospital Compare website, we observed an association of superior hospital-level patient satisfaction measures with objective performance of individual neurosurgeons in the corresponding hospitals. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Interrelationships Among Several Variables Reflecting Quantitative Thinking in Elementary School Children with Particular Emphasis upon Those Measures Involving Metric and Decimal Skills

    Science.gov (United States)

    Selman, Delon; And Others

    1976-01-01

    The relationships among measures of quantitative thinking in first through fifth grade children assigned either to an experimental math program emphasizing tactile, manipulative, or individual activity in learning metric and decimal concepts, or to a control group, were examined. Tables are presented and conclusions discussed. (Author/JKS)

  15. Metrication manual

    International Nuclear Information System (INIS)

    Harper, A.F.A.; Digby, R.B.; Thong, S.P.; Lacey, F.

    1978-04-01

    In April 1978 a meeting of senior metrication officers convened by the Commonwealth Science Council of the Commonwealth Secretariat, was held in London. The participants were drawn from Australia, Bangladesh, Britain, Canada, Ghana, Guyana, India, Jamaica, Papua New Guinea, Solomon Islands and Trinidad and Tobago. Among other things, the meeting resolved to develop a set of guidelines to assist countries to change to SI and to compile such guidelines in the form of a working manual

  16. Measurement of trace elements in liver biopsy samples from cattle

    NARCIS (Netherlands)

    Ouweltjes, W.; Zeeuw, de A.C.; Moen, A.; Counotte, G.H.M.

    2007-01-01

    Serum, plasma, or urine samples are usually used for the measurement of the trace elements copper, zinc, iron, selenium, because these samples are easy to obtain; however, these samples are not always appropriate. For example, it is not possible to measure molybdenum, the major antagonist of copper,

  17. A new formula for estimation of standard liver volume using computed tomography-measured body thickness.

    Science.gov (United States)

    Ma, Ka Wing; Chok, Kenneth S H; Chan, Albert C Y; Tam, Henry S C; Dai, Wing Chiu; Cheung, Tan To; Fung, James Y Y; Lo, Chung Mau

    2017-09-01

    The objective of this article is to derive a more accurate and easy-to-use formula for finding estimated standard liver volume (ESLV) using novel computed tomography (CT) measurement parameters. New formulas for ESLV have been emerging that aim to improve the accuracy of estimation. However, many of these formulas contain body surface area measurements and logarithms in the equations that lead to a more complicated calculation. In addition, substantial errors in ESLV using these old formulas have been shown. An improved version of the formula for ESLV is needed. This is a retrospective cohort of consecutive living donor liver transplantations from 2005 to 2016. Donors were randomly assigned to either the formula derivation or validation groups. Total liver volume (TLV) measured by CT was used as the reference for a linear regression analysis against various patient factors. The derived formula was compared with the existing formulas. There were 722 patients (197 from the derivation group, 164 from the validation group, and 361 from the recipient group) involved in the study. The donor's body weight (odds ratio [OR], 10.42; 95% confidence interval [CI], 7.25-13.60; P Liver Transplantation 23 1113-1122 2017 AASLD. © 2017 by the American Association for the Study of Liver Diseases.

  18. The challenge of measuring emergency preparedness: integrating component metrics to build system-level measures for strategic national stockpile operations.

    Science.gov (United States)

    Jackson, Brian A; Faith, Kay Sullivan

    2013-02-01

    Although significant progress has been made in measuring public health emergency preparedness, system-level performance measures are lacking. This report examines a potential approach to such measures for Strategic National Stockpile (SNS) operations. We adapted an engineering analytic technique used to assess the reliability of technological systems, failure mode and effects analysis, to assess preparedness. That technique, which includes systematic mapping of the response system and identification of possible breakdowns that affect performance, provides a path to use data from existing SNS assessment tools to estimate the likely future performance of the system overall. Systems models of SNS operations were constructed and failure mode analyses were performed for each component. Linking data from existing assessments, including the technical assistance review and functional drills, to reliability assessment was demonstrated using publicly available information. The use of failure mode and effects estimates to assess overall response system reliability was demonstrated with a simple simulation example. Reliability analysis appears to be an attractive way to integrate information from the substantial investment in detailed assessments of stockpile delivery and dispensing to provide a view of likely future response performance.
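The integration step described above, rolling per-component failure estimates up into a system-level reliability figure, can be sketched as follows. The stage names and failure probabilities are purely illustrative assumptions, not values from the report or CDC assessment data:

```python
import random

# Hypothetical series-system model of a stockpile delivery/dispensing
# chain: every stage must succeed for the response to succeed.
failure_prob = {
    "request_and_approval": 0.02,
    "stockpile_pick_pack":  0.05,
    "transport_to_state":   0.03,
    "local_distribution":   0.08,
    "dispensing_to_public": 0.10,
}

# Analytic series-system reliability: product of stage success rates.
analytic = 1.0
for p in failure_prob.values():
    analytic *= (1.0 - p)

# The same estimate by Monte Carlo simulation, mirroring the report's
# "simple simulation example" for combining failure-mode estimates.
random.seed(0)
trials = 100_000
successes = sum(
    all(random.random() >= p for p in failure_prob.values())
    for _ in range(trials)
)
monte_carlo = successes / trials
print(f"analytic={analytic:.4f}  monte_carlo={monte_carlo:.4f}")
```

In a real failure mode and effects analysis the stage probabilities would be estimated from the existing assessment data (technical assistance reviews, functional drills) rather than assumed, and failure effects need not be all-or-nothing as they are in this sketch.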

  19. Interobserver agreement of semi-automated and manual measurements of functional MRI metrics of treatment response in hepatocellular carcinoma

    International Nuclear Information System (INIS)

    Bonekamp, David; Bonekamp, Susanne; Halappa, Vivek Gowdra; Geschwind, Jean-Francois H.; Eng, John; Corona-Villalobos, Celia Pamela; Pawlik, Timothy M.; Kamel, Ihab R.

    2014-01-01

    Purpose: To assess the interobserver agreement in 50 patients with hepatocellular carcinoma (HCC) before and 1 month after intra-arterial therapy (IAT) using two semi-automated methods and a manual approach for the following functional, volumetric and morphologic parameters: (1) apparent diffusion coefficient (ADC), (2) arterial phase enhancement (AE), (3) portal venous phase enhancement (VE), (4) tumor volume, and assessment according to (5) the Response Evaluation Criteria in Solid Tumors (RECIST), and (6) the European Association for the Study of the Liver (EASL). Materials and methods: This HIPAA-compliant retrospective study had institutional review board approval. The requirement for patient informed consent was waived. Tumor ADC, AE, VE, volume, RECIST, and EASL in 50 index lesions were measured by three observers. Interobserver reproducibility was evaluated using intraclass correlation coefficients (ICC). P < 0.05 was considered to indicate a significant difference. Results: Semi-automated volumetric measurements of functional parameters (ADC, AE, and VE) before and after IAT as well as change in tumor ADC, AE, or VE had better interobserver agreement (ICC = 0.830–0.974) compared with manual ROI-based axial measurements (ICC = 0.157–0.799). Semi-automated measurements of tumor volume and size in the axial plane before and after IAT had better interobserver agreement (ICC = 0.854–0.996) compared with manual size measurements (ICC = 0.543–0.596), and interobserver agreement for change in tumor RECIST size was also higher using semi-automated measurements (ICC = 0.655) compared with manual measurements (ICC = 0.169). EASL measurements of tumor enhancement in the axial plane before and after IAT (ICC = 0.758–0.809) and changes in EASL after IAT (ICC = 0.653) had good interobserver agreement. Conclusion: Semi-automated measurements of functional changes assessed by ADC and VE based on whole-lesion segmentation demonstrated better reproducibility than

  20. Sharp metric obstructions for quasi-Einstein metrics

    Science.gov (United States)

    Case, Jeffrey S.

    2013-02-01

    Using the tractor calculus to study smooth metric measure spaces, we adapt results of Gover and Nurowski to give sharp metric obstructions to the existence of quasi-Einstein metrics on suitably generic manifolds. We do this by introducing an analogue of the Weyl tractor W to the setting of smooth metric measure spaces. The obstructions we obtain can be realized as tensorial invariants which are polynomial in the Riemann curvature tensor and its divergence. By taking suitable limits of their tensorial forms, we then find obstructions to the existence of static potentials, generalizing to higher dimensions a result of Bartnik and Tod, and to the existence of potentials for gradient Ricci solitons.

  1. Development of the Digital Arthritis Index, a Novel Metric to Measure Disease Parameters in a Rat Model of Rheumatoid Arthritis

    Directory of Open Access Journals (Sweden)

    Maria A. Lim

    2017-11-01

    Full Text Available Despite a broad spectrum of anti-arthritic drugs currently on the market, there is a constant demand to develop improved therapeutic agents. Efficient compound screening and rapid evaluation of treatment efficacy in animal models of rheumatoid arthritis (RA) can accelerate the development of clinical candidates. Compound screening by evaluation of disease phenotypes in animal models facilitates preclinical research by enhancing understanding of human pathophysiology; however, there is still a continuous need to improve methods for evaluating disease. Current clinical assessment methods are challenged by the subjective nature of scoring-based methods, time-consuming longitudinal experiments, and the requirement for better functional readouts with relevance to human disease. To address these needs, we developed a low-touch, digital platform for phenotyping preclinical rodent models of disease. As a proof of concept, we utilized the rat collagen-induced arthritis (CIA) model of RA and developed the Digital Arthritis Index (DAI), an objective and automated behavioral metric that does not require human-animal interaction during the measurement and calculation of disease parameters. The DAI detected the development of arthritis similarly to standard in vivo methods, including ankle joint measurements and arthritis scores, and demonstrated a positive correlation with ankle joint histopathology. The DAI also determined responses to multiple standard-of-care (SOC) treatments and nine repurposed compounds predicted by the SMarTR™ Engine to have varying degrees of impact on RA. The disease profiles generated by the DAI complemented those generated by standard methods. The DAI is a highly reproducible and automated approach that can be used in conjunction with standard methods for detecting RA disease progression and conducting phenotypic drug screens.

  2. Measurement of peroxisomal enzyme activities in the liver of brown trout (Salmo trutta, using spectrophotometric methods

    Directory of Open Access Journals (Sweden)

    Resende Albina D

    2003-03-01

    Full Text Available Abstract Background This study was aimed primarily at testing in the liver of brown trout (Salmo trutta) spectrophotometric methods previously used to measure the activities of catalase and hydrogen peroxide producing oxidases in mammals. To evaluate the influence of temperature on the activities of those peroxisomal enzymes was the second objective. A third goal of this work was the study of enzyme distribution in crude cell fractions of brown trout liver. Results The assays revealed a linear increase in the activity of all peroxisomal enzymes as the temperature rose from 10° to 37°C. However, while the activities of hydrogen peroxide producing oxidases were strongly influenced by temperature, catalase activity was only slightly affected. A crude fraction enriched with peroxisomes was obtained by differential centrifugation of liver homogenates, and the contamination by other organelles was evaluated by the activities of marker enzymes for mitochondria (succinate dehydrogenase), lysosomes (aryl sulphatase) and microsomes (NADPH cytochrome c reductase). For peroxisomal enzymes, the activities per mg of protein (specific activity) in liver homogenates were strongly correlated with the activities per g of liver and with the total activities per liver. These correlations were not obtained with crude peroxisomal fractions. Conclusions The spectrophotometric protocols originally used to quantify the activity of mammalian peroxisomal enzymes can be successfully applied to the study of those enzymes in brown trout. Because the activity of all studied peroxisomal enzymes rose in a linear mode with temperature, their activities can be correctly measured between 10° and 37°C. Probably due to contamination by other organelles and losses of soluble matrix enzymes during homogenisation, enzyme activities in crude peroxisomal fractions do not correlate with the activities in liver homogenates. Thus, total homogenates will be used in future seasonal and

  3. Multi-slice CT three dimensional volume measurement of tumors and livers in hepatocellular carcinoma

    International Nuclear Information System (INIS)

    Yu Yuanlong; Li Liangcai; Tang Binghang; Hu Zemin

    2004-01-01

    Objective: To examine the accuracy of multi-slice CT (MSCT) three dimensional (3D) volume measurement of tumors and livers in hepatocellular carcinoma cases, using the immersion method as the standard. Methods: (1) The volumes of 25 piglet livers were measured in vitro using the immersion method. Models were then built according to Matsumoto's method, and CT scanning with dedicated software was used to measure the liver volumes. (2) The volumes of the tumors in 25 cases of hepatocellular carcinoma were measured using the diameter measurement method and dedicated volume measurement software (tissue measurements). Two of the tumors were additionally measured by MSCT 3D measurement and diameter measurement before operation and by the immersion method after operation. The data of the two groups were compared using a paired t test. Results: (1) The volumes of the 25 piglet livers ranged from 68.50 to 1150.10 ml by the immersion method and from 69.78 to 1069.97 ml by MSCT 3D measurement; the difference between the two groups was not significant (t=1.427, P>0.05). (2) The volumes of the 25 hepatocellular tumors ranged from 395.16 to 2747.7 ml by diameter measurement and from 203.10 to 1463.19 ml by MSCT 3D measurement before operation; this difference was significant (t=7.689, P<0.001). Of the 2 ablated tumors, one had a volume of (21.75±0.60) ml by MSCT 3D measurement and 33.73 ml by diameter measurement before operation, and 21.50 ml by immersion measurement after operation; the other had a volume of (696.13±5.30) ml by MSCT 3D measurement and 1323.51 ml by diameter measurement before operation, and 685.50 ml by immersion measurement after operation. Conclusion: MSCT 3D volume measurement can accurately measure tumor and liver volumes and has important clinical application value. There is no significant difference between MSCT 3D volume measurement and the immersion method
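The paired comparison used in this record is a standard paired t-test on matched volume measurements. A minimal pure-Python sketch (the volumes below are invented for illustration, not the study's data) compares the computed t statistic against the two-sided critical value for df = 4:

```python
import math

def paired_t(a, b):
    """Paired t statistic for two matched samples of equal length."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Hypothetical paired liver volumes (ml); NOT the study's data
immersion = [120.0, 350.5, 560.2, 820.9, 1010.4]
msct_3d = [118.7, 355.1, 548.9, 833.0, 1002.2]

t = paired_t(immersion, msct_3d)
# Two-sided critical t for df = 4 at alpha = 0.05 is 2.776:
print(abs(t) < 2.776)  # True -> no significant difference between the methods
```

The same comparison on the tumor data (diameter method vs. MSCT 3D) would yield a large |t| and a rejection of the null hypothesis, matching the record's t=7.689.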

  4. Relationship of liver stiffness and controlled attenuation parameter measured by transient elastography with diabetes mellitus in patients with chronic liver disease.

    Science.gov (United States)

    Ahn, Jem Ma; Paik, Yong-Han; Kim, So Hyun; Lee, Jun Hee; Cho, Ju Yeon; Sohn, Won; Gwak, Geum-Youn; Choi, Moon Seok; Lee, Joon Hyeok; Koh, Kwang Cheol; Paik, Seung Woon; Yoo, Byung Chul

    2014-08-01

    High prevalence of diabetes mellitus in patients with liver cirrhosis has been reported in many studies. The aim of our study was to evaluate the relationship of hepatic fibrosis and steatosis, assessed by transient elastography, with diabetes in patients with chronic liver disease. The study population consisted of 979 chronic liver disease patients. Liver fibrosis and steatosis were assessed by liver stiffness measurement (LSM) and controlled attenuation parameter (CAP) on transient elastography. Diabetes was diagnosed in 165 (16.9%) of 979 patients. The prevalence of diabetes differed significantly among the etiologies of chronic liver disease. Higher degrees of liver fibrosis and steatosis, assessed by LSM and CAP score, were associated with a higher prevalence of diabetes (F0/1 [14%], F2/3 [18%], F4 [31%]). Factors independently associated with diabetes were hypertension (OR, 1.98; P=0.001), LSM F4 (OR, 1.86; P=0.010), male gender (OR, 1.60; P=0.027), and age >50 yr (OR, 1.52; P=0.046). The degree of hepatic fibrosis, but not steatosis, assessed by transient elastography has a significant relationship with the prevalence of diabetes in patients with chronic liver disease.

  5. Indications for portal pressure measurement in chronic liver disease

    DEFF Research Database (Denmark)

    Hobolth, Lise; Bendtsen, Flemming; Møller, Søren

    2012-01-01

    Portal hypertension leads to development of serious complications such as esophageal varices, ascites, renal and cardiovascular dysfunction. The importance of the degree of portal hypertension has been substantiated within recent years. Measurement of the portal pressure is simple and safe...... and the hepatic venous pressure gradient (HVPG) independently predicts survival and development of complications such as ascites, HCC and bleeding from esophageal varices. Moreover, measurements of HVPG can be used to guide pharmacotherapy for primary and secondary prophylaxis for variceal bleeding. Assessment...... of HVPG should therefore be considered as a part of the general characterization of patients with portal hypertension in departments assessing and treating this condition....

  6. Calculations of two new dose metrics proposed by AAPM Task Group 111 using the measurements with standard CT dosimetry phantoms

    International Nuclear Information System (INIS)

    Li, Xinhua; Zhang, Da; Liu, Bob

    2013-01-01

    Purpose: AAPM Task Group 111 proposed to measure the equilibrium dose-pitch product D̂_eq for scan modes involving table translation, and the midpoint dose D_L(0) for stationary-table modes, on the central and peripheral axes of sufficiently long (e.g., at least 40 cm) phantoms. This paper presents an alternative approach to calculate both metrics using measurements obtained by scanning the standard computed tomographic (CT) dosimetry phantoms on CT scanners. Methods: D̂_eq was calculated from CTDI_100 and ε(CTDI_100) (the CTDI_100 efficiency), and D_L(0) was calculated from D̂_eq and the approach-to-equilibrium function H(L) = D_L(0)/D_eq, where D_eq is the equilibrium dose. CTDI_100 may be obtained directly from several sources (such as the medical physicist's CT scanner performance evaluation or the IMPACT CT patient dosimetry calculator), or derived from CTDI_vol using the central-to-peripheral CTDI_100 ratio (R_100). The authors have provided the required ε(CTDI_100) and H(L) data in two previous papers [X. Li, D. Zhang, and B. Liu, Med. Phys. 39, 901-905 (2012); and ibid. 40, 031903 (10pp.) (2013)]. R_100 was assessed for a series of GE, Siemens, Philips, and Toshiba CT scanners with multiple settings of scan field of view, tube voltage, and bowtie filter. Results: The calculated D_L(0) and D_L(0)/D_eq in PMMA and water cylinders were consistent with the measurements on two GE CT scanners (LightSpeed 16 and VCT) by Dixon and Ballard [Med. Phys. 34, 3399-3413 (2007)], the measurements on a Siemens CT scanner (SOMATOM Spirit Power) by Descamps et al. [J. Appl. Clin. Med. Phys. 13, 293-302 (2012)], and the Monte Carlo simulations by Boone [Med. Phys. 36, 4547-4554 (2009)]. Conclusions: D̂_eq and D_L(0) can be calculated using the alternative approach. The authors have provided the required ε(CTDI_100) and H(L) data in two previous papers. R_100 is presented for a majority of multidetector CT scanners currently on the market, and can be
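The two-step calculation described in this record can be sketched as follows. The scaling relation epsilon = CTDI_100 / D_eq is inferred from the quoted definitions, and all numbers are illustrative, not values from the paper:

```python
def equilibrium_dose(ctdi100, epsilon):
    """D_eq from CTDI_100 and the CTDI_100 efficiency epsilon,
    assuming epsilon = CTDI_100 / D_eq (inferred, not quoted from the paper)."""
    return ctdi100 / epsilon

def midpoint_dose(d_eq, h_of_length):
    """D_L(0) via the approach-to-equilibrium function H(L) = D_L(0) / D_eq."""
    return h_of_length * d_eq

# Illustrative numbers only (mGy):
d_eq = equilibrium_dose(ctdi100=15.0, epsilon=0.78)
print(round(midpoint_dose(d_eq, h_of_length=0.90), 2))  # 17.31
```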

  7. Fault Management Metrics

    Science.gov (United States)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.

  8. Engineering performance metrics

    Science.gov (United States)

    Delozier, R.; Snyder, N.

    1993-03-01

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics that would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics that would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved the normal systems design phases: conceptual design, detailed design, implementation, and integration. The lessons learned from this effort are explored in this paper, and may provide a starting point for other large engineering organizations seeking to institute a performance measurement system. To facilitate this effort, a team consisting of customers and Engineering staff members was chartered to assist in the development of the metrics system and to ensure that the needs and views of the customers were considered in the development of the performance measurements. The development of a system of metrics is no different from the development of any other type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  9. Clinical evaluation of preoperative measurement of liver volume by CT volumetry

    International Nuclear Information System (INIS)

    Takahashi, Masahiro; Sasaki, Ryoko; Kato, Kenichi

    2003-01-01

    The utility of measuring liver volume by CT volumetry prior to hepatectomy for treatment of hepatobiliary diseases was assessed by investigating the relationship between liver volume and perioperative hepatic function, and some perioperative factors. Both residual liver volume (RLV) and functional residual liver volume rate (%FRLV) had a significant negative correlation with maximum postoperative total bilirubin (T. Bil) (r=-0.318, r=-0.477, respectively). Further, RLV and %FRLV exhibited a negative correlation with length of intensive care unit (ICU) stay (r=-0.297, r=-0.397, respectively). The ratio of patients with maximum postoperative T. Bil≥10 mg/dl among patients with RLV<500 ml was significantly higher than that among patients with RLV≥500 ml (p<0.05). Similarly, the ratio of patients with maximum postoperative T. Bil≥10 mg/dl among patients with %FRLV<40% was significantly higher than that among patients with %FRLV≥40% (p<0.05). Among patients with %FRLV<40%, maximum T. Bil for patients who underwent portal vein embolization (PVE) was significantly lower than that for patients who did not undergo PVE (p<0.05). When performing hepatectomy, the risk of severe postoperative liver failure is low as long as %FRLV and RLV are above 40% and 500 ml, respectively, and PVE is useful for performing extended hepatectomy when %FRLV is <40%. (author)
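The decision thresholds reported in this record (%FRLV ≥ 40% and RLV ≥ 500 ml) can be wrapped in a simple screening helper. This is an illustrative sketch of applying the reported cut-offs, not a validated clinical rule:

```python
def high_risk_hepatectomy(rlv_ml, frlv_pct):
    """Illustrative flag based on the study's thresholds: risk of severe
    postoperative liver failure is reported to be low when the residual
    liver volume (RLV) is >= 500 ml AND %FRLV is >= 40%."""
    return rlv_ml < 500 or frlv_pct < 40

print(high_risk_hepatectomy(620, 45))  # False: both thresholds satisfied
print(high_risk_hepatectomy(450, 38))  # True: PVE may be considered before extended hepatectomy
```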

  10. In vivo measurements of relaxation process in the human liver by MRI. The role of respiratory gating/triggering

    DEFF Research Database (Denmark)

    Thomsen, C; Henriksen, O; Ring, P

    1988-01-01

    In vivo estimation of relaxation processes in the liver by magnetic resonance imaging (MRI) may be helpful for characterization of various pathological conditions in the liver. However, such measurements may be significantly hampered by movement of the liver with the respiration. The effect...... of synchronization of data acquisition to the respiratory cycle on measured T1- and T2-relaxation curves was studied in normal subjects, patients with diffuse liver disease, and patients with focal liver pathology. Multi spin echo sequences with five different repetition times were used. The measurements were...... carried out with and without respiratory gating/triggering. In the healthy subjects as well as in the patients with diffuse liver diseases respiratory synchronization did not alter the obtained relaxation curves. However, in the patients with focal pathology the relaxation curves were significantly...

  11. Technical Privacy Metrics: a Systematic Survey

    OpenAIRE

    Wagner, Isabel; Eckhoff, David

    2018-01-01

    The file attached to this record is the author's final peer reviewed version The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature makes an informed choice of metrics challenging. As a result, instead of using existing metrics, n...

  12. Magnetic resonance elastography: Feasibility of liver stiffness measurements in healthy volunteers at 3 T

    International Nuclear Information System (INIS)

    Mannelli, L.; Godfrey, E.; Graves, M.J.; Patterson, A.J.; Beddy, P.; Bowden, D.; Joubert, I.; Priest, A.N.; Lomas, D.J.

    2012-01-01

    Aim: To demonstrate the feasibility of obtaining liver stiffness measurements with magnetic resonance elastography (MRE) at 3 T in normal healthy volunteers using the same technique that has been successfully applied at 1.5 T. Methods and materials: The study was approved by the local ethics committee and written informed consent was obtained from all volunteers. Eleven volunteers (mean age 35 ± 9 years) with no history of gastrointestinal, hepatobiliary, or cardiovascular disease were recruited. The magnetic resonance imaging (MRI) protocol included a gradient echo-based MRE sequence using a 60 Hz pneumatic excitation. The MRE images were processed using a local frequency estimation inversion algorithm to provide quantitative stiffness maps. Adequate image quality was assessed subjectively by demonstrating the presence of visible propagating waves within the liver parenchyma underlying the driver location. Liver stiffness values were obtained using manually placed regions of interest (ROI) outlining the liver margins on the gradient echo wave images, which were then mapped onto the corresponding stiffness image. The mean stiffness values from two adjacent sections were recorded. Results: Eleven volunteers underwent MRE. The quality of the MRE images was adequate in all the volunteers. The mean liver stiffness for the group was 2.3 ± 0.38 kPa (ranging from 1.7–2.8 kPa). Conclusions: This preliminary work using MRE at 3 T in healthy volunteers demonstrates the feasibility of liver stiffness evaluation at 3 T without modification of the approach used at 1.5 T. Adequate image quality and normal MRE values were obtained in all volunteers. The obtained stiffness values were in the range of those reported for healthy volunteers in previous studies at 1.5 T. There was good interobserver reproducibility in the stiffness measurements.

  13. Magnetic resonance elastography: Feasibility of liver stiffness measurements in healthy volunteers at 3 T

    Energy Technology Data Exchange (ETDEWEB)

    Mannelli, L., E-mail: mannellilorenzo@yahoo.it [Department of Radiology, Addenbrooke's Hospital and University of Cambridge, Cambridge (United Kingdom); Department of Radiology, University of Washington, Seattle, WA (United States); Godfrey, E.; Graves, M.J.; Patterson, A.J.; Beddy, P.; Bowden, D.; Joubert, I.; Priest, A.N.; Lomas, D.J. [Department of Radiology, Addenbrooke's Hospital and University of Cambridge, Cambridge (United Kingdom)

    2012-03-15

    Aim: To demonstrate the feasibility of obtaining liver stiffness measurements with magnetic resonance elastography (MRE) at 3 T in normal healthy volunteers using the same technique that has been successfully applied at 1.5 T. Methods and materials: The study was approved by the local ethics committee and written informed consent was obtained from all volunteers. Eleven volunteers (mean age 35 ± 9 years) with no history of gastrointestinal, hepatobiliary, or cardiovascular disease were recruited. The magnetic resonance imaging (MRI) protocol included a gradient echo-based MRE sequence using a 60 Hz pneumatic excitation. The MRE images were processed using a local frequency estimation inversion algorithm to provide quantitative stiffness maps. Adequate image quality was assessed subjectively by demonstrating the presence of visible propagating waves within the liver parenchyma underlying the driver location. Liver stiffness values were obtained using manually placed regions of interest (ROI) outlining the liver margins on the gradient echo wave images, which were then mapped onto the corresponding stiffness image. The mean stiffness values from two adjacent sections were recorded. Results: Eleven volunteers underwent MRE. The quality of the MRE images was adequate in all the volunteers. The mean liver stiffness for the group was 2.3 ± 0.38 kPa (ranging from 1.7-2.8 kPa). Conclusions: This preliminary work using MRE at 3 T in healthy volunteers demonstrates the feasibility of liver stiffness evaluation at 3 T without modification of the approach used at 1.5 T. Adequate image quality and normal MRE values were obtained in all volunteers. The obtained stiffness values were in the range of those reported for healthy volunteers in previous studies at 1.5 T. There was good interobserver reproducibility in the stiffness measurements.

  14. Evaluation of the performance of a micromethod for measuring urinary iodine by using six sigma quality metrics.

    Science.gov (United States)

    Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud

    2013-09-01

    The urinary iodine micromethod (UIMM) is a modification of the conventional method and its performance needs evaluation. UIMM performance was evaluated using the method validation and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM obeyed various method acceptability test criteria with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed that the Sigma metrics were at 2.75, 1.80, and 3.80 for 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc)Sigma metrics at all concentrations. Only one laboratory had TEcalc
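The Sigma quality metric reported in this record is conventionally computed as (TEa − |bias|) / CV, with all terms in percent (the standard Westgard formulation, assumed here rather than quoted from the paper). A minimal sketch with illustrative numbers:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Six Sigma quality metric: (TEa - |bias|) / CV, all in percent.
    Standard Westgard formulation; values below are illustrative only."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# A 30% total allowable error with 4% bias and 10% imprecision
# gives a marginal process (below the commonly cited 3-sigma floor):
print(sigma_metric(tea_pct=30.0, bias_pct=4.0, cv_pct=10.0))  # 2.6
```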

  15. The development of a practical and uncomplicated predictive equation to determine liver volume from simple linear ultrasound measurements of the liver

    International Nuclear Information System (INIS)

    Childs, Jessie T.; Thoirs, Kerry A.; Esterman, Adrian J.

    2016-01-01

    This study sought to develop a practical and uncomplicated predictive equation that could accurately calculate liver volumes, using multiple simple linear ultrasound measurements combined with measurements of body size. Penalized (lasso) regression was used to develop a new model and compare it to the ultrasonic linear measurements currently used clinically. A Bland–Altman analysis showed that the large limits of agreement of the new model render it too inaccurate to be of clinical use for estimating liver volume per se, but it holds value in tracking disease progress or response to treatment over time in individuals, and is certainly substantially better as an indicator of overall liver size than the ultrasonic linear measurements currently being used clinically. - Highlights: • A new model to calculate liver volumes from simple linear ultrasound measurements. • This model was compared to the linear measurements currently used clinically. • The new model holds value in tracking disease progress or response to treatment. • This model is better as an indicator of overall liver size.

  16. Metrics for Probabilistic Geometries

    DEFF Research Database (Denmark)

    Tosi, Alessandra; Hauberg, Søren; Vellido, Alfredo

    2014-01-01

    the distribution over mappings is given by a Gaussian process. We treat the corresponding latent variable model as a Riemannian manifold and we use the expectation of the metric under the Gaussian process prior to define interpolating paths and measure distance between latent points. We show how distances...

  17. Is Time the Best Metric to Measure Carbon-Related Climate Change Potential and Tune the Economy Toward Reduced Fossil Carbon Extraction?

    Science.gov (United States)

    DeGroff, F. A.

    2016-12-01

    Anthropogenic changes to non-anthropogenic carbon fluxes are a primary driver of climate change. There currently exists no comprehensive metric to measure and value anthropogenic changes in carbon flux between all states of carbon. Focusing on atmospheric carbon emissions as a measure of anthropogenic activity on the environment ignores the fungible characteristics of carbon that are crucial in both the biosphere and the worldwide economy. Focusing on a single form of inorganic carbon as a proxy metric for the plethora of anthropogenic activity and carbon compounds will prove inadequate, convoluted, and unmanageable. A broader, more basic metric is needed to capture the entirety of carbon activity, particularly in an economic, profit-driven environment. We propose a new metric to measure changes in the temporal distance of any form or state of carbon from one state to another. Such a metric would be especially useful to measure the temporal distance of carbon from sinks such as the atmosphere or oceans. The effect of changes in carbon flux as a result of any human activity can be measured by the difference between the anthropogenic and non-anthropogenic temporal distance. The change in the temporal distance is a measure of the climate change potential much like voltage is a measure of electrical potential. The integral of the climate change potential is proportional to the anthropogenic climate change. We also propose a logarithmic vector scale for carbon quality, cq, as a measure of anthropogenic changes in carbon flux. The distance between the cq vector starting and ending temporal distances represents the change in cq. A base-10 logarithmic scale would allow the addition and subtraction of exponents to calculate changes in cq. As anthropogenic activity changes the temporal distance of carbon, the change in cq is measured as: cq = β · log10(mean carbon temporal distance), where β represents the carbon price coefficient for a particular country. For any
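The proposed cq formula can be sketched directly. The coefficient β is country-specific in the proposal; it is assumed to be 1.0 here purely for illustration:

```python
import math

def carbon_quality(mean_temporal_distance, beta=1.0):
    """cq = beta * log10(mean carbon temporal distance), per the proposal above.
    beta (the country-specific carbon price coefficient) is assumed 1.0 here."""
    return beta * math.log10(mean_temporal_distance)

# On a base-10 scale, moving carbon from a 100-year pool to a 10-year pool
# changes cq by exactly one unit, so changes compose by simple subtraction:
print(carbon_quality(100) - carbon_quality(10))  # 1.0
```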

  18. Software Quality Assurance Metrics

    Science.gov (United States)

    McRae, Kalindra A.

    2004-01-01

    Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements with a degree of excellence and refinement in a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated; the set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product: the process is measured to improve it, and the product is measured to increase quality throughout the life cycle of software. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce it, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of the software development can be measured. If software metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects but are not currently being used by the SA team, and to report them to the Software Assurance team to see if any metrics can be implemented in their software assurance life cycle process.

  19. Critical concentrations of cadmium in human liver and kidney measured by prompt-gamma neutron activation

    International Nuclear Information System (INIS)

    Cohn, S.H.; Vartsky, D.; Yasumura, S.; Zanzi, I.; Ellis, K.J.

    1979-01-01

    Few data exist on Cd metabolism in human beings. In particular, data are needed on the role of parameters such as age, sex, weight, diet, smoking habits, and state of health. Prompt-gamma neutron activation analysis (PGNAA) provides the only currently available means for measuring in vivo levels of liver and kidney Cd. The method employs an 85 Ci 238Pu,Be neutron source and a gamma-ray detection system consisting of two Ge(Li) detectors. The dose delivered to the liver and left kidney is 666 mrem (the detection limit is 1.4 μg/g Cd in the liver and 2.0 mg Cd for one kidney). Absolute levels of Cd in the kidney and concentrations of Cd in the liver were measured in vivo in twenty healthy adult males using 238Pu,Be neutron sources. Organ Cd levels of smokers were significantly elevated above those of nonsmokers. The biological half-time for Cd in the body was estimated to be 15.7 yr. Cigarette smoking was estimated to result in the absorption of 1.9 μg of Cd per pack. No relationship was found between body stores of Cd (liver and kidney) and Cd or β-microglobulin levels in urine and blood. Currently the above neutron activation facility is being mounted on a 34-ft mobile trailer unit. This unit will be used to monitor levels of Cd in industrial workers. It is anticipated that critically important data, particularly on industrially exposed workers, will provide a better basis for determining critical concentrations and for the setting or revision of standards for industrial and environmental Cd pollution

  20. Point shear wave speed measurement in differentiating benign and malignant focal liver lesions.

    Science.gov (United States)

    Dong, Yi; Wang, Wen-Ping; Xu, Yadan; Cao, Jiaying; Mao, Feng; Dietrich, Cristoph F

    2017-06-26

    To investigate the value of ElastPQ measurement for the differential diagnosis of benign and malignant focal liver lesions (FLLs), using histologic results as the reference standard. A total of 154 patients were included. ElastPQ measurement of shear wave speed (SWS) was performed for each lesion. The difference in SWS and in the SWS ratio of FLL to surrounding liver was evaluated, and cut-off values were investigated. A receiver operating characteristic (ROC) curve was plotted to evaluate diagnostic performance. Histology, obtained at surgery in all patients, served as the gold standard. A total of 154 lesions, comprising 129 (83.7%) malignant and 25 (16.3%) benign FLLs, were analysed. The SWS of malignant and benign FLLs differed significantly: 2.77±0.68 m/s versus 1.57±0.55 m/s (p<0.05). The SWS ratio of each FLL to surrounding liver parenchyma was 2.23±0.49 for malignant and 1.14±0.36 for benign FLLs (p<0.05). The cut-off value for differential diagnosis was 2.06 m/s for SWS and 1.67 for the SWS ratio. ElastPQ measurement provides reliable quantitative stiffness information on FLLs and may be helpful in the differential diagnosis between malignant and benign FLLs.
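The reported cut-offs (2.06 m/s for SWS, 1.67 for the lesion-to-liver SWS ratio) can be applied as a simple screening rule. The either/or combination below is an illustration of using the cut-offs, not the authors' diagnostic algorithm:

```python
def suspicious_fll(sws_lesion, sws_liver, sws_cutoff=2.06, ratio_cutoff=1.67):
    """Flag a focal liver lesion as suspicious for malignancy if either the
    shear wave speed (m/s) or the lesion-to-liver SWS ratio exceeds the
    study's reported cut-off. Combination rule is illustrative only."""
    return sws_lesion >= sws_cutoff or sws_lesion / sws_liver >= ratio_cutoff

print(suspicious_fll(2.77, 1.25))  # True: malignant-range stiffness and ratio
print(suspicious_fll(1.57, 1.40))  # False: benign-range stiffness and ratio
```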

  1. Noninvasive measurement of liver iron concentration at MRI in children with acute leukemia: initial results

    Energy Technology Data Exchange (ETDEWEB)

    Vag, Tibor; Krumbein, Ines; Reichenbach, Juergen R.; Lopatta, Eric; Stenzel, Martin; Kaiser, Werner A.; Mentzel, Hans-Joachim [Friedrich Schiller University Jena, Institute of Diagnostic and Interventional Radiology, Jena (Germany); Kentouche, Karim; Beck, James [Friedrich Schiller University Jena, Department of Pediatrics, Jena (Germany); Renz, Diane M. [Charite University Medicine Berlin, Department of Radiology, Campus Virchow Clinic, Berlin (Germany)

    2011-08-15

    Routine assessment of body iron load in patients with acute leukemia is usually done by serum ferritin (SF) assay; however, its sensitivity is impaired by different conditions including inflammation and malignancy. To estimate, using MRI, the extent of liver iron overload in children with acute leukemia and receiving blood transfusions, and to examine the association between the degree of hepatic iron overload and clinical parameters including SF and the transfusion iron load (TIL). A total of 25 MRI measurements of the liver were performed in 15 children with acute leukemia (mean age 9.75 years) using gradient-echo sequences. Signal intensity ratios between the liver and the vertebral muscle (L/M ratio) were calculated and compared with SF-levels. TIL was estimated from the cumulative blood volume received, assuming an amount of 200 mg iron per transfused red blood cell unit. Statistical analysis revealed good correlation between the L/M SI ratio and TIL (r = -0.67, P = 0.002, 95% confidence interval CI = -0.83 to -0.34) in patients with acute leukemia as well as between L/M SI ratio and SF (r = -0.76, P = 0.0003, 95% CI = -0.89 to -0.52). SF may reliably reflect liver iron stores as a routine marker in patients suffering from acute leukemia. (orig.)
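The negative correlations reported in this record are Pearson coefficients. A minimal pure-Python sketch with invented data (not the study's) shows how such an r is computed, with the liver/muscle signal ratio falling as ferritin rises:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Invented values: L/M signal intensity ratio vs. serum ferritin (ng/ml)
lm_ratio = [1.8, 1.5, 1.2, 0.9, 0.7]
ferritin = [300, 800, 1500, 2600, 4000]
print(pearson_r(lm_ratio, ferritin) < -0.6)  # True: strong negative correlation
```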

  2. Metric and structural equivalence of core cognitive abilities measured with the Wechsler Adult Intelligence Scale-III in the United States and Australia.

    Science.gov (United States)

    Bowden, Stephen C; Lissner, Dianne; McCarthy, Kerri A L; Weiss, Lawrence G; Holdnack, James A

    2007-10-01

    Equivalence of the psychological model underlying Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) scores obtained in the United States and Australia was examined in this study. Examination of metric invariance involves testing the hypothesis that all components of the measurement model relating observed scores to latent variables are numerically equal in different samples. The assumption of metric invariance is necessary for interpretation of scores derived from research studies that seek to generalize patterns of convergent and divergent validity and patterns of deficit or disability. An Australian community volunteer sample was compared to the US standardization data. A pattern of strict metric invariance was observed across samples. In addition, when the effects of different demographic characteristics of the US and Australian samples were included, structural parameters reflecting values of the latent cognitive variables were found not to differ. These results provide important evidence for the equivalence of measurement of core cognitive abilities with the WAIS-III and suggest that latent cognitive abilities in the US and Australia do not differ.

  3. Metrics for energy resilience

    International Nuclear Information System (INIS)

    Roege, Paul E.; Collier, Zachary A.; Mancillas, James; McDonagh, John A.; Linkov, Igor

    2014-01-01

    Energy lies at the backbone of any advanced society and constitutes an essential prerequisite for economic growth, social order and national defense. However, there is an Achilles heel in today's energy and technology relationship: a precarious intimacy between energy and the fiscal, social, and technical systems it supports. Recently, widespread and persistent disruptions in energy systems have highlighted the extent of this dependence and the vulnerability of increasingly optimized systems to changing conditions. Resilience is an emerging concept that offers to reconcile considerations of performance under dynamic environments and across multiple time frames by supplementing traditionally static system performance measures to consider behaviors under changing conditions and complex interactions among physical, information and human domains. This paper identifies metrics useful to implement guidance for energy-related planning, design, investment, and operation. Recommendations are presented using a matrix format to provide a structured and comprehensive framework of metrics relevant to a system's energy resilience. The study synthesizes previously proposed metrics and emergent resilience literature to provide a multi-dimensional model intended for use by leaders and practitioners as they transform our energy posture from one of stasis and reaction to one that is proactive and which fosters sustainable growth. - Highlights: • Resilience is the ability of a system to recover from adversity. • There is a need for methods to quantify and measure system resilience. • We developed a matrix-based approach to generate energy resilience metrics. • These metrics can be used in energy planning, system design, and operations

  4. Volume measurement variability in three-dimensional high-frequency ultrasound images of murine liver metastases

    International Nuclear Information System (INIS)

    Wirtzfeld, L A; Graham, K C; Groom, A C; MacDonald, I C; Chambers, A F; Fenster, A; Lacefield, J C

    2006-01-01

    The identification and quantification of tumour volume measurement variability is imperative for proper study design of longitudinal non-invasive imaging of pre-clinical mouse models of cancer. Measurement variability will dictate the minimum detectable volume change, which in turn influences the scheduling of imaging sessions and the interpretation of observed changes in tumour volume. In this paper, variability is quantified for tumour volume measurements from 3D high-frequency ultrasound images of murine liver metastases. Experimental B16F1 liver metastases were analysed in different size ranges: less than 1 mm³, 1-4 mm³, 4-8 mm³ and 8-70 mm³. The intra- and inter-observer repeatability was high over a large range of tumour volumes, but the coefficients of variation (COV) varied over the volume ranges. The minimum and maximum intra-observer COV were 4% and 14% for the 1-4 mm³ and <1 mm³ tumours, respectively. For tumour volumes measured by segmenting parallel planes, the maximum inter-slice distance that maintained acceptable measurement variability increased from 100 to 600 μm as tumour volume increased. Comparison of free breathing versus ventilated animals demonstrated that respiratory motion did not significantly change the measured volume. These results enable design of more efficient imaging studies by using the measured variability to estimate the time required to observe a significant change in tumour volume
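The coefficient of variation this abstract reports is simply the relative standard deviation of repeated measurements. A small sketch, where the repeat segmentation values are hypothetical:

```python
from statistics import mean, stdev

def cov_percent(volumes):
    """Coefficient of variation (%) of repeated volume measurements."""
    return 100.0 * stdev(volumes) / mean(volumes)

# Hypothetical repeated segmentations of a single tumour (mm^3)
repeat_measurements = [3.6, 3.9, 3.7, 4.1, 3.8]
print(round(cov_percent(repeat_measurements), 1))  # → 5.0
```

A 5% COV on a ~4 mm³ tumour implies, roughly, that volume changes smaller than the measurement scatter cannot be distinguished from observer variability, which is the scheduling argument the abstract makes.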

  5. Measurement of the coagulation dynamics of bovine liver using the modified microscopic Beer-Lambert law.

    Science.gov (United States)

    Terenji, Albert; Willmann, Stefan; Osterholz, Jens; Hering, Peter; Schwarzmaier, Hans-Joachim

    2005-06-01

    During heating, the optical properties of biological tissues change with the coagulation state. In this study, we propose a technique which uses these changes to monitor the coagulation process during laser-induced interstitial thermotherapy (LITT). Untreated and coagulated (water bath at temperatures between 35 °C and 90 °C for 20 minutes) samples of bovine liver tissue were examined using a Nd:YAG (lambda = 1064 nm) frequency-domain reflectance spectrometer. We determined the time-integrated intensities (I(DC)) and the phase shifts (Phi) of the photon density waves after migration through the tissue. From these measured quantities, the time of flight (TOF) of the photons and the absorption coefficients of the samples were derived using the modified microscopic Beer-Lambert law. The absorption coefficients of the liver samples decreased significantly with temperature in the range between 50 °C and 70 °C. At the same time, the TOF of the investigated photons was found to increase, indicating increased scattering. The coagulation dynamics could be well described using the Arrhenius formalism with an activation energy of 106 kJ/mol and a frequency factor of 1.59 × 10¹³/second. Frequency-domain reflectance spectroscopy in combination with the modified microscopic Beer-Lambert (MBL) law is suitable for measuring heat-induced changes in the absorption and scattering properties of bovine liver in vitro. The technique may be used to monitor the coagulation dynamics during local thermo-coagulation in vivo. Copyright 2005 Wiley-Liss, Inc.
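The Arrhenius formalism cited in this abstract (activation energy 106 kJ/mol, frequency factor 1.59 × 10¹³/s) can be evaluated directly for a constant-temperature exposure. The sketch below is a hypothetical illustration of the damage-integral form Ω = A·exp(−Ea/RT)·t, not the authors' fitting procedure:

```python
import math

R = 8.314    # J/(mol*K), universal gas constant
E_A = 106e3  # J/mol, activation energy from the abstract
A = 1.59e13  # 1/s, frequency factor from the abstract

def damage_integral(temp_c: float, seconds: float) -> float:
    """Arrhenius damage parameter Omega for a constant-temperature exposure."""
    temp_k = temp_c + 273.15
    return A * math.exp(-E_A / (R * temp_k)) * seconds

# 20-minute (1200 s) water bath, as in the experiment; Omega >= 1 ~ coagulated
print(damage_integral(70.0, 1200.0) > 1.0)  # → True  (substantial coagulation)
print(damage_integral(40.0, 1200.0) > 1.0)  # → False (little accumulated damage)
```

The strong temperature dependence of the exponential is what makes the 50-70 °C window, where the measured absorption changes occur, so sharp.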

  6. Performance-Based Measures Associate With Frailty in Patients With End-Stage Liver Disease.

    Science.gov (United States)

    Lai, Jennifer C; Volk, Michael L; Strasburg, Debra; Alexander, Neil

    2016-12-01

    Physical frailty, as measured by the Fried Frailty Index, is increasingly recognized as a critical determinant of outcomes in patients with cirrhosis. However, its utility is limited by the inclusion of self-reported components. We aimed to identify performance-based measures associated with frailty in patients with cirrhosis. Patients with cirrhosis, aged 50 years or older, underwent: 6-minute walk test (cardiopulmonary endurance), chair stands in 30 seconds (muscle endurance), isometric knee extension (lower extremity strength), unipedal stance time (static balance), and maximal step length (dynamic balance/coordination). Linear regression associated each physical performance test with frailty. Principal components exploratory factor analysis evaluated the interrelatedness of frailty and the 5 physical performance tests. Of 40 patients with cirrhosis, with a median age of 64 years and a median Model for End-stage Liver Disease (MELD) score of 12, 10 (25%) were frail by Fried Frailty Index ≥3. Frail patients with cirrhosis had poorer performance in 6-minute walk test distance (231 vs 338 m), 30-second chair stands (7 vs 10), isometric knee extension (86 vs 122 Newton meters), and maximal step length (22 vs 27 in.) (P ≤ 0.02 for each). Each physical performance test was significantly associated with frailty, and the factor analysis linked frailty and each performance test to a single factor: frailty. Frailty in cirrhosis is a multidimensional construct that is distinct from liver dysfunction and incorporates endurance, strength, and balance. Our data provide specific targets for prehabilitation interventions aimed at reducing frailty in patients with cirrhosis in preparation for liver transplantation.

  7. The error analysis of Lobular and segmental division of right liver by volume measurement.

    Science.gov (United States)

    Zhang, Jianfei; Lin, Weigang; Chi, Yanyan; Zheng, Nan; Xu, Qiang; Zhang, Guowei; Yu, Shengbo; Li, Chan; Wang, Bin; Sui, Hongjin

    2017-07-01

    The aim of this study is to explore the inconsistencies between right liver volume as measured by imaging and the actual anatomical appearance of the right lobe. Five healthy donated livers were studied. The liver slices were obtained with hepatic segments multicolor-infused through the portal vein. In the slices, the lobes were divided by two methods: radiological landmarks and real anatomical boundaries. The areas of the right anterior lobe (RAL) and right posterior lobe (RPL) on each slice were measured using Photoshop CS5 and AutoCAD, and the volumes of the two lobes were calculated. There was no statistically significant difference between the volumes of the RAL or RPL as measured by the radiological landmarks (RL) and anatomical boundaries (AB) methods. However, the curves of the square error value of the RAL and RPL measured using CT showed that the three lowest points were at the cranial, intermediate, and caudal levels. The U- or V-shaped curves of the square error rate of the RAL and RPL revealed that the lowest value is at the intermediate level and the highest at the cranial and caudal levels. On CT images, less accurate landmarks were used to divide the RAL and RPL at the cranial and caudal layers. The measured volumes of hepatic segments VIII and VI would be less than their true values, and the measured volumes of hepatic segments VII and V would be greater than their true values, according to radiological landmarks. Clin. Anat. 30:585-590, 2017. © 2017 Wiley Periodicals, Inc.

  8. Enterprise Sustainment Metrics

    Science.gov (United States)

    2015-06-19

    are negatively impacting KPIs” (Parmenter, 2010: 31). In the current state, the Air Force’s AA and PBL metrics are once again split . AA does...must have the authority to “take immediate action to rectify situations that are negatively impacting KPIs” (Parmenter, 2010: 31). 3. Measuring...highest profitability and shareholder value for each company” (2014: 273). By systematically diagraming a process, either through a swim lane flowchart

  9. Liver stiffness measured by magnetic resonance elastography as a risk factor for hepatocellular carcinoma: a preliminary case-control study

    Energy Technology Data Exchange (ETDEWEB)

    Motosugi, Utaroh; Ichikawa, Tomoaki; Koshiishi, Tsuyota; Sano, Katsuhiro; Morisaka, Hiroyuki; Ichikawa, Shintaro; Araki, Tsutomu [University of Yamanashi, Department of Radiology, Yamanashi-ken (Japan); Enomoto, Nobuyuki [University of Yamanashi, 1st Department of Internal Medicine, Yamanashi (Japan); Matsuda, Masanori; Fujii, Hideki [University of Yamanashi, 1st Department of Surgery, Yamanashi (Japan)

    2013-01-15

    To examine if liver stiffness measured by magnetic resonance elastography (MRE) is a risk factor for hepatocellular carcinoma (HCC) in patients with chronic liver disease. By reviewing the records of magnetic resonance (MR) examinations performed at our institution, we selected 301 patients with chronic liver disease who did not have a previous medical history of HCC. All patients underwent MRE and gadoxetic acid-enhanced MR imaging. HCC was identified on MR images in 66 of the 301 patients, who were matched by age to controls from the remaining patients without HCC. MRE images were obtained by visualising elastic waves generated in the liver by pneumatic vibration transferred via a cylindrical passive driver. Risk factors for HCC development were assessed by odds ratios from logistic regression analysis; the candidate factors were gender, liver stiffness by MRE, and serum levels of aspartate transferase, alanine transferase, alpha-fetoprotein, and protein induced by vitamin K absence-II. Multivariate analysis revealed that only liver stiffness by MRE was a significant risk factor for HCC, with an odds ratio (95 % confidence interval) of 1.38 (1.05-1.84). Liver stiffness measured by MRE is an independent risk factor for HCC in patients with chronic liver disease. (orig.)

  10. Climate Change: A New Metric to Measure Changes in the Frequency of Extreme Temperatures using Record Data

    Science.gov (United States)

    Munasinghe, L.; Jun, T.; Rind, D. H.

    2012-01-01

    Consensus on global warming is the result of multiple and varying lines of evidence, and one key ramification is the increase in frequency of extreme climate events including record high temperatures. Here we develop a metric- called "record equivalent draws" (RED)-based on record high (low) temperature observations, and show that changes in RED approximate changes in the likelihood of extreme high (low) temperatures. Since we also show that this metric is independent of the specifics of the underlying temperature distributions, RED estimates can be aggregated across different climates to provide a genuinely global assessment of climate change. Using data on monthly average temperatures across the global landmass we find that the frequency of extreme high temperatures increased 10-fold between the first three decades of the last century (1900-1929) and the most recent decade (1999-2008). A more disaggregated analysis shows that the increase in frequency of extreme high temperatures is greater in the tropics than in higher latitudes, a pattern that is not indicated by changes in mean temperature. Our RED estimates also suggest concurrent increases in the frequency of both extreme high and extreme low temperatures during 2002-2008, a period when we observe a plateauing of global mean temperature. Using daily extreme temperature observations, we find that the frequency of extreme high temperatures is greater in the daily minimum temperature time-series compared to the daily maximum temperature time-series. There is no such observable difference in the frequency of extreme low temperatures between the daily minimum and daily maximum.
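The distribution-free property that a record-based metric like RED exploits rests on a classical fact: in an i.i.d. series of length n, the expected number of record highs is the harmonic number 1 + 1/2 + … + 1/n, whatever the underlying distribution. A small sketch of that building block (not the authors' full RED estimator; the temperature values are hypothetical):

```python
def count_record_highs(series):
    """Number of running record highs in a time series."""
    records, best = 0, float("-inf")
    for x in series:
        if x > best:
            records, best = records + 1, x
    return records

def expected_records_iid(n):
    """Expected record count for an i.i.d. series of length n: H_n = sum 1/k."""
    return sum(1.0 / k for k in range(1, n + 1))

temps = [14.1, 13.8, 14.9, 14.2, 15.3, 15.1, 16.0]  # hypothetical anomalies
print(count_record_highs(temps))          # → 4
print(round(expected_records_iid(7), 2))  # → 2.59
```

An observed record count well above the i.i.d. expectation is evidence that the distribution is shifting, which is the intuition behind aggregating record statistics across climates.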

  11. Correlation study of spleen stiffness measured by FibroTouch with esophageal and gastric varices in patients with liver cirrhosis

    Directory of Open Access Journals (Sweden)

    WEI Yutong

    2015-03-01

    Full Text Available Objective: To explore the correlation of spleen stiffness measured by FibroScan with esophageal and gastric varices in patients with liver cirrhosis. Methods: Spleen and liver stiffness was measured by FibroScan in 72 patients with liver cirrhosis who received gastroscopy in our hospital from December 2012 to December 2013. Categorical data were analyzed by χ2 test, and continuous data were analyzed by t test. Pearson's correlation analysis was used to investigate the correlation between the degree of esophageal varices and spleen stiffness. Results: With the increase in the Child-Pugh score in patients, the measurements of liver and spleen stiffness showed a rising trend. Correlation was found between the measurements of spleen and liver stiffness (r=0.367, P<0.05). The differences in measurements of spleen stiffness between patients with Child-Pugh classes A, B, and C were all significant (t=5.149, 7.231, and 6.119, respectively; P=0.031, 0.025, and 0.037, respectively). The measurements of spleen and liver stiffness showed marked increases in patients with moderate and severe esophageal and gastric varices. The receiver operating characteristic (ROC) curve analysis showed that the area under the ROC curve, sensitivity, and specificity for spleen stiffness were significantly higher than those for liver stiffness and platelet count/spleen thickness. Conclusion: The spleen stiffness measurement by FibroScan shows a good correlation with the esophageal and gastric varices in patients with liver cirrhosis. FibroScan is safe and noninvasive, and especially useful for those who are not suitable for gastroscopy.

  12. Non-Invasive Assessment of Hepatic Fibrosis by Elastic Measurement of Liver Using Magnetic Resonance Tagging Images

    Directory of Open Access Journals (Sweden)

    Xuejun Zhang

    2018-03-01

    Full Text Available To date, measuring the stiffness of the liver has required a special vibrational tool, which limits its application in many hospitals. In this study, we developed a novel method for automatically assessing the elasticity of the liver without any use of contrast agents or mechanical devices. By calculating the non-rigid deformation of the liver from magnetic resonance (MR) tagging images, the stiffness was quantified as the displacement of grids on the liver image during a forced exhalation cycle. Our method includes two major processes: (1) quantification of the non-rigid deformation as the bending energy (BE) based on the thin-plate spline method in the spatial domain, and (2) calculation of the difference in the power spectrum from the tagging images, using the fast Fourier transform in the frequency domain. For 34 cases (17 normal and 17 abnormal liver cases), a remarkable difference between the two groups was found by both methods. The elasticity of the liver was finally analyzed by combining the bending energy and power spectral features obtained from MR tagging images. The result showed that only one abnormal case was misclassified in our dataset, which implies that our method for non-invasive assessment of liver fibrosis has the potential to reduce the need for traditional liver biopsy.

  13. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark, but will esoteric metrics create more problems than they solve? We answer this question affirmatively by examining the case of the TPC-D metric, which used the much-debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives, our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
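The debate this abstract refers to is easy to demonstrate numerically: a single unusually fast query pulls a geometric mean down far more than an arithmetic one, rewarding optimization of outliers. A small sketch with hypothetical per-query times:

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # exp of the mean log avoids overflow on long lists
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical single-stream query times (seconds); one query is very fast.
times = [100.0, 100.0, 100.0, 100.0, 0.1]

print(round(arithmetic_mean(times), 1))  # → 80.0
print(round(geometric_mean(times), 1))   # → 25.1
```

Four of the five queries take 100 s, yet the geometric mean reports ~25 s, which is the distortion the abstract argues against for decision-support workloads.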

  14. 'In-vivo' measurement of selenium in liver using a cyclic activation method

    Energy Technology Data Exchange (ETDEWEB)

    Nicolaou, G E; Spyrou, N M [Surrey Univ., Guildford (UK). Dept. of Physics; Matthews, I P [UKAEA Atomic Energy Research Establishment, Harwell. Environmental and Medical Sciences Div.; Stephens-Newsham, L G [Alberta Univ., Edmonton (Canada)

    1982-01-01

    In-vivo cyclic neutron activation analysis was used to measure selenium concentrations in liver by means of 77mSe (half-life 17.6 s). The cyclic activation facility incorporates an oscillating 5 Ci Am/Be neutron source while the 'patient' remains stationary during the examination. For a total experimental time of 1800 s and a cyclic period of 26 s, a minimum detection limit of 0.6 ppm may be obtained; however, when comparison is made with in-vitro results, this limit may be significantly lower. The dose for such an investigation was approximately 0.26 × 10⁻² Sv.

  15. Liver stiffness measurement by transient elastography predicts late posthepatectomy outcomes in patients undergoing resection for hepatocellular carcinoma.

    Science.gov (United States)

    Rajakannu, Muthukumarassamy; Cherqui, Daniel; Ciacio, Oriana; Golse, Nicolas; Pittau, Gabriella; Allard, Marc Antoine; Antonini, Teresa Maria; Coilly, Audrey; Sa Cunha, Antonio; Castaing, Denis; Samuel, Didier; Guettier, Catherine; Adam, René; Vibert, Eric

    2017-10-01

    Postoperative hepatic decompensation is a serious complication of liver resection in patients undergoing hepatectomy for hepatocellular carcinoma. Liver fibrosis and clinically significant portal hypertension are well-known risk factors for hepatic decompensation. Liver stiffness measurement is a noninvasive method of evaluating hepatic venous pressure gradient and functional hepatic reserve by estimating hepatic fibrosis. The effectiveness of liver stiffness measurement in predicting persistent postoperative hepatic decompensation has not been investigated. Consecutive patients with resectable hepatocellular carcinoma were recruited prospectively, and liver stiffness of nontumoral liver was measured using FibroScan. Hepatic venous pressure gradient was measured intraoperatively by direct puncture of the portal vein and inferior vena cava. Hepatic venous pressure gradient ≥10 mm Hg was defined as clinically significant portal hypertension. The primary outcome was persistent hepatic decompensation, defined as the presence of at least one of the following: unresolved ascites, jaundice, and/or encephalopathy >3 months after hepatectomy. One hundred and six hepatectomies, including 22 right hepatectomies (20.8%), 3 central hepatectomies (2.8%), 12 left hepatectomies (11.3%), 11 bisegmentectomies (10.4%), 30 unisegmentectomies (28.3%), and 28 partial hepatectomies (26.4%), were performed for hepatocellular carcinoma (84 men and 22 women with a median age of 67.5 years; median model for end-stage liver disease score of 8). Ninety-day mortality was 4.7%. Nine patients (8.5%) developed postoperative hepatic decompensation. Multivariate logistic regression bootstrapped at 1,000 identified liver stiffness measurement (P = .001) as the only preoperative predictor of postoperative hepatic decompensation. Area under receiver operating characteristic curve for liver stiffness measurement and hepatic venous pressure gradient was 0.81 (95% confidence interval, 0.506-0.907) and 0

  16. Electrical conductivity measurement of excised human metastatic liver tumours before and after thermal ablation.

    Science.gov (United States)

    Haemmerich, Dieter; Schutt, David J; Wright, Andrew W; Webster, John G; Mahvi, David M

    2009-05-01

    We measured the ex vivo electrical conductivity of eight human metastatic liver tumours and six normal liver tissue samples from six patients using the four-electrode method over the frequency range 10 Hz to 1 MHz. In addition, in a single patient we measured the electrical conductivity before and after the thermal ablation of normal and tumour tissue. The average conductivity of tumour tissue was significantly higher than that of normal tissue over the entire frequency range (from 4.11 versus 0.75 mS cm⁻¹ at 10 Hz, to 5.33 versus 2.88 mS cm⁻¹ at 1 MHz). We found no significant correlation between tumour size and measured electrical conductivity. While before ablation tumour tissue had considerably higher conductivity than normal tissue, the two had similar conductivity throughout the frequency range after ablation. Tumour tissue conductivity changed by +25% and -7% at 10 Hz and 1 MHz after ablation (0.23 to 0.29 at 10 Hz, and 0.43 to 0.40 at 1 MHz), while normal tissue conductivity increased by +270% and +10% at 10 Hz and 1 MHz (0.09 to 0.32 at 10 Hz and 0.37 to 0.41 at 1 MHz). These data can potentially be used to differentiate tumour from normal tissue diagnostically.

  17. Metrics of quantum states

    International Nuclear Information System (INIS)

    Ma Zhihao; Chen Jingling

    2011-01-01

    In this work we study metrics of quantum states, which are natural generalizations of the usual trace metric and Bures metric. Some useful properties of the metrics are proved, such as the joint convexity and contractivity under quantum operations. Our result has a potential application in studying the geometry of quantum states as well as the entanglement detection.

  18. An Experimental Study to Measure the Mechanical Properties of the Human Liver.

    Science.gov (United States)

    Karimi, Alireza; Shojaei, Ahmad

    2018-01-01

    Since the liver is one of the most important organs of the body and can be injured during trauma, for instance in car crashes, understanding its mechanical properties is of great interest. Experimental data are needed to characterize the mechanical properties of the liver for a variety of applications, such as numerical simulations for medical purposes, including virtual reality simulators, trauma research, diagnostic objectives, and injury biomechanics. However, data on the mechanical properties of the liver capsule are limited to animal models or confined to tensile/compressive loading in a single direction. Therefore, this study aimed to experimentally measure the axial and transversal mechanical properties of the human liver capsule under both tensile and compressive loadings. To do that, 20 human cadavers were autopsied and their liver capsules were excised and histologically analyzed to extract the mean angle of the large fiber population (bundles of fine collagen fibers). Thereafter, the samples were cut and subjected to a series of axial and transversal tensile/compressive loadings. The results revealed a tensile elastic modulus of 12.16 ± 1.20 (mean ± SD) and 7.17 ± 0.85 kPa under the axial and transversal loadings, respectively. Correspondingly, compressive elastic moduli of 196.54 ± 13.15 and 112.41 ± 8.98 kPa were observed under the axial and transversal loadings, respectively. The compressive axial and transversal maximum/failure stresses of the capsule were 32.54 and 37.30 times higher than the tensile ones, respectively. The capsule showed stiffer behavior under compressive load compared to tensile load. In addition, the axial elastic modulus of the capsule was found to be higher than the transversal one. The findings of the current study have implications not only for understanding the mechanical properties of the human capsule tissue under tensile

  19. Diagnostic accuracy and prognostic significance of blood fibrosis tests and liver stiffness measurement by FibroScan in non-alcoholic fatty liver disease.

    Science.gov (United States)

    Boursier, Jérôme; Vergniol, Julien; Guillet, Anne; Hiriart, Jean-Baptiste; Lannes, Adrien; Le Bail, Brigitte; Michalak, Sophie; Chermak, Faiza; Bertrais, Sandrine; Foucher, Juliette; Oberti, Frédéric; Charbonnier, Maude; Fouchard-Hubert, Isabelle; Rousselet, Marie-Christine; Calès, Paul; de Lédinghen, Victor

    2016-09-01

    NAFLD is highly prevalent but only a small subset of patients develop advanced liver fibrosis with impaired liver-related prognosis. We aimed to compare blood fibrosis tests and liver stiffness measurement (LSM) by FibroScan for the diagnosis of liver fibrosis and the evaluation of prognosis in NAFLD. Diagnostic accuracy was evaluated in a cross-sectional study including 452 NAFLD patients with liver biopsy (NASH-CRN fibrosis stage), LSM, and eight blood fibrosis tests (BARD, NAFLD fibrosis score, FibroMeter(NAFLD), aspartate aminotransferase to platelet ratio index (APRI), FIB4, FibroTest, Hepascore, FibroMeter(V2G)). Prognostic accuracy was evaluated in a longitudinal study including 360 NAFLD patients. LSM and FibroMeter(V2G) were the two best-performing tests in the cross-sectional study: AUROCs for advanced fibrosis (F3/4) were, respectively, 0.831±0.019 and 0.817±0.020 (p⩽0.041 vs. other tests); rates of patients with ⩾90% negative/positive predictive values for F3/4 were 56.4% and 46.7% (significantly higher than the other tests); Obuchowski indexes were 0.834±0.014 and 0.798±0.016 (p⩽0.036 vs. other tests). Two fibrosis classifications were developed to precisely estimate the histological fibrosis stage from LSM or FibroMeter(V2G) results without liver biopsy (diagnostic accuracy, respectively: 80.8% vs. 77.4%, p=0.190). Kaplan-Meier curves in the longitudinal study showed that both classifications categorised NAFLD patients into subgroups with significantly different prognoses; the higher the fibrosis classification, the worse the prognosis. LSM and FibroMeter(V2G) were the most accurate of the nine evaluated tests for the non-invasive diagnosis of liver fibrosis in NAFLD. LSM and FibroMeter(V2G) fibrosis classifications help physicians estimate both fibrosis stage and patient prognosis in clinical practice. The amount of liver fibrosis is the main determinant of the liver-related prognosis in patients with non-alcoholic fatty liver disease (NAFLD). We evaluated eight blood tests and FibroScan.

  20. Accuracy of liver lesion assessment using automated measurement and segmentation software in biphasic multislice CT (MSCT)

    International Nuclear Information System (INIS)

    Puesken, M.; Juergens, K.U.; Edenfeld, A.; Buerke, B.; Seifarth, H.; Beyer, F.; Heindel, W.; Wessling, J.; Suehling, M.; Osada, N.

    2009-01-01

    Purpose: To assess the accuracy of liver lesion measurement using automated measurement and segmentation software depending on the vascularization level. Materials and Methods: Arterial and portal venous phase multislice CT (MSCT) was performed for 58 patients. 94 liver lesions were evaluated and classified according to vascularity (hypervascular: 13 hepatocellular carcinomas, 20 hemangiomas; hypovascular: 31 metastases, 3 lymphomas, 4 abscesses; liquid: 23 cysts). The RECIST diameter and volume were obtained using automated measurement and segmentation software and compared to corresponding measurements derived visually by two experienced radiologists as a reference standard. Statistical analysis was performed using the Wilcoxon test and concordance correlation coefficients. Results: Automated measurements revealed no significant difference between the arterial and portal venous phase in hypovascular (mean RECIST diameter: 31.4 vs. 30.2 mm; p = 0.65; κ = 0.875) and liquid lesions (20.4 vs. 20.1 mm; p = 0.1; κ = 0.996). The RECIST diameter and volume of hypervascular lesions were significantly underestimated in the portal venous phase as compared to the arterial phase (30.3 vs. 26.9 mm, p = 0.007, κ = 0.834; 10.7 vs. 7.9 ml, p = 0.0045, κ = 0.752). Automated measurements for hypovascular and liquid lesions in the arterial and portal venous phase were concordant to the reference standard. Hypervascular lesion measurements were in line with the reference standard for the arterial phase (30.3 vs. 32.2 mm, p = 0.66, κ = 0.754), but revealed a significant difference for the portal venous phase (26.9 vs. 32.1 mm; p = 0.041; κ = 0.606). (orig.)
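RECIST diameters like those the software reports feed into fixed response thresholds (RECIST 1.1: partial response at a ≥30% decrease in the sum of longest diameters, progressive disease at a ≥20% increase). A simplified sketch, separate from the software the abstract evaluates; it omits complete response and the absolute 5 mm increase required for progression:

```python
def recist_response(baseline_sum_mm: float, followup_sum_mm: float) -> str:
    """Classify change in sum of longest diameters (simplified RECIST 1.1)."""
    change = (followup_sum_mm - baseline_sum_mm) / baseline_sum_mm
    if change <= -0.30:
        return "partial response"
    if change >= 0.20:
        return "progressive disease"
    return "stable disease"

print(recist_response(31.4, 20.0))  # → partial response
print(recist_response(31.4, 39.0))  # → progressive disease
```

Because the thresholds are percentages of the measured diameter, a systematic phase-dependent underestimation like the one reported for hypervascular lesions can shift lesions across a response boundary.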

  1. CT head-scan dosimetry in an anthropomorphic phantom and associated measurement of ACR accreditation-phantom imaging metrics under clinically representative scan conditions

    Energy Technology Data Exchange (ETDEWEB)

    Brunner, Claudia C.; Stern, Stanley H.; Chakrabarti, Kish [U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993 (United States); Minniti, Ronaldo [National Institute of Standards and Technology, 100 Bureau Drive, Gaithersburg, Maryland 20899 (United States); Parry, Marie I. [Walter Reed National Military Medical Center, 8901 Rockville Pike, Bethesda, Maryland 20889 (United States); Skopec, Marlene [National Institutes of Health, 9000 Rockville Pike, Bethesda, Maryland 20892 (United States)

    2013-08-15

    Purpose: To measure radiation absorbed dose and its distribution in an anthropomorphic head phantom under clinically representative scan conditions in three widely used computed tomography (CT) scanners, and to relate those dose values to metrics such as high-contrast resolution, noise, and contrast-to-noise ratio (CNR) in the American College of Radiology CT accreditation phantom.Methods: By inserting optically stimulated luminescence dosimeters (OSLDs) in the head of an anthropomorphic phantom specially developed for CT dosimetry (University of Florida, Gainesville), we measured dose with three commonly used scanners (GE Discovery CT750 HD, Siemens Definition, Philips Brilliance 64) at two different clinical sites (Walter Reed National Military Medical Center, National Institutes of Health). The scanners were set to operate with the same data-acquisition and image-reconstruction protocols as used clinically for typical head scans, respective of the practices of each facility for each scanner. We also analyzed images of the ACR CT accreditation phantom with the corresponding protocols. While the Siemens Definition and the Philips Brilliance protocols utilized only conventional, filtered back-projection (FBP) image-reconstruction methods, the GE Discovery also employed its particular version of an adaptive statistical iterative reconstruction (ASIR) algorithm that can be blended in desired proportions with the FBP algorithm. We did an objective image-metrics analysis evaluating the modulation transfer function (MTF), noise power spectrum (NPS), and CNR for images reconstructed with FBP. For images reconstructed with ASIR, we only analyzed the CNR, since MTF and NPS results are expected to depend on the object for iterative reconstruction algorithms.Results: The OSLD measurements showed that the Siemens Definition and the Philips Brilliance scanners (located at two different clinical facilities) yield average absorbed doses in tissue of 42.6 and 43.1 m

  2. Relative Citation Ratio of Top Twenty Macedonian Biomedical Scientists in PubMed: A New Metric that Uses Citation Rates to Measure Influence at the Article Level.

    Science.gov (United States)

    Spiroski, Mirko

    2016-06-15

    The aim of this study was to analyze the relative citation ratio (RCR) of the top twenty Macedonian biomedical scientists with a new metric that uses citation rates to measure influence at the article level. The top twenty Macedonian biomedical scientists were identified with GoPubMed on the basis of the number of abstracts deposited in PubMed, corrected with data from a previously published paper, and completed with Macedonian biomedical scientists working in countries outside the Republic of Macedonia but born or previously employed in the country. iCite was used as a tool to access a dashboard of bibliometrics for papers associated with a portfolio. Most of the top twenty Macedonian biomedical scientists have an RCR lower than one. Only four Macedonian biomedical scientists have an RCR above the PubMed average. The most prominent RCR, 2.29, belongs to Rosoklija G. Among individual papers deposited in PubMed, the highest RCR (35.19) belongs to a paper by Efremov D, which also has the largest number of authors (860). The top twenty Macedonian biomedical scientists should be taken as an illustration of this new metric that uses citation rates to measure influence at the article level, rather than as a ranking of the best Macedonian biomedical scientists.

  3. Coverage Metrics for Model Checking

    Science.gov (United States)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.

  4. Factors influencing reliability of liver stiffness measurements using transient elastography (M-probe)—Monocentric experience

    Energy Technology Data Exchange (ETDEWEB)

    Şirli, Roxana, E-mail: roxanasirli@gmail.com; Sporea, Ioan, E-mail: isporea@umft.ro; Bota, Simona, E-mail: bota_simona1982@yahoo.com; Jurchiş, Ana, E-mail: ana.jurchis@yahoo.com

    2013-08-15

    Aim: To retrospectively assess the feasibility of transient elastography (TE) and the factors associated with failed and unreliable liver stiffness measurements (LSMs), in patients with chronic liver diseases. Material and methods: Our retrospective study included 8218 consecutive adult patients with suspected chronic liver diseases. In each patient, LSMs were performed with a FibroScan® device (Echosens, France), with the M probe. TE measurement was defined as failed if no valid measurement was obtained after at least 10 shots, and as unreliable if fewer than 10 valid shots were obtained, the success rate (SR) was <60%, and/or the interquartile range/median value (IQR/Med) was ≥30%. Results: From the 8218 patients, failed and unreliable LSMs were observed in 29.2% of cases. In univariate analysis, the following risk factors were associated with failed and unreliable measurements: age over 50 years (OR 2.04; 95%CI 1.84–2.26), female gender (OR 1.32; 95%CI 1.20–1.45), BMI > 27.7 kg/m² (OR 2.89, 95%CI 2.62–3.19), weight > 77 kg (OR 2.17; 95%CI 1.97–2.40) and height < 162 cm (OR 1.26; 95%CI 1.14–1.40). In multivariate analysis, all the factors mentioned above were independently associated with the risk of failed and unreliable measurements. If all the negative predictive factors were present (woman, older than 50 years, with BMI > 27.7 kg/m², heavier than 77 kg and shorter than 162 cm), the rate of failed and unreliable measurements was 58.5%. In obese patients (BMI ≥ 30 kg/m²), the rate of failed and unreliable measurements was 49.5%. Conclusion: Failed and unreliable LSMs were observed in 29.1% of patients. Female gender, older age, higher BMI, higher weight and smaller height were significantly associated with failed and unreliable LSMs.
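    The reliability criteria stated in this abstract (at least 10 valid shots, success rate ≥60%, IQR/Med <30%) translate directly into a simple check. A minimal sketch, not the FibroScan® implementation:

```python
import statistics

def lsm_reliable(valid_shots, total_shots, measurements_kpa):
    """Classify a transient-elastography session using the abstract's
    criteria: >=10 valid shots, success rate >=60%, IQR/median < 30%."""
    if valid_shots < 10:
        return False  # failed/unreliable: too few valid shots
    if total_shots == 0 or valid_shots / total_shots < 0.60:
        return False  # unreliable: success rate below 60%
    q = statistics.quantiles(measurements_kpa, n=4)  # [Q1, median, Q3]
    iqr = q[2] - q[0]
    med = statistics.median(measurements_kpa)
    return iqr / med < 0.30  # unreliable if IQR/median >= 30%

# Example: 10 valid shots out of 12, tightly clustered stiffness values
print(lsm_reliable(10, 12, [5.1, 5.3, 5.0, 5.4, 5.2, 5.1, 5.3, 5.2, 5.0, 5.5]))
```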

  5. $\eta$-metric structures

    OpenAIRE

    Gaba, Yaé Ulrich

    2017-01-01

    In this paper, we discuss recent results about generalized metric spaces and fixed point theory. We introduce the notion of $\eta$-cone metric spaces, give some topological properties and prove some fixed point theorems for contractive type maps on these spaces. In particular we show that these $\eta$-cone metric spaces are natural generalizations of both cone metric spaces and metric type spaces.

  6. Measuring population transmission risk for HIV: an alternative metric of exposure risk in men who have sex with men (MSM) in the US.

    Directory of Open Access Journals (Sweden)

    Colleen F Kelley

    Full Text Available Various metrics for HIV burden and treatment success [e.g. HIV prevalence, community viral load (CVL), population viral load (PVL), percent of HIV-positive persons with undetectable viral load] have important public health limitations for understanding disparities. Using data from an ongoing HIV incidence cohort of black and white men who have sex with men (MSM), we propose a new metric to measure the prevalence of those at risk of transmitting HIV and illustrate its value. MSM with plasma VL>400 copies/mL were defined as having 'transmission risk'. We calculated HIV prevalence, CVL, PVL, percent of HIV-positive with undetectable viral loads, and prevalence of plasma VL>400 copies/mL (%VL400) for black and white MSM. We used Monte Carlo simulation incorporating data on sexual mixing by race to estimate exposure of black and white HIV-negative MSM to a partner with transmission risk via unprotected anal intercourse (UAI). Of 709 MSM recruited, 42% (168/399) of black and 14% (44/310) of white MSM tested HIV-positive (p<.0001). No significant differences were seen in CVL, PVL, or percent of HIV-positive with undetectable viral loads. The %VL400 was 25% (98/393) for black vs. 8% (25/310) for white MSM (p<.0001). Black MSM with 2 UAI partners were estimated to have a 40% probability (95% CI: 35%, 45%) of having ≥1 UAI partner with transmission risk vs. 20% for white MSM (CI: 15%, 24%). Despite similarities in other metrics, black MSM in our cohort are three times as likely as white MSM to have HIV transmission risk. With comparable risk behaviors, HIV-negative black MSM have a substantially higher likelihood of encountering a UAI partner at risk of transmitting HIV. Our results support increasing HIV testing, linkage to care, and antiretroviral treatment of HIV-positive MSM to reduce the prevalence of those with transmission risk, particularly for black MSM.
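    The study's Monte Carlo estimate incorporated sexual-mixing data; the sketch below is a deliberately simplified, hypothetical version of the underlying idea (independent partner draws at a single %VL400 prevalence, ignoring mixing by race):

```python
import random

def prob_any_partner_at_risk(prevalence, n_partners, trials=100_000, seed=42):
    """Monte Carlo estimate of the probability that at least one of
    n_partners has 'transmission risk' (plasma VL > 400 copies/mL),
    assuming partners are drawn independently at the given prevalence."""
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < prevalence for _ in range(n_partners))
        for _ in range(trials)
    )
    return hits / trials

# %VL400 of 25% (black MSM in the cohort) and 2 UAI partners:
est = prob_any_partner_at_risk(0.25, 2)
print(round(est, 2))  # analytic value under independence: 1 - 0.75**2 = 0.4375
```

Under this independence assumption the answer is about 0.44, close to the 40% the study reports once mixing patterns are accounted for.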

  7. Defining a Progress Metric for CERT RMM Improvement

    Science.gov (United States)

    2017-09-14

    Defining a Progress Metric for CERT-RMM Improvement. Gregory Crabb, Nader Mehravari, David Tobar. September 2017. Technical report excerpts: "... [de]fendable resource allocation decisions. Technical metrics measure aspects of controls implemented through technology (systems, software, hardware). ... An example implementation metric would be the percentage of users who have received anti-phishing training. • Effectiveness/efficiency metrics measure whether ..."

  8. NASA education briefs for the classroom. Metrics in space

    Science.gov (United States)

    The use of metric measurement in space is summarized for classroom use. Advantages of the metric system over the English measurement system are described. Some common metric units are defined, as are special units for astronomical study. International system unit prefixes and a conversion table of metric/English units are presented. Questions and activities for the classroom are recommended.

  9. Measuring the Rate of Change in Sea Level and Its Adherence to USACE Sea Level Rise Planning Scenarios Using Timeseries Metrics

    Science.gov (United States)

    White, K. D.; Huang, N.; Huber, M.; Veatch, W.; Moritz, H.; Obrien, P. S.; Friedman, D.

    2017-12-01

    In 2013, the United States Army Corps of Engineers (USACE) issued guidance for all Civil Works activities to incorporate the effects of sea level change as described in three distinct planning scenarios.[1] These planning scenarios provided a useful framework for incorporating these effects into Civil Works activities, but required the manual calculation of the scenarios for a given gage and set of datums. To address this need, USACE developed the Sea Level Change Curve Calculator (SLCCC) in 2014, which provided a "simple, web-based tool to provide repeatable analytical results."[2] USACE has been developing a successor to the SLCCC application that retains the same intuitive functionality to calculate these planning scenarios, but also allows the comparison of actual sea level change between 1992 and today against the projections, and builds on the user's ability to understand the rate of change using a variety of timeseries metrics (e.g. moving averages, trends) and related visualizations. These new metrics help both illustrate and measure the complexity and nuances of sea level change. [1] ER 1100-2-8162. http://www.publications.usace.army.mil/Portals/76/Publications/EngineerRegulations/ER_1100-2-8162.pdf. [2] SLCC Manual. http://www.corpsclimate.us/docs/SLC_Calculator_Manual_2014_88.pdf.
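    The timeseries metrics mentioned in this record (moving averages, trends) are simple to compute; a minimal sketch of a trailing moving average over hypothetical data:

```python
def moving_average(series, window):
    """Trailing moving average used to smooth a time series;
    returns one value per full window."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

# Five hypothetical annual mean sea levels, in mm relative to a datum
print(moving_average([10, 12, 11, 15, 14], 3))
```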

  10. [Combination of NAFLD Fibrosis Score and liver stiffness measurement for identification of moderate fibrosis stages (II & III) in non-alcoholic fatty liver disease].

    Science.gov (United States)

    Drolz, Andreas; Wehmeyer, Malte; Diedrich, Tom; Piecha, Felix; Schulze Zur Wiesch, Julian; Kluwe, Johannes

    2018-01-01

    Non-alcoholic fatty liver disease (NAFLD) has become one of the most frequent causes of chronic liver disease. Currently, therapeutic options for NAFLD patients are limited, but new pharmacologic agents are being investigated in the course of clinical trials. Because most of these studies are focusing on patients with fibrosis stages II and III (according to Kleiner), non-invasive identification of patients with intermediate fibrosis stages (II and III) is of increasing interest. Evaluation of NAFLD Fibrosis Score (NFS) and liver stiffness measurement (LSM) for prediction of fibrosis stages II/III. Patients with histologically confirmed NAFLD diagnosis were included in the study. All patients underwent a clinical and laboratory examination as well as a LSM prior to liver biopsy. Predictive value of NFS and LSM with respect to identification of fibrosis stages II/III was assessed. 134 NAFLD patients were included and analyzed. Median age was 53 (IQR 36 - 60) years, 55 patients (41 %) were female. 82 % of our patients were overweight/obese with typical aspects of metabolic syndrome. 84 patients (66 %) had liver fibrosis, 42 (50 %) advanced fibrosis. LSM and NFS correlated with fibrosis stage (r = 0.696 and r = 0.685, respectively; p stages II/III. If both criteria were met, probability of fibrosis stage II/III was 61 %. If none of the two criteria was met, chance for fibrosis stage II/III was only 6 % (negative predictive value 94 %). Combination of LSM and NFS enables identification of patients with significant probability of fibrosis stage II/III. Accordingly, these tests, especially in combination, may be a suitable screening tool for fibrosis stages II/III in NAFLD. The use of these non-invasive methods might also help to avoid unnecessary biopsies. © Georg Thieme Verlag KG Stuttgart · New York.
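    The NAFLD Fibrosis Score used in this study is conventionally computed from the published Angulo et al. formula. The sketch below uses those published coefficients, not values derived from this study, so treat it as illustrative:

```python
def nafld_fibrosis_score(age_yr, bmi, ifg_or_diabetes, ast, alt,
                         platelets_1e9_per_l, albumin_g_dl):
    """NAFLD Fibrosis Score (Angulo et al., Hepatology 2007).
    ifg_or_diabetes: 1 if impaired fasting glucose or diabetes, else 0."""
    return (-1.675
            + 0.037 * age_yr
            + 0.094 * bmi
            + 1.13 * ifg_or_diabetes
            + 0.99 * (ast / alt)
            - 0.013 * platelets_1e9_per_l
            - 0.66 * albumin_g_dl)

# Hypothetical patient: 53 y, BMI 31, diabetic, AST 48, ALT 60,
# platelets 220 x 10^9/L, albumin 4.2 g/dL
print(round(nafld_fibrosis_score(53, 31, 1, 48, 60, 220, 4.2), 2))  # → -0.51
```

The commonly cited cutoffs are roughly −1.455 (low probability of advanced fibrosis) and 0.676 (high probability); the combination rule with LSM described in the abstract is this study's own contribution.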

  11. Measuring emergency physicians' work: factoring in clinical hours, patients seen, and relative value units into 1 metric.

    Science.gov (United States)

    Silich, Bert A; Yang, James J

    2012-05-01

    Measuring workplace performance is important to emergency department management. If an unreliable model is used, the results will be inaccurate. Use of inaccurate results to make decisions, such as how to distribute the incentive pay, will lead to rewarding the wrong people and will potentially demoralize top performers. This article demonstrates a statistical model to reliably measure the work accomplished, which can then be used as a performance measurement.

  12. Measuring Emergency Physicians’ Work: Factoring in Clinical Hours, Patients Seen, and Relative Value Units into 1 Metric

    Directory of Open Access Journals (Sweden)

    Bert A. Silich, MD, MS

    2012-05-01

    Full Text Available Measuring workplace performance is important to emergency department management. If an unreliable model is used, the results will be inaccurate. Use of inaccurate results to make decisions, such as how to distribute the incentive pay, will lead to rewarding the wrong people and will potentially demoralize top performers. This article demonstrates a statistical model to reliably measure the work accomplished, which can then be used as a performance measurement.

  13. Fatty Liver

    International Nuclear Information System (INIS)

    Filippone, A.; Digiovandomenico, V.; Digiovandomenico, E.; Genovesi, N.; Bonomo, L.

    1991-01-01

    The authors report their experience with the combined use of US and CT in the study of diffuse and subtotal fatty infiltration of the liver. An apparent disagreement was initially found between the two examinations in the study of fatty infiltration. Fifty-five patients were studied with US and CT of the upper abdomen, as clinically indicated. US showed normal liver echogenicity in 30 patients and diffusely increased echogenicity (bright liver) in 25 cases. In 5 patients with bright liver, US demonstrated a solitary hypoechoic area, appearing as a 'skip area', in the quadrate lobe. In 2 patients with bright liver, the hypoechoic area was seen in the right lobe and exhibited no typical US features of a 'skip area'. Bright liver was quantified by measuring the CT density of both liver and spleen. The relative attenuation values of spleen and liver were compared on plain and enhanced CT scans. In 5 cases with a hypoechoic area in the right lobe, CT findings were suggestive of hemangioma. A good correlation was found between bright liver and CT attenuation values, which decrease with increasing fat content of the liver. Moreover, CT attenuation values confirmed US findings in the study of typical 'skip areas' by demonstrating normal density, which suggests that CT can characterize normal tissue in atypical 'skip areas'.

  14. Regge calculus from discontinuous metrics

    International Nuclear Information System (INIS)

    Khatsymovsky, V.M.

    2003-01-01

    Regge calculus is considered as a particular case of the more general system where the link lengths of any two neighbouring 4-tetrahedra do not necessarily coincide on their common face. This system is treated as one described by a metric that is discontinuous on the faces. In the superspace of all discontinuous metrics, the Regge calculus metrics form a hypersurface defined by continuity conditions. The quantum theory of the discontinuous metric system is assumed to be fixed somehow in the form of a quantum measure on (the space of functionals on) the superspace. The problem of reducing this measure to the Regge hypersurface is addressed. The quantum Regge calculus measure is defined from a discontinuous metric measure by inserting a δ-function-like phase factor. The requirement that continuity conditions be imposed in a 'face-independent' way fixes this factor uniquely. The term 'face-independent' means that this factor depends only on the (hyper)plane spanned by the face, not on its form and size. This requirement seems natural from the viewpoint of the existence of a well-defined continuum limit maximally free of lattice artefacts.

  15. Web metrics for library and information professionals

    CERN Document Server

    Stuart, David

    2014-01-01

    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; and the future of web metrics and the library and information professional.

  16. Measuring Progress in Conflict Environments (MPICE) - A Metrics Framework for Assessing Conflict Transformation and Stabilization. Version 1.0

    National Research Council Canada - National Science Library

    Dziedzic, Michael; Sotirin, Barbara; Agoglia, John

    2008-01-01

    There has been a long standing need for "Measures of Effectiveness," as they are often called in the private sector, focused on diplomatic, military and development efforts in places prone to conflict. Traditionally, U.S...

  17. Sex determination in femurs of modern Egyptians: A comparative study between metric measurements and SRY gene detection

    Directory of Open Access Journals (Sweden)

    Iman F. Gaballah

    2014-12-01

    Conclusion: The SRY gene detection method for sex determination is quick and simple, requiring only one PCR reaction. It corroborates the results obtained from anatomical measurements and further confirms the sex of the femur bone in question.

  18. Non-invasive measurement of liver and pancreas fibrosis in patients with cystic fibrosis.

    Science.gov (United States)

    Friedrich-Rust, Mireen; Schlueter, Nina; Smaczny, Christina; Eickmeier, Olaf; Rosewich, Martin; Feifel, Kirstin; Herrmann, Eva; Poynard, Thierry; Gleiber, Wolfgang; Lais, Christoph; Zielen, Stefan; Wagner, Thomas O F; Zeuzem, Stefan; Bojunga, Joerg

    2013-09-01

    Patients with cystic fibrosis (CF) have a relevant morbidity and mortality caused by CF-related liver-disease. While transient elastography (TE) is an established elastography method in hepatology centers, Acoustic-Radiation-Force-Impulse (ARFI)-Imaging is a novel ultrasound-based elastography method which is integrated in a conventional ultrasound-system. The aim of the present study was to evaluate the prevalence of liver-fibrosis in patients with CF using TE, ARFI-imaging and fibrosis blood tests. 106 patients with CF were prospectively included in the present study and received ARFI-imaging of the left and right liver-lobe, ARFI of the pancreas TE of the liver and laboratory evaluation. The prevalence of liver-fibrosis according to recently published best practice guidelines for CFLD was 22.6%. Prevalence of significant liver-fibrosis assessed by TE, ARFI-right-liver-lobe, ARFI-left-liver-lobe, Fibrotest, Fibrotest-corrected-by-haptoglobin was 17%, 24%, 40%, 7%, and 16%, respectively. The best agreement was found for TE, ARFI-right-liver-lobe and Fibrotest-corrected-by-haptoglobin. Patients with pancreatic-insufficiency had significantly lower pancreas-ARFI-values as compared to patients without. ARFI-imaging and TE seem to be promising non-invasive methods for detection of liver-fibrosis in patients with CF. Copyright © 2013 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.

  19. Virtual reality as a metric for the assessment of laparoscopic psychomotor skills. Learning curves and reliability measures.

    Science.gov (United States)

    Gallagher, A G; Satava, R M

    2002-12-01

    The objective assessment of the psychomotor skills of surgeons is now a priority; however, this is a difficult task because of measurement difficulties associated with the assessment of surgery in vivo. In this study, virtual reality (VR) was used to overcome these problems. Twelve experienced (>50 minimal-access procedures) and 12 inexperienced laparoscopic surgeons performed tasks on the Minimally Invasive Surgical Trainer-Virtual Reality (MIST VR). Experienced laparoscopic surgeons performed the tasks significantly (p < 0.01) faster, with less error, more economy in the movement of instruments and the use of diathermy, and with greater consistency in performance. The standardized coefficient alpha for performance measures ranged from α = 0.89 to 0.98, showing high internal measurement consistency. Test-retest reliability ranged from r = 0.5 to r = 0.96. VR is a useful tool for evaluating the psychomotor skills needed to perform laparoscopic surgery.
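    The internal-consistency statistic reported in this abstract is Cronbach's coefficient alpha; a minimal numpy sketch with hypothetical task scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's coefficient alpha for an (n_subjects x k_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Three trials of the same task for five hypothetical surgeons
trials = [[62, 60, 58],
          [75, 71, 73],
          [48, 50, 47],
          [81, 79, 83],
          [55, 57, 54]]
print(round(cronbach_alpha(trials), 2))
```

Highly consistent trials, as here, push alpha toward 1, matching the 0.89-0.98 range reported.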

  20. Light diffuseness metric, part 2 : Describing, measuring and visualizing the light flow and diffuseness in three-dimensional spaces

    NARCIS (Netherlands)

    Xia, L.; Pont, S.C.; Heynderickx, I.E.J.

    2017-01-01

    We introduce a way to simultaneously measure the light density, light vector and diffuseness of the light field using a cubic illumination meter based on the spherical harmonics representation of the light field. This approach was applied to six light probe images of natural scenes and four real

  1. How to measure top-down vs. bottom-up effects: A new population metric and its calibration on Daphnia

    NARCIS (Netherlands)

    Polishchuk, L.; Vijverberg, J.; Voronov, D.A.; Mooij, W.M.

    2013-01-01

    Research on the role of top–down (predation) and bottom–up (food) effects in food webs has led to the understanding that the variability of these effects in space and time is a fundamental feature of natural systems. Consequently, our measurement tools must allow us to evaluate the effects from a

  2. A framework for operationalization of strategic plans and metrics for corporate performance measurement in transportation asset management

    Science.gov (United States)

    Mteri, Hassan H.

    This thesis investigated the business processes required to translate corporate-level strategic plans into tactical and operational plans in the context of transportation asset management. The study also developed a framework for effective performance measure for departments of transportation. The thesis was based on a case study of transportation agencies in the U.S.A. and Canada. The scope is therefore limited or more directly applicable to transportation assets such as pavement, bridges and culverts. The goal was to address the problem of translating or managing strategic plans, especially in the context of the public sector responsible for operating transportation infrastructure. It was observed that many agencies have been successful in formulating good strategic plans but they have performed relatively poorly in translating such corporate-level strategic plans into operational activities. A questionnaire survey was designed and targeted about 30 state agencies that are currently active in transportation asset management. Twenty one (21) transportation agencies in the USA and Canada responded to the questionnaire. The analysis of the questionnaire data showed that there is a lack of a standard approach to managing corporate strategic plans in transportation agencies. The results also indicated that most transportation agencies operate in three organizational levels but there was no systematic approach of translating goal and objectives from high level to lower levels. Approaches in performance measurement were found to vary from agency to agency. A number of limitations were identified in the existing practice on performance measurements. Key weaknesses include the large number of measures in use (as many as 25 or more), and the disconnection between the measures used and the corporate goals and objectives. Lessons from the private sector were thoroughly reviewed in order to build the groundwork for adapting existing tools to the public sector. The existing

  3. Liver transplant

    Science.gov (United States)

    Hepatic transplant; Transplant - liver; Orthotopic liver transplant; Liver failure - liver transplant; Cirrhosis - liver transplant ... The donated liver may be from: A donor who has recently died and has not had liver injury. This type of ...

  4. Quantitative MRI for hepatic fat fraction and T2* measurement in pediatric patients with non-alcoholic fatty liver disease.

    Science.gov (United States)

    Deng, Jie; Fishbein, Mark H; Rigsby, Cynthia K; Zhang, Gang; Schoeneman, Samantha E; Donaldson, James S

    2014-11-01

    Non-alcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease in children. The gold standard for diagnosis is liver biopsy. MRI is a non-invasive imaging method to provide quantitative measurement of hepatic fat content. The methodology is particularly appealing for the pediatric population because of its rapidity and radiation-free imaging techniques. To develop a multi-point Dixon MRI method with multi-interference models (multi-fat-peak modeling and bi-exponential T2* correction) for accurate hepatic fat fraction (FF) and T2* measurements in pediatric patients with NAFLD. A phantom study was first performed to validate the accuracy of the MRI fat fraction measurement by comparing it with the chemical fat composition of the ex-vivo pork liver-fat homogenate. The most accurate model determined from the phantom study was used for fat fraction and T2* measurements in 52 children and young adults referred from the pediatric hepatology clinic with suspected or identified NAFLD. Separate T2* values of water (T2*W) and fat (T2*F) components derived from the bi-exponential fitting were evaluated and plotted as a function of fat fraction. In ten patients undergoing liver biopsy, we compared histological analysis of liver fat fraction with MRI fat fraction. In the phantom study the 6-point Dixon with 5-fat-peak, bi-exponential T2* modeling demonstrated the best precision and accuracy in fat fraction measurements compared with other methods. This model was further calibrated with chemical fat fraction and applied in patients, where similar patterns were observed as in the phantom study that conventional 2-point and 3-point Dixon methods underestimated fat fraction compared to the calibrated 6-point 5-fat-peak bi-exponential model (P < 0.0001). With increasing fat fraction, T2*W (27.9 ± 3.5 ms) decreased, whereas T2*F (20.3 ± 5.5 ms) increased; and T2*W and T2*F became increasingly more similar when fat fraction was higher than 15-20%. Histological fat
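    The study's 6-point, multi-fat-peak, bi-exponential model is considerably more elaborate, but the core Dixon idea can be illustrated with the simple two-point in-phase/opposed-phase form; a minimal sketch with made-up signal values:

```python
def dixon_fat_fraction(in_phase, opposed_phase):
    """Two-point Dixon: water (W) and fat (F) signals combine as
    IP = W + F and OP = W - F, so FF = F / (W + F) = (IP - OP) / (2 * IP)."""
    fat = (in_phase - opposed_phase) / 2.0
    water = (in_phase + opposed_phase) / 2.0
    return fat / (fat + water)

# A 20% fat voxel: W = 80, F = 20 -> IP = 100, OP = 60
print(dixon_fat_fraction(100.0, 60.0))  # → 0.2
```

The 6-point multi-peak method generalizes this by modeling several fat spectral peaks and separate T2* decay for water and fat, which is why it avoids the underestimation reported for the 2- and 3-point variants.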

  5. Quantitative MRI for hepatic fat fraction and T2* measurement in pediatric patients with non-alcoholic fatty liver disease

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Jie; Rigsby, Cynthia K.; Donaldson, James S. [Ann and Robert H. Lurie Children's Hospital of Chicago, Department of Medical Imaging, Chicago, IL (United States); Northwestern University, Department of Radiology, Feinberg School of Medicine, Chicago, IL (United States); Fishbein, Mark H. [Ann and Robert H. Lurie Children's Hospital of Chicago, Division of Gastroenterology, Hepatology, and Nutrition, Chicago, IL (United States); Zhang, Gang [Ann and Robert H. Lurie Children's Hospital of Chicago, Biostatistics Research Core, Chicago, IL (United States); Schoeneman, Samantha E. [Ann and Robert H. Lurie Children's Hospital of Chicago, Department of Medical Imaging, Chicago, IL (United States)

    2014-11-15

    Non-alcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease in children. The gold standard for diagnosis is liver biopsy. MRI is a non-invasive imaging method to provide quantitative measurement of hepatic fat content. The methodology is particularly appealing for the pediatric population because of its rapidity and radiation-free imaging techniques. To develop a multi-point Dixon MRI method with multi-interference models (multi-fat-peak modeling and bi-exponential T2* correction) for accurate hepatic fat fraction (FF) and T2* measurements in pediatric patients with NAFLD. A phantom study was first performed to validate the accuracy of the MRI fat fraction measurement by comparing it with the chemical fat composition of the ex-vivo pork liver-fat homogenate. The most accurate model determined from the phantom study was used for fat fraction and T2* measurements in 52 children and young adults referred from the pediatric hepatology clinic with suspected or identified NAFLD. Separate T2* values of water (T2*W) and fat (T2*F) components derived from the bi-exponential fitting were evaluated and plotted as a function of fat fraction. In ten patients undergoing liver biopsy, we compared histological analysis of liver fat fraction with MRI fat fraction. In the phantom study the 6-point Dixon with 5-fat-peak, bi-exponential T2* modeling demonstrated the best precision and accuracy in fat fraction measurements compared with other methods. This model was further calibrated with chemical fat fraction and applied in patients, where similar patterns were observed as in the phantom study that conventional 2-point and 3-point Dixon methods underestimated fat fraction compared to the calibrated 6-point 5-fat-peak bi-exponential model (P < 0.0001). With increasing fat fraction, T2*W (27.9 ± 3.5 ms) decreased, whereas T2*F (20.3 ± 5.5 ms) increased; and T2*W and T2*F became increasingly more similar when fat

  6. Impact of hepatitis C virus genotype-4 eradication following direct acting antivirals on liver stiffness measurement

    Directory of Open Access Journals (Sweden)

    Tag-Adeen M

    2017-10-01

    Full Text Available Mohammed Tag-Adeen,1,2 Ahlam Mohamed Sabra,1 Yuko Akazawa,2 Ken Ohnita,2 Kazuhiko Nakao2 1Department of Internal Medicine, Qena School of Medicine, South Valley University, Qena, Egypt; 2Department of Gastroenterology and Hepatology, Nagasaki School of Biomedical Sciences, Nagasaki University, Nagasaki, Japan Background: Liver fibrosis is the most important prognostic factor in chronic hepatitis C virus (HCV) patients, and Egypt shows the highest worldwide HCV prevalence with genotype-4 predominance. The aim of this study was to investigate the degree of liver stiffness measurement (LSM) improvement after successful HCV eradication. Patients and methods: The study included 84 chronic HCV Egyptian patients, and was conducted at Qena University Hospital from November 1, 2015 till October 31, 2016. LSM was obtained by FibroScan® before starting direct acting antiviral (DAA) treatment and after achieving sustained virologic response-24 (SVR-24). Based on baseline LSM, patients were stratified into F0–F1, F2, F3 and F4 groups (METAVIR). LSM and laboratory data after achieving SVR-24 were compared with those before starting therapy in each fibrosis group (F0–F4); a p-value <0.05 was statistically significant. Results: Following DAA treatment, 80 patients achieved SVR-24; of these, 50 were males (62.5%), mean age: 54.2±7.6 years, and mean body mass index: 28.6±2.2 kg/m2. Mean baseline LSM dropped from 15.6±10.8 to 12.1±8.7 kPa post-SVR; the maximum change of −5.8 occurred in F4 versus −2.79, −1.28 and +0.08 in F3, F2 and F0–F1 respectively (p<0.0001). At baseline, 41 patients were in the F4 group; only 16 (39%) regressed to non-cirrhotic range (<12.5 kPa), while 25 (61%) were still cirrhotic despite achieving SVR-24 (p<0.0001). Patients who achieved LSM improvement (n=64) had significantly higher baseline aspartate transferase (AST) and alanine transaminase (ALT). Also, those patients showed significant improvement in AST, AST/platelets ratio index

  7. Multivariate analytical figures of merit as a metric for evaluation of quantitative measurements using comprehensive two-dimensional gas chromatography-mass spectrometry.

    Science.gov (United States)

    Eftekhari, Ali; Parastar, Hadi

    2016-09-30

    The present contribution is devoted to developing multivariate analytical figures of merit (AFOMs) as a new metric for evaluation of quantitative measurements using comprehensive two-dimensional gas chromatography-mass spectrometry (GC×GC-MS). In this regard, a new definition of sensitivity (SEN) is extended to GC×GC-MS data, and then other multivariate AFOMs, including analytical SEN (γ), selectivity (SEL) and limit of detection (LOD), are calculated. Also, two frequently used second- and third-order calibration algorithms, multivariate curve resolution-alternating least squares (MCR-ALS) as a representative of multi-set methods and parallel factor analysis (PARAFAC) as a representative of multi-way methods, are discussed to exploit pure component profiles and to calculate multivariate AFOMs. Different GC×GC-MS data sets with different numbers of components, along with various levels of artifacts, are simulated and analyzed. Noise, elution time shifts in both chromatographic dimensions, peak overlap and interferences are considered as the main artifacts in this work. Additionally, a new strategy is developed to estimate the noise level using the variance-covariance matrix of residuals, which is very important for calculating multivariate AFOMs. Finally, determination of polycyclic aromatic hydrocarbons (PAHs) in the aromatic fraction of heavy fuel oil (HFO) analyzed by GC×GC-MS is considered as a real case to confirm the applicability of the proposed metric in real samples. It should be pointed out that the proposed strategy in this work can be used for other types of comprehensive two-dimensional chromatographic (CTDC) techniques, such as comprehensive two-dimensional liquid chromatography (LC×LC). Copyright © 2016 Elsevier B.V. All rights reserved.

  8. METRIC context unit architecture

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, R.O.

    1988-01-01

    METRIC is an architecture for a simple but powerful Reduced Instruction Set Computer (RISC). Its speed comes from the simultaneous processing of several instruction streams, with instructions from the various streams being dispatched into METRIC's execution pipeline as they become available for execution. The pipeline is thus kept full, with a mix of instructions for several contexts in execution at the same time. True parallel programming is supported within a single execution unit, the METRIC Context Unit. METRIC's architecture provides for expansion through the addition of multiple Context Units and of specialized Functional Units. The architecture thus spans a range of size and performance from a single-chip microcomputer up through large and powerful multiprocessors. This research concentrates on the specification of the METRIC Context Unit at the architectural level. Performance tradeoffs made during METRIC's design are discussed, and projections of METRIC's performance are made based on simulation studies.

  9. A Risk Metric Assessment of Scenario-Based Market Risk Measures for Volatility and Risk Estimation: Evidence from Emerging Markets

    Directory of Open Access Journals (Sweden)

    Sitima Innocent

    2015-03-01

    Full Text Available The study evaluated the sensitivity of Value-at-Risk (VaR) and Expected Shortfall (ES) with respect to portfolio allocation in emerging markets with an index portfolio of a developed market. This study utilised different models for VaR and ES techniques using various scenario-based models, such as covariance methods, historical simulation and GARCH(1,1), to assess the predictive ability of these models in both relatively stable market conditions and extreme market conditions. The results showed that Expected Shortfall has less risk tolerance than VaR based on the same scenario-based market risk measures
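The scenario-based measures compared in this record can be illustrated with a minimal historical-simulation sketch. This is not the study's code: the return series, the confidence level, and the simple empirical-quantile convention are all invented for illustration.

```python
# Minimal sketch: historical-simulation VaR and Expected Shortfall (ES).
# Both are reported as positive loss numbers; ES, the mean loss in the tail
# at or beyond VaR, is never below VaR, which reflects its lower risk
# tolerance noted in the abstract.

def historical_var_es(returns, alpha=0.95):
    """Return (VaR, ES) at confidence level `alpha` from a return sample."""
    losses = sorted(-r for r in returns)   # losses, ascending
    cutoff = int(alpha * len(losses))      # index of the empirical quantile
    var = losses[cutoff]                   # loss exceeded (1 - alpha) of the time
    tail = losses[cutoff:]                 # losses at or beyond VaR
    es = sum(tail) / len(tail)             # mean tail loss
    return var, es

# Illustrative daily returns (invented data, not from the study):
returns = [0.01, -0.02, 0.005, -0.035, 0.012, -0.01, 0.02, -0.05, 0.0, 0.015,
           -0.005, 0.03, -0.015, 0.007, -0.025, 0.01, -0.04, 0.02, 0.001, -0.008]
var, es = historical_var_es(returns, alpha=0.90)
```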

  10. Metric adjusted skew information

    DEFF Research Database (Denmark)

    Hansen, Frank

    2008-01-01

    We extend the concept of Wigner-Yanase-Dyson skew information to something we call "metric adjusted skew information" (of a state with respect to a conserved observable). This "skew information" is intended to be a non-negative quantity bounded by the variance (of an observable in a state) that vanishes for observables commuting with the state. We show that the skew information is a convex function on the manifold of states. It also satisfies other requirements, proposed by Wigner and Yanase, for an effective measure-of-information content of a state relative to a conserved observable. We establish a connection between the geometrical formulation of quantum statistics as proposed by Chentsov and Morozova and measures of quantum information as introduced by Wigner and Yanase and extended in this article. We show that the set of normalized Morozova-Chentsov functions describing the possible...

  11. Elevated C-reactive protein and hypoalbuminemia measured before resection of colorectal liver metastases predict postoperative survival.

    Science.gov (United States)

    Kobayashi, Takashi; Teruya, Masanori; Kishiki, Tomokazu; Endo, Daisuke; Takenaka, Yoshiharu; Miki, Kenji; Kobayashi, Kaoru; Morita, Koji

    2010-01-01

    Few studies have investigated whether the Glasgow Prognostic Score (GPS), an inflammation-based prognostic score measured before resection of colorectal liver metastasis (CRLM), can predict postoperative survival. Sixty-three consecutive patients who underwent curative resection for CRLM were investigated. GPS was calculated on the basis of admission data as follows: patients with both an elevated C-reactive protein (>10 mg/l) and hypoalbuminemia (<35 g/l) were allocated a GPS score of 2. Patients in whom only one of these biochemical abnormalities was present were allocated a GPS score of 1, and patients with a normal C-reactive protein and albumin were allocated a score of 0. Significant factors concerning survival were the number of liver metastases (p = 0.0044), carcinoembryonic antigen level (p = 0.0191), GPS (p = 0.0029), grade of liver metastasis (p = 0.0033), and the number of lymph node metastases around the primary cancer (p = 0.0087). Multivariate analysis showed two independent prognostic variables: liver metastases ≥3 (relative risk 2.83) and GPS 1/2 (relative risk 3.07). GPS measured before operation and the number of liver metastases may be used as novel predictors of postoperative outcomes in patients who underwent curative resection for CRLM. Copyright 2010 S. Karger AG, Basel.
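The scoring rule in this abstract is simple enough to sketch directly. The CRP threshold (>10 mg/l) is stated in the record; the albumin cut-off of <35 g/l is the standard GPS threshold, assumed here because the scraped abstract garbles that value.

```python
# Sketch of the Glasgow Prognostic Score (GPS) rule described above.
# Assumed thresholds: CRP > 10 mg/l (from the abstract) and albumin < 35 g/l
# (standard GPS cut-off; garbled in the scraped text).

def glasgow_prognostic_score(crp_mg_per_l, albumin_g_per_l):
    """GPS: 2 if both abnormalities present, 1 if exactly one, else 0."""
    elevated_crp = crp_mg_per_l > 10      # elevated C-reactive protein
    low_albumin = albumin_g_per_l < 35    # hypoalbuminemia
    return int(elevated_crp) + int(low_albumin)

# e.g. CRP 15.2 mg/l with albumin 33 g/l scores GPS 2 (both abnormal).
```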

  12. On Nakhleh's metric for reduced phylogenetic networks

    OpenAIRE

    Cardona, Gabriel; Llabrés, Mercè; Rosselló, Francesc; Valiente Feruglio, Gabriel Alejandro

    2009-01-01

    We prove that Nakhleh’s metric for reduced phylogenetic networks is also a metric on the classes of tree-child phylogenetic networks, semibinary tree-sibling time consistent phylogenetic networks, and multilabeled phylogenetic trees. We also prove that it separates distinguishable phylogenetic networks. In this way, it becomes the strongest dissimilarity measure for phylogenetic networks available so far. Furthermore, we propose a generalization of that metric that separates arbitrary phyl...

  13. Gravitational lensing in metric theories of gravity

    International Nuclear Information System (INIS)

    Sereno, Mauro

    2003-01-01

    Gravitational lensing in metric theories of gravity is discussed. I introduce a generalized approximate metric element, inclusive of both post-post-Newtonian contributions and a gravitomagnetic field. Following Fermat's principle and standard hypotheses, I derive the time delay function and deflection angle caused by an isolated mass distribution. Several astrophysical systems are considered. In most of the cases, the gravitomagnetic correction offers the best perspectives for an observational detection. Actual measurements distinguish only marginally different metric theories from each other

  14. Phosphodiester content measured in human liver by in vivo 31 P MR spectroscopy at 7 tesla.

    Science.gov (United States)

    Purvis, Lucian A B; Clarke, William T; Valkovič, Ladislav; Levick, Christina; Pavlides, Michael; Barnes, Eleanor; Cobbold, Jeremy F; Robson, Matthew D; Rodgers, Christopher T

    2017-12-01

    Phosphorus (31P) metabolites are emerging liver disease biomarkers. Of particular interest are phosphomonoester and phosphodiester (PDE) "peaks" that comprise multiple overlapping resonances in 31P spectra. This study investigates the effect of improved spectral resolution at 7 Tesla (T) on quantifying hepatic metabolites in cirrhosis. Five volunteers were scanned to determine metabolite T1s. Ten volunteers and 11 patients with liver cirrhosis were scanned at 7T. Liver spectra were acquired in 28 min using a 16-channel 31P array and 3D chemical shift imaging. Concentrations were calculated using γ-adenosine-triphosphate (γ-ATP) = 2.65 mmol/L wet tissue. T1 means ± standard deviations: phosphatidylcholine 1.05 ± 0.28 s, nicotinamide-adenine-dinucleotide (NAD+) 2.0 ± 1.0 s, uridine-diphosphoglucose (UDPG) 3.3 ± 1.4 s. Concentrations in healthy volunteers: α-ATP 2.74 ± 0.11 mmol/L wet tissue, inorganic phosphate 2.23 ± 0.20 mmol/L wet tissue, glycerophosphocholine 2.34 ± 0.46 mmol/L wet tissue, glycerophosphoethanolamine 1.50 ± 0.28 mmol/L wet tissue, phosphocholine 1.06 ± 0.16 mmol/L wet tissue, phosphoethanolamine 0.77 ± 0.14 mmol/L wet tissue, NAD+ 2.37 ± 0.14 mmol/L wet tissue, UDPG 2.00 ± 0.22 mmol/L wet tissue, phosphatidylcholine 1.38 ± 0.31 mmol/L wet tissue. Inorganic phosphate and phosphatidylcholine concentrations were significantly lower in patients; glycerophosphoethanolamine concentrations were significantly higher (P < 0.05). We report human in vivo hepatic T1s for phosphatidylcholine, NAD+, and UDPG for the first time at 7T. Our protocol allows high signal-to-noise, repeatable measurement of metabolite concentrations in human liver. The splitting of PDE into its constituent peaks at 7T may allow more insight into changes in metabolism. Magn Reson Med 78:2095-2105, 2017. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on
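Quantification in this record references each metabolite amplitude to γ-ATP at an assumed 2.65 mmol/L wet tissue. A minimal sketch of that ratio-based step follows; the repeated 90° pulse-acquire model, the 1 − exp(−TR/T1) saturation correction, and the TR value are illustrative assumptions, not the paper's exact protocol.

```python
import math

# Sketch: metabolite concentration from peak amplitudes, referenced to the
# gamma-ATP internal standard (assumed 2.65 mmol/L wet tissue, per the
# abstract), with a simple T1 saturation correction. TR/T1 handling here is
# a generic illustration, not the study's acquisition model.

GAMMA_ATP_MM = 2.65  # mmol/L wet tissue, internal concentration reference

def saturation_factor(tr_s, t1_s):
    """Fraction of equilibrium signal recovered between 90-degree pulses."""
    return 1.0 - math.exp(-tr_s / t1_s)

def concentration(amp, t1_s, ref_amp, ref_t1_s, tr_s=1.0):
    """Concentration from a metabolite's amplitude after correcting both it
    and the gamma-ATP reference amplitude for T1 saturation."""
    corrected = amp / saturation_factor(tr_s, t1_s)
    corrected_ref = ref_amp / saturation_factor(tr_s, ref_t1_s)
    return GAMMA_ATP_MM * corrected / corrected_ref
```

With equal T1s the saturation factors cancel and the result is just the amplitude ratio scaled by 2.65 mmol/L.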

  15. The relationship between HbA(1c) and fasting plasma glucose in patients with increased plasma liver enzyme measurements

    DEFF Research Database (Denmark)

    Christiansen, R; Rasmussen, L Melholt; Nybo, H

    2012-01-01

    levels of increased liver enzyme concentrations. Methods:  Data from 10 065 patients with simultaneous measurement of HbA(1c) , venous fasting plasma glucose, alanine aminotransferase and γ-glutamyl transferase were extracted from our laboratory database. Correlations were investigated in four patient...

  16. Measurement of binding of adenine nucleotides and phosphate to cytosolic proteins in permeabilized rat-liver cells

    NARCIS (Netherlands)

    Gankema, H. S.; Groen, A. K.; Wanders, R. J.; Tager, J. M.

    1983-01-01

    1. A method is described for measuring the binding of metabolites to cytosolic proteins in situ in isolated rat-liver cells treated with filipin to render the plasma membrane permeable to compounds of low molecular weight. 2. There is no binding of ATP or inorganic phosphate to cytosolic proteins,

  17. Airborne protein concentration: a key metric for type 1 allergy risk assessment-in home measurement challenges and considerations.

    Science.gov (United States)

    Tulum, Liz; Deag, Zoë; Brown, Matthew; Furniss, Annette; Meech, Lynn; Lalljie, Anja; Cochrane, Stella

    2018-01-01

    Exposure to airborne proteins can be associated with the development of immediate, IgE-mediated respiratory allergies, with genetic, epigenetic and environmental factors also playing a role in determining the likelihood that sensitisation will be induced. The main objective of this study was to determine whether airborne concentrations of selected common aeroallergens could be quantified in the air of homes using easily deployable, commercially available equipment and analytical methods, at low levels relevant to risk assessment of the potential to develop respiratory allergies. Additionally, air and dust sampling were compared and the influence of factors such as different filter types on allergen quantification explored. Low volume air sampling pumps and DUSTREAM® dust samplers were used to sample 20 homes and allergen levels were quantified using a MARIA® immunoassay. It proved possible to detect a range of common aeroallergens in the home with sufficient sensitivity to quantify airborne concentrations in ranges relevant to risk assessment (Limits of Detection of 0.005-0.03 ng/m3). The methodology discriminates between homes related to pet ownership and there were clear advantages to sampling air over dust which are described in this paper. Furthermore, in an adsorption-extraction study, PTFE (polytetrafluoroethylene) filters gave higher and more consistent recovery values than glass fibre (grade A) filters for the range of aeroallergens studied. Very low airborne concentrations of allergenic proteins in home settings can be successfully quantified using commercially available pumps and immunoassays. Considering the greater relevance of air sampling to human exposure of the respiratory tract and its other advantages, wider use of standardised, sensitive techniques to measure low airborne protein concentrations and how they influence development of allergic sensitisation and symptoms could accelerate our understanding of human dose-response relationships
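The ng/m3 concentrations reported above follow from dividing the allergen mass recovered from a filter by the volume of air the pump drew through it. A minimal sketch, where the flow rate and sampling duration are illustrative assumptions rather than the study's protocol values:

```python
# Sketch: airborne allergen concentration from a low-volume pump sample.
# mass_ng is the immunoassay result for the filter extract; flow rate and
# sampling time below are invented for illustration.

def airborne_concentration_ng_per_m3(mass_ng, flow_l_per_min, minutes):
    """Allergen mass on the filter divided by the sampled air volume."""
    litres = flow_l_per_min * minutes
    return mass_ng / (litres / 1000.0)   # 1 m3 = 1000 L

# e.g. 0.02 ng recovered after sampling at 2 L/min for 1000 min (2 m3 of air)
```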

  18. Speech-in-Noise Tests and Supra-threshold Auditory Evoked Potentials as Metrics for Noise Damage and Clinical Trial Outcome Measures.

    Science.gov (United States)

    Le Prell, Colleen G; Brungart, Douglas S

    2016-09-01

    In humans, the accepted clinical standards for detecting hearing loss are the behavioral audiogram, based on the absolute detection threshold of pure-tones, and the threshold auditory brainstem response (ABR). The audiogram and the threshold ABR are reliable and sensitive measures of hearing thresholds in human listeners. However, recent results from noise-exposed animals demonstrate that noise exposure can cause substantial neurodegeneration in the peripheral auditory system without degrading pure-tone audiometric thresholds. It has been suggested that clinical measures of auditory performance conducted with stimuli presented above the detection threshold may be more sensitive than the behavioral audiogram in detecting early-stage noise-induced hearing loss in listeners with audiometric thresholds within normal limits. Supra-threshold speech-in-noise testing and supra-threshold ABR responses are reviewed here, given that they may be useful supplements to the behavioral audiogram for assessment of possible neurodegeneration in noise-exposed listeners. Supra-threshold tests may be useful for assessing the effects of noise on the human inner ear, and the effectiveness of interventions designed to prevent noise trauma. The current state of the science does not necessarily allow us to define a single set of best practice protocols. Nonetheless, we encourage investigators to incorporate these metrics into test batteries when feasible, with an effort to standardize procedures to the greatest extent possible as new reports emerge.

  19. Complexity Metrics for Workflow Nets

    DEFF Research Database (Denmark)

    Lassen, Kristian Bisgaard; van der Aalst, Wil M.P.

    2009-01-01

    analysts have difficulties grasping the dynamics implied by a process model. Recent empirical studies show that people make numerous errors when modeling complex business processes, e.g., about 20 percent of the EPCs in the SAP reference model have design flaws resulting in potential deadlocks, livelocks, etc. It seems obvious that the complexity of the model contributes to design errors and a lack of understanding. It is not easy to measure complexity, however. This paper presents three complexity metrics that have been implemented in the process analysis tool ProM. The metrics are defined for a subclass of Petri nets named Workflow nets, but the results can easily be applied to other languages. To demonstrate the applicability of these metrics, we have applied our approach and tool to 262 relatively complex Protos models made in the context of various student projects. This allows us to validate...

  20. Otherwise Engaged : Social Media from Vanity Metrics to Critical Analytics

    NARCIS (Netherlands)

    Rogers, R.

    2018-01-01

    Vanity metrics is a term that captures the measurement and display of how well one is doing in the “success theater” of social media. The notion of vanity metrics implies a critique of metrics concerning both the object of measurement as well as their capacity to measure unobtrusively or only to

  1. The frequency and determinants of liver stiffness measurement failure: a retrospective study of "real-life" 38,464 examinations.

    Directory of Open Access Journals (Sweden)

    Dong Ji

    Full Text Available To investigate the frequency and determinants of liver stiffness measurement (LSM) failure by means of FibroScan in "real-life" Chinese patients. A total of 38,464 "real-life" Chinese patients seen in the 302 Military Hospital of China throughout 2013, including asymptomatic carriers and patients with chronic hepatitis B, chronic hepatitis C, liver cirrhosis (LC), alcoholic liver disease, autoimmune liver disease, hepatocellular carcinoma (HCC) and other conditions, were enrolled, and their clinical and biological parameters were retrospectively investigated. Liver fibrosis was evaluated by FibroScan detection. The S probe (for children with height less than 1.20 m) and M probe (for adults) were used. LSM failure was defined as zero valid shots (unsuccessful LSM), or a ratio of the interquartile range to the median of 10 measurements (IQR/M) greater than 0.30 together with a median LSM greater than or equal to 7.1 kPa (unreliable LSM). LSM failure occurred in 3.34% of all examinations (1286 patients out of 38,464); among them, there were 958 cases (2.49%) with unsuccessful LSM and 328 patients (0.85%) with unreliable LSM. Statistical analyses showed that LSM failure was independently associated with body mass index (BMI) greater than 30 kg/m2, female sex, age greater than 50 years, intercostal space (IS) less than 9 mm, decompensated liver cirrhosis and HCC. There were no significant differences among the other diseases. By changing to another skilled operator, success was achieved in 301 of the 1286 cases, which reduced the failure rate to 2.56%, a significant decrease (P<0.0001). The principal reasons for LSM failure are ascites, obesity and narrow IS. The failure rates in HCC, decompensated LC, elderly or female patients are higher. These results emphasize the need for adequate operator training, technological improvements and optimal criteria for specific patient subpopulations.
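The failure rule quoted in this record (zero valid shots, or IQR/M > 0.30 with median ≥ 7.1 kPa) can be sketched as a small classifier. The sorted-halves quartile convention below is an illustrative assumption; FibroScan's internal IQR computation may differ.

```python
# Sketch of the LSM failure classification described above: "unsuccessful"
# means zero valid shots; "unreliable" means IQR/median > 0.30 with a
# median >= 7.1 kPa. Quartiles here use a simple sorted-halves convention.
from statistics import median

def classify_lsm(shots_kpa):
    """Classify a FibroScan examination from its valid shot values (kPa)."""
    if not shots_kpa:
        return "unsuccessful"              # zero valid shots
    vals = sorted(shots_kpa)
    mid = len(vals) // 2
    if mid == 0:
        return "reliable"                  # a single shot: no spread to assess
    iqr = median(vals[-mid:]) - median(vals[:mid])   # Q3 - Q1 (sorted halves)
    med = median(vals)
    if iqr / med > 0.30 and med >= 7.1:
        return "unreliable"
    return "reliable"
```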

  2. Metric diffusion along foliations

    CERN Document Server

    Walczak, Szymon M

    2017-01-01

    Up-to-date research in metric diffusion along compact foliations is presented in this book. Beginning with fundamentals from optimal transportation theory and the theory of foliations, this book moves on to cover the Wasserstein distance, the Kantorovich Duality Theorem, and the metrization of the weak topology by the Wasserstein distance. Metric diffusion is defined, the topology of the metric space is studied, and the limits of diffused metrics along compact foliations are discussed. Essentials on foliations, holonomy, heat diffusion, and compact foliations are detailed, and vital technical lemmas are proved to aid understanding. Graduate students and researchers in the geometry, topology and dynamics of foliations and laminations will find this supplement useful, as it presents facts about metric diffusion along non-compact foliations and provides a full description of the limit for metrics diffused along a foliation with at least one compact leaf in dimension two.

  3. Metric modular spaces

    CERN Document Server

    Chistyakov, Vyacheslav

    2015-01-01

    Aimed toward researchers and graduate students familiar with elements of functional analysis, linear algebra, and general topology, this book contains a general study of modulars, modular spaces, and metric modular spaces. Modulars may be thought of as generalized velocity fields and serve two important purposes: generate metric spaces in a unified manner and provide a weaker convergence, the modular convergence, whose topology is non-metrizable in general. Metric modular spaces are extensions of metric spaces, metric linear spaces, and classical modular linear spaces. The topics covered include the classification of modulars, metrizability of modular spaces, modular transforms and duality between modular spaces, metric and modular topologies. Applications illustrated in this book include: the description of superposition operators acting in modular spaces, the existence of regular selections of set-valued mappings, new interpretations of spaces of Lipschitzian and absolutely continuous mappings, the existe...

  4. About the possibility of a generalized metric

    International Nuclear Information System (INIS)

    Lukacs, B.; Ladik, J.

    1991-10-01

    The metric (the structure of the space-time) may depend on the properties of the object measuring it. The case of size dependence of the metric was examined. For this dependence, the simplest possible form of the metric tensor has been constructed which fulfils the following requirements: there be two extremal characteristic scales; the metric be unique and the usual one between them; the change be sudden in the neighbourhood of these scales; and the size of the human body appear as a parameter (postulated on the basis of some philosophical arguments). Estimates have been made for the two extremal length scales according to existing observations. (author) 19 refs

  5. Prognostic Performance Metrics

    Data.gov (United States)

    National Aeronautics and Space Administration — This chapter presents several performance metrics for offline evaluation of prognostics algorithms. A brief overview of different methods employed for performance...

  6. Validation of Metrics for Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2008-01-01

    Full Text Available This paper describes new concepts for the validation of collaborative systems metrics. The paper defines the quality characteristics of collaborative systems, proposes a metric to estimate the quality level of collaborative systems, and reports measurements of collaborative systems quality performed using specially designed software.

  7. Validation of Metrics for Collaborative Systems

    OpenAIRE

    Ion IVAN; Cristian CIUREA

    2008-01-01

    This paper describes new concepts for the validation of collaborative systems metrics. The paper defines the quality characteristics of collaborative systems, proposes a metric to estimate the quality level of collaborative systems, and reports measurements of collaborative systems quality performed using specially designed software.

  8. Software Power Metric Model: An Implementation | Akwukwuma ...

    African Journals Online (AJOL)

    ... and the execution time (TIME) in each case was recorded. We then obtained the application's function point count. Our results show that the proposed metric is computable, consistent in its use of units, and programming-language independent. Keywords: Software attributes, Software power, measurement, Software metric, ...

  9. TU-FG-209-05: Demonstration of the Line Focus Principle Using the Generalized Measured-Relative Object Detectability (GM-ROD) Metric

    Energy Technology Data Exchange (ETDEWEB)

    Russ, M; Shankar, A; Lau, A; Bednarek, D; Rudin, S [University at Buffalo (SUNY), Buffalo, NY (United States)

    2016-06-15

    Purpose: Demonstrate and quantify the augmented resolution due to focal-spot size decrease in images acquired on the anode side of the field, for both small and medium (0.3 and 0.6mm) focal-spot sizes using the experimental task-based GM-ROD metric. Theoretical calculations have shown that a medium focal-spot can achieve the resolution of a small focal-spot if acquired with a tilted anode, effectively providing a higher-output small focal-spot. Methods: The MAF-CMOS (micro-angiographic fluoroscopic complementary-metal-oxide semiconductor) detector (75µm pixel pitch) imaged two copper wire segments of different diameter and a pipeline stent at the central axis and on the anode side of the beam, achieved by tilting the x-ray C-arm (Toshiba Infinix) to 6° and realigning the detector with the perpendicular ray to correct for x-ray obliquity. The relative gain in resolution was determined using the GM-ROD metric, which compares images on the basis of the Fourier transform of the image and the measured NNPS. To emphasize the geometric unsharpness, images were acquired at a magnification of two. Results: Images acquired on the anode side were compared to those acquired on the central axis with the same target-area focal-spot to consider the effect of an angled tube, and for all three objects the advantage of the smaller effective focal-spot was clear, showing a maximum improvement of 36% in GM-ROD. The images obtained with the small focal-spot at the central axis were compared to those of the medium focal-spot at the anode side and, for all objects, the relative performance was comparable. Conclusion: For three objects, the GM-ROD demonstrated the advantage of the anode side focal-spot. The comparable performance of the medium focal-spot on the anode side will allow for a high-output small focal-spot; a necessity in endovascular image-guided interventions. Partial support from an NIH grant R01EB002873 and an equipment grant from Toshiba Medical Systems Corp.

  10. TU-FG-209-05: Demonstration of the Line Focus Principle Using the Generalized Measured-Relative Object Detectability (GM-ROD) Metric

    International Nuclear Information System (INIS)

    Russ, M; Shankar, A; Lau, A; Bednarek, D; Rudin, S

    2016-01-01

    Purpose: Demonstrate and quantify the augmented resolution due to focal-spot size decrease in images acquired on the anode side of the field, for both small and medium (0.3 and 0.6mm) focal-spot sizes using the experimental task-based GM-ROD metric. Theoretical calculations have shown that a medium focal-spot can achieve the resolution of a small focal-spot if acquired with a tilted anode, effectively providing a higher-output small focal-spot. Methods: The MAF-CMOS (micro-angiographic fluoroscopic complementary-metal-oxide semiconductor) detector (75µm pixel pitch) imaged two copper wire segments of different diameter and a pipeline stent at the central axis and on the anode side of the beam, achieved by tilting the x-ray C-arm (Toshiba Infinix) to 6° and realigning the detector with the perpendicular ray to correct for x-ray obliquity. The relative gain in resolution was determined using the GM-ROD metric, which compares images on the basis of the Fourier transform of the image and the measured NNPS. To emphasize the geometric unsharpness, images were acquired at a magnification of two. Results: Images acquired on the anode side were compared to those acquired on the central axis with the same target-area focal-spot to consider the effect of an angled tube, and for all three objects the advantage of the smaller effective focal-spot was clear, showing a maximum improvement of 36% in GM-ROD. The images obtained with the small focal-spot at the central axis were compared to those of the medium focal-spot at the anode side and, for all objects, the relative performance was comparable. Conclusion: For three objects, the GM-ROD demonstrated the advantage of the anode side focal-spot. The comparable performance of the medium focal-spot on the anode side will allow for a high-output small focal-spot; a necessity in endovascular image-guided interventions. Partial support from an NIH grant R01EB002873 and an equipment grant from Toshiba Medical Systems Corp.

  11. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as "gold standard". Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less

  12. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    International Nuclear Information System (INIS)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-01-01

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as "gold standard". Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time
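The evaluation in this record compares a computed liver mask against a manually traced "gold standard" using volume, accuracy, sensitivity, and specificity. A minimal sketch of those statistics follows; the flat 0/1 voxel-mask representation and the voxel volume are illustrative assumptions, not the paper's implementation.

```python
# Sketch: volume and voxel-wise agreement statistics for a computed
# segmentation mask versus a manual gold-standard mask. Masks are flat
# lists of 0/1 voxel labels (an illustrative representation).

def volume_cc(mask, voxel_cc):
    """Segmented volume = number of foreground voxels x voxel volume (cc)."""
    return sum(mask) * voxel_cc

def accuracy_sensitivity_specificity(computed, manual):
    """Voxel-wise agreement of `computed` against the `manual` gold standard."""
    tp = sum(c and m for c, m in zip(computed, manual))          # liver found
    tn = sum((not c) and (not m) for c, m in zip(computed, manual))
    fp = sum(c and (not m) for c, m in zip(computed, manual))    # over-segmented
    fn = sum((not c) and m for c, m in zip(computed, manual))    # missed liver
    accuracy = (tp + tn) / len(manual)
    sensitivity = tp / (tp + fn)    # fraction of liver voxels recovered
    specificity = tn / (tn + fp)    # fraction of background kept
    return accuracy, sensitivity, specificity
```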

  13. Clinical benefit of liver stiffness measurement at 3 months after Kasai hepatoportoenterostomy to predict the liver related events in biliary atresia.

    Directory of Open Access Journals (Sweden)

    Seung Min Hahn

    Full Text Available BACKGROUND: The progression of hepatic fibrosis may result in decompensated hepatic failure with cirrhosis and liver related events (LRE) such as ascites, variceal bleeding, and death, even after successful and timely Kasai hepatoportoenterostomy (HPE) in biliary atresia. The aim of this study is to suggest the clinical benefit of liver stiffness measurement (LSM) using transient elastography at 3 months after the Kasai operation to predict LRE. METHODS: Between January 2007 and December 2011, 69 eligible biliary atresia patients who underwent Kasai HPE and transient elastography before and 3 months after HPE were included. The occurrences of LRE were analyzed for all patients. All patients were divided into 2 groups (with and without LRE) for comparison. Multivariate analysis was used to detect independent risk factors of LRE. The area under the receiver operating characteristic curve (AUROC) was used to establish the optimal LSM cutoff value at 3 months after the Kasai operation for predicting LRE. RESULTS: LSM value, aminotransferase, albumin, bilirubin, and PT-INR differed significantly between the two groups. Multivariate analysis demonstrated the LSM value to be the most powerful independent predictor of the development of LRE. The cutoff value of 19.9 kPa was calculated to be optimal for predicting LRE development, with a total (sensitivity plus specificity) of 1.804. The AUROC was 0.943, with a sensitivity of 85.3% and a specificity of 95.2%. CONCLUSIONS: The LSM value at 3 months after Kasai HPE can be a useful predictor of LRE development.
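The "optimal cutoff" described here, where sensitivity plus specificity is maximized, is conventionally found by maximizing Youden's J = sensitivity + specificity − 1 over candidate thresholds. A sketch under that assumption (the study does not state its exact criterion; names are illustrative):

```python
def youden_cutoff(lsm_kpa, has_event):
    """Pick the LSM cutoff (kPa) maximizing Youden's J = sens + spec - 1.

    `lsm_kpa` are stiffness measurements; `has_event` flags patients who
    later developed a liver-related event. Returns (cutoff, J).
    Illustrative sketch of the usual AUROC-cutoff selection.
    """
    n_pos = sum(1 for e in has_event if e)
    n_neg = len(has_event) - n_pos
    best_cut, best_j = None, -1.0
    for cut in sorted(set(lsm_kpa)):
        # classify "positive" as LSM at or above the candidate cutoff
        sens = sum(1 for v, e in zip(lsm_kpa, has_event) if e and v >= cut) / n_pos
        spec = sum(1 for v, e in zip(lsm_kpa, has_event) if not e and v < cut) / n_neg
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j
```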

  14. MEASUREMENT OF LARGE-SCALE SOLAR POWER PLANT BY USING IMAGES ACQUIRED BY NON-METRIC DIGITAL CAMERA ON BOARD UAV

    Directory of Open Access Journals (Sweden)

    R. Matsuoka

    2012-07-01

    Full Text Available This paper reports an experiment conducted in order to investigate the feasibility of deformation measurement of a large-scale solar power plant on reclaimed land by using images acquired by a non-metric digital camera on board a micro unmanned aerial vehicle (UAV). The root mean square error (RMSE) in height measurement is required to be less than 26 mm, which is 1/3 of the critical deformation limit of 78 mm off the plane of a solar panel. Images utilized in the experiment were obtained by an Olympus PEN E-P2 digital camera on board a Microdrones md4-1000 quadrocopter. The planned forward and side overlap ratios of vertical image acquisition were 60% and 60%, respectively. The planned flying height of the UAV was 20 m above ground level, and the ground resolution of an image is approximately 5.0 mm by 5.0 mm. Eight control points around the experiment area were utilized for orientation. Measurement results were evaluated against the space coordinates of 220 check points, which are corner points of 55 solar panels selected from the 1768 solar panels in the experiment area. Two teams engaged in the experiment. One carried out orientation and measurement using 171 images following the procedure of conventional aerial photogrammetry, and the other did so using 126 images in the manner of close-range photogrammetry. The former failed to satisfy the required accuracy, while the RMSE in height measurement by the latter was 8.7 mm, which satisfies the required accuracy. From the experiment results, we conclude that deformation measurement of a large-scale solar power plant on reclaimed land by using images acquired by a non-metric digital camera on board a micro UAV would be feasible if points utilized in orientation and measurement have a sufficient number of bundles in good geometry and self-calibration is carried out in orientation.
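The accuracy check described above reduces to computing the RMSE of height residuals at the check points and comparing it with the 26 mm tolerance (1/3 of the 78 mm deformation limit). A minimal sketch with hypothetical function and variable names:

```python
import math

# Tolerance used in the experiment: 1/3 of the 78 mm deformation limit.
TOLERANCE_MM = 78.0 / 3.0  # 26 mm

def height_rmse(measured_mm, reference_mm):
    """RMSE of photogrammetric height measurements at check points (mm).
    Illustrative sketch; inputs are paired heights in millimetres."""
    residuals = [m - r for m, r in zip(measured_mm, reference_mm)]
    return math.sqrt(sum(d * d for d in residuals) / len(residuals))
```

A measurement campaign "passes" when `height_rmse(...) < TOLERANCE_MM`, as the close-range team's 8.7 mm result did.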

  15. Automated measurement of uptake in cerebellum, liver, and aortic arch in full-body FDG PET/CT scans.

    Science.gov (United States)

    Bauer, Christian; Sun, Shanhui; Sun, Wenqing; Otis, Justin; Wallace, Audrey; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M; Beichel, Reinhard R

    2012-06-01

    The purpose of this work was to develop and validate fully automated methods for uptake measurement of cerebellum, liver, and aortic arch in full-body PET/CT scans. Such measurements are of interest in the context of uptake normalization for quantitative assessment of metabolic activity and/or automated image quality control. Cerebellum, liver, and aortic arch regions were segmented with different automated approaches. Cerebella were segmented in PET volumes by means of a robust active shape model (ASM) based method. For liver segmentation, a largest possible hyperellipsoid was fitted to the liver in PET scans. The aortic arch was first segmented in CT images of a PET/CT scan by a tubular structure analysis approach, and the segmented result was then mapped to the corresponding PET scan. For each of the segmented structures, the average standardized uptake value (SUV) was calculated. To generate an independent reference standard for method validation, expert image analysts were asked to segment several cross sections of each of the three structures in 134 F-18 fluorodeoxyglucose (FDG) PET/CT scans. For each case, the true average SUV was estimated by utilizing statistical models and served as the independent reference standard. For automated aorta and liver SUV measurements, no statistically significant scale or shift differences were observed between automated results and the independent standard. In the case of the cerebellum, the scale and shift were not significantly different, if measured in the same cross sections that were utilized for generating the reference. In contrast, automated results were scaled 5% lower on average although not shifted, if FDG uptake was calculated from the whole segmented cerebellum volume. The estimated reduction in total SUV measurement error ranged between 54.7% and 99.2%, and the reduction was found to be statistically significant for cerebellum and aortic arch. With the proposed methods, the authors have demonstrated that
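The average SUV computed for each segmented structure follows the standard definition: tissue activity concentration normalized by injected dose per unit body weight (dimensionless under the usual 1 g/ml tissue-density assumption). A minimal sketch, with hypothetical argument names:

```python
def mean_suv(voxel_activity_bqml, injected_dose_bq, body_weight_g):
    """Average standardized uptake value over a segmented region.

    SUV = tissue activity (Bq/ml) / (injected dose (Bq) / body weight (g)).
    Illustrative sketch of the standard formula, not the authors' code.
    """
    norm = injected_dose_bq / body_weight_g  # expected uniform-distribution activity
    return sum(voxel_activity_bqml) / len(voxel_activity_bqml) / norm
```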

  16. Brand metrics that matter

    NARCIS (Netherlands)

    Muntinga, D.; Bernritter, S.

    2017-01-01

    Brands are increasingly central to organizations. It is therefore essential to measure a brand's health, performance, and development. Selecting the right brand metrics, however, is a challenge: an enormous number of metrics competes for brand managers' attention. But which

  17. Privacy Metrics and Boundaries

    NARCIS (Netherlands)

    L-F. Pau (Louis-François)

    2005-01-01

    This paper aims at defining a set of privacy metrics (quantitative and qualitative) for the relation between a privacy protector and an information gatherer. The aims of such metrics are: to allow assessing and comparing different user scenarios and their differences; for

  18. Prevalence, severity, and relationships of lung lesions, liver abnormalities, and rumen health scores measured at slaughter in beef cattle.

    Science.gov (United States)

    Rezac, D J; Thomson, D U; Bartle, S J; Osterstock, J B; Prouty, F L; Reinhardt, C D

    2014-06-01

    An array of management tools exists within the beef industry to improve animal welfare and productivity; however, the ability to assess the outcomes of these tools is needed. Deficiencies in management commonly manifest as bovine respiratory disease complex or nutritional disorders such as acidosis; therefore, lung, liver, and rumen gross pathology lesions present at slaughter were measured as part of the Harvest Audit Program (HAP) and associations with performance determined. Individual gross pathology data from 19,229 cattle at commercial packing plants in Kansas and Texas were collected. Corresponding individual preharvest and carcass data were obtained on a subset of 13,226 cattle. Associations between lesions and performance were modeled using multivariable mixed effect models. Regression coefficients were used for estimation of lesion associative effects on continuous outcomes and odds ratios for dichotomous outcomes. Across the entire population, 67.3% of the cattle had no pulmonary lesions; 22.5 and 9.8% of cattle displayed mild and severe lesions, respectively. Severe pulmonary lesions were associated with a decreased ADG of 0.07 kg and a HCW 7.1 kg less than cohorts with no pulmonary lesions (P < 0.01). Overall, 68.6% of cattle observed had normal livers. Of cattle severely affected by liver abscesses (A+; 4.6%), 14.9% also displayed severe pulmonary lesions and 28.3% displayed mild pulmonary lesions. Rumenitis lesions were observed in 24.1% of the overall study population. Of cattle with mildly abscessed livers (A-), moderately abscessed livers (A), and severely abscessed livers, 20.6, 21.6, and 9.24% displayed mild or severe rumenitis lesions at slaughter. Severe rumenitis lesions were associated with a significant decrease in ADG and HCW (0.025 and 2.20 kg, respectively; P < 0.001). Although the majority of the cattle in this population would be considered low risk, after adjustments for cattle with multiple lesions, 22.9% of cattle in the overall
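The study reports odds ratios for dichotomous outcomes from multivariable mixed models. As background, the basic unadjusted odds ratio from a 2×2 table can be sketched as follows (hypothetical counts; the published ORs are model-adjusted, which this sketch does not reproduce):

```python
def odds_ratio(exposed_events, exposed_total, control_events, control_total):
    """Unadjusted odds ratio for a dichotomous outcome, e.g. the odds of
    severe rumenitis among cattle with versus without liver abscesses.
    Illustrative 2x2-table sketch only."""
    a = exposed_events                     # exposed, with outcome
    b = exposed_total - exposed_events     # exposed, without outcome
    c = control_events                     # unexposed, with outcome
    d = control_total - control_events     # unexposed, without outcome
    return (a * d) / (b * c)
```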

  19. www.common-metrics.org: a web application to estimate scores from different patient-reported outcome measures on a common scale.

    Science.gov (United States)

    Fischer, H Felix; Rose, Matthias

    2016-10-19

    Recently, a growing number of Item-Response Theory (IRT) models have been published which allow estimation of a common latent variable from data derived from different Patient-Reported Outcomes (PROs). When using data from different PROs, direct estimation of the latent variable has some advantages over the use of sum score conversion tables. Fitting such models with contemporary IRT software, however, requires substantial proficiency in the field of psychometrics. We developed a web application ( http://www.common-metrics.org ), which allows easier estimation of latent variable scores using IRT models that calibrate different measures on instrument-independent scales. Currently, the application offers six different IRT models for Depression, Anxiety, and Physical Function. Based on published item parameters, users of the application can directly obtain latent trait estimates using expected a posteriori (EAP) estimation for sum scores as well as for specific response patterns, Bayes modal (MAP), weighted likelihood (WLE), and maximum likelihood (ML) methods, under three different prior distributions. The obtained estimates can be downloaded and analyzed using standard statistical software. This application enhances the usability of IRT modeling for researchers by allowing comparison of latent trait estimates across different PROs, such as the Patient Health Questionnaire Depression (PHQ-9) and Anxiety (GAD-7) scales, the Center of Epidemiologic Studies Depression Scale (CES-D), the Beck Depression Inventory (BDI), PROMIS Anxiety and Depression Short Forms, and others. Advantages of this approach include comparability of data derived with different measures and tolerance against missing values. The validity of the underlying models needs to be investigated in the future.
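The EAP estimation mentioned here is the posterior mean of the latent trait given a response pattern. A minimal grid-quadrature sketch for a 2PL model under a standard-normal prior (the item parameters below are hypothetical, not the calibrated parameters used by common-metrics.org):

```python
import math

def eap_theta(responses, a, b, grid=None):
    """Expected a posteriori (EAP) estimate of the latent trait theta for a
    2PL IRT model under a standard-normal prior, via grid quadrature.
    `responses` are 0/1 item scores; `a` and `b` are item discrimination
    and difficulty parameters. Illustrative sketch only."""
    if grid is None:
        grid = [k / 10 for k in range(-40, 41)]  # theta in [-4, 4]
    num = den = 0.0
    for theta in grid:
        w = math.exp(-0.5 * theta * theta)  # unnormalized N(0,1) prior
        for x, ai, bi in zip(responses, a, b):
            p = 1.0 / (1.0 + math.exp(-ai * (theta - bi)))  # 2PL item curve
            w *= p if x else 1.0 - p
        num += theta * w
        den += w
    return num / den  # posterior mean
```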

  20. The metrics of science and technology

    CERN Document Server

    Geisler, Eliezer

    2000-01-01

    Dr. Geisler's far-reaching, unique book provides an encyclopedic compilation of the key metrics to measure and evaluate the impact of science and technology on academia, industry, and government. Focusing on such items as economic measures, patents, peer review, and other criteria, and supported by an extensive review of the literature, Dr. Geisler gives a thorough analysis of the strengths and weaknesses inherent in metric design, and in the use of the specific metrics he cites. His book has already received prepublication attention, and will prove especially valuable for academics in technology management, engineering, and science policy; industrial R&D executives and policymakers; government science and technology policymakers; and scientists and managers in government research and technology institutions. Geisler maintains that the application of metrics to evaluate science and technology at all levels illustrates the variety of tools we currently possess. Each metric has its own unique strengths and...

  1. Metabolic liver function measured in vivo by dynamic (18)F-FDGal PET/CT without arterial blood sampling.

    Science.gov (United States)

    Horsager, Jacob; Munk, Ole Lajord; Sørensen, Michael

    2015-01-01

    Metabolic liver function can be measured by dynamic PET/CT with the radio-labelled galactose-analogue 2-[(18)F]fluoro-2-deoxy-D-galactose ((18)F-FDGal) in terms of hepatic systemic clearance of (18)F-FDGal (K, ml blood/ml liver tissue/min). The method requires arterial blood sampling from a radial artery (arterial input function), and the aim of this study was to develop a method for extracting an image-derived, non-invasive input function from a volume of interest (VOI). Dynamic (18)F-FDGal PET/CT data from 16 subjects without liver disease (healthy subjects) and 16 patients with liver cirrhosis were included in the study. Five different input VOIs were tested: four in the abdominal aorta and one in the left ventricle of the heart. Arterial input function from manual blood sampling was available for all subjects. K*-values were calculated using time-activity curves (TACs) from each VOI as input and compared to the K-value calculated using arterial blood samples as input. Each input VOI was tested on PET data reconstructed with and without resolution modelling. All five image-derived input VOIs yielded K*-values that correlated significantly with K calculated using arterial blood samples. Furthermore, TACs from two different VOIs yielded K*-values that did not statistically deviate from K calculated using arterial blood samples. A semicircle drawn in the posterior part of the abdominal aorta was the only VOI that was successful for both healthy subjects and patients as well as for PET data reconstructed with and without resolution modelling. Metabolic liver function using (18)F-FDGal PET/CT can be measured without arterial blood samples by using input data from a semicircle VOI drawn in the posterior part of the abdominal aorta.

  2. Adding Liver Stiffness Measurement to the Routine Evaluation of Hepatocellular Carcinoma Resectability Can Optimize Clinical Outcome.

    Science.gov (United States)

    Cucchetti, Alessandro; Cescon, Matteo; Colecchia, Antonio; Neri, Flavia; Cappelli, Alberta; Ravaioli, Matteo; Mazzotti, Federico; Ercolani, Giorgio; Festi, Davide; Pinna, Antonio Daniele

    2017-10-01

    Purpose  Liver stiffness (LS) has been shown to be of use in chronic liver disease patients but its utility in surgical judgment still needs to be proven. A decision-making approach was applied to evaluate whether LS measurement before surgery of hepatocellular carcinoma (HCC) can be useful in avoiding post-hepatectomy liver failure (PHLF). Materials and Methods  Decision curve analysis (DCA) was applied to 202 HCC patients (2008–2014) with LS measurement prior to hepatectomy to verify whether the occurrence of PHLF grades B/C could be reduced through a decision-making approach with LS.  Results  Within 90 days of surgery, 4 patients died (2 %) and grades B/C PHLF occurred in 29.7 % of cases. Ascites and/or pleural effusion, treatable with medical therapy, were the most frequent complications. DCA showed that using the "expected utility theory" LS measurement can reduce up to 39 % of cases of PHLF without the exclusion of any patient from surgery that duly undergoes an uncomplicated postoperative course. LS measurement does not add any information to normal clinical judgment for patients with a low "expected utility theory" fulfilment. However, the degree of PHLF can be minor and "risk seeking" individuals can accept such a risk on the basis of surgical benefits. © Georg Thieme Verlag KG Stuttgart · New York.

  3. Measurement of the capability of DNA synthesis of human fetal liver cells by the assay of 3H-TdR incorporation

    International Nuclear Information System (INIS)

    Wang Tao; Ma Xiangrui; Wang Hongyun; Cao Xia

    1987-01-01

    The fetal liver is one of the major sites of hematopoiesis during gestation. Under erythropoietin (EPO) stimulation, erythroid precursor cells of the fetal liver proliferate and differentiate, and their metabolic function is enhanced. The technique of 3H-TdR incorporation was used to measure the capability of fetal liver cellular DNA synthesis. At EPO concentrations in the range of approximately 20–100 mU/ml, the counts of 3H-TdR incorporated into fetal liver cells increased. As the concentration of EPO increased further, however, the incorporation counts remained lower than those in bone marrow of either the fetus or the adult. This suggests that erythroid precursors of the fetal liver have differentiated to later phases, with a progressive accumulation of mature cells; therefore, both proliferation and metabolic function are somewhat decreased. Under EPO stimulation, however, erythroid precursors of the fetal liver can greatly increase DNA synthesis.

  4. Age-related changes in liver, kidney, and spleen stiffness in healthy children measured with acoustic radiation force impulse imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Mi-Jung, E-mail: mjl1213@yuhs.ac [Department of Radiology and Research Institute of Radiological Science, Severance Children' s Hospital, Yonsei University, College of Medicine, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-752 (Korea, Republic of); Kim, Myung-Joon, E-mail: mjkim@yuhs.ac [Department of Radiology and Research Institute of Radiological Science, Severance Children' s Hospital, Yonsei University, College of Medicine, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-752 (Korea, Republic of); Han, Kyung Hwa, E-mail: khhan@yuhs.ac [Biostatistics Collaboration Unit, Yonsei University, College of Medicine, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-752 (Korea, Republic of); Yoon, Choon Sik, E-mail: yooncs58@yuhs.ac [Department of Radiology, Gangnam Severance Hospital, Yonsei University, College of Medicine, 211 Unjoo-ro, Gangnam-gu, Seoul (Korea, Republic of)

    2013-06-15

    Objectives: To evaluate the feasibility and age-related changes of shear wave velocity (SWV) in normal livers, kidneys, and spleens of children using acoustic radiation force impulse (ARFI) imaging. Materials and methods: Healthy pediatric volunteers prospectively underwent abdominal ultrasonography and ARFI. The subjects were divided into three groups according to age: group 1: <5 years old; group 2: 5–10 years old; and group 3: >10 years old. The SWV was measured using a 4–9 MHz linear probe for group 1 and a 1–4 MHz convex probe for groups 2 and 3. Three valid SWV measurements were acquired for each organ. Results: Two hundred and two children (92 male, 110 female) with an average age of 8.1 years (±4.7) were included in this study, with a successful measurement rate of 97% (196/202). The mean SWVs were 1.12 m/s for the liver, 2.19 m/s for the right kidney, 2.33 m/s for the left kidney, and 2.25 m/s for the spleen. The SWVs for the right and left kidneys and the spleen showed age-related changes in all children (p < 0.001); the SWVs for the kidneys increased with age in group 1, and those for the liver changed with age in group 3. Conclusions: ARFI measurements are feasible for solid abdominal organs in children using high or low frequency probes. The mean ARFI SWV for the kidneys increased with age in children less than 5 years of age, while that for the liver changed with age in children over 10.

  5. Holographic Spherically Symmetric Metrics

    Science.gov (United States)

    Petri, Michael

    The holographic principle (HP) conjectures, that the maximum number of degrees of freedom of any realistic physical system is proportional to the system's boundary area. The HP has its roots in the study of black holes. It has recently been applied to cosmological solutions. In this article we apply the HP to spherically symmetric static space-times. We find that any regular spherically symmetric object saturating the HP is subject to tight constraints on the (interior) metric, energy-density, temperature and entropy-density. Whenever gravity can be described by a metric theory, gravity is macroscopically scale invariant and the laws of thermodynamics hold locally and globally, the (interior) metric of a regular holographic object is uniquely determined up to a constant factor and the interior matter-state must follow well defined scaling relations. When the metric theory of gravity is general relativity, the interior matter has an overall string equation of state (EOS) and a unique total energy-density. Thus the holographic metric derived in this article can serve as simple interior 4D realization of Mathur's string fuzzball proposal. Some properties of the holographic metric and its possible experimental verification are discussed. The geodesics of the holographic metric describe an isotropically expanding (or contracting) universe with a nearly homogeneous matter-distribution within the local Hubble volume. Due to the overall string EOS the active gravitational mass-density is zero, resulting in a coasting expansion with Ht = 1, which is compatible with the recent GRB-data.

  6. Metrics for image segmentation

    Science.gov (United States)

    Rees, Gareth; Greenway, Phil; Morray, Denise

    1998-07-01

    An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state of the art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.
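One widely used information-theoretic way to compare two segmentations, in the spirit described above, is the variation of information between the two label partitions. A sketch of that standard measure (not necessarily the exact metric devised in the paper):

```python
import math
from collections import Counter

def variation_of_information(seg_a, seg_b):
    """Variation of information between two label maps (flattened):
    VI(A, B) = H(A) + H(B) - 2 I(A; B). It is 0 for identical partitions
    and grows as the segmentations disagree. Illustrative sketch of one
    standard information-theoretic segmentation comparison."""
    n = len(seg_a)
    pa, pb = Counter(seg_a), Counter(seg_b)          # marginal label counts
    pab = Counter(zip(seg_a, seg_b))                 # joint label counts
    entropy = lambda c: -sum(v / n * math.log(v / n) for v in c.values())
    mutual = sum(v / n * math.log((v / n) / ((pa[a] / n) * (pb[b] / n)))
                 for (a, b), v in pab.items())
    return entropy(pa) + entropy(pb) - 2.0 * mutual
```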

  7. 1H-MRS Measured Ectopic Fat in Liver and Muscle in Danish Lean and Obese Children and Adolescents.

    Science.gov (United States)

    Fonvig, Cilius Esmann; Chabanova, Elizaveta; Andersson, Ehm Astrid; Ohrt, Johanne Dam; Pedersen, Oluf; Hansen, Torben; Thomsen, Henrik S; Holm, Jens-Christian

    2015-01-01

    This cross sectional study aims to investigate the associations between ectopic lipid accumulation in liver and skeletal muscle and biochemical measures, estimates of insulin resistance, anthropometry, and blood pressure in lean and overweight/obese children. Fasting plasma glucose, serum lipids, serum insulin, estimates of insulin resistance, anthropometry, blood pressure, and magnetic resonance spectroscopy of liver and muscle fat were obtained in 327 Danish children and adolescents aged 8-18 years. In 287 overweight/obese children, the prevalences of hepatic and muscular steatosis were 31% and 68%, respectively, whereas the prevalences in 40 lean children were 3% and 10%, respectively. A multiple regression analysis adjusted for age, sex, body mass index z-score (BMI SDS), and pubertal development showed that the OR of exhibiting dyslipidemia was 4.2 (95%CI: [1.8; 10.2], p = 0.0009) when hepatic steatosis was present. Comparing the simultaneous presence of hepatic and muscular steatosis with no presence of steatosis, the OR of exhibiting dyslipidemia was 5.8 (95%CI: [2.0; 18.6], p = 0.002). No significant associations between muscle fat and dyslipidemia, impaired fasting glucose, or blood pressure were observed. Liver and muscle fat, adjusted for age, sex, BMI SDS, and pubertal development, were associated with BMI SDS and glycosylated hemoglobin; only liver fat was associated with visceral and subcutaneous adipose tissue, and intramyocellular lipid was inversely associated with high-density lipoprotein cholesterol. Hepatic steatosis is associated with dyslipidemia, and liver and muscle fat depositions are linked to obesity-related metabolic dysfunctions, especially glycosylated hemoglobin, in children and adolescents, which suggests an increased cardiovascular disease risk.

  8. Correlation of liver stiffness measured by FibroScan with sex and age in healthy adults undergoing physical examination

    Directory of Open Access Journals (Sweden)

    ZHAO Chongshan

    2016-04-01

    Full Text Available Objective: To determine the reference range of liver stiffness in a healthy population, and to investigate the influence of age and sex on liver stiffness. Methods: A total of 1794 healthy subjects who underwent physical examination in China National Petroleum Corporation Central Hospital from October 1, 2012 to October 31, 2014 were enrolled, and FibroScan was used to perform liver stiffness measurement (LSM). Since the LSM value was not normally distributed, the Wilcoxon rank sum test was used to compare LSM values between male and female subjects, the Kruskal-Wallis test was used to compare LSM values between different age groups, and Spearman's rank correlation analysis was used to analyze the correlation between LSM value and age. The one-sided percentile method was used to determine the range of normal reference values in male and female subjects and in different age groups. Results: LSM was successfully performed in 1590 subjects, a successful measurement rate of 88.63%. A total of 107 subjects were excluded due to abnormal liver enzymes. The analysis showed that the LSM value differed significantly between male and female subjects (Z=-4.980, P<0.001), as well as between different age groups (χ2=16.983, P=0.001). Age was positively correlated with LSM value (r=0.087, P=0.001). The reference range was estimated to be ≤7.1 kPa in adults, ≤7.0 kPa in females, and ≤7.2 kPa in males. From the perspective of age, the reference range was estimated to be ≤6.8 kPa in persons aged 20-29 years, ≤6.7 kPa in persons aged 30-44 years, ≤7.8 kPa in persons aged 45-59 years, and ≤8.8 kPa in persons aged 60-74 years. Conclusion: Liver stiffness value is influenced by sex and age. Sex and age should be taken into account when performing liver stiffness measurement in healthy subjects.
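The one-sided percentile method mentioned here takes the value below which a chosen fraction of healthy measurements fall as the upper reference limit. A sketch assuming a 95th percentile with linear interpolation between order statistics (the study does not state its exact percentile, so this is an assumption):

```python
def upper_reference_limit(lsm_values, pct=95.0):
    """One-sided upper reference limit: the value below which `pct` percent
    of healthy LSM measurements (kPa) fall, with linear interpolation
    between order statistics. Illustrative sketch; the percentile choice
    is a hypothetical default."""
    xs = sorted(lsm_values)
    pos = (len(xs) - 1) * pct / 100.0  # fractional rank
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])
```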

  9. Liver Transplant

    Science.gov (United States)


  10. Networks and centroid metrics for understanding football

    African Journals Online (AJOL)

    Gonçalo Dias

    games. However, it seems that the centroid metric, supported only by the position of players in the field ...... the strategy adopted by the coach (Gama et al., 2014). ... centroid distance as measures of team's tactical performance in youth football.

  11. Caffeine demethylation measured by breath analysis in experimental liver injury in the rat

    Energy Technology Data Exchange (ETDEWEB)

    Schaad, H.J.; Renner, E.L.; Wietholtz, H.; Preisig, R. [University of Berne, Department of Clinical Pharmaceology, Berne (Switzerland); Arnaud, M.J. [Nestle Research Center, Nestec Ltd., Vevey (Switzerland)

    1995-01-01

    To assess the effect of experimental liver injury on caffeine metabolism, 1 μCi/kg b.w. of [3-methyl-(14)C]-caffeine (together with 5 mg/kg b.w. of the cold compound) was injected i.p. into four different experimental groups and respective controls of unanesthetized male Sprague-Dawley rats. Exhaled (14)CO2 was completely collected during 4 h, and the peak exhalation rate and fraction of dose recovered were calculated. 1/3 hepatectomy affected (14)CO2 exhalation to a limited extent, decreasing solely the peak exhalation rate (p<0.05 compared to sham-operated controls). 2/3 hepatectomy, on the other hand, resulted in significant reductions (p<0.01) in both peak exhalation rate (by 59%) and fraction of dose recovered (by 47%) that were proportionate to the loss of liver mass (50%). End-to-side portocaval shunt led to the well-documented hepatic 'atrophy', liver weight being diminished on average to 50% within 2 weeks of surgery; however, the reductions in peak exhalation rate (by 75%) and fraction of dose recovered (by 64%) were even more pronounced. Finally, 48 h of bile duct ligation was equivalent to a 'functional 2/3 hepatectomy', peak exhalation rate (by 65%) and fraction of dose recovered (by 56%) being markedly diminished despite increased liver weight. These results indicate that (14)CO2 exhalation curves following administration of specifically labelled caffeine are quantitative indicators of acute or chronic loss of functioning liver mass. In addition, the 3-demethylation pathway appears to be particularly sensitive to the inhibitory effects of cholestasis on microsomal function. (au) (30 refs.).
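The "fraction of dose recovered" summarized above is the integral of the breath exhalation-rate curve over the collection period. A sketch using trapezoidal integration with hypothetical sampling times and names:

```python
def fraction_recovered(times_h, rate_pct_per_h):
    """Cumulative fraction of the administered (14)C dose recovered in
    breath, by trapezoidal integration of the exhalation-rate curve
    (percent of dose per hour over the collection period).
    Illustrative sketch, not the authors' analysis code."""
    total = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        total += 0.5 * (rate_pct_per_h[i] + rate_pct_per_h[i - 1]) * dt
    return total  # percent of dose
```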

  12. Probabilistic metric spaces

    CERN Document Server

    Schweizer, B

    2005-01-01

    Topics include special classes of probabilistic metric spaces, topologies, and several related structures, such as probabilistic normed and inner-product spaces. 1983 edition, updated with 3 new appendixes. Includes 17 illustrations.

  13. IT Project Management Metrics

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Many software and IT projects fail to meet their objectives for a variety of causes, among which project management carries a high weight. In order to have successful projects, lessons learned have to be used, historical data have to be collected, and metrics and indicators have to be computed and compared with those of past projects to keep failure from happening. This paper presents some metrics that can be used for IT project management.

  14. Liver stiffness measurement in cirrhotic patient — Implications of disease activity and treatment efficacy

    Directory of Open Access Journals (Sweden)

    Huang-Wei Xu

    2012-12-01

    Full Text Available Liver stiffness measurement (LSM) is a noninvasive method for the diagnosis of hepatic fibrosis. The aim of this study was to evaluate the effects of hepatitis activity and antiviral therapy on LSM in cirrhotic patients. Consecutive patients with compensated hepatic cirrhosis were enrolled for LSM. The medical records of hepatitis activity and antiviral therapy before enrollment were reviewed. Patients were stratified into inactive, fluctuating, and active groups by serial change of alanine transaminase level. For chronic hepatitis C, patients were stratified into sustained virological response (SVR) and non-SVR (NSVR) groups by effect of antiviral treatment. LSM results were compared among the different groups. A total of 163 patients (mean age = 57.2 ± 11.0 years) were enrolled. The median (range) LSM values were 9.6 (4.2–20.6), 10.25 (3.9–49.6), and 15.75 (4.8–61.5) kPa in the inactive, fluctuating, and active groups, respectively. Patients in the active group had significantly higher LSM values. For chronic hepatitis C, median (range) LSM values were 16.6 (8.1–61.5), 22.9 (11.1–37.4), and 11.2 (3.9–27.0) kPa in patients without antiviral therapy, in the NSVR group, and in the SVR group, respectively. Patients with SVR had significantly lower LSM values. For chronic hepatitis B, median (range) LSM values were 11.8 (5.1–46.6), 16.85 (4.2–48), and 10.6 (4.3–46.4) kPa in patients without oral nucleos(t)ide analogue (NA) therapy, with NA therapy <12 months, and with NA therapy ≥12 months, respectively. There was a significantly lower LSM value in patients with NA therapy ≥12 months. In summary, low LSM values were found in cirrhotic patients without hepatitis activity, as well as with SVR in chronic hepatitis C and with long-term NA therapy in chronic hepatitis B.

  15. ELF-test less accurately identifies liver cirrhosis diagnosed by liver stiffness measurement in non-Asian women with chronic hepatitis B

    NARCIS (Netherlands)

    Harkisoen, S.; Boland, G. J.; van den Hoek, J. A. R.; van Erpecum, K. J.; Hoepelman, A. I. M.; Arends, J. E.

    2014-01-01

    The enhanced liver fibrosis test (ELF-test) has been validated for several hepatic diseases. However, its performance in chronic hepatitis B virus (CHB) infected patients is uncertain. This study investigates the diagnostic value of the ELF test for cirrhosis identified by liver stiffness

  16. Marketing communication metrics for social media

    OpenAIRE

    Töllinen, Aarne; Karjaluoto, Heikki

    2011-01-01

    The objective of this paper is to develop a conceptual framework for measuring the effectiveness of social media marketing communications. Specifically, we study whether the existing marketing communications performance metrics are still valid in the changing digitalised communications landscape, or whether it is time to rethink them, or even to devise entirely new metrics. Recent advances in information technology and marketing bring a need to re-examine measurement models. We combine two im...

  17. Liver kinetics of glucose analogs measured in pigs by PET: importance of dual-input blood sampling

    DEFF Research Database (Denmark)

    Munk, O L; Bass, L; Roelsgaard, K

    2001-01-01

    Metabolic processes studied by PET are quantified traditionally using compartmental models, which relate the time course of the tracer concentration in tissue to that in arterial blood. For liver studies, the use of arterial input may, however, cause systematic errors to the estimated kinetic parameters, because of ignorance of the dual blood supply from the hepatic artery and the portal vein to the liver. METHODS: Six pigs underwent PET after [15O]carbon monoxide inhalation, 3-O-[11C]methylglucose (MG) injection, and [18F]FDG injection. For the glucose scans, PET data were acquired for 90 min... -input functions were very similar. CONCLUSION: Compartmental analysis of MG and FDG kinetics using dynamic PET data requires measurements of dual-input activity concentrations. Using the dual-input function, physiologically reasonable parameter estimates of K1, k2, and Vp were obtained, whereas the use...

  18. Ideal Based Cyber Security Technical Metrics for Control Systems

    Energy Technology Data Exchange (ETDEWEB)

    W. F. Boyer; M. A. McQueen

    2007-10-01

    Much of the world's critical infrastructure is at risk from attack through electronic networks connected to control systems. Security metrics are important because they provide the basis for management decisions that affect the protection of the infrastructure. A cyber security technical metric is the security-relevant output of an explicit mathematical model that makes use of objective measurements of a technical object. A specific set of technical security metrics is proposed for use by the operators of control systems. Our proposed metrics are based on seven security ideals associated with seven corresponding abstract dimensions of security. We have defined at least one metric for each of the seven ideals. Each metric is a measure of how nearly the associated ideal has been achieved. These seven ideals provide a useful structure for further metrics development. A case study shows how the proposed metrics can be applied to an operational control system.

  19. Understanding Acceptance of Software Metrics--A Developer Perspective

    Science.gov (United States)

    Umarji, Medha

    2009-01-01

    Software metrics are measures of software products and processes. Metrics are widely used by software organizations to help manage projects, improve product quality and increase efficiency of the software development process. However, metrics programs tend to have a high failure rate in organizations, and developer pushback is one of the sources…

  20. Application of localized 31P MRS saturation transfer at 7 T for measurement of ATP metabolism in the liver: reproducibility and initial clinical application in patients with non-alcoholic fatty liver disease

    International Nuclear Information System (INIS)

    Valkovic, Ladislav; Gajdosik, Martin; Chmelik, Marek; Trattnig, Siegfried; Traussnigg, Stefan; Kienbacher, Christian; Trauner, Michael; Wolf, Peter; Krebs, Michael; Bogner, Wolfgang; Krssak, Martin

    2014-01-01

    Saturation transfer (ST) phosphorus MR spectroscopy (³¹P MRS) enables in vivo insight into energy metabolism and thus could identify liver conditions currently diagnosed only by biopsy. This study assesses the reproducibility of the localized ³¹P MRS ST in liver at 7 T and tests its potential for noninvasive differentiation of non-alcoholic fatty liver (NAFL) and steatohepatitis (NASH). After the ethics committee approval, reproducibility of the localized ³¹P MRS ST at 7 T and the biological variation of acquired hepato-metabolic parameters were assessed in healthy volunteers. Subsequently, 16 suspected NAFL/NASH patients underwent MRS measurements and diagnostic liver biopsy. The Pi-to-ATP exchange parameters were compared between the groups by a Mann-Whitney U test and related to the liver fat content estimated by a single-voxel proton (¹H) MRS, measured at 3 T. The mean exchange rate constant (k) in healthy volunteers was 0.31 ± 0.03 s⁻¹ with a coefficient of variation of 9.0 %. Significantly lower exchange rates (p < 0.01) were found in NASH patients (k = 0.17 ± 0.04 s⁻¹) when compared to healthy volunteers and NAFL patients (k = 0.30 ± 0.05 s⁻¹). Significant correlation was found between the k value and the liver fat content (r = 0.824, p < 0.01). Our data suggest that the ³¹P MRS ST technique provides a tool for gaining insight into hepatic ATP metabolism and could contribute to the differentiation of NAFL and NASH. (orig.)

  1. Characterising risk - aggregated metrics: radiation and noise

    International Nuclear Information System (INIS)

    Passchier, W.

    1998-01-01

    The characterisation of risk is an important phase in the risk assessment - risk management process. From the multitude of risk attributes, a few have to be selected to obtain a risk characteristic or profile that is useful for risk management decisions and the implementation of protective measures. One way to reduce the number of attributes is aggregation. In the field of radiation protection such an aggregated metric is firmly established: effective dose. For protection against environmental noise, the Health Council of the Netherlands recently proposed a set of aggregated metrics for noise annoyance and sleep disturbance. The presentation will discuss similarities and differences between these two metrics and practical limitations. The effective dose has proven its usefulness in designing radiation protection measures, which are related to the level of risk associated with the radiation practice in question, given that implicit judgements on radiation-induced health effects are accepted. However, as the metric does not take into account the nature of the radiation practice, it is less useful in policy discussions on the benefits and harm of radiation practices. With respect to the noise exposure metric, only one effect is targeted (annoyance), and the differences between sources are explicitly taken into account. This should make the metric useful in policy discussions with respect to physical planning and siting problems. The metric proposed has significance only on a population level and cannot be used as a predictor of individual risk. (author)

  2. Prospective comparison of liver stiffness measurements between two point shear wave elastography methods: Virtual touch quantification and elastography point quantification

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Hyun Suk; Lee, Jeong Min; Yoon, Jeong Hee; Lee, Dong Ho; Chang, Won; Han, Joon Koo [Seoul National University Hospital, Seoul (Korea, Republic of)

    2016-09-15

    To prospectively compare technical success rate and reliable measurements of virtual touch quantification (VTQ) elastography and elastography point quantification (ElastPQ), and to correlate liver stiffness (LS) measurements obtained by the two elastography techniques. Our study included 85 patients, 80 of whom were previously diagnosed with chronic liver disease. The technical success rate and reliable measurements of the two kinds of point shear wave elastography (pSWE) techniques were compared by χ² analysis. LS values measured using the two techniques were compared and correlated via Wilcoxon signed-rank test, Spearman correlation coefficient, and 95% Bland-Altman limit of agreement. The intraobserver reproducibility of ElastPQ was determined by 95% Bland-Altman limit of agreement and intraclass correlation coefficient (ICC). The two pSWE techniques showed similar technical success rate (98.8% for VTQ vs. 95.3% for ElastPQ, p = 0.823) and reliable LS measurements (95.3% for VTQ vs. 90.6% for ElastPQ, p = 0.509). The mean LS measurements obtained by VTQ (1.71 ± 0.47 m/s) and ElastPQ (1.66 ± 0.41 m/s) were not significantly different (p = 0.209). The LS measurements obtained by the two techniques showed strong correlation (r = 0.820); in addition, the 95% limit of agreement of the two methods was 27.5% of the mean. Finally, the ICC of repeat ElastPQ measurements was 0.991. Virtual touch quantification and ElastPQ showed similar technical success rate and reliable measurements, with strongly correlated LS measurements. However, the two methods are not interchangeable due to the large limit of agreement.
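The 95 % Bland-Altman limit of agreement used in this record can be sketched in a few lines. The paired VTQ/ElastPQ readings below are made-up illustrative numbers, not data from the study:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias and 95 % Bland-Altman limits of agreement for paired measurements."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired liver-stiffness readings (m/s) from two pSWE methods
vtq     = [1.2, 1.5, 1.8, 2.1, 1.6]
elastpq = [1.1, 1.6, 1.7, 2.0, 1.5]
bias, lower, upper = bland_altman_limits(vtq, elastpq)
```

If 95 % of the paired differences fall inside (lower, upper) but that span is wide relative to the mean, the two methods agree on average yet are not interchangeable — exactly the study's conclusion.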

  3. Software metrics a rigorous and practical approach

    CERN Document Server

    Fenton, Norman

    2014-01-01

    A Framework for Managing, Measuring, and Predicting Attributes of Software Development Products and Processes. Reflecting the immense progress in the development and use of software metrics in the past decades, Software Metrics: A Rigorous and Practical Approach, Third Edition provides an up-to-date, accessible, and comprehensive introduction to software metrics. Like its popular predecessors, this third edition discusses important issues, explains essential concepts, and offers new approaches for tackling long-standing problems. New to the Third Edition: This edition contains new material relevant

  4. Deep Transfer Metric Learning.

    Science.gov (United States)

    Junlin Hu; Jiwen Lu; Yap-Peng Tan; Jie Zhou

    2016-12-01

    Conventional metric learning methods usually assume that the training and test samples are captured in similar scenarios so that their distributions are assumed to be the same. This assumption does not hold in many real visual recognition applications, especially when samples are captured across different data sets. In this paper, we propose a new deep transfer metric learning (DTML) method to learn a set of hierarchical nonlinear transformations for cross-domain visual recognition by transferring discriminative knowledge from the labeled source domain to the unlabeled target domain. Specifically, our DTML learns a deep metric network by maximizing the inter-class variations and minimizing the intra-class variations, and minimizing the distribution divergence between the source domain and the target domain at the top layer of the network. To better exploit the discriminative information from the source domain, we further develop a deeply supervised transfer metric learning (DSTML) method by including an additional objective on DTML, where the output of both the hidden layers and the top layer are optimized jointly. To preserve the local manifold of input data points in the metric space, we present two new methods, DTML with autoencoder regularization and DSTML with autoencoder regularization. Experimental results on face verification, person re-identification, and handwritten digit recognition validate the effectiveness of the proposed methods.
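The core idea of metric learning — replacing Euclidean distance with a distance in a learned space — can be illustrated with a single linear map; DTML stacks nonlinear layers instead, and the matrix `W` here is a hypothetical hand-picked placeholder, not anything learned by the paper's method:

```python
import numpy as np

def learned_distance(W, x, y):
    """Distance after projecting both points through a learned map W:
    d(x, y) = ||W x - W y||.  Metric learning chooses W so that same-class
    pairs come out close and different-class pairs come out far apart."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.linalg.norm(W @ x - W @ y))

# With W = identity the learned metric reduces to plain Euclidean distance.
W = np.eye(2)
d = learned_distance(W, [0.0, 0.0], [3.0, 4.0])   # Euclidean: 5.0
```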

  5. Clinical value of combined measurement of serum alpha-fetoprotein, alpha-L-fucosidase and ferritin levels in the diagnosis of primary liver cancer

    International Nuclear Information System (INIS)

    Zhang Aimin; Chai Xiaohong; Jin Ying; Dong Xuemei

    2005-01-01

    Objective: To investigate the clinical value of combined measurement of serum alpha-fetoprotein (AFP), alpha-L-fucosidase (AFU) and ferritin (SF) levels in the diagnosis of primary liver cancer. Methods: Serum AFP, AFU (with RIA) and SF (with biochemical method) were determined in 52 patients with primary liver cancer and 40 controls. Results: The positive rates of AFP, AFU and SF in patients with liver cancer were 82.7%, 86.6% and 76.9%, respectively. Positive rates with combined measurement of AFP plus AFU, AFP plus SF, and AFP plus AFU plus SF were 94.2%, 90.4% and 98.1%, respectively. Conclusion: Combined measurement of AFP, AFU and SF can significantly increase the positive rate in the diagnosis of primary liver cancer. (authors)
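For a parallel combination rule ("positive if any marker is positive"), the expected combined positive rate under a simplifying independence assumption is 1 − Π(1 − sᵢ). Plugging in the single-marker rates reported above gives values in the same range as the study's empirical combined rates — a rough consistency check, not the study's own calculation:

```python
def combined_positive_rate(rates):
    """Positive rate of an 'any marker positive' rule, assuming the
    markers detect cases independently (a simplifying assumption)."""
    p_all_negative = 1.0
    for r in rates:
        p_all_negative *= (1.0 - r)
    return 1.0 - p_all_negative

afp_afu    = combined_positive_rate([0.827, 0.866])         # ~0.977
afp_afu_sf = combined_positive_rate([0.827, 0.866, 0.769])  # ~0.995
```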

  6. A family of metric gravities

    Science.gov (United States)

    Shuler, Robert

    2018-04-01

    The goal of this paper is to take a completely fresh approach to metric gravity, in which the metric principle is strictly adhered to but its properties in local space-time are derived from conservation principles, not inferred from a global field equation. The global field strength variation then gains some flexibility, but only in the regime of very strong fields (2nd-order terms) whose measurement is now being contemplated. So doing provides a family of similar gravities, differing only in strong fields, which could be developed into meaningful verification targets for strong fields after the manner in which far-field variations were used in the 20th century. General Relativity (GR) is shown to be a member of the family and this is demonstrated by deriving the Schwarzschild metric exactly from a suitable field strength assumption. The method of doing so is interesting in itself because it involves only one differential equation rather than the usual four. Exact static symmetric field solutions are also given for one pedagogical alternative based on potential, and one theoretical alternative based on inertia, and the prospects of experimentally differentiating these are analyzed. Whether the method overturns the conventional wisdom that GR is the only metric theory of gravity and that alternatives must introduce additional interactions and fields is somewhat semantical, depending on whether one views the field strength assumption as a field and whether the assumption that produces GR is considered unique in some way. It is of course possible to have other fields, and the local space-time principle can be applied to field gravities which usually are weak-field approximations having only time dilation, giving them the spatial factor and promoting them to full metric theories. Though usually pedagogical, some of them are interesting from a quantum gravity perspective. Cases are noted where mass measurement errors, or distributions of dark matter, can cause one

  7. Metabolic changes in the pig liver during warm ischemia and reperfusion measured by microdialysis

    DEFF Research Database (Denmark)

    Kannerup, Anne-Sofie; Funch-Jensen, Peter; Grønbaek, Henning

    2008-01-01

    AIM: Portal triad clamping can cause ischemia-reperfusion injury. The aim of the study was to monitor metabolic changes by microdialysis before, during, and after warm ischemia in the pig liver. MATERIAL AND METHODS: Eight pigs underwent laparotomy followed by ischemia by Pringle's maneuver. One... in transaminase levels was observed. CONCLUSIONS: During and after warm ischemia, there were profound metabolic changes in the pig liver, observed as an increase in lactate, glucose, glycerol, and the lactate-pyruvate ratio. There were no differences between the four liver lobes, indicating that the pig livers...

  8. The independence of software metrics taken at different life-cycle stages

    Science.gov (United States)

    Kafura, D.; Canning, J.; Reddy, G.

    1984-01-01

    Over the past few years a large number of software metrics have been proposed and, in varying degrees, a number of these metrics have been subjected to empirical validation which demonstrated the utility of the metrics in the software development process. This study attempts to classify these metrics and to determine whether the metrics in the different classes appear to be measuring distinct attributes of the software product. Statistical analysis is used to determine the degree of relationship among the metrics.

  9. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows the importance of different dimensions to be adjusted automatically. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  10. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
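The adaptive-metric idea in these two records can be sketched as a Nadaraya-Watson estimator with per-dimension scale factors. Here the scales are fixed by hand for illustration, whereas the papers tune them by minimising a cross-validation estimate of the generalisation error:

```python
import numpy as np

def nw_predict(X, y, x0, scales):
    """Nadaraya-Watson regression with a diagonal input metric:
    dimension d is weighted by scales[d] inside a Gaussian kernel."""
    d = (np.asarray(X, float) - np.asarray(x0, float)) * np.asarray(scales, float)
    w = np.exp(-0.5 * (d ** 2).sum(axis=1))   # Gaussian kernel weights
    return float(w @ np.asarray(y, float) / w.sum())

# Toy data: the target depends only on the first input dimension.
X = [[0.0, 0.0], [1.0, 5.0], [2.0, 1.0]]
y = [0.0, 1.0, 2.0]
# A zero scale on the irrelevant dimension removes its influence entirely,
# which is exactly the effect the adaptive metric is meant to achieve.
pred = nw_predict(X, y, [1.0, 0.0], scales=[1.0, 0.0])
```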

  11. Follow-up CT measurement of liver malignoma according to RECIST and WHO vs. volumetry

    International Nuclear Information System (INIS)

    Heussel, C.P.; Meier, S.; Wittelsberger, S.; Goette, H.; Mildenberger, P.; Kauczor, H.U.

    2007-01-01

    Purpose: Intraindividual comparison of quantitative malignant liver tumor response analysis using computed tomography; the RECIST and WHO evaluation results were compared to the volumetry results. Materials and methods: Consecutive CT follow-up investigations (portal-venous phase, collimation 3 mm, increment 2 mm) of 82 patients were analyzed retrospectively. The median interval was 56 (30 - 455) days. The patients showed a total of 198 (median 3, range 1 - 5) malignant liver lesions. The evaluation was performed by 2 radiologists in consensus using the OncoTREAT software (Mevis). The results were classified according to RECIST (Response Evaluation Criteria in Solid Tumors; stable disease: - 30 % to + 20 %) and WHO (stable: - 50 % to + 25 %) and compared to the volumetric analysis (stable: - 65 % to + 44 %). Both the continual follow-up changes and the classified results (complete and partial remission, no change, and progression) were analyzed. Results: The classified RECIST and WHO results agreed with the volumetric analysis in 71/82 (87 %) of cases (κ RECIST = 0.699, κ WHO = 0.741), although not in the same patients; the RECIST and WHO evaluations agreed with each other in 68/82 (83 %) of cases (κ = 0.656). The estimation of relative tumor development clearly differed among the procedures. Relative tumor changes are not directly comparable because of the underlying one-, two- and three-dimensional structures. (orig.)
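The three stability bands quoted in this record can be applied mechanically. A minimal sketch (band endpoints taken from the record; the labels are simplified to three categories, omitting complete remission):

```python
def classify_response(change_pct, band):
    """Classify a relative size change (in %) against a stability band.
    Bands from the record: RECIST (-30, 20), WHO (-50, 25),
    volumetry (-65, 44)."""
    lo, hi = band
    if change_pct < lo:
        return "partial remission"
    if change_pct > hi:
        return "progression"
    return "no change"

RECIST = (-30, 20)
# Why the volumetric band is much wider: a 30 % diameter shrinkage
# corresponds to roughly (1 - 0.3)**3 - 1 = -65.7 % in volume.
print(classify_response(-40, RECIST))   # partial remission
```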

  12. Preference elicitation approach for measuring the willingness to pay for liver cancer treatment in Korea

    Directory of Open Access Journals (Sweden)

    Donghun Cho

    2015-09-01

    Full Text Available Background/Aims: The Korean government has expanded the coverage of the national insurance scheme for four major diseases: cancers, cardiovascular diseases, cerebrovascular diseases, and rare diseases. This policy may have a detrimental effect on the budget of the national health insurance agency. Like taxes, national insurance premiums are levied on the basis of the income or wealth of the insured. Methods: Using a preference elicitation method, we attempted to estimate how much people are willing to pay for insurance premiums that would expand their coverage for liver cancer treatment. Results: We calculated the marginal willingness to pay (MWTP) through the marginal rate of substitution between the two attributes of the insurance premium and the total annual treatment cost by adopting conditional logit and mixed logit models. Conclusions: The effects of various other terms that could interact with socioeconomic status were also estimated, such as gender, income level, educational attainment, age, employment status, and marital status. The estimated MWTP values of the monthly insurance premium for liver cancer treatment range from 4,130 KRW to 9,090 KRW.
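In a conditional logit model the marginal willingness to pay is the marginal rate of substitution between an attribute and the cost attribute, i.e. the negated ratio of their estimated coefficients. The coefficients below are hypothetical placeholders, not the study's estimates:

```python
def marginal_wtp(beta_attribute, beta_cost):
    """MWTP = -beta_attribute / beta_cost: how much extra cost a respondent
    would accept for one more unit of the attribute, holding utility fixed."""
    return -beta_attribute / beta_cost

# Hypothetical coefficients: utility rises with coverage, falls with premium.
beta_coverage = 0.8      # per unit of expanded coverage
beta_premium = -0.0001   # per KRW of monthly premium
wtp = marginal_wtp(beta_coverage, beta_premium)   # about 8000 KRW per unit
```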

  13. Whole-organ and segmental stiffness measured with liver magnetic resonance elastography in healthy adults: significance of the region of interest.

    Science.gov (United States)

    Rusak, Grażyna; Zawada, Elżbieta; Lemanowicz, Adam; Serafin, Zbigniew

    2015-04-01

    MR elastography (MRE) is a recent non-invasive technique that provides in vivo data on the viscoelasticity of the liver. Since the method is not well established, several different protocols have been proposed that differ in their results. The aim of the study was to analyze the variability of stiffness measurements in different regions of the liver. Twenty healthy adults aged 24-45 years were recruited. The examination was performed using a mechanical excitation of 64 Hz. MRE images were fused with axial T2WI breath-hold images (thickness 10 mm, spacing 10 mm). Stiffness was measured as a mean value of each cross section of the whole liver, on a single largest cross section, in the right lobe, and in ROIs (50 pix.) placed in the center of the left lobe, segments 5/6, 7, 8, and the parahilar region. Whole-liver stiffness ranged from 1.56 to 2.75 kPa. Mean segmental stiffness differed significantly between the tested regions (range from 1.55 ± 0.28 to 2.37 ± 0.32 kPa; P < 0.0001, ANOVA). Within-method variability of measurements ranged from 14 % (for the whole liver and segment 8) to 26 % (for segment 7). Within-subject variability ranged from 13 to 31 %. Results of measurement within segment 8 were closest to the whole-liver method (ICC, 0.84). Stiffness of the liver presented significant variability depending on the region of measurement. The most reproducible method is averaging of cross sections of the whole liver. There was significant variability in stiffness between subjects considered healthy, which requires further investigation.
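Variability percentages of this kind are typically coefficients of variation (SD divided by mean). A minimal sketch on made-up repeated readings, not the study's data:

```python
import statistics

def coefficient_of_variation(values):
    """CoV in percent: sample standard deviation divided by the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical repeated stiffness readings (kPa) for one liver region
roi_kpa = [1.8, 2.0, 2.2, 1.9, 2.1]
cov = coefficient_of_variation(roi_kpa)   # a bit under 8 %
```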

  14. Metrical Phonology and SLA.

    Science.gov (United States)

    Tice, Bradley S.

    Metrical phonology, a linguistic process of phonological stress assessment and diagrammatic simplification of sentence and word stress, is discussed as it is found in the English language with the intention that it may be used in second language instruction. Stress is defined by its physical and acoustical correlates, and the principles of…

  15. Pragmatic security metrics applying metametrics to information security

    CERN Document Server

    Brotby, W Krag

    2013-01-01

    Other books on information security metrics discuss number theory and statistics in academic terms. Light on mathematics and heavy on utility, PRAGMATIC Security Metrics: Applying Metametrics to Information Security breaks the mold. This is the ultimate how-to-do-it guide for security metrics.Packed with time-saving tips, the book offers easy-to-follow guidance for those struggling with security metrics. Step by step, it clearly explains how to specify, develop, use, and maintain an information security measurement system (a comprehensive suite of metrics) to

  16. Comparison of Macroscopic Pathology Measurements With Magnetic Resonance Imaging and Assessment of Microscopic Pathology Extension for Colorectal Liver Metastases

    International Nuclear Information System (INIS)

    Méndez Romero, Alejandra; Verheij, Joanne; Dwarkasing, Roy S.; Seppenwoolde, Yvette; Redekop, William K.; Zondervan, Pieter E.; Nowak, Peter J.C.M.; Ijzermans, Jan N.M.; Levendag, Peter C.; Heijmen, Ben J.M.; Verhoef, Cornelis

    2012-01-01

    Purpose: To compare pathology macroscopic tumor dimensions with magnetic resonance imaging (MRI) measurements and to establish the microscopic tumor extension of colorectal liver metastases. Methods and Materials: In a prospective pilot study we included patients with colorectal liver metastases planned for surgery and eligible for MRI. A liver MRI was performed within 48 hours before surgery. Directly after surgery, an MRI of the specimen was acquired to measure the degree of tumor shrinkage. The specimen was fixed in formalin for 48 hours, and another MRI was performed to assess the specimen/tumor shrinkage. All MRI sequences were imported into our radiotherapy treatment planning system, where the tumor and the specimen were delineated. For the macroscopic pathology analyses, photographs of the sliced specimens were used to delineate and reconstruct the tumor and the specimen volumes. Microscopic pathology analyses were conducted to assess the infiltration depth of tumor cell nests. Results: Between February 2009 and January 2010 we included 13 patients for analysis with 21 colorectal liver metastases. Specimen and tumor shrinkage after resection and fixation was negligible. The best tumor volume correlations between MRI and pathology were found for the T1-weighted (w) echo gradient sequence (r_s = 0.99, slope = 1.06) and the T2-w fast spin echo (FSE) single-shot sequence (r_s = 0.99, slope = 1.08), followed by the T2-w FSE fat saturation sequence (r_s = 0.99, slope = 1.23) and the T1-w gadolinium-enhanced sequence (r_s = 0.98, slope = 1.24). We observed 39 tumor cell nests beyond the tumor border in 12 metastases. Microscopic extension was found between 0.2 and 10 mm from the main tumor, with 90% of the cases within 6 mm. Conclusions: MRI tumor dimensions showed a good agreement with the macroscopic pathology, suggesting that MRI can be used for accurate tumor delineation. However, microscopic extensions found beyond the tumor border indicate that caution is needed

  17. Regional metabolic liver function measured in patients with cirrhosis by 2-[¹⁸F]fluoro-2-deoxy-D-galactose PET/CT.

    Science.gov (United States)

    Sørensen, Michael; Mikkelsen, Kasper S; Frisch, Kim; Villadsen, Gerda E; Keiding, Susanne

    2013-06-01

    There is a clinical need for methods that can quantify regional hepatic function non-invasively in patients with cirrhosis. Here we validate the use of 2-[(18)F]fluoro-2-deoxy-D-galactose (FDGal) PET/CT for measuring regional metabolic function to this purpose, and apply the method to test the hypothesis of increased intrahepatic metabolic heterogeneity in cirrhosis. Nine cirrhotic patients underwent dynamic liver FDGal PET/CT with blood samples from a radial artery and a liver vein. Hepatic blood flow was measured by indocyanine green infusion/Fick's principle. From blood measurements, hepatic systemic clearance (Ksyst, L blood/min) and hepatic intrinsic clearance (Vmax/Km, L blood/min) of FDGal were calculated. From PET data, hepatic systemic clearance of FDGal in liver parenchyma (Kmet, mL blood/mL liver tissue/min) was calculated. Intrahepatic metabolic heterogeneity was evaluated in terms of the coefficient of variation (CoV, %) using parametric images of Kmet. Mean approximation of Ksyst to Vmax/Km was 86%, which validates the use of FDGal as a PET tracer of hepatic metabolic function. Mean Kmet was 0.157 mL blood/mL liver tissue/min, which was lower than the 0.274 mL blood/mL liver tissue/min previously found in healthy subjects. In conclusion, dynamic FDGal PET/CT with arterial sampling provides an accurate measure of regional hepatic metabolic function in patients with cirrhosis. This is likely to have clinical implications for the assessment of patients with liver disease as well as treatment planning and monitoring. Copyright © 2013 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
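Clearance from paired arterial and liver-vein samples follows the Fick principle: clearance = blood flow × extraction fraction. A simplified single-time-point sketch with illustrative numbers (the study's kinetic analysis integrates over the dynamic scan):

```python
def hepatic_systemic_clearance(flow_l_per_min, c_arterial, c_hepatic_vein):
    """Fick-principle clearance: blood flow times the fraction of tracer
    extracted across the liver (steady-state, single-pass sketch)."""
    extraction_fraction = (c_arterial - c_hepatic_vein) / c_arterial
    return flow_l_per_min * extraction_fraction

# Illustrative values: 1.5 L/min hepatic blood flow, 20 % extraction
k_syst = hepatic_systemic_clearance(1.5, 100.0, 80.0)   # 0.3 L/min
```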

  18. Quality Markers in Cardiology. Main Markers to Measure Quality of Results (Outcomes) and Quality Measures Related to Better Results in Clinical Practice (Performance Metrics). INCARDIO (Indicadores de Calidad en Unidades Asistenciales del Área del Corazón): A SEC/SECTCV Consensus Position Paper.

    Science.gov (United States)

    López-Sendón, José; González-Juanatey, José Ramón; Pinto, Fausto; Cuenca Castillo, José; Badimón, Lina; Dalmau, Regina; González Torrecilla, Esteban; López-Mínguez, José Ramón; Maceira, Alicia M; Pascual-Figal, Domingo; Pomar Moya-Prats, José Luis; Sionis, Alessandro; Zamorano, José Luis

    2015-11-01

    Cardiology practice requires complex organization that impacts overall outcomes and may differ substantially among hospitals and communities. The aim of this consensus document is to define quality markers in cardiology, including markers to measure the quality of results (outcomes metrics) and quality measures related to better results in clinical practice (performance metrics). The document is mainly intended for the Spanish health care system and may serve as a basis for similar documents in other countries. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  19. Which are the cut-off values of 2D-Shear Wave Elastography (2D-SWE) liver stiffness measurements predicting different stages of liver fibrosis, considering Transient Elastography (TE) as the reference method?

    Energy Technology Data Exchange (ETDEWEB)

    Sporea, Ioan, E-mail: isporea@umft.ro; Bota, Simona, E-mail: bota_simona1982@yahoo.com; Gradinaru-Taşcău, Oana, E-mail: bluonmyown@yahoo.com; Şirli, Roxana, E-mail: roxanasirli@gmail.com; Popescu, Alina, E-mail: alinamircea.popescu@gmail.com; Jurchiş, Ana, E-mail: ana.jurchis@yahoo.com

    2014-03-15

    Introduction: To identify liver stiffness (LS) cut-off values assessed by means of 2D-Shear Wave Elastography (2D-SWE) for predicting different stages of liver fibrosis, considering Transient Elastography (TE) as the reference method. Methods: Our prospective study included 383 consecutive subjects, with or without hepatopathies, in which LS was evaluated by means of TE and 2D-SWE. To discriminate between various stages of fibrosis by TE we used the following LS cut-offs (kPa): F1-6, F2-7.2, F3-9.6 and F4-14.5. Results: The rate of reliable LS measurements was similar for TE and 2D-SWE: 73.9% vs. 79.9%, p = 0.06. Older age and higher BMI were associated for both TE and 2D-SWE with the impossibility to obtain reliable LS measurements. Reliable LS measurements by both elastographic methods were obtained in 65.2% of patients. A significant correlation was found between TE and 2D-SWE measurements (r = 0.68). The best LS cut-off values assessed by 2D-SWE for predicting different stages of liver fibrosis were: F ≥ 1: >7.1 kPa (AUROC = 0.825); F ≥ 2: >7.8 kPa (AUROC = 0.859); F ≥ 3: >8 kPa (AUROC = 0.897) and for F = 4: >11.5 kPa (AUROC = 0.914). Conclusions: 2D-SWE is a reliable method for the non-invasive evaluation of liver fibrosis, considering TE as the reference method. The accuracy of 2D-SWE measurements increased with the severity of liver fibrosis.
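Cut-off selection of the kind reported here boils down to scanning candidate thresholds and scoring each by sensitivity and specificity. The study reports AUROC-derived cut-offs; the sketch below uses the related Youden index instead, on made-up stiffness data:

```python
def sens_spec(values, labels, cutoff):
    """Sensitivity/specificity of the rule 'fibrosis if stiffness > cutoff'."""
    tp = sum(1 for v, l in zip(values, labels) if v > cutoff and l)
    fn = sum(1 for v, l in zip(values, labels) if v <= cutoff and l)
    tn = sum(1 for v, l in zip(values, labels) if v <= cutoff and not l)
    fp = sum(1 for v, l in zip(values, labels) if v > cutoff and not l)
    return tp / (tp + fn), tn / (tn + fp)

def best_youden_cutoff(values, labels, candidates):
    """Pick the cut-off maximising Youden's J = sensitivity + specificity - 1."""
    return max(candidates, key=lambda c: sum(sens_spec(values, labels, c)) - 1)

# Hypothetical stiffness values (kPa) with fibrosis labels
kpa   = [5.0, 6.5, 7.0, 9.0, 10.0, 12.0]
fibro = [False, False, False, True, True, True]
cutoff = best_youden_cutoff(kpa, fibro, [6.0, 7.0, 8.0, 11.0])
```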

  20. Application of localized {sup 31}P MRS saturation transfer at 7 T for measurement of ATP metabolism in the liver: reproducibility and initial clinical application in patients with non-alcoholic fatty liver disease

    Energy Technology Data Exchange (ETDEWEB)

    Valkovic, Ladislav [Medical University of Vienna, High Field MR Centre, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria); Slovak Academy of Sciences, Department of Imaging Methods, Institute of Measurement Science, Bratislava (Slovakia); Gajdosik, Martin; Chmelik, Marek; Trattnig, Siegfried [Medical University of Vienna, High Field MR Centre, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria); Traussnigg, Stefan; Kienbacher, Christian; Trauner, Michael [Medical University of Vienna, Division of Gastroenterology and Hepatology, Department of Internal Medicine III, Vienna (Austria); Wolf, Peter; Krebs, Michael [Medical University of Vienna, Division of Endocrinology and Metabolism, Department of Internal Medicine III, Vienna (Austria); Bogner, Wolfgang [Medical University of Vienna, High Field MR Centre, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria); Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA (United States); Krssak, Martin [Medical University of Vienna, High Field MR Centre, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria); Medical University of Vienna, Division of Endocrinology and Metabolism, Department of Internal Medicine III, Vienna (Austria)

    2014-07-15

    Saturation transfer (ST) phosphorus MR spectroscopy ({sup 31}P MRS) enables in vivo insight into energy metabolism and thus could identify liver conditions currently diagnosed only by biopsy. This study assesses the reproducibility of the localized {sup 31}P MRS ST in liver at 7 T and tests its potential for noninvasive differentiation of non-alcoholic fatty liver (NAFL) and steatohepatitis (NASH). After the ethics committee approval, reproducibility of the localized {sup 31}P MRS ST at 7 T and the biological variation of acquired hepato-metabolic parameters were assessed in healthy volunteers. Subsequently, 16 suspected NAFL/NASH patients underwent MRS measurements and diagnostic liver biopsy. The Pi-to-ATP exchange parameters were compared between the groups by a Mann-Whitney U test and related to the liver fat content estimated by a single-voxel proton ({sup 1}H) MRS, measured at 3 T. The mean exchange rate constant (k) in healthy volunteers was 0.31 ± 0.03 s{sup -1} with a coefficient of variation of 9.0 %. Significantly lower exchange rates (p < 0.01) were found in NASH patients (k = 0.17 ± 0.04 s{sup -1}) when compared to healthy volunteers, and NAFL patients (k = 0.30 ± 0.05 s{sup -1}). Significant correlation was found between the k value and the liver fat content (r = 0.824, p < 0.01). Our data suggest that the {sup 31}P MRS ST technique provides a tool for gaining insight into hepatic ATP metabolism and could contribute to the differentiation of NAFL and NASH. (orig.)

  1. Minimal hepatic encephalopathy in patients with cirrhosis by measuring liver stiffness and hepatic venous pressure gradient.

    Science.gov (United States)

    Sharma, Praveen; Kumar, Ashish

    2012-01-01

    Transient elastography (TE) of the liver and the hepatic venous pressure gradient (HVPG) allow accurate prediction of cirrhosis and its complications in patients with chronic liver disease. There is no study on prediction of minimal hepatic encephalopathy (MHE) using TE and HVPG in patients with cirrhosis. Consecutive cirrhotic patients who had never had an episode of hepatic encephalopathy (HE) were enrolled. All patients were assessed by psychometry (number connection tests (NCT-A and B), digit symbol test (DST), serial dot test (SDT), line tracing test (LTT)), critical flicker frequency test (CFF), TE by FibroScan and HVPG. MHE was diagnosed if two or more psychometry tests were abnormal (± 2 SD from controls). 150 patients with cirrhosis who underwent HVPG were screened; 91 patients (61%; age 44.0 ± 11.4 years; M:F = 75:16; Child's A:B:C = 18:54:19) met the inclusion criteria. Fifty-three (58%) patients had MHE (Child A 7/18 (39%), Child B 32/54 (59%) and Child C 14/19 (74%)). There was no significant difference in alanine aminotransferase (ALT), aspartate aminotransferase (AST) or total bilirubin levels between patients with and without MHE. Patients with MHE had significantly lower CFF than non-MHE patients (38.4 ± 3.0 vs. 40.2 ± 2.2 Hz, P = 0.002). TE and HVPG in patients with MHE did not differ significantly from patients without MHE (30.9 ± 17.2 vs. 29.8 ± 18.2 kPa, P = 0.78; and 13.6 ± 2.7 vs. 13.6 ± 3.2 mmHg, P = 0.90, respectively). TE correlated significantly with Child's score (r = 0.25, P = 0.01), MELD (r = 0.40, P = 0.001) and HVPG (r = 0.72, P = 0.001), but not with psychometric tests, CFF or MHE. TE by FibroScan and HVPG cannot predict minimal hepatic encephalopathy in patients with cirrhosis.
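
    The study's MHE case definition (two or more psychometric tests beyond ±2 SD of controls) can be expressed as a small decision rule. A minimal sketch; the function name and the z-score representation of the test results are hypothetical, not the authors' implementation.

```python
def has_mhe(z_scores) -> bool:
    """Diagnose minimal hepatic encephalopathy from psychometric
    test z-scores relative to controls: MHE if two or more tests
    fall outside +/-2 SD, as in the study's definition."""
    abnormal = sum(1 for z in z_scores if abs(z) > 2)
    return abnormal >= 2

# hypothetical z-scores for NCT-A, NCT-B, DST, SDT, LTT
print(has_mhe([2.5, -2.1, 1.0, 0.4, 1.9]))  # two abnormal tests -> True
```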

  2. Intra-Tissue Pressure Measurement in Ex Vivo Liver Undergoing Laser Ablation with Fiber-Optic Fabry-Perot Probe

    Directory of Open Access Journals (Sweden)

    Daniele Tosi

    2016-04-01

    Full Text Available We report the first intra-tissue pressure measurement performed during 1064 nm laser ablation (LA) of an ex vivo porcine liver. Pressure detection was performed with a biocompatible, all-glass, temperature-insensitive Extrinsic Fabry-Perot Interferometry (EFPI) miniature probe; the proposed methodology mimics in vivo treatment. Four experiments were performed, positioning the probe at different distances from the laser applicator tip (from 0.5 mm to 5 mm). Pressure levels increase during ablation time and decrease with distance from the applicator tip: the recorded peak parenchymal pressure levels range from 1.9 kPa to 71.6 kPa. Different pressure evolutions were recorded, as pressure rises earlier in proximity to the tip. The present study is the first investigation of parenchymal pressure detection in liver undergoing LA: the successful detection of intra-tissue pressure may be a key asset for improving LA, as pressure levels have been correlated with scattered recurrence of tumors in several studies.

  3. Decision Analysis for Metric Selection on a Clinical Quality Scorecard.

    Science.gov (United States)

    Guth, Rebecca M; Storey, Patricia E; Vitale, Michael; Markan-Aurora, Sumita; Gordon, Randolph; Prevost, Traci Q; Dunagan, Wm Claiborne; Woeltje, Keith F

    2016-09-01

    Clinical quality scorecards are used by health care institutions to monitor clinical performance and drive quality improvement. Because of the rapid proliferation of quality metrics in health care, BJC HealthCare found it increasingly difficult to select the most impactful scorecard metrics while still monitoring metrics for regulatory purposes. A 7-step measure selection process was implemented incorporating Kepner-Tregoe Decision Analysis, which is a systematic process that considers key criteria that must be satisfied in order to make the best decision. The decision analysis process evaluates what metrics will most appropriately fulfill these criteria, as well as identifies potential risks associated with a particular metric in order to identify threats to its implementation. Using this process, a list of 750 potential metrics was narrowed to 25 that were selected for scorecard inclusion. This decision analysis process created a more transparent, reproducible approach for selecting quality metrics for clinical quality scorecards. © The Author(s) 2015.

  4. Metrication: An economic wake-up call for US industry

    Science.gov (United States)

    Carver, G. P.

    1993-03-01

    As the international standard of measurement, the metric system is one key to success in the global marketplace. International standards have become an important factor in international economic competition. Non-metric products are becoming increasingly unacceptable in world markets that favor metric products. Procurement is the primary federal tool for encouraging and helping U.S. industry to convert voluntarily to the metric system. Besides the perceived unwillingness of the customer, certain regulatory language, and certain legal definitions in some states, there are no major impediments to conversion of the remaining non-metric industries to metric usage. Instead, there are good reasons for changing, including an opportunity to rethink many industry standards and to take advantage of size standardization. Also, when the remaining industries adopt the metric system, they will come into conformance with federal agencies engaged in similar activities.

  5. Fish communities and trophic metrics as measures of ecological degradation: a case study in the tributaries of the river Ganga basin, India.

    Science.gov (United States)

    Dubey, Vineet Kumar; Sarkar, Uttam Kumar; Pandey, Ajay; Lakra, Wazir Singh

    2013-09-01

    In India, freshwater aquatic resources are suffering from an increasing human population, urbanization and a shortage of all kinds of natural resources, including water. To mitigate this, all the major rivers are planned to be connected through an interlinking canal system under a huge scheme; yet baseline information on the ecological conditions of these tropical rivers and their fish communities is currently lacking. In view of that, the present study was undertaken to assess ecological condition by comparing the trophic metrics of the fish community, conservation status and water chemistry of two tropical rivers of the Ganga basin, from October 2007 to November 2009. Analysis of the trophic niches of the available fish species indicated a dominance of carnivorous species (19) in the river Ken and of omnivorous species (23) in the Betwa. The trophic level score of carnivorous species was similar (33.33%) in both rivers, whereas omnivorous species were more frequent in the Betwa (36.51%) than in the Ken (28.07%). Relatively undisturbed sites of the Betwa (B1, B2 and B3) and Ken (K2, K3 and K5) were characterized by diverse fish fauna and high richness of threatened species. The highest mean trophic level scores were recorded at B4 of the Betwa and K4 of the Ken. The Bray-Curtis index for trophic level identified the carnivorous species (> 0.32) as indicator species for pollution. Anthropogenic pressure, reflected in water quality as well as in fish community structure, was higher especially in the lower stretches of both rivers. Our results suggest the importance of trophic metrics of the fish community for evaluating ecological conditions, enabling predictions on the effect of future morphodynamic changes (in the post-interlinking phases), and provide a framework and reference condition to support restoration efforts in relatively altered fish habitats of tropical rivers in India.
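
    The Bray-Curtis index used above has a standard definition as a dissimilarity between two abundance vectors. A minimal sketch, assuming two hypothetical abundance vectors for the same set of species at two sites (the example counts are ours, not the study's data):

```python
def bray_curtis(x, y) -> float:
    """Bray-Curtis dissimilarity between two abundance vectors:
    sum(|x_i - y_i|) / sum(x_i + y_i); 0 = identical, 1 = disjoint."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den if den else 0.0

# hypothetical carnivore abundance counts at two sampling sites
print(round(bray_curtis([10, 4, 0, 6], [4, 2, 8, 6]), 3))
```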

  6. Fish communities and trophic metrics as measures of ecological degradation: a case study in the tributaries of the river Ganga basin, India

    Directory of Open Access Journals (Sweden)

    Vineet Kumar Dubey

    2013-09-01

    Full Text Available In India, freshwater aquatic resources are suffering from an increasing human population, urbanization and a shortage of all kinds of natural resources, including water. To mitigate this, all the major rivers are planned to be connected through an interlinking canal system under a huge scheme; yet baseline information on the ecological conditions of these tropical rivers and their fish communities is currently lacking. In view of that, the present study was undertaken to assess ecological condition by comparing the trophic metrics of the fish community, conservation status and water chemistry of two tropical rivers of the Ganga basin, from October 2007 to November 2009. Analysis of the trophic niches of the available fish species indicated a dominance of carnivorous species (19) in the river Ken and of omnivorous species (23) in the Betwa. The trophic level score of carnivorous species was similar (33.33%) in both rivers, whereas omnivorous species were more frequent in the Betwa (36.51%) than in the Ken (28.07%). Relatively undisturbed sites of the Betwa (B1, B2 and B3) and Ken (K2, K3 and K5) were characterized by diverse fish fauna and high richness of threatened species. The highest mean trophic level scores were recorded at B4 of the Betwa and K4 of the Ken. The Bray-Curtis index for trophic level identified the carnivorous species (> 0.32) as indicator species for pollution. Anthropogenic pressure, reflected in water quality as well as in fish community structure, was higher especially in the lower stretches of both rivers. Our results suggest the importance of trophic metrics of the fish community for evaluating ecological conditions, enabling predictions on the effect of future morphodynamic changes (in the post-interlinking phases), and provide a framework and reference condition to support restoration efforts in relatively altered fish habitats of tropical rivers in India.

  7. Symmetries of the dual metrics

    International Nuclear Information System (INIS)

    Baleanu, D.

    1998-01-01

    The geometric duality between the metric g μν and a Killing tensor K μν is studied. The conditions under which the symmetries of the metric g μν and the dual metric K μν coincide are found. The dual spinning space is constructed without the introduction of torsion. The general results are applied to the case of the Kerr-Newman metric

  8. Kerr metric in cosmological background

    Energy Technology Data Exchange (ETDEWEB)

    Vaidya, P C [Gujarat Univ., Ahmedabad (India). Dept. of Mathematics

    1977-06-01

    A metric satisfying Einstein's equation is given which in the vicinity of the source reduces to the well-known Kerr metric and which at large distances reduces to the Robertson-Walker metric of a homogeneous cosmological model. The radius of the event horizon of the Kerr black hole in the cosmological background is determined.

  9. Liver Hemangioma

    Science.gov (United States)

    Liver hemangioma Overview A liver hemangioma (he-man-jee-O-muh) is a noncancerous (benign) mass in the liver. A liver hemangioma is made up of a tangle of blood vessels. Other terms for a liver hemangioma are hepatic hemangioma and cavernous hemangioma. Most ...

  10. Metric inhomogeneous Diophantine approximation in positive characteristic

    DEFF Research Database (Denmark)

    Kristensen, Simon

    2011-01-01

    We obtain asymptotic formulae for the number of solutions to systems of inhomogeneous linear Diophantine inequalities over the field of formal Laurent series with coefficients from a finite fields, which are valid for almost every such system. Here `almost every' is with respect to Haar measure...... of the coefficients of the homogeneous part when the number of variables is at least two (singly metric case), and with respect to the Haar measure of all coefficients for any number of variables (doubly metric case). As consequences, we derive zero-one laws in the spirit of the Khintchine-Groshev Theorem and zero...

  11. Metric inhomogeneous Diophantine approximation in positive characteristic

    DEFF Research Database (Denmark)

    Kristensen, S.

    We obtain asymptotic formulae for the number of solutions to systems of inhomogeneous linear Diophantine inequalities over the field of formal Laurent series with coefficients from a finite fields, which are valid for almost every such system. Here 'almost every' is with respect to Haar measure...... of the coefficients of the homogeneous part when the number of variables is at least two (singly metric case), and with respect to the Haar measure of all coefficients for any number of variables (doubly metric case). As consequences, we derive zero-one laws in the spirit of the Khintchine--Groshev Theorem and zero...

  12. Measuring chronic liver disease mortality using an expanded cause of death definition and medical records in Connecticut, 2004.

    Science.gov (United States)

    Ly, Kathleen N; Speers, Suzanne; Klevens, R Monina; Barry, Vaughn; Vogt, Tara M

    2014-10-16

    Chronic liver disease (CLD) is a leading cause of death and is defined based on a specific set of underlying cause-of-death codes on death certificates. This conventional approach to measuring CLD mortality underestimates the true mortality burden because it does not consider certain CLD conditions like viral hepatitis and hepatocellular carcinoma. We measured how much the conventional CLD mortality case definition will underestimate CLD mortality and described the distribution of CLD etiologies in Connecticut. We used 2004 Connecticut death certificates to estimate CLD mortality two ways. One way used the conventional definition and the other used an expanded definition that included more conditions suggestive of CLD. We compared the number of deaths identified using this expanded definition with the number identified using the conventional definition. Medical records were reviewed to confirm CLD deaths. Connecticut had 29 314 registered deaths in 2004. Of these, 282 (1.0%) were CLD deaths identified by the conventional CLD definition while 616 (2.1%) were CLD deaths defined by the expanded definition. Medical record review confirmed that most deaths identified by the expanded definition were CLD-related (550/616); this suggested a 15.8 deaths/100 000 population mortality rate. Among deaths for which hepatitis B, hepatitis C and alcoholic liver disease were identified during medical record review, only 8.6%, 45.4% and 36.5%, respectively, had that specific cause-of-death code cited on the death certificate. An expanded CLD mortality case definition that incorporates multiple causes of death and additional CLD-related conditions will better estimate CLD mortality. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
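
    The headline figures above imply a roughly two-fold undercount by the conventional definition. The arithmetic below is a sketch; the back-calculated population is our derivation from the reported rate and confirmed deaths, not a number stated in the abstract.

```python
# Figures reported in the Connecticut 2004 study
conventional = 282      # CLD deaths, conventional definition
expanded = 616          # CLD deaths, expanded definition
confirmed = 550         # deaths confirmed on medical record review
rate_per_100k = 15.8    # reported mortality rate (per 100 000 population)

# How much the conventional definition underestimates CLD mortality
underestimate = expanded / conventional        # ~2.2-fold

# Population implied by the reported rate (our back-calculation)
implied_population = confirmed / rate_per_100k * 100_000

print(f"{underestimate:.2f}x undercount, ~{implied_population:,.0f} residents")
```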

  13. Learning Low-Dimensional Metrics

    OpenAIRE

    Jain, Lalit; Mason, Blake; Nowak, Robert

    2017-01-01

    This paper investigates the theoretical foundations of metric learning, focused on four key questions that are not fully addressed in prior work: 1) we consider learning general low-dimensional (low-rank) metrics as well as sparse metrics; 2) we develop upper and lower (minimax) bounds on the generalization error; 3) we quantify the sample complexity of metric learning in terms of the dimension of the feature space and the dimension/rank of the underlying metric; 4) we also bound the accuracy ...

  14. Metric Learning for Hyperspectral Image Segmentation

    Science.gov (United States)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.

  15. Liver biopsy

    Science.gov (United States)

    Biopsy - liver; Percutaneous biopsy ... the biopsy needle to be inserted into the liver. This is often done by using ultrasound. The ... the chance of damage to the lung or liver. The needle is removed quickly. Pressure will be ...

  16. A software quality model and metrics for risk assessment

    Science.gov (United States)

    Hyatt, L.; Rosenberg, L.

    1996-01-01

    A software quality model and its associated attributes are defined and used as the basis for a discussion of risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined from the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric, along with their usability and applicability, are discussed.

  17. Plasma clearance of sup(99m)Tc-N/2,4-dimethyl-acetanilido/iminodiacetate complex as a measure of parenchymal liver damage

    International Nuclear Information System (INIS)

    Studniarek, M.; Durski, K.; Liniecki, J.; Akademia Medyczna, Lodz

    1983-01-01

    Fifty-two patients were studied with various diseases affecting the liver parenchyma. Disorders of bile transport were excluded on the basis of dynamic liver scintigraphy using the intravenously injected N/2,4-dimethyl-acetanilido/iminodiacetate sup(99m)Tc complex (HEPIDA). The activity concentration of sup(99m)Tc-HEPIDA in plasma was measured from 5 through 60 min post injection. Clearance of the substance (Clsub(B)) was calculated from the plasma disappearance curves and compared with the results of 13 laboratory tests used conventionally for assessment of liver damage and functional capacity; age and body weight were also included in the analysis. Statistical relations were studied using two-variable linear regression analysis, multiple regression analysis, and multidimensional analysis of variance. It was demonstrated that sup(99m)Tc-HEPIDA clearance is a simple, accurate and repeatable measure of liver parenchyma damage. In males, values of Clsub(B) above 245 ml·min⁻¹/1.73 m² exclude hepatic damage with high probability; values below 195 ml·min⁻¹/1.73 m² indicate evident impairment of liver parenchyma function. (orig.)
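
    The male reference thresholds can be expressed as a three-way interpretation rule. A hedged sketch: the function name and the "indeterminate" label for the band between 195 and 245 ml·min⁻¹/1.73 m² are our wording, not the authors'.

```python
def interpret_hepida_clearance(cl_b: float) -> str:
    """Interpret a 99mTc-HEPIDA plasma clearance value for a male
    patient (ml/min per 1.73 m^2) using the thresholds reported
    by Studniarek et al."""
    if cl_b > 245:
        return "hepatic damage excluded with high probability"
    if cl_b < 195:
        return "evident impairment of liver parenchyma function"
    return "indeterminate"  # between the two published thresholds

print(interpret_hepida_clearance(180))
```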

  18. CT- and MRI-based volumetry of resected liver specimen: Comparison to intraoperative volume and weight measurements and calculation of conversion factors

    International Nuclear Information System (INIS)

    Karlo, C.; Reiner, C.S.; Stolzmann, P.; Breitenstein, S.; Marincek, B.; Weishaupt, D.; Frauenfelder, T.

    2010-01-01

    Objective: To compare virtual volume with intraoperative volume and weight measurements of resected liver specimens and to calculate appropriate conversion factors to reach better correlation. Methods: Preoperative (CT group, n = 30; MRI group, n = 30) and postoperative MRI (n = 60) imaging was performed in 60 patients undergoing partial liver resection. Intraoperative volume and weight of the resected liver specimen were measured. Virtual volume measurements were performed by two readers (R1, R2) using dedicated software. Conversion factors were calculated. Results: Mean intraoperative resection weight/volume: CT: 855 g/852 mL; MRI: 872 g/860 mL. Virtual resection volume: CT: 960 mL (R1), 982 mL (R2); MRI: 1112 mL (R1), 1115 mL (R2). There was a strong positive correlation between intraoperative and virtual measurements for both readers (mean of both readers): CT: R = 0.88 (volume), R = 0.89 (weight); MRI: R = 0.95 (volume), R = 0.92 (weight). Conversion factors: 0.85 (CT), 0.78 (MRI). Conclusion: CT- or MRI-based volumetry of resected liver specimens is accurate and recommended for preoperative planning. A conversion of the result is necessary to improve the correlation between intraoperative and virtual measurements. We found 0.85 for CT- and 0.78 for MRI-based volumetry to be the most appropriate conversion factors.
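
    Applying the reported conversion factors is a one-line scaling. The sketch below is illustrative (the function name and interface are ours), using the study's factors of 0.85 for CT and 0.78 for MRI:

```python
def estimate_resection_volume(virtual_ml: float, modality: str) -> float:
    """Scale a software-derived (virtual) resection volume toward the
    true intraoperative volume, using the conversion factors reported
    by Karlo et al.: 0.85 for CT-based and 0.78 for MRI-based volumetry."""
    factors = {"CT": 0.85, "MRI": 0.78}
    return virtual_ml * factors[modality]

# e.g. the study's mean virtual CT volume for reader 1 (960 mL)
print(round(estimate_resection_volume(960, "CT")))  # ~816 mL
```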

  19. Metrics with vanishing quantum corrections

    International Nuclear Information System (INIS)

    Coley, A A; Hervik, S; Gibbons, G W; Pope, C N

    2008-01-01

    We investigate solutions of the classical Einstein or supergravity equations that solve any set of quantum-corrected Einstein equations in which the Einstein tensor plus a multiple of the metric is equated to a symmetric conserved tensor T μν (g αβ , ∂ τ g αβ , ∂ τ ∂ σ g αβ , ...) constructed from sums of terms involving contractions of the metric and powers of arbitrary covariant derivatives of the curvature tensor. A classical solution, such as an Einstein metric, is called universal if, when evaluated on that Einstein metric, T μν is a multiple of the metric. A Ricci-flat classical solution is called strongly universal if, when evaluated on that Ricci-flat metric, T μν vanishes. It is well known that pp-waves in four spacetime dimensions are strongly universal. We focus attention on a natural generalization: Einstein metrics with holonomy Sim(n - 2) in which all scalar invariants are zero or constant. In four dimensions we demonstrate that the generalized Ghanam-Thompson metric is weakly universal and that the Goldberg-Kerr metric is strongly universal; indeed, we show that universality extends to all four-dimensional Sim(2) Einstein metrics. We also discuss generalizations to higher dimensions

  20. Hemodynamic changes in liver measured by multi-imaging methods before and after transjugular intrahepatic portosystemic stent-shunt

    International Nuclear Information System (INIS)

    Huang Yonghui; Chen Wei; Li Jiaping; Zhuang Wenquan; Li Ziping; Yang Jianyong

    2007-01-01

    Objective: To evaluate hemodynamic changes in the liver treated by transjugular intrahepatic portosystemic stent-shunt (TIPSS) with hepatic computed tomography (CT) perfusion, Doppler ultrasound and portal vein pressure measurement, as well as the correlation among these methods. Methods: Hepatic CT perfusion was performed in 9 cirrhotic patients one week before TIPSS and 72 hours after TIPSS. Intraoperative portal vein pressure was measured before and after establishment of the portosystemic shunt. Follow-up hepatic CT perfusion was carried out in 3 cases at 3 and 6 months postoperatively. Hemodynamic surveillance by Doppler ultrasound was performed at 48 hours and 3 months after TIPSS in 9 cases, and at 6 months after TIPSS in 6 cases. Two cases underwent venography and portal vein pressure measurement 6 months after TIPSS. Results: Before TIPSS, the mean portal vein perfusion (PVP), total hepatic blood flow (THBF), hepatic perfusion index (HPI) and portal vein free pressure (PVFP) were (0.92 ± 0.18) ml·min⁻¹·ml⁻¹, (1.28 ± 0.17) ml·min⁻¹·ml⁻¹, (28 ± 8)% and (23.92 ± 0.86) mmHg, respectively. At 72 hours after TIPSS they were (0.21 ± 0.15) ml·min⁻¹·ml⁻¹, (0.74 ± 0.18) ml·min⁻¹·ml⁻¹, (74 ± 13)% and (12.62 ± 1.54) mmHg, respectively. After treatment, the mean PVP was (0.49 ± 0.05) ml·min⁻¹·ml⁻¹ at 3 months and (0.57 ± 0.03) ml·min⁻¹·ml⁻¹ at 6 months. PVP was negatively correlated with PVFP before TIPSS (r = 0.678, P < 0.05). Moreover, a significant correlation was found between the degree of portal vein pressure decrease and portal vein perfusion decrease (r = 0.867, P < 0.05). The blood flow through the shunt was not less than that of the main portal vein ((9.83 ± 5.72) cm³/s) until six months after treatment. Conclusion: Portal vein pressure decreased markedly after TIPSS, and meanwhile most of the portal vein blood flow passed through the portosystemic stent-shunt without liver parenchyma perfusion

  1. Future of the PCI Readmission Metric.

    Science.gov (United States)

    Wasfy, Jason H; Yeh, Robert W

    2016-03-01

    Between 2013 and 2014, the Centers for Medicare and Medicaid Services and the National Cardiovascular Data Registry publicly reported risk-adjusted 30-day readmission rates after percutaneous coronary intervention (PCI) as a pilot project. A key strength of this public reporting effort was risk adjustment with clinical rather than administrative data. Furthermore, because readmission after PCI is common, expensive, and preventable, this metric has substantial potential to improve quality and value in American cardiology care. Despite this, concerns about the metric exist. For example, few PCI readmissions are caused by procedural complications, limiting the extent to which improved procedural technique can reduce readmissions. Also, similar to other readmission measures, PCI readmission is associated with socioeconomic status and race. Accordingly, the metric may unfairly penalize hospitals that care for underserved patients. Perhaps in the context of these limitations, the Centers for Medicare and Medicaid Services has not yet included PCI readmission among the metrics that determine Medicare financial penalties. Nevertheless, provider organizations may still wish to focus on this metric to improve value for cardiology patients. PCI readmission is associated with low-risk chest discomfort and patient anxiety. Therefore, patient education, improved triage mechanisms, and improved care coordination offer opportunities to minimize PCI readmissions. Because PCI readmission is common and costly, reducing it offers provider organizations a compelling target to improve the quality of care, as well as performance in contracts involving shared financial risk. © 2016 American Heart Association, Inc.

  2. Fisher information metrics for binary classifier evaluation and training

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Different evaluation metrics for binary classifiers are appropriate to different scientific domains and even to different problems within the same domain. This presentation focuses on the optimisation of event selection to minimise statistical errors in HEP parameter estimation, a problem that is best analysed in terms of the maximisation of Fisher information about the measured parameters. After describing a general formalism to derive evaluation metrics based on Fisher information, three more specific metrics are introduced for the measurements of signal cross sections in counting experiments (FIP1) or distribution fits (FIP2) and for the measurements of other parameters from distribution fits (FIP3). The FIP2 metric is particularly interesting because it can be derived from any ROC curve, provided that prevalence is also known. In addition to its relation to measurement errors when used as an evaluation criterion (which makes it more interesting than the ROC AUC), a further advantage of the FIP2 metric is ...

  3. DIGITAL MARKETING: SUCCESS METRICS, FUTURE TRENDS

    OpenAIRE

    Preeti Kaushik

    2017-01-01

    Abstract – Business marketing is one of the areas that has been most profoundly affected by the digital world in the last few years. Digital marketing refers to advertising through digital channels. This paper provides a detailed study of metrics for measuring the success of digital marketing platforms and a glimpse of the future of these technologies by 2020.

  4. Language Games: University Responses to Ranking Metrics

    Science.gov (United States)

    Heffernan, Troy A.; Heffernan, Amanda

    2018-01-01

    League tables of universities that measure performance in various ways are now commonplace, with numerous bodies providing their own rankings of how institutions throughout the world are seen to be performing on a range of metrics. This paper uses Lyotard's notion of language games to theorise that universities are regaining some power over being…

  5. Completion of a Dislocated Metric Space

    Directory of Open Access Journals (Sweden)

    P. Sumati Kumari

    2015-01-01

    Full Text Available We provide a construction for the completion of a dislocated metric space (abbreviated d-metric space); we also prove that the completion of the metric associated with a d-metric coincides with the metric associated with the completion of the d-metric.

  6. Landscape pattern metrics and regional assessment

    Science.gov (United States)

    O'Neill, R. V.; Riitters, K.H.; Wickham, J.D.; Jones, K.B.

    1999-01-01

    The combination of remote imagery data, geographic information systems software, and landscape ecology theory provides a unique basis for monitoring and assessing large-scale ecological systems. A unique feature of the work has been the need to develop and interpret quantitative measures of spatial pattern: the landscape indices. This article reviews what is known about the statistical properties of these pattern metrics and suggests some additional metrics based on island biogeography, percolation theory, hierarchy theory, and economic geography. Assessment applications of this approach have required interpreting the pattern metrics in terms of specific environmental endpoints, such as wildlife and water quality, and research into how to represent synergistic effects of many overlapping sources of stress.

  7. Trapping of cis-2-butene-1,4-dial to measure furan metabolism in human liver microsomes by cytochrome P450 enzymes.

    Science.gov (United States)

    Gates, Leah A; Lu, Ding; Peterson, Lisa A

    2012-03-01

    Furan is a liver toxicant and carcinogen in rodents. It is classified as a possible human carcinogen, but the human health effects of furan exposure remain unknown. The oxidation of furan by cytochrome P450 (P450) enzymes is necessary for furan toxicity. The product of this reaction is the reactive α,β-unsaturated dialdehyde, cis-2-butene-1,4-dial (BDA). To determine whether human liver microsomes metabolize furan to BDA, a liquid chromatography/tandem mass spectrometry method was developed to detect and quantify BDA by trapping this reactive metabolite with N-acetyl-l-cysteine (NAC) and N-acetyl-l-lysine (NAL). Reaction of NAC and NAL with BDA generates N-acetyl-S-[1-(5-acetylamino-5-carboxypentyl)-1H-pyrrol-3-yl]-l-cysteine (NAC-BDA-NAL). Formation of NAC-BDA-NAL was quantified in 21 different human liver microsomal preparations. The levels of metabolism were comparable to that observed in F-344 rat and B6C3F1 mouse liver microsomes, two species known to be sensitive to furan-induced toxicity. Studies with recombinant human liver P450s indicated that CYP2E1 is the most active human liver furan oxidase. The activity of CYP2E1 as measured by p-nitrophenol hydroxylase activity was correlated to the extent of NAC-BDA-NAL formation in human liver microsomes. The formation of NAC-BDA-NAL was blocked by CYP2E1 inhibitors but not other P450 inhibitors. These results suggest that humans are capable of oxidizing furan to its toxic metabolite, BDA, at rates comparable to those of species sensitive to furan exposure. Therefore, humans may be susceptible to furan's toxic effects.

  8. Fiber-Optic Temperature and Pressure Sensors Applied to Radiofrequency Thermal Ablation in Liver Phantom: Methodology and Experimental Measurements

    Directory of Open Access Journals (Sweden)

    Daniele Tosi

    2015-01-01

    Full Text Available Radiofrequency thermal ablation (RFA) is a procedure aimed at interventional cancer care and is applied to the treatment of small- and midsize tumors in lung, kidney, liver, and other tissues. RFA generates a selective high-temperature field in the tissue; temperature values and their persistency are directly related to the mortality rate of tumor cells. Temperature measurement in up to 3–5 points, using electrical thermocouples, belongs to the present clinical practice of RFA and is the foundation of a physical model of the ablation process. Fiber-optic sensors allow extending the detection of biophysical parameters to a vast plurality of sensing points, using miniature and noninvasive technologies that do not alter the RFA pattern. This work addresses the methodology for optical measurement of temperature distribution and pressure using four different fiber-optic technologies: fiber Bragg gratings (FBGs), linearly chirped FBGs (LCFBGs), Rayleigh scattering-based distributed temperature systems (DTS), and extrinsic Fabry-Perot interferometry (EFPI). For each instrument, the methodology for ex vivo sensing, as well as experimental results, is reported, leading to the application of fiber-optic technologies in vivo. The possibility of using a fiber-optic sensor network, in conjunction with a suitable ablation device, can enable smart ablation procedures in which ablation parameters are dynamically adjusted.

  9. Direct measurement of the initial proton extrusion to oxygen uptake ratio accompanying succinate oxidation by rat liver mitochondria.

    Science.gov (United States)

    Setty, O H; Shrager, R I; Bunow, B; Reynafarje, B; Lehninger, A L; Hendler, R W

    1986-01-01

    The problem of obtaining very early ratios for the H+/O stoichiometry accompanying succinate oxidation by rat liver mitochondria was attacked using new techniques for direct measurement rather than extrapolations based on data obtained after mixing and the recovery of the electrode from initial injection of O2. Respiration was quickly initiated in a thoroughly mixed O2-containing suspension of mitochondria under a CO atmosphere by photolysis of the CO-cytochrome c oxidase complex. Fast-responding O2 and pH electrodes were used to collect data every 10 ms. The response time for each electrode was experimentally measured in each experiment and suitable corrections for electrode relaxations were made. With uncorrected data obtained after 0.8 s, the extrapolation back to zero time on the basis of single-exponential curve fitting confirmed values close to 8.0 as previously reported (Costa et al., 1984). The data directly obtained, however, indicate an initial burst in the H+/O ratio that peaked to values of approximately 20 to 30 prior to 50 ms and which was no longer evident after 0.3 s. Newer information and considerations that place all extrapolation methods in question are discussed. PMID:3019443
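The correction step the authors describe for a sluggish electrode can be sketched with the standard first-order-lag inversion, where the true signal is estimated as the measured signal plus the electrode time constant times its derivative. All numbers below (time constants, burst amplitude) are illustrative, not the study's values:

```python
import numpy as np

tau_e = 0.050                           # assumed electrode time constant, s
dt = 0.010                              # samples every 10 ms, as in the study
t = np.arange(0.0, 1.0, dt)

# Synthetic "true" H+/O trace: a brief burst to ~25 decaying fast
# (tau = 60 ms, illustrative) toward a steady-state ratio of 8.
y_true = 8.0 + 17.0 * np.exp(-t / 0.060)

# Simulate the sluggish electrode as a first-order lag (forward Euler).
y_meas = np.empty_like(y_true)
y_meas[0] = y_true[0]
for i in range(1, len(t)):
    y_meas[i] = y_meas[i - 1] + dt / tau_e * (y_true[i - 1] - y_meas[i - 1])

# First-order correction: y_true ~= y_meas + tau_e * d(y_meas)/dt.
y_corr = y_meas + tau_e * np.gradient(y_meas, dt)

# At t = 50 ms the corrected trace tracks the burst far better than the raw one.
print(round(float(y_true[5]), 1), round(float(y_meas[5]), 1), round(float(y_corr[5]), 1))
```

This also illustrates why the burst is invisible to extrapolations from data after 0.8 s: with a 60 ms decay constant, the burst component has fallen to well under a millionth of its initial amplitude by then.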

  10. Metrics for Business Process Models

    Science.gov (United States)

    Mendling, Jan

    Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument would imply that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores to what extent certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.

  11. Liver Stiffness Measured by Two-Dimensional Shear-Wave Elastography: Prognostic Value after Radiofrequency Ablation for Hepatocellular Carcinoma.

    Science.gov (United States)

    Lee, Dong Ho; Lee, Jeong Min; Yoon, Jung-Hwan; Kim, Yoon Jun; Lee, Jeong-Hoon; Yu, Su Jong; Han, Joon Koo

    2018-03-01

    To evaluate the prognostic value of liver stiffness (LS) measured using two-dimensional (2D) shear-wave elastography (SWE) in patients with hepatocellular carcinoma (HCC) treated by radiofrequency ablation (RFA). The Institutional Review Board approved this retrospective study and informed consent was obtained from all patients. A total of 134 patients with up to 3 HCCs ≤5 cm who had undergone pre-procedural 2D-SWE prior to RFA treatment between January 2012 and December 2013 were enrolled. LS values were measured using real-time 2D-SWE before RFA on the procedural day. After a mean follow-up of 33.8 ± 9.9 months, we analyzed the overall survival after RFA using the Kaplan-Meier method and a Cox proportional hazard regression model. The optimal cutoff LS value to predict overall survival was determined using the minimal p value approach. During the follow-up period, 22 patients died, and the estimated 1- and 3-year overall survival rates were 96.4 and 85.8%, respectively. LS measured by 2D-SWE was found to be a significant predictive factor for overall survival after RFA of HCCs, as was the presence of extrahepatic metastases. The optimal cutoff LS value for the prediction of overall survival was determined to be 13.3 kPa. In our study, 71 patients had LS values ≥13.3 kPa, and the estimated 3-year overall survival was 76.8%, compared to 96.3% in the 63 patients with LS values <13.3 kPa. In conclusion, LS measured by 2D-SWE was a significant predictive factor for overall survival after RFA for HCC.

  12. The metric system: An introduction

    Energy Technology Data Exchange (ETDEWEB)

    Lumley, S.M.

    1995-05-01

    On July 13, 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system, which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost-effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on July 25, 1991, President George Bush signed Executive Order 12770, which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first we examine the reasons behind the nation's conversion to the metric system. The second part of this report is on applying the metric system.

  13. Attack-Resistant Trust Metrics

    Science.gov (United States)

    Levien, Raph

    The Internet is an amazingly powerful tool for connecting people together, unmatched in human history. Yet, with that power comes great potential for spam and abuse. Trust metrics are an attempt to compute which people are trustworthy and which are likely attackers. This chapter presents two specific trust metrics developed and deployed on the Advogato Website, which is a community blog for free software developers. This real-world experience demonstrates that the trust metrics fulfilled their goals, but that for good results, it is important to match the assumptions of the abstract trust metric computation to the real-world implementation.

  14. The metric system: An introduction

    Science.gov (United States)

    Lumley, Susan M.

    On 13 Jul. 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system, which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost-effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on 25 Jul. 1991, President George Bush signed Executive Order 12770, which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first we examine the reasons behind the nation's conversion to the metric system. The second part of this report is on applying the metric system.

  15. Metric-adjusted skew information

    DEFF Research Database (Denmark)

    Liang, Cai; Hansen, Frank

    2010-01-01

    We give a truly elementary proof of the convexity of metric-adjusted skew information following an idea of Effros. We extend earlier results of weak forms of superadditivity to general metric-adjusted skew information. Recently, Luo and Zhang introduced the notion of semi-quantum states on a bipartite system and proved superadditivity of the Wigner-Yanase-Dyson skew informations for such states. We extend this result to the general metric-adjusted skew information. We finally show that a recently introduced extension to parameter values 1 … of (unbounded) metric-adjusted skew information.

  16. Two classes of metric spaces

    Directory of Open Access Journals (Sweden)

    Isabel Garrido

    2016-04-01

    Full Text Available The class of metric spaces (X,d) known as small-determined spaces, introduced by Garrido and Jaramillo, are properly defined by means of some type of real-valued Lipschitz functions on X. On the other hand, B-simple metric spaces introduced by Hejcman are defined in terms of some kind of bornologies of bounded subsets of X. In this note we present a common framework where both classes of metric spaces can be studied which allows us to see not only the relationships between them but also to obtain new internal characterizations of these metric properties.

  17. Liver volume measurement: reason of the difference between in vivo CT-volumetry and intraoperative ex vivo determination and how to cope it.

    Science.gov (United States)

    Niehues, Stefan M; Unger, J K; Malinowski, M; Neymeyer, J; Hamm, B; Stockmann, M

    2010-08-20

    Volumetric assessment of the liver regularly yields discrepant results between pre- and intraoperatively determined volumes. Nevertheless, the main factor responsible for this discrepancy remains unclear. The aim of this study was to systematically determine the difference between in vivo CT-volumetry and ex vivo volumetry in a pig animal model. Eleven pigs were studied. Liver density assessment, CT-volumetry, and water displacement volumetry were performed after surgical removal of the complete liver. Known possible errors of volume determination, such as resection or segmentation borders, were eliminated in this model. Regression analysis was performed and differences between CT-volumetry and water displacement were determined. Median liver density was 1.07 g/ml. Regression analysis showed a high correlation of r2 = 0.985 between CT-volumetry and water displacement. CT-volumetry was found to be 13% higher than water displacement volumetry (p < …). In this study the only relevant factor leading to the difference between in vivo CT-volumetry and ex vivo water displacement volumetry seems to be blood perfusion of the liver. The systematic difference of 13 percent has to be taken into account when dealing with these measures.
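The reported systematic offset folds into a simple correction when comparing the two modalities. A minimal sketch: the 1.13 factor and the 1.07 g/ml density come from the abstract, while the function names and the example volume are ours:

```python
# Correct an in vivo CT-derived liver volume for the systematic ~13%
# overestimation reported versus ex vivo water displacement (attributed
# to blood perfusion), and convert volume to mass via the median density.

CT_OVERESTIMATION = 1.13        # CT-volumetry reads ~13% high (from the study)
LIVER_DENSITY_G_PER_ML = 1.07   # median liver density reported in the study

def ct_to_ex_vivo_volume(ct_volume_ml: float) -> float:
    """Estimate the ex vivo (water displacement) volume from a CT volume."""
    return ct_volume_ml / CT_OVERESTIMATION

def volume_to_mass_g(volume_ml: float) -> float:
    """Convert a liver volume to mass using the median measured density."""
    return volume_ml * LIVER_DENSITY_G_PER_ML

ct_vol = 1500.0                            # ml, hypothetical CT result
ex_vivo = ct_to_ex_vivo_volume(ct_vol)     # roughly 1327 ml
mass = volume_to_mass_g(ex_vivo)           # roughly 1420 g
print(f"{ex_vivo:.0f} ml, {mass:.0f} g")
```

Whether such a fixed divisor transfers from the pig model to human grafts is, of course, a clinical question the abstract does not settle.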

  18. Liver volume measurement: reason of the difference between in vivo CT-volumetry and intraoperative ex vivo determination and how to cope it

    Directory of Open Access Journals (Sweden)

    Niehues SM

    2010-08-01

    Full Text Available Abstract Purpose Volumetric assessment of the liver regularly yields discrepant results between pre- and intraoperatively determined volumes. Nevertheless, the main factor responsible for this discrepancy remains unclear. The aim of this study was to systematically determine the difference between in vivo CT-volumetry and ex vivo volumetry in a pig animal model. Material and Methods Eleven pigs were studied. Liver density assessment, CT-volumetry, and water displacement volumetry were performed after surgical removal of the complete liver. Known possible errors of volume determination, such as resection or segmentation borders, were eliminated in this model. Regression analysis was performed and differences between CT-volumetry and water displacement were determined. Results Median liver density was 1.07 g/ml. Regression analysis showed a high correlation of r2 = 0.985 between CT-volumetry and water displacement. CT-volumetry was found to be 13% higher than water displacement volumetry (p < …). Conclusion In this study the only relevant factor leading to the difference between in vivo CT-volumetry and ex vivo water displacement volumetry seems to be blood perfusion of the liver. The systematic difference of 13 percent has to be taken into account when dealing with these measures.

  19. Flow cytometric measurement of the metabolism of benzo[a]pyrene by mouse liver cells in culture

    International Nuclear Information System (INIS)

    Bartholomew, J.C.; Wade, C.G.; Dougherty, K.

    1984-01-01

    The metabolism of benzo[a]pyrene in individual cells was monitored by flow cytometry. The measurements are based on the alterations that occur in the fluorescence emission spectrum of benzo[a]pyrene when it is converted to various metabolites. Using present instrumentation, the technique could easily detect 1 × 10^6 molecules per cell of benzo[a]pyrene and 1 × 10^7 molecules per cell of the diol epoxide. The analysis of C3H 10T1/2 mouse fibroblasts growing in culture indicated that there was heterogeneity in the conversion of the parent compound into the diol epoxide derivative, suggesting that some variation in sensitivity to transformation by benzo[a]pyrene may be due to differences in cellular metabolism

  20. Techniques of cardiac output measurement during liver transplantation: arterial pulse wave versus thermodilution

    DEFF Research Database (Denmark)

    Nissen, P.; Lieshout, J.J. van; Novovic, S.

    2009-01-01

    …reperfusion), without the detection of any significant difference between the 2 estimates of CO. TDCO ranged from 2.3 to 17.2 L/minute, and the bias (the mean difference between MCO and TDCO) prior to calibration was -0.4 +/- 1.6 L/minute (mean +/- standard deviation; 1309 paired measurements; 95% limits of agreement: -3.4 to 2.6 L/minute). After calibration of the first determined MCO by the simultaneously determined TDCO, the bias was 0.1 +/- 1.5 L/minute, with 57% (n = 744) of the comparisons being less than 1 L/minute and 35% (n = 453) being less than 0.5 L/minute; this was independent of the level of CO…

  1. Assessing Software Quality Through Visualised Cohesion Metrics

    Directory of Open Access Journals (Sweden)

    Timothy Shih

    2001-05-01

    Full Text Available Cohesion is one of the most important factors for software quality, as well as maintainability, reliability, and reusability. Module cohesion is defined as a quality attribute that seeks to measure the singleness of purpose of a module. A module of poor quality can be a serious obstacle to system quality. In order to produce software of good quality, software managers and engineers need to introduce cohesion metrics to measure and produce desirable software. Highly cohesive software is thought to be a desirable construction. In this paper, we propose a function-oriented cohesion metric based on the analysis of live variables, live span, and the visualization of the processing-element dependency graph. We give six typical cohesion examples to be measured as our experiments and justification. A well-defined, well-normalized, well-visualized, and experimentally validated cohesion metric is therefore proposed to indicate and thus enhance software cohesion strength. Furthermore, this cohesion metric can easily be incorporated into a software CASE tool to help software engineers improve software quality.
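The live-variable analysis underlying such metrics can be illustrated with a toy version: model a function as a list of per-statement (defined, used) variable sets and score each variable's live span. This is an illustrative simplification of our own devising, not the metric defined in the paper:

```python
def live_spans(lines):
    """lines: list of (defs, uses) sets, one pair per statement.
    Returns {var: span}, where span = index of last use - index of
    first definition (0 if the variable is never used)."""
    first_def, last_use = {}, {}
    for i, (defs, uses) in enumerate(lines):
        for v in defs:
            first_def.setdefault(v, i)
        for v in uses:
            last_use[v] = i
    return {v: last_use.get(v, first_def[v]) - first_def[v]
            for v in first_def}

# Toy function:  a = ...; b = ...; c = a + b; return c
toy = [({"a"}, set()), ({"b"}, set()), ({"c"}, {"a", "b"}), (set(), {"c"})]
spans = live_spans(toy)
avg_span = sum(spans.values()) / len(spans)
print(spans, round(avg_span, 2))  # -> {'a': 2, 'b': 1, 'c': 1} 1.33
```

Shorter average spans loosely suggest statements that work together on the same data, which is the intuition a live-variable cohesion metric formalizes.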

  2. Liver Hypertension: Treatment in Infancy !

    Indian Academy of Sciences (India)

    Liver Hypertension: Treatment in Infancy! Liver Disease > Heart. No good non-invasive method. Repeated measurements problematic. Drug efficacy 50% at best. No predictors of response. We Need YOU !!

  3. Morphology and morphometry of the caudate lobe of the liver in two populations.

    Science.gov (United States)

    Sagoo, Mandeep Gill; Aland, R Claire; Gosden, Edward

    2018-01-01

    The caudate lobe of the liver has portal blood supply and hepatic vein drainage independent of the remainder of the liver and may be differentially affected in liver pathologies. Ultrasonographic measurement of the caudate lobe can be used to generate hepatic indices that may indicate cirrhosis. This study investigated the relationship of metrics of the caudate lobe and other morphological features of human livers from a northwest Indian Punjabi (NWI) population (n = 50) and a UK Caucasian (UKC) population (n = 25), which may affect the calculation of hepatic indices. The width of the right lobe of the liver was significantly smaller, while the anteroposterior diameter of the caudate lobe and both Harbin's Index and the Hess Index scores were significantly larger, in NWI livers than in UKC livers. The Hess Index score, in particular, is much larger in the NWI population (265 %, p < …) than in the UKC liver. These differences may affect the calculation of hepatic indices, resulting in a greater percentage of false positives of cirrhosis in the NWI population. Population-specific data are required to correctly determine normal ranges.
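Harbin's Index is conventionally the ratio of caudate lobe width to right lobe width, with values above roughly 0.65 taken as suggestive of cirrhosis. A minimal sketch; the threshold and the sample measurements are illustrative conventions, not values from this study:

```python
def harbins_index(caudate_width_cm: float, right_lobe_width_cm: float) -> float:
    """Caudate-to-right-lobe width ratio (Harbin's Index)."""
    return caudate_width_cm / right_lobe_width_cm

def suggests_cirrhosis(index: float, threshold: float = 0.65) -> bool:
    """Commonly cited cutoff. Population-specific cutoffs may differ,
    which is exactly the false-positive concern raised in the abstract."""
    return index >= threshold

print(round(harbins_index(3.9, 5.2), 2))  # -> 0.75
print(suggests_cirrhosis(0.75))           # -> True
```

A narrower right lobe, as reported for the NWI livers, inflates this ratio even in a healthy liver, which is how a fixed threshold produces population-dependent false positives.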

  4. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    Science.gov (United States)

    Post, J. V.

    1981-01-01

    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  5. Q-FISH measurement of hepatocyte telomere lengths in donor liver and graft after pediatric living-donor liver transplantation: donor age affects telomere length sustainability.

    Directory of Open Access Journals (Sweden)

    Youichi Kawano

    Full Text Available Along with the increasing need for living-donor liver transplantation (LDLT), the issue of organ shortage has become a serious problem. Therefore, the use of organs from elderly donors has been increasing. While the short-term results of LDLT have greatly improved, problems affecting the long-term outcome of transplant patients remain unsolved. Furthermore, since contradictory data have been reported with regard to the relationship between donor age and LT/LDLT outcome, the question of whether the use of elderly donors influences the long-term outcome of a graft after LT/LDLT remains unsettled. To address whether hepatocyte telomere length reflects the outcome of LDLT, we analyzed the telomere lengths of hepatocytes in informative biopsy samples from 12 paired donors and recipients (grafts) of pediatric LDLT more than 5 years after adult-to-child LDLT because of primary biliary atresia, using quantitative fluorescence in situ hybridization (Q-FISH). The telomere lengths in the paired samples showed a robust relationship between the donor and grafted hepatocytes (r = 0.765, p = 0.0038), demonstrating the feasibility of our Q-FISH method for cell-specific evaluation. While 8 pairs showed no significant difference between the telomere lengths for the donor and the recipient, the other 4 pairs showed significantly shorter telomeres in the recipient than in the donor. Multiple regression analysis revealed that the donors in the latter group were older than those in the former (p = 0.001). Despite the small number of subjects, this pilot study indicates that donor age is a crucial factor affecting telomere length sustainability in hepatocytes after pediatric LDLT, and that the telomeres in grafted livers may be sustained somewhat longer when the grafts are immunologically well controlled.

  6. Using Publication Metrics to Highlight Academic Productivity and Research Impact

    Science.gov (United States)

    Carpenter, Christopher R.; Cone, David C.; Sarli, Cathy C.

    2016-01-01

    This article provides a broad overview of widely available measures of academic productivity and impact using publication data and highlights uses of these metrics for various purposes. Metrics based on publication data include measures such as number of publications, number of citations, the journal impact factor score, and the h-index, as well as emerging document-level metrics. Publication metrics can be used for a variety of purposes, including tenure and promotion, grant applications and renewal reports, benchmarking, recruiting efforts, and administrative purposes such as departmental or university performance reports. The authors also highlight practical applications of measuring and reporting academic productivity and impact to emphasize and promote individual investigators, grant applications, or department output. PMID:25308141

  7. Effects of Metric Change on Workers’ Tools and Training.

    Science.gov (United States)

    1981-07-01

    …understanding of the metric system, and particularly a lack of fluency in converting customary measurements to metric measurements, may increase the… Occupational groups listed include: assembly, installing, and repairing occupations; 84 Painting, plastering, waterproofing, cementing, and related occupations; 85 Excavating, grading, paving, and related occupations; 86 Construction occupations, n.e.c.; 89 Structural work occupations.

  8. Evaluating hydrological model performance using information theory-based metrics

    Science.gov (United States)

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...
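One information-theory complement of the kind this record gestures at can be sketched by comparing the distributions of simulated and measured flows, for example via Kullback-Leibler divergence over shared bins. This is an illustrative choice of metric and data; the truncated abstract does not specify which metrics the authors actually used:

```python
import math

def kl_divergence(p, q):
    """KL(P||Q) in bits over aligned discrete distributions (q > 0 where p > 0)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def histogram(values, edges):
    """Normalized histogram of values over [edges[i], edges[i+1]) bins."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(edges) - 1):
            if edges[i] <= v < edges[i + 1]:
                counts[i] += 1
                break
    total = sum(counts)
    return [c / total for c in counts]

edges = [0, 5, 10, 20, 50]   # flow bins (m^3/s), illustrative
measured  = [1, 3, 4, 6, 7, 9, 12, 15, 18, 30]   # hypothetical observations
simulated = [2, 3, 5, 6, 8, 9, 11, 14, 16, 22]   # hypothetical model output
p = histogram(measured, edges)
q = histogram(simulated, edges)
print(round(kl_divergence(p, q), 3))  # small value = similar flow distributions
```

Unlike a squared-error score, this measures whether the model reproduces the observed flow regime (how often flows of each magnitude occur) rather than pointwise agreement in time.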

  9. Virtual reality, ultrasound-guided liver biopsy simulator: development and performance discrimination

    Science.gov (United States)

    Johnson, S J; Hunt, C M; Woolnough, H M; Crawshaw, M; Kilkenny, C; Gould, D A; England, A; Sinha, A; Villard, P F

    2012-01-01

    Objectives The aim of this article was to identify and prospectively investigate simulated ultrasound-guided targeted liver biopsy performance metrics as differentiators between levels of expertise in interventional radiology. Methods Task analysis produced detailed procedural step documentation allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Consultant (n=14; male=11, female=3) and trainee (n=26; male=19, female=7) scores on the performance metrics were compared. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Independent t-tests and analysis of variance (ANOVA) investigated differences between groups. Results Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=−2.487 (−2.040 to −0.207); probe usage time, p = 0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=−2.272 (−0.028 to −0.002). ANOVA reported significant differences across years of experience (0–1, 1–2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better performance scores on all 19 performance metrics. Conclusion It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required. PMID:21304005

  10. Changes in liver stiffness measurement using acoustic radiation force impulse elastography after antiviral therapy in patients with chronic hepatitis C.

    Directory of Open Access Journals (Sweden)

    Sheng-Hung Chen

    Full Text Available To compare on-treatment and off-treatment parameters acquired using acoustic radiation force impulse elastography, the Fibrosis-4 (FIB-4) index, and the aspartate aminotransferase-to-platelet ratio index (APRI) in patients with chronic hepatitis C (CHC). Patients received therapies based on pegylated interferon or direct-acting antiviral agents. The changes in paired patient parameters, including liver stiffness (LS) values, the FIB-4 index, and APRI, from baseline to the sustained virologic response (SVR) visit (24 weeks after the end of treatment) were compared. Multiple regression models were used to identify significant factors that explained the correlations with LS, FIB-4, and APRI values and SVR. A total of 256 patients were included, of which 219 (85.5%) achieved SVR. The paired LS values declined significantly from baseline to the SVR visit in all groups and subgroups except the nonresponder subgroup (n = 10). Body mass index (P = 0.0062) and baseline LS (P < 0.0001) were identified as independent factors that explained the LS declines. Likewise, the baseline FIB-4 (P < 0.0001) and APRI (P < 0.0001) values independently explained the declines in the FIB-4 index and APRI, respectively. Moreover, interleukin-28B polymorphisms, baseline LS, and rapid virologic response were identified as independent correlates with SVR. Paired LS measurements in patients treated for CHC exhibited significant declines comparable to those in FIB-4 and APRI values. These declines may have correlated with the resolution of necroinflammation. Baseline LS values predicted SVR.
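The two serum indices compared with elastography here have standard closed-form definitions, which makes them easy to reproduce. A minimal sketch using the usual published formulas; the lab values in the example are hypothetical:

```python
import math

def fib4(age_years: float, ast: float, alt: float, platelets_10e9_per_l: float) -> float:
    """FIB-4 index = (age x AST) / (platelets x sqrt(ALT)).
    AST/ALT in U/L, platelets in 10^9/L."""
    return (age_years * ast) / (platelets_10e9_per_l * math.sqrt(alt))

def apri(ast: float, ast_uln: float, platelets_10e9_per_l: float) -> float:
    """APRI = (AST / upper limit of normal for AST) / platelets x 100."""
    return (ast / ast_uln) / platelets_10e9_per_l * 100.0

# Hypothetical pre-treatment labs: age 50, AST 80 U/L (ULN 40 U/L),
# ALT 60 U/L, platelets 150 x 10^9/L.
print(round(fib4(50, 80, 60, 150), 2))   # -> 3.44
print(round(apri(80, 40, 150), 2))       # -> 1.33
```

Because both indices are driven by transaminases and platelets, a post-SVR fall may partly reflect resolving inflammation rather than fibrosis regression, which is the caveat the abstract raises for LS declines as well.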

  11. Progression of biopsy-measured liver fibrosis in untreated patients with hepatitis C infection: non-Markov multistate model analysis.

    Directory of Open Access Journals (Sweden)

    Peter Bacchetti

    Full Text Available BACKGROUND: Fibrosis stages from liver biopsies reflect liver damage from hepatitis C infection, but analysis is challenging due to their ordered but non-numeric nature, infrequent measurement, misclassification, and unknown infection times. METHODS: We used a non-Markov multistate model, accounting for misclassification, with multiple imputation of unknown infection times, applied to 1062 participants of whom 159 had multiple biopsies. Odds ratios (OR) quantified the estimated effects of covariates on progression risk at any given time. RESULTS: Models estimated that progression risk decreased the more time participants had already spent in the current stage; African American race was protective (OR 0.75, 95% confidence interval 0.60 to 0.95, p = 0.018), and older current age increased risk (OR 1.33 per decade, 95% confidence interval 1.15 to 1.54, p = 0.0002). When controlled for current age, older age at infection did not appear to increase risk (OR 0.92 per decade, 95% confidence interval 0.47 to 1.79, p = 0.80). There was a suggestion that co-infection with human immunodeficiency virus increased risk of progression in the era of highly active antiretroviral treatment beginning in 1996 (OR 2.1, 95% confidence interval 0.97 to 4.4, p = 0.059). Other examined risk factors may influence progression risk, but evidence for or against this was weak due to wide confidence intervals. The main results were essentially unchanged using different assumed misclassification rates or imputation of age of infection. DISCUSSION: The analysis avoided problems inherent in simpler methods, supported the previously suspected protective effect of African American race, and suggested that current age rather than age of infection increases risk. Decreasing risk of progression with longer time already spent in a stage was also previously found for post-transplant progression. This could reflect varying disease activity, with recent progression indicating…

  12. 77 FR 12832 - Non-RTO/ISO Performance Metrics; Commission Staff Request Comments on Performance Metrics for...

    Science.gov (United States)

    2012-03-02

    ... Performance Metrics; Commission Staff Request Comments on Performance Metrics for Regions Outside of RTOs and... performance communicate about the benefits of RTOs and, where appropriate, (2) changes that need to be made to... common set of performance measures for markets both within and outside of ISOs/RTOs. As recommended by...

  13. Standardised metrics for global surgical surveillance.

    Science.gov (United States)

    Weiser, Thomas G; Makary, Martin A; Haynes, Alex B; Dziekan, Gerald; Berry, William R; Gawande, Atul A

    2009-09-26

    Public health surveillance relies on standardised metrics to evaluate disease burden and health system performance. Such metrics have not been developed for surgical services despite increasing volume, substantial cost, and high rates of death and disability associated with surgery. The Safe Surgery Saves Lives initiative of WHO's Patient Safety Programme has developed standardised public health metrics for surgical care that are applicable worldwide. We assembled an international panel of experts to develop and define metrics for measuring the magnitude and effect of surgical care in a population, while taking into account economic feasibility and practicability. This panel recommended six measures for assessing surgical services at a national level: number of operating rooms, number of operations, number of accredited surgeons, number of accredited anaesthesia professionals, day-of-surgery death ratio, and postoperative in-hospital death ratio. We assessed the feasibility of gathering such statistics at eight diverse hospitals in eight countries and incorporated them into the WHO Guidelines for Safe Surgery, in which methods for data collection, analysis, and reporting are outlined.
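
    The two outcome measures recommended above can be computed from routine counts. As an illustrative sketch (not part of the WHO guidelines themselves), the function names and the use of total operations as the denominator are assumptions:

```python
def day_of_surgery_death_ratio(deaths_on_day_of_surgery, operations):
    # Deaths occurring on the day of surgery per operation performed
    # (denominator assumed to be the total number of operations).
    return deaths_on_day_of_surgery / operations

def postoperative_inhospital_death_ratio(in_hospital_deaths, operations):
    # Deaths occurring in hospital after surgery per operation performed.
    return in_hospital_deaths / operations
```

    For example, 2 day-of-surgery deaths across 1000 operations gives a ratio of 0.002.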

  14. Multimetric indices: How many metrics?

    Science.gov (United States)

    Multimetric indices (MMIs) often include 5 to 15 metrics, each representing a different attribute of assemblage condition, such as species diversity, tolerant taxa, and nonnative taxa. Is there an optimal number of metrics for MMIs? To explore this question, I created 1000 9-met...

  15. Metrical Phonology: German Sound System.

    Science.gov (United States)

    Tice, Bradley S.

    Metrical phonology, a linguistic process of phonological stress assessment and diagrammatic simplification of sentence and word stress, is discussed as it is found in the English and German languages. The objective is to promote use of metrical phonology as a tool for enhancing instruction in stress patterns in words and sentences, particularly in…

  16. Extending cosmology: the metric approach

    OpenAIRE

    Mendoza, S.

    2012-01-01

    Comment: 2012, Extending Cosmology: The Metric Approach, Open Questions in Cosmology; Review article for an Intech "Open questions in cosmology" book chapter (19 pages, 3 figures). Available from: http://www.intechopen.com/books/open-questions-in-cosmology/extending-cosmology-the-metric-approach

  17. Numerical Calabi-Yau metrics

    International Nuclear Information System (INIS)

    Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, Rene

    2008-01-01

    We develop numerical methods for approximating Ricci flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one parameter family of quintics. We also suggest several ways to extend our results

  18. High resolution metric imaging payload

    Science.gov (United States)

    Delclaud, Y.

    2017-11-01

    Alcatel Space Industries has become Europe's leader in the field of high and very high resolution optical payloads, in the framework of Earth observation systems able to provide military and government users with metric images from space. This leadership allowed Alcatel to propose for the export market, within a French collaboration framework, a complete space-based system for metric observation.

  19. Weyl metrics and wormholes

    Energy Technology Data Exchange (ETDEWEB)

    Gibbons, Gary W. [DAMTP, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA U.K. (United Kingdom); Volkov, Mikhail S., E-mail: gwg1@cam.ac.uk, E-mail: volkov@lmpt.univ-tours.fr [Laboratoire de Mathématiques et Physique Théorique, LMPT CNRS—UMR 7350, Université de Tours, Parc de Grandmont, Tours, 37200 France (France)

    2017-05-01

    We study solutions obtained via applying dualities and complexifications to the vacuum Weyl metrics generated by massive rods and by point masses. Rescaling them and extending to complex parameter values yields axially symmetric vacuum solutions containing singularities along circles that can be viewed as singular matter sources. These solutions have wormhole topology with several asymptotic regions interconnected by throats and their sources can be viewed as thin rings of negative tension encircling the throats. For a particular value of the ring tension the geometry becomes exactly flat although the topology remains non-trivial, so that the rings literally produce holes in flat space. To create a single ring wormhole of one metre radius one needs a negative energy equivalent to the mass of Jupiter. Further duality transformations dress the rings with the scalar field, either conventional or phantom. This gives rise to large classes of static, axially symmetric solutions, presumably including all previously known solutions for a gravity-coupled massless scalar field, as for example the spherically symmetric Bronnikov-Ellis wormholes with phantom scalar. The multi-wormholes contain infinite struts everywhere at the symmetry axes, apart from solutions with locally flat geometry.

  20. Smart Grid Status and Metrics Report

    Energy Technology Data Exchange (ETDEWEB)

    Balducci, Patrick J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Weimar, Mark R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kirkham, Harold [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-07-01

    To convey progress made in achieving the vision of a smart grid, this report uses a set of six characteristics derived from the National Energy Technology Laboratory Modern Grid Strategy. It measures 21 metrics to provide insight into the grid’s capacity to embody these characteristics. This report looks across a spectrum of smart grid concerns to measure the status of smart grid deployment and impacts.

  1. Evaluation of Real-time Measurement of Liver Tumor Movement and the Synchrony™ System's Accuracy in Radiosurgery using a Robotic CyberKnife

    International Nuclear Information System (INIS)

    Kim, Gha Jung; Shim, Su Jung; Kim, Jeong Ho; Min, Chul Kee; Chung, Weon Kuu

    2008-01-01

    This study aimed to quantitatively measure the movement of tumors in real-time and to evaluate the treatment accuracy in liver tumor patients who underwent radiosurgery with the Synchrony™ respiratory motion tracking system of the robotic CyberKnife. Materials and Methods: The study subjects included 24 liver tumor patients who underwent CyberKnife treatment, comprising 64 treatment sessions with the Synchrony™ respiratory motion tracking system. The treatment involved inserting 4 to 6 acupuncture needles into the vicinity of the liver tumor in all the patients using ultrasonography as a guide. A treatment plan was set up using the CT images acquired for treatment planning. The position of the acupuncture needles was identified at every treatment session using the Digitally Reconstructed Radiography (DRR) prepared at the time of treatment planning and X-ray images photographed in real-time. The results were stored through the Motion Tracking System (MTS) in the Mtsmain.log treatment file, and the movement of the tumor was measured from these data. In addition, the accuracy of CyberKnife radiosurgery was evaluated from the correlation errors between the real-time positions of the acupuncture needles and the predicted coordinates. Results: The maximum and average translational movements of the liver tumor were 23.5 mm and 13.9±5.5 mm, respectively, in the superior-inferior direction, 3.9 mm and 1.9±0.9 mm in the left-right direction, and 8.3 mm and 4.9±1.9 mm in the anterior-posterior direction. The maximum and average rotational movements of the liver tumor were 3.3° and 2.6±1.3° for X (left-right) axis rotation, 4.8° and 2.3±1.0° for Y (cranio-caudal) axis rotation, and 3.9° and 2.8±1.1° for Z (anterior-posterior) axis rotation. In addition, the average correlation error, which represents the treatment's accuracy, was 1.1±0.7 mm. Conclusion
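
    The per-axis summary statistics and the scalar correlation error reported above can be reproduced from tracked displacement logs. A minimal sketch; the assumption that the correlation error is the Euclidean distance between predicted and measured marker positions is ours, not stated in the abstract:

```python
import math
import statistics

def motion_summary(displacements_mm):
    """Maximum and mean +/- SD of tracked displacements along one axis (mm)."""
    return (max(displacements_mm),
            statistics.mean(displacements_mm),
            statistics.stdev(displacements_mm))

def correlation_error_3d(dx_mm, dy_mm, dz_mm):
    # Assumed definition: Euclidean distance between the predicted and
    # measured marker positions along the three axes.
    return math.sqrt(dx_mm**2 + dy_mm**2 + dz_mm**2)
```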

  2. Measurement of hepatic volume and effective blood flow with radioactive colloids: Evaluation of development in liver diseases

    International Nuclear Information System (INIS)

    Fujii, M.; Uchino, H.; Kyoto Univ.

    1982-01-01

    Changes in hepatic volume and the blood flow effectively perfusing the liver parenchyma were studied as an assessment of the severity of liver diseases. Hepatic effective blood flow was estimated as the hepatic fractional clearance of radioactive colloids, obtained from the disappearance rate multiplied by the fraction of the injected dose taken up by the liver. The hepatic fractional clearance was normal or not markedly decreased in patients with acute hepatitis which had developed favorably, but was severely decreased in patients with fulminant hepatitis. In liver diseases, the ratio of hepatic volume to fractional clearance was found to increase as the clearance decreased. In subjects with normal clearance, hepatic fractional clearance was correlated significantly with liver volume, indicating that hepatic effective blood flow is proportional to parenchymal volume in an unanesthetized, resting state. In biopsied cases, changes in volume and blood flow accorded well with changes indicated by morphological criteria. In chronic persistent hepatitis, effective hepatic blood flow is not diminished. However, differences in hepatic blood flow were observed between the cirrhosis or chronic aggressive hepatitis groups and the normal control group. Extension of chronic inflammatory infiltration into the parenchyma distinguishes chronic aggressive hepatitis from chronic persistent hepatitis. Architecture is often disturbed in the former. These changes should be accompanied by disturbance of microcirculation. The present study indicates that the decrease in effective hepatic blood flow in chronic hepatitis and cirrhosis has two aspects: one is a summation of microcirculatory disturbances, and the other is a decrease in liver cell mass. (orig.)
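
    The clearance calculation described above is a simple product; a minimal sketch with hypothetical function names and units:

```python
def hepatic_fractional_clearance(disappearance_rate_per_min, hepatic_uptake_fraction):
    # Fractional clearance = plasma disappearance rate of the colloid
    # multiplied by the fraction of the injected dose taken up by the liver.
    return disappearance_rate_per_min * hepatic_uptake_fraction

def volume_to_clearance_ratio(liver_volume_ml, fractional_clearance_per_min):
    # The study tracks this ratio, which rises as clearance falls.
    return liver_volume_ml / fractional_clearance_per_min
```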

  3. Accuracy of the CT-estimated weight of the right hepatic lobe prior to living related liver donation (LRLD) for predicting the intraoperatively measured weight of the graft

    International Nuclear Information System (INIS)

    Lemke, A.-J.; Brinkmann, M.; Felix, R.; Pascher, A.; Steinmueller, T.; Settmacher, U.; Neuhaus, P.

    2003-01-01

    Purpose: Due to the shortage of cadaver donors, living related liver donation (LRLD) has emerged as an alternative to cadaver donation. The expected graft weight is one of the main determinants for donor selection. This study investigates the accuracy of preoperatively performed CT-volumetry in predicting the actual weight of the right liver lobe graft. Materials and methods: In a prospective study, the weight of the right hepatic lobe was calculated by volumetric analysis based on CT in 33 patients (21 females, 12 males, mean age 42.1 years, median age 41 years) prior to living related liver donation. Graft weight was calculated as the product of the CT-based graft volume and 1.00 g/ml (the approximated density of healthy liver parenchyma). The calculated weight was compared with the intraoperatively measured weight of the harvested right hepatic lobe. The difference was used to determine a correction factor for estimating the actual graft weight. Results: Based on the assumption of a parenchymal density of 1.00 g/ml, the preoperatively estimated graft weight (mean 980 g ± 168 g) deviated +33% from the intraoperatively measured right hepatic lobe weight (mean 749 g ± 170 g). By reducing the preoperatively predicted weight of the right hepatic lobe with a correction factor of 0.75, the actual graft weight can be calculated [de
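
    The weight estimate and correction reported above amount to two multiplications; a sketch using the study's stated density (1.00 g/ml) and correction factor (0.75), with hypothetical function names:

```python
LIVER_DENSITY_G_PER_ML = 1.00   # approximated density of healthy liver parenchyma
CORRECTION_FACTOR = 0.75        # empirical factor derived in the study

def predicted_graft_weight_g(ct_volume_ml):
    # Uncorrected estimate: CT-based graft volume times parenchymal density.
    return ct_volume_ml * LIVER_DENSITY_G_PER_ML

def corrected_graft_weight_g(ct_volume_ml):
    # Corrected estimate, compensating for the ~+33% overestimation.
    return CORRECTION_FACTOR * predicted_graft_weight_g(ct_volume_ml)
```

    For the mean CT-estimated weight of 980 g this yields 735 g, close to the mean measured weight of 749 g.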

  4. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  5. Metrics for Analyzing Quantifiable Differentiation of Designs with Varying Integrity for Hardware Assurance

    Science.gov (United States)

    2017-03-01

    Keywords: Trojan; integrity; trust; quantify; hardware; assurance; verification; metrics; reference; quality; profile. I. INTRODUCTION A. The Rising... as a framework for benchmarking Trusted Part certifications. Previous work conducted in Trust Metric development has focused on measures at the... the lowest integrities. Based on the analysis, the DI metric shows measurable differentiation between all five Test Article Error Location Error

  6. Liver Immunology

    Science.gov (United States)

    Bogdanos, Dimitrios P.; Gao, Bin; Gershwin, M. Eric

    2014-01-01

    The liver is the largest organ in the body and is generally regarded by non-immunologists as having no lymphoid function. However, this is far from accurate. This review highlights the importance of the liver as a lymphoid organ. Firstly, we discuss experimental data surrounding the role of the liver as a lymphoid organ. The liver facilitates tolerance rather than immunoreactivity, which protects the host from antigenic overload of dietary components and drugs derived from the gut and is also instrumental to fetal immune tolerance. Loss of liver tolerance leads to autoaggressive phenomena which, if not controlled by regulatory lymphoid populations, may lead to the induction of autoimmune liver diseases. Liver-related lymphoid subpopulations also act as critical antigen-presenting cells. The study of the immunological properties of the liver and delineation of the microenvironment of the intrahepatic milieu in normal and diseased livers provides a platform to understand the hierarchy of a series of detrimental events which lead to immune-mediated destruction of the liver and the rejection of liver allografts. The majority of emphasis within this review will be on the normal mononuclear cell composition of the liver. However, within this context, we will discuss select, but not all, immune-mediated liver diseases and attempt to place these data in the context of human autoimmunity. PMID:23720323

  7. METRICS DEVELOPMENT FOR PATENTS.

    Science.gov (United States)

    Veiga, Daniela Francescato; Ferreira, Lydia Masako

    2015-01-01

    To develop a proposal for metrics for patents to be applied in assessing the postgraduate programs of Medicine III - Capes. From the reading and analysis of the 2013 area documents of all 48 areas of Capes, a proposal for metrics for patents was developed to be applied to Medicine III programs. Except for the areas Biotechnology, Food Science, Biological Sciences III, Physical Education, Engineering I, III and IV, and Interdisciplinary, most areas do not adopt a scoring system for patents. The proposal developed was based on the criteria of Biotechnology, with adaptations. In general, patent deposit, granting and licensing/production will be valued, in ascending order. Higher scores will also be assigned to patents registered abroad and whenever there is participation of students. This proposal can be applied to the item Intellectual Production of the evaluation form, in the subsection Technical Production/Patents. The percentages of 10% for academic programs and 40% for professional Master's programs should be maintained. A program will be scored as Very Good when it reaches 400 points or more; Good, between 200 and 399 points; Regular, between 71 and 199 points; Weak, up to 70 points; Insufficient, no points.

  8. Factors associated with the impossibility to obtain reliable liver stiffness measurements by means of Acoustic Radiation Force Impulse (ARFI) elastography—Analysis of a cohort of 1031 subjects

    Energy Technology Data Exchange (ETDEWEB)

    Bota, Simona, E-mail: bota_simona1982@yahoo.com; Sporea, Ioan, E-mail: isporea@umft.ro; Sirli, Roxana, E-mail: roxanasirli@gmail.com; Popescu, Alina, E-mail: alinamircea.popescu@gmail.com; Danila, Mirela, E-mail: mireladanila@gmail.com; Jurchis, Ana, E-mail: ana.jurchis@yahoo.com; Gradinaru-Tascau, Oana, E-mail: bluonmyown@yahoo.com

    2014-02-15

    Introduction: Acoustic Radiation Force Impulse (ARFI) elastography is a non-invasive technique for liver fibrosis assessment. Aim: To assess the feasibility of ARFI elastography in a large cohort of subjects and to identify factors associated with the impossibility to obtain reliable liver stiffness (LS) measurements by means of this technique. Methods: Our retrospective study included 1031 adult subjects with or without chronic liver disease. In each subject LS was assessed by means of ARFI elastography. ARFI measurement was defined as failed if no valid measurement was obtained after at least 10 shots, and as unreliable in the following situations: fewer than 10 valid shots; or a median value of 10 valid measurements with a success rate (SR) < 60% and/or an interquartile range (IQR) ≥ 30%. Results: Failure of LS measurements by means of ARFI was observed in 4 subjects (0.3%) and unreliable measurements in 66 subjects (6.4%), so reliable measurements were obtained in 961 subjects (93.3%). In univariate analysis, the following risk factors were associated with failed and unreliable measurements: age over 58 years (OR = 0.49; 95% CI 0.30–0.80, p = 0.005), male gender (OR = 0.58; 95% CI 0.34–0.94, p = 0.04), and BMI > 27.7 kg/m² (OR = 0.23, 95% CI 0.13–0.41, p < 0.0001). In multivariate analysis all the factors mentioned above were independently associated with the risk of failed and unreliable measurements. Conclusions: Reliable LS measurements by means of ARFI elastography were obtained in 93.3% of cases. Older age, higher BMI and male gender were associated with the risk of failed and unreliable measurements, but their influence is limited compared with Transient Elastography.
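
    The failed/unreliable/reliable classification above can be expressed directly as code. A sketch assuming the session log provides the list of valid stiffness values and the total number of shots attempted; the IQR is taken as a fraction of the median, as is usual for these criteria:

```python
from statistics import median

def classify_arfi_session(valid_values, shots_attempted):
    """Classify one ARFI session per the criteria above:
    'failed'     - no valid measurement at all;
    'unreliable' - fewer than 10 valid shots, success rate < 60%,
                   or IQR >= 30% of the median;
    'reliable'   - otherwise."""
    if not valid_values:
        return "failed"
    if len(valid_values) < 10:
        return "unreliable"
    vals = sorted(valid_values)
    n = len(vals)
    q1 = median(vals[: n // 2])          # lower-half median
    q3 = median(vals[(n + 1) // 2 :])    # upper-half median
    iqr_ratio = (q3 - q1) / median(vals)
    success_rate = len(valid_values) / shots_attempted
    if success_rate < 0.60 or iqr_ratio >= 0.30:
        return "unreliable"
    return "reliable"
```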

  9. Adipokines in Liver Cirrhosis.

    Science.gov (United States)

    Buechler, Christa; Haberl, Elisabeth M; Rein-Fischboeck, Lisa; Aslanidis, Charalampos

    2017-06-29

    Liver fibrosis can progress to cirrhosis, which is considered a serious disease. The Child-Pugh score and the model of end-stage liver disease score have been established to assess residual liver function in patients with liver cirrhosis. The development of portal hypertension contributes to ascites, variceal bleeding and further complications in these patients. A transjugular intrahepatic portosystemic shunt (TIPS) is used to lower portal pressure, which represents a major improvement in the treatment of patients. Adipokines are proteins released from adipose tissue and modulate hepatic fibrogenesis. These proteins affect various biological processes that are involved in liver function, including angiogenesis, vasodilation, inflammation and deposition of extracellular matrix proteins. The best studied adipokines are adiponectin and leptin. Adiponectin protects against hepatic inflammation and fibrogenesis, and leptin functions as a profibrogenic factor. These and other adipokines are supposed to modulate disease severity in patients with liver cirrhosis. Consequently, circulating levels of these proteins have been analyzed to identify associations with parameters of hepatic function, portal hypertension and its associated complications in patients with liver cirrhosis. This review article briefly addresses the role of adipokines in hepatitis and liver fibrosis. Here, studies having analyzed these proteins in systemic blood in cirrhotic patients are listed to identify adipokines that are comparably changed in the different cohorts of patients with liver cirrhosis. Some studies measured these proteins in systemic, hepatic and portal vein blood or after TIPS to specify the tissues contributing to circulating levels of these proteins and the effect of portal hypertension, respectively.

  10. A Metric for Heterotic Moduli

    Science.gov (United States)

    Candelas, Philip; de la Ossa, Xenia; McOrist, Jock

    2017-12-01

    Heterotic vacua of string theory are realised, at large radius, by a compact threefold with vanishing first Chern class together with a choice of stable holomorphic vector bundle. These form a wide class of potentially realistic four-dimensional vacua of string theory. Despite all their phenomenological promise, there is little understanding of the metric on the moduli space of these vacua. What is sought is the analogue of special geometry for these vacua. The metric on the moduli space is important in phenomenology as it normalises D-terms and Yukawa couplings. It is also of interest in mathematics, since it generalises the metric, first found by Kobayashi, on the space of gauge field connections, to a more general context. Here we construct this metric, correct to first order in α′, in two ways: first by postulating a metric that is invariant under background gauge transformations of the gauge field, and also by dimensionally reducing heterotic supergravity. These methods agree and the resulting metric is Kähler, as is required by supersymmetry. Checking the metric is Kähler is intricate and the anomaly cancellation equation for the H field plays an essential role. The Kähler potential nevertheless takes a remarkably simple form: it is the Kähler potential of special geometry with the Kähler form replaced by the α′-corrected hermitian form.

  11. Liver spots

    Science.gov (United States)

    Alternative names: skin changes - liver spots; senile or solar lentigines; skin spots - aging; age spots. Liver spots are changes in skin color that occur in older skin. The coloring may be due to aging, exposure to the sun ...

  12. Liver Diseases

    Science.gov (United States)

    Your liver is the largest organ inside your body. It helps your body digest food, store energy, and remove poisons. There are many kinds of liver diseases: Diseases caused by viruses, such as hepatitis ...

  13. Liver disease

    Science.gov (United States)

    The term "liver disease" applies to many conditions that stop the ...

  14. Implications of Metric Choice for Common Applications of Readmission Metrics

    OpenAIRE

    Davies, Sheryl; Saynina, Olga; Schultz, Ellen; McDonald, Kathryn M; Baker, Laurence C

    2013-01-01

    Objective. To quantify the differential impact on hospital performance of three readmission metrics: all-cause readmission (ACR), 3M Potential Preventable Readmission (PPR), and Centers for Medicare and Medicaid 30-day readmission (CMS).

  15. Role of the apparent diffusion coefficient measurement by diffusion weighted magnetic resonance imaging in the diagnosis of Fasciola hepatica in the liver

    International Nuclear Information System (INIS)

    Onur, M.R.; Cicekci, M.; Kayali, A.; Aygun, C.; Kocakoc, E.

    2011-01-01

    The aim of this study was to investigate the diagnostic role of the apparent diffusion coefficient (ADC) measurement in the diagnosis of focal parenchymal lesions and to understand the discriminating role of the ADC value for differentiating Fasciola lesions from other focal liver lesions. We measured ADC values of parenchymal lesions and liver parenchyma in 18 patients with Fasciola hepatica infestation at b = 100, 600, and 1000 s/mm² gradients. We further measured average ADC values of hepatic metastases (n=21), hepatocellular carcinomas (n=21), cholangiocarcinomas (n=7), hydatid cysts (n=12), and focal nodular hyperplasia (FNH) (n=12) and compared them with average ADC values for Fasciola hepatica. The differences between average ADC values of lesions (2.16±0.36 × 10⁻³ mm²/s) and parenchyma (1.64±0.2 × 10⁻³ mm²/s) at the three gradients were statistically significant (P<0.05). Mean ADC values of Fasciola hepatica lesions were significantly different from most of the other focal hepatic lesions, except FNH at all gradients and hydatid cyst at only the b = 100 gradient. ADC measurement may be a complementary method in the diagnosis of Fasciola hepatica, and it may be used to differentiate these lesions from other focal liver lesions. (author)
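
    For context, ADC values such as those compared above are conventionally derived from the mono-exponential diffusion model S_b = S_0 · exp(−b · ADC); this is standard DWI practice rather than a formula stated in the abstract:

```python
import math

def apparent_diffusion_coefficient(s0, sb, b_value):
    """ADC in mm^2/s from signal intensities without (s0) and with (sb)
    diffusion weighting, for a b-value in s/mm^2, assuming the
    mono-exponential model sb = s0 * exp(-b * ADC)."""
    return math.log(s0 / sb) / b_value
```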

  16. The effects of farm management practices on liver fluke prevalence and the current internal parasite control measures employed on Irish dairy farms.

    Science.gov (United States)

    Selemetas, Nikolaos; Phelan, Paul; O'Kiely, Padraig; de Waal, Theo

    2015-01-30

    Fasciolosis caused by Fasciola hepatica is responsible for major production losses in cattle farms. The objectives of this study were to assess the effect of farm management practices on liver fluke prevalence on Irish dairy farms and to document the current control measures against parasitic diseases. In total, 369 dairy farms throughout Ireland were sampled from October to December 2013, each providing a single bulk tank milk (BTM) sample for liver fluke antibody-detection ELISA testing and completing a questionnaire on their farm management. The analysis of samples showed that cows on 78% (n=288) of dairy farms had been exposed to liver fluke. There was no difference (P > 0.05) between positive and negative farms in (a) the grazing of dry cows together with replacement cows, (b) whether or not grazed grassland was mowed for conservation, (c) the type of drinking water provision system, (d) spreading of cattle manure on grassland or (e) grazing season length (GSL; mean=262.5 days). However, there were differences (P < 0.05) in management practices between Irish farms with dairy herds exposed or not exposed to liver fluke, which stressed the need for fine-scale mapping of disease patterns even at farm level to increase the accuracy of risk models. Also, comprehensive advice and professional support services to farmers on appropriate farm management practices are very important for an effective anthelmintic control strategy. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Design of a control system for stepper motors with micro-metric precision employed in the beam emittance measurement of the Linac4 at CERN

    CERN Document Server

    AUTHOR|(CDS)2207212; Dueñas Díaz, José Antonio

    A new linear accelerator (Linac4) is being designed to replace its predecessor (Linac2) at CERN. The new Linac4 will double the initial intensity, giving an injection energy of up to 160 MeV. It will be an essential component of the LHC (Large Hadron Collider). To assess the quality of the beam, monitoring systems are placed along the beam pipe, one of which is devoted to measuring its emittance: the so-called emittance scanner. Measuring the emittance is important since it is one of the two main parameters that limit overall LHC performance, the other being the beam energy. While the energy level of the beam can be modified during different phases at CERN, the beam emittance cannot; it is determined by the first source that produces the beam. The beam emittance directly influences the amount of particles colliding. For this purpose, the Linac4 emittance scanner will be placed on the very first step of the whole CERN accelerator complex right after the particles sour...

  18. Background metric in supergravity theories

    International Nuclear Information System (INIS)

    Yoneya, T.

    1978-01-01

    In supergravity theories, we investigate the conformal anomaly of the path-integral determinant and the problem of fermion zero modes in the presence of a nontrivial background metric. Except in SO(3)-invariant supergravity, there are nonvanishing conformal anomalies. As a consequence, amplitudes around the nontrivial background metric contain unpredictable arbitrariness. The fermion zero modes, which are explicitly constructed for the Euclidean Schwarzschild metric, are interpreted as an indication of the supersymmetric multiplet structure of a black hole. The degree of degeneracy of a black hole is 2^(4n) in SO(n) supergravity

  19. Generalized Painleve-Gullstrand metrics

    Energy Technology Data Exchange (ETDEWEB)

    Lin Chunyu [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China)], E-mail: l2891112@mail.ncku.edu.tw; Soo Chopin [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China)], E-mail: cpsoo@mail.ncku.edu.tw

    2009-02-02

    An obstruction to the implementation of spatially flat Painleve-Gullstrand (PG) slicings is demonstrated, and explicitly discussed for Reissner-Nordström and Schwarzschild-anti-de Sitter spacetimes. Generalizations of PG slicings which are not spatially flat but which remain regular at the horizons are introduced. These metrics can be obtained from standard spherically symmetric metrics by physical Lorentz boosts. With these generalized PG metrics, problematic contributions to the imaginary part of the action in the Parikh-Wilczek derivation of Hawking radiation due to the obstruction can be avoided.

  20. Daylight metrics and energy savings

    Energy Technology Data Exchange (ETDEWEB)

    Mardaljevic, John; Heschong, Lisa; Lee, Eleanor

    2009-12-31

    The drive towards sustainable, low-energy buildings has increased the need for simple, yet accurate methods to evaluate whether a daylit building meets minimum standards for energy and human comfort performance. Current metrics do not account for the temporal and spatial aspects of daylight, nor for occupants' comfort or interventions. This paper reviews the historical basis of current compliance methods for achieving daylit buildings, proposes a technical basis for developing better metrics, and provides two case-study examples to stimulate dialogue on how metrics can be applied in a practical, real-world context.

  1. Next-Generation Metrics: Responsible Metrics & Evaluation for Open Science

    Energy Technology Data Exchange (ETDEWEB)

    Wilsdon, J.; Bar-Ilan, J.; Peters, I.; Wouters, P.

    2016-07-01

    Metrics evoke a mixed reaction from the research community. A commitment to using data to inform decisions makes some enthusiastic about the prospect of granular, real-time analysis of research and its wider impacts. Yet we only have to look at the blunt use of metrics such as journal impact factors, h-indices and grant income targets to be reminded of the pitfalls. Some of the most precious qualities of academic culture resist simple quantification, and individual indicators often struggle to do justice to the richness and plurality of research. Too often, poorly designed evaluation criteria are “dominating minds, distorting behaviour and determining careers” (Lawrence, 2007). Metrics hold real power: they are constitutive of values, identities and livelihoods. How to exercise that power to more positive ends has been the focus of several recent and complementary initiatives, including the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto and The Metric Tide (a UK government review of the role of metrics in research management and assessment). Building on these initiatives, the European Commission, under its new Open Science Policy Platform, is now looking to develop a framework for responsible metrics for research management and evaluation, which can be incorporated into the successor framework to Horizon 2020. (Author)

  2. Metabolic liver function in humans measured by 2-(18)F-fluoro-2-deoxy-D-galactose PET/CT-reproducibility and clinical potential

    DEFF Research Database (Denmark)

    Bak-Fredslund, Kirstine P; Lykke Eriksen, Peter; Munk, Ole L

    2017-01-01

    Background: PET/CT with the radioactively labelled galactose analogue 2-18F-fluoro-2-deoxy-D-galactose (18F-FDGal) can be used to quantify the hepatic metabolic function and visualise regional metabolic heterogeneity. We determined the day-to-day variation in humans with and without liver disease....... Furthermore, we examined whether the standardised uptake value (SUV) of 18F-FDGal from static scans can substitute the hepatic systemic clearance of 18F-FDGal (Kmet, mL blood/min/mL liver tissue) quantified from dynamic scans as a measure of metabolic function. Four patients with cirrhosis and six healthy...... subjects underwent two 18F-FDGal PET/CT scans within a median interval of 15 days for determination of day-to-day variation. The correlation between Kmet and SUV was examined using scan data and measured arterial blood concentrations of 18F-FDGal (blood samples) from 14 subjects from previous studies...

  3. MELD score measured day 10 after orthotopic liver transplantation predicts death and re-transplantation within the first year

    DEFF Research Database (Denmark)

    Rostved, Andreas A; Lundgren, Jens D; Hillingsø, Jens

    2016-01-01

    -transplantation. MATERIAL AND METHODS: Retrospective cohort study on adults undergoing orthotopic deceased donor liver transplantation from 2004 to 2014. The MELD score was determined prior to transplantation and daily until 21 days after. The risk of mortality or re-transplantation within the first year was assessed...... day 1 the MELD score significantly diversified and was higher in the poor outcome group (MELD score quartile 4 versus quartile 1-3 at day 10: HR 5.1, 95% CI: 2.8-9.0). This association remained after adjustment for non-identical blood type, autoimmune liver disease and hepatocellular carcinoma...... (adjusted HR 5.3, 95% CI: 2.9-9.5 for MELD scores at day 10). The post-transplant MELD score was not associated with pre-transplant MELD score or the Eurotransplant donor risk index. CONCLUSION: Early determination of the MELD score as an indicator of early allograft dysfunction after liver transplantation...

  4. Agreement between manual relaxometry and semi-automated scanner-based multi-echo Dixon technique for measuring liver T2* in a pediatric and young adult population

    International Nuclear Information System (INIS)

    Serai, Suraj D.; Trout, Andrew T.; Dillman, Jonathan R.; Smith, Ethan A.

    2018-01-01

    Commercially available 3D multi-echo Dixon (mDixon) sequences provide parametric maps of liver T2*, obviating manual curve fitting that is often required with conventional gradient recalled echo (GRE)-based multi-echo relaxometry, potentially simplifying clinical work flow. The purpose of our study was to compare T2* values generated by a 3D mDixon sequence to values generated by GRE-based T2* relaxometry with manual curve fitting in a pediatric and young adult population. We reviewed clinical MRI exams performed at 1.5T for liver iron content estimation between February 2015 and June 2016 that included both mDixon and multi-echo GRE pulse sequences. We obtained mean T2* measurements based on each sequence by drawing regions of interest on each of four axial slices through the mid-liver. We compared mDixon-based and GRE-based T2* measurements using paired t-tests and assessed agreement using single-measure intra-class correlation coefficients and Bland-Altman difference plots. One hundred nine patients met inclusion criteria (site 1=82; site 2=27). Mean age was 12.4±5.8 years, and 42 subjects (39%) were female. There was no statistically significant difference in mean T2* values for the two sequences (pooled means: 11.7±11.0 [GRE] vs. 11.7±10.9 ms [mDixon]; P=0.93). There was excellent absolute agreement between sequences (intraclass correlation coefficient [ICC]=0.98 for patients at both sites, confidence interval [CI]: 0.97-0.98 with mean bias of 0.0 ms [-4.2 ms to +4.2 ms]). 3D mDixon is accurate for measuring liver T2* and can likely replace 2D GRE-based relaxometry. (orig.)
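
    The agreement statistics used in this record (mean bias with Bland-Altman limits of agreement) are straightforward to compute. Below is a minimal stdlib-only sketch; the paired T2* values (ms) are hypothetical, not the study's data:

```python
# Bland-Altman agreement analysis for two measurement methods, as used to
# compare GRE-based and mDixon-based T2*. Input values are hypothetical.
from statistics import mean, stdev

def bland_altman(a, b):
    """Return (mean bias, lower limit, upper limit) for paired measurements.
    Limits of agreement are bias +/- 1.96 * SD of the pairwise differences."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired T2* values (ms) from the two sequences
gre    = [3.1, 10.2, 25.4, 7.8, 14.9]
mdixon = [3.0, 10.6, 25.1, 8.1, 14.7]
bias, lo, hi = bland_altman(gre, mdixon)
print(f"bias = {bias:.2f} ms, limits of agreement [{lo:.2f}, {hi:.2f}] ms")
```

A bias near zero with narrow limits, as reported in the record (0.0 ms, -4.2 to +4.2 ms), indicates the two sequences can be used interchangeably.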

  5. Agreement between manual relaxometry and semi-automated scanner-based multi-echo Dixon technique for measuring liver T2* in a pediatric and young adult population

    Energy Technology Data Exchange (ETDEWEB)

    Serai, Suraj D.; Trout, Andrew T.; Dillman, Jonathan R. [Cincinnati Children's Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); Smith, Ethan A. [Cincinnati Children's Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); University of Michigan Health System, Section of Pediatric Radiology, C.S. Mott Children's Hospital, Department of Radiology, Ann Arbor, MI (United States)

    2018-01-15

    Commercially available 3D multi-echo Dixon (mDixon) sequences provide parametric maps of liver T2*, obviating manual curve fitting that is often required with conventional gradient recalled echo (GRE)-based multi-echo relaxometry, potentially simplifying clinical work flow. The purpose of our study was to compare T2* values generated by a 3D mDixon sequence to values generated by GRE-based T2* relaxometry with manual curve fitting in a pediatric and young adult population. We reviewed clinical MRI exams performed at 1.5T for liver iron content estimation between February 2015 and June 2016 that included both mDixon and multi-echo GRE pulse sequences. We obtained mean T2* measurements based on each sequence by drawing regions of interest on each of four axial slices through the mid-liver. We compared mDixon-based and GRE-based T2* measurements using paired t-tests and assessed agreement using single-measure intra-class correlation coefficients and Bland-Altman difference plots. One hundred nine patients met inclusion criteria (site 1=82; site 2=27). Mean age was 12.4±5.8 years, and 42 subjects (39%) were female. There was no statistically significant difference in mean T2* values for the two sequences (pooled means: 11.7±11.0 [GRE] vs. 11.7±10.9 ms [mDixon]; P=0.93). There was excellent absolute agreement between sequences (intraclass correlation coefficient [ICC]=0.98 for patients at both sites, confidence interval [CI]: 0.97-0.98 with mean bias of 0.0 ms [-4.2 ms to +4.2 ms]). 3D mDixon is accurate for measuring liver T2* and can likely replace 2D GRE-based relaxometry. (orig.)

  6. Developing a Security Metrics Scorecard for Healthcare Organizations.

    Science.gov (United States)

    Elrefaey, Heba; Borycki, Elizabeth; Kushniruk, Andrea

    2015-01-01

    In healthcare, information security is a key aspect of protecting a patient's privacy and ensuring systems availability to support patient care. Security managers need to measure the performance of security systems and this can be achieved by using evidence-based metrics. In this paper, we describe the development of an evidence-based security metrics scorecard specific to healthcare organizations. Study participants were asked to comment on the usability and usefulness of a prototype of a security metrics scorecard that was developed based on current research in the area of general security metrics. Study findings revealed that scorecards need to be customized for the healthcare setting in order for the security information to be useful and usable in healthcare organizations. The study findings resulted in the development of a security metrics scorecard that matches the healthcare security experts' information requirements.

  7. Measurement of Hydrologic Streamflow Metrics and Estimation of Streamflow with Lumped Parameter Models in a Managed Lake System, Sebago Lake, Maine

    Science.gov (United States)

    Reeve, A. S.; Martin, D.; Smith, S. M.

    2013-12-01

    Surface waters within the Sebago Lake watershed (southern Maine, USA) provide a variety of economically and intrinsically valuable recreational, commercial and environmental services. Different stakeholder groups for the 118 km2 Sebago Lake and surrounding watershed advocate for different lake and watershed management strategies, focusing on the operation of a dam at the outflow from Sebago Lake. While lake level in Sebago Lake has been monitored for over a century, limited data are available on the hydrologic processes that drive lake level and therefore impact how dam operation (and other changes to the region) will influence the hydroperiod of the lake. To fill this information gap several tasks were undertaken including: 1) deploying data logging pressure transducers to continuously monitor stream stage in nine tributaries, 2) measuring stream discharge at these sites to create rating curves for the nine tributaries, and using the resulting continuous discharge records to 3) calibrate lumped parameter computer models based on the GR4J model, modified to include a degree-day snowmelt routine. These lumped parameter models have been integrated with a simple lake water-balance model to estimate lake level and its response to different scenarios including dam management strategies. To date, about three years of stream stage data have been used to estimate stream discharge in all monitored tributaries (data collection is ongoing). Baseflow separation indices (BFI) for 2010 and 2011 using the USGS software PART and the Eckhart digital filter in WHAT range from 0.80-0.86 in the Crooked River and Richmill Outlet, followed by the Northwest (0.75) and Muddy (0.53-0.56) Rivers, with the lowest BFI measured in Sticky River (0.41-0.56). The BFI values indicate most streams have significant groundwater (or other storage) inputs. The lumped parameter watershed model has been calibrated for four streams (Nash-Sutcliffe = 0.4 to 0.9), with the other major tributaries containing
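
    The degree-day snowmelt routine mentioned above has a simple core: daily melt is proportional to the excess of air temperature over a threshold, capped by the snow water equivalent in storage. A minimal sketch (the melt factor and base temperature are illustrative values, not the ones calibrated in the study):

```python
# A minimal degree-day snowpack routine of the kind added to GR4J.
# C_M and T_BASE are assumed, illustrative parameter values.
C_M = 3.0     # melt factor, mm of melt per degree-day (assumed)
T_BASE = 0.0  # temperature threshold for melt, deg C (assumed)

def step_snowpack(swe, temp_c, precip_mm):
    """Advance the snowpack one day.
    Returns (new snow water equivalent, liquid water released to the model)."""
    if temp_c <= T_BASE:          # cold day: precipitation accumulates as snow
        return swe + precip_mm, 0.0
    melt = min(swe, C_M * (temp_c - T_BASE))  # degree-day melt, capped by storage
    return swe - melt, melt + precip_mm       # warm-day rain passes straight through

swe, released = 50.0, []
for temp, precip in [(-5, 10), (-1, 5), (2, 0), (6, 4), (8, 0)]:
    swe, out = step_snowpack(swe, temp, precip)
    released.append(out)
print(swe, released)  # remaining pack and daily liquid inputs to the runoff model
```

The daily releases feed the rainfall input of the lumped parameter model in place of raw precipitation on snow-affected days.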

  8. Liver regeneration and restoration of liver function after partial hepatectomy in patients with liver tumors

    International Nuclear Information System (INIS)

    Jansen, P.L.M.; Chamuleau, R.A.F.; Leeuwen, D.J. van; Schippor, H.G.; Busemann-Sokole, E.; Heyde, M.N. van der

    1990-01-01

    Liver regeneration and restoration of liver function were studied in six patients who underwent partial hepatectomy with removal of 30-70% of the liver. Liver volume and liver regeneration were studied by single photon computed tomography (SPECT), using 99mTc-colloid as tracer. The method was assessed in 11 patients by comparing the pre- and post-operative volume measurement with the volume of the resected liver mass. Liver function was determined by measuring the galactose elimination capacity and the caffeine clearance. After a postoperative follow-up period of 50 days, the liver had regenerated maximally to a volume of 75 ± 2% of the preoperative liver mass. Maximal restoration of liver function was achieved 120 days after operation and amounted to 75 ± 10% for the caffeine clearance and to 100 ± 25% for the galactose elimination capacity. This study shows that SPECT is a useful method for assessing liver regeneration in patients after partial hepatectomy. The study furthermore shows that caffeine clearance correlates well with total liver volume, whereas the galactose elimination capacity overestimates total liver volume after partial hepatectomy. 22 refs

  9. Acute Liver Failure

    Science.gov (United States)

    ... can cause acute liver failure. It is an industrial chemical found in refrigerants and solvents for waxes, varnishes ... measures when spraying insecticides, fungicides, paint and other toxic chemicals. Follow product instructions carefully. Watch what gets on ...

  10. Let's Make Metric Ice Cream

    Science.gov (United States)

    Zimmerman, Marianna

    1975-01-01

    Describes a classroom activity which involved sixth grade students in a learning situation including making ice cream, safety procedures in a science laboratory, calibrating a thermometer, using metric units of volume and mass. (EB)

  11. Experiential space is hardly metric

    Czech Academy of Sciences Publication Activity Database

    Šikl, Radovan; Šimeček, Michal; Lukavský, Jiří

    2008-01-01

    Roč. 2008, č. 37 (2008), s. 58-58 ISSN 0301-0066. [European Conference on Visual Perception. 24.08-28.08.2008, Utrecht] R&D Projects: GA ČR GA406/07/1676 Institutional research plan: CEZ:AV0Z70250504 Keywords : visual space perception * metric and non-metric perceptual judgments * ecological validity Subject RIV: AN - Psychology

  12. Phantom metrics with Killing spinors

    Directory of Open Access Journals (Sweden)

    W.A. Sabra

    2015-11-01

    Full Text Available We study metric solutions of Einstein–anti-Maxwell theory admitting Killing spinors. The analogue of the IWP metric which admits a space-like Killing vector is found and is expressed in terms of a complex function satisfying the wave equation in flat (2+1)-dimensional space-time. As examples, electric and magnetic Kasner spaces are constructed by allowing the solution to depend only on the time coordinate. Euclidean solutions are also presented.

  13. Metrics for border management systems.

    Energy Technology Data Exchange (ETDEWEB)

    Duggan, Ruth Ann

    2009-07-01

    There are as many unique and disparate manifestations of border systems as there are borders to protect. Border Security is a highly complex system analysis problem with global, regional, national, sector, and border element dimensions for land, water, and air domains. The complexity increases with the multiple, and sometimes conflicting, missions for regulating the flow of people and goods across borders, while securing them for national security. These systems include frontier border surveillance, immigration management and customs functions that must operate in a variety of weather, terrain, operational conditions, cultural constraints, and geopolitical contexts. As part of a Laboratory Directed Research and Development Project 08-684 (Year 1), the team developed a reference framework to decompose this complex system into international/regional, national, and border elements levels covering customs, immigration, and border policing functions. This generalized architecture is relevant to both domestic and international borders. As part of year two of this project (09-1204), the team determined relevant relative measures to better understand border management performance. This paper describes those relative metrics and how they can be used to improve border management systems.

  14. Metrical results on systems of small linear forms

    DEFF Research Database (Denmark)

    Hussain, M.; Kristensen, Simon

    In this paper the metric theory of Diophantine approximation associated with small linear forms is investigated. Khintchine-Groshev theorems are established, along with a Hausdorff measure generalization without the monotonicity assumption on the approximating function.

  15. MEASUREMENT OF CONTROLLED ATTENUATION PARAMETER: A SURROGATE MARKER OF HEPATIC STEATOSIS IN PATIENTS OF NONALCOHOLIC FATTY LIVER DISEASE ON LIFESTYLE MODIFICATION - A PROSPECTIVE FOLLOW-UP STUDY

    Directory of Open Access Journals (Sweden)

    Jayanta PAUL

    Full Text Available ABSTRACT BACKGROUND: Liver biopsy is the gold standard method for hepatic steatosis assessment. However, liver biopsy is an invasive and painful procedure and can cause severe complications, so it cannot be used frequently for follow-up of patients. Non-invasive assessment of steatosis and fibrosis is of growing relevance in non-alcoholic fatty liver disease (NAFLD). To evaluate hepatic steatosis, transient elastography with controlled attenuation parameter (CAP) measurement is nowadays an option. OBJECTIVE: The aim of this study is to evaluate the role of measurement of the controlled attenuation parameter, a surrogate marker of hepatic steatosis, in patients with nonalcoholic fatty liver disease on lifestyle modification. METHODS: In this study, 37 participants were initially included and were followed up after 6 months with transient elastography, blood biochemical tests and anthropometric measurements. The results were analyzed by multivariate linear regression analysis and paired-samples t-test (dependent t-test) with 95% confidence interval. Correlation was calculated by Pearson correlation coefficients. RESULTS: The mean CAP value for assessing hepatic steatosis at the first consultation (278.57±49.13 dB/m) was significantly improved (P=0.03) after 6 months of lifestyle modification (252.91±62.02 dB/m). Only fasting blood sugar (P=0.008), weight (P=0.000) and body mass index (BMI) (P=0.000) showed significant positive correlation with CAP. Only BMI (P=0.034) and weight (P=0.035) were independent predictors of the CAP value in NAFLD patients. CONCLUSION: Lifestyle modification improves hepatic steatosis, and CAP can be used to detect the improvement of hepatic steatosis during follow-up in patients with NAFLD on lifestyle modification. There is no relation between CAP and Fibroscan score in NAFLD patients. Only BMI and weight can predict the CAP value independently.

  16. MEASUREMENT OF CONTROLLED ATTENUATION PARAMETER: A SURROGATE MARKER OF HEPATIC STEATOSIS IN PATIENTS OF NONALCOHOLIC FATTY LIVER DISEASE ON LIFESTYLE MODIFICATION - A PROSPECTIVE FOLLOW-UP STUDY.

    Science.gov (United States)

    Paul, Jayanta; Venugopal, Raj Vigna; Peter, Lorance; Shetty, Kula Naresh Kumar; Shetti, Mohit P

    2018-01-01

    Liver biopsy is the gold standard method for hepatic steatosis assessment. However, liver biopsy is an invasive and painful procedure and can cause severe complications, so it cannot be used frequently for follow-up of patients. Non-invasive assessment of steatosis and fibrosis is of growing relevance in non-alcoholic fatty liver disease (NAFLD). To evaluate hepatic steatosis, transient elastography with controlled attenuation parameter (CAP) measurement is nowadays an option. The aim of this study is to evaluate the role of measurement of the controlled attenuation parameter, a surrogate marker of hepatic steatosis, in patients with nonalcoholic fatty liver disease on lifestyle modification. In this study, 37 participants were initially included and were followed up after 6 months with transient elastography, blood biochemical tests and anthropometric measurements. The results were analyzed by multivariate linear regression analysis and paired-samples t-test (dependent t-test) with 95% confidence interval. Correlation was calculated by Pearson correlation coefficients. The mean CAP value for assessing hepatic steatosis at the first consultation (278.57±49.13 dB/m) was significantly improved (P=0.03) after 6 months of lifestyle modification (252.91±62.02 dB/m). Only fasting blood sugar (P=0.008), weight (P=0.000) and body mass index (BMI) (P=0.000) showed significant positive correlation with CAP. Only BMI (P=0.034) and weight (P=0.035) were independent predictors of the CAP value in NAFLD patients. Lifestyle modification improves hepatic steatosis, and CAP can be used to detect the improvement of hepatic steatosis during follow-up in patients with NAFLD on lifestyle modification. There is no relation between CAP and Fibroscan score in NAFLD patients. Only BMI and weight can predict the CAP value independently.
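
    The paired (dependent) t-test used to compare the two CAP measurements per patient can be sketched with the standard library alone. The CAP values below are hypothetical, not the study's data:

```python
# Paired t-test on before/after measurements, stdlib only.
# The p-value requires the t-distribution CDF, so this sketch reports
# only the mean difference, t statistic and degrees of freedom.
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """Return (mean difference, t statistic, degrees of freedom)."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return mean(diffs), t, n - 1

cap_before = [280, 310, 265, 295, 300, 270]   # hypothetical CAP, dB/m
cap_after  = [255, 290, 260, 270, 280, 250]
d, t, df = paired_t(cap_before, cap_after)
print(f"mean drop = {d:.1f} dB/m, t = {t:.2f}, df = {df}")
```

The t statistic is then compared against the t-distribution with n-1 degrees of freedom to obtain the p-value reported in the record.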

  17. Measurements of T1 and T2 relaxation times of colon cancer metastases in rat liver at 7 T

    NARCIS (Netherlands)

    Gambarota, G.; Veltien, A.; van Laarhoven, H.; Philippens, M.; Jonker, A.; Mook, O. R.; Frederiks, W. M.; Heerschap, A.

    2004-01-01

    The purpose of this study was to investigate the magnetic resonance imaging (MRI) characteristics of colon cancer metastases in rat liver at 7 T. A dedicated RF microstrip coil of novel design was built in order to increase the signal-to-noise ratio and, in combination with respiratory triggering,

  18. SU-D-204-05: Quantitative Comparison of a High Resolution Micro-Angiographic Fluoroscopic (MAF) Detector with a Standard Flat Panel Detector (FPD) Using the New Metric of Generalized Measured Relative Object Detectability (GM-ROD)

    Energy Technology Data Exchange (ETDEWEB)

    Russ, M; Ionita, C; Bednarek, D; Rudin, S [Toshiba Stroke and Vascular Research Center, University at Buffalo (SUNY), Buffalo, NY (United States)

    2015-06-15

    Purpose: In endovascular image-guided neuro-interventions, visualization of fine detail is paramount. For example, the ability of the interventionist to visualize the stent struts depends heavily on the x-ray imaging detector performance. Methods: The relative performance of the high-resolution MAF-CMOS detector (pixel size 75 µm, Nyquist frequency 6.6 cycles/mm) and a standard flat panel detector (FPD; pixel size 194 µm, Nyquist frequency 2.5 cycles/mm) in imaging a neuro stent was examined using the Generalized Measured Relative Object Detectability (GM-ROD) metric. Low-quantum-noise images of a deployed stent were obtained by averaging 95 frames acquired by both detectors without changing other exposure or geometric parameters. The square of the Fourier transform of each image is taken and divided by the generalized normalized noise power spectrum to give an effective measured task-specific signal-to-noise ratio. This expression is then integrated from 0 to each detector's Nyquist frequency, and the GM-ROD value is determined by taking the ratio of the integral for the MAF-CMOS to that for the FPD. The lower bound of integration can be varied to emphasize high frequencies in the detector comparisons. Results: The MAF-CMOS detector exhibits vastly superior performance over the FPD when integrating over all frequencies, yielding a GM-ROD value of 63.1. The lower bound of integration was stepped up in increments of 0.5 cycles/mm for higher-frequency comparisons. As the lower bound increased, the GM-ROD value increased, reflecting the superior performance of the MAF-CMOS in the high-frequency regime. Conclusion: GM-ROD is a versatile metric that can provide quantitative detector- and task-dependent comparisons that can be used as a basis for detector selection. Supported by NIH Grant 2R01EB002873 and an equipment grant from Toshiba Medical Systems Corporation.
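
    In symbols (notation assumed for illustration, not taken from the authors' paper; FT{obj} denotes the Fourier transform of the averaged object image and GNNPS the generalized normalized noise power spectrum), the construction described above is:

```latex
\mathrm{GM\mbox{-}ROD}(f_0) =
  \frac{\displaystyle\int_{f_0}^{f_N^{\mathrm{MAF}}}
        \frac{\left|\mathrm{FT}\{\mathrm{obj}\}(f)\right|^{2}}{\mathrm{GNNPS}(f)}\,df}
       {\displaystyle\int_{f_0}^{f_N^{\mathrm{FPD}}}
        \frac{\left|\mathrm{FT}\{\mathrm{obj}\}(f)\right|^{2}}{\mathrm{GNNPS}(f)}\,df}
```

    with f_0 = 0 by default (stepped up in 0.5 cycles/mm increments to weight high frequencies) and f_N the respective Nyquist frequencies (6.6 cycles/mm for the MAF-CMOS, 2.5 cycles/mm for the FPD).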

  19. Empirical Information Metrics for Prediction Power and Experiment Planning

    Directory of Open Access Journals (Sweden)

    Christopher Lee

    2011-01-01

    Full Text Available In principle, information theory could provide useful metrics for statistical inference. In practice this is impeded by divergent assumptions: information theory assumes the joint distribution of variables of interest is known, whereas in statistical inference it is hidden and is the goal of inference. To integrate these approaches we note a common theme they share, namely the measurement of prediction power. We generalize this concept as an information metric, subject to several requirements: calculation of the metric must be objective or model-free, unbiased, convergent, probabilistically bounded, and low in computational complexity. Unfortunately, widely used model selection metrics such as Maximum Likelihood, the Akaike Information Criterion and the Bayesian Information Criterion do not necessarily meet all these requirements. We define four distinct empirical information metrics measured via sampling, with explicit Law of Large Numbers convergence guarantees, which meet these requirements: Ie, the empirical information, a measure of average prediction power; Ib, the overfitting bias information, which measures selection bias in the modeling procedure; Ip, the potential information, which measures the total remaining information in the observations not yet discovered by the model; and Im, the model information, which measures the model's extrapolation prediction power. Finally, we show that Ip + Ie, Ip + Im, and Ie - Im are fixed constants for a given observed dataset (i.e. prediction target), independent of the model, and thus represent a fundamental subdivision of the total information contained in the observations. We discuss the application of these metrics to modeling and experiment planning.

  20. IDENTIFYING MARKETING EFFECTIVENESS METRICS (Case study: East Azerbaijan`s industrial units)

    OpenAIRE

    Faridyahyaie, Reza; Faryabi, Mohammad; Bodaghi Khajeh Noubar, Hossein

    2012-01-01

    The paper attempts to identify marketing effectiveness metrics in industrial units. The metrics investigated in this study are completely applicable and comprehensive, and consequently they can evaluate marketing effectiveness in various industries. The metrics studied include: Market Share, Profitability, Sales Growth, Customer Numbers, Customer Satisfaction and Customer Loyalty. The findings indicate that these six metrics are impressive when measuring marketing effectiveness. Data was ge...

  1. Measurement of bone mineral density using DEXA and biochemical markers of bone turnover in 5-year survivors after orthotopic liver transplantation

    International Nuclear Information System (INIS)

    Xu Hao; Eichstaedt, H.

    1998-01-01

    Purpose: To observe bone loss and bone metabolism status in 5-year survivors after orthotopic liver transplantation (OLT). Methods: Measurement of bone mineral density (BMD) of the lumbar spine (L2∼L4) and femoral neck using dual energy X-ray absorptiometry (DEXA) and analysis of biochemical markers of bone turnover, such as osteocalcin (OSC), bone alkaline phosphatase (BAP), carboxy-terminal propeptide of type I procollagen (PICP), carboxy-terminal cross-linked telopeptide of type I collagen (ICTP), PTH and 25-hydroxy-vitamin D (25-OH-D). These markers were measured in 31 5-year survivors after OLT, 34 patients with chronic liver failure (CLF) before OLT and 38 normal subjects. Results: The age-matched Z-score of BMD (Z-score) at L2∼L4 was significantly higher in 5-year survivors than that in patients with CLF before OLT. The incidence of osteoporosis (Z-score<-2.0) in 5-year survivors was significantly lower than that in patients with CLF before OLT. Although serum concentrations of bone formation and bone resorption markers in 5-year survivors were higher than those of normal subjects, compared with patients with CLF before OLT serum OSC was increased, serum ICTP and BAP were reduced, and serum PICP was unchanged. Serum PTH and 25-OH-D levels were normal. Conclusions: In 5-year survivors following liver transplantation there was a reduction in bone loss and incidence of osteoporosis and an improvement of bone metabolism
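
    The age-matched Z-score that drives the osteoporosis classification here is simply the BMD expressed in units of the reference-population standard deviation. A tiny sketch with placeholder reference values (not real normative data):

```python
# Age-matched BMD Z-score: patient BMD relative to an age-matched
# reference mean, in units of the reference SD. Reference values
# below are placeholders, not real normative data.
def z_score(bmd, ref_mean, ref_sd):
    return (bmd - ref_mean) / ref_sd

z = z_score(bmd=0.82, ref_mean=1.00, ref_sd=0.12)  # hypothetical g/cm^2
print(round(z, 2), "osteoporosis" if z < -2.0 else "not osteoporotic by Z-score")
```

The study's threshold of Z < -2.0 means the patient's BMD lies more than two reference SDs below the age-matched mean.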

  2. A new technique for measuring protein turnover in the gut, liver and kidneys of lean and obese mice with [3H] glutamic acid

    International Nuclear Information System (INIS)

    Miller, B.G.; Grimble, R.F.; Taylor, T.G.

    1978-01-01

    Measurements have been made of the incorporation of an intraperitoneal injection of [3H]glutamate into the protein of the gut, liver and kidney of lean and obese siblings of the genetically obese mouse. Recycling of the 3H was minimized by using glutamate labelled at the C-2 position. Loss of label from the amino acid pool by transamination and deamination was rapid, with a half-life of 4 h. In tissue protein the amino acid showing the highest 3H radioactivity was glutamate. The half-lives for protein synthesis and catabolism were calculated from the decay curves of both specific and total radioactivity of [3H]glutamate in tissue protein. No significant differences were found between kidney, liver and gut in lean and obese mice. (author)
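
    Half-lives like the 4 h pool half-life reported here are typically obtained by fitting an exponential to the decay curve. A stdlib-only sketch using log-linear least squares on synthetic data (not the study's measurements):

```python
# Estimate a half-life from a decay curve: fit ln(A) = ln(A0) - k*t by
# ordinary least squares, then t_half = ln(2) / k. Data are synthetic.
from math import exp, log

def half_life(times, activities):
    """Log-linear least-squares fit; returns ln(2)/k."""
    ys = [log(a) for a in activities]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys)) \
            / sum((t - tbar) ** 2 for t in times)
    return log(2) / -slope   # slope is -k for a decaying signal

# Synthetic noiseless data generated with a 4 h half-life
ts = [0, 2, 4, 6, 8]                               # hours
acts = [100 * exp(-log(2) / 4 * t) for t in ts]    # activity (arbitrary units)
print(round(half_life(ts, acts), 3))
```

With noisy counts, the same fit returns the least-squares estimate of the half-life rather than the exact generating value.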

  3. Scalar-metric and scalar-metric-torsion gravitational theories

    International Nuclear Information System (INIS)

    Aldersley, S.J.

    1977-01-01

    The techniques of dimensional analysis and of the theory of tensorial concomitants are employed to study field equations in gravitational theories which incorporate scalar fields of the Brans-Dicke type. Within the context of scalar-metric gravitational theories, a uniqueness theorem for the geometric (or gravitational) part of the field equations is proven and a Lagrangian is determined which is uniquely specified by dimensional analysis. Within the context of scalar-metric-torsion gravitational theories a uniqueness theorem for field Lagrangians is presented and the corresponding Euler-Lagrange equations are given. Finally, an example of a scalar-metric-torsion theory is presented which is similar in many respects to the Brans-Dicke theory and the Einstein-Cartan theory

  4. Modelling Metrics for Mine Counter Measure Operations

    Science.gov (United States)

    2014-08-01

    © Her Majesty the Queen in Right of Canada, as represented by the Minister of National Defence, 2014. ... A random search model derived by Koopman is widely used, yet it assumes no angular dependence (Ref [10]). In a series of publications considering tactics... Node Placement in Sensor Localization by Optimization of Subspace Principal Angles, in Proceedings of the IEEE International Conference on Acoustics...

  5. Measuring privacy compliance using fitness metrics

    NARCIS (Netherlands)

    Banescu, S.; Petkovic, M.; Zannone, N.; Barros, A.; Gal, A.; Kindler, E.

    2012-01-01

    Nowadays, repurposing of personal data is a major privacy issue. Detection of data repurposing requires a posteriori mechanisms able to determine how data have been processed. However, current a posteriori solutions for privacy compliance are often manual, leaving infringements undetected.

  6. The AGIS metric and time of test: A replication study

    OpenAIRE

    Counsell, S; Swift, S; Tucker, A

    2016-01-01

    Visual Field (VF) tests and corresponding data are commonly used in clinical practices to manage glaucoma. The standard metric used to measure glaucoma severity is the Advanced Glaucoma Intervention Studies (AGIS) metric. We know that time of day when VF tests are applied can influence a patient’s AGIS metric value; a previous study showed that this was the case for a data set of 160 patients. In this paper, we replicate that study using data from 2468 patients obtained from Moorfields Eye Ho...

  7. Comparison of routing metrics for wireless mesh networks

    CSIR Research Space (South Africa)

    Nxumalo, SL

    2011-09-01

    Full Text Available in each and every relay node so as to find the next hop for the packet. A routing metric is simply a measure, used by a routing protocol, for selecting the best path. Figure 2 shows the relationship between a routing protocol and the routing... on its QoS-awareness level. The routing metrics that considered QoS the most were selected from each group. This section discusses the four routing metrics that were compared in this paper, which are: hop count (HOP), expected transmission count (ETX...
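
    Of the metrics named, ETX has a compact standard definition: the expected number of transmissions (including retransmissions) needed to deliver a packet over a link, estimated as 1/(d_f × d_r) from the measured forward and reverse delivery ratios, and summed over the links of a path. A small sketch with illustrative delivery ratios:

```python
# Expected Transmission Count (ETX) routing metric.
# Link ETX = 1 / (d_f * d_r); path ETX is the sum over its links.
def link_etx(d_f, d_r):
    """d_f, d_r: forward and reverse delivery ratios in (0, 1]."""
    return 1.0 / (d_f * d_r)

def path_etx(links):
    return sum(link_etx(df, dr) for df, dr in links)

# Two candidate paths with illustrative delivery ratios:
lossy_short = [(0.5, 0.5)]                 # 1 lossy hop
clean_long  = [(0.9, 1.0), (1.0, 0.9)]     # 2 cleaner hops
print(path_etx(lossy_short), path_etx(clean_long))
```

Unlike hop count, ETX prefers the two-hop path here because its expected total number of transmissions is lower, which is exactly the behavior the comparison in the record examines.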

  8. Residential Energy Performance Metrics

    Directory of Open Access Journals (Sweden)

    Christopher Wright

    2010-06-01

    Full Text Available Residential energy monitoring is an emerging field that is currently drawing significant attention. This paper describes current efforts to monitor and compare the performance of three solar-powered homes built at Missouri University of Science and Technology. The homes are outfitted with an array of sensors and a data logger system to measure and record electricity production, system energy use, internal home temperature and humidity, hot water production, and the exterior ambient conditions the houses are experiencing. Data are being collected to measure the performance of the houses, compare it with energy modeling programs, design and develop cost-effective sensor systems for energy monitoring, and produce a cost-effective home control system.

  9. Symmetries of Taub-NUT dual metrics

    International Nuclear Information System (INIS)

    Baleanu, D.; Codoban, S.

    1998-01-01

    Recently geometric duality was analyzed for a metric which admits Killing tensors. An interesting example arises when the manifold has Killing-Yano tensors. The symmetries of the dual metrics in the case of Taub-NUT metric are investigated. Generic and non-generic symmetries of dual Taub-NUT metric are analyzed

  10. A Kerr-NUT metric

    International Nuclear Information System (INIS)

    Vaidya, P.C.; Patel, L.K.; Bhatt, P.V.

    1976-01-01

    Using Galilean time and retarded distance as coordinates, the usual Kerr metric is expressed in a form similar to the Newman-Unti-Tamburino (NUT) metric. The combined Kerr-NUT metric is then investigated. In addition to the Kerr and NUT solutions of Einstein's equations, three other types of solutions are derived. These are (i) the radiating Kerr solution, (ii) the radiating NUT solution satisfying R_ik = σ ξ_i ξ_k, ξ_i ξ^i = 0, and (iii) the associated Kerr solution satisfying R_ik = 0. Solution (i) is distinct from and simpler than the one reported earlier by Vaidya and Patel (Phys. Rev.; D7:3590 (1973)). Solutions (ii) and (iii) gave line elements which have the axis of symmetry as a singular line. (author)

  11. The uniqueness of the Fisher metric as information metric

    Czech Academy of Sciences Publication Activity Database

    Le, Hong-Van

    2017-01-01

    Roč. 69, č. 4 (2017), s. 879-896 ISSN 0020-3157 Institutional support: RVO:67985840 Keywords : Chentsov’s theorem * mixed topology * monotonicity of the Fisher metric Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.049, year: 2016 https://link.springer.com/article/10.1007%2Fs10463-016-0562-0

  12. Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.

    Science.gov (United States)

    Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen

    2017-06-01

    The article proposes a set of metrics for evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, in reference to whether the evaluation employs the raw measurements of patient-performed motions, or whether the evaluation is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity in human-performed therapy assessment, and it can increase the adherence to prescribed therapy plans, and reduce healthcare costs.
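Two of the model-less metrics named above can be sketched in a few lines. The sequence shapes, helper names, and toy trajectories below are assumptions for illustration, not the article's implementation:

```python
import numpy as np

def rms_distance(patient, reference):
    """Root-mean-square distance between two equal-length motion
    sequences (frames x joint coordinates)."""
    return float(np.sqrt(np.mean((np.asarray(patient) - np.asarray(reference)) ** 2)))

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p||q) between two discrete distributions,
    e.g. normalized histograms of joint positions; eps guards against log(0)."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

reference = np.zeros((100, 3))          # idealized exercise trajectory
patient = reference + 0.1               # constant 0.1 offset in every coordinate
rms = rms_distance(patient, reference)  # ≈ 0.1
```

RMS distance compares raw trajectories frame by frame, while KL divergence compares distributions derived from them, which is why the two sit in different branches of the taxonomy.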

  13. Relevance of motion-related assessment metrics in laparoscopic surgery.

    Science.gov (United States)

    Oropesa, Ignacio; Chmarra, Magdalena K; Sánchez-González, Patricia; Lamata, Pablo; Rodrigues, Sharon P; Enciso, Silvia; Sánchez-Margallo, Francisco M; Jansen, Frank-Willem; Dankelman, Jenny; Gómez, Enrique J

    2013-06-01

    Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation processes of basic psychomotor laparoscopic skills and their correlation with the different abilities they seek to measure. A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, 3 novel tasks for surgical assessment were designed. Face and construct validation was performed, with a focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents, and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. Time, path length, and depth showed construct validity for all 3 tasks. Motion smoothness and idle time also showed validity for tasks involving bimanual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed a high internal consistency, proving them to be the most task-independent of all the metrics analyzed. Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight on the relevance of the results shown in this study.

  14. Thermodynamic metrics and optimal paths.

    Science.gov (United States)

    Sivak, David A; Crooks, Gavin E

    2012-05-11

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.

  15. Invariant metrics for Hamiltonian systems

    International Nuclear Information System (INIS)

    Rangarajan, G.; Dragt, A.J.; Neri, F.

    1991-05-01

    In this paper, invariant metrics are constructed for Hamiltonian systems. These metrics give rise to norms on the space of homogeneous polynomials of phase-space variables. For an accelerator lattice described by a Hamiltonian, these norms characterize the nonlinear content of the lattice. Therefore, the performance of the lattice can be improved by minimizing the norm as a function of parameters describing the beam-line elements in the lattice. A four-fold increase in the dynamic aperture of a model FODO cell is obtained using this procedure. 7 refs

  16. Generalization of Vaidya's radiation metric

    Energy Technology Data Exchange (ETDEWEB)

    Gleiser, R J; Kozameh, C N [Universidad Nacional de Cordoba (Argentina). Instituto de Matematica, Astronomia y Fisica

    1981-11-01

    In this paper it is shown that if Vaidya's radiation metric is considered from the point of view of kinetic theory in general relativity, the corresponding phase space distribution function can be generalized in a particular way. The new family of spherically symmetric radiation metrics obtained contains Vaidya's as a limiting situation. The Einstein field equations are solved in a "comoving" coordinate system. Two arbitrary functions of a single variable are introduced in the process of solving these equations. Particular examples considered are a stationary solution, a nonvacuum solution depending on a single parameter, and several limiting situations.

  17. Image characterization metrics for muon tomography

    Science.gov (United States)

    Luo, Weidong; Lehovich, Andre; Anashkin, Edward; Bai, Chuanyong; Kindem, Joel; Sossong, Michael; Steiger, Matt

    2014-05-01

    Muon tomography uses naturally occurring cosmic rays to detect nuclear threats in containers. Currently there are no systematic image characterization metrics for muon tomography. We propose a set of image characterization methods to quantify the imaging performance of muon tomography. These methods include tests of spatial resolution, uniformity, contrast, signal-to-noise ratio (SNR) and vertical smearing. Simulated phantom data and analysis methods were developed to evaluate metric applicability. Spatial resolution was determined as the FWHM of the point spread functions in the X, Y and Z axes for 2.5 cm tungsten cubes. Uniformity was measured by drawing a volume of interest (VOI) within a large water phantom and defined as the standard deviation of voxel values divided by the mean voxel value. Contrast was defined as the peak signals of a set of tungsten cubes divided by the mean voxel value of the water background. SNR was defined as the peak signals of cubes divided by the standard deviation (noise) of the water background. Vertical smearing, i.e. vertical thickness blurring along the zenith axis for a set of 2 cm thick tungsten plates, was defined as the FWHM of the vertical spread function for the plate. These image metrics provided a useful tool to quantify the basic imaging properties for muon tomography.
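The uniformity, contrast, and SNR definitions above reduce to a few array operations. The sketch below applies them to a simulated voxel volume; all numbers are invented for illustration, not from the study's phantoms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated water-background VOI and peak signals of tungsten cubes (invented)
water_voi = rng.normal(loc=100.0, scale=5.0, size=(20, 20, 20))
cube_peaks = np.array([900.0, 870.0, 910.0])

uniformity = water_voi.std() / water_voi.mean()   # std / mean within the VOI
contrast = cube_peaks / water_voi.mean()          # peak signal / background mean
snr = cube_peaks / water_voi.std()                # peak signal / background noise
```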

  18. Remarks on G-Metric Spaces

    Directory of Open Access Journals (Sweden)

    Bessem Samet

    2013-01-01

    Full Text Available In 2005, Mustafa and Sims (2006) introduced and studied a new class of generalized metric spaces, which are called G-metric spaces, as a generalization of metric spaces. We establish some useful propositions to show that many fixed point theorems on (nonsymmetric) G-metric spaces given recently by many authors follow directly from well-known theorems on metric spaces. Our technique can be easily extended to other results, as shown in an application.

  19. DLA Energy Biofuel Feedstock Metrics Study

    Science.gov (United States)

    2012-12-11

    moderately/highly invasive; Metric 2: genetically modified organism (GMO) hazard, Yes/No and hazard category; Metric 3: species hybridization... Stage #4: biofuel distribution; Stage #5: biofuel use. Metric 1: state invasiveness ranking: Yes, Minimal, Minimal, No, No; Metric 2: GMO hazard: Yes... may utilize GMO microbial or microalgae species across the applicable biofuel life cycles (stages 1–3). The following consequence Metrics 4–6 then

  20. State of the art metrics for aspect oriented programming

    Science.gov (United States)

    Ghareb, Mazen Ismaeel; Allen, Gary

    2018-04-01

    The quality evaluation of software, e.g., defect measurement, gains significance with the higher use of software applications. Metric measurements are considered the primary indicator of defect prediction and software maintenance in various empirical studies of software products. However, there is no agreement on which metrics are compelling quality indicators for novel development approaches such as Aspect Oriented Programming (AOP). AOP intends to enhance programming quality by providing new and novel constructs for the development of systems, for example, point cuts, advice and inter-type relationships. Hence, it is not evident whether quality indicators for AOP can be derived from direct expansions of traditional OO measurements. On the other hand, investigations of AOP regularly depend on established coupling measurements. Notwithstanding the late adoption of AOP in empirical studies, coupling measurements have been adopted as useful markers of fault proneness in this context. In this paper we investigate the state of the art metrics for the measurement of Aspect Oriented systems development.

  1. Liver Disease

    Science.gov (United States)

    ... and ridding your body of toxic substances. Liver disease can be inherited (genetic) or caused by a variety of factors that damage the ... that you can't stay still. Causes Liver disease has many ... or semen, contaminated food or water, or close contact with a person who is ...

  2. Liver scintigraphy

    International Nuclear Information System (INIS)

    Tateno, Yukio

    1996-01-01

    Liver scintigraphy can be classified into 3 major categories according to the properties of the radiopharmaceuticals used, i.e., methods using radiopharmaceuticals which are (1) incorporated by hepatocytes, (2) taken up by reticuloendothelial cells, and (3) distributed in the blood pool of the liver. Of these three categories, the liver scintigraphy of the present research falls into category 2. Radiopharmaceuticals which are taken up by endothelial cells include 198Au colloids and 99mTc-labelled colloids. Liver scintigraphy takes advantage of the property by which colloidal microparticles are phagocytosed by Kupffer cells, and reflects the distribution of endothelial cells and the intensity of their phagocytic capacity. This examination is indicated in the following situations: (i) when you suspect a localized intrahepatic lesion (tumour, abscess, cyst, etc.), (ii) when you want to follow the course of therapy of a localized lesion, (iii) when you suspect liver cirrhosis, (iv) when you want to know the severity of liver cirrhosis or hepatitis, (v) when there is hepatomegaly and you want to determine the morphology of the liver, (vi) differential diagnosis of upper abdominal masses, and (vii) when there are abnormalities of the right diaphragm and you want to know their relation to the liver

  3. Liver regeneration

    NARCIS (Netherlands)

    Chamuleau, R. A.; Bosman, D. K.

    1988-01-01

    Despite great advances in analysing hemodynamic, morphological and biochemical changes during the process of liver regeneration, the exact (patho)physiological mechanism is still unknown. A short survey of literature is given of the kinetics of liver regeneration and the significance of different

  4. Relationship between hemodynamic changes of portal vein and hepatic artery measured by color Doppler ultrasound and FibroScan value in patients with liver cirrhosis

    Directory of Open Access Journals (Sweden)

    CHENG Xiaofei

    2014-11-01

    Full Text Available Objective: To explore the relationship between hemodynamic changes of the portal vein and hepatic artery measured by color Doppler ultrasound and FibroScan value in patients with liver cirrhosis. Methods: A total of 192 patients with hepatitis B cirrhosis who were admitted to our hospital from March 2010 to December 2013, as well as 100 healthy persons, were recruited. The mean portal vein blood flow velocity (PVVmean), hepatic artery pulsatility index (HAPI), and hepatic artery resistance index (HARI) were measured by color Doppler ultrasound. FibroScan was also carried out. All data were statistically analyzed using SPSS 13.0. Continuous data were expressed as mean±SD and compared between groups by t-test. Results: The HAPI, HARI, and FibroScan value of the patient group were 1.56±0.24, 0.73±0.05, and 25.38±7.73, respectively, significantly higher than those of the control group (1.36±0.14, 0.65±0.07, and 7.8±3.6) (P<0.05); the PVVmean of the patient group was 14.43±1.86, significantly lower than that of the control group (17.35±0.56) (P<0.05). FibroScan value was positively correlated with HAPI and HARI (r1=0.59, r2=0.66, P<0.001), but negatively correlated with PVVmean (r=-0.64, P<0.001). Conclusion: The liver stiffness assessed by FibroScan and the hemodynamic changes of the portal vein and hepatic artery measured by color Doppler ultrasound are vitally important for evaluating the severity of liver cirrhosis.

  5. Geographic inequities in liver allograft supply and demand: does it affect patient outcomes?

    Science.gov (United States)

    Rana, Abbas; Kaplan, Bruce; Riaz, Irbaz B; Porubsky, Marian; Habib, Shahid; Rilo, Horacio; Gruessner, Angelika C; Gruessner, Rainer W G

    2015-03-01

    Significant geographic inequities mar the distribution of liver allografts for transplantation. We analyzed the effect of geographic inequities on patient outcomes. During our study period (January 1 through December 31, 2010), 11,244 adult candidates were listed for liver transplantation: 5,285 adult liver allografts became available, and 5,471 adult recipients underwent transplantation. We obtained population data from the 2010 United States Census. To determine the effect of regional supply and demand disparities on patient outcomes, we performed linear regression and multivariate Cox regression analyses. Our proposed disparity metric, the ratio of listed candidates to liver allografts available, varied from 1.3 (region 11) to 3.4 (region 1). When that ratio was used as the explanatory variable, the R² values for outcome measures were as follows: 1-year waitlist mortality, 0.23; 1-year posttransplant survival, 0.27. According to our multivariate analysis, the ratio of listed candidates to liver allografts available had a significant effect on waitlist survival (hazard ratio, 1.21; 95% confidence interval, 1.04-1.40) but was not a significant risk factor for posttransplant survival. We found significant differences in liver allograft supply and demand, but these differences had only a modest effect on patient outcomes. Redistricting and allocation-sharing schemes should seek to equalize regional supply and demand rather than attempting to equalize patient outcomes.
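The disparity metric and the kind of linear fit reported above can be illustrated with a short sketch. The regional counts and mortality figures below are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical regional counts: listed candidates and available allografts
candidates = np.array([1200.0, 950.0, 1100.0, 800.0])
allografts = np.array([400.0, 500.0, 450.0, 600.0])
waitlist_mortality = np.array([0.18, 0.10, 0.15, 0.07])  # 1-year, per region

ratio = candidates / allografts                 # listed candidates per allograft
slope, intercept = np.polyfit(ratio, waitlist_mortality, 1)
pred = slope * ratio + intercept
ss_res = np.sum((waitlist_mortality - pred) ** 2)
ss_tot = np.sum((waitlist_mortality - waitlist_mortality.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```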

  6. Reproducibility of graph metrics in fMRI networks

    Directory of Open Access Journals (Sweden)

    Qawi K Telesford

    2010-12-01

    Full Text Available The reliability of graph metrics calculated in network analysis is essential to the interpretation of complex network organization. These graph metrics are used to deduce the small-world properties in networks. In this study, we investigated the test-retest reliability of graph metrics from functional magnetic resonance imaging (fMRI) data collected for two runs in 45 healthy older adults. Graph metrics were calculated on data for both runs and compared using intraclass correlation coefficient (ICC) statistics and Bland-Altman (BA) plots. ICC scores describe the level of absolute agreement between two measurements and provide a measure of reproducibility. For mean graph metrics, ICC scores were high for clustering coefficient (ICC=0.86), global efficiency (ICC=0.83), path length (ICC=0.79), and local efficiency (ICC=0.75); the ICC score for degree was found to be low (ICC=0.29). ICC scores were also used to generate reproducibility maps in brain space to test voxel-wise reproducibility for unsmoothed and smoothed data. Reproducibility was uniform across the brain for global efficiency and path length, but was only high in network hubs for clustering coefficient, local efficiency and degree. BA plots were used to test the measurement repeatability of all graph metrics. All graph metrics fell within the limits for repeatability. Together, these results suggest that, with the exception of degree, mean graph metrics are reproducible and suitable for clinical studies. Further exploration is warranted to better understand reproducibility across the brain on a voxel-wise basis.
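The ICC comparison above can be sketched for two runs of one graph metric. The sketch implements a two-way, absolute-agreement, single-measurement ICC; the study's exact ICC variant is an assumption here, and the run values are invented:

```python
import numpy as np

def icc_a1(scores):
    """Absolute-agreement ICC for a subjects x raters matrix
    (two-way random effects, single measurement)."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    msr = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    msc = n * np.sum((scores.mean(axis=0) - grand) ** 2) / (k - 1)  # runs
    sse = np.sum((scores
                  - scores.mean(axis=1, keepdims=True)
                  - scores.mean(axis=0, keepdims=True)
                  + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

run1 = np.array([0.30, 0.45, 0.52, 0.61, 0.40])   # e.g. clustering coefficient
run2 = run1 + np.array([0.01, -0.02, 0.00, 0.01, -0.01])
icc = icc_a1(np.column_stack([run1, run2]))       # close to 1 for these runs
```

Because the two runs differ only by small noise relative to the between-subject spread, the ICC lands near 1; identical runs give exactly 1.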

  7. An objective measure to identify pediatric liver transplant recipients at risk for late allograft rejection related to non-adherence.

    Science.gov (United States)

    Venkat, Veena L; Nick, Todd G; Wang, Yu; Bucuvalas, John C

    2008-02-01

    Non-adherence to a prescribed immunosuppressive regimen increases risk for late allograft rejection (LAR). We implemented a protocol for immunosuppression management which decreased variation in calcineurin inhibitor blood levels in pediatric liver transplant recipients by controlling for confounders such as physician practice variability. We hypothesized that patients with increased variation in tacrolimus blood levels despite implementation of the immunosuppression management protocol were at increased risk for LAR. We conducted a single-center retrospective cohort study of 101 pediatric liver transplant recipients who were at least one year post liver transplantation and receiving tacrolimus for immunosuppression. The primary outcome variable was biopsy-proven allograft rejection. Primary candidate predictor variables were the standard deviation (SD) of tacrolimus blood levels (a marker of drug level variability), mean tacrolimus blood level, age, and insurance type. SD of tacrolimus blood levels was determined for each patient from a minimum of four outpatient levels during the study period. Unadjusted and adjusted logistic regression models were used to determine the prognostic value of candidate predictors. The median and interquartile range of the SD of tacrolimus blood levels was 1.6 (1.1, 2.1). Eleven episodes of LAR occurred during the study period. Ten of the 11 episodes occurred in patients with tacrolimus blood level SD > 2. Insurance type, mean tacrolimus blood level and SD of tacrolimus blood levels were significantly related to LAR in the unadjusted analyses. An adjusted model including insurance type, mean and SD of tacrolimus blood levels was significantly associated with LAR (validated C-statistic = 0.88, p = 0.012). The adjusted odds of rejection for a one-unit increase in the SD of tacrolimus blood level was 3.49 (95% CI 1.31 to 9.29). Effects of age and insurance status on LAR did not provide independent prognostic value after controlling for SD. Variation in tacrolimus blood
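The predictor at the heart of the study, the per-patient SD of tacrolimus blood levels, is simple to compute. The level values below are invented, and the SD > 2 cut-off follows the abstract's observation that most rejection episodes occurred above it:

```python
import statistics

def tacrolimus_sd(levels):
    """Sample SD of a patient's outpatient tacrolimus levels;
    the study required a minimum of four values per patient."""
    if len(levels) < 4:
        raise ValueError("need at least four outpatient levels")
    return statistics.stdev(levels)

stable = [5.1, 5.4, 4.9, 5.2, 5.0]     # hypothetical adherent patient
erratic = [2.0, 9.5, 3.1, 8.8, 4.0]    # hypothetical non-adherent patient
flagged = [tacrolimus_sd(x) > 2 for x in (stable, erratic)]   # [False, True]
```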

  8. Separable metrics and radiating stars

    Indian Academy of Sciences (India)

    We study the junction condition relating the pressure to heat flux at the boundary of an accelerating and expanding spherically symmetric radiating star. We transform the junction condition to an ordinary differential equation by making a separability assumption on the metric functions in the space–time variables.

  9. Socio-technical security metrics

    NARCIS (Netherlands)

    Gollmann, D.; Herley, C.; Koenig, V.; Pieters, W.; Sasse, M.A.

    2015-01-01

    Report from Dagstuhl seminar 14491. This report documents the program and the outcomes of Dagstuhl Seminar 14491 “Socio-Technical Security Metrics”. In the domain of safety, metrics inform many decisions, from the height of new dikes to the design of nuclear plants. We can state, for example, that

  10. Leading Gainful Employment Metric Reporting

    Science.gov (United States)

    Powers, Kristina; MacPherson, Derek

    2016-01-01

    This chapter will address the importance of intercampus involvement in reporting of gainful employment student-level data that will be used in the calculation of gainful employment metrics by the U.S. Department of Education. The authors will discuss why building relationships within the institution is critical for effective gainful employment…

  11. Energy Metrics for State Government Buildings

    Science.gov (United States)

    Michael, Trevor

    Measuring true progress towards energy conservation goals requires the accurate reporting and accounting of energy consumption. An accurate energy metrics framework is also a critical element for verifiable Greenhouse Gas Inventories. Energy conservation in government can reduce expenditures on energy costs leaving more funds available for public services. In addition to monetary savings, conserving energy can help to promote energy security, air quality, and a reduction of carbon footprint. With energy consumption/GHG inventories recently produced at the Federal level, state and local governments are beginning to also produce their own energy metrics systems. In recent years, many states have passed laws and executive orders which require their agencies to reduce energy consumption. In June 2008, SC state government established a law to achieve a 20% energy usage reduction in state buildings by 2020. This study examines case studies from other states who have established similar goals to uncover the methods used to establish an energy metrics system. Direct energy consumption in state government primarily comes from buildings and mobile sources. This study will focus exclusively on measuring energy consumption in state buildings. The case studies reveal that many states including SC are having issues gathering the data needed to accurately measure energy consumption across all state buildings. Common problems found include a lack of enforcement and incentives that encourage state agencies to participate in any reporting system. The case studies are aimed at finding the leverage used to gather the needed data. The various approaches at coercing participation will hopefully reveal methods that SC can use to establish the accurate metrics system needed to measure progress towards its 20% by 2020 energy reduction goal. Among the strongest incentives found in the case studies is the potential for monetary savings through energy efficiency. 
Framing energy conservation ...

  12. Response evaluation of malignant liver lesions after TACE/SIRT. Comparison of manual and semi-automatic measurement of different response criteria in multislice CT

    International Nuclear Information System (INIS)

    Hoeink, Anna Janina

    2017-01-01

    To compare measurement precision and interobserver variability in the evaluation of hepatocellular carcinoma (HCC) and liver metastases in MSCT before and after transarterial local ablative therapies. Retrospective study of 72 patients with malignant liver lesions (42 metastases; 30 HCCs) before and after therapy (43 SIRT procedures; 29 TACE procedures). Established (LAD; SAD; WHO) and vitality-based parameters (mRECIST; mLAD; mSAD; EASL) were assessed manually and semi-automatically by two readers. The relative interobserver difference (RID) and intraclass correlation coefficient (ICC) were calculated. The median RID for vitality-based parameters was lower for semi-automatic than for manual measurement of mLAD (manual 12.5 %; semi-automatic 3.4 %), mSAD (manual 12.7 %; semi-automatic 5.7 %) and EASL (manual 10.4 %; semi-automatic 1.8 %). The difference in established parameters was not statistically noticeable (p > 0.05). The ICCs of LAD (manual 0.984; semi-automatic 0.982), SAD (manual 0.975; semi-automatic 0.958) and WHO (manual 0.984; semi-automatic 0.978) are high, both in manual and semi-automatic measurements. The ICCs of manual measurements of mLAD (0.897), mSAD (0.844) and EASL (0.875) are lower. This decrease cannot be found in semi-automatic measurements of mLAD (0.997), mSAD (0.992) and EASL (0.998). Conclusion: Vitality-based tumor measurements of HCC and metastases after transarterial local therapies should be performed semi-automatically due to greater measurement precision, thus increasing the reproducibility and in turn the reliability of therapeutic decisions.
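A common form of the relative interobserver difference (RID) reported above can be sketched as follows; the normalization by the two readers' mean and the measurement values themselves are assumptions for illustration, not the paper's data:

```python
import numpy as np

# Paired lesion measurements (e.g. mLAD in mm) by two readers -- invented values
reader1 = np.array([42.0, 18.5, 27.0, 33.2])
reader2 = np.array([40.5, 19.0, 29.5, 33.0])

# RID per lesion: absolute difference relative to the readers' mean, in percent
rid = np.abs(reader1 - reader2) / ((reader1 + reader2) / 2.0) * 100.0
median_rid = float(np.median(rid))
```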

  13. Response evaluation of malignant liver lesions after TACE/SIRT. Comparison of manual and semi-automatic measurement of different response criteria in multislice CT

    Energy Technology Data Exchange (ETDEWEB)

    Hoeink, Anna Janina [Univ. Hospital Cologne (Germany). Diagnostic and Interventional Radiology; Schuelke, Christoph; Loehnert, Annika; Kammerer, Sara; Fortkamp, Rasmus; Heindel, Walter; Buerke, Boris [Univ. Hospital Muenster (UKM), Muenster (Germany). Dept. of Clinical Radiology; Koch, Raphael [Univ. Hospital Muenster (UKM), Muenster (Germany). Inst. of Biostatistics and Clinical Research (IBKF)

    2017-11-15

    To compare measurement precision and interobserver variability in the evaluation of hepatocellular carcinoma (HCC) and liver metastases in MSCT before and after transarterial local ablative therapies. Retrospective study of 72 patients with malignant liver lesions (42 metastases; 30 HCCs) before and after therapy (43 SIRT procedures; 29 TACE procedures). Established (LAD; SAD; WHO) and vitality-based parameters (mRECIST; mLAD; mSAD; EASL) were assessed manually and semi-automatically by two readers. The relative interobserver difference (RID) and intraclass correlation coefficient (ICC) were calculated. The median RID for vitality-based parameters was lower for semi-automatic than for manual measurement of mLAD (manual 12.5 %; semi-automatic 3.4 %), mSAD (manual 12.7 %; semi-automatic 5.7 %) and EASL (manual 10.4 %; semi-automatic 1.8 %). The difference in established parameters was not statistically noticeable (p > 0.05). The ICCs of LAD (manual 0.984; semi-automatic 0.982), SAD (manual 0.975; semi-automatic 0.958) and WHO (manual 0.984; semi-automatic 0.978) are high, both in manual and semi-automatic measurements. The ICCs of manual measurements of mLAD (0.897), mSAD (0.844) and EASL (0.875) are lower. This decrease cannot be found in semi-automatic measurements of mLAD (0.997), mSAD (0.992) and EASL (0.998). Conclusion: Vitality-based tumor measurements of HCC and metastases after transarterial local therapies should be performed semi-automatically due to greater measurement precision, thus increasing the reproducibility and in turn the reliability of therapeutic decisions.

  14. Analyses Of Two End-User Software Vulnerability Exposure Metrics

    Energy Technology Data Exchange (ETDEWEB)

    Jason L. Wright; Miles McQueen; Lawrence Wellman

    2012-08-01

    The risk due to software vulnerabilities will not be completely resolved in the near future. Instead, it is important to put reliable vulnerability measures into the hands of end-users, so that informed decisions can be made regarding the relative security exposure incurred by choosing one software package over another. To that end, we propose two new security metrics: average active vulnerabilities (AAV) and vulnerability free days (VFD). These metrics capture both the speed with which new vulnerabilities are reported to vendors and the rate at which software vendors fix them. We then examine how the metrics are computed using currently available datasets and demonstrate their estimation in a simulation experiment using four different browsers as a case study. Finally, we discuss how the metrics may be used by the various stakeholders of software and how they can inform software usage decisions.
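One plausible reading of the two proposed metrics can be sketched as follows; the assumption here is that a vulnerability counts as "active" from its report date until its fix date, and the dates are invented, not the paper's dataset:

```python
from datetime import date, timedelta

# Illustrative vulnerability lifetimes: (reported, fixed)
vulns = [
    (date(2012, 1, 2), date(2012, 1, 5)),
    (date(2012, 1, 4), date(2012, 1, 10)),
]
start, end = date(2012, 1, 1), date(2012, 1, 14)

days = (end - start).days + 1
active_per_day = [
    sum(reported <= start + timedelta(d) < fixed for reported, fixed in vulns)
    for d in range(days)
]

aav = sum(active_per_day) / days                # average active vulnerabilities
vfd = sum(1 for a in active_per_day if a == 0)  # vulnerability free days
```

Counting over a fixed window makes the two metrics complementary: AAV summarizes how much exposure accumulates, while VFD counts how often the user is fully patched.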

  15. Green Chemistry Metrics with Special Reference to Green Analytical Chemistry

    Directory of Open Access Journals (Sweden)

    Marek Tobiszewski

    2015-06-01

    Full Text Available The concept of green chemistry is widely recognized in chemical laboratories. To properly measure an environmental impact of chemical processes, dedicated assessment tools are required. This paper summarizes the current state of knowledge in the field of development of green chemistry and green analytical chemistry metrics. The diverse methods used for evaluation of the greenness of organic synthesis, such as eco-footprint, E-Factor, EATOS, and Eco-Scale are described. Both the well-established and recently developed green analytical chemistry metrics, including NEMI labeling and analytical Eco-scale, are presented. Additionally, this paper focuses on the possibility of the use of multivariate statistics in evaluation of environmental impact of analytical procedures. All the above metrics are compared and discussed in terms of their advantages and disadvantages. The current needs and future perspectives in green chemistry metrics are also discussed.

  16. Green Chemistry Metrics with Special Reference to Green Analytical Chemistry.

    Science.gov (United States)

    Tobiszewski, Marek; Marć, Mariusz; Gałuszka, Agnieszka; Namieśnik, Jacek

    2015-06-12

    The concept of green chemistry is widely recognized in chemical laboratories. To properly measure an environmental impact of chemical processes, dedicated assessment tools are required. This paper summarizes the current state of knowledge in the field of development of green chemistry and green analytical chemistry metrics. The diverse methods used for evaluation of the greenness of organic synthesis, such as eco-footprint, E-Factor, EATOS, and Eco-Scale are described. Both the well-established and recently developed green analytical chemistry metrics, including NEMI labeling and analytical Eco-scale, are presented. Additionally, this paper focuses on the possibility of the use of multivariate statistics in evaluation of environmental impact of analytical procedures. All the above metrics are compared and discussed in terms of their advantages and disadvantages. The current needs and future perspectives in green chemistry metrics are also discussed.

  17. Building Cost and Performance Metrics: Data Collection Protocol, Revision 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, Kimberly M.; Solana, Amy E.; Spees, Kathleen L.

    2005-09-29

    This technical report describes the process for selecting and applying the building cost and performance metrics for measuring sustainably designed buildings in comparison to traditionally designed buildings.

  18. [Liver transplantation].

    Science.gov (United States)

    Pompili, Maurizio; Mirante, Vincenzo Giorgio; Rapaccini, Gian Ludovico; Gasbarrini, Giovanni

    2004-01-01

    Liver transplantation represents the first choice treatment for patients with fulminant acute hepatitis and for patients with chronic liver disease and advanced functional failure. Patients in the waiting list for liver transplantation are classified according to the severity of their clinical conditions (evaluated using staging systems mostly based on hematochemical parameters related to liver function). This classification, together with the blood group and the body size compatibility, remains the main criterion for organ allocation. The main indications for liver transplantation are cirrhosis (mainly HCV-, HBV- and alcohol-related) and hepatocellular carcinoma emerging in cirrhosis in adult patients, biliary atresia and some inborn errors of metabolism in pediatric patients. In adults the overall 5-year survival ranges between 60 and 70%, in both American and European series. Even better results have been reported for pediatric patients: in fact, the 5-year survival rate for children ranges between 70 and 80% in the main published series. In this study we evaluated the main medical problems correlated with liver transplantation such as immunosuppressive treatment, acute and chronic rejection, infectious complications, the recurrence of the liver disease leading to transplantation, and cardiovascular and metabolic complications.
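
    The record notes that waiting-list patients are ranked by severity scores based on hematochemical parameters related to liver function. One widely used allocation score (not named in the record) is MELD; the standard UNOS formula is sketched below as an illustration, with hypothetical laboratory values.

    ```python
    # Minimal sketch of the MELD score, a severity staging system based on
    # hematochemical liver-function parameters. Lab values are hypothetical.
    import math

    def meld(bilirubin_mg_dl, inr, creatinine_mg_dl):
        """MELD = 9.57*ln(Cr) + 3.78*ln(bili) + 11.2*ln(INR) + 6.43,
        with each value floored at 1.0 and creatinine capped at 4.0 mg/dL."""
        bili = max(1.0, bilirubin_mg_dl)
        cr = min(4.0, max(1.0, creatinine_mg_dl))
        i = max(1.0, inr)
        score = 9.57 * math.log(cr) + 3.78 * math.log(bili) + 11.2 * math.log(i) + 6.43
        return round(score)

    # Hypothetical patient: bilirubin 3.2 mg/dL, INR 1.8, creatinine 1.5 mg/dL
    print(f"MELD = {meld(bilirubin_mg_dl=3.2, inr=1.8, creatinine_mg_dl=1.5)}")  # → MELD = 21
    ```

    Higher scores indicate more advanced functional failure and hence higher waiting-list priority, which is then combined with blood group and body-size compatibility as the record describes.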

  19. Benign Liver Tumors

    Science.gov (United States)


  20. Liver Function Tests

    Science.gov (United States)


  1. Progression of Liver Disease

    Science.gov (United States)


  2. Liver (Hepatocellular) Cancer Screening

    Science.gov (United States)

    ... Treatment Liver Cancer Prevention Liver Cancer Screening Research Liver (Hepatocellular) Cancer Screening (PDQ®)–Patient Version What is ... These are called diagnostic tests . General Information About Liver (Hepatocellular) Cancer Key Points Liver cancer is a ...

  3. Measurement of protein synthesis in vivo in the growing lamb liver using 14C lysyl-tRNAlys

    International Nuclear Information System (INIS)

    Ferrara, M.; Arnal, M.; Fauconneau, G.

    1977-01-01

    The distribution of intravenously administered 14 C lysine was followed in the extracellular and intracellular pools of the lamb liver and in the crude aminoacyl-tRNA extracted from them at the same time. The evolution of the specific radioactivity of lysine in these three pools suggested that lysyl-tRNAlys was acylated with amino acid derived from both the extracellular and intracellular pools. The rate of protein synthesis calculated with the specific radioactivity of lysine released from aminoacyl-tRNA was intermediate between those obtained with the extracellular and intracellular pools, indicating that determination of the specific radioactivity of amino acid released from aminoacyl-tRNA is important for correct assessment of protein turnover [fr
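
    The precursor-product logic behind this record can be sketched numerically: the fractional synthesis rate depends directly on which pool's specific radioactivity (SA) is taken as the precursor. All numbers below are hypothetical, chosen only so that the tRNA-based estimate falls between the two pool-based estimates as the record describes.

    ```python
    # Standard precursor-product calculation of fractional synthesis rate (FSR).
    # Specific radioactivities (SA) are hypothetical, in dpm per nmol lysine.

    def fractional_synthesis_rate(sa_protein, sa_precursor, hours):
        """FSR (%/day) = (SA incorporated into protein / SA of precursor pool) / time."""
        return sa_protein / sa_precursor / hours * 24.0 * 100.0

    sa_trna = 50.0           # lysyl-tRNA (true precursor), intermediate SA
    sa_extracellular = 80.0  # plasma/extracellular pool
    sa_intracellular = 30.0  # intracellular free pool
    sa_liver_protein = 2.0   # lysine released from liver protein after 1 h

    fsr_trna = fractional_synthesis_rate(sa_liver_protein, sa_trna, hours=1.0)
    fsr_extra = fractional_synthesis_rate(sa_liver_protein, sa_extracellular, hours=1.0)
    fsr_intra = fractional_synthesis_rate(sa_liver_protein, sa_intracellular, hours=1.0)

    # Using the extracellular pool underestimates, the intracellular pool
    # overestimates, and the tRNA-based value lies between them.
    assert fsr_extra < fsr_trna < fsr_intra
    print(f"FSR (tRNA precursor): {fsr_trna:.1f} %/day")  # → 96.0 %/day
    ```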

  4. The 4A Metric Algorithm: A Unique E-Learning Engineering Solution Designed via Neuroscience to Counter Cheating and Reduce Its Recidivism by Measuring Student Growth through Systemic Sequential Online Learning

    Science.gov (United States)

    Osler, James Edward

    2016-01-01

    This paper provides a novel instructional methodology that is a unique E-Learning engineered "4A Metric Algorithm" designed to conceptually address the four main challenges faced by 21st century students, who are tempted to cheat in a myriad of higher education settings (face to face, hybrid, and online). The algorithmic online…

  5. Enlarged Liver

    Science.gov (United States)

    ... of liver damage. Medicinal herbs. Certain herbs, including comfrey, ma huang and mistletoe, can increase your risk ... herbs to avoid include germander, chaparral, senna, mistletoe, comfrey, ma huang, valerian root, kava, celandine and green ...

  6. Group covariance and metrical theory

    International Nuclear Information System (INIS)

    Halpern, L.

    1983-01-01

    The a priori introduction of a Lie group of transformations into a physical theory has often proved to be useful; it usually serves to describe special simplified conditions before a general theory can be worked out. Newton's assumptions of absolute space and time are examples where the Euclidean group and translation group have been introduced. These groups were extended to the Galilei group and modified in the special theory of relativity to the Poincare group to describe physics under the given conditions covariantly in the simplest way. The criticism of the a priori character leads to the formulation of the general theory of relativity. The general metric theory does not really give preference to a particular invariance group - even the principle of equivalence can be adapted to a whole family of groups. The physical laws covariantly inserted into the metric space are however adapted to the Poincare group. 8 references

  7. General relativity: An erfc metric

    Science.gov (United States)

    Plamondon, Réjean

    2018-06-01

    This paper proposes an erfc potential to incorporate in a symmetric metric. One key feature of this model is that it relies on the existence of an intrinsic physical constant σ, a star-specific proper length that scales all its surroundings. Based thereon, the new metric is used to study the space-time geometry of a static symmetric massive object, as seen from its interior. The analytical solutions to the Einstein equation are presented, highlighting the absence of singularities and discontinuities in such a model. The geodesics are derived in their second- and first-order differential formats. Recalling the slight impact of the new model on the classical general relativity tests in the solar system, a number of facts and open problems are briefly revisited on the basis of a heuristic definition of σ. A special attention is given to gravitational collapses and non-singular black holes.

  8. Investigation on liver fast metabolism with CT

    International Nuclear Information System (INIS)

    Huebener, K.H.; Schmitt, W.G.H.

    1981-01-01

    Measurements of the density of normal and diffusely diseased liver parenchyma show a significant difference only in fatty liver. A linear relationship between fat content and physical density has been demonstrated. Computed tomographic densitometry of liver tissue correlates well with physical in vitro measurements of fat content and is sufficiently accurate for clinical use. Other types of liver disease cannot be differentiated by densitometry. Lipolysis in fatty liver during alcohol withdrawal in chronic alcoholism has been investigated; the fatty degeneration of the liver was found to decrease at a rate of 1 percent/day. Fatty degeneration of the liver in acute pancreatitis and other diseases has also been investigated. CT densitometry of the liver should be considered a useful routine clinical method for determining the fat content of the liver. (author)
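
    The linear attenuation-fat relationship and the roughly 1 percent/day clearance rate described above can be sketched as a small calculation. The slope and intercept below are illustrative placeholders, not the study's fitted values.

    ```python
    # Illustrative linear mapping from mean liver CT attenuation (HU) to fat
    # content, plus the expected resolution time at ~1 %/day lipolysis.
    # hu_normal and hu_per_percent_fat are hypothetical calibration constants.

    def fat_percent_from_hu(hu, hu_normal=60.0, hu_per_percent_fat=-1.5):
        """Estimate fat content (%) from mean liver attenuation; clamp at 0."""
        return max(0.0, (hu - hu_normal) / hu_per_percent_fat)

    def days_to_clear(fat_percent, clearance_per_day=1.0):
        """Days for fatty change to resolve at ~1 %/day (the record's estimate)."""
        return fat_percent / clearance_per_day

    hu_measured = 30.0                      # hypothetical fatty-liver scan
    fat = fat_percent_from_hu(hu_measured)  # (30 - 60) / -1.5 = 20 % fat
    print(f"Estimated fat content: {fat:.0f} %")
    print(f"Expected resolution time at 1 %/day: {days_to_clear(fat):.0f} days")
    ```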

  9. Investigation on liver fast metabolism with CT

    Energy Technology Data Exchange (ETDEWEB)

    Huebener, K.H.; Schmitt, W.G.H. (Heidelberg Univ. (Germany, F.R.). Pathologisches Inst.)

    1981-01-01

    Measurements of the density of normal and diffusely diseased liver parenchyma show a significant difference only in fatty liver. A linear relationship between fat content and physical density has been demonstrated. Computed tomographic densitometry of liver tissue correlates well with physical in vitro measurements of fat content and is sufficiently accurate for clinical use. Other types of liver disease cannot be differentiated by densitometry. Lipolysis in fatty liver during alcohol withdrawal in chronic alcoholism has been investigated; the fatty degeneration of the liver was found to decrease at a rate of 1 percent/day. Fatty degeneration of the liver in acute pancreatitis and other diseases has also been investigated. CT densitometry of the liver should be considered a useful routine clinical method for determining the fat content of the liver.

  10. Effects of a Calcium Bentonite Clay in Diets Containing Aflatoxin when Measuring Liver Residues of Aflatoxin B1 in Starter Broiler Chicks

    Directory of Open Access Journals (Sweden)

    Justin Fowler

    2015-08-01

    Full Text Available Research has shown success using clay-based binders to adsorb aflatoxin in animal feeds; however, no adsorbent has been approved for the prevention or treatment of aflatoxicosis. In this study, growth and relative organ weights were evaluated along with a residue analysis for aflatoxin B1 in liver tissue collected from broiler chickens consuming dietary aflatoxin (0, 600, 1200, and 1800 µg/kg), both with and without 0.2% of a calcium bentonite clay additive (TX4). After one week, only the combined measure of a broiler productivity index was significantly affected by 1800 µg/kg aflatoxin. However, once birds had consumed treatment diets for two weeks, body weights and relative kidney weights were affected by the lowest concentration. Then, during the third week, body weights, feed conversion, and the productivity index were affected by the 600 µg/kg level. Results also showed that 0.2% TX4 was effective at reducing the accumulation of aflatoxin B1 residues in the liver and improving livability in birds fed aflatoxin. The time required to clear all residues from the liver was less than one week. With evidence that the liver's ability to process aflatoxin becomes relatively efficient within three weeks, this would imply that an alternative strategy for handling aflatoxin contamination in feed could be to allow a short, punctuated exposure to a higher level, so long as that exposure is followed by at least a one-week withdrawal period on a clean, aflatoxin-free diet.

  11. Magnetic Resonance Elastography: Measurement of Hepatic Stiffness Using Different Direct Inverse Problem Reconstruction Methods in Healthy Volunteers and Patients with Liver Disease.

    Science.gov (United States)

    Saito, Shigeyoshi; Tanaka, Keiko; Hashido, Takashi

    2016-02-01

    The purpose of this study was to compare the mean hepatic stiffness values obtained by applying two different direct inverse problem reconstruction methods to magnetic resonance elastography (MRE). Thirteen healthy men (23.2±2.1 years) and 16 patients with liver disease (78.9±4.3 years; 12 men and 4 women) were examined using a 3.0-T MRI. The healthy volunteers underwent three consecutive scans: two 70-Hz waveform scans and one 50-Hz waveform scan. The patients with liver disease were scanned using the 70-Hz waveform only. The MRE data for each subject were processed twice for calculation of the mean hepatic stiffness (Pa), once using multiscale direct inversion (MSDI) and once using multimodel direct inversion (MMDI). There were no significant differences in mean stiffness between the two 70-Hz scans or between the different waveforms. However, the mean stiffness values obtained with the MSDI technique (with mask: 2895.3±255.8 Pa, without mask: 2940.6±265.4 Pa) were larger than those obtained with the MMDI technique (with mask: 2614.0±242.1 Pa, without mask: 2699.2±273.5 Pa). The reproducibility of measurements obtained using the two techniques was high for both the healthy volunteers [intraclass correlation coefficients (ICCs): 0.840-0.953] and the patients (ICC: 0.830-0.995). These results suggest that knowledge of the characteristics of different direct inversion algorithms is important for longitudinal liver stiffness assessments, such as comparisons across scanners and evaluation of the response to fibrosis therapy.
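
    The reproducibility figures above are intraclass correlation coefficients. A minimal one-way ICC(1,1) computation on hypothetical repeated liver-stiffness measurements (not the study's data) looks like this:

    ```python
    # One-way intraclass correlation, ICC(1,1), for test-retest agreement.
    # Each row holds repeated stiffness measurements (Pa) for one subject;
    # the values are hypothetical.

    def icc_oneway(data):
        """data: list of [scan1, scan2, ...] per subject. Returns ICC(1,1)."""
        n = len(data)           # subjects
        k = len(data[0])        # repeats per subject
        grand = sum(sum(row) for row in data) / (n * k)
        means = [sum(row) / k for row in data]
        msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)      # between-subject MS
        msw = sum((x - m) ** 2 for row, m in zip(data, means) for x in row) / (n * (k - 1))  # within-subject MS
        return (msb - msw) / (msb + (k - 1) * msw)

    stiffness = [  # hypothetical scan/rescan pairs, Pa
        [2900, 2950], [2600, 2650], [2700, 2690],
        [3100, 3080], [2500, 2550], [2850, 2800],
    ]
    print(f"ICC = {icc_oneway(stiffness):.3f}")
    ```

    Values near 1 (as in the record's 0.83-0.995 range) indicate that between-subject differences dominate scan-to-scan noise.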

  12. Validation of an LC-MS/MS method to measure tacrolimus in rat kidney and liver tissue and its application to human kidney biopsies.

    Science.gov (United States)

    Noll, Benjamin D; Coller, Janet K; Somogyi, Andrew A; Morris, Raymond G; Russ, Graeme R; Hesselink, Dennis A; Van Gelder, Teun; Sallustio, Benedetta C

    2013-10-01

    Tacrolimus (TAC) has a narrow therapeutic index and high interindividual and intraindividual pharmacokinetic variability, necessitating therapeutic drug monitoring to individualize dosage. Recent evidence suggests that intragraft TAC concentrations may better predict transplant outcomes. This study aimed to develop a method for the quantification of TAC in small biopsy-sized samples of rat kidney and liver tissue, which could be applied to clinical biopsy samples from kidney transplant recipients. Kidneys and livers were harvested from Mrp2-deficient TR- Wistar rats administered TAC (4 mg·kg·d for 14 days, n = 8) or vehicle (n = 10). Tissue samples (0.20-1.00 mg of dry weight) were solubilized enzymatically and underwent liquid-liquid extraction before analysis by a liquid chromatography-tandem mass spectrometry method. TAC-free tissue was used in the calibrator and quality control samples. Analyte detection was accomplished using positive electrospray ionization (TAC: m/z 821.5 → 768.6; internal standard ascomycin m/z 809.3 → 756.4). Calibration curves (0.04-2.6 μg/L) were linear (R > 0.99, n = 10), with interday and intraday calibrator coefficients of variation and bias <17% at the lower limit of quantification and <15% at all other concentrations (n = 6-10). Extraction efficiencies for TAC and ascomycin were approximately 70%, and matrix effects were minimal. Rat kidney TAC concentrations were higher (range 109-190 pg/mg tissue) than those in the liver (range 22-53 pg/mg of tissue), with median tissue/blood concentration ratios of 72.0 and 17.6, respectively. In 2 transplant patients, kidney TAC concentrations ranged from 119 to 285 pg/mg of tissue and were approximately 20 times higher than whole blood trough TAC concentrations. The method displayed precision and accuracy suitable for application to TAC measurement in human kidney biopsy tissue.
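
    The calibration checks described above (linear fit, then per-level bias against acceptance limits) can be sketched as follows. The concentration/response pairs are hypothetical, chosen only to span the record's 0.04-2.6 µg/L range.

    ```python
    # Sketch of bioanalytical calibration-curve verification: least-squares fit
    # of response vs concentration, then percent bias of back-calculated values.
    # Responses are hypothetical peak-area ratios.

    def linear_fit(x, y):
        """Ordinary least squares; returns (slope, intercept)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((a - mx) ** 2 for a in x)
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        slope = sxy / sxx
        return slope, my - slope * mx

    conc = [0.04, 0.1, 0.4, 1.0, 2.6]            # calibrators, ug/L
    resp = [0.021, 0.049, 0.201, 0.502, 1.298]   # hypothetical responses
    slope, intercept = linear_fit(conc, resp)

    for c, r in zip(conc, resp):
        back = (r - intercept) / slope            # back-calculated concentration
        bias = (back - c) / c * 100.0
        print(f"{c:5.2f} ug/L  bias {bias:+5.1f}%  (limit: 17% at LLOQ, 15% elsewhere)")
    ```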

  13. hdm: High-dimensional metrics

    OpenAIRE

    Chernozhukov, Victor; Hansen, Christian; Spindler, Martin

    2016-01-01

    In this article the package High-dimensional Metrics (hdm) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e...

  14. Multi-Metric Sustainability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cowlin, Shannon [National Renewable Energy Lab. (NREL), Golden, CO (United States); Heimiller, Donna [National Renewable Energy Lab. (NREL), Golden, CO (United States); Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Pless, Jacquelyn [National Renewable Energy Lab. (NREL), Golden, CO (United States); Munoz, David [Colorado School of Mines, Golden, CO (United States)

    2014-12-01

    A readily accessible framework that allows for evaluating impacts and comparing tradeoffs among factors in energy policy, expansion planning, and investment decision making is lacking. Recognizing this, the Joint Institute for Strategic Energy Analysis (JISEA) funded an exploration of multi-metric sustainability analysis (MMSA) to provide energy decision makers with a means to make more comprehensive comparisons of energy technologies. The resulting MMSA tool lets decision makers simultaneously compare technologies and potential deployment locations.

  15. Sensory Metrics of Neuromechanical Trust.

    Science.gov (United States)

    Softky, William; Benford, Criscillia

    2017-09-01

    Today digital sources supply a historically unprecedented component of human sensorimotor data, the consumption of which is correlated with poorly understood maladies such as Internet addiction disorder and Internet gaming disorder. Because both natural and digital sensorimotor data share common mathematical descriptions, one can quantify our informational sensorimotor needs using the signal processing metrics of entropy, noise, dimensionality, continuity, latency, and bandwidth. Such metrics describe in neutral terms the informational diet human brains require to self-calibrate, allowing individuals to maintain trusting relationships. With these metrics, we define the trust humans experience using the mathematical language of computational models, that is, as a primitive statistical algorithm processing finely grained sensorimotor data from neuromechanical interaction. This definition of neuromechanical trust implies that artificial sensorimotor inputs and interactions that attract low-level attention through frequent discontinuities and enhanced coherence will decalibrate a brain's representation of its world over the long term by violating the implicit statistical contract for which self-calibration evolved. Our hypersimplified mathematical understanding of human sensorimotor processing as multiscale, continuous-time vibratory interaction allows equally broad-brush descriptions of failure modes and solutions. For example, we model addiction in general as the result of homeostatic regulation gone awry in novel environments (sign reversal) and digital dependency as a sub-case in which the decalibration caused by digital sensorimotor data spurs yet more consumption of them. We predict that institutions can use these sensorimotor metrics to quantify media richness to improve employee well-being; that dyads and family-size groups will bond and heal best through low-latency, high-resolution multisensory interaction such as shared meals and reciprocated touch; and
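
    One of the signal-processing metrics named above, entropy, can be made concrete with a small sketch: the Shannon entropy of a discretized signal distinguishes a widely varying stream from a highly repetitive one. The signals and bin width are hypothetical illustrations, not from the article.

    ```python
    # Shannon entropy of a discretized 1-D signal, one of the metrics the record
    # lists (entropy, noise, dimensionality, continuity, latency, bandwidth).
    import math

    def shannon_entropy(signal, bin_width=0.5):
        """Histogram the signal into bins of bin_width, return entropy in bits."""
        bins = {}
        for x in signal:
            b = int(x // bin_width)
            bins[b] = bins.get(b, 0) + 1
        n = len(signal)
        return -sum((c / n) * math.log2(c / n) for c in bins.values())

    smooth = [0.1 * i for i in range(20)]   # ramp spread over many bins
    repetitive = [0.1, 0.2] * 10            # two values in a single bin

    print(f"entropy(ramp)       = {shannon_entropy(smooth):.2f} bits")
    print(f"entropy(repetitive) = {shannon_entropy(repetitive):.2f} bits")
    ```

    A richer, less predictable signal carries higher entropy; the repetitive stream collapses to zero bits.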

  16. Metric reconstruction from Weyl scalars

    Energy Technology Data Exchange (ETDEWEB)

    Whiting, Bernard F; Price, Larry R [Department of Physics, PO Box 118440, University of Florida, Gainesville, FL 32611 (United States)

    2005-08-07

    The Kerr geometry has remained an elusive world in which to explore physics and delve into the more esoteric implications of general relativity. Following the discovery, by Kerr in 1963, of the metric for a rotating black hole, the most major advance has been an understanding of its Weyl curvature perturbations based on Teukolsky's discovery of separable wave equations some ten years later. In the current research climate, where experiments across the globe are preparing for the first detection of gravitational waves, a more complete understanding than concerns just the Weyl curvature is now called for. To understand precisely how comparatively small masses move in response to the gravitational waves they emit, a formalism has been developed based on a description of the whole spacetime metric perturbation in the neighbourhood of the emission region. Presently, such a description is not available for the Kerr geometry. While there does exist a prescription for obtaining metric perturbations once curvature perturbations are known, it has become apparent that there are gaps in that formalism which are still waiting to be filled. The most serious gaps include gauge inflexibility, the inability to include sources-which are essential when the emitting masses are considered-and the failure to describe the l = 0 and 1 perturbation properties. Among these latter properties of the perturbed spacetime, arising from a point mass in orbit, are the perturbed mass and axial component of angular momentum, as well as the very elusive Carter constant for non-axial angular momentum. A status report is given on recent work which begins to repair these deficiencies in our current incomplete description of Kerr metric perturbations.

  17. Metric reconstruction from Weyl scalars

    International Nuclear Information System (INIS)

    Whiting, Bernard F; Price, Larry R

    2005-01-01

    The Kerr geometry has remained an elusive world in which to explore physics and delve into the more esoteric implications of general relativity. Following the discovery, by Kerr in 1963, of the metric for a rotating black hole, the most major advance has been an understanding of its Weyl curvature perturbations based on Teukolsky's discovery of separable wave equations some ten years later. In the current research climate, where experiments across the globe are preparing for the first detection of gravitational waves, a more complete understanding than concerns just the Weyl curvature is now called for. To understand precisely how comparatively small masses move in response to the gravitational waves they emit, a formalism has been developed based on a description of the whole spacetime metric perturbation in the neighbourhood of the emission region. Presently, such a description is not available for the Kerr geometry. While there does exist a prescription for obtaining metric perturbations once curvature perturbations are known, it has become apparent that there are gaps in that formalism which are still waiting to be filled. The most serious gaps include gauge inflexibility, the inability to include sources-which are essential when the emitting masses are considered-and the failure to describe the l = 0 and 1 perturbation properties. Among these latter properties of the perturbed spacetime, arising from a point mass in orbit, are the perturbed mass and axial component of angular momentum, as well as the very elusive Carter constant for non-axial angular momentum. A status report is given on recent work which begins to repair these deficiencies in our current incomplete description of Kerr metric perturbations

  18. Landscape metrics for three-dimension urban pattern recognition

    Science.gov (United States)

    Liu, M.; Hu, Y.; Zhang, W.; Li, C.

    2017-12-01

    Understanding how landscape pattern determines population or ecosystem dynamics is crucial for managing our landscapes. Urban areas are becoming increasingly dominant social-ecological systems, so it is important to understand patterns of urbanization. Most studies of urban landscape pattern examine land-use maps in two dimensions because the acquisition of 3-dimensional information is difficult. We used Brista software based on Quickbird images and aerial photos to interpret the height of buildings, thus incorporating a 3-dimensional approach. We estimated the feasibility and accuracy of this approach. A total of 164,345 buildings in the Liaoning central urban agglomeration of China, which included seven cities, were measured. Twelve landscape metrics were proposed or chosen to describe the urban landscape patterns in 2- and 3-dimensional scales. The ecological and social meaning of landscape metrics were analyzed with multiple correlation analysis. The results showed that classification accuracy compared with field surveys was 87.6%, which means this method for interpreting building height was acceptable. The metrics effectively reflected the urban architecture in relation to number of buildings, area, height, 3-D shape and diversity aspects. We were able to describe the urban characteristics of each city with these metrics. The metrics also captured ecological and social meanings. The proposed landscape metrics provided a new method for urban landscape analysis in three dimensions.
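
    The jump from 2-D to 3-D metrics described above amounts to adding height-derived quantities (volume, height variation, height diversity) to footprint-based ones. A minimal sketch on hypothetical building data (not the study's metrics or data):

    ```python
    # Toy 2-D vs 3-D landscape metrics for an urban patch. Each building is
    # (footprint_area_m2, height_m); all values are hypothetical.
    import math

    buildings = [(400, 12), (250, 6), (800, 45), (300, 24), (500, 9), (650, 30)]

    areas = [a for a, _ in buildings]
    heights = [h for _, h in buildings]
    volumes = [a * h for a, h in buildings]          # 3-D extends area to volume

    mean_h = sum(heights) / len(heights)
    sd_h = math.sqrt(sum((h - mean_h) ** 2 for h in heights) / len(heights))

    # Shannon diversity over 10-m height classes: a simple 3-D diversity metric
    classes = [h // 10 for h in heights]
    counts = {c: classes.count(c) for c in set(classes)}
    shannon = -sum((n / len(classes)) * math.log(n / len(classes)) for n in counts.values())

    print(f"total footprint {sum(areas)} m2, total volume {sum(volumes)} m3")
    print(f"mean height {mean_h:.1f} m, height SD {sd_h:.1f} m, height diversity H' = {shannon:.2f}")
    ```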

  19. Sustainability Metrics: The San Luis Basin Project

    Science.gov (United States)

    Sustainability is about promoting humanly desirable dynamic regimes of the environment. Metrics: ecological footprint, net regional product, exergy, emergy, and Fisher Information. Adaptive management: (1) metrics assess problem, (2) specific problem identified, and (3) managemen...

  20. Reproducibility of graph metrics of human brain functional networks.

    Science.gov (United States)

    Deuker, Lorena; Bullmore, Edward T; Smith, Marie; Christensen, Soren; Nathan, Pradeep J; Rockstroh, Brigitte; Bassett, Danielle S

    2009-10-01

    Graph theory provides many metrics of complex network organization that can be applied to analysis of brain networks derived from neuroimaging data. Here we investigated the test-retest reliability of graph metrics of functional networks derived from magnetoencephalography (MEG) data recorded in two sessions from 16 healthy volunteers who were studied at rest and during performance of the n-back working memory task in each session. For each subject's data at each session, we used a wavelet filter to estimate the mutual information (MI) between each pair of MEG sensors in each of the classical frequency intervals from gamma to low delta in the overall range 1-60 Hz. Undirected binary graphs were generated by thresholding the MI matrix and 8 global network metrics were estimated: the clustering coefficient, path length, small-worldness, efficiency, cost-efficiency, assortativity, hierarchy, and synchronizability. Reliability of each graph metric was assessed using the intraclass correlation (ICC). Good reliability was demonstrated for most metrics applied to the n-back data (mean ICC=0.62). Reliability was greater for metrics in lower frequency networks. Higher frequency gamma- and beta-band networks were less reliable at a global level but demonstrated high reliability of nodal metrics in frontal and parietal regions. Performance of the n-back task was associated with greater reliability than measurements on resting state data. Task practice was also associated with greater reliability. Collectively these results suggest that graph metrics are sufficiently reliable to be considered for future longitudinal studies of functional brain network changes.
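
    The pipeline described above (threshold a mutual-information matrix into a binary undirected graph, then compute global metrics) can be sketched with a toy 5-sensor MI matrix; the values and threshold are hypothetical, and only two of the record's eight metrics are shown.

    ```python
    # Threshold a symmetric mutual-information matrix into a binary graph and
    # compute mean clustering coefficient and characteristic path length.
    from collections import deque

    mi = [  # hypothetical symmetric MI matrix for 5 sensors
        [0.0, 0.8, 0.6, 0.1, 0.2],
        [0.8, 0.0, 0.7, 0.2, 0.1],
        [0.6, 0.7, 0.0, 0.5, 0.1],
        [0.1, 0.2, 0.5, 0.0, 0.6],
        [0.2, 0.1, 0.1, 0.6, 0.0],
    ]
    threshold = 0.4
    n = len(mi)
    adj = [{j for j in range(n) if j != i and mi[i][j] > threshold} for i in range(n)]

    def clustering(i):
        """Fraction of a node's neighbour pairs that are themselves connected."""
        nb = list(adj[i])
        if len(nb) < 2:
            return 0.0
        links = sum(1 for a in range(len(nb)) for b in range(a + 1, len(nb))
                    if nb[b] in adj[nb[a]])
        return 2.0 * links / (len(nb) * (len(nb) - 1))

    def shortest_path_lengths(src):
        """Breadth-first search distances from src (graph is unweighted)."""
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    avg_clust = sum(clustering(i) for i in range(n)) / n
    path_sum = sum(d for i in range(n) for d in shortest_path_lengths(i).values())
    avg_path = path_sum / (n * (n - 1))
    print(f"mean clustering {avg_clust:.2f}, characteristic path length {avg_path:.2f}")
    ```

    In the study these metrics are computed per session and frequency band, and their test-retest agreement is then summarized with an ICC.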

  1. Estimation of lymphatic conductance. A model based on protein-kinetic studies and haemodynamic measurements in patients with cirrhosis of the liver and in pigs

    DEFF Research Database (Denmark)

    Henriksen, Jens Henrik Sahl

    1985-01-01

    A model of lymphatic conductivity (i.e. flow rate per unit pressure difference = conductance) based on protein-kinetic and haemodynamic measurements is described. The model is applied to data from patients with cirrhosis and from pigs with different haemodynamic abnormalities in the hepatosplanchnic area. ... compatible with increased sinusoidal wall tightening and fibrosis in the interstitial space of the liver. The model presented supports the so-called 'lymph-imbalance' theory of ascites formation, according to which a relatively insufficient lymph drainage is important in the pathogenesis of hepatic ascites.
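
    The core quantity here is simply flow rate per unit pressure difference. A one-line calculation with hypothetical hepatic lymph values makes the definition concrete:

    ```python
    # Lymphatic conductance = lymph flow / driving pressure difference.
    # The flow and pressure values below are hypothetical, for illustration only.

    def conductance(flow_ml_per_min, pressure_diff_mmHg):
        """Conductance (ml/min per mmHg)."""
        return flow_ml_per_min / pressure_diff_mmHg

    flow = 1.2      # hypothetical hepatic lymph flow, ml/min
    delta_p = 4.0   # hypothetical driving pressure difference, mmHg
    print(f"lymphatic conductance = {conductance(flow, delta_p):.2f} ml/min/mmHg")
    ```

    Under the 'lymph-imbalance' view, ascites forms when interstitial fluid production outstrips what this conductance can drain at the prevailing pressure gradient.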

  2. Crowdsourcing metrics of digital collections

    Directory of Open Access Journals (Sweden)

    Tuula Pääkkönen

    2015-12-01

    Full Text Available In the National Library of Finland (NLF there are millions of digitized newspaper and journal pages, which are openly available via the public website  http://digi.kansalliskirjasto.fi. To serve users better, last year the front end was completely overhauled with its main aim in crowdsourcing features, e.g., by giving end-users the opportunity to create digital clippings and a personal scrapbook from the digital collections. But how can you know whether crowdsourcing has had an impact? How much have the crowdsourcing functionalities been used so far? Did crowdsourcing work? In this paper the statistics and metrics of a recent crowdsourcing effort are analysed across the different digitized material types (newspapers, journals, ephemera. The subjects, categories and keywords given by the users are analysed to see which topics are the most appealing. Some notable public uses of the crowdsourced article clippings are highlighted. These metrics give us indications on how the end-users, based on their own interests, are investigating and using the digital collections. Therefore, the suggested metrics illustrate the versatility of the information needs of the users, varying from citizen science to research purposes. By analysing the user patterns, we can respond to the new needs of the users by making minor changes to accommodate the most active participants, while still making the service more approachable for those who are trying out the functionalities for the first time. Participation in the clippings and annotations can enrich the materials in unexpected ways and can possibly pave the way for opportunities of using crowdsourcing more also in research contexts. This creates more opportunities for the goals of open science since source data becomes available, making it possible for researchers to reach out to the general public for help. In the long term, utilizing, for example, text mining methods can allow these different end-user segments to

  3. Hybrid metric-Palatini stars

    Science.gov (United States)

    Danilǎ, Bogdan; Harko, Tiberiu; Lobo, Francisco S. N.; Mak, M. K.

    2017-02-01

    We consider the internal structure and the physical properties of specific classes of neutron, quark and Bose-Einstein condensate stars in the recently proposed hybrid metric-Palatini gravity theory, which is a combination of the metric and Palatini f (R ) formalisms. It turns out that the theory is very successful in accounting for the observed phenomenology, since it unifies local constraints at the Solar System level and the late-time cosmic acceleration, even if the scalar field is very light. In this paper, we derive the equilibrium equations for a spherically symmetric configuration (mass continuity and Tolman-Oppenheimer-Volkoff) in the framework of the scalar-tensor representation of the hybrid metric-Palatini theory, and we investigate their solutions numerically for different equations of state of neutron and quark matter, by adopting for the scalar field potential a Higgs-type form. It turns out that the scalar-tensor definition of the potential can be represented as a Clairaut differential equation, and provides an explicit form for f (R ) given by f (R ) ~ R + Λeff, where Λeff is an effective cosmological constant. Furthermore, stellar models, described by the stiff fluid, radiation-like, bag model and the Bose-Einstein condensate equations of state are explicitly constructed in both general relativity and hybrid metric-Palatini gravity, thus allowing an in-depth comparison between the predictions of these two gravitational theories. As a general result it turns out that for all the considered equations of state, hybrid gravity stars are more massive than their general relativistic counterparts. Furthermore, two classes of stellar models corresponding to two particular choices of the functional form of the scalar field (constant value, and logarithmic form, respectively) are also investigated. Interestingly enough, in the case of a constant scalar field the equation of state of the matter takes the form of the bag model equation of state describing
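
    The structure equations named above (mass continuity plus Tolman-Oppenheimer-Volkoff) can be sketched numerically. The toy below integrates them in plain general relativity, not in hybrid metric-Palatini gravity, for the bag-model equation of state the record mentions; units are geometric (G = c = 1) and the central density, bag constant, and step size are illustrative.

    ```python
    # Euler integration of the GR stellar-structure equations:
    #   dm/dr = 4*pi*r^2*rho
    #   dp/dr = -(rho + p)(m + 4*pi*r^3*p) / (r*(r - 2m))
    # with the bag-model equation of state p = (rho - 4B)/3, i.e. rho = 3p + 4B.
    import math

    def tov_bag_model(rho_c, bag_const, dr=0.005, r_max=50.0):
        """Integrate outward from the centre; returns (surface radius, mass)."""
        p = (rho_c - 4.0 * bag_const) / 3.0   # central pressure from the EOS
        r, m = dr, 0.0                        # start just off the centre
        while r < r_max and p > 0.0:          # surface is where pressure vanishes
            rho = 3.0 * p + 4.0 * bag_const
            dm = 4.0 * math.pi * r**2 * rho
            dp = -(rho + p) * (m + 4.0 * math.pi * r**3 * p) / (r * (r - 2.0 * m))
            m += dm * dr
            p += dp * dr
            r += dr
        return r, m

    radius, mass = tov_bag_model(rho_c=1.0e-3, bag_const=1.0e-4)
    print(f"stellar radius R = {radius:.2f}, mass M = {mass:.4f} (geometric units)")
    print(f"compactness 2M/R = {2 * mass / radius:.3f}")
    ```

    The paper's comparison then repeats this kind of integration with the modified hybrid-gravity field equations, finding more massive stars for the same equation of state.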

  4. Temporal variability of daily personal magnetic field exposure metrics in pregnant women.

    Science.gov (United States)

    Lewis, Ryan C; Evenson, Kelly R; Savitz, David A; Meeker, John D

    2015-01-01

    Recent epidemiology studies of power-frequency magnetic fields and reproductive health have characterized exposures using data collected from personal exposure monitors over a single day, possibly resulting in exposure misclassification due to temporal variability in daily personal magnetic field exposure metrics, but relevant data in adults are limited. We assessed the temporal variability of daily central tendency (time-weighted average, median) and peak (upper percentiles, maximum) personal magnetic field exposure metrics over 7 consecutive days in 100 pregnant women. When exposure was modeled as a continuous variable, central tendency metrics had substantial reliability, whereas peak metrics had fair (maximum) to moderate (upper percentiles) reliability. The predictive ability of a single-day metric to accurately classify participants into exposure categories based on a weeklong metric depended on the selected exposure threshold, with sensitivity decreasing with increasing exposure threshold. Consistent with the continuous measures analysis, sensitivity was higher for central tendency metrics than for peak metrics. If there is interest in peak metrics, more than 1 day of measurement is needed over the window of disease susceptibility to minimize measurement error, but 1 day may be sufficient for central tendency metrics.

  5. Comparison of Liver Biopsy and Transient Elastography based on Clinical Relevance

    Directory of Open Access Journals (Sweden)

    Ryota Masuzaki

    2008-01-01

    Full Text Available BACKGROUND: Liver stiffness measurement (LSM by transient elastography has recently been validated for the evaluation of liver fibrosis in chronic liver diseases. The present study focused on cases in which liver biopsy and LSM were discordant.

  6. Scintigraphic assessment of liver function in patients requiring liver surgery

    NARCIS (Netherlands)

    Cieślak, K.P.

    2018-01-01

    This thesis addresses various aspects of assessment of liver function using a quantitative liver function test, 99mTc-mebrofenin hepatobiliary scintigraphy (HBS). HBS enables direct measurement of at least one of the liver’s true processes with minimal external interference and offers the

  7. Metrics for supporting the use of Modularisation in IPD

    DEFF Research Database (Denmark)

    Riitahuhta, Asko; Andreasen, Mogens Myrup

    1998-01-01

    The measuring of Modularisation is a relatively new subject. Because the Modularisation is gaining more importance in a remarkable way, it is necessary to create measurement systems for it. In the paper we present the theory base for metrics; business relations, the view to the Modularisation stage...

  8. Metrics for Evaluation of Student Models

    Science.gov (United States)

    Pelanek, Radek

    2015-01-01

    Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…

  9. Context-dependent ATC complexity metric

    NARCIS (Netherlands)

    Mercado Velasco, G.A.; Borst, C.

    2015-01-01

    Several studies have investigated Air Traffic Control (ATC) complexity metrics in a search for a metric that could best capture workload. These studies have shown how daunting the search for a universal workload metric (one that could be applied in different contexts: sectors, traffic patterns,

  10. Properties of C-metric spaces

    Science.gov (United States)

    Croitoru, Anca; Apreutesei, Gabriela; Mastorakis, Nikos E.

    2017-09-01

    The subject of this paper belongs to the theory of approximate metrics [23]. An approximate metric on X is a real application defined on X × X that satisfies only a part of the metric axioms. In a recent paper [23], we introduced a new type of approximate metric, named C-metric, that is an application which satisfies only two metric axioms: symmetry and triangular inequality. The remarkable fact in a C-metric space is that a topological structure induced by the C-metric can be defined. The innovative idea of this paper is that we obtain some convergence properties of a C-metric space in the absence of a metric. In this paper we investigate C-metric spaces. The paper is divided into four sections. Section 1 is for Introduction. In Section 2 we recall some concepts and preliminary results. In Section 3 we present some properties of C-metric spaces, such as convergence properties, a canonical decomposition and a C-fixed point theorem. Finally, in Section 4 some conclusions are highlighted.
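    As an illustration of the definition (a sketch, not code from the paper; the function name and the example distance are assumptions): on a finite point set one can check the two C-metric axioms directly, and exhibit an application that is a C-metric without being a metric.

```python
from itertools import product

def is_c_metric(points, d, tol=1e-12):
    """Check the two C-metric axioms on a finite set: symmetry and the triangle inequality."""
    symmetric = all(abs(d(x, y) - d(y, x)) <= tol
                    for x, y in product(points, repeat=2))
    triangle = all(d(x, z) <= d(x, y) + d(y, z) + tol
                   for x, y, z in product(points, repeat=3))
    return symmetric and triangle

# d(x, y) = |x^2 - y^2| is symmetric and satisfies the triangle inequality,
# yet d(-1, 1) = 0 for distinct points, so it is a C-metric but not a metric.
d = lambda x, y: abs(x * x - y * y)
```

    The example shows why the paper's convergence results are non-trivial: distinct points can be at C-distance zero, so limits need not be unique.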

  11. Changes in foetal liver T2* measurements by MRI in response to maternal oxygen breathing: application to diagnosing foetal growth restriction

    International Nuclear Information System (INIS)

    Morris, David M; Semple, Scott IK; Gilbert, Fiona J; Redpath, Thomas W; Ross, John AS; McVicar, Alexandra; Haggarty, Paul; Abramovich, David R; Smith, Norman

    2010-01-01

    The motivation of the project was to investigate the use of oxygen-challenge magnetic resonance imaging (OC-MRI) as a method of diagnosing foetal growth restriction. Foetal growth restriction is associated with restricted foetal oxygen supply and is also associated with increased risks of perinatal mortality and morbidity, and a number of serious and chronic health problems. Measurements of T2* relaxation time, an MRI parameter which increases with blood oxygenation, were made in the right lobe of the foetal liver in 80 singleton pregnancies, before and after the mother breathed oxygen. The groups consisted of 41 foetuses with normal growth and 39 with apparent growth restriction. The mean ± SD gestational age at scanning was 35 ± 3 weeks. Changes in foetal liver T2* on maternal oxygen breathing showed no significant difference between the groups; therefore, the OC-MRI protocol used in this study has no value in the diagnosis of foetal growth restriction. A secondary finding was that a significant positive correlation of T2* change with gestational age was observed. Future studies on the use of oxygen-challenge MRI to investigate foetal growth restriction may therefore need to control for gestational age at the time of MR scanning in order to observe any underlying foetal growth-related effects

  12. Measurement of Thermal Conductivity of Porcine Liver in the Temperature Range of Cryotherapy and Hyperthermia (250~315 K) by a Thermal Sensor Made of a Micron-Scale Enameled Copper Wire.

    Science.gov (United States)

    Jiang, Z D; Zhao, G; Lu, G R

      BACKGROUND: Cryotherapy and hyperthermia are effective treatments for several diseases, especially for liver cancers. Thermal conductivity is a significant thermal property for the prediction and guidance of surgical procedures. However, thermal conductivity data for organs and tissues, especially over the temperature range covering both cryotherapy and hyperthermia, are scarce. The aim was to provide comprehensive thermal conductivity data for liver over both the cryotherapy and hyperthermia ranges. A hot probe made of a stainless-steel needle and a micron-sized copper wire is used for measurement. To verify the data processing, both the least-squares method and the Monte Carlo inversion method are used to determine the hot-probe constants, with reference materials of water and a 29.9 % CaCl2 aqueous solution. Then the thermal conductivities of Hanks solution and porcine liver bathed in Hanks solution are measured. The effective length obtained by the two methods is nearly the same, but the heat capacity of the probe calibrated by the Monte Carlo inversion is temperature dependent. The fairly comprehensive thermal conductivities of porcine liver measured with these two methods over the target temperature range are verified to be similar. We provide an integrated thermal conductivity of liver for cryotherapy and hyperthermia by two methods, making more accurate predictions possible for surgery. The least-squares method and the Monte Carlo inversion method have their respective advantages and disadvantages: the least-squares method is suitable for measurement of liquids that are not prone to convection, or of solids, over a wide temperature range, while the Monte Carlo inversion method is suitable for accurate and rapid measurement.

  13. Cophenetic metrics for phylogenetic trees, after Sokal and Rohlf.

    Science.gov (United States)

    Cardona, Gabriel; Mir, Arnau; Rosselló, Francesc; Rotger, Lucía; Sánchez, David

    2013-01-16

    Phylogenetic tree comparison metrics are an important tool in the study of evolution, and hence the definition of such metrics is an interesting problem in phylogenetics. In a paper in Taxon fifty years ago, Sokal and Rohlf proposed to measure quantitatively the difference between a pair of phylogenetic trees by first encoding them by means of their half-matrices of cophenetic values, and then comparing these matrices. This idea has been used several times since then to define dissimilarity measures between phylogenetic trees but, to our knowledge, no proper metric on weighted phylogenetic trees with nested taxa based on this idea has been formally defined and studied yet. Actually, the cophenetic values of pairs of different taxa alone are not enough to single out phylogenetic trees with weighted arcs or nested taxa. For every (rooted) phylogenetic tree T, let its cophenetic vector φ(T) consist of all pairs of cophenetic values between pairs of taxa in T and all depths of taxa in T. It turns out that these cophenetic vectors single out weighted phylogenetic trees with nested taxa. We then define a family of cophenetic metrics dφ,p by comparing these cophenetic vectors by means of Lp norms, and we study, either analytically or numerically, some of their basic properties: neighbors, diameter, distribution, and their rank correlation with each other and with other metrics. The cophenetic metrics can be safely used on weighted phylogenetic trees with nested taxa and no restriction on degrees, and they can be computed in O(n²) time, where n stands for the number of taxa. The metrics dφ,1 and dφ,2 have positive skewed distributions, and they show a low rank correlation with the Robinson-Foulds metric and the nodal metrics, and a very high correlation with each other and with the splitted nodal metrics. The diameter of dφ,p, for p ⩾ 1, is in O(n^((p+2)/p)), and thus for low p they are more discriminative, having a wider range of values.
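    The construction can be sketched as follows (illustrative code, not the authors' implementation; the tree representation and function names are assumptions): encode each rooted weighted tree by its cophenetic vector, taking the depth of the lowest common ancestor of every pair of taxa plus each taxon's own depth, then compare two vectors with an Lp norm.

```python
from itertools import combinations

def node_depths(parent, weight):
    """Depth of every node = sum of edge weights on the path up to the root."""
    depth = {}
    def d(v):
        if v not in depth:
            depth[v] = 0.0 if parent[v] is None else d(parent[v]) + weight[v]
        return depth[v]
    for v in parent:
        d(v)
    return depth

def cophenetic_vector(parent, weight, taxa):
    """phi(T): cophenetic value (LCA depth) of each taxon pair, plus each taxon's depth."""
    depth = node_depths(parent, weight)
    def ancestors(v):
        path = []
        while v is not None:
            path.append(v)
            v = parent[v]
        return path
    phi = {}
    for a, b in combinations(sorted(taxa), 2):
        anc_a = set(ancestors(a))
        lca = next(u for u in ancestors(b) if u in anc_a)  # deepest shared ancestor
        phi[(a, b)] = depth[lca]
    for t in taxa:
        phi[(t, t)] = depth[t]
    return phi

def d_phi_p(phi1, phi2, p=2):
    """L_p distance between two cophenetic vectors over the same taxon set."""
    keys = set(phi1) | set(phi2)
    return sum(abs(phi1.get(k, 0.0) - phi2.get(k, 0.0)) ** p for k in keys) ** (1.0 / p)
```

    With all pairwise LCA depths computed once, the comparison itself runs over the O(n²) entries of the vectors, matching the complexity quoted in the abstract.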

  14. Assessment of liver volume with spiral computerized tomography scanning: predicting liver volume by age and height

    OpenAIRE

    Madhu Sharma; Abhishek Singh; Shewtank Goel; Setu Satani; Kavita Mudgil

    2016-01-01

    Background: Estimation of liver size has critical clinical implications. Precise knowledge of liver dimensions and volume is a prerequisite for clinical assessment of liver disorders. Liver span as measured by palpation and USG is prone to inter-observer variability and poor repeatability. The aim was to assess the normal liver volume of healthy adults using spiral computed tomography scans and to observe its relationship with various body indices. Methods: In this prospective study, all the...

  15. Circulating dipeptidyl peptidase-4 activity correlates with measures of hepatocyte apoptosis and fibrosis in non-alcoholic fatty liver disease in type 2 diabetes mellitus and obesity: A dual cohort cross-sectional study.

    Science.gov (United States)

    Williams, Kathryn H; Vieira De Ribeiro, Ana Júlia; Prakoso, Emilia; Veillard, Anne-Sophie; Shackel, Nicholas A; Brooks, Belinda; Bu, Yangmin; Cavanagh, Erika; Raleigh, Jim; McLennan, Susan V; McCaughan, Geoffrey W; Keane, Fiona M; Zekry, Amany; Gorrell, Mark D; Twigg, Stephen M

    2015-11-01

    Intrahepatic expression of dipeptidyl peptidase-4 (DPP4), and circulating DPP4 (cDPP4) levels and its enzymatic activity, are increased in non-alcoholic fatty liver disease (NAFLD) and in type 2 diabetes mellitus and/or obesity. DPP4 has been implicated as a causative factor in NAFLD progression but few studies have examined associations between cDPP4 activity and NAFLD severity in humans. This study aimed to examine the relationship of cDPP4 activity with measures of liver disease severity in NAFLD in subjects with diabetes and/or obesity. cDPP4 was measured in 106 individuals with type 2 diabetes who had transient elastography (Cohort 1) and 145 individuals with morbid obesity who had liver biopsy (Cohort 2). Both cohorts had caspase-cleaved keratin-18 (ccK18) measured as a marker of apoptosis. Natural log increases in cDPP4 activity were associated with increasing quartiles of ccK18 (Cohorts 1 and 2) and with median liver stiffness ≥10.3 kPa (Cohort 1) and significant fibrosis (F ≥ 2) on liver biopsy (Cohort 2). In diabetes and/or obesity, cDPP4 activity is associated with current apoptosis and liver fibrosis. Given the pathogenic mechanisms by which DPP4 may drive NAFLD progression, measurement of cDPP4 activity may have utility to predict disease progression, and DPP4 inhibition may improve liver histology over time.

  16. On characterizations of quasi-metric completeness

    Energy Technology Data Exchange (ETDEWEB)

    Dag, H.; Romaguera, S.; Tirado, P.

    2017-07-01

    Hu proved in [4] that a metric space (X, d) is complete if and only if, for any closed subspace C of (X, d), every Banach contraction on C has a fixed point. Since then several authors have investigated the problem of characterizing metric completeness by means of fixed point theorems. Recently this problem has been studied in the more general context of quasi-metric spaces for different notions of completeness. Here we present a characterization of a kind of completeness for quasi-metric spaces by means of a quasi-metric version of Hu's theorem. (Author)

  17. Prioritizing Urban Habitats for Connectivity Conservation: Integrating Centrality and Ecological Metrics.

    Science.gov (United States)

    Poodat, Fatemeh; Arrowsmith, Colin; Fraser, David; Gordon, Ascelin

    2015-09-01

    Connectivity among fragmented areas of habitat has long been acknowledged as important for the viability of biological conservation, especially within highly modified landscapes. Identifying habitat patches that are important for ecological connectivity is a priority for many conservation strategies, and the application of 'graph theory' has been shown to provide useful information on connectivity. Despite the large number of connectivity metrics derived from graph theory, only a small number have been compared in terms of the importance they assign to nodes in a network. This paper presents a study that aims to define a new set of metrics and compares these with traditional graph-based metrics used in the prioritization of habitat patches for ecological connectivity. The metrics measured consist of "topological" metrics, "ecological" metrics, and "integrated" metrics; integrated metrics are a combination of topological and ecological metrics. Eight metrics were applied to the habitat network for the fat-tailed dunnart within Greater Melbourne, Australia. A non-directional network was developed in which nodes were linked to adjacent nodes. These links were then weighted by the effective distance between patches. By applying each of the eight metrics to the study network, nodes were ranked according to their contribution to the overall network connectivity. The structured comparison revealed the similarities and differences in the way the habitat for the fat-tailed dunnart was ranked based on different classes of metrics. Due to the differences in the way the metrics operate, a suitable metric should be chosen that best meets the objectives established by the decision maker.
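    As a toy illustration of graph-based patch ranking (not the study's eight metrics; the graph encoding and function names are assumptions): with links weighted by effective distance, patches can be ranked by, for example, weighted closeness centrality.

```python
import heapq

def dijkstra(graph, source):
    """Shortest effective distance from source to every reachable node.

    graph maps node -> {neighbor: edge weight (effective distance)}.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def closeness_ranking(graph):
    """Rank patches by closeness centrality: reached patches / summed distance."""
    scores = {}
    for node in graph:
        dist = dijkstra(graph, node)
        others = [dist[v] for v in graph if v != node and v in dist]
        scores[node] = (len(others) / sum(others)) if others else 0.0
    return sorted(scores, key=scores.get, reverse=True)
```

    In a small path network A-B-C, the stepping-stone patch B ranks first, mirroring how such metrics flag patches whose loss would most fragment the network.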

  18. The Metric of Colour Space

    DEFF Research Database (Denmark)

    Gravesen, Jens

    2015-01-01

    The space of colours is a fascinating space. It is a real vector space, but no matter what inner product you put on the space the resulting Euclidean distance does not correspond to human perception of difference between colours. In 1942 MacAdam performed the first experiments on colour matching and found the MacAdam ellipses, which are often interpreted as defining the metric tensor at their centres. An important question is whether it is possible to define colour coordinates such that the Euclidean distance in these coordinates corresponds to human perception. Using cubic splines to represent…

  19. Product Operations Status Summary Metrics

    Science.gov (United States)

    Takagi, Atsuya; Toole, Nicholas

    2010-01-01

    The Product Operations Status Summary Metrics (POSSUM) computer program provides a readable view into the state of the Phoenix Operations Product Generation Subsystem (OPGS) data pipeline. POSSUM provides a user interface that can search the data store, collect product metadata, and display the results in an easily-readable layout. It was designed with flexibility in mind for support in future missions. Flexibility over various data store hierarchies is provided through the disk-searching facilities of Marsviewer. This is a proven program that has been in operational use since the first day of the Phoenix mission.

  20. A Three-Dimensional Receiver Operator Characteristic Surface Diagnostic Metric

    Science.gov (United States)

    Simon, Donald L.

    2011-01-01

    Receiver Operator Characteristic (ROC) curves are commonly applied as metrics for quantifying the performance of binary fault detection systems. An ROC curve provides a visual representation of a detection system's True Positive Rate versus False Positive Rate sensitivity as the detection threshold is varied. The area under the curve provides a measure of fault detection performance independent of the applied detection threshold. While the standard ROC curve is well suited for quantifying binary fault detection performance, it is not suitable for quantifying the classification performance of multi-fault classification problems. Furthermore, it does not provide a measure of diagnostic latency. To address these shortcomings, a novel three-dimensional receiver operator characteristic (3D ROC) surface metric has been developed. This is done by generating and applying two separate curves: the standard ROC curve reflecting fault detection performance, and a second curve reflecting fault classification performance. A third dimension, diagnostic latency, is added, giving rise to 3D ROC surfaces. Applying numerical integration techniques, the volumes under and between the surfaces are calculated to produce metrics of the diagnostic system's detection and classification performance. This paper will describe the 3D ROC surface metric in detail, and present an example of its application for quantifying the performance of aircraft engine gas path diagnostic methods. Metric limitations and potential enhancements are also discussed.
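    The two-dimensional building block of the surface described above can be sketched as follows (illustrative code, not the paper's implementation): trace the ROC curve by sweeping the detection threshold, then integrate the area under it with the trapezoidal rule; the 3D metric extends this integration to volumes.

```python
def roc_points(scores, labels):
    """(FPR, TPR) pairs from sweeping the detection threshold over all scores.

    labels are 0/1; assumes both classes are present.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for thr in sorted(set(scores)) + [float("inf")]:
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        pts.append((fp / neg, tp / pos))
    return sorted(pts)

def auc(points):
    """Trapezoidal-rule area under the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area
```

    A perfect detector yields an area of 1.0 and a fully inverted one 0.0, which is what makes the area a threshold-independent performance measure.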

  1. Polymerase-free measurement of microRNA-122 with single base specificity using single molecule arrays: Detection of drug-induced liver injury.

    Directory of Open Access Journals (Sweden)

    David M Rissin

    Full Text Available We have developed a single-probe method for detecting microRNA from human serum using single molecule arrays, with sequence specificity down to a single base, and without the use of amplification by polymerases. An abasic peptide nucleic acid (PNA) probe (containing a reactive amine instead of a nucleotide at a specific position in the sequence) for detecting a microRNA was conjugated to superparamagnetic beads. These beads were incubated with a sample containing microRNA, a biotinylated reactive nucleobase (containing an aldehyde group) that was complementary to the missing base in the probe sequence, and a reducing agent. When a target molecule with an exact match in sequence hybridized to the capture probe, the reactive nucleobase was covalently attached to the backbone of the probe by a dynamic covalent chemical reaction. Single molecules of the biotin-labeled probe were then labeled with streptavidin-β-galactosidase (SβG), the beads were resuspended in a fluorogenic enzyme substrate, loaded into an array of femtoliter wells, and sealed with oil. The array was imaged fluorescently to determine which beads were associated with single enzymes, and the average number of enzymes per bead was determined. The assay had a limit of detection of 500 fM, approximately 500 times more sensitive than a corresponding analog bead-based assay, with target specificity down to a single-base mismatch. This assay was used to measure microRNA-122 (miR-122), an established biomarker of liver toxicity, extracted from the serum of patients who had acute liver injury due to acetaminophen, and of healthy control patients. All patients with liver injury had higher levels of miR-122 in their serum compared to controls, and the concentrations measured correlated well with those determined using RT-qPCR. This approach allows rapid quantification of circulating microRNA with single-base specificity and a limit of quantification suitable for clinical use.

  2. Fanpage metrics analysis. "Study on content engagement"

    Science.gov (United States)

    Rahman, Zoha; Suberamanian, Kumaran; Zanuddin, Hasmah Binti; Moghavvemi, Sedigheh; Nasir, Mohd Hairul Nizam Bin Md

    2016-08-01

    Social media is now recognized as an excellent communicative tool to connect directly with consumers. One of the most significant ways to connect with consumers through these Social Networking Sites (SNS) is to create a Facebook fanpage with brand contents and to place different posts periodically on these fanpages. In measuring social networking sites' effectiveness, corporate houses are now analyzing metrics in terms of calculating engagement rate, number of comments/shares and likes on fanpages. It is therefore very important for marketers to know the effectiveness of different contents or posts on fanpages in order to increase fan responsiveness and the engagement rate. In the study the authors analyzed a total of 1834 brand posts from 17 international electronics brands. Nine months of data (December 2014 to August 2015), available online on the brands' fanpages, were collected for analysis. An econometric analysis was conducted using EViews 9 to determine the impact of different contents on fanpage engagement. The study picked the four most frequently posted content types to determine their impact on PTA (People Talking About) metrics and fanpage engagement activities.

  3. Value of the Company and Marketing Metrics

    Directory of Open Access Journals (Sweden)

    André Luiz Ramos

    2013-12-01

    Full Text Available Thinking about marketing strategies from a resource-based perspective (Barney, 1991), which classifies assets as tangible, organizational, and human, and from Constantin and Lusch's vision (1994), where strategic resources can be tangible or intangible, internal or external to the firm, raises a research approach on Marketing and Finance. According to Srivastava, Shervani and Fahey (1998) there are three types of market assets, which generate firm value. Firm value can be measured by discounted cash flow, tying marketing activities to value-generation forecasts (Anderson, 1982; Day, Fahey, 1988; Doyle, 2000; Rust et al., 2004a). The economic value of marketing strategies and marketing metrics is attracting the attention of strategy researchers and marketing managers, making clear the need to build a bridge able to articulate marketing and finance from a strategic perspective. This article proposes an analytical framework based on different scientific approaches involving the risk and return promoted by marketing strategies, and points out advances concerning both methodological approaches and marketing strategies and their impact on firm metrics and value, using Srinivasan and Hanssens (2009) as a starting point.
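    The discounted cash flow valuation invoked above can be sketched in a few lines (illustrative; the function name and the cash-flow figures are assumptions, not from the article):

```python
def firm_value_dcf(cash_flows, rate):
    """Present value of forecast cash flows at a constant discount rate.

    cash_flows[t] is the cash flow expected at the end of year t+1.
    """
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# A marketing strategy expected to add 10, 12 and 15 (monetary units)
# over three years, discounted at 10%, adds this much firm value today:
value = firm_value_dcf([10.0, 12.0, 15.0], 0.10)
```

    Raising the discount rate (i.e. the perceived risk of the marketing strategy) lowers the value attributed to it, which is exactly the risk-and-return trade-off the framework articulates.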

  4. Covariant electrodynamics in linear media: Optical metric

    Science.gov (United States)

    Thompson, Robert T.

    2018-03-01

    While the postulate of covariance of Maxwell's equations for all inertial observers led Einstein to special relativity, it was the further demand of general covariance—form invariance under general coordinate transformations, including between accelerating frames—that led to general relativity. Several lines of inquiry over the past two decades, notably the development of metamaterial-based transformation optics, have spurred a greater interest in the role of geometry and space-time covariance for electrodynamics in ponderable media. I develop a generally covariant, coordinate-free framework for electrodynamics in general dielectric media residing in curved background space-times. In particular, I derive a relation for the spatial medium parameters measured by an arbitrary timelike observer. In terms of those medium parameters I derive an explicit expression for the pseudo-Finslerian optical metric of birefringent media and show how it reduces to a pseudo-Riemannian optical metric for nonbirefringent media. This formulation provides a basis for a unified approach to ray and congruence tracing through media in curved space-times that may smoothly vary among positively refracting, negatively refracting, and vacuum.

  5. The Finsler spacetime framework. Backgrounds for physics beyond metric geometry

    Energy Technology Data Exchange (ETDEWEB)

    Pfeifer, Christian

    2013-11-15

    The fundamental structure on which physics is described is the geometric spacetime background provided by a four dimensional manifold equipped with a Lorentzian metric. Most importantly the spacetime manifold does not only provide the stage for physical field theories but its geometry encodes causality, observers and their measurements and gravity simultaneously. This threefold role of the Lorentzian metric geometry of spacetime is one of the key insights of general relativity. In this thesis we extend the background geometry for physics from the metric framework of general relativity to our Finsler spacetime framework and ensure that the threefold role of the geometry of spacetime in physics is not changed. The geometry of Finsler spacetimes is determined by a function on the tangent bundle and includes metric geometry. In contrast to the standard formulation of Finsler geometry our Finsler spacetime framework overcomes the differentiability and existence problems of the geometric objects in earlier attempts to use Finsler geometry as an extension of Lorentzian metric geometry. The development of our nonmetric geometric framework which encodes causality is one central achievement of this thesis. On the basis of our well-defined Finsler spacetime geometry we are able to derive dynamics for the non-metric Finslerian geometry of spacetime from an action principle, obtained from the Einstein-Hilbert action, for the first time. We can complete the dynamics to a non-metric description of gravity by coupling matter fields, also formulated via an action principle, to the geometry of our Finsler spacetimes. We prove that the combined dynamics of the fields and the geometry are consistent with general relativity. Furthermore we demonstrate how to define observers and their measurements solely through the non-metric spacetime geometry. Physical consequences derived on the basis of our Finsler spacetimes are: a possible solution to the fly-by anomaly in the solar system; the

  6. The Finsler spacetime framework. Backgrounds for physics beyond metric geometry

    International Nuclear Information System (INIS)

    Pfeifer, Christian

    2013-11-01

    The fundamental structure on which physics is described is the geometric spacetime background provided by a four dimensional manifold equipped with a Lorentzian metric. Most importantly the spacetime manifold does not only provide the stage for physical field theories but its geometry encodes causality, observers and their measurements and gravity simultaneously. This threefold role of the Lorentzian metric geometry of spacetime is one of the key insights of general relativity. In this thesis we extend the background geometry for physics from the metric framework of general relativity to our Finsler spacetime framework and ensure that the threefold role of the geometry of spacetime in physics is not changed. The geometry of Finsler spacetimes is determined by a function on the tangent bundle and includes metric geometry. In contrast to the standard formulation of Finsler geometry our Finsler spacetime framework overcomes the differentiability and existence problems of the geometric objects in earlier attempts to use Finsler geometry as an extension of Lorentzian metric geometry. The development of our nonmetric geometric framework which encodes causality is one central achievement of this thesis. On the basis of our well-defined Finsler spacetime geometry we are able to derive dynamics for the non-metric Finslerian geometry of spacetime from an action principle, obtained from the Einstein-Hilbert action, for the first time. We can complete the dynamics to a non-metric description of gravity by coupling matter fields, also formulated via an action principle, to the geometry of our Finsler spacetimes. We prove that the combined dynamics of the fields and the geometry are consistent with general relativity. Furthermore we demonstrate how to define observers and their measurements solely through the non-metric spacetime geometry. Physical consequences derived on the basis of our Finsler spacetimes are: a possible solution to the fly-by anomaly in the solar system; the

  7. Relationship Between Hepatic Albumin and Sulphate Synthesis and its Use in Measurement of the Absolute Rate of Synthesis of Liver-Produced Plasma Proteins

    Energy Technology Data Exchange (ETDEWEB)

    Awwad, H. K.; Sheraki, A. S. [Department of Radiology and Radiological Sciences, Cancer Institute, University of Cairo, Cairo (Egypt); Radioisotope Unit, Medical Research Institute, Alexandria (Egypt)

    1971-02-15

    A model is proposed whereby serum albumin synthesis is expressed in terms of production of inorganic sulphate in the liver and the entire organism, following the administration of {sup 35}S-L-cystine. The basic assumption involved is that the precursor amino acid pool for albumin synthesis in the liver is either identical with that of inorganic sulphate synthesis or that the two pools concerned are in rapid equilibrium with each other so that they can be treated as a single pool. The feasibility of the proposed model was tested by comparing the synthesis rate of rat serum albumin with the catabolic rate of the radioiodinated protein measured in the same animal. A good agreement between the two rates was noted in a group of adult rats, whereas an excess of anabolism was noted in young growing animals. In rats fed a low-protein diet, the synthesis rate exceeded the catabolic rate, both being subnormal. The equilibrium between hepatic and plasma radiosulphate concentration was complete within four hours following the injection of {sup 35}S-cystine. The total radiosulphate production could then be evaluated after such an interval from the urinary excretion and serum concentration multiplied by the volume of the sulphate space. Lack of significant re-utilization was demonstrated following the injection of radiosulphate. This is a decided advantage of the proposed method. However, extensive re-utilization of selenate selenium in the synthesis of the seleno-analogues of sulphur-amino acids was shown. This could explain the poor yield of radioselenate following the injection of {sup 75}Se-selenocystine and precludes the use of the latter agent as a tracer for measurement of synthesis of plasma proteins. (author)

  8. Intraprocedural blood volume measurement using C-arm CT as a predictor for treatment response of malignant liver tumours undergoing repetitive transarterial chemoembolization (TACE)

    International Nuclear Information System (INIS)

    Vogl, Thomas J.; Schaefer, Patrik; Lehnert, Thomas; Mbalisike, Emmanuel; Hammerstingl, Renate; Eichler, Katrin; Zangos, Stephan; Nour-Eldin, Nour-Eldin A.; Ackermann, Hanns; Naguib, Nagy N.N.

    2016-01-01

To evaluate the feasibility of measuring the parenchymal blood volume (PBV) of malignant hepatic tumours using C-arm CT, to test the changes in PBV following repeated transarterial chemoembolization (TACE), and to correlate these changes with the change in tumour size on MRI. A total of 111 patients with liver malignancy were included. Patients underwent MRI and TACE at a 4- to 6-week interval. During the intervention, C-arm CT was performed. Images were post-processed to generate PBV maps. Blood volume data in C-arm CT and change in size on MRI were evaluated. The correlation between PBV and size was tested using the Spearman rank test. Pre-interventional PBV maps showed a mean blood volume of 84.5 ml/1000 ml ± 62.0; follow-up PBV maps after multiple TACE demonstrated 61.1 ml/1000 ml ± 57.5. The change in PBV was statistically significant (p = 0.02). Patients with an initial tumour blood volume >100 ml/1000 ml dropped 7.1 % in size and 47.2 % in blood volume; 50-100 ml/1000 ml dropped 4.6 % in size and 25.7 % in blood volume; and <50 ml/1000 ml decreased 2.8 % in size and increased 82.2 % in blood volume. PBV measurement of malignant liver tumours using C-arm CT is feasible. Following TACE, PBV decreased significantly. Patients with a low initial PBV showed low local response rates and a further increase in blood volume, whereas those with a high initial tumour PBV showed a better response to TACE. (orig.)
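The correlation test named in this record, Spearman's rank correlation between change in blood volume and change in size, can be sketched without external libraries. All data values below are invented for illustration; they are not the study's data.

```python
# A minimal, dependency-free sketch of the Spearman rank correlation used
# above to relate blood-volume change to size change. Values are invented.

def rank(xs):
    """Average ranks (1-based); ties share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Pearson correlation of the rank-transformed data."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

pbv_change = [-40.2, -15.1, -3.3, 10.5, 82.0]   # ml/1000 ml (hypothetical)
size_change = [-7.1, -4.6, -2.8, -1.0, 1.5]     # percent (hypothetical)
print(spearman_rho(pbv_change, size_change))    # 1.0 for monotone data
```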

  9. Metrics for building performance assurance

    Energy Technology Data Exchange (ETDEWEB)

    Koles, G.; Hitchcock, R.; Sherman, M.

    1996-07-01

This report documents part of the work performed in phase I of a Laboratory Directed Research and Development (LDRD) funded project entitled Building Performance Assurance (BPA). The focus of the BPA effort is to transform the way buildings are built and operated in order to improve building performance by facilitating or providing tools, infrastructure, and information. The efforts described herein focus on the development of metrics with which to evaluate building performance and for which information and optimization tools need to be developed. The classes of building performance metrics reviewed are (1) Building Services, (2) First Costs, (3) Operating Costs, (4) Maintenance Costs, and (5) Energy and Environmental Factors. The first category defines the direct benefits associated with buildings; the next three are different kinds of costs associated with providing those benefits; the last category includes concerns that are broader than the direct costs and benefits to the building owner and building occupants. The level of detail of the various issues reflects the current state of knowledge in those scientific areas and the authors' ability to determine that state of knowledge, rather than directly reflecting the importance of these issues; the report intentionally does not focus specifically on energy issues. The report describes work in progress, is intended as a resource, and can be used to indicate the areas needing more investigation. Other reports on BPA activities are also available.

  10. Adapting the Surgical Apgar Score for Perioperative Outcome Prediction in Liver Transplantation: A Retrospective Study

    Directory of Open Access Journals (Sweden)

    Amy C. S. Pearson, MD

    2017-11-01

    Conclusions. The SAS-LT utilized simple intraoperative metrics to predict early morbidity and mortality after liver transplant with similar accuracy to other scoring systems at an earlier postoperative time point.

  11. A condition metric for Eucalyptus woodland derived from expert evaluations.

    Science.gov (United States)

    Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D

    2018-02-01

    The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
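The model-building step in this record, an ensemble of 30 bagged regression trees mapping site variables to an expert quality score, can be illustrated with a toy, dependency-free version. To stay short, this sketch uses depth-1 trees (stumps) rather than full regression trees; the site variables, target scores, and learner details are invented and far simpler than the paper's.

```python
# Toy sketch of the metric-building idea: an ensemble of bagged regression
# stumps (depth-1 trees) predicting a quality score from site variables.
# All data and names are invented for illustration.
import random

def fit_stump(X, y, feature):
    """Best single-feature threshold split minimising squared error."""
    best = None
    for t in sorted({row[feature] for row in X}):
        left = [yi for row, yi in zip(X, y) if row[feature] <= t]
        right = [yi for row, yi in zip(X, y) if row[feature] > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((yi - ml) ** 2 for yi in left)
               + sum((yi - mr) ** 2 for yi in right))
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    if best is None:  # degenerate bootstrap sample: fall back to the mean
        m = sum(y) / len(y)
        return lambda row: m
    _, t, ml, mr = best
    return lambda row: ml if row[feature] <= t else mr

def bagged_ensemble(X, y, n_trees=30, seed=0):
    """Average of stumps, each fit on a bootstrap sample of the data."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # bootstrap
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        stumps.append(fit_stump(Xb, yb, feature=rng.randrange(len(X[0]))))
    return lambda row: sum(s(row) for s in stumps) / len(stumps)

# site variables: [shrub cover %, native forb richness]; target: quality 0-100
X = [[5, 2], [10, 4], [40, 10], [60, 12], [80, 15]]
y = [20, 35, 60, 75, 90]
predict = bagged_ensemble(X, y)
print(round(predict([70, 13])))  # a high-quality site scores high
```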

  12. The Relationship between Type 2 Diabetes Mellitus and Non-Alcoholic Fatty Liver Disease Measured by Controlled Attenuation Parameter.

    Science.gov (United States)

    Chon, Young Eun; Kim, Kwang Joon; Jung, Kyu Sik; Kim, Seung Up; Park, Jun Yong; Kim, Do Young; Ahn, Sang Hoon; Chon, Chae Yoon; Chung, Jae Bock; Park, Kyeong Hye; Bae, Ji Cheol; Han, Kwang Hyub

    2016-07-01

The severity of non-alcoholic fatty liver disease (NAFLD) in the type 2 diabetes mellitus (T2DM) population compared with that in normal glucose tolerance (NGT) individuals has not yet been quantitatively assessed. We investigated the prevalence and severity of NAFLD in a T2DM population using the controlled attenuation parameter (CAP). Subjects who underwent testing for biomarkers related to T2DM and CAP using Fibroscan® during a regular health check-up were enrolled. CAP values of 250 dB/m and 300 dB/m were selected as the cutoffs for the presence of NAFLD and for moderate to severe NAFLD, respectively. Biomarkers related to T2DM included fasting glucose/insulin, fasting C-peptide, hemoglobin A1c (HbA1c), glycoalbumin, and the homeostasis model assessment of insulin resistance (HOMA-IR). Among 340 study participants (T2DM, n=66; pre-diabetes, n=202; NGT, n=72), the proportion of subjects with NAFLD increased according to glucose tolerance status (31.9% in NGT; 47.0% in pre-diabetes; 57.6% in T2DM). The median CAP value was significantly higher in subjects with T2DM (265 dB/m) than in those with pre-diabetes (245 dB/m) or NGT (231 dB/m) (all p<0.05). Logistic regression analysis showed that subjects with moderate to severe NAFLD had a 2.8-fold higher risk (odds ratio) of having T2DM than those without NAFLD (p=0.02; 95% confidence interval, 1.21-6.64), and positive correlations between the CAP value and HOMA-IR (ρ = 0.407) or fasting C-peptide (ρ = 0.402) were demonstrated. Subjects with T2DM had a higher prevalence of severe NAFLD than those with NGT. Increased hepatic steatosis was significantly associated with the presence of T2DM, and insulin resistance induced by hepatic fat may be an important mechanistic connection.
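The 2.8-fold odds ratio reported above comes from logistic regression; for a single binary predictor, the odds ratio and its Wald confidence interval can also be computed directly from a 2x2 table. The counts in this sketch are invented, not the study's data.

```python
# Odds ratio and Wald 95% CI from a 2x2 exposure/outcome table.
# a: exposed cases, b: exposed non-cases, c: unexposed cases,
# d: unexposed non-cases. Counts below are hypothetical.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    orr = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(orr) - z * se)
    hi = math.exp(math.log(orr) + z * se)
    return orr, lo, hi

# hypothetical counts: moderate/severe NAFLD (exposure) vs T2DM (outcome)
orr, lo, hi = odds_ratio_ci(15, 15, 15, 55)
print(f"OR = {orr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 3.67 ...
```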

  13. Factors influencing liver and spleen volume changes after donor hepatectomy for living donor liver transplantation

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Ji Hee; Ryeom, Hunku; Song, Jung Hup [Kyungpook National University Hospital, Daegu (Korea, Republic of)

    2013-11-15

To define the changes in liver and spleen volumes in the early postoperative period after partial liver donation for living-donor liver transplantation (LDLT) and to determine the factors that influence liver and spleen volume changes. Twenty-seven donors who underwent partial hepatectomy for LDLT were included in this study. The rates of liver and spleen volume change, measured with CT volumetry, were correlated with several factors. The analyzed factors included the indocyanine green (ICG) retention rate at 15 minutes after ICG administration, preoperative platelet count, preoperative liver and splenic volumes, resected liver volume, resected-to-whole liver volume ratio (LV{sub R}/LV{sub W}), resected liver volume to the sum of whole liver and spleen volume ratio [LV{sub R}/(LV{sub W} + SV{sub 0})], and pre- and post-hepatectomy portal venous pressures. In all hepatectomy donors, the volumes of the remnant liver and spleen were increased (increase rates: 59.5 ± 50.5% and 47.9 ± 22.6%, respectively). The increment rate of the remnant liver volume showed a positive correlation with LV{sub R}/LV{sub W} (r = 0.759, p < 0.01). The other analyzed factors showed no correlation with changes in liver and spleen volumes. The spleen and remnant liver volumes were increased at CT volumetry performed 2 weeks after partial liver donation. Among the various analyzed factors, LV{sub R}/LV{sub W} influences the increment rate of the remnant liver volume.

  14. Liver Hypertension: Causes, Consequences and Prevention

    Indian Academy of Sciences (India)

    Table of contents. Liver Hypertension: Causes, Consequences and Prevention · Heart Pressure : Blood Pressure · Slide 3 · If you continue to have high BP · Doctor Measures Blood Pressure (BP): Medicines to Decrease BP · LIVER ~ ~ LIFE Rightists vs. Leftists · Slide 7 · Slide 8 · Slide 9 · Liver Spleen - Splanchnic ...

  15. Amoebic liver

    African Journals Online (AJOL)

lymphadenopathy were noted. The right-sided pleural effusion with relaxation atelectasis was also confirmed (Fig. 4). The diagnosis of possible amoebic liver abscess complicated by rupture to the gallbladder was made at that stage. Ultrasound-guided abscess drainage was done and approximately 300 ml of pus was.

  16. Hepatic (Liver) Function Panel

    Science.gov (United States)

Blood Test: Hepatic (Liver) Function Panel (KidsHealth / For Parents). A liver function panel is a blood test that helps check how well the liver is working.

  17. American Liver Foundation

    Science.gov (United States)

    ... Cirrhosis Clinical Trials Galactosemia Gilbert Syndrome Hemochromatosis Hepatic Encephalopathy Hepatitis A Hepatitis B Hepatitis C Hepatocellular Carcinoma Lysosomal Acid Lipase Deficiency(LALD) Intrahepatic Cholestasis of Pregnancy (ICP) Liver Biopsy Liver Cancer Liver Cysts Liver Function Tests ...

  18. Elevated Liver Enzymes

    Science.gov (United States)

Elevated liver enzymes. By Mayo Clinic Staff. Elevated liver enzymes may indicate inflammation or damage to cells in the liver. Inflamed or injured liver cells leak higher than normal amounts of certain chemicals, including liver enzymes, into the bloodstream, which can result in elevated liver enzymes on blood tests.

  19. An Innovative Metric to Evaluate Satellite Precipitation's Spatial Distribution

    Science.gov (United States)

    Liu, H.; Chu, W.; Gao, X.; Sorooshian, S.

    2011-12-01

Thanks to its capability to cover the mountains, where ground measurement instruments cannot reach, satellites provide a good means of estimating precipitation over mountainous regions. In regions with complex terrain, accurate information on the high-resolution spatial distribution of precipitation is critical for many important issues, such as flood/landslide warning, reservoir operation, and water system planning. Therefore, in order to be useful in many practical applications, satellite precipitation products should possess high quality in characterizing spatial distribution. However, most existing validation metrics, which are based on point/grid comparison using simple statistics, cannot effectively measure a satellite's skill at capturing the spatial patterns of precipitation fields. This deficiency results from the fact that point/grid-wise comparison does not take into account the spatial coherence of precipitation fields. Furthermore, another weakness of many metrics is that they can barely provide information on why satellite products perform well or poorly. Motivated by our recent findings of consistent spatial patterns in the precipitation field over the western U.S., we developed a new metric utilizing EOF analysis and Shannon entropy. The metric can be derived in two steps: 1) capture the dominant spatial patterns of precipitation fields from both satellite products and reference data through EOF analysis, and 2) compute the similarities between the corresponding dominant patterns using a mutual information measurement defined with Shannon entropy. Instead of individual points/grids, the new metric treats the entire precipitation field simultaneously, naturally taking advantage of spatial dependence. Since the dominant spatial patterns are shaped by physical processes, the new metric can shed light on why a satellite product can or cannot capture the spatial patterns. For demonstration, an experiment was carried out to evaluate a satellite
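The two-step metric described in this record can be sketched in a few lines under simplifying assumptions: (1) extract the dominant spatial pattern (leading EOF) of each precipitation field via SVD, and (2) score pattern similarity with mutual information estimated from a joint histogram. The synthetic fields, bin count, and single-EOF simplification below are all assumptions made for the example, not the authors' implementation.

```python
# Sketch: leading EOF via SVD, then histogram-based mutual information
# between the dominant patterns of a reference and a satellite field.
# The fields are synthetic; real use would substitute gridded data.
import numpy as np

def leading_eof(field):
    """field: (time, space) matrix -> first spatial EOF of the anomalies."""
    anomalies = field - field.mean(axis=0)
    _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
    return vt[0]

def mutual_information(x, y, bins=16):
    """Shannon mutual information (nats) from a 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(0)
truth = rng.gamma(2.0, 1.0, size=(120, 400))            # "gauge" field
satellite = truth + 0.3 * rng.normal(size=truth.shape)  # noisy estimate

score = mutual_information(leading_eof(truth), leading_eof(satellite))
print(f"pattern similarity (MI, nats): {score:.3f}")
```

A higher score indicates that the satellite product reproduces the dominant spatial pattern of the reference field more faithfully.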

  20. SU-E-T-163: Evaluation of Dose Distributions Recalculated with Per-Field Measurement Data Under the Condition of Respiratory Motion During IMRT for Liver Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Song, J; Yoon, M; Nam, T; Ahn, S; Chung, W [Chonnam National University Hwasun Hospital, Hwasun-kun, Chonnam (Korea, Republic of)

    2014-06-01

Purpose: The dose distributions within the real volumes of tumor targets and critical organs during internal target volume-based intensity-modulated radiation therapy (ITV-IMRT) for liver cancer were recalculated by applying the effects of actual respiratory organ motion, and the dosimetric features were analyzed through comparison with gating IMRT (Gate-IMRT) plan results. Methods: The 4DCT data for 10 patients who had been treated with Gate-IMRT for liver cancer were selected to create ITV-IMRT plans. The ITV was created using MIM software, and a moving phantom was used to simulate respiratory motion. The period and range of respiratory motion were recorded in all patients from 4DCT-generated movie data, and the same period and range were applied when operating the dynamic phantom to realize coincident respiratory conditions for each patient. The doses were recalculated with a 3-dimensional dose-volume histogram (3DVH) program based on the per-field data measured with a MapCHECK2 2-dimensional diode detector array, and compared with the DVHs calculated for the Gate-IMRT plan. Results: Although a sufficient prescription dose covered the PTV during ITV-IMRT delivery, the dose homogeneity in the PTV was inferior to that with the Gate-IMRT plan. We confirmed that there were higher doses to the organs-at-risk (OARs) with ITV-IMRT, as expected when using an enlarged field, but the increased dose to the spinal cord was not significant, and the increased doses to the liver and kidney could be considered minor when the reinforced constraints were applied during IMRT plan optimization. Conclusion: Because Gate-IMRT cannot always be considered an ideal method with which to correct for the respiratory motion effect, given the dosimetric variations in the gating system application and the increased treatment time, a prior analysis for optimal IMRT method selection should be performed while considering the patient's respiratory condition and IMRT plan results.

  1. A complete metric in the set of mixing transformations

    International Nuclear Information System (INIS)

    Tikhonov, Sergei V

    2007-01-01

    A metric in the set of mixing measure-preserving transformations is introduced making of it a complete separable metric space. Dense and massive subsets of this space are investigated. A generic mixing transformation is proved to have simple singular spectrum and to be a mixing of arbitrary order; all its powers are disjoint. The convolution powers of the maximal spectral type for such transformations are mutually singular if the ratio of the corresponding exponents is greater than 2. It is shown that the conjugates of a generic mixing transformation are dense, as are also the conjugates of an arbitrary fixed Cartesian product. Bibliography: 28 titles.

  2. Natural metrics and least-committed priors for articulated tracking

    DEFF Research Database (Denmark)

    Hauberg, Søren; Sommer, Stefan Horst; Pedersen, Kim Steenstrup

    2012-01-01

    of joint positions, which is embedded in a high dimensional Euclidean space. This Riemannian manifold inherits the metric from the embedding space, such that distances are measured as the combined physical length that joints travel during movements. We then develop a least-committed Brownian motion model...

  3. Student Borrowing in America: Metrics, Demographics, Default Aversion Strategies

    Science.gov (United States)

    Kesterman, Frank

    2006-01-01

    The use of Cohort Default Rate (CDR) as the primary measure of student loan defaults among undergraduates was investigated. The study used data extracted from the National Student Loan Data System (NSLDS), quantitative analysis of Likert-scale survey responses from 153 student financial aid professionals on proposed changes to present metrics and…

  4. First results from a combined analysis of CERN computing infrastructure metrics

    Science.gov (United States)

    Duellmann, Dirk; Nieke, Christian

    2017-10-01

The IT Analysis Working Group (AWG) has been formed at CERN across individual computing units and the experiments to attempt a cross-cutting analysis of computing infrastructure and application metrics. In this presentation we will describe the first results obtained using medium/long term data (1 month to 1 year) correlating box-level metrics, job-level metrics from LSF and HTCondor, IO metrics from the physics analysis disk pools (EOS), and networking and application-level metrics from the experiment dashboards. We will cover in particular the measurement of hardware performance and prediction of job duration, the latency sensitivity of different job types, and a search for bottlenecks with the production job mix in the current infrastructure. The presentation will conclude with the proposal of a small set of metrics to simplify drawing conclusions also in the more constrained environment of public cloud deployments.

  5. Conceptual Soundness, Metric Development, Benchmarking, and Targeting for PATH Subprogram Evaluation

    Energy Technology Data Exchange (ETDEWEB)

Mosey, G.; Doris, E.; Coggeshall, C.; Antes, M.; Ruch, J.; Mortensen, J.

    2009-01-01

    The objective of this study is to evaluate the conceptual soundness of the U.S. Department of Housing and Urban Development (HUD) Partnership for Advancing Technology in Housing (PATH) program's revised goals and establish and apply a framework to identify and recommend metrics that are the most useful for measuring PATH's progress. This report provides an evaluative review of PATH's revised goals, outlines a structured method for identifying and selecting metrics, proposes metrics and benchmarks for a sampling of individual PATH programs, and discusses other metrics that potentially could be developed that may add value to the evaluation process. The framework and individual program metrics can be used for ongoing management improvement efforts and to inform broader program-level metrics for government reporting requirements.

  6. Metric approach to quantum constraints

    International Nuclear Information System (INIS)

    Brody, Dorje C; Hughston, Lane P; Gustavsson, Anna C T

    2009-01-01

    A framework for deriving equations of motion for constrained quantum systems is introduced and a procedure for its implementation is outlined. In special cases, the proposed new method, which takes advantage of the fact that the space of pure states in quantum mechanics has both a symplectic structure and a metric structure, reduces to a quantum analogue of the Dirac theory of constraints in classical mechanics. Explicit examples involving spin-1/2 particles are worked out in detail: in the first example, our approach coincides with a quantum version of the Dirac formalism, while the second example illustrates how a situation that cannot be treated by Dirac's approach can nevertheless be dealt with in the present scheme.

  7. Active Metric Learning for Supervised Classification

    OpenAIRE

    Kumaran, Krishnan; Papageorgiou, Dimitri; Chang, Yutong; Li, Minhan; Takáč, Martin

    2018-01-01

    Clustering and classification critically rely on distance metrics that provide meaningful comparisons between data points. We present mixed-integer optimization approaches to find optimal distance metrics that generalize the Mahalanobis metric extensively studied in the literature. Additionally, we generalize and improve upon leading methods by removing reliance on pre-designated "target neighbors," "triplets," and "similarity pairs." Another salient feature of our method is its ability to en...
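The Mahalanobis metric that this record generalizes has the closed form d_M(x, y) = sqrt((x - y)^T M (x - y)) with M positive semidefinite; the classical case takes M as the inverse covariance of the data, while metric learning replaces it with an optimized matrix. A minimal numpy sketch (the data are invented):

```python
# Mahalanobis distance with a pluggable PSD matrix M. With M = identity it
# reduces to Euclidean distance; with M = inverse covariance it is the
# classical Mahalanobis metric. Data below are invented for illustration.
import numpy as np

def mahalanobis(x, y, M):
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(d @ M @ d))

# With M = identity, the metric reduces to the Euclidean distance.
print(mahalanobis([1.0, 2.0], [4.0, 6.0], np.eye(2)))  # 5.0

# With M = inverse covariance of some data, distances become scale-aware.
data = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 31.0], [4.0, 39.0]])
M = np.linalg.inv(np.cov(data.T))
print(mahalanobis(data[0], data[1], M))
```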

  8. Generalized tolerance sensitivity and DEA metric sensitivity

    OpenAIRE

    Neralić, Luka; E. Wendell, Richard

    2015-01-01

    This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.

  9. The definitive guide to IT service metrics

    CERN Document Server

    McWhirter, Kurt

    2012-01-01

Used just as they are, the metrics in this book will bring many benefits to both the IT department and the business as a whole. Details of the attributes of each metric are given, enabling you to make the right choices for your business. You may prefer, and are encouraged, to design and create your own metrics to bring even more value to your business; this book will show you how to do that, too.

  10. Generalized tolerance sensitivity and DEA metric sensitivity

    Directory of Open Access Journals (Sweden)

    Luka Neralić

    2015-03-01

This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.

  11. Common Metrics for Human-Robot Interaction

    Science.gov (United States)

    Steinfeld, Aaron; Lewis, Michael; Fong, Terrence; Scholtz, Jean; Schultz, Alan; Kaber, David; Goodrich, Michael

    2006-01-01

    This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress.

  12. Chaotic inflation with metric and matter perturbations

    International Nuclear Information System (INIS)

    Feldman, H.A.; Brandenberger, R.H.

    1989-01-01

    A perturbative scheme to analyze the evolution of both metric and scalar field perturbations in an expanding universe is developed. The scheme is applied to study chaotic inflation with initial metric and scalar field perturbations present. It is shown that initial gravitational perturbations with wavelength smaller than the Hubble radius rapidly decay. The metric simultaneously picks up small perturbations determined by the matter inhomogeneities. Both are frozen in once the wavelength exceeds the Hubble radius. (orig.)

  13. Influence of Musical Enculturation on Brain Responses to Metric Deviants

    Directory of Open Access Journals (Sweden)

    Niels T. Haumann

    2018-04-01

The ability to recognize metric accents is fundamental in both music and language perception. It has been suggested that music listeners prefer rhythms that follow simple binary meters, which are common in Western music. This means that listeners expect odd-numbered beats to be strong and even-numbered beats to be weak. In support of this, studies have shown that listeners exposed to Western music show stronger novelty and incongruity related P3 and irregularity detection related mismatch negativity (MMN) brain responses to attenuated odd- than attenuated even-numbered metric positions. Furthermore, behavioral evidence suggests that music listeners' preferences can be changed by long-term exposure to non-Western rhythms and meters, e.g., by listening to African or Balkan music. In our study, we investigated whether it might be possible to measure effects of music enculturation on neural responses to attenuated tones on specific metric positions. We compared the magnetic mismatch negativity (MMNm) to attenuated beats in a “Western group” of listeners (n = 12) mainly exposed to Western music and a “Bicultural group” of listeners (n = 13) exposed for at least 1 year to Sub-Saharan African music in addition to Western music. We found that in the “Western group” the MMNm was higher in amplitude to deviant tones on odd compared to even metric positions, but not in the “Bicultural group.” In support of this finding, there was also a trend for the “Western group” to rate omitted beats as more surprising on odd than even metric positions, whereas the “Bicultural group” seemed to discriminate less between metric positions in terms of surprise ratings. Also, we observed that the overall latency of the MMNm was significantly shorter in the Bicultural group compared to the Western group. These effects were not biased by possible differences in rhythm perception ability or music training, measured with the Musical Ear Test (MET).

  14. Influence of Musical Enculturation on Brain Responses to Metric Deviants.

    Science.gov (United States)

    Haumann, Niels T; Vuust, Peter; Bertelsen, Freja; Garza-Villarreal, Eduardo A

    2018-01-01

The ability to recognize metric accents is fundamental in both music and language perception. It has been suggested that music listeners prefer rhythms that follow simple binary meters, which are common in Western music. This means that listeners expect odd-numbered beats to be strong and even-numbered beats to be weak. In support of this, studies have shown that listeners exposed to Western music show stronger novelty and incongruity related P3 and irregularity detection related mismatch negativity (MMN) brain responses to attenuated odd- than attenuated even-numbered metric positions. Furthermore, behavioral evidence suggests that music listeners' preferences can be changed by long-term exposure to non-Western rhythms and meters, e.g., by listening to African or Balkan music. In our study, we investigated whether it might be possible to measure effects of music enculturation on neural responses to attenuated tones on specific metric positions. We compared the magnetic mismatch negativity (MMNm) to attenuated beats in a "Western group" of listeners (n = 12) mainly exposed to Western music and a "Bicultural group" of listeners (n = 13) exposed for at least 1 year to Sub-Saharan African music in addition to Western music. We found that in the "Western group" the MMNm was higher in amplitude to deviant tones on odd compared to even metric positions, but not in the "Bicultural group." In support of this finding, there was also a trend for the "Western group" to rate omitted beats as more surprising on odd than even metric positions, whereas the "Bicultural group" seemed to discriminate less between metric positions in terms of surprise ratings. Also, we observed that the overall latency of the MMNm was significantly shorter in the Bicultural group compared to the Western group. These effects were not biased by possible differences in rhythm perception ability or music training, measured with the Musical Ear Test (MET).
Furthermore, source localization analyses

  15. Assessment of the Log-Euclidean Metric Performance in Diffusion Tensor Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mostafa Charmi

    2010-06-01

Introduction: Appropriate definition of the distance measure between diffusion tensors has a deep impact on Diffusion Tensor Image (DTI) segmentation results. The geodesic metric is the best distance measure since it yields high-quality segmentation results. However, the important problem with the geodesic metric is the high computational cost of the algorithms based on it. The main goal of this paper is to assess the possible substitution of the geodesic metric with the Log-Euclidean one to reduce the computational cost of a statistical surface evolution algorithm. Materials and Methods: We incorporated the Log-Euclidean metric in the statistical surface evolution algorithm framework. To achieve this goal, the statistics and gradients of diffusion tensor images were defined using the Log-Euclidean metric. Numerical implementation of the segmentation algorithm was performed in the MATLAB software using finite difference techniques. Results: In the statistical surface evolution framework, the Log-Euclidean metric was able to discriminate the torus and helix patterns in synthetic datasets and rat spinal cords in biological phantom datasets from the background better than the Euclidean and J-divergence metrics. In addition, similar results were obtained with the geodesic metric. However, the main advantage of the Log-Euclidean metric over the geodesic metric was the dramatic reduction of the computational cost of the segmentation algorithm, by at least a factor of 70. Discussion and Conclusion: The qualitative and quantitative results have shown that the Log-Euclidean metric is a good substitute for the geodesic metric when using a statistical surface evolution algorithm in DTI segmentation.
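The Log-Euclidean distance at the heart of this comparison has a simple closed form, d(A, B) = ||log A - log B||_F, where log is the matrix logarithm; its cheapness relative to the geodesic (affine-invariant) metric comes from needing only one logarithm per tensor. A numpy sketch, assuming the tensors are symmetric positive definite; the example tensors are hypothetical.

```python
# Log-Euclidean distance between symmetric positive-definite (SPD)
# diffusion tensors: d(A, B) = Frobenius norm of log(A) - log(B).
import numpy as np

def spd_log(A):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    return float(np.linalg.norm(spd_log(A) - spd_log(B), "fro"))

# two hypothetical 3x3 diffusion tensors (arbitrary scaled units)
A = np.diag([1.7, 0.3, 0.3])   # anisotropic (fibre-like)
B = np.diag([0.8, 0.8, 0.8])   # isotropic
print(f"d_LE(A, B) = {log_euclidean_distance(A, B):.3f}")
```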

  16. Quantitative PET of liver functions.

    Science.gov (United States)

    Keiding, Susanne; Sørensen, Michael; Frisch, Kim; Gormsen, Lars C; Munk, Ole Lajord

    2018-01-01

Improved understanding of liver physiology and pathophysiology is urgently needed to assist the choice of new and upcoming therapeutic modalities for patients with liver diseases. In this review, we focus on functional PET of the liver: 1) Dynamic PET with 2-deoxy-2-[18F]fluoro-D-galactose (18F-FDGal) provides quantitative images of the hepatic metabolic clearance K_met (mL blood/min/mL liver tissue) of regional and whole-liver hepatic metabolic function. The standard uptake value (SUV) from a static liver 18F-FDGal PET/CT scan can replace K_met and is currently used clinically. 2) Dynamic liver PET/CT in humans with 11C-palmitate and with the conjugated bile acid tracer [N-methyl-11C]cholylsarcosine (11C-CSar) can distinguish between individual intrahepatic transport steps in hepatic lipid metabolism and in hepatic transport of bile acid from blood to bile, respectively, showing diagnostic potential for individual patients. 3) Standard compartment analysis of dynamic PET data can lead to physiological inconsistencies, such as a unidirectional hepatic clearance of tracer from blood (K_1; mL blood/min/mL liver tissue) greater than the hepatic blood perfusion. We developed a new microvascular compartment model with more physiology, by including tracer uptake into the hepatocytes from the blood flowing through the sinusoids, backflux from hepatocytes into the sinusoidal blood, and re-uptake along the sinusoidal path. Dynamic PET data include information on liver physiology which cannot be extracted using a standard compartment model. In conclusion, SUV of non-invasive static PET with 18F-FDGal provides a clinically useful measurement of regional and whole-liver hepatic metabolic function. Secondly, assessment of individual intrahepatic transport steps is a notable feature of dynamic liver PET.
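The standardized uptake value used above as a clinical surrogate for the metabolic clearance has a simple body-weight-normalised form: SUV = tissue activity concentration / (injected dose / body weight). This sketch uses hypothetical numbers and assumes unit tissue density (1 g/mL).

```python
# Body-weight SUV: tissue activity divided by injected dose per gram of
# body weight. Assumes tissue density of 1 g/mL; numbers are hypothetical.

def suv(tissue_kBq_per_ml, injected_dose_MBq, body_weight_kg):
    """Dimensionless standardized uptake value."""
    dose_per_g = injected_dose_MBq * 1000.0 / (body_weight_kg * 1000.0)  # kBq/g
    return tissue_kBq_per_ml / dose_per_g

print(round(suv(tissue_kBq_per_ml=10.0, injected_dose_MBq=200.0,
                body_weight_kg=80.0), 2))  # 4.0
```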

  17. Quantitative PET of liver functions

    Science.gov (United States)

    Keiding, Susanne; Sørensen, Michael; Frisch, Kim; Gormsen, Lars C; Munk, Ole Lajord

    2018-01-01

    Improved understanding of liver physiology and pathophysiology is urgently needed to guide the choice of new and upcoming therapeutic modalities for patients with liver diseases. In this review, we focus on functional PET of the liver: 1) Dynamic PET with 2-deoxy-2-[18F]fluoro-D-galactose (18F-FDGal) provides quantitative images of the hepatic metabolic clearance Kmet (mL blood/min/mL liver tissue), a measure of regional and whole-liver hepatic metabolic function. The standardized uptake value (SUV) from a static liver 18F-FDGal PET/CT scan can replace Kmet and is currently used clinically. 2) Dynamic liver PET/CT in humans with 11C-palmitate and with the conjugated bile acid tracer [N-methyl-11C]cholylsarcosine (11C-CSar) can distinguish between individual intrahepatic transport steps in hepatic lipid metabolism and in hepatic transport of bile acid from blood to bile, respectively, showing diagnostic potential for individual patients. 3) Standard compartment analysis of dynamic PET data can lead to physiological inconsistencies, such as a unidirectional hepatic clearance of tracer from blood (K1; mL blood/min/mL liver tissue) greater than the hepatic blood perfusion. We therefore developed a new microvascular compartment model that incorporates more physiology by including tracer uptake into the hepatocytes from the blood flowing through the sinusoids, backflux from the hepatocytes into the sinusoidal blood, and re-uptake along the sinusoidal path. Dynamic PET data contain information on liver physiology that cannot be extracted with a standard compartment model. In conclusion, the SUV from non-invasive static PET with 18F-FDGal provides a clinically useful measurement of regional and whole-liver hepatic metabolic function, and assessment of individual intrahepatic transport steps is a notable feature of dynamic liver PET. PMID:29755841
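    The SUV mentioned in this abstract is a standard normalization in PET: the measured tissue activity concentration divided by the injected dose per unit body weight, assuming 1 g of body tissue corresponds to roughly 1 mL. A minimal sketch (the function name and the example numbers are illustrative, not taken from the review):

    ```python
    def suv(tissue_kbq_per_ml, injected_dose_mbq, body_weight_kg):
        """Standardized uptake value: tissue activity concentration
        normalized by injected dose per unit body weight (1 g ~ 1 mL)."""
        # Convert dose to kBq and weight to g so the units cancel.
        dose_kbq = injected_dose_mbq * 1000.0
        weight_g = body_weight_kg * 1000.0
        return tissue_kbq_per_ml / (dose_kbq / weight_g)

    # Hypothetical example: a liver region measuring 8 kBq/mL after a
    # 200 MBq injection in a 75 kg patient.
    print(suv(8.0, 200.0, 75.0))  # 8 / (200000 / 75000) = 3.0
    ```

    A region with SUV well below that of healthy liver parenchyma would, per the review's approach, indicate reduced regional metabolic function.
    
    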

  18. Shared liver-like transcriptional characteristics in liver metastases and corresponding primary colorectal tumors.

    Science.gov (United States)

    Cheng, Jun; Song, Xuekun; Ao, Lu; Chen, Rou; Chi, Meirong; Guo, You; Zhang, Jiahui; Li, Hongdong; Zhao, Wenyuan; Guo, Zheng; Wang, Xianlong

    2018-01-01

    Background & Aims: Primary tumors of colorectal carcinoma (CRC) with liver metastasis might gain some liver-specific characteristics to adapt to the liver micro-environment. This study aims to reveal potential liver-like transcriptional characteristics associated with liver metastasis in primary colorectal carcinoma. Methods: Among the genes up-regulated in normal liver tissues versus normal colorectal tissues, we identified "liver-specific" genes whose expression levels ranked among the bottom 10% ("unexpressed") of all measured genes in both normal colorectal tissues and primary colorectal tumors without metastasis. These liver-specific genes were investigated for their expression in both the primary tumors and the corresponding liver metastases of seven primary CRC patients with liver metastasis, using microdissected samples. Results: Among the 3958 genes detected to be up-regulated in normal liver tissues versus normal colorectal tissues, we identified 12 liver-specific genes and found that two of them, ANGPTL3 and CFHR5, were unexpressed in microdissected primary colorectal tumors without metastasis but expressed in both microdissected liver metastases and corresponding primary colorectal tumors (Fisher's exact test, P colorectal tumors may express some liver-specific genes, which may help the tumor cells adapt to the liver micro-environment.
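    The "bottom 10% of all measured genes" filter described in the Methods can be sketched as a rank-based mask over an expression matrix. Everything below is simulated and hypothetical (matrix names, sizes, and values are illustrative, not the study's data); only the filtering logic follows the abstract:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_genes = 1000

    # Simulated log-expression matrices (genes x samples); purely illustrative.
    normal_colon = rng.normal(5.0, 2.0, size=(n_genes, 20))
    colon_tumors = rng.normal(5.0, 2.0, size=(n_genes, 20))

    def unexpressed_mask(expr, frac=0.10):
        """Genes whose mean expression ranks in the bottom `frac` of all genes."""
        mean_expr = expr.mean(axis=1)
        cutoff = np.quantile(mean_expr, frac)
        return mean_expr <= cutoff

    # Pretend the first 100 genes were found liver-up-regulated elsewhere.
    liver_up = np.zeros(n_genes, dtype=bool)
    liver_up[:100] = True

    # "Liver-specific" candidates: liver-up-regulated AND unexpressed in both
    # normal colorectal tissue and non-metastatic primary tumors.
    candidates = (liver_up
                  & unexpressed_mask(normal_colon)
                  & unexpressed_mask(colon_tumors))
    print(int(candidates.sum()))
    ```

    With real data, the surviving candidates would then be checked for expression in the microdissected metastases and their matched primaries, as the study does for ANGPTL3 and CFHR5.
    
    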

  19. A New Study of Two Divergence Metrics for Change Detection in Data Streams

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali; Wang, Suojin; Carroll, Raymond; Zhang, Xiangliang

    2014-01-01

    Streaming data are dynamic in nature with frequent changes. To detect such changes, most methods measure the difference between the data distributions in a current time window and a reference window. Divergence metrics and density estimation are required to measure the difference between the data distributions. Our study shows that the Kullback-Leibler (KL) divergence, the most popular metric for comparing distributions, fails to detect certain changes due to its asymmetric property and its dependence on the variance of the data. We thus consider two metrics for detecting changes in univariate data streams: a symmetric KL-divergence and a divergence metric measuring the intersection area of two distributions. The experimental results show that these two metrics lead to more accurate results in change detection than baseline methods such as Change Finder and using conventional KL-divergence.
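    The two metrics this abstract considers can be sketched over histogram estimates of the reference-window and current-window distributions. A minimal illustration (function names, bin choices, and the simulated "change" are my assumptions, not the paper's implementation):

    ```python
    import numpy as np

    def hist_density(x, bins):
        """Normalized histogram as a discrete distribution estimate."""
        counts, _ = np.histogram(x, bins=bins)
        return counts / counts.sum() + 1e-12  # epsilon avoids log(0)

    def sym_kl(p, q):
        """Symmetric KL-divergence: KL(p||q) + KL(q||p)."""
        return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

    def intersection_area(p, q):
        """Overlap of two discrete distributions: 1 = identical, 0 = disjoint.
        A change score can be taken as 1 - overlap."""
        return float(np.minimum(p, q).sum())

    rng = np.random.default_rng(1)
    bins = np.linspace(-6, 6, 41)
    ref = hist_density(rng.normal(0, 1, 5000), bins)   # reference window
    cur = hist_density(rng.normal(1, 1, 5000), bins)   # shifted current window

    print(sym_kl(ref, ref) < sym_kl(ref, cur))                      # True
    print(intersection_area(ref, cur) < intersection_area(ref, ref))  # True
    ```

    Symmetrizing KL addresses the asymmetry issue the study identifies with plain KL; the intersection-area metric is bounded in [0, 1], which makes thresholding for change detection straightforward.
    
    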
