Validation of the ANAM Test Battery in Parkinson's Disease
National Research Council Canada - National Science Library
Reich, Stephen; Short, Paul; Kane, Robert; Weiner, William; Shulman, Lisa; Anderson, Karen
2005-01-01
Age-related differences in ANAM performance were identified by the present study. It was also noted that significantly more female PD patients exhibited cognitive decline than male patients, even though men outnumbered women in the full PD sample.
2015-12-01
H. Liao, J. Saurman, P. Merugumala, I. Orlovsky, X. Long, S. Merugumala, K. Rudolph, B. Rowland, K. Heaton, "Differences in Brain Biochemistry between...07HC; PI: Proctor) involves the creation of a research database system (the ANAM4 TBI Military Performance Database (AMP-D)), which incorporates all mandated...the conclusion of Study 4, we plan to utilize the AMP-D to make comparisons between Army Active Duty and National Guard groups and examine the role of
ANAM4 TBI Reaction Time-Based Tests have Prognostic Utility for Acute Concussion
2013-07-01
7:767. 2013 ANAM4 TBI Reaction Time-Based Tests Have Prognostic Utility for Acute Concussion. LT Jacob N. Norris, MSC USN*; LCDR Walter Carr, MSC USN...CDR Thomas Herzig, MSC USN†; CDR D. Walter Labrie, MSC USN†; CDR Richard Sams, MC USN§. ABSTRACT The Concussion Restoration Care Center has used the...Work Unit No. N24LB. REFERENCES 1. Department of Defense: DoD Policy Guidance for Management of Mild Traumatic Brain Injury/Concussion in the Deployed
Teachers Attitude towards English in Batu Anam
Directory of Open Access Journals (Sweden)
Mah Zhi Jian
2011-07-01
This research investigates the attitude of 60 primary and secondary school teachers towards English in Batu Anam. A questionnaire was administered to find out whether they have a positive or a negative attitude towards the English language. Results indicate that teachers in Batu Anam generally have a positive attitude towards English. Comparisons between male and female teachers, optionist and non-optionist teachers, and teachers from different types of schools are also analyzed.
GPS Device Testing Based on User Performance Metrics
2015-10-02
1. Rationale for a Test Program Based on User Performance Metrics ; 2. Roberson and Associates Test Program ; 3. Status of, and Revisions to, the Roberson and Associates Test Program ; 4. Comparison of Roberson and DOT/Volpe Programs
The AGIS metric and time of test: A replication study
Counsell, S; Swift, S; Tucker, A
2016-01-01
Visual Field (VF) tests and the corresponding data are commonly used in clinical practice to manage glaucoma. The standard metric used to measure glaucoma severity is the Advanced Glaucoma Intervention Studies (AGIS) metric. We know that the time of day at which VF tests are administered can influence a patient's AGIS metric value; a previous study showed that this was the case for a data set of 160 patients. In this paper, we replicate that study using data from 2468 patients obtained from Moorfields Eye Ho...
Kepler Planet Detection Metrics: Statistical Bootstrap Test
Jenkins, Jon M.; Burke, Christopher J.
2016-01-01
This document describes the data produced by the Statistical Bootstrap Test over the final three Threshold Crossing Event (TCE) deliveries to NExScI: SOC 9.1 (Q1-Q16) (Tenenbaum et al. 2014), SOC 9.2 (Q1-Q17) aka DR24 (Seader et al. 2015), and SOC 9.3 (Q1-Q17) aka DR25 (Twicken et al. 2016). The last few years have seen significant improvements in the SOC science data processing pipeline, leading to higher quality light curves and more sensitive transit searches. The statistical bootstrap analysis results presented here and the numerical results archived at NASA's Exoplanet Science Institute (NExScI) bear witness to these software improvements. This document attempts to introduce and describe the main features of, and differences between, these three data sets as a consequence of the software changes.
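The abstract does not spell out the pipeline's bootstrap algorithm. As a generic, hypothetical illustration of the idea behind a statistical bootstrap test (resampling out-of-transit detection statistics to estimate how often noise alone would cross a detection threshold, not the actual SOC implementation):

```python
import random

def bootstrap_false_alarm_rate(null_samples, threshold, n_boot=1000, seed=1):
    """Generic sketch: resample a pool of noise-only detection statistics with
    replacement and estimate the probability that the resampled maximum
    exceeds the detection threshold. Names and structure are assumptions."""
    rng = random.Random(seed)
    n = len(null_samples)
    exceed = 0
    for _ in range(n_boot):
        sample = [rng.choice(null_samples) for _ in range(n)]
        if max(sample) >= threshold:
            exceed += 1
    return exceed / n_boot
```

A low rate at a given threshold indicates that a candidate event crossing that threshold is unlikely to be a noise fluctuation.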
METRIC TESTS CHARACTERISTIC FOR ESTIMATING JUMPING FOR VOLLEYBALL PLAYERS
Directory of Open Access Journals (Sweden)
Toplica Stojanović
2008-08-01
With the goal of establishing the metric characteristics of tests for estimating jumping ability in volleyball players, a pilot study was organized on a sample of 23 volleyball players from a cadet team and 23 high-school students. Four tests were used for the estimation: jump in block with the left and right leg, and jump in spike with the left and right leg. Each test was taken three times, so that reliability could be determined by the test-retest method and validity by factor analysis. Data were processed by multivariate analysis (item analysis, factor analysis) from the statistical package "Statistica 6.0 for Windows". Based on the results and discussion, we can say that the tests had high coefficients of reliability as well as factor validity, and that these tests can be used to estimate jumping ability in volleyball players.
The Advanced Navy Aerosol Model (ANAM) : Validation of small-particle modes
Eijk, A.M.J. van; Kusmierczyk-Michulec, J.T.; Piazzola, J.P.
2011-01-01
The image quality of electro-optical sensors in the (lower-altitude marine) atmosphere is limited by aerosols, which cause contrast reduction due to transmission losses and impact on the thermal signature of objects by scattering solar radiation. The Advanced Navy Aerosol Model (ANAM) aims at
2013-12-01
disorders (including attention deficit hyperactivity disorder [ADHD]), and no gross visual (no worse than 20/30 corrected or uncorrected) or hearing...of consciousness, substance abuse problems/treatment, known neurological disorders, major psychiatric disorders (including attention deficit hyperactivity disorder), vision worse than 20/30 after correction, and hearing problems. Family history of psychiatric disorders was not assessed.
2018-03-01
...301; Minnesota, 306; Kentucky, 193; Texas, 188; Total, 1,461. Task 27 (Months 85-96): Continue data quality control checks and preliminary...research hypotheses – COMPLETED. Data management and data quality control checks have been completed with all data collected as part of this effort...summarizing data from Studies 1-3 are being finalized. Data management procedures for Study 4 are completed, and analyses and manuscript preparations are
Motion of a spinning test particle in Vaidya's radiating metric
International Nuclear Information System (INIS)
Carmeli, M.; Charach, C.; Kaye, M.
1977-01-01
The motion of a spinning test particle in Vaidya's gravitational field is considered in the framework of Papapetrou's equations of motion. Use is made of the supplementary condition S^{μu} = 0, where u is the retarded Schwarzschild time coordinate. We derive the equations for the dynamical variables and consider the conservation laws that follow from the equations of motion. Particular cases of motion are also discussed, and additional first integrals corresponding to these cases are found. Some of the new extra integrals are related to the Casimir operators of the Poincaré group. It is found that under special conditions on the spin tensor components the particle follows a geodesic. Motion of the spinning test particle in the Schwarzschild field is considered as one of the particular cases.
Kiechle, Frederick L; Arcenas, Rodney C; Rogers, Linda C
2014-01-01
Benchmarks and metrics related to laboratory test utilization are based on evidence-based medical literature that may suffer from a positive publication bias, and guidelines are only as good as the data reviewed to create them. Disruptive technologies require time for appropriate use to be established before utilization review becomes meaningful. Metrics include monitoring the use of obsolete tests and the inappropriate use of lab tests. Test utilization by clients in a hospital outreach program can be used to monitor the impact of new clients on lab workload. A multi-disciplinary laboratory utilization committee is the most effective tool for modifying bad habits and for reviewing and approving new tests for the lab formulary or sending them out to a reference lab.
Krzykwa, Julie C; Olivas, Alexis; Jeffries, Marlo K Sellin
2018-06-19
The fathead minnow fish embryo toxicity (FET) test has been proposed as a more humane alternative to current toxicity testing methods, as younger organisms are thought to experience less distress during toxicant exposure. However, the FET test protocol does not include endpoints that allow for the prediction of sublethal adverse outcomes, limiting its utility relative to other test types. Researchers have proposed the development of sublethal endpoints for the FET test to increase its utility. The present study 1) developed methods for previously unmeasured sublethal metrics in fathead minnows (i.e., spontaneous contraction frequency and heart rate) and 2) investigated the responsiveness of several sublethal endpoints related to growth (wet weight, length, and growth-related gene expression), neurodevelopment (spontaneous contraction frequency and neurodevelopmental gene expression), and cardiovascular function and development (pericardial area, eye size, and cardiovascular-related gene expression) as additional FET test metrics, using the model toxicant 3,4-dichloroaniline. Of the growth, neurological, and cardiovascular endpoints measured, length, eye size, and pericardial area were found to be more responsive than the other endpoints, respectively. Future studies linking alterations in these endpoints to longer-term adverse impacts are needed to fully evaluate the predictive power of these metrics in chemical and whole-effluent toxicity testing.
Test of the FLRW Metric and Curvature with Strong Lens Time Delays
Energy Technology Data Exchange (ETDEWEB)
Liao, Kai [School of Science, Wuhan University of Technology, Wuhan 430070 (China); Li, Zhengxiang; Wang, Guo-Jian [Department of Astronomy, Beijing Normal University, Beijing 100875 (China); Fan, Xi-Long, E-mail: liaokai@whut.edu.cn, E-mail: xilong.fan@glasgow.ac.uk [Department of Physics and Mechanical and Electrical Engineering, Hubei University of Education, Wuhan 430205 (China)
2017-04-20
We present a new model-independent strategy for testing the Friedmann–Lemaître–Robertson–Walker (FLRW) metric and constraining cosmic curvature, based on future time-delay measurements of strongly lensed quasar-elliptical galaxy systems from the Large Synoptic Survey Telescope and supernova observations from the Dark Energy Survey. The test only relies on geometric optics. It is independent of the energy contents of the universe and the validity of the Einstein equation on cosmological scales. The study comprises two levels: testing the FLRW metric through the distance sum rule (DSR) and determining/constraining cosmic curvature. We propose an effective and efficient (redshift) evolution model for performing the former test, which allows us to concretely specify the violation criterion for the FLRW DSR. If the FLRW metric is consistent with the observations, then on the second level the cosmic curvature parameter will be constrained to ∼0.057 or ∼0.041 (1 σ ), depending on the availability of high-redshift supernovae, which is much more stringent than current model-independent techniques. We also show that the bias in the time-delay method might be well controlled, leading to robust results. The proposed method is a new independent tool for both testing the fundamental assumptions of homogeneity and isotropy in cosmology and for determining cosmic curvature. It is complementary to cosmic microwave background plus baryon acoustic oscillation analyses, which normally assume a cosmological model with dark energy domination in the late-time universe.
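The distance sum rule (DSR) invoked here can be stated compactly. In the notation conventionally used for model-independent DSR tests (an assumption on my part, since the abstract does not define symbols), with dimensionless comoving distances d_l, d_s, and d_ls between observer, lens, and source, the FLRW metric implies:

```latex
d_{ls} \;=\; d_s\,\sqrt{1 + \Omega_k\, d_l^{\,2}} \;-\; d_l\,\sqrt{1 + \Omega_k\, d_s^{\,2}}
```

A statistically significant violation of this relation would signal a departure from the FLRW metric; if the relation holds, the same lensing and supernova data constrain the curvature parameter Ω_k.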
Introduction to Lean Canvas Transformation Models and Metrics in Software Testing
Directory of Open Access Journals (Sweden)
Nidagundi Padmaraj
2016-05-01
Software plays a key role nowadays in all fields, from simple applications up to cutting-edge technologies, and most technology devices now run on software. Software verification and validation have become very important for producing high-quality software that meets business stakeholder requirements, and different development methodologies have given a new dimension to software testing. In traditional waterfall development, testing comes at the end of the process: it begins with resource planning, a test plan is designed, and test criteria are defined for acceptance testing. Most of the test plan is heavily documented, which makes the process time-consuming. For modern methodologies such as agile, where long test processes and extensive documentation are not followed strictly because of the small iterations of development and testing, lean canvas transformation models can be a solution. This paper explores the possibilities of adopting lean transformation models and metrics in the software test plan to simplify the test process, for further use of these test metrics on the canvas.
Test and Evaluation Metrics of Crew Decision-Making And Aircraft Attitude and Energy State Awareness
Bailey, Randall E.; Ellis, Kyle K. E.; Stephens, Chad L.
2013-01-01
NASA has established a technical challenge, under the Aviation Safety Program, Vehicle Systems Safety Technologies project, to improve crew decision-making and response in complex situations. The specific objective of this challenge is to develop data and technologies which may increase a pilot's (crew's) ability to avoid, detect, and recover from adverse events that could otherwise result in accidents/incidents. Within this technical challenge, a cooperative industry-government research program has been established to develop innovative flight deck-based counter-measures that can improve the crew's ability to avoid, detect, mitigate, and recover from unsafe loss-of-aircraft state awareness - specifically, the loss of attitude awareness (i.e., Spatial Disorientation, SD) or the loss-of-energy state awareness (LESA). A critical component of this research is to develop specific and quantifiable metrics which identify decision-making and the decision-making influences during simulation and flight testing. This paper reviews existing metrics and methods for SD testing and criteria for establishing visual dominance. The development of Crew State Monitoring technologies - eye tracking and other psychophysiological - are also discussed as well as emerging new metrics for identifying channelized attention and excessive pilot workload, both of which have been shown to contribute to SD/LESA accidents or incidents.
Testing a class of non-Kerr metrics with hot spots orbiting SgrA*
International Nuclear Information System (INIS)
Liu, Dan; Li, Zilong; Bambi, Cosimo
2015-01-01
SgrA*, the supermassive black hole candidate at the Galactic Center, exhibits flares in the X-ray, NIR, and sub-mm bands that may be interpreted within a hot spot model. Light curves and images of hot spots orbiting a black hole are affected by a number of special and general relativistic effects, and they can potentially be used to check whether the object is a Kerr black hole of general relativity. However, in a previous study we have shown that the relativistic features are usually subdominant with respect to the background noise and the model-dependent properties of the hot spot, and at most it is possible to estimate the frequency of the innermost stable circular orbit. In this case, tests of the Kerr metric are only possible in combination with other measurements. In the present work, we consider a class of non-Kerr spacetimes in which the hot spot orbit may be outside the equatorial plane. These metrics are difficult to constrain from the study of accretion disks, and indeed current X-ray observations of stellar-mass and supermassive black hole candidates cannot put interesting bounds. Here we show that near-future observations of SgrA* may be able to do so. If the hot spot is sufficiently close to the massive object, the image affected by Doppler blueshift is brighter than the other one, and this provides a specific observational signature in the hot spot's centroid track. We conclude that accurate astrometric observations of SgrA* with an instrument like GRAVITY should be able to test this class of metrics, except in the more unlikely case of a small viewing angle.
Anderson, D; Charifoulline, Z; Dragu, M; Fuchsberger, K; Garnier, JC; Gorzawski, AA; Koza, M; Krol, K; Rowan, S; Stamos, K; Zerlauth, M
2014-01-01
The LHC magnet powering system is composed of thousands of individual components to assure safe operation with stored energies as high as 10 GJ in the superconducting LHC magnets. Each of these components has to be thoroughly commissioned following interventions and machine shutdown periods to assure its protection function in case of powering failures. As well as having dependable tracking of test executions, it is vital that the executed commissioning steps and applied analysis criteria adequately represent the operational state of each component. The Accelerator Testing (AccTesting) framework, in combination with a domain-specific analysis language, provides the means to quantify and improve the quality of analysis for future campaigns. Dedicated tools were developed to analyse in detail the reasons for failures and successes of commissioning steps in past campaigns and to compare the results with newly developed quality metrics. Observed shortcomings and discrepancies are used to propose addi...
METRIC CHARACTERISTICS OF SOME TESTS FOR EVALUATION OF AEROBIC AND ANAEROBIC CAPACITIES
Directory of Open Access Journals (Sweden)
Slobodan Stojiljković
2006-06-01
This research was aimed at checking the metric characteristics of some specific functional tests often used in practice for the evaluation of aerobic and anaerobic capacities and muscular capabilities. Changes in functional abilities were tracked on the basis of several repeated measurements of the same test on a sample of 110 examinees, students of the nursing school "Dr Milenko Hadzic" in Niš, 17 years of age (± 6 months), regularly attending physical education classes. Two measuring instruments were tested: the MARGARIA TEST and the HARVARD STEP TEST. The reliability of these tests was evaluated on the basis of five successive measurements using the Spearman-Brown method, based on determining the coefficients of determination of all measurements and of the first main component h1. The outcome revealed high reliability of the results of most of the measurements and of the first main component H1: the acquired results were 91.2% for the MARGARIA TEST (anaerobic capacity) and 93.4% for the HARVARD STEP TEST (aerobic capacity).
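The Spearman-Brown approach mentioned in the abstract rests on the classic prophecy formula, which predicts the reliability of a composite of k repeated measurements from the reliability of a single one. A minimal sketch (the function name is mine):

```python
def spearman_brown(r, k):
    """Spearman-Brown prophecy formula: predicted reliability of a measure
    lengthened by factor k, given reliability r of a single administration."""
    return k * r / (1 + (k - 1) * r)
```

For example, a single trial with reliability 0.5 yields a predicted reliability of 2/3 when two trials are averaged, which is why averaging over five successive measurements, as in this study, can produce composite reliabilities above 90% even from moderately reliable single trials.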
Naugler, Christopher T; Guo, Maggie
2016-04-01
There is a need to develop and validate new metrics to assess the appropriateness of laboratory test requests. The mean abnormal result rate (MARR) is a proposed measure of ordering selectivity, the premise being that higher mean abnormal rates represent more selective test ordering. As a validation of this metric, we compared the abnormal rate of lab tests with the number of tests ordered on the same requisition. We hypothesized that requisitions with larger numbers of requested tests represent less selective test ordering and would therefore have a lower overall abnormal rate. We examined 3,864,083 tests ordered on 451,895 requisitions and found that the MARR decreased from about 25% if one test was ordered to about 7% if nine or more tests were ordered, consistent with less selectivity when more tests were ordered. We then examined the MARR for community-based testing for 1,340 family physicians and found both a wide variation in MARR and an inverse relationship between the total tests ordered per year per physician and the physician-specific MARR. The proposed metric represents a new utilization metric for benchmarking the relative selectivity of test orders among physicians.
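The MARR itself is a simple ratio. A minimal sketch of the computation described in the abstract (data layout and function names are my assumptions): each requisition is a list of (test, is_abnormal) results, and the MARR is compared across requisitions grouped by how many tests they ordered.

```python
from collections import defaultdict

def marr(requisitions):
    """Mean abnormal result rate: abnormal results / all results."""
    flags = [flag for req in requisitions for (_, flag) in req]
    return sum(flags) / len(flags)

def marr_by_order_size(requisitions):
    """MARR for requisitions grouped by number of tests ordered,
    mirroring the study's comparison of 1-test vs many-test orders."""
    groups = defaultdict(list)
    for req in requisitions:
        groups[len(req)].append(req)
    return {n: marr(reqs) for n, reqs in sorted(groups.items())}
```

A per-physician MARR is the same computation restricted to one physician's requisitions, which is how the benchmarking comparison in the study would be set up.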
Validation of Proposed Metrics for Two-Body Abrasion Scratch Test Analysis Standards
Street, Kenneth W., Jr.; Kobrick, Ryan L.; Klaus, David M.
2013-01-01
Abrasion of mechanical components and fabrics by soil on Earth is typically minimized by the effects of atmosphere and water: potentially abrasive particles lose sharp and pointed geometrical features through erosion. In environments where such erosion does not exist, such as the vacuum of the Moon, particles retain the sharp geometries associated with fracturing of their parent particles by micrometeorite impacts. The relationship between the hardness of the abrasive and that of the material being abraded is well understood, such that the abrasive ability of a material can be estimated as a function of the ratio of the hardness of the two interacting materials. Knowing the abrasive nature of an environment (abrasive)/construction material pair is crucial to designing durable equipment for use in such surroundings. The objective of this work was to evaluate a set of standardized metrics proposed for characterizing a surface that has been scratched in a two-body abrasion test. This is achieved by defining a new abrasion region termed the Zone of Interaction (ZOI). The ZOI describes the full surface profile of all peaks and valleys, rather than just measuring a scratch width. The ZOI has been found to be at least twice the size of a standard width measurement; in some cases, considerably greater, indicating that at least half of the disturbed surface area would be neglected without this insight. The ZOI is used to calculate a more robust data set of volume measurements that can be used to computationally reconstruct a resultant profile for detailed analysis. Documenting additional changes to various surface roughness parameters also allows key material attributes of importance to ultimate design applications to be quantified, such as depth of penetration and final abraded surface roughness. Furthermore, by investigating the use of custom scratch tips for specific needs, the usefulness of having an abrasion metric that can measure the displaced volume in this standardized
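The volume bookkeeping behind a ZOI-style analysis can be sketched from a scratch cross-section profile. This is a hypothetical illustration, not the proposed standard's algorithm: given profile heights z(x) relative to the undisturbed baseline, it integrates ridge (peak) and groove (valley) cross-sectional areas separately, which per unit scratch length correspond to displaced and removed volume.

```python
import numpy as np

def zoi_volumes(x, z, baseline=0.0):
    """Sketch: trapezoidal integration of peak and valley areas of a scratch
    cross-section z(x) relative to the undisturbed baseline (per unit scratch
    length). Function name and baseline convention are assumptions."""
    x = np.asarray(x, dtype=float)
    dz = np.asarray(z, dtype=float) - baseline

    def trap(f):
        # Manual trapezoid rule over possibly non-uniform x spacing.
        return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

    peaks = trap(np.clip(dz, 0.0, None))      # material pushed above baseline
    valleys = trap(np.clip(-dz, 0.0, None))   # material removed below baseline
    return peaks, valleys
```

Integrating over the full profile, rather than only a scratch-width band, is what captures the disturbed area a width measurement would miss.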
Phi index: a new metric to test the flush early and avoid the rush hypothesis.
Samia, Diogo S M; Blumstein, Daniel T
2014-01-01
Optimal escape theory states that animals should counterbalance the costs and benefits of flight when escaping from a potential predator. However, in apparent contradiction with this well-established optimality model, birds and mammals generally initiate escape soon after beginning to monitor an approaching threat, a phenomenon codified as the "Flush Early and Avoid the Rush" (FEAR) hypothesis. Typically, the FEAR hypothesis is tested using correlational statistics and is supported when there is a strong relationship between the distance at which an individual first responds behaviorally to an approaching predator (alert distance, AD) and its flight initiation distance (the distance at which it flees the approaching predator, FID). However, such correlational statistics are both inadequate for analyzing relationships constrained by an envelope (such as the AD-FID relationship) and sensitive to outliers with high leverage, which can lead one to erroneous conclusions. To overcome these statistical concerns we develop the phi index (Φ), a distribution-free metric to evaluate the goodness of fit of a 1:1 relationship in a constraint envelope (the prediction of the FEAR hypothesis). Using both simulation and empirical data, we conclude that Φ is superior to traditional correlational analyses because it explicitly tests the FEAR prediction, is robust to outliers, and controls for the disproportionate influence of observations from large predictor values (caused by the constrained envelope in the AD-FID relationship). Importantly, by analyzing the empirical data we corroborate the strong effect that alertness has on flight, as stated by the FEAR hypothesis.
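The geometry of the test is that FID can never exceed AD (a constraint envelope below the 1:1 line), and FEAR predicts points hugging that line. As a generic illustration of that setup only, NOT the authors' Φ formula (which is defined in the paper itself), one can summarize how AD-FID pairs sit relative to the envelope and the 1:1 line:

```python
def fear_envelope_stats(ad, fid):
    """Illustration of the AD-FID constraint envelope, not the phi index:
    returns (fraction of pairs satisfying FID <= AD, mean FID/AD ratio for
    those pairs). A ratio near 1 means fleeing soon after alerting."""
    pairs = list(zip(ad, fid))
    inside = [(a, f) for a, f in pairs if f <= a and a > 0]
    proximity = sum(f / a for a, f in inside) / len(inside)
    return len(inside) / len(pairs), proximity
```

A Pearson correlation on the same data would be dominated by the envelope shape and by high-leverage points at large AD, which is exactly the weakness Φ is designed to avoid.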
A robust metric for screening outliers from analogue product manufacturing tests responses
Krishnan, S.; Kerkhoff, H.G.
2011-01-01
Mahalanobis distance is one of the commonly used multivariate metrics for finely segregating defective devices from non-defective ones. An associated problem with this approach is the estimation of a robust mean and a covariance matrix. In the absence of such robust estimates, especially in the
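The baseline computation the abstract starts from can be sketched in a few lines. This minimal numpy version uses the classical (non-robust) mean and covariance estimates, which is precisely the weakness the paper addresses, since outliers inflate those estimates and can mask themselves:

```python
import numpy as np

def mahalanobis_distances(X):
    """Squared Mahalanobis distance of each row of X (devices x test
    responses) from the sample mean, using classical estimates."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.inv(cov)
    diff = X - mu
    # Quadratic form diff_i . inv . diff_i for every row i.
    return np.einsum('ij,jk,ik->i', diff, inv, diff)
```

Devices whose distance exceeds a chi-square quantile (degrees of freedom equal to the number of test responses) would be flagged; a robust variant would replace `mean`/`np.cov` with estimators insensitive to the very outliers being screened.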
A contest of sensors in close range 3D imaging: performance evaluation with a new metric test object
Directory of Open Access Journals (Sweden)
M. Hess
2014-06-01
An independent means of 3D image quality assessment is introduced, addressing non-professional users of sensors and freeware, a field largely characterized by closed-source software and by the absence of quality metrics for processing steps such as alignment. A performance evaluation of commercially available, state-of-the-art close-range 3D imaging technologies is demonstrated with the help of a newly developed Portable Metric Test Artefact. The use of this test object provides quality control through a quantitative assessment of 3D imaging sensors. It will enable users to give precise specifications for the spatial resolution and geometry recording they expect as the outcome of their 3D digitizing process. This will lead to the creation of high-quality 3D digital surrogates and 3D digital assets. The paper is presented in the form of a competition of teams, from which a possible winner will emerge.
Sprague, Briana N; Hyun, Jinshil; Molenaar, Peter C M
2017-01-01
Invariance of intelligence across age is often assumed but infrequently explicitly tested. Horn and McArdle (1992) tested measurement invariance of intelligence; their model provided adequate fit but may not have considered all relevant aspects, such as sub-test differences. The goal of the current paper is to explore age-related invariance of the WAIS-R using an alternative model that allows direct tests of age on WAIS-R subtests. Cross-sectional data on 940 participants aged 16-75 from the WAIS-R normative values were used. Subtests examined were information, comprehension, similarities, vocabulary, picture completion, block design, picture arrangement, and object assembly. The two intelligence factors considered were fluid and crystallized intelligence. Self-reported ages were divided into young (16-22, n = 300), adult (29-39, n = 275), middle (40-60, n = 205), and older (61-75, n = 160) adult groups. Results suggested partial metric invariance holds. Although most of the subtests reflected fluid and crystallized intelligence similarly across different ages, invariance did not hold for block design on fluid intelligence and picture arrangement on crystallized intelligence for older adults. Additionally, there was evidence of a correlated residual between information and vocabulary for the young adults only. This partial metric invariance model yielded acceptable model fit compared to previously proposed invariance models of Horn and McArdle (1992). Almost complete metric invariance holds for a two-factor model of intelligence. Most of the subtests were invariant across age groups, suggesting little evidence for age-related bias in the WAIS-R. However, we did find unique relationships between two subtests and intelligence. Future studies should examine age-related differences in subtests when testing measurement invariance in intelligence.
Directory of Open Access Journals (Sweden)
Amaya L Bustinduy
2011-07-01
To date, there has been no standardized approach to the assessment of aerobic fitness among children who harbor parasites. In quantifying the disability associated with individual or multiple chronic infections, accurate measures of physical fitness are important metrics. This is because exercise intolerance, as seen with anemia and many other chronic disorders, reflects the body's inability to maintain adequate oxygen supply (VO2 max) to the motor tissues, which is frequently linked to reduced quality of life in terms of physical and job performance. The objective of our study was to examine the associations between polyparasitism, anemia, and reduced fitness in a high-risk Kenyan population using a novel implementation of the 20-meter shuttle run test (20mSRT), a well-standardized, low-technology physical fitness test. Four villages in coastal Kenya were surveyed during 2009-2010. Children 5-18 years were tested for infection with Schistosoma haematobium (Sh), malaria, filaria, and geohelminth infections by standard methods. After anthropometric and hemoglobin testing, fitness was assessed with the 20mSRT. The 20mSRT proved easy to perform, requiring only minimal staff training. Parasitology revealed a high prevalence of single and multiple parasitic infections in all villages, with Sh being the most common (25-62%). Anemia prevalence was 45-58%. Using multiply-adjusted linear modeling that accounted for household clustering, decreased aerobic capacity was significantly associated with anemia, stunting, and wasting, with some gender differences. The 20mSRT, which has excellent correlation with VO2, is a highly feasible fitness test for low-resource settings. Our results indicate impaired fitness is common in areas endemic for parasites, where, at least in part, low fitness scores are likely to result from anemia and stunting associated with chronic infection. The 20mSRT should be used as a common metric to quantify physical fitness and compare sub
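The link from 20mSRT performance to VO2 max is usually made through a stage-to-speed mapping plus a published regression. The sketch below assumes the common protocol (8.5 km/h at stage 1, 0.5 km/h increments) and the commonly cited Léger et al. (1988) regression for children and adolescents; both the protocol details and the exact coefficients are assumptions that should be verified against the original sources before use.

```python
def shuttle_speed_kmh(stage, start_speed=8.5, increment=0.5):
    """Running speed at a given 20mSRT stage, assuming the common protocol
    of 8.5 km/h at stage 1 with 0.5 km/h increments per stage."""
    return start_speed + increment * (stage - 1)

def estimate_vo2max(speed_kmh, age_years):
    """Commonly cited Leger et al. (1988) regression for ages ~8-19
    (ml/kg/min); coefficients reproduced here as an assumption."""
    return (31.025 + 3.238 * speed_kmh - 3.248 * age_years
            + 0.1536 * speed_kmh * age_years)
```

This is how a "common metric" works in practice: the raw field result (final stage reached) converts to an estimated VO2 max that is comparable across study sites.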
Hu, Bo; Kalfoglou, Yannis; Dupplaw, David; Alani, Harith; Lewis, Paul; Shadbolt, Nigel
2006-01-01
In the context of the Semantic Web, many ontology-related operations, e.g. ontology ranking, segmentation, alignment, articulation, reuse, evaluation, can be boiled down to one fundamental operation: computing the similarity and/or dissimilarity among ontological entities, and in some cases among ontologies themselves. In this paper, we review standard metrics for computing distance measures and we propose a series of semantic metrics. We give a formal account of semantic metrics drawn from a...
International Nuclear Information System (INIS)
Keeton, Charles R.; Petters, A.O.
2006-01-01
We study gravitational lensing by compact objects in gravity theories that can be written in a post-post-Newtonian (PPN) framework: i.e., the metric is static and spherically symmetric, and can be written as a Taylor series in m/r, where m is the gravitational radius of the compact object. Working invariantly, we compute corrections to standard weak-deflection lensing observables at first and second order in the perturbation parameter ε = θ/θ_E, where θ is the angular gravitational radius and θ_E is the angular Einstein ring radius of the lens. We show that the first-order corrections to the total magnification and centroid position vanish universally for gravity theories that can be written in the PPN framework. This arises from some surprising, fundamental relations among the lensing observables in PPN gravity models. We derive these relations for the image positions, magnifications, and time delays. A deep consequence is that any violation of the universal relations would signal the need for a gravity model outside the PPN framework (provided that some basic assumptions hold). In practical terms, the relations will guide observational programs to test general relativity, modified gravity theories, and possibly the cosmic censorship conjecture. We use the new relations to identify lensing observables that are accessible to current or near-future technology, and to find combinations of observables that are most useful for probing the spacetime metric. We give explicit applications to the galactic black hole, microlensing, and the binary pulsar J0737-3039
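For reference, the expansion referred to here writes a static, spherically symmetric metric as a series in m/r; a standard parametrization (coefficient names follow common PPN convention, not necessarily the paper's notation) is:

```latex
ds^2 = -A(r)\,dt^2 + B(r)\,dr^2 + r^2\,d\Omega^2,
\qquad
A(r) = 1 - 2\,\frac{m}{r} + 2\beta\left(\frac{m}{r}\right)^2 + \cdots,
\qquad
B(r) = 1 + 2\gamma\,\frac{m}{r} + \cdots
```

with β = γ = 1 recovering general relativity.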
Parra, Pablo; da Silva, Antonio; Polo, Óscar R.; Sánchez, Sebastián
2018-02-01
In this day and age, successful embedded critical software needs agile and continuous development and testing procedures. This paper presents the overall testing and code coverage metrics obtained during the unit testing procedure carried out to verify the correctness of the boot software that will run in the Instrument Control Unit (ICU) of the Energetic Particle Detector (EPD) on board Solar Orbiter. The ICU boot software is a critical part of the project, so its verification should be addressed at an early development stage; any test case missed in this process may affect the quality of the overall on-board software. According to the European Cooperation for Space Standardization (ECSS) standards, testing this kind of critical software must cover 100% of the source code statements and decision paths. This leads to the complete testing of the fault tolerance and recovery mechanisms that have to resolve every possible memory corruption or communication error brought about by the space environment. The introduced procedure enables fault injection from the beginning of the development process and makes it possible to fulfill the demanding code coverage requirements on the boot software.
Alfonso Mora, Margareth Lorena
2017-03-30
To analyse the metric properties of the Timed Get Up and Go-Modified Version Test (TGUGM) for fall-risk assessment in a group of physically active women, a sample of 202 women over 55 years of age was assessed in a cross-sectional study. The TGUGM was applied to assess fall risk. The test was analysed by comparing qualitative and quantitative information and by factor analysis. A logistic regression model explained the risk of falls according to the test components. The TGUGM was useful for assessing fall risk in the studied group. The test revealed two factors: the Get Up and the Gait with dual task. Scores below twelve points or run times above 35 seconds were associated with a high risk of falling: more than 35 seconds on the test indicated a fall probability greater than 0.50, and scores below 12 points were associated with a delay of 7 more seconds in test execution (p = 0.0016). Factor analysis of the TGUGM revealed two dimensions that can be independent predictors of the risk of falling: the Get Up, which explains between 64% and 87% of the risk of falling, and the Gait with dual task, which explains between 77% and 95%.
Microcomputer-based tests for repeated-measures: Metric properties and predictive validities
Kennedy, Robert S.; Baltzley, Dennis R.; Dunlap, William P.; Wilkes, Robert L.; Kuntz, Lois-Ann
1989-01-01
A menu of psychomotor and mental acuity tests was refined. Field applications of such a battery include, for example, a study of the effects of toxic agents or exotic environments on performance readiness, or the determination of fitness for duty. The key requirement of these tasks is that they be suitable for repeated-measures applications, so questions of stability and reliability are a continuing, central focus of this work. After the initial (practice) session, seven replications of 14 microcomputer-based performance tests (32 measures) were completed by 37 subjects. Each test in the battery had previously been shown to stabilize in less than five 90-second administrations and to possess retest reliabilities greater than r = 0.707 for three minutes of testing. However, the tests had never before been administered together as a battery, nor had they been self-administered. To provide predictive validity for intelligence measurement, the Wechsler Adult Intelligence Scale-Revised and the Wonderlic Personnel Test were obtained on the same subjects.
Anderson, D; Audrain, M; Charifoulline, Z; Dragu, M; Fuchsberger, K; Garnier, JC; Gorzawski, AA; Koza, M; Krol, K; Rowan, S; Stamos, K; Zerlauth, M
2014-01-01
The LHC magnet powering system is composed of thousands of individual components that assure safe operation with stored energies as high as 10 GJ in the superconducting LHC magnets. Each of these components has to be thoroughly commissioned following interventions and machine shutdown periods to assure its protection function in case of powering failures. As well as dependable tracking of test executions, it is vital that the executed commissioning steps and applied anal...
Bellet, Aurelien; Sebban, Marc
2015-01-01
Similarity between objects plays an important role in both human cognitive processes and artificial systems for recognition and categorization. How to appropriately measure such similarities for a given task is crucial to the performance of many machine learning, pattern recognition and data mining methods. This book is devoted to metric learning, a set of techniques to automatically learn similarity and distance functions from data that has attracted a lot of interest in machine learning and related fields in the past ten years. In this book, we provide a thorough review of the metric learnin
International Nuclear Information System (INIS)
Harper, A.F.A.; Digby, R.B.; Thong, S.P.; Lacey, F.
1978-04-01
In April 1978 a meeting of senior metrication officers, convened by the Commonwealth Science Council of the Commonwealth Secretariat, was held in London. The participants were drawn from Australia, Bangladesh, Britain, Canada, Ghana, Guyana, India, Jamaica, Papua New Guinea, Solomon Islands and Trinidad and Tobago. Among other things, the meeting resolved to develop a set of guidelines to assist countries to change to SI and to compile such guidelines in the form of a working manual.
Energy Technology Data Exchange (ETDEWEB)
Price, Phillip N.; Granderson, Jessica; Sohn, Michael; Addy, Nathan; Jump, David
2013-09-01
whose savings can be calculated with least error? 4. What is the state of public domain models, that is, how well do they perform, and what are the associated implications for whole-building measurement and verification (M&V)? Additional project objectives that were addressed as part of this study include: (1) clarification of the use cases and conditions for baseline modeling performance metrics, benchmarks and evaluation criteria, (2) providing guidance for determining customer suitability for baseline modeling, (3) describing the portfolio level effects of baseline model estimation errors, (4) informing PG&E’s development of EMIS technology product specifications, and (5) providing the analytical foundation for future studies about baseline modeling and saving effects of EMIS technologies. A final objective of this project was to demonstrate the application of the methodology, performance metrics, and test protocols with participating EMIS product vendors.
Deep Transfer Metric Learning.
Junlin Hu; Jiwen Lu; Yap-Peng Tan; Jie Zhou
2016-12-01
Conventional metric learning methods usually assume that the training and test samples are captured in similar scenarios so that their distributions are assumed to be the same. This assumption does not hold in many real visual recognition applications, especially when samples are captured across different data sets. In this paper, we propose a new deep transfer metric learning (DTML) method to learn a set of hierarchical nonlinear transformations for cross-domain visual recognition by transferring discriminative knowledge from the labeled source domain to the unlabeled target domain. Specifically, our DTML learns a deep metric network by maximizing the inter-class variations and minimizing the intra-class variations, and minimizing the distribution divergence between the source domain and the target domain at the top layer of the network. To better exploit the discriminative information from the source domain, we further develop a deeply supervised transfer metric learning (DSTML) method by including an additional objective on DTML, where the output of both the hidden layers and the top layer are optimized jointly. To preserve the local manifold of input data points in the metric space, we present two new methods, DTML with autoencoder regularization and DSTML with autoencoder regularization. Experimental results on face verification, person re-identification, and handwritten digit recognition validate the effectiveness of the proposed methods.
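The objective described above (maximize inter-class variation, minimize intra-class variation, minimize the distribution divergence between domains) can be sketched in a few lines of NumPy. This is a toy illustration of the shape of such an objective with a linear-kernel MMD term, not the paper's network or exact loss; all function names here are hypothetical.

```python
import numpy as np

def pairwise_sq_dists(X):
    # Squared Euclidean distances between all rows of X.
    G = X @ X.T
    d = np.diag(G)
    return d[:, None] + d[None, :] - 2 * G

def dtml_style_objective(Z_src, y_src, Z_tgt, alpha=1.0, beta=1.0):
    """Toy objective: intra-class compactness minus inter-class separation,
    plus a linear-kernel MMD between source and target embeddings."""
    D = pairwise_sq_dists(Z_src)
    same = y_src[:, None] == y_src[None, :]
    diff = ~same                   # different-class pairs (diagonal is False)
    np.fill_diagonal(same, False)  # drop self-pairs from the same-class set
    intra = D[same].mean()         # pull same-class pairs together
    inter = D[diff].mean()         # push different-class pairs apart
    mmd = np.sum((Z_src.mean(0) - Z_tgt.mean(0)) ** 2)
    return intra - alpha * inter + beta * mmd
```

Minimizing this quantity over the parameters of an embedding network yields compact classes that are well separated while the two domains' embedding distributions are pulled together.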
Ni, W.-T.
1972-01-01
Metric theories of gravity are compiled and classified according to the types of gravitational fields they contain, and the modes of interaction among those fields. The gravitation theories considered are classified as (1) general relativity, (2) scalar-tensor theories, (3) conformally flat theories, and (4) stratified theories with conformally flat space slices. The post-Newtonian limit of each theory is constructed and its Parametrized Post-Newtonian (PPN) values are obtained by comparing it with Will's version of the formalism. Results obtained here, when combined with experimental data and with recent work by Nordtvedt and Will and by Ni, show that, of all theories thus far examined by our group, the only currently viable ones are general relativity, the Bergmann-Wagoner scalar-tensor theory and its special cases (Nordtvedt; Brans-Dicke-Jordan), and a recent, new vector-tensor theory by Nordtvedt, Hellings, and Will.
International Nuclear Information System (INIS)
McAuley, W.A.
1984-01-01
The 18.14-metric-ton-capacity (20-ton) Load-Cell-Based Weighing System (LCBWS) prototype tested at the Oak Ridge (Tennessee) Gaseous Diffusion Plant March 20-30, 1984, is semiportable and has the potential for being highly accurate. Designed by Brookhaven National Laboratory, it can be moved to cylinders for weighing, as opposed to the widely used operating philosophy of most enrichment facilities of moving cylinders to stationary accountability scales. Composed mainly of commercially available, off-the-shelf hardware, the system's principal elements are two load cells that sense the weight (i.e., force) of a uranium hexafluoride (UF6) cylinder suspended from the LCBWS while the cylinder is being weighed. Portability is achieved by its attachment to a double-hook, overhead-bridge crane. The LCBWS prototype is designed to weigh 9.07- and 12.70-metric-ton (10- and 14-ton) UF6 cylinders. A detailed description of the LCBWS is given, design information and criteria are supplied, a testing procedure is outlined, and initial test results are reported. A major objective of the testing is to determine the reliability and accuracy of the system. Other testing objectives include the identification of (1) potential areas for system improvements and (2) procedural modifications that will reflect an improved and more efficient system. The testing procedure described includes, but is not limited to, methods that account for the temperature sensitivity of the instrumentation, the local variation in the acceleration due to gravity, and buoyancy effects. Operational and safety considerations are noted. A preliminary evaluation of the March test data indicates that the LCBWS prototype has the potential to achieve an accuracy in the vicinity of 1 kg.
Issues in Benchmark Metric Selection
Crolotte, Alain
It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
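The effect at issue can be seen in a toy run: the geometric mean damps a single slow query that dominates the arithmetic mean. The per-query times below are hypothetical, chosen only to illustrate the debate the abstract refers to.

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # Computed via logs for numerical stability with many values.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical per-query times (seconds) for a single-stream run:
# one slow outlier dominates the arithmetic mean but barely moves
# the geometric mean.
times = [1.0, 1.2, 0.8, 1.1, 100.0]
print(arithmetic_mean(times))        # 20.82
print(round(geometric_mean(times), 2))  # ≈ 2.54
```

A vendor could thus improve a geometric-mean score by tuning many fast queries while neglecting the slowest one, which is one reason the author argues for the arithmetic mean as the "real" decision-support measure.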
Coverage Metrics for Model Checking
Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)
2001-01-01
When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.
International Nuclear Information System (INIS)
Ma Zhihao; Chen Jingling
2011-01-01
In this work we study metrics of quantum states, which are natural generalizations of the usual trace metric and Bures metric. Some useful properties of the metrics are proved, such as the joint convexity and contractivity under quantum operations. Our result has a potential application in studying the geometry of quantum states as well as the entanglement detection.
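The usual trace metric that these generalizations start from is easy to sketch numerically. The example below is a minimal illustration of the standard trace distance only, not the authors' generalized metrics.

```python
import numpy as np

def trace_distance(rho, sigma):
    """T(rho, sigma) = 0.5 * ||rho - sigma||_1 for Hermitian density matrices,
    computed from the eigenvalues of the (Hermitian) difference."""
    eigvals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigvals))

rho = np.array([[1.0, 0.0], [0.0, 0.0]])    # pure state |0><0|
sigma = np.array([[0.5, 0.0], [0.0, 0.5]])  # maximally mixed state
print(trace_distance(rho, sigma))  # 0.5
```

Orthogonal pure states sit at the maximal distance 1, while identical states sit at 0, which is the sense in which the trace metric quantifies distinguishability.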
Gaba, Yaé Ulrich
2017-01-01
In this paper, we discuss recent results about generalized metric spaces and fixed point theory. We introduce the notion of $\eta$-cone metric spaces, give some topological properties and prove some fixed point theorems for contractive type maps on these spaces. In particular, we show that these $\eta$-cone metric spaces are natural generalizations of both cone metric spaces and metric type spaces.
Directory of Open Access Journals (Sweden)
Alisson Cardoso Rodrigues da Cruz
2007-12-01
Full Text Available Anamorphic fungi (Hyphomycetes) from the Chapada Diamantina: new records for Bahia State and Brazil. Anamorphic fungi, characterized by the production of asexual reproductive structures, are common inhabitants of leaf litter, where they play an important role in decomposition. The aim of this work was to survey the anamorphic fungi associated with leaf litter of plants from the Chapada Diamantina, Bahia State. Thirteen collecting expeditions took place from December 2002 to October 2003. To detect anamorphic fungi, the material was subjected to serial washing with sterile distilled water and subsequently incubated in moist chambers. Permanent slides with the reproductive structures of the specimens were mounted in PVL resin and deposited in the HUEFS herbarium. Of the 57 species of anamorphic fungi identified, nine are new records for Bahia State and five for Brazil: Fusariella atrovirens (Berk.) Sacc., Kiliophora ubiensis (Caneva & Rambelli) Kuthub. & Nawawi, Paraceratocladium silvestre Castañeda, Pleurotheciopsis setiformis Castañeda and Triscelophorus deficiens (Matsush.) Matsush. Comments and geographic distribution are included for the new records for Bahia State; descriptions and illustrations are presented for the new records for Brazil.
A case study of metric-based and scenario-driven black box testing for SAP projects
Daneva, Maia; Abran, Alain; Ormandjieva, Olga; Abu Talib, Manar; Abran, Alain; Bundschuh, Manfred; Buren, Gunter; Dumke, Reiner R.
2006-01-01
Enterprise Resource Planning (ERP) projects are perceived as mission-critical initiatives in many organizations. They are parts of business transformation programs and are instrumental in improving organizational performance. In ERP implementations, testing is an activity that is crucial in order to
Fei, Yang; Wang, Wei; He, Falin; Zhong, Kun; Wang, Zhiguo
2015-10-01
The aim of this study was to use Six Sigma(SM) (Motorola Trademark Holdings, Libertyville, IL) techniques to analyze the quality of point-of-care (POC) glucose testing measurements quantitatively and to provide suggestions for improvement. In total, 151 laboratories in China were included in this investigation in 2014. Bias and coefficient of variation were collected from an external quality assessment and an internal quality control program, respectively, for POC glucose testing organized by the National Center for Clinical Laboratories. The σ values and the Quality Goal Index were used to evaluate the performance of POC glucose meters. There were 27, 30, 57, and 37 participants in the groups using Optium Xceed™ (Abbott Diabetes Care, Alameda, CA), Accu-Chek(®) Performa (Roche, Basel, Switzerland), One Touch Ultra(®) (Abbott), and "other" meters, respectively. The median of the absolute value of the percentage difference varied among different lots and different groups. Among all the groups, the Abbott One Touch Ultra group had the smallest median of the absolute value of the percentage difference except for lot 201411, whereas the "other" group had the largest median in all five lots. More than 85% of participating laboratories satisfied the total allowable error (TEa) requirement in International Organization for Standardization standard 15197:2013, and 85.43% (129/151) of laboratories obtained intralaboratory coefficients of variation less than 1/3 TEa. However, Six Sigma techniques suggested that 41.72% (63/151) to 65.56% (99/151) of the laboratories needed to improve their POC glucose testing performance, in either precision, trueness, or both. Laboratories should pay more attention to the practice of POC glucose testing and take action to improve their performance. Only in this way can POC glucose testing really function well in clinical practice.
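For reference, σ values in laboratory quality studies of this kind are conventionally derived from the allowable error, bias, and imprecision as σ = (TEa − |bias|)/CV. The sketch below uses that conventional formula with hypothetical numbers; the abstract does not state the exact computation the authors used.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Conventional laboratory Six Sigma metric, all inputs in percent:
    sigma = (TEa - |bias|) / CV."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical POC glucose meter with TEa = 15%, bias = 3%, CV = 3%:
print(sigma_metric(15.0, 3.0, 3.0))  # 4.0
```

Commonly, σ ≥ 6 is read as world-class performance and σ < 3 as unacceptable, which is the sense in which a laboratory can meet a TEa requirement yet still "need to improve" on the sigma scale.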
Burke, Christopher J.; Catanzarite, Joseph
2017-01-01
Quantifying the ability of a transiting planet survey to recover transit signals has commonly been accomplished through Monte-Carlo injection of transit signals into the observed data and subsequent running of the signal search algorithm (Gilliland et al., 2000; Weldrake et al., 2005; Burke et al., 2006). In order to characterize the performance of the Kepler pipeline (Twicken et al., 2016; Jenkins et al., 2017) on a sample of over 200,000 stars, two complementary injection and recovery tests are utilized:1. Injection of a single transit signal per target into the image or pixel-level data, hereafter referred to as pixel-level transit injection (PLTI), with subsequent processing through the Photometric Analysis (PA), Presearch Data Conditioning (PDC), Transiting Planet Search (TPS), and Data Validation (DV) modules of the Kepler pipeline. The PLTI quantification of the Kepler pipeline's completeness has been described previously by Christiansen et al. (2015, 2016); the completeness of the final SOC 9.3 Kepler pipeline acting on the Data Release 25 (DR25) light curves is described by Christiansen (2017).2. Injection of multiple transit signals per target into the normalized flux time series data with a subsequent transit search using a stream-lined version of the Transiting Planet Search (TPS) module. This test, hereafter referred to as flux-level transit injection (FLTI), is the subject of this document. By running a heavily modified version of TPS, FLTI is able to perform many injections on selected targets and determine in some detail which injected signals are recoverable. Significant numerical efficiency gains are enabled by precomputing the data conditioning steps at the onset of TPS and limiting the search parameter space (i.e., orbital period, transit duration, and ephemeris zero-point) to a small region around each injected transit signal.The PLTI test has the advantage that it follows transit signals through all processing steps of the Kepler pipeline, and
Le Prell, Colleen G; Brungart, Douglas S
2016-09-01
In humans, the accepted clinical standards for detecting hearing loss are the behavioral audiogram, based on the absolute detection threshold of pure-tones, and the threshold auditory brainstem response (ABR). The audiogram and the threshold ABR are reliable and sensitive measures of hearing thresholds in human listeners. However, recent results from noise-exposed animals demonstrate that noise exposure can cause substantial neurodegeneration in the peripheral auditory system without degrading pure-tone audiometric thresholds. It has been suggested that clinical measures of auditory performance conducted with stimuli presented above the detection threshold may be more sensitive than the behavioral audiogram in detecting early-stage noise-induced hearing loss in listeners with audiometric thresholds within normal limits. Supra-threshold speech-in-noise testing and supra-threshold ABR responses are reviewed here, given that they may be useful supplements to the behavioral audiogram for assessment of possible neurodegeneration in noise-exposed listeners. Supra-threshold tests may be useful for assessing the effects of noise on the human inner ear, and the effectiveness of interventions designed to prevent noise trauma. The current state of the science does not necessarily allow us to define a single set of best practice protocols. Nonetheless, we encourage investigators to incorporate these metrics into test batteries when feasible, with an effort to standardize procedures to the greatest extent possible as new reports emerge.
Cost Metric Algorithms for Internetwork Applications
1989-04-01
Naval Ocean Systems Center, Technical Report NOSC TR 1284. Approved for public release; distribution unlimited.
METRIC context unit architecture
Energy Technology Data Exchange (ETDEWEB)
Simpson, R.O.
1988-01-01
METRIC is an architecture for a simple but powerful Reduced Instruction Set Computer (RISC). Its speed comes from the simultaneous processing of several instruction streams, with instructions from the various streams being dispatched into METRIC's execution pipeline as they become available for execution. The pipeline is thus kept full, with a mix of instructions for several contexts in execution at the same time. True parallel programming is supported within a single execution unit, the METRIC Context Unit. METRIC's architecture provides for expansion through the addition of multiple Context Units and of specialized Functional Units. The architecture thus spans a range of size and performance from a single-chip microcomputer up through large and powerful multiprocessors. This research concentrates on the specification of the METRIC Context Unit at the architectural level. Performance tradeoffs made during METRIC's design are discussed, and projections of METRIC's performance are made based on simulation studies.
Canessa, Emanuele; Agnello, Michelangelo
Field-Programmable Gate Arrays have become more and more attractive to developers of mission-critical and safety-critical systems. Thanks to their reconfigurability, as well as their I/O capabilities, these devices are often employed as core logic in many different applications. On top of that, the use of soft microcontrollers can ease the complexity of some of the control logic of these devices, making it possible to develop new features without having to redesign most of the control logic involved. However, for safety-critical and mission-critical applications like aerospace and high-energy physics, these devices require a further analysis of radiation effects. The main matter of this thesis, developed in collaboration with the Conseil Européen pour la Recherche Nucléaire (CERN) A Large Ion Collider Experiment (ALICE) for the planned Inner Tracking System (ITS) Upgrade, is the fault tolerance metrics and the testing methodologies that can be applicable to sof...
Energy Technology Data Exchange (ETDEWEB)
L' Huillier, Benjamin; Shafieloo, Arman, E-mail: benjamin@kasi.re.kr, E-mail: shafieloo@kasi.re.kr [Korea Astronomy and Space Science Institute, Yuseong-gu, 776 Daedeok daero, Daejeon (Korea, Republic of)
2017-01-01
Using measurements of H(z) and d_A(z) from the Baryon Oscillation Spectroscopic Survey (BOSS) DR12 and luminosity distances from the Joint Lightcurve Analysis (JLA) compilation of supernovae (SN), we measure H_0 r_d without any model assumption. Our measurement of H_0 r_d = (10033.20 +333.10/−371.81 (SN) ± 128.19 (BAO)) km s^−1 is consistent with Planck constraints for the flat ΛCDM model. We also report that higher expansion history rates h(z) (among the possibilities) as well as lower-bound values of H_0 r_d result in better internal consistency among the independent data (H(z) r_d and d_A(z)/r_d from BAO at z = 0.32 and z = 0.57, and d_L from JLA) used in this work. This can be interpreted as an interesting and independent support of Planck cosmology without using any cosmic microwave background data. We then combine these observables to test the Friedmann-Lemaître-Robertson-Walker (FLRW) metric and the flatness of the Universe in a model-independent way at two redshifts, namely 0.32 and 0.57, by introducing a new diagnostic for flat-FLRW, Θ(z), which depends only on observables of the BAO and SN data. Our results are consistent with a flat-FLRW Universe within 2σ.
Metric diffusion along foliations
Walczak, Szymon M
2017-01-01
Up-to-date research in metric diffusion along compact foliations is presented in this book. Beginning with fundamentals from optimal transportation theory and the theory of foliations, the book moves on to cover the Wasserstein distance, the Kantorovich Duality Theorem, and the metrization of the weak topology by the Wasserstein distance. Metric diffusion is defined, the topology of the metric space is studied, and the limits of diffused metrics along compact foliations are discussed. Essentials on foliations, holonomy, heat diffusion, and compact foliations are detailed, and vital technical lemmas are proved to aid understanding. Graduate students and researchers in geometry, topology and dynamics of foliations and laminations will find this supplement useful, as it presents facts about metric diffusion along non-compact foliations and provides a full description of the limit for metrics diffused along a foliation with at least one compact leaf in dimension two.
Chistyakov, Vyacheslav
2015-01-01
Aimed toward researchers and graduate students familiar with elements of functional analysis, linear algebra, and general topology, this book contains a general study of modulars, modular spaces, and metric modular spaces. Modulars may be thought of as generalized velocity fields and serve two important purposes: generate metric spaces in a unified manner and provide a weaker convergence, the modular convergence, whose topology is non-metrizable in general. Metric modular spaces are extensions of metric spaces, metric linear spaces, and classical modular linear spaces. The topics covered include the classification of modulars, metrizability of modular spaces, modular transforms and duality between modular spaces, and metric and modular topologies. Applications illustrated in this book include: the description of superposition operators acting in modular spaces, the existence of regular selections of set-valued mappings, new interpretations of spaces of Lipschitzian and absolutely continuous mappings, the existe...
Prognostic Performance Metrics
National Aeronautics and Space Administration — This chapter presents several performance metrics for offline evaluation of prognostics algorithms. A brief overview of different methods employed for performance...
Directory of Open Access Journals (Sweden)
Kihong Kim
2018-02-01
Full Text Available Various kinds of metrics used for the quantitative evaluation of scholarly journals are reviewed. The impact factor and related metrics, including the immediacy index and the aggregate impact factor, which are provided by the Journal Citation Reports, are explained in detail. The Eigenfactor score and the article influence score are also reviewed. In addition, journal metrics such as CiteScore, Source Normalized Impact per Paper, SCImago Journal Rank, h-index, and g-index are discussed. Limitations and problems of these metrics are pointed out. We should be cautious not to rely too much on these quantitative measures when evaluating journals or researchers.
Muntinga, D.; Bernritter, S.
2017-01-01
The brand is increasingly central to the organization. It is therefore essential to measure the brand's health, performance, and development. Selecting the right brand metrics, however, is a challenge. An enormous number of metrics competes for brand managers' attention. But which
Privacy Metrics and Boundaries
L-F. Pau (Louis-François)
2005-01-01
This paper aims at defining a set of privacy metrics (quantitative and qualitative) for the relation between a privacy protector and an information gatherer. The aims of such metrics are: to allow assessing and comparing different user scenarios and their differences; for
Holographic Spherically Symmetric Metrics
Petri, Michael
The holographic principle (HP) conjectures, that the maximum number of degrees of freedom of any realistic physical system is proportional to the system's boundary area. The HP has its roots in the study of black holes. It has recently been applied to cosmological solutions. In this article we apply the HP to spherically symmetric static space-times. We find that any regular spherically symmetric object saturating the HP is subject to tight constraints on the (interior) metric, energy-density, temperature and entropy-density. Whenever gravity can be described by a metric theory, gravity is macroscopically scale invariant and the laws of thermodynamics hold locally and globally, the (interior) metric of a regular holographic object is uniquely determined up to a constant factor and the interior matter-state must follow well defined scaling relations. When the metric theory of gravity is general relativity, the interior matter has an overall string equation of state (EOS) and a unique total energy-density. Thus the holographic metric derived in this article can serve as simple interior 4D realization of Mathur's string fuzzball proposal. Some properties of the holographic metric and its possible experimental verification are discussed. The geodesics of the holographic metric describe an isotropically expanding (or contracting) universe with a nearly homogeneous matter-distribution within the local Hubble volume. Due to the overall string EOS the active gravitational mass-density is zero, resulting in a coasting expansion with Ht = 1, which is compatible with the recent GRB-data.
Chernozhukov, Victor; Hansen, Christian; Spindler, Martin
2016-01-01
In this article the package High-dimensional Metrics (\\texttt{hdm}) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e...
Schweizer, B
2005-01-01
Topics include special classes of probabilistic metric spaces, topologies, and several related structures, such as probabilistic normed and inner-product spaces. 1983 edition, updated with 3 new appendixes. Includes 17 illustrations.
National Research Council Canada - National Science Library
Olson, Teresa; Lee, Harry; Sanders, Johnnie
2002-01-01
.... We have developed the Tracker Performance Metric (TPM) specifically for this purpose. It was designed to measure the output performance, on a frame-by-frame basis, using its output position and quality...
Directory of Open Access Journals (Sweden)
2007-01-01
Many software and IT projects fail to meet their objectives for various causes, among which poor project management carries a high weight. To run projects successfully, lessons learned have to be used, historical data collected, and metrics and indicators computed and compared with those of past projects so that failure can be avoided. This paper presents some metrics that can be used for IT project management.
Mass Customization Measurements Metrics
DEFF Research Database (Denmark)
Nielsen, Kjeld; Brunø, Thomas Ditlev; Jørgensen, Kaj Asbjørn
2014-01-01
A recent survey has indicated that 17 % of companies have ceased mass customizing less than 1 year after initiating the effort. This paper presents measurements for a company's mass customization performance, utilizing metrics within the three fundamental capabilities: robust process design, choice navigation, and solution space development. A mass customizer, when assessing performance with these metrics, can identify within which areas improvement would increase competitiveness the most and enable a more efficient transition to mass customization.
The Pharmacokinetics and Efficacy of a Low-dose, Aqueous, Intranasal Scopolamine Spray
2017-09-27
Delivery device weight change was identical across subjects. Cognitive performance was measured via the Automated Neuropsychological Assessment Metrics... Reported adverse events included elevated temperature, systolic BP >140, eye dilation/blurred vision, throat irritation, fatigue (5 subjects), and hunger. ...populations using Automated Neuropsychological Assessment Metrics (ANAM) tests. Archives of Clinical Neuropsychology, 22 Suppl 1, S115-S126.
General relativity: An erfc metric
Plamondon, Réjean
2018-06-01
This paper proposes an erfc potential to incorporate in a symmetric metric. One key feature of this model is that it relies on the existence of an intrinsic physical constant σ, a star-specific proper length that scales all its surroundings. Based thereon, the new metric is used to study the space-time geometry of a static symmetric massive object, as seen from its interior. The analytical solutions to the Einstein equation are presented, highlighting the absence of singularities and discontinuities in such a model. The geodesics are derived in their second- and first-order differential formats. Recalling the slight impact of the new model on the classical general relativity tests in the solar system, a number of facts and open problems are briefly revisited on the basis of a heuristic definition of σ. Special attention is given to gravitational collapses and non-singular black holes.
Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig
2017-01-01
This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
Energy Technology Data Exchange (ETDEWEB)
Frye, Jason Neal; Veitch, Cynthia K.; Mateski, Mark Elliot; Michalski, John T.; Harris, James Mark; Trevino, Cassandra M.; Maruoka, Scott
2012-03-01
Threats are generally much easier to list than to describe, and much easier to describe than to measure. As a result, many organizations list threats. Fewer describe them in useful terms, and still fewer measure them in meaningful ways. This is particularly true in the dynamic and nebulous domain of cyber threats - a domain that tends to resist easy measurement and, in some cases, appears to defy any measurement. We believe the problem is tractable. In this report we describe threat metrics and models for characterizing threats consistently and unambiguously. The purpose of this report is to support the Operational Threat Assessment (OTA) phase of risk and vulnerability assessment. To this end, we focus on the task of characterizing cyber threats using consistent threat metrics and models. In particular, we address threat metrics and models for describing malicious cyber threats to US FCEB agencies and systems.
Adaptive metric kernel regression
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
2000-01-01
Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...
Adaptive Metric Kernel Regression
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
1998-01-01
Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
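The adaptive-metric idea in the two abstracts above can be sketched in a few lines: scale each input dimension by a length scale (the "input metric") and pick the scales that minimise a leave-one-out cross-validation estimate of the error. The Gaussian kernel, the toy data, and the candidate grid below are illustrative assumptions, not the authors' implementation:

```python
import math

def nw_predict(x, X, y, scales, exclude=None):
    # Nadaraya-Watson estimate at x with a Gaussian kernel whose
    # per-dimension length scales play the role of the input metric.
    num = den = 0.0
    for i, (xi, yi) in enumerate(zip(X, y)):
        if i == exclude:
            continue
        d2 = sum(((a - b) / s) ** 2 for a, b, s in zip(x, xi, scales))
        w = math.exp(-0.5 * d2)
        num += w * yi
        den += w
    return num / den if den > 0 else 0.0

def loo_error(X, y, scales):
    # Leave-one-out cross-validation estimate of the generalisation error.
    return sum((yi - nw_predict(xi, X, y, scales, exclude=i)) ** 2
               for i, (xi, yi) in enumerate(zip(X, y))) / len(y)

# Toy regression task: the target depends on the first dimension only;
# the second dimension is an irrelevant pseudo-random distractor.
X = [(i / 10.0, ((i * 7) % 11) / 11.0) for i in range(20)]
y = [math.sin(3.0 * a) for a, _ in X]

# Adapting the metric: a large length scale effectively switches a
# dimension off, so the search should prefer to suppress dimension 2.
candidates = [(0.2, 0.2), (0.2, 5.0), (5.0, 0.2)]
best = min(candidates, key=lambda s: loo_error(X, y, s))
```

In the full algorithm the scales are found by minimising the cross-validation estimate directly rather than by this grid search.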
Tice, Bradley S.
Metrical phonology, a linguistic process of phonological stress assessment and diagrammatic simplification of sentence and word stress, is discussed as it is found in the English language with the intention that it may be used in second language instruction. Stress is defined by its physical and acoustical correlates, and the principles of…
Engineering performance metrics
Delozier, R.; Snyder, N.
1993-03-01
Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved the normal systems design phases: conceptual design, detailed design, implementation, and integration. The lessons learned from this effort are explored in this paper; they may provide a starting point for other large engineering organizations seeking to institute a performance measurement system. To facilitate this effort, a team was chartered to assist in the development of the metrics system. This team, consisting of customers and Engineering staff members, was utilized to ensure that the needs and views of the customers were considered in the development of performance measurements. The development of a system of metrics is no different than the development of any other type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.
Metrics for Probabilistic Geometries
DEFF Research Database (Denmark)
Tosi, Alessandra; Hauberg, Søren; Vellido, Alfredo
2014-01-01
the distribution over mappings is given by a Gaussian process. We treat the corresponding latent variable model as a Riemannian manifold and we use the expectation of the metric under the Gaussian process prior to define interpolating paths and measure distance between latent points. We show how distances...
International Nuclear Information System (INIS)
Roege, Paul E.; Collier, Zachary A.; Mancillas, James; McDonagh, John A.; Linkov, Igor
2014-01-01
Energy lies at the backbone of any advanced society and constitutes an essential prerequisite for economic growth, social order and national defense. However there is an Achilles heel to today's energy and technology relationship; namely a precarious intimacy between energy and the fiscal, social, and technical systems it supports. Recently, widespread and persistent disruptions in energy systems have highlighted the extent of this dependence and the vulnerability of increasingly optimized systems to changing conditions. Resilience is an emerging concept that offers to reconcile considerations of performance under dynamic environments and across multiple time frames by supplementing traditionally static system performance measures to consider behaviors under changing conditions and complex interactions among physical, information and human domains. This paper identifies metrics useful to implement guidance for energy-related planning, design, investment, and operation. Recommendations are presented using a matrix format to provide a structured and comprehensive framework of metrics relevant to a system's energy resilience. The study synthesizes previously proposed metrics and emergent resilience literature to provide a multi-dimensional model intended for use by leaders and practitioners as they transform our energy posture from one of stasis and reaction to one that is proactive and which fosters sustainable growth. - Highlights: • Resilience is the ability of a system to recover from adversity. • There is a need for methods to quantify and measure system resilience. • We developed a matrix-based approach to generate energy resilience metrics. • These metrics can be used in energy planning, system design, and operations
Software Quality Assurance Metrics
McRae, Kalindra A.
2004-01-01
Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements with a degree of excellence and refinement of a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated; the set includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product: the process is measured to improve it, and the product is measured to increase quality throughout the life cycle of software. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce it, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of software development can be measured. If software metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects which have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects but are not currently being used by the SA team, and to report them to the Software Assurance team to see if any can be implemented in their software assurance life cycle process.
Enterprise Sustainment Metrics
2015-06-19
are negatively impacting KPIs” (Parmenter, 2010: 31). In the current state, the Air Force’s AA and PBL metrics are once again split. AA does...must have the authority to “take immediate action to rectify situations that are negatively impacting KPIs” (Parmenter, 2010: 31). 3. Measuring...highest profitability and shareholder value for each company” (2014: 273). By systematically diagramming a process, either through a swim lane flowchart
Symmetries of the dual metrics
International Nuclear Information System (INIS)
Baleanu, D.
1998-01-01
The geometric duality between the metric g_μν and a Killing tensor K_μν is studied. Conditions are found under which the symmetries of the metric g_μν and the dual metric K_μν coincide. A dual spinning space is constructed without introducing torsion. The general results are applied to the case of the Kerr-Newman metric.
Kerr metric in cosmological background
Energy Technology Data Exchange (ETDEWEB)
Vaidya, P C [Gujarat Univ., Ahmedabad (India). Dept. of Mathematics
1977-06-01
A metric satisfying Einstein's equation is given which in the vicinity of the source reduces to the well-known Kerr metric and which at large distances reduces to the Robertson-Walker metric of a homogeneous cosmological model. The radius of the event horizon of the Kerr black hole in the cosmological background is determined.
Validation of Metrics as Error Predictors
Mendling, Jan
In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.
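The regression step described above maps model metrics to an error probability through the logistic function. A minimal sketch follows; the two metrics (node count and a connector-mismatch count) and the coefficient values are hypothetical stand-ins, not parameters fitted to the EPC sample:

```python
import math

def error_probability(metrics, coef, intercept):
    # Logistic regression: P(error) = 1 / (1 + exp(-(b0 + b . x))).
    z = intercept + sum(c * x for c, x in zip(coef, metrics))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical fitted parameters for two metrics:
# number of nodes, and number of mismatched connectors.
coef, intercept = [0.05, 0.8], -3.0

p_small = error_probability([10, 0], coef, intercept)  # small, clean model
p_large = error_probability([80, 2], coef, intercept)  # large model, 2 mismatches
```

Under these stand-in coefficients the small model receives a low error probability and the large one a high probability, mirroring the idea that complexity metrics serve as error predictors.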
Comparison of luminance based metrics in different lighting conditions
DEFF Research Database (Denmark)
Wienold, J.; Kuhn, T.E.; Christoffersen, J.
In this study, we evaluate established and newly developed metrics for predicting glare using data from three different research studies. The evaluation covers two different targets: 1. How well does the user's perception of glare magnitude correlate with the prediction of the glare metrics? 2. How well do the glare metrics describe the subjects' disturbance by glare? We applied Spearman correlations, logistic regressions and an accuracy evaluation based on an ROC analysis. The results show that five of the twelve investigated metrics fail at least one of the statistical tests. The other seven metrics, CGI, modified DGI, DGP, Ev, average luminance of the image Lavg, UGP and UGR, pass all statistical tests. DGP, CGI, DGI_mod and UGP have the largest AUC and might be slightly more robust. The accuracy of the predictions of the aforementioned seven metrics for the disturbance by glare lies...
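The ROC-based accuracy evaluation used above can be reproduced with a small self-contained AUC routine (the rank-sum formulation); the toy metric scores and disturbance labels below are illustrative:

```python
def roc_auc(scores, labels):
    # Area under the ROC curve via the Mann-Whitney statistic: the
    # probability that a randomly chosen disturbed observer (label 1)
    # receives a higher metric value than an undisturbed one (label 0),
    # counting ties as one half.
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0.2, 0.4, 0.6, 0.8], [0, 0, 1, 1]))  # perfectly separating metric: 1.0
print(roc_auc([0.8, 0.3, 0.6, 0.2], [1, 0, 0, 1]))  # uninformative metric: 0.5
```

An AUC near 1.0 corresponds to a glare metric whose values cleanly separate disturbed from undisturbed subjects; 0.5 is chance level.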
Learning Low-Dimensional Metrics
Jain, Lalit; Mason, Blake; Nowak, Robert
2017-01-01
This paper investigates the theoretical foundations of metric learning, focused on several key questions that are not fully addressed in prior work: 1) we consider learning general low-dimensional (low-rank) metrics as well as sparse metrics; 2) we develop upper and lower (minimax) bounds on the generalization error; 3) we quantify the sample complexity of metric learning in terms of the dimension of the feature space and the dimension/rank of the underlying metric; 4) we also bound the accuracy ...
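A low-rank metric of the kind studied above is conventionally parametrized as M = LᵀL with L a k×d matrix, so distances are computed in a k-dimensional projection of the feature space. A minimal sketch, with illustrative matrices rather than learned ones:

```python
def lowrank_distance(x, y, L):
    # d_M(x, y) = ||L (x - y)|| with L a k x d matrix, k <= d;
    # the induced metric M = L^T L then has rank at most k.
    diff = [a - b for a, b in zip(x, y)]
    proj = [sum(row[j] * diff[j] for j in range(len(diff))) for row in L]
    return sum(v * v for v in proj) ** 0.5

L1 = [[1.0, 0.0, 0.0]]                    # rank-1 metric: only dim 1 counts
L2 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # rank-2 metric: dims 1 and 2
```

Constraining the rank k (or the sparsity of M) is what reduces the sample complexity bounds discussed in the abstract.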
Metrics with vanishing quantum corrections
International Nuclear Information System (INIS)
Coley, A A; Hervik, S; Gibbons, G W; Pope, C N
2008-01-01
We investigate solutions of the classical Einstein or supergravity equations that solve any set of quantum corrected Einstein equations in which the Einstein tensor plus a multiple of the metric is equated to a symmetric conserved tensor T_μν(g_αβ, ∂_τ g_αβ, ∂_τ ∂_σ g_αβ, ...) constructed from sums of terms involving contractions of the metric and powers of arbitrary covariant derivatives of the curvature tensor. A classical solution, such as an Einstein metric, is called universal if, when evaluated on that Einstein metric, T_μν is a multiple of the metric. A Ricci flat classical solution is called strongly universal if, when evaluated on that Ricci flat metric, T_μν vanishes. It is well known that pp-waves in four spacetime dimensions are strongly universal. We focus attention on a natural generalization; Einstein metrics with holonomy Sim(n - 2) in which all scalar invariants are zero or constant. In four dimensions we demonstrate that the generalized Ghanam-Thompson metric is weakly universal and that the Goldberg-Kerr metric is strongly universal; indeed, we show that universality extends to all four-dimensional Sim(2) Einstein metrics. We also discuss generalizations to higher dimensions
Metric Learning for Hyperspectral Image Segmentation
Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca
2011-01-01
We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
Performance metrics for the evaluation of hyperspectral chemical identification systems
Truslow, Eric; Golowich, Steven; Manolakis, Dimitris; Ingle, Vinay
2016-02-01
Remote sensing of chemical vapor plumes is a difficult but important task for many military and civilian applications. Hyperspectral sensors operating in the long-wave infrared regime have well-demonstrated detection capabilities. However, the identification of a plume's chemical constituents, based on a chemical library, is a multiple hypothesis testing problem which standard detection metrics do not fully describe. We propose using an additional performance metric for identification based on the so-called Dice index. Our approach partitions and weights a confusion matrix to develop both the standard detection metrics and identification metric. Using the proposed metrics, we demonstrate that the intuitive system design of a detector bank followed by an identifier is indeed justified when incorporating performance information beyond the standard detection metrics.
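The Dice index underlying the proposed identification metric compares the set of reported chemicals against the true plume constituents. A minimal sketch, with made-up chemical names:

```python
def dice_index(identified, truth):
    # Dice index: 2|A & B| / (|A| + |B|); 1.0 is a perfect report,
    # 0.0 a report sharing nothing with the true constituent set.
    identified, truth = set(identified), set(truth)
    if not identified and not truth:
        return 1.0
    return 2 * len(identified & truth) / (len(identified) + len(truth))

# One false alarm (CH4) against two correctly identified constituents.
score = dice_index({"SF6", "NH3", "CH4"}, {"SF6", "NH3"})  # 0.8
```

Unlike a per-chemical detection rate, this single number penalizes both missed constituents and false alarms, which is why it complements the standard detection metrics.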
Metrics for Business Process Models
Mendling, Jan
Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument would imply that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores to what extent certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.
Sharp metric obstructions for quasi-Einstein metrics
Case, Jeffrey S.
2013-02-01
Using the tractor calculus to study smooth metric measure spaces, we adapt results of Gover and Nurowski to give sharp metric obstructions to the existence of quasi-Einstein metrics on suitably generic manifolds. We do this by introducing an analogue of the Weyl tractor W to the setting of smooth metric measure spaces. The obstructions we obtain can be realized as tensorial invariants which are polynomial in the Riemann curvature tensor and its divergence. By taking suitable limits of their tensorial forms, we then find obstructions to the existence of static potentials, generalizing to higher dimensions a result of Bartnik and Tod, and to the existence of potentials for gradient Ricci solitons.
Completion of a Dislocated Metric Space
Directory of Open Access Journals (Sweden)
P. Sumati Kumari
2015-01-01
We provide a construction for the completion of a dislocated metric space (abbreviated d-metric space); we also prove that the completion of the metric associated with a d-metric coincides with the metric associated with the completion of the d-metric.
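A dislocated metric drops only the axiom d(x, x) = 0; the remaining axioms can be checked mechanically on a finite set. The example below, d(x, y) = max(x, y) on non-negative numbers, is a standard illustration chosen for this sketch, not taken from the paper:

```python
from itertools import product

def is_dislocated_metric(points, d):
    # d-metric axioms: d(x, y) = 0 implies x = y (but d(x, x) may be
    # non-zero), symmetry, and the triangle inequality.
    for x, y in product(points, repeat=2):
        if d(x, y) == 0 and x != y:
            return False
        if d(x, y) != d(y, x):
            return False
    return all(d(x, z) <= d(x, y) + d(y, z)
               for x, y, z in product(points, repeat=3))

pts = [0, 1, 2, 3]
d = lambda x, y: max(x, y)  # a d-metric but not a metric: d(2, 2) = 2
```

Completion then proceeds much as for ordinary metric spaces, with Cauchy sequences defined relative to d.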
Metric adjusted skew information
DEFF Research Database (Denmark)
Hansen, Frank
2008-01-01
We extend the concept of Wigner-Yanase-Dyson skew information to something we call "metric adjusted skew information" (of a state with respect to a conserved observable). This "skew information" is intended to be a non-negative quantity bounded by the variance (of an observable in a state) that vanishes for observables commuting with the state. We show that the skew information is a convex function on the manifold of states. It also satisfies other requirements, proposed by Wigner and Yanase, for an effective measure-of-information content of a state relative to a conserved observable. We establish a connection between the geometrical formulation of quantum statistics as proposed by Chentsov and Morozova and measures of quantum information as introduced by Wigner and Yanase and extended in this article. We show that the set of normalized Morozova-Chentsov functions describing the possible...
Performance evaluation of routing metrics for wireless mesh networks
CSIR Research Space (South Africa)
Nxumalo, SL
2009-08-01
...for WMN. The routing metrics have not been compared with QoS parameters. This paper reports work in progress on a project in which researchers compare the performance of different routing metrics in a WMN using a wireless test bed. Researchers...
The metric system: An introduction
Energy Technology Data Exchange (ETDEWEB)
Lumley, S.M.
1995-05-01
On July 13, 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on July 25, 1991, President George Bush signed Executive Order 12770 which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first they examine the reasons behind the nation's conversion to the metric system. The second part of this report is on applying the metric system.
Attack-Resistant Trust Metrics
Levien, Raph
The Internet is an amazingly powerful tool for connecting people together, unmatched in human history. Yet, with that power comes great potential for spam and abuse. Trust metrics are an attempt to compute which people are trustworthy and which are likely attackers. This chapter presents two specific trust metrics developed and deployed on the Advogato Website, which is a community blog for free software developers. This real-world experience demonstrates that the trust metrics fulfilled their goals, but that for good results, it is important to match the assumptions of the abstract trust metric computation to the real-world implementation.
The metric system: An introduction
Lumley, Susan M.
On 13 Jul. 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on 25 Jul. 1991, President George Bush signed Executive Order 12770 which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first they examine the reasons behind the nation's conversion to the metric system. The second part of this report is on applying the metric system.
Metric-adjusted skew information
DEFF Research Database (Denmark)
Liang, Cai; Hansen, Frank
2010-01-01
We give a truly elementary proof of the convexity of metric-adjusted skew information following an idea of Effros. We extend earlier results of weak forms of superadditivity to general metric-adjusted skew information. Recently, Luo and Zhang introduced the notion of semi-quantum states on a bipartite system and proved superadditivity of the Wigner-Yanase-Dyson skew informations for such states. We extend this result to the general metric-adjusted skew information. We finally show that a recently introduced extension to parameter values 1 ... of (unbounded) metric-adjusted skew information....
Directory of Open Access Journals (Sweden)
Isabel Garrido
2016-04-01
The class of metric spaces (X,d) known as small-determined spaces, introduced by Garrido and Jaramillo, is properly defined by means of some type of real-valued Lipschitz functions on X. On the other hand, B-simple metric spaces introduced by Hejcman are defined in terms of some kind of bornologies of bounded subsets of X. In this note we present a common framework where both classes of metric spaces can be studied, which allows us to see not only the relationships between them but also to obtain new internal characterizations of these metric properties.
Software metrics: Software quality metrics for distributed systems. [reliability engineering
Post, J. V.
1981-01-01
Software quality metrics was extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.
Multimetric indices: How many metrics?
Multimetric indices (MMIs) often include 5 to 15 metrics, each representing a different attribute of assemblage condition, such as species diversity, tolerant taxa, and nonnative taxa. Is there an optimal number of metrics for MMIs? To explore this question, I created 1000 9-met...
Metrical Phonology: German Sound System.
Tice, Bradley S.
Metrical phonology, a linguistic process of phonological stress assessment and diagrammatic simplification of sentence and word stress, is discussed as it is found in the English and German languages. The objective is to promote use of metrical phonology as a tool for enhancing instruction in stress patterns in words and sentences, particularly in…
Extending cosmology: the metric approach
Mendoza, S.
2012-01-01
Comment: 2012, Extending Cosmology: The Metric Approach, Open Questions in Cosmology; Review article for an Intech "Open questions in cosmology" book chapter (19 pages, 3 figures). Available from: http://www.intechopen.com/books/open-questions-in-cosmology/extending-cosmology-the-metric-approach
International Nuclear Information System (INIS)
Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, Rene
2008-01-01
We develop numerical methods for approximating Ricci flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one parameter family of quintics. We also suggest several ways to extend our results
High resolution metric imaging payload
Delclaud, Y.
2017-11-01
Alcatel Space Industries has become Europe's leader in the field of high and very high resolution optical payloads, in the framework of earth observation systems able to provide military and government users with metric images from space. This leadership allowed Alcatel to propose for the export market, within a French collaboration framework, a complete space-based system for metric observation.
Energy Technology Data Exchange (ETDEWEB)
Gibbons, Gary W. [DAMTP, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA U.K. (United Kingdom); Volkov, Mikhail S., E-mail: gwg1@cam.ac.uk, E-mail: volkov@lmpt.univ-tours.fr [Laboratoire de Mathématiques et Physique Théorique, LMPT CNRS—UMR 7350, Université de Tours, Parc de Grandmont, Tours, 37200 France (France)
2017-05-01
We study solutions obtained via applying dualities and complexifications to the vacuum Weyl metrics generated by massive rods and by point masses. Rescaling them and extending to complex parameter values yields axially symmetric vacuum solutions containing singularities along circles that can be viewed as singular matter sources. These solutions have wormhole topology with several asymptotic regions interconnected by throats and their sources can be viewed as thin rings of negative tension encircling the throats. For a particular value of the ring tension the geometry becomes exactly flat although the topology remains non-trivial, so that the rings literally produce holes in flat space. To create a single ring wormhole of one metre radius one needs a negative energy equivalent to the mass of Jupiter. Further duality transformations dress the rings with the scalar field, either conventional or phantom. This gives rise to large classes of static, axially symmetric solutions, presumably including all previously known solutions for a gravity-coupled massless scalar field, as for example the spherically symmetric Bronnikov-Ellis wormholes with phantom scalar. The multi-wormholes contain infinite struts everywhere at the symmetry axes, apart from solutions with locally flat geometry.
Metrics for image segmentation
Rees, Gareth; Greenway, Phil; Morray, Denise
1998-07-01
An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state of the art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.
Metric regularity and subdifferential calculus
International Nuclear Information System (INIS)
Ioffe, A D
2000-01-01
The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces
METRICS DEVELOPMENT FOR PATENTS.
Veiga, Daniela Francescato; Ferreira, Lydia Masako
2015-01-01
To develop a proposal for patent metrics to be applied in assessing the postgraduate programs of Medicine III - Capes. From a reading and analysis of the 2013 area documents of all 48 Capes areas, a proposal for patent metrics was developed for application to Medicine III programs. Except for the areas of Biotechnology, Food Science, Biological Sciences III, Physical Education, Engineering I, III and IV, and Interdisciplinary, most areas do not adopt a scoring system for patents. The proposal was based on the Biotechnology criteria, with adaptations. In general, deposit, granting, and licensing/production are valued, in ascending order. Higher scores are also assigned to patents registered abroad and to those with student participation. The proposal can be applied to the Intellectual Production item of the evaluation form, in the Technical Production/Patents subsection. The percentages of 10% for academic programs and 40% for professional master's programs should be maintained. A program is scored Very Good when it reaches 400 points or more; Good, between 200 and 399 points; Regular, between 71 and 199 points; Weak, up to 70 points; and Insufficient, with no points.
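The score bands above map a program's points to a rating mechanically; a minimal sketch of that mapping (the function name and band-edge conventions are assumptions for illustration — only the thresholds come from the abstract):

```python
def classify_program(points: int) -> str:
    """Map a program's patent score to the proposed rating bands.

    Bands from the proposal: >= 400 Very Good; 200-399 Good;
    71-199 Regular; 1-70 Weak; 0 Insufficient (no points).
    """
    if points >= 400:
        return "Very Good"
    if points >= 200:
        return "Good"
    if points >= 71:
        return "Regular"
    if points >= 1:
        return "Weak"
    return "Insufficient"
```

The lower band edges (400, 200, 71) are treated as inclusive, matching the "between 200 and 399" phrasing of the proposal.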
Candelas, Philip; de la Ossa, Xenia; McOrist, Jock
2017-12-01
Heterotic vacua of string theory are realised, at large radius, by a compact threefold with vanishing first Chern class together with a choice of stable holomorphic vector bundle. These form a wide class of potentially realistic four-dimensional vacua of string theory. Despite all their phenomenological promise, there is little understanding of the metric on the moduli space of these vacua. What is sought is the analogue of special geometry for these vacua. The metric on the moduli space is important in phenomenology as it normalises D-terms and Yukawa couplings. It is also of interest in mathematics, since it generalises the metric, first found by Kobayashi, on the space of gauge field connections, to a more general context. Here we construct this metric, correct to first order in α′, in two ways: first by postulating a metric that is invariant under background gauge transformations of the gauge field, and also by dimensionally reducing heterotic supergravity. These methods agree and the resulting metric is Kähler, as is required by supersymmetry. Checking the metric is Kähler is intricate and the anomaly cancellation equation for the H field plays an essential role. The Kähler potential nevertheless takes a remarkably simple form: it is the Kähler potential of special geometry with the Kähler form replaced by the α′-corrected hermitian form.
Implications of Metric Choice for Common Applications of Readmission Metrics
Davies, Sheryl; Saynina, Olga; Schultz, Ellen; McDonald, Kathryn M; Baker, Laurence C
2013-01-01
Objective. To quantify the differential impact on hospital performance of three readmission metrics: all-cause readmission (ACR), 3M Potential Preventable Readmission (PPR), and Centers for Medicare and Medicaid 30-day readmission (CMS).
Development of quality metrics for ambulatory pediatric cardiology: Infection prevention.
Johnson, Jonathan N; Barrett, Cindy S; Franklin, Wayne H; Graham, Eric M; Halnon, Nancy J; Hattendorf, Brandy A; Krawczeski, Catherine D; McGovern, James J; O'Connor, Matthew J; Schultz, Amy H; Vinocur, Jeffrey M; Chowdhury, Devyani; Anderson, Jeffrey B
2017-12-01
In 2012, the American College of Cardiology's (ACC) Adult Congenital and Pediatric Cardiology Council established a program to develop quality metrics to guide ambulatory practices for pediatric cardiology. The council chose five areas on which to focus their efforts: chest pain, Kawasaki Disease, tetralogy of Fallot, transposition of the great arteries after arterial switch, and infection prevention. Here, we sought to describe the process, evaluation, and results of the Infection Prevention Committee's metric design process. The infection prevention metrics team consisted of 12 members from 11 institutions in North America. The group agreed to work on specific infection prevention topics including antibiotic prophylaxis for endocarditis, rheumatic fever, and asplenia/hyposplenism; influenza vaccination and respiratory syncytial virus prophylaxis (palivizumab); preoperative methods to reduce intraoperative infections; vaccinations after cardiopulmonary bypass; hand hygiene; and testing to identify splenic function in patients with heterotaxy. An extensive literature review was performed. When available, previously published guidelines were used fully in determining metrics. The committee chose eight metrics to submit to the ACC Quality Metric Expert Panel for review. Ultimately, metrics regarding hand hygiene and influenza vaccination recommendation for patients did not pass the RAND analysis. Both endocarditis prophylaxis metrics and the RSV/palivizumab metric passed the RAND analysis but fell out during the open comment period. Three metrics passed all analyses, including those for antibiotic prophylaxis in patients with heterotaxy/asplenia, for influenza vaccination compliance in healthcare personnel, and for adherence to recommended regimens of secondary prevention of rheumatic fever. The lack of convincing data to guide quality improvement initiatives in pediatric cardiology is widespread, particularly in infection prevention. Despite this, three metrics were
Background metric in supergravity theories
International Nuclear Information System (INIS)
Yoneya, T.
1978-01-01
In supergravity theories, we investigate the conformal anomaly of the path-integral determinant and the problem of fermion zero modes in the presence of a nontrivial background metric. Except in SO(3) -invariant supergravity, there are nonvanishing conformal anomalies. As a consequence, amplitudes around the nontrivial background metric contain unpredictable arbitrariness. The fermion zero modes which are explicitly constructed for the Euclidean Schwarzschild metric are interpreted as an indication of the supersymmetric multiplet structure of a black hole. The degree of degeneracy of a black hole is 2/sup 4n/ in SO(n) supergravity
Generalized Painleve-Gullstrand metrics
Energy Technology Data Exchange (ETDEWEB)
Lin Chunyu [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China)], E-mail: l2891112@mail.ncku.edu.tw; Soo Chopin [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China)], E-mail: cpsoo@mail.ncku.edu.tw
2009-02-02
An obstruction to the implementation of spatially flat Painleve-Gullstrand (PG) slicings is demonstrated, and explicitly discussed for Reissner-Nordstroem and Schwarzschild-anti-deSitter spacetimes. Generalizations of PG slicings which are not spatially flat but which remain regular at the horizons are introduced. These metrics can be obtained from standard spherically symmetric metrics by physical Lorentz boosts. With these generalized PG metrics, problematic contributions to the imaginary part of the action in the Parikh-Wilczek derivation of Hawking radiation due to the obstruction can be avoided.
Daylight metrics and energy savings
Energy Technology Data Exchange (ETDEWEB)
Mardaljevic, John; Heschong, Lisa; Lee, Eleanor
2009-12-31
The drive towards sustainable, low-energy buildings has increased the need for simple, yet accurate methods to evaluate whether a daylit building meets minimum standards for energy and human comfort performance. Current metrics do not account for the temporal and spatial aspects of daylight, nor of occupants comfort or interventions. This paper reviews the historical basis of current compliance methods for achieving daylit buildings, proposes a technical basis for development of better metrics, and provides two case study examples to stimulate dialogue on how metrics can be applied in a practical, real-world context.
Next-Generation Metrics: Responsible Metrics & Evaluation for Open Science
Energy Technology Data Exchange (ETDEWEB)
Wilsdon, J.; Bar-Ilan, J.; Peters, I.; Wouters, P.
2016-07-01
Metrics evoke a mixed reaction from the research community. A commitment to using data to inform decisions makes some enthusiastic about the prospect of granular, real-time analysis of research and its wider impacts. Yet we only have to look at the blunt use of metrics such as journal impact factors, h-indices and grant income targets to be reminded of the pitfalls. Some of the most precious qualities of academic culture resist simple quantification, and individual indicators often struggle to do justice to the richness and plurality of research. Too often, poorly designed evaluation criteria are “dominating minds, distorting behaviour and determining careers” (Lawrence, 2007). Metrics hold real power: they are constitutive of values, identities and livelihoods. How to exercise that power to more positive ends has been the focus of several recent and complementary initiatives, including the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto and The Metric Tide (a UK government review of the role of metrics in research management and assessment). Building on these initiatives, the European Commission, under its new Open Science Policy Platform, is now looking to develop a framework for responsible metrics for research management and evaluation, which can be incorporated into the successor framework to Horizon 2020. (Author)
Chernozhukov, Victor; Hansen, Chris; Spindler, Martin
2016-01-01
The package High-dimensional Metrics (\\Rpackage{hdm}) is an evolving collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e.g., treatment or poli...
2017-03-01
Keywords — Trojan; integrity; trust; quantify; hardware; assurance; verification; metrics; reference; quality; profile. I. INTRODUCTION A. The Rising...as a framework for benchmarking Trusted Part certifications. Previous work conducted in Trust Metric development has focused on measures at the...the lowest integrities. Based on the analysis, the DI metric shows measurable differentiation between all five Test Article Error Location Error
Zimmerman, Marianna
1975-01-01
Describes a classroom activity which involved sixth grade students in a learning situation including making ice cream, safety procedures in a science laboratory, calibrating a thermometer, using metric units of volume and mass. (EB)
Experiential space is hardly metric
Czech Academy of Sciences Publication Activity Database
Šikl, Radovan; Šimeček, Michal; Lukavský, Jiří
2008-01-01
Roč. 2008, č. 37 (2008), s. 58-58 ISSN 0301-0066. [European Conference on Visual Perception. 24.08-28.08.2008, Utrecht] R&D Projects: GA ČR GA406/07/1676 Institutional research plan: CEZ:AV0Z70250504 Keywords : visual space perception * metric and non-metric perceptual judgments * ecological validity Subject RIV: AN - Psychology
Phantom metrics with Killing spinors
Directory of Open Access Journals (Sweden)
W.A. Sabra
2015-11-01
Full Text Available We study metric solutions of Einstein–anti-Maxwell theory admitting Killing spinors. The analogue of the IWP metric which admits a space-like Killing vector is found and is expressed in terms of a complex function satisfying the wave equation in flat (2+1)-dimensional space–time. As examples, electric and magnetic Kasner spaces are constructed by allowing the solution to depend only on the time coordinate. Euclidean solutions are also presented.
Predicting class testability using object-oriented metrics
Bruntink, Magiel; Deursen, Arie
2004-01-01
In this paper we investigate factors of the testability of object-oriented software systems. The starting point is given by a study of the literature to obtain both an initial model of testability and existing OO metrics related to testability. Subsequently, these metrics are evaluated by means of two case studies of large Java systems for which JUnit test cases exist. The goal of this paper is to define and evaluate a set of metrics that can be used to assess the testability of t...
Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates
International Nuclear Information System (INIS)
Perfetti, Christopher M.; Rearden, Bradley T.
2015-01-01
This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies and their potential for identifying undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the potential to accurately predict the behavior of undersampling biases in the responses examined in this study.
Scalar-metric and scalar-metric-torsion gravitational theories
International Nuclear Information System (INIS)
Aldersley, S.J.
1977-01-01
The techniques of dimensional analysis and of the theory of tensorial concomitants are employed to study field equations in gravitational theories which incorporate scalar fields of the Brans-Dicke type. Within the context of scalar-metric gravitational theories, a uniqueness theorem for the geometric (or gravitational) part of the field equations is proven and a Lagrangian is determined which is uniquely specified by dimensional analysis. Within the context of scalar-metric-torsion gravitational theories a uniqueness theorem for field Lagrangians is presented and the corresponding Euler-Lagrange equations are given. Finally, an example of a scalar-metric-torsion theory is presented which is similar in many respects to the Brans-Dicke theory and the Einstein-Cartan theory
Regge calculus from discontinuous metrics
International Nuclear Information System (INIS)
Khatsymovsky, V.M.
2003-01-01
Regge calculus is considered as a particular case of a more general system in which the link lengths of any two neighbouring 4-tetrahedra do not necessarily coincide on their common face. This system is treated as one described by a metric discontinuous on the faces. In the superspace of all discontinuous metrics, the Regge calculus metrics form a hypersurface defined by continuity conditions. Quantum theory of the discontinuous metric system is assumed to be fixed somehow in the form of a quantum measure on (the space of functionals on) the superspace. The problem of reducing this measure to the Regge hypersurface is addressed. The quantum Regge calculus measure is defined from a discontinuous metric measure by inserting a δ-function-like phase factor. The requirement that continuity conditions be imposed in a 'face-independent' way fixes this factor uniquely. The term 'face-independent' means that this factor depends only on the (hyper)plane spanned by the face, not on its form and size. This requirement seems natural from the viewpoint of the existence of a well-defined continuum limit maximally free of lattice artefacts.
Symmetries of Taub-NUT dual metrics
International Nuclear Information System (INIS)
Baleanu, D.; Codoban, S.
1998-01-01
Recently geometric duality was analyzed for a metric which admits Killing tensors. An interesting example arises when the manifold has Killing-Yano tensors. The symmetries of the dual metrics in the case of Taub-NUT metric are investigated. Generic and non-generic symmetries of dual Taub-NUT metric are analyzed
Survey of source code metrics for evaluating testability of object oriented systems
Shaheen, Muhammad Rabee; Du Bousquet, Lydie
2010-01-01
Software testing is costly in terms of time and funds. Testability is a software characteristic that aims at producing systems that are easy to test. Several metrics have been proposed to identify testability weaknesses, but it is sometimes difficult to be convinced that those metrics are really related to testability. This article is a critical survey of the source-code-based metrics proposed in the literature for object-oriented software testability. It underlines the necessity to provide test...
International Nuclear Information System (INIS)
Vaidya, P.C.; Patel, L.K.; Bhatt, P.V.
1976-01-01
Using Galilean time and retarded distance as coordinates, the usual Kerr metric is expressed in a form similar to the Newman-Unti-Tamburino (NUT) metric. The combined Kerr-NUT metric is then investigated. In addition to the Kerr and NUT solutions of Einstein's equations, three other types of solutions are derived. These are (i) the radiating Kerr solution, (ii) the radiating NUT solution satisfying R_{ik} = σ ξ_i ξ_k with ξ_i ξ^i = 0, and (iii) the associated Kerr solution satisfying R_{ik} = 0. Solution (i) is distinct from and simpler than the one reported earlier by Vaidya and Patel (Phys. Rev. D 7, 3590 (1973)). Solutions (ii) and (iii) give line elements which have the axis of symmetry as a singular line. (author)
Complexity Metrics for Workflow Nets
DEFF Research Database (Denmark)
Lassen, Kristian Bisgaard; van der Aalst, Wil M.P.
2009-01-01
analysts have difficulties grasping the dynamics implied by a process model. Recent empirical studies show that people make numerous errors when modeling complex business processes, e.g., about 20 percent of the EPCs in the SAP reference model have design flaws resulting in potential deadlocks, livelocks, etc. It seems obvious that the complexity of the model contributes to design errors and a lack of understanding. It is not easy to measure complexity, however. This paper presents three complexity metrics that have been implemented in the process analysis tool ProM. The metrics are defined for a subclass of Petri nets named Workflow nets, but the results can easily be applied to other languages. To demonstrate the applicability of these metrics, we have applied our approach and tool to 262 relatively complex Protos models made in the context of various student projects. This allows us to validate...
Image characterization metrics for muon tomography
Luo, Weidong; Lehovich, Andre; Anashkin, Edward; Bai, Chuanyong; Kindem, Joel; Sossong, Michael; Steiger, Matt
2014-05-01
Muon tomography uses naturally occurring cosmic rays to detect nuclear threats in containers. Currently there are no systematic image characterization metrics for muon tomography. We propose a set of image characterization methods to quantify the imaging performance of muon tomography. These methods include tests of spatial resolution, uniformity, contrast, signal to noise ratio (SNR) and vertical smearing. Simulated phantom data and analysis methods were developed to evaluate metric applicability. Spatial resolution was determined as the FWHM of the point spread functions along the X, Y and Z axes for 2.5 cm tungsten cubes. Uniformity was measured by drawing a volume of interest (VOI) within a large water phantom and defined as the standard deviation of voxel values divided by the mean voxel value. Contrast was defined as the peak signals of a set of tungsten cubes divided by the mean voxel value of the water background. SNR was defined as the peak signals of cubes divided by the standard deviation (noise) of the water background. Vertical smearing, i.e. vertical thickness blurring along the zenith axis for a set of 2 cm thick tungsten plates, was defined as the FWHM of the vertical spread function for the plate. These image metrics provided a useful tool to quantify the basic imaging properties for muon tomography.
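The uniformity, contrast and SNR definitions above are simple ratios over voxel populations, so they can be sketched directly (a minimal pure-Python illustration; the function names and the toy values below are assumptions, only the ratio definitions come from the abstract):

```python
from statistics import mean, pstdev

def uniformity(voi):
    """Standard deviation of VOI voxel values divided by their mean."""
    return pstdev(voi) / mean(voi)

def contrast(cube_peaks, background):
    """Peak cube signals divided by the mean voxel value of the background."""
    return mean(cube_peaks) / mean(background)

def snr(cube_peaks, background):
    """Peak cube signals divided by the background noise (standard deviation)."""
    return mean(cube_peaks) / pstdev(background)

# Toy voxel values, not real reconstruction data.
water = [0.9, 1.0, 1.1, 1.0]
tungsten_peaks = [5.2, 4.9, 5.1]
```

For the same peak signal, SNR exceeds contrast whenever the background noise is smaller than the background mean, which is why the two are reported separately.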
The uniqueness of the Fisher metric as information metric
Czech Academy of Sciences Publication Activity Database
Le, Hong-Van
2017-01-01
Roč. 69, č. 4 (2017), s. 879-896 ISSN 0020-3157 Institutional support: RVO:67985840 Keywords : Chentsov’s theorem * mixed topology * monotonicity of the Fisher metric Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.049, year: 2016 https://link.springer.com/article/10.1007%2Fs10463-016-0562-0
Thermodynamic metrics and optimal paths.
Sivak, David A; Crooks, Gavin E
2012-05-11
A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
Invariant metrics for Hamiltonian systems
International Nuclear Information System (INIS)
Rangarajan, G.; Dragt, A.J.; Neri, F.
1991-05-01
In this paper, invariant metrics are constructed for Hamiltonian systems. These metrics give rise to norms on the space of homogeneous polynomials of phase-space variables. For an accelerator lattice described by a Hamiltonian, these norms characterize the nonlinear content of the lattice. Therefore, the performance of the lattice can be improved by minimizing the norm as a function of parameters describing the beam-line elements in the lattice. A four-fold increase in the dynamic aperture of a model FODO cell is obtained using this procedure. 7 refs
Generalization of Vaidya's radiation metric
Energy Technology Data Exchange (ETDEWEB)
Gleiser, R J; Kozameh, C N [Universidad Nacional de Cordoba (Argentina). Instituto de Matematica, Astronomia y Fisica
1981-11-01
In this paper it is shown that if Vaidya's radiation metric is considered from the point of view of kinetic theory in general relativity, the corresponding phase space distribution function can be generalized in a particular way. The new family of spherically symmetric radiation metrics obtained contains Vaidya's as a limiting situation. The Einstein field equations are solved in a "comoving" coordinate system. Two arbitrary functions of a single variable are introduced in the process of solving these equations. Particular examples considered are a stationary solution, a nonvacuum solution depending on a single parameter, and several limiting situations.
Technical Privacy Metrics: a Systematic Survey
Wagner, Isabel; Eckhoff, David
2018-01-01
The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature makes an informed choice of metrics challenging. As a result, instead of using existing metrics, n...
Directory of Open Access Journals (Sweden)
Bessem Samet
2013-01-01
Full Text Available In 2005, Mustafa and Sims (2006) introduced and studied a new class of generalized metric spaces, which are called G-metric spaces, as a generalization of metric spaces. We establish some useful propositions to show that many fixed point theorems on (nonsymmetric) G-metric spaces given recently by many authors follow directly from well-known theorems on metric spaces. Our technique can be easily extended to other results, as shown in the application.
DLA Energy Biofuel Feedstock Metrics Study
2012-12-11
moderately/highly invasive Metric 2: Genetically modified organism (GMO) hazard, Yes/No and Hazard Category Metric 3: Species hybridization...4 – biofuel distribution Stage # 5 – biofuel use Metric 1: State invasiveness ranking Yes Minimal Minimal No No Metric 2: GMO hazard Yes...may utilize GMO microbial or microalgae species across the applicable biofuel life cycles (stages 1–3). The following consequence Metrics 4–6 then
Reproducibility of graph metrics in fMRI networks
Directory of Open Access Journals (Sweden)
Qawi K Telesford
2010-12-01
Full Text Available The reliability of graph metrics calculated in network analysis is essential to the interpretation of complex network organization. These graph metrics are used to deduce the small-world properties in networks. In this study, we investigated the test-retest reliability of graph metrics from functional magnetic resonance imaging (fMRI) data collected for two runs in 45 healthy older adults. Graph metrics were calculated on data for both runs and compared using intraclass correlation coefficient (ICC) statistics and Bland-Altman (BA) plots. ICC scores describe the level of absolute agreement between two measurements and provide a measure of reproducibility. For mean graph metrics, ICC scores were high for clustering coefficient (ICC=0.86), global efficiency (ICC=0.83), path length (ICC=0.79), and local efficiency (ICC=0.75); the ICC score for degree was found to be low (ICC=0.29). ICC scores were also used to generate reproducibility maps in brain space to test voxel-wise reproducibility for unsmoothed and smoothed data. Reproducibility was uniform across the brain for global efficiency and path length, but was only high in network hubs for clustering coefficient, local efficiency and degree. BA plots were used to test the measurement repeatability of all graph metrics. All graph metrics fell within the limits for repeatability. Together, these results suggest that, with the exception of degree, mean graph metrics are reproducible and suitable for clinical studies. Further exploration is warranted to better understand reproducibility across the brain on a voxel-wise basis.
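The ICC statistic used above quantifies absolute agreement between the two runs. A minimal one-way random-effects ICC(1,1) sketch for a test-retest design (an assumption for illustration — the abstract does not state which ICC variant was used):

```python
from statistics import mean

def icc_oneway(run1, run2):
    """One-way random-effects ICC(1,1) for a test-retest design with k=2 runs.

    ICC = (BMS - WMS) / (BMS + (k-1)*WMS), where BMS and WMS are the
    between-subject and within-subject mean squares.
    """
    n, k = len(run1), 2
    pairs = list(zip(run1, run2))
    grand = mean(run1 + run2)
    subj_means = [mean(p) for p in pairs]
    # Between-subject mean square: spread of subject means around the grand mean.
    bms = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    # Within-subject mean square: spread of each subject's two runs around their mean.
    wms = sum((x - m) ** 2
              for p, m in zip(pairs, subj_means) for x in p) / (n * (k - 1))
    return (bms - wms) / (bms + (k - 1) * wms)
```

A per-subject metric such as the clustering coefficient, measured once in each run, would be passed as the two lists; perfect run-to-run agreement yields an ICC of 1.
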
Separable metrics and radiating stars
Indian Academy of Sciences (India)
We study the junction condition relating the pressure to heat flux at the boundary of an accelerating and expanding spherically symmetric radiating star. We transform the junction condition to an ordinary differential equation by making a separability assumption on the metric functions in the space–time variables.
Socio-technical security metrics
Gollmann, D.; Herley, C.; Koenig, V.; Pieters, W.; Sasse, M.A.
2015-01-01
Report from Dagstuhl seminar 14491. This report documents the program and the outcomes of Dagstuhl Seminar 14491 “Socio-Technical Security Metrics”. In the domain of safety, metrics inform many decisions, from the height of new dikes to the design of nuclear plants. We can state, for example, that
Leading Gainful Employment Metric Reporting
Powers, Kristina; MacPherson, Derek
2016-01-01
This chapter will address the importance of intercampus involvement in reporting of gainful employment student-level data that will be used in the calculation of gainful employment metrics by the U.S. Department of Education. The authors will discuss why building relationships within the institution is critical for effective gainful employment…
Software metrics: The key to quality software on the NCC project
Burns, Patricia J.
1993-01-01
Network Control Center (NCC) Project metrics are captured during the implementation and testing phases of the NCCDS software development lifecycle. The metrics data collection and reporting function has interfaces with all elements of the NCC project. Close collaboration with all project elements has resulted in the development of a defined and repeatable set of metrics processes. The resulting data are used to plan and monitor release activities on a weekly basis. The use of graphical outputs facilitates the interpretation of progress and status. The successful application of metrics throughout the NCC project has been instrumental in the delivery of quality software. The use of metrics on the NCC Project supports the needs of the technical and managerial staff. This paper describes the project, the functions supported by metrics, the data that are collected and reported, how the data are used, and the improvements in the quality of deliverable software since the metrics processes and products have been in use.
Reproducibility of graph metrics of human brain functional networks.
Deuker, Lorena; Bullmore, Edward T; Smith, Marie; Christensen, Soren; Nathan, Pradeep J; Rockstroh, Brigitte; Bassett, Danielle S
2009-10-01
Graph theory provides many metrics of complex network organization that can be applied to analysis of brain networks derived from neuroimaging data. Here we investigated the test-retest reliability of graph metrics of functional networks derived from magnetoencephalography (MEG) data recorded in two sessions from 16 healthy volunteers who were studied at rest and during performance of the n-back working memory task in each session. For each subject's data at each session, we used a wavelet filter to estimate the mutual information (MI) between each pair of MEG sensors in each of the classical frequency intervals from gamma to low delta in the overall range 1-60 Hz. Undirected binary graphs were generated by thresholding the MI matrix and 8 global network metrics were estimated: the clustering coefficient, path length, small-worldness, efficiency, cost-efficiency, assortativity, hierarchy, and synchronizability. Reliability of each graph metric was assessed using the intraclass correlation (ICC). Good reliability was demonstrated for most metrics applied to the n-back data (mean ICC=0.62). Reliability was greater for metrics in lower frequency networks. Higher frequency gamma- and beta-band networks were less reliable at a global level but demonstrated high reliability of nodal metrics in frontal and parietal regions. Performance of the n-back task was associated with greater reliability than measurements on resting state data. Task practice was also associated with greater reliability. Collectively these results suggest that graph metrics are sufficiently reliable to be considered for future longitudinal studies of functional brain network changes.
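The pipeline in the abstract (threshold a connectivity matrix, then compute global graph metrics) is easy to sketch. This is a generic, pure-Python/NumPy illustration, not the authors' MEG wavelet pipeline, and the threshold value in the usage below is a placeholder:

```python
import numpy as np
from collections import deque

def threshold_graph(mi, tau):
    """Undirected binary graph: edge wherever connectivity (e.g. MI) > tau."""
    a = (np.asarray(mi) > tau).astype(int)
    np.fill_diagonal(a, 0)   # no self-loops
    return a

def clustering_coefficient(a):
    """Mean local clustering: fraction of each node's neighbour pairs
    that are themselves connected, averaged over nodes."""
    cc = []
    for i in range(len(a)):
        nbrs = np.flatnonzero(a[i])
        k = len(nbrs)
        if k < 2:
            cc.append(0.0)
            continue
        links = a[np.ix_(nbrs, nbrs)].sum() / 2   # edges among neighbours
        cc.append(links / (k * (k - 1) / 2))
    return float(np.mean(cc))

def characteristic_path_length(a):
    """Mean BFS shortest-path length over all connected ordered pairs."""
    n, total, pairs = len(a), 0, 0
    for s in range(n):
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(a[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for t, d in dist.items() if t != s)
        pairs += len(dist) - 1
    return total / pairs
```

A fully connected triangle has clustering 1 and path length 1; sparser graphs score lower clustering and longer paths, which is the contrast small-worldness measures combine.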
Group covariance and metrical theory
International Nuclear Information System (INIS)
Halpern, L.
1983-01-01
The a priori introduction of a Lie group of transformations into a physical theory has often proved to be useful; it usually serves to describe special simplified conditions before a general theory can be worked out. Newton's assumptions of absolute space and time are examples where the Euclidean group and translation group have been introduced. These groups were extended to the Galilei group and modified in the special theory of relativity to the Poincaré group to describe physics under the given conditions covariantly in the simplest way. The criticism of the a priori character leads to the formulation of the general theory of relativity. The general metric theory does not really give preference to a particular invariance group - even the principle of equivalence can be adapted to a whole family of groups. The physical laws covariantly inserted into the metric space are however adapted to the Poincaré group. 8 references
Multi-Metric Sustainability Analysis
Energy Technology Data Exchange (ETDEWEB)
Cowlin, Shannon [National Renewable Energy Lab. (NREL), Golden, CO (United States); Heimiller, Donna [National Renewable Energy Lab. (NREL), Golden, CO (United States); Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Pless, Jacquelyn [National Renewable Energy Lab. (NREL), Golden, CO (United States); Munoz, David [Colorado School of Mines, Golden, CO (United States)
2014-12-01
A readily accessible framework that allows for evaluating impacts and comparing tradeoffs among factors in energy policy, expansion planning, and investment decision making is lacking. Recognizing this, the Joint Institute for Strategic Energy Analysis (JISEA) funded an exploration of multi-metric sustainability analysis (MMSA) to provide energy decision makers with a means to make more comprehensive comparisons of energy technologies. The resulting MMSA tool lets decision makers simultaneously compare technologies and potential deployment locations.
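A common way to realise this kind of multi-metric comparison is min-max normalisation per metric followed by a weighted sum. The sketch below is generic, not the JISEA MMSA tool itself, and the metric names and weights in the usage are invented for illustration:

```python
def normalize(values, lower_is_better=False):
    """Min-max normalise one metric across technologies to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.5] * len(values)   # no spread: neutral score
    scaled = [(v - lo) / (hi - lo) for v in values]
    return [1.0 - s for s in scaled] if lower_is_better else scaled

def mmsa_scores(metric_table, weights, lower_is_better):
    """Weighted sum of normalised metrics, one score per technology."""
    cols = {m: normalize(vals, lower_is_better[m])
            for m, vals in metric_table.items()}
    n = len(next(iter(metric_table.values())))
    return [sum(weights[m] * cols[m][i] for m in metric_table)
            for i in range(n)]
```

Flipping `lower_is_better` per metric lets costs and impacts (lower is better) be combined with capacities or benefits (higher is better) on one scale, which is what makes side-by-side technology comparison possible.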
Sensory Metrics of Neuromechanical Trust.
Softky, William; Benford, Criscillia
2017-09-01
Today digital sources supply a historically unprecedented component of human sensorimotor data, the consumption of which is correlated with poorly understood maladies such as Internet addiction disorder and Internet gaming disorder. Because both natural and digital sensorimotor data share common mathematical descriptions, one can quantify our informational sensorimotor needs using the signal processing metrics of entropy, noise, dimensionality, continuity, latency, and bandwidth. Such metrics describe in neutral terms the informational diet human brains require to self-calibrate, allowing individuals to maintain trusting relationships. With these metrics, we define the trust humans experience using the mathematical language of computational models, that is, as a primitive statistical algorithm processing finely grained sensorimotor data from neuromechanical interaction. This definition of neuromechanical trust implies that artificial sensorimotor inputs and interactions that attract low-level attention through frequent discontinuities and enhanced coherence will decalibrate a brain's representation of its world over the long term by violating the implicit statistical contract for which self-calibration evolved. Our hypersimplified mathematical understanding of human sensorimotor processing as multiscale, continuous-time vibratory interaction allows equally broad-brush descriptions of failure modes and solutions. For example, we model addiction in general as the result of homeostatic regulation gone awry in novel environments (sign reversal) and digital dependency as a sub-case in which the decalibration caused by digital sensorimotor data spurs yet more consumption of them. We predict that institutions can use these sensorimotor metrics to quantify media richness to improve employee well-being; that dyads and family-size groups will bond and heal best through low-latency, high-resolution multisensory interaction such as shared meals and reciprocated touch; and
Metric reconstruction from Weyl scalars
Energy Technology Data Exchange (ETDEWEB)
Whiting, Bernard F; Price, Larry R [Department of Physics, PO Box 118440, University of Florida, Gainesville, FL 32611 (United States)
2005-08-07
The Kerr geometry has remained an elusive world in which to explore physics and delve into the more esoteric implications of general relativity. Following the discovery, by Kerr in 1963, of the metric for a rotating black hole, the most major advance has been an understanding of its Weyl curvature perturbations based on Teukolsky's discovery of separable wave equations some ten years later. In the current research climate, where experiments across the globe are preparing for the first detection of gravitational waves, a more complete understanding than concerns just the Weyl curvature is now called for. To understand precisely how comparatively small masses move in response to the gravitational waves they emit, a formalism has been developed based on a description of the whole spacetime metric perturbation in the neighbourhood of the emission region. Presently, such a description is not available for the Kerr geometry. While there does exist a prescription for obtaining metric perturbations once curvature perturbations are known, it has become apparent that there are gaps in that formalism which are still waiting to be filled. The most serious gaps include gauge inflexibility, the inability to include sources-which are essential when the emitting masses are considered-and the failure to describe the l = 0 and 1 perturbation properties. Among these latter properties of the perturbed spacetime, arising from a point mass in orbit, are the perturbed mass and axial component of angular momentum, as well as the very elusive Carter constant for non-axial angular momentum. A status report is given on recent work which begins to repair these deficiencies in our current incomplete description of Kerr metric perturbations.
Metric reconstruction from Weyl scalars
International Nuclear Information System (INIS)
Whiting, Bernard F; Price, Larry R
2005-01-01
The Kerr geometry has remained an elusive world in which to explore physics and delve into the more esoteric implications of general relativity. Following the discovery, by Kerr in 1963, of the metric for a rotating black hole, the most major advance has been an understanding of its Weyl curvature perturbations based on Teukolsky's discovery of separable wave equations some ten years later. In the current research climate, where experiments across the globe are preparing for the first detection of gravitational waves, a more complete understanding than concerns just the Weyl curvature is now called for. To understand precisely how comparatively small masses move in response to the gravitational waves they emit, a formalism has been developed based on a description of the whole spacetime metric perturbation in the neighbourhood of the emission region. Presently, such a description is not available for the Kerr geometry. While there does exist a prescription for obtaining metric perturbations once curvature perturbations are known, it has become apparent that there are gaps in that formalism which are still waiting to be filled. The most serious gaps include gauge inflexibility, the inability to include sources-which are essential when the emitting masses are considered-and the failure to describe the l = 0 and 1 perturbation properties. Among these latter properties of the perturbed spacetime, arising from a point mass in orbit, are the perturbed mass and axial component of angular momentum, as well as the very elusive Carter constant for non-axial angular momentum. A status report is given on recent work which begins to repair these deficiencies in our current incomplete description of Kerr metric perturbations
Sustainability Metrics: The San Luis Basin Project
Sustainability is about promoting humanly desirable dynamic regimes of the environment. Metrics: ecological footprint, net regional product, exergy, emergy, and Fisher Information. Adaptive management: (1) metrics assess problem, (2) specific problem identified, and (3) managemen...
Ranking metrics in gene set enrichment analysis: do they matter?
Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna
2017-05-12
There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which can affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics, including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using the k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established, i.e. the absolute value of the Moderated Welch Test statistic, Minimum Significant Difference, the absolute value of the Signal-To-Noise ratio and the Baumgartner-Weiss-Schindler test statistic. In the case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In the case of sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-To-Noise ratio gave stable results, while Baumgartner-Weiss-Schindler and Minimum Significant Difference showed better results for larger sample sizes. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has critical impact on the results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using Baumgartner
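The signal-to-noise ranking metric evaluated above is simple to state: per gene, the absolute difference of group means divided by the sum of group standard deviations. A sketch in NumPy on toy data (not the authors' MATLAB MrGSEA implementation):

```python
import numpy as np

def signal_to_noise(expr, labels):
    """|S2N| per gene: |mean_A - mean_B| / (sd_A + sd_B).
    expr: genes x samples matrix; labels: 0/1 condition per sample."""
    labels = np.asarray(labels)
    a, b = expr[:, labels == 0], expr[:, labels == 1]
    return np.abs(a.mean(axis=1) - b.mean(axis=1)) / (
        a.std(axis=1, ddof=1) + b.std(axis=1, ddof=1)
    )

def rank_genes(expr, labels):
    """Gene indices sorted from highest to lowest |S2N|."""
    return np.argsort(signal_to_noise(expr, labels))[::-1]
```

A gene whose expression shifts cleanly between conditions ranks above one with little change relative to its spread, which is exactly the ordering an enrichment analysis consumes.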
Crowdsourcing metrics of digital collections
Directory of Open Access Journals (Sweden)
Tuula Pääkkönen
2015-12-01
Full Text Available In the National Library of Finland (NLF) there are millions of digitized newspaper and journal pages, which are openly available via the public website http://digi.kansalliskirjasto.fi. To serve users better, last year the front end was completely overhauled, with the main aim of supporting crowdsourcing features, e.g. giving end-users the opportunity to create digital clippings and a personal scrapbook from the digital collections. But how can you know whether crowdsourcing has had an impact? How much have the crowdsourcing functionalities been used so far? Did crowdsourcing work? In this paper the statistics and metrics of a recent crowdsourcing effort are analysed across the different digitized material types (newspapers, journals, ephemera). The subjects, categories and keywords given by the users are analysed to see which topics are the most appealing. Some notable public uses of the crowdsourced article clippings are highlighted. These metrics give us indications of how the end-users, based on their own interests, are investigating and using the digital collections. The suggested metrics therefore illustrate the versatility of the information needs of the users, varying from citizen science to research purposes. By analysing the user patterns, we can respond to the new needs of the users by making minor changes to accommodate the most active participants, while still making the service more approachable for those who are trying out the functionalities for the first time. Participation in the clippings and annotations can enrich the materials in unexpected ways and can possibly pave the way for opportunities of using crowdsourcing more also in research contexts. This creates more opportunities for the goals of open science, since source data becomes available, making it possible for researchers to reach out to the general public for help. In the long term, utilizing, for example, text mining methods can allow these different end-user segments to
Shuler, Robert
2018-04-01
The goal of this paper is to take a completely fresh approach to metric gravity, in which the metric principle is strictly adhered to but its properties in local space-time are derived from conservation principles, not inferred from a global field equation. The global field strength variation then gains some flexibility, but only in the regime of very strong fields (2nd-order terms) whose measurement is now being contemplated. So doing provides a family of similar gravities, differing only in strong fields, which could be developed into meaningful verification targets for strong fields after the manner in which far-field variations were used in the 20th century. General Relativity (GR) is shown to be a member of the family and this is demonstrated by deriving the Schwarzschild metric exactly from a suitable field strength assumption. The method of doing so is interesting in itself because it involves only one differential equation rather than the usual four. Exact static symmetric field solutions are also given for one pedagogical alternative based on potential, and one theoretical alternative based on inertia, and the prospects of experimentally differentiating these are analyzed. Whether the method overturns the conventional wisdom that GR is the only metric theory of gravity and that alternatives must introduce additional interactions and fields is somewhat semantical, depending on whether one views the field strength assumption as a field and whether the assumption that produces GR is considered unique in some way. It is of course possible to have other fields, and the local space-time principle can be applied to field gravities which usually are weak-field approximations having only time dilation, giving them the spatial factor and promoting them to full metric theories. Though usually pedagogical, some of them are interesting from a quantum gravity perspective. Cases are noted where mass measurement errors, or distributions of dark matter, can cause one
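For reference, the Schwarzschild line element that the paper claims to derive exactly from a suitable field strength assumption is standard; in geometric units (G = c = 1) it reads:

```latex
ds^2 = -\left(1 - \frac{2M}{r}\right) dt^2
       + \left(1 - \frac{2M}{r}\right)^{-1} dr^2
       + r^2 \left( d\theta^2 + \sin^2\theta \, d\varphi^2 \right)
```

The spatial factor \((1 - 2M/r)^{-1}\) is the piece the abstract says weak-field, time-dilation-only gravities lack, and which the local space-time principle restores.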
Danilǎ, Bogdan; Harko, Tiberiu; Lobo, Francisco S. N.; Mak, M. K.
2017-02-01
We consider the internal structure and the physical properties of specific classes of neutron, quark and Bose-Einstein condensate stars in the recently proposed hybrid metric-Palatini gravity theory, which is a combination of the metric and Palatini f(R) formalisms. It turns out that the theory is very successful in accounting for the observed phenomenology, since it unifies local constraints at the Solar System level and the late-time cosmic acceleration, even if the scalar field is very light. In this paper, we derive the equilibrium equations for a spherically symmetric configuration (mass continuity and Tolman-Oppenheimer-Volkoff) in the framework of the scalar-tensor representation of the hybrid metric-Palatini theory, and we investigate their solutions numerically for different equations of state of neutron and quark matter, by adopting for the scalar field potential a Higgs-type form. It turns out that the scalar-tensor definition of the potential can be represented as a Clairaut differential equation, and provides an explicit form for f(R) given by f(R) ∼ R + Λeff, where Λeff is an effective cosmological constant. Furthermore, stellar models, described by the stiff fluid, radiation-like, bag model and the Bose-Einstein condensate equations of state are explicitly constructed in both general relativity and hybrid metric-Palatini gravity, thus allowing an in-depth comparison between the predictions of these two gravitational theories. As a general result it turns out that for all the considered equations of state, hybrid gravity stars are more massive than their general relativistic counterparts. Furthermore, two classes of stellar models corresponding to two particular choices of the functional form of the scalar field (constant value, and logarithmic form, respectively) are also investigated. Interestingly enough, in the case of a constant scalar field the equation of state of the matter takes the form of the bag model equation of state describing
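For orientation, the general-relativistic limit of the structure equations named above (mass continuity and Tolman-Oppenheimer-Volkoff) is, in units G = c = 1; the hybrid metric-Palatini theory adds scalar-field terms to these, which are not reproduced here:

```latex
\frac{dm}{dr} = 4\pi r^2 \rho ,
\qquad
\frac{dp}{dr} = -\,\frac{(\rho + p)\,\bigl(m + 4\pi r^3 p\bigr)}{r\,(r - 2m)}
```

Integrating these outward from the centre for a chosen equation of state \(p(\rho)\) until \(p = 0\) yields the stellar mass and radius being compared between the two theories.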
Metrics for Evaluation of Student Models
Pelanek, Radek
2015-01-01
Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…
Context-dependent ATC complexity metric
Mercado Velasco, G.A.; Borst, C.
2015-01-01
Several studies have investigated Air Traffic Control (ATC) complexity metrics in a search for a metric that could best capture workload. These studies have shown how daunting the search for a universal workload metric (one that could be applied in different contexts: sectors, traffic patterns,
Croitoru, Anca; Apreutesei, Gabriela; Mastorakis, Nikos E.
2017-09-01
The subject of this paper belongs to the theory of approximate metrics [23]. An approximate metric on X is a real application defined on X × X that satisfies only a part of the metric axioms. In a recent paper [23], we introduced a new type of approximate metric, named C-metric, that is an application which satisfies only two metric axioms: symmetry and triangular inequality. The remarkable fact in a C-metric space is that a topological structure induced by the C-metric can be defined. The innovative idea of this paper is that we obtain some convergence properties of a C-metric space in the absence of a metric. In this paper we investigate C-metric spaces. The paper is divided into four sections. Section 1 is for Introduction. In Section 2 we recall some concepts and preliminary results. In Section 3 we present some properties of C-metric spaces, such as convergence properties, a canonical decomposition and a C-fixed point theorem. Finally, in Section 4 some conclusions are highlighted.
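The two retained axioms are easy to check mechanically on a finite point set. A sketch (the function name and the example distance are ours for illustration, not from [23]):

```python
from itertools import product

def is_c_metric(points, d):
    """Check the two C-metric axioms on a finite point set:
    symmetry and the triangle inequality. The identity axioms
    d(x, x) = 0 and d(x, y) = 0 => x = y are NOT required."""
    sym = all(d(x, y) == d(y, x) for x, y in product(points, repeat=2))
    tri = all(d(x, z) <= d(x, y) + d(y, z)
              for x, y, z in product(points, repeat=3))
    return sym and tri
```

For example, d(x, y) = |x - y| + 1 is symmetric and satisfies the triangle inequality, yet d(x, x) = 1, so it is a C-metric but not a metric, which is precisely the kind of structure the paper studies.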
A condition metric for Eucalyptus woodland derived from expert evaluations.
Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D
2018-02-01
The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
Important LiDAR metrics for discriminating forest tree species in Central Europe
Shi, Yifang; Wang, Tiejun; Skidmore, Andrew K.; Heurich, Marco
2018-03-01
Numerous airborne LiDAR-derived metrics have been proposed for classifying tree species. Yet an in-depth ecological and biological understanding of the significance of these metrics for tree species mapping remains largely unexplored. In this paper, we evaluated the performance of 37 frequently used LiDAR metrics derived under leaf-on and leaf-off conditions, respectively, for discriminating six different tree species in a natural forest in Germany. We firstly assessed the correlation between these metrics. Then we applied a Random Forest algorithm to classify the tree species and evaluated the importance of the LiDAR metrics. Finally, we identified the most important LiDAR metrics and tested their robustness and transferability. Our results indicated that about 60% of LiDAR metrics were highly correlated to each other (|r| > 0.7). There was no statistically significant difference in tree species mapping accuracy between the use of leaf-on and leaf-off LiDAR metrics. However, combining leaf-on and leaf-off LiDAR metrics significantly increased the overall accuracy from 58.2% (leaf-on) and 62.0% (leaf-off) to 66.5% as well as the kappa coefficient from 0.47 (leaf-on) and 0.51 (leaf-off) to 0.58. Radiometric features, especially intensity related metrics, provided more consistent and significant contributions than geometric features for tree species discrimination. Specifically, the mean intensity of first-or-single returns as well as the mean value of echo width were identified as the most robust LiDAR metrics for tree species discrimination. These results indicate that metrics derived from airborne LiDAR data, especially radiometric metrics, can aid in discriminating tree species in a mixed temperate forest, and represent candidate metrics for tree species classification and monitoring in Central Europe.
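The correlation screening step (|r| > 0.7) described above can be sketched as a greedy filter over metric columns. This is a generic illustration, not the authors' workflow, and the metric names in the usage are invented:

```python
import numpy as np

def drop_correlated(metrics, names, threshold=0.7):
    """Greedy filter: keep a metric only if its |r| with every
    previously kept metric stays at or below the threshold.
    metrics: plots/trees x metrics matrix (columns are metrics)."""
    r = np.abs(np.corrcoef(metrics, rowvar=False))
    kept = []
    for j in range(metrics.shape[1]):
        if all(r[j, k] <= threshold for k in kept):
            kept.append(j)
    return [names[j] for j in kept]
```

A metric that is a rescaling of an already kept one (r = 1) is discarded, while weakly correlated metrics survive, thinning the ~60% redundancy the study reports.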
On characterizations of quasi-metric completeness
Energy Technology Data Exchange (ETDEWEB)
Dag, H.; Romaguera, S.; Tirado, P.
2017-07-01
Hu proved in [4] that a metric space (X, d) is complete if and only if for any closed subspace C of (X, d), every Banach contraction on C has a fixed point. Since then several authors have investigated the problem of characterizing metric completeness by means of fixed point theorems. Recently this problem has been studied in the more general context of quasi-metric spaces for different notions of completeness. Here we present a characterization of a kind of completeness for quasi-metric spaces by means of a quasi-metric version of Hu's theorem. (Author)
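The Banach contraction principle underlying Hu's theorem is easy to illustrate numerically: on a complete space, iterating a contraction converges to its unique fixed point. A sketch on the real line, using the classical example f = cos (a contraction near its fixed point):

```python
import math

def banach_fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{n+1} = f(x_n); for a contraction on a complete
    metric space this converges to the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

p = banach_fixed_point(math.cos, 1.0)  # converges to the Dottie number, ~0.739085
```

Completeness is what guarantees the Cauchy sequence of iterates actually has a limit in the space; on an incomplete space the iteration can "converge" to a missing point, which is why fixed point theorems can characterize completeness.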
DEFF Research Database (Denmark)
Gravesen, Jens
2015-01-01
The space of colours is a fascinating space. It is a real vector space, but no matter what inner product you put on the space, the resulting Euclidean distance does not correspond to human perception of difference between colours. In 1942 MacAdam performed the first experiments on colour matching and found the MacAdam ellipses, which are often interpreted as defining the metric tensor at their centres. An important question is whether it is possible to define colour coordinates such that the Euclidean distance in these coordinates corresponds to human perception. Using cubic splines to represent...
Product Operations Status Summary Metrics
Takagi, Atsuya; Toole, Nicholas
2010-01-01
The Product Operations Status Summary Metrics (POSSUM) computer program provides a readable view into the state of the Phoenix Operations Product Generation Subsystem (OPGS) data pipeline. POSSUM provides a user interface that can search the data store, collect product metadata, and display the results in an easily-readable layout. It was designed with flexibility in mind for support in future missions. Flexibility over various data store hierarchies is provided through the disk-searching facilities of Marsviewer. This is a proven program that has been in operational use since the first day of the Phoenix mission.
Web metrics for library and information professionals
Stuart, David
2014-01-01
This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional.Th...
Metrics for building performance assurance
Energy Technology Data Exchange (ETDEWEB)
Koles, G.; Hitchcock, R.; Sherman, M.
1996-07-01
This report documents part of the work performed in phase I of a Laboratory Directed Research and Development (LDRD) funded project entitled Building Performance Assurances (BPA). The focus of the BPA effort is to transform the way buildings are built and operated in order to improve building performance by facilitating or providing tools, infrastructure, and information. The efforts described herein focus on the development of metrics with which to evaluate building performance and for which information and optimization tools need to be developed. The classes of building performance metrics reviewed are (1) Building Services, (2) First Costs, (3) Operating Costs, (4) Maintenance Costs, and (5) Energy and Environmental Factors. The first category defines the direct benefits associated with buildings; the next three are different kinds of costs associated with providing those benefits; the last category includes concerns that are broader than direct costs and benefits to the building owner and building occupants. The level of detail of the various issues reflects the current state of knowledge in those scientific areas and the ability to determine that state of knowledge, rather than directly reflecting the importance of these issues; the report intentionally does not focus specifically on energy issues. The report describes work in progress, is intended as a resource, and can be used to indicate the areas needing more investigation. Other reports on BPA activities are also available.
The predictive ability of different customer feedback metrics for retention
de Haan, Evert; Verhoef, Peter C.; Wiesel, Thorsten
This study systematically compares different customer feedback metrics (CFMs) - namely customer satisfaction, the Net Promoter Score, and the Customer Effort Score - to test their ability to predict retention across a wide range of industries. We classify the CFMs according to a time focus (past,
Remark on application of the Banach metric method to cosmology
International Nuclear Information System (INIS)
Szydlowski, M.; Heller, M.
1982-01-01
If the cosmological equations can be reduced to the form of a dynamic system, the space of all their solutions is a Banach space. The influence of different parameters on the dynamics of the world models can be easily studied by means of the Banach metric. The method is tested for the Friedman cosmological models perturbed by the bulk viscosity. (author)
Establishing Quantitative Software Metrics in Department of the Navy Programs
2016-04-01
may indicate the use of design patterns. COUPLING This metric shows how the source code depends on the strength with which classes, methods, and...Testing Fundamentals. “Defect Density.” [Online]. Available: http://softwaretestingfundamentals.com/defect-density. 10. "Atomiq Code Similarity Finder
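The defect-density metric cited in the snippet above is conventionally defects per thousand lines of code (KLOC); a minimal sketch (the function name is ours):

```python
def defect_density(defects_found, size_loc):
    """Defects per KLOC: confirmed defects divided by code size
    in thousands of lines."""
    if size_loc <= 0:
        raise ValueError("code size must be positive")
    return defects_found / (size_loc / 1000.0)
```

For example, 30 defects in a 15,000-line component gives a density of 2.0 defects/KLOC, a figure that can be tracked across releases.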
Robust Design Impact Metrics: Measuring the effect of implementing and using Robust Design
DEFF Research Database (Denmark)
Ebro, Martin; Olesen, Jesper; Howard, Thomas J.
2014-01-01
Measuring the performance of an organisation’s product development process can be challenging due to the limited use of metrics in R&D. An organisation considering whether to use Robust Design as an integrated part of their development process may find it difficult to define whether it is relevant......, and afterwards measure the effect of having implemented it. This publication identifies and evaluates Robust Design-related metrics and finds that 2 metrics are especially useful: 1) Relative amount of R&D Resources spent after Design Verification and 2) Number of ‘change notes’ after Design Verification....... The metrics have been applied in a case company to test the assumptions made during the evaluation. It is concluded that the metrics are useful and relevant, but further work is necessary to make a proper overview and categorisation of different types of robustness related metrics....
Reliability of TMS metrics in patients with chronic incomplete spinal cord injury.
Potter-Baker, K A; Janini, D P; Frost, F S; Chabra, P; Varnerin, N; Cunningham, D A; Sankarasubramanian, V; Plow, E B
2016-11-01
Study design: Test-retest reliability analysis in individuals with chronic incomplete spinal cord injury (iSCI). Objectives: To examine the reliability of neurophysiological metrics acquired with transcranial magnetic stimulation (TMS) in individuals with chronic incomplete tetraplegia. Setting: Cleveland Clinic Foundation, Cleveland, Ohio, USA. Methods: TMS metrics of corticospinal excitability, output, inhibition and motor map distribution were collected in muscles with a higher MRC grade and muscles with a lower MRC grade on the more affected side of the body. Metrics denoting upper limb function were also collected. All metrics were collected at two sessions separated by a minimum of two weeks. Reliability between sessions was determined using Spearman's correlation coefficients and concordance correlation coefficients (CCCs). Results: TMS metrics acquired in higher-MRC-grade muscles were approximately two times more reliable than those collected in lower-MRC-grade muscles. TMS metrics of motor map output, however, demonstrated poor reliability regardless of muscle choice (P=0.34; CCC=0.51). Correlation analysis indicated that patients with more baseline impairment and/or those in a more chronic phase of iSCI demonstrated greater variability of metrics. Conclusions: In iSCI, the reliability of TMS metrics varies with the MRC grade of the tested muscle. Variability is also influenced by factors such as baseline motor function and time post-SCI. Future studies using TMS metrics in longitudinal designs to understand functional recovery should be cautious, as choice of muscle and clinical characteristics can influence reliability.
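The concordance correlation coefficient used in the study above has a simple closed form; a minimal sketch (function name and session values are illustrative, not taken from the study's data):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two sessions."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Identical sessions agree perfectly; a constant offset lowers the CCC:
print(concordance_ccc([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
print(concordance_ccc([1, 2, 3], [2, 3, 4]))        # ~0.571
```

Unlike Pearson's r, the CCC penalizes systematic shifts between sessions, which is why it is the usual choice for test-retest agreement.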
Metric approach to quantum constraints
International Nuclear Information System (INIS)
Brody, Dorje C; Hughston, Lane P; Gustavsson, Anna C T
2009-01-01
A framework for deriving equations of motion for constrained quantum systems is introduced and a procedure for its implementation is outlined. In special cases, the proposed new method, which takes advantage of the fact that the space of pure states in quantum mechanics has both a symplectic structure and a metric structure, reduces to a quantum analogue of the Dirac theory of constraints in classical mechanics. Explicit examples involving spin-1/2 particles are worked out in detail: in the first example, our approach coincides with a quantum version of the Dirac formalism, while the second example illustrates how a situation that cannot be treated by Dirac's approach can nevertheless be dealt with in the present scheme.
An Evaluation of the IntelliMetric[SM] Essay Scoring System
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine
2006-01-01
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Active Metric Learning for Supervised Classification
Kumaran, Krishnan; Papageorgiou, Dimitri; Chang, Yutong; Li, Minhan; Takáč, Martin
2018-01-01
Clustering and classification critically rely on distance metrics that provide meaningful comparisons between data points. We present mixed-integer optimization approaches to find optimal distance metrics that generalize the Mahalanobis metric extensively studied in the literature. Additionally, we generalize and improve upon leading methods by removing reliance on pre-designated "target neighbors," "triplets," and "similarity pairs." Another salient feature of our method is its ability to en...
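A Mahalanobis-type metric of the kind generalized above takes the form d(x, y) = sqrt((x − y)ᵀ M (x − y)) for a positive semidefinite matrix M; a minimal sketch of evaluating such a metric (the matrix here is assumed for illustration, not one produced by the paper's optimization):

```python
import numpy as np

def mahalanobis(x, y, M):
    """Distance under a PSD matrix M: sqrt((x - y)^T M (x - y))."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(d @ M @ d))

# With M = I the metric reduces to the Euclidean distance:
print(mahalanobis([0, 0], [3, 4], np.eye(2)))          # 5.0
# A diagonal M reweights feature axes:
print(mahalanobis([0, 0], [3, 4], np.diag([4.0, 1.0])))  # sqrt(52)
```

Metric learning then amounts to choosing M (e.g., via the mixed-integer programs the abstract mentions) so that within-class distances shrink relative to between-class distances.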
On Nakhleh's metric for reduced phylogenetic networks
Cardona, Gabriel; Llabrés, Mercè; Rosselló, Francesc; Valiente Feruglio, Gabriel Alejandro
2009-01-01
We prove that Nakhleh’s metric for reduced phylogenetic networks is also a metric on the classes of tree-child phylogenetic networks, semibinary tree-sibling time consistent phylogenetic networks, and multilabeled phylogenetic trees. We also prove that it separates distinguishable phylogenetic networks. In this way, it becomes the strongest dissimilarity measure for phylogenetic networks available so far. Furthermore, we propose a generalization of that metric that separates arbitrary phyl...
Generalized tolerance sensitivity and DEA metric sensitivity
Neralić, Luka; E. Wendell, Richard
2015-01-01
This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.
The definitive guide to IT service metrics
McWhirter, Kurt
2012-01-01
Used just as they are, the metrics in this book will bring many benefits to both the IT department and the business as a whole. Details of the attributes of each metric are given, enabling you to make the right choices for your business. You may prefer and are encouraged to design and create your own metrics to bring even more value to your business - this book will show you how to do this, too.
Common Metrics for Human-Robot Interaction
Steinfeld, Aaron; Lewis, Michael; Fong, Terrence; Scholtz, Jean; Schultz, Alan; Kaber, David; Goodrich, Michael
2006-01-01
This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress.
Chaotic inflation with metric and matter perturbations
International Nuclear Information System (INIS)
Feldman, H.A.; Brandenberger, R.H.
1989-01-01
A perturbative scheme to analyze the evolution of both metric and scalar field perturbations in an expanding universe is developed. The scheme is applied to study chaotic inflation with initial metric and scalar field perturbations present. It is shown that initial gravitational perturbations with wavelength smaller than the Hubble radius rapidly decay. The metric simultaneously picks up small perturbations determined by the matter inhomogeneities. Both are frozen in once the wavelength exceeds the Hubble radius. (orig.)
Gravitational lensing in metric theories of gravity
International Nuclear Information System (INIS)
Sereno, Mauro
2003-01-01
Gravitational lensing in metric theories of gravity is discussed. I introduce a generalized approximate metric element, inclusive of both post-post-Newtonian contributions and a gravitomagnetic field. Following Fermat's principle and standard hypotheses, I derive the time delay function and deflection angle caused by an isolated mass distribution. Several astrophysical systems are considered. In most of the cases, the gravitomagnetic correction offers the best perspectives for an observational detection. Actual measurements can distinguish different metric theories from one another only marginally.
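For orientation, the familiar leading-order deflection angle of a point mass in the parametrized post-Newtonian framework shows how different metric theories enter through a single parameter γ (γ = 1 in general relativity); this is the standard textbook expression, not a result specific to the paper:

```latex
\hat{\alpha} \;=\; \frac{1+\gamma}{2}\,\frac{4GM}{c^{2} b},
```

where $M$ is the lens mass and $b$ the impact parameter; post-post-Newtonian and gravitomagnetic terms, as in the paper, add higher-order corrections to this expression.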
About the possibility of a generalized metric
International Nuclear Information System (INIS)
Lukacs, B.; Ladik, J.
1991-10-01
The metric (the structure of space-time) may depend on the properties of the object measuring it. The case of size dependence of the metric was examined. For this dependence, the simplest possible form of the metric tensor has been constructed which fulfils the following requirements: there be two extremal characteristic scales; the metric be unique and reduce to the usual one between them; the change be sudden in the neighbourhood of these scales; and the size of the human body appear as a parameter (postulated on the basis of some philosophical arguments). Estimates have been made for the two extremal length scales according to existing observations. (author) 19 refs
Parrish, Donna; Butryn, Ryan S.; Rizzo, Donna M.
2012-01-01
We developed a methodology to predict brook trout (Salvelinus fontinalis) distribution using summer temperature metrics as predictor variables. Our analysis used long-term fish and hourly water temperature data from the Dog River, Vermont (USA). Commonly used metrics (e.g., mean, maximum, maximum 7-day maximum) tend to smooth the data so information on temperature variation is lost. Therefore, we developed a new set of metrics (called event metrics) to capture temperature variation by describing the frequency, area, duration, and magnitude of events that exceeded a user-defined temperature threshold. We used 16, 18, 20, and 22°C. We built linear discriminant models and tested and compared the event metrics against the commonly used metrics. Correct classification of the observations was 66% with event metrics and 87% with commonly used metrics. However, combined event and commonly used metrics correctly classified 92%. Of the four individual temperature thresholds, it was difficult to assess which threshold had the “best” accuracy. The 16°C threshold had slightly fewer misclassifications; however, the 20°C threshold had the fewest extreme misclassifications. Our method leveraged the volumes of existing long-term data and provided a simple, systematic, and adaptable framework for monitoring changes in fish distribution, specifically in the case of irregular, extreme temperature events.
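The event metrics described above (frequency, duration, magnitude, and area of threshold exceedances) can be sketched directly; a minimal illustration on made-up hourly temperatures (function and field names are mine, not the authors'):

```python
def threshold_events(temps, threshold):
    """Contiguous runs above a threshold -> frequency/duration/magnitude/area."""
    events, run = [], []
    for t in temps:
        if t > threshold:
            run.append(t)
        elif run:
            events.append(run)
            run = []
    if run:
        events.append(run)
    return {
        "frequency": len(events),
        "durations": [len(e) for e in events],               # hours per event
        "magnitudes": [max(e) - threshold for e in events],  # peak exceedance
        "area": sum(t - threshold for e in events for t in e),
    }

hourly = [18, 21, 23, 19, 20, 22, 22, 17]  # hourly water temperatures, °C
print(threshold_events(hourly, 20))
# {'frequency': 2, 'durations': [2, 2], 'magnitudes': [3, 2], 'area': 8}
```

Unlike a seasonal mean or maximum, these quantities preserve the irregular, extreme events the abstract argues are informative for fish distribution.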
Enhancing Authentication Models Characteristic Metrics via ...
African Journals Online (AJOL)
In this work, we derive the universal characteristic metrics set for authentication models based on security, usability and design issues. We then compute the probability of the occurrence of each characteristic metrics in some single factor and multifactor authentication models in order to determine the effectiveness of these ...
Gravitational Metric Tensor Exterior to Rotating Homogeneous ...
African Journals Online (AJOL)
The covariant and contravariant metric tensors exterior to a homogeneous spherical body rotating uniformly about a common φ axis with constant angular velocity ω is constructed. The constructed metric tensors in this gravitational field have seven non-zero distinct components. The Lagrangian for this gravitational field is ...
Invariant metric for nonlinear symplectic maps
Indian Academy of Sciences (India)
In this paper, we construct an invariant metric in the space of homogeneous polynomials of a given degree (≥ 3). The homogeneous polynomials specify a nonlinear symplectic map which in turn represents a Hamiltonian system. By minimizing the norm constructed out of this metric as a function of system parameters, we ...
Finite Metric Spaces of Strictly negative Type
DEFF Research Database (Denmark)
Hjorth, Poul G.
If a finite metric space is of strictly negative type then its transfinite diameter is uniquely realized by an infinite extent ("load vector"). Finite metric spaces that have this property include all trees, and all finite subspaces of Euclidean and Hyperbolic spaces. We prove that if the distance...
Fixed point theory in metric type spaces
Agarwal, Ravi P; O’Regan, Donal; Roldán-López-de-Hierro, Antonio Francisco
2015-01-01
Written by a team of leading experts in the field, this volume presents a self-contained account of the theory, techniques and results in metric type spaces (in particular in G-metric spaces); that is, the text approaches this important area of fixed point analysis beginning from the basic ideas of metric space topology. The text is structured so that it leads the reader from preliminaries and historical notes on metric spaces (in particular G-metric spaces) and on mappings, to Banach type contraction theorems in metric type spaces, fixed point theory in partially ordered G-metric spaces, fixed point theory for expansive mappings in metric type spaces, generalizations, present results and techniques in a very general abstract setting and framework. Fixed point theory is one of the major research areas in nonlinear analysis. This is partly due to the fact that in many real world problems fixed point theory is the basic mathematical tool used to establish the existence of solutions to problems which arise natur...
Metric solution of a spinning mass
International Nuclear Information System (INIS)
Sato, H.
1982-01-01
Studies on a particular class of asymptotically flat and stationary metric solutions, called the Kerr-Tomimatsu-Sato class, are reviewed with respect to their derivation and properties. For further study, an almost complete list of papers on the Tomimatsu-Sato metrics is given. (Auth.)
On Information Metrics for Spatial Coding.
Souza, Bryan C; Pavão, Rodrigo; Belchior, Hindiael; Tort, Adriano B L
2018-04-01
The hippocampal formation is involved in navigation, and its neuronal activity exhibits a variety of spatial correlates (e.g., place cells, grid cells). The quantification of the information encoded by spikes has been standard procedure to identify which cells have spatial correlates. For place cells, most of the established metrics derive from Shannon's mutual information (Shannon, 1948), and convey information rate in bits/s or bits/spike (Skaggs et al., 1993, 1996). Despite their widespread use, the performance of these metrics in relation to the original mutual information metric has never been investigated. In this work, using simulated and real data, we find that the current information metrics correlate less with the accuracy of spatial decoding than the original mutual information metric. We also find that the top informative cells may differ among metrics, and show a surrogate-based normalization that yields comparable spatial information estimates. Since different information metrics may identify different neuronal populations, we discuss current and alternative definitions of spatially informative cells, which affect the metric choice.
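The Skaggs et al. bits/spike metric referenced above is I = Σᵢ pᵢ (λᵢ/λ̄) log₂(λᵢ/λ̄), summed over spatial bins with occupancy probability pᵢ, firing rate λᵢ, and mean rate λ̄ = Σᵢ pᵢλᵢ; a minimal sketch (bin values are illustrative):

```python
import math

def skaggs_info_per_spike(occupancy_p, rates):
    """Spatial information in bits/spike (Skaggs et al. formulation)."""
    mean_rate = sum(p * r for p, r in zip(occupancy_p, rates))
    info = 0.0
    for p, r in zip(occupancy_p, rates):
        if r > 0:  # bins with zero rate contribute nothing
            info += p * (r / mean_rate) * math.log2(r / mean_rate)
    return info

# A cell firing only in one of four equally occupied bins carries 2 bits/spike:
print(skaggs_info_per_spike([0.25] * 4, [8.0, 0.0, 0.0, 0.0]))  # 2.0
```

A uniformly firing cell gives 0 bits/spike, which is why the metric is used to rank spatial selectivity.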
Validation of Metrics for Collaborative Systems
Directory of Open Access Journals (Sweden)
Ion IVAN
2008-01-01
This paper describes new concepts for the validation of collaborative systems metrics. It defines the quality characteristics of collaborative systems, proposes a metric to estimate the quality level of collaborative systems, and performs measurements of collaborative systems quality using specially designed software.
Software Power Metric Model: An Implementation | Akwukwuma ...
African Journals Online (AJOL)
... and the execution time (TIME) in each case was recorded. We then obtain the application functions point count. Our result shows that the proposed metric is computable, consistent in its use of unit, and is programming language independent. Keywords: Software attributes, Software power, measurement, Software metric, ...
Influence of Musical Enculturation on Brain Responses to Metric Deviants
Directory of Open Access Journals (Sweden)
Niels T. Haumann
2018-04-01
The ability to recognize metric accents is fundamental in both music and language perception. It has been suggested that music listeners prefer rhythms that follow simple binary meters, which are common in Western music. This means that listeners expect odd-numbered beats to be strong and even-numbered beats to be weak. In support of this, studies have shown that listeners exposed to Western music show stronger novelty- and incongruity-related P3 and irregularity-detection-related mismatch negativity (MMN) brain responses to attenuated odd- than attenuated even-numbered metric positions. Furthermore, behavioral evidence suggests that music listeners' preferences can be changed by long-term exposure to non-Western rhythms and meters, e.g., by listening to African or Balkan music. In our study, we investigated whether it might be possible to measure effects of music enculturation on neural responses to attenuated tones on specific metric positions. We compared the magnetic mismatch negativity (MMNm) to attenuated beats in a "Western group" of listeners (n = 12) mainly exposed to Western music and a "Bicultural group" of listeners (n = 13) exposed for at least 1 year to Sub-Saharan African music in addition to Western music. We found that in the "Western group" the MMNm was higher in amplitude to deviant tones on odd compared to even metric positions, but not in the "Bicultural group." In support of this finding, there was also a trend for the "Western group" to rate omitted beats as more surprising on odd than even metric positions, whereas the "Bicultural group" seemed to discriminate less between metric positions in terms of surprise ratings. Also, we observed that the overall latency of the MMNm was significantly shorter in the Bicultural group than in the Western group. These effects were not biased by possible differences in rhythm perception ability or music training, measured with the Musical Ear Test (MET).
Metrics for border management systems.
Energy Technology Data Exchange (ETDEWEB)
Duggan, Ruth Ann
2009-07-01
There are as many unique and disparate manifestations of border systems as there are borders to protect. Border Security is a highly complex system analysis problem with global, regional, national, sector, and border element dimensions for land, water, and air domains. The complexity increases with the multiple, and sometimes conflicting, missions for regulating the flow of people and goods across borders, while securing them for national security. These systems include frontier border surveillance, immigration management and customs functions that must operate in a variety of weather, terrain, operational conditions, cultural constraints, and geopolitical contexts. As part of a Laboratory Directed Research and Development Project 08-684 (Year 1), the team developed a reference framework to decompose this complex system into international/regional, national, and border elements levels covering customs, immigration, and border policing functions. This generalized architecture is relevant to both domestic and international borders. As part of year two of this project (09-1204), the team determined relevant relative measures to better understand border management performance. This paper describes those relative metrics and how they can be used to improve border management systems.
The metrics of science and technology
Geisler, Eliezer
2000-01-01
Dr. Geisler's far-reaching, unique book provides an encyclopedic compilation of the key metrics to measure and evaluate the impact of science and technology on academia, industry, and government. Focusing on such items as economic measures, patents, peer review, and other criteria, and supported by an extensive review of the literature, Dr. Geisler gives a thorough analysis of the strengths and weaknesses inherent in metric design, and in the use of the specific metrics he cites. His book has already received prepublication attention, and will prove especially valuable for academics in technology management, engineering, and science policy; industrial R&D executives and policymakers; government science and technology policymakers; and scientists and managers in government research and technology institutions. Geisler maintains that the application of metrics to evaluate science and technology at all levels illustrates the variety of tools we currently possess. Each metric has its own unique strengths and...
Smart Grid Status and Metrics Report Appendices
Energy Technology Data Exchange (ETDEWEB)
Balducci, Patrick J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Antonopoulos, Chrissi A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Clements, Samuel L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gorrissen, Willy J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kirkham, Harold [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ruiz, Kathleen A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Smith, David L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Weimar, Mark R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gardner, Chris [APQC, Houston, TX (United States); Varney, Jeff [APQC, Houston, TX (United States)
2014-07-01
A smart grid uses digital power control and communication technology to improve the reliability, security, flexibility, and efficiency of the electric system, from large generation through the delivery systems to electricity consumers and a growing number of distributed generation and storage resources. To convey progress made in achieving the vision of a smart grid, this report uses a set of six characteristics derived from the National Energy Technology Laboratory Modern Grid Strategy. The Smart Grid Status and Metrics Report defines and examines 21 metrics that collectively provide insight into the grid’s capacity to embody these characteristics. This appendix presents papers covering each of the 21 metrics identified in Section 2.1 of the Smart Grid Status and Metrics Report. These metric papers were prepared in advance of the main body of the report and collectively form its informational backbone.
Metrics for Polyphonic Sound Event Detection
Directory of Open Access Journals (Sweden)
Annamaria Mesaros
2016-05-01
This paper presents and discusses various metrics proposed for evaluation of polyphonic sound event detection systems used in realistic situations where there are typically multiple sound sources active simultaneously. The system output in this case contains overlapping events, marked as multiple sounds detected as being active at the same time. The polyphonic system output requires a suitable procedure for evaluation against a reference. Metrics from neighboring fields such as speech recognition and speaker diarization can be used, but they need to be partially redefined to deal with the overlapping events. We present a review of the most common metrics in the field and the way they are adapted and interpreted in the polyphonic case. We discuss segment-based and event-based definitions of each metric and explain the consequences of instance-based and class-based averaging using a case study. In parallel, we provide a toolbox containing implementations of presented metrics.
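A segment-based metric of the kind reviewed above compares, per time segment, the set of reference-active and system-active event classes; a simplified sketch of segment-based F1 (aligned segments and illustrative class names assumed; the paper's toolbox handles far more detail):

```python
def segment_f1(reference, estimated):
    """Segment-based F1 over per-segment sets of active event classes."""
    tp = fp = fn = 0
    for ref, est in zip(reference, estimated):
        tp += len(ref & est)   # classes correctly detected in this segment
        fp += len(est - ref)   # spurious detections
        fn += len(ref - est)   # missed classes
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

ref = [{"speech"}, {"speech", "car"}, set()]
est = [{"speech"}, {"car"}, {"bird"}]
print(round(segment_f1(ref, est), 3))  # 0.667
```

Because counts are pooled over segments before averaging, overlapping (polyphonic) events are handled naturally, which is the adaptation the abstract describes.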
SU-G-BRB-16: Vulnerabilities in the Gamma Metric
International Nuclear Information System (INIS)
Neal, B; Siebers, J
2016-01-01
Purpose: To explore vulnerabilities in the gamma index metric that undermine its wide use as a radiation therapy quality assurance tool. Methods: 2D test field pairs (images) are created specifically to achieve high gamma passing rates, but to also include gross errors by exploiting the distance-to-agreement and percent-passing components of the metric. The first set has no requirement of clinical practicality, but is intended to expose vulnerabilities. The second set exposes clinically realistic vulnerabilities. To circumvent limitations inherent to user-specific tuning of prediction algorithms to match measurements, digital test cases are manually constructed, thereby mimicking high-quality image prediction. Results: With a 3 mm distance-to-agreement metric, changing field size by ±6 mm results in a gamma passing rate over 99%. For a uniform field, a lattice of passing points spaced 5 mm apart results in a passing rate of 100%. Exploiting the percent-passing component, a 10×10 cm² field can have a 95% passing rate when an 8 cm² (2.8×2.8 cm) highly out-of-tolerance (e.g. zero-dose) square is missing from the comparison image. For clinically realistic vulnerabilities, an arc plan for which a 2D image is created can have a >95% passing rate solely due to agreement in the lateral spillage, with the failing 5% in the critical target region. A field with an integrated boost (e.g. whole brain plus small metastases) could neglect the metastases entirely, yet still pass with a 95% threshold. All the failure modes described would be visually apparent on a gamma-map image. Conclusion: The %γ<1 metric has significant vulnerabilities. High passing rates can obscure critical faults in hypothetical and delivered radiation doses. Great caution should be used with gamma as a QA metric; users should inspect the gamma-map. Visual analysis of gamma-maps may be impractical for cine acquisition.
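The gamma index combines a dose-difference and a distance-to-agreement criterion per point; a toy 1D sketch (profiles, criteria, and function names are illustrative, not the study's 2D implementation) showing how a small spatial shift passes a 3%/3 mm test:

```python
import math

def gamma_1d(ref, evl, spacing_mm, dose_tol, dta_mm):
    """Per-point 1D gamma: minimum combined dose/distance criterion."""
    gammas = []
    for i, dr in enumerate(ref):
        best = math.inf
        for j, de in enumerate(evl):
            dist = abs(i - j) * spacing_mm
            best = min(best, math.sqrt((dist / dta_mm) ** 2
                                       + ((de - dr) / dose_tol) ** 2))
        gammas.append(best)
    return gammas

# A profile shifted by one 1 mm pixel passes a 3%/3 mm criterion everywhere:
ref = [0.0, 0.5, 1.0, 0.5, 0.0]
shifted = [0.0, 0.0, 0.5, 1.0, 0.5]
g = gamma_1d(ref, shifted, spacing_mm=1.0, dose_tol=0.03, dta_mm=3.0)
print(sum(v <= 1.0 for v in g) / len(g))  # 1.0
```

The distance search is exactly what the abstract exploits: spatial errors smaller than the distance-to-agreement tolerance are invisible to the passing rate.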
Metrical presentation boosts implicit learning of artificial grammar.
Selchenkova, Tatiana; François, Clément; Schön, Daniele; Corneyllie, Alexandra; Perrin, Fabien; Tillmann, Barbara
2014-01-01
The present study investigated whether a temporal hierarchical structure favors implicit learning. An artificial pitch grammar implemented with a set of tones was presented in two different temporal contexts, notably with either a strongly metrical structure or an isochronous structure. According to the Dynamic Attending Theory, external temporal regularities can entrain internal oscillators that guide attention over time, allowing for temporal expectations that influence perception of future events. Based on this framework, it was hypothesized that the metrical structure provides a benefit for artificial grammar learning in comparison to an isochronous presentation. Our study combined behavioral and event-related potential measurements. Behavioral results demonstrated similar learning in both participant groups. By contrast, analyses of event-related potentials showed a larger P300 component and an earlier N2 component for the strongly metrical group during the exposure phase and the test phase, respectively. These findings suggest that the temporal expectations in the strongly metrical condition helped listeners to better process the pitch dimension, leading to improved learning of the artificial grammar.
Deep Energy Retrofit Performance Metric Comparison: Eight California Case Studies
Energy Technology Data Exchange (ETDEWEB)
Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fisher, Jeremy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2014-06-01
In this paper we present the results of monitored annual energy use data from eight residential Deep Energy Retrofit (DER) case studies using a variety of performance metrics. For each home, the details of the retrofits were analyzed, diagnostic tests to characterize the home were performed and the homes were monitored for total and individual end-use energy consumption for approximately one year. Annual performance in site and source energy, as well as in carbon dioxide equivalent (CO₂e) emissions, was determined on a per-house, per-person and per-square-foot basis to examine the sensitivity to these different metrics. All eight DERs showed consistent success in achieving substantial site energy and CO₂e reductions, but some projects achieved very little, if any, source energy reduction. This problem emerged in those homes that switched from natural gas to electricity for heating and hot water, resulting in energy consumption dominated by electricity use. This demonstrates the crucial importance of selecting an appropriate metric to be used in guiding retrofit decisions. Also, due to the dynamic nature of DERs, with changes in occupancy, size, layout, and comfort, several performance metrics might be necessary to understand a project's success.
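The site/source divergence the paper observes follows directly from per-fuel site-to-source multipliers; a sketch with assumed illustrative factors (actual conversion factors vary by region and methodology, and the house numbers are made up):

```python
# Assumed site-to-source multipliers (illustrative US-style values).
SOURCE_FACTOR = {"electricity": 3.15, "natural_gas": 1.09}

def source_energy(site_use_kwh):
    """Total source energy from a dict of per-fuel site energy (kWh)."""
    return sum(kwh * SOURCE_FACTOR[fuel] for fuel, kwh in site_use_kwh.items())

pre = {"electricity": 3000.0, "natural_gas": 12000.0}   # before retrofit
post = {"electricity": 9000.0, "natural_gas": 0.0}      # all-electric after

site_saving = sum(pre.values()) - sum(post.values())     # 6000 kWh saved on site
source_saving = source_energy(pre) - source_energy(post)
print(site_saving, round(source_saving, 1))  # 6000.0 -5820.0
```

The example reproduces the abstract's point: a gas-to-electric retrofit can save site energy while increasing source energy, so the chosen metric changes the verdict on the project.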
Robustness Metrics: Consolidating the multiple approaches to quantify Robustness
DEFF Research Database (Denmark)
Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.
2016-01-01
robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics...
Partial rectangular metric spaces and fixed point theorems.
Shukla, Satish
2014-01-01
The purpose of this paper is to introduce the concept of partial rectangular metric spaces as a generalization of rectangular metric and partial metric spaces. Some properties of partial rectangular metric spaces are derived, and some fixed point results for quasi-type contractions in partial rectangular metric spaces are proved. Some examples are given to illustrate the observed results.
Measuring Information Security: Guidelines to Build Metrics
von Faber, Eberhard
Measuring information security is a genuine interest of security managers. With metrics they can develop their security organization's visibility and standing within the enterprise or public authority as a whole. Organizations using information technology need to use security metrics. Despite the clear demands and advantages, security metrics are often poorly developed, or ineffective parameters are collected and analysed. This paper describes best practices for the development of security metrics. Attention is first drawn to motivation, showing both requirements and benefits. The main body of the paper lists things which need to be observed (characteristics of metrics), things which can be measured (how measurements can be conducted) and steps for the development and implementation of metrics (procedures and planning). Analysis and communication are also key when using security metrics. Examples are given in order to develop a better understanding. The author aims to resume, continue and develop the discussion of a topic which is, or increasingly will be, a critical success factor for security managers in larger organizations.
Characterising risk - aggregated metrics: radiation and noise
International Nuclear Information System (INIS)
Passchier, W.
1998-01-01
The characterisation of risk is an important phase in the risk assessment - risk management process. From the multitude of risk attributes, a few have to be selected to obtain a risk characteristic or profile that is useful for risk management decisions and the implementation of protective measures. One way to reduce the number of attributes is aggregation. In the field of radiation protection such an aggregated metric is firmly established: effective dose. For protection against environmental noise, the Health Council of the Netherlands recently proposed a set of aggregated metrics for noise annoyance and sleep disturbance. The presentation will discuss similarities and differences between these two metrics, as well as practical limitations. The effective dose has proven its usefulness in designing radiation protection measures related to the level of risk associated with the radiation practice in question, given that implicit judgements on radiation-induced health effects are accepted. However, as the metric does not take into account the nature of the radiation practice, it is less useful in policy discussions on the benefits and harm of radiation practices. With respect to the noise exposure metric, only one effect is targeted (annoyance), and the differences between sources are explicitly taken into account. This should make the metric useful in policy discussions with respect to physical planning and siting problems. The metric proposed is only meaningful at the population level and cannot be used as a predictor of individual risk. (author)
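The aggregation underlying effective dose, in the standard ICRP convention (given here as background, not from the abstract), weights absorbed dose by radiation type and then by tissue:

```latex
% D_{T,R}: absorbed dose in tissue T from radiation type R
% w_R: radiation weighting factor, w_T: tissue weighting factor
H_T = \sum_{R} w_R \, D_{T,R}, \qquad
E = \sum_{T} w_T \, H_T, \qquad \sum_{T} w_T = 1
```

The single scalar $E$ is exactly the kind of aggregated metric the abstract discusses: convenient for protection design, but blind to the nature of the practice that produced the dose.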
Energy functionals for Calabi-Yau metrics
International Nuclear Information System (INIS)
Headrick, M; Nassar, A
2013-01-01
We identify a set of "energy" functionals on the space of metrics in a given Kähler class on a Calabi-Yau manifold, which are bounded below and minimized uniquely at the Ricci-flat metric in that class. Using these functionals, we recast the problem of numerically solving the Einstein equation as an optimization problem. We apply this strategy, using the "algebraic" metrics (metrics for which the Kähler potential is given in terms of a polynomial in the projective coordinates), to the Fermat quartic and to a one-parameter family of quintics that includes the Fermat and conifold quintics. We show that this method yields approximations to the Ricci-flat metric that are exponentially accurate in the degree of the polynomial (except at the conifold point, where the convergence is polynomial), and therefore orders of magnitude more accurate than the balanced metrics previously studied as approximations to the Ricci-flat metric. The method is relatively fast and easy to implement. On the theoretical side, we also show that the functionals can be used to give a heuristic proof of Yau's theorem.
Metrics Feedback Cycle: measuring and improving user engagement in gamified eLearning systems
Directory of Open Access Journals (Sweden)
Adam Atkins
2017-12-01
This paper presents the identification, design and implementation of a set of metrics of user engagement in a gamified eLearning application. The 'Metrics Feedback Cycle' (MFC) is introduced as a formal process prescribing the iterative evaluation and improvement of application-wide engagement, using data collected from metrics as input to improve related engagement features. This framework was showcased using a gamified eLearning application as a case study. In this paper, we designed a prototype and tested it with thirty-six (N=36) students to validate the effectiveness of the MFC. The analysis and interpretation of metrics data shows that the gamification features had a positive effect on user engagement and helped identify areas in which this could be improved. We conclude that the MFC has applications in gamified systems that seek to maximise engagement by iteratively evaluating implemented features against a set of evolving metrics.
Metrics Are Needed for Collaborative Software Development
Directory of Open Access Journals (Sweden)
Mojgan Mohtashami
2011-10-01
There is a need for metrics for inter-organizational collaborative software development projects, encompassing management and technical concerns. In particular, metrics are needed that are aimed at the collaborative aspect itself, such as readiness for collaboration, and the quality and/or the costs and benefits of collaboration in a specific ongoing project. We suggest questions and directions for such metrics, spanning the full lifespan of a collaborative project, from considering the suitability of collaboration through evaluating ongoing projects to final evaluation of the collaboration.
Indefinite metric fields and the renormalization group
International Nuclear Information System (INIS)
Sherry, T.N.
1976-11-01
The renormalization group equations are derived for the Green functions of an indefinite metric field theory. In these equations one retains the mass dependence of the coefficient functions, since in indefinite metric theories the masses cannot be neglected. The behavior of the effective coupling constant in the asymptotic and infrared limits is analyzed. The analysis is illustrated by means of a simple model incorporating indefinite metric fields. The model scales at first order, and at this order the effective coupling constant has both ultraviolet and infrared fixed points, the former being the bare coupling constant.
Metric learning for DNA microarray data analysis
International Nuclear Information System (INIS)
Takeuchi, Ichiro; Nakagawa, Masao; Seto, Masao
2009-01-01
In many microarray studies, gene set selection is an important preliminary step for subsequent main tasks such as tumor classification and cancer subtype identification. In this paper, we investigate the possibility of using metric learning as an alternative to gene set selection. We develop a simple metric learning algorithm aimed at microarray data analysis. Exploiting a property of the algorithm, we introduce a novel approach for extending the metric learning to be adaptive. We apply the algorithm to previously studied microarray data on malignant lymphoma subtype identification.
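As a rough illustration of the idea (a stand-in sketch, not the authors' algorithm), a diagonal Mahalanobis-style metric can be learned by weighting each gene by its between-class to within-class variance ratio, so that informative genes dominate the distance without any explicit gene selection:

```python
import numpy as np

def learn_diagonal_metric(X, y, eps=1e-9):
    """Weight each feature by between-class / within-class variance,
    giving a diagonal Mahalanobis-style metric."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    within = np.zeros(X.shape[1])
    between = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
    return between / (within + eps)

def weighted_dist(a, b, w):
    # Distance under the learned diagonal metric
    return np.sqrt(np.sum(w * (a - b) ** 2))

# Toy "expression" data: gene 0 separates the classes, gene 1 is noise.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], [0.1, 1.0], (20, 2)),
               rng.normal([3, 0], [0.1, 1.0], (20, 2))])
y = np.array([0] * 20 + [1] * 20)

w = learn_diagonal_metric(X, y)
assert w[0] > w[1]  # the informative gene gets the larger weight
```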
Software metrics a rigorous and practical approach
Fenton, Norman
2014-01-01
A Framework for Managing, Measuring, and Predicting Attributes of Software Development Products and Processes. Reflecting the immense progress in the development and use of software metrics in the past decades, Software Metrics: A Rigorous and Practical Approach, Third Edition provides an up-to-date, accessible, and comprehensive introduction to software metrics. Like its popular predecessors, this third edition discusses important issues, explains essential concepts, and offers new approaches for tackling long-standing problems. New to the Third Edition: this edition contains new material relevant
Validity of the Virtual Reality Stroop Task (VRST) in active duty military.
Armstrong, Christina M; Reger, Greg M; Edwards, Joseph; Rizzo, Albert A; Courtney, Christopher G; Parsons, Thomas D
2013-01-01
Virtual environments provide the ability to systematically deliver test stimuli in simulated contexts relevant to real world behavior. The current study evaluated the validity of the Virtual Reality Stroop Task (VRST), which presents test stimuli during a virtual reality military convoy with simulated combat threats. Active duty Army personnel (N = 49) took the VRST, a customized version of the Automated Neuropsychological Assessment Metrics (ANAM)-Fourth Edition TBI Battery (2007) that included the addition of the ANAM Stroop and Tower tests, and traditional neuropsychological measures, including the Delis-Kaplan Executive Function System version of the Color-Word Interference Test. Preliminary convergent and discriminant validity was established, and performance on the VRST was significantly associated with computerized and traditional tests of attention and executive functioning. Valid virtual reality cognitive assessments open new lines of inquiry into the impact of environmental stimuli on performance and offer promise for the future of neuropsychological assessments used with military personnel.
Peltoketo, Veli-Tapani
2014-11-01
When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become a more and more important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made through the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
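One simple way to fold normalized quality and speed sub-scores into a single benchmarking score is a weighted geometric mean; the metric names and weights below are illustrative assumptions, not the paper's actual combination:

```python
import math

def benchmark_score(metrics, weights):
    """Combine [0, 1]-normalized sub-scores (higher is better) into one
    score via a weighted geometric mean: a poor score on any axis drags
    the total down, which suits "quality AND speed" benchmarking."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return math.exp(sum(w * math.log(metrics[k]) for k, w in weights.items()))

# Hypothetical phone: strong image quality, weak shot-to-shot speed
phone = {"sharpness": 0.9, "noise": 0.8, "shot_to_shot": 0.5, "autofocus": 0.7}
weights = {"sharpness": 0.3, "noise": 0.3, "shot_to_shot": 0.2, "autofocus": 0.2}
score = benchmark_score(phone, weights)
assert 0.0 < score < 1.0
```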
Wireless sensor network performance metrics for building applications
Energy Technology Data Exchange (ETDEWEB)
Jang, W.S. (Department of Civil Engineering Yeungnam University 214-1 Dae-Dong, Gyeongsan-Si Gyeongsangbuk-Do 712-749 South Korea); Healy, W.M. [Building and Fire Research Laboratory, 100 Bureau Drive, Gaithersburg, MD 20899-8632 (United States)
2010-06-15
Metrics are investigated to help assess the performance of wireless sensors in buildings. Wireless sensor networks present tremendous opportunities for energy savings and improvement in occupant comfort in buildings by making data about conditions and equipment more readily available. A key barrier to their adoption, however, is the uncertainty among users regarding the reliability of the wireless links through building construction. Tests were carried out that examined three performance metrics as a function of transmitter-receiver separation distance, transmitter power level, and obstruction type. These tests demonstrated, via the packet delivery rate, a clear transition from reliable to unreliable communications at different separation distances. While the packet delivery rate is difficult to measure in actual applications, the received signal strength indication correlated well with the drop in packet delivery rate in the relatively noise-free environment used in these tests. The concept of an equivalent distance was introduced to translate the range of reliability in open field operation to that seen in a typical building, thereby providing wireless system designers a rough estimate of the necessary spacing between sensor nodes in building applications. It is anticipated that the availability of straightforward metrics on the range of wireless sensors in buildings will enable more widespread sensing in buildings for improved control and fault detection. (author)
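The equivalent-distance idea can be sketched with the generic log-distance path loss model; the path loss exponents and reference loss below are assumed values for illustration, not measurements from the tests:

```python
import math

def path_loss_db(d, n, d0=1.0, pl0=40.0):
    """Log-distance path loss model: pl0 dB at reference distance d0 (m),
    exponent n captures the propagation environment."""
    return pl0 + 10 * n * math.log10(d / d0)

def equivalent_open_field_distance(d_building, n_building, n_open,
                                   d0=1.0, pl0=40.0):
    """Open-field distance with the same total path loss as distance
    d_building through building construction."""
    pl = path_loss_db(d_building, n_building, d0, pl0)
    return d0 * 10 ** ((pl - pl0) / (10 * n_open))

# 10 m through interior construction (assumed n = 3.5) "looks like"
# a much longer open-field link (assumed n = 2.0):
d_eq = equivalent_open_field_distance(10.0, 3.5, 2.0)
assert d_eq > 10.0
```

Run in the other direction, the same relation converts a transmitter's open-field range into a rough in-building node spacing, which is how the paper's equivalent-distance concept is meant to be used.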
DEFF Research Database (Denmark)
Nielsen, Niels Christian; Blackburn, Alan
2005-01-01
In this paper, the moving-windows approach to calculation and analysis of spatial metrics is tested with particular focus on forest mapping. The influence of window size on average metrics values, agreement between values from different EO-based data sources and local variance of metrics values i...
Applying Halstead's Metric to Oberon Language
Directory of Open Access Journals (Sweden)
Fawaz Ahmed Masoud
1999-12-01
Oberon is a small, simple and difficult programming language. The guiding principle of Oberon was a quote from Albert Einstein: "Make it as simple as possible, but not simpler". Oberon is based on a few fundamental concepts that are easy to understand and use. It supports two programming paradigms: the procedural paradigm and the object-oriented paradigm. This paper provides an application of Halstead's software science theory to Oberon programs. Applying Halstead's metric to the Oberon language has provided analysis and measurements of module and within-module maintenance complexity for programs written in Oberon. This type of analysis provides a manager or programmer with enough information about the maintenance complexity of Oberon programs that they can be aware of how much effort is needed to maintain a certain Oberon program. The maintenance complexity of programs written in Oberon, or any other language, is based on counting the number of operators and operands within the statements of the tested program. The counting process is accomplished by a program written in C. Results are obtained, analyzed, and discussed in detail.
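The operator/operand counting described above feeds Halstead's standard formulas. A minimal sketch, with a hypothetical token tally for a one-line Oberon statement:

```python
import math

def halstead(operators, operands):
    """operators/operands: lists of every operator/operand occurrence.
    Returns Halstead volume, difficulty, and effort."""
    n1, n2 = len(set(operators)), len(set(operands))  # distinct counts
    N1, N2 = len(operators), len(operands)            # total occurrences
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)
    difficulty = (n1 / 2) * (N2 / n2)
    effort = difficulty * volume
    return {"volume": volume, "difficulty": difficulty, "effort": effort}

# Token occurrences for a tiny statement like:  x := x + 1
m = halstead(operators=[":=", "+"], operands=["x", "x", "1"])
assert m["difficulty"] == 1.5   # (2/2) * (3/2)
```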
Metrics, Media and Advertisers: Discussing Relationship
Directory of Open Access Journals (Sweden)
Marco Aurelio de Souza Rodrigues
2014-11-01
This study investigates how Brazilian advertisers are adapting to new media and its attention metrics. In-depth interviews were conducted with advertisers in 2009 and 2011. In 2009, new media and its metrics were celebrated as innovations that would increase the overall efficiency of advertising campaigns. In 2011, this perception had changed: new media's profusion of metrics, once seen as an advantage, had started to compromise its ease of use and adoption. Among its findings, this study argues that there is an opportunity for media groups willing to shift from a product-focused strategy towards a customer-centric one, through the creation of new, simple and integrative metrics.
Networks and centroid metrics for understanding football
African Journals Online (AJOL)
Gonçalo Dias
games. However, it seems that the centroid metric, supported only by the position of players on the field ... the strategy adopted by the coach (Gama et al., 2014) ... centroid distance as a measure of a team's tactical performance in youth football.
Clean Cities Annual Metrics Report 2009 (Revised)
Energy Technology Data Exchange (ETDEWEB)
Johnson, C.
2011-08-01
Document provides Clean Cities coalition metrics about the use of alternative fuels; the deployment of alternative fuel vehicles, hybrid electric vehicles (HEVs), and idle reduction initiatives; fuel economy activities; and programs to reduce vehicle miles driven.
Metric Guidelines Inservice and/or Preservice
Granito, Dolores
1978-01-01
Guidelines are given for designing teacher training for going metric. The guidelines were developed from existing guidelines, journal articles, a survey of colleges, and the detailed reactions of a panel. (MN)
Science and Technology Metrics and Other Thoughts
National Research Council Canada - National Science Library
Harman, Wayne; Staton, Robin
2006-01-01
This report explores the subject of science and technology metrics and other topics to begin to provide Navy managers, as well as scientists and engineers, additional tools and concepts with which to...
Using Activity Metrics for DEVS Simulation Profiling
Directory of Open Access Journals (Sweden)
Muzy A.
2014-01-01
Activity metrics can be used to profile DEVS models before and during simulation. It is critical to get good activity metrics of models before and during their simulation. Having a means to compute the a priori activity of components (analytic activity) may be worthwhile when simulating a model (or parts of it) for the first time. Afterwards, during the simulation, analytic activity can be corrected using the dynamic one. In this paper, we introduce the McCabe cyclomatic complexity metric (MCA) to compute analytic activity. Both static and simulation activity metrics have been implemented through a plug-in for the DEVSimPy (DEVS Simulator in Python) environment and applied to DEVS models.
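McCabe's cyclomatic complexity, which the paper uses as its analytic activity estimate, is computed from a control-flow graph as M = E - N + 2P; a minimal sketch with a made-up if/else graph:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe: M = E - N + 2P for a control-flow graph with E edges,
    N nodes, and P connected components."""
    return len(edges) - len(nodes) + 2 * components

# CFG of a single if/else: entry -> cond -> (then | else) -> exit
nodes = ["entry", "cond", "then", "else", "exit"]
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]
assert cyclomatic_complexity(edges, nodes) == 2  # one decision point
```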
Evaluating and Estimating the WCET Criticality Metric
DEFF Research Database (Denmark)
Jordan, Alexander
2014-01-01
a programmer (or compiler) from targeting optimizations the right way. A possible resort is to use a metric that targets WCET and which can be efficiently computed for all code parts of a program. Similar to dynamic profiling techniques, which execute code with input that is typically expected...... for the application, based on WCET analysis we can indicate how critical a code fragment is, in relation to the worst-case bound. Computing such a metric on top of static analysis, incurs a certain overhead though, which increases with the complexity of the underlying WCET analysis. We present our approach...... to estimate the Criticality metric, by relaxing the precision of WCET analysis. Through this, we can reduce analysis time by orders of magnitude, while only introducing minor error. To evaluate our estimation approach and share our garnered experience using the metric, we evaluate real-time programs, which...
16 CFR 1511.8 - Metric references.
2010-01-01
16 Commercial Practices 2, 2010-01-01. Consumer Product Safety Commission, Federal Hazardous Substances Act Regulations, § 1511.8 Metric references: metric equivalents are given in parentheses for convenience and information only.
Flight Crew State Monitoring Metrics, Phase I
National Aeronautics and Space Administration — eSky will develop specific crew state metrics based on the timeliness, tempo and accuracy of pilot inputs required by the H-mode Flight Control System (HFCS)....
Supplier selection using different metric functions
Directory of Open Access Journals (Sweden)
Omosigho S.E.
2015-01-01
Supplier selection is an important component of supply chain management in today's global competitive environment. Hence, the evaluation and selection of suppliers have received considerable attention in the literature. Many attributes of suppliers, other than cost, are considered in the evaluation and selection process. Therefore, the process of evaluation and selection of suppliers is a multi-criteria decision making process. The methodology adopted to solve the supplier selection problem is intuitionistic fuzzy TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution). Generally, TOPSIS is based on the concept of minimum distance from the positive ideal solution and maximum distance from the negative ideal solution. We examine the deficiencies of using only one metric function in TOPSIS and propose the use of the spherical metric function in addition to the commonly used metric functions. For empirical supplier selection problems, more than one metric function should be used.
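A sketch of the underlying (crisp) TOPSIS procedure with interchangeable metric functions; the intuitionistic fuzzy variant and spherical metric of the paper are not reproduced, and the supplier data are made up:

```python
import numpy as np

def topsis(matrix, weights, benefit, metric="euclidean"):
    """Classical TOPSIS: rank alternatives by closeness to the ideal.
    benefit[j] is True if criterion j is higher-is-better."""
    X = np.asarray(matrix, dtype=float)
    X = X / np.linalg.norm(X, axis=0)          # vector normalization
    V = X * np.asarray(weights)                # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    if metric == "euclidean":
        d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
        d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    elif metric == "manhattan":
        d_pos = np.abs(V - ideal).sum(axis=1)
        d_neg = np.abs(V - anti).sum(axis=1)
    else:
        raise ValueError(metric)
    return d_neg / (d_pos + d_neg)             # relative closeness in [0, 1]

# Three hypothetical suppliers scored on cost (lower better)
# and quality (higher better)
scores = [[200, 0.9], [180, 0.7], [250, 0.95]]
closeness = topsis(scores, weights=[0.5, 0.5], benefit=[False, True])
best = int(np.argmax(closeness))
```

Swapping `metric="manhattan"` (or any other distance) into the same skeleton is what makes the choice of metric function visible in the final ranking, which is the deficiency the paper examines.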
Classroom reconstruction of the Schwarzschild metric
Kassner, Klaus
2015-01-01
A promising way to introduce general relativity in the classroom is to study the physical implications of certain given metrics, such as the Schwarzschild one. This involves less mathematical expenditure than an approach focusing on differential geometry in its full glory and permits emphasizing physical aspects before attacking the field equations. Even so, in terms of motivation, a lack of justification for the metric employed may pose an obstacle. The paper discusses how to establish the we...
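For reference, the metric in question, in Schwarzschild coordinates with G = c = 1:

```latex
ds^2 = -\left(1 - \frac{2M}{r}\right) dt^2
     + \left(1 - \frac{2M}{r}\right)^{-1} dr^2
     + r^2 \left( d\theta^2 + \sin^2\theta \, d\varphi^2 \right)
```

Its physical implications (gravitational time dilation, light bending, the horizon at r = 2M) can be read off and discussed before any field equations are introduced, which is the pedagogical route the abstract describes.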
Marketing communication metrics for social media
Töllinen, Aarne; Karjaluoto, Heikki
2011-01-01
The objective of this paper is to develop a conceptual framework for measuring the effectiveness of social media marketing communications. Specifically, we study whether the existing marketing communications performance metrics are still valid in the changing digitalised communications landscape, or whether it is time to rethink them, or even to devise entirely new metrics. Recent advances in information technology and marketing bring a need to re-examine measurement models. We combine two im...
Some observations on a fuzzy metric space
Energy Technology Data Exchange (ETDEWEB)
Gregori, V.
2017-07-01
Let $(X,d)$ be a metric space. In this paper we provide some observations about the fuzzy metric space, in the sense of Kramosil and Michalek, $(Y,N,\wedge)$, where $Y$ is the set of non-negative real numbers $[0,\infty[$ and $N(x,y,t)=1$ if $d(x,y)\leq t$ and $N(x,y,t)=0$ if $d(x,y)>t$. (Author)
Area Regge calculus and discontinuous metrics
International Nuclear Information System (INIS)
Wainwright, Chris; Williams, Ruth M
2004-01-01
Taking the triangle areas as independent variables in the theory of Regge calculus can lead to ambiguities in the edge lengths, which can be interpreted as discontinuities in the metric. We construct solutions to area Regge calculus using a triangulated lattice and find that on a spacelike or timelike hypersurface no such discontinuity can arise. On a null hypersurface, however, we can have such a situation, and the resulting metric can be interpreted as a so-called refractive wave.
Relaxed metrics and indistinguishability operators: the relationship
Energy Technology Data Exchange (ETDEWEB)
Martin, J.
2017-07-01
In 1982, the notion of an indistinguishability operator was introduced by E. Trillas in order to fuzzify the crisp notion of an equivalence relation (\cite{Trillas}). In the study of such a class of operators, an outstanding property must be pointed out. Concretely, there exists a duality relationship between indistinguishability operators and metrics. The aforesaid relationship was deeply studied by several authors, who introduced a few techniques to generate metrics from indistinguishability operators and vice versa (see, for instance, \cite{BaetsMesiar,BaetsMesiar2}). In recent years a new generalization of the metric notion has been introduced in the literature with the purpose of developing mathematical tools for quantitative models in Computer Science and Artificial Intelligence (\cite{BKMatthews,Ma}). The aforementioned generalized metrics are known as relaxed metrics. The main target of this talk is to present a study of the duality relationship between indistinguishability operators and relaxed metrics, in such a way that the aforementioned classical techniques to generate both concepts, one from the other, can be extended to the new framework. (Author)
Neutron Damage Metrics and the Quantification of the Associated Uncertainty
International Nuclear Information System (INIS)
Griffin, P.J.
2012-01-01
The motivation for this work is the determination of a methodology for deriving and validating a reference metric that can be used to correlate radiation damage from neutrons of various energies, and from charged particles, with observed damage modes. Exposure functions for some damage modes are in use by the radiation effects community, e.g. 1-MeV-equivalent damage in Si and GaAs semiconductors, as well as displacements per atom (dpa) and subsequent material embrittlement in iron. The limitations of the current treatment of these energy-dependent metrics include the lack of an associated covariance matrix and incomplete validation. In addition, the analytical approaches used to derive the current metrics fail to properly treat damage in compound/poly-atomic materials, the evolution and recombination of defects as a function of time since exposure, and the influence of dopant materials and impurities in the material of interest. The current metrics provide only a crude correlation with the damage modes of interest. They do not, typically, even distinguish between the damage effectiveness of different types of neutron-induced lattice defects, e.g. they fail to distinguish between a vacancy-oxygen defect and a divacancy with respect to minority carrier lifetime and the decrease in gain of a Si bipolar transistor. The goal of this work is to facilitate the generation of more advanced radiation metrics that will provide an easier intercomparison of radiation damage as delivered by various types of test facilities and in various real-world nuclear applications. One first needs to properly define the scope of the radiation damage application of concern before an appropriate damage metric is selected. The fidelity of the metric selected and the range of environmental parameters under which the metric can be correlated with the damage should match the intended application. It should address the scope of real-world conditions where the metric will
Baby universe metric equivalent to an interior black-hole metric
International Nuclear Information System (INIS)
Gonzalez-Diaz, P.F.
1991-01-01
It is shown that the maximally extended metric corresponding to a large wormhole is the unique possible wormhole metric whose baby universe sector is conformally equivalent to the maximal, inextendible Kruskal metric corresponding to the interior region of a Schwarzschild black hole whose gravitational radius is half the wormhole neck radius. The physical implications of this result for the black hole evaporation process are discussed. (orig.)
Evaluation of Vehicle-Based Crash Severity Metrics.
Tsoi, Ada H; Gabler, Hampton C
2015-01-01
Vehicle change in velocity (delta-v) is a widely used crash severity metric used to estimate occupant injury risk. Despite its widespread use, delta-v has several limitations. Of most concern, delta-v is a vehicle-based metric which does not consider the crash pulse or the performance of occupant restraints, e.g. seatbelts and airbags. Such criticisms have prompted the search for alternative impact severity metrics based upon vehicle kinematics. The purpose of this study was to assess the ability of the occupant impact velocity (OIV), acceleration severity index (ASI), vehicle pulse index (VPI), and maximum delta-v to predict serious injury in real world crashes. The study was based on the analysis of event data recorders (EDRs) downloaded from the National Automotive Sampling System / Crashworthiness Data System (NASS-CDS) 2000-2013 cases. All vehicles in the sample were GM passenger cars and light trucks involved in a frontal collision. Rollover crashes were excluded. Vehicles were restricted to single-event crashes that caused an airbag deployment. All EDR data were checked for a successful, completed recording of the event and that the crash pulse was complete. The maximum abbreviated injury scale (MAIS) was used to describe occupant injury outcome. Drivers were categorized into either a non-seriously injured group (MAIS2-) or a seriously injured group (MAIS3+), based on the severity of any injuries to the thorax, abdomen, and spine. ASI and OIV were calculated according to the Manual for Assessing Safety Hardware. VPI was calculated according to ISO/TR 12353-3, with vehicle-specific parameters determined from U.S. New Car Assessment Program crash tests. Using binary logistic regression, the cumulative probability of injury risk was determined for each metric and assessed for statistical significance, goodness-of-fit, and prediction accuracy. The dataset included 102,744 vehicles. A Wald chi-square test showed each vehicle-based crash severity metric
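The risk-curve construction can be illustrated with a binary logistic model; the coefficients below are invented for illustration, not the values fitted to the NASS-CDS data:

```python
import math

def injury_risk(metric_value, b0, b1):
    """P(MAIS3+) from a binary logistic model of a crash severity
    metric; b0, b1 are assumed illustrative coefficients."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * metric_value)))

# Example: risk as a function of delta-v (km/h) with assumed coefficients
b0, b1 = -6.0, 0.12
risks = [injury_risk(dv, b0, b1) for dv in (20, 40, 60)]
assert risks[0] < risks[1] < risks[2]  # risk rises monotonically with severity
```

The same functional form is fitted once per candidate metric (OIV, ASI, VPI, maximum delta-v), and the metrics are then compared on goodness-of-fit and prediction accuracy.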
Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling
Mog, Robert A.
1997-01-01
Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.
Summary of the U.S. National Workshop. Results for ATF Metrics Development
International Nuclear Information System (INIS)
Pasamehmetoglu, Kemal
2013-01-01
This presentation discussed the main outcomes of a recent US National Workshop on Accident-Tolerant Fuels, focusing on results for ATF metrics development. All thermal, mechanical and chemical properties are relevant in defining the metrics for accident tolerance, but considerable testing and analyses are needed to identify the dominant attributes and quantify the metrics. Current analysis tools are not fully adequate to complete the task, and a need was highlighted for strong collaborations to complete the experimental data to qualify the new tools
The dynamics of metric-affine gravity
International Nuclear Information System (INIS)
Vitagliano, Vincenzo; Sotiriou, Thomas P.; Liberati, Stefano
2011-01-01
Highlights: → The role and the dynamics of the connection in metric-affine theories are explored. → The most general second order action does not lead to a dynamical connection. → Including higher order invariants excites new degrees of freedom in the connection. → f(R) actions are also discussed and shown to be a non-representative class. - Abstract: Metric-affine theories of gravity provide an interesting alternative to general relativity: in such an approach, the metric and the affine (not necessarily symmetric) connection are independent quantities. Furthermore, the action should include covariant derivatives of the matter fields, with the covariant derivative naturally defined using the independent connection. As a result, in metric-affine theories a direct coupling involving matter and connection is also present. The role and the dynamics of the connection in such theories are explored. We employ power counting in order to construct the action and search for the minimal requirements it should satisfy for the connection to be dynamical. We find that for the most general action containing lower order invariants of the curvature and the torsion, the independent connection does not carry any dynamics. It actually reduces to the role of an auxiliary field and can be completely eliminated algebraically in favour of the metric and the matter field, introducing extra interactions with respect to general relativity. However, we also show that including higher order terms in the action radically changes this picture and excites new degrees of freedom in the connection, making it (or parts of it) dynamical. Constructing actions that constitute exceptions to this rule requires significant fine tuning and/or extra a priori constraints on the connection. We also consider f(R) actions as a particular example in order to show that they constitute a distinct class of metric-affine theories with special properties, and as such they cannot be used as representative toy
US Rocket Propulsion Industrial Base Health Metrics
Doreswamy, Rajiv
2013-01-01
The number of active liquid rocket engine and solid rocket motor development programs has severely declined since the "space race" of the 1950s and 1960s. This downward trend has been exacerbated by the retirement of the Space Shuttle, the transition from the Constellation Program to the Space Launch System (SLS), and similar activity in DoD programs. In addition, with consolidation in the industry, the rocket propulsion industrial base (RPIB) is under stress. To improve the "health" of the RPIB, we need to understand the current condition of the RPIB, how this compares to past history, and the trend of RPIB health. This drives the need for a concise set of "metrics": analogous to the basic data a physician uses to determine the state of health of his patients; easy to measure and collect; such that the trend is often more useful than the actual data point; and usable to focus on problem areas and develop preventative measures. The RPIB is the nation's capability to conceive, design, develop, manufacture, test, and support missions using liquid rocket engines and solid rocket motors that are critical to its national security, economic health and growth, and future scientific needs. The RPIB encompasses US government, academic, and commercial (including industry primes and their supplier base) research, development, test, evaluation, and manufacturing capabilities and facilities. The RPIB includes the skilled workforce, related intellectual property, engineering and support services, and supply chain operations and management. This definition touches the five main segments of the U.S. RPIB as categorized by the USG: defense, intelligence community, civil government, academia, and commercial sector.
Evaluation metrics for biostatistical and epidemiological collaborations.
Rubio, Doris McGartland; Del Junco, Deborah J; Bhore, Rafia; Lindsell, Christopher J; Oster, Robert A; Wittkowski, Knut M; Welty, Leah J; Li, Yi-Ju; Demets, Dave
2011-10-15
Increasing demands for evidence-based medicine and for the translation of biomedical research into individual and public health benefit have been accompanied by the proliferation of special units that offer expertise in biostatistics, epidemiology, and research design (BERD) within academic health centers. Objective metrics that can be used to evaluate, track, and improve the performance of these BERD units are critical to their successful establishment and sustainable future. To develop a set of reliable but versatile metrics that can be adapted easily to different environments and evolving needs, we consulted with members of BERD units from the consortium of academic health centers funded by the Clinical and Translational Science Award Program of the National Institutes of Health. Through a systematic process of consensus building and document drafting, we formulated metrics that covered the three identified domains of BERD practices: the development and maintenance of collaborations with clinical and translational science investigators, the application of BERD-related methods to clinical and translational research, and the discovery of novel BERD-related methodologies. In this article, we describe the set of metrics and advocate their use for evaluating BERD practices. The routine application, comparison of findings across diverse BERD units, and ongoing refinement of the metrics will identify trends, facilitate meaningful changes, and ultimately enhance the contribution of BERD activities to biomedical research. Copyright © 2011 John Wiley & Sons, Ltd.
A Metric on Phylogenetic Tree Shapes.
Colijn, C; Plazzotta, G
2018-01-01
The shapes of evolutionary trees are influenced by the nature of the evolutionary process but comparisons of trees from different processes are hindered by the challenge of completely describing tree shape. We present a full characterization of the shapes of rooted branching trees in a form that lends itself to natural tree comparisons. We use this characterization to define a metric, in the sense of a true distance function, on tree shapes. The metric distinguishes trees from random models known to produce different tree shapes. It separates trees derived from tropical versus USA influenza A sequences, which reflect the differing epidemiology of tropical and seasonal flu. We describe several metrics based on the same core characterization, and illustrate how to extend the metric to incorporate trees' branch lengths or other features such as overall imbalance. Our approach allows us to construct addition and multiplication on trees, and to create a convex metric on tree shapes which formally allows computation of average tree shapes. © The Author(s) 2017. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
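The characterization behind such a metric can be sketched concretely. The following is a hedged illustration (my own tuple encoding of tree shapes, not the authors' code) of a recursive labelling of rooted binary tree shapes: a leaf gets label 1, and an internal node whose two children carry labels k >= j gets label k(k-1)/2 + j + 1, so every shape maps to a unique integer and distances can then be built from these labels.

```python
def cp_label(tree):
    """A rooted binary tree shape: a leaf is None, an internal node a pair."""
    if tree is None:
        return 1                        # every leaf gets label 1
    k, j = sorted((cp_label(tree[0]), cp_label(tree[1])), reverse=True)
    return k * (k - 1) // 2 + j + 1     # unique pairing of the child labels

cherry = (None, None)                   # two leaves joined at the root
pitchfork = (cherry, None)              # three-leaf "ladder" shape
print(cp_label(cherry))                 # 2
print(cp_label(pitchfork))              # 3
```

Because the labelling is injective on shapes, two trees have the same label exactly when they have the same shape, which is the property a true distance function on shapes needs as its starting point.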
Future of the PCI Readmission Metric.
Wasfy, Jason H; Yeh, Robert W
2016-03-01
Between 2013 and 2014, the Centers for Medicare and Medicaid Services and the National Cardiovascular Data Registry publicly reported risk-adjusted 30-day readmission rates after percutaneous coronary intervention (PCI) as a pilot project. A key strength of this public reporting effort included risk adjustment with clinical rather than administrative data. Furthermore, because readmission after PCI is common, expensive, and preventable, this metric has substantial potential to improve quality and value in American cardiology care. Despite this, concerns about the metric exist. For example, few PCI readmissions are caused by procedural complications, limiting the extent to which improved procedural technique can reduce readmissions. Also, similar to other readmission measures, PCI readmission is associated with socioeconomic status and race. Accordingly, the metric may unfairly penalize hospitals that care for underserved patients. Perhaps in the context of these limitations, Centers for Medicare and Medicaid Services has not yet included PCI readmission among metrics that determine Medicare financial penalties. Nevertheless, provider organizations may still wish to focus on this metric to improve value for cardiology patients. PCI readmission is associated with low-risk chest discomfort and patient anxiety. Therefore, patient education, improved triage mechanisms, and improved care coordination offer opportunities to minimize PCI readmissions. Because PCI readmission is common and costly, reducing PCI readmission offers provider organizations a compelling target to improve the quality of care, and also performance in contracts involving shared financial risk. © 2016 American Heart Association, Inc.
g-Weak Contraction in Ordered Cone Rectangular Metric Spaces
Directory of Open Access Journals (Sweden)
S. K. Malhotra
2013-01-01
Full Text Available We prove some common fixed-point theorems for the ordered g-weak contractions in cone rectangular metric spaces without assuming the normality of cone. Our results generalize some recent results from cone metric and cone rectangular metric spaces into ordered cone rectangular metric spaces. Examples are provided which illustrate the results.
Defining a Progress Metric for CERT RMM Improvement
2017-09-14
Defining a Progress Metric for CERT-RMM Improvement. Gregory Crabb, Nader Mehravari, David Tobar. September 2017. TECHNICAL ...fendable resource allocation decisions. Technical metrics measure aspects of controls implemented through technology (systems, software, hardware...implementation metric would be the percentage of users who have received anti-phishing training. • Effectiveness/efficiency metrics measure whether
NASA education briefs for the classroom. Metrics in space
The use of metric measurement in space is summarized for classroom use. Advantages of the metric system over the English measurement system are described. Some common metric units are defined, as are special units for astronomical study. International system unit prefixes and a conversion table of metric/English units are presented. Questions and activities for the classroom are recommended.
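As a minimal illustration of the kind of metric/English conversions such a brief tabulates (the factors below are the standard published values):

```python
def km_to_miles(km):
    return km * 0.621371          # 1 km = 0.621371 miles

def celsius_to_fahrenheit(c):
    return c * 9.0 / 5.0 + 32.0   # F = 9/5 * C + 32

print(round(km_to_miles(42.195), 1))   # marathon distance: 26.2 miles
print(celsius_to_fahrenheit(100))      # water boils at 212.0 °F
```

Classroom exercises of this sort make the point of the brief: the metric side of each conversion uses powers of ten, so no lookup table is needed within the system itself.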
MetricForensics: A Multi-Level Approach for Mining Volatile Graphs
Energy Technology Data Exchange (ETDEWEB)
Henderson, Keith [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Eliassi-Rad, Tina [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Faloutsos, Christos [Carnegie Mellon Univ., Pittsburgh, PA (United States); Akoglu, Leman [Carnegie Mellon Univ., Pittsburgh, PA (United States); Li, Lei [Carnegie Mellon Univ., Pittsburgh, PA (United States); Maruhashi, Koji [Fujitsu Laboratories Ltd., Kanagawa (Japan); Prakash, B. Aditya [Carnegie Mellon Univ., Pittsburgh, PA (United States); Tong, H [Carnegie Mellon Univ., Pittsburgh, PA (United States)
2010-02-08
Advances in data collection and storage capacity have made it increasingly possible to collect highly volatile graph data for analysis. Existing graph analysis techniques are not appropriate for such data, especially in cases where streaming or near-real-time results are required. An example that has drawn significant research interest is the cyber-security domain, where internet communication traces are collected and real-time discovery of events, behaviors, patterns and anomalies is desired. We propose MetricForensics, a scalable framework for analysis of volatile graphs. MetricForensics combines a multi-level “drill down” approach, a collection of user-selected graph metrics and a collection of analysis techniques. At each successive level, more sophisticated metrics are computed and the graph is viewed at a finer temporal resolution. In this way, MetricForensics scales to highly volatile graphs by only allocating resources for computationally expensive analysis when an interesting event is discovered at a coarser resolution first. We test MetricForensics on three real-world graphs: an enterprise IP trace, a trace of legitimate and malicious network traffic from a research institution, and the MIT Reality Mining proximity sensor data. Our largest graph has ≈3M vertices and ≈32M edges, spanning 4.5 days. The results demonstrate the scalability and capability of MetricForensics in analyzing volatile graphs; and highlight four novel phenomena in such graphs: elbows, broken correlations, prolonged spikes, and strange stars.
SOCIAL METRICS APPLIED TO SMART TOURISM
Directory of Open Access Journals (Sweden)
O. Cervantes
2016-09-01
Full Text Available We present a strategy to make productive use of semantically-related social data, from a user-centered semantic network, in order to help users (tourists and citizens in general) to discover cultural heritage, points of interest and available services in a smart city. This data can be used to personalize recommendations in a smart tourism application. Our approach is based on flow centrality metrics typically used in social network analysis: flow betweenness, flow closeness and eccentricity. These metrics are useful to discover relevant nodes within the network yielding nodes that can be interpreted as suggestions (venues or services) to users. We describe the semantic network built on a graph model, as well as the social metrics algorithms used to produce recommendations. We also present challenges and results from a prototypical implementation applied to the case study of the City of Puebla, Mexico.
Landscape pattern metrics and regional assessment
O'Neill, R. V.; Riitters, K.H.; Wickham, J.D.; Jones, K.B.
1999-01-01
The combination of remote imagery data, geographic information systems software, and landscape ecology theory provides a unique basis for monitoring and assessing large-scale ecological systems. The unique feature of the work has been the need to develop and interpret quantitative measures of spatial pattern, the landscape indices. This article reviews what is known about the statistical properties of these pattern metrics and suggests some additional metrics based on island biogeography, percolation theory, hierarchy theory, and economic geography. Assessment applications of this approach have required interpreting the pattern metrics in terms of specific environmental endpoints, such as wildlife and water quality, and research into how to represent synergistic effects of many overlapping sources of stress.
A bi-metric theory of gravitation
International Nuclear Information System (INIS)
Rosen, N.
1975-01-01
The bi-metric theory of gravitation proposed previously is simplified in that the auxiliary conditions are discarded, the two metric tensors being tied together only by means of the boundary conditions. Some of the properties of the field of a particle are investigated; there is no black hole, and it appears that no gravitational collapse can take place. Although the proposed theory and general relativity are at present observationally indistinguishable, some differences are pointed out which may some day be susceptible of observation. An alternative bi-metric theory is considered which gives for the precession of the perihelion 5/6 of the value given by general relativity; it seems less satisfactory than the present theory from the aesthetic point of view. (author)
Steiner trees for fixed orientation metrics
DEFF Research Database (Denmark)
Brazil, Marcus; Zachariasen, Martin
2009-01-01
We consider the problem of constructing Steiner minimum trees for a metric defined by a polygonal unit circle (corresponding to s = 2 weighted legal orientations in the plane). A linear-time algorithm to enumerate all angle configurations for degree three Steiner points is given. We provide a simple proof that the angle configuration for a Steiner point extends to all Steiner points in a full Steiner minimum tree, such that at most six orientations suffice for edges in a full Steiner minimum tree. We show that the concept of canonical forms originally introduced for the uniform orientation metric generalises to the fixed orientation metric. Finally, we give an O(s n) time algorithm to compute a Steiner minimum tree for a given full Steiner topology with n terminal leaves.
Metrical and dynamical aspects in complex analysis
2017-01-01
The central theme of this reference book is the metric geometry of complex analysis in several variables. Bridging a gap in the current literature, the text focuses on the fine behavior of the Kobayashi metric of complex manifolds and its relationships to dynamical systems, hyperbolicity in the sense of Gromov and operator theory, all very active areas of research. The modern points of view expressed in these notes, collected here for the first time, will be of interest to academics working in the fields of several complex variables and metric geometry. The different topics are treated coherently and include expository presentations of the relevant tools, techniques and objects, which will be particularly useful for graduate and PhD students specializing in the area.
Social Metrics Applied to Smart Tourism
Cervantes, O.; Gutiérrez, E.; Gutiérrez, F.; Sánchez, J. A.
2016-09-01
We present a strategy to make productive use of semantically-related social data, from a user-centered semantic network, in order to help users (tourists and citizens in general) to discover cultural heritage, points of interest and available services in a smart city. This data can be used to personalize recommendations in a smart tourism application. Our approach is based on flow centrality metrics typically used in social network analysis: flow betweenness, flow closeness and eccentricity. These metrics are useful to discover relevant nodes within the network yielding nodes that can be interpreted as suggestions (venues or services) to users. We describe the semantic network built on graph model, as well as social metrics algorithms used to produce recommendations. We also present challenges and results from a prototypical implementation applied to the case study of the City of Puebla, Mexico.
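As a rough illustration of centrality-based recommendation (a toy sketch with an invented venue graph, not the authors' Puebla implementation, and using plain shortest-path closeness and eccentricity rather than the flow variants, which need heavier machinery):

```python
from collections import deque

graph = {                        # hypothetical venue adjacency list
    "cathedral": ["museum", "plaza"],
    "museum": ["cathedral", "plaza", "cafe"],
    "plaza": ["cathedral", "museum", "hotel"],
    "cafe": ["museum"],
    "hotel": ["plaza"],
}

def bfs_distances(src):
    """Shortest-path distances from src by breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def closeness(node):
    d = bfs_distances(node)
    return (len(d) - 1) / sum(d.values())   # higher = more central

def eccentricity(node):
    return max(bfs_distances(node).values())  # lower = more central

ranked = sorted(graph, key=closeness, reverse=True)
print(ranked[0])   # the most central venue, a natural first suggestion
```

Central nodes in the semantic network play the role of the "relevant nodes" in the abstract: they are the venues or services most reachable from everywhere else, and so the first candidates to surface as recommendations.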
Towards Video Quality Metrics Based on Colour Fractal Geometry
Directory of Open Access Journals (Sweden)
Richard Noël
2010-01-01
Full Text Available Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology and colour. Unfortunately, so far, all these elements have been taken into consideration independently in the development of image and video quality metrics; we therefore propose an approach that blends them all together. Our approach allows for the analysis of the complexity of colour images in the RGB colour space, based on the probabilistic algorithm for calculating the fractal dimension and lacunarity. Given that all the existing fractal approaches are defined only for gray-scale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the modification of the fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal and can be used as metrics for user-perceived video quality degradation. We validated them through experimental results obtained for an MPEG-4 video streaming application; finally, the results are compared against those given by widely accepted metrics and against subjective tests.
Urban Landscape Metrics for Climate and Sustainability Assessments
Cochran, F. V.; Brunsell, N. A.
2014-12-01
To test metrics for rapid identification of urban classes and sustainable urban forms, we examine the configuration of urban landscapes using satellite remote sensing data. We adopt principles from landscape ecology and urban planning to evaluate urban heterogeneity and design themes that may constitute more sustainable urban forms, including compactness (connectivity), density, mixed land uses, diversity, and greening. Using 2-D wavelet and multi-resolution analysis, landscape metrics, and satellite-derived indices of vegetation fraction and impervious surface, the spatial variability of Landsat and MODIS data from metropolitan areas of Manaus and São Paulo, Brazil is investigated. Landscape metrics for density, connectivity, and diversity, like the Shannon Diversity Index, are used to assess the diversity of urban buildings, geographic extent, and connectedness. Rapid detection of urban classes (low density, medium density, high density, and tall building district) at the 1-km scale is needed for use in climate models. If the complexity of finer-scale urban characteristics can be related to the neighborhood scale, both climate and sustainability assessments may be more attainable across urban areas.
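The Shannon Diversity Index used above has a simple closed form, H = -Σ p_i ln p_i over the class proportions p_i. A minimal sketch with assumed toy land-cover proportions (not the paper's data):

```python
import math

def shannon_diversity(proportions):
    """H = -sum(p * ln p); higher H means a more diverse landscape."""
    return -sum(p * math.log(p) for p in proportions if p > 0)

mixed = [0.25, 0.25, 0.25, 0.25]   # four equally common cover classes
uniform = [1.0]                    # a single cover class: no diversity
print(round(shannon_diversity(mixed), 4))   # ln(4) ≈ 1.3863
print(shannon_diversity(uniform))           # 0.0
```

With k equally likely classes H reaches its maximum, ln(k), which is why the index is often normalized by ln(k) when comparing landscapes with different numbers of classes.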
Kerr metric in the deSitter background
International Nuclear Information System (INIS)
Vaidya, P.C.
1984-01-01
In addition to the Kerr metric with cosmological constant Λ, several other metrics are presented giving a Kerr-like solution of Einstein's equations in the background of the deSitter universe. A new metric, of what may be termed rotating deSitter space-time, devoid of matter but containing null fluid with twisting null rays, has been presented. This metric reduces to the standard deSitter metric when the twist in the rays vanishes. The Kerr metric in this background is the immediate generalization of Schwarzschild's exterior metric with cosmological constant. (author)
Active Metric Learning from Relative Comparisons
Xiong, Sicheng; Rosales, Rómer; Pei, Yuanli; Fern, Xiaoli Z.
2014-01-01
This work focuses on active learning of distance metrics from relative comparison information. A relative comparison specifies, for a data point triplet $(x_i,x_j,x_k)$, that instance $x_i$ is more similar to $x_j$ than to $x_k$. Such constraints, when available, have been shown to be useful toward defining appropriate distance metrics. In real-world applications, acquiring constraints often requires considerable human effort. This motivates us to study how to select and query the most useful ...
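A relative comparison constraint of this kind is easy to state in code. The sketch below (toy data of my own, with an identity matrix standing in for a learned Mahalanobis metric; this is not the authors' selection algorithm) checks whether a triplet is satisfied:

```python
import math

def mahalanobis(a, b, A):
    """d_A(a, b) = sqrt((a - b)^T A (a - b)) for a positive-definite A."""
    d = [x - y for x, y in zip(a, b)]
    return math.sqrt(sum(d[i] * A[i][j] * d[j]
                         for i in range(len(d)) for j in range(len(d))))

def satisfied(triplet, X, A):
    """(i, j, k) holds iff x_i is closer to x_j than to x_k under A."""
    i, j, k = triplet
    return mahalanobis(X[i], X[j], A) < mahalanobis(X[i], X[k], A)

X = [(0.0, 0.0), (1.0, 0.0), (0.0, 3.0)]   # three hypothetical points
A = [[1.0, 0.0], [0.0, 1.0]]               # identity: plain Euclidean distance
print(satisfied((0, 1, 2), X, A))          # True: x0 is closer to x1 than to x2
```

Metric learning from such constraints searches for an A under which as many queried triplets as possible are satisfied; active learning, as in the abstract, is about choosing which triplets to ask about.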
Heuristic extension of the Schwarzschild metric
International Nuclear Information System (INIS)
Espinosa, J.M.
1982-01-01
The Schwarzschild solution of Einstein's equations of gravitation has several singularities. It is known that the singularity at r = 2Gm/c² is only apparent, a result of the coordinates in which the solution was found. Paradoxical results occurring near the singularity show the system of coordinates is incomplete. We introduce a simple, two-dimensional metric with an apparent singularity that makes it incomplete. By a straightforward, heuristic procedure we extend and complete this simple metric. We then use the same procedure to give a heuristic derivation of the Kruskal system of coordinates, which is known to extend the Schwarzschild manifold past its apparent singularity and produce a complete manifold
Metric inhomogeneous Diophantine approximation in positive characteristic
DEFF Research Database (Denmark)
Kristensen, Simon
2011-01-01
We obtain asymptotic formulae for the number of solutions to systems of inhomogeneous linear Diophantine inequalities over the field of formal Laurent series with coefficients from a finite field, which are valid for almost every such system. Here `almost every' is with respect to the Haar measure of the coefficients of the homogeneous part when the number of variables is at least two (singly metric case), and with respect to the Haar measure of all coefficients for any number of variables (doubly metric case). As consequences, we derive zero-one laws in the spirit of the Khintchine-Groshev Theorem and zero...
Metric inhomogeneous Diophantine approximation in positive characteristic
DEFF Research Database (Denmark)
Kristensen, S.
We obtain asymptotic formulae for the number of solutions to systems of inhomogeneous linear Diophantine inequalities over the field of formal Laurent series with coefficients from a finite field, which are valid for almost every such system. Here 'almost every' is with respect to the Haar measure of the coefficients of the homogeneous part when the number of variables is at least two (singly metric case), and with respect to the Haar measure of all coefficients for any number of variables (doubly metric case). As consequences, we derive zero-one laws in the spirit of the Khintchine-Groshev Theorem and zero...
Jacobi-Maupertuis metric and Kepler equation
Chanda, Sumanto; Gibbons, Gary William; Guha, Partha
This paper studies the application of the Jacobi-Eisenhart lift, Jacobi metric and Maupertuis transformation to the Kepler system. We start by reviewing fundamentals and the Jacobi metric. Then we study various ways to apply the lift to Kepler-related systems: first as conformal description and Bohlin transformation of Hooke’s oscillator, second in contact geometry and third in Houri’s transformation [T. Houri, Liouville integrability of Hamiltonian systems and spacetime symmetry (2016), www.geocities.jp/football_physician/publication.html], coupled with Milnor’s construction [J. Milnor, On the geometry of the Kepler problem, Am. Math. Mon. 90 (1983) 353-365] with eccentric anomaly.
SU-G-BRB-16: Vulnerabilities in the Gamma Metric
Energy Technology Data Exchange (ETDEWEB)
Neal, B; Siebers, J [University of Virginia Health System, Charlottesville, VA (United States)
2016-06-15
Purpose: To explore vulnerabilities in the gamma index metric that undermine its wide use as a radiation therapy quality assurance tool. Methods: 2D test field pairs (images) are created specifically to achieve high gamma passing rates, but to also include gross errors by exploiting the distance-to-agreement and percent-passing components of the metric. The first set has no requirement of clinical practicality, but is intended to expose vulnerabilities. The second set exposes clinically realistic vulnerabilities. To circumvent limitations inherent to user-specific tuning of prediction algorithms to match measurements, digital test cases are manually constructed, thereby mimicking high-quality image prediction. Results: With a 3 mm distance-to-agreement metric, changing field size by ±6 mm results in a gamma passing rate over 99%. For a uniform field, a lattice of passing points spaced 5 mm apart results in a passing rate of 100%. Exploiting the percent-passing component, a 10×10 cm² field can have a 95% passing rate when an 8 cm² = 2.8×2.8 cm² highly out-of-tolerance (e.g. zero dose) square is missing from the comparison image. For clinically realistic vulnerabilities, an arc plan for which a 2D image is created can have a >95% passing rate solely due to agreement in the lateral spillage, with the failing 5% in the critical target region. A field with an integrated boost (e.g. whole brain plus small metastases) could neglect the metastases entirely, yet still pass with a 95% threshold. All the failure modes described would be visually apparent on a gamma-map image. Conclusion: The %gamma<1 metric has significant vulnerabilities. High passing rates can obscure critical faults in hypothetical and delivered radiation doses. Great caution should be used with gamma as a QA metric; users should inspect the gamma-map. Visual analysis of gamma-maps may be impractical for cine acquisition.
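The field-edge vulnerability described above is easy to reproduce with a bare-bones 1-D gamma computation (an illustrative sketch with made-up dose profiles, not the authors' tool): gamma at each evaluated point is the minimum over reference points of sqrt((dist/DTA)² + (dose_diff/tol)²), and a point passes when gamma ≤ 1. With a 3 mm distance-to-agreement, an edge shifted by a full 3 mm still passes at every point.

```python
import math

def gamma_1d(ref, eval_pts, dta=3.0, dose_tol=0.03):
    """ref, eval_pts: lists of (position_mm, dose) pairs; returns gamma values."""
    gammas = []
    for xe, de in eval_pts:
        g = min(math.sqrt(((xr - xe) / dta) ** 2 +
                          ((dr - de) / dose_tol) ** 2)
                for xr, dr in ref)
        gammas.append(g)
    return gammas

positions = [float(x) for x in range(50)]                   # 1 mm grid
ref = [(x, 1.0 if x < 25 else 0.0) for x in positions]      # field edge at 25 mm
shifted = [(x, 1.0 if x < 28 else 0.0) for x in positions]  # edge moved 3 mm
g = gamma_1d(ref, shifted)
passing = sum(1 for v in g if v <= 1.0) / len(g)
print(passing)   # 1.0: a 3 mm edge shift sails through a 3 mm DTA criterion
```

This is exactly why the abstract warns that high passing rates can hide a grossly mispositioned field edge.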
Diagnostic on the appropriation of metrics in software medium enterprises of Medellin city
Directory of Open Access Journals (Sweden)
Piedad Metaute P.
2016-06-01
Full Text Available This article is a product of the research project "Appropriation and use of metrics in medium-sized software enterprises of the city of Medellín." The objective of this research was to assess the appropriation and use of software metrics and to make recommendations that help strengthen both academia and the productive sector in this area. The methodology was based on a documentary review of the international norms, standards, methodologies, guides and tools that address software quality metrics, especially those applicable during software engineering. The main sources consulted were books, journals and articles that could ground the research. Field research was also conducted in medium-sized enterprises engaged in software construction, through contact with the people involved in these processes, yielding data from the real contexts in which the events occur. The topics addressed included project control, process control, software engineering, control of software product quality, the timing of metric application, the application of metrics at different stages, metric certifications, methodologies, tools used, the processes where metrics contribute, and the types of tests applied, among others. The findings were discussed against the respective regulations, best practices and the needs of the different contexts where software metrics are applied, leading to conclusions and practical implications that allowed an assessment of the appropriation and use of metrics in medium-sized software enterprises of Medellín, as well as some suggestions for academia aimed at strengthening the courses responsible for developing software engineering skills, especially in metrics, contextualized to make significant contributions to industry.
Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.
Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy
2016-01-01
Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
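Of the stand-alone metrics compared, conductance has a particularly simple definition: cut(S) / min(vol(S), vol(V-S)), where cut(S) counts edges crossing the cluster boundary and vol counts edge endpoints inside (or outside) the cluster; lower is better. A toy sketch on a hypothetical graph of my own:

```python
def conductance(edges, cluster):
    """Conductance of `cluster` in an undirected graph given as an edge list."""
    cluster = set(cluster)
    cut = vol_in = vol_out = 0
    for u, v in edges:
        for node in (u, v):          # each endpoint contributes to a volume
            if node in cluster:
                vol_in += 1
            else:
                vol_out += 1
        if (u in cluster) != (v in cluster):
            cut += 1                 # edge crosses the cluster boundary
    return cut / min(vol_in, vol_out)

# Two triangles joined by a single bridge edge: a well-separated cluster.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(conductance(edges, {0, 1, 2}))   # 1/7 ≈ 0.143
```

A clustering algorithm that cut one of the triangles in half instead would score far worse on this measure, which is the intuition behind using conductance as a quality signal.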
Quantitative properties of the Schwarzschild metric
Czech Academy of Sciences Publication Activity Database
Křížek, Michal; Křížek, Filip
2018-01-01
Roč. 2018, č. 1 (2018), s. 1-10 Institutional support: RVO:67985840 Keywords : exterior and interior Schwarzschild metric * proper radius * coordinate radius Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://astro.shu-bg.net/pasb/index_files/Papers/2018/SCHWARZ8.pdf
Strong Ideal Convergence in Probabilistic Metric Spaces
Indian Academy of Sciences (India)
In the present paper we introduce the concepts of strongly ideal convergent sequence and strong ideal Cauchy sequence in a probabilistic metric (PM) space endowed with the strong topology, and establish some basic facts. Next, we define the strong ideal limit points and the strong ideal cluster points of a sequence in this ...
lakemorpho: Calculating lake morphometry metrics in R.
Hollister, Jeffrey; Stachelek, Joseph
2017-01-01
Metrics describing the shape and size of lakes, known as lake morphometry metrics, are important for any limnological study. In cases where a lake has long been the subject of study these data are often already collected and are openly available. Many other lakes have these data collected, but access is challenging as it is often stored on individual computers (or worse, in filing cabinets) and is available only to the primary investigators. The vast majority of lakes fall into a third category in which the data are not available. This makes broad scale modelling of lake ecology a challenge as some of the key information about in-lake processes is unavailable. While this valuable in situ information may be difficult to obtain, several national datasets exist that may be used to model and estimate lake morphometry. In particular, digital elevation models and hydrography have been shown to be predictive of several lake morphometry metrics. The R package lakemorpho has been developed to utilize these data and estimate the following morphometry metrics: surface area, shoreline length, major axis length, minor axis length, major and minor axis length ratio, shoreline development, maximum depth, mean depth, volume, maximum lake length, mean lake width, maximum lake width, and fetch. In this software tool article we describe the motivation behind developing lakemorpho, discuss the implementation in R, and describe the use of lakemorpho with an example of a typical use case.
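One of the listed metrics, shoreline development, has a simple closed form: shoreline length divided by the circumference of a circle with the same area as the lake. A hypothetical Python sketch of that formula (the package itself is written in R; the lake values below are invented):

```python
import math

def shoreline_development(shoreline_length, surface_area):
    """SDI = L / (2 * sqrt(pi * A)); 1.0 for a perfect circle,
    larger values for more convoluted shorelines."""
    return shoreline_length / (2.0 * math.sqrt(math.pi * surface_area))

# A circular lake of radius 1 km: perimeter 2*pi*1000 m, area pi*1000^2 m^2.
sdi = shoreline_development(2 * math.pi * 1000, math.pi * 1000 ** 2)
print(round(sdi, 6))   # 1.0 — the circle is the lower bound
```

A long, dendritic reservoir with the same surface area would have a much longer shoreline and hence an SDI well above 1, which is why the index is used to characterize lake shape.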
Contraction theorems in fuzzy metric space
International Nuclear Information System (INIS)
Farnoosh, R.; Aghajani, A.; Azhdari, P.
2009-01-01
In this paper, the results on fuzzy contractive mapping proposed by Dorel Mihet will be proved for B-contraction and C-contraction in the case of George and Veeramani fuzzy metric space. The existence of fixed point with weaker conditions will be proved; that is, instead of the convergence of subsequence, p-convergence of subsequence is used.
Inferring feature relevances from metric learning
DEFF Research Database (Denmark)
Schulz, Alexander; Mokbel, Bassam; Biehl, Michael
2015-01-01
Powerful metric learning algorithms have been proposed in recent years which not only greatly enhance the accuracy of distance-based classifiers and nearest-neighbour database retrieval, but also enable the interpretability of these operations by assigning explicit relevance weights...
DIGITAL MARKETING: SUCCESS METRICS, FUTURE TRENDS
Preeti Kaushik
2017-01-01
Abstract – Business marketing is one of the fields that has been most affected by the digital world in the last few years. Digital marketing refers to advertising through digital channels. This paper provides a detailed study of metrics for measuring the success of digital marketing platforms and a glimpse of the future of these technologies by 2020.
Assessing Software Quality Through Visualised Cohesion Metrics
Directory of Open Access Journals (Sweden)
Timothy Shih
2001-05-01
Full Text Available Cohesion is one of the most important factors for software quality, as well as maintainability, reliability and reusability. Module cohesion is defined as a quality attribute that measures the singleness of purpose of a module. A module of poor cohesion can be a serious obstacle to system quality. To design software of good quality, software managers and engineers need cohesion metrics to measure and produce desirable software. Highly cohesive software is considered desirable. In this paper, we propose a function-oriented cohesion metric based on the analysis of live variables, live spans and the visualization of the processing-element dependency graph. We give six typical cohesion examples, measured as our experiments and justification. A well-defined, well-normalized, well-visualized and well-tested cohesion metric is thus proposed to indicate, and thereby enhance, software cohesion strength. Furthermore, this cohesion metric can easily be incorporated into a software CASE tool to help software engineers improve software quality.
Metric propositional neighborhood logics on natural numbers
DEFF Research Database (Denmark)
Bresolin, Davide; Della Monica, Dario; Goranko, Valentin
2013-01-01
We study Metric Propositional Neighborhood Logic (MPNL) over natural numbers. MPNL features two modalities referring, respectively, to an interval that is “met by” the current one and to an interval that “meets” the current one, plus an infinite set of length constraints, regarded as atomic propositions...
Calabi–Yau metrics and string compactification
Directory of Open Access Journals (Sweden)
Michael R. Douglas
2015-09-01
Yau proved an existence theorem for Ricci-flat Kähler metrics in the 1970s, but we still have no closed-form expressions for them. Nevertheless there are several ways to obtain approximate expressions, both numerical and analytical. We survey some of this work and explain how it can be used to obtain physical predictions from superstring theory.
Gödel-type metrics in various dimensions
International Nuclear Information System (INIS)
Guerses, Metin; Karasu, Atalay; Sarioglu, Oezguer
2005-01-01
Gödel-type metrics are introduced and used in producing charged dust solutions in various dimensions. The key ingredient is a (D - 1)-dimensional Riemannian geometry which is then employed in constructing solutions to the Einstein-Maxwell field equations with a dust distribution in D dimensions. The only essential field equation in the procedure turns out to be the source-free Maxwell's equation in the relevant background. Similarly the geodesics of this type of metric are described by the Lorentz force equation for a charged particle in the lower dimensional geometry. It is explicitly shown with several examples that Gödel-type metrics can be used in obtaining exact solutions to various supergravity theories and in constructing spacetimes that contain both closed timelike and closed null curves and that contain neither of these. Among the solutions that can be established using non-flat backgrounds, such as the Tangherlini metrics in (D - 1) dimensions, there exists a class which can be interpreted as describing black-hole-type objects in a Gödel-like universe.
Strong Statistical Convergence in Probabilistic Metric Spaces
Şençimen, Celaleddin; Pehlivan, Serpil
2008-01-01
In this article, we introduce the concepts of strongly statistically convergent sequence and strong statistically Cauchy sequence in a probabilistic metric (PM) space endowed with the strong topology, and establish some basic facts. Next, we define the strong statistical limit points and the strong statistical cluster points of a sequence in this space and investigate the relations between these concepts.
Language Games: University Responses to Ranking Metrics
Heffernan, Troy A.; Heffernan, Amanda
2018-01-01
League tables of universities that measure performance in various ways are now commonplace, with numerous bodies providing their own rankings of how institutions throughout the world are seen to be performing on a range of metrics. This paper uses Lyotard's notion of language games to theorise that universities are regaining some power over being…
A new universal colour image fidelity metric
Toet, A.; Lucassen, M.P.
2003-01-01
We extend a recently introduced universal grayscale image quality index to a newly developed perceptually decorrelated colour space. The resulting colour image fidelity metric quantifies the distortion of a processed colour image relative to its original version. We evaluated the new colour image
Standardised metrics for global surgical surveillance.
Weiser, Thomas G; Makary, Martin A; Haynes, Alex B; Dziekan, Gerald; Berry, William R; Gawande, Atul A
2009-09-26
Public health surveillance relies on standardised metrics to evaluate disease burden and health system performance. Such metrics have not been developed for surgical services despite increasing volume, substantial cost, and high rates of death and disability associated with surgery. The Safe Surgery Saves Lives initiative of WHO's Patient Safety Programme has developed standardised public health metrics for surgical care that are applicable worldwide. We assembled an international panel of experts to develop and define metrics for measuring the magnitude and effect of surgical care in a population, while taking into account economic feasibility and practicability. This panel recommended six measures for assessing surgical services at a national level: number of operating rooms, number of operations, number of accredited surgeons, number of accredited anaesthesia professionals, day-of-surgery death ratio, and postoperative in-hospital death ratio. We assessed the feasibility of gathering such statistics at eight diverse hospitals in eight countries and incorporated them into the WHO Guidelines for Safe Surgery, in which methods for data collection, analysis, and reporting are outlined.
A Lagrangian-dependent metric space
International Nuclear Information System (INIS)
El-Tahir, A.
1989-08-01
A generalized Lagrangian-dependent metric of the static isotropic spacetime is derived. Its behaviour should be governed by imposing physical constraints that allow one to avert the pathological features of gravity in the strong-field domain. This would restrict the choice of the Lagrangian form. (author). 10 refs
Clean Cities 2011 Annual Metrics Report
Energy Technology Data Exchange (ETDEWEB)
Johnson, C.
2012-12-01
This report details the petroleum savings and vehicle emissions reductions achieved by the U.S. Department of Energy's Clean Cities program in 2011. The report also details other performance metrics, including the number of stakeholders in Clean Cities coalitions, outreach activities by coalitions and national laboratories, and alternative fuel vehicles deployed.
Clean Cities 2010 Annual Metrics Report
Energy Technology Data Exchange (ETDEWEB)
Johnson, C.
2012-10-01
This report details the petroleum savings and vehicle emissions reductions achieved by the U.S. Department of Energy's Clean Cities program in 2010. The report also details other performance metrics, including the number of stakeholders in Clean Cities coalitions, outreach activities by coalitions and national laboratories, and alternative fuel vehicles deployed.
Genetic basis of a cognitive complexity metric
Hansell, Narelle K; Halford, Graeme S; Andrews, Glenda; Shum, David H K; Harris, Sarah E; Davies, Gail; Franic, Sanja; Christoforou, Andrea; Zietsch, Brendan; Painter, Jodie; Medland, Sarah E; Ehli, Erik A; Davies, Gareth E; Steen, Vidar M; Lundervold, Astri J; Reinvang, Ivar; Montgomery, Grant W; Espeseth, Thomas; Hulshoff Pol, Hilleke E; Starr, John M; Martin, Nicholas G; Le Hellard, Stephanie; Boomsma, Dorret I; Deary, Ian J; Wright, Margaret J
2015-01-01
Relational complexity (RC) is a metric reflecting capacity limitation in relational processing. It plays a crucial role in higher cognitive processes and is an endophenotype for several disorders. However, the genetic underpinnings of complex relational processing have not been investigated. Using
Business model metrics : An open repository
Heikkila, M.; Bouwman, W.A.G.A.; Heikkila, J.; Solaimani, S.; Janssen, W.
2015-01-01
Development of successful business models has become a necessity in turbulent business environments, but compared with research on business modeling tools, attention in the literature to the role of metrics in designing business models is limited. Building on existing approaches to business models and
Software quality metrics aggregation in industry
Mordal, K.; Anquetil, N.; Laval, J.; Serebrenik, A.; Vasilescu, B.N.; Ducasse, S.
2013-01-01
With the growing need for quality assessment of entire software systems in the industry, new issues are emerging. First, because most software quality metrics are defined at the level of individual software components, there is a need for aggregation methods to summarize the results at the system
Invariance group of the Finsler metric function
International Nuclear Information System (INIS)
Asanov, G.S.
1985-01-01
An invariance group of the Finsler metric function is introduced and studied that directly generalizes the corresponding concept (a group of Euclidean rotations) of Riemannian geometry. A sequential description of the isotopic invariance of physical fields on the basis of Finsler geometry is possible in terms of this group.
Sigma Routing Metric for RPL Protocol
Directory of Open Access Journals (Sweden)
Paul Sanmartin
2018-04-01
This paper presents the adaptation of a specific metric for the RPL protocol in the objective function MRHOF. Among the functions standardized by the IETF, we find OF0, which is based on the minimum hop count, and MRHOF, which is based on the Expected Transmission Count (ETX). However, when the network becomes denser or the number of nodes increases, both OF0 and MRHOF introduce long hops, which can generate a bottleneck that restricts the network. The adaptation is proposed to optimize both OFs through a new routing metric. To solve the above problem, the metrics of the minimum number of hops and the ETX are combined by designing a new routing metric called SIGMA-ETX, in which the best route is calculated using the standard deviation of ETX values between each node, as opposed to working with the ETX average along the route. This method ensures better routing performance in dense sensor networks. The simulations are done in the Cooja simulator, based on the Contiki operating system. The simulations showed that the proposed optimization outperforms both OF0 and MRHOF by a wide margin in terms of network latency, packet delivery ratio, lifetime, and power consumption.
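The core SIGMA-ETX idea, scoring a route by the spread rather than the mean of its per-link ETX values, can be sketched as follows. The route representation and function names are illustrative assumptions, not the authors' implementation:

```python
import statistics

def sigma_etx(link_etx):
    """Score a candidate route by the (population) standard deviation of its
    per-link ETX values: routes whose links have uniform quality are
    preferred over routes mixing very good and very bad links."""
    return statistics.pstdev(link_etx)

def best_route(candidate_routes):
    """Pick the candidate route with the lowest SIGMA-ETX score."""
    return min(candidate_routes, key=sigma_etx)

# Two hypothetical routes with the same mean ETX (2.0):
uniform = [2.0, 2.0, 2.0]   # consistent links
bursty  = [1.0, 1.0, 4.0]   # one long, lossy hop
print(best_route([bursty, uniform]))  # -> [2.0, 2.0, 2.0]
```

A plain mean-ETX metric would rank the two routes equally; penalising the deviation is what discourages the single long hop the paper identifies as the bottleneck.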
Metrics derived from fish assemblages as indicators of environmental degradation in Cerrado streams
Directory of Open Access Journals (Sweden)
Milton P. Ávila
2018-04-01
The development of effective monitoring tools depends on finding sensitive metrics that are capable of detecting the most important environmental impacts in a given region. We assessed whether metrics derived from stream fish assemblages reflect physical habitat degradation and changes in land cover. We sampled the ichthyofauna and environmental characteristics of 16 first- and second-order stream sites in the Upper Tocantins River basin. The streams were classified according to their environmental characteristics as reference (n = 5), intermediate (n = 4), or impacted (n = 7). A total of 4,079 individuals in five orders, 12 families, and 30 species were collected. Of the 20 metrics tested, eight were non-collinear and were tested for their performance in discriminating among groups of streams. Three metrics were sensitive to the gradient of degradation: the Berger-Parker dominance index, the percentage of characiform fish, and the percentage of rheophilic individuals. Some commonly used metrics did not reflect the disturbances and many others were redundant with those that did. These results indicate that metrics derived from fish assemblages may be informative for identifying the conservation status of streams, with the potential to be used in biomonitoring.
Prototypic Development and Evaluation of a Medium Format Metric Camera
Hastedt, H.; Rofallski, R.; Luhmann, T.; Rosenbauer, R.; Ochsner, D.; Rieke-Zapp, D.
2018-05-01
Engineering applications require high-precision 3D measurement techniques for object sizes that vary between small volumes (2-3 m in each direction) and large volumes (around 20 x 20 x 1-10 m). The required precision in object space (1σ RMS) is defined to be within 0.1-0.2 mm for large volumes and less than 0.01 mm for small volumes. In particular for large volume applications, the availability of a metric camera would offer several advantages: 1) high-quality optical components and stabilisation allow for a stable interior geometry of the camera itself, 2) a stable geometry leads to a stable interior orientation that enables a priori camera calibration, 3) a higher resulting precision can be expected. In this article the development and accuracy evaluation of a new metric camera, the ALPA 12 FPS add|metric, are presented. Its general accuracy potential is tested against calibrated lengths in a small volume test environment based on the German Guideline VDI/VDE 2634.1 (2002). Maximum length measurement errors of less than 0.025 mm are achieved across the different scenarios tested. The accuracy potential for large volumes is estimated within a feasibility study on the application of photogrammetric measurements for deformation estimation on a large wooden shipwreck in the German Maritime Museum. An accuracy of 0.2 mm-0.4 mm is reached over a length of 28 m (given by a distance from a laser tracker network measurement). All analyses have shown high stability of the interior orientation of the camera and indicate the applicability of a priori camera calibration for subsequent 3D measurements.
Directory of Open Access Journals (Sweden)
Chia-Kuang Tsai
Dementia is the foremost worldwide burden on welfare and the health care system in the 21st century. The early identification and control of the modifiable risk factors of dementia are therefore important. Global-cognitive health (GCH) metrics, encompassing controllable cardiovascular health (CVH) and non-CVH risk factors of dementia, are a newly developed approach to assessing the risk of cognitive impairment. The components of ideal GCH metrics include better education, non-obesity, normal blood pressure, no smoking, no depression, ideal physical activity, good social integration, normal glycated hemoglobin (HbA1c), and normal hearing. This study focuses on the association between ideal GCH metrics and cognitive function in young adults by investigating the Third Health and Nutrition Examination Survey (NHANES III) database, which has not been reported previously. A total of 1243 participants aged 17 to 39 years were recruited. Cognitive functioning was evaluated by the simple reaction time test (SRTT), the symbol-digit substitution test (SDST), and the serial digit learning test (SDLT). Participants with significantly higher GCH metric scores had better cognitive performance (p for trend <0.01) in all three cognitive tests. Moreover, better education, ideal physical activity, good social integration and normal glycated hemoglobin were the components of ideal GCH metrics associated with better cognitive performance after adjusting for covariates (p < 0.05 in all three cognitive tests). These findings emphasize the importance of a preventive strategy for modifiable dementia risk factors to enhance cognitive functioning during adulthood.
Anam Cara, St Canice's Road, Glasnevin, Dublin 11.
LENUS (Irish Health Repository)
O'Neill, Barry J
2014-04-01
Antiphospholipid syndrome and systemic lupus erythematosus have been associated with metatarsal stress fractures. Stress fractures of the Lisfranc joint complex are uncommon injuries but have been reported to occur most frequently in ballet dancers. We present a case of an avulsion fracture of the Lisfranc joint complex that occurred spontaneously. We have reviewed the association between systemic conditions and metatarsal fractures and proposed a series of hypothetical pathological events that may have contributed to this unusual injury.
Einstein, the exponential metric, and a proposed gravitational Michelson-Morley experiment
International Nuclear Information System (INIS)
Yilmaz, H.
1979-01-01
An early but potentially important remark of Einstein on the exponential nature of time-dilation is discussed. Using the same argument for the length-contraction, plus two alternative kinematical assumptions, the Schwarzschild and exponential metrics are derived. A gravitational Michelson-Morley experiment with one arm directed along the vertical is proposed to test the metrics. The experiment may be considered as a laboratory test of the Schwarzschild field and possibly a test of the black-hole interpretation of collapsed matter
Quantitative application of sigma metrics in medical biochemistry.
Nanda, Sunil Kumar; Ray, Lopamudra
2013-12-01
Laboratory errors are the result of a poorly designed quality system in the laboratory. Six Sigma is an error-reduction methodology that has been successfully applied at Motorola and General Electric. Sigma (σ) is the mathematical symbol for standard deviation (SD). Sigma methodology can be applied wherever an outcome of a process has to be measured. A poor outcome is counted as an error or defect, quantified as defects per million (DPM). A Six Sigma process is one in which 99.999666% of the products manufactured are statistically expected to be free of defects. Six Sigma concentrates on regulating a process to 6 SDs, representing 3.4 DPM opportunities. It can be inferred that as sigma increases, the consistency and steadiness of the test improve, thereby reducing operating costs. We aimed to gauge the performance of our laboratory parameters by sigma metrics: evaluation of sigma metrics in the interpretation of parameter performance in clinical biochemistry. Six months of internal QC data (October 2012 to March 2013) and EQAS (external quality assurance scheme) results were extracted for the parameters glucose, urea, creatinine, total bilirubin, total protein, albumin, uric acid, total cholesterol, triglycerides, chloride, SGOT, SGPT and ALP. Coefficients of variation (CV) were calculated from the internal QC for these parameters, and percentage bias was calculated from the EQAS. Total allowable errors were taken from Clinical Laboratory Improvement Amendments (CLIA) guidelines. Sigma metrics were calculated from the CV, percentage bias and total allowable error for the above parameters. For total bilirubin, uric acid, SGOT, SGPT and ALP, the sigma values were more than 6. For glucose, creatinine, triglycerides and urea, the sigma values were between 3 and 6. For total protein, albumin, cholesterol and chloride, the sigma values were less than 3. ALP was the best
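The sigma value combined from CV, bias and total allowable error follows the standard formula sigma = (TEa − |bias|) / CV, with all quantities in percent. A minimal sketch with hypothetical assay figures (the numbers below are illustrative, not from the study):

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Six Sigma metric for a laboratory assay:
    sigma = (total allowable error - |bias|) / CV, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical glucose assay: TEa 10%, bias 1.5%, CV 2.0%
print(round(sigma_metric(10.0, 1.5, 2.0), 2))  # -> 4.25
```

On this scale a result above 6 indicates world-class performance, 3 to 6 acceptable performance, and below 3 a process needing redesign, matching the groupings reported in the abstract.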
Culture, intangibles and metrics in environmental management.
Satterfield, Terre; Gregory, Robin; Klain, Sarah; Roberts, Mere; Chan, Kai M
2013-03-15
The demand for better representation of cultural considerations in environmental management is increasingly evident. As two cases in point, ecosystem service approaches increasingly include cultural services, and resource planners recognize indigenous constituents, and the cultural knowledge they hold, as key to good environmental management. Accordingly, collaborations between anthropologists, planners, decision makers and biodiversity experts on the subject of culture are increasingly common, but also commonly fraught. Those whose expertise is culture often engage in such collaborations because they worry that a practitioner from 'elsewhere' will employ a 'measure of culture' that is poorly or naively conceived. Those from an economic or biophysical training must grapple with the intangible properties of culture as they intersect with economic, biological or other material measures. This paper seeks to assist those who engage in collaborations to characterize cultural benefits or impacts relevant to decision-making in three ways, by: (i) considering the likely mindset of would-be collaborators; (ii) providing examples of tested approaches that might enable innovation; and (iii) characterizing the kinds of obstacles that are in principle solvable through methodological alternatives. We accomplish these tasks in part by examining three cases wherein culture was a critical variable in environmental decision making: risk management in New Zealand associated with Māori concerns about genetically modified organisms; cultural services to assist marine planning in coastal British Columbia; and a decision-making process involving a local First Nation about water flows in a regulated river in western Canada. We examine how 'culture' came to be manifest in each case, drawing from ethnographic and cultural-models interviews and using subjective metrics (recommended by theories of judgment and decision making) to express cultural concerns. We conclude that the characterization of
Observable traces of non-metricity: New constraints on metric-affine gravity
Delhom-Latorre, Adrià; Olmo, Gonzalo J.; Ronco, Michele
2018-05-01
Relaxing the Riemannian condition to incorporate geometric quantities such as torsion and non-metricity may allow us to explore new physics associated with defects in a hypothetical space-time microstructure. Here we show that non-metricity produces observable effects in quantum fields in the form of 4-fermion contact interactions, thereby allowing us to constrain the scale of non-metricity to be greater than 1 TeV using results on Bhabha scattering. Our analysis is carried out in the framework of a wide class of theories of gravity in the metric-affine approach. The bound obtained represents an improvement of several orders of magnitude over previous experimental constraints.
Conformal and related changes of metric on the product of two almost contact metric manifolds.
Blair, D. E.
1990-01-01
This paper studies conformal and related changes of the product metric on the product of two almost contact metric manifolds. It is shown that if one factor is Sasakian, the other is not, but that locally the second factor is of the type studied by Kenmotsu. The results are more general and given in terms of trans-Sasakian, α-Sasakian and β-Kenmotsu structures.
Development of a perceptually calibrated objective metric of noise
Keelan, Brian W.; Jin, Elaine W.; Prokushkin, Sergey
2011-01-01
A system simulation model was used to create scene-dependent noise masks that reflect current performance of mobile phone cameras. Stimuli with different overall magnitudes of noise and with varying mixtures of red, green, blue, and luminance noises were included in the study. Eleven treatments in each of ten pictorial scenes were evaluated by twenty observers using the softcopy ruler method. In addition to determining the quality loss function in just noticeable differences (JNDs) for the average observer and scene, transformations for different combinations of observer sensitivity and scene susceptibility were derived. The psychophysical results were used to optimize an objective metric of isotropic noise based on system noise power spectra (NPS), which were integrated over a visual frequency weighting function to yield perceptually relevant variances and covariances in CIE L*a*b* space. Because the frequency weighting function is expressed in terms of cycles per degree at the retina, it accounts for display pixel size and viewing distance effects, so application-specific predictions can be made. Excellent results were obtained using only L* and a* variances and L*a* covariance, with relative weights of 100, 5, and 12, respectively. The positive a* weight suggests that the luminance (photopic) weighting is slightly narrow on the long wavelength side for predicting perceived noisiness. The L*a* covariance term, which is normally negative, reflects masking between L* and a* noise, as confirmed in informal evaluations. Test targets in linear sRGB and rendered L*a*b* spaces for each treatment are available at http://www.aptina.com/ImArch/ to enable other researchers to test metrics of their own design and calibrate them to JNDs of quality loss without performing additional observer experiments. Such JND-calibrated noise metrics are particularly valuable for comparing the impact of noise and other attributes, and for computing overall image quality.
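The optimized objective can be illustrated, in simplified form, as the weighted combination of the L* and a* noise variances and their covariance using the reported relative weights 100, 5 and 12. This sketch omits the NPS visual-frequency weighting of the full metric, and the function name is ours:

```python
import statistics

def noise_objective(l_star, a_star, w_l=100.0, w_a=5.0, w_la=12.0):
    """Simplified isotropic-noise objective: weighted sum of the L* and a*
    noise variances plus the L*a* covariance (relative weights 100, 5, 12
    from the paper), without the visual-frequency weighting step."""
    var_l = statistics.pvariance(l_star)
    var_a = statistics.pvariance(a_star)
    mean_l, mean_a = statistics.fmean(l_star), statistics.fmean(a_star)
    cov_la = sum((l - mean_l) * (a - mean_a)
                 for l, a in zip(l_star, a_star)) / len(l_star)
    return w_l * var_l + w_a * var_a + w_la * cov_la
```

With positively correlated L* and a* noise the covariance term adds to the penalty, consistent with the masking interpretation described above; anticorrelated noise reduces it.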
Development and sensitivity testing of alternative mobility metrics.
2012-03-01
The Oregon Highway Plan (OHP) mobility policies guide various planning and programming activities of the Oregon Department of Transportation (ODOT). Among these activities are ODOT's land use change review responsibilities under the Transpo...
Integrated Metrics for Improving the Life Cycle Approach to Assessing Product System Sustainability
Directory of Open Access Journals (Sweden)
Wesley Ingwersen
2014-03-01
Life cycle approaches are critical for identifying and reducing the environmental burdens of products. While these methods can indicate potential environmental impacts of a product, current Life Cycle Assessment (LCA) methods fail to integrate the multiple impacts of a system into unified measures of social, economic or environmental performance related to sustainability. Integrated metrics that combine multiple aspects of system performance based on a common scientific or economic principle have proven valuable for sustainability evaluation. In this work, we propose methods of adapting four integrated metrics for use with LCAs of product systems: ecological footprint, emergy, green net value added, and Fisher information. These metrics provide information on the full product system in land, energy, and monetary equivalents, and as a unitless information index; each is bundled with one or more indicators for reporting. When used together and for relative comparison, integrated metrics provide broader coverage of sustainability aspects from multiple theoretical perspectives and are more likely to illuminate potential issues than individual impact indicators. These integrated metrics are recommended for use in combination with traditional indicators used in LCA. Future work will test and demonstrate the value of using these integrated metrics and combinations to assess product system sustainability.
Santa Vélez, Camilo; Enea Romano, Antonio
2018-05-01
Static coordinates can be convenient for solving the vacuum Einstein equations in the presence of spherical symmetry, but for cosmological applications comoving coordinates are more suitable for describing an expanding Universe, especially in the framework of cosmological perturbation theory (CPT). Using CPT we develop a method to transform static spherically symmetric (SSS) modifications of the de Sitter solution from static coordinates to the Newton gauge. We test the method with the Schwarzschild de Sitter (SDS) metric and then derive general expressions for the Bardeen potentials for a class of SSS metrics obtained by adding to the de Sitter metric a term linear in the mass and proportional to a general function of the radius. Using the gauge invariance of the Bardeen potentials we then obtain a gauge-invariant definition of the turnaround radius. We apply the method to an SSS solution of the Brans-Dicke theory, confirming the results obtained independently by solving the perturbation equations in the Newton gauge. The Bardeen potentials are then derived for new SSS metrics involving logarithmic, power-law and exponential modifications of the de Sitter metric. We also apply the method to SSS metrics which give flat rotation curves, computing the radial energy density profile in comoving coordinates in the presence of a cosmological constant.
Metrics for measuring distances in configuration spaces
International Nuclear Information System (INIS)
Sadeghi, Ali; Ghasemi, S. Alireza; Schaefer, Bastian; Mohr, Stephan; Goedecker, Stefan; Lill, Markus A.
2013-01-01
In order to characterize molecular structures we introduce configurational fingerprint vectors which are counterparts of quantities used experimentally to identify structures. The Euclidean distance between the configurational fingerprint vectors satisfies the properties of a metric and can therefore safely be used to measure dissimilarities between configurations in the high-dimensional configuration space. In particular, we show that these metrics are a perfect and computationally cheap replacement for the root-mean-square distance (RMSD) when one has to decide whether two noise-contaminated configurations are identical or not. We also introduce a Monte Carlo approach to obtain the global minimum of the RMSD between configurations, i.e., the minimum over all translations, rotations, and permutations of atomic indices.
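The fingerprints in the paper are built from eigenvalue spectra of structural matrices; as a hedged, minimal illustration of the general idea only, the sketch below uses an even simpler fingerprint (the sorted vector of all interatomic distances), which is likewise invariant under translations, rotations and permutations of atomic indices, and measures dissimilarity as the Euclidean distance between fingerprint vectors. All function names are illustrative, not from the paper.

```python
import math
from itertools import combinations

def fingerprint(coords):
    """Sorted vector of all interatomic distances: invariant under
    translations, rotations and permutations of atomic indices."""
    return sorted(math.dist(a, b) for a, b in combinations(coords, 2))

def fingerprint_distance(c1, c2):
    """Euclidean distance between two fingerprint vectors (a metric
    on fingerprints; assumes equal numbers of atoms)."""
    f1, f2 = fingerprint(c1), fingerprint(c2)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(f1, f2)))
```

Because the fingerprint is sorted, relabelling the atoms leaves the distance at exactly zero, while a genuine geometric change gives a strictly positive value.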
A perceptual metric for photo retouching.
Kee, Eric; Farid, Hany
2011-12-13
In recent years, advertisers and magazine editors have been widely criticized for taking digital photo retouching to an extreme. Impossibly thin, tall, and wrinkle- and blemish-free models are routinely splashed onto billboards, advertisements, and magazine covers. The ubiquity of these unrealistic and highly idealized images has been linked to eating disorders and body image dissatisfaction in men, women, and children. In response, several countries have considered legislating the labeling of retouched photos. We describe a quantitative and perceptually meaningful metric of photo retouching. Photographs are rated on the degree to which they have been digitally altered by explicitly modeling and estimating geometric and photometric changes. This metric correlates well with perceptual judgments of photo retouching and can be used to objectively judge by how much a retouched photo has strayed from reality.
Metric-Aware Secure Service Orchestration
Directory of Open Access Journals (Sweden)
Gabriele Costa
2012-12-01
Full Text Available Secure orchestration is an important concern in the Internet of Services. Besides providing the required functionality, composite services must also provide a reasonable level of security in order to protect sensitive data. Thus, the orchestrator needs to check whether the composite service is able to satisfy certain properties. Some properties are expressed with metrics for a precise definition of requirements. Thus, the problem is to analyse the values of metrics for a complex business process. In this paper we extend our previous work on the analysis of secure orchestration with quantifiable properties. We show how to define, verify and enforce quantitative security requirements in one framework together with other security properties. The proposed approach should help to select the most suitable service architecture and guarantee fulfilment of the declared security requirements.
Machine Learning for ATLAS DDM Network Metrics
Lassnig, Mario; The ATLAS collaboration; Vamosi, Ralf
2016-01-01
The increasing volume of physics data is posing a critical challenge to the ATLAS experiment. In anticipation of high-luminosity physics, automation of everyday data management tasks has become necessary. Previously, many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from our ongoing automation efforts. First, we describe our framework for distributed data management and network metrics, which automatically extracts and aggregates data, trains models with various machine learning algorithms, and eventually scores the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.
Beyond Lovelock gravity: Higher derivative metric theories
Crisostomi, M.; Noui, K.; Charmousis, C.; Langlois, D.
2018-02-01
We consider theories describing the dynamics of a four-dimensional metric, whose Lagrangian is diffeomorphism invariant and depends at most on second derivatives of the metric. Imposing degeneracy conditions, we find a set of Lagrangians that, apart from the Einstein-Hilbert one, are either trivial or contain more than 2 degrees of freedom. Among the partially degenerate theories, we recover Chern-Simons gravity, endowed with constraints whose structure suggests the presence of instabilities. We then enlarge the class of parity-violating theories of gravity by introducing new "chiral scalar-tensor theories." Although they all raise the same concern as Chern-Simons gravity, they can nevertheless make sense as low-energy effective field theories or, by restricting them to the unitary gauge (where the scalar field is uniform), as Lorentz-breaking theories with a parity-violating sector.
Interiors of Vaidya's radiating metric: Gravitational collapse
International Nuclear Information System (INIS)
Fayos, F.; Jaen, X.; Llanta, E.; Senovilla, J.M.M.
1992-01-01
Using the Darmois junction conditions, we give the necessary and sufficient conditions for the matching of a general spherically symmetric metric to a Vaidya radiating solution. We also present these conditions in terms of the physical quantities of the corresponding energy-momentum tensors. The physical interpretation of the results and their possible applications are studied, and we also perform a detailed analysis of previous work on the subject by other authors.
Anisotropic rectangular metric for polygonal surface remeshing
Pellenard, Bertrand; Morvan, Jean-Marie; Alliez, Pierre
2013-06-18
We propose a new method for anisotropic polygonal surface remeshing. Our algorithm takes as input a surface triangle mesh. An anisotropic rectangular metric, defined at each triangle facet of the input mesh, is derived from both a user-specified normal-based tolerance error and the requirement to favor rectangle-shaped polygons. Our algorithm uses a greedy optimization procedure that adds, deletes and relocates generators so as to match two criteria related to partitioning and conformity.
A Metrics Approach for Collaborative Systems
Directory of Open Access Journals (Sweden)
Cristian CIUREA
2009-01-01
Full Text Available This article presents different types of collaborative systems, their structure and their classification. The paper defines the concept of a virtual campus as a collaborative system and builds an architecture for a virtual campus oriented toward collaborative training processes. It analyses the quality characteristics of collaborative systems and proposes techniques for metrics construction and validation in order to evaluate them. The article also analyzes different ways to increase the efficiency and the performance level in collaborative banking systems.
Preserved Network Metrics across Translated Texts
Cabatbat, Josephine Jill T.; Monsanto, Jica P.; Tapang, Giovanni A.
2014-09-01
Co-occurrence language networks based on Bible translations and the Universal Declaration of Human Rights (UDHR) translations in different languages were constructed and compared with random text networks. Among the considered network metrics, the network size, N, the normalized betweenness centrality (BC), and the average k-nearest neighbors, knn, were found to be the most preserved across translations. Moreover, similar frequency distributions of co-occurring network motifs were observed for translated texts networks.
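As a minimal sketch of the kind of computation involved (the construction here, linking consecutive words into an unweighted network, is a simplifying assumption rather than the authors' exact pipeline), one can build a co-occurrence network and read off the network size N and the average nearest-neighbor degree knn:

```python
from collections import defaultdict

def cooccurrence_network(text):
    """Undirected co-occurrence network linking consecutive words."""
    words = text.lower().split()
    adj = defaultdict(set)
    for a, b in zip(words, words[1:]):
        if a != b:  # ignore self-loops
            adj[a].add(b)
            adj[b].add(a)
    return dict(adj)

def network_size(adj):
    """N: the number of nodes (distinct words)."""
    return len(adj)

def avg_knn(adj):
    """Average over all nodes of the mean degree of their neighbors."""
    per_node = [sum(len(adj[n]) for n in nbrs) / len(nbrs)
                for nbrs in adj.values()]
    return sum(per_node) / len(per_node)
```

For the toy text "a b c a" the resulting network is a triangle, so N = 3 and the average nearest-neighbor degree is 2.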
Smart Grid Status and Metrics Report
Energy Technology Data Exchange (ETDEWEB)
Balducci, Patrick J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Weimar, Mark R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kirkham, Harold [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2014-07-01
To convey progress made in achieving the vision of a smart grid, this report uses a set of six characteristics derived from the National Energy Technology Laboratory Modern Grid Strategy. It measures 21 metrics to provide insight into the grid’s capacity to embody these characteristics. This report looks across a spectrum of smart grid concerns to measure the status of smart grid deployment and impacts.
Metrics in Keplerian orbits quotient spaces
Milanov, Danila V.
2018-03-01
Quotient spaces of Keplerian orbits are important instruments for the modelling of orbit samples of celestial bodies on a large time span. We suppose that variations of the orbital eccentricities, inclinations and semi-major axes remain sufficiently small, while arbitrary perturbations are allowed for the arguments of pericentres or longitudes of the nodes, or both. The distance between orbits or their images in quotient spaces serves as a numerical criterion for such problems of celestial mechanics as the search for a common origin of meteoroid streams, comets, and asteroids, the identification of asteroid families, and others. In this paper, we consider quotient sets of the non-rectilinear Keplerian orbits space H. Their elements are identified irrespective of the values of pericentre arguments or node longitudes. We prove that distance functions on the quotient sets, introduced in Kholshevnikov et al. (Mon Not R Astron Soc 462:2275-2283, 2016), satisfy the metric space axioms, and we discuss the theoretical and practical importance of this result. Isometric embeddings of the quotient spaces into R^n, and into a space of compact subsets of H with the Hausdorff metric, are constructed. The Euclidean representations of the orbit spaces find their applications in the problem of orbit averaging and in computational algorithms specific to Euclidean space. We also explore completions of H and its quotient spaces with respect to the corresponding metrics and establish a relation between elements of the extended spaces and rectilinear trajectories. The distance between an orbit and the subsets of elliptic and hyperbolic orbits is calculated. This quantity provides an upper bound for the metric value in a problem of close orbit identification. Finally, the invariance of the equivalence relations in H under coordinate changes is discussed.
The Planck Vacuum and the Schwarzschild Metrics
Directory of Open Access Journals (Sweden)
Daywitt W. C.
2009-07-01
Full Text Available The Planck vacuum (PV) is assumed to be the source of the visible universe. So under conditions of sufficient stress, there must exist a pathway through which energy from the PV can travel into this universe. Conversely, a passage for energy from the visible universe to the PV must also exist under the same stressful conditions. The following examines two versions of the Schwarzschild metric equation for compatibility with this open-pathway idea.
Metrics and Its Function in Poetry
Institute of Scientific and Technical Information of China (English)
XIAO Zhong-qiong; CHEN Min-jie
2013-01-01
Poetry is a special combination of musical and linguistic qualities: of sounds regarded both as pure sound and as meaningful speech. Part of the pleasure of poetry lies in its relationship with music. Metrics, including rhythm and meter, is an important method for poetry to express poetic sentiment. Through the introduction of poetic language and typical examples, the writer of this paper tries to discuss the relationship between sound and meaning.
A Fundamental Metric for Metal Recycling Applied to Coated Magnesium
Meskers, C.E.M.; Reuter, M.A.; Boin, U.; Kvithyld, A.
2008-01-01
A fundamental metric for the assessment of the recyclability and, hence, the sustainability of coated magnesium scrap is presented; this metric combines kinetics and thermodynamics. The recycling process, consisting of thermal decoating and remelting, was studied by thermogravimetry and differential
Ideal Based Cyber Security Technical Metrics for Control Systems
Energy Technology Data Exchange (ETDEWEB)
W. F. Boyer; M. A. McQueen
2007-10-01
Much of the world's critical infrastructure is at risk from attack through electronic networks connected to control systems. Security metrics are important because they provide the basis for management decisions that affect the protection of the infrastructure. A cyber security technical metric is the security-relevant output from an explicit mathematical model that makes use of objective measurements of a technical object. A specific set of technical security metrics is proposed for use by the operators of control systems. Our proposed metrics are based on seven security ideals associated with seven corresponding abstract dimensions of security. We have defined at least one metric for each of the seven ideals. Each metric is a measure of how nearly the associated ideal has been achieved. These seven ideals provide a useful structure for further metrics development. A case study shows how the proposed metrics can be applied to an operational control system.
43 CFR 12.915 - Metric system of measurement.
2010-10-01
... procurements, grants, and other business-related activities. Metric implementation may take longer where the... recipient, such as when foreign competitors are producing competing products in non-metric units. (End of...
The Jacobi metric for timelike geodesics in static spacetimes
Gibbons, G. W.
2016-01-01
It is shown that the free motion of massive particles moving in static spacetimes is given by the geodesics of an energy-dependent Riemannian metric on the spatial sections, analogous to Jacobi's metric in classical dynamics. In the massless limit Jacobi's metric coincides with the energy-independent Fermat or optical metric. For stationary metrics, it is known that the motion of massless particles is given by the geodesics of an energy-independent Finslerian metric of Randers type. The motion of massive particles is governed by neither a Riemannian nor a Finslerian metric. The properties of the Jacobi metric for massive particles moving outside the horizon of a Schwarzschild black hole are described. By contrast with the massless case, the Gaussian curvature of the equatorial sections is not always negative.
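As a compact statement of the result summarized above (a sketch using commonly written sign conventions, not quoted verbatim from the paper):

```latex
% Static spacetime: ds^2 = -V^2(x)\,dt^2 + g_{ij}(x)\,dx^i\,dx^j
% A timelike geodesic of mass m and conserved energy E projects onto a
% geodesic of the energy-dependent Riemannian (Jacobi) metric
j_{ij} \;=\; \bigl(E^{2} - m^{2}V^{2}\bigr)\,\frac{g_{ij}}{V^{2}},
% which in the massless limit m \to 0 reduces, up to the constant
% factor E^2, to the energy-independent optical (Fermat) metric
% g_{ij}/V^{2}.
```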
Monitor-Based Statistical Model Checking for Weighted Metric Temporal Logic
DEFF Research Database (Denmark)
Bulychev, Petr; David, Alexandre; Larsen, Kim Guldstrand
2012-01-01
We present a novel approach and implementation for analysing weighted timed automata (WTA) with respect to the weighted metric temporal logic (WMTL≤). Based on a stochastic semantics of WTAs, we apply statistical model checking (SMC) to estimate and test probabilities of satisfaction...
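At its core, SMC estimates the probability that a random run of the system satisfies a property by sampling independent simulations until a statistical bound is met. A minimal, generic sketch of that estimation step (using the standard Chernoff-Hoeffding sample-size bound; this is an illustration of the technique, not the tool's actual implementation):

```python
import math
import random

def smc_estimate(simulate, eps=0.05, delta=0.05, rng=None):
    """Estimate p = P(a run satisfies the property) to within +/- eps
    with confidence at least 1 - delta, via the Chernoff-Hoeffding bound.
    `simulate(rng)` must return True/False for one random run."""
    rng = rng or random.Random()
    # Number of independent runs needed for the requested accuracy.
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    hits = sum(1 for _ in range(n) if simulate(rng))
    return hits / n, n
```

For eps = delta = 0.05 this yields n = 738 simulation runs, independent of the system being checked, which is why SMC scales to models far beyond exhaustive verification.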
Factor structure of the Tomimatsu-Sato metrics
International Nuclear Information System (INIS)
Perjes, Z.
1989-02-01
Based on an earlier result stating that δ = 3 Tomimatsu-Sato (TS) metrics can be factored over the field of integers, an analogous representation for higher TS metrics was sought. It is shown that the factoring property of TS metrics follows from the structure of special Hankel determinants. A set of linear algebraic equations determining the factors was defined, and the factors of the first five TS metrics were tabulated, together with their primitive factors.
What can article-level metrics do for you?
Fenner, Martin
2013-10-01
Article-level metrics (ALMs) provide a wide range of metrics about the uptake of an individual journal article by the scientific community after publication. They include citations, usage statistics, discussions in online comments and social media, social bookmarking, and recommendations. In this essay, we describe why article-level metrics are an important extension of traditional citation-based journal metrics, and we provide a number of examples from ALM data collected for PLOS Biology.
A convergence theory for probabilistic metric spaces | Jäger ...
African Journals Online (AJOL)
We develop a theory of probabilistic convergence spaces based on Tardiff's neighbourhood systems for probabilistic metric spaces. We show that the resulting category is a topological universe and we characterize a subcategory that is isomorphic to the category of probabilistic metric spaces. Keywords: Probabilistic metric ...
Understanding Acceptance of Software Metrics--A Developer Perspective
Umarji, Medha
2009-01-01
Software metrics are measures of software products and processes. Metrics are widely used by software organizations to help manage projects, improve product quality and increase efficiency of the software development process. However, metrics programs tend to have a high failure rate in organizations, and developer pushback is one of the sources…
Modified intuitionistic fuzzy metric spaces and some fixed point theorems
International Nuclear Information System (INIS)
Saadati, R.; Sedghi, S.; Shobe, N.
2008-01-01
The intuitionistic fuzzy metric space has extra conditions (see [Gregori V, Romaguera S, Veereamani P. A note on intuitionistic fuzzy metric spaces. Chaos, Solitons and Fractals 2006;28:902-5]). In this paper, we consider modified intuitionistic fuzzy metric spaces and prove some fixed point theorems in these spaces. All the results presented in this paper are new.
Tide or Tsunami? The Impact of Metrics on Scholarly Research
Bonnell, Andrew G.
2016-01-01
Australian universities are increasingly resorting to the use of journal metrics such as impact factors and ranking lists in appraisal and promotion processes, and are starting to set quantitative "performance expectations" which make use of such journal-based metrics. The widespread use and misuse of research metrics is leading to…
Robustness of climate metrics under climate policy ambiguity
International Nuclear Information System (INIS)
Ekholm, Tommi; Lindroos, Tomi J.; Savolainen, Ilkka
2013-01-01
Highlights: • We assess the economic impacts of using different climate metrics. • The setting is cost-efficient scenarios for three interpretations of the 2 °C target. • With each target setting, the optimal metric is different. • Therefore policy ambiguity prevents the selection of an optimal metric. • Robust metric values that perform well with multiple policy targets however exist. -- Abstract: A wide array of alternatives has been proposed as the common metrics with which to compare the climate impacts of different emission types. Different physical and economic metrics and their parameterizations give diverse weights between e.g. CH4 and CO2, and fixing the metric from one perspective makes it sub-optimal from another. As the aims of global climate policy involve some degree of ambiguity, it is not possible to determine a metric that would be optimal and consistent with all policy aims. This paper evaluates the cost implications of using predetermined metrics in cost-efficient mitigation scenarios. Three formulations of the 2 °C target, including both deterministic and stochastic approaches, shared a wide range of metric values for CH4 with which the mitigation costs are only slightly above the cost-optimal levels. Therefore, although ambiguity in current policy might prevent us from selecting an optimal metric, it can be possible to select robust metric values that perform well with multiple policy targets.
Graev metrics on free products and HNN extensions
DEFF Research Database (Denmark)
Slutsky, Konstantin
2014-01-01
We give a construction of two-sided invariant metrics on free products (possibly with amalgamation) of groups with two-sided invariant metrics and, under certain conditions, on HNN extensions of such groups. Our approach is similar to Graev's construction of metrics on free groups over pointed...
The universal connection and metrics on moduli spaces
International Nuclear Information System (INIS)
Massamba, Fortune; Thompson, George
2003-11-01
We introduce a class of metrics on gauge theoretic moduli spaces. These metrics are made out of the universal matrix that appears in the universal connection construction of M. S. Narasimhan and S. Ramanan. As an example we construct metrics on the c_2 = 1 SU(2) moduli space of instantons on R^4 for various universal matrices.
ST-intuitionistic fuzzy metric space with properties
Arora, Sahil; Kumar, Tanuj
2017-07-01
In this paper, we define the ST-intuitionistic fuzzy metric space, and the notions of convergence and of completeness of Cauchy sequences are studied. Further, we prove some properties of ST-intuitionistic fuzzy metric spaces. Finally, we introduce the concept of a symmetric ST-intuitionistic fuzzy metric space.
Term Based Comparison Metrics for Controlled and Uncontrolled Indexing Languages
Good, B. M.; Tennis, J. T.
2009-01-01
Introduction: We define a collection of metrics for describing and comparing sets of terms in controlled and uncontrolled indexing languages and then show how these metrics can be used to characterize a set of languages spanning folksonomies, ontologies and thesauri. Method: Metrics for term set characterization and comparison were identified and…
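Typical examples of such term-set metrics are overlap measures of the following kind (these particular definitions are generic illustrations of set comparison, not necessarily the ones used in the article):

```python
def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two term sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def coverage(a, b):
    """Containment: the fraction of A's terms that also occur in B."""
    a, b = set(a), set(b)
    return len(a & b) / len(a) if a else 1.0
```

Jaccard is symmetric, while coverage is directional, which matters when comparing a small controlled vocabulary against a large folksonomy.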
Software architecture analysis tool : software architecture metrics collection
Muskens, J.; Chaudron, M.R.V.; Westgeest, R.
2002-01-01
The Software Engineering discipline lacks the ability to evaluate software architectures. Here we describe a tool for software architecture analysis that is based on metrics. Metrics can be used to detect possible problems and bottlenecks in software architectures. Even though metrics do not give a
Otherwise Engaged : Social Media from Vanity Metrics to Critical Analytics
Rogers, R.
2018-01-01
Vanity metrics is a term that captures the measurement and display of how well one is doing in the "success theater" of social media. The notion of vanity metrics implies a critique of metrics concerning both the object of measurement and their capacity to measure unobtrusively or only to
Meter Detection in Symbolic Music Using Inner Metric Analysis
de Haas, W.B.; Volk, A.
2016-01-01
In this paper we present PRIMA: a new model tailored to symbolic music that detects the meter and the first downbeat position of a piece. Given onset data, the metrical structure of a piece is interpreted using the Inner Metric Analysis (IMA) model. IMA identifies the strong and weak metrical
Regional Sustainability: The San Luis Basin Metrics Project
There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute. Moreover, individual metrics may not capture all aspects of a system that are relevant to sust...
Stisen, S.; Demirel, C.; Koch, J.
2017-12-01
Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. There exists a comprehensive and well-tested toolbox of metrics to assess temporal model performance in the hydrological modelling community. On the contrary, the experience in evaluating spatial performance does not correspond to the grand availability of spatial observations and to the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study aims at making a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (spaef) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. spaef, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are tested in a spatial-pattern-oriented model calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and by discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three spaef components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics which allow comparing variables which are related but may differ in unit, in order to optimally exploit spatial observations made available by remote sensing.
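A minimal sketch of a spaef-style score as described above, combining correlation (alpha), the ratio of coefficients of variation (beta) and the histogram overlap of z-scored values (gamma) into 1 - sqrt((alpha-1)^2 + (beta-1)^2 + (gamma-1)^2); the binning and other details here are assumptions rather than the authors' reference implementation:

```python
import math
from statistics import mean, pstdev

def histogram_overlap(xs, ys, bins=10):
    """Histogram intersection of two equal-length samples on shared bins."""
    lo, hi = min(min(xs), min(ys)), max(max(xs), max(ys))
    width = (hi - lo) / bins or 1.0
    def hist(vs):
        h = [0] * bins
        for v in vs:
            h[min(int((v - lo) / width), bins - 1)] += 1
        return h
    hx, hy = hist(xs), hist(ys)
    return sum(min(a, b) for a, b in zip(hx, hy)) / len(xs)

def spaef(obs, sim, bins=10):
    """spaef-style score; assumes non-constant fields with nonzero means."""
    mo, ms = mean(obs), mean(sim)
    so, ss = pstdev(obs), pstdev(sim)
    # alpha: Pearson correlation between the two spatial patterns
    alpha = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (len(obs) * so * ss)
    # beta: ratio of coefficients of variation (spread, bias-insensitive)
    beta = (ss / ms) / (so / mo)
    # gamma: histogram intersection of z-scored values (unit-free)
    zo = [(o - mo) / so for o in obs]
    zs = [(s - ms) / ss for s in sim]
    gamma = histogram_overlap(zo, zs, bins)
    return 1.0 - math.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
```

Identical patterns score exactly 1, and each component degrades the score independently, which is what lets the optimizer see pattern errors that a single correlation measure would hide.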
Griffith, Michael B; Lazorchak, James M; Herlihy, Alan T
2004-07-01
If bioassessments are to help diagnose the specific environmental stressors affecting streams, a better understanding is needed of the relationships between community metrics and ambient criteria or ambient bioassays. However, this relationship is not simple, because metrics assess responses at the community level of biological organization, while ambient criteria and ambient bioassays assess or are based on responses at the individual level. For metals, the relationship is further complicated by the influence of other chemical variables, such as hardness, on their bioavailability and toxicity. In 1993 and 1994, the U.S. Environmental Protection Agency (U.S. EPA) conducted a Regional Environmental Monitoring and Assessment Program (REMAP) survey on wadeable streams in Colorado's (USA) Southern Rockies Ecoregion. In this ecoregion, mining over the past century has resulted in metals contamination of streams. The surveys collected data on fish and macroinvertebrate assemblages, physical habitat, and sediment and water chemistry and toxicity. These data provide a framework for assessing diagnostic community metrics for specific environmental stressors. We characterized streams as metals-affected based on exceedence of hardness-adjusted criteria for cadmium, copper, lead, and zinc in water; on water toxicity tests (48-h Pimephales promelas and Ceriodaphnia dubia survival); on exceedence of sediment threshold effect levels (TELs); or on sediment toxicity tests (7-d Hyalella azteca survival and growth). Macroinvertebrate and fish metrics were compared among affected and unaffected sites to identify metrics sensitive to metals. Several macroinvertebrate metrics, particularly richness metrics, were lower in affected streams, while other metrics were not; this is a function of the sensitivity of the individual metrics to metals effects. Fish metrics were less sensitive to metals because of the low diversity of fish in these streams.
Enhanced Data Representation by Kernel Metric Learning for Dementia Diagnosis
Directory of Open Access Journals (Sweden)
David Cárdenas-Peña
2017-07-01
Full Text Available Alzheimer's disease (AD) is the kind of dementia that affects the most people around the world. Therefore, an early identification supporting effective treatments is required to increase the quality of life of a wide number of patients. Recently, computer-aided diagnosis tools for dementia using Magnetic Resonance Imaging scans have been successfully proposed to discriminate between patients with AD, mild cognitive impairment, and healthy controls. Most of the attention has been given to clinical data, provided by initiatives such as the ADNI, supporting reliable research on intervention, prevention, and treatments of AD. Therefore, there is a need for improving the performance of classification machines. In this paper, we propose a kernel framework for learning metrics that enhances conventional machines and supports the diagnosis of dementia. Our framework aims at building discriminative spaces through the maximization of the centered kernel alignment function, aiming at improving the discrimination of the three considered neurological classes. The proposed metric learning performance is evaluated on the widely known ADNI database using three supervised classification machines (k-nn, SVM and NNs) for multi-class and bi-class scenarios from structural MRIs. Specifically, from the ADNI collection, 286 AD patients, 379 MCI patients and 231 healthy controls are used for development and validation of our proposed metric learning framework. For the experimental validation, we split the data into two subsets: 30% of subjects used as a blindfolded assessment and 70% employed for parameter tuning. Then, in the preprocessing stage, a total of 310 morphological measurements are automatically extracted from each structural MRI scan by the FreeSurfer software package and concatenated to build an input feature matrix. The obtained test performance results show that including supervised metric learning improves the compared baseline classifiers in both scenarios. In the multi
Strategic Human Resource Metrics: A Perspective of the General Systems Theory
Directory of Open Access Journals (Sweden)
Chux Gervase Iwu
2016-04-01
Full Text Available Measuring and quantifying strategic human resource outcomes in relation to key performance criteria is essential to developing value-adding metrics. Objectives This paper posits (using a general systems lens) that strategic human resource metrics should interpret the relationship between attitudinal human resource outcomes and performance criteria such as profitability, quality or customer service. Approach Using the general systems model as the underpinning theory, the study assesses the variation in responses to a Likert-type questionnaire with twenty-four (24) items measuring the major attitudinal dispositions of HRM outcomes (employee commitment, satisfaction, engagement and embeddedness). Results A Chi-square test (Chi-square test statistic = 54.898, p = 0.173) showed that variation in responses to the attitudinal statements occurred due to chance. This was interpreted to mean that attitudinal human resource outcomes influence performance as a unit of system components. The neutral response was found to be more associated with the 'reject' response than with the 'acceptance' response. Value The study offers suggestions on the determination of strategic HR metrics and recommends the use of systems theory in HRM-related studies. Implications This study provides another dimension to human resource metrics by arguing that strategic human resource metrics should measure the relationship between attitudinal human resource outcomes and performance using a systems perspective.
DEFF Research Database (Denmark)
Bendixen, Carsten
2014-01-01
A contribution offering a brief, introductory, perspective-setting and concept-clarifying account of the concept of the test in the world of education.
Extremal limits of the C metric: Nariai, Bertotti-Robinson, and anti-Nariai C metrics
International Nuclear Information System (INIS)
Dias, Oscar J.C.; Lemos, Jose P.S.
2003-01-01
In two previous papers we have analyzed the C metric in a background with a cosmological constant Λ, namely, the de Sitter (dS) C metric (Λ>0) and the anti-de Sitter (AdS) C metric (Λ<0). Here we study the extremal limits of these solutions for Λ>0, Λ=0, and Λ<0. In the extremal dS₂ × S̃² limit, to each point in the deformed two-sphere S̃² corresponds a dS₂ spacetime, except for one point which corresponds to a dS₂ spacetime with an infinite straight strut or string. There are other important new features that appear. One expects that the solutions found in this paper are unstable and decay into a slightly nonextreme black hole pair accelerated by a strut or by strings. Moreover, the Euclidean version of these solutions mediates the quantum process of black hole pair creation that accompanies the decay of the dS and AdS spaces
Massless and massive quanta resulting from a mediumlike metric tensor
International Nuclear Information System (INIS)
Soln, J.
1985-01-01
A simple model of the ''primordial'' scalar field theory is presented in which the metric tensor is a generalization of the metric tensor from electrodynamics in a medium. The radiation signal corresponding to the scalar field propagates with a velocity that is generally less than c. This signal can be associated simultaneously with imaginary and real effective (momentum-dependent) masses. The requirement that the imaginary effective mass vanishes, which we take to be the prerequisite for the vacuumlike signal propagation, leads to the ''spontaneous'' splitting of the metric tensor into two distinct metric tensors: one metric tensor gives rise to masslesslike radiation and the other to a massive particle. (author)
Principle of space existence and De Sitter metric
International Nuclear Information System (INIS)
Mal'tsev, V.K.
1990-01-01
The selection principle for the solutions of the Einstein equations suggested in a series of papers implies the existence of space (g_ik ≠ 0) only in the presence of matter (T_ik ≠ 0). This selection principle (principle of space existence, in the Markov terminology) implies, in the general case, the absence of the cosmological solution with the De Sitter metric. On the other hand, the De Sitter metric is necessary for describing both inflation and deflation periods of the Universe. It is shown that the De Sitter metric is also allowed by the selection principle under discussion if the metric experiences the evolution into the Friedmann metric.
Pragmatic security metrics applying metametrics to information security
Brotby, W Krag
2013-01-01
Other books on information security metrics discuss number theory and statistics in academic terms. Light on mathematics and heavy on utility, PRAGMATIC Security Metrics: Applying Metametrics to Information Security breaks the mold. This is the ultimate how-to-do-it guide for security metrics.Packed with time-saving tips, the book offers easy-to-follow guidance for those struggling with security metrics. Step by step, it clearly explains how to specify, develop, use, and maintain an information security measurement system (a comprehensive suite of metrics) to
Classification in medical images using adaptive metric k-NN
Chen, C.; Chernoff, K.; Karemore, G.; Lo, P.; Nielsen, M.; Lauze, F.
2010-03-01
The performance of the k-nearest neighbor (k-NN) classifier is highly dependent on the distance metric used to identify the k nearest neighbors of the query points. The standard Euclidean distance is commonly used in practice. This paper investigates the performance of the k-NN classifier with respect to different adaptive metrics in the context of medical imaging. We propose using adaptive metrics so that the structure of the data is better described, introducing some unsupervised learning knowledge into k-NN. Four different metrics are estimated: a theoretical metric based on the assumption that images are drawn from the Brownian Image Model (BIM), a normalized metric based on the variance of the data, an empirical metric based on the empirical covariance matrix of the unlabeled data, and an optimized metric obtained by minimizing the classification error. The spectral structure of the empirical covariance also lends itself to Principal Component Analysis (PCA), which yields the subspace metrics. The metrics are evaluated on two data sets: lateral X-rays of the lumbar aortic/spine region, where we use k-NN for abdominal aorta calcification detection; and mammograms, where we use k-NN for breast cancer risk assessment. The results show that an appropriate choice of metric can improve classification.
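A minimal sketch of the "empirical metric" idea, assuming it amounts to a Mahalanobis-style distance built from the data covariance. The toy 2-D points and the hand-inverted 2x2 covariance below are illustrative, not the paper's imaging features or estimators.

```python
# k-NN with a covariance-based (Mahalanobis-style) metric on toy 2-D data.
# Everything here is an illustrative sketch of the idea, not the paper's code.
import math

def covariance_2d(points):
    """Population covariance (sxx, sxy, syy) of 2-D points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return sxx, sxy, syy

def mahalanobis(a, b, cov):
    """sqrt((a-b)^T Sigma^{-1} (a-b)), inverting the 2x2 Sigma by hand."""
    sxx, sxy, syy = cov
    det = sxx * syy - sxy * sxy
    ixx, ixy, iyy = syy / det, -sxy / det, sxx / det
    dx, dy = a[0] - b[0], a[1] - b[1]
    return math.sqrt(dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy)

def knn_predict(query, data, labels, k, cov):
    order = sorted(range(len(data)), key=lambda i: mahalanobis(query, data[i], cov))
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

# Feature 2 has a much larger spread, so raw Euclidean distance would be
# dominated by it; the covariance-based metric rescales both axes.
data = [(1.0, 10.0), (1.2, 30.0), (0.8, 20.0), (3.0, 12.0), (3.2, 28.0), (2.8, 22.0)]
labels = [0, 0, 0, 1, 1, 1]
cov = covariance_2d(data)
print(knn_predict((2.9, 15.0), data, labels, k=3, cov=cov))  # → 1
```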
THE ROLE OF ARTICLE LEVEL METRICS IN SCIENTIFIC PUBLISHING
Directory of Open Access Journals (Sweden)
Vladimir TRAJKOVSKI
2016-04-01
Full Text Available Emerging article-level metrics do not exclude traditional citation-based journal metrics, but complement them. Article-level metrics (ALMs) provide a wide range of metrics about the uptake of an individual journal article by the scientific community after publication. They include citations, usage statistics, discussions in online comments and social media, social bookmarking, and recommendations. This editorial describes the role of article-level metrics in publishing scientific papers. ALMs are rapidly emerging as important tools to quantify how individual articles are being discussed, shared, and used. Data sources depend on the tool, but they include classic citation-based indicators, academic social networks (Mendeley, CiteULike, Delicious) and social media (Facebook, Twitter, blogs, and YouTube). The most popular tools used to apply these new metrics are Public Library of Science Article-Level Metrics, Altmetric, Impactstory and Plum Analytics. The Journal Impact Factor (JIF) does not consider impact or influence beyond citation counts, as these counts are reflected only through Thomson Reuters' Web of Science® database. The JIF provides an indicator related to the journal, but not to an individual published paper. Thus, altmetrics is becoming an alternative metric for performance assessment of individual scientists and their scholarly publications. Macedonian scholarly publishers have to work on implementing article-level metrics in their e-journals. It is a way to increase their visibility and impact in the world of science.
Evaluation of Subjective and Objective Performance Metrics for Haptically Controlled Robotic Systems
Directory of Open Access Journals (Sweden)
Cong Dung Pham
2014-07-01
Full Text Available This paper studies in detail how different evaluation methods perform when it comes to describing the performance of haptically controlled mobile manipulators. In particular, we investigate how well subjective metrics perform compared to objective metrics. Finding the best metrics to describe the performance of a control scheme is challenging when human operators are involved; how the user perceives the performance of the controller does not necessarily correspond to the directly measurable metrics normally used in controller evaluation. It is therefore important to study whether there is any correspondence between how the user perceives the performance of a controller and how it performs in terms of directly measurable metrics such as the time used to perform a task, the number of errors, accuracy, and so on. To perform these tests we choose a system that consists of a mobile manipulator controlled by an operator through a haptic device. This is a good system for studying different performance metrics, as performance can be determined both by subjective metrics based on feedback from the users and by objective, directly measurable metrics. The system consists of a robotic arm, which provides for interaction and manipulation, mounted on a mobile base, which extends the workspace of the arm. The operator thus needs to perform both interaction and locomotion using a single haptic device. While the position of the on-board camera is determined by the base motion, the principal control objective is the motion of the manipulator arm. This calls for intelligent control allocation between the base and the manipulator arm in order to obtain intuitive control of both the camera and the arm. We implement three different approaches to the control allocation problem, i.e., whether the vehicle or manipulator arm actuation is applied to generate the desired motion. The performance of the different control schemes is evaluated, and our
Outsourced Similarity Search on Metric Data Assets
DEFF Research Database (Denmark)
Yiu, Man Lung; Assent, Ira; Jensen, Christian S.
2012-01-01
This paper considers a cloud computing setting in which similarity querying of metric data is outsourced to a service provider. The data is to be revealed only to trusted users, not to the service provider or anyone else. Users query the server for the most similar data objects to a query example. Outsourcing offers the data owner scalability and a low initial investment. The need for privacy may be due to the data being sensitive (e.g., in medicine), valuable (e.g., in astronomy), or otherwise confidential. Given this setting, the paper presents techniques that transform the data prior to supplying it to the service provider.
New Metrics from a Fractional Gravitational Field
International Nuclear Information System (INIS)
El-Nabulsi, Rami Ahmad
2017-01-01
Agop et al. proved in Commun. Theor. Phys. (2008) that a Reissner–Nordström-type metric is obtained if a gauge gravitational field in a fractal spacetime is constructed by means of the concepts of scale relativity. We prove in this short communication that a similar result is obtained if gravity in D spacetime dimensions is fractionalized by means of the Glaeske–Kilbas–Saigo fractional derivative. Besides, non-singular gravitational fields are obtained without using extra dimensions. We present a few examples to show that these gravitational fields hold a number of motivating features in spacetime physics. (paper)
Energy Metrics for State Government Buildings
Michael, Trevor
Measuring true progress towards energy conservation goals requires the accurate reporting and accounting of energy consumption. An accurate energy metrics framework is also a critical element for verifiable Greenhouse Gas Inventories. Energy conservation in government can reduce expenditures on energy costs, leaving more funds available for public services. In addition to monetary savings, conserving energy can help to promote energy security, air quality, and a reduction of carbon footprint. With energy consumption/GHG inventories recently produced at the Federal level, state and local governments are beginning to produce their own energy metrics systems. In recent years, many states have passed laws and executive orders which require their agencies to reduce energy consumption. In June 2008, SC state government established a law to achieve a 20% energy usage reduction in state buildings by 2020. This study examines case studies from other states that have established similar goals, to uncover the methods used to establish an energy metrics system. Direct energy consumption in state government primarily comes from buildings and mobile sources. This study will focus exclusively on measuring energy consumption in state buildings. The case studies reveal that many states, including SC, are having issues gathering the data needed to accurately measure energy consumption across all state buildings. Common problems found include a lack of enforcement and incentives that encourage state agencies to participate in any reporting system. The case studies are aimed at finding the leverage used to gather the needed data. The various approaches to encouraging participation will hopefully reveal methods that SC can use to establish the accurate metrics system needed to measure progress towards its 20% by 2020 energy reduction goal. Among the strongest incentives found in the case studies is the potential for monetary savings through energy efficiency. Framing energy conservation
Multi-Robot Assembly Strategies and Metrics
MARVEL, JEREMY A.; BOSTELMAN, ROGER; FALCO, JOE
2018-01-01
We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies. PMID:29497234
Metric preheating and limitations of linearized gravity
International Nuclear Information System (INIS)
Bassett, Bruce A.; Tamburini, Fabrizio; Kaiser, David I.; Maartens, Roy
1999-01-01
During the preheating era after inflation, resonant amplification of quantum field fluctuations takes place. Recently it has become clear that this must be accompanied by resonant amplification of scalar metric fluctuations, since the two are united by Einstein's equations. Furthermore, this 'metric preheating' enhances particle production, and leads to gravitational rescattering effects even at linear order. In multi-field models with strong preheating (q>>1), metric perturbations are driven non-linear, with the strongest amplification typically on super-Hubble scales (k→0). This amplification is causal, being due to the super-Hubble coherence of the inflaton condensate, and is accompanied by resonant growth of entropy perturbations. The amplification invalidates the use of the linearized Einstein field equations, irrespective of the amount of fine-tuning of the initial conditions. This has serious implications on all scales - from large-angle cosmic microwave background (CMB) anisotropies to primordial black holes. We investigate the (q,k) parameter space in a two-field model, and introduce the time to non-linearity, t_nl, as the timescale for the breakdown of the linearized Einstein equations. t_nl is a robust indicator of resonance behavior, showing the fine structure in q and k that one expects from a quasi-Floquet system, and we argue that t_nl is a suitable generalization of the static Floquet index in an expanding universe. Backreaction effects are expected to shut down the linear resonances, but cannot remove the existing amplification, which threatens the viability of strong preheating when confronted with the CMB. Mode-mode coupling and turbulence tend to re-establish scale invariance, but this process is limited by causality and for small k the primordial scale invariance of the spectrum may be destroyed. We discuss ways to escape the above conclusions, including secondary phases of inflation and preheating solely to fermions. The exclusion principle
Alternative kinetic energy metrics for Lagrangian systems
Sarlet, W.; Prince, G.
2010-11-01
We examine Lagrangian systems on ℝ^n with standard kinetic energy terms for the possibility of additional, alternative Lagrangians with kinetic energy metrics different to the Euclidean one. Using the techniques of the inverse problem in the calculus of variations we find necessary and sufficient conditions for the existence of such Lagrangians. We illustrate the problem in two and three dimensions with quadratic and cubic potentials. As an aside we show that the well-known anomalous Lagrangians for the Coulomb problem can be removed by switching on a magnetic field, providing an appealing resolution of the ambiguous quantizations of the hydrogen atom.
Differential geometry bundles, connections, metrics and curvature
Taubes, Clifford Henry
2011-01-01
Bundles, connections, metrics and curvature are the 'lingua franca' of modern differential geometry and theoretical physics. This book will supply a graduate student in mathematics or theoretical physics with the fundamentals of these objects. Many of the tools used in differential topology are introduced and the basic results about differentiable manifolds, smooth maps, differential forms, vector fields, Lie groups, and Grassmanians are all presented here. Other material covered includes the basic theorems about geodesics and Jacobi fields, the classification theorem for flat connections, the
Multi-Robot Assembly Strategies and Metrics.
Marvel, Jeremy A; Bostelman, Roger; Falco, Joe
2018-02-01
We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies.
Indefinite metric and regularization of electrodynamics
International Nuclear Information System (INIS)
Gaudin, M.
1984-06-01
The invariant regularization of Pauli and Villars in quantum electrodynamics can be considered as deriving from a local and causal Lagrangian theory for spin-1/2 bosons, by introducing an indefinite metric and a condition on the allowed states similar to the Lorentz condition. A consequence is the asymptotic freedom of the photon propagator. We present a calculation of the effective charge to fourth order in the coupling as a function of the auxiliary masses, the theory avoiding all mass divergences to this order [fr]
Metrics for comparing plasma mass filters
Energy Technology Data Exchange (ETDEWEB)
Fetterman, Abraham J.; Fisch, Nathaniel J. [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08540 (United States)
2011-10-15
High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.
Metrics for comparing plasma mass filters
International Nuclear Information System (INIS)
Fetterman, Abraham J.; Fisch, Nathaniel J.
2011-01-01
High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.
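The separative power the two records above adopt as their comparison metric comes from classical isotope-separation theory. A minimal sketch using the standard Dirac value function follows; the flows and concentrations are illustrative (not taken from the paper), and the paper's per-unit-volume metric would divide the result by the filter volume.

```python
# Separative power of a single separating element, in the classical
# (Dirac value function) formulation. Flows and assays below are made up.
import math

def value_function(x):
    """Dirac value function V(x) = (2x - 1) ln(x / (1 - x)) for assay x."""
    return (2.0 * x - 1.0) * math.log(x / (1.0 - x))

def separative_power(feed, x_f, product, x_p, tails, x_w):
    """delta-U = P*V(x_p) + W*V(x_w) - F*V(x_f), in the units of the flows."""
    return (product * value_function(x_p)
            + tails * value_function(x_w)
            - feed * value_function(x_f))

# Symmetric illustrative cut: unit feed at 50% split into equal product and
# tails streams enriched/depleted to 60% and 40%.
dU = separative_power(1.0, 0.5, 0.5, 0.6, 0.5, 0.4)
print(round(dU, 4))  # → 0.0811
```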
Decision Analysis for Metric Selection on a Clinical Quality Scorecard.
Guth, Rebecca M; Storey, Patricia E; Vitale, Michael; Markan-Aurora, Sumita; Gordon, Randolph; Prevost, Traci Q; Dunagan, Wm Claiborne; Woeltje, Keith F
2016-09-01
Clinical quality scorecards are used by health care institutions to monitor clinical performance and drive quality improvement. Because of the rapid proliferation of quality metrics in health care, BJC HealthCare found it increasingly difficult to select the most impactful scorecard metrics while still monitoring metrics for regulatory purposes. A 7-step measure selection process was implemented incorporating Kepner-Tregoe Decision Analysis, which is a systematic process that considers key criteria that must be satisfied in order to make the best decision. The decision analysis process evaluates what metrics will most appropriately fulfill these criteria, as well as identifies potential risks associated with a particular metric in order to identify threats to its implementation. Using this process, a list of 750 potential metrics was narrowed to 25 that were selected for scorecard inclusion. This decision analysis process created a more transparent, reproducible approach for selecting quality metrics for clinical quality scorecards. © The Author(s) 2015.
Balanced metrics for vector bundles and polarised manifolds
DEFF Research Database (Denmark)
Garcia Fernandez, Mario; Ross, Julius
2012-01-01
We consider a notion of balanced metrics for triples (X, L, E) which depend on a parameter α, where X is a smooth complex manifold with an ample line bundle L and E is a holomorphic vector bundle over X. For generic choice of α, we prove that the limit of a convergent sequence of balanced metrics leads to a Hermitian–Einstein metric on E and a constant scalar curvature Kähler metric in c₁(L). For special values of α, limits of balanced metrics are solutions of a system of coupled equations relating a Hermitian–Einstein metric on E and a Kähler metric in c₁(L). For this, we compute the top two…
Construction of Einstein-Sasaki metrics in D≥7
International Nuclear Information System (INIS)
Lue, H.; Pope, C. N.; Vazquez-Poritz, J. F.
2007-01-01
We construct explicit Einstein–Kähler metrics in all even dimensions D = 2n+4 ≥ 6, in terms of a 2n-dimensional Einstein–Kähler base metric. These are cohomogeneity-2 metrics which have the new feature of including a NUT-type parameter, or gravomagnetic charge, in addition to mass and rotation parameters. Using a canonical construction, these metrics all yield Einstein–Sasaki metrics in dimensions D = 2n+5 ≥ 7. As is commonly the case in this type of construction, for suitable choices of the free parameters the Einstein–Sasaki metrics can extend smoothly onto complete and nonsingular manifolds, even though the underlying Einstein–Kähler metric has conical singularities. We discuss some explicit examples in the case of seven-dimensional Einstein–Sasaki spaces. These new spaces can provide supersymmetric backgrounds in M-theory, which play a role in the AdS₄/CFT₃ correspondence
National Metrical Types in Nineteenth Century Art Song
Directory of Open Access Journals (Sweden)
Leigh VanHandel
2010-01-01
Full Text Available William Rothstein’s article “National metrical types in music of the eighteenth and early nineteenth centuries” (2008 proposes a distinction between the metrical habits of 18th and early 19th century German music and those of Italian and French music of that period. Based on theoretical treatises and compositional practice, he outlines these national metrical types and discusses the characteristics of each type. This paper presents the results of a study designed to determine whether, and to what degree, Rothstein’s characterizations of national metrical types are present in 19th century French and German art song. Studying metrical habits in this genre may provide a lens into changing metrical conceptions of 19th century theorists and composers, as well as to the metrical habits and compositional style of individual 19th century French and German art song composers.
Metrication: An economic wake-up call for US industry
Carver, G. P.
1993-03-01
As the international standard of measurement, the metric system is one key to success in the global marketplace. International standards have become an important factor in international economic competition. Non-metric products are becoming increasingly unacceptable in world markets that favor metric products. Procurement is the primary federal tool for encouraging and helping U.S. industry to convert voluntarily to the metric system. Besides the perceived unwillingness of the customer, certain regulatory language, and certain legal definitions in some states, there are no major impediments to conversion of the remaining non-metric industries to metric usage. Instead, there are good reasons for changing, including an opportunity to rethink many industry standards and to take advantage of size standardization. Also, when the remaining industries adopt the metric system, they will come into conformance with federal agencies engaged in similar activities.
Patrick, Christopher J; Yuan, Lester L
2017-07-01
Flow alteration is widespread in streams, but current understanding of the effects of differences in flow characteristics on stream biological communities is incomplete. We tested hypotheses about the effect of variation in hydrology on stream communities by using generalized additive models to relate watershed information to the values of different flow metrics at gauged sites. Flow models accounted for 54-80% of the spatial variation in flow metric values among gauged sites. We then used these models to predict flow metrics in 842 ungauged stream sites in the mid-Atlantic United States that were sampled for fish, macroinvertebrates, and environmental covariates. Fish and macroinvertebrate assemblages were characterized in terms of a suite of metrics that quantified aspects of community composition, diversity, and functional traits that were expected to be associated with differences in flow characteristics. We related modeled flow metrics to biological metrics in a series of stressor-response models. Our analyses identified both drying and base flow instability as explaining 30-50% of the observed variability in fish and invertebrate community composition. Variations in community composition were related to variations in the prevalence of dispersal traits in invertebrates and trophic guilds in fish. The results demonstrate that we can use statistical models to predict hydrologic conditions at bioassessment sites, which, in turn, we can use to estimate relationships between flow conditions and biological characteristics. This analysis provides an approach to quantify the effects of spatial variation in flow metrics using readily available biomonitoring data. © 2017 by the Ecological Society of America.
Fanpage metrics analysis. "Study on content engagement"
Rahman, Zoha; Suberamanian, Kumaran; Zanuddin, Hasmah Binti; Moghavvemi, Sedigheh; Nasir, Mohd Hairul Nizam Bin Md
2016-08-01
Social media is now established as an excellent communicative tool for connecting directly with consumers. One of the most significant ways to connect with consumers through these social networking sites (SNS) is to create a Facebook fanpage with brand contents and to place different posts periodically on these fanpages. In measuring social networking sites' effectiveness, corporate houses are now analyzing metrics in terms of engagement rate and the number of comments, shares and likes on fanpages. It is therefore very important for marketers to know the effectiveness of different contents or posts on fanpages in order to increase fan responsiveness and the engagement rate. In this study the authors analyzed a total of 1834 brand posts from 17 international electronics brands. Nine months of data (December 2014 to August 2015), publicly available on the brands' fan pages, were collected for analysis. An econometric analysis was conducted using EViews 9 to determine the impact of different contents on fanpage engagement. The study picked the four most frequently posted content types to determine their impact on PTA (People Talking About) metrics and fanpage engagement activities.
Network Community Detection on Metric Space
Directory of Open Access Journals (Sweden)
Suman Saha
2015-08-01
Full Text Available Community detection in a complex network is an important problem of much interest in recent years. In general, a community detection algorithm chooses an objective function and captures the communities of the network by optimizing it; various heuristics are then used to solve the optimization problem and extract the communities of interest for the user. In this article, we demonstrate a procedure to transform a graph into points of a metric space and develop methods of community detection with the help of a metric defined for a pair of points. We have also studied and analyzed the community structure of the network therein. The results obtained with our approach are very competitive with most of the well-known algorithms in the literature, and this is justified over a large collection of datasets. On the other hand, it can be observed that the time taken by our algorithm is considerably less than that of other methods, which justifies the theoretical findings.
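One plausible reading of the graph-to-points transformation can be sketched as follows, assuming vertices are mapped to their vectors of BFS shortest-path distances and then grouped by closeness. The graph, the L1 comparison, and the threshold are all illustrative choices, not the authors' construction.

```python
# Sketch: embed each vertex as its vector of BFS distances to all vertices,
# then greedily group vertices whose vectors are close in L1 distance.
# Illustrative only; not the algorithm of the cited article.
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path distances from source, as a vector over sorted vertices."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return [dist[v] for v in sorted(adj)]

def communities(adj, threshold):
    """Greedy grouping: join the first group whose representative is close."""
    vecs = {v: bfs_distances(adj, v) for v in adj}
    groups = []
    for v in sorted(adj):
        for g in groups:
            rep = vecs[g[0]]
            if sum(abs(a - b) for a, b in zip(vecs[v], rep)) <= threshold:
                g.append(v)
                break
        else:
            groups.append([v])
    return groups

# Two triangles {0,1,2} and {3,4,5} joined by a single bridge edge 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(communities(adj, threshold=5))  # → [[0, 1, 2], [3, 4, 5]]
```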
Value of the Company and Marketing Metrics
Directory of Open Access Journals (Sweden)
André Luiz Ramos
2013-12-01
Full Text Available Thinking about marketing strategies from a resource-based perspective (Barney, 1991), which classifies assets as tangible, organizational and human, and from Constantin and Lusch's vision (1994), where strategic resources can be tangible or intangible, internal or external to the firm, raises a research approach on Marketing and Finance. According to Srivastava, Shervani and Fahey (1998) there are three types of market assets which generate firm value. Firm value can be measured by discounted cash flow, linking marketing activities with value-generation forecasts (Anderson, 1982; Day, Fahey, 1988; Doyle, 2000; Rust et al., 2004a). The economic value of marketing strategies and marketing metrics is attracting the attention of strategy researchers and marketing managers, making clear the need to build a bridge able to articulate marketing and finance from a strategic perspective. This article proposes an analytical framework based on different scientific approaches involving the risk and return promoted by marketing strategies, and points out advances concerning both methodological approaches and marketing strategies and their impact on firm metrics and value, using Srinivasan and Hanssens (2009) as a starting point.
Defining a standard metric for electricity savings
International Nuclear Information System (INIS)
Koomey, Jonathan; Akbari, Hashem; Blumstein, Carl; Brown, Marilyn; Brown, Richard; Calwell, Chris; Carter, Sheryl; Cavanagh, Ralph; Chang, Audrey; Claridge, David; Craig, Paul; Diamond, Rick; Eto, Joseph H; Fulkerson, William; Gadgil, Ashok; Geller, Howard; Goldemberg, Jose; Goldman, Chuck; Goldstein, David B; Greenberg, Steve
2010-01-01
The growing investment by governments and electric utilities in energy efficiency programs highlights the need for simple tools to help assess and explain the size of the potential resource. One technique that is commonly used in this effort is to characterize electricity savings in terms of avoided power plants, because it is easier for people to visualize a power plant than it is to understand an abstraction such as billions of kilowatt-hours. Unfortunately, there is no standardization around the characteristics of such power plants. In this letter we define parameters for a standard avoided power plant that have physical meaning and intuitive plausibility, for use in back-of-the-envelope calculations. For the prototypical plant this article settles on a 500 MW existing coal plant operating at a 70% capacity factor with 7% T&D losses. Displacing such a plant for one year would save 3 billion kWh/year at the meter and reduce emissions by 3 million metric tons of CO₂ per year. The proposed name for this metric is the Rosenfeld, in keeping with the tradition among scientists of naming units in honor of the person most responsible for the discovery and widespread adoption of the underlying scientific principle in question: Dr Arthur H Rosenfeld.
Defining a standard metric for electricity savings
Energy Technology Data Exchange (ETDEWEB)
Koomey, Jonathan [Lawrence Berkeley National Laboratory and Stanford University, PO Box 20313, Oakland, CA 94620-0313 (United States); Akbari, Hashem; Blumstein, Carl; Brown, Marilyn; Brown, Richard; Calwell, Chris; Carter, Sheryl; Cavanagh, Ralph; Chang, Audrey; Claridge, David; Craig, Paul; Diamond, Rick; Eto, Joseph H; Fulkerson, William; Gadgil, Ashok; Geller, Howard; Goldemberg, Jose; Goldman, Chuck; Goldstein, David B; Greenberg, Steve, E-mail: JGKoomey@stanford.ed
2010-01-15
The growing investment by governments and electric utilities in energy efficiency programs highlights the need for simple tools to help assess and explain the size of the potential resource. One technique that is commonly used in this effort is to characterize electricity savings in terms of avoided power plants, because it is easier for people to visualize a power plant than it is to understand an abstraction such as billions of kilowatt-hours. Unfortunately, there is no standardization around the characteristics of such power plants. In this letter we define parameters for a standard avoided power plant that have physical meaning and intuitive plausibility, for use in back-of-the-envelope calculations. For the prototypical plant this article settles on a 500 MW existing coal plant operating at a 70% capacity factor with 7% T&D losses. Displacing such a plant for one year would save 3 billion kWh/year at the meter and reduce emissions by 3 million metric tons of CO2 per year. The proposed name for this metric is the Rosenfeld, in keeping with the tradition among scientists of naming units in honor of the person most responsible for the discovery and widespread adoption of the underlying scientific principle in question: Dr Arthur H Rosenfeld.
Covariant electrodynamics in linear media: Optical metric
Thompson, Robert T.
2018-03-01
While the postulate of covariance of Maxwell's equations for all inertial observers led Einstein to special relativity, it was the further demand of general covariance, form invariance under general coordinate transformations, including between accelerating frames, that led to general relativity. Several lines of inquiry over the past two decades, notably the development of metamaterial-based transformation optics, have spurred a greater interest in the role of geometry and space-time covariance for electrodynamics in ponderable media. I develop a generally covariant, coordinate-free framework for electrodynamics in general dielectric media residing in curved background space-times. In particular, I derive a relation for the spatial medium parameters measured by an arbitrary timelike observer. In terms of those medium parameters I derive an explicit expression for the pseudo-Finslerian optical metric of birefringent media and show how it reduces to a pseudo-Riemannian optical metric for nonbirefringent media. This formulation provides a basis for a unified approach to ray and congruence tracing through media in curved space-times that may smoothly vary among positively refracting, negatively refracting, and vacuum.
Axisymmetric plasma equilibria in a Kerr metric
Elsässer, Klaus
2001-10-01
Plasma equilibria near a rotating black hole are considered within the multifluid description. An isothermal two-component plasma with electrons and positrons or ions is determined by four structure functions and the boundary conditions. These structure functions are the Bernoulli function and the toroidal canonical momentum per mass for each species. The quasi-neutrality assumption (no charge density, no toroidal current) makes it possible to solve Maxwell's equations analytically for any axisymmetric stationary metric, and to reduce the fluid equations to a single scalar equation for the stream function \chi of the positrons or ions, respectively. The basic smallness parameter is the ratio of the skin depth of the electrons to the scale length of the metric and fluid quantities, and, in the case of an electron-ion plasma, the mass ratio m_e/m_i. The \chi-equation can be solved by standard methods, and simple solutions for a Kerr geometry are available; they show characteristic flow patterns, depending on the structure functions and the boundary conditions.
Defining a Standard Metric for Electricity Savings
Energy Technology Data Exchange (ETDEWEB)
Brown, Marilyn; Akbari, Hashem; Blumstein, Carl; Koomey, Jonathan; Brown, Richard; Calwell, Chris; Carter, Sheryl; Cavanagh, Ralph; Chang, Audrey; Claridge, David; Craig, Paul; Diamond, Rick; Eto, Joseph H.; Fulkerson, William; Gadgil, Ashok; Geller, Howard; Goldemberg, Jose; Goldman, Chuck; Goldstein, David B.; Greenberg, Steve; Hafemeister, David; Harris, Jeff; Harvey, Hal; Heitz, Eric; Hirst, Eric; Hummel, Holmes; Kammen, Dan; Kelly, Henry; Laitner, Skip; Levine, Mark; Lovins, Amory; Masters, Gil; McMahon, James E.; Meier, Alan; Messenger, Michael; Millhone, John; Mills, Evan; Nadel, Steve; Nordman, Bruce; Price, Lynn; Romm, Joe; Ross, Marc; Rufo, Michael; Sathaye, Jayant; Schipper, Lee; Schneider, Stephen H; Sweeney, James L; Verdict, Malcolm; Vorsatz, Diana; Wang, Devra; Weinberg, Carl; Wilk, Richard; Wilson, John; Worrell, Ernst
2009-03-01
The growing investment by governments and electric utilities in energy efficiency programs highlights the need for simple tools to help assess and explain the size of the potential resource. One technique that is commonly used in this effort is to characterize electricity savings in terms of avoided power plants, because it is easier for people to visualize a power plant than it is to understand an abstraction such as billions of kilowatt-hours. Unfortunately, there is no standardization around the characteristics of such power plants. In this letter we define parameters for a standard avoided power plant that have physical meaning and intuitive plausibility, for use in back-of-the-envelope calculations. For the prototypical plant this article settles on a 500 MW existing coal plant operating at a 70% capacity factor with 7% T&D losses. Displacing such a plant for one year would save 3 billion kWh per year at the meter and reduce emissions by 3 million metric tons of CO2 per year. The proposed name for this metric is the Rosenfeld, in keeping with the tradition among scientists of naming units in honor of the person most responsible for the discovery and widespread adoption of the underlying scientific principle in question: Dr. Arthur H. Rosenfeld.
On the Use of Software Metrics as a Predictor of Software Security Problems
2013-01-01
models to determine if additional metrics are required to increase the accuracy of the model: non-security SCSA warnings, code churn and size, the...vulnerabilities reported by testing and those found in the field. Summary of Most Important Results We evaluated our model on three commercial telecommunications
An empirical comparison of a dynamic software testability metric to static cyclomatic complexity
Voas, Jeffrey M.; Miller, Keith W.; Payne, Jeffrey E.
1993-01-01
This paper compares the dynamic testability prediction technique termed 'sensitivity analysis' to the static testability technique termed cyclomatic complexity. The application that we chose in this empirical study is a CASE generated version of a B-737 autoland system. For the B-737 system we analyzed, we isolated those functions that we predict are more prone to hide errors during system/reliability testing. We also analyzed the code with several other well-known static metrics. This paper compares and contrasts the results of sensitivity analysis to the results of the static metrics.
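The static metric compared above, McCabe's cyclomatic complexity, can be illustrated with a short sketch. This is not the paper's tooling (the study analyzed a CASE-generated B-737 autoland system, not Python); it uses the decision-point shortcut V(G) = decision points + 1, equivalent to E - N + 2 for a single-entry, single-exit control-flow graph:

```python
# Illustrative only: cyclomatic complexity via the decision-point shortcut.
import ast

def cyclomatic_complexity(source: str) -> int:
    """Count branching constructs in Python source as a rough proxy
    for McCabe's V(G); each binary decision adds one path."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, (ast.If, ast.While, ast.For,
                                      ast.BoolOp, ast.ExceptHandler))
                    for node in ast.walk(tree))
    return decisions + 1

src = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""
print(cyclomatic_complexity(src))  # 3: two if-branches plus one
```

The contrast the paper draws is that this number is computed without ever running the code, whereas sensitivity analysis is a dynamic technique that exercises the program.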
Kerzner, Harold
2017-01-01
With the growth of complex projects, stakeholder involvement, and advancements in visual-based technology, metrics and KPIs (key performance indicators) are key factors in evaluating project performance. Dashboard reporting systems provide accessible project performance data, and sharing this vital data in a concise and consistent manner is a key communication responsibility of all project managers. This 3rd edition of Kerzner’s groundbreaking work includes the following updates: new sections on processing dashboard information, portfolio management PMO and metrics, and BI tool flexibility. PPT decks by chapter and a test bank will be available for use in seminar presentations and courses.
A Survey of Health Management User Objectives Related to Diagnostic and Prognostic Metrics
Wheeler, Kevin R.; Kurtoglu, Tolga; Poll, Scott D.
2010-01-01
One of the most prominent technical challenges to effective deployment of health management systems is the vast difference in user objectives with respect to engineering development. In this paper, a detailed survey on the objectives of different users of health management systems is presented. These user objectives are then mapped to the metrics typically encountered in the development and testing of two main systems health management functions: diagnosis and prognosis. Using this mapping, the gaps between user goals and the metrics associated with diagnostics and prognostics are identified and presented with a collection of lessons learned from previous studies that include both industrial and military aerospace applications.
Borelli, Michael L.
This document details the administrative issues associated with guiding a school district through its metrication efforts. Issues regarding staff development, curriculum development, and the acquisition of instructional resources are considered. Alternative solutions are offered. Finally, an overall implementation strategy is discussed with…
A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output.
Stevanovic, Stefan; Pervan, Boris
2018-01-19
We propose a novel GPS phase-locked loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator's estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS PLL linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered.
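A minimal numerical sketch of the metric described above (not the authors' code): compute the standard deviation of an arctangent discriminator's output on noisy prompt samples and compare it against a threshold of half the discriminator pull-in region. The C/N0 value and integration time are illustrative assumptions:

```python
# Sketch: tracking-error std from an arctangent (Costas) discriminator.
import numpy as np

rng = np.random.default_rng(0)
cn0_db = 35.0                # assumed carrier-to-noise density, dB-Hz
t_coh = 0.01                 # assumed 10 ms coherent integration time
cn0 = 10 ** (cn0_db / 10)
sigma_iq = 1 / np.sqrt(2 * cn0 * t_coh)   # AWGN std on unit-amplitude I/Q

true_phase = 0.05            # small constant tracking offset (rad)
i_p = np.cos(true_phase) + sigma_iq * rng.standard_normal(10_000)
q_p = np.sin(true_phase) + sigma_iq * rng.standard_normal(10_000)

phase_est = np.arctan(q_p / i_p)          # discriminator output per epoch
tracking_error_std = np.std(phase_est)

threshold = np.pi / 4        # half the arctangent pull-in region (+/- pi/2)
print(f"std = {tracking_error_std:.3f} rad, "
      f"below threshold: {tracking_error_std < threshold}")
```

The point of the metric is that this standard deviation is measured at the discriminator itself, so it degrades visibly under interference even when a jitter model would still predict lock.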
Guo, Hao; Cao, Xiaohua; Liu, Zhifen; Li, Haifang; Chen, Junjie; Zhang, Kerang
2012-12-05
Resting state functional brain networks have been widely studied in brain disease research. However, it is currently unclear whether abnormal resting state functional brain network metrics can be used with machine learning for the classification of brain diseases. Resting state functional brain networks were constructed for 28 healthy controls and 38 major depressive disorder patients by thresholding partial correlation matrices of 90 regions. Three nodal metrics were calculated using graph theory-based approaches. Nonparametric permutation tests were then used for group comparisons of topological metrics, which were used as classification features in six different algorithms. We used statistical significance as the threshold for selecting features and measured the accuracies of six classifiers with different numbers of features. A sensitivity analysis method was used to evaluate the importance of different features. The results indicated that some of the regions exhibited significantly abnormal nodal centralities, including the limbic system, basal ganglia, medial temporal, and prefrontal regions. The support vector machine with radial basis kernel function algorithm and the neural network algorithm exhibited the highest average accuracy (79.27 and 78.22%, respectively) with 28 features. These results suggest that major depressive disorder is associated with abnormal functional brain network topological metrics and that statistically significant nodal metrics can be successfully used for feature selection in classification algorithms.
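The pipeline described above (nodal graph metrics as classifier features) can be sketched as follows. Synthetic random graphs stand in for the 90-region partial-correlation networks; no clinical data, region names, or the study's permutation-test feature selection are reproduced here:

```python
# Sketch of the metric-extraction + classification pipeline on synthetic data.
import numpy as np
import networkx as nx
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def nodal_features(adjacency):
    """Three graph-theoretic nodal metrics per region, concatenated."""
    g = nx.from_numpy_array(adjacency)
    degree = np.array([d for _, d in g.degree()], dtype=float)
    betweenness = np.array(list(nx.betweenness_centrality(g).values()))
    clustering = np.array(list(nx.clustering(g).values()))
    return np.concatenate([degree, betweenness, clustering])

def random_network(n=90, p=0.10):
    a = (rng.random((n, n)) < p).astype(float)
    a = np.triu(a, 1)
    return a + a.T

# 28 "controls" vs 38 "patients"; patients get slightly denser graphs so
# the toy problem is learnable at all.
X = np.array([nodal_features(random_network(p=0.10)) for _ in range(28)]
             + [nodal_features(random_network(p=0.13)) for _ in range(38)])
y = np.array([0] * 28 + [1] * 38)

clf = SVC(kernel="rbf")          # RBF-kernel SVM, as in the study
score = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {score:.2f}")
```

In the actual study the feature set was first reduced to the 28 statistically significant nodal metrics; here all features are passed through for brevity.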
Social Media Metrics Importance and Usage Frequency in Latvia
Directory of Open Access Journals (Sweden)
Ronalds Skulme
2017-12-01
Full Text Available Purpose of the article: The purpose of this paper was to explore which social media marketing metrics are most often used and are most important for marketing experts in Latvia and can be used to evaluate marketing campaign effectiveness. Methodology/methods: In order to achieve the aim of this paper, several theoretical and practical research methods were used, such as theoretical literature analysis, surveying and grouping. First of all, theoretical research about social media metrics was conducted. The authors collected information about social media metric grouping methods and the most frequently mentioned social media metrics in the literature. The collected information was used as the foundation for the expert surveys. The expert surveys were used to collect information from Latvian marketing professionals to determine which social media metrics are used most often and which social media metrics are most important in Latvia. Scientific aim: The scientific aim of this paper was to identify whether the importance of social media metrics varies depending on the consumer purchase decision stage. Findings: Information about the most important and most often used social media marketing metrics in Latvia was collected. A new social media grouping framework is proposed. Conclusions: The main conclusion is that the importance and the usage frequency of social media metrics change depending on the consumer purchase decision stage the metric is used to evaluate.
Reconstructing the metric of the local Universe from number counts observations
Energy Technology Data Exchange (ETDEWEB)
Vallejo, Sergio Andres [ICRANet, Piazza della Repubblica 10, I-65122 Pescara (Italy); Romano, Antonio Enea, E-mail: antonio.enea.romano@cern.ch [Theoretical Physics Department, CERN, CH-1211 Geneva 23 (Switzerland)
2017-10-01
Number counts observations available with new surveys such as the Euclid mission will be an important source of information about the metric of the Universe. We compute the low red-shift expansion for the energy density and the density contrast using an exact spherically symmetric solution in the presence of a cosmological constant. At low red-shift the expansion is more precise than the prediction of linear perturbation theory. We then use the local expansion to reconstruct the metric from the monopole of the density contrast. We test the inversion method using numerical calculations and find a good agreement within the regime of validity of the red-shift expansion. The method could be applied to observational data to reconstruct the metric of the local Universe with a level of precision higher than the one achievable using perturbation theory.
Higher-order geodesic deviations applied to the Kerr metric
Colistete, R J; Kerner, R
2002-01-01
Starting with an exact and simple geodesic, we generate approximate geodesics by summing up higher-order geodesic deviations within a general relativistic setting, without using Newtonian and post-Newtonian approximations. We apply this method to the problem of closed orbital motion of test particles in the Kerr metric spacetime. With a simple circular orbit in the equatorial plane taken as the initial geodesic, we obtain finite eccentricity orbits in the form of Taylor series with the eccentricity playing the role of a small parameter. The explicit expressions of these higher-order geodesic deviations are derived using successive systems of linear equations with constant coefficients, whose solutions are of harmonic oscillator type. This scheme gives best results when applied to orbits with low eccentricities, but with arbitrary possible values of (GM/Rc²).
Measurable Control System Security through Ideal Driven Technical Metrics
Energy Technology Data Exchange (ETDEWEB)
Miles McQueen; Wayne Boyer; Sean McBride; Marie Farrar; Zachary Tudor
2008-01-01
The Department of Homeland Security National Cyber Security Division supported development of a small set of security ideals as a framework to establish measurable control systems security. Based on these ideals, a draft set of proposed technical metrics was developed to allow control systems owner-operators to track improvements or degradations in their individual control systems security posture. The technical metrics development effort included review and evaluation of over thirty metrics-related documents. On the basis of complexity, ambiguity, or misleading and distorting effects, the metrics identified during the reviews were determined to be weaker than necessary to aid defense against the myriad threats posed by cyber-terrorism to human safety, as well as to economic prosperity. Using the results of our metrics review and the set of security ideals as a starting point for metrics development, we identified thirteen potential technical metrics, with at least one metric supporting each ideal. Two case study applications of the ideals and thirteen metrics to control systems were then performed to establish potential difficulties in applying both the ideals and the metrics. The case studies resulted in no changes to the ideals, and only a few deletions and refinements to the thirteen potential metrics. This led to a final proposed set of ten core technical metrics. To further validate the security ideals, the modifications made to the original thirteen potential metrics, and the final proposed set of ten core metrics, seven separate control systems security assessments performed over the past three years were reviewed for findings and recommended mitigations. These findings and mitigations were then mapped to the security ideals and metrics to assess gaps in their coverage. The mappings indicated that there are no gaps in the security ideals and that the ten core technical metrics provide significant coverage of standard security issues, with 87% coverage.
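The gap analysis described above amounts to mapping each assessment finding to the metrics that cover it and computing percent coverage. A toy illustration with invented finding and metric names (the abstract does not disclose the actual mappings):

```python
# Toy gap analysis: percent of findings covered by at least one metric.
# All names below are hypothetical, for illustration only.
findings_to_metrics = {
    "default-passwords-in-use": ["attack-surface", "account-hygiene"],
    "unpatched-hmi-software": ["patch-latency"],
    "flat-network-topology": ["segmentation-depth"],
    "no-security-logging": [],   # a gap: no metric covers this finding
}

covered = sum(bool(metrics) for metrics in findings_to_metrics.values())
coverage = 100 * covered / len(findings_to_metrics)
print(f"{coverage:.0f}% of findings covered")  # 75% in this toy example
```

The study's 87% figure came from the same kind of mapping applied to seven real assessments.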
Metrics correlation and analysis service (MCAS)
International Nuclear Information System (INIS)
Baranovski, Andrew; Dykstra, Dave; Garzoglio, Gabriele; Hesselroth, Ted; Mhashilkar, Parag; Levshina, Tanya
2010-01-01
The complexity of Grid workflow activities and their associated software stacks inevitably involves multiple organizations, ownership, and deployment domains. In this setting, important and common tasks such as the correlation and display of metrics and debugging information (fundamental ingredients of troubleshooting) are challenged by the informational entropy inherent to independently maintained and operated software components. Because such an information pool is disorganized, it is a difficult environment for business intelligence analysis i.e. troubleshooting, incident investigation, and trend spotting. The mission of the MCAS project is to deliver a software solution to help with adaptation, retrieval, correlation, and display of workflow-driven data and of type-agnostic events, generated by loosely coupled or fully decoupled middleware.
Metrics correlation and analysis service (MCAS)
International Nuclear Information System (INIS)
Baranovski, Andrew; Dykstra, Dave; Garzoglio, Gabriele; Hesselroth, Ted; Mhashilkar, Parag; Levshina, Tanya
2009-01-01
The complexity of Grid workflow activities and their associated software stacks inevitably involves multiple organizations, ownership, and deployment domains. In this setting, important and common tasks such as the correlation and display of metrics and debugging information (fundamental ingredients of troubleshooting) are challenged by the informational entropy inherent to independently maintained and operated software components. Because such an information 'pond' is disorganized, it is a difficult environment for business intelligence analysis i.e. troubleshooting, incident investigation, and trend spotting. The mission of the MCAS project is to deliver a software solution to help with adaptation, retrieval, correlation, and display of workflow-driven data and of type-agnostic events, generated by disjoint middleware.
Development of Technology Transfer Economic Growth Metrics
Mastrangelo, Christina M.
1998-01-01
The primary objective of this project is to determine the feasibility of producing technology transfer metrics that answer the question: Do NASA/MSFC technical assistance activities impact economic growth? The data for this project resides in a 7800-record database maintained by Tec-Masters, Incorporated. The technology assistance data results from survey responses from companies and individuals who have interacted with NASA via a Technology Transfer Agreement, or TTA. The goal of this project was to determine if the existing data could provide indications of increased wealth. This work demonstrates that there is evidence that companies that used NASA technology transfer have a higher job growth rate than the rest of the economy. It also shows that the jobs being supported are jobs in higher wage SIC codes, and this indicates improvements in personal wealth. Finally, this work suggests that with correct data, the wealth issue may be addressed.
MESUR metrics from scholarly usage of resources
CERN. Geneva; Van de Sompel, Herbert
2007-01-01
Usage data is increasingly regarded as a valuable resource in the assessment of scholarly communication items. However, the development of quantitative, usage-based indicators of scholarly impact is still in its infancy. The Digital Library Research & Prototyping Team at the Los Alamos National Laboratory's Research library has therefore started a program to expand the set of usage-based tools for the assessment of scholarly communication items. The two-year MESUR project, funded by the Andrew W. Mellon Foundation, aims to define and validate a range of usage-based impact metrics, and issue guidelines with regards to their characteristics and proper application. The MESUR project is constructing a large-scale semantic model of the scholarly community that seamlessly integrates a wide range of bibliographic, citation and usage data. Functioning as a reference data set, this model is analyzed to characterize the intricate networks of typed relationships that exist in the scholarly community. The resulting c...
Einstein metrics and Brans-Dicke superfields
International Nuclear Information System (INIS)
Marques, S.
1988-01-01
We obtain here a space conformal to the Einstein space-time, making the transition from an internal bosonic space, constructed with constant Majorana spinors in the Majorana representation, to a bosonic "superspace" through the use of Einstein vierbeins. These spaces are related to a Grassmann space constructed with the Majorana spinors referred to above, where the "metric" is a function of internal bosonic coordinates. The conformal function is a scale factor in the zone of gravitational radiation. A conformal function dependent on space-time coordinates can be constructed in that region when we introduce Majorana spinors that are functions of those coordinates. With this we obtain a scalar field of Brans-Dicke type. 11 refs
Advanced reactors: the case for metric design
International Nuclear Information System (INIS)
Ruby, L.
1986-01-01
The author argues that DOE should insist that all design specifications for advanced reactors be in the International System of Units (SI) in accordance with the Metric Conversion Act of 1975. Despite a lack of leadership from the federal government, industry has had to move toward conversion in order to compete on world markets. The US is the only major country without a scheduled conversion program. SI avoids the disadvantages of ambiguous names, non-coherent units, multiple units for the same quantity, multiple definitions, as well as barriers to international exchange and marketing and problems in comparing safety and code parameters. With a first step by DOE, the Nuclear Regulatory Commission should add the same requirements to reactor licensing guidelines. 4 references
Analytical Cost Metrics : Days of Future Past
Energy Technology Data Exchange (ETDEWEB)
Prajapati, Nirmal [Colorado State Univ., Fort Collins, CO (United States); Rajopadhye, Sanjay [Colorado State Univ., Fort Collins, CO (United States); Djidjev, Hristo Nikolov [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-02-20
As we move towards the exascale era, the new architectures must be capable of running the massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, image/signal processing to computational science and bioinformatics. With Moore’s law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore the major challenge that we face in computing systems research is: “how to solve massive-scale computational problems in the most time/power/energy efficient manner?”
Clean Cities 2013 Annual Metrics Report
Energy Technology Data Exchange (ETDEWEB)
Johnson, C.; Singer, M.
2014-10-01
Each year, the U.S. Department of Energy asks its Clean Cities program coordinators to submit annual reports of their activities and accomplishments for the previous calendar year. Data and information are submitted via an online database that is maintained as part of the Alternative Fuels Data Center (AFDC) at the National Renewable Energy Laboratory (NREL). Coordinators submit a range of data that characterize the membership, funding, projects, and activities of their coalitions. They also submit data about sales of alternative fuels, deployment of alternative fuel vehicles (AFVs) and hybrid electric vehicles (HEVs), idle-reduction (IR) initiatives, fuel economy activities, and programs to reduce vehicle miles traveled (VMT). NREL analyzes the data and translates them into petroleum-use reduction impacts, which are summarized in this 2013 Annual Metrics Report.
Clean Cities 2014 Annual Metrics Report
Energy Technology Data Exchange (ETDEWEB)
Johnson, Caley [National Renewable Energy Lab. (NREL), Golden, CO (United States); Singer, Mark [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2015-12-22
Each year, the U.S. Department of Energy asks its Clean Cities program coordinators to submit annual reports of their activities and accomplishments for the previous calendar year. Data and information are submitted via an online database that is maintained as part of the Alternative Fuels Data Center (AFDC) at the National Renewable Energy Laboratory (NREL). Coordinators submit a range of data that characterize the membership, funding, projects, and activities of their coalitions. They also submit data about sales of alternative fuels, deployment of alternative fuel vehicles (AFVs) and hybrid electric vehicles (HEVs), idle-reduction (IR) initiatives, fuel economy activities, and programs to reduce vehicle miles traveled (VMT). NREL analyzes the data and translates them into petroleum-use reduction impacts, which are summarized in this 2014 Annual Metrics Report.
Outsourced similarity search on metric data assets
Yiu, Man Lung
2012-02-01
This paper considers a cloud computing setting in which similarity querying of metric data is outsourced to a service provider. The data is to be revealed only to trusted users, not to the service provider or anyone else. Users query the server for the most similar data objects to a query example. Outsourcing offers the data owner scalability and a low initial investment. The need for privacy may be due to the data being sensitive (e.g., in medicine), valuable (e.g., in astronomy), or otherwise confidential. Given this setting, the paper presents techniques that transform the data prior to supplying it to the service provider for similarity queries on the transformed data. Our techniques provide interesting trade-offs between query cost and accuracy. They are then further extended to offer an intuitive privacy guarantee. Empirical studies with real data demonstrate that the techniques are capable of offering privacy while enabling efficient and accurate processing of similarity queries.
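One common family of transform-then-outsource techniques in this space is pivot-based: the server stores only each object's distances to a few pivot objects, and the triangle inequality yields lower bounds it can filter on. A minimal sketch under that assumption (the paper's actual transformations and privacy guarantee are more elaborate):

```python
# Sketch: pivot-based filtering for outsourced similarity search.
import numpy as np

rng = np.random.default_rng(2)
data = rng.random((100, 8))      # owner's metric data (kept private)
pivots = data[:4]                # assumed pivot selection: first 4 objects

def transform(points):
    """Map each object to its distances from the pivots; the server sees
    only this image, not the original coordinates."""
    return np.linalg.norm(points[:, None, :] - pivots[None, :, :], axis=2)

server_index = transform(data)   # this is all the server stores

def query(q, k=5):
    # Server side: by the triangle inequality, |d(q,p_i) - d(x,p_i)| is a
    # lower bound on d(q,x); shortlist candidates with the smallest bound.
    q_img = transform(q[None, :])[0]
    lower_bounds = np.abs(server_index - q_img).max(axis=1)
    candidates = np.argsort(lower_bounds)[: 4 * k]
    # Client side: refine the shortlist using true distances to the query.
    true_d = np.linalg.norm(data[candidates] - q, axis=1)
    return candidates[np.argsort(true_d)[:k]]

result = query(rng.random(8))
print(result)
```

The trade-off the abstract mentions shows up directly here: a larger shortlist raises query cost but improves accuracy, since the lower-bound filter alone can miss true neighbors.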
Special metrics and group actions in geometry
Fino, Anna; Musso, Emilio; Podestà, Fabio; Vezzoni, Luigi
2017-01-01
The volume is a follow-up to the INdAM meeting “Special metrics and quaternionic geometry” held in Rome in November 2015. It offers a panoramic view of a selection of cutting-edge topics in differential geometry, including 4-manifolds, quaternionic and octonionic geometry, twistor spaces, harmonic maps, spinors, complex and conformal geometry, homogeneous spaces and nilmanifolds, special geometries in dimensions 5–8, gauge theory, symplectic and toric manifolds, exceptional holonomy and integrable systems. The workshop was held in honor of Simon Salamon, a leading international scholar at the forefront of academic research who has made significant contributions to all these subjects. The articles published here represent a compelling testimony to Salamon’s profound and longstanding impact on the mathematical community. Target readership includes graduate students and researchers working in Riemannian and complex geometry, Lie theory and mathematical physics.
Quasi-metrics, midpoints and applications
Energy Technology Data Exchange (ETDEWEB)
Valero, O.
2017-07-01
In applied sciences, the scientific community uses simultaneously different kinds of information coming from several sources in order to infer a conclusion or working decision. In the literature there are many techniques for merging the information and providing, hence, meaningful fused data. In most practical cases such fusion methods are based on aggregation operators on some numerical values, i.e. the aim of the fusion process is to obtain a representative number from a finite sequence of numerical data. In the aforementioned cases, the input data presents some kind of imprecision and for this reason it is represented as fuzzy sets. Moreover, in such problems the comparisons between the numerical values that represent the information described by the fuzzy sets become necessary. The aforementioned comparisons are made by means of a distance defined on fuzzy sets. Thus, the numerical operators aggregating distances between fuzzy sets as incoming data play a central role in applied problems. Recently, J.J. Nieto and A. Torres gave some applications of the aggregation of distances on fuzzy sets to the study of real medical data in \cite{Nieto}. These applications are based on the notion of segment joining two given fuzzy sets and on the notion of set of midpoints between fuzzy sets. A few results obtained by Nieto and Torres have been generalized in turn by Casasnovas and Rosselló in \cite{Casas,Casas2}. Nowadays, quasi-metrics provide efficient tools in some fields of computer science and in bioinformatics. Motivated by the exposed facts, a study of segments joining two fuzzy sets and of midpoints between fuzzy sets when the measure, used for comparisons, is a quasi-metric has been made in \cite{Casas3,SebVal2013,TiradoValero}. (Author)
Analytic convergence of harmonic metrics for parabolic Higgs bundles
Kim, Semin; Wilkin, Graeme
2018-04-01
In this paper we investigate the moduli space of parabolic Higgs bundles over a punctured Riemann surface with varying weights at the punctures. We show that the harmonic metric depends analytically on the weights and the stable Higgs bundle. This gives a Higgs bundle generalisation of a theorem of McOwen on the existence of hyperbolic cone metrics on a punctured surface within a given conformal class, and a generalisation of a theorem of Judge on the analytic parametrisation of these metrics.
Exact solutions of strong gravity in generalized metrics
International Nuclear Information System (INIS)
Hojman, R.; Smailagic, A.
1981-05-01
We consider classical solutions for the strong gravity theory of Salam and Strathdee in a wider class of metrics with positive, zero and negative curvature. It turns out that such solutions exist and their relevance for quark confinement is explored. Only metrics with positive curvature (spherical symmetry) give a confining potential in a simple picture of the scalar hadron. This supports the idea of describing the hadron as a closed microuniverse of the strong metric. (author)
An accurate metric for the spacetime around neutron stars
Pappas, George
2016-01-01
The problem of having an accurate description of the spacetime around neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a neutron star. Furthermore, an accurate appropriately parameterised metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to inf...
Problems in Systematic Application of Software Metrics and Possible Solution
Rakic, Gordana; Budimac, Zoran
2013-01-01
Systematic application of software metric techniques can lead to significant improvements of the quality of a final software product. However, there is still the evident lack of wider utilization of software metrics techniques and tools due to many reasons. In this paper we investigate some limitations of contemporary software metrics tools and then propose construction of a new tool that would solve some of the problems. We describe the promising prototype, its internal structure, and then f...
Two-dimensional manifolds with metrics of revolution
International Nuclear Information System (INIS)
Sabitov, I Kh
2000-01-01
This is a study of the topological and metric structure of two-dimensional manifolds with a metric that is locally a metric of revolution. In the case of compact manifolds this problem can be thoroughly investigated, and in particular it is explained why there are no closed analytic surfaces of revolution in R^3 other than a sphere and a torus (moreover, in the smoothness class C^∞ such surfaces, understood in a certain generalized sense, exist in any topological class).
A software quality model and metrics for risk assessment
Hyatt, L.; Rosenberg, L.
1996-01-01
A software quality model and its associated attributes are defined and used as the basis for a discussion of risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.
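As an illustration of what computing a small set of product metrics might look like in practice, the sketch below extracts three simple measurements from a source string. The metric choices and names here are hypothetical, for illustration only; they are not the core set defined in the report above:

```python
import re

def simple_metrics(source: str) -> dict:
    """Toy product metrics in the spirit of a core metric set (illustrative):
    non-blank line count, comment density, and a crude branching count."""
    lines = [ln for ln in source.splitlines() if ln.strip()]
    comments = [ln for ln in lines if ln.strip().startswith("#")]
    branches = len(re.findall(r"\b(if|elif|while|for)\b", source))
    return {
        "sloc": len(lines),
        "comment_density": len(comments) / max(len(lines), 1),
        "branch_count": branches,
    }

snippet = (
    "def f(x):\n"
    "    # double non-negative values\n"
    "    if x >= 0:\n"
    "        return 2 * x\n"
    "    return 0\n"
)
m = simple_metrics(snippet)
print(m["sloc"], m["branch_count"])  # 5 1
```

In a quality model, each such raw measurement would be mapped onto an attribute (size, documentation, complexity) and tracked against a project-specific goal.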
Chaos of discrete dynamical systems in complete metric spaces
International Nuclear Information System (INIS)
Shi Yuming; Chen Guanrong
2004-01-01
This paper is concerned with chaos of discrete dynamical systems in complete metric spaces. Discrete dynamical systems governed by continuous maps in general complete metric spaces are first discussed, and two criteria of chaos are then established. As a special case, two corresponding criteria of chaos for discrete dynamical systems in compact subsets of metric spaces are obtained. These results have extended and improved the existing relevant results of chaos in finite-dimensional Euclidean spaces
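Sensitive dependence on initial conditions, a key ingredient of most metric-space definitions of chaos, is easy to observe numerically. The sketch below iterates the logistic map at r = 4, a standard chaotic example on the metric space [0, 1] chosen here for illustration (it is not one of the maps studied in the paper), and shows two orbits from nearly identical starting points separating to order one:

```python
def logistic(x, r=4.0):
    """The logistic map, chaotic on [0, 1] at r = 4."""
    return r * x * (1.0 - x)

def orbit(x0, n, f=logistic):
    """Return the orbit x0, f(x0), f(f(x0)), ... of length n + 1."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

a = orbit(0.3, 60)
b = orbit(0.3 + 1e-9, 60)          # perturb the initial point by 1e-9
gap = max(abs(x - y) for x, y in zip(a, b))
print(gap > 0.1)                   # True: the tiny perturbation grows to order one
```

The perturbation roughly doubles at each step (the Lyapunov exponent of this map is ln 2), so after a few dozen iterations the two orbits are effectively uncorrelated, which is the kind of behavior the chaos criteria above are designed to certify.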
The Extended HANDS Characterization and Analysis of Metric Biases
Kelecy, T.; Knox, R.; Cognion, R.
The Extended High Accuracy Network Determination System (Extended HANDS) consists of a network of low cost, high accuracy optical telescopes designed to support space surveillance and development of space object characterization technologies. Comprising off-the-shelf components, the telescopes are designed to provide sub arc-second astrometric accuracy. The design and analysis team are in the process of characterizing the system through development of an error allocation tree whose assessment is supported by simulation, data analysis, and calibration tests. The metric calibration process has revealed 1-2 arc-second biases in the right ascension and declination measurements of reference satellite position, and these have been observed to have fairly distinct characteristics that appear to have some dependence on orbit geometry and tracking rates. The work presented here outlines error models developed to aid in development of the system error budget, and examines characteristic errors (biases, time dependence, etc.) that might be present in each of the relevant system elements used in the data collection and processing, including the metric calibration processing. The relevant reference frames are identified, and include the sensor (CCD camera) reference frame, Earth-fixed topocentric frame, topocentric inertial reference frame, and the geocentric inertial reference frame. The errors modeled in each of these reference frames, when mapped into the topocentric inertial measurement frame, reveal how errors might manifest themselves through the calibration process. The error analysis results that are presented use satellite-sensor geometries taken from periods where actual measurements were collected, and reveal how modeled errors manifest themselves over those specific time periods. These results are compared to the real calibration metric data (right ascension and declination residuals), and sources of the bias are hypothesized. In turn, the actual right ascension and
Objectively Quantifying Radiation Esophagitis With Novel Computed Tomography–Based Metrics
Energy Technology Data Exchange (ETDEWEB)
Niedzielski, Joshua S., E-mail: jsniedzielski@mdanderson.org [Department of Radiation Physics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); University of Texas Houston Graduate School of Biomedical Science, Houston, Texas (United States); Yang, Jinzhong [Department of Radiation Physics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); University of Texas Houston Graduate School of Biomedical Science, Houston, Texas (United States); Stingo, Francesco [Department of Biostatistics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); Martel, Mary K.; Mohan, Radhe [Department of Radiation Physics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); University of Texas Houston Graduate School of Biomedical Science, Houston, Texas (United States); Gomez, Daniel R. [Department of Radiation Oncology, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); Briere, Tina M. [Department of Radiation Physics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); University of Texas Houston Graduate School of Biomedical Science, Houston, Texas (United States); Liao, Zhongxing [Department of Radiation Oncology, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); Court, Laurence E. [Department of Radiation Physics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); University of Texas Houston Graduate School of Biomedical Science, Houston, Texas (United States)
2016-02-01
Purpose: To study radiation-induced esophageal expansion as an objective measure of radiation esophagitis in patients with non-small cell lung cancer (NSCLC) treated with intensity modulated radiation therapy. Methods and Materials: Eighty-five patients had weekly intra-treatment CT imaging and esophagitis scoring according to Common Terminology Criteria for Adverse Events 4.0 (24 Grade 0, 45 Grade 2, and 16 Grade 3). Nineteen esophageal expansion metrics based on mean, maximum, spatial length, and volume of expansion were calculated as voxel-based relative volume change, using the Jacobian determinant from deformable image registration between the planning and weekly CTs. An anatomic variability correction method was validated and applied to these metrics to reduce uncertainty. An analysis of expansion metrics and radiation esophagitis grade was conducted using normal tissue complication probability from univariate logistic regression and Spearman rank for grade 2 and grade 3 esophagitis endpoints, as well as the timing of expansion and esophagitis grade. Metrics' performance in classifying esophagitis was tested with receiver operating characteristic analysis. Results: Expansion increased with esophagitis grade. Thirteen of 19 expansion metrics had receiver operating characteristic area under the curve values >0.80 for both grade 2 and grade 3 esophagitis endpoints, with the highest performance from maximum axial expansion (MaxExp1) and esophageal length with axial expansion ≥30% (LenExp30%) with area under the curve values of 0.93 and 0.91 for grade 2, 0.90 and 0.90 for grade 3 esophagitis, respectively. Conclusions: Esophageal expansion may be a suitable objective measure of esophagitis, particularly maximum axial esophageal expansion and esophageal length with axial expansion ≥30%, with 2.1 Jacobian value and 98.6 mm as the metric value for 50% probability of grade 3 esophagitis. The uncertainty in esophageal Jacobian calculations can be reduced
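The voxel-wise "relative volume change" used above is the Jacobian determinant of the deformation map x → x + u(x). A minimal 2-D sketch with finite differences, assuming a dense displacement field is already available from registration (the field below is synthetic, not clinical data):

```python
import numpy as np

def jacobian_determinant(ux, uy):
    """Voxel-wise Jacobian determinant of the deformation x -> x + u(x)
    for a 2-D displacement field (ux, uy), via finite differences.
    J > 1 means local expansion, J < 1 local contraction."""
    dux_dy, dux_dx = np.gradient(ux)   # gradients along axis 0 (y), axis 1 (x)
    duy_dy, duy_dx = np.gradient(uy)
    return (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx

# Uniform 10% expansion along both axes: u(x, y) = (0.1 x, 0.1 y)
n = 32
y, x = np.mgrid[0:n, 0:n].astype(float)
J = jacobian_determinant(0.1 * x, 0.1 * y)
print(np.allclose(J, 1.21))  # True: (1 + 0.1)^2 everywhere
```

Summary statistics of J over the organ mask (mean, maximum, length of the region with J above a threshold) are then natural analogues of the expansion metrics described in the abstract.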
Cosmology of hybrid metric-Palatini f(X)-gravity
International Nuclear Information System (INIS)
Capozziello, Salvatore; Harko, Tiberiu; Koivisto, Tomi S.; Lobo, Francisco S.N.; Olmo, Gonzalo J.
2013-01-01
A new class of modified theories of gravity, consisting of the superposition of the metric Einstein-Hilbert Lagrangian with an f(R) term constructed à la Palatini was proposed recently. The dynamically equivalent scalar-tensor representation of the model was also formulated, and it was shown that even if the scalar field is very light, the theory passes the Solar System observational constraints. Therefore the model predicts the existence of a long-range scalar field, modifying the cosmological and galactic dynamics. An explicit model that passes the local tests and leads to cosmic acceleration was also obtained. In the present work, it is shown that the theory can be also formulated in terms of the quantity X ≡ κ^2 T + R, where T and R are the traces of the stress-energy and Ricci tensors, respectively. The variable X represents the deviation with respect to the field equation trace of general relativity. The cosmological applications of this hybrid metric-Palatini gravitational theory are also explored, and cosmological solutions coming from the scalar-tensor representation of f(X)-gravity are presented. Criteria to obtain cosmic acceleration are discussed and the field equations are analyzed as a dynamical system. Several classes of dynamical cosmological solutions, depending on the functional form of the effective scalar field potential, describing both accelerating and decelerating Universes are explicitly obtained. Furthermore, the cosmological perturbation equations are derived and applied to uncover the nature of the propagating scalar degree of freedom and the signatures these models predict in the large-scale structure
Eyetracking Metrics in Young Onset Alzheimer's Disease: A Window into Cognitive Visual Functions.
Pavisic, Ivanna M; Firth, Nicholas C; Parsons, Samuel; Rego, David Martinez; Shakespeare, Timothy J; Yong, Keir X X; Slattery, Catherine F; Paterson, Ross W; Foulkes, Alexander J M; Macpherson, Kirsty; Carton, Amelia M; Alexander, Daniel C; Shawe-Taylor, John; Fox, Nick C; Schott, Jonathan M; Crutch, Sebastian J; Primativo, Silvia
2017-01-01
Young onset Alzheimer's disease (YOAD) is defined as symptom onset before the age of 65 years and is particularly associated with phenotypic heterogeneity. Atypical presentations, such as the clinico-radiological visual syndrome posterior cortical atrophy (PCA), often lead to delays in accurate diagnosis. Eyetracking has been used to demonstrate basic oculomotor impairments in individuals with dementia. In the present study, we aim to explore the relationship between eyetracking metrics and standard tests of visual cognition in individuals with YOAD. Fifty-seven participants were included: 36 individuals with YOAD (n = 26 typical AD; n = 10 PCA) and 21 age-matched healthy controls. Participants completed three eyetracking experiments: fixation, pro-saccade, and smooth pursuit tasks. Summary metrics were used as outcome measures and their predictive value explored looking at correlations with visuoperceptual and visuospatial metrics. Significant correlations between eyetracking metrics and standard visual cognitive estimates are reported. A machine-learning approach using a classification method based on the smooth pursuit raw eyetracking data discriminates patients from controls with approximately 95% accuracy in cross-validation tests. Results suggest that eyetracking paradigms of a relatively simple and specific nature provide measures not only reflecting basic oculomotor characteristics but also predicting higher order visuospatial and visuoperceptual impairments. Eyetracking measures can represent extremely useful markers during the diagnostic phase and may be exploited as potential outcome measures for clinical trials.
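The cross-validated classification step can be sketched in miniature. Below, a leave-one-out cross-validation of a nearest-centroid classifier on synthetic two-feature data stands in for the paper's smooth-pursuit classifier; the features, group means, and classifier choice are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for smooth-pursuit summary features (purely illustrative):
# controls track the target closely (low error), patients less so.
controls = rng.normal(loc=[0.2, 0.1], scale=0.05, size=(20, 2))
patients = rng.normal(loc=[0.6, 0.4], scale=0.05, size=(20, 2))
X = np.vstack([controls, patients])
y = np.array([0] * 20 + [1] * 20)

def loo_accuracy(X, y):
    """Leave-one-out cross-validation of a nearest-centroid classifier:
    each sample is classified using centroids computed without it."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        hits += pred == y[i]
    return hits / len(y)

print(loo_accuracy(X, y))  # well-separated groups -> accuracy at or near 1.0
```

Holding out each test sample before computing the centroids is what makes the reported accuracy an honest cross-validation estimate rather than a training-set figure.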
Analysis of Skeletal Muscle Metrics as Predictors of Functional Task Performance
Ryder, Jeffrey W.; Buxton, Roxanne E.; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle J.; Fiedler, James; Ploutz-Snyder, Robert J.; Bloomberg, Jacob J.; Ploutz-Snyder, Lori L.
2010-01-01
PURPOSE: The ability to predict task performance using physiological performance metrics is vital to ensure that astronauts can execute their jobs safely and effectively. This investigation used a weighted suit to evaluate task performance at various ratios of strength, power, and endurance to body weight. METHODS: Twenty subjects completed muscle performance tests and functional tasks representative of those that would be required of astronauts during planetary exploration (see table for specific tests/tasks). Subjects performed functional tasks while wearing a weighted suit with additional loads ranging from 0-120% of initial body weight. Performance metrics were time to completion for all tasks except hatch opening, which consisted of total work. Task performance metrics were plotted against muscle metrics normalized to "body weight" (subject weight + external load; BW) for each trial. Fractional polynomial regression was used to model the relationship between muscle and task performance. CONCLUSION: LPMIF/BW is the best predictor of performance for predominantly lower-body tasks that are ambulatory and of short duration. LPMIF/BW is a very practical predictor of occupational task performance as it is quick and relatively safe to perform. Accordingly, bench press work best predicts hatch-opening work performance.
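Fractional polynomial regression fits terms x^p with p drawn from a small conventional power set (p = 0 is read as log x). A first-degree sketch on synthetic data, selecting the power by residual sum of squares; the data and model below are illustrative, not the study's:

```python
import numpy as np

POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)  # conventional FP1 power set

def fp_transform(x, p):
    """Fractional-polynomial basis term; power 0 denotes log(x)."""
    return np.log(x) if p == 0 else x ** p

def fit_fp1(x, y):
    """Pick the single power whose least-squares fit y ~ b0 + b1 * x^p
    has the smallest residual sum of squares."""
    best = None
    for p in POWERS:
        A = np.column_stack([np.ones_like(x), fp_transform(x, p)])
        coef, rss, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = rss[0] if len(rss) else float(np.sum((A @ coef - y) ** 2))
        if best is None or rss < best[0]:
            best = (rss, p, coef)
    return best[1], best[2]

# Synthetic example: completion time falls off as 1/x of a strength-to-weight ratio
x = np.linspace(0.5, 3.0, 40)
y = 5.0 + 2.0 / x
p, coef = fit_fp1(x, y)
print(p)  # -1: the reciprocal term is recovered
```

The appeal of fractional polynomials for dose-response-style curves like task time versus LPMIF/BW is that a single well-chosen power often captures the diminishing-returns shape that an ordinary polynomial needs several terms to approximate.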
Shackelford, Stacy; Garofalo, Evan; Shalin, Valerie; Pugh, Kristy; Chen, Hegang; Pasley, Jason; Sarani, Babak; Henry, Sharon; Bowyer, Mark; Mackenzie, Colin F
2015-07-01
Maintaining trauma-specific surgical skills is an ongoing challenge for surgical training programs. An objective assessment of surgical skills is needed. We hypothesized that a validated surgical performance assessment tool could detect differences following a training intervention. We developed surgical performance assessment metrics based on discussion with expert trauma surgeons, video review of 10 experts and 10 novice surgeons performing three vascular exposure procedures and lower extremity fasciotomy on cadavers, and validated the metrics with interrater reliability testing by five reviewers blinded to level of expertise and a consensus conference. We tested these performance metrics in 12 surgical residents (Year 3-7) before and 2 weeks after vascular exposure skills training in the Advanced Surgical Skills for Exposure in Trauma (ASSET) course. Performance was assessed in three areas as follows: knowledge (anatomic, management), procedure steps, and technical skills. Time to completion of procedures was recorded, and these metrics were combined into a single performance score, the Trauma Readiness Index (TRI). Wilcoxon matched-pairs signed-ranks test compared pretraining/posttraining effects. Mean time to complete procedures decreased by 4.3 minutes (from 13.4 minutes to 9.1 minutes). The performance component most improved by the 1-day skills training was procedure steps, completion of which increased by 21%. Technical skill scores improved by 12%. Overall knowledge improved by 3%, with 18% improvement in anatomic knowledge. TRI increased significantly from 50% to 64% with ASSET training. Interrater reliability of the surgical performance assessment metrics was validated with single intraclass correlation coefficient of 0.7 to 0.98. A trauma-relevant surgical performance assessment detected improvements in specific procedure steps and anatomic knowledge taught during a 1-day course, quantified by the TRI. ASSET training reduced time to complete vascular
10 CFR 600.306 - Metric system of measurement.
2010-01-01
... cause significant inefficiencies or loss of markets to United States firms. (b) Recipients are... Requirements for Grants and Cooperative Agreements With For-Profit Organizations General § 600.306 Metric... Competitiveness Act of 1988 (15 U.S.C. 205) and implemented by Executive Order 12770, states that: (1) The metric...
On the topology defined by Thurston's asymmetric metric
DEFF Research Database (Denmark)
Papadopoulos, Athanase; Theret, Guillaume
2007-01-01
We show that the topology that the asymmetric metric L induces on Teichmüller space is the same as the usual topology. Furthermore, we show that L satisfies the axioms of a (not necessarily symmetric) metric in the sense of Busemann and conclude that L is complete in the sense of Busemann....
Path integral measure for first-order and metric gravities
International Nuclear Information System (INIS)
Aros, Rodrigo; Contreras, Mauricio; Zanelli, Jorge
2003-01-01
The equivalence between the path integrals for first-order gravity and the standard torsion-free, metric gravity in 3 + 1 dimensions is analysed. Starting with the path integral for first-order gravity, the correct measure for the path integral of the metric theory is obtained
Converging from Branching to Linear Metrics on Markov Chains
DEFF Research Database (Denmark)
Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand
2015-01-01
time in the size of the MC. The upper-approximants are Kantorovich-like pseudometrics, i.e. branching-time distances, that converge point-wise to the linear-time metrics. This convergence is interesting in itself, since it reveals a nontrivial relation between branching and linear-time metric...
Effects of Metric Change on Workers’ Tools and Training.
1981-07-01
understanding of the metric system, and particularly a lack of fluency in converting customary measurements to metric measurements, may increase the...assembly, installing, and repairing occupations; 84 Painting, plastering, waterproofing, cementing, and related occupations; 85 Excavating, grading, paving, and related occupations; 86 Construction occupations, n.e.c.; 89 Structural work
48 CFR 611.002-70 - Metric system implementation.
2010-10-01
... with security, operations, economic, technical, logistical, training and safety requirements. (3) The... total cost of the retrofit, including redesign costs, exceeds $50,000; (ii) Metric is not the accepted... office with an explanation for the disapproval. (7) The in-house operating metric costs shall be...
Empirical analysis of change metrics for software fault prediction
Choudhary, Garvit Rajesh; Kumar, Sandeep; Kumar, Kuldeep; Mishra, Alok; Catal, Cagatay
2018-01-01
A quality assurance activity, known as software fault prediction, can reduce development costs and improve software quality. The objective of this study is to investigate change metrics in conjunction with code metrics to improve the performance of fault prediction models. Experimental studies are
Predicting class testability using object-oriented metrics
M. Bruntink (Magiel); A. van Deursen (Arie)
2004-01-01
In this paper we investigate factors of the testability of object-oriented software systems. The starting point is given by a study of the literature to obtain both an initial model of testability and existing OO metrics related to testability. Subsequently, these metrics are evaluated
Comparative Study of Trace Metrics between Bibliometrics and Patentometrics
Directory of Open Access Journals (Sweden)
Fred Y. Ye
2016-06-01
Full Text Available Purpose: To comprehensively evaluate the overall performance of a group or an individual in both bibliometrics and patentometrics. Design/methodology/approach: Trace metrics were applied to the top 30 universities in the 2014 Academic Ranking of World Universities (ARWU) in computer sciences, the top 30 ESI highly cited papers in the computer sciences field in 2014, as well as the top 30 assignees and the top 30 most cited patents in the National Bureau of Economic Research (NBER) computer hardware and software category. Findings: Applying trace metrics made the research or marketing impact efficiency clearly observable at both the group and individual levels. Furthermore, trace metrics were more sensitive to different publication-citation distributions than the average citation and h-index were. Research limitations: Trace metrics count publications with zero citations as negative contributions. One should clarify how to evaluate a zero-citation paper or patent before applying trace metrics. Practical implications: Decision makers could regularly examine the performance of their university or company by applying trace metrics and adjust their policies accordingly. Originality/value: Trace metrics can be applied both in bibliometrics and patentometrics and provide a comprehensive view. Moreover, the high sensitivity and unique impact-efficiency view provided by trace metrics can help decision makers examine and adjust their policies.
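The sensitivity claim is easy to illustrate: the h-index can be identical for two very different citation profiles even as other summaries respond strongly. A small sketch with invented profiles:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Two profiles with the same h-index but very different citation mass:
a = [6, 6, 6, 6, 6, 6, 0, 0]
b = [100, 80, 60, 40, 20, 6, 0, 0]
print(h_index(a), h_index(b))            # 6 6
print(sum(a) / len(a), sum(b) / len(b))  # 4.5 38.25
```

Any metric designed, like the trace metrics above, to reflect the whole publication-citation distribution will separate these two profiles that the h-index conflates.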
Self-dual metrics with self-dual Killing vectors
International Nuclear Information System (INIS)
Tod, K.P.; Ward, R.S.
1979-01-01
Twistor methods are used to derive a class of solutions to Einstein's vacuum equations, with anti-self-dual Weyl tensor. In particular, all metrics with a Killing vector whose derivative is anti-self-dual and which admit a real positive-definite section are exhibited and shown to coincide with the metrics of Hawking. (author)
Scalar metric fluctuations in space-time matter inflation
International Nuclear Information System (INIS)
Anabitarte, Mariano; Bellini, Mauricio
2006-01-01
Using the Ponce de Leon background metric, which describes a 5D universe in an apparent vacuum, Ḡ_AB = 0, we study the effective 4D evolution of both the inflaton and gauge-invariant scalar metric fluctuations in the recently introduced model of space-time matter inflation
22 CFR 226.15 - Metric system of measurement.
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Metric system of measurement. 226.15 Section 226.15 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT ADMINISTRATION OF ASSISTANCE AWARDS TO U.S. NON-GOVERNMENTAL ORGANIZATIONS Pre-award Requirements § 226.15 Metric system of measurement. (a...
Presic-Boyd-Wong Type Results in Ordered Metric Spaces
Directory of Open Access Journals (Sweden)
Satish Shukla
2014-04-01
Full Text Available The purpose of this paper is to prove some Presic-Boyd-Wong type fixed point theorems in ordered metric spaces. The results of this paper generalize the famous results of Presic and Boyd-Wong in ordered metric spaces. We also initiate the homotopy result in product spaces. Some examples are provided which illustrate the results proved herein.
A heuristic way of obtaining the Kerr metric
International Nuclear Information System (INIS)
Enderlein, J.
1997-01-01
An intuitive, straightforward way of finding the metric of a rotating black hole is presented, based on the algebra of differential forms. The representation obtained for the metric displays a simplicity which is not obvious in the usual Boyer-Lindquist coordinates. Copyright 1997 American Association of Physics Teachers
On the L2-metric of vortex moduli spaces
Baptista, J.M.
2011-01-01
We derive general expressions for the Kähler form of the L2-metric in terms of standard 2-forms on vortex moduli spaces. In the case of abelian vortices in gauged linear sigma-models, this allows us to compute explicitly the Kähler class of the L2-metric. As an application we compute the total
Probabilistic G-Metric space and some fixed point results
Directory of Open Access Journals (Sweden)
A. R. Janfada
2013-01-01
Full Text Available In this note we introduce the notions of generalized probabilistic metric spaces and generalized Menger probabilistic metric spaces. After making some elementary observations and proving basic properties of these spaces, we prove some fixed point results in these spaces.
Socio-Technical Security Metrics (Dagstuhl Seminar 14491)
Gollmann, Dieter; Herley, Cormac; Koenig, Vincent; Pieters, Wolter; Sasse, Martina Angela
2015-01-01
This report documents the program and the outcomes of Dagstuhl Seminar 14491 "Socio-Technical Security Metrics". In the domain of safety, metrics inform many decisions, from the height of new dikes to the design of nuclear plants. We can state, for example, that the dikes should be high enough to
Radiating c metric: an example of a proper Ricci Collineation
International Nuclear Information System (INIS)
Aulestia, L.; Nunez, L.; Patino, A.; Rago, H.; Herrera, L.
1984-01-01
A generalization of the charged c metric to the nonstationary case is given. The possibility of associating the energy-momentum tensor with the electromagnetic or neutrino field is discussed. It is shown that, for a specific choice of the time-dependent parameters, the metric admits at least a two-parameter group of proper Ricci collineations
On the Metric-based Approximate Minimization of Markov Chains
DEFF Research Database (Denmark)
Bacci, Giovanni; Bacci, Giorgio; Larsen, Kim Guldstrand
2018-01-01
In this paper we address the approximate minimization problem of Markov Chains (MCs) from a behavioral metric-based perspective. Specifically, given a finite MC and a positive integer k, we are looking for an MC with at most k states having minimal distance to the original. The metric considered...
On the Metric-Based Approximate Minimization of Markov Chains
DEFF Research Database (Denmark)
Bacci, Giovanni; Bacci, Giorgio; Larsen, Kim Guldstrand
2017-01-01
We address the behavioral metric-based approximate minimization problem of Markov Chains (MCs), i.e., given a finite MC and a positive integer k, we are interested in finding a k-state MC of minimal distance to the original. By considering as metric the bisimilarity distance of Desharnais et al...
Implementing Metrics at a District Level. Administrative Guide. Revised Edition.
Borelli, Michael L.; Morelli, Sandra Z.
Administrative concerns in implementing metrics at a district level are discussed and specific recommendations are made regarding them. The paper considers the extent and manner of staff training necessary, the curricular changes associated with metrics, and the distinctions between elementary and secondary programs. Appropriate instructional…
20 CFR 435.15 - Metric system of measurement.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Metric system of measurement. 435.15 Section 435.15 Employees' Benefits SOCIAL SECURITY ADMINISTRATION UNIFORM ADMINISTRATIVE REQUIREMENTS FOR... metric system is the preferred measurement system for U.S. trade and commerce. The Act requires each...
Choosing the Greenest Synthesis: A Multivariate Metric Green Chemistry Exercise
Mercer, Sean M.; Andraos, John; Jessop, Philip G.
2012-01-01
The ability to correctly identify the greenest of several syntheses is a particularly useful asset for young chemists in the growing green economy. The famous univariate metrics atom economy and environmental factor provide insufficient information to allow for a proper selection of a green process. Multivariate metrics, such as those used in…
76 FR 53885 - Patent and Trademark Resource Centers Metrics
2011-08-30
... DEPARTMENT OF COMMERCE United States Patent and Trademark Office Patent and Trademark Resource Centers Metrics ACTION: Proposed collection; comment request. SUMMARY: The United States Patent and... ``Patent and Trademark Resource Centers Metrics comment'' in the subject line of the message. Mail: Susan K...
Author Impact Metrics in Communication Sciences and Disorder Research
Stuart, Andrew; Faucette, Sarah P.; Thomas, William Joseph
2017-01-01
Purpose: The purpose was to examine author-level impact metrics for faculty in the communication sciences and disorder research field across a variety of databases. Method: Author-level impact metrics were collected for faculty from 257 accredited universities in the United States and Canada. Three databases (i.e., Google Scholar, ResearchGate,…
Evaluating hydrological model performance using information theory-based metrics
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...
Using metrics in stability of stochastic programming problems
Czech Academy of Sciences Publication Activity Database
Houda, Michal
2005-01-01
Roč. 13, č. 1 (2005), s. 128-134 ISSN 0572-3043 R&D Projects: GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : stochastic programming * quantitative stability * Wasserstein metrics * Kolmogorov metrics * simulation study Subject RIV: BB - Applied Statistics, Operational Research
A Practical Method for Collecting Social Media Campaign Metrics
Gharis, Laurie W.; Hightower, Mary F.
2017-01-01
Today's Extension professionals are tasked with more work and fewer resources. Integrating social media campaigns into outreach efforts can be an efficient way to meet work demands. If resources go toward social media, a practical method for collecting metrics is needed. Collecting metrics adds one more task to the workloads of Extension…
Discriminatory Data Mapping by Matrix-Based Supervised Learning Metrics
Strickert, M.; Schneider, P.; Keilwagen, J.; Villmann, T.; Biehl, M.; Hammer, B.
2008-01-01
Supervised attribute relevance detection using cross-comparisons (SARDUX), a recently proposed method for data-driven metric learning, is extended from dimension-weighted Minkowski distances to metrics induced by a data transformation matrix Ω for modeling mutual attribute dependence. Given class
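The distance form described above, induced by a transformation matrix Ω (a Mahalanobis-type metric d(x, y) = ||Ω(x − y)||₂ with Λ = ΩᵀΩ), can be sketched in a few lines. This is an illustration of the distance form only, not the SARDUX learning procedure:

```python
def learned_distance(x, y, omega):
    """Distance induced by transformation matrix omega:
    d(x, y) = || omega @ (x - y) ||_2, a Mahalanobis-type metric
    with Lambda = omega^T omega capturing attribute dependence."""
    diff = [a - b for a, b in zip(x, y)]
    # Project the difference vector through omega, then take the 2-norm.
    proj = [sum(w * d for w, d in zip(row, diff)) for row in omega]
    return sum(p * p for p in proj) ** 0.5

# The identity matrix recovers the plain Euclidean distance:
print(learned_distance([0, 0], [3, 4], [[1, 0], [0, 1]]))  # -> 5.0
```

A diagonal Ω reproduces a dimension-weighted (Minkowski p = 2) distance; off-diagonal entries are what let the metric model mutual attribute dependence.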
27 CFR 4.72 - Metric standards of fill.
2010-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Metric standards of fill. 4.72 Section 4.72 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS LABELING AND ADVERTISING OF WINE Standards of Fill for Wine § 4.72 Metric...
Assessing precision, bias and sigma-metrics of 53 measurands of the Alinity ci system.
Westgard, Sten; Petrides, Victoria; Schneider, Sharon; Berman, Marvin; Herzogenrath, Jörg; Orzechowski, Anthony
2017-12-01
Assay performance is dependent on the accuracy and precision of a given method. These attributes can be combined into an analytical Sigma-metric, providing a simple value for laboratorians to use in evaluating a test method's capability to meet its analytical quality requirements. Sigma-metrics were determined for 37 clinical chemistry assays, 13 immunoassays, and 3 ICT methods on the Alinity ci system. Analytical Performance Specifications were defined for the assays, following a rationale of using CLIA goals first, then Ricos Desirable goals when CLIA did not regulate the method, and then other sources if the Ricos Desirable goal was unrealistic. A precision study was conducted at Abbott on each assay using the Alinity ci system following the CLSI EP05-A2 protocol. Bias was estimated following the CLSI EP09-A3 protocol using samples with concentrations spanning the assay's measuring interval tested in duplicate on the Alinity ci system and ARCHITECT c8000 and i2000 SR systems, where testing was also performed at Abbott. Using the regression model, the %bias was estimated at an important medical decision point. Then the Sigma-metric was estimated for each assay and was plotted on a method decision chart. The Sigma-metric was calculated using the equation: Sigma-metric = (%TEa - |%bias|)/%CV. The Sigma-metrics and Normalized Method Decision charts demonstrate that a majority of the Alinity assays perform at five Sigma or higher, at or near critical medical decision levels. More than 90% of the assays performed at five or six Sigma. None performed below three Sigma. Sigma-metrics plotted on Normalized Method Decision charts provide useful evaluations of performance. The majority of Alinity ci system assays had sigma values >5, and thus laboratories can expect excellent or world-class performance. Laboratorians can use these tools as aids in choosing high-quality products, further contributing to the delivery of excellent quality healthcare for patients.
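The quoted equation can be applied directly; a minimal sketch, where the %TEa, %bias, and %CV figures are hypothetical values for illustration, not results from the study:

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Analytical Sigma-metric: (%TEa - |%bias|) / %CV."""
    if cv_pct <= 0:
        raise ValueError("%CV must be positive")
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical assay: 10% allowable total error, 1% bias, 1.5% CV
print(sigma_metric(10.0, 1.0, 1.5))  # -> 6.0, i.e. "six Sigma" performance
```

Note that a larger bias or CV eats directly into the Sigma value, which is why the metric combines accuracy and precision into a single figure of merit.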
Metrics to describe the effects of landscape pattern on hydrology in a lotic peatland
Yuan, J.; Cohen, M. J.; Kaplan, D. A.; Acharya, S.; Larsen, L.; Nungesser, M.
2013-12-01
Strong reciprocal interactions exist between landscape patterns and ecological processes. Hydrology is the dominant abiotic driver of ecological processes in wetlands, particularly flowing wetlands, but is both the control on and controlled by the geometry of vegetation patterning. Landscape metrics are widely used to quantitatively link pattern and process. Our goal here was to use several candidate spatial pattern metrics to predict the effects of wetland vegetation pattern on hydrologic regime, specifically hydroperiod, in the ridge-slough patterned landscape of the Everglades. The metrics focus on the capacity for longitudinally connected flow, and thus the ability of this low-gradient patterned landscape to route water from upstream. We first explored flow friction cost (FFC), a weighted spatial distance procedure wherein ridges have a higher flow cost than sloughs by virtue of their elevation and vegetation structure, to evaluate water movement through different landscape configurations. We also investigated existing published flow metrics, specifically the Directional Connectivity Index (DCI) and Landscape Discharge Competence (LDC), that seek to quantify connectivity, one of the sentinel targets of ecological restoration. Hydroperiod was estimated using a numerical hydrologic model (SWIFT 2D) in real and synthetic landscapes with varying vegetation properties (patch anisotropy, ridge density). Synthetic landscapes were constrained by the geostatistical properties of the best-conserved pattern, and contained five anisotropy levels and seven ridge density levels. These were used to construct the relationship between landscape metrics and hydroperiod. Then, using historical images from 1940 to 2004, we applied the metrics to back-cast hydroperiod. Current vegetation maps were used to test scale dependency for each metric. Our results suggest that both FFC and DCI are good predictors of hydroperiod under free flowing conditions, and that they can be used
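The flow friction cost idea can be sketched as a least-cost traversal of a binary ridge/slough grid. This is an illustrative reconstruction, not the authors' implementation: the cell encoding (1 = ridge, 0 = slough), the cost values, and the top-to-bottom flow direction are all assumptions:

```python
import heapq

def flow_friction_cost(grid, ridge_cost=5.0, slough_cost=1.0):
    """Least accumulated traversal cost from the top row to the bottom row
    of a landscape grid, where 1 marks ridge cells and 0 marks sloughs.
    Uses Dijkstra's algorithm over 4-connected neighbors."""
    rows, cols = len(grid), len(grid[0])
    cost = lambda r, c: ridge_cost if grid[r][c] else slough_cost
    best = [[float("inf")] * cols for _ in range(rows)]
    heap = [(cost(0, c), 0, c) for c in range(cols)]  # flow enters at the top row
    for d, r, c in heap:
        best[r][c] = d
    heapq.heapify(heap)
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > best[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost(nr, nc)
                if nd < best[nr][nc]:
                    best[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return min(best[rows - 1])  # cheapest exit along the bottom row
```

With this encoding, a continuous ridge band lying across the flow direction raises the accumulated cost, while longitudinally connected sloughs keep it low, which is the intuition behind using such metrics to predict routing capacity.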
ISS Logistics Hardware Disposition and Metrics Validation
Rogers, Toneka R.
2010-01-01
I was assigned to the Logistics Division of the International Space Station (ISS)/Spacecraft Processing Directorate. The Division consists of eight NASA engineers and specialists that oversee the logistics portion of the Checkout, Assembly, and Payload Processing Services (CAPPS) contract. Boeing, their sub-contractors and the Boeing Prime contract out of Johnson Space Center, provide the Integrated Logistics Support for the ISS activities at Kennedy Space Center. Essentially they ensure that spares are available to support flight hardware processing and the associated ground support equipment (GSE). Boeing maintains a Depot for electrical, mechanical and structural modifications and/or repair capability as required. My assigned task was to learn project management techniques utilized by NASA and its contractors to provide an efficient and effective logistics support infrastructure to the ISS program. Within the Space Station Processing Facility (SSPF) I was exposed to Logistics support components, such as, the NASA Spacecraft Services Depot (NSSD) capabilities, Mission Processing tools, techniques and Warehouse support issues, required for integrating Space Station elements at the Kennedy Space Center. I also supported the identification of near-term ISS Hardware and Ground Support Equipment (GSE) candidates for excessing/disposition prior to October 2010; and the validation of several Logistics Metrics used by the contractor to measure logistics support effectiveness.
Securing Health Sensing Using Integrated Circuit Metric
Tahir, Ruhma; Tahir, Hasan; McDonald-Maier, Klaus
2015-01-01
Convergence of technologies from several domains of computing and healthcare have aided in the creation of devices that can help health professionals in monitoring their patients remotely. An increase in networked healthcare devices has resulted in incidents related to data theft, medical identity theft and insurance fraud. In this paper, we discuss the design and implementation of a secure lightweight wearable health sensing system. The proposed system is based on an emerging security technology called Integrated Circuit Metric (ICMetric) that extracts the inherent features of a device to generate a unique device identification. In this paper, we provide details of how the physical characteristics of a health sensor can be used for the generation of hardware “fingerprints”. The obtained fingerprints are used to deliver security services like authentication, confidentiality, secure admission and symmetric key generation. The generated symmetric key is used to securely communicate the health records and data of the patient. Based on experimental results and the security analysis of the proposed scheme, it is apparent that the proposed system enables high levels of security for health monitoring in resource optimized manner. PMID:26492250
Metric integration architecture for product development
Sieger, David B.
1997-06-01
Present-day product development endeavors utilize the concurrent engineering philosophy as a logical means for incorporating a variety of viewpoints into the design of products. Since this approach provides no explicit procedural provisions, it is necessary to establish at least a mental coupling with a known design process model. The central feature of all such models is the management and transformation of information. While these models assist in structuring the design process, characterizing the basic flow of operations that are involved, they provide no guidance facilities. The significance of this feature, and the role it plays in the time required to develop products, is increasing in importance due to the inherent process dynamics, system/component complexities, and competitive forces. The methodology presented in this paper involves the use of a hierarchical system structure, discrete event system specification (DEVS), and multidimensional state variable based metrics. This approach is unique in its capability to quantify designers' actions throughout product development, provide recommendations about subsequent activity selection, and coordinate distributed activities of designers and/or design teams across all design stages. Conceptual design tool implementation results are used to demonstrate the utility of this technique in improving the incremental decision making process.
Creating meaningful business continuity management programme metrics.
Strong, Brian
2010-11-01
The popular axiom, 'what gets measured gets done', is often applied in the quality management and continuous improvement disciplines. This truism is also useful to business continuity practitioners as they continually strive to prove the value of their organisation's investment in a business continuity management (BCM) programme. BCM practitioners must also remain relevant to their organisations as executives focus on the bottom line and maintaining stakeholder confidence. It seems that executives always find a way, whether in a hallway or elevator, to ask BCM professionals about the company's level of readiness. When asked, they must be ready with an informed response. The establishment of a process to measure business continuity programme performance and organisational readiness has emerged as a key component of US Department of Homeland Security 'Voluntary Private Sector Preparedness (PS-Prep) Program' standards where the overarching goal is to improve private sector preparedness for disasters and emergencies. The purpose of this paper is two-fold: to introduce continuity professionals to best practices that should be considered when developing a BCM metrics programme as well as providing a case study of how a large health insurance company researched, developed and implemented a process to measure BCM programme performance and company readiness.
Viscous shear in the Kerr metric
International Nuclear Information System (INIS)
Anderson, M.R.; Lemos, J.P.S.
1988-01-01
Models of viscous flows on to black holes commonly assume a zero-torque boundary condition at the radius of the last stable Keplerian orbit. It is here shown that this condition is wrong. The viscous torque is generally non-zero at both the last stable orbit and the horizon itself. The existence of a non-zero viscous torque at the horizon does not require the transfer of energy or angular momentum across any spacelike distance, and so does not violate causality. Further, in comparison with the viscous torque in the distant, Newtonian regime, the viscous torque on the horizon is often reversed, so that angular momentum is viscously advected inwards rather than outwards. This phenomenon is first suggested by an analysis of the quasi-stationary case, and then demonstrated explicitly for a series of cold, dynamical flows which fall freely from the last stable orbit in the Schwarzschild and Kerr metrics. In the steady flows constructed here, the net torque on the hole is always directed in the usual sense; any reversal in the viscous torque is offset by an increase in the convected flux of angular momentum. (author)
On degenerate metrics, dark matter and unification
Searight, Trevor P.
2017-12-01
A five-dimensional theory of relativity is presented which suggests that gravitation and electromagnetism may be unified using a degenerate metric. There are four fields (in the four-dimensional sense): a tensor field, two vector fields, and a scalar field, and they are unified with a combination of a gauge-like invariance and a reflection symmetry which means that both vector fields are photons. The gauge-like invariance implies that the fifth dimension is not directly observable; it also implies that charge is a constant of motion. The scalar field is analogous to the Brans-Dicke scalar field, and the theory tends towards the Einstein-Maxwell theory in the limit as the coupling constant tends to infinity. As there is some scope for fields to vary in the fifth dimension, it is possible for the photons to have wave behaviour in the fifth dimension. The wave behaviour has two effects: it gives mass to the photons, and it prevents them from interacting directly with normal matter. These massive photons still act as a source of gravity, however, and therefore they are candidates for dark matter.
Relativistic gas in a Schwarzschild metric
International Nuclear Information System (INIS)
Kremer, Gilberto M
2013-01-01
A relativistic gas in a Schwarzschild metric is studied within the framework of a relativistic Boltzmann equation in the presence of gravitational fields, where Marle’s model for the collision operator of the Boltzmann equation is employed. The transport coefficients of the bulk and shear viscosities and thermal conductivity are determined from the Chapman–Enskog method. It is shown that the transport coefficients depend on the gravitational potential. Expressions for the transport coefficients in the presence of weak gravitational fields in the non-relativistic (low temperature) and ultra-relativistic (high temperature) limiting cases are given. Apart from the temperature gradient the heat flux has two relativistic terms. The first one, proposed by Eckart, is due to the inertia of energy and represents an isothermal heat flux when matter is accelerated. The other, suggested by Tolman, is proportional to the gravitational potential gradient and indicates that—in the absence of an acceleration field—a state of equilibrium of a relativistic gas in a gravitational field can be attained only if the temperature gradient is counterbalanced by a gravitational potential gradient. (paper)
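The Tolman condition the abstract alludes to has a compact standard statement (the Tolman-Ehrenfest relation, given here as background rather than quoted from the paper): in a static gravitational field, equilibrium requires

```latex
T(\mathbf{x})\,\sqrt{g_{00}(\mathbf{x})} = \text{const},
\qquad \text{so, with } g_{00} \approx 1 + \tfrac{2\Phi}{c^2} \text{ in the weak-field limit,} \qquad
\frac{\nabla T}{T} \approx -\frac{\nabla \Phi}{c^2}.
```

This is exactly the statement that a temperature gradient must be counterbalanced by a gravitational potential gradient for the gas to be in equilibrium.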
Securing Health Sensing Using Integrated Circuit Metric
Directory of Open Access Journals (Sweden)
Ruhma Tahir
2015-10-01
Full Text Available Convergence of technologies from several domains of computing and healthcare have aided in the creation of devices that can help health professionals in monitoring their patients remotely. An increase in networked healthcare devices has resulted in incidents related to data theft, medical identity theft and insurance fraud. In this paper, we discuss the design and implementation of a secure lightweight wearable health sensing system. The proposed system is based on an emerging security technology called Integrated Circuit Metric (ICMetric) that extracts the inherent features of a device to generate a unique device identification. In this paper, we provide details of how the physical characteristics of a health sensor can be used for the generation of hardware “fingerprints”. The obtained fingerprints are used to deliver security services like authentication, confidentiality, secure admission and symmetric key generation. The generated symmetric key is used to securely communicate the health records and data of the patient. Based on experimental results and the security analysis of the proposed scheme, it is apparent that the proposed system enables high levels of security for health monitoring in resource optimized manner.
Securing health sensing using integrated circuit metric.
Tahir, Ruhma; Tahir, Hasan; McDonald-Maier, Klaus
2015-10-20
Convergence of technologies from several domains of computing and healthcare have aided in the creation of devices that can help health professionals in monitoring their patients remotely. An increase in networked healthcare devices has resulted in incidents related to data theft, medical identity theft and insurance fraud. In this paper, we discuss the design and implementation of a secure lightweight wearable health sensing system. The proposed system is based on an emerging security technology called Integrated Circuit Metric (ICMetric) that extracts the inherent features of a device to generate a unique device identification. In this paper, we provide details of how the physical characteristics of a health sensor can be used for the generation of hardware "fingerprints". The obtained fingerprints are used to deliver security services like authentication, confidentiality, secure admission and symmetric key generation. The generated symmetric key is used to securely communicate the health records and data of the patient. Based on experimental results and the security analysis of the proposed scheme, it is apparent that the proposed system enables high levels of security for health monitoring in resource optimized manner.
Genetic basis of a cognitive complexity metric.
Directory of Open Access Journals (Sweden)
Narelle K Hansell
Full Text Available Relational complexity (RC is a metric reflecting capacity limitation in relational processing. It plays a crucial role in higher cognitive processes and is an endophenotype for several disorders. However, the genetic underpinnings of complex relational processing have not been investigated. Using the classical twin model, we estimated the heritability of RC and genetic overlap with intelligence (IQ, reasoning, and working memory in a twin and sibling sample aged 15-29 years (N = 787. Further, in an exploratory search for genetic loci contributing to RC, we examined associated genetic markers and genes in our Discovery sample and selected loci for replication in four independent samples (ALSPAC, LBC1936, NTR, NCNG, followed by meta-analysis (N>6500 at the single marker level. Twin modelling showed RC is highly heritable (67%, has considerable genetic overlap with IQ (59%, and is a major component of genetic covariation between reasoning and working memory (72%. At the molecular level, we found preliminary support for four single-marker loci (one in the gene DGKB, and at a gene-based level for the NPS gene, having influence on cognition. These results indicate that genetic sources influencing relational processing are a key component of the genetic architecture of broader cognitive abilities. Further, they suggest a genetic cascade, whereby genetic factors influencing capacity limitation in relational processing have a flow-on effect to more complex cognitive traits, including reasoning and working memory, and ultimately, IQ.
Metrics for comparing dynamic earthquake rupture simulations
Barall, Michael; Harris, Ruth A.
2014-01-01
Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools that is available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near‐fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes’ results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code’s results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
Characterizing granular networks using topological metrics
Dijksman, Joshua A.; Kovalcinova, Lenka; Ren, Jie; Behringer, Robert P.; Kramar, Miroslav; Mischaikow, Konstantin; Kondic, Lou
2018-04-01
We carry out a direct comparison of experimental and numerical realizations of the exact same granular system as it undergoes shear jamming. We adjust the numerical methods used to optimally represent the experimental settings and outcomes up to microscopic contact force dynamics. Measures presented here range from microscopic through mesoscopic to systemwide characteristics of the system. Topological properties of the mesoscopic force networks provide a key link between microscales and macroscales. We report two main findings: (1) The number of particles in the packing that have at least two contacts is a good predictor for the mechanical state of the system, regardless of strain history and packing density. All measures explored in both experiments and numerics, including stress-tensor-derived measures and contact numbers depend in a universal manner on the fraction of nonrattler particles, fNR. (2) The force network topology also tends to show this universality, yet the shape of the master curve depends much more on the details of the numerical simulations. In particular we show that adding force noise to the numerical data set can significantly alter the topological features in the data. We conclude that both fNR and topological metrics are useful measures to consider when quantifying the state of a granular system.
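Computing the fraction of non-rattler particles, fNR, from per-particle contact counts is straightforward; a minimal sketch, assuming the at-least-two-contacts criterion stated in the abstract:

```python
def fraction_nonrattlers(contact_counts, min_contacts=2):
    """Fraction of particles with at least `min_contacts` contacts.
    Particles below the threshold are rattlers and carry no force."""
    if not contact_counts:
        return 0.0
    nonrattlers = sum(1 for c in contact_counts if c >= min_contacts)
    return nonrattlers / len(contact_counts)

# Toy packing of 8 particles with these contact counts:
print(fraction_nonrattlers([0, 1, 2, 3, 4, 2, 0, 5]))  # -> 0.625
```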
Using Publication Metrics to Highlight Academic Productivity and Research Impact
Carpenter, Christopher R.; Cone, David C.; Sarli, Cathy C.
2016-01-01
This article provides a broad overview of widely available measures of academic productivity and impact using publication data and highlights uses of these metrics for various purposes. Metrics based on publication data include measures such as number of publications, number of citations, the journal impact factor score, and the h-index, as well as emerging metrics based on document-level metrics. Publication metrics can be used for a variety of purposes for tenure and promotion, grant applications and renewal reports, benchmarking, recruiting efforts, and administrative purposes for departmental or university performance reports. The authors also highlight practical applications of measuring and reporting academic productivity and impact to emphasize and promote individual investigators, grant applications, or department output. PMID:25308141
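Among the metrics listed, the h-index has a simple operational definition that a short sketch makes concrete (the standard definition, not anything specific to this article's data): an author has index h if h of their papers have at least h citations each.

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # the rank-th best paper still clears the bar
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four papers with >= 4 citations)
```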
Accuracy and precision in the calculation of phenology metrics
DEFF Research Database (Denmark)
Ferreira, Ana Sofia; Visser, Andre; MacKenzie, Brian
2014-01-01
Phytoplankton phenology (the timing of seasonal events) is a commonly used indicator for evaluating responses of marine ecosystems to climate change. However, phenological metrics are vulnerable to observation-related (bloom amplitude, missing data, and observational noise) and analysis-related (temporal ...) sources of error: a phenology metric is first determined from a noise- and gap-free time series, and again once it has been modified. We show that precision is a greater concern than accuracy for many of these metrics, an important point that has been hereto overlooked in the literature. The variability in precision between phenology metrics is substantial, but it can be improved by the use of preprocessing techniques (e.g., gap-filling or smoothing). Furthermore, there are important differences in the inherent variability of the metrics that may be crucial in the interpretation of studies based upon them. Of the considered ...
Degraded visual environment image/video quality metrics
Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.
2014-06-01
A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.
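As an example of a full-reference metric of the kind that "requires both pristine and corrupted imagery", here is a minimal peak signal-to-noise ratio (PSNR) sketch. PSNR is a standard IQM offered purely as an illustration; it is not one of the task-related metrics the authors developed:

```python
import math

def psnr(reference, corrupted, max_val=255.0):
    """Full-reference PSNR between two equal-size images given as
    flat sequences of pixel intensities: 10*log10(max_val^2 / MSE)."""
    if len(reference) != len(corrupted):
        raise ValueError("images must have the same size")
    mse = sum((r - c) ** 2 for r, c in zip(reference, corrupted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

The paper's point stands even in this toy form: PSNR says nothing about whether a pilot could detect a hazard through a brownout cloud, which is why task-related metrics are needed.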
Developing a Security Metrics Scorecard for Healthcare Organizations.
Elrefaey, Heba; Borycki, Elizabeth; Kushniruk, Andrea
2015-01-01
In healthcare, information security is a key aspect of protecting a patient's privacy and ensuring systems availability to support patient care. Security managers need to measure the performance of security systems and this can be achieved by using evidence-based metrics. In this paper, we describe the development of an evidence-based security metrics scorecard specific to healthcare organizations. Study participants were asked to comment on the usability and usefulness of a prototype of a security metrics scorecard that was developed based on current research in the area of general security metrics. Study findings revealed that scorecards need to be customized for the healthcare setting in order for the security information to be useful and usable in healthcare organizations. The study findings resulted in the development of a security metrics scorecard that matches the healthcare security experts' information requirements.
A practical approach to determine dose metrics for nanomaterials.
Delmaar, Christiaan J E; Peijnenburg, Willie J G M; Oomen, Agnes G; Chen, Jingwen; de Jong, Wim H; Sips, Adriënne J A M; Wang, Zhuang; Park, Margriet V D Z
2015-05-01
Traditionally, administered mass is used to describe doses of conventional chemical substances in toxicity studies. For deriving toxic doses of nanomaterials, mass and chemical composition alone may not adequately describe the dose, because particles with the same chemical composition can have completely different toxic mass doses depending on properties such as particle size. Other dose metrics such as particle number, volume, or surface area have been suggested, but consensus is lacking. The discussion regarding the most adequate dose metric for nanomaterials clearly calls for a systematic, unbiased approach to settling the question. In the present study, the authors propose such an approach and apply it to results from in vitro and in vivo experiments with silver and silica nanomaterials. The proposed approach is shown to provide a convenient tool to systematically investigate and interpret dose metrics of nanomaterials. Recommendations for study designs aimed at investigating dose metrics are provided. © 2015 SETAC.
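The size dependence described above is easy to make concrete. The following sketch converts a fixed administered mass dose into particle-number and surface-area doses, under the idealized assumption of monodisperse spherical particles (the function and the silver example values are illustrative, not taken from the study):

```python
import math

def dose_metrics(mass_ug, diameter_nm, density_g_cm3):
    """Convert a mass dose into particle-number and surface-area doses,
    assuming idealized monodisperse spheres."""
    d_cm = diameter_nm * 1e-7                                   # nm -> cm
    particle_mass_ug = density_g_cm3 * math.pi * d_cm**3 / 6 * 1e6  # g -> ug
    n = mass_ug / particle_mass_ug                              # number dose
    area_cm2 = n * math.pi * d_cm**2                            # surface-area dose
    return n, area_cm2

# Identical 10 ug mass doses of 20 nm vs 100 nm silver (~10.5 g/cm^3):
n20, a20 = dose_metrics(10, 20, 10.5)
n100, a100 = dose_metrics(10, 100, 10.5)
# The same mass corresponds to a 125x higher number dose and a 5x
# higher surface-area dose for the smaller particle.
```

This is exactly why mass alone can be a misleading dose descriptor: the alternative metrics scale as 1/d³ and 1/d for the same administered mass.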
Fisher information metrics for binary classifier evaluation and training
CERN. Geneva
2018-01-01
Different evaluation metrics for binary classifiers are appropriate to different scientific domains and even to different problems within the same domain. This presentation focuses on the optimisation of event selection to minimise statistical errors in HEP parameter estimation, a problem that is best analysed in terms of the maximisation of Fisher information about the measured parameters. After describing a general formalism to derive evaluation metrics based on Fisher information, three more specific metrics are introduced for the measurements of signal cross sections in counting experiments (FIP1) or distribution fits (FIP2) and for the measurements of other parameters from distribution fits (FIP3). The FIP2 metric is particularly interesting because it can be derived from any ROC curve, provided that prevalence is also known. In addition to its relation to measurement errors when used as an evaluation criterion (which makes it more interesting than the ROC AUC), a further advantage of the FIP2 metric is ...
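The counting-experiment case can be illustrated with the standard Fisher-information figure of merit. For a single-bin experiment n ~ Poisson(μ·s + b), the information about the signal-strength parameter μ at μ = 1 is s²/(s + b), so selections are ranked by that quantity rather than by efficiency or purity alone (a minimal sketch; the selection numbers are invented):

```python
def fisher_info(s, b):
    """Fisher information about a signal-strength parameter mu in a
    single-bin counting experiment n ~ Poisson(mu*s + b), at mu = 1.
    Larger information => smaller statistical error on mu."""
    return s * s / (s + b)

# Ranking two candidate event selections (s, b are expected counts):
loose = (100.0, 900.0)   # high efficiency, high background
tight = (60.0, 40.0)     # lower efficiency, much cleaner
best = max([loose, tight], key=lambda sb: fisher_info(*sb))
# fisher_info: loose -> 10.0, tight -> 36.0, so the tighter cut wins.
```

A metric of this kind directly targets the statistical error on the measured parameter, which is the point the presentation makes against generic criteria such as the ROC AUC.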
Clinical Outcome Metrics for Optimization of Robust Training
Ebert, D.; Byrne, V. E.; McGuire, K. M.; Hurst, V. W., IV; Kerstman, E. L.; Cole, R. W.; Sargsyan, A. E.; Garcia, K. M.; Reyes, D.; Young, M.
2016-01-01
Introduction: The emphasis of this research is on the Human Research Program (HRP) Exploration Medical Capability's (ExMC) "Risk of Unacceptable Health and Mission Outcomes Due to Limitations of In-Flight Medical Capabilities." Specifically, this project aims to contribute to the closure of gap ExMC 2.02: We do not know how the inclusion of a physician crew medical officer quantitatively impacts clinical outcomes during exploration missions. The experiments are specifically designed to address clinical outcome differences between physician and non-physician cohorts in both near-term and longer-term (mission impacting) outcomes. Methods: Medical simulations will systematically compare success of individual diagnostic and therapeutic procedure simulations performed by physician and non-physician crew medical officer (CMO) analogs using clearly defined short-term (individual procedure) outcome metrics. In the subsequent step of the project, the procedure simulation outcomes will be used as input to a modified version of the NASA Integrated Medical Model (IMM) to analyze the effect of the outcome (degree of success) of individual procedures (including successful, imperfectly performed, and failed procedures) on overall long-term clinical outcomes and the consequent mission impacts. The procedures to be simulated are endotracheal intubation, fundoscopic examination, kidney/urinary ultrasound, ultrasound-guided intravenous catheter insertion, and a differential diagnosis exercise. Multiple assessment techniques will be used, centered on medical procedure simulation studies occurring at 3, 6, and 12 months after initial training (as depicted in the following flow diagram of the experiment design). Discussion: Analysis of procedure outcomes in the physician and non-physician groups and their subsets (tested at different elapsed times post training) will allow the team to 1) define differences between physician and non-physician CMOs in terms of both procedure performance
2012-03-02
... Performance Metrics; Commission Staff Request Comments on Performance Metrics for Regions Outside of RTOs and... performance communicate about the benefits of RTOs and, where appropriate, (2) changes that need to be made to... common set of performance measures for markets both within and outside of ISOs/RTOs. As recommended by...
A comparison theorem of the Kobayashi metric and the Bergman metric on a class of Reinhardt domains
International Nuclear Information System (INIS)
Weiping Yin.
1990-03-01
A comparison theorem for the Kobayashi and Bergman metrics is given on a class of Reinhardt domains in C^n. We also obtain a class of complete invariant Kaehler metrics for special cases of these domains. (author). 5 refs
James D. Wickham; Robert V. O' Neill; Kurt H. Riitters; Timothy G. Wade; K. Bruce Jones
1997-01-01
Calculation of landscape metrics from land-cover data is becoming increasingly common. Some studies have shown that these measurements are sensitive to differences in land-cover composition, but none are known to have also tested their sensitivity to land-cover misclassification. An error simulation model was written to test the sensitivity of selected landscape...
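The error-simulation idea can be sketched in a few lines: randomly misclassify cells of a land-cover grid with some probability and recompute a metric, comparing it with the baseline value. This is an illustrative stand-in for the model described above, using a simple composition metric rather than the study's full metric set:

```python
import random

def misclassify(grid, p, classes, rng=random.Random(0)):
    """Error-simulation step: flip each cell to a different class with
    probability p, returning a new land-cover grid."""
    return [[rng.choice([c for c in classes if c != v]) if rng.random() < p else v
             for v in row] for row in grid]

def class_proportion(grid, cls):
    """A simple landscape-composition metric: fraction of cells in cls."""
    cells = [v for row in grid for v in row]
    return cells.count(cls) / len(cells)

land = [[1, 1, 2], [1, 2, 2], [1, 1, 1]]
baseline = class_proportion(land, 1)                         # 6/9
perturbed = class_proportion(misclassify(land, 0.1, [1, 2]), 1)
```

Repeating the perturbation many times gives a distribution of metric values, whose spread measures the metric's sensitivity to misclassification.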
Accounting for no net loss: A critical assessment of biodiversity offsetting metrics and methods.
Carreras Gamarra, Maria Jose; Lassoie, James Philip; Milder, Jeffrey
2018-08-15
Biodiversity offset strategies are based on the explicit calculation of both losses and gains necessary to establish ecological equivalence between impact and offset areas. Given the importance of quantifying biodiversity values, various accounting methods and metrics are continuously being developed and tested for this purpose. Considering the wide array of alternatives, selecting an appropriate one for a specific project can be not only challenging, but also crucial; accounting methods can strongly influence the biodiversity outcomes of an offsetting strategy, and if not well-suited to the context and values being offset, a no net loss outcome might not be delivered. To date there has been no systematic review or comparative classification of the available biodiversity accounting alternatives that aim at facilitating metric selection, and no tools that guide decision-makers throughout such a complex process. We fill this gap by developing a set of analyses to support (i) identifying the spectrum of available alternatives, (ii) understanding the characteristics of each and, ultimately (iii) making the most sensible and sound decision about which one to implement. The metric menu, scoring matrix, and decision tree developed can be used by biodiversity offsetting practitioners to help select an existing metric, and thus achieve successful outcomes that advance the goal of no net loss of biodiversity. Copyright © 2018 Elsevier Ltd. All rights reserved.
Individuality evaluation for paper based artifact-metrics using transmitted light image
Yamakoshi, Manabu; Tanaka, Junichi; Furuie, Makoto; Hirabayashi, Masashi; Matsumoto, Tsutomu
2008-02-01
Artifact-metrics is an automated method of authenticating artifacts based on a measurable intrinsic characteristic. Intrinsic characteristics, such as microscopic random patterns created during the manufacturing process, are very difficult to copy. A transmitted light image of the distribution can be used for artifact-metrics, since the fiber distribution of paper is random. Little is known about the individuality of the transmitted light image although it is an important requirement for intrinsic characteristic artifact-metrics. Measuring individuality requires that the intrinsic characteristic of each artifact significantly differs, so having sufficient individuality can make an artifact-metric system highly resistant to brute force attack. Here we investigate the influence of paper category, matching size of sample, and image resolution on the individuality of a transmitted light image of paper through a matching test using those images. More concretely, we evaluate FMR/FNMR curves by calculating similarity scores with matches using correlation coefficients between pairs of scanner input images, and the individuality of paper by way of estimated EER with a probabilistic measure through a matching method based on line segments, which can localize the influence of rotation gaps of a sample in the case of large matching size. As a result, we found that the transmitted light image of paper has sufficient individuality.
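The evaluation pipeline the abstract describes — correlation-coefficient similarity scores, then an EER estimate from genuine and impostor score distributions — can be sketched as follows (a minimal illustration, not the authors' line-segment matcher):

```python
def pearson(x, y):
    """Similarity score between two scanned images (flattened pixel lists)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def estimate_eer(genuine, impostor):
    """Sweep a decision threshold over genuine-pair and impostor-pair
    similarity scores and return the rate where FNMR ~= FMR (the EER)."""
    best_gap, best_rate = float("inf"), 1.0
    for t in sorted(set(genuine + impostor)):
        fnmr = sum(g < t for g in genuine) / len(genuine)
        fmr = sum(i >= t for i in impostor) / len(impostor)
        if abs(fnmr - fmr) < best_gap:
            best_gap, best_rate = abs(fnmr - fmr), (fnmr + fmr) / 2
    return best_rate
```

A low estimated EER over many paper samples is what "sufficient individuality" means operationally: genuine and impostor score distributions barely overlap.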
Theory and experiments in general relativity and other metric theories of gravity
International Nuclear Information System (INIS)
Ciufolini, I.
1984-01-01
In Chapter I, after an introduction to theories of gravity alternative to general relativity, metric theories, and the parameterized post-Newtonian (PPN) formalism, a new class of metric theories of gravity is defined. As a result the post-Newtonian approximation of the new theories is not described by the PPN formalism. In fact under the weak field and slow motion hypothesis, the post-Newtonian expression of the metric tensor contains an infinite set of new terms and correspondingly an infinite set of new PPN parameters. Chapters II, III, and IV are devoted to new experiments to test general relativity and other metric theories of gravity. In particular, in chapter IV, it is shown that two general relativistic effects, the Lense-Thirring and De Sitter-Fokker precessions of the nodal lines of an Earth artificial satellite, are today detectable using high altitude laser ranged artificial satellites such as Lageos. The orbit of this satellite is known with unprecedented accuracy. The author then describes a method of measuring these relativistic precessions using Lageos together with another high altitude laser ranged similar satellite with appropriately chosen orbital parameters
PREDICTION METRICS FOR CHEMICAL DETECTION IN LONG-WAVE INFRARED HYPERSPECTRAL IMAGERY
Energy Technology Data Exchange (ETDEWEB)
Chilton, M.; Walsh, S.J.; Daly, D.S.
2009-01-01
Natural and man-made chemical processes generate gaseous plumes that may be detected by hyperspectral imaging, which produces a matrix of spectra affected by the chemical constituents of the plume, the atmosphere, the bounding background surface and instrument noise. A physics-based model of observed radiance shows that high chemical absorbance and low background emissivity result in a larger chemical signature. Using simulated hyperspectral imagery, this study investigated two metrics which exploited this relationship. The objective was to explore how well the chosen metrics predicted when a chemical would be more easily detected when comparing one background type to another. The two predictor metrics correctly rank ordered the backgrounds for about 94% of the chemicals tested as compared to the background rank orders from Whitened Matched Filtering (a detection algorithm) of the simulated spectra. These results suggest that the metrics provide a reasonable summary of how the background emissivity and chemical absorbance interact to produce the at-sensor chemical signal. This study suggests that similarly effective predictors that account for more general physical conditions may be derived.
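The detection algorithm used as ground truth above can be sketched in simplified form. A whitened matched filter normalizes the background-subtracted spectrum by the background covariance and correlates it with the target chemical signature; the version below assumes a diagonal covariance for brevity (a simplification of the full detector):

```python
def wmf_score(pixel, bg_mean, bg_var, target):
    """Whitened matched filter score with a diagonal background
    covariance: whiten the background-subtracted spectrum per band,
    then correlate with the whitened target signature."""
    xw = [(x - m) / v ** 0.5 for x, m, v in zip(pixel, bg_mean, bg_var)]
    tw = [t / v ** 0.5 for t, v in zip(target, bg_var)]
    return sum(a * b for a, b in zip(xw, tw)) / sum(t * t for t in tw) ** 0.5
```

High scores indicate a strong at-sensor chemical signal; the predictor metrics in the study aim to anticipate, from absorbance and background emissivity alone, which backgrounds will yield high scores.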
Karakolis, Thomas; Bhan, Shivam; Crotin, Ryan L
2013-08-01
In Major League Baseball (MLB), games pitched, total innings pitched, total pitches thrown, innings pitched per game, and pitches thrown per game are used to measure cumulative work. Often, pitchers are allocated limits, based on pitches thrown per game and total innings pitched in a season, in an attempt to prevent future injuries. To date, the efficacy in predicting injuries from these cumulative work metrics remains in question. It was hypothesized that the cumulative work metrics would be a significant predictor for future injury in MLB pitchers. Correlations between cumulative work for pitchers during 2002-07 and injury days in the following seasons were examined using regression analyses to test this hypothesis. Each metric was then "binned" into smaller cohorts to examine trends in the associated risk of injury for each cohort. During the study time period, 27% of pitchers were injured after a season in which they pitched. Although some interesting trends were noticed during the binning process, based on the regression analyses, it was found that no cumulative work metric was a significant predictor for future injury. It was concluded that management of a pitcher's playing schedule based on these cumulative work metrics alone could not be an effective means of preventing injury. These findings indicate that an integrated approach to injury prevention is required. This approach will likely involve advanced cumulative work metrics and biomechanical assessment.
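The "binning" analysis mentioned above amounts to grouping pitchers into cohorts by a cumulative-work metric and comparing observed injury rates across cohorts. A hedged sketch (the field names and numbers are illustrative, not the study's data):

```python
def injury_rate_by_bin(pitchers, key, edges):
    """Group pitchers into cohorts by a cumulative-work metric and
    report the observed injury rate in each [edges[i], edges[i+1]) bin."""
    counts = [[0, 0] for _ in range(len(edges) - 1)]   # [injured, total]
    for p in pitchers:
        for i in range(len(edges) - 1):
            if edges[i] <= p[key] < edges[i + 1]:
                counts[i][0] += p["injured"]
                counts[i][1] += 1
                break
    return [inj / n if n else None for inj, n in counts]

pitchers = [
    {"pitches_per_game": 80, "injured": 0},
    {"pitches_per_game": 95, "injured": 1},
    {"pitches_per_game": 110, "injured": 0},
    {"pitches_per_game": 115, "injured": 1},
]
rates = injury_rate_by_bin(pitchers, "pitches_per_game", [0, 100, 130])
```

Flat rates across bins, as in this toy example, are consistent with the study's conclusion that the metric alone does not predict injury.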
Alotaibi, Naif M; Guha, Daipayan; Fallah, Aria; Aldakkan, Abdulrahman; Nassiri, Farshad; Badhiwala, Jetan H; Ibrahim, George M; Shamji, Mohammed F; Macdonald, R Loch; Lozano, Andres M
2016-06-01
Social media plays an increasingly important role in dissemination of knowledge and raising awareness of selected topics among the general public and the academic community. To investigate the relationship between social media metrics and academic indices of neurosurgical programs and journals. A 2-step online search was performed to identify official social media accounts of neurosurgical departments that were accredited by the Accreditation Council for Graduate Medical Education and the Royal College of Physicians and Surgeons of Canada. Dedicated neurosurgery and spine journals' social media accounts also were identified through an online search on SCImago Journal and Country Rank portal. Nonparametric tests were performed with bootstrapping to compare groups and to look for correlations between social media and academic metrics. We identified 36 social media accounts officially affiliated with academic neurosurgical institutions. These accounts represented 22 of 119 neurosurgical programs in North America (18.4%). The presence of a social media account for neurosurgical departments was associated with statistically significant higher values of academic impact metrics (P social media metrics for neurosurgical department accounts, however, did not correlate with any values of academic indices. For journals, there were 11 journals present on social media and had greater academic metrics compared with journals without social media presence (P Social media presence is associated with stronger academic bibliometrics profiles for both neurosurgical departments and journals. The impact of social media metrics on indices of scientific impact in neurosurgery is not known. Copyright © 2016 Elsevier Inc. All rights reserved.
Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.
Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier
2017-07-10
A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.
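GVI is based on the absolute gamut volume spanned by optimized colour samples in a colour space. As a simplified two-dimensional analog (area of the convex hull of sample chromaticities rather than the 3-D volume the metric actually uses), the computation looks like this:

```python
def hull_area(points):
    """Convex-hull area of 2-D chromaticity points via Andrew's
    monotone chain plus the shoelace formula; a 2-D stand-in for the
    3-D gamut volume underlying GVI."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for seq in (pts, pts[::-1]):          # lower then upper chain
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        hull += chain[:-1]
    return 0.5 * abs(sum(hull[i][0] * hull[(i + 1) % len(hull)][1]
                         - hull[(i + 1) % len(hull)][0] * hull[i][1]
                         for i in range(len(hull))))

pts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
area = hull_area(pts)   # the interior point does not affect the hull
```

A light source that renders the optimized sample set to a larger gamut (larger hull) scores higher on preference under this family of metrics.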
McPhail, C.; Maier, H. R.; Kwakkel, J. H.; Giuliani, M.; Castelletti, A.; Westra, S.
2018-02-01
Robustness is being used increasingly for decision analysis in relation to deep uncertainty and many metrics have been proposed for its quantification. Recent studies have shown that the application of different robustness metrics can result in different rankings of decision alternatives, but there has been little discussion of what potential causes for this might be. To shed some light on this issue, we present a unifying framework for the calculation of robustness metrics, which assists with understanding how robustness metrics work, when they should be used, and why they sometimes disagree. The framework categorizes the suitability of metrics to a decision-maker based on (1) the decision-context (i.e., the suitability of using absolute performance or regret), (2) the decision-maker's preferred level of risk aversion, and (3) the decision-maker's preference toward maximizing performance, minimizing variance, or some higher-order moment. This article also introduces a conceptual framework describing when relative robustness values of decision alternatives obtained using different metrics are likely to agree and disagree. This is used as a measure of how "stable" the ranking of decision alternatives is when determined using different robustness metrics. The framework is tested on three case studies, including water supply augmentation in Adelaide, Australia, the operation of a multipurpose regulated lake in Italy, and flood protection for a hypothetical river based on a reach of the river Rhine in the Netherlands. The proposed conceptual framework is confirmed by the case study results, providing insight into the reasons for disagreements between rankings obtained using different robustness metrics.
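The decision-context distinction in the framework — absolute performance versus regret — is easy to demonstrate: two standard robustness metrics can rank the same alternatives differently on the same data. A minimal sketch (alternative and scenario names are invented):

```python
def maximin(perf):
    """Absolute, risk-averse robustness: each alternative scored by its
    worst-case performance across scenarios (higher is better)."""
    return {alt: min(scen.values()) for alt, scen in perf.items()}

def minimax_regret(perf):
    """Regret-based robustness: worst-case shortfall from the best
    alternative in each scenario (lower is better)."""
    scenarios = next(iter(perf.values()))
    best = {s: max(p[s] for p in perf.values()) for s in scenarios}
    return {alt: max(best[s] - scen[s] for s in scen)
            for alt, scen in perf.items()}

# Two alternatives, two scenarios: the metrics disagree on the ranking.
perf = {"A": {"wet": 5, "dry": 5}, "B": {"wet": 9, "dry": 4}}
mm = maximin(perf)          # prefers A (worst case 5 vs 4)
rg = minimax_regret(perf)   # prefers B (max regret 1 vs 4)
```

This is the kind of disagreement the unifying framework explains: maximin rewards a guaranteed floor, while minimax regret penalizes missed opportunities relative to the scenario-wise best.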
Energy Technology Data Exchange (ETDEWEB)
Desai, V; Labby, Z; Culberson, W [University of Wisc Madison, Madison, WI (United States)
2016-06-15
Purpose: To determine whether body site-specific treatment plans form unique “plan class” clusters in a multi-dimensional analysis of plan complexity metrics such that a single beam quality correction determined for a representative plan could be universally applied within the “plan class”, thereby increasing the dosimetric accuracy of a detector’s response within a subset of similarly modulated nonstandard deliveries. Methods: We collected 95 clinical volumetric modulated arc therapy (VMAT) plans from four body sites (brain, lung, prostate, and spine). The lung data was further subdivided into SBRT and non-SBRT data for a total of five plan classes. For each control point in each plan, a variety of aperture-based complexity metrics were calculated and stored as unique characteristics of each patient plan. A multiple comparison of means analysis was performed such that every plan class was compared to every other plan class for every complexity metric in order to determine which groups could be considered different from one another. Statistical significance was assessed after correcting for multiple hypothesis testing. Results: Six out of a possible 10 pairwise plan class comparisons were uniquely distinguished based on at least nine out of 14 of the proposed metrics (Brain/Lung, Brain/SBRT lung, Lung/Prostate, Lung/SBRT Lung, Lung/Spine, Prostate/SBRT Lung). Eight out of 14 of the complexity metrics could distinguish at least six out of the possible 10 pairwise plan class comparisons. Conclusion: Aperture-based complexity metrics could prove to be useful tools to quantitatively describe a distinct class of treatment plans. Certain plan-averaged complexity metrics could be considered unique characteristics of a particular plan. A new approach to generating plan-class specific reference (pcsr) fields could be established through a targeted preservation of select complexity metrics or a clustering algorithm that identifies plans exhibiting similar
Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions
Energy Technology Data Exchange (ETDEWEB)
Mathew, Paul; Sartor, Dale; Tschudi, William
2009-07-13
This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research from the national Laboratories for the 21st Century (Labs21) program, which is supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
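The whole-building metric most such guides lead with is energy use intensity, benchmarked against peer facilities. A minimal sketch of that computation (the figures and peer values are invented, not Labs21 data):

```python
def eui(annual_kwh, floor_area_m2):
    """Whole-building metric: site energy use intensity (kWh/m^2/yr)."""
    return annual_kwh / floor_area_m2

def percentile_rank(value, peers):
    """Position of a facility's metric within peer benchmark data;
    for EUI, a lower percentile indicates better performance."""
    return 100 * sum(p < value for p in peers) / len(peers)

lab = eui(2_400_000, 8_000)                        # 300 kWh/m^2/yr
rank = percentile_rank(lab, [220, 280, 310, 350, 410])
```

The step-by-step process the guide outlines is essentially this computed per system as well as for the whole building, with each out-of-range metric pointing to a candidate efficiency action.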
Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions
Energy Technology Data Exchange (ETDEWEB)
Mathew, Paul; Greenberg, Steve; Sartor, Dale
2009-07-13
This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research from the national Laboratories for the 21st Century (Labs21) program, which is supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.
Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen
2017-06-01
The article proposes a set of metrics for evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, according to whether the evaluation employs the raw measurements of patient-performed motions or is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
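Two of the reviewed metrics illustrate the model-less versus model-based split directly: RMS distance compares raw motion sequences frame by frame, while KL divergence compares distributions derived from a model of the motions. A minimal sketch of both (1-D sequences for brevity; the Kinect data are 3-D joint trajectories):

```python
import math

def rms_distance(reference, performed):
    """Model-less metric: per-frame root-mean-square distance between a
    reference motion sequence and a patient-performed repetition."""
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, performed))
                     / len(reference))

def kl_divergence(p, q):
    """Model-based metric: KL divergence between discrete feature
    distributions derived from the two motions (q must be nonzero
    wherever p is)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Both metrics are zero for a perfect repetition and grow as the patient's motion departs from the reference, which is what makes them usable as automated consistency scores.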
Relevance of motion-related assessment metrics in laparoscopic surgery.
Oropesa, Ignacio; Chmarra, Magdalena K; Sánchez-González, Patricia; Lamata, Pablo; Rodrigues, Sharon P; Enciso, Silvia; Sánchez-Margallo, Francisco M; Jansen, Frank-Willem; Dankelman, Jenny; Gómez, Enrique J
2013-06-01
Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation processes of basic psychomotor laparoscopic skills and their correlation with the different abilities sought to measure. A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, 3 novel tasks for surgical assessment were designed. Face and construct validation was performed, with focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents, and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. Time, path length, and depth showed construct validity for all 3 tasks. Motion smoothness and idle time also showed validity for tasks involving bimanual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed a high internal consistency, proving them to be the most task-independent of all the metrics analyzed. Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight on the relevance of the results shown in this study.
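Several of the validated metrics — path length, idle time — are simple functions of the tracked instrument-tip trajectory. A sketch under the assumption of uniformly sampled 3-D positions (the threshold value is illustrative; the TrEndo system's actual parameters may differ):

```python
import math

def path_length(traj):
    """Total distance travelled by the instrument tip (3-D positions)."""
    return sum(math.dist(a, b) for a, b in zip(traj, traj[1:]))

def idle_fraction(traj, dt, v_min=0.005):
    """Idle time as a fraction of the task: sampling intervals where tip
    speed falls below v_min (units follow the tracker, e.g. m and m/s)."""
    steps = list(zip(traj, traj[1:]))
    return sum(math.dist(a, b) / dt < v_min for a, b in steps) / len(steps)

tip = [(0, 0, 0), (0.03, 0, 0), (0.03, 0, 0), (0.03, 0.04, 0)]
length = path_length(tip)          # 0.03 + 0 + 0.04
```

Experts typically show shorter path lengths and less idle time than novices on the same task, which is the construct-validity pattern the study tests.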
An accurate metric for the spacetime around rotating neutron stars
Pappas, George
2017-04-01
The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate, appropriately parametrized metric, i.e., a metric given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work, we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, which is constructed using the Ernst formalism and is parametrized by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement, in contrast to previous approaches. For the parametrization of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a three-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.
Enhanced Accident Tolerant LWR Fuels National Metrics Workshop Report
Energy Technology Data Exchange (ETDEWEB)
Lori Braase
2013-01-01
Commercialization. The activities performed during the feasibility assessment phase include laboratory scale experiments; fuel performance code updates; and analytical assessment of economic, operational, safety, fuel cycle, and environmental impacts of the new concepts. The development and qualification stage will consist of fuel fabrication and large scale irradiation and safety basis testing, leading to qualification and ultimate NRC licensing of the new fuel. The commercialization phase initiates technology transfer to industry for implementation. Attributes for fuels with enhanced accident tolerance include improved reaction kinetics with steam and slower hydrogen generation rate, while maintaining acceptable cladding thermo-mechanical properties; fuel thermo-mechanical properties; fuel-clad interactions; and fission-product behavior. These attributes provide a qualitative guidance for parameters that must be considered in the development of fuels and cladding with enhanced accident tolerance. However, quantitative metrics must be developed for these attributes. To initiate the quantitative metrics development, a Light Water Reactor Enhanced Accident Tolerant Fuels Metrics Development Workshop was held October 10-11, 2012, in Germantown, Maryland. This document summarizes the structure and outcome of the two-day workshop. Questions regarding the content can be directed to Lori Braase, 208-526-7763, lori.braase@inl.gov.
Metric space construction for the boundary of space-time
International Nuclear Information System (INIS)
Meyer, D.A.
1986-01-01
A distance function between points in space-time is defined and used to consider the manifold as a topological metric space. The properties of the distance function are investigated: conditions under which the metric and manifold topologies agree, the relationship with the causal structure of the space-time and with the maximum lifetime function of Wald and Yip, and in terms of the space of causal curves. The space-time is then completed as a topological metric space; the resultant boundary is compared with the causal boundary and is also calculated for some pertinent examples
Metrics for assessing retailers based on consumer perception
Directory of Open Access Journals (Sweden)
Klimin Anastasii
2017-01-01
Full Text Available The article suggests a new way of looking at retail outlets, which it calls “metrics.” Metrics are a way to look at the point of sale largely from the buyer’s side. The buyer enters the store and makes a buying decision based on factors that the seller often does not consider, or considers only in part, because he “does not see” them, not being a buyer himself. The article proposes a classification of retailers and metrics, along with a methodology for determining them, and presents the results of an audit of retailers in St. Petersburg using the proposed methodology.
Inflation with non-minimal coupling. Metric vs. Palatini formulations
International Nuclear Information System (INIS)
Bauer, F.; Demir, D.A.; Izmir Institute of Technology
2008-03-01
We analyze non-minimally coupled scalar field theories in metric (second-order) and Palatini (first-order) formalisms in a comparative fashion. After contrasting them in a general setup, we specialize to inflation and find that the two formalisms differ in their predictions for various cosmological parameters. The main reason is that dependencies on the non-minimal coupling parameter are different in the two formalisms. For successful inflation, the Palatini approach prefers a much larger value for the non-minimal coupling parameter than the metric approach. Unlike the metric formalism, in Palatini, the inflaton stays well below the Planck scale, thereby providing a natural inflationary epoch. (orig.)
Kerr-Newman metric in deSitter background
International Nuclear Information System (INIS)
Patel, L.K.; Koppar, S.S.; Bhatt, P.V.
1987-01-01
In addition to the Kerr-Newman metric with cosmological constant, several other metrics are presented giving Kerr-Newman type solutions of the Einstein-Maxwell field equations in the background of the deSitter universe. The electromagnetic field in all the solutions is assumed to be source-free. A new metric of what may be termed an electrovac rotating deSitter space-time, a space-time devoid of matter but containing a source-free electromagnetic field and a null fluid with twisting rays, has been presented. In the absence of the electromagnetic field, these solutions reduce to those discussed by Vaidya (1984). 8 refs. (author)
Comparison of routing metrics for wireless mesh networks
CSIR Research Space (South Africa)
Nxumalo, SL
2011-09-01
Full Text Available in each and every relay node so as to find the next hop for the packet. A routing metric is simply a measure used by a routing protocol to select the best path. Figure 2 shows the relationship between a routing protocol and the routing... on its QoS-awareness level. The routing metrics that considered QoS the most were selected from each group. This section discusses the four routing metrics compared in this paper: hop count (HOP), expected transmission count (ETX...
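Of the metrics named in this truncated abstract, hop count and ETX are simple enough to sketch. Assuming the standard ETX definition from the wireless-mesh literature (the abstract itself does not spell it out), the ETX of a link is 1/(df x dr), where df and dr are the forward and reverse delivery ratios, and a path's ETX is the sum over its links:

```python
def link_etx(df: float, dr: float) -> float:
    """Expected transmission count for one link.

    df: forward delivery ratio (probability a data packet arrives),
    dr: reverse delivery ratio (probability the ACK comes back).
    """
    if df <= 0 or dr <= 0:
        return float("inf")  # link never succeeds
    return 1.0 / (df * dr)

def path_metrics(links):
    """Given [(df, dr), ...] per hop, return (hop count, path ETX)."""
    return len(links), sum(link_etx(df, dr) for df, dr in links)

# A 3-hop path of reliable links vs. a 2-hop path of lossy links:
hops3 = [(0.9, 0.9), (0.8, 0.9), (0.9, 0.95)]
hops2 = [(0.6, 0.6), (0.5, 0.7)]
print(path_metrics(hops3))  # 3 hops, lower total ETX
print(path_metrics(hops2))  # 2 hops, higher total ETX
```

The example illustrates why ETX can prefer a longer path of reliable links over a shorter path of lossy ones, a distinction hop count alone cannot capture.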
Hermitian-Einstein metrics on parabolic stable bundles
International Nuclear Information System (INIS)
Li Jiayu; Narasimhan, M.S.
1995-12-01
Let M-bar be a compact complex manifold of complex dimension two with a smooth Kaehler metric and D a smooth divisor on M-bar. If E is a rank 2 holomorphic vector bundle on M-bar with a stable parabolic structure along D, we prove the existence of a metric on E' = E restricted to M-bar\D (compatible with the parabolic structure) which is Hermitian-Einstein with respect to the restriction of the Kaehler metric to M-bar\D. A converse is also proved. (author). 24 refs
SIP end to end performance metrics
Vozňák, Miroslav; Rozhon, Jan
2012-01-01
The paper deals with a SIP performance testing methodology. The main contribution to the field of performance testing of SIP infrastructure consists in the possibility of performing standardized stress tests with the developed SIP TesterApp without deeper knowledge of SIP communication. The developed tool exploits several open-source technologies such as jQuery, Python, JSON and the cornerstone SIP generator SIPp; the result is highly modifiable and the ...
An Enhanced TIMESAT Algorithm for Estimating Vegetation Phenology Metrics from MODIS Data
Tan, Bin; Morisette, Jeffrey T.; Wolfe, Robert E.; Gao, Feng; Ederer, Gregory A.; Nightingale, Joanne; Pedelty, Jeffrey A.
2012-01-01
An enhanced TIMESAT algorithm was developed for retrieving vegetation phenology metrics from 250 m and 500 m spatial resolution Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indexes (VI) over North America. MODIS VI data were pre-processed using snow-cover and land surface temperature data, and temporally smoothed with the enhanced TIMESAT algorithm. An objective third derivative test was applied to define key phenology dates and retrieve a set of phenology metrics. This algorithm has been applied to two MODIS VIs: Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI). In this paper, we describe the algorithm and use EVI as an example to compare three sets of TIMESAT algorithm/MODIS VI combinations: a) original TIMESAT algorithm with original MODIS VI, b) original TIMESAT algorithm with pre-processed MODIS VI, and c) enhanced TIMESAT and pre-processed MODIS VI. All retrievals were compared with ground phenology observations, some made available through the National Phenology Network. Our results show that for MODIS data in middle to high latitude regions, snow and land surface temperature information is critical in retrieving phenology metrics from satellite observations. The results also show that the enhanced TIMESAT algorithm can better accommodate growing season start and end dates that vary significantly from year to year. The TIMESAT algorithm improvements contribute to more spatial coverage and more accurate retrievals of the phenology metrics. Among three sets of TIMESAT/MODIS VI combinations, the start of the growing season metric predicted by the enhanced TIMESAT algorithm using pre-processed MODIS VIs has the best associations with ground observed vegetation greenup dates.
Rapporteur Report: Sources and Exposure Metrics for ELF Epidemiology (Part 1) (invited paper)
International Nuclear Information System (INIS)
Matthes, R.
1999-01-01
High quality epidemiological studies on the possible link between exposure to non-ionizing radiation and human health effects are of great importance for radiation protection in this area. The main sources of ELF fields are domestic appliances, different electrical energy distribution systems and all kinds of electrical machinery and devices at the workplace. In general, ELF fields present in the environment show complex temporal patterns and spatial distributions, depending on the generating source. The complete characterisation of the different field sources often requires highly sophisticated instrumentation, and is therefore not feasible within the scope of epidemiological studies. On average, individual exposure to ELF fields is low in both the working environment and in residential areas. Only at certain workplaces are people subject to ELF exposure that is significant with regard to biological effects. Different methods have been developed to determine the levels of exposure received by study subjects, with the aim of ranking exposed and non-exposed groups in epidemiological studies. These include spot measurements, calculations and modelling. The different methods used to estimate total exposure in epidemiological studies may result, to differing extents, in misclassification of the study subjects. Equally important for future studies is the selection of the appropriate exposure metric. The most widely used metric so far is the time-weighted average, which thus represents a quasi-standard metric for use in epidemiological studies. Besides this, wire codes have long been used in residential studies, and job titles are often used in occupational studies. On the basis of the experience gained in previous studies, it would be desirable to develop standardised, state-of-the-art protocols to improve exposure assessment. New surrogates and metrics have been proposed as the basis for further studies, but only a few of these have recently undergone preliminary testing.
Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon
2018-05-01
The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the wide availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous for the complex task of comparing spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics, because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics, which further allow for a comparison of variables that are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
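The three SPAEF components named in the abstract are commonly combined as a Euclidean distance from the ideal point (1, 1, 1): correlation alpha, a coefficient-of-variation ratio beta, and histogram overlap gamma of z-scored values. A minimal sketch along those lines (the bin count and binning range are assumptions, not taken from the abstract):

```python
import numpy as np

def spaef(obs, sim, bins=20):
    """SPAtial EFficiency metric, sketched after Koch et al. (2018).

    alpha: Pearson correlation between the two patterns,
    beta:  ratio of coefficients of variation (spread, bias-insensitive),
    gamma: overlap of the histograms of z-scored values.
    Each component is ideal at 1; SPAEF = 1 means a perfect match.
    """
    obs = np.asarray(obs, float).ravel()
    sim = np.asarray(sim, float).ravel()
    alpha = np.corrcoef(obs, sim)[0, 1]
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    zo = (obs - obs.mean()) / obs.std()   # z-scoring makes gamma
    zs = (sim - sim.mean()) / sim.std()   # insensitive to unit and bias
    lo, hi = min(zo.min(), zs.min()), max(zo.max(), zs.max())
    ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
    hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
    gamma = np.minimum(ho, hs).sum() / ho.sum()
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

rng = np.random.default_rng(0)
pattern = rng.gamma(2.0, 1.0, size=10_000)
print(spaef(pattern, pattern))        # identical patterns -> 1.0
print(spaef(pattern, pattern + 5.0))  # a pure bias affects only beta
```

Because alpha and gamma operate on z-scored or normalized quantities, a constant bias degrades only the beta component, which is the bias-insensitivity property the abstract highlights.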
DeJournett, Jeremy; DeJournett, Leon
2017-11-01
Effective glucose control in the intensive care unit (ICU) setting has the potential to decrease morbidity and mortality rates and thereby decrease health care expenditures. To evaluate what constitutes effective glucose control, several metrics are typically reported, including time in range, time in mild and severe hypoglycemia, coefficient of variation, and others. To date, there is no one metric that combines all of these individual metrics to give a number indicative of overall performance. We proposed a composite metric that combines 5 commonly reported metrics, and we used this composite metric to compare 6 glucose controllers. We evaluated the following controllers: Ideal Medical Technologies (IMT) artificial-intelligence-based controller, Yale protocol, Glucommander, Wintergerst et al PID controller, GRIP, and NICE-SUGAR. We evaluated each controller across 80 simulated patients, 4 clinically relevant exogenous dextrose infusions, and one nonclinical infusion as a test of the controller's ability to handle difficult situations. This gave a total of 2400 5-day simulations and 585,604 individual glucose values for analysis. We used a random walk sensor error model that gave a 10% MARD. For each controller, we calculated severe hypoglycemia (<40 mg/dL), hyperglycemia (>140 mg/dL), and coefficient of variation (CV), as well as our novel controller metric. For the controllers tested, we achieved the following median values for our novel controller scoring metric: IMT: 88.1, YALE: 46.7, GLUC: 47.2, PID: 50, GRIP: 48.2, NICE: 46.4. The novel scoring metric employed in this study shows promise as a means of evaluating new and existing ICU-based glucose controllers, and it could be used in the future to compare results of glucose control studies in critical care. The IMT AI-based glucose controller demonstrated the most consistent performance results based on this new metric.
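The abstract does not give the formula of the novel composite metric, so the following is only an illustrative sketch of the general idea of collapsing several glucose-control metrics into one score: reward time in range and penalize hypoglycemia, hyperglycemia and variability. The weights and scaling here are assumptions, not the paper's.

```python
def composite_score(time_in_range, severe_hypo, mild_hypo, hyper, cv):
    """Illustrative composite controller score (NOT the paper's formula,
    which is not given in the abstract).

    All inputs are fractions of time except cv (coefficient of
    variation, as a fraction). Hypoglycemia is weighted most heavily
    because it carries the greatest acute risk; the result is clamped
    to a 0-100 scale.
    """
    penalty = 4.0 * severe_hypo + 2.0 * mild_hypo + 1.0 * hyper + 0.5 * cv
    return max(0.0, 100.0 * (time_in_range - penalty))

# A tight controller vs. one with frequent hypoglycemia:
print(composite_score(0.85, 0.00, 0.02, 0.10, 0.20))  # higher score
print(composite_score(0.70, 0.05, 0.10, 0.15, 0.40))  # clamped to 0.0
```

The point of such a composite is exactly what the abstract argues: a single number lets controllers with different strength profiles be ranked directly.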
Development of quality metrics for ambulatory pediatric cardiology: Chest pain.
Lu, Jimmy C; Bansal, Manish; Behera, Sarina K; Boris, Jeffrey R; Cardis, Brian; Hokanson, John S; Kakavand, Bahram; Jedeikin, Roy
2017-12-01
As part of the American College of Cardiology Adult Congenital and Pediatric Cardiology Section effort to develop quality metrics (QMs) for ambulatory pediatric practice, the chest pain subcommittee aimed to develop QMs for evaluation of chest pain. A group of 8 pediatric cardiologists formulated candidate QMs in the areas of history, physical examination, and testing. Consensus candidate QMs were submitted to an expert panel for scoring by the RAND-UCLA modified Delphi process. Recommended QMs were then available for open comments from all members. These QMs are intended for use in patients 5-18 years old, referred for initial evaluation of chest pain in an ambulatory pediatric cardiology clinic, with no known history of pediatric or congenital heart disease. A total of 10 candidate QMs were submitted; 2 were rejected by the expert panel, and 5 were removed after the open comment period. The 3 approved QMs included: (1) documentation of family history of cardiomyopathy, early coronary artery disease or sudden death, (2) performance of electrocardiogram in all patients, and (3) performance of an echocardiogram to evaluate coronary arteries in patients with exertional chest pain. Despite practice variation and limited prospective data, 3 QMs were approved, with measurable data points which may be extracted from the medical record. However, further prospective studies are necessary to define practice guidelines and to develop appropriate use criteria in this population. © 2017 Wiley Periodicals, Inc.
Metrical expectations from preceding prosody influence perception of lexical stress.
Brown, Meredith; Salverda, Anne Pier; Dilley, Laura C; Tanenhaus, Michael K
2015-04-01
Two visual-world experiments tested the hypothesis that expectations based on preceding prosody influence the perception of suprasegmental cues to lexical stress. The results demonstrate that listeners' consideration of competing alternatives with different stress patterns (e.g., 'jury/gi'raffe) can be influenced by the fundamental frequency and syllable timing patterns across material preceding a target word. When preceding stressed syllables distal to the target word shared pitch and timing characteristics with the first syllable of the target word, pictures of alternatives with primary lexical stress on the first syllable (e.g., jury) initially attracted more looks than alternatives with unstressed initial syllables (e.g., giraffe). This effect was modulated when preceding unstressed syllables had pitch and timing characteristics similar to the initial syllable of the target word, with more looks to alternatives with unstressed initial syllables (e.g., giraffe) than to those with stressed initial syllables (e.g., jury). These findings suggest that expectations about the acoustic realization of upcoming speech include information about metrical organization and lexical stress and that these expectations constrain the initial interpretation of suprasegmental stress cues. These distal prosody effects implicate online probabilistic inferences about the sources of acoustic-phonetic variation during spoken-word recognition. (c) 2015 APA, all rights reserved.
Countermeasure development using a formalised metric-based process
Barker, Laurence
2008-10-01
Guided weapons are a potent threat to both air and surface platforms; to protect the platform, countermeasures are often used to disrupt the operation of the tracking system. Development of effective techniques to defeat the guidance sensors is a complex activity. The countermeasure often responds to the behaviour of a responsive sensor system, creating a "closed loop" interaction. Performance assessment is difficult, and determining that enough knowledge exists to make a case that a platform is adequately protected is challenging. A set of metrics known as Countermeasure Confidence Levels (CCLs) is described. These set out a measure of confidence in the prediction of countermeasure performance. The CCL scale provides, for the first time, a method to determine whether enough evidence exists to support development activity and introduction to operational service. Application of the CCL scale to the development of a hypothetical countermeasure is described. This tracks how the countermeasure is matured from initial concept to in-service application. The purpose of each stage is described, together with a description of what work is likely to be needed. This will involve timely use of analysis, simulation, laboratory work and field testing. The use of the CCL scale at key decision points is described. These include procurement decision points and entry-to-service decisions. Each stage requires collection of evidence of effectiveness. Completeness of the available evidence can be assessed, and duplication can be avoided. Read-across between concepts, weapon systems and platforms can be addressed, and the impact of technology insertion can be assessed.
Development, Validation, and Implementation of a Medical Judgment Metric
Directory of Open Access Journals (Sweden)
Rami A. Ahmed DO, MHPE
2017-06-01
Full Text Available Background: Medical decision making is a critical, yet understudied, aspect of medical education. Aims: To develop the Medical Judgment Metric (MJM), a numerical rubric to quantify good decisions in practice in simulated environments, and to obtain initial preliminary evidence of the reliability and validity of the tool. Methods: The individual items, domains, and sections of the MJM were built based on existing standardized frameworks. Content validity was determined by a convenience sample of eight experts. The MJM instrument was pilot tested in four medical simulations with a team of three medical raters assessing 40 participants with four levels of medical experience and skill. Results: Raters were highly consistent in their MJM scores in each scenario (intraclass correlation coefficient 0.965 to 0.987) as well as in their evaluation of the expected patient outcome (Fleiss's Kappa 0.791 to 0.906). For each simulation scenario, average rater cut-scores significantly predicted expected loss of life or stabilization (Cohen's Kappa 0.851 to 0.880). Discussion: The MJM demonstrated preliminary evidence of reliability and validity.
Narrowing the Gap Between QoS Metrics and Web QoE Using Above-the-fold Metrics
da Hora, Diego Neves; Asrese, Alemnew; Christophides, Vassilis; Teixeira, Renata; Rossi, Dario
2018-01-01
Page load time (PLT) is still the most common application Quality of Service (QoS) metric used to estimate the Quality of Experience (QoE) of Web users. Yet, recent literature abounds with proposals for alternative metrics (e.g., Above The Fold, SpeedIndex and variants) that aim at better estimating user QoE. The main purpose of this work is thus to thoroughly investigate a mapping between established and recently proposed objective metrics and user QoE. We obtain ground tr...
Quantum anomalies for generalized Euclidean Taub-NUT metrics
International Nuclear Information System (INIS)
Cotaescu, Ion I; Moroianu, Sergiu; Visinescu, Mihai
2005-01-01
The generalized Taub-NUT metrics exhibit in general gravitational anomalies. This contrasts with the original Taub-NUT metric, which does not exhibit gravitational anomalies as a consequence of admitting Killing-Yano tensors forming Staeckel-Killing tensors as products. We have found that for axial anomalies, interpreted as the index of the Dirac operator, the presence of Killing-Yano tensors is irrelevant. In order to evaluate the axial anomalies, we compute the index of the Dirac operator with the APS boundary condition on balls and on annular domains. The result is an explicit number-theoretic quantity depending on the radii of the domain. This quantity is 0 for metrics close to the original Taub-NUT metric, but it does not vanish in general.
Analyses Of Two End-User Software Vulnerability Exposure Metrics
Energy Technology Data Exchange (ETDEWEB)
Jason L. Wright; Miles McQueen; Lawrence Wellman
2012-08-01
The risk due to software vulnerabilities will not be completely resolved in the near future. Instead, it is important to put reliable vulnerability measures into the hands of end-users so that informed decisions can be made regarding the relative security exposure incurred by choosing one software package over another. To that end, we propose two new security metrics: average active vulnerabilities (AAV) and vulnerability free days (VFD). These metrics capture both the speed with which new vulnerabilities are reported to vendors and the rate at which software vendors fix them. We then examine how the metrics are computed using currently available datasets and demonstrate their estimation in a simulation experiment using four different browsers as a case study. Finally, we discuss how the metrics may be used by the various stakeholders of software to inform software usage decisions.
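The two metrics can be sketched from their names and the abstract's description: AAV averages the number of known-but-unfixed vulnerabilities per day, and VFD counts days on which none are active. The exact windowing and data sources in the paper may differ; this is a minimal interpretation:

```python
def aav_and_vfd(vulns, period_days):
    """Sketch of the two exposure metrics named by Wright et al.

    vulns: list of (reported_day, fixed_day) pairs; fixed_day may be
    None if no patch shipped inside the observation period. Here a
    vulnerability counts as 'active' from the day it is reported
    until the day it is fixed.
    """
    active_per_day = []
    for day in range(period_days):
        active = sum(
            1 for reported, fixed in vulns
            if reported <= day and (fixed is None or day < fixed)
        )
        active_per_day.append(active)
    aav = sum(active_per_day) / period_days          # average active vulns
    vfd = sum(1 for n in active_per_day if n == 0)   # vulnerability-free days
    return aav, vfd

# 30-day window: one vuln open on days 5-9, another open from day 20 on
print(aav_and_vfd([(5, 10), (20, None)], 30))  # -> (0.5, 15)
```

Note how the two metrics complement each other: AAV reflects the depth of exposure, while VFD reflects how often the user had no known exposure at all.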
Research on cardiovascular disease prediction based on distance metric learning
Ni, Zhuang; Liu, Kui; Kang, Guixia
2018-04-01
Distance metric learning algorithms have been widely applied to medical diagnosis and have exhibited their strengths in classification problems. The k-nearest neighbour (KNN) classifier is an efficient method that treats each feature equally. Large margin nearest neighbour classification (LMNN) improves the accuracy of KNN by learning a global distance metric, but it does not consider the locality of data distributions. In this paper, we propose a new distance metric algorithm named COS-SUBLMNN, adopting a cosine metric together with LMNN, which takes more care of the local features of the data to overcome this shortcoming of LMNN and improve classification accuracy. The proposed methodology is verified on CVD patient vectors derived from real-world medical data. Experimental results show that our method provides higher accuracy than KNN and LMNN, which demonstrates the effectiveness of the CVD risk prediction model based on COS-SUBLMNN.
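COS-SUBLMNN itself is not specified in the abstract, but the role the distance metric plays in KNN, the baseline it improves on, is easy to illustrate. The sketch below plugs a plain cosine distance (one ingredient the abstract names) into a vanilla KNN vote; it is not the paper's learned metric:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity: small when vectors point the same way,
    regardless of their magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def knn_predict(train, query, k=3, dist=cosine_distance):
    """Majority vote among the k nearest training points under `dist`."""
    neighbors = sorted(train, key=lambda item: dist(item[0], query))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

# Toy data: class 'A' lies along one direction, 'B' along another.
# Under the cosine metric only the angle of a vector matters.
train = [([1, 0], "A"), ([2, 0.1], "A"), ([0, 1], "B"), ([0.1, 2], "B")]
print(knn_predict(train, [10, 0.5], k=3))  # large magnitude, A-like angle
```

Swapping `dist` for a Euclidean or learned Mahalanobis-style metric changes which neighbours are "nearest", which is exactly the degree of freedom metric learning methods such as LMNN optimize.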
Computing the Gromov hyperbolicity constant of a discrete metric space
Ismail, Anas
2012-01-01
, and many other areas of research. The Gromov hyperbolicity constant of several families of graphs and geometric spaces has been determined. However, so far, the only known algorithm for calculating the Gromov hyperbolicity constant δ of a discrete metric
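The four-point characterization of Gromov hyperbolicity gives a direct, if expensive, way to compute delta for a finite metric space: for every quadruple of points, the two largest of the three pairwise distance sums differ by at most 2*delta. A brute-force sketch (O(n^4); this is not the algorithm of the paper, which the truncated abstract does not show):

```python
from itertools import combinations

def gromov_delta(d):
    """Brute-force four-point Gromov hyperbolicity of a finite metric space.

    d: symmetric distance matrix (list of lists). For each quadruple
    {x, y, z, w}, form the three pairings of pairwise distance sums;
    the worst gap between the two largest, halved, over all quadruples
    is delta.
    """
    n = len(d)
    delta = 0.0
    for x, y, z, w in combinations(range(n), 4):
        sums = sorted([d[x][y] + d[z][w],
                       d[x][z] + d[y][w],
                       d[x][w] + d[y][z]])
        delta = max(delta, (sums[2] - sums[1]) / 2.0)
    return delta

# Path on 4 vertices with shortest-path distances: a tree, so delta = 0
path4 = [[abs(i - j) for j in range(4)] for i in range(4)]
print(gromov_delta(path4))  # 0.0

# 4-cycle: distances wrap around, and delta = 1
cycle4 = [[min(abs(i - j), 4 - abs(i - j)) for j in range(4)] for i in range(4)]
print(gromov_delta(cycle4))  # 1.0
```

Trees have delta = 0, which is why delta is often read as a measure of how "tree-like" a metric space or graph is.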
Some applications on tangent bundle with Kaluza-Klein metric
Directory of Open Access Journals (Sweden)
Murat Altunbaş
2017-01-01
Full Text Available In this paper, the differential equations of geodesics and the parallelism, incompressibility and closeness conditions of the horizontal and complete lifts of vector fields are investigated with respect to the Kaluza-Klein metric on the tangent bundle.
Curvature properties of four-dimensional Walker metrics
International Nuclear Information System (INIS)
Chaichi, M; Garcia-Rio, E; Matsushita, Y
2005-01-01
A Walker n-manifold is a semi-Riemannian manifold, which admits a field of parallel null r-planes, r ≤ n/2. In the present paper we study curvature properties of a Walker 4-manifold (M, g) which admits a field of parallel null 2-planes. The metric g is necessarily of neutral signature (+ + - -). Such a Walker 4-manifold is the lowest dimensional example not of Lorentz type. There are three functions of coordinates which define a Walker metric. Some recent work shows that a Walker 4-manifold of restricted type whose metric is characterized by two functions exhibits a large variety of symplectic structures, Hermitian structures, Kaehler structures, etc. For such a restricted Walker 4-manifold, we shall study mainly curvature properties, e.g., conditions for a Walker metric to be Einstein, Osserman, or locally conformally flat, etc. One of our main results is the exact solutions to the Einstein equations for a restricted Walker 4-manifold
Office Skills: Metric Problems in the Typing Classroom
Panagoplos, Nicholas A.
1978-01-01
Discusses problems of metric conversion in the typewriting classroom, as most typewriters have spacing in inches, and shows how to teach students to adjust their typewritten work for this spacing. (MF)
On a Theorem of Khan in a Generalized Metric Space
Directory of Open Access Journals (Sweden)
Jamshaid Ahmad
2013-01-01
Full Text Available Existence and uniqueness of fixed points are established for a mapping satisfying a contractive condition involving a rational expression on a generalized metric space. Several particular cases and applications as well as some illustrative examples are given.
A bridge role metric model for nodes in software networks.
Directory of Open Access Journals (Sweden)
Bo Li
Full Text Available A bridge role metric model is put forward in this paper. Compared with previous metric models, our treatment of a large-scale object-oriented software system as a complex network is inherently more realistic. For nodes and links in an undirected network, a new model is presented that captures the crucial connectivity of a module or hub, rather than only centrality as in previous metric models. Two previous metric models are described for comparison. In addition, the fitting curve between the Bre results and node degrees is well described by a power law. The model represents many realistic characteristics of actual software structures, and a hydropower simulation system is taken as an example. This paper makes an additional contribution to an accurate understanding of the module design of software systems and is expected to be beneficial to software engineering practices.
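The Bre metric itself is not defined in the abstract, but one elementary notion of a node's "bridge role" in an undirected network is whether its removal disconnects modules, i.e. whether it is an articulation point. A sketch using the classic DFS low-link test (an illustration of the idea, not the paper's model):

```python
def articulation_points(adj):
    """Nodes whose removal disconnects the graph: a crude 'bridge role'
    indicator (the paper's Bre metric is not specified here).

    adj: dict mapping node -> set of neighbour nodes (undirected).
    Plain recursive Tarjan-style DFS, suitable for small graphs.
    """
    disc, low, points = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge
                low[u] = min(low[u], disc[v])
            else:                              # tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    points.add(u)              # u separates v's subtree
        if parent is None and children > 1:
            points.add(u)                      # root with 2+ subtrees

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return points

# Two 'modules' {0,1,2} and {3,4,5} joined only by the 2-3 link:
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(sorted(articulation_points(adj)))  # [2, 3]
```

In a software network, such nodes correspond to classes or modules through which all inter-module dependencies pass, which is the kind of connectivity a bridge-role metric aims to rank.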
Tripled Fixed Point in Ordered Multiplicative Metric Spaces
Directory of Open Access Journals (Sweden)
Laishram Shanjit
2017-06-01
Full Text Available In this paper, we present some tripled fixed point theorems in partially ordered multiplicative metric spaces, dependent on another function. Our results generalise the results of [6] and [5].